<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Incorporating Type Information into Zero-Shot Relation Extraction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Cedric</forename><surname>Möller</surname></persName>
							<email>cedric.moeller@uni-hamburg.de</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Informatics, Semantic Systems</orgName>
								<orgName type="institution">Universität Hamburg</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ricardo</forename><surname>Usbeck</surname></persName>
							<email>ricardo.usbeck@leuphana.de</email>
							<affiliation key="aff1">
								<orgName type="department" key="dep1">Institute for Information Systems</orgName>
								<orgName type="department" key="dep2">Artificial Intelligence and Explainability</orgName>
								<orgName type="institution">Leuphana Universität Lüneburg</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Incorporating Type Information into Zero-Shot Relation Extraction</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DA1A630E66F174C5A7293701493B23C9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Relation Extraction</term>
					<term>Zero-shot</term>
<term>Entity types</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The task of zero-shot relation extraction focuses on the extraction of relations not seen during training time. Commonly, additional information about the relation, such as the relation name or a description of the relation, is utilised. In this work, we analyze whether a relation extractor can benefit from the inclusion of fine-grained type information about the involved entities. This is based on the intuition that relation descriptions might contain ontological information on the domain and range of the entity types that are usually put into relation. For that, we follow a cross-encoding setup where we encode both the entity information and the relation information as one sequence and learn to score the representation. We examine this method on several datasets and show that the inclusion of the fine-grained type information leads to an improvement in performance.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Identifying the relation that is expressed between entities is an important subproblem of various downstream tasks. For instance, it is critical for semantic-web-related tasks such as knowledge graph question answering or knowledge graph population. Usually, it is assumed that the encountered relations are known beforehand. Zero-shot relation extraction breaks with this assumption. During inference time, the goal is to extract entirely new relations not seen during training.</p><p>With the establishment of pre-trained models, this goal becomes achievable. These models are trained on large corpora of textual data in an unsupervised way. In zero-shot relation extraction, one assumes that some information on the new relations is available. The simplest kind of information is a label describing the relation. But this only works if the relation label co-occurs with a similar context as encountered during the training of the pre-trained models. If this is not the case, using additional information such as a description of the relation is necessary.</p><p>In this work, we analyse the impact of combining fine-grained type information and the relation description on the relation extraction performance. This is based on the assumption that the descriptions contain valuable information on the types of the involved entities. For example, the description of the relation director states director(s) of film, TV-series, stageplay, video game or similar. Therefore, it is clear that the relation should not be used when talking about board members of a company, who are also sometimes referred to as directors. We incorporate fine-grained type information extracted from Wikidata together with the relation descriptions in the relation extraction process. 
The contributions are:</p><p>• A zero-shot relation extraction model using fine-grained type information and relation descriptions • An ablation study on the impact of fine-grained type information and relation descriptions on performance</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Problem Definition</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Method</head><p>To study the impact of fine-grained type information, we opt to extend a simple but powerful model introduced by Lan et al. <ref type="bibr" target="#b0">[1]</ref>. Hence, we cross-encode the textual information and the relation information in a single input. Unlike their work, we do not rely solely on the relation label but also include the relation description. Additionally, we assume the existence of fine-grained types for both the head and the tail entity, extracted using the P31 (instance of) relation in Wikidata. We include the relation description under the assumption that it contains valuable ontological information referring to the fine-grained types of the considered entities. For example, for the relation shipping port, the description is shipping port of the vessel (if different from "ship registry"): For civilian ships, the primary port from which the ship operates ...</p><p>We denote the types of the head entity by 𝒯 ℎ and the types of the tail entity by 𝒯 𝑡 . Additionally, for each type of an entity, we extract the label describing the type (e.g., human for Q5). The input 𝑥 to the model then consists of four different segments. The first segment describes the head entity:</p><p>Head Entity : {𝑙 ℎ } with Types : {𝑇 ℎ } and the second segment describes the tail entity:</p><p>Tail Entity : {𝑙 𝑡 } with Types : {𝑇 𝑡 } where 𝑙 □ denotes the label of the head entity ℎ or tail entity 𝑡. 𝑇 □ is the concatenation of the labels of the types of the head or tail entity 𝑇 □ = ⨁︀ 𝑢∈𝒯 □ 𝑙 𝑢 . 
The third segment gives information on the input text:</p><formula xml:id="formula_0">Context : {𝑐}</formula><p>The final segment gives information on the relation:</p><formula xml:id="formula_1">{𝑙 𝑟 } defined as {𝑑 𝑟 }</formula><p>where 𝑙 𝑟 denotes the label of the relation 𝑟 and 𝑑 𝑟 is the description of the relation 𝑟.</p><p>All segments are then combined into a single coherent text as follows: The whole text is then fed into an encoder-only model 𝑓 (𝑥) which returns a sequence of vectors 𝑒 [𝐶𝐿𝑆] . . . 𝑒 [𝑆𝐸𝑃 ] . The vector 𝑒 [𝐶𝐿𝑆] is then fed as input to a linear layer 𝑙 which returns a final score.</p><p>𝑠 𝑟 = 𝑙(𝑒 [𝐶𝐿𝑆] ) An overview of the model can be found in Figure <ref type="figure" target="#fig_1">1</ref>.</p></div>
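The assembly of the four segments into a single cross-encoder input can be sketched as follows. This is a minimal illustration under assumptions: the helper name `build_input` and its signature are hypothetical, not the authors' code, and the [CLS]/[SEP] markers would normally be added by the encoder's tokenizer.

```python
def build_input(head_label, head_types, tail_label, tail_types,
                context, rel_label, rel_desc):
    """Combine the head-entity, tail-entity, context and relation segments
    into one natural-language sequence, following the template in Figure 1."""
    head_seg = f"Head Entity : {head_label} with Types : {' '.join(head_types)}"
    tail_seg = f"Tail Entity : {tail_label} with Types : {' '.join(tail_types)}"
    return (f"Given the {head_seg}, {tail_seg} and Context : {context}, "
            f"the context expresses the relation [SEP] "
            f"{rel_label} defined as {rel_desc}")

# Example drawn from the case study in Table 4
x = build_input("Putnam Griswold", ["human"], "bass", ["voice type"],
                "Putnam Griswold was an American opera singer (bass).",
                "voice type", "person's voice type")
```

The resulting string is tokenized and passed through the encoder once per candidate relation; the [CLS] vector of each pass is scored by the linear layer.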
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Evaluation</head><p>We evaluate the model on two popular datasets, FewRel and Wiki-ZSL. Both datasets were annotated on Wikipedia article texts. FewRel is originally a few-shot relation extraction dataset annotated by Han et al. <ref type="bibr" target="#b1">[2]</ref>. The dataset was modified for zero-shot purposes by Chia et al. <ref type="bibr" target="#b2">[3]</ref>. They split the training, validation and test examples by their relations into disjoint sets. Wiki-ZSL is a zero-shot relation extraction dataset created by Chen et al. <ref type="bibr" target="#b3">[4]</ref> based on the Wiki-KB <ref type="bibr" target="#b4">[5]</ref>. As the entities and relations in both datasets are linked to Wikidata, we use Wikidata as the knowledge graph providing the fine-grained entity types.</p><p>In each dataset, the sets of relations in the training and test datasets are disjoint and randomly assigned. Three different settings are examined per dataset. Each setting considers a different number of relations in the train, validation and test set. The number of relations in the validation/test set varies between 𝑚 = 5, 𝑚 = 10 and 𝑚 = 15 relations. These relations are randomly picked and the remaining relations are assigned to the training set.</p><p>To handle the considerable noise induced by the random selection of the relations, the datasets for 𝑚 = 5, 𝑚 = 10 and 𝑚 = 15 were each randomly split into train, validation and test sets five times. A method is evaluated on each split and the results are averaged.</p><p>As metrics, precision, recall and F1 are calculated. 
All metrics are computed in a macro setting, which means that precision, recall and F1 are calculated for each relation and then averaged over all relations.</p><p>We compare our method, called TMC-BERT, against several methods: CIM <ref type="bibr" target="#b5">[6]</ref> solves the task as a textual entailment problem where the relation descriptions and the input sentence are given to a Natural Language Inference model to classify whether the input sentence entails the relation description. This is done for all potential relations and the highest-scoring relation is taken. ZS-BERT <ref type="bibr" target="#b6">[7]</ref> encodes the input sentence as well as the relation descriptions into a dense vector space. A nearest neighbor search is conducted over all the encodings of the relation descriptions given the input sentence. The closest relation encoding determines the final relation. Tran et al. (2022) <ref type="bibr" target="#b7">[8]</ref> also encode the input sentence and relation descriptions into a dense vector space. They additionally employ a contrastive-learning-inspired loss on the input sentence and relation encodings. The final scoring is achieved by concatenating the relation encoding and the sentence encoding and feeding it into a linear layer. RE-Matching <ref type="bibr" target="#b8">[9]</ref> encodes the input sentence and relation descriptions as well but uses feature distillation to calculate a similarity score based on more fine-grained feature interactions. RelationPrompt <ref type="bibr" target="#b2">[3]</ref> relies on a generative model to generate synthetic data as additional training samples. At the same time, the generative model is also used to generate a relation given the sentence and the two entities as input. We compare against the model with (RelationPrompt) and without (RelationPrompt NG) synthetic training data. 
MC-BERT <ref type="bibr" target="#b0">[1]</ref> models relation extraction, like our approach, as a multiple-choice problem in which the input sentence and the relation label are combined into a natural sentence, encoded and scored. DSP-ZRSC <ref type="bibr" target="#b9">[10]</ref> solves the problem via Discriminative Soft Prompting, where the input text, the entities and all relation labels are concatenated, fed into a language model with soft prompts, and each relation label is scored. Tran et al. (2023) <ref type="bibr" target="#b10">[11]</ref> solve it as a representation learning problem and introduce a second loss term incorporating the degree of correlation between sentences and relations.</p><p>BERT-base-cased was used as the encoder to stay comparable to MC-BERT. The model was fine-tuned on two NVIDIA A6000s with a batch size of 48 and a learning rate of 5e-5.</p></div>
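The macro-averaged metrics described above can be computed as in the following sketch. The helper name `macro_prf` is hypothetical and this is not the authors' evaluation code; it simply implements the stated definition: per-relation precision, recall and F1, averaged with equal weight per relation.

```python
from collections import defaultdict

def macro_prf(gold, pred):
    """Macro precision/recall/F1 over single-label relation predictions."""
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1          # correct prediction for relation g
        else:
            fp[p] += 1          # p was predicted but wrong
            fn[g] += 1          # g was missed
    relations = set(tp) | set(fp) | set(fn)
    ps, rs, fs = [], [], []
    for r in relations:
        p = tp[r] / (tp[r] + fp[r]) if tp[r] + fp[r] else 0.0
        rec = tp[r] / (tp[r] + fn[r]) if tp[r] + fn[r] else 0.0
        f1 = 2 * p * rec / (p + rec) if p + rec else 0.0
        ps.append(p); rs.append(rec); fs.append(f1)
    n = len(relations)
    # average per-relation scores, each relation weighted equally
    return sum(ps) / n, sum(rs) / n, sum(fs) / n
```

Note that macro F1 averages the per-relation F1 scores; it is not, in general, the harmonic mean of macro precision and macro recall.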
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Results</head><p>As can be seen in Table <ref type="table">1</ref>, the incorporation of type-related information leads to a large increase in performance on several datasets in comparison to regular MC-BERT. On Wiki-ZSL, the performance increases vary between 6 and nearly 8 F1 points. The type-related information has a great impact on both recall and precision. On FewRel, the performance increases when considering 5 or 15 unseen relations; however, the increases are less pronounced. In comparison to the current SOTA method by Tran et al. <ref type="bibr" target="#b10">[11]</ref>, TMC-BERT considerably surpasses its performance when confronted with 15 unseen relations. This is the most complex setting, as far fewer examples and relations are available during training while more candidate relations are encountered during inference. Here, the additional type information is especially helpful. Furthermore, the inclusion of fine-grained type information is orthogonal to the properties of the method by Tran et al. <ref type="bibr" target="#b10">[11]</ref>. Their method could benefit from it as well.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Ablation study</head><p>To examine which changes lead to the large increase in performance, we conducted an ablation study on the incorporation of the different kinds of information. Here, we differentiated between three cases:</p><p>1. TMC-BERT 2. TMC-BERT with the types of the subject and object entities removed (TMC-BERT w/o types) 3. TMC-BERT with the description of the relation removed (TMC-BERT w/o desc.)</p><p>As can be seen in Table <ref type="table" target="#tab_1">2</ref>, the addition of the relation description alone was the least beneficial kind of information. Adding information on entity types leads to a larger improvement, probably because the pre-trained model already associates specific types with certain relation labels. Finally, the ablation study shows that the relation description and the fine-grained entity type information complement each other, as using either one alone does not improve performance as much as using both together.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Entity Linking impact</head><p>As it is not realistic that fine-grained type information is always available, we also evaluate the model when identifying entity types using an entity linker (EL). For that, we train the model with known entity types but evaluate with the entity types as retrieved by an entity linker. As the entity linker, we use ReFinED <ref type="bibr" target="#b11">[12]</ref>. As can be seen in Table <ref type="table">3</ref>, the performance diminishes when using types identified through entity linking. On Wiki-ZSL, the performance still surpasses the existing SOTA results in all settings. On FewRel, the performance is still greater when confronted with only five relations but degrades more when predicting 10 or 15 relations. One reason might be that the entity linking performance is lower on FewRel than on Wiki-ZSL. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 3</head><p>Results on FewRel and Wiki-ZSL when using an entity linker </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Case study</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Related Work</head><p>Commonly, relation extraction is tackled as a classification problem. Usually, the input text is encoded and a classification head is attached. To encode text, CNNs <ref type="bibr" target="#b12">[13]</ref>, RNNs <ref type="bibr" target="#b13">[14]</ref> or transformers <ref type="bibr" target="#b14">[15]</ref> are typically employed. Recently, pre-trained models have been fine-tuned on the relation extraction task. Due to the fixed classification head, such trained models are not flexible enough to handle new relations <ref type="bibr" target="#b15">[16]</ref>. Hence, when targeting zero-shot relation extraction, other methods are necessary. Representation-learning-based methods <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b8">9]</ref> try to embed the textual information and the relational information in the same vector space. For that, relational information such as labels or descriptions is typically used to obtain a representation of the relation. The goal is to learn representations such that the representation of the true relation resides close to a representation of the text in the vector space while the representations of false relations are far away. Recently, generative language models have been increasingly utilized for the task <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b9">10]</ref>. Here, the model is prompted with the input text as well as information on the potential relations. The model is then fine-tuned to generate either the relation as expressed in the input text or a full triple consisting of the two entities and the relation. For example, Chen et al. <ref type="bibr" target="#b17">[18]</ref> frame it as a Masked Language Modelling problem. 
Generative models have also been applied to generate synthetic training data for relation extraction <ref type="bibr" target="#b2">[3]</ref>. Type information was considered in previous works focusing on relation extraction, but these works either used very broad types or did not tackle zero-shot relation extraction <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b20">21]</ref>.</p><p>Some methods model the problem as a textual entailment problem <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b23">24]</ref>. Here, the idea is that a model that is pre-trained on the textual entailment task is directly applied to the relation extraction task. The assumption is that the model can identify whether the textual information entails the relation description.</p><p>The method by Lan et al. <ref type="bibr" target="#b0">[1]</ref> models relation classification as a multiple-choice problem where the text is encoded with relation information and a score is calculated. This is done for all relations and the relation with the highest score is taken. We extend this approach.</p></div>
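The multiple-choice scoring loop described above (score every candidate relation, take the highest) can be sketched as follows. Here `score_fn` is a placeholder standing in for the encoder plus linear layer, and `inputs_per_relation` maps each candidate relation to its cross-encoded input text; both names are hypothetical.

```python
def predict_relation(score_fn, inputs_per_relation):
    """Multiple-choice inference: score the cross-encoded input for every
    candidate relation and return the highest-scoring relation."""
    best_rel, best_score = None, float("-inf")
    for rel, x in inputs_per_relation.items():
        s = score_fn(x)  # one encoder forward pass per candidate relation
        if s > best_score:
            best_rel, best_score = rel, s
    return best_rel
```

This loop also makes the computational concern raised in the conclusion concrete: inference cost grows linearly with the number of candidate relations, since each one requires a separate forward pass.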
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Future Work</head><p>In this work, we examined the impact of fine-grained type information on the zero-shot relation extraction problem. Different from past methods, we employed fine-grained type information as additional information and showed that combining this information with the description of the relation leads to a synergistic effect, improving the performance overall. We believe that this is the case because the description provides valuable ontological information on the domain and range of a relation. This domain and range are then compared against the fine-grained type information of the interacting entities. Furthermore, we validated that the increase in performance does indeed stem from the combination of type and relation description information. Finally, we studied the impact of using an entity linker to retrieve the entity types. While this leads to a decrease, the performance often still considerably surpasses the current SOTA in the most complex setting.</p><p>In future work, we want to tackle multiple problems. First, it is not certain that one has access to fine-grained type information during inference. Therefore, we want to examine whether the performance of a trained entity typer is sufficient to produce similar results. Secondly, the current architecture follows a cross-encoding approach. While this is not a problem when one encounters only a few relations during inference, in real-world use cases this is not typically the case. There are hundreds of potentially different relations that could be encountered during inference. Cross-encoding the text with each one leads to substantial computational effort. We want to examine whether a relation candidate generation module might also benefit from fine-grained type information. Also, the training process currently only trains the model by randomly sampling other relations. 
Choosing the negative relations in a smarter way might lead to additional improvement. Finally, the impact of fine-grained entity types from other knowledge graphs needs to be evaluated.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>[</head><label></label><figDesc>CLS] Given the Head Entity : {𝑙 ℎ } with Types : {𝑇 ℎ }, Tail Entity : {𝑙 𝑡 } with Types : {𝑇 𝑡 } and Context : {𝑐}, the context expresses the relation [SEP] {𝑙 𝑟 } defined as {𝑑 𝑟 } [SEP]</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Model overview: Green specifies the types, blue the entities, orange the context, red the relation label and purple the description of the relation.</figDesc><graphic coords="3,72.00,65.60,451.27,214.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The problem of relation extraction can be defined as follows: Given an input text 𝑐, an annotated head entity ℎ and tail entity 𝑡, identify the correct relation 𝑟 as expressed in the text. Zero-shot relation extraction separates the set of relations encountered during training from the ones encountered during inference. Hence, during training time, the set of available relations is 𝑅 train , while during test time, the set is 𝑅 test . It holds that 𝑅 train ∩ 𝑅 test = ∅. Also, no annotated examples containing any relations in set 𝑅 test are available during inference time. Additional information defining the relation is available. We assume labels, descriptions and type information on entities to be available.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Ablation study on FewRel and Wiki-ZSL</figDesc><table><row><cell></cell><cell></cell><cell></cell><cell>Wiki-ZSL</cell><cell></cell><cell></cell><cell>FewRel</cell><cell></cell></row><row><cell>𝑚</cell><cell>Model</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>P</cell><cell>R</cell><cell>F1</cell></row><row><cell></cell><cell>CIM</cell><cell cols="6">49.63 48.81 49.22 58.05 61.92 59.92</cell></row><row><cell></cell><cell>ZS-BERT</cell><cell cols="6">71.54 72.39 71.96 76.96 78.86 77.90</cell></row><row><cell></cell><cell>Tran et al. (2022)</cell><cell cols="6">87.48 77.50 82.19 87.11 86.29 86.69</cell></row><row><cell>5</cell><cell cols="7">RelationPrompt NG 51.78 46.76 48.93 72.36 58.61 64.57 RelationPrompt 70.66 83.75 76.63 90.15 88.50 89.30</cell></row><row><cell></cell><cell>RE-Matching</cell><cell cols="6">78.19 78.41 78.30 92.82 92.34 92.58</cell></row><row><cell></cell><cell>DSP-ZRSC</cell><cell>94.1</cell><cell>77.1</cell><cell>84.8</cell><cell>93.4</cell><cell>92.5</cell><cell>92.9</cell></row><row><cell></cell><cell>Tran et al. (2023)</cell><cell cols="6">94.50 96.48 95.46 96.36 96.68 96.51</cell></row><row><cell></cell><cell>MC-BERT</cell><cell cols="6">80.28 84.03 82.11 90.82 90.13 90.47</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">90.11 87.89 88.92 93.94 93.30 93.62</cell></row><row><cell></cell><cell>CIM</cell><cell cols="6">46.54 47.90 45.57 47.39 49.11 48.23</cell></row><row><cell></cell><cell>ZS-BERT</cell><cell cols="6">60.51 60.98 60.74 56.92 57.59 57.25</cell></row><row><cell></cell><cell>Tran et al. 
(2022)</cell><cell cols="6">71.59 64.69 67.94 64.41 62.61 63.50</cell></row><row><cell>10</cell><cell cols="7">RelationPrompt NG 54.87 36.52 43.80 66.47 48.28 55.61 RelationPrompt 68.51 74.76 71.50 80.33 79.62 79.96</cell></row><row><cell></cell><cell>RE-Matching</cell><cell cols="6">74.39 73.54 73.96 83.21 82.64 82.93</cell></row><row><cell></cell><cell>DSP-ZRSC</cell><cell>80.0</cell><cell>74.0</cell><cell>76.9</cell><cell>80.7</cell><cell>88.0</cell><cell>84.2</cell></row><row><cell></cell><cell>Tran et al. (2023)</cell><cell cols="6">85.43 88.14 86.74 81.13 82.24 81.68</cell></row><row><cell></cell><cell>MC-BERT</cell><cell cols="6">72.81 73.96 73.38 86.57 85.27 85.92</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">81.21 81.27 81.23 84.42 84.99 85.68</cell></row><row><cell></cell><cell>CIM</cell><cell cols="6">29.17 30.58 29.86 31.83 33.06 32.43</cell></row><row><cell></cell><cell>ZS-BERT</cell><cell cols="6">34.12 34.38 34.25 35.54 38.19 36.82</cell></row><row><cell></cell><cell>Tran et al. (2022)</cell><cell cols="6">38.37 36.05 37.17 43.96 39.11 41.36</cell></row><row><cell>15</cell><cell cols="7">RelationPrompt NG 54.45 29.43 37.45 66.49 40.05 49.38 RelationPrompt 63.69 67.93 65.74 74.33 72.51 73.40</cell></row><row><cell></cell><cell>RE-Matching</cell><cell cols="6">67.31 67.33 67.32 73.80 73.52 73.66</cell></row><row><cell></cell><cell>DSP-ZRSC</cell><cell>77.5</cell><cell>64.4</cell><cell>70.4</cell><cell>82.9</cell><cell>78.1</cell><cell>80.4</cell></row><row><cell></cell><cell>Tran et al. 
(2023)</cell><cell cols="6">64.68 65.01 65.30 66.44 69.29 67.82</cell></row><row><cell></cell><cell>MC-BERT</cell><cell cols="6">65.71 67.11 66.40 80.71 79.84 80.27</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">73.62 74.07 73.77 82.11 79.93 81.00</cell></row><row><cell>Table 1</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="2">Results on FewRel and Wiki-ZSL</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell></cell><cell></cell><cell>Wiki-ZSL</cell><cell></cell><cell></cell><cell>FewRel</cell><cell></cell></row><row><cell cols="2">𝑚 Model</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>P</cell><cell>R</cell><cell>F1</cell></row><row><cell></cell><cell>TMC-BERT w/o desc.</cell><cell cols="6">85.56 84.07 84.74 93.96 93.26 93.61</cell></row><row><cell>5</cell><cell cols="7">TMC-BERT w/o types 85.00 84.41 84.68 93.33 92.50 92.91</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">90.11 87.89 88.92 93.94 93.30 93.62</cell></row><row><cell></cell><cell>TMC-BERT w/o desc.</cell><cell cols="6">77.26 78.16 77.70 85.24 83.29 84.25</cell></row><row><cell>10</cell><cell cols="7">TMC-BERT w/o types 74.89 76.05 75.46 85.16 83.36 84.24</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">81.21 81.27 81.23 84.42 84.99 85.68</cell></row><row><cell></cell><cell>TMC-BERT w/o desc.</cell><cell cols="6">72.33 71.16 71.73 79.22 76.46 79.79</cell></row><row><cell>15</cell><cell cols="7">TMC-BERT w/o types 68.53 69.81 69.16 79.22 78.19 78.69</cell></row><row><cell></cell><cell>TMC-BERT</cell><cell cols="6">73.62 74.07 73.77 82.11 79.93 81.00</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>illustrates two instances where the inclusion of type information or relation descriptions proved beneficial. In the first case, specifying that MMORPG belongs to the video game genre facilitated the correct classification of the genre relation. In the second example, highlighting that bass is a voice type aligned the type label precisely with the voice type relation label. Additionally, the relation description directly addressed the voice type of bass.</figDesc><table><row><cell>Method</cell><cell>TMC-BERT</cell><cell>MC-BERT</cell></row><row><cell>Sentence</cell><cell cols="2">Gravity Corporation is a South Korean video game corporation primarily known for the</cell></row><row><cell></cell><cell cols="2">development of the MMORPG Ragnarok Online. {MMORPG: video game genre; Ragnarok</cell></row><row><cell></cell><cell>Online: video game}</cell><cell></cell></row><row><cell>Classified Re-</cell><cell>genre</cell><cell>manufacturer</cell></row><row><cell>lation</cell><cell></cell><cell></cell></row><row><cell>Description</cell><cell>creative work's genre or an artist's field of</cell><cell>main use of the subject (includes current</cell></row><row><cell>of Relation</cell><cell>work</cell><cell>and former usage)</cell></row><row><cell>Sentence</cell><cell cols="2">Putnam Griswold (1875-1914) was an American opera singer (bass), born in Minneapolis,</cell></row><row><cell>with entity</cell><cell cols="2">Minnesota. {Putnam Griswold: human, bass: voice type}</cell></row><row><cell>types</cell><cell></cell><cell></cell></row><row><cell>Classified Re-</cell><cell>voice type</cell><cell>use</cell></row><row><cell>lation</cell><cell></cell><cell></cell></row><row><cell>Description</cell><cell>person's voice type. 
expected values: so-</cell><cell>main use of the subject (includes current</cell></row><row><cell>of Relation</cell><cell>prano, mezzo-soprano, contralto, coun-</cell><cell>and former usage)</cell></row><row><cell></cell><cell>tertenor, tenor, baritone, bass (and deriva-</cell><cell></cell></row><row><cell></cell><cell>tives)</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Comparison of the performance of TMC-BERT and MC-BERT on two different examples. Ground-truth relations are shown in bold. The interacting entities and their types are shown in dictionaries following the sentences.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_0">In our experiments we set 𝑛 = 5.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This project was supported by the House of Computing and Data Science (HCDS) of the Hamburg University within the Cross-Disciplinary Lab programme.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
<title level="a" type="main">Modeling zero-shot relation classification as a multiple-choice problem</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhao</surname></persName>
		</author>
		<idno type="DOI">10.1109/IJCNN54540.2023.10191459</idno>
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Neural Networks, IJCNN 2023</title>
				<meeting><address><addrLine>Gold Coast, Australia</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2023">June 18-23, 2023. 2023</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">FewRel: A large-scale supervised few-shot relation classification dataset with state-of-the-art evaluation</title>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/D18-1514</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing</title>
				<editor>
			<persName><forename type="first">E</forename><surname>Riloff</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Chiang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Hockenmaier</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Tsujii</surname></persName>
		</editor>
		<meeting>the 2018 Conference on Empirical Methods in Natural Language Processing<address><addrLine>Brussels, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-11-04">October 31 -November 4, 2018. 2018</date>
			<biblScope unit="page" from="4803" to="4809" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">RelationPrompt: Leveraging prompts to generate synthetic data for zero-shot relation triplet extraction</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Chia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Poria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Si</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2022.FINDINGS-ACL.5</idno>
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: ACL 2022</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Muresan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Villavicencio</surname></persName>
		</editor>
		<meeting><address><addrLine>Dublin, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">May 22-27, 2022. 2022</date>
			<biblScope unit="page" from="45" to="57" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">ZS-BERT: Towards zero-shot relation extraction with attribute representation learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2021.NAACL-MAIN.272</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rumshisky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Hakkani-Tür</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Beltagy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bethard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</meeting>
		<imprint>
			<date type="published" when="2021">June 6-11, 2021. 2021</date>
			<biblScope unit="page" from="3470" to="3479" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Context-aware representations for knowledge base relation extraction</title>
		<author>
			<persName><forename type="first">D</forename><surname>Sorokin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Gurevych</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/D17-1188</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Palmer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Hwa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Riedel</surname></persName>
		</editor>
		<meeting>the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017<address><addrLine>Copenhagen, Denmark</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">September 9-11, 2017. 2017</date>
			<biblScope unit="page" from="1784" to="1789" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Reasoning about entailment with neural attention</title>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grefenstette</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Hermann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kociský</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Blunsom</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1509.06664" />
	</analytic>
	<monogr>
		<title level="m">4th International Conference on Learning Representations, ICLR 2016</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</editor>
		<meeting><address><addrLine>San Juan, Puerto Rico</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">May 2-4, 2016. 2016</date>
		</imprint>
	</monogr>
	<note>Conference Track Proceedings</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">ZS-BERT: Towards zero-shot relation extraction with attribute representation learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2021.NAACL-MAIN.272</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rumshisky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Hakkani-Tür</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Beltagy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bethard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</meeting>
		<imprint>
			<date type="published" when="2021">June 6-11, 2021. 2021</date>
			<biblScope unit="page" from="3470" to="3479" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Improving discriminative learning for zero-shot relation extraction</title>
		<author>
			<persName><forename type="first">V.-H</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ouchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Watanabe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Matsumoto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge</title>
				<meeting>the 1st Workshop on Semiparametric Methods in NLP: Decoupling Logic from Knowledge</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">RE-Matching: A fine-grained semantic matching method for zero-shot relation extraction</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.ACL-LONG.369</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Boyd-Graber</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</editor>
		<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">July 9-14, 2023. 2023</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="6680" to="6691" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">DSP: discriminative soft prompts for zero-shot entity and relation extraction</title>
		<author>
			<persName><forename type="first">B</forename><surname>Lv</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yu</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.FINDINGS-ACL.339</idno>
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: ACL 2023</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Boyd-Graber</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</editor>
		<meeting><address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">July 9-14, 2023. 2023</date>
			<biblScope unit="page" from="5491" to="5505" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Enhancing semantic correlation between instances and relations for zero-shot relation extraction</title>
		<author>
			<persName><forename type="first">V.-H</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ouchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Shindo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Matsumoto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Watanabe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Natural Language Processing</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="304" to="329" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">ReFinED: An efficient zero-shot-capable approach to end-to-end entity linking</title>
		<author>
			<persName><forename type="first">T</forename><surname>Ayoola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tyagi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fisher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Christodoulopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pierleoni</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2022.NAACL-INDUSTRY.24</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, NAACL 2022, Hybrid</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Loukina</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Gangadharaiah</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Min</surname></persName>
		</editor>
		<meeting>the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies: Industry Track, NAACL 2022, Hybrid<address><addrLine>Seattle, Washington, USA + Online</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">July 10-15, 2022. 2022</date>
			<biblScope unit="page" from="209" to="220" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Relation classification via convolutional deep neural network</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/C14-1220/" />
	</analytic>
	<monogr>
		<title level="m">COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Hajic</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Tsujii</surname></persName>
		</editor>
		<meeting><address><addrLine>Dublin, Ireland, ACL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">August 23-29, 2014. 2014</date>
			<biblScope unit="page" from="2335" to="2344" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">End-to-end relation extraction using LSTMs on sequences and tree structures</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bansal</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/P16-1105</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016</title>
		<title level="s">Long Papers</title>
		<meeting>the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016<address><addrLine>Berlin, Germany</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">August 7-12, 2016. 2016</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
	<note>The Association for Computer Linguistics</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A frustratingly easy approach for entity and relation extraction</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2021.NAACL-MAIN.5</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rumshisky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Hakkani-Tür</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Beltagy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bethard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</meeting>
		<imprint>
			<date type="published" when="2021">June 6-11, 2021. 2021</date>
			<biblScope unit="page" from="50" to="61" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Enriching pre-trained language model with entity information for relation classification</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<idno type="DOI">10.1145/3357384.3358119</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019</title>
				<editor>
			<persName><forename type="first">W</forename><surname>Zhu</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Tao</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">X</forename><surname>Cheng</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Cui</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Rundensteiner</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Carmel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Q</forename><surname>He</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">X</forename><surname>Yu</surname></persName>
		</editor>
		<meeting>the 28th ACM International Conference on Information and Knowledge Management, CIKM 2019<address><addrLine>Beijing, China</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">November 3-7, 2019. 2019</date>
			<biblScope unit="page" from="2361" to="2364" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">A generative model for relation extraction and classification</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rossiello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gliozzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Florian</surname></persName>
		</author>
		<idno>CoRR abs/2202.13229</idno>
		<ptr target="https://arxiv.org/abs/2202.13229" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">KnowPrompt: Knowledge-aware prompt-tuning with synergistic optimization for relation extraction</title>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Si</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1145/3485447.3511998</idno>
	</analytic>
	<monogr>
		<title level="m">WWW &apos;22: The ACM Web Conference 2022, Virtual Event</title>
				<editor>
			<persName><forename type="first">F</forename><surname>Laforest</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Simperl</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Agarwal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Gionis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Herman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Médini</surname></persName>
		</editor>
		<meeting><address><addrLine>Lyon, France</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2022">April 25 -29, 2022. 2022</date>
			<biblScope unit="page" from="2778" to="2788" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">UTBER: Utilizing fine-grained entity types to relation extraction with distant supervision</title>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1109/SMDS49396.2020.00015</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Smart Data Services, SMDS 2020</title>
				<meeting><address><addrLine>Beijing, China</addrLine></address></meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2020">October 19-23, 2020. 2020</date>
			<biblScope unit="page" from="63" to="71" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Type-aware distantly supervised relation extraction with linked arguments</title>
		<author>
			<persName><forename type="first">M</forename><surname>Koch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gilmer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Soderland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Weld</surname></persName>
		</author>
		<idno type="DOI">10.3115/V1/D14-1203</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Moschitti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Pang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Daelemans</surname></persName>
		</editor>
		<meeting>the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014<address><addrLine>Doha, Qatar</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">October 25-29, 2014. 2014</date>
			<biblScope unit="page" from="1891" to="1901" />
		</imprint>
	</monogr>
	<note>A meeting of SIGDAT, a Special Interest Group of the ACL</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Exploring fine-grained entity type constraints for distantly supervised relation extraction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/C14-1199/" />
	</analytic>
	<monogr>
		<title level="m">COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Hajic</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Tsujii</surname></persName>
		</editor>
		<meeting><address><addrLine>Dublin, Ireland, ACL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">August 23-29, 2014. 2014</date>
			<biblScope unit="page" from="2107" to="2116" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Improving zero-shot relation classification via automatically-acquired entailment templates</title>
		<author>
			<persName><forename type="first">M</forename><surname>Rahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Surdeanu</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.REPL4NLP-1.16</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2023</title>
				<editor>
			<persName><forename type="first">B</forename><surname>Can</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Mozes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Cahyawijaya</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Saphra</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Kassner</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Ravfogel</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Ravichander</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Zhao</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Augenstein</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Grefenstette</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Voita</surname></persName>
		</editor>
		<meeting>the 8th Workshop on Representation Learning for NLP, RepL4NLP@ACL 2023<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-07-13">July 13, 2023</date>
			<biblScope unit="page" from="187" to="195" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Label verbalization and entailment for effective zero and few-shot relation extraction</title>
		<author>
			<persName><forename type="first">O</forename><surname>Sainz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">L</forename><surname>De Lacalle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Labaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barrena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Agirre</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2021.EMNLP-MAIN.92</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Moens</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Specia</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">W</forename><surname>Yih</surname></persName>
		</editor>
		<meeting>the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana<address><addrLine>Punta Cana, Dominican Republic</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021-11-07">November 7-11, 2021</date>
			<biblScope unit="page" from="1199" to="1212" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Zero-shot relation classification as textual entailment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Obamuyide</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vlachos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the First Workshop on Fact Extraction and VERification (FEVER)</title>
				<meeting>the First Workshop on Fact Extraction and VERification (FEVER)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="72" to="78" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
