<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Edge Labelling in Narrative Knowledge Graphs</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Vani</forename><surname>Kanjirangat</surname></persName>
							<email>vanik@idsia.ch</email>
							<affiliation key="aff0">
								<orgName type="department">Istituto Dalle Molle di Studi sull&apos;Intelligenza Artificiale (IDSIA)</orgName>
								<orgName type="institution">USI-SUPSI</orgName>
								<address>
									<settlement>Lugano</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Antonucci</surname></persName>
							<email>alessandro@idsia.ch</email>
							<affiliation key="aff0">
								<orgName type="department">Istituto Dalle Molle di Studi sull&apos;Intelligenza Artificiale (IDSIA)</orgName>
								<orgName type="institution">USI-SUPSI</orgName>
								<address>
									<settlement>Lugano</settlement>
									<country key="CH">Switzerland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Edge Labelling in Narrative Knowledge Graphs</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9DB9BDBEA163EA100850271DEA88B99D</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Edge Labels</term>
					<term>Verb Clusters</term>
					<term>Supersenses</term>
					<term>Lowest Common Hypernyms</term>
					<term>Knowledge Graphs</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Edge labelling is one of the most challenging steps of knowledge graph creation in unsupervised domains. Abstracting the relations between the entities, extracted in the form of triplets, and assigning a single label to a cluster of relations can be quite difficult without supervision and tedious if based on manual annotations. This is particularly the case for applications in literary text understanding, which is the focus of this paper. We present a simple but effective way to label the edges between the character entities in the knowledge graph extracted from a novel or a short story, using a two-level clustering based on BERT embeddings together with supersenses and hypernyms. The lack of benchmark datasets in the literary domain poses significant challenges for evaluation. In this work-in-progress paper, we discuss preliminary results to understand the potential for further research.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Extracting structured information from narrative texts is a significant challenge for contemporary AI. The complexity further increases in the case of literary text because of the possibly ambiguous usage of words, neologisms, unique authorial writing styles, and many other subtle linguistic aspects. In fact, the analysis of literary texts involves various complex steps, such as the identification of the main characters and relations and their typification (e.g., gender, partnerships, goodness). Moreover, the high variance in style and a lexicon with frequent neologisms <ref type="bibr" target="#b0">[1]</ref> and figures of speech <ref type="bibr" target="#b1">[2]</ref> further complicate the scenario. Most past explorations are limited to particular application areas, such as biomedical literature <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref> or news and social media analysis <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. Embedding techniques and the more recent attention-based models, including transformers, have evolved into the state of the art for both unsupervised and supervised NLP tasks <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>.</p><p>Identifying a more abstract and meaningful edge label for unsupervised knowledge graph extraction, and evaluating it, is a challenging process. We report here the current state of our work in the field, with preliminary experiments on unsupervised edge labelling of knowledge graphs extracted from literary texts. A simple technique to label the edges in a reasonable way is evaluated. The code is already available in a public repository (github.com/IDSIA/novel2graph).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>The onset of deep learning has provided powerful data-processing models that ease NLP applications. For knowledge graphs (KGs), deep models are used to embed the triplet information and to address tasks such as link prediction and graph completion <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>, as well as to train embeddings <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>. Another major shift was the introduction of attention and transformer models <ref type="bibr" target="#b19">[20]</ref>, with many works adopting attention mechanisms for KG completion and learning tasks <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref>. There has also been work on unsupervised learning of KG embeddings <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>.</p><p>The automatic interpretation and visual analysis of literary texts have been explored from various perspectives in the past few years. In <ref type="bibr" target="#b25">[26]</ref>, literary characters and their network associations have been studied, while in <ref type="bibr" target="#b26">[27]</ref> sentiment relations between (Shakespeare's) characters have been processed. When it comes to unsupervised KG construction, a combination of classical and deep learning NLP techniques is usually required.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methods</head><p>Let us first briefly discuss the entity extraction process, a necessary preprocessing step already studied in our previous works, before approaching the edge labelling itself. Since we are dealing with literary text, our entities are the characters of the given input novel or short story, which exhibit various characteristics and relations. As in <ref type="bibr" target="#b27">[28]</ref>, we used the Stanford Named Entity Recognition Tagger<ref type="foot" target="#foot_0">1</ref> together with character de-aliasing, i.e., unifying character names that may be referred to in different ways (e.g., Ron and Ronald). This is achieved by the DBSCAN clustering algorithm <ref type="bibr" target="#b28">[29]</ref> paired with Levenshtein string distances. We use the partial_ratio method provided by the fuzzywuzzy module<ref type="foot" target="#foot_1">2</ref> to compute the distance matrix. This is followed by coreference resolution<ref type="foot" target="#foot_2">3</ref> using the Stanford package and some heuristic adjustments. Each character entity is eventually represented by a unique identifier. These entities define the nodes of the KG. The next step is to label the edges connecting these nodes, which is the main focus of the present work.</p></div>
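The de-aliasing step above can be sketched as follows. This is a minimal standard-library approximation, not the actual pipeline: the paper uses DBSCAN over a fuzzywuzzy partial_ratio distance matrix, whereas here difflib's SequenceMatcher approximates partial_ratio and a greedy threshold grouping stands in for DBSCAN (the 0.8 threshold is an assumption of this sketch):

```python
from difflib import SequenceMatcher

def partial_similarity(a: str, b: str) -> float:
    """Rough stand-in for fuzzywuzzy's partial_ratio: best match of the
    shorter string against equal-length windows of the longer one."""
    shorter, longer = sorted((a.lower(), b.lower()), key=len)
    return max(
        SequenceMatcher(None, shorter, longer[i:i + len(shorter)]).ratio()
        for i in range(len(longer) - len(shorter) + 1)
    )

def dealias(names, threshold=0.8):
    """Greedy threshold grouping, a simple stand-in for DBSCAN over the
    pairwise distance matrix: each name joins the first cluster whose
    representative (first member) is similar enough."""
    clusters = []
    for name in names:
        for cluster in clusters:
            if partial_similarity(name, cluster[0]) >= threshold:
                cluster.append(name)
                break
        else:
            clusters.append([name])
    return clusters

print(dealias(["Ron", "Ronald", "Harry", "Harry Potter", "Hermione"]))
# → [['Ron', 'Ronald'], ['Harry', 'Harry Potter'], ['Hermione']]
```

Each cluster would then be collapsed to a unique character identifier (e.g., CHAR0), as described above.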
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Verb Extraction and Embedding</head><p>Following <ref type="bibr" target="#b27">[28]</ref>, we extract all the sentences containing two characters/entities and exclude self-relations (e.g., Harry, I am Harry Potter). For simplicity, we also prune the sentences where the second character appears at the end of the sentence (e.g., said Harry). To split the larger sentences, we use the constituency parsing tree<ref type="foot" target="#foot_3">4</ref> to extract subtrees. Our approach traverses the tree using a depth-first search and, starting from the bottom of the tree, extracts each phrase (S) containing at least one noun phrase (NP) and one verb phrase (VP). For instance, consider the sentence:</p><p>CHAR0 is talking to CHAR1, while CHAR1 is cooking for CHAR2.</p><p>The constituency parsing tree returns two extracted phrases (CHAR0 is talking to CHAR1 and CHAR1 is cooking for CHAR2), as depicted in Fig. <ref type="figure" target="#fig_1">1</ref>. We refer to the set of output sentences as relational sentences. Once we have all the relational sentences, the next step is to extract a representative verb for each of them. Using Part-of-Speech (POS) tagging, we extract the verbs in these relational sentences. Further, we embed the sentences using Sentence BERT (SBERT) <ref type="bibr" target="#b29">[30]</ref> and extract the embeddings of the corresponding extracted verbs. SBERT uses a Siamese network structure <ref type="bibr" target="#b30">[31]</ref> to produce meaningful sentence encodings. Once we have the embedded verbs, we group similar verbs together. Since the embeddings encode semantic and contextual information, sentences with similar vector representations are expected to share similar relations. To achieve this, we adopt the two-level verb clustering summarised by Algs. 1 and 2. The first step groups the extracted verbs into supersense clusters, as given in Alg. 1. 
Supersense (SS) <ref type="bibr" target="#b31">[32]</ref> is a terminology from WordNet <ref type="bibr" target="#b32">[33]</ref>, where words are grouped into sets of synonyms called synsets. Each synset is associated with one of 45 broader semantic categories (SSs): 26 for nouns, 15 for verbs, 3 for adjectives, and 1 for adverbs. This can be regarded as a coarse-grained word sense grouping, but it can be quite helpful for many NLP tasks. We focus on the verb SS categories only, as we take the verbs in a sentence as input. A word can belong to multiple SS categories (as a word can have different senses), and hence SS tagging, or disambiguation, is itself a challenging research problem. In the proposed approach, we consider the 15 verb SSs as the categories, or clusters, to which an input verb has to be assigned, namely {body, change, cognition, communication, competition, consumption, contact, creation, emotion, motion, perception, possession, social, stative, weather}. We then compute the embeddings of the extracted verbs with SBERT and follow steps 2 to 8 in Alg. 1 to assign each verb to a specific SS category. If the verb belongs to multiple SS categories, or to none of them, we compute the average of all verb embeddings belonging to each SS category and assign the verb to the one at minimum cosine distance.</p></div>
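The core of Alg. 1 (assigning an ambiguous or unlisted verb to the supersense category whose average verb embedding is closest in cosine distance) can be sketched as follows. The 3-d vectors and the tiny supersense lexicon below are hand-picked assumptions standing in for SBERT embeddings and WordNet, for illustration only:

```python
from math import sqrt

# Toy 3-d embeddings standing in for SBERT vectors (assumed values).
EMB = {
    "speak":  (0.9, 0.1, 0.0), "mutter": (0.8, 0.2, 0.1),
    "walk":   (0.1, 0.9, 0.0), "run":    (0.0, 1.0, 0.1),
    "love":   (0.1, 0.0, 0.9),
}
# Toy supersense lexicon: which verbs fall under each category.
SS_LEXICON = {
    "communication": {"speak", "mutter", "run"},   # 'run' is ambiguous on purpose
    "motion":        {"walk", "run"},
    "emotion":       {"love"},
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def mean(vectors):
    return tuple(sum(xs) / len(vectors) for xs in zip(*vectors))

def assign_supersense(verb):
    """Alg. 1 sketch: an unambiguous verb keeps its single category; an
    ambiguous (or unlisted) verb goes to the category whose average verb
    embedding is closest, with the verb itself excluded from the average."""
    cats = [c for c, verbs in SS_LEXICON.items() if verb in verbs]
    if len(cats) == 1:
        return cats[0]
    best, best_sim = None, -2.0
    for c in (cats or list(SS_LEXICON)):
        others = [EMB[v] for v in SS_LEXICON[c] if v != verb and v in EMB]
        if not others:
            continue
        sim = cosine(EMB[verb], mean(others))
        if sim > best_sim:
            best, best_sim = c, sim
    return best

print(assign_supersense("run"))  # → motion
```

With these toy vectors, the ambiguous verb "run" sits closer to the motion cluster than to the communication cluster, so it is assigned to motion.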
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Verb Clustering and Edge Labelling</head><p>The input to the second level, described in Alg. 2, is the set of SS-based verb clusters. We take all the verb pairs in a cluster and compute their lowest common hypernyms (LCHs), i.e., the lowest common ancestor of the given synsets in the hierarchy. Since each verb can have multiple synsets, a verb pair can have multiple LCHs. These are sorted by frequency of occurrence, which reflects the strength of association with the verb pair, and the pair is associated with the most common LCH. This LCH is taken as the edge label, and we generate the triplets (𝐶1, 𝑟, 𝐶2), where 𝑟 is the predicate/relation and 𝐶1 and 𝐶2 are the entities/characters. E.g., for the verb cluster {call, pass, share, give, take, spend, buy}, the output is {Synset('move.v.02'): ['save', 'call', 'pass', 'give', 'take'], Synset('act.v</p></div>
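The LCH-based labelling can be sketched as follows. To keep the example self-contained, a hand-written hypernym map stands in for the WordNet hierarchy (the synset names and edges below are assumptions for illustration; the actual pipeline would query WordNet, e.g. via NLTK's lowest_common_hypernyms):

```python
from collections import Counter
from itertools import combinations

# Toy hypernym hierarchy (child -> parent), standing in for WordNet.
HYPERNYM = {
    "give.v.01": "transfer.v.01", "take.v.01": "transfer.v.01",
    "buy.v.01":  "acquire.v.01",  "acquire.v.01": "transfer.v.01",
    "transfer.v.01": "move.v.02", "move.v.02": None,
}

def ancestors(synset):
    """The synset itself plus all its hypernyms, nearest first."""
    chain = []
    while synset is not None:
        chain.append(synset)
        synset = HYPERNYM[synset]
    return chain

def lch(a, b):
    """Lowest common hypernym: first ancestor of a that also dominates b."""
    b_ancestors = set(ancestors(b))
    for s in ancestors(a):
        if s in b_ancestors:
            return s
    return None

def cluster_label(synsets):
    """Alg. 2 sketch: collect the LCH of every synset pair in the cluster
    and return the most frequent one as the edge label."""
    counts = Counter(lch(a, b) for a, b in combinations(synsets, 2))
    return counts.most_common(1)[0][0]

print(cluster_label(["give.v.01", "take.v.01", "buy.v.01"]))  # → transfer.v.01
```

The returned label would then fill the predicate slot 𝑟 of the triplets (𝐶1, 𝑟, 𝐶2).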
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>We use the first six books of the Harry Potter series by J.K. Rowling (885'943 words). Tab. 1 shows the statistics of the number of sentences extracted before and after co-referencing for the first book. K-means with cosine distance is used for sentence clustering, and Algs. 1 and 2 are applied. A snapshot of the supersense-based clusters obtained using the proposed approach is in Tab. 2 (left), while the final triplets obtained from the verb clusters at level two are in Tab. 2 (right). Semantically similar verbs are properly clustered together under the corresponding supersense category. E.g., for the category communication, we have verbs such as speak, raise, warn, and mutter. They are closely related to each other, in the sense that all of them are different ways of communicating, expressing emotions and, further, character relations. The preliminary experiments show that our approach yields meaningful clusters and triplets.</p></div>
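A note on the clustering choice above: "K-means with cosine distance" can be obtained from ordinary Euclidean k-means by length-normalizing the embeddings first, since on unit vectors the squared Euclidean distance equals twice the cosine distance. A minimal check of this identity (the two vectors are arbitrary examples):

```python
from math import sqrt

def normalize(v):
    n = sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def euclid2(u, v):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(u, v))

def cosine_dist(u, v):
    u, v = normalize(u), normalize(v)
    return 1.0 - sum(a * b for a, b in zip(u, v))

# On unit vectors, ||u - v||^2 = 2 * (1 - cos(u, v)), so running standard
# k-means on normalized embeddings clusters by cosine distance.
u, v = (3.0, 4.0), (1.0, 0.0)
assert abs(euclid2(normalize(u), normalize(v)) - 2 * cosine_dist(u, v)) < 1e-12
```

This is why normalizing SBERT embeddings before a standard k-means implementation suffices for cosine-based sentence clustering.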
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>We described our preliminary experiments with an unsupervised edge labelling approach for knowledge graphs. A two-level clustering approach, based on verb supersenses and lowest common hypernyms, has been used. To capture semantic similarity, we used BERT-based embeddings. The approach was empirically evaluated on a literary text. As future work, we aim to enhance sense clustering with approaches such as SenseBERT <ref type="bibr" target="#b33">[34]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Phrase segmentation based on constituency parsing tree.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Algorithm 2: Verb Clustering (Level 2)</head><label>2</label><figDesc>Input: Supersense-based Verb Clusters. Output: Triplets (𝐶1, 𝑟, 𝐶2). 1: for each supersense-based verb cluster do; 2: take all the verb pairs; 3: for each verb pair do; 4: compute the lowest common hypernyms (LCHs) and store them all; 5: sort the LCHs based on their frequency; 6: for each verb do; 7: associate it to the most common LCH; 8: if no LCH is associated to a verb then; 9: consider it as an outlier, otherwise associate the relation label with the corresponding LCH; 10: generate the triplets.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Algorithm 1: Verb Clustering (Level 1)</head><label>1</label><figDesc>Input: Extracted Verbs [𝑉 1 ,𝑉 2 ,..𝑉 𝑛 ], Supersense Categories (SC). Output: Supersense-based Verb Clusters. 1: find embedding of [𝑉 1 ,𝑉 2 ,..𝑉 𝑛 ]; 2: if Verb in a single SC then assign it to that SC; 3: else if Verb in multiple SCs then; 4: for each SC do; 5: remove the verb from SC; 6: compute the average embedding of SC with the remaining verbs; 7: if Verb not in any SC then compute the average embeddings of the SCs; 8: compute the distance between the verb embedding and the average embeddings of the SCs; 9: assign the verb to the SC at minimum distance.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 1</head><label>1</label><figDesc>Statistics after different steps of relational sentence detection.</figDesc><table><row><cell>Type of Sentences</cell><cell># Before/After Co-Referencing</cell></row><row><cell>Identified sentences</cell><cell>6394/6394</cell></row><row><cell>With two chars</cell><cell>566/618</cell></row><row><cell>Asymmetric sentences</cell><cell>511/564</cell></row><row><cell>Two different chars</cell><cell>470/516</cell></row><row><cell>Not included sentences</cell><cell>470/516</cell></row><row><cell>Not "… said charX… "</cell><cell>387/433</cell></row><row><cell>Verb between chars</cell><cell>331/380</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 2</head><label>2</label><figDesc>Supersense category and verb clusters (left), representative verbs and triplets (right).</figDesc><table><row><cell>Verb Category</cell><cell>Verbs</cell></row><row><cell>stative</cell><cell>{shake,lose,study,relax,favor}</cell></row><row><cell>communication</cell><cell>{speak,raise,bully,cheer,warn,mutter}</cell></row><row><cell>consume</cell><cell>{growl,scramble,eat}</cell></row><row><cell>motion</cell><cell>{move,walk,slip,look}</cell></row><row><cell>emotion</cell><cell>{fuss,cast,recognize,scare}</cell></row><row><cell>possession</cell><cell>{hand,find,clap,save,borrow,award,swap}</cell></row><row><cell>body</cell><cell>{smile,laugh,grin,blink,spit}</cell></row><row><cell>perception</cell><cell>{fill,whip,fight,insist,glance,throw,break}</cell></row><row><cell>cognition</cell><cell>{snore,hear,feel,help,share,gasp,linger,dance}</cell></row><row><cell>social</cell><cell>{celebrate,dare,punish}</cell></row><row><cell>Representative Verbs</cell><cell>Triplets</cell></row><row><cell>play, act</cell><cell>(Harry,play,Slytherin), (Harry,act,Snape)</cell></row><row><cell>complain, mutter</cell><cell>(Harry,mutter,Snape), (Ron,mutter,Harry)</cell></row><row><cell>block, fight</cell><cell>(Marcus,block,Harry), (Granger,fight,Snape)</cell></row><row><cell>say, repeat</cell><cell>(Quirrell,say,Snape), (Ron,repeat,Hagrid)</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://nlp.stanford.edu/software/CRF-NER.html</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://pypi.org/project/fuzzywuzzy</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://nlp.stanford.edu/projects/coref.shtml</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://stanfordnlp.github.io/CoreNLP/parse.html</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Neologisms in Harry Potter books</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Martínez</forename><surname>Carbajal</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
		<respStmt>
			<orgName>Universidad de Valladolid ; Facultad de Filosofía y Letras</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">Å</forename><surname>Nygren</surname></persName>
		</author>
		<title level="m">Essay on the linguistic features in J.K. Rowling&apos;s Harry Potter and the Philosopher&apos;s Stone</title>
				<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A hybrid model based on neural networks for biomedical relation extraction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Biomedical Informatics</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="83" to="92" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Clinical relation extraction with deep learning</title>
		<author>
			<persName><forename type="first">X</forename><surname>Lv</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Hybrid Information Technology</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="237" to="248" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">News classification from social media using Twitterbased doc2vec model and automatic query expansion</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">Q</forename><surname>Trieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">Q</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-T</forename><surname>Tran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighth International Symposium on Information and Communication Technology</title>
				<meeting>the Eighth International Symposium on Information and Communication Technology</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="460" to="467" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards automatic fake news classification</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Shah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the Association for Information Science and Technology</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="805" to="807" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Improving distributional similarity with lessons learned from word embeddings</title>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Goldberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dagan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Transactions of the Association for Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="211" to="225" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An analysis of hierarchical text classification using word embeddings</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Stein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Jaques</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Valiati</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">471</biblScope>
			<biblScope unit="page" from="216" to="232" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Wieting</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gimpel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Livescu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1511.08198</idno>
		<title level="m">Towards universal paraphrastic sentence embeddings</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Deep contextualized word representations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Neumann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Iyyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
				<meeting>the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="2227" to="2237" />
		</imprint>
	</monogr>
	<note>Long Papers</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<title level="m">BERT: Pre-training of deep bidirectional transformers for language understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Xlnet: Generalized autoregressive pretraining for language understanding</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Carbonell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Salakhutdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="5754" to="5764" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazvininejad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mohamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1910.13461</idno>
		<title level="m">BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Kagnet: Knowledge-aware graph networks for commonsense reasoning</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ren</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</title>
				<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2822" to="2832" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Luo</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1909.03193</idno>
		<title level="m">KG-BERT: BERT for knowledge graph completion</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Knowledge graph embedding via dynamic mapping matrix</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing</title>
		<title level="s">Long Papers</title>
		<meeting>the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="687" to="696" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Knowledge graph embedding by translating on hyperplanes</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Eighth AAAI Conference on Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Brodley</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Stone</surname></persName>
		</editor>
		<meeting>the Twenty-Eighth AAAI Conference on Artificial Intelligence<address><addrLine>Québec City, Québec, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2014">July 27-31, 2014</date>
			<biblScope unit="page" from="1112" to="1119" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Learning entity and relation embeddings for knowledge graph completion</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">B</forename><surname>Bonet</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Koenig</surname></persName>
		</editor>
		<meeting>the Twenty-Ninth AAAI Conference on Artificial Intelligence<address><addrLine>Austin, Texas, USA</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2015">January 25-30, 2015</date>
			<biblScope unit="page" from="2181" to="2187" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Translating embeddings for modeling multi-relational data</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bordes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Usunier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garcia-Duran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Weston</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Yakhnenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="2787" to="2795" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ł</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="5998" to="6008" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">RAGAT: Relation aware graph attention network for knowledge graph completion</title>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="20840" to="20849" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Learning graph attention-aware knowledge graph embedding</title>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Niu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">461</biblScope>
			<biblScope unit="page" from="516" to="529" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Incorporating graph attention mechanism into knowledge graph reasoning based on deep reinforcement learning</title>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</title>
		<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="2623" to="2631" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Sheikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Reinwald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Miksovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gschwind</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Scotton</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2102.07200</idno>
		<title level="m">Knowledge graph embedding using graph convolutional networks with relation-aware attention</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Unsupervised embedding enhancements of knowledge graphs using textual associations</title>
		<author>
			<persName><forename type="first">N</forename><surname>Veira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Keng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Padmanabhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Veneris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="5218" to="5225" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Studying literary characters and character networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Piper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Algee-Hewitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sinha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ruths</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Vala</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Character-to-character sentiment analysis in Shakespeare&apos;s plays</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">T</forename><surname>Nalisnick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Baird</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 51st Annual Meeting of the Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="479" to="483" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Relation clustering in narrative knowledge graphs</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mellace</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kanjirangat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Antonucci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of AI4Narratives - Workshop on Artificial Intelligence for Narratives in conjunction with the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI 2020)</title>
		<title level="s">CEUR Workshop Proceedings, CEUR-WS</title>
		<meeting>AI4Narratives - Workshop on Artificial Intelligence for Narratives in conjunction with the 29th International Joint Conference on Artificial Intelligence and the 17th Pacific Rim International Conference on Artificial Intelligence (IJCAI 2020)<address><addrLine>Yokohama, Japan</addrLine></address></meeting>
		<imprint>
			<biblScope unit="volume">2794</biblScope>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="23" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">ST-DBSCAN: An algorithm for clustering spatial-temporal data</title>
		<author>
			<persName><forename type="first">D</forename><surname>Birant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kut</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data &amp; Knowledge Engineering</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="208" to="221" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Sentence-BERT: Sentence embeddings using Siamese BERT-networks</title>
		<author>
			<persName><forename type="first">N</forename><surname>Reimers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Gurevych</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</title>
				<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="3973" to="3983" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">FaceNet: A unified embedding for face recognition and clustering</title>
		<author>
			<persName><forename type="first">F</forename><surname>Schroff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kalenichenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Philbin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
		<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="page" from="815" to="823" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Supersense tagging of unknown nouns in WordNet</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ciaramita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Johnson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing</title>
		<meeting>the 2003 Conference on Empirical Methods in Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="page" from="168" to="175" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">WordNet: An electronic lexical database</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">A</forename><surname>Miller</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>MIT press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">SenseBERT: Driving some sense into BERT</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Levine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Dagan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Padnos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sharir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shalev-Shwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shashua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shoham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 58th Annual Meeting of the Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="4656" to="4667" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
