<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">From Nodes to Narratives: A Knowledge Graph-based Storytelling Approach</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Mike</forename><surname>De Kok</surname></persName>
							<email>de.kok@student.vu.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">Vrije Universiteit Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<settlement>Sophia Antipolis</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Youssra</forename><surname>Rebboud</surname></persName>
							<email>youssra.rebboud@eurecom.fr</email>
							<affiliation key="aff1">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<settlement>Sophia Antipolis</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pasquale</forename><surname>Lisena</surname></persName>
							<email>pasquale.lisena@eurecom.fr</email>
							<affiliation key="aff1">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<settlement>Sophia Antipolis</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Raphael</forename><surname>Troncy</surname></persName>
							<email>raphael.troncy@eurecom.fr</email>
							<affiliation key="aff1">
								<orgName type="institution">EURECOM</orgName>
								<address>
									<settlement>Sophia Antipolis</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ilaria</forename><surname>Tiddi</surname></persName>
							<email>i.tiddi@vu.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">Vrije Universiteit Amsterdam</orgName>
								<address>
									<settlement>Amsterdam</settlement>
									<country key="NL">The Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">From Nodes to Narratives: A Knowledge Graph-based Storytelling Approach</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">2398BF9256D6A44D3BC33A7405589AA6</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:31+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Narratives</term>
					<term>Knowledge Graphs</term>
					<term>Information Extraction</term>
					<term>Event-centric Knowledge Graphs</term>
					<term>0009-0009-9843-4707 (M. de Kok)</term>
					<term>0000-0003-3507-5646 (Y. Rebboud)</term>
					<term>0000-0003-3094-5585 (P. Lisena)</term>
					<term>0000-0003-0457-1436 (R. Troncy)</term>
					<term>0000-0001-7116-9338 (I. Tiddi)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Narratives wield a profound influence, shaping perceptions, beliefs, and decision-making processes. Although contemporary pre-trained language models have showcased impressive capabilities in text generation and question-answering tasks, they grapple with inherent limitations in knowledge coverage and exhibit vulnerability to societal biases. This work endeavors to forge a methodology that applies Knowledge Graphs in narrative construction. Rather than solely focusing on fundamental aspects such as the 4W (who, what, when, where) and general relationships, our approach comprises finely detailed semantic relations, delineating precise types of causality such as an event preventing, intending-to-cause, causing, or enabling another event. Applying state-of-the-art methods to predict such rich information, we demonstrate that it is possible to obtain automatically generated narratives with better grammatical and semantic accuracy.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Narratives stand at the heart of our societal fabric, serving our understanding and facilitating the exchange and preservation of knowledge. These narratives filter through our everyday lives, appearing in diverse forms such as commercials, political campaigns, news broadcasts, and more, each with its unique purpose and significance. Stories hold immense power to shape our thoughts, beliefs, and actions, making them captivating and transformative <ref type="bibr" target="#b0">[1]</ref>. Consequently, the quest to innovate in the realm of complex narrative generation holds the potential to usher in a new era of AI systems that are intricately attuned to human sensibilities. Building upon the profound role of narratives in our society, it becomes evident that our means of narrative generation and comprehension are intertwined with the capabilities of modern AI. Pre-trained language models (PLMs), exemplified by models such as BERT <ref type="bibr" target="#b1">[2]</ref>, GPT-3 <ref type="bibr" target="#b2">[3]</ref>, and the more recent ChatGPT (GPT-3.5)<ref type="foot" target="#foot_0">1</ref>, have showcased remarkable progress in text generation and conversational tasks. Yet, these models, shaped by training on extensive datasets drawn from undisclosed and diverse sources, bear intrinsic limitations, including knowledge gaps, inaccuracies, and societal biases <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>. 
Their challenges in maintaining semantic coherence and capturing long-term dependencies within text generation further underscore the need for innovation in narrative crafting <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>.</p><p>Knowledge Graphs (KGs) have proven to be suitable structures for representing human knowledge, designed for machine readability and adaptability, and several experiments on text generation from KGs are present in the literature <ref type="bibr" target="#b6">[7]</ref>. Several KGs are available as data sources for the automatic generation of narratives. For example, EventKG <ref type="bibr" target="#b7">[8]</ref> is a knowledge graph that consolidates and links events extracted from diverse sources, including Wikidata and YAGO <ref type="bibr" target="#b8">[9]</ref>. This knowledge graph comprises more than 1.3 million events, each associated with its respective spatial and temporal coordinates. However, EventKG primarily focuses on representing event attributes and the relationships between sub-events and super-events. While the value of such a knowledge graph is undeniable, its limitation to specific event properties, notably the sub- and super-events or the 4W, results in succinct and somewhat incomplete narratives.</p><p>Instead, the FARO dataset <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref> encompasses a broader spectrum of semantically precise relationships. This includes event-related connections such as Prevention, Enabling, Causality, and Intention. In this work, we propose to enhance the WebNLG dataset <ref type="bibr" target="#b11">[12]</ref> by incorporating the FARO dataset. This augmentation aims to generate text with more detailed semantics, particularly focusing on causal, preventive, intentional, and enabling relationships within a specified subgraph of events. 
The implementation code and the appendix are available at https://github.com/ANR-kFLOW/KG2Narrative.</p><p>The remainder of this paper is structured as follows: we first review prior research pertaining to narratives and the extraction of relevant information from KGs (Section 2). We present the datasets in Section 3, and we detail our approach for KG summarization, which encompasses an initial information selection step before text generation, in Section 4. We then present both qualitative and quantitative results in Section 5. We conclude and outline some future work in Section 6.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>A narrative graph <ref type="bibr" target="#b12">[13]</ref> incorporates two main components: the individual representation of events, including the "four W" aspects (who, what, when, where), and the interconnection of these events through temporal and causal relationships. The Simple Event Model (SEM) <ref type="bibr" target="#b13">[14]</ref> provides a foundation for modeling events, but is still insufficient to link disparate events or classes of the same type. To address this limitation, Blin <ref type="bibr" target="#b12">[13]</ref> suggests enriching the event relation types: temporal or causal links from Allen <ref type="bibr" target="#b14">[15]</ref> and dbo:alongside links between classes of the same type. Furthermore, the FARO ontology<ref type="foot" target="#foot_1">2</ref> <ref type="bibr" target="#b9">[10]</ref> covers most of the event relations existing in the literature, from temporal relations to causal and more fine-grained ones such as prevention.</p><p>KG summarization is an initial step of information retrieval and selection. To acquire the essential nodes for event description, an effective approach involves ranking techniques that assign significance to nodes based on the relationships they possess. Various methods can be used, such as entity ranking, relationship ranking, and semantic document ranking <ref type="bibr" target="#b15">[16]</ref>. Blin et al. propose a system that can identify the relevant information needed to build a narrative graph, using an informed graph search traversal strategy <ref type="bibr" target="#b16">[17]</ref>. To determine which information is considered 'relevant', the method uses filters to prune the search space with respect to the Simple Event Model (What, Who, Where, When).</p><p>On the other hand, different methods for generating text from knowledge graphs have been proposed. 
In <ref type="bibr" target="#b17">[18]</ref>, triples are extracted to fine-tune a GPT-2 model <ref type="bibr" target="#b18">[19]</ref>, making the model dependent on the input triples. A similar approach is introduced in <ref type="bibr" target="#b19">[20]</ref>, involving BART <ref type="bibr" target="#b20">[21]</ref> and T5 <ref type="bibr" target="#b21">[22]</ref>. This approach obtained state-of-the-art performance on the AGENDA dataset <ref type="bibr" target="#b22">[23]</ref> but not on the WebNLG dataset. Both found that Pre-trained Language Models (PLMs) work well on unordered representations of the graph. JointGT <ref type="bibr" target="#b23">[24]</ref> uses BART and T5, and exploits new pre-training methods to explicitly preserve the input graph's structural information. JointGT outperforms the other mentioned techniques on WebNLG, which might indicate that including the topology of the graph leads to better results. A different approach <ref type="bibr" target="#b24">[25]</ref> uses a transformer encoding structure to encode both the global information and the local topology information, and feeds a transformer to decode and generate text. However, this did not work as well as the previously mentioned technique <ref type="bibr" target="#b19">[20]</ref>, which used a PLM without a dedicated graph encoding. This might indicate that PLMs obtain better results than transformer models trained from scratch.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Dataset</head><p>In this section, we present the datasets that we used to train our method: WebNLG <ref type="bibr" target="#b25">[26]</ref> and the FARO dataset <ref type="bibr" target="#b10">[11]</ref> (Table <ref type="table" target="#tab_0">1</ref>). For evaluation, we use two evaluation datasets: the FARO test set and the ASRAEL KG <ref type="bibr" target="#b26">[27]</ref>. ASRAEL is a knowledge graph that includes various event-related articles and their interconnections, including the 4W relations. Before our evaluation, ASRAEL lacked precise semantic relations. Therefore, we had to extract these relations from the event articles (linked to the KG) to conduct the assessment. We enhanced the ASRAEL KG with these extracted additional relations (similar to the ones in FARO), resulting in a denser and more comprehensive knowledge graph. To achieve this objective, we used a pre-trained REBEL model <ref type="bibr" target="#b27">[28]</ref> to extract events and relations (cause, enable, prevent, and intend). Furthermore, we leverage an existing event co-reference resolution model <ref type="bibr" target="#b28">[29]</ref> to perform the task within the KG. This model creates clusters of mentions, computes similarity scores for each pair of clusters, merges those with the highest score, and repeats this process until the score falls below a defined threshold, which we empirically set to 0.95. This clustering process resulted in a graph primarily composed of clusters with a single mention, due to not finding a similar match. According to our manual assessment, the algorithm correctly matched a large number of syntactic matches, which makes it trustworthy. In total, we successfully clustered 45,031 mentions, with 36,057 being unique. 
The resulting narrative graph<ref type="foot" target="#foot_2">3</ref> provides an RDF representation of event co-references and relationships, enriched with ontologies such as NIF (NLP Interchange Format<ref type="foot" target="#foot_3">4</ref>), SEM and FARO to describe the relations between triples, further enhancing the context and meaning of our knowledge graph. A global overview of a narrative graph and a concrete example can be found in the appendix.</p></div>
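The greedy cluster-merging loop described above can be sketched as follows. This is an illustrative Python sketch, not the authors' implementation: the real system uses the learned similarity model of [29], for which a simple Jaccard token-overlap score is a hypothetical stand-in.

```python
from itertools import combinations

THRESHOLD = 0.95  # empirically chosen stopping threshold (Section 3)

def similarity(cluster_a, cluster_b):
    # Hypothetical stand-in scorer: best Jaccard token overlap between
    # any pair of mentions across the two clusters.
    return max(
        len(set(a.split()) & set(b.split())) / len(set(a.split()) | set(b.split()))
        for a in cluster_a for b in cluster_b
    )

def cluster_mentions(mentions):
    # Start from one singleton cluster per mention.
    clusters = [[m] for m in mentions]
    while len(clusters) > 1:
        # Score every pair of clusters and pick the highest-scoring one.
        (i, j), best = max(
            ((pair, similarity(clusters[pair[0]], clusters[pair[1]]))
             for pair in combinations(range(len(clusters)), 2)),
            key=lambda x: x[1],
        )
        if best < THRESHOLD:  # stop once the best merge is not confident enough
            break
        clusters[i].extend(clusters.pop(j))  # j > i, so index i stays valid
    return clusters

clusters = cluster_mentions(["Capitol storming", "Capitol storming", "election rally"])
```

With a threshold as high as 0.95, only near-identical mentions merge, which is consistent with the observation above that most resulting clusters contain a single mention.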
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Knowledge graph summarization</head><p>Knowledge Graph summarization comprises two tasks: the selection of pertinent information from the knowledge graph, and the text generation based on the extracted data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Relevant Information Selection</head><p>A SPARQL query has been written to extract the essential nodes, such as persons, places, and times, crucial for constructing a narrative from a main event within the ASRAEL KG. This query prioritizes the selection of events involving the 4W nodes with higher frequencies of incoming edges. Mentions are selected similarly: the larger the cluster of co-referent mentions (formed by the event co-reference model), the higher the priority of that cluster. Since we face a limitation on the number of input tokens of the text generation model, up to three mentions are selected from the same cluster.</p><p>The quality of the output depends largely on the quality of the output of the previous steps (relation extraction and co-reference resolution). Future work aims to enhance the accuracy of both these tasks and to explore methods for identifying relevant nodes indirectly linked to the selected events.</p></div>
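The selection heuristics above (rank 4W nodes by incoming-edge frequency; prefer larger co-reference clusters, capped at three mentions each) can be sketched in Python. The data structures are hypothetical simplifications; the actual system issues a SPARQL query against the ASRAEL KG.

```python
from collections import Counter

MAX_MENTIONS_PER_CLUSTER = 3  # imposed by the generation model's input-token limit

def rank_4w_nodes(edges, candidates):
    # Prioritise candidate nodes (persons, places, times) with more incoming edges.
    indegree = Counter(target for _source, _predicate, target in edges)
    return sorted(candidates, key=lambda node: indegree[node], reverse=True)

def select_mentions(clusters):
    # Larger co-reference clusters get higher priority; at most three
    # mentions are taken from any single cluster.
    ranked = sorted(clusters, key=len, reverse=True)
    return [m for cluster in ranked for m in cluster[:MAX_MENTIONS_PER_CLUSTER]]

nodes = rank_4w_nodes(
    [("e1", "where", "Paris"), ("e2", "where", "Paris"), ("e3", "where", "Rome")],
    ["Rome", "Paris"],
)
mentions = select_mentions([["riot"], ["storming", "assault", "attack", "insurrection"]])
```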
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Text Generation from Knowledge Graphs</head><p>As anticipated in Section 2, using a PLM instead of training a language model from scratch can lead to better results. Furthermore, incorporating the graph's topology into the model has been shown to generate better natural text. JointGT <ref type="bibr" target="#b23">[24]</ref> incorporates both of these characteristics; hence, we adopted this method. The authors pre-trained this model on the KGText dataset <ref type="bibr" target="#b29">[30]</ref>, consisting of 7 million graph-text pairs extracted from an English Wikipedia dump.<ref type="foot" target="#foot_4">5</ref> It includes around 1.8 million entities and 1,210 relations.</p><p>The WebNLG dataset does not contain any of the FARO relations. Therefore, we fine-tuned the model on a merged dataset combining WebNLG and FARO, as in Table <ref type="table" target="#tab_1">2</ref>, without making changes to the model itself. The creation of this combined dataset involves the following multi-step process. Initially, entities and their respective encodings are extracted from the WebNLG dataset. Subsequently, entities from the FARO dataset are encoded utilizing the extracted encodings from WebNLG. Finally, the resulting encodings and their relations are integrated into the original WebNLG dataset, thereby producing the combined dataset. The model is also fine-tuned on the WebNLG dataset alone. We refer to the model fine-tuned on WebNLG as the base model, and the model fine-tuned on the combined dataset as the combined model.<ref type="foot" target="#foot_5">6</ref> Table <ref type="table" target="#tab_2">3</ref> provides crucial insights into the models' performance, measured by the BLEU, METEOR, and ROUGE metrics. BLEU emphasizes precision, indicating how accurately the generated text aligns with the reference text. 
On the other hand, ROUGE focuses on recall, gauging the extent to which the reference text is captured in the generated output. METEOR combines elements of both precision and recall, and its effectiveness can be further enhanced by incorporating improved word matching strategies. ROUGE suggests a high level of alignment with the reference texts in conveying information, while BLEU shows minor word deviations from the references. The lower METEOR score might stem from alignment nuances in the score calculation. Notably, the base model's test performance closely mirrors the results outlined in the original JointGT paper <ref type="bibr" target="#b23">[24]</ref>. The model trained on the combined dataset performed slightly worse on all three metrics than the model trained on the base WebNLG data. This can be explained by two considerations. First, it is evident in Table <ref type="table" target="#tab_2">3</ref> that the tests on FARO show very low performance. Secondly, the FARO dataset only accounts for a relatively small proportion of the combined dataset (Table <ref type="table" target="#tab_1">2</ref>). To better understand the reasons, a qualitative analysis is proposed in the next section.</p></div>
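The multi-step merging process can be illustrated with a minimal sketch. The record fields below ("triples", "text") are assumptions loosely modeled on WebNLG-style data-to-text inputs, not the exact encoding used by JointGT.

```python
def faro_to_record(subject, relation, obj, sentence, record_id):
    # One FARO instance carries a single triple and a single reference sentence.
    return {
        "id": record_id,
        "triples": [[subject, relation, obj]],
        "text": [sentence],
    }

def merge_datasets(webnlg_records, faro_rows):
    # Append FARO instances to the WebNLG records, continuing the id sequence.
    combined = list(webnlg_records)
    for i, (s, r, o, sentence) in enumerate(faro_rows, start=len(webnlg_records)):
        combined.append(faro_to_record(s, r, o, sentence, i))
    return combined

combined = merge_datasets(
    [{"id": 0,
      "triples": [["Amsterdam", "country", "Netherlands"]],
      "text": ["Amsterdam is in the Netherlands."]}],
    [("laws", "prevent", "abuse",
      "The government has implemented a series of laws to prevent the abuse of animals.")],
)
```

Note that each appended FARO record contributes exactly one triple and one reference label, which, as discussed in Section 5.2, gives the model fewer cues than the multi-triple WebNLG instances.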
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Quantitative analysis</head></div>
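As a toy illustration of the precision/recall distinction between BLEU and ROUGE discussed in Section 4.2, consider unigram overlap only (the real metrics use n-grams, brevity penalties, stemming, and more):

```python
def unigram_precision(candidate, reference):
    # BLEU-like view: what fraction of generated words appear in the reference?
    cand_tokens, ref_tokens = candidate.split(), set(reference.split())
    return sum(w in ref_tokens for w in cand_tokens) / len(cand_tokens)

def unigram_recall(candidate, reference):
    # ROUGE-like view: what fraction of reference words are covered by the output?
    cand_tokens, ref_tokens = set(candidate.split()), reference.split()
    return sum(w in cand_tokens for w in ref_tokens) / len(ref_tokens)

reference = "the offer will reimburse shareholders"
candidate = "the offer will reimburse"
precision = unigram_precision(candidate, reference)  # every generated word is in the reference
recall = unigram_recall(candidate, reference)        # one reference word is missed
```

A short but accurate output thus scores high on precision and lower on recall, which is why the two metrics can diverge on the same generated text.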
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Qualitative analysis</head><p>We examine instances from the WebNLG and FARO datasets to analyze the base and combined models' performance. Observing Tables <ref type="table" target="#tab_3">4 and 5</ref>, the text generated by the model trained on the combined dataset appears more semantically robust. The base model's generated text for FARO triples (Table <ref type="table">4</ref>, column Base generated) is notably brief, often mirroring the triples with semantic inaccuracies. Conversely, the combined model produces more coherent and accurate sentences on the same dataset (column Combined generated), maintaining the triple direction. However, it is important to note that while the generated content respects the triples and semantic accuracy, it may still have limitations in altering the original label's content.</p><p>We also gain insight into why the quantitative results are slightly worse for the combined model. The WebNLG data (Table <ref type="table" target="#tab_3">5</ref>) contains multiple triples per instance, giving more information about the text, and contains multiple labels. The FARO data (Table <ref type="table">4</ref>) contains only one triple per instance, together with one target sentence (label). Therefore, the model has less information about what to generate, and fewer chances to match the target label. Looking at the FARO input triples and the target label, it can be seen that the relationship (predicate) is often not explicitly represented by a particular word in the target sentence (implicit relation), making evaluation based on word matching harder. We provide additional insights in the appendix.</p><p>User Evaluation on ASRAEL To evaluate the system's performance, seven events from the ASRAEL dataset have been selected based on several criteria: values for the 4W properties, linking to a minimal number of articles, etc. 
The two largest (in terms of having the most articles) events in ASRAEL having all of the 4W properties are selected for evaluation: "Operation Breaking Dawn" and "2021 storming of the United States Capitol". The rationale behind this is to ensure that the information selection method is challenged by having an extensive amount of information to choose from. Among the remaining events in ASRAEL that include information about the place and time, five additional events are selected, bringing the total to seven.</p><p>The information selection method is used to select the time, place, actor, and up to three mentions from the seven selected events. The base and combined models are used to generate text from the selected information. This information per event can be found in the appendix, together with the generated text. A manual evaluation was needed due to the absence of reference text for automated metrics. Three annotators with a proficient level of English fluency determined which text was better for each event, by using either "win", "lose", or "tie", assessing fluency (grammatical correctness) and adequacy (correct integration of triples). This method aligns with the approach in <ref type="bibr" target="#b23">[24]</ref>. Majority voting determined the winner or equality between models, followed by a non-parametric sign test at a significance level of α = 0.05 to establish superiority. The non-parametric sign test assesses whether the median difference between observations differs significantly from zero, providing a p-value that indicates the probability of observing the given difference, or a more extreme one, if the null hypothesis (no difference) were true. The significance level, denoted by α, is a predetermined threshold set at 0.05, Table <ref type="table">4</ref>: Sample of the FARO test-set and the generated output of the base and combined model.</p></div>
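A minimal exact sign test on win/loss counts (ties dropped, as is standard for the sign test) might look like the following sketch. Whether a one- or two-sided variant was applied is not specified in the text, so this two-sided version is an assumption and is not expected to reproduce the exact p-values reported.

```python
from math import comb

def sign_test_p(wins, losses):
    # Two-sided exact sign test: probability, under the null hypothesis of
    # no difference (each comparison is a fair coin, p = 0.5), of observing
    # a win/loss split at least as extreme as the one seen.
    n, k = wins + losses, max(wins, losses)
    tail = sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# e.g. 5 wins and 1 loss after dropping ties
p_value = sign_test_p(5, 1)
```

With so few events, even a lopsided win/loss split rarely clears α = 0.05, which is consistent with the non-significant fluency and adequacy results reported below.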
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Triple</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Label</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Base generated</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Combined generated</head><p>(offer, cause, reimburse) (The directors said if Messrs. Drabinsky and Gottlieb mail an offer to shareholders by Nov. 22, it will reimburse them a maximum of C$8.5 million for expenses related to a bid.)</p><p>The cause of the offer is to reimburse.</p><p>The company has also announced that it will offer a new credit facility to small businesses, in an effort to reimburse them for the cost of capital expenditures. against which the p-value is compared to determine statistical significance. Results of this annotation are accessible in Table <ref type="table" target="#tab_4">6</ref>.</p><p>The combined model produces more fluent text than the base model in 71.4% of the cases. The non-parametric sign test was performed to measure a significant difference in the fluency of the text. With a p-value of 0.11, no significant difference was found. The same was done to gauge the text's adequacy. With a p-value of 0.25, no significant difference was found. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>User Evaluation on a Manually Annotated Event</head><p>To demonstrate whether the obtained results are consistent independently of the quality of the information extraction output, we decided to perform a user evaluation on a single article (sample), which has been manually annotated by handcrafting the resulting subgraph. This subgraph has been processed with both the combined and base model, and then evaluated using either "win", "lose", or "tie", in the same way as described in the previous section. The percentage of wins, losses and ties for the combined model, together with the Fleiss' kappa, are reported in Table <ref type="table" target="#tab_6">8</ref>. BLEU, METEOR, and ROUGE metrics have been computed using the sentences from the article as "reference label". These scores are detailed in Table <ref type="table" target="#tab_5">7</ref>. This illustrates that the base model performs slightly better than the model that was trained on the combined data. A reason for this could be formulated by looking at the generated texts, which can be found in the appendix. More often than the combined model, the base model outputs parts of the triples without taking the relationship between them into account. This results in badly formed sentences but higher metrics, since more triples are incorporated. This is also reflected in the scores in Table <ref type="table" target="#tab_6">8</ref>, where the combined model is commonly noted for producing more fluent texts. Furthermore, the scores in Table <ref type="table" target="#tab_5">7</ref> (computed on a single annotated article) are much lower than those computed on the whole WebNLG test set (Table <ref type="table" target="#tab_2">3</ref>). This outcome could be expected, considering that some of the triples extracted from the article are not, or only to a limited extent, present in the original WebNLG data used to pre-train the JointGT model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and Future Work</head><p>The primary goal of this research is to investigate how to build complex narratives in the form of graphs of events, generating text with a good level of complexity and semantic richness, expecting the system to generate answers beyond only What (event), Who (actor), Where (location), and When (time).</p><p>We enhanced the WebNLG dataset through the incorporation of the FARO dataset, aimed at refining the semantic depth of event relations. The expanded dataset now encompasses intricate relations including causality, prevention, intention, and enabling. Even if the metrics show no clear improvement, from the qualitative analysis we can state that training on precise event relations produces more complete generated sentences, while no statistically significant difference was observed on fluency. Future work will experiment on more data to draw final conclusions. Our information selection from the graph focuses solely on the main event, disregarding pertinent details from interconnected events. Additionally, the data used for fine-tuning differs from the original dataset in terms of triple counts and instances, potentially impacting model evaluation. Future research could explore selectively extracting sub-events and relations at the document level to enhance clustering. Moreover, augmenting the dataset through NLP techniques could significantly improve its quality and comprehensiveness.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Sample of the FARO dataset.</figDesc><table><row><cell>Sentence</cell><cell cols="3">Trigger1 Trigger2 Tag</cell><cell>Triplets</cell></row><row><cell>The government has implemented a series of laws to prevent the abuse of animals.</cell><cell>laws</cell><cell>abuse</cell><cell>prevent</cell><cell>&lt;triplet&gt;laws &lt;subj&gt; abuse &lt;obj&gt;prevent</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Sizes of the datasets used for training and evaluating the JointGT model.</figDesc><table><row><cell>Dataset</cell><cell>Train</cell><cell cols="2">Val Test</cell></row><row><cell>WebNLG</cell><cell cols="3">12,876 1,619 1,600</cell></row><row><cell>FARO</cell><cell>1,800</cell><cell>201</cell><cell>108</cell></row><row><cell cols="4">Combined 14,676 1,820 1,708</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>The performance metrics of the best performing model on the corresponding validation and test set, either WebNLG or the combined set. Both models are also evaluated on the FARO test set.</figDesc><table><row><cell>Model</cell><cell>Dataset</cell><cell cols="4">BLEU METEOR ROUGE Step</cell><cell>Epoch</cell></row><row><cell></cell><cell>Val</cell><cell cols="2">0.6642 0.4727</cell><cell>0.7558</cell><cell cols="2">22400 6</cell></row><row><cell>Base (WebNLG)</cell><cell>Test</cell><cell cols="2">0.6529 0.4681</cell><cell>0.7535</cell><cell>-</cell><cell>-</cell></row><row><cell></cell><cell cols="2">FARO test 0.0</cell><cell>0.0565</cell><cell>0.1299</cell><cell>-</cell><cell>-</cell></row><row><cell></cell><cell>Val</cell><cell cols="2">0.6368 0.4543</cell><cell>0.7468</cell><cell cols="2">36000 9</cell></row><row><cell>Combined</cell><cell>Test</cell><cell cols="2">0.6101 0.4409</cell><cell>0.7260</cell><cell>-</cell><cell>-</cell></row><row><cell></cell><cell cols="3">FARO test 0.0477 0.0877</cell><cell>0.1949</cell><cell>-</cell><cell>-</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 5 :</head><label>5</label><figDesc>Sample of the WebNLG Test-set and the generated output of the base model.</figDesc><table><row><cell>Triple</cell></row><row><cell>Label</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 6</head><label>6</label><figDesc>Fleiss' Kappa (κ) indicates perfect and moderate agreement between annotators. The wins, losses, and ties when comparing the combined model against the base model are indicated in percentages. No model was significantly better than another with a significance level of 0.05.</figDesc><table><row><cell>Model</cell><cell cols="3">Fluency Win % Lose % Tie %</cell><cell></cell><cell cols="3">Adequacy Win % Lose % Tie %</cell><cell></cell></row><row><cell cols="2">Combined vs Base 71.4</cell><cell>14.3</cell><cell>14.3</cell><cell cols="2">1.0 28.6</cell><cell>0.0</cell><cell>71.4</cell><cell>0.6</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 7</head><label>7</label><figDesc>BLEU, METEOR, and ROUGE scores per model on the generated text from the article.</figDesc><table><row><cell>Model</cell><cell cols="2">BLEU METEOR ROUGE</cell></row><row><cell cols="2">Combined 0.1681 0.2081</cell><cell>0.3622</cell></row><row><cell>Base</cell><cell>0.1874 0.2273</cell><cell>0.3738</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 8</head><label>8</label><figDesc>Fleiss' Kappa (κ) indicates substantial agreement between annotators. The wins, losses, and ties when comparing the combined model against the base model are indicated in percentages. The combined model was significantly better than the base model in generating adequate sentences.</figDesc><table><row><cell>Model</cell><cell cols="3">Fluency Win % Lose % Tie %</cell><cell></cell><cell cols="3">Adequacy Win % Lose % Tie %</cell><cell></cell></row><row><cell cols="2">Combined vs Base 33.3</cell><cell>16.7</cell><cell>50.0</cell><cell cols="2">0.73 58.3</cell><cell>8.3</cell><cell>33.3</cell><cell>0.61</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>Table 8 .</head><label>8</label><figDesc>The combined model has been assigned more wins for producing fluent and adequate text. The non-parametric "signed test" is applied to test if this is significant, again, with a significance level of 0.05. With a p-value of 0.34, no significant difference is found in generating more fluent texts between models. With a p-value of 0.04, a significant difference is found in generating more adequate sentences by the combined model, compared to the base model.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://openai.com/blog/chatgpt/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://anr-kflow.github.io/faro/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://github.com/ANR-kFLOW/KG2Narrative/blob/main/Data/graphs/final_generated/eag_complete_merged.ttl</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://persistence.uni-leipzig.org/nlp2rdf/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://dumps.wikimedia.org/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">The model was replicated using the same parameters as the original paper, except for the batch size, which was lowered due to memory constraints. The parameters are: learning rate 0.000025, batch size 4, epochs 10, optimizer Adam, early stopping after 10 epochs.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work has been partially supported by the French National Research Agency (ANR) within the kFLOW project (Grant n°ANR-21-CE23-0028) and the European Union Horizon 2020 research and innovation programme within the MUHAI project (Grant n°951846).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Understanding media enjoyment: The role of transportation into narrative worlds</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Green</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">C</forename><surname>Brock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">F</forename><surname>Kaufman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communication theory</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="311" to="327" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/n19-1423</idno>
	</analytic>
	<monogr>
		<title level="m">Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT, ACL</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Language Models are Few-Shot Learners</title>
		<author>
			<persName><forename type="first">T</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1877" to="1901" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Documenting Large Webtext Corpora: A Case Study on the Colossal Clean Crawled Corpus</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dodge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Marasović</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Agnew</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Ilharco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Groeneveld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gardner</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.emnlp-main.98</idno>
	</analytic>
	<monogr>
		<title level="m">Conference on Empirical Methods in Natural Language Processing (EMNLP), ACL</title>
				<imprint>
			<date type="published" when="2021">2021. 2021</date>
			<biblScope unit="page" from="1286" to="1305" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Pretrained Language Model for Text Generation: A Survey</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-R</forename><surname>Wen</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2021/612</idno>
	</analytic>
	<monogr>
		<title level="m">Thirtieth International Joint Conference on Artificial Intelligence (IJCAI), Survey Track, IJCAI Organization</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="4492" to="4499" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Coherence boosting: When your pretrained language model is not paying enough attention</title>
		<author>
			<persName><forename type="first">N</forename><surname>Malkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jojic</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.acl-long.565</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Muresan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Villavicencio</surname></persName>
		</editor>
		<meeting>the 60th Annual Meeting of the Association for Computational Linguistics<address><addrLine>Dublin, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="8214" to="8236" />
		</imprint>
	</monogr>
	<note>Long Papers), Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Survey on Knowledge Graphs: Representation, Acquisition, and Applications</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Cambria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Marttinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S</forename><surname>Yu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TNNLS.2021.3070843</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="494" to="514" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">EventKG -the hub of event knowledge on the web -and biographical timeline generation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gottschalk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Demidova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Semantic Web</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">6</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">YAGO2: A spatially and temporally enhanced knowledge base from Wikipedia</title>
		<author>
			<persName><forename type="first">J</forename><surname>Hoffart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Suchanek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Berberich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Weikum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">194</biblScope>
			<biblScope unit="page" from="28" to="61" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Beyond Causality: Representing Event Relations in Knowledge Graphs</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Rebboud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lisena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-17105-5_9</idno>
	</analytic>
	<monogr>
		<title level="m">Knowledge Engineering and Knowledge Management</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="121" to="135" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Prompt-based Data Augmentation for Semantically-Precise Event Relation Classification</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Rebboud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lisena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Semantic Methods for Events and Stories workshop (SEMMES)</title>
				<meeting><address><addrLine>Hersonissos, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Creating Training Corpora for NLG Micro-Planners</title>
		<author>
			<persName><forename type="first">C</forename><surname>Gardent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shimorina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Perez-Beltrachini</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P17-1017</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics</title>
				<meeting>the 55th Annual Meeting of the Association for Computational Linguistics<address><addrLine>ACL, Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="179" to="188" />
		</imprint>
	</monogr>
	<note>: Long Papers)</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Building Narrative Structures from Knowledge Graphs</title>
		<author>
			<persName><forename type="first">I</forename><surname>Blin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Semantic Web: ESWC 2022 Satellite Events</title>
				<meeting><address><addrLine>Germany</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="234" to="251" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Design and use of the Simple Event Model (SEM)</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">R</forename><surname>Van Hage</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Malaisé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Segers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Hollink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Schreiber</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Web Semantics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="128" to="136" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Maintaining Knowledge about Temporal Intervals</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">F</forename><surname>Allen</surname></persName>
		</author>
		<idno type="DOI">10.1145/182.358434</idno>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="832" to="843" />
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A review of ranking approaches for semantic search on Web</title>
		<author>
			<persName><forename type="first">V</forename><surname>Jindal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bawa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Batra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Processing &amp; Management</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="416" to="425" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Blin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tiddi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Van Trijp</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ten Teije</surname></persName>
		</author>
		<title level="m">Identifying graph traversal strategies to build narrative graphs</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>Under review</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Creative Storytelling with Language Models and Knowledge Graphs</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tiddi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CIKMW2020 Proceeding of the CIKM 2020 Workshops, CEUR Workshop Proceedings, CEUR-WS</title>
				<editor>
			<persName><forename type="first">S</forename><surname>Conrad</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Tiddi</surname></persName>
		</editor>
		<meeting><address><addrLine>CIKMW</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020. 2020. 2020</date>
			<biblScope unit="page" from="23" to="33" />
		</imprint>
	</monogr>
	<note>Conference date</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Language Models are Unsupervised Multitask Learners</title>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Luan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">OpenAI blog</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">8</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Investigating Pretrained Language Models for Graph-to-Text Generation</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F R</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Schmitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schütze</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Gurevych</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">3rd Workshop on Natural Language Processing for Conversational AI, ACL, Online</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="211" to="227" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension</title>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazvininejad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mohamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2020.acl-main.703</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">D</forename><surname>Jurafsky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Chai</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Schluter</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Tetreault</surname></persName>
		</editor>
		<meeting>the 58th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="7871" to="7880" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer</title>
		<author>
			<persName><forename type="first">C</forename><surname>Raffel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roberts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Matena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Mach. Learn. Res</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Text Generation from Knowledge Graphs with Graph Transformers</title>
		<author>
			<persName><forename type="first">R</forename><surname>Koncel-Kedziorski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bekal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Luan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lapata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajishirzi</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1238</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<editor>
			<persName><forename type="first">J</forename><surname>Burstein</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Doran</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Solorio</surname></persName>
		</editor>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="2284" to="2293" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">JointGT: Graph-Text Joint Representation Learning for Text Generation from Knowledge Graphs</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Cui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, ACL</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2526" to="2538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">How to Train Your Agent to Read and Write</title>
		<author>
			<persName><forename type="first">L</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Conference on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="13397" to="13405" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">The WebNLG Challenge: Generating Text from RDF Data</title>
		<author>
			<persName><forename type="first">C</forename><surname>Gardent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shimorina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Narayan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Perez-Beltrachini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">10th International Conference on Natural Language Generation, ACL</title>
				<meeting><address><addrLine>Santiago de Compostela, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="124" to="133" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Searching News Articles Using an Event Knowledge Graph Leveraged by Wikidata</title>
		<author>
			<persName><forename type="first">C</forename><surname>Rudnik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ehrhart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ferret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Teyssou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Tannier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 World Wide Web Conference, WWW, ACL</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1232" to="1239" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">REBEL: Relation Extraction By End-to-end Language generation</title>
		<author>
			<persName><forename type="first">P.-L</forename><surname>Huguet Cabot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Navigli</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2021.findings-emnlp.204" />
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: EMNLP 2021, ACL</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="2370" to="2381" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Revisiting Joint Modeling of Cross-document Entity and Event Coreference Resolution</title>
		<author>
			<persName><forename type="first">S</forename><surname>Barhom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Shwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Eirew</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bugert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Reimers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dagan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">57th Annual Meeting of the Association for Computational Linguistics, ACL</title>
				<meeting><address><addrLine>Florence, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4179" to="4189" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation</title>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), ACL, Online</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="8635" to="8648" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
