<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Story Fragment Stitching: The Case of the Story of Moses</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mohammed Aldawsari</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ehsaneddin Asgari</string-name>
          <email>asgari@berkeley.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mark A. Finlayson</string-name>
          <email>markaf@fiu.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>In: A. Jorge, R. Campos, A. Jatowt, A. Aizawa (eds.): Proceedings of the first AI4Narratives Workshop</institution>
          ,
          <addr-line>Yokohama</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of California</institution>
          ,
          <addr-line>Berkeley</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>2</fpage>
      <lpage>9</lpage>
      <abstract>
<p>We introduce the task of story fragment stitching, which is the process of automatically aligning and merging event sequences of partial tellings of a story (i.e., story fragments). We assume that each fragment contains at least one event from the story of interest, and that every fragment shares at least one event with another fragment. We propose a graph-based unsupervised approach to this problem in which event mentions are represented as nodes in a graph, and the graph is compressed using a variant of model merging to combine nodes. The goal is for each node in the final graph to contain only coreferent event mentions. To find coreferent events, we use BERT contextualized embeddings in conjunction with a tf-idf vector representation. Constraints on the merge compression preserve the overall timeline of the story, and the final graph represents the full story timeline. We evaluate our approach using a new annotated corpus of the partial tellings of the story of Moses found in the Quran, which we release for public use. Our approach achieves an F1 score of 0.63.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>Understanding stories is a long-held goal of both artificial
intelligence and natural language processing [Charniak, 1972;
Schank and Abelson, 1977; Wilensky, 1978; Dyer, 1983;
Riloff, 1999; Frank et al., 2003; Mueller, 2007; Winston,
2014]. Stories are found throughout our daily lives, e.g., in
news, entertainment, education, religion, and many other
domains. Automatically understanding stories implicates many
interesting natural language processing tasks, and much
information can be extracted from stories, including concrete
facts about specific events, people, and things, commonsense
knowledge about the world, and cultural knowledge about
the societies in which we live. (Copyright © 2020 by the paper’s authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).) One interesting and
challenging task which has not yet been solved is what we call
here story fragment stitching. In this task we seek to merge
partial tellings of a story—where each partial telling
contains part of the sequence of events of a story, perhaps from
different points of view, and may be found across different
sources or media—into one coherent narrative which may
then be used as the basis for further processing.
Conceptually, this task is similar to both cross-document event
coreference (CDEC) and event ordering in NLP. However, story
fragment stitching, as we define it, presents a more
challenging problem for at least two reasons. First, and unlike event
coreference, the overall timeline of the story’s events needs to
be preserved across all fragments. Second, and unlike event
ordering which targets only events related to a single entity,
this work considers all events across all fragments.</p>
      <p>For the purposes of this work, we define a story as a
sequence of events effected by characters and presented in a
discourse. This is in accord with fairly standard definitions:
for example, [Forster, 1927] said that “A story is a narrative of
events arranged in their time sequence.” As a simplifying
assumption, we additionally assume that the events in the story
are presented in the chronological order in which the events
of a story take place (i.e., the fabula time order) [Bordwell,
2007]. We leave the problem of extracting the chronological
ordering of events within a text for other work.</p>
      <p>We present an approach to the story fragment stitching
problem inspired by [Finlayson, 2016], which is in turn based on
model merging, a regular grammar learning algorithm
[Stolcke and Omohundro, 1993], using similarity measures based
on BERT contextualized embeddings and tf-idf weights of
events and their arguments. We apply this approach to a
concrete example of this problem, namely, the story of the
prophet Moses as found in the Quran, the Islamic holy book.
The story of Moses is not found in one single telling in the
Quran; rather, it is found in eight fragments spread across six
different chapters (the chapters of the Quran are called suras),
with the story comprising 7,931 total words across 283 verses
of anywhere from 2 to 94 words in length. In this work we
demonstrate our approach using the seven fragments with
coherent timelines.</p>
      <p>The story of Moses is especially useful for this work
because it has been subject to detailed event analysis, in
particular, [Ghanbari and Ghanbari, 2008] identified a
canonical timeline of events for the story. Further, the Quran verse
structure provides a natural unit of analysis, where nearly
every verse is related to only a single event in the story timeline.
We manually extracted 573 event mentions from 273 verses
(omitting 11, as described later) and annotated all events
corresponding to Ghanbari’s event categories. We used this
data to test our approach, resulting in a proof of concept of
story fragment stitching. We release both our code and data
to enable reimplementations1.</p>
      <p>We begin by discussing prior work on cross-document
event coreference, event ordering, as well as the description
and analysis of story structure (§2). Then we introduce our
method, including the task definition (§3.1) and specific
aspects of our approach (§3.2–§3.3). We then describe our
evaluation, including construction of the gold standard for the
Moses story in the Quran (§4.1), the experiment setup (§4.2),
the result of our model (§4.5), as well as an error analysis
(§5). We conclude with a list of contributions (§6).</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>The most closely related problems to story stitching are the
problem of cross-document event coreference (CDEC) and
cross-document event ordering. In CDEC systems, the goal is
to group expressions that refer to the same event across
multiple documents [Bagga and Baldwin, 1999; Lee et al., 2012;
Goyal et al., 2013; Saquete and Navarro-Colorado, 2017;
Kenyon-Dean et al., 2018; Barhom et al., 2019]. In the event
ordering task, introduced in SemEval-2015
[Minard et al., 2015], the goal is to order events across documents
in which a specific target entity is involved. That is, a
system should produce a timeline for a specific target entity,
and that timeline consists of the ordered list of the events
in which that entity participates. Similarly, the within-document
event sequence detection task, introduced in the TAC
KBP 2017 event track [Mitamura et al., 2017], aims to
identify event sequences (i.e., “after” links) that occur in a script
[Schank and Abelson, 1977].</p>
      <p>Despite this very interesting and useful prior work, these
systems are not directly applicable to the task of story
fragment stitching as we define it. In particular, CDEC systems
ignore the timeline of the story’s events (i.e., the overall
timeline of the story’s events is not guaranteed to be preserved
across all fragments), while event ordering systems only
order certain events related to a specific target.</p>
      <p>Researchers have explored several ways of assessing
similarity between stories [Schank and Abelson, 1975; Roth and
Frank, 2012; Finlayson, 2012; Iyyer et al., 2016; Nikolentzos
et al., 2017; Chaturvedi et al., 2018]. These works provided
valuable ways to capture similarity between stories.
However, the story similarity task is not directly applicable to the
task of stitching fragmented stories, where the goal is to order
events across multiple stories (fragments), except in the
simple baseline sequence alignment approach [Needleman and
Wunsch, 1970; Reiter, 2014].</p>
      <p>1The code and data are available at https://doi.org/10.34703/
gzx1-9v95/28GC2M</p>
    </sec>
    <sec id="sec-3">
      <title>Approach</title>
      <p>We now discuss the precise definition of the story fragment
stitching task (§3.1) and the details of the two main
components of our approach: model formulation (§3.2), and the
graph merge that aligns fragment events into a full, ordered,
end-to-end list of story events (§3.3).</p>
      <sec id="sec-3-1">
        <title>Task</title>
        <p>We define the goal of story fragment stitching as: align a set
of story fragments into a full, ordered, end-to-end list of story
events. We assume that the story fragments are ordered lists
of events, where the order is that of the fabula, namely the
order of events as they happen in the story world. In many
stories, the fabula order is different from the discourse order,
but we do not consider this case here; we leave the problem
of extracting the chronological order of events to other work.
We also assume that each fragment shares at least one event
with another fragment. The output of the system is an
ordered list of nodes, where each node is a collection of event
mentions (coreferring events) that all describe one particular
event, and these nodes are in the same order as the
overall fabula.</p>
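<p>Concretely, the task’s input/output contract can be sketched as follows (a minimal illustration, not the authors’ code; the mention identifiers are hypothetical):</p>

```python
# Illustrative sketch of the task's input/output contract (not the
# authors' code; the mention ids m1..m5 are hypothetical).

# Input: fragments, each an ordered list of event mentions in fabula order.
fragments = [
    ["m1", "m2", "m3"],   # fragment A
    ["m4", "m5"],         # fragment B: m4 corefers with m2, m5 with m3
]

# Output: an ordered list of nodes; each node is a set of coreferring
# mentions, and the node order follows the overall fabula.
stitched = [{"m1"}, {"m2", "m4"}, {"m3", "m5"}]

# Every mention appears in exactly one output node.
flat = sorted(m for node in stitched for m in node)
assert flat == sorted(m for frag in fragments for m in frag)
```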
      </sec>
      <sec id="sec-3-2">
        <title>Model Formulation</title>
        <p>The first step of the approach is model initialization, which
is shown in Algorithm 1, lines 1–3. Using the function
constructLinearBranch, we convert each fragment’s
list of events into a linear directed graph (linear branch)
where each node contains only a single event. Each event
is represented by a vector which is a concatenation of the
event’s contextualized embedding from the BERT model and the
tf-idf weights of the event lemma and its semantic arguments.
BERT [Devlin et al., 2018] is a multi-layer bidirectional
transformer trained on plain text for masked word prediction and
next sentence prediction tasks, while tf-idf is the standard
term weighting approach that reflects how important a word is
in a document in comparison to the rest of the documents [Salton
and McGill, 1986]. Using the function linkGraphs we
link all linear branches to a start and an end node, resulting
in one directed graph over the full set of fragments, as shown in
Figure 1.</p>
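<p>The initialization step can be sketched with the networkx library as follows (an illustrative sketch of the description above, not the released implementation; fragment names and event lemmas are hypothetical, and the event vectors are omitted):</p>

```python
# Sketch of model initialization: each fragment becomes a linear branch
# of single-event nodes, and all branches share "start" and "end" nodes.
# Not the released implementation; fragment/event names are illustrative.
import networkx as nx

def construct_linear_branch(graph, fragment_id, events):
    """Add a linear chain of single-event nodes for one fragment."""
    nodes = [(fragment_id, i) for i in range(len(events))]
    for node, event in zip(nodes, events):
        graph.add_node(node, mentions=[event])
    nx.add_path(graph, nodes)   # directed edges in fabula order
    return nodes

def link_graphs(fragments):
    """Build one directed graph containing every fragment's branch."""
    model = nx.DiGraph()
    model.add_node("start")
    model.add_node("end")
    for frag_id, events in fragments.items():
        nodes = construct_linear_branch(model, frag_id, events)
        model.add_edge("start", nodes[0])
        model.add_edge(nodes[-1], "end")
    return model

model = link_graphs({"f1": ["threw", "became"], "f2": ["displayed", "threw"]})
```

<p>Each fragment contributes one start-to-end path; cross-fragment links are introduced only by the subsequent merging process.</p>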
        <p>This initial model will be used to generate possible
solutions by merging different nodes on the basis of a similarity
measure, discussed below. When two nodes A and B are
merged, the new node C contains the average of the vectors of
A and B. In the next section, we introduce the merge
approach.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Graph Merge</title>
        <p>The second step of the approach is model merging, shown in
Algorithm 1, lines 4–15. We first compute a threshold α using the
computeTFIDFAvgSim function, which takes the average
of the highest and lowest cosine similarity values between all
fragments using tf-idf weights. α sets the minimum similarity
required to merge two nodes; for our data α was 0.39. Next,
using a cosine similarity measure, the computeNodesSim
function computes the full set of similarity scores between
all pairs of nodes. Then the algorithm starts by searching for
the most similar nodes using the findMostSim function, lines
6 and 13, and merges the most similar nodes using the merge
function. Because the fragments are assumed to be already
in fabula time order, the pairsIntroduceCycle boolean
function disallows merges that would introduce cycles,
ignoring (and removing) self-loops (the no-cycles constraint), and
thereby preserves the overall order of the events. Note that
disallowing cycles also prevents merges of non-neighboring
nodes within the same fragment. The new merged node then
contains a weighted average of the old nodes’ vectors,
and node similarities are updated using the updateNodesSim
function. The algorithm continues to merge nodes until the
similarity measure drops below α. Because the final
resulting graph is not guaranteed to contain only one path from
start to end, using the bestPath function, the path with the
maximum number of merged nodes (based on the number of events) is
taken as the final output of the model.</p>
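<p>The no-cycles constraint can be illustrated with node contraction in networkx (a sketch, not the paper’s code; node names are hypothetical):</p>

```python
# Sketch of one merge step under the no-cycles constraint: contract a
# candidate node pair, and reject the merge if the result is cyclic.
# Self-loops created by contracting neighbors are dropped, as described
# above. Node names are hypothetical.
import networkx as nx

def try_merge(model, a, b):
    """Contract nodes a and b into one; reject if a cycle appears."""
    merged = nx.contracted_nodes(model, a, b, self_loops=False)
    if not nx.is_directed_acyclic_graph(merged):
        return model, False   # merge rejected by the no-cycles constraint
    return merged, True

# Two fragments: start -> a1 -> a2 -> a3 -> end and start -> b1 -> end.
g = nx.DiGraph([("start", "a1"), ("a1", "a2"), ("a2", "a3"), ("a3", "end"),
                ("start", "b1"), ("b1", "end")])
_, ok = try_merge(g, "a1", "b1")   # cross-fragment merge: allowed
_, ok2 = try_merge(g, "a1", "a3")  # non-neighbors in one fragment: rejected
```

<p>Merging a1 with a3 would place a2 both before and after the merged node, which is exactly the cycle the constraint forbids; merging across fragments remains possible.</p>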
      </sec>
      <sec id="sec-3-4">
        <title>Algorithm 1</title>
        <p>Input: F : set of text fragments f;
E : map of f to sets ef of gold event annotations</p>
        <p>/* Create initial model */
1  G ← ∅
   foreach f ∈ F do
       g ← constructLinearBranch(f, E.get(f))
       G.add(g)
2  end
3  model ← linkGraphs(G)
/* Merging process */
4  α ← computeTFIDFAvgSim(F)
5  nodesSim ← computeNodesSim(model)
6  (maxPairSim, pairs) ← findMostSim(nodesSim)
   repeat
7      if ¬ pairsIntroduceCycle(model, pairs) then
8          model ← merge(pairs)
9          nodesSim ← updateNodesSim(model, nodesSim)
10     else
11         nodesSim ← setSimToZero(pairs, nodesSim)
12     end
13     (maxPairSim, pairs) ← findMostSim(nodesSim)
14 until maxPairSim &lt; α
15 bestPath ← findBestPath(model)</p>
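<p>The merging loop of Algorithm 1 can be miniaturized as a runnable sketch (under simplifying assumptions: toy two-dimensional vectors stand in for the BERT/tf-idf event representation, plain averaging stands in for the weighted average, and the threshold α is fixed rather than computed from the fragments):</p>

```python
# Miniature of the merging loop: greedily contract the most similar
# node pair until similarity drops below alpha, skipping pairs whose
# contraction would introduce a cycle. Toy 2-d vectors stand in for
# the BERT/tf-idf representation; alpha is fixed here for illustration.
import itertools
import networkx as nx
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def merge_until_threshold(model, vec, alpha):
    while True:
        # findMostSim: all candidate pairs, best-scoring first
        candidates = sorted(
            ((cosine(vec[a], vec[b]), a, b)
             for a, b in itertools.combinations(vec, 2)),
            reverse=True)
        merged = False
        for sim, a, b in candidates:
            if sim < alpha:                  # stopping criterion
                return model
            trial = nx.contracted_nodes(model, a, b, self_loops=False)
            if nx.is_directed_acyclic_graph(trial):   # no-cycles constraint
                model = trial
                vec[a] = (vec[a] + vec.pop(b)) / 2    # average the vectors
                merged = True
                break
        if not merged:
            return model

g = nx.DiGraph([("start", "x1"), ("x1", "x2"), ("x2", "end"),
                ("start", "y1"), ("y1", "end")])
vecs = {"x1": np.array([1.0, 0.0]), "x2": np.array([0.0, 1.0]),
        "y1": np.array([0.9, 0.1])}
stitched = merge_until_threshold(g, vecs, alpha=0.5)   # y1 merges into x1
```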
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiment</title>
      <p>We evaluate our approach against a gold-standard annotation
of the story of Moses from the Quran. We first describe how we collected
and annotated the data (§4.1). After that we describe the
experimental setup (§4.2) and the evaluation (§4.3). Then we
report the performance of our approach (§4.5). Finally, we present
an error analysis of the performance of our system (§5).</p>
      <sec id="sec-4-1">
        <title>Data</title>
        <p>Moses was an important figure whose story is central to
the major Abrahamic religions, including Judaism,
Christianity, and Islam. Moses’ story is found in fragmentary form
throughout the holy books of these religions, with some parts
repeated, but in different contexts and sometimes from
different perspectives. In the Quran, the holy book of Islam, the
story of Moses appears in eight different fragments across six
different chapters (suras) comprising 283 verses. Thus the
story of Moses serves as an excellent example for the
evaluation of our approach to story fragment stitching. The relevant
suras and verses are listed in Table 1, along with the number
of events present in the fragments of each chapter.</p>
        <p>We annotated verses based on a comparative analysis of
Moses’ story in the Old Testament and the Quran by
[Ghanbari and Ghanbari, 2008]. The Ghanbari study breaks Moses’
story into 43 event categories, shown in Table 3 in chronological
order. For the annotation, three annotators labeled each verse
with its single relevant event. We measured a Fleiss’ kappa of
0.76, which represents excellent agreement. The annotation
was originally done on the Arabic version of the Quran, but
we transferred the annotations to an English translation [Ali,
1973] for the remainder of the study.</p>
        <p>We excluded one fragment (Sura 2 [Al-Baqarah], verses
50–60) from the analysis because its timeline is quite
different from the fabula order. We manually extracted 708 total
event mentions from the remaining seven fragments. Our
annotation procedure followed the standards outlined for events
in the TimeML standard [Saurı et al., 2006]. We omitted 135
Reporting mentions (e.g., say, reply, etc.) because these
usually are just indicators of direct speech, and do not correspond
to plot events. This resulted in 573 event mentions relevant
to the plot, each of which we labeled with the specific event it
refers to in the Moses timeline (Table 3). 301 of the event
mentions were labeled with an event described in the
timeline, while 272 were not relevant.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Experimental Setup</title>
        <p>We used the networkx library [Hagberg et al., 2008] for
graph operations. We extracted event contextualized
embedding using the flair implementation [Akbik et al.,
2018] of the BERT model with the default parameters2.
The tf-idf weights for the lemmas of all tokens excluding
stop words are computed using spaCy [Honnibal and
Montani, 2017] and scikit-learn libraries [Pedregosa et al.,
The event arguments are extracted and resolved
using the AllenNLP semantic role labeling (SRL) and
coreference resolution models.
2The default BERT parameters were bert-base-uncased, layers=-1, pooling operation=first.
For the evaluation, we used the temporal awareness
measure [UzZaman et al., 2013], used in both the SemEval-2015
event ordering task [Minard et al., 2015] and the TAC-KBP-2017
event sequence task [Mitamura et al., 2017]. The temporal
awareness metric calculates precision and recall values based
on the closure and reduction graphs. For a directed graph, the
reduced graph is derived from the original graph by keeping
the fewest possible edges that have the same reachability
relation as the original graph. In this work, the final directed
path of nodes in the final model represents the reduced graph.
For example, consider the final directed path of nodes in the
final model to be:</p>
        <p>start → n1 → n2 → n3 → end
where, for example, events e1, e2 ∈ n1, e3 ∈ n2, and
e4, e5 ∈ n3. The reduced graph (G−) is represented by the
edges ⟨(e1, e3), (e2, e3), (e3, e4), (e3, e5)⟩, and the
transitive closure graph (G+) by the edges ⟨(e1, e3), (e2, e3),
(e1, e4), (e1, e5), (e2, e4), (e2, e5), (e3, e4), (e3, e5)⟩,
where the relation between (ei, ej) is the before relation. The temporal
awareness metric calculates precision and recall as
follows:</p>
        <p>precision = |System− ∩ Reference+| / |System−|   (1)</p>
        <p>recall = |Reference− ∩ System+| / |Reference−|   (2)</p>
        <p>where System and Reference are the proposed approach
and the gold standard, respectively. The final F1 score is the
harmonic mean of the precision and recall values.</p>
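<p>On the worked example above, the metric reduces to set operations over “before” edges (an illustrative sketch, not the official evaluation script):</p>

```python
# Sketch of the temporal awareness computation on the worked example:
# reduced graphs are given directly as edge lists, and the transitive
# closure comes from networkx. Not the official evaluation script.
import networkx as nx

def closure_edges(reduced_edges):
    g = nx.DiGraph(reduced_edges)
    return set(nx.transitive_closure(g).edges())

def temporal_awareness(system_reduced, reference_reduced):
    sys_r, ref_r = set(system_reduced), set(reference_reduced)
    precision = len(sys_r & closure_edges(ref_r)) / len(sys_r)
    recall = len(ref_r & closure_edges(sys_r)) / len(ref_r)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

reference = [("e1", "e3"), ("e2", "e3"), ("e3", "e4"), ("e3", "e5")]
system = [("e1", "e3"), ("e2", "e3"), ("e3", "e4"), ("e3", "e5")]
p, r, f1 = temporal_awareness(system, reference)
```

<p>With identical system and reference graphs all three values are 1.0; missing edges in the system lower recall, while spurious edges lower precision.</p>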
      </sec>
      <sec id="sec-4-3">
        <title>Baseline</title>
        <p>We used the Needleman-Wunsch algorithm [Needleman and
Wunsch, 1970] as a baseline. Needleman-Wunsch is a
well-known global alignment algorithm used in bioinformatics and
the social sciences. Using dynamic programming, this
algorithm searches for an optimal alignment of an arbitrary number
of items (the event lemmas in our case) by using a
scoring function that penalizes dissimilarities and the
insertion of gaps. We used the default implementation3 developed
by [Dekker and Middell, 2011], which follows the family of
progressive alignment algorithms: two sequences are
aligned, the result is then aligned to the next sequence, and
the procedure repeats until all sequences are aligned.</p>
      </sec>
      <sec id="sec-4-4">
        <title>Error Analysis</title>
        <p>Peculiarities of Quranic language cause errors. For example,
the word We is usually present as an event’s argument when
God is speaking of himself. This causes problems for the
coreference resolution system, in that it does not pair We with
mentions such as Lord and God, thus introducing additional
errors into the system. Also, some events have the same event
mention and arguments but happen at different points in the
timeline. Example 1 shows text from different parts of the
story: the first is when God shows Moses one of the signs,
whereas the second is when Moses shows the Pharaoh the
sign. Notably, the two events have the same event triggers
(shown in bold) and the same arguments (underlined).
(20:19–20) “Throw it down, O Moses,” said (the
Voice). So he threw it down, and lo, it became a
running serpent.
(7:106–107) He said: “If you have brought a sign
then display it, if what you say is true.” At this
Moses threw down his staff, and lo, it became a
live serpent.</p>
        <p>Example 1: Two events with identical triggers and arguments that
happen at different points in the timeline.</p>
        <p>Further, the approach is sensitive to the order of merges.
If an incorrect merge is performed early, it can preclude
correct merges later on account of the no-cycles constraint.
Therefore performing only the highest-confidence merges
first is critical, and errors in that process degrade other,
distant parts of the model.</p>
      </sec>
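<p>The Needleman-Wunsch baseline can be sketched for two sequences as follows (an illustrative textbook version, not the CollateX implementation of [Dekker and Middell, 2011] actually used, which progressively aligns many sequences):</p>

```python
# Textbook Needleman-Wunsch sketch: globally align two event-lemma
# sequences with match/mismatch scores and a gap penalty. Illustrative
# two-sequence version; lemmas are hypothetical.
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    # score[i][j] = best score aligning a[:i] with b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch)
            score[i][j] = max(diag, score[i-1][j] + gap, score[i][j-1] + gap)
    # Traceback to recover one optimal alignment (gaps shown as None).
    align, i, j = [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i-1][j-1] + (
                match if a[i-1] == b[j-1] else mismatch):
            align.append((a[i-1], b[j-1])); i, j = i - 1, j - 1
        elif i > 0 and score[i][j] == score[i-1][j] + gap:
            align.append((a[i-1], None)); i -= 1
        else:
            align.append((None, b[j-1])); j -= 1
    return list(reversed(align))

pairs = needleman_wunsch(["throw", "become", "flee"], ["throw", "flee"])
```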
    </sec>
    <sec id="sec-5">
      <title>Contributions</title>
      <p>We introduced the story fragment stitching problem, the task
of merging partial tellings of a story into a unified whole.
We have introduced an approach that models the story’s
fragments in a graph and applies an adapted model merging
approach to merge similar nodes and produce an ordered,
end-to-end list of story events. Our approach achieves a
performance of 0.63 F1 using the temporal awareness metric.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>Mr. Aldawsari was funded in part by a doctoral fellowship
from Prince Sattam Bin Abdulaziz University, as well as NSF
Grant IIS-1749917 to Dr. Finlayson. We thank Seyedeh
Mohadeseh Taheri Mousavi and Zahra Ejei for their assistance
in annotating the verses from the Quran in the original
Arabic. The idea of combining distributional semantics with
automatic story model merging was proposed and developed by
Mr. Asgari in his Master’s thesis at MIT CSAIL, supervised
by Dr. Finlayson, in the 2013–2014 academic year. The creation
of the corpus was also among the contributions of that thesis.</p>
      <p>Table 3: The canonical timeline of events in the story of Moses, after [Ghanbari and Ghanbari, 2008].</p>
      <p>Moses’ Birth</p>
      <p>1. Moses is born and left in the Nile.</p>
      <p>Moses is Rescued from the Nile
2. Moses is rescued from the Nile.
3. Moses’ sister kept an eye on him.
4. Moses brought back to his mother.</p>
      <p>5. Moses after infancy and through maturity.</p>
      <p>Moses kills the Egyptian</p>
      <p>6. Moses beats and kills the Egyptian.</p>
      <p>Moses flees to the Madyan</p>
      <p>7. Moses ran away to the Madyan.</p>
      <p>Moses’ Marriage
8. Moses protected Shu’ayb’s daughters.</p>
      <p>9. Moses traveled with his family.</p>
      <p>Moses is Chosen to be a Prophet
10. Moses saw the fire from the distance.</p>
      <p>11. Moses talked to God through the burning bush.</p>
      <p>God Shows Moses the Miracles
12. God changed the wand to the snake.</p>
      <p>13. God illuminated Moses’ hand.</p>
      <p>God Sends Moses to the Pharaoh</p>
      <p>14. God commanded Moses to meet the Pharaoh.</p>
      <p>Moses Speaks with the Pharaoh
15. Moses and Aaron went to the Pharaoh with miracles.
16. Moses showed Pharaoh the signs.
17. Pharaoh refused their message.
18. Pharaoh accused Moses.
19. Pharaoh requested a competition with Moses.
20. Competition between Moses and the magicians.
21. Magicians believed in Moses’s message.
22. Magicians are threatened by the Pharaoh.</p>
      <p>23. Pharaoh cruelty to the believers.</p>
      <p>God Sends Calamities in Egypt
24. Calamities are sent to the Egyptians and the Pharaoh.
25. God withdrew the punishment.
26. God commanded Moses to travel with his people.</p>
      <p>27. Pharaoh and his army followed Moses and his people.
Parting of the Red Sea
28. Separation of the Sea and drowning of the Pharaoh.
29. God saved Moses and his people.</p>
      <p>Going to Mt. Sinai to Receive the Commandments
30. Moses went to Sinai for 40 nights.
31. God sent down food and brings forth water.
32. Moses met God and appeared on mountain.</p>
      <p>33. Moses delivered the commands and the stone tablets.
The People Betray God
34. Worshipping the Calf in the Absence of Moses.
35. Moses returned to his people.
36. Samiri explained to Moses what he saw.
37. Moses blamed his brother.
38. Moses returned to God.</p>
      <p>39. Moses struck the stone.</p>
      <p>Wandering in the Desert
40. Israelites are commanded to take over the holy region.
41. The disobedient Israelites won’t enter the holy region.
42. God punished them.
43. Sacrifice of a heifer.
</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Akbik et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Alan</given-names>
            <surname>Akbik</surname>
          </string-name>
          , Duncan Blythe, and
          <string-name>
            <given-names>Roland</given-names>
            <surname>Vollgraf</surname>
          </string-name>
          .
          <article-title>Contextual string embeddings for sequence labeling</article-title>
          .
          <source>In COLING 2018, 27th International Conference on Computational Linguistics</source>
          , pages
          <fpage>1638</fpage>
          -
          <lpage>1649</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <source>[Ali</source>
          , 1973]
          <article-title>Abdullah Yusuf Ali. The Holy Qur'an: text, translation and commentary</article-title>
          . Islamic University of Al ima Mohammad
          <source>ibn SAUD</source>
          ,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>[Bagga and Baldwin</source>
          , 1999]
          <string-name>
            <given-names>Amit</given-names>
            <surname>Bagga</surname>
          </string-name>
          and
          <string-name>
            <given-names>Breck</given-names>
            <surname>Baldwin</surname>
          </string-name>
          .
          <article-title>Cross-document event coreference: Annotations, experiments, and observations</article-title>
          .
          <source>In Proceedings of the Workshop on Coreference and its Applications</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . Association for Computational Linguistics,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Barhom et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Shany</given-names>
            <surname>Barhom</surname>
          </string-name>
          , Vered Shwartz, Alon Eirew, Michael Bugert, Nils Reimers, and
          <string-name>
            <given-names>Ido</given-names>
            <surname>Dagan</surname>
          </string-name>
          .
          <article-title>Revisiting joint modeling of cross-document entity and event coreference resolution</article-title>
          .
          <source>In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</source>
          , pages
          <fpage>4179</fpage>
          -
          <lpage>4189</lpage>
          , Florence, Italy,
          <year>July 2019</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <source>[Bordwell</source>
          , 2007]
          <string-name>
            <given-names>David</given-names>
            <surname>Bordwell</surname>
          </string-name>
          . Poetics of Cinema. New York: Routledge,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Charniak</source>
          , 1972]
          <string-name>
            <given-names>Eugene</given-names>
            <surname>Charniak</surname>
          </string-name>
          .
          <article-title>Toward a model of children's story comprehension</article-title>
          .
          <source>PhD thesis</source>
          , Massachusetts Institute of Technology,
          <year>1972</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [Chaturvedi et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Snigdha</given-names>
            <surname>Chaturvedi</surname>
          </string-name>
          , Shashank Srivastava, and
          <string-name>
            <given-names>Dan</given-names>
            <surname>Roth</surname>
          </string-name>
          .
          <article-title>Where have i heard this story before? identifying narrative similarity in movie remakes</article-title>
          .
          <source>In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , Volume
          <volume>2</volume>
          (
          <issue>Short Papers)</issue>
          , pages
          <fpage>673</fpage>
          -
          <lpage>678</lpage>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Dekker and Middell</source>
          , 2011]
          <article-title>Ronald H Dekker and Gregor Middell</article-title>
          .
          <article-title>Computer-supported collation with collatex: managing textual variance in an environment with varying requirements</article-title>
          .
          <source>Supporting Digital Humanities</source>
          , pages
          <fpage>17</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Devlin et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Jacob</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ming-Wei</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kristina</given-names>
            <surname>Toutanova</surname>
          </string-name>
          . Bert:
          <article-title>Pre-training of deep bidirectional transformers for language understanding</article-title>
          .
          <source>arXiv preprint arXiv:1810.04805</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Dyer, 1983]
          <string-name>
            <given-names>Michael George</given-names>
            <surname>Dyer</surname>
          </string-name>
          .
          <article-title>In-depth understanding: A computer model of integrated processing for narrative comprehension</article-title>
          . MIT press,
          <year>1983</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Finlayson, 2012]
          <string-name>
            <given-names>Mark Alan</given-names>
            <surname>Finlayson</surname>
          </string-name>
          .
          <article-title>Learning narrative structure from annotated folktales</article-title>
          .
          <source>PhD thesis</source>
          , Massachusetts Institute of Technology,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Finlayson, 2016]
          <string-name>
            <given-names>Mark Alan</given-names>
            <surname>Finlayson</surname>
          </string-name>
          .
          <article-title>Inferring Propp's functions from semantically annotated text</article-title>
          .
          <source>The Journal of American Folklore</source>
          ,
          <volume>129</volume>
          (
          <issue>511</issue>
          ):
          <fpage>55</fpage>
          -
          <lpage>77</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Forster, 1927]
          <string-name>
            <given-names>Edward M.</given-names>
            <surname>Forster</surname>
          </string-name>
          .
          <article-title>Aspects of the Novel</article-title>
          . E. Arnold &amp; Co., London,
          <year>1927</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [Frank et al.,
          <year>2003</year>
          ]
          <string-name>
            <given-names>Stefan L.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mathieu</given-names>
            <surname>Koppen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Leo G. M.</given-names>
            <surname>Noordman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Wietske</given-names>
            <surname>Vonk</surname>
          </string-name>
          .
          <article-title>Modeling knowledge-based inferences in story comprehension</article-title>
          .
          <source>Cognitive Science</source>
          ,
          <volume>27</volume>
          (
          <issue>6</issue>
          ):
          <fpage>875</fpage>
          -
          <lpage>910</lpage>
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Gardner et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Matt</given-names>
            <surname>Gardner</surname>
          </string-name>
          , Joel Grus, Mark Neumann, Oyvind Tafjord, Pradeep Dasigi, Nelson Liu, Matthew Peters,
          <string-name>
            <given-names>Michael</given-names>
            <surname>Schmitz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Luke</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          .
          <article-title>AllenNLP: A deep semantic natural language processing platform</article-title>
          .
          <source>arXiv preprint arXiv:1803.07640</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Ghanbari and Ghanbari, 2008]
          <string-name>
            <given-names>Bakhshali</given-names>
            <surname>Ghanbari</surname>
          </string-name>
          and
          <string-name>
            <given-names>Zohreh</given-names>
            <surname>Ghanbari</surname>
          </string-name>
          .
          <article-title>Comparative study of Moses' position in Quran and Torah</article-title>
          .
          <source>Journal of Theology</source>
          , (
          <volume>5</volume>
          ):
          <fpage>73</fpage>
          -
          <lpage>90</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Goyal et al.,
          <year>2013</year>
          ]
          <string-name>
            <given-names>Kartik</given-names>
            <surname>Goyal</surname>
          </string-name>
          , Sujay Kumar Jauhar,
          <string-name>
            <given-names>Huiying</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mrinmaya</given-names>
            <surname>Sachan</surname>
          </string-name>
          , Shashank Srivastava, and
          <string-name>
            <given-names>Eduard</given-names>
            <surname>Hovy</surname>
          </string-name>
          .
          <article-title>A structured distributional semantic model for event co-reference</article-title>
          .
          <source>In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)</source>
          , pages
          <fpage>467</fpage>
          -
          <lpage>473</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Hagberg et al.,
          <year>2008</year>
          ]
          <string-name>
            <given-names>Aric</given-names>
            <surname>Hagberg</surname>
          </string-name>
          , Pieter Swart, and Daniel S Chult.
          <article-title>Exploring network structure, dynamics, and function using networkx</article-title>
          .
          <source>Technical report, Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [He et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Luheng</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mike</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Luke</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          .
          <article-title>Deep semantic role labeling: What works and what's next</article-title>
          .
          <source>In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          , pages
          <fpage>473</fpage>
          -
          <lpage>483</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Honnibal and Montani, 2017]
          <string-name>
            <given-names>Matthew</given-names>
            <surname>Honnibal</surname>
          </string-name>
          and
          <string-name>
            <given-names>Ines</given-names>
            <surname>Montani</surname>
          </string-name>
          .
          <article-title>spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing</article-title>
          ,
          <year>2017</year>
          . https://github.com/explosion/spaCy; last accessed on Nov 28, 2019.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [Iyyer et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Mohit</given-names>
            <surname>Iyyer</surname>
          </string-name>
          , Anupam Guha, Snigdha Chaturvedi, Jordan Boyd-Graber, and Hal Daumé III.
          <article-title>Feuding families and former friends: Unsupervised learning for dynamic fictional relationships</article-title>
          .
          <source>In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          , pages
          <fpage>1534</fpage>
          -
          <lpage>1544</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Kenyon-Dean et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Kian</given-names>
            <surname>Kenyon-Dean</surname>
          </string-name>
          , Jackie Chi Kit Cheung, and Doina Precup.
          <article-title>Resolving event coreference with supervised representation learning and clustering-oriented regularization</article-title>
          .
          <source>In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>10</lpage>
          , New Orleans, Louisiana,
          <year>June 2018</year>
          .
          Association for Computational Linguistics
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [Lee et al.,
          <year>2012</year>
          ]
          <string-name>
            <given-names>Heeyoung</given-names>
            <surname>Lee</surname>
          </string-name>
          , Marta Recasens,
          <string-name>
            <given-names>Angel</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mihai</given-names>
            <surname>Surdeanu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Dan</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          .
          <article-title>Joint entity and event coreference resolution across documents</article-title>
          .
          <source>In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning</source>
          , pages
          <fpage>489</fpage>
          -
          <lpage>500</lpage>
          . Association for Computational Linguistics,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [Lee et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , Luheng He, Mike Lewis, and
          <string-name>
            <given-names>Luke</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          .
          <article-title>End-to-end neural coreference resolution</article-title>
          .
          <source>In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>188</fpage>
          -
          <lpage>197</lpage>
          , Copenhagen, Denmark,
          <year>September 2017</year>
          .
          Association for Computational Linguistics
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [Minard et al.,
          <year>2015</year>
          ]
          <string-name>
            <given-names>Anne-Lyse</given-names>
            <surname>Minard</surname>
          </string-name>
          , Manuela Speranza, Eneko Agirre, Itziar Aldabe, Marieke van Erp, Bernardo Magnini, German Rigau, and Rubén Urizar.
          <article-title>SemEval-2015 task 4: TimeLine: Cross-document event ordering</article-title>
          .
          <source>In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015)</source>
          , pages
          <fpage>778</fpage>
          -
          <lpage>786</lpage>
          , Denver, Colorado,
          <year>June 2015</year>
          .
          Association for Computational Linguistics
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [Mitamura et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Teruko</given-names>
            <surname>Mitamura</surname>
          </string-name>
          , Zhengzhong Liu, and
          <string-name>
            <given-names>Eduard H.</given-names>
            <surname>Hovy</surname>
          </string-name>
          .
          <article-title>Events detection, coreference and sequencing: What's next? Overview of the TAC KBP 2017 event track</article-title>
          .
          <source>In TAC</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [Mueller, 2007]
          <string-name>
            <given-names>Erik T.</given-names>
            <surname>Mueller</surname>
          </string-name>
          .
          <article-title>Understanding goal-based stories through model finding and planning</article-title>
          .
          <source>In Intelligent Narrative Technologies: Papers from the AAAI Fall Symposium</source>
          , pages
          <fpage>95</fpage>
          -
          <lpage>101</lpage>
          . AAAI Press Menlo Park, CA,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [Needleman and Wunsch, 1970]
          <string-name>
            <given-names>Saul B.</given-names>
            <surname>Needleman</surname>
          </string-name>
          and
          <string-name>
            <given-names>Christian D.</given-names>
            <surname>Wunsch</surname>
          </string-name>
          .
          <article-title>A general method applicable to the search for similarities in the amino acid sequence of two proteins</article-title>
          .
          <source>Journal of molecular biology</source>
          ,
          <volume>48</volume>
          (
          <issue>3</issue>
          ):
          <fpage>443</fpage>
          -
          <lpage>453</lpage>
          ,
          <year>1970</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [Nikolentzos et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Giannis</given-names>
            <surname>Nikolentzos</surname>
          </string-name>
          , Polykarpos Meladianos, François Rousseau, Yannis Stavrakas, and
          <string-name>
            <given-names>Michalis</given-names>
            <surname>Vazirgiannis</surname>
          </string-name>
          .
          <article-title>Shortest-path graph kernels for document similarity</article-title>
          .
          <source>In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>1890</fpage>
          -
          <lpage>1900</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [Pedregosa et al.,
          <year>2011</year>
          ]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderplas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cournapeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perrot</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Duchesnay</surname>
          </string-name>
          .
          <article-title>Scikit-learn: Machine learning in Python</article-title>
          .
          <source>Journal of Machine Learning Research</source>
          ,
          <volume>12</volume>
          :
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [Reiter, 2014]
          <string-name>
            <given-names>Nils</given-names>
            <surname>Reiter</surname>
          </string-name>
          .
          <article-title>Discovering Structural Similarities in Narrative Texts using Event Alignment Algorithms</article-title>
          .
          <source>PhD thesis</source>
          , Heidelberg University,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [Riloff, 1999]
          <string-name>
            <given-names>Ellen</given-names>
            <surname>Riloff</surname>
          </string-name>
          .
          <article-title>Information extraction as a stepping stone toward story understanding</article-title>
          .
          <source>Understanding language understanding: Computational models of reading</source>
          , pages
          <fpage>435</fpage>
          -
          <lpage>460</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [Roth and Frank, 2012]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Roth</surname>
          </string-name>
          and
          <string-name>
            <given-names>Anette</given-names>
            <surname>Frank</surname>
          </string-name>
          .
          <article-title>Aligning predicates across monolingual comparable texts using graph-based clustering</article-title>
          .
          <source>In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning</source>
          , pages
          <fpage>171</fpage>
          -
          <lpage>182</lpage>
          . Association for Computational Linguistics,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [Salton and McGill, 1986]
          <string-name>
            <given-names>Gerard</given-names>
            <surname>Salton</surname>
          </string-name>
          and
          <string-name>
            <given-names>Michael J.</given-names>
            <surname>McGill</surname>
          </string-name>
          .
          <article-title>Introduction to modern information retrieval</article-title>
          .
          <source>McGraw-Hill</source>
          , Inc., New York City,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [Saquete and Navarro-Colorado,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Estela</given-names>
            <surname>Saquete</surname>
          </string-name>
          and
          <string-name>
            <given-names>Borja</given-names>
            <surname>Navarro-Colorado</surname>
          </string-name>
          .
          <article-title>Cross-document event ordering through temporal relation inference and distributional semantic models</article-title>
          .
          <source>Procesamiento del Lenguaje Natural</source>
          ,
          <volume>58</volume>
          :
          <fpage>61</fpage>
          -
          <lpage>68</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [Saurí et al.,
          <year>2006</year>
          ]
          <string-name>
            <given-names>Roser</given-names>
            <surname>Saurí</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jessica</given-names>
            <surname>Littman</surname>
          </string-name>
          , Bob Knippen, Robert Gaizauskas, Andrea Setzer, and
          <string-name>
            <given-names>James</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          .
          <source>TimeML annotation guidelines version 1.2.1</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [Schank and Abelson, 1975]
          <string-name>
            <given-names>Roger C.</given-names>
            <surname>Schank</surname>
          </string-name>
          and
          <string-name>
            <given-names>Robert P.</given-names>
            <surname>Abelson</surname>
          </string-name>
          .
          <article-title>Scripts, plans, and knowledge</article-title>
          .
          <source>In IJCAI</source>
          , volume
          <volume>75</volume>
          , pages
          <fpage>151</fpage>
          -
          <lpage>157</lpage>
          ,
          <year>1975</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [Schank and Abelson, 1977]
          <string-name>
            <given-names>Roger C.</given-names>
            <surname>Schank</surname>
          </string-name>
          and
          <string-name>
            <given-names>Robert P.</given-names>
            <surname>Abelson</surname>
          </string-name>
          .
          <article-title>Scripts, plans, goals, and understanding: An inquiry into human knowledge structures</article-title>
          . Hillsdale, NJ: Lawrence Erlbaum,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [Stolcke and Omohundro, 1993]
          <string-name>
            <given-names>Andreas</given-names>
            <surname>Stolcke</surname>
          </string-name>
          and
          <string-name>
            <given-names>Stephen</given-names>
            <surname>Omohundro</surname>
          </string-name>
          .
          <article-title>Hidden Markov model induction by Bayesian model merging</article-title>
          .
          <source>In Advances in neural information processing systems</source>
          , pages
          <fpage>11</fpage>
          -
          <lpage>18</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [UzZaman et al.,
          <year>2013</year>
          ] Naushad UzZaman, Hector Llorens, Leon Derczynski, James Allen,
          <string-name>
            <given-names>Marc</given-names>
            <surname>Verhagen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>James</given-names>
            <surname>Pustejovsky</surname>
          </string-name>
          .
          <article-title>Semeval-2013 task 1: Tempeval-3: Evaluating time expressions, events, and temporal relations</article-title>
          .
          <source>In Second Joint Conference on Lexical and Computational Semantics (*SEM)</source>
          , Volume
          <volume>2</volume>
          :
          <source>Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval</source>
          <year>2013</year>
          ), pages
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [Wilensky, 1978]
          <string-name>
            <given-names>Robert</given-names>
            <surname>Wilensky</surname>
          </string-name>
          .
          <article-title>Understanding goal-based stories</article-title>
          .
          <source>Technical report, Yale University, Department of Computer Science, New Haven, CT</source>
          ,
          <year>1978</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [Winston, 2014]
          <string-name>
            <given-names>Patrick Henry</given-names>
            <surname>Winston</surname>
          </string-name>
          .
          <article-title>The Genesis story understanding and story telling system: A 21st century step toward artificial intelligence</article-title>
          .
          <source>Technical report</source>
          , Center for Brains,
          <source>Minds and Machines (CBMM)</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>