<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>And then I saw it: Testing Hypotheses on Turning Points in a Corpus of UFO Sighting Reports</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jan Langenhorst</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert C. Schuppe</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yannick Frommherz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>TUD Dresden University of Technology</institution>
        </aff>
      </contrib-group>
      <fpage>950</fpage>
      <lpage>960</lpage>
      <abstract>
        <p>As part of developing a Computational Narrative Understanding, modeling events within stories has recently received significant attention within the digital humanities community. Most of the current research aims at good performance when predicting events. By contrast, we explore a focused approach based on qualitative observations. We attempt to trace the role of structural elements - more specifically, temporal function words - that may be characteristic of a narrative's turning point. We draw on a corpus of UFO sighting reports in which authors employ a prototypical narrative structure that relies on a turning point at which the extraordinary intrudes the ordinary. Using binary logistic regression, we can identify structural properties which are indicative of turning points in our data, showcasing that a focus on detail can fruitfully complement NLP models in gaining a quantitatively informed understanding of narratives.</p>
      </abstract>
      <kwd-group>
        <kwd>turning points</kwd>
        <kwd>events</kwd>
        <kwd>computational literary studies</kwd>
        <kwd>corpus linguistics</kwd>
        <kwd>logistic regression</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>(1) I was in my room of a paying guest flat, 5th floor, and was about to go for my bath and then I suddenly noticed from my window an object glowing/flashing over a jungle area more than 1 km away from my apartment. (Report 76519)</p>
      <p>(2) As we drove north, 2 out of four of us saw a big bright blue ball of fire that looked as if it got brighter the closer it got to the ground. (Report 65963)</p>
      <p>(3) I was getting in my car, when all four of us – my grandson, my grandson’s tutor, my granddaughter, and myself – noticed a low, slow-moving, sideways teardrop-shaped object moving from north to south through the San Gabriel Mountains. (Report 4061)</p>
      <p>When people tell the story of something extraordinary which has happened to them, they use a particular kind of language. This is especially true for recounting moments when the extraordinary intrudes the ordinary. The excerpts above are sentences stemming from texts about alleged UFO sightings which were collected online. Within these narratives, the first appearance of something out of the ordinary marks an important turning point. When looking at these sentences, a pattern emerges: While they might differ from other parts of the text content-wise, they also tend to stand out structurally. More precisely, they typically include an adverbial which temporally grounds the event in relation to other parts of the narrative.</p>
      <p>‡ Author 1 and 2 contributed 40 % each to this work, author 3 contributed 20 %.</p>
      <p>
        Concepts related to what we just introduced as turning points are, among others, Labov’s most reportable event [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], the disruptive event in the narrative theory of Todorov [19] or Field’s plot point [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Hühn [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] distinguishes between type-I-events and type-II-events: Whereas every change of state in a story marks a type-I-event, a type-II-event is characterized by further differentiating traits, such as its unpredictability and its deviation from the norm. We see a turning point as a type-II-event which has a particular function and prototypical position in narratives. Hühn argues that type-II-events can only be identified hermeneutically [5]. However, he also notes, following Schmid [17
        <xref ref-type="bibr" rid="ref16">, 16</xref>
        ], that there are criteria which hint at the presence of a type-II-event in a sentence, such as, e.g., the non-iterativity of an action. In our context, this should entail a higher frequency of temporal function words such as the highlighted ones in (1) – (3) in sentences recounting a turning point compared to other sentences, as these words typically hint at a singular event. In this short paper, we aim at testing whether there is a systematic association between sentences containing a turning point and the use of certain context-independent markers of temporality.
      </p>
      <p>Following observations such as (1) – (3), we opted to focus on then_ADV, as_SCONJ, and when_ADV as function words which frequently introduce temporal adverbials and seem to characterize turning point sentences. We test our hypothesis by specifying a model that predicts whether a sentence is a turning point or not, assessing whether the selected words are associated with a higher probability. By using a limited number of linguistic factors as predictors in our model, we aim to contribute to a better understanding of turning points, as simpler models help to keep the impact of individual variables more transparent.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        The computational modeling of narratives – both their constituent elements (e.g., characters or events) and their overall structure – is a vibrant field of research [1, 2
        <xref ref-type="bibr" rid="ref14">, 14</xref>
        ] and can be seen as part of a project that strives to develop what Piper calls Computational Narrative Understanding [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Literary event detection is a key element in this enterprise [18
        <xref ref-type="bibr" rid="ref10 ref11">, 10, 11</xref>
        ]. NLP research on how to best predict events in a narrative has yielded models that have been tested on various datasets, with recent approaches reporting good performance [20, 
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. Not all of these studies try and measure the same theoretical construct, since event and related concepts are not defined consistently [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Also, events can be measured on different levels, e.g., sentence vs. word. Nevertheless, all approaches have in common that they aim to extract those parts of a narrative that are distinguished from other parts in the way they contribute to the development of the story.
      </p>
      <p>
        While these studies broadly investigate the same phenomenon, they differ from our approach in that they are predominantly concerned with predicting events, while we aim at identifying certain characteristics of those events we consider turning points. While, e.g., approaches like the one by Ouyang/McKeown [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] make extensive use of prior findings from linguistics and literary studies when selecting features for their models, they are still aimed at good predictive performance. This is typically achieved by including a myriad of different factors which, on the downside, hampers disentangling the variables that contribute to what constitutes a turning point. In contrast, to be able to better interpret results, we aim to keep our model as simple as possible when estimating turning point probabilities.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Data</title>
      <p>The data stem from a larger corpus of approximately 110,000 reports of UFO sightings submitted via the online platform UFO Stalker (https://ufostalker.com) and scraped by one of the authors.1 Texts are mostly written in English, presumably by people from the U.S., even though these metadata cannot be verified. The reports’ narrative shape is typically as follows: In a short exposition or staging phase, authors describe the – usually mundane – situation they say they were in and quite often who they were with at the time (cf. (4)). Then something happens: Most often, authors report that they suddenly see a strange light moving in the sky, and thus the ‘actual’ reporting of the sighting unfolds. This reporting is mostly an account of the author’s cognitive processes. Reports typically end without them reaching a definitive conclusion with regard to what it was that they saw. This puzzlement can be seen as the prototypical resolution of these narratives.</p>
      <p>(4) I was inside my house with my wife, brother and his daughter. i thought i’d go out into
my backyard. so i opened my door which faces east and walked out. i stopped about 6
feet from the door and felt like i needed to look up in the south direction. so i did and
then i saw it right there in front of a low cloud. it like came out and went down
about 25 feet then left about 25 feet then back then up and back again, then stopped and
sat there. i was yelling at my wife and brother to come out here fast now! my wife was
the first one yet could not see it cause she did not have her glasses on! then my brother
came out and before he could actually look at it, it went into another cloud next to it.
the funny thing is these clouds were kind of transparent so i do not know where it went
it just went into it and vanished. at the time i saw it, it was a circular object just like a
ufo to say. it was of a dark color yet you could see the sun hitting it. so it was there! but
the moves were just to quick. it went from a straight down to a left turn in just a split
second and did what i said above in the same time. but i did get to see it for the time
mentioned above. it makes me wonder if it intended for me to see it. as when i walked
out the door i had the urge to look in that direction. but who knows. this was what i
saw and was amazed at what it did. (Report 60500)</p>
      <p>
        We sampled 496 texts from the larger corpus of reports. These texts were preprocessed using Stanza (Version 1.8.1) [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] for tokenization, sentence segmentation and part-of-speech tagging. Two of the authors annotated which of each report’s sentences marks the turning point, which we operationalized as the one sentence where it becomes clear that the narrative is about a UFO sighting, i.e., we only annotated one turning point per text. Inter-annotator agreement was good (ICC(2,1) = 0.808, 95%-CI [0.766, 0.843]; ICC(3,1) = 0.81, 95%-CI [0.769, 0.845]). Disagreement was resolved via discussion. Reports that consisted of fewer than three sentences were discarded, in line with the data preparation done by Ouyang/McKeown [9] following Prince’s definition of a minimal narrative [12]. Also excluded were reports that were written in languages other than English, described something other than a UFO sighting, were a mere description of photos or videos that were provided along with the report, did not feature any narrativity or did not include a discernible turning point. Finally, 352 reports consisting of 5,346 sentences were included in the analysis. Texts contained up to 81 sentences (Median = 12, IQR = 10).
      </p>
      <p>1 A similar dataset (that encompasses a different timespan) is available at Kaggle.</p>
      <p>[Figure 1: Frequency of when_ADV occurrences by relative position, shown separately for non-turning points and turning points.]</p>
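      <p>The tagged corpus these steps produce can be illustrated with a small sketch. The toy report below stands in for Stanza’s sentence-segmented, POS-tagged output as simplified (token, UPOS) pairs, and the helper sentence_features is our own illustration (not the study’s released code); the marker keys mirror the when_ADV/then_ADV/as_SCONJ notation used in this paper.</p>

```python
import math

# Toy stand-in for one POS-tagged report: a list of sentences,
# each a list of (token, UPOS) pairs, as Stanza's tokenize+pos
# processors would produce from the raw report text.
report = [
    [("I", "PRON"), ("was", "AUX"), ("outside", "ADV"), (".", "PUNCT")],
    [("Then", "ADV"), ("I", "PRON"), ("saw", "VERB"), ("a", "DET"),
     ("light", "NOUN"), (".", "PUNCT")],
    [("It", "PRON"), ("vanished", "VERB"), (".", "PUNCT")],
]

def sentence_features(report):
    """Per-sentence predictors: temporal-marker dummies, relative
    position within the text (as a percentage), and logged length."""
    rows = []
    n = len(report)
    for i, sent in enumerate(report):
        tokens = [(tok.lower(), pos) for tok, pos in sent]
        rows.append({
            "then_ADV": int(("then", "ADV") in tokens),
            "when_ADV": int(("when", "ADV") in tokens),
            "as_SCONJ": int(("as", "SCONJ") in tokens),
            "rel_position": 100 * i / n,   # sentence index / text length
            "log_length": math.log(len(sent)),
        })
    return rows

rows = sentence_features(report)
```

      <p>Each row then serves as one observation for the regression models described in the next section; reports shorter than three sentences would simply be skipped before this step.</p>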
    </sec>
    <sec id="sec-4">
      <title>4. Modeling</title>
      <p>To test the hypothesis laid out above, we fit binary logistic regression models with the probability of a sentence being the turning point as the outcome variable. As predictor variables we used dummy variables coding for whether the words when_ADV, then_ADV or as_SCONJ occurred in a given sentence. Further, we opted to include two more structural variables. First, we added the sentence’s relative position within the text (the sentence’s index divided by the text’s length) as a percentage. Since we assumed a certain narrative structure, we knew that position within the narrative would play a role: We observed beforehand that the turning point is usually located toward the beginning of the story. Second, we included logged sentence length as a predictor. Sap et al. found that what they call major events are usually expressed in longer sentences [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and a similar pattern has been observed by Ouyang/McKeown [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Importantly, the context of sentences is not included in any way – no information on what was written in the preceding or following sentence was used in the model, i.e., sentences were assumed independent. Thus, we do not measure any kind of change from one sentence to the next (like, e.g., Ouyang/McKeown do [9]), but rather compare ‘global’ differences between turning points and non-turning points. Note that even though sentences are naturally clustered at the text level, a multilevel model is not warranted in our case since we decided to only select one sentence per text as the turning point. Thus, the turning point probability does not vary between texts.</p>
      <p>Looking at descriptive evidence, the three selected words do exhibit different occurrence distributions depending on whether sentences are turning points or not (Fig. 1). The percentage of sentences that include when_ADV is four times higher for turning points than for non-turning points. The same tendency, though less pronounced, can be observed for the word as_SCONJ, whereas then_ADV occurs in a similarly sized share in both subsets of the corpus. Turning point sentences have a median relative position within the text of 16.7 (IQR = 19.4), so they are usually present in the earlier parts of the narrative (Fig. 2). Turning point sentences are also longer than non-turning point sentences in our data (Median_TP = 25 vs. Median_non-TP = 17).</p>
      <p>[Figures 2 and 3: Turning point probability by relative position, for sentence lengths 8, 16, and 32.]</p>
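      <p>A specification of this kind can be fit with standard Newton-Raphson (IRLS) updates. The sketch below is a minimal illustration on synthetic data: the coefficients in true_beta and the data-generating setup are invented for the example, not the study’s estimates (those are available via the OSF repository under Data Availability).</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the sentence-level data; true_beta is
# invented for this sketch, not the study's fitted coefficients.
n = 2000
X = np.column_stack([
    np.ones(n),                       # intercept
    rng.uniform(0, 100, n),           # relative position within text (%)
    np.log(rng.integers(3, 60, n)),   # logged sentence length
    rng.integers(0, 2, n),            # when_ADV occurrence dummy
])
true_beta = np.array([-3.0, -0.03, 0.5, 1.0])
p_true = 1 / (1 + np.exp(-X @ true_beta))
y = np.less(rng.random(n), p_true).astype(float)

def fit_logit(X, y, steps=25):
    """Binary logistic regression via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-X @ beta))   # current predicted probabilities
        w = p * (1 - p)                   # IRLS observation weights
        grad = X.T @ (y - p)              # score vector
        hess = X.T @ (X * w[:, None])     # observed information matrix
        beta = beta + np.linalg.solve(hess, grad)
    return beta

beta_hat = fit_logit(X, y)
```

      <p>Off-the-shelf routines (e.g., statsmodels’ Logit in Python or glm with a binomial family in R) fit the same model; the hand-rolled loop is only meant to make the specification explicit.</p>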
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <p>We fit three separate models. In a first step, regressing the probability of a sentence being a turning point on a sentence’s relative position within the text, we estimate a negative association. This model described a small amount of variance (Tjur’s R² = 0.113). In a second step, we added the logged sentence length (Tjur’s R² = 0.183). Fig. 4 plots the predicted probabilities of a sentence being a turning point against its relative position within the text (a percentage value close to 0 for the very beginning and 100 for the end of a text) for different sentence lengths. As can be seen, the model assigns very low probability to sentences after half of the narrative has passed, whereas sentences that lie in the first quarter of the text are assigned probabilities between around 0.15 and 0.04 for shorter sentences and between 0.54 and 0.20 for very long sentences.</p>
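      <p>Tjur’s R² (the coefficient of discrimination) reported throughout this section has a simple closed form: the mean predicted probability among actual turning points minus the mean among non-turning points. A minimal sketch with toy inputs (not the study’s data):</p>

```python
import numpy as np

def tjur_r2(y, p_hat):
    """Tjur's coefficient of discrimination: difference between the mean
    predicted probability in the positive and the negative class."""
    y = np.asarray(y, dtype=bool)
    p_hat = np.asarray(p_hat, dtype=float)
    return float(p_hat[y].mean() - p_hat[~y].mean())

# Toy example: two turning points, two non-turning points.
score = tjur_r2([1, 1, 0, 0], [0.8, 0.6, 0.3, 0.1])  # 0.7 - 0.2 = 0.5
```

      <p>A value of 0 means the model assigns the same average probability to both classes; values closer to 1 indicate better discrimination between turning points and non-turning points.</p>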
      <p>Adding our main variables of interest, namely the occurrence of temporal markers, resulted in improved model fit (Tjur’s R² = 0.213; for the full model comparison, see Data Availability). The estimated coefficients for as_SCONJ and then_ADV were not statistically significant, which is in accordance with the descriptive patterns presented above. The word when_ADV, however, was associated with an increased probability of a sentence being a turning point (Fig. 5). Again, different sentence lengths predict different probabilities for a sentence being the turning point, with longer sentences being associated with higher probabilities (Fig. 6).</p>
      <p>It is important to note that adding content words which we know a priori to be discriminative of turning points for our specific genre of text would also result in better model fit – e.g., adding a binary variable that captures whether the word sky appears in a given sentence (which is presumably typical of turning points in our texts since that is the locus of the extraordinary event) improves model fit (Tjur’s R² = 0.242). While it is clear that including more or even all word occurrences in the model would result in better model fit or predictive power, respectively, it was not our aim to design a model that discriminates turning point sentences and non-turning point sentences perfectly – i.e., solve a classification problem – but rather to test the theoretical question laid out above in a very specific exemplary genre of texts.</p>
      <p>[Figure: Predicted probabilities of turning point by relative position, for sentence lengths 8, 16, and 32.]</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>Our investigation of turning points in UFO sighting narratives was driven by a hypothesis on the role of certain content-independent characteristics of turning points and had a relatively narrow scope: Not only does our corpus consist of a very particular and, possibly, idiosyncratic genre of texts. Also, our study only used a small hand-annotated sample and focused on a few variables that were situated at different levels: Position within texts and sentence length already accounted for some variation in the probability of sentences being turning points. Regarding the role of temporal function words, we found that while when_ADV is predictive of turning points, then_ADV and as_SCONJ are not. Thus, we were not able to identify a whole group or class of words that are used to mark turning points, but we did corroborate that the use of when_ADV is predictive of a turning point. This finding supports our general hypothesis that turning points are characterized not only by their content, but also by structural properties such as temporal adverbials. Whether this also holds true for other types of narratives remains subject to further investigation.</p>
      <p>Using state-of-the-art NLP methodology, there may be huge advances in the prediction of event types in narrative texts over the next few years. Another question, however, is how well these NLP models will serve us to understand what makes a turning point a turning point (or an event an event, for that matter). On a theoretical level, one can think about approaches like ours as modeling the reader, but also as modeling the author: What hints enable readers to place the content of a given sentence within the greater narrative? What hints does the author deem viable to trigger said interpretation? Do these cues vary between different genres that feature different narrative structures or schemas? These and many more questions should be addressed by future research from the vantage point of different disciplines – such as literary studies, linguistics, and psychology. This will help us gain a quantitatively informed understanding of (literary) narratives. We hope to have exemplified with this study how focusing on individual linguistic characteristics can complement prediction-focused approaches, aiding the development of a more thorough, corpus-based understanding of narrativity.</p>
    </sec>
    <sec id="sec-7">
      <title>Data Availability</title>
      <p>The data and code for our analysis are available at: https://osf.io/vd9pu/.</p>
    </sec>
    <sec id="sec-8">
      <title>A. Model comparison</title>
      <p>Dependent variable: TurningPoint</p>
      <p>Constant: (1) −0.06∗∗∗ (−0.07, −0.05); (2) −0.06∗∗∗ (−0.07, −0.06); (3) −0.06∗∗∗ (−0.07, −0.06)</p>
      <p>Tjur’s R², Observations, and Akaike Inf. Crit. per model: see the full model comparison under Data Availability.</p>
      <p>∗p&lt;0.1; ∗∗p&lt;0.05; ∗∗∗p&lt;0.01</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Berhe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guinaudeau</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Barras</surname>
          </string-name>
          . “
          <article-title>Survey on Narrative Structure: from Linguistic Theories to Automatic Extraction Approaches”</article-title>
          .
          <source>In: Traitement Automatique des Langues</source>
          <volume>63</volume>
          . Ed. by
          <string-name>
            <given-names>C.</given-names>
            <surname>Fabre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Morin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rosset</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Sébillot</surname>
          </string-name>
          .
          <source>France: ATALA (Association pour le Traitement Automatique des Langues)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>63</fpage>
          -
          <lpage>87</lpage>
          . url: https://aclanthology.org/2022.tal-1.3.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Boyd</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. G.</given-names>
            <surname>Blackburn</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Pennebaker</surname>
          </string-name>
          . “
          <article-title>The Narrative Arc: Revealing Core Narrative Structures through Text Analysis”</article-title>
          .
          <source>In: Science Advances 6.32</source>
          (
          <year>2020</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          . doi: 10.1126/sciadv.aba2196.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Field</surname>
          </string-name>
          .
          <source>Screenplay: The Foundations of Screenwriting. Revised</source>
          . New York: Random House,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gius</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Vauth</surname>
          </string-name>
          . “
          <article-title>Towards an Event Based Plot Model. A Computational Narratology Approach”</article-title>
          .
          <source>In: Journal of Computational Literary Studies</source>
          <volume>1</volume>
          .1 (
          <issue>2022</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>20</lpage>
          . doi: 10.48694/jcls.110.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hühn</surname>
          </string-name>
          . “
          <article-title>Event and Eventfulness”</article-title>
          . In:Handbook of Narratology. Ed. by
          <string-name>
            <given-names>P.</given-names>
            <surname>Hühn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Meister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pier</surname>
          </string-name>
          , and
          <string-name>
            <given-names>W.</given-names>
            <surname>Schmid</surname>
          </string-name>
          . Berlin/Boston: De Gruyter,
          <year>2014</year>
          , pp.
          <fpage>159</fpage>
          -
          <lpage>178</lpage>
          . doi: 10.1515/9783110316469.159.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.</given-names>
            <surname>Labov</surname>
          </string-name>
          .
          <article-title>Language in the Inner City</article-title>
          . Philadelphia: University of Pennsylvania Press,
          <year>1972</year>
          , pp.
          <fpage>354</fpage>
          -
          <lpage>396</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V. D.</given-names>
            <surname>Lai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. N.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          . “Event Detection:
          <article-title>Gate Diversity and Syntactic Importance Scores for Graph Convolution Neural Networks”</article-title>
          .
          <source>In: Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)</source>
          .
          <year>2020</year>
          , pp.
          <fpage>5405</fpage>
          -
          <lpage>5411</lpage>
          . doi: 10.18653/v1/2020.emnlp-main.435.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Last</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Shmilovici</surname>
          </string-name>
          . “
          <article-title>Identifying Turning Points in Animated Cartoons”</article-title>
          .
          <source>In: Expert Systems with Applications</source>
          <volume>123</volume>
          (
          <year>2019</year>
          ), pp.
          <fpage>246</fpage>
          -
          <lpage>255</lpage>
          . doi: 10.1016/j.eswa.2019.01.003.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          and
          <string-name>
            <given-names>K.</given-names>
            <surname>McKeown</surname>
          </string-name>
          . “
          <article-title>Modeling Reportable Events as Turning Points in Narrative”</article-title>
          .
          <source>In: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing</source>
          . Ed. by
          <string-name>
            <given-names>L.</given-names>
            <surname>Màrquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Callison-Burch</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Su</surname>
          </string-name>
          . Lisbon, Portugal: Association for Computational Linguistics,
          <year>2015</year>
          , pp.
          <fpage>2149</fpage>
          -
          <lpage>2158</lpage>
          . doi: 10.18653/v1/D15-1257.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Piper</surname>
          </string-name>
          . “
          <article-title>Computational Narrative Understanding: A Big Picture Analysis”</article-title>
          .
          <source>In: Proceedings of the Big Picture Workshop</source>
          . Ed. by
          <string-name>
            <given-names>Y.</given-names>
            <surname>Elazar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ettinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kassner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruder</surname>
          </string-name>
          , and
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          . Singapore: Association for Computational Linguistics,
          <year>2023</year>
          , pp.
          <fpage>28</fpage>
          -
          <lpage>39</lpage>
          . doi: 10.18653/v1/2023.bigpicture-1.3.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Piper</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Bagga</surname>
          </string-name>
          . “
          <article-title>Toward a Data-Driven Theory of Narrativity</article-title>
          ”. In:
          <source>New Literary History 54.1</source>
          (
          <year>2022</year>
          ), pp.
          <fpage>879</fpage>
          -
          <lpage>901</lpage>
          . doi: 10.1353/nlh.2022.a898332.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G.</given-names>
            <surname>Prince</surname>
          </string-name>
          .
          <source>A Grammar of Stories: An Introduction</source>
          . Vol.
          <volume>13</volume>
          . De Proprietatibus Litterarum. Series Minor. The Hague: De Gruyter,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bolton</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          . “
          <article-title>Stanza: A Python Natural Language Processing Toolkit for Many Human Languages</article-title>
          ”. In:
          <source>arXiv preprint arXiv:2003.07082</source>
          (
          <year>2020</year>
          ). doi: https://doi.org/10.48550/arXiv.2003.07082.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Reagan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mitchell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kiley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Danforth</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Dodds</surname>
          </string-name>
          . “
          <article-title>The Emotional Arcs of Stories are Dominated by Six Basic Shapes</article-title>
          ”. In:
          <source>EPJ Data Science 5.1</source>
          (
          <year>2016</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi: 10.1140/epjds/s13688-016-0093-1.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jafarpour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Pennebaker</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Horvitz</surname>
          </string-name>
          . “
          <article-title>Quantifying the Narrative Flow of Imagined versus Autobiographical Stories</article-title>
          ”. In:
          <source>Proceedings of the National Academy of Sciences 119.45</source>
          (
          <year>2022</year>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi: 10.1073/pnas.2211715119.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>W.</given-names>
            <surname>Schmid</surname>
          </string-name>
          .
          <source>Elemente der Narratologie</source>
          . 3rd, revised ed. De Gruyter Studium. Berlin/New York: De Gruyter,
          <year>2014</year>
          . doi: https://doi.org/10.1515/9783110350975.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>W.</given-names>
            <surname>Schmid</surname>
          </string-name>
          . “
          <article-title>Narrativity and Eventfulness</article-title>
          ”. In:
          <source>What is Narratology? Questions and Answers Regarding the Status of a Theory</source>
          . Ed. by
          <string-name>
            <given-names>T.</given-names>
            <surname>Kindt</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.-H.</given-names>
            <surname>Müller</surname>
          </string-name>
          . Vol.
          <volume>1</volume>
          . Narratologia. Berlin/New York: De Gruyter,
          <year>2003</year>
          , pp.
          <fpage>239</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sims</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Park</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Bamman</surname>
          </string-name>
          . “
          <article-title>Literary Event Detection</article-title>
          ”. In:
          <source>Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</source>
          . Florence, Italy: Association for Computational Linguistics,
          <year>2019</year>
          , pp.
          <fpage>3623</fpage>
          -
          <lpage>3634</lpage>
          . doi: 10.18653/v1/P19-1353.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T.</given-names>
            <surname>Todorov</surname>
          </string-name>
          . “
          <article-title>Die Grammatik der Erzählung</article-title>
          ”. In:
          <source>Strukturalismus als interpretatives Verfahren</source>
          . Ed. by
          <string-name>
            <given-names>H.</given-names>
            <surname>Gallas</surname>
          </string-name>
          . Vol.
          <volume>2</volume>
          . Collection Alternative. Darmstadt/Neuwied: Luchterhand,
          <year>1972</year>
          , pp.
          <fpage>57</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jafarpour</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Sap</surname>
          </string-name>
          . “
          <article-title>Uncovering Surprising Event Boundaries in Narratives</article-title>
          ”. In:
          <source>Proceedings of the 4th Workshop of Narrative Understanding (WNU2022)</source>
          . Seattle, United States: Association for Computational Linguistics,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          . doi: 10.18653/v1/2022.wnu-1.1.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>