<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>ScANT: A Small Corpus of Scene-Annotated Narrative Texts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tarfah Alrashid</string-name>
          <email>ttalrashid1@sheffield.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Robert Gaizauskas</string-name>
          <email>r.gaizauskas@sheffield.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Jeddah</institution>
          ,
          <addr-line>Jeddah</addr-line>
          ,
          <country country="SA">Saudi Arabia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Sheffield</institution>
          ,
          <addr-line>Sheffield</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>We present the first publicly available dataset of English narrative texts annotated in compliance with SceneML, a framework for annotating scenes in narrative text. The dataset is composed of selected chapters from six narrative texts - two children's stories and four novels from Project Gutenberg. We give a brief overview of SceneML, describe the corpus sources and the annotation process and provide details of the resulting annotations and inter-annotator agreement.</p>
      </abstract>
      <kwd-group>
        <kwd>SceneML</kwd>
        <kwd>narrative text</kwd>
        <kwd>scenes</kwd>
        <kwd>text segmentation</kwd>
        <kwd>corpus</kwd>
        <kwd>dataset</kwd>
        <kwd>annotation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and Related Work</title>
      <p>
        Narrative, or storytelling, is a fundamental mode of human discourse, found across all cultures
and all times, and in many different forms, including writing (both fiction and non-fiction),
spoken storytelling, film, video games, and so on [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. A basic structural unit of narrative is
the scene, “a unit of a story in which the elements of time, location, and main characters are
constant” [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Narratives tend to progress as a sequence of scenes, though of course the sequence
of scenes in a narration need not be the same as the temporal sequence of the narrated events
in the storyworld [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] the narrative is describing. Furthermore, one scene may be expressed
in multiple non-contiguous text segments in the narration. Additionally, what are generally
deemed narrative texts may include various non-narrative elements, e.g. authorial comment.
Thus, the task of identifying those chunks in a narrative text which correspond to scenes in the
storyworld and temporally ordering these chunks is a non-trivial challenge. It is an important
challenge both for the insights it gives us into the structure of narratives and for possible
applications, which include automatic story illustration, aligning books and movies, automatic
generation of image descriptions and automatic generation of narratives.
      </p>
      <p>
        In previous work we have introduced SceneML as a framework for annotating scenes in
narrative text [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and discussed issues arising in a pilot annotation exercise which focussed on
the scene identification task [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this paper we present ScANT, the first publicly available
dataset of English narrative texts annotated in compliance with SceneML. While the corpus
is small – just 14 chapters from 6 narrative sources – our hope is that the wider community
will find this useful both for converging on annotation standards for scene identification and
for initial training and testing of automatic scene identification algorithms. The corpus and
annotation guidelines are available at https://doi.org/10.15131/shef.data.21517908 and are made
available under the CC By-NC 4.0 licence 1.
      </p>
      <p>
        There has been a growing interest in computational analysis of narrative. Ranade et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]
provide a thorough overview of recent work on computational understanding of narrative
and Santana et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] provide an extended survey on narrative extraction from textual data.
However, neither of these addresses the issue of identifying scenes in narrative text. The only
other work on annotation of scenes in narrative texts of which we are aware is that of Zehe et
al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
          ]. Their work differs from ours in several respects. First, their definition of scene states
that a scene is a segment in a narrative in which the time, place and characters remain constant
and which centres around one action. This contrasts with our definition that does not take
into account the actions in a scene and allows multiple actions to happen in one scene (see
Gaizauskas and Alrashid [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] for discussion around our choice of definition). Secondly, their
scheme is less comprehensive – it does not define narrative progression links between scenes or
scene transition segments, and only distinguishes scene and non-scene segments. Thirdly, they
follow a container principle (small places make up larger places) to detect a change in place, e.g.
if the action of characters moves from a corridor to dining room that will not indicate a change
in place as they both part of a hotel, where as our definition counts these as two diferent places.
Finally, they work on German texts while we are working on English texts.
      </p>
    </sec>
    <sec id="sec-3">
      <title>2. Methods and Resources</title>
      <sec id="sec-3-1">
        <title>2.1. SceneML</title>
        <p>SceneML is an evolving framework for annotating scenes in narrative text. The latest
specification and annotation guidelines are available along with the corpus at the DOI referenced above.
Here we summarise the core concepts in SceneML.</p>
        <p>A scene is defined as a unit of narrative in which the time, location and principal characters
are constant and in which specific events which constitute the narrative are recounted. Any
change in time, location or characters indicates a change in the scene. A scene is realised in text
(for written forms of narrative) through one or more scene description segments (SDSs). The SDS
mechanism allows for the relation of one scene in a narrative to be embedded within another, as
for example, in flashback or flashforward. The task of scene identification thus becomes the task
of identifying the boundaries of SDSs, and linking SDSs for the same scene together 2. SceneML
also specifies a set of four narrative progression relations (sequence, analepsis, prolepsis and
concurrence) that are used to capture the temporal relations between scenes.</p>
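      <p>The core concepts summarised above (scenes realised by one or more SDSs, linked by narrative progression relations) can be sketched as a small data model. The class and field names below are our own illustration, not the official SceneML schema:</p>
```python
# Illustrative data model for the core SceneML concepts summarised above.
# Class and field names are our own sketch, not the official SceneML schema.
from dataclasses import dataclass, field

# The four narrative progression relations defined by SceneML.
PROGRESSION = {"sequence", "analepsis", "prolepsis", "concurrence"}

@dataclass
class SDS:
    """Scene description segment: a contiguous text span."""
    start: int   # character offset where the segment begins
    end: int     # character offset where the segment ends

@dataclass
class Scene:
    """One storyworld scene, realised by one or more SDSs."""
    sdss: list = field(default_factory=list)

@dataclass
class Progression:
    """Temporal relation between two scenes."""
    rel: str     # one of PROGRESSION
    source: "Scene"
    target: "Scene"
    def __post_init__(self):
        assert self.rel in PROGRESSION, f"unknown relation: {self.rel}"
```
A flashback, for instance, would be modelled as a Progression with rel="analepsis" from the narrating scene to the earlier scene.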
        <p>Typically, not all text in a narrative is part of a scene description. Some passages describe not
one scene or another but rather the transition between scenes. For example, in Conan Doyle’s
The Man With The Twisted Lip the first scene takes place in Watson’s house and the second
in the East End of London, where Watson goes to seek a missing man. Between the two we
have the short passage: “And so in ten minutes I had left my armchair and cheery sitting-room
behind me, and was speeding eastward in a hansom on a strange errand...”. Such elements
SceneML refers to as scene transition segments (STSs). Other sorts of non-scene elements are also
present in narrative. These include general philosophising or opinion segments, background
information segments, and narrative summary or narrative catchup. These passages serve a
variety of functions but do not relate specific, situated events involving protagonists in the
story. All such passages SceneML designates as non-scene elements.</p>
        <p>1: https://creativecommons.org/licenses/by-nc/4.0/.
2: Full scene annotation in SceneML also involves annotating the time, place and characters (named entities) in the
scene, using existing annotation standards (ISO-TimeML, ISO-Space and the ACE NE guidelines). However, in
ScANT we focus on scene segmentation only.</p>
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Corpus Sources</title>
        <p>The dataset is composed of selected chapters from children’s stories and from out-of-copyright
adult novels. The former were hypothesised as likely to have a simpler narrative structure and
hence to be a good place to trial our approach; the latter as likely to possess more complex
narrative structure and hence pose a more challenging test to our approach. The sources are:
(1) Bunnies from the Future, a middle grade children’s story by Joe Corcoran 3. The author
has personally granted permission for us to release annotated chapters of this work. (2) The
Wonderful Wizard of Oz, originally released as part of the Brown Corpus 4 and free for
noncommercial purposes. (3) Pride and Prejudice, A Tale of Two Cities, The Adventures of Sherlock
Holmes and The Great Gatsby from Project Gutenberg 5. These are out of copyright in the US
and UK and freely re-distributable subject to Project Gutenberg’s terms and conditions.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. The Annotation Process</title>
      <p>
        In an earlier pilot study [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] we investigated how well-defined the SceneML definitions and
annotation framework were with respect to scene boundary identification. Analysis of the
annotations in that study revealed several causes of observed disagreement: (1) lack of
understanding of the guidelines and task, (2) lack of clarity or specificity in the guidelines, (3) failure
of non-native English speakers to fully grasp the meaning of certain expressions (e.g. idioms).
We have addressed these issues in the construction of ScANT through the following steps: (1) A
more thorough training process that included both an initial training session with a presentation,
demonstration and hands-on exercise for the trainees, plus a follow-on take-away exercise that
was scored against gold-standard annotations produced by the authors and then discussed with
the trainees, (2) Improvement of the initial guidelines to remove sources of confusion revealed
in the earlier pilot, (3) recruitment of native English-speaking annotators with sensitivity to text
analysis (two PhD students, one in English Literature and one in Computational Linguistics).
      </p>
      <p>The annotation process was carried out through a web-based interface to a local instance of
the Brat annotation tool 6. Annotators used swipe and click operations to annotate SDSs and STSs.
Multiple SDSs that are part of the same scene were linked using the Brat relation annotation
tool to signal that a same-scene-as relation holds between them. The annotated data is stored
and made available in Brat standoff annotation format 7.</p>
      <p>3: https://freekidsbooks.org/author/joe-corcoran/
4: https://www.nltk.org/nltk_data/
5: https://www.gutenberg.org
6: https://brat.nlplab.org</p>
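      <p>In Brat standoff format, text-bound annotations (here, the SDS and STS spans) appear on tab-separated "T" lines and relations (here, same-scene-as links) on "R" lines. A minimal sketch of reading such a file and grouping linked SDSs into scenes follows; the label names match those used above, but the helper functions are our own illustration:</p>
```python
# Minimal reader for Brat standoff (.ann) files as used for ScANT-style
# annotations: "T" lines are text-bound spans (SDSs/STSs), "R" lines are
# binary relations (same-scene-as links between SDSs).
from collections import defaultdict

def read_ann(lines):
    spans, links = {}, []
    for line in lines:
        line = line.rstrip("\n")
        if not line:
            continue
        if line.startswith("T"):       # e.g. "T1\tSDS 0 120\t<covered text>"
            tid, info, _text = line.split("\t", 2)
            label, start, end = info.split(" ")[:3]
            spans[tid] = (label, int(start), int(end))
        elif line.startswith("R"):     # e.g. "R1\tsame-scene-as Arg1:T1 Arg2:T3"
            _rid, info = line.split("\t", 1)
            rel, a1, a2 = info.split(" ")
            links.append((rel, a1.split(":")[1], a2.split(":")[1]))
    return spans, links

def group_scenes(spans, links):
    """Union SDSs connected by same-scene-as links into scenes (union-find)."""
    parent = {t: t for t, (lab, *_) in spans.items() if lab == "SDS"}
    def find(t):
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t
    for rel, a, b in links:
        if rel == "same-scene-as" and a in parent and b in parent:
            parent[find(a)] = find(b)
    scenes = defaultdict(list)
    for t in parent:
        scenes[find(t)].append(t)
    return list(scenes.values())
```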
      <p>The corpus consists of fourteen chapters from six different narrative sources 8. In each
chapter SDSs, STSs and same-scene-as relations were annotated by two annotators and saved
in a separate text file. Both annotators’ annotations are supplied with the corpus. Further
annotations together with a consensus annotation may be made available in the future.</p>
    </sec>
    <sec id="sec-5">
      <title>4. The ScANT Corpus</title>
      <sec id="sec-5-1">
        <title>4.1. Corpus Statistics</title>
        <p>7: It can be converted to JSONL format using the tool at: https://github.com/astutic/brat-standoff-to-json/.
8: As one of the chapters is quite long it has been divided into three parts for analysis.</p>
      </sec>
      <sec id="sec-5-2">
        <title>4.2. Inter-annotator Agreement</title>
        <p>
          Table 2 shows inter-annotator agreement results for SDSs using Cohen’s Kappa [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. To calculate
Kappa, each sentence is given a tag, 1 for sentences on the boundary of an SDS (either beginning
or end) and 0 otherwise. Boundaries of STSs are ignored, as the two annotators’
understanding of that part of the task is so different that precise quantitative analysis is not merited.
        </p>
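        <p>The tagging scheme just described reduces the comparison to two binary sequences of per-sentence tags, over which Cohen’s kappa takes its standard form. A minimal stdlib-only sketch:</p>
```python
# Cohen's kappa for two binary tag sequences, where tag 1 marks a sentence
# on an SDS boundary (beginning or end) and 0 marks any other sentence.
def cohen_kappa(a, b):
    assert len(a) == len(b)
    n = len(a)
    # observed agreement: fraction of sentences given the same tag
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # expected agreement from each annotator's marginal label frequencies
    p_e = sum((a.count(lab) / n) * (b.count(lab) / n) for lab in (0, 1))
    if p_e == 1.0:   # degenerate case: both annotators use a single label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```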
        <p>Aside from counting only exact matches as agreement (N = 0 in Table 2 ) we also
investigated a more lenient approach to calculating agreement, in which annotators are deemed
to agree if they place a sentence boundary within N sentences of each other. This was prompted by the
observation that in many cases annotators seemed to be placing SDS boundaries relatively close
to each other, but not exactly in the same place. Kappa scores have been calculated for various
values of N: N = 30% of the median SDS sentence length in each chapter, N=1, N=3 and N=5. We
have omitted one chapter from Table 2 – A Tale of Two Cities, chapter 1 – because one annotator
believed it contained nothing but non-scene segments, while the other thought it contained 5
scenes. This gave a kappa score of 0, which skewed the rest of the results.</p>
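        <p>One plausible implementation of this lenient scoring, not necessarily the exact calculation behind Table 2, treats a boundary tag as matched whenever the other annotator placed a boundary within N sentences of it, and then applies the usual kappa correction:</p>
```python
# Lenient boundary agreement: a boundary tag (1) counts as matched if the
# other annotator placed a boundary within n sentences of it. This is one
# plausible reading of the relaxed scoring described above, not necessarily
# the exact calculation used for Table 2.
def lenient_kappa(a, b, n):
    size = len(a)
    def agree_at(i):
        lo, hi = max(0, i - n), min(size, i + n + 1)
        if a[i] == 1 and 1 not in b[lo:hi]:
            return False
        if b[i] == 1 and 1 not in a[lo:hi]:
            return False
        return True   # includes the case where both tags are 0
    p_o = sum(agree_at(i) for i in range(size)) / size
    p_e = sum((a.count(lab) / size) * (b.count(lab) / size) for lab in (0, 1))
    return 1.0 if p_e == 1.0 else (p_o - p_e) / (1 - p_e)
```
With n=0 this reduces to exact-match kappa, so the lenient scores are lower-bounded by the strict ones.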
      </sec>
      <sec id="sec-5-3">
        <title>4.3. Discussion</title>
        <p>
          Regarding differences between our annotators, it is clear that the two annotators have a different
conception of what STSs and NSSs are. This is probably due to the fact that the children’s stories
which we used as training materials contain very few of either of these, particularly NSSs
(in fact this appears to be an interesting difference between children’s and adult narrative).
Any future annotation effort should ensure that these concepts are more clearly understood by
annotators. On examination, our view is that A1 has followed the guidelines much more closely
regarding STSs and NSSs and therefore, if one is to train or test a classifier on these materials,
our recommendation would be to use the A1 annotations only. However, recent work such as
that reported in Uma et al. [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] highlights the potential value of learning from disagreement, so
we have included both sets of annotations in the corpus.
        </p>
        <p>
          Concerning the kappa scores for SDS agreement, they fall in the range that has been
interpreted as “fair” or “fair to good”. However, kappa scores are known to be lower when there
are fewer labels (just two in our case) and when the labels do not occur with equal probability
(also true in our case, since 0 labels are much more frequent than 1s), so results should be
viewed in this light [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Percentage agreement scores for SDS annotations are around 90%.
Note that kappa scores rise significantly if we are prepared to allow some leniency in terms of
non-exact matching. How legitimate this is needs further examination to determine whether
the improvement is a reflection of genuine uncertainty about the precise boundary between
what the annotators clearly agree are distinct scenes or whether it is the result of conflating
separate scenes, according to the different annotators’ perceptions.
        </p>
        <p>While the corpus is too small to start making generalisations about stylistic differences
between different authors, it is worth noting that the amount of non-scene content in the adult
novels (6.21% of the total sentences if we accept A1’s NSS annotations, which we believe are
more accurate) is vastly greater than that in the children’s stories (0.14%), suggesting that much
beyond simple event narration goes on in adult fiction.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>5. Conclusion and Future Work</title>
      <p>We have presented the first dataset of English narrative texts annotated in compliance with
SceneML. The dataset consists of fourteen chapters of novels and children’s stories annotated
for scene description segments and scene transition segments as defined in SceneML. A total
of almost 200 scenes have been annotated.</p>
      <p>Future work plans include various activities. These include:
1. gathering further annotations for the ScANT source texts to increase robustness of the
annotations and guidelines;
2. extending the annotation to include SceneML narrative progression links;
3. training a model on the corpus to investigate automating the task of scene boundary
detection and to ascertain the sufficiency of ScANT for this task;
4. adding other text types to the annotated dataset, such as biography, plays and film scripts;
5. expanding the dataset to include texts in other languages;
6. exploring whether there is interest in a shared task challenge on scene boundary detection.</p>
      <p>We hope the community finds ScANT of use and welcome comment on our work.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The authors thank the Text2Story reviewers for their helpful comments. The first author
acknowledges support from the University of Jeddah in the form of a PhD studentship.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Wikipedia</surname>
          </string-name>
          , Narrative,
          <year>2023</year>
          . URL: https://en.wikipedia.org/wiki/Narrative, last accessed 25 March 2023.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Alrashid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gaizauskas</surname>
          </string-name>
          ,
          <article-title>A pilot study on annotating scenes in narrative text using SceneML</article-title>
          ,
          <source>in: Proceedings of the 4th international workshop on narrative extraction from texts (Text2Story</source>
          <year>2021</year>
          ),
          <year>2021</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>14</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Schmid</surname>
          </string-name>
          ,
          <source>Narratology: An introduction</source>
          , Walter de Gruyter, Berlin,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Gaizauskas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Alrashid</surname>
          </string-name>
          ,
          <article-title>SceneML: A proposal for annotating scenes in narrative text</article-title>
          ,
          <source>in: Proceedings of the 15th Workshop on Interoperable Semantic Annotation (ISA-15)</source>
          , Gothenburg, Sweden,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Ranade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Finin</surname>
          </string-name>
          ,
          <article-title>Computational understanding of narratives: A survey</article-title>
          ,
          <source>IEEE Access 10</source>
          (
          <year>2022</year>
          )
          <fpage>101575</fpage>
          -
          <lpage>101594</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Santana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Campos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Amorim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Jorge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Silvano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nunes</surname>
          </string-name>
          ,
          <article-title>A survey on narrative extraction from textual data</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>43</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Zehe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Konle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. K.</given-names>
            <surname>Dümpelmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gius</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hotho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Jannidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Kaufmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krug</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Puppe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reiter</surname>
          </string-name>
          , et al.,
          <article-title>Detecting scenes in fiction: A new segmentation task</article-title>
          ,
          <source>in: Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics:</source>
          Main Volume,
          <year>2021</year>
          , pp.
          <fpage>3167</fpage>
          -
          <lpage>3177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <article-title>A coefficient of agreement for nominal scales</article-title>
          ,
          <source>Educational and psychological measurement 20</source>
          (
          <year>1960</year>
          )
          <fpage>37</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Uma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Fornaciari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Hovy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Paun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Plank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poesio</surname>
          </string-name>
          ,
          <article-title>Learning from disagreement: A survey</article-title>
          ,
          <source>J. Artif. Int. Res</source>
          .
          <volume>72</volume>
          (
          <year>2022</year>
          )
          <fpage>1385</fpage>
          -
          <lpage>1470</lpage>
          . URL: https://doi.org/10.1613/jair.1.12752. doi:10.1613/jair.1.12752.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Wikipedia</surname>
          </string-name>
          , Cohen's kappa,
          <year>2023</year>
          . URL: https://en.wikipedia.org/wiki/Cohen%27s_kappa, last accessed 26 March 2022.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>