<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>THEaiTRE: Artificial Intelligence to Write a Theatre Play</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rudolf Rosa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ondřej Dušek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tom Kocmi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Mareček</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomáš Musil</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrícia Schmidtová</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dominik Jurko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ondřej Bojar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Hrbek</string-name>
          <email>hrbek@svandovodivadlo.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Košťák</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martina Kinská</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Josef Doležal</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Klára Vosecká</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>In: A. Jorge, R. Campos, A. Jatowt, A. Aizawa (eds.): Proceedings of the first AI4Narratives Workshop</institution>
          ,
          <addr-line>Yokohama</addr-line>
          ,
          <country country="JP">Japan</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>We present THEaiTRE, a starting research project aimed at automatic generation of theatre play scripts. This paper reviews related work and drafts an approach we intend to follow. We plan to adopt generative neural language models and hierarchical generation approaches, supported by summarization and machine translation methods, and complemented with a human-in-the-loop approach.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>We introduce the THEaiTRE project,1 which aims to produce
and stage the first computer-generated theatre play. This play
will be presented on the occasion of the 100th anniversary of
Karel Čapek’s play R.U.R. [Čapek, 1920], for which Čapek
invented the word “robot”.</p>
      <p>The project, currently in its early stages, is at the
intersection of artificial intelligence research and theatre studies.
The core of our approach is to use state-of-the-art deep neural
models trained and fine-tuned on theatre play data. However,
our team includes both experts on natural language
processing and theatre experts, and our solution will be based on
research and exchange of experience from both fields.</p>
      <p>In this paper, we first review related previous works
(Section 2) and data resources available to us (Section 3). We
then draft the approaches we are following and intending to
follow in the project (Section 4) and present the project
timeline (Section 5).
</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <sec id="sec-2-1">
        <title>Narrative Natural Language Generation</title>
        <p>While we are not aware of any generation systems specifically
aimed at theatre play generation, research in story/narrative
generation has been quite active in recent years, with
computer-aided systems allowing various degrees of
automation and different abilities in learning from data [Kybartas
and Bidarra, 2017; Riedl, 2018]. Since recurrent neural
networks (RNNs) were applied to text generation [Bahdanau et
al., 2015; Sutskever et al., 2014], research in story generation
has mostly focused on fully data-driven, fully automated
approaches. As plain RNNs were found unsuitable for
producing longer, coherent texts [Wiseman et al., 2017], multiple
improvements have been proposed.</p>
        <p>Copyright © 2020 by the paper’s authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
        <p>
          The first line of work focuses on providing a higher-level
semantic representation to the networks and conditioning the
generation on it. Martin et al. [
          <xref ref-type="bibr" rid="ref15 ref27 ref30">2018</xref>
          ] and Ammanabrolu et
al. [2019; 2020] use an event-based representation, where an
event roughly represents a clause (predicate, subject, direct
and indirect object). The model generates the story at the
event level and subsequently realizes the individual events to
surface sentences. Tu et al. [2019] take a similar approach,
using frame semantics and also conditioning sentence
generation on other information, such as sentiment.
        </p>
        <p>
          Other works focus on explicit entity modelling across the
generated story, e.g., Clark et al. [
          <xref ref-type="bibr" rid="ref15 ref27 ref30">2018</xref>
          ]. Here, each entity
has its own distributed representation (embedding), which is
updated on each mention of the entity in the story.
        </p>
        <p>
          Multiple authors attempt to increase long-term coherence
by hierarchical story generation. Fan et al. [
          <xref ref-type="bibr" rid="ref15 ref27 ref30">2018</xref>
          ] generate
first a short prompt/tagline, then use it to condition the full
story generation. Yao et al. [2019] take a similar approach,
using a “storyline” – a list of entities and items to be
introduced in the story in the given order. Fan et al. [2019]
then combine the hierarchical generation with explicit entity
modelling. Their system generates outputs using anonymized
but tracked entities, which are subsequently lexicalized in the
context of the story by generating referring expressions.
        </p>
        <p>Several works experiment with altering the base RNN
architecture: Wang and Wan [2019] use a modified Transformer
architecture [Vaswani et al., 2017], which is trained as a
conditional variational autoencoder. Tambwekar et al. [2019]
utilize reinforcement learning with automatically induced
rewards to train their event-based model. Ammanabrolu et
al. [2019; 2020] extend this work by experimenting with
various sentence realization techniques, including retrieval from
a database and post-editing.</p>
        <p>The latest works use massive pretrained language models
based on the Transformer architecture, such as GPT-2
[Radford et al., 2019], for generation. See et al. [2019] use GPT-2
directly and show that it is superior to plain RNNs. Mao et al.
[2019] apply GPT-2 fine-tuned for both story generation and
common-sense reasoning to improve coherence.</p>
        <p>While research in this area has progressed considerably,
most experiments have been performed on rather short and
simple stories, such as the ROCStories corpus [Mostafazadeh
et al., 2016]. Many works focus on limited tasks, such as
single-sentence continuation generation [Tu et al., 2019]. The
state-of-the-art results still cannot match human performance,
producing repetitive and dull outputs [See et al., 2019].
</p>
      </sec>
      <sec id="sec-2-2">
        <title>Dramatic Analysis</title>
        <p>For our needs, we are mostly interested in classifications and
abstractions over theatre play scripts or their parts. In the field
of theatre studies, there is a vast amount of research on the
structure and interpretation of theatre plays. Unfortunately,
the results of such research are not made available in forms
and formats that would easily allow us to use these as data
and annotations in machine learning approaches.</p>
        <p>
          The Thirty-Six Dramatic Situations by Polti [
          <xref ref-type="bibr" rid="ref26">1921</xref>
          ]2 is a
classic work in which the author presents a purportedly
exhaustive list of all categories of dramatic situations that
can occur in a theatre play (e.g. “adultery” or “conflict with a
god”), further subclassified into 323 situational possibilities.
        </p>
        <p>Although not directly related to theatre plays, the work of
Propp [1968] is also essential. Propp analyzed Russian folk
tales and identified 31 functions, similar to Polti’s situations
but somewhat more down-to-earth (e.g. “villainy” or
“wedding”), as well as 7 abstract character types (e.g. “villain” or
“hero”) and other abstractions.</p>
        <p>
          Polti’s and Propp’s categorizations are sometimes used in
analyzing and generating narratives, although typically not in
drama. The work closest to our focus is probably that of
Gervás et al. [2016] or Lombardo et al. [
          <xref ref-type="bibr" rid="ref15 ref27 ref30">2018</xref>
          ], who devised
ontologies of abstractions for annotating scripts, based on
both of the mentioned works, as well as on more recent plot
categorization studies [Booker, 2004; Tobias, 2011].
        </p>
        <p>There are also works producing drama analyses in the
form of networks, capturing various relations between the
characters in the play [Moretti, 2014; Horstmann, 2019;
Fischer et al., 2019].
</p>
      </sec>
      <sec id="sec-2-3">
        <title>Computer-Generated Art</title>
        <p>There is already a range of partially or fully artificially
generated works of art – e.g. a short sci-fi movie with an
LSTM-generated and human-post-edited script [Benjamin et al.,
2016], a musical based on suggestions from several
automated tools [Colton et al., 2016], a human-picked collection
of computer-generated poems [Materna, 2016], or a theatre
play written with the help of a next-word suggestion tool
[Helper, 2018]. While this demonstrates the technical
possibility of such an approach, the mixed reception of the
outcomes shows that the employed technologies are not (yet?) on
par with humans [See et al., 2019]. We thus believe a more
specialized and complex approach is needed here.</p>
        <p>2 https://en.wikipedia.org/wiki/The_Thirty-Six_Dramatic_Situations</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Data Resources</title>
      <p>Theatre play scripts are not easily available for our purposes.
As no reasonable corpus is available, we have to create one
ourselves. The corpus will contain Czech and English theatre
play scripts and synopses (plot summaries), and will be used
to train and fine-tune our systems, described in the following
sections. We are also collecting film and TV series scripts, which
are easier to obtain in large quantities, although they are not a
perfect match for our setting. Unfortunately, due to copyright
reasons, we will not be able to release the full corpus.</p>
      <p>In most cases, scripts cannot be downloaded for free, and
most scripts seem to be available only in print or scanned.
Even electronically available scripts come in various formats,
and there seems to be no technical standard in this respect.
For our project, we need to devise a common representation
format, and automatically or semi-automatically convert and
normalize the data into this format, marking character names,
lines, scenic notes, scene settings, etc. [Croce et al., 2019].
The scripts and synopses also need to be paired with each
other. At the moment, we have only collected and partially
converted several hundred documents.</p>
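<p>To illustrate the kind of normalization we have in mind, here is a minimal sketch; the record format and the regular expressions are our own simplifications for illustration, not the project’s actual representation. It converts a raw script excerpt into typed records marking character names, lines, and scenic notes:</p>

```python
import re

# Hypothetical minimal normalization format (our own simplification):
# "NAME: text" is a character's line, "(text)" is a scenic note,
# anything else is treated as a scene setting.
LINE_RE = re.compile(r"^([A-Z][A-Z .']+):\s*(.+)$")
NOTE_RE = re.compile(r"^\((.+)\)$")

def normalize_script(raw):
    """Convert a raw script excerpt into a list of typed records."""
    records = []
    for line in raw.strip().splitlines():
        line = line.strip()
        if not line:
            continue
        m = LINE_RE.match(line)
        if m:
            records.append({"type": "line",
                            "character": m.group(1).title(),
                            "text": m.group(2)})
        elif NOTE_RE.match(line):
            records.append({"type": "scenic_note",
                            "text": NOTE_RE.match(line).group(1)})
        else:
            records.append({"type": "setting", "text": line})
    return records

sample = """
A factory office.
HELENA: Is it true that you manufacture people here?
(Domin rises.)
DOMIN: We manufacture Robots, Miss Glory.
"""
parsed = normalize_script(sample)
```

<p>A real converter would of course need many more heuristics per source format; the point is only the shared target representation.</p>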
    </sec>
    <sec id="sec-4">
      <title>Planned Approach</title>
      <p>As a theatre play script is a highly structured and complex
piece of text, we plan to take a hierarchical approach to
generating the full script, composed of several steps and also
employing human input in the process. The overall idea is to
start from a brief description of the play, gradually expanding
it into more detailed act and scene synopses, and finally
generating the individual scene dialogues. We currently envision
using generative neural models for the final step (Section 4.1),
conditioned by prompts generated by hierarchical generation
approaches (Section 4.2).
</p>
      <sec id="sec-4-1">
        <title>Applying Neural Language Models</title>
        <p>Large neural language models (LMs), such as GPT-2
[Radford et al., 2019; see Section 2.1], are able to generate
believable texts in certain domains (e.g. news articles). This is
not the case for the domain of theatre plays. The original
GPT-2 must have had a number of plays (or movie scripts) in
the training data, which is evident when it is presented with
a suitable starting prompt. It can produce a text that follows
the formal structure and has some level of content coherence.
However, the basic attributes of a dramatic situation are
missing: there is no plot, and the scene is not moving towards a
conclusion. Other problems include having new characters
appear randomly in the middle of the scene or falling into a
state of repeating the same sentence forever.</p>
        <p>Our basic workflow would be to seed an LM with a prompt
which forms the beginning of a dramatic situation; the LM
then generates the rest of the dialogue. We plan to fine-tune
the LM on theatre plays to see how far this approach can go.
Then we plan to restrict the generation by enforcing that only
certain predetermined characters speak, possibly in a
pre-generated order. This can be achieved by stopping the generation
at the end of a character’s line, adding the name of the next
desired character, and then resuming the generation process.</p>
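<p>The stop-and-resume scheme can be sketched as follows. Here <monospace>continue_text</monospace> is a stand-in we mock for illustration; the real system would sample from a GPT-2-style LM fine-tuned on theatre plays, so the sketch shows only the control flow, not the model:</p>

```python
def generate_dialogue(continue_text, opening, speakers):
    """Force a predetermined speaker order: stop the LM at the end of
    each character's line, add the next speaker's name, and resume."""
    script = opening
    for name in speakers:
        # Prompt the model with the next desired speaker's cue.
        script += "\n" + name + ": "
        generated = continue_text(script)
        # Keep only the first generated line, i.e. stop at turn end.
        script += generated.split("\n")[0]
    return script

# A stand-in "model" for illustration; a real system would sample
# from a large LM fine-tuned on theatre play scripts.
def fake_lm(prompt):
    return "I hear you." + "\nEXTRA: this spillover gets cut"

scene = generate_dialogue(fake_lm, "A lab at night.", ["HELENA", "DOMIN"])
```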
        <p>To make the characters more internally consistent and
different from each other at the same time, we plan to devise
individual LMs specialized to specific character types, based
on a clustering of the characters across plays. The part of each
character would then be generated by a different LM; i.e., the
script would consist of several LMs “talking” to each other.
</p>
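<p>The idea of several character-specific LMs “talking” to each other can be sketched as a round-robin loop; the per-character “models” below are trivial stand-ins for LMs specialized to character clusters:</p>

```python
def multi_lm_dialogue(models, order, opening, turns):
    """Let several character-specific LMs 'talk' to each other:
    on each turn, the current speaker's own model continues the
    shared script generated so far."""
    script = opening
    for i in range(turns):
        name = order[i % len(order)]
        script += "\n" + name + ": " + models[name](script)
    return script

# Trivial stand-ins for LMs specialized per character cluster.
models = {"RADIUS": lambda s: "We robots demand freedom.",
          "HELENA": lambda s: "Please, listen to reason."}
dialogue = multi_lm_dialogue(models, ["RADIUS", "HELENA"],
                             "The factory hall.", 2)
```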
      </sec>
      <sec id="sec-4-2">
        <title>Hierarchical Generation</title>
        <p>
          We also plan to extend our experiments with hierarchical
generation from large pretrained LMs. We will use an approach
similar to Fan et al. [
          <xref ref-type="bibr" rid="ref15 ref27 ref30">2018</xref>
          ] and Yao et al. [2019] (see
Section 2.1): starting with generating a title or a prompt for the
story, then generating a textual synopsis. The generation of
the play from the synopsis will follow as a novel step, not
present in previous works. We are considering multiple
options for the synopsis representation: the
play background/setting from play databases, more detailed
synopses from fan websites, or scenic remarks extracted from
texts of plays themselves. Ultimately, the choice will be made
based on data availability. The setup will also include
generating “play metadata”, such as the main theme, list of
characters, narrative type, etc.
        </p>
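<p>The top-down expansion can be sketched as a chain of conditional generation stages; the stage names and <monospace>stub_gen</monospace> are illustrative placeholders, not our actual models:</p>

```python
def hierarchical_generation(gen, theme):
    """Expand a theme into a full script through increasingly
    detailed intermediate representations, each stage conditioned
    on everything generated so far."""
    play = {"theme": theme}
    for stage in ("title", "characters", "synopsis", "script"):
        play[stage] = gen(stage, dict(play))
    return play

# Stub generator for illustration: records what each stage sees.
def stub_gen(stage, context):
    return "[" + stage + " | given " + ", ".join(sorted(context)) + "]"

result = hierarchical_generation(stub_gen, "robots and humanity")
```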
        <p>The final step will use a similar approach to the base LM
generation (see Section 4.1). We also plan on using
explicit embeddings for individual characters in the play and
using explicit entity tracking/coreference [Clark et al., 2018;
Fan et al., 2019]. Since the available automatic coreference
tools [e.g., Clark and Manning, 2016; Lee et al., 2017] are
typically not trained for processing dialogic texts, they may
require adaptation.
</p>
      </sec>
      <sec id="sec-4-3">
        <title>Data Synthesis through Summarization</title>
        <p>The hierarchical generation approach relies on data that
contain information at various levels of granularity, as described
in Section 4.2. However, most of the available data contain only
the title and the script of the play, lacking the other valuable
information. In our project, we intend to synthesize the
missing data; synthetic data are frequently used in various tasks,
such as machine translation [Bojar and Tamchyna, 2011;
Sennrich et al., 2016].</p>
        <p>We can generate synthetic data using the classical task
of text summarization, abstractive summarization in
particular [Rush et al., 2015]. The main idea is to take a long
document, summarize it into a few sentences, and then use
these synthetic data for training the generative
models in the hierarchical approach. With various
summarizing models, we can first abstract the whole script of a theatre
play into a detailed synopsis, then the detailed synopsis into
a short plot synopsis, and eventually the short synopsis into
the play title. With these summarizing models, we can fill the
gaps in our datasets, so that the hierarchical generation
models can be trained on all theatre scripts available to us, even if
they lack some or all higher-level summaries.</p>
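<p>Gap-filling through chained summarization could look like the following sketch, where <monospace>toy_summarize</monospace> (plain truncation) stands in for a trained abstractive summarization model:</p>

```python
# Granularity levels, from most to least detailed.
LEVELS = ["script", "detailed_synopsis", "short_synopsis", "title"]

def fill_gaps(record, summarize):
    """Synthesize each missing level by summarizing the next more
    detailed level, so every play ends up with all four levels."""
    filled = dict(record)
    for src, dst in zip(LEVELS, LEVELS[1:]):
        if filled.get(dst) is None and filled.get(src) is not None:
            filled[dst] = summarize(filled[src], target=dst)
    return filled

# Toy "summarizer": truncation standing in for an abstractive model.
def toy_summarize(text, target):
    return text[:20].rstrip() + " ..."

play = {"script": "Robots rebel against their makers and one human survives.",
        "detailed_synopsis": None, "short_synopsis": None, "title": None}
complete = fill_gaps(play, toy_summarize)
```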
        <p>We plan to train the Transformer model [Vaswani et al.,
2017] for the summarization tasks. As we expect the amount
of available training play-summary pairs to be scarce, we will
pretrain our models on other summarization tasks, such as
news abstract generation for which plenty of parallel data is
available [Straka et al., 2018], followed by fine-tuning the
pretrained models on our in-domain theatre data.</p>
        <p>Due to the specific nature of the genre, where much of what
is meant is not explicitly said by any of the characters, we
expect that summarization may prove difficult or even impossible,
and this component thus cannot be entirely relied on.</p>
      </sec>
      <sec id="sec-4-4">
        <title>Machine Translation</title>
        <p>We plan on using machine translation (MT) for two purposes:
(1) Since we have limited amounts of training data scattered
across both English and Czech, we need the generation to
take advantage of data in both languages. Therefore, we plan
to generate new training data by translating either Czech texts
to English or vice versa. (2) We would like the
resulting generated play to be instantly available in both languages.
Therefore, we plan to generate it in one of the languages and
use MT to carry the result over to the other language.</p>
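<p>Purpose (1) amounts to standard synthetic-data augmentation through translation. A sketch with a placeholder <monospace>translate</monospace> function (in the actual project this would be the Czech-English MT system, not the toy below):</p>

```python
def augment_corpus(docs, translate):
    """Double the corpus by translating every document into the
    other language, marking synthetic items as such."""
    augmented = []
    for doc in docs:
        augmented.append(dict(doc, synthetic=False))
        other = "en" if doc["lang"] == "cs" else "cs"
        augmented.append({"lang": other,
                          "synthetic": True,
                          "text": translate(doc["text"],
                                            src=doc["lang"], tgt=other)})
    return augmented

# Tagging "translator" standing in for a real MT system.
def fake_mt(text, src, tgt):
    return "[" + tgt + "] " + text

corpus = [{"lang": "cs", "text": "Roboti se bouří."},
          {"lang": "en", "text": "The robots rebel."}]
both = augment_corpus(corpus, fake_mt)
```

<p>Marking synthetic items lets downstream training down-weight or filter them if the translations prove noisy.</p>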
        <p>For both applications, we are going to use our in-house
state-of-the-art Czech-English model [Popel, 2018].
However, theatre play scripts are a specific domain of data for
which our MT models were not specifically trained. To tackle
this problem, we will fine-tune [Miceli Barone et al., 2017]
the general MT models on theatre parallel data, possibly also
applying automated heuristic pre-processing and/or
post-editing [Rosa et al., 2012].</p>
      </sec>
      <sec id="sec-4-5">
        <title>Human in the Loop</title>
        <p>To ensure a satisfactory result, we intend to complement the
automated generation with interventions from theatre
professionals, using a human-in-the-loop approach.</p>
        <p>We currently envision using the automated system to
generate texts and the human to choose parts of the output to use
in the play. This could be done e.g. in an iterative interactive
way, where the system generates several options for a line of
the script, the human picks one of the options to add to the
script, the system generates continuation options, etc.</p>
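<p>The iterative interactive scheme can be sketched as a loop in which the system proposes several continuation options and a human, here replaced by a selection callback for illustration, picks one:</p>

```python
def interactive_script(propose, choose, seed, turns):
    """Alternate system proposals and human choices: `propose`
    returns candidate next lines, `choose` returns the index of
    the option the human picked."""
    script = [seed]
    for _ in range(turns):
        options = propose(script)
        script.append(options[choose(options)])
    return script

# Stand-ins: a "model" proposing canned options and a "human"
# who always picks the shortest candidate.
def canned_propose(script):
    n = len(script)
    return ["Line " + str(n) + ": a long rambling option",
            "Line " + str(n) + ": short"]

def pick_shortest(options):
    return min(range(len(options)), key=lambda i: len(options[i]))

draft = interactive_script(canned_propose, pick_shortest,
                           "HELENA: Hello.", 2)
```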
        <p>Moreover, only the dialogues of the characters will be fully
automatically generated. The subsequent realization and
performance of the play will be in the hands of theatre
professionals, who will analyze and interpret the script, devise stage
directions, rehearse the play, design the scene, and finally
perform the play for a live audience, all of which will further
shape the perception of the play by the spectators.
</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusion and Future Work</title>
      <p>After some preliminary work, the project started in April
2020. The first automatically generated THEaiTRE play will
premiere in January 2021, on the occasion of the 100th
anniversary of the premiere of the play R.U.R. [Čapek, 1920].
A premiere of a second play, generated by an improved
version of our system, is planned for 2022.</p>
      <p>The project can be tracked at https://theaitre.com</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The THEaiTRE project is supported by the Technology
Agency of the Czech Republic grant TL03000348 and
partially supported by SVV project number 260 575.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [Ammanabrolu et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Prithviraj</given-names>
            <surname>Ammanabrolu</surname>
          </string-name>
          , Ethan Tien, Wesley Cheung, Zhaochen Luo, William Ma, Lara Martin,
          <string-name>
            <given-names>and Mark</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Guided Neural Language Generation for Automated Storytelling</article-title>
          .
          <source>In Proceedings of the Second Workshop on Storytelling</source>
          , pages
          <fpage>46</fpage>
          -
          <lpage>55</lpage>
          , Florence, Italy,
          <year>August 2019</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [Ammanabrolu et al.,
          <year>2020</year>
          ]
          <string-name>
            <given-names>Prithviraj</given-names>
            <surname>Ammanabrolu</surname>
          </string-name>
          , Ethan Tien, Wesley Cheung, Zhaochen Luo, William Ma, Lara J.
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>and Mark O.</given-names>
          </string-name>
          <string-name>
            <surname>Riedl</surname>
          </string-name>
          . Story Realization:
          <article-title>Expanding Plot Events into Sentences</article-title>
          . In AAAI, New York, NY, USA,
          <year>February 2020</year>
          . arXiv:
          <year>1909</year>
          .03480.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [Bahdanau et al.,
          <year>2015</year>
          ]
          <string-name>
            <given-names>Dzmitry</given-names>
            <surname>Bahdanau</surname>
          </string-name>
          , Kyunghyun Cho, and
          <string-name>
            <surname>Yoshua Bengio.</surname>
          </string-name>
          <article-title>Neural Machine Translation by Jointly Learning to Align and Translate</article-title>
          .
          <source>In 3rd International Conference on Learning Representations (ICLR2015)</source>
          , San Diego, CA, USA, May
          <year>2015</year>
          . arXiv:
          <volume>1409</volume>
          .
          <fpage>0473</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [Benjamin et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>AI</given-names>
            <surname>Benjamin</surname>
          </string-name>
          ,
          <article-title>Oscar Sharp, and Ross Goodwin. Sunspring, a sci-fi short film starring Thomas Middleditch</article-title>
          ,
          <year>2016</year>
          . https://www.youtube.com/watch?v=
          <fpage>LY7x2Ihqjmc</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <source>[Bojar and Tamchyna</source>
          , 2011]
          <string-name>
            <given-names>Ondřej</given-names>
            <surname>Bojar and Aleš Tamchyna</surname>
          </string-name>
          .
          <article-title>Improving Translation Model by Monolingual Data</article-title>
          .
          <source>In Proceedings of WMT</source>
          , pages
          <fpage>330</fpage>
          -
          <lpage>336</lpage>
          , Edinburgh, Scotland,
          <year>2011</year>
          . ACL.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Booker</source>
          , 2004]
          <string-name>
            <given-names>Christopher</given-names>
            <surname>Booker</surname>
          </string-name>
          .
          <article-title>The seven basic plots: Why we tell stories</article-title>
          . A&amp;
          <string-name>
            <surname>C Black</surname>
          </string-name>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Čapek
          , 1920]
          <article-title>Karel Čapek</article-title>
          . R.U.R.
          <article-title>(Rossum's Universal Robots)</article-title>
          . Aventinum,
          <string-name>
            <surname>Ot.</surname>
          </string-name>
          Štorch-Marien, Praha,
          <year>1920</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>[Clark and Manning</source>
          , 2016]
          <string-name>
            <given-names>Kevin</given-names>
            <surname>Clark</surname>
          </string-name>
          and
          <string-name>
            <given-names>Christopher D.</given-names>
            <surname>Manning</surname>
          </string-name>
          .
          <article-title>Deep Reinforcement Learning for MentionRanking Coreference Models</article-title>
          .
          <source>In Proceedings of EMNLP</source>
          , pages
          <fpage>2256</fpage>
          -
          <lpage>2262</lpage>
          , Austin, Texas,
          <year>November 2016</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [Clark et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Elizabeth</given-names>
            <surname>Clark</surname>
          </string-name>
          , Yangfeng Ji, and
          <string-name>
            <surname>Noah</surname>
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Smith.</surname>
          </string-name>
          <article-title>Neural Text Generation in Stories Using Entity Representations as Context</article-title>
          .
          <source>In Proceedings of NAACL-HLT</source>
          , pages
          <fpage>2250</fpage>
          -
          <lpage>2260</lpage>
          , New Orleans, Louisiana,
          <year>June 2018</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [Colton et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Simon</given-names>
            <surname>Colton</surname>
          </string-name>
          , Maria Teresa Llano, Rose Hepworth, John Charnley, Catherine V.
          <string-name>
            <surname>Gale</surname>
          </string-name>
          , Archie Baron, François Pachet, Pierre Roy, Pablo Gervás, Nick Collins, Bob Sturm, Tillman Weyde, Daniel Wolff, and
          <article-title>James Robert Lloyd. The Beyond the Fence musical and Computer Says Show documentary</article-title>
          .
          <source>In Proceedings of the Seventh International Conference on Computational Creativity</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [Croce et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Danilo</given-names>
            <surname>Croce</surname>
          </string-name>
          , Roberto Basili, Vincenzo Lombardo, and
          <string-name>
            <given-names>Eleonora</given-names>
            <surname>Ceccaldi</surname>
          </string-name>
          .
          <article-title>Automatic recognition of narrative drama units: A structured learning approach</article-title>
          .
          <source>In Text2Story@ECIR</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [Fan et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Angela</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mike</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yann</given-names>
            <surname>Dauphin</surname>
          </string-name>
          .
          <article-title>Hierarchical Neural Story Generation</article-title>
          .
          <source>In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume</source>
          <volume>1</volume>
          :
          <string-name>
            <surname>Long</surname>
            <given-names>Papers)</given-names>
          </string-name>
          , New Orleans, LA, USA,
          <year>June 2018</year>
          . arXiv:
          <year>1805</year>
          .04833.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [Fischer et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Frank</given-names>
            <surname>Fischer</surname>
          </string-name>
          , Ingo Börner, Mathias Göbel, Angelika Hechtl, Christopher Kittel, Carsten Milling, and
          <string-name>
            <given-names>Peer</given-names>
            <surname>Trilcke</surname>
          </string-name>
          .
          <article-title>Programmable Corpora: Die digitale Literaturwissenschaft zwischen Forschung und Infrastruktur am Beispiel von DraCor</article-title>
          .
          <source>In DHd 2019 Digital Humanities: multimedial &amp; multimodal. Konferenzabstracts</source>
          , pages
          <fpage>194</fpage>
          -
          <lpage>197</lpage>
          , Frankfurt am Main,
          <year>March 2019</year>
          . Zenodo. https://github.com/dracor-org/gerdracor.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [Gervás et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Pablo</given-names>
            <surname>Gervás</surname>
          </string-name>
          , Raquel Hervás, Carlos León, and Catherine V. Gale.
          <article-title>Annotating musical theatre plots on narrative structure and emotional content</article-title>
          .
          <source>In 7th Workshop on Computational Models of Narrative (CMN 2016)</source>
          . Schloss Dagstuhl-Leibniz-Zentrum fuer Informatik,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [Helper,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Roslyn</given-names>
            <surname>Helper</surname>
          </string-name>
          .
          <article-title>Lifestyle of the Richard and family</article-title>
          ,
          <year>2018</year>
          . https://www.roslynhelper.com/lifestyle-of-the-richard-and-family.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [Horstmann,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Jan</given-names>
            <surname>Horstmann</surname>
          </string-name>
          .
          <article-title>DraCor: Drama corpora project</article-title>
          .
          <source>In forTEXT. Literatur digital erforschen</source>
          ,
          <year>2019</year>
          . https://dracor.org/.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [Kybartas and Bidarra,
          <year>2017</year>
          ]
          <string-name>
            <given-names>B.</given-names>
            <surname>Kybartas</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Bidarra</surname>
          </string-name>
          .
          <article-title>A Survey on Story Generation Techniques for Authoring Computational Narratives</article-title>
          .
          <source>IEEE Transactions on Computational Intelligence and AI in Games</source>
          ,
          <volume>9</volume>
          (
          <issue>3</issue>
          ):
          <fpage>239</fpage>
          -
          <lpage>253</lpage>
          ,
          <year>September 2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [Lee et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Kenton</given-names>
            <surname>Lee</surname>
          </string-name>
          , Luheng He,
          <string-name>
            <given-names>Mike</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Luke</given-names>
            <surname>Zettlemoyer</surname>
          </string-name>
          .
          <article-title>End-to-end Neural Coreference Resolution</article-title>
          .
          <source>In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>188</fpage>
          -
          <lpage>197</lpage>
          , Copenhagen, Denmark,
          <year>September 2017</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [Lombardo et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Vincenzo</given-names>
            <surname>Lombardo</surname>
          </string-name>
          , Rossana Damiano, and Antonio Pizzo.
          <article-title>Drammar: A comprehensive ontological resource on drama</article-title>
          .
          <source>In International Semantic Web Conference</source>
          , pages
          <fpage>103</fpage>
          -
          <lpage>118</lpage>
          . Springer,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [Mao et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Huanru Henry</given-names>
            <surname>Mao</surname>
          </string-name>
          , Bodhisattwa Prasad Majumder,
          <string-name>
            <given-names>Julian</given-names>
            <surname>McAuley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Garrison</given-names>
            <surname>Cottrell</surname>
          </string-name>
          .
          <article-title>Improving Neural Story Generation by Targeted Common Sense Grounding</article-title>
          .
          <source>In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)</source>
          , pages
          <fpage>5987</fpage>
          -
          <lpage>5992</lpage>
          , Hong Kong, China,
          <year>November 2019</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [Martin et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Lara J.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Prithviraj</given-names>
            <surname>Ammanabrolu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Xinyu</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>William</given-names>
            <surname>Hancock</surname>
          </string-name>
          , Shruti Singh,
          <string-name>
            <given-names>Brent</given-names>
            <surname>Harrison</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Mark O.</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Event Representations for Automated Story Generation with Deep Neural Nets</article-title>
          .
          <source>In AAAI</source>
          , New Orleans, LA, USA,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Materna,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Jiří</given-names>
            <surname>Materna</surname>
          </string-name>
          .
          <source>Poezie umělého světa</source>
          . Backstage Books,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [Miceli Barone et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Antonio Valerio</given-names>
            <surname>Miceli Barone</surname>
          </string-name>
          , Barry Haddow, Ulrich Germann, and
          <string-name>
            <given-names>Rico</given-names>
            <surname>Sennrich</surname>
          </string-name>
          .
          <article-title>Regularization techniques for fine-tuning in neural machine translation</article-title>
          .
          <source>In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>1489</fpage>
          -
          <lpage>1494</lpage>
          , Copenhagen, Denmark,
          <year>September 2017</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [Moretti,
          <year>2014</year>
          ]
          <string-name>
            <given-names>Franco</given-names>
            <surname>Moretti</surname>
          </string-name>
          .
          <article-title>“Operationalizing”: or, the function of measurement in modern literary theory</article-title>
          .
          <source>Pamphlet</source>
          <volume>6</volume>
          , Stanford Literary Lab,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [Mostafazadeh et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Nasrin</given-names>
            <surname>Mostafazadeh</surname>
          </string-name>
          , Lucy Vanderwende,
          <string-name>
            <given-names>Wen-tau</given-names>
            <surname>Yih</surname>
          </string-name>
          , Pushmeet Kohli, and
          <string-name>
            <given-names>James</given-names>
            <surname>Allen</surname>
          </string-name>
          .
          <article-title>Story Cloze Evaluator: Vector Space Representation Evaluation by Predicting What Happens Next</article-title>
          .
          <source>In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP</source>
          , pages
          <fpage>24</fpage>
          -
          <lpage>29</lpage>
          , Berlin, Germany,
          <year>August 2016</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [Polti,
          <year>1921</year>
          ]
          <string-name>
            <given-names>Georges</given-names>
            <surname>Polti</surname>
          </string-name>
          .
          <source>The thirty-six dramatic situations</source>
          . JK Reeve,
          <year>1921</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [Popel,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Martin</given-names>
            <surname>Popel</surname>
          </string-name>
          .
          <article-title>CUNI Transformer Neural MT System for WMT18</article-title>
          .
          <source>In Proceedings of the Third Conference on Machine Translation</source>
          , pages
          <fpage>486</fpage>
          -
          <lpage>491</lpage>
          , Brussels, Belgium,
          <year>October 2018</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [Propp,
          <year>1968</year>
          ]
          <string-name>
            <given-names>Vladimir</given-names>
            <surname>Propp</surname>
          </string-name>
          .
          <source>Morphology of the folktale</source>
          , trans. Louis Wagner, 2nd ed. (
          <year>1928</year>
          ,
          <year>1968</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [Radford et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Alec</given-names>
            <surname>Radford</surname>
          </string-name>
          , Jeffrey Wu, Rewon Child, David Luan,
          <string-name>
            <given-names>Dario</given-names>
            <surname>Amodei</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ilya</given-names>
            <surname>Sutskever</surname>
          </string-name>
          .
          <article-title>Language Models are Unsupervised Multitask Learners</article-title>
          .
          <source>Technical report, OpenAI</source>
          ,
          <year>February 2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [Riedl,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Mark</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Computational Narrative Intelligence: Past, Present, and Future</article-title>
          . Medium,
          <year>February 2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [Rosa et al.,
          <year>2012</year>
          ]
          <string-name>
            <given-names>Rudolf</given-names>
            <surname>Rosa</surname>
          </string-name>
          , David Mareček, and Ondřej Dušek.
          <article-title>Depfix: A system for automatic correction of Czech MT outputs</article-title>
          .
          <source>In Proceedings of WMT</source>
          , pages
          <fpage>362</fpage>
          -
          <lpage>368</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [Rush et al.,
          <year>2015</year>
          ]
          <string-name>
            <given-names>Alexander M.</given-names>
            <surname>Rush</surname>
          </string-name>
          , Sumit Chopra, and
          <string-name>
            <given-names>Jason</given-names>
            <surname>Weston</surname>
          </string-name>
          .
          <article-title>A neural attention model for abstractive sentence summarization</article-title>
          .
          <source>In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>379</fpage>
          -
          <lpage>389</lpage>
          , Lisbon, Portugal,
          <year>September 2015</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [See et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Abigail</given-names>
            <surname>See</surname>
          </string-name>
          , Aneesh Pappu,
          <string-name>
            <given-names>Rohun</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Akhila</given-names>
            <surname>Yerukola</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Christopher D.</given-names>
            <surname>Manning</surname>
          </string-name>
          .
          <article-title>Do Massively Pretrained Language Models Make Better Storytellers?</article-title>
          <source>In CoNLL</source>
          , Hong Kong,
          <year>November 2019</year>
          . arXiv:1909.10705.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [Sennrich et al.,
          <year>2016</year>
          ]
          <string-name>
            <given-names>Rico</given-names>
            <surname>Sennrich</surname>
          </string-name>
          , Barry Haddow, and
          <string-name>
            <given-names>Alexandra</given-names>
            <surname>Birch</surname>
          </string-name>
          .
          <article-title>Improving neural machine translation models with monolingual data</article-title>
          .
          <source>In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)</source>
          , pages
          <fpage>86</fpage>
          -
          <lpage>96</lpage>
          , Berlin, Germany,
          <year>August 2016</year>
          . Association for Computational Linguistics.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [Straka et al.,
          <year>2018</year>
          ]
          <string-name>
            <given-names>Milan</given-names>
            <surname>Straka</surname>
          </string-name>
          , Nikita Mediankin, Tom Kocmi, Zdeněk Žabokrtský, Vojtěch Hudeček, and Jan Hajič.
          <article-title>SumeCzech: Large Czech News-Based Summarization Dataset</article-title>
          .
          <source>In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018)</source>
          , Miyazaki, Japan, May 7-12,
          <year>2018</year>
          . European Language Resources Association (ELRA).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [Sutskever et al.,
          <year>2014</year>
          ]
          <string-name>
            <given-names>Ilya</given-names>
            <surname>Sutskever</surname>
          </string-name>
          , Oriol Vinyals, and Quoc V. Le.
          <article-title>Sequence to sequence learning with neural networks</article-title>
          .
          <source>In Advances in Neural Information Processing Systems</source>
          , pages
          <fpage>3104</fpage>
          -
          <lpage>3112</lpage>
          ,
          <year>2014</year>
          . arXiv:1409.3215.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [Tambwekar et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Pradyumna</given-names>
            <surname>Tambwekar</surname>
          </string-name>
          , Murtaza Dhuliawala, Animesh Mehta,
          <string-name>
            <given-names>Lara J.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Brent</given-names>
            <surname>Harrison</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Mark O.</given-names>
            <surname>Riedl</surname>
          </string-name>
          .
          <article-title>Controllable Neural Story Plot Generation via Reinforcement Learning</article-title>
          .
          <source>In 2019 International Joint Conference on Artificial Intelligence</source>
          , Macau,
          <year>August 2019</year>
          . arXiv:1809.10736.
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          [Tobias,
          <year>2011</year>
          ]
          <string-name>
            <given-names>Ronald B.</given-names>
            <surname>Tobias</surname>
          </string-name>
          .
          <source>20 Master Plots: and how to build them</source>
          . Penguin,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          [Tu et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Lifu</given-names>
            <surname>Tu</surname>
          </string-name>
          , Xiaoan Ding,
          <string-name>
            <given-names>Dong</given-names>
            <surname>Yu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Kevin</given-names>
            <surname>Gimpel</surname>
          </string-name>
          .
          <article-title>Generating Diverse Story Continuations with Controllable Semantics</article-title>
          .
          <source>In 3rd Workshop on Neural Generation and Translation (WNGT 2019)</source>
          , Hong Kong,
          <year>November 2019</year>
          . arXiv:1909.13434.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          [Vaswani et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Ashish</given-names>
            <surname>Vaswani</surname>
          </string-name>
          , Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones,
          <string-name>
            <given-names>Aidan N.</given-names>
            <surname>Gomez</surname>
          </string-name>
          , Lukasz Kaiser, and
          <string-name>
            <given-names>Illia</given-names>
            <surname>Polosukhin</surname>
          </string-name>
          .
          <article-title>Attention Is All You Need</article-title>
          .
          <source>In 31st Conference on Neural Information Processing Systems (NIPS)</source>
          , Long Beach, CA, USA,
          <year>December 2017</year>
          . arXiv:1706.03762.
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          [Wang and Wan,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Tianming</given-names>
            <surname>Wang</surname>
          </string-name>
          and
          <string-name>
            <given-names>Xiaojun</given-names>
            <surname>Wan</surname>
          </string-name>
          .
          <article-title>T-CVAE: Transformer-Based Conditioned Variational Autoencoder for Story Completion</article-title>
          .
          <source>In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence</source>
          , pages
          <fpage>5233</fpage>
          -
          <lpage>5239</lpage>
          , Macao, China,
          <year>August 2019</year>
          .
          International Joint Conferences on Artificial Intelligence Organization.
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          [Wiseman et al.,
          <year>2017</year>
          ]
          <string-name>
            <given-names>Sam</given-names>
            <surname>Wiseman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Stuart M.</given-names>
            <surname>Shieber</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Alexander M.</given-names>
            <surname>Rush</surname>
          </string-name>
          .
          <article-title>Challenges in Data-to-Document Generation</article-title>
          .
          <source>In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>2243</fpage>
          -
          <lpage>2253</lpage>
          , Copenhagen, Denmark,
          <year>September 2017</year>
          . arXiv:1707.08052.
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          [Yao et al.,
          <year>2019</year>
          ]
          <string-name>
            <given-names>Lili</given-names>
            <surname>Yao</surname>
          </string-name>
          , Nanyun Peng, Ralph Weischedel, Kevin Knight,
          <string-name>
            <given-names>Dongyan</given-names>
            <surname>Zhao</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Rui</given-names>
            <surname>Yan</surname>
          </string-name>
          .
          <article-title>Plan-and-Write: Towards Better Automatic Storytelling</article-title>
          .
          <source>In AAAI</source>
          , Honolulu, HI, USA,
          <year>January 2019</year>
          . arXiv:1811.05701.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>