<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>THEaiTRE 1.0: Interactive Generation of Theatre Play Scripts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rudolf Rosa</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomáš Musil</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ondřej Dušek</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dominik Jurko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Patrícia Schmidtová</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Mareček</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ondřej Bojar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tom Kocmi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Hrbek</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David Košťák</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Martina Kinská</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marie Nováková</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Josef Doležal</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Klára Vosecká</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tomáš Studeník</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Petr Žabka</string-name>
        </contrib>
        <aff id="aff2">
          <label>2</label>
          <institution>The Švanda Theatre in Smíchov, Prague</institution>
          ,
          <addr-line>Czechia</addr-line>
          <email>hrbek@svandovodivadlo.cz</email>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>CEE Hacks, Prague</institution>
          ,
          <addr-line>Czechia</addr-line>
          <email>info@ceehacks.com</email>
        </aff>
        <aff id="aff0">
          <label>0</label>
          <institution>Charles University, Faculty of Mathematics and Physics, Institute of Formal and Applied Linguistics, Prague</institution>
          ,
          <addr-line>Czechia</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>The Academy of Performing Arts in Prague, Theatre Faculty (DAMU), Prague</institution>
          ,
          <addr-line>Czechia</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <abstract>
        <p>We present the first version of a system for interactive generation of theatre play scripts. The system is based on a vanilla GPT-2 model with several adjustments, targeting specific issues we encountered in practice. We also list other issues we encountered but plan to only solve in a future version of the system. The presented system was used to generate a theatre play script premiered in February 2021.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <sec id="sec-1-1">
        <p>The THEaiTRE project1 [RDK+20] aims to produce and stage the first computer-generated theatre play on the occasion of the 100th anniversary of Karel Čapek's play R.U.R. [Čap20], in which the word “robot” first appeared.</p>
        <sec id="sec-1-1-1">
          <p>In this paper, we describe the THEaiTRobot 1.0 tool, which allows the user to interactively generate scripts for individual theatre play scenes. The tool is based on the GPT-2 XL [RWC+19] generative language model, using the model without any fine-tuning, as we found that with a prompt formatted as part of a theatre play script, the model usually generates continuations that fit the format well. However, we encountered numerous problems when generating the script in this way. We managed to tackle some of the problems with various adjustments, but some of them remain to be solved in a future version.</p>
          <p>Our tool was used to generate the script for a new play, AI: Když robot píše hru (AI: When a Robot Writes a Play), which was premiered online on 26th February 2021.2 Although there were various forms of human intervention when generating the script, we estimate that over 90% of the text comes from the automated tool; moreover, most of the interventions were similar to those a dramaturge and director would make in the case of a human-written script (making cuts, rearranging lines, reassigning characters, minor edits of the lines, etc.).3 We present a preliminary analysis of the interventions in Section 2.1. The GPT-2 model proved unfit for generating long and complex texts such as a full play script; we therefore generated several individual scenes, and a dramaturge then joined them into a full play.</p>
        </sec>
        <sec id="sec-1-1-2">
          <p>We have published a video showing the operation of THEaiTRobot 1.0, a sample of its outputs, and its source code:</p>
          <p>• Video: https://youtu.be/ksrZouM7Wyg
• Sample outputs: http://bit.ly/theaitre-samples
• Source code: http://hdl.handle.net/11234/1-3507 [RDK+21]</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>The Generation Process</title>
      <p>The process of generating a theatre play scene script starts with the user (a theatre dramaturge in our case) defining the start of the scene, typically a setting and several initial lines of dialogue, which defines the theme of the scene, introduces the characters, and encourages the GPT-2 language model to start generating a dialogue. For the first play, we defined a set of inputs revolving around a common topic to ensure some basic coherence of the whole play.4 The THEaiTRobot tool then uses the vanilla GPT-2 XL model to generate continuing lines, which then get translated from English to Czech by a machine translation service. The user has the option to discard any generated line (together with all subsequent lines), prompting the tool to generate a different continuation.5</p>
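      <p>The interactive loop described above can be sketched as follows. This is a minimal illustration: generate_line, translate_cs, and accept are our hypothetical stand-ins for the GPT-2 XL generator, the English-to-Czech MT service, and the dramaturge's accept/discard decision; they are not names from the tool's actual source code.</p>

```python
from typing import Callable, List

def interactive_scene(
    prompt_lines: List[str],
    generate_line: Callable[[str], str],   # stand-in for the GPT-2 XL continuation
    translate_cs: Callable[[str], str],    # stand-in for the English-to-Czech MT service
    accept: Callable[[str], bool],         # the user's accept/discard decision
    target_len: int = 10,
) -> List[str]:
    """Build up a scene line by line; a discarded candidate line is simply
    regenerated, and the finished English script is translated line by line."""
    script = list(prompt_lines)
    while len(script) < target_len:
        candidate = generate_line("\n".join(script))
        if accept(candidate):
            script.append(candidate)
        # otherwise the candidate is discarded and a different
        # continuation is requested from the model
    return [translate_cs(line) for line in script]
```

<p>In the real tool, discarding a line also removes all subsequent lines; the sketch omits that detail because it generates strictly one line at a time.</p>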
      <sec id="sec-2-1">
        <p>The user can also manually enter a line into the script, which becomes part of the input for GPT-2.6 The tool itself is implemented as a web application with a server backend, using the Huggingface Transformers library [WDS+20].</p>
        <p>Preliminary Analysis of Human Interventions</p>
        <sec id="sec-2-1-1">
          <p>We are currently in the process of performing a detailed audit of the genesis of the final script of the first play, which we intend to publish once finished. So far, we have completed the analysis of the first scene, Death.</p>
        </sec>
        <sec id="sec-2-1-2">
          <p>The first scene consists of 60 lines, out of which 45 were used without any changes from the generated script, while the remaining 15 lines were slightly modified. We detail the modifications in Table 1; note that in some lines there was more than one intervention. Also, 11 lines from the generated script were deleted.</p>
        </sec>
        <sec id="sec-2-1-3">
          <p>The generation process was initiated with a prompt consisting of a scene setting and two character lines; only one of them also became part of the final scenario. In total, 91% of the words in the script of the first scene are used as they were generated, while 9% of the words were added, changed or reordered.</p>
          <p>We have only performed the analysis on the English side of the script. On the Czech side of the script, there are additional edits which fix some errors of the automated machine translation, as explained in Section 2.3.4.7</p>
          <p>2https://www.svandovodivadlo.cz/inscenace/673/ai-kdyz-robot-pise-hru/3445</p>
          <p>3In fact, the dramaturge and the director reported that they made fewer and smaller edits to the script than they typically do with a human-written script.</p>
          <p>4The play tells the story of a robot trying to find his place in human society. Each scene revolves around a typically human theme, such as death, love, sex, or work, and the robot learns about this theme through interaction with a human character.</p>
          <p>5This option was used for approximately 5% of the lines in the script of the first play.</p>
          <p>6This option was used very rarely. Apart from the input prompts, only approximately 1% of the lines were hand-written and manually entered into the script.</p>
          <p>7On the Czech side, approximately 23% of words were modified, but most of the additional modifications were fixes of incorrect T-V distinction or gender.</p>
          <p>Resolved Issues</p>
          <p>Set of Characters</p>
        </sec>
        <sec id="sec-2-1-4">
          <title>Table 1: Example interventions (before – after)</title>
          <p>Initial prompt (scene setting and two character lines): It's morning. Robot enters room of his master who is really old and sick. Robot sees that his master is not doing very well this morning. He sits at the edge of his bed and takes his hand. Robot: I love you so much I want to hug you to death. Master: We both know I am dying.</p>
          <p>Master: No. Don't say that. I want to have an end! – Master: No. Don't say that. I want to enjoy my ending!</p>
          <p>Master: You are going to die in your sleep. – Robot: You are going to die in your sleep.</p>
          <p>Master: I don't think I could hug you to life. – Master: I don't think you could hug me to life.</p>
          <p>Robot: I'm afraid of what I've been doing here. – Master: I'm afraid of what I've been doing here.</p>
        </sec>
        <sec id="sec-2-1-16">
          <p>The model does not naturally work with a limited set of characters: it tends to forget characters and to invent new characters too often.8 We resolve this by modifying the next-token probability distribution within the GPT-2 model so that, at the start of a new line, only tokens corresponding to character names present in the input prompt are allowed. We also boost the probabilities of characters that have not spoken for some time.9</p>
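          <p>A minimal sketch of this modification, operating on a toy per-character distribution rather than on the real GPT-2 logits (the function name and inputs are our illustration, not the tool's actual code):</p>

```python
def next_speaker_distribution(char_probs, lines_since_spoken):
    """Allow only known character names at the start of a new line and
    boost silent characters: each character's probability is multiplied
    by 2**c, where c is the number of lines since the character last
    spoke (cf. footnote 9), and the result is renormalized."""
    boosted = {
        name: prob * (2 ** lines_since_spoken.get(name, 0))
        for name, prob in char_probs.items()
    }
    total = sum(boosted.values())
    return {name: prob / total for name, prob in boosted.items()}
```

<p>For example, with two equally likely characters of which one has been silent for two lines, the silent character receives a boosted probability of 0.8.</p>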
          <p>Repetitiveness</p>
        </sec>
        <sec id="sec-2-1-17">
          <p>GPT-2's generation may get stuck in a loop, generating one or several lines again and again. We managed to resolve this by modifying the hyperparameters of GPT-2, changing the repetition penalty from 1.00 to 1.01. As a backup, we also automatically discard any generated repeated line and prompt the model to generate another continuing line.</p>
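          <p>The backup guard can be sketched as follows; generate_line again stands in for the GPT-2 sampler, and the retry cap is our addition for the sketch, not a documented parameter of the tool:</p>

```python
def generate_nonrepeating(context_lines, generate_line, max_attempts=5):
    """Backup repetition guard: discard a generated line that already
    appears in the context and ask the model for another continuation."""
    seen = set(context_lines)
    for _ in range(max_attempts):
        line = generate_line("\n".join(context_lines))
        if line not in seen:
            return line
    return None  # give up after max_attempts repeated continuations
```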
          <p>Limited Context</p>
        </sec>
        <sec id="sec-2-1-18">
          <p>The variant of the GPT-2 model which we use has a limit of 1024 subword tokens, within which both the input prompt and the generated output must fit. The typical solution is to crop the input at the beginning so that it fits into the window with sufficient space for generating the output. However, this means forgetting potentially important information from the input prompt and the previously generated text, which can lead to an unwanted continual topic drift and also to generating contradictory text; the text is still locally consistent, but as a whole it may be inconsistent.</p>
          <p>To handle this issue, we introduce automated extractive summarization into the process, hoping that the summarization algorithm will identify the most important pieces of information to remember. Whenever the input for GPT-2 (the input prompt plus the script generated so far) exceeds a preset limit of M = 924 tokens,10 we summarize the input using TextRank11 [MT04] before feeding it into the GPT-2 model:</p>
          <p>• We keep all lines within the last R = 250 tokens from the input12 to ensure local consistency.
• We summarize all the preceding lines into N = 5 lines (while keeping their original order) to ensure global consistency.
• We concatenate the summary and the kept lines.
• If the resulting text is still longer than M tokens, we crop it at the beginning to M tokens.</p>
          <p>8If there are only two characters in the scene, the model often keeps to them or introduces only 1-2 additional characters. If the number of characters is higher than 3, the model usually forgets some of them after some time. Also, some character names push the model in unintended directions. We have seen the model generalize over character names originally involving only “Robot 1” and “Robot 2”, continually introducing “Robot 3”, “Robot 4”, “Robot 5”, etc. We have also observed the model immediately change a character called “Vladimir” to “Vladimir Putin”.</p>
          <p>9Specifically, we multiply each character's probability by 2^c, where c is the number of lines for which the character has not spoken.</p>
          <p>10Most script lines in our setting fit within 100 tokens, so ensuring there is space for generating at least 100 tokens means that the model will usually generate a complete line, ending with a newline symbol; in case the generated line is too long, it is simply cut off once the limit of 1024 tokens is depleted.</p>
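          <p>The context-fitting steps above can be sketched as follows. This is an illustration under simplifying assumptions: whitespace tokens stand in for GPT-2 subword tokens, summarize stands in for the TextRank step, and the final crop drops whole lines rather than individual tokens.</p>

```python
def prepare_gpt2_input(lines, summarize, M=924, R=250, N=5):
    """Fit the prompt plus generated script into the GPT-2 window:
    keep the lines within the last R tokens verbatim (local consistency),
    summarize the preceding lines into N lines in their original order
    (global consistency), and crop at the beginning if still too long."""
    def n_tok(text):
        return len(text.split())  # stand-in for subword tokenization

    if n_tok(" ".join(lines)) <= M:
        return lines
    # 1. keep all whole lines that fit within the last R tokens
    kept, budget = [], R
    for line in reversed(lines):
        if n_tok(line) > budget:
            break
        kept.insert(0, line)
        budget -= n_tok(line)
    # 2. summarize everything before the kept lines into N lines
    summary = summarize(lines[: len(lines) - len(kept)], N)
    # 3. concatenate, and crop at the beginning if still over M tokens
    result = summary + kept
    while n_tok(" ".join(result)) > M and len(result) > 1:
        result.pop(0)
    return result
```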
          <p>Machine Translation</p>
        </sec>
        <sec id="sec-2-1-19">
          <p>The GPT-2 model operates on English, while we want to generate a Czech script. We therefore automatically translate the generated script using the CUBBITT [PTT+20] neural translation model. As the translation tends to discard character names from the lines, we add them back by identifying them in the input and translating them independently.</p>
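          <p>A minimal sketch of this name workaround; translate stands in for the CUBBITT service, and name_lexicon (mapping English character names to their Czech forms) is our hypothetical illustration of the independent name translation:</p>

```python
def translate_line(line, translate, name_lexicon):
    """Detach the 'Name:' prefix (which MT tends to drop), translate the
    rest of the line, and re-attach the independently translated name."""
    name, sep, text = line.partition(":")
    if sep and name.strip() in name_lexicon:
        czech_name = name_lexicon[name.strip()]
        return f"{czech_name}:{translate(text)}"
    return translate(line)  # no recognized character name: translate as-is
```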
          <p>Unresolved Issues and Future Plans</p>
          <p>Generating a Whole Play</p>
        </sec>
        <sec id="sec-2-1-20">
          <p>The model is not able to generate a long and complex text such as a full theatre play script. To resolve this, we intend to generate the script hierarchically: first generating a synopsis for the whole play, then expanding it into synopses for individual scenes, and finally generating each scene individually based on its synopsis. This approach is inspired by the work of Fan et al. [FLD18, FLD19], who take a similar coarse-to-fine approach to story generation. Our situation is, however, more complex, as we plan to use one more step of the hierarchy.</p>
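          <p>The planned coarse-to-fine pipeline amounts to the following plumbing; all three generator functions are hypothetical stand-ins for future models, not existing components of the tool:</p>

```python
def generate_play(theme, gen_play_synopsis, gen_scene_synopses, gen_scene):
    """Coarse-to-fine sketch of the planned hierarchical setup:
    play synopsis -> per-scene synopses -> individual scene scripts."""
    play_synopsis = gen_play_synopsis(theme)          # whole-play level
    scene_synopses = gen_scene_synopses(play_synopsis)  # one synopsis per scene
    return [gen_scene(s) for s in scene_synopses]     # expand each scene
```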
          <p>Character Personalities</p>
          <p>The characters in the play do not seem to have independent personalities in the generated script; the model seems to simply ensure consistency with the already generated text, not taking the character names into account much. The character personalities thus appear to switch and merge. We intend to resolve this by learning theatre character embeddings and using them to condition the language model: we plan to cluster our data into several basic character personality types [AKDM19] and then train separate character-aware language models, either by fine-tuning the GPT-2 model or by using adapter models [MIL+20, WTD+20].</p>
          <p>Dramatic Situations</p>
          <p>The text is generated word by word and line by line, whereas human authors of theatre plays typically operate on a more abstract level, such as dramatic situations [Pol21].13 While there is some work on identifying dramatic turning points [PKL19, PKFL20], it is too coarse-grained for our application. We are thus currently annotating a corpus of theatre play scripts with a modified set of dramatic situations, and we plan to enhance the tool with this abstraction, either by adding one more layer to the hierarchical setup or by using special tokens or embeddings to mark dramatic situations in the generated text.</p>
          <p>Machine Translation</p>
        </sec>
        <sec id="sec-2-1-21">
          <p>The MT model we use is tuned for news text, not theatre scripts, and it translates each sentence independently. This leads to various issues, including errors in morphological gender (which should pertain to the character), variance in the honorific T–V distinction (which may vary but should be consistent for each pair of characters), and erroneous sentence splitting. We intend to tackle these issues by using a document-level translation system which takes larger context into account, fine-tuning the model on a corpus of theatre play scripts, and adding various heuristic modifications where necessary.</p>
          <p>11We use the pytextrank library with minor modifications to reflect the specific structure of our inputs, so that the algorithm returns the N most important (potentially multi-sentence) full lines from the script instead of just the N most important sentences. We set limit_phrases=100.</p>
          <p>12We find the first newline symbol in the last R tokens and keep all the lines after it.</p>
          <p>13https://en.wikipedia.org/wiki/The_Thirty-Six_Dramatic_Situations</p>
          <p>Evaluation</p>
          <p>We have not yet devised any automated or semi-automated evaluation setup to measure the quality of the generated scripts. Our design decisions so far have thus been based solely on manual analyses of small numbers of outputs, performed by theatre experts. While such analyses are very trustworthy, they are not easy to perform at an adequate scale. On the other hand, we are not aware of any meaningful automated measures of theatre script quality. We are currently exploring automated chatbot quality measures, which might or might not provide some useful indications of the script quality. We are also working on making the manual evaluation more efficient by designing a set of evaluation prompts and a standardized binary evaluation procedure.</p>
        </sec>
        <sec id="sec-2-1-22">
          <p>Nevertheless, the ultimate evaluation of a theatre play is always its reception by the audience and the critics, which, in the case of the presented play, has been rather positive.14</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Conclusion</title>
      <sec id="sec-3-1">
        <title>We have developed THEaiTRobot 1.0, a tool for interactively generating theatre play scripts. The tool is based on GPT-2, with several modifications targeting issues we encountered. We have also discussed persisting issues and suggested remedies for a future version.</title>
      </sec>
      <sec id="sec-3-2">
        <title>We used the tool to create the first predominantly machine-generated theatre play script, which premiered on 26th February 2021. Another play, to be generated by an improved version of the tool, is planned for 2022.</title>
        <p>Acknowledgements</p>
      </sec>
      <sec id="sec-3-3">
        <p>The project TL03000348 THEaiTRE: Umělá inteligence autorem divadelní hry (Artificial intelligence as the author of a theatre play) is co-financed with the state support of the Technology Agency of the Czech Republic within the ÉTA 3 Programme.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[AKDM19] Mahmoud Azab, Noriyuki Kojima, Jia Deng, and Rada Mihalcea. Representing Movie Characters in Dialogues. In Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pages 99–109, Hong Kong, November 2019.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Karel Cˇ apek. R.U.R. (</surname>
          </string-name>
          <article-title>Rossum's Universal Robots)</article-title>
          . Aventinum,
          <string-name>
            <surname>Ot.</surname>
          </string-name>
          Sˇtorch-Marien, Praha,
          <year>1920</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>Angela</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mike</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yann</given-names>
            <surname>Dauphin</surname>
          </string-name>
          .
          <article-title>Hierarchical Neural Story Generation</article-title>
          .
          <source>In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume</source>
          <volume>1</volume>
          :
          <string-name>
            <surname>Long</surname>
            <given-names>Papers)</given-names>
          </string-name>
          , New Orleans, LA, USA,
          <year>June 2018</year>
          . arXiv:
          <year>1805</year>
          .04833.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <given-names>Angela</given-names>
            <surname>Fan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Mike</given-names>
            <surname>Lewis</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Yann</given-names>
            <surname>Dauphin</surname>
          </string-name>
          .
          <article-title>Strategies for Structuring Story Generation</article-title>
          .
          <source>In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics</source>
          , pages
          <fpage>2650</fpage>
          -
          <lpage>2660</lpage>
          , Florence, Italy,
          <year>July 2019</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Madotto</surname>
          </string-name>
          , Etsuko Ishii, Zhaojiang Lin, Sumanth
          <string-name>
            <surname>Dathathri</surname>
            , and
            <given-names>Pascale</given-names>
          </string-name>
          <string-name>
            <surname>Fung</surname>
          </string-name>
          .
          <article-title>Plug-andPlay Conversational Models</article-title>
          .
          <source>In Findings of the Association for Computational Linguistics: EMNLP</source>
          <year>2020</year>
          , pages
          <fpage>2422</fpage>
          -
          <lpage>2433</lpage>
          , Online,
          <year>November 2020</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <given-names>Rada</given-names>
            <surname>Mihalcea</surname>
          </string-name>
          and
          <string-name>
            <surname>Paul Tarau.</surname>
          </string-name>
          <article-title>TextRank: Bringing Order into Text</article-title>
          .
          <source>In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing</source>
          , pages
          <fpage>404</fpage>
          -
          <lpage>411</lpage>
          , Barcelona, Spain,
          <year>July 2004</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [PKFL20]
          <string-name>
            <given-names>Pinelopi</given-names>
            <surname>Papalampidi</surname>
          </string-name>
          , Frank Keller, Lea Frermann, and
          <string-name>
            <given-names>Mirella</given-names>
            <surname>Lapata</surname>
          </string-name>
          .
          <article-title>Screenplay Summarization Using Latent Narrative Structure</article-title>
          .
          <source>In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          , pages
          <fpage>1920</fpage>
          -
          <lpage>1933</lpage>
          , Online,
          <year>July 2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <given-names>Pinelopi</given-names>
            <surname>Papalampidi</surname>
          </string-name>
          , Frank Keller, and Mirella Lapata.
          <article-title>Movie Plot Analysis via Turning Point Identification</article-title>
          .
          <source>In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLPIJCNLP)</source>
          , pages
          <fpage>1707</fpage>
          -
          <lpage>1717</lpage>
          ,
          <string-name>
            <surname>Hong</surname>
            <given-names>Kong</given-names>
          </string-name>
          , China,
          <year>November 2019</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[PTT+20] Martin Popel, Markéta Tomková, Jakub Tomek, Łukasz Kaiser, Jakob Uszkoreit, Ondřej Bojar, and Zdeněk Žabokrtský. Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals. Nature Communications, 11(4381):1–15, 2020.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[RDK+20] Rudolf Rosa, Ondřej Dušek, Tom Kocmi, David Mareček, Tomáš Musil, Patrícia Schmidtová, Dominik Jurko, Ondřej Bojar, Daniel Hrbek, David Košťák, Martina Kinská, Josef Doležal, and Klára Vosecká. THEaiTRE: Artificial intelligence to write a theatre play. In Alípio Jorge, Ricardo Campos, Adam Jatowt, and Akiko Aizawa, editors, Proceedings of AI4Narratives - Workshop on Artificial Intelligence for Narratives, volume 2794 of CEUR Workshop Proceedings, pages 9–13, Aachen, Germany, 2020. RWTH Aachen University.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[RDK+21] Rudolf Rosa, Ondřej Dušek, Tom Kocmi, David Mareček, Tomáš Musil, Patrícia Schmidtová, Dominik Jurko, Ondřej Bojar, Daniel Hrbek, David Košťák, Martina Kinská, Marie Nováková, Josef Doležal, and Klára Vosecká. THEaiTRobot 1.0, 2021. LINDAT/CLARIAH-CZ digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [WDS+20]
          <string-name>
            <surname>Thomas</surname>
            <given-names>Wolf</given-names>
          </string-name>
          , Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, R´emi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and
          <string-name>
            <surname>Alexander</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Rush</surname>
          </string-name>
          . Transformers:
          <article-title>State-of-the-art natural language processing</article-title>
          .
          <source>In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations</source>
          , pages
          <fpage>38</fpage>
          -
          <lpage>45</lpage>
          , Online,
          <year>October 2020</year>
          .
          <article-title>Association for Computational Linguistics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [WTD+20]
          <string-name>
            <surname>Ruize</surname>
            <given-names>Wang</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Duyu</given-names>
            <surname>Tang</surname>
          </string-name>
          , Nan Duan, Zhongyu Wei, Xuanjing Huang, Jianshu Ji, Guihong Cao, Daxin Jiang, and
          <string-name>
            <given-names>Ming</given-names>
            <surname>Zhou. K-Adapter</surname>
          </string-name>
          :
          <article-title>Infusing Knowledge into Pre-Trained Models with Adapters</article-title>
          . arXiv:
          <year>2002</year>
          .
          <year>01808</year>
          [cs],
          <year>December 2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>