<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Camelot: A Modular Customizable Sandbox for Visualizing Interactive Narratives</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alireza Shirvani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stephen G. Ware</string-name>
          <email>sgwareg@uky.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Narrative Intelligence Lab, University of Kentucky</institution>
          ,
          <addr-line>Lexington, KY 40506</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Camelot is a modular customizable virtual environment that is inspired by the needs of current and previous narrative generation research. Camelot is meant to facilitate interactive narrative prototyping, controlled comparisons of different systems, and reproducing and building on the works of others. It provides a 3D presentation layer that is fully separable from the narrative generation system that controls it. This allows any application, AI algorithm, or technology, written in any programming language, to connect and use Camelot to visualize their interactive narratives. In this paper, we introduce Camelot and its capabilities, and provide some details on how and to what extent it can be used to benefit the interactive narrative community.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>Camelot is a modular and customizable interactive narrative
environment that provides a sandbox to act as a
presentation layer for any narrative generation system. Camelot is
a real-time 3D third-person virtual environment that takes
place in a Medieval fantasy setting and includes
customizable characters, places, and items. By using this
environment, researchers can build and test prototypes more
quickly and easily.</p>
      <p>By providing a fully separate presentation layer, Camelot
is independent of the programming language or technology
used by the narrative generation system. This separation of
concerns lets Camelot provide a standard of presentation
that can be shared among the interactive narrative
community. Through this standard, highly different AI approaches
can be meaningfully compared to one another and
evaluated in the same context and with the same subjects.
Moreover, this standard can help researchers reproduce and
build upon the work of others.</p>
      <p>In this paper, we provide some details about the
accessibility and capabilities of Camelot. We hope that it may reach
and assist many researchers in their efforts to contribute to
the interactive narrative and AI community. In section 2, we
will discuss the design of Camelot to support research and
simplify its application. Section 3 presents the potential
applications of Camelot and several proof of concept games
that are free to access and play. Section 4 discusses our
previous attempts and future plans for community outreach, and
finally, section 5 presents the conclusions.</p>
    </sec>
    <sec id="sec-2">
<title>2 Design to Support Research</title>
      <sec id="sec-2-1">
<title>2.1 Interoperability</title>
        <p>
          To generate an interactive narrative, Camelot communicates
with an experience manager (EM)
          <xref ref-type="bibr" rid="ref29 ref34 ref48">(Riedl and Bulitko 2013)</xref>
          .
Experience managers, sometimes called drama managers,
emerged early in interactive narrative research
          <xref ref-type="bibr" rid="ref5 ref55">(Bates 1992;
Weyhrauch 1997)</xref>
          and continue to be a popular architecture
(see
          <xref ref-type="bibr" rid="ref32">Roberts and Isbell (2008)</xref>
          for a survey). In contrast to
some previous narrative control systems, such as Mimesis
          <xref ref-type="bibr" rid="ref59">(Young 2001)</xref>
or Zócalo
          <xref ref-type="bibr" rid="ref56">(Young et al. 2011)</xref>
          , Camelot
provides both the presentation layer and the bridge that connects
it to an EM.
        </p>
<p>A Camelot EM can be written in any programming
language that has standard input and output capabilities. In fact,
all communication between Camelot and the EM is
transmitted via standard I/O, e.g. System.out.println
in Java, print in Python, or Console.WriteLine in
C#. Camelot has a large list of available commands that
can be used to control its UI, characters, environments, etc.
These commands are referred to as actions and have the
following format (for a complete list of actions, descriptions of
their functions, and the details of their arguments, refer to the
actions page of the documentation website, linked at the end of
the paper):</p>
        <p>ActionName(Argument1, Argument2, ...)
e.g. Attack(Hero, Villain)
Sit(Tom, Room.Chair)
PlaySound(LivelyMusic)</p>
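<p>For illustration, an EM might build these action strings with a small helper. This is a hypothetical sketch, not part of Camelot’s API; Camelot only requires that the final string written to standard output match the format above.</p>
<p>
```python
# Hypothetical helper: builds a Camelot action string in the
# ActionName(Argument1, Argument2, ...) format described above.
def format_action(name, *args):
    return "{}({})".format(name, ", ".join(args))

# Sending an action to Camelot is then just a write to standard output,
# e.g. print("start " + format_action("Attack", "Hero", "Villain"))
```
</p>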
      </sec>
      <sec id="sec-2-2">
        <title>Managing Sequences of Actions</title>
<p>To execute an action, an EM can append start to a command
and send it to Camelot. Camelot then attempts to execute
that command and responds with the same command with a
succeeded prefix when the execution is successful, or
otherwise with an error or failed prefix. The response message
starts with error if the action could not be started, due to,
for instance, insufficient or incorrect arguments, or targeting
characters, items, or places that were not instantiated
beforehand. The response message starts with failed if the action
execution fails after it was started, e.g. a character trying to
walk out of a locked prison cell, or the player character’s walking
being interrupted by user input. Whether an action fails with an
error or failed message, a short message is also appended that
describes the reason for the failure. The EM can use these
responses to properly sequence the commands it wants to
visualize.</p>
        <p>An EM can also append stop to a previously started
command to stop its execution. In that case, Camelot responds
with a failed message and does its best to revert any
changes made by the execution of that command. For
instance, if a character is in the process of exiting a door after
opening it, the door is closed as a consequence of stopping
the Exit command.</p>
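<p>The start/succeeded/failed protocol can be sketched as a simple sequencing loop. In this hypothetical sketch, send and receive stand in for writes to Camelot’s standard input and reads from its standard output; the response spellings follow the description above, and anything beyond that is an assumption.</p>
<p>
```python
# Sketch of sequencing Camelot commands by waiting for responses.
# Responses are assumed to look like "succeeded Sit(Tom, Room.Chair)"
# or "failed Exit(Tom, Door) reason text", per the protocol above.
def parse_response(line):
    status, _, rest = line.strip().partition(" ")
    return status, rest

def run_sequence(commands, send, receive):
    """Start each command in turn; stop at the first error or failure."""
    for cmd in commands:
        send("start " + cmd)
        status, _ = parse_response(receive())
        if status != "succeeded":
            return False
    return True
```
</p>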
      </sec>
      <sec id="sec-2-3">
        <title>Action Abstraction Levels</title>
        <p>Many Camelot actions are comprised of smaller units that
are managed by Camelot without concerning the EM. For
instance, when the EM calls the Exit command, Camelot
makes the specified character walk to the specified door,
open the door, and go through the door. Camelot then closes
the door and makes the screen fade out. In doing so, Camelot
does not burden the EM with small units of work that can be
combined into a single action. In this case, walking to,
opening, and closing a door, as well as having the screen fade out
are all also available to the EM to execute individually.</p>
        <p>Furthermore, the EM is free to define any level of
action abstraction by managing the execution of a sequence of
Camelot commands. For instance, in an EM, we can define a
Shop function that when called, runs a sequence of Camelot
commands that make a character walk to a merchant and
take an item from them.</p>
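<p>A Shop abstraction of this kind might be sketched as follows. The Take command mirrors the example used later in this paper; the walk step’s command name and argument order are assumptions for illustration, not Camelot’s actual API.</p>
<p>
```python
# Hypothetical Shop abstraction composed from lower-level commands.
# "Take(Character, Item, Merchant)" follows the paper's example; the
# WalkToCharacter step is an assumed command name for illustration.
def shop(character, item, merchant, send, receive):
    steps = [
        "WalkToCharacter({}, {})".format(character, merchant),
        "Take({}, {}, {})".format(character, item, merchant),
    ]
    for cmd in steps:
        send("start " + cmd)
        if not receive().startswith("succeeded"):
            return False
    return True
```
</p>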
      </sec>
      <sec id="sec-2-4">
        <title>Asynchronous Execution</title>
<p>Camelot manages simultaneous actions that use the same
assets. Camelot locks characters, furniture, and items when an
action starts using them. All other starting actions that
target those objects must wait for the release of that lock.
For instance, assume that an EM simultaneously asks both
Tom and Jane to go to a merchant to take an item by
calling Take(Tom, Item, Merchant) and Take(Jane, Item,
Merchant). If Tom reaches the merchant first, they start taking
the item, while Jane walks to the merchant and waits. At this
point, the EM could, for instance, decide to stop the
command Take(Jane, Item, Merchant) upon receiving started
Take(Tom, Item, Merchant). Otherwise, when Take(Tom,
Item, Merchant) succeeds, Take(Jane, Item, Merchant) resumes,
and since items cannot be in two places at once, the
item disappears from Tom’s hand and is placed in Jane’s.</p>
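<p>This arbitration might look like the following in an EM. The message spellings mirror the scenario just described; treating started notifications this way is a sketch, not a prescribed pattern.</p>
<p>
```python
# Sketch of the arbitration above: once Tom's Take has started, the EM
# cancels Jane's competing Take by sending a stop command.
def arbitrate_take(message, send):
    if message.strip() == "started Take(Tom, Item, Merchant)":
        send("stop Take(Jane, Item, Merchant)")
        return True
    return False
```
</p>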
      </sec>
      <sec id="sec-2-5">
        <title>User Input</title>
        <p>When the player interacts with the environment, Camelot
sends messages with an input prefix to notify the EM. These
commands include any interactions with the objects or
non-player characters in the environment, dialog choices,
keyboard inputs, or specific changes in the position of the player
character.</p>
<p>e.g. input arrived Hero position Castle.Door
input Draw Hero Sword
input Key Inventory</p>
        <p>Since the EM is fully separate from Camelot, it is not
dependent on any specific algorithms or technologies, and it
does not even have to be deployed on the same physical
machine (following the architecture used by Young’s Mimesis
(2001), and many other similar systems). The only
connection between Camelot and an EM is simple strings of
text that are human readable and easy to understand.</p>
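<p>An EM can route these messages with a small parser. The sketch below assumes every user-input message is a space-separated line beginning with the input prefix, as in the examples above.</p>
<p>
```python
# Sketch of parsing Camelot's input messages, assuming each is a
# space-separated line starting with the "input" prefix shown above.
def parse_input(line):
    parts = line.strip().split()
    if not parts or parts[0] != "input":
        return None  # not a user-input message
    return parts[1:]
```
</p>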
      </sec>
      <sec id="sec-2-6">
<title>2.2 Camelot Gameplay Logs</title>
        <p>While a user is playing an interactive narrative, Camelot
generates a list of all its communication messages with the
EM, as well as their time stamps. Camelot gameplay logs
capture all the events that occur during a playthrough via the
user input or the EM, including when actions start, succeed,
or fail. These files can then be used to reproduce and
analyze, with near-perfect accuracy, a user’s playthrough and the
story that unfolds based on their choices. Because log files are
compact, they are efficient to transfer, which makes it practical
to collect a data set of user gameplay that can benefit data-driven
storytelling systems. We also provide an application that can be
downloaded from Camelot’s website and used as an EM to
recreate a playthrough from a log file with near-perfect accuracy.</p>
      </sec>
      <sec id="sec-2-7">
<title>2.3 Modular and Customizable</title>
        <p>Camelot comes with a set of characters and places that can
be customized as needed. To create various characters,
the EM can choose from different body-types, hair styles,
hair colors, eye colors, skin tones, and outfits. Figure 2
presents some examples of these characters. Camelot also
provides many small, contained, pre-built environments,
named places, that can be instantiated to create the story
world. Figure 3 presents some examples of these places.
Each place comes with a set of interactive furniture, such
as shelves, chairs, tables, or cauldrons, that can be hidden or
shown depending on the context of the story.</p>
        <p>Camelot does not impose any restrictions on where the
doors of each place lead to. This enables Camelot’s world
creation to be modular and allows any configuration of the
space. More specifically, every door leads to an area outside
the place obstructed by white clouds. When a character exits
through a door, they stay behind that door and wait for the
EM to change their position. The EM creates the illusion that
doors are connected by having a character enter through one
door immediately after leaving through another.</p>
      </sec>
      <sec id="sec-2-8">
<title>2.4 Stateless Presentation</title>
        <p>Camelot only acts as a presentation layer to an EM. Since
different AI technologies, e.g. planning and machine
learning, have very different representations of state, Camelot
does not require the EM to use any particular state
representation. In fact, to a large extent, Camelot does not keep
track of the state of the world.</p>
<p>For example, Camelot has a UI element named the List
that can be used to represent character inventory (Figure 4).
It is in fact the List and not a list: there are not separate
instances of it for different characters. Again, it is the EM
that decides what to put in the List and when to display it
(e.g. to show the inventory of a specific character).</p>
        <p>Furthermore, Camelot also has no notion of the player.
More specifically, any of the instantiated characters can be
controlled by mouse and keyboard as long as they are the
camera focus. We will discuss the camera focus later in the
Camera Control subsection. Any references to the player
character in this paper refer to the one currently being
controlled by the user.</p>
        <p>There are some exceptions to the stateless nature of
Camelot, specifically the physical position of characters. For
instance, if a character is sitting on a chair, an action that
attempts to make another character sit on that chair will fail
with a message stating that the chair is already occupied by
another character.</p>
        <p>Moreover, many character actions require the character to
first walk to the target. Since places are independent
contained environments, the corresponding actions will fail if a
targeted character moves to a different place.</p>
        <p>
          However, this is not true about items. All actions that
target items will teleport the specified item to the position
required by the action. For instance, if an item is on a shelf
and the EM asks a character to take the item out of their
pocket, the item will instantaneously disappear from the
shelf and appear in their hand. In addition, the SetPosition
command can be used to instantaneously teleport a
character or item to any other position within any place. Therefore,
Camelot can also be adopted in interactive storytelling
systems with a weak or non-existent sense of permanent state,
such as purely language-based interactive narratives (e.g.,
neural language model based storytelling systems such as
          <xref ref-type="bibr" rid="ref21 ref22">(Martin, Sood, and Riedl 2018)</xref>
          ).
        </p>
      </sec>
      <sec id="sec-2-9">
<title>2.5 Simple Description of Affordances</title>
        <p>Affordances are the actions a player can choose from in an
interactive narrative (not to be confused with Camelot
commands also called actions). In Camelot, affordances can be
simply described by the EnableIcon command. EnableIcon
can be used to describe an affordance that can be performed
on a character, furniture, or item. For instance, it can be used
to allow the player to click on a chair to sit on it.</p>
        <p>
          There are several important things to note about
EnableIcon. When EnableIcon is used for a character, furniture, or
item,
• The object will be highlighted when the user hovers the
mouse over it.
• If the user right-clicks on the object, a radial menu is
shown that presents all available interactions that can be
performed on that object. Each option can be presented
with a title and an icon. Camelot provides a large variety
of icons that can be used for this purpose. Figure 1 presents
an example of a radial menu.
• When the user chooses to interact with an object, Camelot
only responds by sending an input message to the EM.
Camelot does not start any action unless directly
instructed by the EM. This grants the EM full control
over what to do next in response to user interactions and
whether to accommodate or intervene (see
          <xref ref-type="bibr" rid="ref31">Riedl, Saretto,
and Young’s (2003</xref>
          ) discussion of mediation).
• The affordances can also be removed by simply calling
the DisableIcon command.
        </p>
<p>As an example, if Camelot receives EnableIcon(SitDown,
Chair, Room.Chair, “Sit on the chair”):
• When the user right-clicks on the chair, they see an option
with the title “Sit on the chair” and the icon Chair.
• If the user clicks on the chair, Camelot sends the following
message to the EM: input SitDown Player Room.Chair.
This notifies the EM that the Player has chosen SitDown.
• The EM can then choose to make the player character sit
on the chair by sending start Sit(Player, Room.Chair) to
Camelot, or it can choose to show a message to the user
like “I am not tired right now!”</p>
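<p>This round trip can be sketched in an EM as follows. The message spellings are copied from the example above; whether EnableIcon itself is sent with a start prefix like other commands is an assumption.</p>
<p>
```python
# Sketch of the EnableIcon round trip from the example above.
def setup_chair_affordance(send):
    # Assumption: EnableIcon is started like any other command.
    send('start EnableIcon(SitDown, Chair, Room.Chair, "Sit on the chair")')

def handle_sit_choice(message, send):
    """Accommodate the SitDown choice by starting the Sit action."""
    if message.strip() == "input SitDown Player Room.Chair":
        send("start Sit(Player, Room.Chair)")
        return True
    return False
```
</p>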
      </sec>
      <sec id="sec-2-10">
<title>2.6 Animations and Expressions</title>
        <p>A large set of available Camelot commands can be used to
animate characters. For instance, characters can open doors
or chests, sit on chairs, or sleep on beds. These animations
can be used as a visual response to user interactions in form
of player character actions, as well as non-player reactions
to those actions, e.g. clapping, laughing, waving, etc.</p>
        <p>
          In addition to these animations, there are several
expressions that can be used to express character emotions, which
are happy, sad, angry, scared, surprised, and disgusted. The
SetExpression command changes a character’s facial
expression as well as their idle animation to reflect that emotion.
Character expressions can also change during dialog to
display their reactions to dialog choices. These expressions can
be used by believable agent research that models affect
          <xref ref-type="bibr" rid="ref1 ref19 ref2 ref27 ref32 ref41 ref52">(Arellano, Varona, and Perales 2008; Marsella and Gratch 2009;
Neto and da Silva 2012; Alfonso Espinosa, Vivancos Rubio,
and Botti Navarro 2014; Shirvani and Ware 2020)</xref>
          . Figure 5
presents some examples of these expressions.
        </p>
        <p>
Camelot does not support graphic depictions of strong
violence (characters can attack using the Attack command, presented
by swinging their arm while holding an item such as a sword or
hammer) or inappropriate nudity in order to make
interactive narratives designed with it easier to approve by groups
like university Institutional Review Boards (IRBs). Several
Camelot games have been used in IRB-approved studies
          <xref ref-type="bibr" rid="ref39 ref40 ref41 ref53">(Ware et al. 2019; Shirvani and Ware 2020)</xref>
          .
        </p>
      </sec>
      <sec id="sec-2-11">
<title>2.7 Flexible UI</title>
        <p>Camelot provides several general UI elements to use in a
narrative. In addition to the radial menu, the list window can
be used to present a list of items to interact with, e.g.
displaying the inventory of a character or container, RPG character
statistics, a set of skills to purchase, etc., and the narration
window can be used to present simple text.</p>
        <p>
          The dialog window provides interactive dialog that can be
configured with character portraits and links embedded in
the text that the user can click
          <xref ref-type="bibr" rid="ref14 ref36 ref9">(Cavazza and Charles 2005;
Endrass et al. 2013; Ryan, Mateas, and Wardrip-Fruin 2016)</xref>
          .
Dialog links are parts of the text that are highlighted in blue
and can be clicked to represent dialog choices or advance
the dialog tree. Figures 1, 4, and 6 present examples of these
UI elements.
        </p>
      </sec>
      <sec id="sec-2-12">
<title>2.8 Camera Control</title>
        <p>Camelot provides different options for controlling the
camera, which can be used via SetCameraFocus,
SetCameraMode, and SetCameraBlend commands. At each moment,
the camera can be focused on a character, furniture, or item
using SetCameraFocus. If the focus of the camera is a
character and the input is enabled (via the EnableInput
command), the camera follows that character’s movements, and
that character can be controlled by mouse and keyboard. We
must note that the input can also be disabled at times, for
instance, during cutscenes.</p>
        <p>SetCameraMode can be used to switch between three
camera modes in real-time. The follow camera mode
displays a third-person over-the-shoulder view of the character
that is the focus of the camera, e.g. as in action RPG games.
This mode can only be enabled if the camera focus is a
character.</p>
        <p>In track mode, a top-down view of the place is
displayed, e.g. as in point-and-click adventures. As the
character moves, the camera changes rotation to keep the character
at the center, and if the character moves too far, the active
camera switches to a different camera of that place that has
a better view of the character.</p>
        <p>Finally, the focus camera mode, presents a front close-up
of the camera focus. This mode can be used to display
character expressions or temporarily shift the focus of the user
to a specific item or furniture that can be interacted with.</p>
        <p>When the EM changes the camera focus or mode, the
view transitions from the active camera to the new focus
or mode. The duration of this transition can be controlled
via the SetCameraBlend command. This command gives the
EM more freedom to control the camera and create dramatic
shifts or cuts during cut-scenes.</p>
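<p>For instance, a cut-scene close-up might combine the three camera commands as below. The argument formats here are assumptions for illustration only; see the documentation for the real signatures.</p>
<p>
```python
# Hypothetical cut-scene helper combining the three camera commands.
# Argument formats are assumptions; consult the documentation website.
def close_up_shot(target, blend_seconds, send):
    send("start SetCameraBlend({})".format(blend_seconds))
    send("start SetCameraFocus({})".format(target))
    send("start SetCameraMode(Focus)")
```
</p>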
      </sec>
      <sec id="sec-2-13">
<title>2.9 License and Availability</title>
        <p>Camelot is published under the Non-Profit Open Source
License 3.0. This license allows Camelot to be used for
personal, professional, and academic projects at no cost. It is
only necessary to acknowledge the original project and
creators in any derivative works (we ask users to cite this paper
in any published works that use Camelot). Currently, the executable can
be downloaded and used on Windows and Mac operating
systems. The source code is also available to download.
However, the copyrighted assets are not distributed with the
source, and can be purchased from the Unity Asset Store
at additional cost. The link to Camelot’s documentation and
download are presented at the end of this paper.</p>
      </sec>
    </sec>
    <sec id="sec-3">
<title>3 Applications and Practices</title>
      <p>Camelot can benefit a wide range of AI research including
but not limited to:</p>
      <p>
• Automatic story generation and agent simulations
using neural networks, reinforcement learning, and other
machine learning algorithms
        <xref ref-type="bibr" rid="ref11 ref21 ref22 ref29 ref34 ref4 ref47 ref48">(Rowe and Lester 2013;
Harrison, Purdy, and Riedl 2017; Wang et al. 2017;
Martin et al. 2018; Tambwekar et al. 2018)</xref>
        .
So far, Camelot has been used to create four different
interactive narratives. The Relics of the Kingdom and Murder
in Felguard were developed respectively in C++ and Python
by two different teams of undergraduate students at the
University of Kentucky. The Three Kings was developed in C#
and best showcases different features of Camelot and the use
of its UI in creating branching narratives. In contrast to these
three hand-authored interactive narratives, Saving Grandma,
also developed in C#, is a story
graph interactive narrative that was generated using
narrative planning
        <xref ref-type="bibr" rid="ref39 ref40 ref53">(Ware et al. 2019)</xref>
        . Murder in Felguard and
The Three Kings are both free to access on Camelot’s
documentation website.
      </p>
    </sec>
    <sec id="sec-4">
<title>4 Community Outreach</title>
      <p>
Our hope is to encourage researchers to adopt Camelot in
their relevant research. In previous years, Camelot was
introduced in the Playable Experiences track of AIIDE
        <xref ref-type="bibr" rid="ref38">(Samuel
et al. 2018)</xref>
        . A tutorial on Camelot was also held at AIIDE
to showcase the capabilities and use cases of Camelot. This
tutorial featured several invited demonstrations of
experience managers that used interactive behavior trees
        <xref ref-type="bibr" rid="ref20 ref39 ref40">(Martens
and Iqbal 2019)</xref>
        , multi-agent reinforcement learning
        <xref ref-type="bibr" rid="ref32 ref7">(Busoniu, Babuska, and De Schutter 2008)</xref>
        , the Ensemble engine
        <xref ref-type="bibr" rid="ref37">(Samuel et al. 2015)</xref>
        , multi-agent narrative planning
        <xref ref-type="bibr" rid="ref39 ref40 ref53">(Ware
et al. 2019)</xref>
        , and murder mystery generation
        <xref ref-type="bibr" rid="ref26">(Mohr, Eger,
and Martens 2018)</xref>
        . A showcase of Camelot will also be
presented at AIIDE 2020’s Intelligent Narrative Technologies
(INT) workshop.
      </p>
      <p>Our focus for the future of Camelot is to organize the
Interactive Narrative Challenge (INCH). The purpose of INCH
is to solicit AI EMs from many interactive narrative
researchers and present their interactive narratives to human
judges for qualitative and quantitative evaluation. INCH
provides a practical context for controlled comparisons of
interactive narratives across different systems. INCH will
feature awards for many contributions in various aspects of a
narrative, including use of narrative devices, e.g. flashbacks,
foreshadowing, suspense, etc., story coherence, player
freedom, replayability, character richness, and so on. As a result
of INCH, researchers will have access to free evaluation of
their work by human players, as well as the dataset of the
logs of all playthroughs. These logs can be further used to
analyze user experience or to train a data-driven AI narrative
system.</p>
    </sec>
    <sec id="sec-5">
<title>5 Conclusions</title>
      <p>Translating AI algorithms and technologies into a
user-friendly, visual interface is almost always a step in
evaluating narrative generation systems via a human audience.
The purpose of Camelot is to provide a modular,
customizable, and easy-to-use virtual environment for researchers to
visualize their stories. Camelot is fully independent of the
experience manager that controls it, which allows any
programming language or algorithm to easily connect and take
advantage of Camelot. In addition, it enables the controlled
comparison of drastically different narrative generation
systems and allows researchers to reproduce and build on the
work of others.</p>
      <p>We plan to take advantage of Camelot in the Interactive
Narrative Challenge to encourage researchers to submit their
interactive narratives and to provide them with access to
qualitative and quantitative evaluation of their work by
human judges.</p>
      <p>Camelot is an ongoing project and we plan to improve
and expand it to support future interactive narrative
authoring techniques.</p>
      <sec id="sec-5-1">
        <title>Downloading Camelot</title>
        <p>You can view a comprehensive interactive documentation
website for Camelot at:</p>
<p>www.cs.uky.edu/~sgware/projects/camelot</p>
        <p>The documentation provides details on how to use
Camelot and its commands, as well as showcasing its
characters, places, items, affordance icons, visual effects, and
sound effects. You can also download Camelot for Windows
or macOS from the documentation website.</p>
        <p>The website provides several applications that can be used
as example EMs for Camelot. First, CamelotReplay is an
application that reproduces a playthrough from a log file. Next,
there are simple EMs that give beginners a place to start
working with Camelot. They showcase a character moving
from one place to another, trying out different outfits, and
buying an item from a merchant. Finally, there are also two
full interactive narratives, Murder in Felguard and The Three
Kings, that showcase the wide range of things you can do in
Camelot.</p>
      </sec>
      <sec id="sec-5-2">
        <title>Acknowledgments</title>
<p>The development of Camelot was supported by the
University of New Orleans and the University of Kentucky.
We thank Edward T. Garcia, Rachelyn Farrell, and Porscha
Banker for their insights and assistance with the project.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>Alfonso</given-names>
            <surname>Espinosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.; Vivancos</given-names>
            <surname>Rubio</surname>
          </string-name>
          , E.; and
          <string-name>
            <given-names>Botti</given-names>
            <surname>Navarro</surname>
          </string-name>
          ,
          <string-name>
            <surname>V. J.</surname>
          </string-name>
          <year>2014</year>
          .
          <article-title>Extending a BDI agents' architecture with open emotional components</article-title>
          .
          <source>Technical report</source>
, Department of Information Technology, Universitat Politècnica de València.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Arellano</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Varona</surname>
          </string-name>
          , J.; and
          <string-name>
            <surname>Perales</surname>
            ,
            <given-names>F. J.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Generation and visualization of emotional states in virtual characters</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>Computer Animation and Virtual Worlds</source>
          <volume>19</volume>
          (
          <issue>3-4</issue>
          ):
          <fpage>259</fpage>
          -
          <lpage>270</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
<surname>Bahamón</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>An empirical evaluation of a generative method for the expression of personality traits through action choice</article-title>
          .
          <source>In 13th AAAI International Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          ,
          <fpage>144</fpage>
          -
          <lpage>150</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Bates</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>1992</year>
          .
          <article-title>Virtual reality, art, and entertainment</article-title>
          .
          <source>Presence: Teleoperators &amp; Virtual Environments</source>
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <fpage>133</fpage>
          -
          <lpage>138</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Berov</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Steering plot through personality and affect: an extended BDI model of fictional characters</article-title>
          .
          <source>In Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz)</source>
          ,
          <fpage>293</fpage>
          -
          <lpage>299</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Busoniu</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Babuska</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>De Schutter</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>A comprehensive survey of multiagent reinforcement learning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          , Part C (
          <article-title>Applications</article-title>
          and Reviews)
          <volume>38</volume>
          (
          <issue>2</issue>
          ):
          <fpage>156</fpage>
          -
          <lpage>172</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Cavazza</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Charles</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Dialogue generation in character-based interactive storytelling</article-title>
          .
          <source>In Proceedings of the First AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE'05</source>
          ,
          <fpage>21</fpage>
          -
          <lpage>26</lpage>
          . AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Drucker</surname>
            ,
            <given-names>S. M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Zeltzer</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>Intelligent camera control in a virtual environment</article-title>
          .
          <source>In Graphics Interface</source>
          ,
          <fpage>190</fpage>
          -
          <lpage>190</lpage>
          . Citeseer.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Eger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Martens</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Character beliefs in story generation</article-title>
          .
          <source>In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference.</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>El-Nasr</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Ioerger</surname>
            ,
            <given-names>T. R.</given-names>
          </string-name>
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <article-title>FLAME - fuzzy logic adaptive model of emotions</article-title>
          .
          <source>Autonomous Agents and Multi-Agent Systems</source>
          <volume>3</volume>
          (
          <issue>3</issue>
          ):
          <fpage>219</fpage>
          -
          <lpage>257</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Endrass</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Klimmt</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mehlmann</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>André</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Roth</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Designing user-character dialog in interactive narratives: An exploratory experiment</article-title>
          .
          <source>IEEE Transactions on Computational Intelligence and AI in Games</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ):
          <fpage>166</fpage>
          -
          <lpage>173</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Ferreira</surname>
            ,
            <given-names>F. P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gelatti</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Musse</surname>
            ,
            <given-names>S. R.</given-names>
          </string-name>
          <year>2002</year>
          .
          <article-title>Intelligent virtual environment and camera control in behavioural simulation</article-title>
          .
          <source>In Proceedings. XV Brazilian Symposium on Computer Graphics and Image Processing</source>
          ,
          <fpage>365</fpage>
          -
          <lpage>372</lpage>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Gebhard</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>ALMA: a layered model of affect</article-title>
          .
          <source>In Proceedings of the fourth international joint conference on Autonomous Agents and Multi-Agent Systems</source>
          ,
          <fpage>29</fpage>
          -
          <lpage>36</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Jhala</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Cinematic visual discourse: Representation, generation, and evaluation</article-title>
          .
          <source>IEEE Transactions on Computational Intelligence and AI in Games</source>
          <volume>2</volume>
          (
          <issue>2</issue>
          ):
          <fpage>69</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          2011.
          <article-title>Intelligent camera control using behavior trees</article-title>
          .
          <source>In International Conference on Motion in Games</source>
          ,
          <fpage>156</fpage>
          -
          <lpage>167</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Marsella</surname>
            ,
            <given-names>S. C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Gratch</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2009</year>
          .
          <article-title>EMA: A process model of appraisal dynamics</article-title>
          .
          <source>Cognitive Systems Research</source>
          <volume>10</volume>
          (
          <issue>1</issue>
          ):
          <fpage>70</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Martens</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Iqbal</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Villanelle: an authoring tool for autonomous characters in interactive fiction</article-title>
          .
          <source>In Proceedings of the International Conference on Interactive Digital Storytelling</source>
          ,
          <fpage>290</fpage>
          -
          <lpage>303</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>L. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ammanabrolu</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Hancock</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Harrison</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Event representations for automated story generation with deep neural nets</article-title>
          .
          <source>In Thirty-Second AAAI Conference on Artificial Intelligence.</source>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>L. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sood</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Dungeons and dqns: Toward reinforcement learning agents that play tabletop roleplaying games</article-title>
          . In
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Si</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Jhala</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , eds.,
          <source>Proceedings of the Joint Workshop on Intelligent Narrative Technologies and Workshop on Intelligent Cinematography and Editing co-located with 14th AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, INT/WICED@AIIDE</source>
          <year>2018</year>
          , Edmonton, Canada, November 13-14, 2018, volume
          <volume>2321</volume>
          of CEUR Workshop Proceedings.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Mateas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Stern</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Façade: An experiment in building a fully-realized interactive drama</article-title>
          .
          <source>In Game developers conference</source>
          , volume
          <volume>2</volume>
          ,
          <fpage>4</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>McCoy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Treanor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Samuel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wardrip-Fruin</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Mateas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>Prom Week: designing past the game/story dilemma</article-title>
          .
          <source>In Proceedings of the International Conference on the Foundations of Digital Games</source>
          ,
          <fpage>235</fpage>
          -
          <lpage>237</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>McCoy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Treanor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Samuel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mateas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Wardrip-Fruin</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Social story worlds with Comme il Faut</article-title>
          .
          <source>IEEE Transactions on Computational intelligence and AI in Games</source>
          <volume>6</volume>
          (
          <issue>2</issue>
          ):
          <fpage>97</fpage>
          -
          <lpage>112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Mohr</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Eger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Martens</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Eliminating the impossible: a procedurally generated murder mystery</article-title>
          .
          <source>In Proceedings of the Experimental AI in Games workshop at the 14th AAAI international conference on Artificial Intelligence and Interactive Digital Entertainment.</source>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Neto</surname>
            ,
            <given-names>A. F. B.</given-names>
          </string-name>
          ,
          and da
          <string-name>
            <surname>Silva</surname>
            ,
            <given-names>F. S. C.</given-names>
          </string-name>
          <year>2012</year>
          .
          <article-title>A computer architecture for intelligent agents with personality and emotions</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <fpage>263</fpage>
          -
          <lpage>285</lpage>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bulitko</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Interactive narrative: An intelligent systems approach</article-title>
          .
          <source>AI Magazine</source>
          <volume>34</volume>
          (
          <issue>1</issue>
          ):
          <fpage>67</fpage>
          -
          <lpage>67</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2010</year>
          .
          <article-title>Narrative planning: Balancing plot and character</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          <volume>39</volume>
          :
          <fpage>217</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Saretto</surname>
            ,
            <given-names>C. J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>Managing interaction between users and agents in a multi-agent storytelling environment</article-title>
          .
          <source>In Proceedings of the second international joint conference on Autonomous Agents and Multiagent Systems</source>
          ,
          <fpage>741</fpage>
          -
          <lpage>748</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <surname>Roberts</surname>
            ,
            <given-names>D. L.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Isbell</surname>
            ,
            <given-names>C. L.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>A survey and qualitative analysis of recent advances in drama management</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <source>International Transactions on Systems Science and Applications</source>
          ,
          <source>Special Issue on Agent Based Systems for Human Learning</source>
          <volume>4</volume>
          (
          <issue>2</issue>
          ):
          <fpage>61</fpage>
          -
          <lpage>75</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          <string-name>
            <surname>Rowe</surname>
            ,
            <given-names>J. P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lester</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>A modular reinforcement learning framework for interactive narrative planning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <source>In Ninth Artificial Intelligence and Interactive Digital Entertainment Conference</source>
          ,
          <fpage>57</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name>
            <surname>Ryan</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mateas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Wardrip-Fruin</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Characters who speak their minds: dialogue generation in Talk of the Town</article-title>
          .
          <source>In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference.</source>
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <string-name>
            <surname>Samuel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>A. A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Maddaloni</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mateas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Wardrip-Fruin</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>The Ensemble engine: Next-generation social physics</article-title>
          .
          <source>In Proceedings of the Tenth International Conference on the Foundations of Digital Games (FDG</source>
          <year>2015</year>
          ),
          <fpage>22</fpage>
          -
          <lpage>25</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name>
            <surname>Samuel</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Reed</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Short</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Heck</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Robison</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Wright</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Soule</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Treanor</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>McCoy</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Sullivan</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; et al.
          <year>2018</year>
          .
          <article-title>Playable experiences at AIIDE 2018</article-title>
          .
          <source>In Proceedings of the Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference</source>
          ,
          <fpage>275</fpage>
          -
          <lpage>280</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          <year>2019a</year>
          .
          <article-title>On automatically motivating story characters</article-title>
          .
          <source>In Proceedings of the Experimental AI in Games workshop at the 15th AAAI international conference on Artificial Intelligence and Interactive Digital Entertainment.</source>
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          <year>2019b</year>
          .
          <article-title>A plan-based personality model for story characters</article-title>
          .
          <source>In Proceedings of the 15th AAAI international conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          ,
          <fpage>188</fpage>
          -
          <lpage>194</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>A formalization of emotional planning for strong-story systems</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Farrell</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Combining intentionality and belief: Revisiting believable character plans</article-title>
          .
          <source>In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference</source>
          ,
          <fpage>222</fpage>
          -
          <lpage>228</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Farrell</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>A possible worlds model of belief for state-space narrative planning</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <source>In Proceedings of the Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference</source>
          ,
          <fpage>101</fpage>
          -
          <lpage>107</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Towards more believable characters using personality and emotion</article-title>
          .
          <source>In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          ,
          <fpage>230</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <string-name>
            <surname>Shvo</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Buhmann</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Kapadia</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>An interdependent model of personality, motivation, emotion, and mood for intelligent virtual agents</article-title>
          .
          <source>In Proceedings of the 19th ACM International Conference on Intelligent Virtual Agents</source>
          ,
          <fpage>65</fpage>
          -
          <lpage>72</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>Tambwekar</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Dhuliawala</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Martin</surname>
            ,
            <given-names>L. J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Mehta</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Harrison</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Controllable neural story plot generation via reinforcement learning</article-title>
          .
          <source>arXiv preprint arXiv:1809.10736</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref48">
        <mixed-citation>
          <string-name>
            <surname>Teutenberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Porteous</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Efficient intent-based narrative generation using multiple planning agents</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref49">
        <mixed-citation>
          <source>In Proceedings of the 2013 international conference on Autonomous agents and multi-agent systems</source>
          ,
          <fpage>603</fpage>
          -
          <lpage>610</lpage>
          . International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS).
        </mixed-citation>
      </ref>
      <ref id="ref50">
        <mixed-citation>
          <string-name>
            <surname>Teutenberg</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Porteous</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Incorporating global and local knowledge in intentional narrative planning</article-title>
          .
          <source>In International Conference on Autonomous Agents and Multiagent Systems</source>
          ,
          <fpage>1539</fpage>
          -
          <lpage>1546</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref51">
        <mixed-citation>
          <year>2017</year>
          .
          <article-title>Interactive narrative personalization with deep reinforcement learning</article-title>
          .
          <source>In Proceedings of the 26th International Joint Conference on Artificial Intelligence</source>
          ,
          <fpage>3852</fpage>
          -
          <lpage>3858</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref52">
        <mixed-citation>
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Glaive: a state-space narrative planner supporting intentionality and conflict</article-title>
          .
          <source>In Tenth Artificial Intelligence and Interactive Digital Entertainment Conference.</source>
        </mixed-citation>
      </ref>
      <ref id="ref53">
        <mixed-citation>
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Garcia</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Shirvani</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Farrell</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref54">
        <mixed-citation>
          <article-title>Multi-agent narrative experience management as story graph pruning</article-title>
          .
          <source>In Proceedings of the fifteenth Artificial Intelligence and Interactive Digital Entertainment Conference</source>
          ,
          <fpage>87</fpage>
          -
          <lpage>93</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref55">
        <mixed-citation>
          <string-name>
            <surname>Weyhrauch</surname>
            ,
            <given-names>P. W.</given-names>
          </string-name>
          <year>1997</year>
          .
          <article-title>Guiding interactive drama</article-title>
          .
          <source>Ph.D. Dissertation</source>
        </mixed-citation>
      </ref>
      <ref id="ref56">
        <mixed-citation>
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Thomas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bevan</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Cassell</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref57">
        <mixed-citation>
          <source>In Working Notes of the Workshop on Sharing Interactive Digital Storytelling Technologies at ICIDS</source>
          , volume
          <volume>11</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref58">
        <mixed-citation>
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Ware</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Cassell</surname>
            ,
            <given-names>B. A.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Robertson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Plans and planning in narrative generation: a review of plan-based approaches to the generation of story, discourse and interactivity in narratives</article-title>
          .
          <source>Sprache und Datenverarbeitung, Special Issue on Formal and Computational Models of Narrative</source>
          <volume>37</volume>
          (
          <issue>1-2</issue>
          ):
          <fpage>41</fpage>
          -
          <lpage>64</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref59">
        <mixed-citation>
          <string-name>
            <surname>Young</surname>
            ,
            <given-names>R. M.</given-names>
          </string-name>
          <year>2001</year>
          .
          <article-title>An overview of the Mimesis architecture: Integrating intelligent narrative control into an existing gaming environment</article-title>
          .
          <source>In Working notes of the AAAI spring symposium on Artificial Intelligence and Interactive Entertainment</source>
          ,
          <fpage>77</fpage>
          -
          <lpage>81</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>