Camelot: A Modular Customizable Sandbox for Visualizing Interactive Narratives

Alireza Shirvani, Stephen G. Ware
Narrative Intelligence Lab, University of Kentucky
Lexington, KY 40506
{ashirvani, sgware}@uky.edu

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

Abstract

Camelot is a modular, customizable virtual environment that is inspired by the needs of current and previous narrative generation research. Camelot is meant to facilitate interactive narrative prototyping, controlled comparisons of different systems, and reproducing and building on the work of others. It provides a 3D presentation layer that is fully separable from the narrative generation system that controls it. This allows any application, AI algorithm, or technology, written in any programming language, to connect to Camelot and use it to visualize interactive narratives. In this paper, we introduce Camelot and its capabilities, and provide some details on how and to what extent it can be used to benefit the interactive narrative community.

1 Introduction

Camelot is a modular and customizable interactive narrative environment that provides a sandbox to act as a presentation layer for any narrative generation system. Camelot is a real-time 3D third-person virtual environment that takes place in a Medieval fantasy setting and includes customizable characters, places, and items. By using this environment, researchers can build and test prototypes faster and more easily.

By providing a fully separate presentation layer, Camelot is independent of the programming language or technology used by the narrative generation system. This separation of concerns lets Camelot provide a standard of presentation that can be shared among the interactive narrative community. Through this standard, highly different AI approaches can be meaningfully compared to one another and evaluated in the same context and with the same subjects. Moreover, this standard helps researchers reproduce and build upon the work of others.

In this paper, we provide some details about the accessibility and capabilities of Camelot. We hope that it may reach and assist many researchers in their efforts to contribute to the interactive narrative and AI community. In section 2, we will discuss the design of Camelot to support research and simplify its application. Section 3 presents the potential applications of Camelot and several proof-of-concept games that are free to access and play. Section 4 discusses our previous attempts and future plans for community outreach, and finally, section 5 presents the conclusions.

2 Design to Support Research

2.1 Interoperability

To generate an interactive narrative, Camelot communicates with an experience manager (EM) (Riedl and Bulitko 2013). Experience managers, sometimes called drama managers, emerged early in interactive narrative research (Bates 1992; Weyhrauch 1997) and continue to be a popular architecture (see Roberts and Isbell (2008) for a survey). In contrast to some previous narrative control systems, such as Mimesis (Young 2001) or Zócalo (Young et al. 2011), Camelot provides both the presentation layer and the bridge that connects it to an EM.

A Camelot EM can be written in any programming language that has standard input and output capabilities. In fact, all communication between Camelot and the EM is transmitted via standard I/O, e.g. System.out.println in Java, print in Python, or Console.WriteLine in C#. Camelot has a large list of available commands that can be used to control its UI, characters, environments, etc. These commands are referred to as actions and have the following format¹:

    ActionName(Argument1, Argument2, ...)
    e.g. Attack(Hero, Villain)
         Sit(Tom, Room.Chair)
         PlaySound(LivelyMusic)

¹ For a complete list of actions, the description of their function, and the details of their arguments, please refer to the actions page of the documentation website (link provided at the end of the paper).
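To make this exchange concrete, the following is a minimal sketch of an EM's I/O plumbing in Python, under the assumption stated above that every message is a single line exchanged over standard input and output. The start prefix it sends and the response prefixes it checks for are explained in the next subsection; the action itself mirrors the examples above.

    import sys

    def send(message):
        # Camelot reads the EM's standard output, one message per line.
        print(message, flush=True)

    def responses():
        # Camelot's messages arrive on the EM's standard input, one per line.
        for line in sys.stdin:
            yield line.strip()

    send("start Attack(Hero, Villain)")
    for message in responses():
        if message.startswith(("succeeded", "failed", "error")):
            break  # the outcome of the action, as described below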
Managing Sequences of Actions

To execute an action, an EM can append start to a command and send it to Camelot. Camelot then attempts to execute that command and responds with the same command with a succeeded prefix when the execution is successful, or otherwise with an error or failed prefix. The response message starts with error if the action could not be started, due to, for instance, insufficient or incorrect arguments, or targeting characters, items, or places that were not instantiated beforehand. The response message starts with failed if the action execution fails after it was started, e.g. a character trying to walk out of a locked prison cell, or the player character's walk being interrupted by user input. Whether an action fails with an error or a failed message, a short message describing the reason for the failure is also appended. The EM can use these responses to properly sequence the commands it wants to visualize.

An EM can also append stop to a previously started command to stop its execution. In that case, Camelot responds with a failed message and does its best to revert any changes made by the execution of that command. For instance, if a character is in the process of exiting a door after opening it, the door is closed as a consequence of stopping the Exit command.

Action Abstraction Levels

Many Camelot actions are composed of smaller units that are managed by Camelot without concerning the EM. For instance, when the EM calls the Exit command, Camelot makes the specified character walk to the specified door, open the door, and go through the door. Camelot then closes the door and makes the screen fade out. In doing so, Camelot does not burden the EM with small units of work that can be combined into a single action. In this case, walking to, opening, and closing a door, as well as having the screen fade out, are all also available to the EM to execute individually.

Furthermore, the EM is free to define any level of action abstraction by managing the execution of a sequence of Camelot commands. For instance, in an EM, we can define a Shop function that, when called, runs a sequence of Camelot commands that make a character walk to a merchant and take an item from them.
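Such an EM-defined abstraction might be sketched in Python as follows, continuing the standard I/O conventions above. The helper sends one command, waits for its outcome, and only then sends the next. WalkTo is a placeholder name rather than a confirmed Camelot action (please consult the documentation for the exact action list), and Take mirrors the example used later in this section.

    import sys

    def run(action):
        # Send one action and block until Camelot reports its outcome.
        print("start " + action, flush=True)
        for line in sys.stdin:
            message = line.strip()
            if message.startswith("succeeded " + action):
                return True
            if message.startswith("failed " + action) or message.startswith("error"):
                return False

    def shop(character, item, merchant):
        # One EM-level "Shop" action composed of lower-level Camelot actions.
        # WalkTo is a hypothetical placeholder for a movement action.
        return (run("WalkTo({}, {})".format(character, merchant)) and
                run("Take({}, {}, {})".format(character, item, merchant)))

    shop("Tom", "Item", "Merchant")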
Asynchronous Execution

Camelot manages simultaneous actions that use the same assets. Camelot locks characters, furniture, and items when an action starts using them. All other starting actions that target those objects need to wait for the release of that lock. For instance, assume that an EM simultaneously asks both Tom and Jane to go to a merchant to take an item by calling Take(Tom, Item, Merchant) and Take(Jane, Item, Merchant). If Tom reaches the merchant first, they start taking the item, while Jane walks to the merchant and waits. At this point, the EM could, for instance, decide to stop the command Take(Jane, Item, Merchant) upon receiving started Take(Tom, Item, Merchant). Otherwise, when Take(Tom, Item, Merchant) succeeds, Take(Jane, Item, Merchant) resumes, and since items cannot be in two places at once, the item disappears from Tom's hand and is placed in Jane's.

User Input

When the player interacts with the environment, Camelot sends messages with an input prefix to notify the EM. These messages include any interactions with the objects or non-player characters in the environment, dialog choices, keyboard inputs, or specific changes in the position of the player character.

    e.g. input arrived Hero position Castle.Door
         input Draw Hero Sword
         input Key Inventory

Since the EM is fully separate from Camelot, it is not dependent on any specific algorithms or technologies, and it does not even have to be deployed on the same physical machine (following the architecture used by Young's Mimesis (2001) and many other similar systems). The only connection between Camelot and an EM is simple strings of text that are human-readable and easy to understand.
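An EM typically reacts to the input messages above in a dispatch loop. A minimal sketch in Python follows; the reactions chosen here are purely illustrative, and the message texts are the examples listed above.

    import sys

    def send(message):
        print(message, flush=True)

    for line in sys.stdin:
        message = line.strip()
        if not message.startswith("input "):
            continue  # action outcomes (succeeded/failed/error) are handled elsewhere
        if message == "input arrived Hero position Castle.Door":
            # The player character reached a position of interest; advance the story.
            send("start PlaySound(LivelyMusic)")
        elif message == "input Draw Hero Sword":
            # The EM may accommodate this interaction, ignore it, or intervene
            # with a different action of its own choosing.
            pass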
2.2 Camelot Gameplay Logs

While a user is playing an interactive narrative, Camelot generates a list of all of its communication messages with the EM, as well as their time stamps. Camelot gameplay logs capture all the events that occur during a playthrough via user input or the EM, including when actions start, succeed, or fail. These files can then be used to closely reproduce and analyze a user's playthrough and the story that unfolds based on their choices. The information captured in a log file and its small size make it efficient to transfer and to collect data sets of user gameplay, which can benefit data-driven storytelling systems. We also provide an application that can be downloaded from Camelot's website and used as an EM to closely recreate a playthrough from a log file.
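Because the logs are simply time-stamped protocol messages, they are easy to analyze offline. The sketch below, in Python, assumes each log line pairs a time stamp with one message (the exact file format is described on the documentation website) and tallies message types; playthrough.log is a hypothetical file name.

    from collections import Counter

    def message_counts(path):
        counts = Counter()
        with open(path) as log:
            for line in log:
                parts = line.strip().split(None, 1)  # [time stamp, message]
                if len(parts) == 2:
                    # Tally by prefix: input, start, succeeded, failed, error, ...
                    counts[parts[1].split()[0]] += 1
        return counts

    print(message_counts("playthrough.log"))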
2.3 Modular and Customizable

Camelot comes with a set of characters and places that can be customized as needed. To create various characters, the EM can choose from different body types, hair styles, hair colors, eye colors, skin tones, and outfits. Figure 2 presents some examples of these characters. Camelot also provides many small, contained, pre-built environments, named places, that can be instantiated to create the story world. Figure 3 presents some examples of these places. Each place comes with a set of interactive furniture, such as shelves, chairs, tables, or cauldrons, that can be hidden or shown depending on the context of the story.

Figure 2: Some examples of Camelot characters.

Figure 3: Some examples of Camelot places.

Camelot does not impose any restrictions on where the doors of each place lead. This enables Camelot's world creation to be modular and allows any configuration of the space. More specifically, every door leads to an area outside the place obscured by white clouds. When a character exits through a door, they stay behind that door and wait for the EM to change their position. The EM creates the illusion that doors are connected by having a character enter through one door immediately after leaving through another.

2.4 Stateless Presentation

Camelot only acts as a presentation layer for an EM. Since different AI technologies, e.g. planning and machine learning, have very different representations of state, Camelot does not require the EM to use any particular state representation. In fact, to a large extent, Camelot does not keep track of the state of the world.

For example, Camelot has a UI element named the List that can be used to represent character inventory (Figure 4). It is the List and not a list: there are no separate instances of it for different characters. Again, it is the EM that decides what to put in the List and when to display it (e.g. to show the inventory of a specific character).

Furthermore, Camelot also has no notion of the player. More specifically, any of the instantiated characters can be controlled by mouse and keyboard as long as they are the camera focus. We will discuss the camera focus later in the Camera Control subsection. Any references to the player character in this paper refer to the one currently being controlled by the user.

There are some exceptions to the stateless nature of Camelot, specifically the physical position of characters. For instance, if a character is sitting on a chair, an action that attempts to make another character sit on that chair will fail with a message stating that the chair is already occupied by another character.

Moreover, many character actions require the character to first walk to the target. Since places are independent, contained environments, the corresponding actions will fail if a targeted character moves to a different place.

However, this is not true of items. All actions that target items will teleport the specified item to the position required by the action. For instance, if an item is on a shelf and the EM asks a character to take the item out of their pocket, the item will instantaneously disappear from the shelf and appear in their hand. In addition, the SetPosition command can be used to instantaneously teleport a character or item to any other position within any place. Therefore, Camelot can also be adopted in interactive storytelling systems with a weak or non-existent sense of permanent state, such as purely language-based interactive narratives (e.g., neural language model based storytelling systems such as Martin, Sood, and Riedl (2018)).

2.5 Simple Description of Affordances

Affordances are the actions a player can choose from in an interactive narrative (not to be confused with Camelot commands, which are also called actions). In Camelot, affordances can be simply described by the EnableIcon command. EnableIcon can be used to describe an affordance that can be performed on a character, piece of furniture, or item. For instance, it can be used to allow the player to click on a chair to sit on it.

Figure 1: The radial menu shows a list of affordances enabled on an object (the eagle statue). The icon, title, and name of each affordance is specified by the EM.

There are several important things to note about EnableIcon. When EnableIcon is used for a character, piece of furniture, or item:

• The object will be highlighted when the user hovers the mouse over it.
• If the user right-clicks on the object, a radial menu is shown that presents all available interactions that can be performed on that object. Each option can be presented with a title and an icon. Camelot provides a large variety of icons that can be used for this purpose. Figure 1 presents an example of a radial menu.
• When the user chooses to interact with an object, Camelot only responds by sending an input message to the EM. Camelot does not start any action unless directly instructed by the EM. This grants the EM full control over what to do next in response to user interactions and whether to accommodate or intervene (see Riedl, Saretto, and Young's (2003) discussion of mediation).
• The affordances can also be removed by simply calling the DisableIcon command.

As an example, if Camelot receives EnableIcon(SitDown, Chair, Room.Chair, "Sit on the chair"):

• When the user right-clicks on the chair, they see an option with the title "Sit on the chair" and the icon Chair.
• If the user clicks on the chair, Camelot sends the following message to the EM: input SitDown Player Room.Chair. This notifies the EM that the Player has chosen SitDown.
• The EM can then choose to make the player character sit on the chair by sending start Sit(Player, Room.Chair) to Camelot, or it can choose to show a message to the user like "I am not tired right now!" (as sketched below).
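The decision in the last bullet can be sketched in Python as follows: after the EnableIcon command above has been issued, the EM waits for the matching input message and chooses whether to accommodate it. The story_allows_sitting predicate is a stand-in for the EM's own story logic, which Camelot deliberately knows nothing about.

    import sys

    def send(message):
        print(message, flush=True)

    def story_allows_sitting():
        return True  # placeholder; the EM may consult a plan, a learned model, etc.

    for line in sys.stdin:
        if line.strip() == "input SitDown Player Room.Chair":
            if story_allows_sitting():
                send("start Sit(Player, Room.Chair)")  # accommodate the player's choice
            else:
                # Intervene instead, e.g. by telling the player
                # "I am not tired right now!" through one of the UI elements described below.
                pass
            break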
2.6 Animations and Expressions

A large set of available Camelot commands can be used to animate characters. For instance, characters can open doors or chests, sit on chairs, or sleep on beds. These animations can be used as a visual response to user interactions in the form of player character actions, as well as non-player reactions to those actions, e.g. clapping, laughing, waving, etc.

In addition to these animations, there are several expressions that can be used to express character emotions: happy, sad, angry, scared, surprised, and disgusted. The SetExpression command changes a character's facial expression as well as their idle animation to reflect that emotion. Character expressions can also change during dialog to display their reactions to dialog choices. These expressions can be used by believable agent research that models affect (Arellano, Varona, and Perales 2008; Marsella and Gratch 2009; Neto and da Silva 2012; Alfonso Espinosa, Vivancos Rubio, and Botti Navarro 2014; Shirvani and Ware 2020). Figure 5 presents some examples of these expressions.

Figure 5: Examples of Camelot emotions, from left to right: surprised, angry, and scared.

Camelot does not support graphic depictions of strong violence² or inappropriate nudity in order to make interactive narratives designed with it easier for groups like university Institutional Review Boards (IRBs) to approve. Several Camelot games have been used in IRB-approved studies (Ware et al. 2019; Shirvani and Ware 2020).

² Characters can attack using the Attack command, presented by swinging their arm while holding an item such as a sword or hammer.

2.7 Flexible UI

Camelot provides several general UI elements to use in a narrative. In addition to the radial menu, the list window can be used to present a list of items to interact with, e.g. displaying the inventory of a character or container, RPG character statistics, a set of skills to purchase, etc., and the narration window can be used to present simple text.

The dialog window provides interactive dialog that can be configured with character portraits and links embedded in the text that the user can click (Cavazza and Charles 2005; Endrass et al. 2013; Ryan, Mateas, and Wardrip-Fruin 2016). Dialog links are parts of the text that are highlighted in blue and can be clicked to represent dialog choices or advance the dialog tree. Figures 1, 4, and 6 present examples of these UI elements.

Figure 4: The list shows the current owner of the list (on the left) and a list of items (at the center) that can be selected to interact with or view their details (on the right).

Figure 6: Interactive dialog can show up to two characters and any number of clickable links. Clickable links are highlighted in blue.
2.8 Camera Control

Camelot provides different options for controlling the camera, which can be used via the SetCameraFocus, SetCameraMode, and SetCameraBlend commands. At each moment, the camera can be focused on a character, furniture, or item using SetCameraFocus. If the focus of the camera is a character and input is enabled (via the EnableInput command), the camera follows that character's movements, and that character can be controlled by mouse and keyboard. We must note that input can also be disabled at times, for instance, during cutscenes.

SetCameraMode can be used to switch between three camera modes in real time. The follow camera mode displays a third-person over-the-shoulder view of the character that is the focus of the camera, e.g. as in action RPG games. This mode can only be enabled if the camera focus is a character.

In track mode, a top-down view of the place is displayed, e.g. as in point-and-click adventures. As the character moves, the camera changes rotation to keep the character at the center, and if the character moves too far, the active camera switches to a different camera of that place that has a better view of the character.

Finally, the focus camera mode presents a front close-up of the camera focus. This mode can be used to display character expressions or temporarily shift the focus of the user to a specific item or piece of furniture that can be interacted with.

When the EM changes the camera focus or mode, the view transitions from the active camera to the new focus or mode. The duration of this transition can be controlled via the SetCameraBlend command. This command gives the EM more freedom to control the camera and create dramatic shifts or cuts during cut-scenes.
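As a sketch of how an EM might combine these commands into a simple cut-scene, in Python and continuing the conventions above: the command names are the ones introduced in this subsection, but the argument forms (a transition duration, focus and mode names) are illustrative assumptions; the documentation gives the exact signatures.

    import time

    def send(message):
        print(message, flush=True)

    # Blend slowly to a close-up of the merchant, hold the shot, then cut back.
    send("start SetCameraBlend(2)")        # assumed argument form: a transition duration
    send("start SetCameraFocus(Merchant)")
    send("start SetCameraMode(Focus)")
    time.sleep(3)                          # hold the close-up for a moment
    send("start SetCameraBlend(0)")        # assumed: an instant cut
    send("start SetCameraFocus(Player)")
    send("start SetCameraMode(Follow)")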
2.9 License and Availability

Camelot is published under the Non-Profit Open Source License 3.0. This license allows Camelot to be used for personal, professional, and academic projects at no cost. It is only necessary to acknowledge the original project and creators in any derivative works³. Currently, the executable can be downloaded and used on the Windows and Mac operating systems. The source code is also available to download. However, the copyrighted assets are not distributed with the source and can be purchased from the Unity Asset Store at additional cost. The links to Camelot's documentation and download are provided at the end of this paper.

³ We ask users to cite this paper in any published works that use Camelot.

3 Applications and Practices

Camelot can benefit a wide range of AI research, including but not limited to:

• Automatic story generation and agent simulations using neural networks, reinforcement learning, and other machine learning algorithms (Rowe and Lester 2013; Harrison, Purdy, and Riedl 2017; Wang et al. 2017; Martin et al. 2018; Tambwekar et al. 2018).
• Strong-story and strong-autonomy systems using narrative planning (Young et al. 2013), with goals (Riedl and Young 2010; Teutenberg and Porteous 2013; Ware and Young 2014; Shirvani and Ware 2019a) and beliefs (Teutenberg and Porteous 2015; Shirvani, Ware, and Farrell 2017; Eger and Martens 2017; Shirvani, Farrell, and Ware 2018).
• Generating believable behavior by modeling agents with emotions (Gebhard 2005; Marsella and Gratch 2009; Shirvani 2019; Shirvani and Ware 2020) or personality (Bahamón and Young 2017; Berov 2017; Shirvani and Ware 2019b; Shvo, Buhmann, and Kapadia 2019).
• Dialog generation in interactive narratives (Cavazza and Charles 2005; Endrass et al. 2013; Ryan, Mateas, and Wardrip-Fruin 2016).
• Social simulations and interactive dramas using rule-based systems (El-Nasr, Yen, and Ioerger 2000; McCoy et al. 2012; 2014) and beat-based architectures (Mateas and Stern 2003).
• Intelligent camera control for virtual environments (Drucker and Zeltzer 1994; Ferreira, Gelatti, and Musse 2002; Jhala and Young 2010; Markowitz et al. 2011).

So far, Camelot has been used to create four different interactive narratives. The Relics of the Kingdom and Murder in Felguard were developed respectively in C++ and Python by two different teams of undergraduate students at the University of Kentucky. The Three Kings was developed in C# and best showcases different features of Camelot and the use of its UI in creating branching narratives. In contrast to the three hand-authored interactive narratives just mentioned, Saving Grandma, also developed in C#, is a story graph interactive narrative that was generated using narrative planning (Ware et al. 2019). Murder in Felguard and The Three Kings are both free to access on Camelot's documentation website.

4 Community Outreach

Our hope is to encourage researchers to adopt Camelot in their relevant research. In previous years, Camelot was introduced in the Playable Experiences track of AIIDE (Samuel et al. 2018). A tutorial on Camelot was also held at AIIDE to showcase the capabilities and use cases of Camelot. This tutorial featured several invited demonstrations of experience managers that used interactive behavior trees (Martens and Iqbal 2019), multi-agent reinforcement learning (Busoniu, Babuska, and De Schutter 2008), the Ensemble engine (Samuel et al. 2015), multi-agent narrative planning (Ware et al. 2019), and murder mystery generation (Mohr, Eger, and Martens 2018). A showcase of Camelot will also be presented at AIIDE 2020's Intelligent Narrative Technologies (INT) workshop.

Our focus for the future of Camelot is to organize the Interactive Narrative Challenge (INCH). The purpose of INCH is to solicit AI EMs from many interactive narrative researchers and present their interactive narratives to human judges for qualitative and quantitative evaluation. INCH provides a practical context for controlled comparisons of interactive narratives across different systems. INCH will feature awards for contributions in various aspects of a narrative, including use of narrative devices (e.g. flashbacks, foreshadowing, suspense, etc.), story coherence, player freedom, replayability, character richness, and so on. As a result of INCH, researchers will have access to free evaluation of their work by human players, as well as the data set of the logs of all playthroughs. These logs can be further used to analyze user experience or to train a data-driven AI narrative system.

5 Conclusions

Translating AI algorithms and technologies into a user-friendly, visual interface is almost always a step in evaluating narrative generation systems via a human audience. The purpose of Camelot is to provide a modular, customizable, and easy-to-use virtual environment for researchers to visualize their stories. Camelot is fully independent of the experience manager that controls it, which allows any programming language or algorithm to easily connect to and take advantage of Camelot. In addition, it enables the controlled comparison of drastically different narrative generation systems and allows researchers to reproduce and build on the work of others.

We plan to take advantage of Camelot in the Interactive Narrative Challenge to encourage researchers to submit their interactive narratives and to provide them with access to qualitative and quantitative evaluation of their work by human judges.

Camelot is an ongoing project, and we plan to improve and expand it to support future interactive narrative authoring techniques.

Downloading Camelot

You can view a comprehensive interactive documentation website for Camelot at: www.cs.uky.edu/~sgware/projects/camelot

The documentation provides details on how to use Camelot and its commands, as well as showcasing its characters, places, items, affordance icons, visual effects, and sound effects. You can also download Camelot for Windows or macOS from the documentation website.

The website provides several applications that can be used as example EMs for Camelot. First, CamelotReplay is an application that reproduces a playthrough from a log file. Next, there are simple EMs that give beginners a place to start working with Camelot. They showcase a character moving from one place to another, trying out different outfits, and buying an item from a merchant. Finally, there are also two full interactive narratives, Murder in Felguard and The Three Kings, that showcase the wide range of things you can do in Camelot.

Acknowledgments

The development of Camelot was supported by the University of New Orleans and the University of Kentucky. We thank Edward T. Garcia, Rachelyn Farrell, and Porscha Banker for their insights and assistance with the project.

References

Alfonso Espinosa, B.; Vivancos Rubio, E.; and Botti Navarro, V. J. 2014. Extending a BDI agents' architecture with open emotional components. Technical report, Department of Information Technology, Universitat Politècnica de València.

Arellano, D.; Varona, J.; and Perales, F. J. 2008. Generation and visualization of emotional states in virtual characters. Computer Animation and Virtual Worlds 19(3-4):259–270.

Bahamón, J. C., and Young, R. M. 2017. An empirical evaluation of a generative method for the expression of personality traits through action choice. In 13th AAAI International Conference on Artificial Intelligence and Interactive Digital Entertainment, 144–150.

Bates, J. 1992. Virtual reality, art, and entertainment. Presence: Teleoperators & Virtual Environments 1(1):133–138.

Berov, L. 2017. Steering plot through personality and affect: an extended BDI model of fictional characters. In Joint German/Austrian Conference on Artificial Intelligence (Künstliche Intelligenz), 293–299. Springer.

Busoniu, L.; Babuska, R.; and De Schutter, B. 2008. A comprehensive survey of multiagent reinforcement learning. IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews) 38(2):156–172.

Cavazza, M., and Charles, F. 2005. Dialogue generation in character-based interactive storytelling. In Proceedings of the First AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, AIIDE'05, 21–26. AAAI Press.

Drucker, S. M., and Zeltzer, D. 1994. Intelligent camera control in a virtual environment. In Graphics Interface, 190–190. Citeseer.

Eger, M., and Martens, C. 2017. Character beliefs in story generation. In Thirteenth Artificial Intelligence and Interactive Digital Entertainment Conference.

El-Nasr, M. S.; Yen, J.; and Ioerger, T. R. 2000. FLAME—fuzzy logic adaptive model of emotions. Autonomous Agents and Multi-Agent Systems 3(3):219–257.

Endrass, B.; Klimmt, C.; Mehlmann, G.; André, E.; and Roth, C. 2013. Designing user-character dialog in interactive narratives: An exploratory experiment. IEEE Transactions on Computational Intelligence and AI in Games 6(2):166–173.

Ferreira, F. P.; Gelatti, G.; and Musse, S. R. 2002. Intelligent virtual environment and camera control in behavioural simulation. In Proceedings of the XV Brazilian Symposium on Computer Graphics and Image Processing, 365–372. IEEE.

Gebhard, P. 2005. ALMA: a layered model of affect. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multi-Agent Systems, 29–36.
Event repre- fect: an extended BDI model of fictional characters. In sentations for automated story generation with deep neural Joint German/Austrian Conference on Artificial Intelligence nets. In Thirty-Second AAAI Conference on Artificial Intel- (Künstliche Intelligenz), 293–299. Springer. ligence. Busoniu, L.; Babuska, R.; and De Schutter, B. 2008. A Martin, L. J.; Sood, S.; and Riedl, M. 2018. Dungeons and comprehensive survey of multiagent reinforcement learning. dqns: Toward reinforcement learning agents that play table- IEEE Transactions on Systems, Man, and Cybernetics, Part top roleplaying games. In Wu, H.; Si, M.; and Jhala, A., eds., C (Applications and Reviews) 38(2):156–172. Proceedings of the Joint Workshop on Intelligent Narrative Technologies and Workshop on Intelligent Cinematography Cavazza, M., and Charles, F. 2005. Dialogue generation and Editing co-located with 14th AAAI Conference on Ar- in character-based interactive storytelling. In Proceedings tificial Intelligence and Interactive Digital Entertainment, of the First AAAI Conference on Artificial Intelligence and INT/WICED@AIIDE 2018, Edmonton, Canada, November Interactive Digital Entertainment, AIIDE’05, 21–26. AAAI 13-14, 2018, volume 2321 of CEUR Workshop Proceedings. Press. CEUR-WS.org. Drucker, S. M., and Zeltzer, D. 1994. Intelligent camera control in a virtual environment. In Graphics Interface, 190– Mateas, M., and Stern, A. 2003. Façade: An experiment in 190. Citeseer. building a fully-realized interactive drama. In Game devel- opers conference, volume 2, 4–8. Eger, M., and Martens, C. 2017. Character beliefs in story generation. In Thirteenth Artificial Intelligence and Interac- McCoy, J.; Treanor, M.; Samuel, B.; Reed, A. A.; Wardrip- tive Digital Entertainment Conference. Fruin, N.; and Mateas, M. 2012. Prom Week: designing past El-Nasr, M. S.; Yen, J.; and Ioerger, T. R. 2000. the game/story dilemma. In Proceedings of the International FLAME—fuzzy logic adaptive model of emotions. Au- Conference on the Foundations of Digital Games, 235–237. tonomous Agents and Multi-agent systems 3(3):219–257. McCoy, J.; Treanor, M.; Samuel, B.; Reed, A. A.; Mateas, Endrass, B.; Klimmt, C.; Mehlmann, G.; André, E.; and M.; and Wardrip-Fruin, N. 2014. Social story worlds with Roth, C. 2013. Designing user-character dialog in inter- Comme il Faut. IEEE Transactions on Computational intel- active narratives: An exploratory experiment. IEEE Trans- ligence and AI in Games 6(2):97–112. actions on Computational Intelligence and AI in Games Mohr, H.; Eger, M.; and Martens, C. 2018. Eliminating the 6(2):166–173. impossible: a procedurally generated murder mystery. In Ferreira, F. P.; Gelatti, G.; and Musse, S. R. 2002. Intelligent Proceedings of the Experimental AI in Games workshop at virtual environment and camera control in behavioural sim- the 14th AAAI international conference on Artificial Intelli- ulation. In Proceedings. XV Brazilian Symposium on Com- gence and Interactive Digital Entertainment. puter Graphics and Image Processing, 365–372. IEEE. Neto, A. F. B., and da Silva, F. S. C. 2012. A computer archi- Gebhard, P. 2005. ALMA: a layered model of affect. In tecture for intelligent agents with personality and emotions. Proceedings of the fourth international joint conference on In Human-Computer Interaction: The Agency Perspective. Autonomous Agents and Multi-Agent Systems, 29–36. Springer. 263–285. Riedl, M. O., and Bulitko, V. 2013. Interactive narrative: An Shvo, M.; Buhmann, J.; and Kapadia, M. 2019. 
An inter- intelligent systems approach. AI Magazine 34(1):67–67. dependent model of personality, motivation, emotion, and Riedl, M. O., and Young, R. M. 2010. Narrative planning: mood for intelligent virtual agents. In Proceedings of the Balancing plot and character. Journal of Artificial Intelli- 19th ACM International Conference on Intelligent Virtual gence Research 39:217–268. Agents, 65–72. Riedl, M.; Saretto, C. J.; and Young, R. M. 2003. Managing Tambwekar, P.; Dhuliawala, M.; Martin, L. J.; Mehta, A.; interaction between users and agents in a multi-agent story- Harrison, B.; and Riedl, M. O. 2018. Controllable neu- telling environment. In Proceedings of the second interna- ral story plot generation via reinforcement learning. arXiv tional joint conference on Autonomous Agents and Multia- preprint arXiv:1809.10736. gent Systems, 741–748. Teutenberg, J., and Porteous, J. 2013. Efficient intent- Roberts, D. L., and Isbell, C. L. 2008. A survey and qual- based narrative generation using multiple planning agents. itative analysis of recent advances in drama management. In Proceedings of the 2013 international conference on Au- International Transactions on Systems Science and Appli- tonomous agents and multi-agent systems, 603–610. Inter- cations, Special Issue on Agent Based Systems for Human national Foundation for Autonomous Agents and Multiagent Learning 4(2):61–75. Systems (IFAAMAS). Rowe, J. P., and Lester, J. C. 2013. A modular reinforce- Teutenberg, J., and Porteous, J. 2015. Incorporating global ment learning framework for interactive narrative planning. and local knowledge in intentional narrative planning. In In Ninth Artificial Intelligence and Interactive Digital En- International Conference on Autonomous Agents and Multi- tertainment Conference, 57–63. agent Systems, 1539–1546. Ryan, J.; Mateas, M.; and Wardrip-Fruin, N. 2016. Charac- Wang, P.; Rowe, J. P.; Min, W.; Mott, B. W.; and Lester, J. C. ters who speak their minds: dialogue generation in Talk of 2017. Interactive narrative personalization with deep rein- the Town. In Twelfth Artificial Intelligence and Interactive forcement learning. In Proceedings of the 26th International Digital Entertainment Conference. Joint Conference on Artificial Intelligence, 3852–3858. Samuel, B.; Reed, A. A.; Maddaloni, P.; Mateas, M.; and Ware, S. G., and Young, R. M. 2014. Glaive: a state-space Wardrip-Fruin, N. 2015. The Ensemble engine: Next- narrative planner supporting intentionality and conflict. In generation social physics. In Proceedings of the Tenth Inter- Tenth Artificial Intelligence and Interactive Digital Enter- national Conference on the Foundations of Digital Games tainment Conference. (FDG 2015), 22–25. Ware, S. G.; Garcia, E.; Shirvani, A.; and Farrell, R. 2019. Samuel, B.; Reed, A.; Short, E.; Heck, S.; Robison, B.; Multi-agent narrative experience management as story graph Wright, L.; Soule, T.; Treanor, M.; McCoy, J.; Sullivan, A.; pruning. In Proceedings of the fifteenth Artificial Intel- et al. 2018. Playable experiences at AIIDE 2018. In Pro- ligence and Interactive Digital Entertainment Conference, ceedings of the Fourteenth Artificial Intelligence and Inter- 87–93. active Digital Entertainment Conference, 275–280. Weyhrauch, P. W. 1997. Guiding interactive drama. Ph.D. Shirvani, A., and Ware, S. G. 2019a. On automatically mo- Dissertation, Carnegie Mellon University. tivating story characters. In Proceedings of the Experimen- Young, R. M.; Thomas, J.; Bevan, C.; and Cassel, B. 2011. 
tal AI in Games workshop at the 15th AAAI international Zócalo: A service-oriented architecture facilitating sharing conference on Artificial Intelligence and Interactive Digital of computational resources in interactive narrative research. Entertainment. In Working Notes of the Workshop on Sharing Interactive Shirvani, A., and Ware, S. G. 2019b. A plan-based person- Digital Storytelling Technologies at ICIDS, volume 11. Cite- ality model for story characters. In Proceedings of the 15th seer. AAAI international conference on Artificial Intelligence and Young, R. M.; Ware, S. G.; Cassell, B. A.; and Robertson, Interactive Digital Entertainment, 188–194. J. 2013. Plans and planning in narrative generation: a re- Shirvani, A., and Ware, S. G. 2020. A formalization of view of plan-based approaches to the generation of story, emotional planning for strong-story systems. discourse and interactivity in narratives. Sprache und Daten- Shirvani, A.; Farrell, R.; and Ware, S. G. 2018. Combin- verarbeitung, Special Issue on Formal and Computational ing intentionality and belief: Revisiting believable character Models of Narrative 37(1-2):41–64. plans. In Fourteenth Artificial Intelligence and Interactive Young, R. M. 2001. An overview of the Mimesis architec- Digital Entertainment Conference, 222–228. ture: Integrating intelligent narrative control into an existing Shirvani, A.; Ware, S. G.; and Farrell, R. 2017. A possible gaming environment. In Working notes of the AAAI spring worlds model of belief for state-space narrative planning. symposium on Artificial Intelligence and Interactive Enter- In Proceedings of the Thirteenth Artificial Intelligence and tainment, 77–81. Interactive Digital Entertainment Conference, 101–107. Shirvani, A. 2019. Towards more believable characters us- ing personality and emotion. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment, 230–232.