Generative Text using Classical Nondeterminism

Ian Horswill
Northwestern University, Evanston IL, USA
ian@northwestern.edu

Abstract

Many recent generative text systems combine a context-free grammar base with some set of extensions such as tagging or inline JavaScript. We argue that the restriction to CFGs is unnecessary and that the standard pattern-directed, nondeterministic control structures common to Prolog, definite-clause grammars, and many HTNs support a superset of these capabilities while still being simple to implement. We describe Step, a generative text system for Unity games based on higher-order HTNs that, so far as we can determine from published descriptions, generalizes this previous work. We then describe syntactic extensions to make Step more natural as a text-authoring language. Finally, we discuss how Step and similar systems can be implemented very compactly in modern mainstream programming languages.

Introduction

Most interactive fiction systems, and many video games, use generative text subsystems to dynamically generate dialog and narration. A number of generative text systems have been published in the game AI and IF literatures (Compton et al. 2014; Evans and Short 2014; Garbe et al. 2019; Horswill 2014, 2016; Mawhorter 2016; Montfort 2007; Nelson 2006; Reed et al. 2011; Ryan et al. 2016).

Although a few of these systems implement full natural language generation pipelines transforming symbolic representations into text, most start from human-authored text and adapt it to context, for example by variable substitution. These latter systems are the focus of this paper. All implement some generalization of context-free grammars. They can be placed in a rough computational hierarchy, in which each system is a proper superset of those before it:

1. The simplest systems, template substitution systems, are less powerful than CFGs. They take a string and substitute the values of variables at specified positions.
2. When a template is allowed to recursively substitute the value of other templates, and alternative versions of templates can be specified, we have a context-free grammar.
3. When templates can take arguments and/or return values, we have something closer to Knuth's attribute grammars (Aho and Ullman 1977).
4. When the grammar allows equality constraints on those arguments, the result is a definite-clause grammar (Pereira and Shieber 1987; Pereira and Warren 1980), or some other unification grammar.
5. When the generator can pass state information from early parts of the sentence to later ones, the result is a first-order, SHOP-style, hierarchical task network (HTN) planner (Nau et al. 1999).
6. When templates can take other templates as arguments, one has a higher-order HTN planner.

Most published systems are of types 1-3. However, types 4-6 are more general and are all easily implemented using the nondeterministic, pattern-directed control structure of Prolog (Clocksin and Mellish 2003) and its siblings. Indeed, Prolog was initially invented for natural language processing, and definite-clause grammars are implemented as a Prolog macro. The original SHOP paper (Nau et al. 1999) discusses its similarity to Prolog. SHOP can also be implemented as a Prolog macro.
Unfortunately, the literature on implementing nondeterministic programming languages has few entry points for non-specialists. The Prolog implementation literature from the 80s and 90s (Roy 1993) focuses on complex, high-performance systems, using the notoriously opaque Warren Abstract Machine (Ait-Kaci 1991; Warren 1983). The more modern literature from the PL community (Friedman et al. 2018) focuses on using functional programming languages with tail-call optimization and first-class continuations. Perhaps as a result, many of the text generators in the research literature lack backtracking, equational constraints (unification), or higher-order operators. This paper has three goals:

• To argue that nondeterministic programming is the appropriate internal representation for CFG-like text generators, regardless of what designer-facing GUI or syntactic sugar is used to aid the authoring process.
• To present Step, a text generation language based on higher-order HTNs, which manages to hide much of the Prolog-ishness of the underlying representation from the author.
• To describe how to implement such systems succinctly in modern, mainstream programming languages.

Step: a Simple Text Planner

Step is a nondeterministic language for writing text generators. It can be used for anything from Tracery-style context-free grammars to very general systems that dynamically inflect nouns and verbs to reflect different cases, voices, tenses, genders, etc. It is written in C# and can be used as a drop-in DLL for Unity projects.

Following Ingold (Ingold 2015), our goal is for the bulk of the text in a script to look like English text with occasional markup, rather than like programming language code with occasional bits of English text. For example:

Describe ?who:
   The ?who/Role, ?who, is a ?who/Age+Gender
   with ?who/SalientDetail. ?who regards you intently.
[end]

This might expand into text such as:

• The bartender, Olivia Strok, is a 40-something woman with blue eyes. She regards you intently.
• The bartender, Jeff Craig, is an elderly man with crooked teeth. He regards you intently.

This is syntactic sugar for a more overtly code-like form where bracketed expressions represent subroutine calls:

Describe ?who:
   [Role ?who], [Mention ?who], is a
   [Age ?who] [Gender ?who] with
   [SalientDetail ?who]. [Mention ?who] regards you intently.
[end]

The ultimate internal representation used by the interpreter is roughly all calls (the true internal representation treats Write as a primitive, separate from a call, and also includes a few other non-call primitives such as assignment statements):

Describe ?who:
   [Write "The"] [Role ?who] [Write ","]
   [Mention ?who] [Write ", is a"]
   [Age ?who] [Gender ?who] [Write "with"]
   [SalientDetail ?who] [Write "."]
   [Mention ?who]
   [Write "regards you intently."]
[end]

We will begin by describing the basic language, and then describe the syntactic sugar that allows it to read more like English text.

Core Language

Following the notion of Step being an HTN, procedures are named tasks rather than predicates (as in Prolog) or non-terminals (as in CFGs). The different ways of achieving a task are called methods, rather than clauses (Prolog) or rules (CFGs). Methods can fail, in which case the system backtracks and tries the next method. If all methods fail, the task fails, and its caller backtracks. Execution completes when the system finds a series of task calls and methods to implement them such that all methods succeed. Primitive tasks can be written directly in C#. Compound tasks are specified by methods decomposing them into subtasks.

Tasks and methods

The simplest Step programs implement CFGs:

[randomly]
Noun: the cat
Noun: the dog

[randomly]
Verb: bites
Verb: licks
Verb: chases

Methods take the form task: text, and state that a method of achieving the specified task is to output the specified text. The [randomly] annotation states that the subsequent task should try its methods in random order. Thus, Noun will randomly generate either "the cat" or "the dog."

Bracketed expressions denote calls whose output is substituted into the output text. Thus:

Sentence: [Noun] [Verb] [Noun]

produces a random version of "the cat/dog bites/licks/chases the cat/dog".
Arguments and local variables

Tasks can take arguments, which are pattern-matched to methods using unification. Local variables have names beginning with ?:

Sentence: [Noun animal] ate [Noun ?]

[randomly]
Noun animal: the cat
Noun animal: the dog
Noun vegetable: the carrot
Noun vegetable: the lettuce
Noun mineral: the dirt
Noun ?: nothing

The first call to Noun will match only its first two methods, while the second will match any, since the variable ? can match anything, including the ? variable in the last method. Thus, Sentence now generates strings of the form "the dog/cat ate the cat/dog/carrot/lettuce/dirt/nothing".

Predicates

Tasks need not generate textual output. They can be called simply to test whether they succeed, in which case they operate as predicates. They can also be called to bind variables passed in as arguments. For example, if we state:

PreferredPronoun bill he:
PreferredPronoun sally she:
PreferredPronoun masha they:

Pronoun ?who:
   [PreferredPronoun ?who ?what]
   [Write ?what]
[end]

then the call [PreferredPronoun masha ?pro] will bind ?pro to "they", while the call [Pronoun masha] will output "they" to the text stream.

Generators

Tasks may also generate multiple alternative results, with or without text output. For example, the following generates different fragments of text describing a person, ?who:

[generator]
Detail [BlueEyes ?who] 3: blue eyes
Detail [CrookedTeeth ?who] 2: crooked teeth
Detail ?who 0: nothing remarkable

Detail takes two arguments: a person (?who) and a level of interestingness. Thus, [Detail ?fred 3] will attempt to generate a level-3-interestingness detail, while [Detail ?fred ?i] will generate the first detail it can, and return its level of interestingness by binding it to ?i.

The bracketed expressions surrounding the first arguments are guards, stating that the method doesn't match unless the predicate is true. The first method is equivalent to:

Detail ?who 3: [BlueEyes ?who] blue eyes

If ?who doesn't have blue eyes, the call to BlueEyes will fail and the system will move on to the next method.

Finally, the [generator] annotation states that Detail is fully non-deterministic. It will not stop with the first solution it finds, but attempt to generate further solutions should the first be rejected. We will see this used below.
Global state variables

In addition to text output and unification of local variables, tasks can also update state in the form of global variables. Unlike ? variables, which are local to their methods and bind through unification, global variables are Capitalized and are explicitly updated using assignment statements.

For example, the Mention task generates names or pronouns, as appropriate, by storing discourse context in the global variables LatestThey, LatestShe, etc.:

Mention LatestThey: they
Mention LatestShe: she
Mention LatestHe: he
Mention ?who:
   [Write ?who]
   [PreferredPronoun ?who ?pro]
   [Update ?who ?pro]
[end]

Update ?who he: [set LatestHe ?who]
Update ?who she: [set LatestShe ?who]
Update ?who they: [set LatestThey ?who]

Since Mention is not marked with [randomly], it will try its methods in the order listed. The first three match the argument to the current discourse context (LatestShe, etc.). The fourth is a catch-all that is attempted only when a pronoun is impossible. It prints the name of the character and then updates the discourse context.

Assignments to variables are undone upon backtracking, as are unifications and text output. The final text output and variable bindings show only the choices made in the final, successful execution path.

Equivalent versions of Mention can be written to generate object-case (him/her/them), possessive (his, her, their), and reflexive (himself, herself, themselves) pronouns, respectively. Alternatively, they could be packaged as a single task that takes two arguments: the object to output and the case in which to inflect it. This can also be extended to support more complex inflections when generating text for languages with more sophisticated case systems than English. For a general discussion of noun/verb agreement and feature marking using matching, see (Covington 1993).

Higher-order tasks

Finally, tasks can take other tasks and task expressions as arguments, allowing the introduction of iteration and aggregation constructs.

For example, the built-in primitive Max allows the author to score different alternative texts and choose the most desirable one. This is how the SalientDetail task referred to earlier works:

SalientDetail ?who:
   [Max ?salience [Detail ?who ?salience]]

Here, Max calls Detail, but repeatedly forces it to backtrack, generating new solutions, until all possible solutions have been generated. Along the way, it checks the value of ?salience and remembers the solution with the highest salience. It then continues execution of the rest of the program, using the highest-scoring solution. The program behaves as if that solution, and only that solution, had been run; pronoun bindings and any other state information are appropriate for the text that is displayed.
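To make the control flow concrete, here is a rough sketch of how a Max-like higher-order primitive could be written against the simplified continuation-passing interpreter described in the Implementation section below. It is only a sketch under assumed names: Call, State, Continuation, and TryCall are the simplified types shown there, and LookupValue is a hypothetical helper that dereferences a variable in a candidate state; none of this is Step's actual internal API.

// Sketch only: a Max-like primitive built on the continuation protocol.
// LookupValue is a hypothetical helper, not part of the real system.
bool TryMax(object scoreVariable, Call scoredTask, State s, Continuation k) {
    State best = null;
    var bestScore = int.MinValue;
    // Reject every candidate so that scoredTask keeps backtracking until it has
    // enumerated all of its solutions, remembering the best-scoring one.
    scoredTask.TryCall(s, candidate => {
        var score = (int)LookupValue(scoreVariable, candidate);
        if (best == null || score > bestScore) { bestScore = score; best = candidate; }
        return false;
    });
    // Only the winning state is handed to the rest of the program.
    return best != null && k(best);
}

Because the inner continuation returns false for every candidate, each rejected state is simply discarded, and only the winning state's text and bindings survive into the rest of the execution, which is the behavior described above.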
Syntactic Sugar

Although the core language is powerful, the many bracketed expressions break up the flow of the text, leading to text that looks like code rather than marked-up English. Although this is unavoidable for tasks like Mention that require control structure and state manipulation, the bulk of an interactive fiction work would consist of straightforward chunks of text that call into a few complex tasks like Mention.

By adding a small amount of syntactic sugar, we can allow this simpler, template-like code to make simple calls without the use of bracket expressions:

• ?variable
  Expands into: [Mention ?variable]
• ?variable/Task
  Expands into: [Task ?variable]
• ?variable/Task1+Task2
  Expands into: [Task1 ?variable] [Task2 ?variable]
  As many tasks as desired can be included.
• ?variable/Relation/Printer
  Expands into: [Relation ?variable ?t] [Printer ?t]
  where ?t is a temporary variable.

The last form is best explained by an example. Suppose there is a predicate called Brother that succeeds whenever its two arguments are brothers. Then the text:

?who has a brother, ?who/Brother/Name

might generate something like "Sharon has a brother, Bill," the call to Brother having bound ?t to Bill, and the call to Name having printed it. If this expression appears inside a backtracking construct, such as Max or ForAll, and Sharon has multiple brothers, then Brother will generate successive brothers, and they will each be Named. Multiple relations can be stacked, as in:

?who/Spouse/Brother/Name

which would print the name of ?who's brother-in-law. Finally, since the "/Name" in the construction is awkward, it can be omitted, in which case the system will call Mention on the brother. Thus:

?who has a brother, ?who/Brother

expands into the equivalent code:

[Mention ?who] has a brother, [Brother ?who ?t] [Mention ?t]

Example

Let's write a more elaborate version of Mention that further adapts referring expressions to context. We'll start by changing the final rule for Mention to call FirstMention the first time ?who is mentioned, and call Subsequent thereafter:

Mention ?who:
   [case ?who]
   Mentioned: [Subsequent ?who]
   [else] [FirstMention ?who] [add ?who Mentioned]
   [end]
   [PreferredPronoun ?who ?pro]
   [Update ?who ?pro]
[end]

FirstMention and Subsequent can be customized however one needs. In this case, we'll have FirstMention generate a full name ("Bill Taylor") and Subsequent generate something shorter ("Bill", "Mr. Taylor" or "Taylor"):

FirstMention ?who: ?who/FirstName+LastName
Subsequent [Familiar ?who]: ?who/FirstName
Subsequent ?who: ?who/Honorific+LastName
Subsequent ?who: ?who/LastName

Subsequent generates the first name for characters who are on familiar terms with the narrator, otherwise honorific and last name when possible, or last name if no honorific is known. We might determine the honorific like this:

[fallible]
Honorific [PhD ?who]: Dr.
Honorific ?who: ?who/PreferredPronoun/Honor

[fallible]
Honor he: Mr.
Honor she: Ms.

Since not all characters have honorifics, the [fallible] annotation declares that Honorific failing isn't considered an error. If Honorific fails, it just triggers backtracking so that Subsequent moves on to its next method.

This is an important point. Nondeterministic control frees the author to call fallible tasks as if they will work, knowing the system will automatically clean up and try something else should they fail. Without it, callers such as Subsequent would need elaborate checks to test whether it was safe to call the fallible task, and those checks would need to be found and updated each time the task was modified.

Implementation

Step is an interpreter written in C#. Each nondeterministic operation (trying a method, calling a task, etc.) takes as an argument a current state, which contains everything that can be changed by the operation (text generated, variable bindings). When successful, the task produces a new state, perhaps with additional text or bindings.

This requires that states be represented as immutable data. When we make a new state, we preserve the original one in case we backtrack. Note that nondeterministic languages obey strict stack discipline, so states can be stack allocated when implemented in languages that support it.
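The State type itself is not shown in the paper; the following is a minimal sketch of one possible representation, assuming (hypothetically) that a state bundles the accumulated output text with a persistent linked list of variable bindings, together with the continuation type used by the code below. As noted at the end of this section, the real system actually passes text output and bindings as separate arguments rather than as one object.

// Sketch only: an immutable state and the continuation type.
// All names here are illustrative, not the actual Step API.
delegate bool Continuation(State newState);

class Binding {
    public readonly object Variable, Value;
    public readonly Binding Next;                  // persistent linked list
    public Binding(object variable, object value, Binding next)
    { Variable = variable; Value = value; Next = next; }
}

class State {
    public readonly string Output;                 // text generated so far
    public readonly Binding Bindings;              // variable bindings so far
    public State(string output, Binding bindings)
    { Output = output; Bindings = bindings; }

    // Every "update" returns a new State; the old one survives for backtracking.
    public State AddOutput(string text) => new State(Output + text, Bindings);
    public State Bind(object variable, object value)
        => new State(Output, new Binding(variable, value, Bindings));
}

Because a State is never mutated, backtracking requires no undo machinery: the interpreter simply resumes from an earlier State object, which is also what makes the stack discipline mentioned above possible.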
Control structure

Since nondeterministic operations can generate zero, one, or many states, they generate candidate states one by one, feeding them to a continuation, a callback that approves or rejects the new state. As soon as the continuation returns true for a generated state, the operation returns true. If the operation runs out of new states without the continuation returning true, it returns false, meaning the operation failed.

A simplified version of the core interpreter is as follows. A CompoundTask is an object containing a set of Method objects and a C# method called TryTask:

bool TryTask(object[] args, State s, Continuation k) {
    foreach (var m in Methods)
        if (m.TryMethod(args, s, k))
            return true;
    return false;
}

This tries each method, allowing it to generate new states (new text, bindings) and pass them to the continuation. If the continuation accepts one of the states generated by a method, the method immediately returns true and so does the task. If a method fails (returns false), the task tries subsequent methods until some method is successful. If no method is successful, the task fails and returns false.

A Method is an argument pattern to match the arguments against, together with a linked list of Calls to different tasks. Its TryMethod method looks like:

bool TryMethod(object[] args, State s, Continuation k)
    => s.MakeFrame().Unify(args, Pattern, out var newState)
       && FirstCall.TryCall(newState, k);

MakeFrame makes fresh logic variables for the method's arguments, resulting in a new state. The method then unifies its pattern with its arguments, resulting in a new binding list, and hence another new state. It then tries the first call in its chain, passing it the continuation and new state. If either the unification or the call fails, the method fails.
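Neither MakeFrame nor Unify is shown in the paper. As a rough sketch of the core idea, the following unifies a single pair of terms over the binding list sketched earlier; the real Unify invoked by TryMethod presumably unifies whole argument arrays, and MakeFrame would allocate the fresh LogicVariables. All names are illustrative, and the sketch omits structured terms and the occurs check.

// Sketch only: single-term unification over an immutable binding list.
class LogicVariable {
    public readonly string Name;
    public LogicVariable(string name) { Name = name; }
}

static object Walk(object term, Binding bindings) {
    while (term is LogicVariable v) {
        var b = bindings;
        while (b != null && !ReferenceEquals(b.Variable, v)) b = b.Next;
        if (b == null) return term;   // unbound variable: leave it as-is
        term = b.Value;               // follow the binding and keep dereferencing
    }
    return term;
}

static bool Unify(object a, object b, State s, out State newState) {
    a = Walk(a, s.Bindings);
    b = Walk(b, s.Bindings);
    newState = s;
    if (ReferenceEquals(a, b)) return true;                          // same variable or object
    if (a is LogicVariable) { newState = s.Bind(a, b); return true; }
    if (b is LogicVariable) { newState = s.Bind(b, a); return true; }
    return Equals(a, b);                                             // constants unify iff equal
}

Note that failure produces no side effects: the caller still holds the original State, so backtracking is automatic.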
A Call has fields Task (the task to call), Arguments (to pass to it), and Next (the next Call in the method). It also has its own TryCall method, which attempts to recursively call TryTask, passing a continuation that invokes either the next call in the method's chain of calls or, if we're at the end of the chain, its own continuation:

bool TryCall(State s, Continuation k)
    => Task.TryTask(Arguments, s,
         ns => (Next == null) ? k(ns) : Next.TryCall(ns, k));

Finally, the game's C# code can invoke a Step task by calling its TryTask method and passing it a continuation that saves the final state and returns true:

string Call(Task t, object[] args) {
    State result = null;
    t.TryTask(args, EmptyState,
        s => { result = s; return true; });
    return result.Output;
}

Alternatively, it could return the entire State, including variable bindings, allowing the game to retain discourse context from call to call. It can also be used to get information about the generated utterance, to implement a version of Expressionist's content bundles (Ryan et al. 2016).

The code above is highly simplified. The real version of TryTask tries Methods in random order when the task is tagged with [randomly]. It also passes TryMethod a continuation that counts the number of solutions, since non-[generator] tasks are easier to debug if they don't generate multiple solutions, and non-[fallible] tasks are easier to debug if they throw exceptions upon failure.

The real version of TryCall includes a check for whether the task to call is a primitive task, in which case its C# implementation is called directly. The real system also passes the state as separate arguments for text output and variable bindings rather than as a unified State object.
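As an illustration of the primitive-task case mentioned above, here is a hedged sketch of what a Write-style primitive might look like under the simplified protocol and the State sketch shown earlier; the actual primitive interface is not described in the paper, and these names are illustrative.

// Sketch only: a primitive task produces its candidate states directly in C#.
class WritePrimitive {
    public bool TryTask(object[] args, State s, Continuation k)
        // Exactly one candidate state: the input state with the text appended.
        => k(s.AddOutput(args[0].ToString()));
}

A primitive with several possible results would instead call k once per candidate state, returning true as soon as the continuation accepts one and false if it runs out, exactly as TryTask does for compound tasks.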
So Step is not a solution to the prob- implemented in a few research games. SpyFeet (Reed et al. lem of localization to multiple languages. 2011a, 2011b) can dynamically vary generation based on Finally, Step demonstrates that full pattern-directed, non- parameters such as character personality. Bot Colony deterministic programming can be implemented nearly as (Joseph 2012) generates English from logical forms, alt- easily as the simpler CFG systems. We strongly recommend hough the authors’ paper focused primarily on NL under- the use of pattern-directed invocation in similar projects. standing rather than generation. MKULTRA (Horswill 2014) also generated surface forms from logical forms, however it did so using definite clause grammars rather than Acknowledgements a full NLG pipeline. I would like to thank the reviewers, Ethan Robison, Rob Other games and generative text systems use some variant Zubek and Matt Viglione for advice and comments, and of CFGs. Tracery (Compton et al. 2014; Compton and Ethan for being willing to kick the tires on the system. References by Reduction to Symbolic Visibly Pushdown Automata.” in Intelligent Narrative Technologies (INT). Snowbird, UT: AAAI Aho, Alfred V. and Jeffrey D. Ullman. 1977. “Principles of Press. Compiler Design (Addison-Wesley Series in Computer Science Pereira, Fernando C. N. and Stuart Shieber. 1987. Prolog and and Information Processing).” Natural Language Analysis. Brookline, MA: Microtome Ait-Kaci, Hassan. 1991. Warren’s Abstract Machine: A Tutorial Publishing. Reconstruction. Cambridgte, MA: MIT Press. Pereira, Fernando C. N. and David H. D. Warren. 1980. “Definite Clocksin, William F. and Christopher S. Mellish. 2003. Clause Grammars for Language Analysis - A Survey of the Programming in Prolog: Using the ISO Standard. 5th ed. New Formalism and a Comparison with Augmented Transition York, NY: Springer. Networks.” Artificial Intelligence 13(231–278). Compton, Kate, Benjamin Filstrup, and Michael Mateas. 2014. Reed, Aaron A, Ben Samuel, Anne Sullivan, Ricky Grant, April “Tracery : Approachable Story Grammar Authoring for Casual Grow, Justin Lazaro, Jennifer Mahal, Sri Kurniawan, Marilyn Users.” Papers from the 2014 AIIDE Workshop, Intelligent Walker, and Noah Wardrip-fruin. 2011a. “A Step Towards the Narrative Technologies (7th INT, 2014) 64–67. Future of Role-Playing Games : The SpyFeet Mobile RPG A Step Compton, Kate and Michael Mateas. 2015. “Casual Creators.” Towards the Future of Role-Playing Games : The SpyFeet Mobile Proceedings of the Sixth International Conference on RPG Project.” in Artificial Intelligence and Interactive Digital Computational Creativity June. Entertainment (AIIDE). Covington, Michael A. 1993. Natural Language Processing for Reed, Aaron A, Ben Samuel, Anne Sullivan, Ricky Grant, April Prolog Programmers. Prentice Hall. Grow, Justin Lazaro, Jennifer Mahal, Sri Kurniawan, Marilyn Walker, and Noah Wardrip-fruin. 2011b. “SpyFeet : An Exercise Evans, Richard and Emily Short. 2014. “Versu - A Simulationist RPG.” in Foundations of Digital Games. Bordeaux, France. Storytelling System.” IEEE Transactions on Computational Intelligence and AI in Games 6(2):113–30. Reed, Aaron A., Ben Samuel, Anne Sullivan, Ricky Grant, April Grow, Justin Lazaro, Jennifer Mahal, Sri Kurniawan, Marilyn Friedman, Daniel P., William E. Byrd, Oleg Kiselyov, and Jason Walker, and Noah Wardrip-Fruin. 2011. “A Step Towards the Hemann. 2018. The Reasoned Schemer. 2nd editio. MIT Press. 
References

Aho, Alfred V., and Jeffrey D. Ullman. 1977. Principles of Compiler Design. Addison-Wesley Series in Computer Science and Information Processing. Addison-Wesley.

Ait-Kaci, Hassan. 1991. Warren's Abstract Machine: A Tutorial Reconstruction. Cambridge, MA: MIT Press.

Clocksin, William F., and Christopher S. Mellish. 2003. Programming in Prolog: Using the ISO Standard. 5th ed. New York, NY: Springer.

Compton, Kate, Benjamin Filstrup, and Michael Mateas. 2014. "Tracery: Approachable Story Grammar Authoring for Casual Users." Papers from the 2014 AIIDE Workshop, Intelligent Narrative Technologies (7th INT), 64–67.

Compton, Kate, and Michael Mateas. 2015. "Casual Creators." In Proceedings of the Sixth International Conference on Computational Creativity.

Covington, Michael A. 1993. Natural Language Processing for Prolog Programmers. Prentice Hall.

Evans, Richard, and Emily Short. 2014. "Versu – A Simulationist Storytelling System." IEEE Transactions on Computational Intelligence and AI in Games 6(2):113–30.

Friedman, Daniel P., William E. Byrd, Oleg Kiselyov, and Jason Hemann. 2018. The Reasoned Schemer. 2nd ed. MIT Press.

Garbe, Jacob, Max Kreminski, Ben Samuel, Noah Wardrip-Fruin, and Michael Mateas. 2019. "StoryAssembler: An Engine for Generating Dynamic Choice-Driven Narratives." In Foundations of Digital Games (FDG). San Luis Obispo, CA, USA: ACM Press.

Horswill, I. D. 2016. "Dear Leader's Happy Story Time: A Party Game Based on Automated Story Generation." In AAAI Workshop – Technical Report, Vol. WS-16-21.

Horswill, Ian. 2014. "Architectural Issues for Compositional Dialog in Games." In AAAI Workshop – Technical Report, Vol. WS-14-17.

Ingold, Jon. 2015. "Adventure in Text: Innovating in Interactive Fiction." Game Developers Conference. San Francisco, CA: UBM Techweb.

Joseph, Eugene. 2012. "Bot Colony – a Video Game Featuring Intelligent Language-Based Interaction with the Characters." In Workshop on Games and NLP (GAMNLP). Raleigh, North Carolina, USA: AAAI Press.

Martens, Chris. 2015. "Programming Interactive Worlds with Linear Logic." Carnegie Mellon University.

Mawhorter, Peter. 2016. "Artificial Intelligence as a Tool for Understanding Narrative Choices: A Choice-Point Generator and a Theory of Choice Poetics." University of California, Santa Cruz.

Montfort, Nick. 2007. "Generating Narrative Variation in Interactive Fiction." University of Pennsylvania.

Nau, Dana, Yue Cao, Amnon Lotem, and Hector Munoz-Avila. 1999. "SHOP: Simple Hierarchical Ordered Planner." Pp. 968–73 in Proceedings of the 16th International Joint Conference on Artificial Intelligence. Stockholm, Sweden: Morgan Kaufmann Publishers Inc.

Nelson, Graham. 2006. "Natural Language, Semantic Analysis, and Interactive Fiction."

Osborn, Joseph C., James Ryan, and Michael Mateas. 2017. "Analyzing Expressionist Grammars by Reduction to Symbolic Visibly Pushdown Automata." In Intelligent Narrative Technologies (INT). Snowbird, UT: AAAI Press.

Pereira, Fernando C. N., and Stuart Shieber. 1987. Prolog and Natural Language Analysis. Brookline, MA: Microtome Publishing.

Pereira, Fernando C. N., and David H. D. Warren. 1980. "Definite Clause Grammars for Language Analysis – A Survey of the Formalism and a Comparison with Augmented Transition Networks." Artificial Intelligence 13:231–278.

Reed, Aaron A., Ben Samuel, Anne Sullivan, Ricky Grant, April Grow, Justin Lazaro, Jennifer Mahal, Sri Kurniawan, Marilyn Walker, and Noah Wardrip-Fruin. 2011a. "A Step Towards the Future of Role-Playing Games: The SpyFeet Mobile RPG Project." In Proceedings of the Seventh Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE-11). Stanford, CA.

Reed, Aaron A., Ben Samuel, Anne Sullivan, Ricky Grant, April Grow, Justin Lazaro, Jennifer Mahal, Sri Kurniawan, Marilyn Walker, and Noah Wardrip-Fruin. 2011b. "SpyFeet: An Exercise RPG." In Foundations of Digital Games. Bordeaux, France.

Roy, Peter Van. 1993. 1983–1993: The Wonder Years of Sequential Prolog Implementation. Paris, France.

Ryan, James, Michael Mateas, and Noah Wardrip-Fruin. 2016. "Characters Who Speak Their Minds: Dialogue Generation in Talk of the Town." In Artificial Intelligence and Interactive Digital Entertainment (AIIDE). Burlingame, CA: AAAI Press.

Ryan, James Owen, Andrew Max Fisher, Taylor Owen-Milner, Michael Mateas, and Noah Wardrip-Fruin. 2015. "Toward Natural Language Generation by Humans." In Intelligent Narrative Technologies (INT). Santa Cruz, California: AAAI Press.

Ryan, James, Ethan Seither, Michael Mateas, and Noah Wardrip-Fruin. 2016. "Expressionist: An Authoring Tool for In-Game Text Generation." Pp. 221–33 in International Conference on Interactive Digital Storytelling (Lecture Notes in Computer Science).

Warren, David H. D. 1983. An Abstract Prolog Instruction Set. Menlo Park, CA.