<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Knowledge Goal Recognition for Interactive Narratives</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Cory Siler</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Stephen G. Ware</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Narrative Intelligence Lab, Department of Computer Science, University of Kentucky</institution>
          ,
          <addr-line>Lexington, KY, USA 40506</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Player goals in games are often framed in terms of achieving something in the game world, but this framing can fail to capture goals centered on the player's own mental model, such as seeking the answers to questions about the game world. We use a least-commitment model of interactive narrative to characterize these knowledge goals and the problem of knowledge goal recognition. As a first attempt to solve the knowledge goal recognition problem, we adapt a classical goal recognition paradigm, but in our empirical evaluation the approach suffers from a high rate of incorrectly rejecting a synthetic player's true goals; we discuss how handling of player goals could be made more robust in practice.</p>
      </abstract>
      <kwd-group>
        <kwd>goal recognition</kwd>
        <kwd>interactive narrative</kwd>
        <kwd>narrative planning</kwd>
        <kwd>question answering</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Goal recognition is the task of inferring the intentions behind an agent's actions. When the agent in question is a human game-player, it can serve as a form of player modeling [<xref ref-type="bibr" rid="ref1">1</xref>] that helps the system predict what the player will do next. Proposed applications have included tailoring procedurally-generated quests to a player's preferences in an adventure game [<xref ref-type="bibr" rid="ref2">2</xref>], assessing the player's understanding in an educational application [<xref ref-type="bibr" rid="ref3">3</xref>], or detecting when the player's actions threaten to derail the authorial intent in a story-focused experience [<xref ref-type="bibr" rid="ref4">4</xref>].
      </p>
      <p>
        Goal recognition has been studied extensively in a games context [<xref ref-type="bibr" rid="ref2">2</xref>], but the work so far has largely centered around goals about the state of the game world: achievement goals to make a particular fact about the world state true, or maintenance goals to prevent a fact from being undone [<xref ref-type="bibr" rid="ref5">5</xref>]. Although many goals in games fit into this framework—e.g., obtaining an item, getting to a location, or defeating an adversary are achievement goals; keeping a character alive is a maintenance goal—players may also pursue goals that cannot be expressed solely in terms of world state.
      </p>
      <p>
        For instance, Ram [<xref ref-type="bibr" rid="ref6">6</xref>] defines knowledge goals as the intentions of an agent to extend or organize its own mental structures. Knowledge goals encompass players' desire to explore the game world, uncover mysteries about past occurrences, and explain unexpected findings. They are central to genres such as mystery games [<xref ref-type="bibr" rid="ref7">7</xref>] as well as tutoring and training systems [<xref ref-type="bibr" rid="ref8">8</xref>]. A goal recognition approach that does not account for exploratory behavior is liable to fail even when dealing with an accomplishment-focused player, since a knowledge goal can be instrumental to a world-state goal: obtaining an item involves figuring out where to find it, and staying alive involves figuring out which characters have harmful intent.
      </p>
      <p>
        In interactive narrative games, the line between knowledge goals and non-knowledge goals is further blurred: as gameplay progresses, the player extends their model of the story by discovering new information through their interactions, while at the same time their choices constrain the range of possible stories that could emerge. In an interactive narrative architecture using an experience manager—an artificially intelligent agent that controls the non-player elements of the game to adapt to the player's decisions—the experience manager may have its own goals for the story, reflecting the game designers' intent for the player's experience. An experience manager that recognizes the player's goals can find the overlap with its own goals and guide the story down a path that satisfies both [<xref ref-type="bibr" rid="ref9 ref10">9, 10</xref>].
      </p>
      <p>
        This paper's contributions are as follows. First, we provide a framework for characterizing knowledge goals and knowledge goal recognition in an interactive narrative environment. Because the player may have limited awareness of how the game will respond to their decisions, classical frameworks that make strong assumptions about an agent's model of environment dynamics are inadequate. Instead, we draw on a formal model of discourse from semantics and pragmatics that has the asking and answering of questions as its basic operations [<xref ref-type="bibr" rid="ref11">11</xref>]. Analogous to how a question prompts the respondent to extend the body of information mutually known to both parties in the dialogue, a player acting in an interactive narrative game prompts the game to extend the mutually-known body of information about the story. Our model can capture traditional goal types, but also knowledge goals reflecting implied questions for the story to answer; we define these goals with reference to a cognitive model of literal, spoken questions [<xref ref-type="bibr" rid="ref12">12</xref>].
      </p>
      <p>
        Second, we present a preliminary study of algorithms for identifying a player's knowledge and achievement goals from the player's actions. We adapt a planning-based goal recognition paradigm [<xref ref-type="bibr" rid="ref13">13</xref>] into our framework to define these algorithms, and we empirically evaluate them on synthetic player agents. Our experiments reveal important shortcomings in the algorithms; robust goal recognition for diverse goal types remains an open problem, so we conclude by discussing a research agenda for addressing it.
      </p>
    </sec>
    <sec id="sec-1b">
      <title>2. Related Work</title>
      <p>
        Our model of interactive narrative draws from others that treat the story world as only "existing" as far as the player is aware of it, rather than simulating the entire world. By modeling the player's knowledge of the story so far, an experience manager can delay decisions about properties and events outside the player's perception and use those decisions as tools to adjust the course of the story in response to the player's actions. This idea has been the basis of approaches to preventing player derailment of experience manager goals [<xref ref-type="bibr" rid="ref14">14</xref>], saving computational resources by deferring [<xref ref-type="bibr" rid="ref15">15</xref>] or shortcutting [<xref ref-type="bibr" rid="ref16">16</xref>] how offscreen events are decided, and increasing the depth [<xref ref-type="bibr" rid="ref17">17</xref>] and diversity [<xref ref-type="bibr" rid="ref18">18</xref>] of generated stories.
      </p>
      <p>
        Horswill [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] introduces the term story state to refer
to the evolving set of design decisions about a story over
the course of its creation, whether the creation involves
the changing decisions of an external author or
improvisation of story-world background within an interactive
narrative during the unfolding of the narrative events
themselves. The state-transition model we use in our
framework operates on a form of story state.
      </p>
      <p>
        Baikadi et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] present a machine learning model
for recognizing player goals where the player may not
be aware of all game-supplied goals at the start, and
may take exploratory actions to discover new gameplay
goals for themself. Interactive narratives are modeled
as a graph of narrative discovery events where story-driving information is revealed; with the testbed of the
Crystal Island [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] educational mystery game, Baikadi
et al. allude to the idea we build on here of handling
knowledge-seeking and objective-achieving in a unified
framework. However, their approach uses training data
to build domain-specific goal recognition models, while
ours uses domain-independent planning to try to
recognize goals in the absence of training data.
      </p>
      <p>
        Goal recognition has been explored in a planning-centric narrative context before, albeit focused on world-state models of planning. Farrell and Ware [<xref ref-type="bibr" rid="ref21">21</xref>] take a narrative generation framework that models story characters' beliefs and intentions [<xref ref-type="bibr" rid="ref22">22</xref>], and build upon it to identify the intentions and beliefs of an existing agent from its actions. Cardona-Rivera and Young [<xref ref-type="bibr" rid="ref9">9</xref>] present algorithms to recognize a player's intentions for the narrative trajectory; they also propose that interactive narrative players predict an experience manager's intentions for the narrative trajectory, and that plan recognition algorithms can serve as a proxy for how players make these predictions.
      </p>
      <p>
        Meneguzzi and Pereira [<xref ref-type="bibr" rid="ref13">13</xref>] give a survey of planning-oriented approaches to goal recognition in general. They taxonomize the approaches by the type of environment (stochasticity/determinism and complete/incomplete information), the extent of the goal recognizer's information (complete awareness of the target agent's actions vs. missing or noisy information), the target agent's behavior (whether it plans optimally and whether it tries to thwart goal recognition), and the form of the solution (whether candidate goals are assigned a probability or a qualitative ordering for how likely they are to be the true goal, or else a binary accept/reject decision). Recently, active goal elicitation has been proposed by Amos-Binks and Cardona-Rivera [<xref ref-type="bibr" rid="ref23">23</xref>], where the recognizer affects the agent's environment.
      </p>
      <p>
        Building on the analogy of interactive narrative as a
dialogue between player and game [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], player and author
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], or player and co-participants [25], our framework
borrows from a model of explicit dialogue from semantics
and pragmatics by Roberts [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] with foundations from
Stalnaker [26]. We adapt the concept of the common
ground, the set of propositions that dialogue participants
mutually accept as true, and the progression of a
dialogue as a sequence of moves from among two types,
assertions that add to the common ground (analogous
to our observation sets) and questions that define which
subsequent moves are relevant (analogous to our player
actions). However, a key difference between our
assumptions and those of the dialogue models is that moves in an
interactive narrative can constrain or determine which
facts are true rather than simply revealing static facts
that were already true.
      </p>
      <p>Another perspective on explicit question-asking comes
from Rothe et al. [27], who ask what humans are likely
to ask in the context of the game Battleship. They define
a “good” question in terms of maximizing information
gain. By studying players empirically, they conclude that
people tend to choose the most informative questions
when presented with a list of question options, but not
when generating their own questions from scratch. Their
environment is restricted compared to our interactive
narrative focus: They assume that the Battleship
players are asking questions solely to optimize their chances
of winning the game, and informativeness of questions
translates directly to increased ability to win, whereas we
consider knowledge goals that sometimes reflect
questions asked simply for their own sake.</p>
    </sec>
    <sec id="sec-2">
      <title>3. A Model of Interactive Narrative and Goals</title>
      <sec id="sec-2-1">
        <title>After  is taken, the game determines and reveals the</title>
        <p>
          chosen observation set  ∈ (, ). This encompasses
the direct results of the player action as well as anything
In Section 3.1, we propose a representation for how in- else that happens in the story world before the player is
formation is revealed over the progression of interactive able to act next (e.g., NPC actions). The new common
narrative. In Section 3.2, we define a class of knowledge ground then becomes ′ = ( ∪ ).
goals with respect to this framework. As a more specific language for representing the
common ground, we consider the knowledge representation
from QUEST [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ], which has seen prior use for
model3.1. Interactive Narrative Domains ing audience reasoning about narratives [28] and can
At a high level, we model an interactive narrative domain encode the causal relationships [29] and character
intenas a state-transition model similar to a Markov decision tions [30] commonly emphasized in plan-based narrative
process, but nonstochastic: there is a known set of possi- generation.
ble outcomes for taking a given action in a given situation, A QUEST knowledge structure (QKS) is a directed
but no assignment of probabilities to outcomes. graph where nodes are annotated with semantic
informa
        </p>
        <p>The domain is a tuple ⟨P, C, A, λ, ω⟩. P is a universe of propositions. C is a universe of proposition sets c ⊆ P. We call c the common ground, in reference to the discourse model by Stalnaker [26], as it represents the information about a story mutually known between the player and the game at a given time during a playthrough. Besides not being self-contradictory, we place no general restrictions on the contents of a common ground, although we propose a more restricted implementation later in this section. A common ground functions like a state, but unlike a world state, which only tracks facts that are true in the present moment of the story, a common ground describes the story as a whole and will only grow over time; if a propositional representation of world state needs to be tracked within the model, the propositions should be defined to contain time indices or other ordering constraints that distinguish the past of the story from the present in which the player is currently interacting. A is a set of player actions. λ(c) is a function mapping a common ground c ∈ C to the set of player actions that are legal from that common ground. ω(c, a) is a function that maps a common ground c ∈ C and an action a ∈ λ(c) to a set of possible resultant observation sets, each of which takes the form of a proposition set o ⊆ P where (c ∪ o) ∈ C.</p>
        <p>We use this formalism to model the evolution of a player's knowledge over the course of a playthrough of an interactive narrative game. At any given time, the current common ground c encompasses all of the facts revealed to the player about the story so far. When the player chooses an action a, they know the result will be some observation set in ω(c, a), but they cannot necessarily predict which one. This can model game architectures where the actual results of actions are predetermined but depend on information unknown to the player (and therefore unmodeled in the common-ground representation), but also architectures that use least- or late-commitment experience management or dynamic procedural content generation, where that information is altogether undecided by the system until it is needed. After a is taken, the game determines and reveals the chosen observation set o ∈ ω(c, a). This encompasses the direct results of the player action as well as anything else that happens in the story world before the player is able to act next (e.g., NPC actions). The new common ground then becomes c′ = (c ∪ o).</p>
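        <p>To make the formalism concrete, the following is a minimal Python sketch of the tuple ⟨P, C, A, λ, ω⟩, assuming that propositions and actions are plain strings; the type and attribute names are ours, not part of the formal model.</p>
        <preformat>
# Sketch of an interactive narrative domain (P, C, A, lambda, omega).
# A common ground is a frozenset of propositions; an observation set o is a
# proposition set chosen so that (c | o) is also a valid common ground.
from dataclasses import dataclass
from typing import Callable, FrozenSet, Set

CommonGround = FrozenSet[str]     # an element of C (a subset of P)
ObservationSet = FrozenSet[str]   # o

@dataclass(frozen=True)
class Domain:
    propositions: Set[str]                                        # P
    legal_actions: Callable[[CommonGround], Set[str]]             # lambda(c)
    outcomes: Callable[[CommonGround, str], Set[ObservationSet]]  # omega(c, a)

def advance(c, o):
    # The new common ground once an observation set is revealed: c' = c U o.
    return frozenset(c | o)
        </preformat>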
        <p>As a more specific language for representing the common ground, we consider the knowledge representation from QUEST [<xref ref-type="bibr" rid="ref12">12</xref>], which has seen prior use for modeling audience reasoning about narratives [28] and can encode the causal relationships [29] and character intentions [30] commonly emphasized in plan-based narrative generation.</p>
        <p>A QUEST knowledge structure (QKS) is a directed graph where nodes are annotated with semantic information and where nodes and arcs each have one of several predefined types; see Graesser et al. [31] for an extended account of the types and their constraints. We focus on a few types: event nodes, which correspond to character actions or happenings in the world; state nodes, which correspond to something being true in the world; consequence arcs, which express a causal relationship between two nodes; goal nodes, which define in-story character goals (distinct from our model of player goals; we omit these from our examples for brevity and clarity); outcome arcs, which show the motivation of event nodes by goal nodes; and reason arcs, which link goal nodes together as character plans.</p>
        <p>To relate this to the abstract model from above, we can define the propositions in P to indicate the existence of QKS nodes and edges, so that a common ground c ∈ C corresponds to a QKS. When the player takes an action a, each observation set in ω(c, a) will include at minimum a new event node expressing that a took place and consequence arcs to that event node from prior events or facts that made the player action possible.</p>
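        <p>Under the same assumptions, a QKS can be sketched as a small typed graph; the class and field names below are illustrative rather than taken from the QUEST literature.</p>
        <preformat>
# Sketch of a QUEST knowledge structure (QKS): a directed graph whose nodes
# and arcs each carry one of a few predefined types.
from dataclasses import dataclass, field
from enum import Enum

class NodeType(Enum):
    EVENT = "event"   # character action or happening in the world
    STATE = "state"   # something being true in the world
    GOAL = "goal"     # in-story character goal

class ArcType(Enum):
    CONSEQUENCE = "consequence"  # causal relationship between two nodes
    OUTCOME = "outcome"          # motivation of an event node by a goal node
    REASON = "reason"            # links goal nodes together as character plans

@dataclass(frozen=True)
class Node:
    label: str
    kind: NodeType

@dataclass(frozen=True)
class Arc:
    source: str    # label of the source node
    target: str    # label of the target node
    kind: ArcType

@dataclass
class QKS:
    nodes: dict = field(default_factory=dict)  # label -> Node
    arcs: set = field(default_factory=set)     # set of Arc

    def add(self, *elements):
        # Growing the QKS corresponds to growing the common ground.
        for e in elements:
            if isinstance(e, Node):
                self.nodes[e.label] = e
            else:
                self.arcs.add(e)
        </preformat>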
        <p>We describe an example of how we represent a common-ground change in a hypothetical adventure game. To start, the player is informed that their character is at their cottage and that a bandit has just broken into the cottage and left with some stolen money. These facts make up the common ground c. We illustrate an initial QKS representation in Figure 1, including a state node reflecting the player's location and a network of state and event nodes reflecting the burglary backstory. (Depending on the architecture, the game may have predetermined where the coin and the bandit went after the burglary, but this information is not yet part of the story as far as the player is aware, so it is not modeled in the common ground.)</p>
        <p>[Figure 1: Example of a common ground in the QKS representation.]</p>
        <p>The player is presented with a choice of actions to go to one of the two other locations in the game world—the market and the camp. The player chooses to go to the market (action a). From their perspective, there may be one of multiple outcomes (the full range of possibilities makes up ω(c, a)): they may encounter the bandit there, and may or may not witness the bandit spending the stolen money there, or else they may not find the bandit and therefore conclude that the bandit went to the camp instead. These map to candidates for how to update the common ground, as illustrated in Figure 2.</p>
        <p>The game mechanics resolve what the actual outcome should be: for instance, the player is informed that they see the bandit buying a potion with the money. We update the QKS to include the corresponding observation set to represent the new situation, as with the top-right QKS in Figure 2. The observation set adds information both forward in time, such as the event node for the player's action of traveling, and backward in time, such as the consequence arc reflecting the past occurrence of the bandit's arrival at the market.</p>
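        <p>Reusing the QKS sketch above, this common-ground change might be rendered roughly as follows; the node labels are invented for illustration and are not the exact contents of Figures 1 and 2.</p>
        <preformat>
# Initial common ground: player at the cottage, burglary backstory.
initial = QKS()
initial.add(
    Node("player-at-cottage", NodeType.STATE),
    Node("bandit-broke-in", NodeType.EVENT),
    Node("bandit-has-coin", NodeType.STATE),
    Arc("bandit-broke-in", "bandit-has-coin", ArcType.CONSEQUENCE),
)

# One observation set in omega(c, "go to market"): the player travels and
# witnesses the bandit buying a potion with the stolen coin.
observation = [
    Node("player-goes-to-market", NodeType.EVENT),
    Node("bandit-at-market", NodeType.STATE),
    Node("bandit-buys-potion", NodeType.EVENT),
    Arc("player-at-cottage", "player-goes-to-market", ArcType.CONSEQUENCE),
    Arc("bandit-has-coin", "bandit-buys-potion", ArcType.CONSEQUENCE),
    Arc("bandit-at-market", "bandit-buys-potion", ArcType.CONSEQUENCE),
]
initial.add(*observation)  # c' = c U o: the QKS now records the new events
        </preformat>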
      </sec>
      <sec id="sec-2-2">
        <title>Suppose we have a log of actions the player has taken</title>
        <p>3.2. Goals during an interactive narrative. The log may end
beAbstractly, we define a player goal as a formula over fore the player has completed any identifiable goals,
propositions in  . We say that a goal  is satisfied in but an observer—e.g., a game designer analyzing the
common ground  if  |= . In the QKS model, this playthrough in hindsight or an experience manager
trytranslates to a goal specifying what nodes and arcs can be ing to adapt to the player for later interactions—may
added to make the QKS satisfactory. This representation nonetheless need to reason about the player’s
intendoes not lose generality over a world-state model since tions. How do we identify possible player goals, including
it can specify world-state goals using state nodes for the knowledge goals, motivating the actions?
desired facts, but we focus this section on how it can be Our first attempt to solve this problem is a goal
recogused to define a certain class of knowledge goals. nition as planning [32] approach: We model the player</p>
        <p>An advantage of the QKS representation is that it can with an artificial agent which we call an agent model,
hymake direct use of QUEST’s question-answering proce- pothesize that the player has a specific goal, simulate the
dures to determine whether the present common ground agent model’s reasoning about how to pursue that goal,
answers certain kinds of questions about the story. For and determine whether the agent could have chosen the
instance, a question of the form “How did [event/state] same course of action that the player did in the logs; if
happen?” can be answered by following consequence so, we conclude the player had that goal.
arcs backward from the node; a question of the form However, agent models for existing
goal-recognition“What are the consequences of [event/state]?” can be an- as-planning approaches prescribe behavior in a
determinswered by following consequence arcs forward from the istic or stochastic environment, whereas our framework
node; or a question of the form “Why did the character treats the game as a nondeterministic, nonstochastic
environment: players know the range of possible observation acting toward some goal in  but has not yet achieved
sets that could result from their action but have no reli- it. The solution to a goal recognition problem is the set
able way of anticipating which specific observation set ′ ⊆  of candidate goals such that an agent modeled
will be chosen. An agent model now needs to account by  acting in domain  pursuing any goal  ∈ ′
for how a player might handle this unpredictability. could produce trajectory  .</p>
        <p>In our framework, we define a goal recognition prob- Algorithm 1 sketches the goal-recognition-as-planning
lem instance as a tuple ⟨,  , ⟩.  is an interactive process. For each player action so far in  , it checks the
narrative domain ⟨, , , , ,  ⟩ as defined in Sec- consistency of that action with each goal; assume the
subtion 3.1.  is a trajectory consisting of a sequence routine verify is a search process that returns whether
1, 1, 2, 2, · · · , , where  ∈  and  ∈  for the action could be selected by the agent model. (We
 = 1 to . This represents (chronologically) the com- check each action individually instead of the whole
semon grounds that the player has experienced so far and quence of actions at once because an agent may plan with
the actions the player took in response.  is a set of the expectation of a certain observation set but receive a
candidate goals.  is an agent model as elaborated later diferent observation set in actuality and have to revise
in this section. its plan.)</p>
        <p>Assume  comes from a game log, presenting a snap- We spend the rest of this section discussing specific
shot of an in-progress playthrough where the player is agent models that the goal recognizer could assume.</p>
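        <p>Continuing the QKS sketch from Section 3.1, a QUEST question goal and its satisfaction check can be written directly from this definition; the class and field names are ours, and the traversal returns only one-hop neighbors, whereas QUEST also ranks more distant answers.</p>
        <preformat>
# Sketch: a QUEST question goal (n, d, arc type, node type) and its check.
from dataclasses import dataclass

@dataclass(frozen=True)
class QuestionGoal:
    node: str            # label of the existing subject node n
    direction: str       # "incoming" or "outgoing"
    arc_type: ArcType    # required type of the connecting arc
    node_type: NodeType  # required type of the node on the other end

def satisfies(qks, goal):
    for arc in qks.arcs:
        if arc.kind is not goal.arc_type:
            continue
        if goal.direction == "outgoing" and arc.source == goal.node:
            other = qks.nodes.get(arc.target)
        elif goal.direction == "incoming" and arc.target == goal.node:
            other = qks.nodes.get(arc.source)
        else:
            continue
        if other is not None and other.kind is goal.node_type:
            return True
    return False

# "What are the consequences of the bandit having the coin?"
coin_question = QuestionGoal("bandit-has-coin", "outgoing",
                             ArcType.CONSEQUENCE, NodeType.EVENT)
        </preformat>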
      </sec>
    </sec>
    <sec id="sec-3">
      <title>5. Experiments</title>
      <sec id="sec-3-1">
        <title>Algorithm 1 Goal recognition for the common ground</title>
      <p>Suppose we have a log of actions the player has taken during an interactive narrative. The log may end before the player has completed any identifiable goals, but an observer—e.g., a game designer analyzing the playthrough in hindsight or an experience manager trying to adapt to the player for later interactions—may nonetheless need to reason about the player's intentions. How do we identify possible player goals, including knowledge goals, motivating the actions?</p>
      <p>Our first attempt to solve this problem is a goal recognition as planning [32] approach: we model the player with an artificial agent, which we call an agent model, hypothesize that the player has a specific goal, simulate the agent model's reasoning about how to pursue that goal, and determine whether the agent could have chosen the same course of action that the player did in the logs; if so, we conclude the player had that goal.</p>
      <p>However, agent models for existing goal-recognition-as-planning approaches prescribe behavior in a deterministic or stochastic environment, whereas our framework treats the game as a nondeterministic, nonstochastic environment: players know the range of possible observation sets that could result from their action but have no reliable way of anticipating which specific observation set will be chosen. An agent model therefore needs to account for how a player might handle this unpredictability.</p>
      <p>In our framework, we define a goal recognition problem instance as a tuple ⟨D, T, G, M⟩. D is an interactive narrative domain ⟨P, C, A, λ, ω⟩ as defined in Section 3.1. T is a trajectory consisting of a sequence c₁, a₁, c₂, a₂, …, cₙ, aₙ, where cᵢ ∈ C and aᵢ ∈ λ(cᵢ) for i = 1 to n. This represents (chronologically) the common grounds that the player has experienced so far and the actions the player took in response. G is a set of candidate goals. M is an agent model, as elaborated later in this section.</p>
      <p>Assume T comes from a game log, presenting a snapshot of an in-progress playthrough where the player is acting toward some goal in G but has not yet achieved it. The solution to a goal recognition problem is the set G′ ⊆ G of candidate goals such that an agent modeled by M, acting in domain D and pursuing any goal g ∈ G′, could produce trajectory T.</p>
      <p>Algorithm 1 sketches the goal-recognition-as-planning process. For each player action so far in T, it checks the consistency of that action with each goal; assume the subroutine verify is a search process that returns whether the action could be selected by the agent model. (We check each action individually instead of the whole sequence of actions at once because an agent may plan with the expectation of a certain observation set, but receive a different observation set in actuality and have to revise its plan.)</p>
      <p>Algorithm 1: Goal recognition for the common ground.</p>
      <preformat>
Input: domain D, trajectory T of common grounds and player actions, candidate goals G, agent model M
Output: a set of goals G′ ⊆ G that an agent modeled by M could have been pursuing if it took the action sequence in T

1: G′ ← G
2: for all ⟨cᵢ, aᵢ⟩ in T do
3:    for all g in G′ do
4:       if ¬verify(D, M, cᵢ, aᵢ, g) then
5:          G′ ← G′ ∖ {g}
6: return G′
      </preformat>
      <p>We spend the rest of this section discussing specific agent models that the goal recognizer could assume.</p>
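      <p>A direct, unoptimized Python rendering of Algorithm 1 is given below; it assumes a verify function with the interface described above (whether agent model M, pursuing g, could choose action a from common ground c in domain D).</p>
      <preformat>
# Python sketch of Algorithm 1. `trajectory` is a sequence of
# (common ground, action) pairs; `verify` is assumed to implement the agent
# model's search procedure.
def recognize_goals(domain, agent_model, trajectory, candidate_goals, verify):
    remaining = set(candidate_goals)
    for c, a in trajectory:
        for g in list(remaining):
            if not verify(domain, agent_model, c, a, g):
                remaining.discard(g)   # g cannot explain this action
    return remaining
      </preformat>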
      <p>First, we propose goal recognition using an optimistic-planning agent model, where the agent plans for the best case, hoping that its action will result in a specific observation set that gets it closer to the goal. Given a current common ground c, an optimistic-planning agent with goal g can take an action aᵢ if there exists some hypothetical plan aᵢ, oᵢ, aᵢ₊₁, oᵢ₊₁, …, aₘ, oₘ whose resulting common ground satisfies g; we also require the plan to be nonredundant, in that no strict subsequence of aᵢ, aᵢ₊₁, …, aₘ also satisfies g. This definition is based on Sabre's character model [<xref ref-type="bibr" rid="ref22">22, 33</xref>].</p>
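      <p>Using the domain sketch from Section 3.1 and treating a goal as a predicate over common grounds, an optimistic-planning check might be sketched as below; the depth bound is our addition, and the nonredundancy condition is simplified to requiring that the goal is not already satisfied.</p>
      <preformat>
# Sketch of the optimistic-planning agent model's action check: action a is
# admissible from common ground c if SOME continuation of actions and
# best-case observation sets starting with a reaches the goal.
def optimistic_ok(domain, c, action, goal, depth=6):
    def search(cg, forced_action, remaining):
        if remaining == 0:
            return False
        acts = [forced_action] if forced_action else domain.legal_actions(cg)
        for a in acts:
            for o in domain.outcomes(cg, a):   # hope for a favorable outcome
                new_cg = frozenset(cg | o)
                if goal(new_cg) or search(new_cg, None, remaining - 1):
                    return True
        return False
    if goal(c):
        return False  # simplified nonredundancy: the goal is not yet satisfied
    return search(c, action, depth)
      </preformat>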
      <p>We also propose an adversarial-planning agent model that plans for the worst case, trying to act according to a policy that can eventually satisfy the goal even when its actions result in the worst-case observation sets. For some goal g, define a safe common ground as (base case) one that satisfies g or (recursively) one from which there exists an action a ∈ λ(c) such that all outcomes in ω(c, a) result in safe common grounds. Given a current common ground c, an adversarial-planning agent with goal g can take an action a if all possible resulting common grounds are safe. However, because this definition alone could easily result in situations where the agent would have no valid action choices defined (because the agent will eventually have to take an action where at least one possible outcome could prevent the goal), we generalize this definition: we model an agent who believes the observation sets are chosen uniformly at random, and the agent follows an expectimax-style [34] policy that it thinks will maximize the worst-case probability of satisfying the goal. Define the score expectimax(c) of a common ground c for goal g as 1 if c satisfies g, or as 0 if g can never be satisfied from c (e.g., because c is a leaf in a finite tree of possible trajectories); otherwise, define expectimax(c) as the average score of the common grounds reached by choosing a best action, maxₐ∈λ(c) Σₒ∈ω(c,a) expectimax(c ∪ o) / |ω(c, a)|. An adversarial-planning agent can take an action a if a maximizes this average.</p>
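      <p>The expectimax-style policy can likewise be sketched over the same domain interface; observation sets are treated as uniformly random, and the depth bound again stands in for the finite trajectory tree assumed above.</p>
      <preformat>
# Sketch of the adversarial-planning agent model's expectimax value and
# action check.
def expectimax(domain, c, goal, depth=6):
    if goal(c):
        return 1.0
    actions = domain.legal_actions(c)
    if depth == 0 or not actions:
        return 0.0                      # the goal can no longer be reached
    best = 0.0
    for a in actions:
        outs = domain.outcomes(c, a)
        avg = sum(expectimax(domain, frozenset(c | o), goal, depth - 1)
                  for o in outs) / len(outs)
        best = max(best, avg)
    return best

def adversarial_ok(domain, c, action, goal, depth=6):
    # The agent may take `action` if no other legal action scores higher.
    legal = domain.legal_actions(c)
    if action not in legal:
        return False
    def value(a):
        outs = domain.outcomes(c, a)
        return sum(expectimax(domain, frozenset(c | o), goal, depth - 1)
                   for o in outs) / len(outs)
    return value(action) == max(value(a) for a in legal)
      </preformat>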
    </sec>
    <sec id="sec-3-1">
      <title>5. Experiments</title>
      <p>There are many risks to the robustness of a goal recognition model when applied to real human players: the player acting toward a goal outside of the candidates considered, changing goals, behaving in a non-goal-directed way, or missing or misunderstanding information the model assumes is available to them. This preliminary study considers synthetic players that do not yet incorporate these risks, but we acknowledge the importance of human factors for our future work.</p>
      <p>There is a wide spectrum of ways even idealized artificial agents can handle the nondeterministic environments of our framework, as shown by the contrast between the highly risk-taking optimistic-planning agent model and the highly risk-averse adversarial-planning agent model described in Section 4. A mismatch between the agent model assumed by the goal recognizer and the decision-making criteria of the actual player can result in wrong conclusions about the player's goals: false positives, where a candidate goal is wrongly attributed to the player, and false negatives, where the player's actual goal is wrongly rejected as a possibility.</p>
      <p>Our experiment seeks to quantify the error-proneness of goal recognition that assumes one agent model when the player acts according to another agent model. By using an optimistic planner as the "real" player and trying to identify that player's goals using the opposite extreme of an adversarial-planning goal recognizer, and the reverse, we aim to establish upper bounds on goal recognition error before human factors are applied.</p>
      <p>We generated goal recognition problem instances as follows: To derive the domain D, we started with depth-limited, tree-structured story graphs [35] from a narrative planning [36] problem, generated using the Sabre narrative planner [33]; these graphs consist of nodes representing world states and edges representing player or non-player actions, annotated with information such as whether the player observed a given non-player action. We restructured the story graphs to alternate between branching on a choice of player actions and branching on a choice of non-player macro-actions containing any number of non-player actions. At each node, we used previously-proposed mappings [29, 30] to derive a QKS equivalent of the story so far. We also used an approach similar to Robertson and Young [37] and Fisher et al. [38] to allow uncertainty about which of multiple story-graph nodes the player was in, due to possible unobserved past events; we derived the final QKS common ground representing the player's knowledge by taking the maximal subgraph shared by the original stories. To obtain a trajectory of player actions so far, we sampled and truncated goal-satisfying playthroughs given a goal g and agent model M. We manually defined the set of candidate goals G for the domain.</p>
      <p>As a source for our domain, we used the narrative planning problem from the Grandma adventure game used by Ware et al. [39]. We retained the same characters, actions, locations, and items, but modified the initial state and NPC goals to create the initial setup as follows: Known to the player in the initial QKS, three actions have already happened in the backstory: the bandit character has stolen a sword and a coin from the player character's house and left the house. The merchant character is at the market and the guard character is at the bandit's camp, both locations reachable across roads from the cottage. Unknown to the player, and thus not represented in the initial QKS, the bandit intends to use the sword to kill the guard and/or rob the merchant, and/or use the coin to buy from the merchant.</p>
      <p>[Figure 3: Adversarial-planning goal recognizer on optimistic-planning player over all goals and trajectories. Confusion matrix: 59 true positives, 117 false negatives, 127 false positives, 257 true negatives; specificity 67%.]</p>
      <p>We defined four possible goals that the synthetic player would try to satisfy and that the goal recognizer would try to distinguish between as the candidate goal set G: achievement goals to get the stolen coin and to get the stolen sword, and knowledge goals in the form of QUEST question goals for "Why did the bandit steal the coin?" and "Why did the bandit steal the sword?" The goals overlap in some of the player actions that can be used in the course of satisfying them (e.g., following the bandit can support any of the goals) but diverge in others (e.g., killing the bandit enables taking back the stolen items but eliminates opportunities to watch the bandit's plans unfold and learn their intentions).</p>
      <p>[Figure 4: Optimistic-planning goal recognizer on adversarial-planning player over all goals and trajectories. Confusion matrix: 59 true positives, 3601 false negatives, 52 false positives, 1556 true negatives; sensitivity 2%, specificity 97%.]</p>
      <p>We generated the narrative planning problem's story graph to a fixed depth of 6 steps, based on available computation time, and converted it to the graph of QKS common grounds as described above. We then collected all trajectories for each agent model and each goal, and analyzed them in the following process:</p>
      <p>Suppose the real player is an optimistic-planning agent and the goal recognizer assumes an adversarial-planning agent model, or the reverse. Let T be a trajectory, let G be all the goals, let G′ₚ be the set of goals for which the player generated T, and let G′ᵣ be the set of goals identified by the goal recognizer for T. We counted goals in G′ₚ ∩ G′ᵣ as true positives, for which the goal recognizer would correctly identify a player goal for T; goals in G′ₚ ∖ G′ᵣ as false negatives, for which the recognizer failed to identify a goal that was consistent with the true player model; goals in G′ᵣ ∖ G′ₚ as false positives, for which the recognizer identified a goal that was actually inconsistent with the true player model; and goals in G ∖ (G′ₚ ∪ G′ᵣ) as true negatives, for which the recognizer correctly did not identify a goal that would have been inconsistent with the true player model.</p>
      <p>We show confusion matrices for the results across all goals and trajectory lengths: Figure 3 shows the performance of a goal recognizer assuming an adversarial-planning agent model when the actual player acts like an optimistic-planning agent model, and Figure 4 shows the reverse. We report standard measures of performance for a test that distinguishes positive from negative cases: sensitivity (how often the recognizer concluded the player had the goal, given that the goal was consistent with the true player model) and specificity (how often the recognizer concluded the player did not have the goal, given that the goal was inconsistent with the true player model).</p>
      <p>Both of the player-recognizer combinations had worse-than-random sensitivity and better-than-random specificity; the recognizers skewed toward correctly rejecting candidate goals when the player did not have those goals, but failed to detect the true goals. The difference was especially pronounced for the goal recognizer that assumed an optimistic-planning agent model when the actual player used the adversarial-planning agent model; that is, when considering a goal that the player actually had, the optimistic-planning goal recognizer was highly likely to erroneously reject that goal. These errors came from the fact that our optimistic-planning agent model attempts to be as efficient as possible by avoiding actions that could be redundant to the goal, while the adversarial-planning model accepts longer paths in favor of safety; the simulated adversarial-planning player often took actions that were unexplainable to the optimistic-planning recognizer because a more direct route was available. This experiment suggests that strict assumptions about agent efficiency—which are common in existing goal recognition approaches—are too brittle in practice, and future goal recognition approaches should be designed to handle cautious or meandering players.</p>
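      <p>For reference, the two measures are the usual confusion-matrix ratios; for example, with the counts reported in Figure 3:</p>
      <preformat>
# Sensitivity and specificity from a confusion matrix (counts from Figure 3:
# adversarial-planning recognizer, optimistic-planning player).
tp, fn, fp, tn = 59, 117, 127, 257
sensitivity = tp / (tp + fn)   # about 0.34: true goals that were accepted
specificity = tn / (tn + fp)   # about 0.67: non-goals that were rejected
      </preformat>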
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
    </sec>
    <sec id="sec-5">
      <title>6. Conclusions</title>
      <p>This paper highlighted an underexplored class of goals
important to interactive narratives—player goals to fill
the gaps in their knowledge about the story so far. We
extended goal recognition to these goals by defining a
planning framework over the space of player mental
models rather than over the space of world states, drawing
on representations of discourse and question-answering
from linguistics and cognitive science.</p>
      <p>
        Accurate algorithms for knowledge goal recognition
are still an open problem. An approach based on
simulating a hypothetical player and comparing its
decisions to the real player’s can easily fail to detect goals
of a player whose playstyle does not match the
algorithm’s assumptions. However, the desiderata for a goal
recognition algorithm depend on how that algorithm
will be used. For instance, high-specificity but low-sensitivity goal recognition could be acceptable for an experience manager whose objective is to find a small handful of the player's interests and use them to offer
the player mutually-beneficial opportunities. Conversely,
low-specificity but high-sensitivity goal recognition can
still be useful for a highly-improvisational experience
manager deciding when to fix “plot holes” in its stories
that may be exposed by player knowledge goals [
        <xref ref-type="bibr" rid="ref25">40</xref>
        ].
      </p>
      <p>
        Reasoning about player goals will ultimately require
considering the context that goals come from. In the case
of knowledge goals, aside from offering models, the literature we reference goes on to emphasize that reasoning effectively about questions requires understanding why
they were asked: Roberts [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and Ram [
        <xref ref-type="bibr" rid="ref26">41</xref>
        ] frame
basic questions as part of strategies to answer higher-level
questions, and Graesser et al. [
        <xref ref-type="bibr" rid="ref27">42</xref>
        ] and Ram [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] discuss
the functions of questions to support the asker’s goals
and explain anomalous findings. In the long term, we aim
to take theories of when knowledge goals are likely to
occur, and integrate them with mechanisms for
confirming those knowledge goals from a player’s actions and
for using this information to shape the story in concert
with the player.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G. N.</given-names>
            <surname>Yannakakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Spronck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Loiacono</surname>
          </string-name>
          , E. André, Player modeling,
          <source>Artificial and Computational Intelligence in Games</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Harrison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Ware</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. W.</given-names>
            <surname>Fendt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. L.</given-names>
            <surname>Roberts</surname>
          </string-name>
          ,
          <article-title>A survey and analysis of techniques for player behavior prediction in massively multiplayer online role-playing games</article-title>
          ,
          <source>IEEE Transactions on Emerging Topics in Computing</source>
          <volume>3</volume>
          (
          <year>2014</year>
          )
          <fpage>260</fpage>
          -
          <lpage>274</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E. Y.</given-names>
            <surname>Ha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Rowe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. W.</given-names>
            <surname>Mott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Lester</surname>
          </string-name>
          ,
          <article-title>Recognizing player goals in open-ended digital games with Markov logic networks</article-title>
          , in: G. Sukthankar,
          <string-name>
            <given-names>R.</given-names>
            <surname>Goldman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Geib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pynadath</surname>
          </string-name>
          , H. Bui (Eds.),
          <source>Plan, Activity and Intent Recognition: Theory and Practice</source>
          ,
          <year>2014</year>
          , pp.
          <fpage>289</fpage>
          -
          <lpage>311</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Harris</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <article-title>Proactive mediation in planbased narrative environments</article-title>
          ,
          <source>IEEE Transactions on Computational Intelligence and AI in games 1</source>
          (
          <year>2009</year>
          )
          <fpage>233</fpage>
          -
          <lpage>244</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Levesque</surname>
          </string-name>
          ,
          <article-title>Intention is choice with commitment</article-title>
          ,
          <source>Artificial Intelligence</source>
          <volume>42</volume>
          (
          <year>1990</year>
          )
          <fpage>213</fpage>
          -
          <lpage>261</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <article-title>Knowledge goals: A theory of interestingness</article-title>
          ,
          <source>in: Cognitive Science Society Annual Conference</source>
          ,
          <year>1990</year>
          , pp.
          <fpage>206</fpage>
          -
          <lpage>214</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Baikadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rowe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lester</surname>
          </string-name>
          ,
          <article-title>Generalizability of goal recognition models in narrative-centered learning environments</article-title>
          , in: International Conference on User Modeling, Adaptation, and
          <string-name>
            <surname>Personalization</surname>
          </string-name>
          ,
          <year>2014</year>
          , pp.
          <fpage>278</fpage>
          -
          <lpage>289</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gómez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Márquez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zapa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Florez</surname>
          </string-name>
          ,
          <article-title>Gdabased tutor module of an intelligent tutoring system for the personalization of pedagogic strategies</article-title>
          ,
          <source>in: International Conference on Intelligent and Interactive Systems and Applications</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>742</fpage>
          -
          <lpage>750</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Cardona-Rivera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <article-title>Symbolic plan recognition in interactive narrative environments</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>11</volume>
          ,
          <year>2015</year>
          , pp.
          <fpage>16</fpage>
          -
          <lpage>22</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Ware</surname>
          </string-name>
          ,
          <article-title>Mutual implicit question answering for shared authorship: a pilot study on player expectations</article-title>
          ,
          <source>in: Intelligent Narrative Technologies</source>
          volume
          <volume>10</volume>
          ,
          <year>2014</year>
          , pp.
          <fpage>2</fpage>
          -
          <lpage>8</lpage>
          . Workshop, volume
          <volume>10</volume>
          ,
          <year>2017</year>
          , pp.
          <fpage>259</fpage>
          -
          <lpage>265</lpage>
          . [25]
          <string-name>
            <given-names>B.</given-names>
            <surname>Magerko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Manzoul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Riedl</surname>
          </string-name>
          ,
          <string-name>
            <surname>A</surname>
          </string-name>
          . Baumer,
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>C.</given-names>
            <surname>Roberts</surname>
          </string-name>
          , Information structure in discourse: To- D. Fuller,
          <string-name>
            <given-names>K.</given-names>
            <surname>Luther</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Pearce</surname>
          </string-name>
          ,
          <article-title>An empirical study wards an integrated formal theory of pragmatics, of cognition and theatrical improvisation</article-title>
          ,
          <source>in: ACM Semantics and Pragmatics</source>
          <volume>5</volume>
          (
          <year>2012</year>
          )
          <fpage>1</fpage>
          -
          <lpage>69</lpage>
          . Conference on Creativity and Cognition, volume
          <volume>7</volume>
          ,
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Graesser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. E.</given-names>
            <surname>Gordon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. E.</given-names>
            <surname>Brainerd</surname>
          </string-name>
          , QUEST:
          <year>2009</year>
          , pp.
          <fpage>117</fpage>
          -
          <lpage>126</lpage>
          .
          <article-title>A model of question answering</article-title>
          , Computers &amp; Math- [26]
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Stalnaker</surname>
          </string-name>
          , Assertion, in: Syntax and Semantics,
          <source>ematics with Applications</source>
          <volume>23</volume>
          (
          <year>1992</year>
          )
          <fpage>733</fpage>
          -
          <lpage>745</lpage>
          . volume
          <volume>9</volume>
          , New York Academic Press,
          <year>1978</year>
          , pp.
          <fpage>315</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>F. R.</given-names>
            <surname>Meneguzzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <article-title>A survey on goal 332. recognition as planning</article-title>
          , in: International Joint Con- [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rothe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. M.</given-names>
            <surname>Lake</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Gureckis</surname>
          </string-name>
          ,
          <source>Do people ask ference on Artificial Intelligence</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>4524</fpage>
          - good questions?,
          <source>Computational Brain &amp; Behavior 4532. 1</source>
          (
          <year>2018</year>
          )
          <fpage>69</fpage>
          -
          <lpage>89</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Robertson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Young</surname>
          </string-name>
          , Perceptual experience [28]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Graesser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. L.</given-names>
            <surname>Lang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Roberts</surname>
          </string-name>
          , Quesmanagement, IEEE Transactions on
          <article-title>Games 11 tion answering in the context of stories</article-title>
          ,
          <source>Journal of (</source>
          <year>2018</year>
          )
          <fpage>15</fpage>
          -
          <lpage>24</lpage>
          . Experimental Psychology:
          <source>General</source>
          <volume>120</volume>
          (
          <year>1991</year>
          )
          <fpage>254</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>B.</given-names>
            <surname>Sunshine-Hill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. I. Badler</given-names>
            , Perceptually realistic [29]
            <surname>D. B. Christian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <article-title>Comparing cognitive behavior through alibi generation, in: AAAI Con- and computational models of narrative structure</article-title>
          ,
          <source>in: ference on Artificial Intelligence and Interactive National Conference of the American Association Digital Entertainment</source>
          ,
          <year>2010</year>
          .
          <source>for Artificial Intelligence</source>
          , volume
          <volume>19</volume>
          ,
          <year>2004</year>
          , pp.
          <fpage>385</fpage>
          -
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Flores</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thue</surname>
          </string-name>
          ,
          <article-title>Level of detail event generation, 390</article-title>
          . in: International Conference on Interactive Digital [30]
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Cardona-Rivera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Price</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Winer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. M.</given-names>
            <surname>Storytelling</surname>
          </string-name>
          , volume
          <volume>10</volume>
          ,
          <year>2017</year>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>86</lpage>
          . Young,
          <article-title>Question answering in the context of stories</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name><given-names>I.</given-names> <surname>Swartjes</surname></string-name>
          ,
          <string-name><given-names>E.</given-names> <surname>Kruizinga</surname></string-name>
          ,
          <string-name><given-names>M.</given-names> <surname>Theune</surname></string-name>
          ,
          <article-title>Let's pretend I had a sword: Late commitment in emergent narrative</article-title>
          ,
          <source>in: International Conference on Interactive Digital Storytelling</source>
          , volume
          <volume>1</volume>
          ,
          <year>2008</year>
          , pp.
          <fpage>264</fpage>
          -
          <lpage>267</lpage>
          .
          [31]
          <string-name><given-names>A. C.</given-names> <surname>Graesser</surname></string-name>
          ,
          <string-name><given-names>P. J.</given-names> <surname>Byrne</surname></string-name>
          ,
          <string-name><given-names>M. L.</given-names> <surname>Behrens</surname></string-name>
          ,
          <article-title>Answering questions about information in databases</article-title>
          ,
          <source>Questions and Information Systems</source>
          (
          <year>1992</year>
          )
          <fpage>229</fpage>
          -
          <lpage>252</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name><given-names>D.</given-names> <surname>Thue</surname></string-name>
          ,
          <string-name><given-names>S.</given-names> <surname>Schiffel</surname></string-name>
          ,
          <string-name><given-names>T. Þ.</given-names> <surname>Guðmundsson</surname></string-name>
          ,
          <string-name><given-names>G. F.</given-names> <surname>Kristjánsson</surname></string-name>
          ,
          <string-name><given-names>K.</given-names> <surname>Eiríksson</surname></string-name>
          ,
          <string-name><given-names>M. V.</given-names> <surname>Björnsson</surname></string-name>
          ,
          <article-title>Open world story generation for increased expressive range</article-title>
          ,
          <source>in: International Conference on Interactive Digital Storytelling</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>313</fpage>
          -
          <lpage>316</lpage>
          .
          [32]
          <string-name><given-names>M.</given-names> <surname>Ramírez</surname></string-name>
          ,
          <string-name><given-names>H.</given-names> <surname>Geffner</surname></string-name>
          ,
          <article-title>Plan recognition as planning</article-title>
          ,
          <source>in: International Joint Conference on Artificial Intelligence</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>1778</fpage>
          -
          <lpage>1783</lpage>
          .
          [33]
          <string-name><given-names>S. G.</given-names> <surname>Ware</surname></string-name>
          ,
          <string-name><given-names>C.</given-names> <surname>Siler</surname></string-name>
          ,
          <article-title>Sabre: A narrative planner supporting intention and deep theory of mind</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>17</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>106</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name><given-names>I.</given-names> <surname>Horswill</surname></string-name>
          ,
          <article-title>Retcon: a least-commitment story-world system</article-title>
          ,
          <source>in: Experimental AI in Games Workshop</source>
          , volume
          <volume>9</volume>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name><given-names>J.</given-names> <surname>Rowe</surname></string-name>
          ,
          <string-name><given-names>B.</given-names> <surname>Mott</surname></string-name>
          ,
          <string-name><given-names>S.</given-names> <surname>McQuiggan</surname></string-name>
          ,
          <string-name><given-names>J.</given-names> <surname>Robison</surname></string-name>
          ,
          <string-name><given-names>S.</given-names> <surname>Lee</surname></string-name>
          ,
          <string-name><given-names>J.</given-names> <surname>Lester</surname></string-name>
          ,
          <article-title>Crystal Island: A narrative-centered learning environment for eighth grade microbiology</article-title>
          ,
          <source>in: International Conference on Artificial Intelligence in Education Workshop on Intelligent Educational Games</source>
          ,
          <year>2009</year>
          , pp.
          <fpage>11</fpage>
          -
          <lpage>20</lpage>
          .
          [34]
          <string-name><given-names>D.</given-names> <surname>Michie</surname></string-name>
          ,
          <article-title>Game-playing and game-learning automata</article-title>
          ,
          <source>in: Advances in Programming and Non-Numerical Computation</source>
          ,
          <year>1966</year>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>200</lpage>
          .
          [35]
          <string-name><given-names>M. O.</given-names> <surname>Riedl</surname></string-name>
          ,
          <string-name><given-names>R. M.</given-names> <surname>Young</surname></string-name>
          ,
          <article-title>From linear story generation to branching story graphs</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          ,
          <year>2005</year>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>116</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name><given-names>R.</given-names> <surname>Farrell</surname></string-name>
          ,
          <string-name><given-names>S. G.</given-names> <surname>Ware</surname></string-name>
          ,
          <article-title>Narrative planning for belief and intention recognition</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>16</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>58</lpage>
          .
          [36]
          <string-name><given-names>M. O.</given-names> <surname>Riedl</surname></string-name>
          ,
          <string-name><given-names>R. M.</given-names> <surname>Young</surname></string-name>
          ,
          <article-title>Narrative planning: balancing plot and character</article-title>
          ,
          <source>Journal of Artificial Intelligence Research</source>
          <volume>39</volume>
          (
          <year>2010</year>
          )
          <fpage>217</fpage>
          -
          <lpage>268</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name><given-names>A.</given-names> <surname>Shirvani</surname></string-name>
          ,
          <string-name><given-names>R.</given-names> <surname>Farrell</surname></string-name>
          ,
          <string-name><given-names>S. G.</given-names> <surname>Ware</surname></string-name>
          ,
          <article-title>Combining intentionality and belief: revisiting believable character states</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>14</volume>
          ,
          <year>2018</year>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>228</lpage>
          .
          [37]
          <string-name><given-names>J.</given-names> <surname>Robertson</surname></string-name>
          ,
          <string-name><given-names>R. M.</given-names> <surname>Young</surname></string-name>
          ,
          <article-title>A model of superposed plans</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>12</volume>
          ,
          <year>2016</year>
          , pp.
          <fpage>65</fpage>
          -
          <lpage>71</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name><given-names>A.</given-names> <surname>Amos-Binks</surname></string-name>
          ,
          <string-name><given-names>R. E.</given-names> <surname>Cardona-Rivera</surname></string-name>
          ,
          <article-title>Goal elicitation planning: Acting to reveal the goals of others</article-title>
          ,
          <source>in: Advances in Cognitive Systems</source>
          ,
          <year>2020</year>
          .
          [38]
          <string-name><given-names>M.</given-names> <surname>Fisher</surname></string-name>
          ,
          <string-name><given-names>C.</given-names> <surname>Siler</surname></string-name>
          ,
          <string-name><given-names>S. G.</given-names> <surname>Ware</surname></string-name>
          ,
          <article-title>Intelligent de-escalation training via emotion-inspired narrative planning</article-title>
          ,
          <source>in: Intelligent Narrative Technologies Workshop</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name><given-names>R. E.</given-names> <surname>Cardona-Rivera</surname></string-name>
          ,
          <string-name><given-names>R. M.</given-names> <surname>Young</surname></string-name>
          ,
          <article-title>Games as conversation</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          .
          [39]
          <string-name><given-names>S. G.</given-names> <surname>Ware</surname></string-name>
          ,
          <string-name><given-names>E. T.</given-names> <surname>Garcia</surname></string-name>
          ,
          <string-name><given-names>M.</given-names> <surname>Fisher</surname></string-name>
          ,
          <string-name><given-names>A.</given-names> <surname>Shirvani</surname></string-name>
          ,
          <string-name><given-names>R.</given-names> <surname>Farrell</surname></string-name>
          ,
          <article-title>Multi-agent narrative experience management as story graph pruning</article-title>
          ,
          <source>IEEE Transactions on Games</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>C.</given-names>
            <surname>Siler</surname>
          </string-name>
          ,
          <article-title>Open-world narrative generation to answer players' questions</article-title>
          , in
          <source>: AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment</source>
          , volume
          <volume>18</volume>
          ,
          <year>2022</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>310</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ram</surname>
          </string-name>
          ,
          <article-title>A theory of questions and question asking</article-title>
          ,
          <source>Journal of the Learning Sciences</source>
          <volume>1</volume>
          (
          <year>1991</year>
          )
          <fpage>273</fpage>
          -
          <lpage>318</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Graesser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Baggett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <article-title>Question-driven explanatory reasoning</article-title>
          ,
          <source>Applied Cognitive Psychology</source>
          <volume>10</volume>
          (
          <year>1996</year>
          )
          <fpage>17</fpage>
          -
          <lpage>31</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>