<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Flexible Plan-Subplan Matching for Plan Recognition in Case-Based Agents</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Keith A. Frazer</string-name>
          <email>kfrazer3@gatech.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Swaroop S. Vattam</string-name>
          <email>swaroop.vattam.ctr.in@nrl.navy.mil</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>David W. Aha</string-name>
          <email>david.aha@nrl.navy.mil</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>College of Computing, Georgia Institute of Technology</institution>
          ,
          <addr-line>Atlanta, GA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>NRC Postdoctoral Fellow; Naval Research Laboratory (Code 5514);</institution>
          <addr-line>Washington, DC</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Navy Center for Applied Research in Artificial Intelligence; Naval Research Laboratory (Code 5514);</institution>
          <addr-line>Washington, DC</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>78</fpage>
      <lpage>87</lpage>
      <abstract>
        <p>Plan-subplan matching is an important step in case-based plan recognition. We present RelaxedVF2, an algorithm for plan-subplan matching for plans encoded using the Action Sequence Graph (ASG) representation. RelaxedVF2 is a subgraph monomorphism algorithm that affords flexibility and error tolerance for plan-subplan matching. We present a study comparing RelaxedVF2 with an alternative degree-sequence matcher that we used in our prior work, and find that RelaxedVF2 attains higher plan recognition accuracy on a paradigmatic domain.</p>
      </abstract>
      <kwd-group>
        <kwd>Plan Recognition</kwd>
        <kwd>Case-Based Reasoning</kwd>
        <kwd>Action-Sequence Graph</kwd>
        <kwd>Relaxed Graph Matching</kwd>
        <kwd>Error-Tolerant</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>An Action Sequence Graph (ASG) represents a plan as a labeled directed multigraph.
Plan-subplan matching using ASGs thereby reduces to graph-subgraph matching.</p>
      <p>
        We introduce the RelaxedVF2 algorithm for graph-subgraph matching that is tailored to
matching plans represented as ASGs. RelaxedVF2 is an extension of the popular VF2
algorithm
        <xref ref-type="bibr" rid="ref6">(Cordella et al., 2004)</xref>
        for subgraph isomorphism. The extensions to VF2 that we
propose transform it into a subgraph monomorphism algorithm, which makes
RelaxedVF2 better suited to the additional edges that arise between the nodes of states and
actions as a plan’s observed execution progresses.
      </p>
      <p>
        In §2 we present related work on CBPR and graph matching techniques. In §3 we present
the ASG representation for plans. In §4 we present our plan-subplan matching approach,
including the RelaxedVF2 algorithm and the scoring function. In §5 we present an initial
empirical study comparing the performance of RelaxedVF2 to an alternative degree-sequence
matching algorithm that we used in our prior work
        <xref ref-type="bibr" rid="ref20">(Vattam et al., 2015)</xref>
        . Our results
show that RelaxedVF2 compares favorably to the alternative approach. We conclude and
discuss future research plans in §6.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Research</title>
      <p>
        Several approaches have been proposed to address the problem of plan recognition
        <xref ref-type="bibr" rid="ref17">(Sukthankar et al., 2014)</xref>
        , including consistency-based
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14">(e.g., Hong, 2001; Kautz &amp; Allen,
1986; Kumaran, 2007; Lau et al., 2003; Lesh &amp; Etzioni, 1996)</xref>
        , and probabilistic approaches
        <xref ref-type="bibr" rid="ref16 ref3 ref4 ref8 ref9">(e.g., Bui, 2003; Charniak &amp; Goldman, 1991; 1993; Geib &amp; Goldman, 2009; Goldman et al.,
1999; Pynadath &amp; Wellman, 2000)</xref>
        . Both types are “model-heavy”, requiring accurate
models of an actor’s possible actions and how they interact to accomplish different goals.
Engineering these models is difficult and time-consuming. Furthermore, these plan
recognizers perform poorly when confronted with novel situations and are brittle when the
operating conditions deviate from model parameters.
      </p>
      <p>
        CBPR is a model-lite, less studied approach to plan recognition. Existing CBPR
approaches
        <xref ref-type="bibr" rid="ref18 ref7">(e.g., Cox &amp; Kerkez, 2006; Tecuci &amp; Porter, 2009)</xref>
        eschew generalized models
for plan libraries that contain plan instances which can be gathered from experience. CBPR
algorithms can respond to novel inputs outside the scope of their plan library by using plan
adaptation techniques. However, earlier CBPR approaches were not error-tolerant.
      </p>
      <p>
        In contrast, our work on SET-PR focuses on error-tolerant CBPR
        <xref ref-type="bibr" rid="ref19">(Vattam et al., 2014;
2015)</xref>
        . We showed that SET-PR is robust to three kinds of input errors (missing, mislabeled,
and extraneous actions). One of the factors contributing to its robustness is that SET-PR uses
an ASG plan representation and a degree-sequence similarity function for plan-subplan
matching. Although we previously showed that SET-PR is robust to input errors, there is
room for improvement.
      </p>
      <p>
        VF2
        <xref ref-type="bibr" rid="ref6">(Cordella et al., 2004)</xref>
        is an exact graph matching algorithm for finding node-induced
subgraph isomorphisms. It is one of the few such algorithms applicable to directed
multigraphs. Our extension, RelaxedVF2, transforms VF2 from finding node-induced
subgraph isomorphisms to finding subgraph monomorphisms (see Figure 1 for an
illustration of how these differ: there, one graph is not isomorphic to a subgraph of the other
because it is missing a node (1) and an edge (between 4 and 5)), and modifies VF2 to return
partial mappings from graph to subgraph when no complete match is available. Definitions
are provided in Section 4.1.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Plan Representation: Action Sequence Graphs</title>
      <p>Suppose a plan is modeled as an action-state sequence π = 〈(a₀, s₀), …, (aₙ, sₙ)〉, where
each action aᵢ is a ground operator in the planning domain, and sᵢ is a ground state obtained
by executing aᵢ in sᵢ₋₁, with the caveat that s₀ is an initial state, a₀ is null, and sₙ is a goal
state. An action a in (a, s) ∈ π is a ground literal a = p(t₁:o₁, …, tₖ:oₖ), where p ∈ P (a
finite set of predicate symbols), tᵢ ∈ T (a finite set of object types), and oᵢ is an instance of
tᵢ (e.g., stack(block:A, block:B)). A state s in (a, s) ∈ π is a set of ground literals (e.g.,
{on(block:A,block:B), on(block:B,substrate:TABLE)}).</p>
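      <p>As a minimal sketch (our own illustration, not the authors' implementation), this action-state representation can be written down directly; the Literal and plan structures and the Blocks World example below are assumptions:</p>

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Literal:
    """A ground literal p(t1:o1, ..., tk:ok): a predicate over typed objects."""
    predicate: str
    args: tuple  # (type, object) pairs, e.g. (("block", "A"), ("block", "B"))

# A plan is a sequence of (action, state) pairs; the initial state has a null
# action (a0), and each state is a set of ground literals.
stack_a_b = Literal("stack", (("block", "A"), ("block", "B")))
s0 = frozenset({Literal("on", (("block", "B"), ("substrate", "TABLE")))})
s1 = frozenset({Literal("on", (("block", "A"), ("block", "B"))),
                Literal("on", (("block", "B"), ("substrate", "TABLE")))})
plan = [(None, s0), (stack_a_b, s1)]
```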
      <p>An Action Sequence Graph (ASG) is a graphical representation of a plan that preserves its
topology (including the order of the propositions and their arguments). Vattam et al. (2014;
2015) provide a detailed definition of ASGs and their generation. An ASG is automatically
generated by transforming individual propositions in a plan into predicate encoding graphs,
and by taking the union of all the individual predicate encoding graphs so as to maintain the
total order of the plan. Figure 2 shows an example proposition and its corresponding
predicate encoding graph. Figure 3 shows an example full plan and its corresponding ASG.
An ASG is a labeled directed multigraph, which constrains the set of graph matching
algorithms that can manipulate them.</p>
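      <p>A loose sketch of the idea, under a simplified encoding of our own (the exact predicate encoding graphs of Vattam et al. are not reproduced here): each proposition becomes a small labeled star graph, and the union of these graphs, plus sequencing edges, forms a labeled directed multigraph:</p>

```python
# A minimal labeled directed multigraph; node and edge labels here are
# illustrative assumptions, not the exact ASG encoding.
class MultiGraph:
    def __init__(self):
        self.nodes = {}   # node_id -> label
        self.edges = []   # (src, dst, label) triples; parallel edges allowed

    def add_node(self, node_id, label):
        self.nodes[node_id] = label

    def add_edge(self, src, dst, label):
        self.edges.append((src, dst, label))

def encode_proposition(g, prop_id, predicate, typed_args):
    """Encode p(t1:o1, ..., tk:ok) as a star graph rooted at the predicate."""
    g.add_node(prop_id, predicate)
    for i, (obj_type, obj) in enumerate(typed_args, start=1):
        arg_id = f"{prop_id}.{i}"
        g.add_node(arg_id, f"{obj_type}:{obj}")
        g.add_edge(prop_id, arg_id, f"arg{i}")  # preserves argument order

g = MultiGraph()
encode_proposition(g, "p1", "stack", [("block", "A"), ("block", "B")])
encode_proposition(g, "p2", "on", [("block", "A"), ("block", "B")])
g.add_edge("p1", "p2", "next")  # sequencing edge to maintain total order
```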
    </sec>
    <sec id="sec-4">
      <title>4. Robust Plan-Subplan Matching</title>
      <p>As our goal is error-tolerant plan recognition, our approach requires plan-subplan matching
that is robust to input errors. Plan-subplan matching requires a measure of similarity. As we
encode both the plans in the case library and the input subplans as graphs, we use maximum
common subgraph monomorphism as the measure of similarity between them, rather than the
more conventional maximum common subgraph isomorphism measure.
Let G₁, G₂ be graphs composed of sets of vertices V₁, V₂ and edges E₁, E₂ respectively.
G₂ is isomorphic to a subgraph of G₁ if there exists a one-to-one mapping between each
vertex of G₂ and a vertex in G₁ such that the number of edges between mapped nodes is
maintained. G₂ is instead monomorphic to G₁ if it maps onto any subset of the vertices and
edges of G₁. Monomorphism must be used instead of isomorphism when matching incomplete
subplans to complete plans in the case library because, as plans are observed, new edges are
often added relating existing action and state vertices.</p>
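      <p>The distinction can be illustrated with a brute-force check on toy graphs (our own simplified definitions, not the paper's algorithm): the observed subplan maps monomorphically into the full plan graph, but fails the induced (isomorphism) test because the larger graph has an extra edge between the mapped nodes:</p>

```python
from itertools import permutations

def maps_into(g2_nodes, g2_edges, g1_nodes, g1_edges, induced):
    """Search for an injective node mapping of g2 into g1.

    induced=True requires edge sets to match exactly over the mapped nodes
    (subgraph isomorphism); induced=False only requires g2's edges to be
    present in g1 (monomorphism)."""
    for perm in permutations(g1_nodes, len(g2_nodes)):
        m = dict(zip(g2_nodes, perm))
        if any((m[u], m[v]) not in g1_edges for (u, v) in g2_edges):
            continue
        if induced:
            # Induced: no extra g1 edges between mapped nodes are allowed.
            image = set(m.values())
            mapped = {(m[u], m[v]) for (u, v) in g2_edges}
            if any(a in image and b in image and (a, b) not in mapped
                   for (a, b) in g1_edges):
                continue
        return m
    return None

# G1: complete plan graph; G2: observed prefix missing one edge.
g1_nodes, g1_edges = ["a", "b", "c"], {("a", "b"), ("b", "c"), ("a", "c")}
g2_nodes, g2_edges = ["x", "y", "z"], {("x", "y"), ("y", "z")}
mono = maps_into(g2_nodes, g2_edges, g1_nodes, g1_edges, induced=False)
iso = maps_into(g2_nodes, g2_edges, g1_nodes, g1_edges, induced=True)
```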
      <p>RelaxedVF2 (§4.1), an exact graph matching algorithm, does not return a similarity score.
It instead returns a one-to-one mapping of nodes between the subplan and plans in the case
library. While the length of the maximum common subgraph is often used to score matches,
we instead developed a more nuanced candidate scoring algorithm (§4.2) to increase
matching accuracy.</p>
    </sec>
    <sec id="sec-5">
      <title>4.1 RelaxedVF2</title>
      <p>RelaxedVF2 (Algorithm 1) computes the maximum common subgraph monomorphism
between two labeled directed multigraphs. Here, a node-induced subgraph is a
subset of the nodes together with all corresponding edges between them.</p>
      <p>VF2 matches two graphs, G₁ and G₂, using semantic and syntactic feasibility functions to
iteratively add compatible nodes of the graphs to an internal mapping, M, which is expressed
as a set of pairs (n, m) that represent the mapping of a node n ∈ G₁ with a node m ∈ G₂.
A mapping M is therefore a graph isomorphism if it is a bijective function that preserves the
edge structure of the two graphs. RelaxedVF2 uses a depth-first search through all possible
nodes that can be added based on semantic and structural similarity. It then progresses to
nodes matching on only semantic similarity before finally adding nodes matching on only
structural similarity.</p>
      <p>Algorithm 1: RelaxedVF2
PROCEDURE RelaxedVF2(s, G₁, G₂)
INPUT: an intermediate state s (the initial state s₀ has M(s₀) = ∅) and two graphs G₁, G₂
OUTPUT: the mappings between G₁ and G₂
IF M(s) covers all the nodes of G₂ THEN
  OUTPUT M(s) //The function M returns the mappings between nodes of G₁, G₂ in state s
ELSE
  L = [ ] //Sorted list of feasible pairs
  mappingFound = False
  P(s) ← candidate pairs for inclusion in M(s) //Uses the candidate-pairs function from VF2
  FOREACH (n, m) ∈ P(s)
    IF F(s, n, m) THEN //Feasibility check
      L ← L ∪ (n, m)
  WHILE NOT mappingFound //Loop until a match is found or no more candidates remain
    s′ ← M(s) ∪ L.pop(n, m) //Get the top feasible pair from the list
    mappingFound = RelaxedVF2(s′, G₁, G₂) //Recursive call
  IF NOT mappingFound //Output a partial match if no complete match is found
    OUTPUT M(s)
  Restore data structures</p>
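      <p>The following Python sketch mirrors the recursive structure of Algorithm 1 under simplifying assumptions of our own: label equality stands in for the semantic test, a one-directional edge check stands in for the structural test (VF2's actual feasibility rules are richer), and the largest partial mapping is kept when no complete monomorphism exists:</p>

```python
def relaxed_match(mapping, g1_adj, g2_adj, labels1, labels2):
    """Depth-first search for a monomorphic mapping of G2 into G1,
    returning the best partial mapping when no complete one exists."""
    if len(mapping) == len(g2_adj):            # M(s) covers all nodes of G2
        return dict(mapping), True
    used = set(mapping.values())
    target = next(n for n in g2_adj if n not in mapping)
    # Feasible pairs: unmapped G1 nodes, label-compatible ("semantic") first.
    candidates = sorted((n for n in g1_adj if n not in used),
                        key=lambda n: labels1[n] != labels2[target])
    best = dict(mapping)
    for cand in candidates:
        # Structural feasibility: every mapped G2 edge into target must
        # have a corresponding G1 edge (simplified, one direction only).
        if all(cand in g1_adj[mapping[p]]
               for p in mapping if target in g2_adj[p]):
            mapping[target] = cand
            result, complete = relaxed_match(mapping, g1_adj, g2_adj,
                                             labels1, labels2)
            del mapping[target]                 # restore data structures
            if complete:
                return result, True
            if len(result) > len(best):
                best = result                   # keep largest partial match
    return best, False

# Toy example: an observed two-step subplan matched into a three-step plan.
g1_adj = {"a": {"b"}, "b": {"c"}, "c": set()}
labels1 = {"a": "stack", "b": "on", "c": "clear"}
g2_adj = {"x": {"y"}, "y": set()}
labels2 = {"x": "stack", "y": "on"}
m, complete = relaxed_match({}, g1_adj, g2_adj, labels1, labels2)
```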
    </sec>
    <sec id="sec-6">
      <title>4.2 Scoring</title>
      <p>
        The size of the largest common subgraph can be used as a similarity measure
        <xref ref-type="bibr" rid="ref2">(Bergmann,
2002)</xref>
        . RelaxedVF2 is error tolerant and will return matches even if they are of lower quality.
Therefore, we designed a metric that is a function of both match size and quality. The match
algorithm scores 1 point for every pair fully matched on both semantics and structure (0.7 per
semantic match and 0.3 per structural match, based on the weights previously used in SET-PR
        <xref ref-type="bibr" rid="ref20">(Vattam et al., 2015)</xref>
        ). After retrieving all matches of the subgraph against the case library,
this score is used to sort the candidates and select the best match.
      </p>
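      <p>A sketch of this scoring scheme; only the 0.7/0.3 weights come from the text, while the structural test (matching out-degree) and the data layout are stand-ins of our own:</p>

```python
def score_mapping(mapping, g1_adj, g2_adj, labels1, labels2):
    """Score a node mapping: 0.7 per semantic (label) match and 0.3 per
    structural match, so a pair matched on both contributes a full point."""
    score = 0.0
    for n2, n1 in mapping.items():
        if labels2[n2] == labels1[n1]:
            score += 0.7                      # semantic match
        if len(g2_adj[n2]) == len(g1_adj[n1]):
            score += 0.3                      # structural match (simplified)
    return score

def best_case(case_scores):
    """Sort retrieved cases by score and return the best match."""
    return max(case_scores, key=case_scores.get)

# A single node pair matched on both label and structure scores one point.
s = score_mapping({"x": "a"}, {"a": set()}, {"x": set()},
                  {"a": "stack"}, {"x": "stack"})
```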
    </sec>
    <sec id="sec-7">
      <title>5. Empirical Study</title>
      <p>
        In this study, we compare plan-subplan matching using two similarity measures on ASGs:
(1) RelaxedVF2, and (2) DSQ (degree-sequence matcher)
        <xref ref-type="bibr" rid="ref19">(Vattam et al., 2014; 2015)</xref>
        . Our
claim is that RelaxedVF2 offers better performance than DSQ.
      </p>
      <p>The default plan representation consists of action-state sequences
(〈(a₀, s₀), …, (aₙ, sₙ)〉). We also evaluated a plan representation consisting of only action
sequences (〈(a₀), …, (aₙ)〉), because state information is not available in all planning
domains, and this presents a more difficult challenge for DSQ. This yields four conditions:
RelaxedVF2ActionStates, DSQActionStates, RelaxedVF2ActionsOnly, and DSQActionsOnly.</p>
      <p>We empirically test whether, for error-tolerant CBPR, RelaxedVF2 outperforms
DSQ in both the ActionStates and ActionsOnly conditions. We use plan recognition
accuracy as our performance metric, where accuracy is defined as the ratio of queries that
resulted in correct plan retrieval to the total number of queries.</p>
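      <p>This metric is straightforward to state in code (a sketch of the definition above, with a hypothetical results format):</p>

```python
def recognition_accuracy(results):
    """Plan recognition accuracy: correct retrievals / total queries.
    `results` is a list of (retrieved_plan_id, true_plan_id) pairs."""
    if not results:
        return 0.0
    correct = sum(1 for retrieved, true in results if retrieved == true)
    return correct / len(results)
```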
    </sec>
    <sec id="sec-8">
      <title>5.1 Empirical method</title>
      <p>
        We conducted our experiments in the Blocks World domain, which is simple and allows us
to quickly generate a plan library L with desired characteristics. We used SHOP2
        <xref ref-type="bibr" rid="ref15">(Nau et al.,
2003)</xref>
        to generate L’s plans as follows. We generated 20 random initial states and paired each
with 5 randomly generated goal states to obtain 100 planning problems. Each problem was
given as input to SHOP2 to obtain a plan. We fixed plan length at 20 by discarding any plan
whose length was not 20 and generating a new one (with a different goal state) in its place.
This distribution was chosen because it is challenging for plan recognition.
      </p>
      <p>
        We evaluated plan recognition accuracy using a leave-one-in strategy
        <xref ref-type="bibr" rid="ref1">(Aha &amp; Breslow,
1997)</xref>
        . For each compared condition:
1. We randomly selected a plan π in L (π is not deleted from L).
2. We introduced a fixed percentage of error into π, consisting of a uniform distribution of
missing, mislabeled, and extraneous actions, and random distortions of the states associated
with those actions. The error levels that we tested were {0%, 10%, 20%, 30%, 40%, 50%}.
3. The errorful plan π′ was then used to incrementally query L to retrieve a plan. For example,
if π′ had 20 steps, the evaluator performed 11 queries at the following plan lengths:
0% (initial state only; no actions are observed), 10% (the first two actions and states are
observed), and so on until 100% (the full plan is observed).
4. Each query derived from π′ was used to retrieve the top matching plan π̂. If π was
equal to π̂, the query was considered a success, and a failure otherwise.
5. We repeated steps 1-4 for all 100 plans in L in each of 20 trials.
      </p>
      <p>This yields 1,100 queries per error level per trial, for a total of 132,000 queries (1,100
queries × 6 error levels × 20 trials). We computed average accuracy over the 20 trials.</p>
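      <p>The query counts follow directly from the experimental design: 100 plans × 11 query lengths gives the per-level, per-trial count, and multiplying by the 6 error levels and 20 trials gives the total:</p>

```python
plans = 100
queries_per_plan = 11     # 0%, 10%, ..., 100% of a 20-step plan
error_levels = 6          # {0%, 10%, 20%, 30%, 40%, 50%}
trials = 20

per_error_level_per_trial = plans * queries_per_plan       # queries per level, per trial
total = per_error_level_per_trial * error_levels * trials  # queries overall
```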
    </sec>
    <sec id="sec-9">
      <title>5.2 Results and discussion</title>
      <p>We computed mean accuracy for each percentError (in [0.0,0.5] with increments of 0.1)
and each percentAction (in [0.0,1.0] with increments of 0.1) for RelaxedVF2 and DSQ. The
results are shown in Figures 4 and 5 for ActionStates and ActionsOnly, respectively. Our
results show that for all error levels and percent actions RelaxedVF2’s mean accuracy was
higher than DSQ’s. In ActionStates, RelaxedVF2 achieves 50% accuracy by 20% actions at
all error levels, whereas DSQ reaches 50% accuracy only at 100% actions, and only at 0% and
10% error. In ActionsOnly, RelaxedVF2 achieves 50% accuracy by 40% actions at all error
levels, whereas DSQ only approaches 50% accuracy at 100% actions with 0% error.</p>
      <p>We conducted a one-way ANOVA test to compare the effects of percent Actions
(0%-100%), percent Error (0%-50%), and the matching algorithm (RelaxedVF2, DSQ) on
accuracy. There was a significant effect on accuracy at p &lt; 0.05 with respect to the matching
algorithms (F(1,17)=18687550.204, p=0.0). This analysis shows that RelaxedVF2
significantly outperformed DSQ, which lends support to our claim.</p>
      <p>DSQ performs considerably worse without state information because the ASGs become
much smaller. The degree sequences across the partitions of the smaller graphs will yield
similar values, preventing DSQ from disambiguating the different plans.</p>
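      <p>A toy example (our own, not actual ASGs) of why degree sequences fail to disambiguate small graphs: a directed 4-cycle and two disjoint 2-cycles are structurally different yet have identical degree sequences:</p>

```python
def degree_sequence(edges, nodes):
    """Sorted (out-degree, in-degree) pairs for a directed graph."""
    out_deg = {n: 0 for n in nodes}
    in_deg = {n: 0 for n in nodes}
    for u, v in edges:
        out_deg[u] += 1
        in_deg[v] += 1
    return sorted((out_deg[n], in_deg[n]) for n in nodes)

g_a = degree_sequence([("1", "2"), ("2", "3"), ("3", "4"), ("4", "1")],
                      ["1", "2", "3", "4"])   # a directed 4-cycle
g_b = degree_sequence([("1", "2"), ("2", "1"), ("3", "4"), ("4", "3")],
                      ["1", "2", "3", "4"])   # two disjoint 2-cycles
```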
      <p>Surprisingly, accuracy decreased around 80% actions in RelaxedVF2 in the ActionStates
condition (Figure 4). As the error level increases, this dip occurs earlier. We hypothesize that
the more densely connected graphs resulting from additional action-state information
more closely resemble one another, thus reducing recognition accuracy. We
plan to investigate this in future work.</p>
      <p>Given that RelaxedVF2 is an exact graph matching algorithm and DSQ is an approximate
algorithm, DSQ should have a significantly shorter runtime. In this study, RelaxedVF2 had
a mean runtime (in seconds) of 0.121 and 0.045 in the ActionStates and ActionsOnly
conditions, respectively. DSQ mean runtime was 0.020 and 0.019 in these conditions. We
subjected the mean runtimes to a t-test and found the differences in the runtimes to be
significant at p &lt; 0.05 for both conditions.</p>
    </sec>
    <sec id="sec-10">
      <title>6. Summary</title>
      <p>
        CBPR under imperfect observability requires error-tolerant plan-subplan matching, which
in turn requires flexible representations and matching algorithms. In earlier work we introduced the
ASG representation for plan recognition and degree-sequence plan matching
        <xref ref-type="bibr" rid="ref19">(Vattam et al.,
2014; 2015)</xref>
        . Although this matching algorithm worked reasonably well, there remained
room for improvement. Here we presented RelaxedVF2, an alternative plan-subplan
matching algorithm. It is a subgraph monomorphism algorithm, and thus affords flexibility
and error tolerance in matching compared to VF2. In our empirical study we found support
for our claim that, for error-tolerant CBPR, RelaxedVF2 can outperform the degree-sequence
matcher, at least for the paradigmatic domain we studied.
      </p>
      <p>In future work, we will investigate whether the same result occurs when using datasets
from additional domains to address the single dataset limitation of our current study. We also
plan to integrate RelaxedVF2 into our plan recognition architecture to complement the
existing methods. We also plan to do a comparative study with other state-of-the-art plan
recognizers.</p>
    </sec>
    <sec id="sec-11">
      <title>Acknowledgements</title>
      <p>Thanks to OSD ASD (R&amp;E) for sponsoring this research. Swaroop Vattam conducted this
research while an NRC post-doctoral research associate at NRL. Keith Frazer conducted this
research while an NREIP intern at NRL. The views and opinions in this paper are those of
the authors and should not be interpreted as representing the official views of NRL or OSD.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Aha</surname>
            ,
            <given-names>D. W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Breslow</surname>
            ,
            <given-names>L. A.</given-names>
          </string-name>
          (
          <year>1997</year>
          ).
          <article-title>Refining conversational case libraries</article-title>
          .
          <source>Proceedings of the Second International Conference on CBR</source>
          (pp.
          <fpage>267</fpage>
          -
          <lpage>278</lpage>
          ). Providence, RI: Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Bergmann</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>2002</year>
          ).
          <article-title>Experience management: Foundations, development methodology, and Internet-based applications</article-title>
          . Berlin, Germany: Springer-Verlag.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Bui</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>A general model for online probabilistic plan recognition</article-title>
          .
          <source>Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence</source>
          (pp.
          <fpage>1309</fpage>
          -
          <lpage>1315</lpage>
          ). Acapulco, Mexico: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Charniak</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1991</year>
          ).
          <article-title>A probabilistic model of plan recognition</article-title>
          .
          <source>Proceedings of the Ninth National Conference on Artificial Intelligence</source>
          (pp.
          <fpage>160</fpage>
          -
          <lpage>165</lpage>
          ). Anaheim, CA: AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Charniak</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          (
          <year>1993</year>
          ).
          <article-title>A Bayesian model of plan recognition</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>64</volume>
          ,
          <fpage>53</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Cordella</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Foggia</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sansone</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Vento</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2004</year>
          ).
          <article-title>A (sub)graph isomorphism algorithm for matching large graphs</article-title>
          .
          <source>Transactions on Pattern Analysis and Machine Intelligence</source>
          ,
          <volume>26</volume>
          (
          <issue>10</issue>
          ),
          <fpage>1367</fpage>
          -
          <lpage>1372</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kerkez</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2006</year>
          ).
          <article-title>Case-based plan recognition with novel input</article-title>
          .
          <source>Control and Intelligent Systems</source>
          ,
          <volume>34</volume>
          (
          <issue>2</issue>
          ),
          <fpage>96</fpage>
          -
          <lpage>104</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Geib</surname>
            ,
            <given-names>C.W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.P.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>A probabilistic plan recognition algorithm based on plan tree grammars</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>173</volume>
          (
          <issue>11</issue>
          ),
          <fpage>1101</fpage>
          -
          <lpage>1132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geib</surname>
            ,
            <given-names>C.W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>C.A.</given-names>
          </string-name>
          (
          <year>1999</year>
          ).
          <article-title>A new model of plan recognition</article-title>
          .
          <source>Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence</source>
          (pp.
          <fpage>245</fpage>
          -
          <lpage>254</lpage>
          ). Bled, Slovenia: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Hong</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>2001</year>
          ).
          <article-title>Goal recognition through goal graph analysis</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          ,
          <volume>15</volume>
          ,
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Kautz</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          (
          <year>1986</year>
          ).
          <article-title>Generalized plan recognition</article-title>
          .
          <source>Proceedings of the Fifth National Conference on Artificial Intelligence</source>
          (pp.
          <fpage>32</fpage>
          -
          <lpage>37</lpage>
          ). Philadelphia, PA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Kumaran</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          (
          <year>2007</year>
          ).
          <article-title>Plan recognition as candidate space search</article-title>
          .
          <source>Master's thesis</source>
          , Department of Computer Science, North Carolina State University, Raleigh, NC.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Lau</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolfman</surname>
            ,
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Domingos</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Weld</surname>
            ,
            <given-names>D.S.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>Programming by demonstration using version space algebra</article-title>
          .
          <source>Machine Learning</source>
          ,
          <volume>53</volume>
          (
          <issue>1-2</issue>
          ),
          <fpage>111</fpage>
          -
          <lpage>156</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Lesh</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Etzioni</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          (
          <year>1996</year>
          ).
          <article-title>Scaling up goal recognition</article-title>
          .
          <source>Proceedings of the Fifth International Conference on Knowledge Representation and Reasoning</source>
          (pp.
          <fpage>178</fpage>
          -
          <lpage>189</lpage>
          ). Cambridge, MA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Nau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Au</surname>
            ,
            <given-names>T.-C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ilghami</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuter</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murdock</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Yaman</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          (
          <year>2003</year>
          ).
          <article-title>SHOP2: An HTN planning system</article-title>
          .
          <source>Journal of Artificial Intelligence Research</source>
          ,
          <volume>20</volume>
          ,
          <fpage>379</fpage>
          -
          <lpage>404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Pynadath</surname>
            ,
            <given-names>D.V.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Wellman</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Probabilistic state-dependent grammars for plan recognition</article-title>
          .
          <source>Proceedings of the Conference on Uncertainty in Artificial Intelligence</source>
          (pp.
          <fpage>507</fpage>
          -
          <lpage>514</lpage>
          ). San Francisco, CA: Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Sukthankar</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Geib</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pynadath</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Bui</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          (Eds.) (
          <year>2014</year>
          ).
          <source>Plan, Activity, and Intent Recognition</source>
          . Cambridge, MA: Elsevier.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Tecuci</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Porter</surname>
            ,
            <given-names>B.W.</given-names>
          </string-name>
          (
          <year>2009</year>
          ).
          <article-title>Memory based goal schema recognition</article-title>
          .
          <source>Proceedings of the Twenty-Second International Florida Artificial Intelligence Research Society Conference</source>
          . Sanibel Island, FL: AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Vattam</surname>
            ,
            <given-names>S.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aha</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Floyd</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2014</year>
          ).
          <article-title>Case-based plan recognition using action sequence graphs</article-title>
          .
          <source>Proceedings of the Twenty-Second International Conference on Case-Based Reasoning</source>
          (pp.
          <fpage>495</fpage>
          -
          <lpage>510</lpage>
          ). Cork, Ireland: Springer.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Vattam</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Aha</surname>
            ,
            <given-names>D.W.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Floyd</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2015</year>
          ).
          <article-title>Error tolerant plan recognition: An empirical investigation</article-title>
          .
          <source>Proceedings of the Twenty-Eighth Florida Artificial Intelligence Research Society Conference</source>
          . Hollywood, FL: AAAI Press.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>