<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Analogical Localization: Flexible Plan Execution in Open Worlds</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Scott Friedman</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mark Burstein</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jeffrey Rye</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ugur Kuter</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>SIFT, LLC Minneapolis, MN</institution>
          ,
          <addr-line>55401</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <fpage>33</fpage>
      <lpage>42</lpage>
      <abstract>
        <p>Cognitive systems face the challenge of pursuing changing goals in an open world with unpredictable collaborators and adversaries. Considerable work has focused on automated planning in dynamic worlds, and even on re-planning and plan repair due to unexpected changes. Less work explores how humans and computers can negotiate to define shared goals and collaborate over the fulfillment of those goals. Our work takes a domain-general approach to plan localization, the problem of establishing the set of steps within the plan that are candidates (potentially after some adaptive repair actions) for next actions given the world's unforeseen changes. We use analogical mapping to help agents determine the nearest states in a diverse plan relative to the current world state, identifying both the maximal satisfied states to which the world presently conforms, and the closest desired states adjacent to satisfied states that are both achievable by an action and make progress toward the goal. These are demonstrated in a system called CLiC. The system's overall purpose is to engage in symmetric dialog with human users about goals and recommended actions to achieve those goals. Both the human and the system may choose to take those actions, or describe them to the other party. They may not always do what they are told. Preliminary results indicate that our approach suits collaborative situated agents with flexible goals in open worlds.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Sometimes plans fail. The issues of uncertainty and exogenous events dominate, and
require plan adaptations during execution. Progress in AI automated planning
research has developed different approaches to plan adaptation [
        <xref ref-type="bibr" rid="ref1 ref10 ref12 ref14">12, 10, 1, 14</xref>
        ], but
despite these advances, the problem of analogically identifying the most relevant goals and
states in a plan to achieve when a failure occurs in the world is still under-explored. This
paper describes an approach to plan localization that addresses unexpected changes in
the world, primarily due to well-intentioned collaborators.
      </p>
      <p>Our approach differs from most work on reactive plan monitoring and repair.
For instance, our system does not track whether the last action was precisely matched,
since it exists in an open world without full knowledge of actions and paths to success.
As other agents act in the world, it examines the world and reasons about the progress
that has been made, if any. Our system identifies the plan states that are most similar
to the world state, and uses analogical inferences to resume (or re-enter) the plan at
the most preferred state. Our approach (1) computes a plan locale, (2) opportunistically
identifies a plan reentry state if the locale is not a recognized future plan state, and
(3) selects actions to reach that re-entry state from the observed world state. For
instance, in a block-building domain, the collaborator may have placed two or three
blocks at once in an unexpected configuration, putting the world in an unexpected state,
but the world state may be analogically similar to a more desirable planned state (e.g.,
with a slightly different block configuration), and we can change the world to satisfy
that desirable state with some small action. Our approach enables the system to take
opportunistic advantage of these situations and also take action despite setbacks.</p>
      <p>Copyright © 2017 for this paper by its authors. Copying permitted for private and
academic purposes. In Proceedings of the ICCBR 2017 Workshops. Trondheim, Norway.</p>
      <p>By using analogical similarity matching to compare the world against future states
in a diverse plan tree, we can compute two different states of interest:
1. The best satisfied state: the state in the plan that is satisfied and closest to the goal,
where distance ties are broken by structural similarity.
2. The nearest similar desired state: the plan state following a satisfied state that is
most structurally similar to the current world state.</p>
      <p>The satisfied and desired states comprise the locale, which is where the agent should
focus its development of a next action to execute. Given the desired state found by this
localization process, the system attempts to improvise actions, computed online via
analogical inference, that will transition the world back into the plan by moving it
to the desired state. We call this opportunistic plan reentry. Localization and reentry
flexibly allow the agent to react to setbacks in the world as well as exploit unexpected
world changes to more quickly achieve goals. This does not require a complete plan
of all possible world states, but the more diverse and complete the plan is, the more
resilient the localization behavior will be.</p>
      <p>We have implemented this approach in CLiC, a dialogue-oriented agent that
collaboratively builds block-based structures in a visual environment shared with a
human collaborator. The high-level plan localization approach is illustrated in Figure 1
with a working example. The human collaborator suggests a goal structure in language,
and CLiC uses its conceptual knowledge to envision that goal and any mentioned
constraints. It then generates a diverse plan via regression to the current (empty table) state.
CLiC either issues linguistic directives so that the user can move blocks or, alternatively,
responds to user directives by taking actions to move blocks on the table to
incrementally achieve the shared goal. At each step, if the user responds unpredictably, e.g., by
placing two blocks instead of one, or not placing them where directed, CLiC uses
analogical plan localization to re-establish its perceived position in the plan by matching
against the shared world state. In this process it re-aligns which real blocks correspond
to which planned blocks, and then proceeds to improvise new directives to move the
plan forward, e.g., “Push block 8 together with block 6.”</p>
      <p>We continue with a description of CLiC’s planning, plan revision, and the task
of collaborative building in Section 2. In Section 4, we describe our domain-general
analogical plan localization approach, which we empirically support in Section 5 with
multiple scenarios of CLiC utilizing this approach to flexibly mitigate and exploit
unexpected changes in the world. We close with a brief discussion of related work (Section 6)
and future work (Section 7).</p>
      <p>[Figure 1. Working example: (1) user directive: “Let’s build a 3-step staircase.” (2) Envisioned goal. (3) Plan generated by regressing from the goal. (4) CLiC localizes the world state within the plan (satisfied state and preferred state). (5) Analogical inference(s) from the preferred plan state yield new goals: (touching-horizontal b6 b8). (6) CLiC directive: “Push block 8 together with block 6.”]</p>
      <p>
We are developing CLiC for DARPA’s Communicating with Computers program, a
program about contextually-grounded multi-modal communication with machines. We
are presently working in two collaboration-oriented test domains: (1) collaborative goal
selection and construction of structures using blocks and (2) a system for discussions
with biologists about modeling and evaluating biochemical pathways. The basic CLiC
reasoning infrastructure is shared across these domains. CLiC is a subsystem of a larger
integration with the TRIPS agent-based dialog system architecture [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. CLiC provides
the domain reasoning for the dialogs occurring in both domains, and does planning and
action selection, as well as responses to the user. Agents from the TRIPS system do the
natural language understanding and maintain the state of the dialog as a high level goal
tree, issuing high-level collaborative goals to CLiC.
      </p>
      <p>
        CLiC reactively pursues collaborative goals by directly acting in the world or
issuing linguistic directives to the user [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], and sometimes asking questions when more
information is required to formulate a reasonable objective. In the biocuration domain,
CLiC coordinates domain-specific biological modeling and reasoning agents,
developed by other collaborators, to achieve collaborative goals with biologists. The rest of
the paper focuses on the collaborative block-building setting, since CLiC’s biocuration
setting does not presently demand advance planning.
      </p>
      <p>In the blocks world domain, the human collaborator suggests a named structure to
build, e.g., row, stack, wall, staircase, with additional specifiers for size. CLiC uses
conceptual knowledge to envision (as a set of logical and spatial relations over blocks) the
form to be constructed. This is shown in Figure 2(a). When the user issues subsequent
revisions to the goal, such as specifying sizes or colors, some of which may involve
conflicting constraints, CLiC transforms these revisions into rules representing the
specified constraints and then runs these rules to revise the goal. The result is illustrated
in Figure 2(b), which is the same goal after being told “the blocks should be green” and
then “the tops should be red.”</p>
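      <p>A minimal sketch of this rule-based goal revision, assuming an illustrative flat-tuple goal encoding; the rule names and representation here are our illustration, not CLiC’s actual implementation:</p>
      <preformat>
```python
# Goals are sets of (predicate, *args) tuples; revision rules rewrite them.
def apply_rules(goal, rules):
    """Run revision rules in order; later rules may override earlier ones."""
    for rule in rules:
        goal = rule(goal)
    return goal

def blocks_green(goal):
    """'The blocks should be green': color every block green."""
    blocks = {a[1] for a in goal if a[0] == "block"}
    return goal | {("color", b, "green") for b in blocks}

def tops_red(goal):
    """'The tops should be red': recolor blocks with nothing on them."""
    supports = {a[2] for a in goal if a[0] == "on"}  # blocks underneath another
    blocks = {a[1] for a in goal if a[0] == "block"}
    tops = blocks - supports
    kept = {a for a in goal if not (a[0] == "color" and a[1] in tops)}
    return kept | {("color", t, "red") for t in tops}

# Goal: a three-block stack b1 (bottom), b2, b3 (top).
stack_goal = {("block", "b1"), ("block", "b2"), ("block", "b3"),
              ("on", "b2", "b1"), ("on", "b3", "b2")}
revised = apply_rules(stack_goal, [blocks_green, tops_red])
```
      </preformat>
      <p>Running the rules in order lets the later revision (“the tops should be red”) override the earlier one for the affected blocks, matching the behavior shown in Figure 2(b).</p>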
      <p>When CLiC receives a new goal, or when a goal is modified, it performs a simple
plan regression search to generate a diverse plan from the solution state to the initial
(empty table) state. One such plan graph CLiC generates to build a three-step staircase
is shown in Figure 1. Unless specified by the user, the plan states are agnostic as to
which specific blocks are used, what color they are, and the orientation of the structure
(e.g., whether the staircase ascends to the right or left).</p>
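      <p>The regression search that produces such a plan graph can be sketched as follows, under the simplifying assumptions that states are sets of placement facts and every action adds one fact, so that regressing a state removes one fact; this is an illustration, not CLiC’s planner:</p>
      <preformat>
```python
# Breadth-first regression from the goal state back to the empty-table state.
def regress(goal_state, predecessors):
    """Build a plan graph mapping each state to the set of successor states
    that are one action closer to the goal."""
    graph = {}
    frontier, seen = [goal_state], {goal_state}
    while frontier:
        state = frontier.pop(0)
        for prev in predecessors(state):
            graph.setdefault(prev, set()).add(state)
            if prev not in seen:
                seen.add(prev)
                frontier.append(prev)
    return graph

# Undoing a "place" action removes one placement fact from the state.
goal = frozenset({("on-table", "b1"), ("on", "b2", "b1")})
plan = regress(goal, lambda s: [s - {f} for f in s])
```
      </preformat>
      <p>Because each state regresses through every removable fact, the resulting graph contains multiple paths from the empty table to the goal, which is the diversity that later makes localization resilient.</p>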
      <p>After agreeing on a goal, and in the absence of additional directives and questions
from the user, the rest of the block-building proceeds by (1) plan localization, (2)
action selection, (3) action execution, and (4) awaiting collaborator actions before looping
again. Before describing the plan localization approach that is our focus here, we briefly
review the theory and implementation of CLiC’s analogical reasoning.</p>
    </sec>
    <sec id="sec-2">
      <title>Background: Analogy &amp; Similarity</title>
      <p>
        CLiC uses a domain-general implementation of the Structure-Mapping Theory [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] with
a greedy algorithm similar to the Structure-Mapping Engine [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>Structure-mapping takes two relational semantic graphs as input, a base and a
target, and greedily computes a best mapping (i.e., nodes that correspond between
base and target). Structure-mapping is a maximal common subgraph (MCS) problem,
with the following hard constraints:
– One-to-one: a node in the base can match at most one node in the target, and vice
versa. (This follows from the definition of MCS and isomorphism.)
– Parallel connectivity: two relational nodes can only correspond if their arguments
also correspond.
– Identicality: relations (or attributes) can only correspond if their predicates (or
categories) correspond.</p>
      <p>Structure-mapping specifies an additional soft constraint:
– Systematicity: higher-order structures (e.g., relations over other relations or
functions) are preferred over independent facts.</p>
      <p>Structure-mapping computes a similarity score s(b, t) between the base b and target t, which
is a weighted sum of nodes put into correspondence (higher-order relational nodes are
weighted higher to implement systematicity). Intuitively, s(b, t) increases with the number
of nodes in the mapping and with the systematicity of the nodes in the mapping, all else
being equal. This is the objective function to maximize in the MCS solution. We use it
to rank similarity: for a base b, target t1 is more similar than target t2 if s(b, t1) &gt; s(b, t2).</p>
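      <p>As an illustration of the weighting, the following sketch scores a mapping with systematicity-style weights; the exponential weighting and expression encoding are assumptions for illustration, not SME’s exact scheme:</p>
      <preformat>
```python
# Expressions are nested tuples: (predicate, arg1, arg2, ...); entities are strings.
def order(expr):
    """Entities have order 0; a relation is one more than its deepest argument."""
    if isinstance(expr, tuple):
        return 1 + max(order(arg) for arg in expr[1:])
    return 0

def similarity(mapping):
    """mapping: list of (base_expr, target_expr) correspondences.
    Higher-order expressions get larger weight (systematicity)."""
    return sum(2 ** order(base) for base, _ in mapping)

# A flat correspondence versus one that also maps a higher-order relation.
flat = [(("on", "b1", "b2"), ("on", "t1", "t2"))]
deep = flat + [(("cause", ("on", "b1", "b2"), ("stable", "b1")),
               ("cause", ("on", "t1", "t2"), ("stable", "t1")))]
```
      </preformat>
      <p>With this weighting, the mapping that includes the higher-order cause relation scores higher than the flat one, all else being equal, which is the systematicity preference described above.</p>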
      <p>Finally, structure-mapping produces analogical inferences between base and target
graphs. These analogical inferences are relations and attributes that are excluded from
the mapping (i.e., they have no match in the MCS) but that describe elements that are
in the mapping. Analogical inferences can be projected from base-to-target or
target-to-base. For example, suppose blocks world-b1 and world-b2 in the world
correspond to blocks plan-b1 and plan-b2 in a plan state, respectively, and the relational
statement (touching-horizontal plan-b1 plan-b2) is asserted in the plan, but
the corresponding statement is not asserted in the world. Structure-mapping will
produce the analogical inference (touching-horizontal world-b1 world-b2) as
a projection in the world graph. Analogical inferences are relations or attributes
projected whenever symbols correspond across graphs (e.g., ⟨world-b1, plan-b1⟩ and
⟨world-b2, plan-b2⟩) and one graph lacks a relation or attribute over the
corresponding elements (e.g., (touching-horizontal world-b1 world-b2) is not asserted,
so it is inferred). These inferences are not provably sound, but as we discuss next, they
can be used very practically for improvising actions in the world.</p>
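      <p>The projection of candidate inferences described above can be sketched as follows, using a flat fact encoding (an assumption for illustration, not the actual graph representation):</p>
      <preformat>
```python
# Project facts from a source graph into a target graph via an entity mapping.
def analogical_inferences(source_facts, target_facts, entity_map):
    """Return source facts whose entities all have correspondences but whose
    translated form is missing from the target graph."""
    projected = []
    for pred, *args in source_facts:
        if all(a in entity_map for a in args):
            fact = (pred, *(entity_map[a] for a in args))
            if fact not in target_facts:
                projected.append(fact)
    return projected

# The paper's example: the plan asserts touching-horizontal, the world does not.
plan_facts = {("touching-horizontal", "plan-b1", "plan-b2")}
world_facts = set()
mapping = {"plan-b1": "world-b1", "plan-b2": "world-b2"}
inferred = analogical_inferences(plan_facts, world_facts, mapping)
```
      </preformat>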
    </sec>
    <sec id="sec-3">
      <title>Approach</title>
      <p>CLiC’s analogical plan localization runs whenever the world changes. For the purpose
of illustration, suppose the task is to build a three-step staircase out of cubed blocks
(six blocks in total) and CLiC is directing the human user in building
the structure. Suppose also that there is a single block b6 on the table, and CLiC selects
a planned action and tells the user, “Put block B6 on the table, and push B6 together
with B8.” This directive, should the human user obey it, would traverse a single
edge in the plan graph and result in a transition to a planned state.</p>
      <p>Now suppose that instead of putting b6 next to b8, the user put b6 on the table apart
from b8 and then immediately put b7 on top of b6, as shown in Figure 1, bottom left.
CLiC’s plan localization runs after this unexpected (and undirected) world change.</p>
      <p>The plan localization algorithm is given the world state w and sets its current state
c to the initial state (e.g., STATE342 in Figure 1), and then performs a best-first search:
1. If c is the goal state, the world satisfies the goal. Return success.
2. Otherwise, compute N as the set of c’s immediate next states in the plan graph.
3. Compute the next state c′ with the highest similarity to the world: c′ = argmax over n in N of s(w, n).
4. If the mapping from w to c′ yields no analogical inferences into w, then w satisfies state c′;
set c = c′ and loop to step 1.
5. Otherwise, return c as the best satisfied state s_sat and c′ as the desired state s_des.
This best-first search uses structural similarity as a guide through the state space of the
plan to orient the agent within the plan. The agent has now approximated the best state
in the plan whose conditions have been satisfied (s_sat) and a structurally similar state that
might be opportunistically re-entered (s_des).</p>
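      <p>The search above can be sketched as follows, with set overlap standing in for structure-mapping similarity and set difference standing in for analogical inferences; both are simplifications of CLiC’s analogical machinery, and the state encoding is illustrative:</p>
      <preformat>
```python
# States are sets of facts; the plan graph maps each state to its successors.
def localize(world, plan_graph, initial, goal):
    """Walk the plan graph from the initial state toward the goal, always
    moving to the successor most similar to the world; stop when the best
    successor is not yet satisfied. Returns (s_sat, s_des)."""
    c = initial
    while True:
        if c == goal:
            return (c, None)               # step 1: the goal is satisfied
        successors = plan_graph.get(c, [])
        if not successors:
            return (c, None)
        # Step 3: next state with highest similarity to the world.
        nxt = max(successors, key=lambda n: len(world.intersection(n)))
        unmet = nxt.difference(world)      # stand-in for analogical inferences
        if unmet:
            return (c, nxt)                # step 5: (s_sat, s_des)
        c = nxt                            # step 4: nxt is satisfied; continue

# Toy plan: put b6 on the table, then push b8 against it.
s0 = frozenset()
s1 = frozenset({"b6-on-table"})
s2 = frozenset({"b6-on-table", "b8-touching-b6"})
plan = {s0: [s1], s1: [s2]}
world = frozenset({"b6-on-table"})
```
      </preformat>
      <p>With b6 on the table but not touching b8, the walk stops at s1 and returns s2 as the desired state; the unmet facts of s2 then become the requirements that drive opportunistic reentry.</p>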
      <p>The algorithm then computes actions that will change the world to satisfy s_des by
computing analogical inferences in the mapping from w to s_des: all analogical
inferences from s_des into w are treated as requirements to satisfy s_des. In the case described
above (where b6 is not touching b8), the system can force the world into STATE339
in Figure 1 by achieving the analogical inference (touching-horizontal b6 b8).
This action allows CLiC to jump from its previous locale with one block (i.e.,
STATE341) two steps ahead (i.e., to STATE339) by exploiting the unexpected change.</p>
    </sec>
    <sec id="sec-4">
      <title>Experiments</title>
      <p>We present four scenarios where CLiC localizes itself and achieves its goal, despite
unexpected changes in the world and unexpected starting conditions.</p>
      <p>5.1 Change spatial relations to reenter the plan.
In Section 4, we used the scenario in Figure 1 to outline the approach. In
this scenario, the goal is to build a staircase. With a single block on the table, CLiC
directs the human to put another block next to it; however, the user instead stacks two
blocks on the table apart from the first. CLiC uses analogy to find the similar, desired
plan state with a stack of two blocks and another block touching it on the table. The
analogical inference is that B6 and B8 should be touching, so it directs the user to
push them together. This improvisation allows CLiC to reenter and complete the plan.</p>
      <p>5.2 Utilize unexpected structure toward the goal.
In this scenario, shown in Figure 3, the human suggests building a three-block stack.
CLiC suggests stacking a second block on B1, but the human instead stacks B10 and B11
apart from B1. CLiC then localizes this state to the penultimate state using the new
B10/B11 entities, and refocuses effort on the B10/B11 stack instead of the initially
suggested B1 stack, with the directive “How about you put B12 on B11?”</p>
      <p>[Figure 3. User-specified goal: “let’s build a stack with three blocks.” ACT 1: the human follows CLiC’s directive. ACT 2: the human disregards CLiC’s directive and places two blocks apart from B1. CLiC plan graph, with satisfied (sat) and desired (des) states marked before ACT 1 (CLiC: “How about you put B1 on the table?”), after ACT 1 (CLiC: “How about you put B11 on B1?”), and after ACT 2 (CLiC: “How about you put B12 on B11?”).]</p>
      <p>
Staircases can be built in either direction, unless otherwise specified. In the Figure 4
scenario, CLiC and the human reach a state where a row of three blocks is on the
table and a fourth block is on top of the middle one. From here, the staircase could still
be built in either direction. CLiC suggests building in one direction, but the human
disregards the suggestion and puts a block on the other side.</p>
      <p>CLiC reacts to this unexpected change by localizing the world into the
penultimate state anyway and directing the user to put B3 on B2 to complete the staircase in the
direction the collaborator determined.</p>
      <p>In the scenario shown in Figure 5, the human places a four-block row on the table,
with a fifth block on top, before specifying the staircase goal via dialogue. CLiC
reactively localizes the already-developed world into its plan, identifying the penultimate
state STATE1988 as the desired state s_des. The fourth block B4 on the table
corresponds to a second-row block in the planned state, so the analogical inferences include
(touching-horizontal b10 b4) and (on b4 b3). CLiC achieves these spatial
relations by stacking B4 on B3 and subsequently achieves the goal state.</p>
      <p>This illustrates that analogical plan localization is useful for orienting the agent in
unexpected starting states in addition to reconciling unexpected changes in the world.</p>
      <p>[Figure 4. User-specified goal: “let’s build a staircase with three steps.” ACT 4: the human places B12 on B1. ACT 5: the human puts B2 on B11 instead of B10, building the staircase in the opposite direction. CLiC plan graph, with satisfied (sat) and desired (des) states marked before ACT 4 (CLiC: “How about you put B12 on B1?”), after ACT 4 (CLiC: “How about you put B2 on B10, and put B2 and B12 together?”), and after ACT 5 (CLiC: “How about you put B3 on B2?”).]</p>
      <p>[Figure 5. User-specified goal: “build stairs with three steps.” START: the human places blocks and then suggests the goal. ACT 1: CLiC moves B4 to a new location. ACT 2: CLiC completes the stairs with B11. CLiC plan graph, with satisfied (sat) and desired (des) states marked at START, after ACT 1, and after ACT 2.]</p>
    </sec>
    <sec id="sec-5">
      <title>Related Work</title>
      <p>
        Many AI planning systems have addressed re-planning and plan repair. Decades of
work on reactive planning (e.g., [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]) have investigated insertive and destructive
approaches to reconciling the world state with plan states. The approach described in this
paper differs from these AI reactive replanning and plan repair methods in that our
approach uses structurally similar states to opportunistically repair plans, rather than
replanning from scratch given the current situation. As such, it is somewhat similar to
a case-based adaptation approach, though the adaptations are made not on prior cases
but on the plans initially developed for the goal.
      </p>
      <p>
        Approaches for replanning from scratch [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] are fundamentally different from the
approach described in this paper, since replanning generates new plans from the current
state to the goals of the planning problem; conversely, our approach identifies
opportunities for the agent to re-enter the plan, and then the existing plan can be reused.
      </p>
      <p>
        AI plan repair typically focuses on locally repairing hierarchical plans when the
system identifies a discrepancy during execution. For instance, replanning approaches
like HOTRiDE [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and SHOPLifter [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] are closely related to the case-based plan
adaptation techniques proposed in the RepairSHOP system [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        Approaches to diverse planning (e.g., [
        <xref ref-type="bibr" rid="ref13 ref2 ref9">2, 13, 9</xref>
        ]) have generated measurably diverse
plans to cover more of the space of possible outcomes and to explore possible state
discrepancies in plans a priori, before execution. Among these is Coman and Muñoz-Avila’s
work describing how to use case-based similarity/analogy methods to generate
semantically different plans. Unlike these approaches, which use analogy and similarity
to identify differences among diverse plans, our approach uses analogical reasoning to
compare the world to the plan, orient the agent, and select actions.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Discussion &amp; Future Work</title>
      <p>We presented an approach to analogical plan localization that allows agents to flexibly
recompute their locale of execution in a plan after drastic or unexpected world changes.
In this setting, the plan is not considered a strict policy for execution in the world;
rather, it provides an ordering over partial world states that can be opportunistically
reentered and traversed.</p>
      <p>Our choice of analogy is particularly useful in a block-building domain. For
instance, if our plan states that a blue block should go on a red block, it does not matter
which specific blue block we choose. This means that analogical mapping can
flexibly re-frame which actual blue block corresponds to the planned blue block in order
to accommodate other entities and spatial relations in the analogical mapping. Other
domains that allow substitution of entities will likewise benefit from this approach,
whereas highly-specific goals are less amenable, e.g., if the goal in a logistics domain
is for a specific truck to arrive with specific cargo at a specific location.</p>
      <p>
        Our approach of localization via analogy is not limited to plans; we are using similar
approaches to build systems that read articles and then localize extracted information
within large models [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], akin to event recognition and information fusion. In this setting,
extracted material, such as an abstract description of an event, may localize against
many concrete events in a large model, so we use a constrained similarity-based retrieval
model similar to MAC/FAC [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        Near-term future work includes validating this approach on other planning domains,
as we believe that PDDL domain representations will support analogical mapping. Other
considerations include scaling to situations with a larger branching factor, where CLiC’s
exhaustive regression planning is not tractable. In these cases, we could utilize HTN
planning with diversity (e.g., [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]) to
cover a subset of the plan space and still support plan localization.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was supported in part by Contract W911NF-15-C-0238 with the US Defense
Advanced Research Projects Agency (DARPA) and the Army Research Office (ARO).
Approved for Public Release, Distribution Unlimited. The views expressed are those
of the authors and do not reflect the official policy or position of the Department of
Defense or the U.S. Government.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Ayan</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuter</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yaman</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.P.:</given-names>
          </string-name>
          <article-title>HOTRiDE: Hierarchical Ordered Task Replanning in Dynamic Environments</article-title>
          .
          <source>In: ICAPS-07 Workshop on Planning and Plan Execution for Real-World Systems</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Coman</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Muñoz-Avila</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Generating diverse plans using quantitative and qualitative plan distance metrics</article-title>
          .
          <source>In: Proceedings AAAI</source>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Cushing</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benton</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kambhampati</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Replanning as deliberative re-selection of objectives</article-title>
          .
          <source>Tech. rep., Computer</source>
          Science Department (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ferguson</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Allen</surname>
            ,
            <given-names>J.:</given-names>
          </string-name>
          <article-title>A cognitive model for collaborative agents</article-title>
          .
          <source>In: Proceedings of the AAAI 2011 Fall Symposium on Advances in Cognitive Systems. AAAI</source>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ferguson</surname>
            ,
            <given-names>R.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lovett</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gentner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Extending SME to handle large-scale cognitive modeling</article-title>
          .
          <source>Cognitive Science</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Forbus</surname>
            ,
            <given-names>K.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gentner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Law</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>MAC/FAC: A model of similarity-based retrieval</article-title>
          .
          <source>Cognitive science 19</source>
          (
          <issue>2</issue>
          ),
          <fpage>141</fpage>
          -
          <lpage>205</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Friedman</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Burstein</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McDonald</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Paullada</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plotnick</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bobrow</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cochran</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pustejovsky</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Anick</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Learning by reading: Extending and localizing against a model</article-title>
          .
          <source>In: Proceedings of the 4th Annual Conference on Advances in Cognitive Systems</source>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Gentner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Structure-mapping: A theoretical framework for analogy</article-title>
          .
          <source>Cognitive Science 7</source>
          (
          <issue>2</issue>
          ),
          <fpage>155</fpage>
          -
          <lpage>170</lpage>
          (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Goldman</surname>
            ,
            <given-names>R.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuter</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Measuring plan diversity: Pathologies in existing approaches and a new plan distance metric</article-title>
          .
          <source>In: AAAI</source>
          . pp.
          <fpage>3275</fpage>
          -
          <lpage>3282</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kuter</surname>
            ,
            <given-names>U.</given-names>
          </string-name>
          :
          <article-title>Dynamics of behavior and acting in dynamic environments: Forethought, reaction, and plan repair</article-title>
          .
          <source>Tech. Rep. 2012-1</source>
          , SIFT (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>McDonald</surname>
            ,
            <given-names>D.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Greenbacker</surname>
            ,
            <given-names>C.F.</given-names>
          </string-name>
          :
          <article-title>'If you've heard it, you can say it': towards an account of expressibility</article-title>
          .
          <source>In: Proceedings of 6th International NLG Conference</source>
          . pp.
          <fpage>185</fpage>
          -
          <lpage>189</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Musliner</surname>
            ,
            <given-names>D.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Durfee</surname>
            ,
            <given-names>E.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shin</surname>
            ,
            <given-names>K.G.</given-names>
          </string-name>
          :
          <article-title>Execution monitoring and recovery planning with time</article-title>
          .
          <source>In: Proceedings of the Seventh IEEE Conference on Artificial Intelligence Applications</source>
          . vol.
          <volume>1</volume>
          , pp.
          <fpage>385</fpage>
          -
          <lpage>388</lpage>
          . IEEE (
          <year>1991</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Myers</surname>
            ,
            <given-names>K.L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>T.J.</given-names>
          </string-name>
          :
          <article-title>Generating qualitatively different plans through metatheoretic biases</article-title>
          .
          <source>In: AAAI</source>
          . pp.
          <fpage>570</fpage>
          -
          <lpage>576</lpage>
          . AAAI/MIT Press, Menlo Park, Cal. (July
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Warfield</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hogg</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lee-Urban</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Munoz-Avila</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Adaptation of hierarchical task network plans</article-title>
          .
          <source>In: FLAIRS-2007</source>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>