<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>The Internal Reasoning of Robots</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Don Perlis</string-name>
          <email>perlis@cs.umd.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Justin Brody</string-name>
          <email>justin.brody@goucher.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sarit Kraus</string-name>
          <email>sarit@cs.biu.ac.il</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Michael Miller</string-name>
          <email>mjmiller@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Maryland, Goucher College, Bar Ilan University</institution>
          ,
          <addr-line>Bethesda MD</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We argue for the value of examining the internal processes that robots might actually use to draw inferences in a timely way in a dynamic world. This requires a significantly different way of thinking about logic and reasoning, which in turn bears on some traditional logic-related problems such as omniscience and reasoning in the presence of a contradiction, as well as on a wide variety of other AI issues. A nonstandard internally-evolving notion of time seems to be the key that unlocks other tools.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        We teeter on the edge of the age of general-purpose robots.
It thus becomes ever more important that commonsense
reasoning (CSR) examine in some detail just how such a
robot will actually think, i.e., produce inferences over time
(as it plans, decides, assesses, questions, learns, explores,
updates, reconsiders, etc). In particular, robots will need to
keep their reasoning abreast of at least some aspects of the
evolving world, including the passage of time and how
they are progressing with regard to their own (also
evolving) goals.1
On the surface much of CSR may seem to be aiming at just
these issues.2 But the bulk of such work follows what Ray
Reiter has called the “external design stance”
        <xref ref-type="bibr" rid="ref39">(Reiter 2001,
pp 292-293)</xref>
        : that of a designer-scientist “entirely external
to … [and] … looking down on some world inhabited by
an agent.” Indeed, a lot of this work is very relevant and
has led to major advances in our understanding: situation
calculus, nonmonotonic reasoning, and much more. Still,
the external stance is a highly idealized
abstraction that creates an unworkable barrier regarding a
robot’s internal reasoning, and in addition faces huge
hurdles such as omniscience, contradiction-intolerance, and
more.
      </p>
      <p>
        Work primarily supported by the U. S. Office of Naval Research.
1 While we recognize that Markov decision processes (MDPs) and related
technical tools are standard items in much of current (often
highly structured special-task) robotic work, general-purpose robots will be
bombarded with “culturally supplied” information from other agents,
signage, online, and so on, and will need to reason in real-time with such
information. Hence a knowledge base (KB) managed in large measure by
inferential processes seems unavoidable.
2 See for instance
        <xref ref-type="bibr" rid="ref38">(Rajan&amp;Saffiotti 2017)</xref>
        for very recent work.
This paper attempts to shed light on that barrier and those
hurdles, and to highlight an alternative that drives a sharp
wedge between two notions of logic: (i) the standard
“external” kind (E-logics) that specify features from afar via
closure under (some form of) consequence or entailment
relation, and (ii) “internal” ones (I-logics) that represent
(and indeed can actually be used for) the inferential
processing undertaken by an agent over time. (We especially
focus on active logic, which is perhaps the most developed
form of I-logic so far. Active logic grew out of ideas in
        <xref ref-type="bibr" rid="ref14">(Elgot-Drapkin&amp;Perlis 1990)</xref>
        , and has been continually
investigated ever since
        <xref ref-type="bibr" rid="ref11 ref23 ref30 ref4">(Nirkhe et al 1991; Miller&amp;Perlis
1996; Kraus et al 2000; Anderson et al 2008; Brody et al
2014; Brody&amp;Perlis 2015)</xref>
        .)
As we will see, some of the issues faced by E-logics (e.g.,
omniscience) simply go away in an I-logic approach. In
addition, we have found a wide array of unexpected
benefits of such an approach, ones that tie CSR to many other parts
of AI. Thus the present paper is also a kind of progress
report, pulling together many aspects of our attempt to look
under the robotic hood, to craft appropriate logic
mechanisms to go there, and to explore applications across AI. As
such, it will have a large number of short sections; we beg
the reader’s indulgence, for we see this as the most useful
way to communicate the range of these ideas compactly.
The single most salient departure that I-logics make from
E-logics is that of taking into account the actual process of
inferring as something that itself takes time. Thus when a
conclusion is inferred, it has become a later time than prior
to reaching that conclusion. This time-stratification spreads
successive inferences out and leaves a self-updating record
of an agent’s evolving beliefs up until the present moment
(which itself then moves ahead one more step, and so on
indefinitely). Secondarily, this stratification then provides a
very simple yet far-reaching form of introspection: looking
back at one’s beliefs of past moments and drawing
conclusions bearing on everything from non-monotonicity and
contradiction-handling, to ambiguity resolution, agent
control of semantics, and awareness of own actions. Third, the
notions of axiom and theorem and entailment are no longer
very informative: beliefs come and go – still due to
(various forms of) inference, but including evolving time and
the ability to give up (i.e., disinherit) beliefs that are judged
as no longer appropriate.
      </p>
      <p>Active logic in particular posits an unending3 sequence of
time-steps, at each of which the knowledge base (KB) has
a finite number of wffs, considered as the beliefs that the
reasoning agent holds (at that step); the contents of the KB
then fluctuate in time, and there is no final state where the
agent arrives at its “finished” belief-set. It is the agent’s
behavior through time that is of interest.</p>
    </sec>
    <sec id="sec-2">
      <title>Elementary Example: Go to Lunch</title>
      <p>A robot needs to get to a noon lunch date, and it is now
11am. How can it ever decide to start walking? The
problem is that, given Now(11:00), standard logics will treat
this as an axiom and so the robot will never realize the time
has changed, e.g., that it has now become 11:30 and it
should start walking.4 Clearly it is essential that the robot
be able to update its belief as to what time it is.</p>
<p>An example of the desired behavior is illustrated below;
underlined items on each line indicate beliefs newly
formed at the corresponding time-step:</p>
      <sec id="sec-2-1">
        <title>Time</title>
        <p>11:00
11:01
…
11:30
11:31</p>
      </sec>
      <sec id="sec-2-2">
        <title>Evolving belief set</title>
<p>Now(11:00); Now(11:30) → Do(walk)
Now(11:01); Now(11:30) → Do(walk)
Now(11:30); Now(11:30) → Do(walk)</p>
        <p>
Now(11:31); Now(11:30) → Do(walk); Do(walk)
At time 11:31 it has just inferred Do(walk).5 Notice that
beliefs of the form Now(t) come and go, whereas the
“plan” to walk starting at 11:30 continues to be inherited.6
A “clock” inference rule (along with Modus Ponens in the
last two steps) can achieve this: from Now(t) infer
Now(t+1):
t: Now(t)
-----------------
t+1: Now(t+1)
3 In concert with Nilsson’s notion of an agent with a lifetime of its own
          <xref ref-type="bibr" rid="ref29">(Nilsson 1983)</xref>
          .
4 If lunch for a robot sounds silly, the reader is invited to imagine that the
task instead is to approach and disarm a bomb at noon (when local
civilians will have been safely moved away).
5 If one wants to be picky, perhaps this should have been inferred a little
earlier, say at 11:29, so that the walking can actually start by time 11:30.
Here we are ignoring such details, and also the granularity of time steps.
6 After 11:30, there is no need to continue inheriting the plan; current
implementations of active logic do not take advantage of this “garbage
collection” but we expect our next version to do so.
        </p>
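<p>The clock rule and a one-step Modus Ponens can be simulated directly. Below is a minimal runnable sketch in which beliefs are plain strings and times are encoded as minutes past 11:00; the function name step(), the string encodings, and the single-step rule application are our illustrative assumptions, not the actual active logic implementation.</p>
<preformat>
```python
# Toy active-logic step: beliefs are string wffs; times are minutes past 11:00.
import re

def step(kb):
    """Produce the KB at time t+1 from the KB at time t."""
    new_kb = set()
    for wff in kb:
        m = re.fullmatch(r"Now\((\d+)\)", wff)
        if m:
            # clock rule: from t: Now(t) infer t+1: Now(t+1); Now(t) is NOT inherited
            new_kb.add(f"Now({int(m.group(1)) + 1})")
        else:
            new_kb.add(wff)  # every other belief is inherited unchanged
    # one-step Modus Ponens: if P and "P -> Q" were both t-beliefs,
    # then Q becomes a (t+1)-belief
    for wff in kb:
        if " -> " in wff:
            p, q = wff.split(" -> ", 1)
            if p in kb:
                new_kb.add(q)
    return new_kb

kb = {"Now(0)", "Now(30) -> Do(walk)"}  # 11:00, plus the plan
for _ in range(31):                      # run the agent to 11:31
    kb = step(kb)
print("Do(walk)" in kb)  # True
```
</preformat>
<p>As in the table above, Do(walk) appears one step after Now(30) holds, i.e., at 11:31, while the Now(t) beliefs have come and gone along the way.</p>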
<p>While this may seem simple enough, it radically changes
the notion of a logic from an external specification
(E-logic) of a system in another world, to an internal
mechanism (I-logic) operating within and as part of that world. In
particular, the example is written in the notation of active
logic, the I-logic approach that we have been pursuing.
We next offer three clarifications to avoid confusion
between E- and I-logics.</p>
      </sec>
      <sec id="sec-2-3">
        <title>This Is Not Your Grandmother’s Temporal Logic</title>
        <p>Temporal logics are well known.7 But, in virtually all
cases, they are not properly temporal – that is, they do not
vary with time. In fact, they are examples of E-logics,
taking an external timeless stance even while looking in on a
world that may evolve in time. In effect, temporal logics
have a frozen permanent now from which they can express
facts about what is, will be, or was the case at various
specified moments. But inferences made using such logics do
not correspond to anything changing within the world
being explored.</p>
        <p>Yet a wealth of beneficial connections arise between a
properly temporal (I-logic) version of CSR and much of
the rest of AI – e.g., NLP, perception, robotics, planning.
As noted, this paper attempts to bring together a wide
range of such benefits as well as provide motivation for the
underlying logical apparatus, especially in the active logic
form of I-logic. In effect, time-change is the root out of
which all the rest flows. In particular, it dispenses with
omniscience quite trivially: an agent believes only what it
has had time to come to believe so far; anything else it may
come to believe only later on (as further inferences are
drawn). Such an agent certainly does not believe (contain
in its KB) all wffs that are entailed by its current beliefs.
Indeed, current beliefs may well be inconsistent – more on
that below.</p>
      </sec>
      <sec id="sec-2-4">
        <title>This Is Not Your Grandfather’s Belief Revision</title>
        <p>
          Belief revision8 provides a possible way to view the above
clock rule: insert Now(11:30) as an update, which triggers
relaxation of the KB – removal of Now(11:00) among
other changes. Yet that last phrase (“among other changes”)
is where E-logic reveals one of its main hurdles: standard
notions of belief revision – being based on a notion of
closure under consequence – cannot serve as a mechanism for
a robot to use, simply because such closure in general is
very expensive (in most cases non-terminating or even
undecidable). This is the omniscience problem, and is universally
recognized as unrealistic: producing consequences
is time-consuming.9
7 For standard approaches, see
          <xref ref-type="bibr" rid="ref23 ref35 ref7">(Pnueli 1977; Baral&amp;Zhao 2008; Gonzalez
et al 2002; Barringer et al 2013; Kraus&amp;Lehmann 1986)</xref>
          .
8 See, e.g.,
          <xref ref-type="bibr" rid="ref13 ref20 ref41">(Gardenfors 2003; Sloan&amp;Turan 1999; Goldsmith et al 2004;
Delgrande et al 2013; Diller et al 2015)</xref>
          for traditional E-logic approaches.
Traditional (E-logic) belief revision also suffers from
“recency prejudice”
          <xref ref-type="bibr" rid="ref32">(Perlis 1997, 2000)</xref>
          , in which newly
acquired information is taken to have a firm validity that
preexisting beliefs must yield to. Yet it is hard to think of a
case in which a new item P should take precedence over
one’s entire KB. The reasons for preferring P would surely
in large measure be deeply embedded in that very KB as
part of one’s understanding of many relevant aspects of the
world. Thus P and the KB (including information as to
where this new P came from) would need to “fight it out”
as to whether to accept P or not; and any conclusion could
vary over time as the agent devotes more thought to the
matter (and/or may decide to seek more information).
        </p>
      </sec>
      <sec id="sec-2-5">
        <title>Goodbye to Axioms</title>
        <p>Very little in CSR can reasonably be taken as firmly given
over an agent’s lifetime. Perhaps some mathematical
concepts, perhaps some definitions. But more commonly, we
hold beliefs for a while and then relax them if sufficient
counterevidence arises. Or, in many cases, we already have
that evidence, in the form of other beliefs to the effect that
something is in flux (the time, an airplane’s location, and
so on); sometimes change is the rule. It is hard then to find
much to take as axiomatic. Here are two more examples.
(1) Your eleven-year-old son tells you that Barack
Obama is 6’8” tall. You do not take this as a fact;
on the contrary – although you may not have any
specific height in mind for Obama – you do
believe 6’8” is sufficiently unusual (and presidents
are sufficiently in the news) that it would have
been remarked on a lot and you would have heard
it before. So you discount the information from
your son. But if your son then tells you that
Obama has been slouching so as to disguise his
height ever since his twenties, and that he is in
fact 6’8”, would you still be so sure he is wrong?
(2) You hear the TV meteorologist say that the
temperature dropped to 1 degree below zero last
night; and you accept this. But you would not be
especially startled to learn later that the
meteorologist has misread her notes and that the low was 1
degree above zero; or that the thermometer had
given a false reading.</p>
        <p>
          In each case, many background assumptions are in effect.
At this point one might be tempted to opt for probabilities.
But while the latter clearly have an important role to play
in AI, they need not come into play just here. Instead, we often
simply reserve judgment, or suspend a previous judgment.
9 This is sometimes embraced as a necessary evil
          <xref ref-type="bibr" rid="ref39">(Reiter 2001)</xref>
          ; or dealt
with via specialized semantics
          <xref ref-type="bibr" rid="ref26">(Levesque&amp;Lakemeyer 2000)</xref>
          which
however does not adequately address or ameliorate the time-consumption
aspect.
        </p>
        <p>And again, I-logics are vehicles for this real-time ongoing
sort of reasoning. Indeed, an agent can only reason with
what it has at hand.10
I-logic (at least in its active-logic form) not only brings
many benefits but (perhaps surprisingly) is not particularly
mired in the weeds of implementational details. This is not
to say that all such issues are now fully resolved – this is a
long work very much still in progress. But looking under
the robotic hood, so to speak, is essential if we are to come
to grips with how CSR can actually take place in robotic
creations coming in the (seemingly quite near) future.
Thus instead of axioms, at any moment, our artificial agent
has a specific collection of beliefs (stored in memory) and
this collection changes as inferences are drawn,
perceptions made, and so on. Among these changes – and central
to most of the distinct features of active logic – is the
updating of the present time as in the clock rule. There is no
notion of inferential closure; the current beliefs are simply
whatever has been inferred/perceived and kept so far (i.e.,
inherited to the present time).</p>
        <p>A belief can fail to inherit for a variety of reasons. No
belief of the form Now(t) is inherited – it is replaced by
Now(t+1). Other failures of inheritance are illustrated in
various cases below. But more importantly we now turn to
the power of introspective reasoning that becomes possible
in I-logics endowed with a notion of evolving time.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Introspection Is a Many-Splendored Thing</title>
      <p>Introspection is one of the most valuable tools that come
almost for free in an I-logic.11 It in turn facilitates powerful
methods for detecting and defusing contradictions,
managing nonmonotonic inference, reasoning about and adjusting
semantics, tracking actions, and much more. In this and
several sections that follow, we explain and illustrate a
number of these ideas.</p>
      <p>Given a belief P at time t, an agent ought to be able to note
later on (say at time t+1) that it had that belief earlier. This
can be achieved in active logic by means of a rule such as
the following (positive introspection), where the KB
predicate symbol refers to the agent’s own knowledge
base:
t: P
--------------
t+1: KB(P,t)
Similarly, another rule (negative introspection) can provide
the result that one did not just previously have a given
belief:12
t: …
----------------
t+1: ~KB(P,t) [if P is not present at the previous step]
These two rules are trivial to implement and cheap to run,
involving no more than a linear-time lookup at time t+1 to
see what wffs are or are not among the t-beliefs.13 Yet a
surprising number of capabilities flow from this, as
expanded upon in the next several subsections.
10 See for instance the Oxford Reference on Neurath’s boat – “The
powerful image conjured up by Neurath, in his Anti-Spengler (1921), whereby
the body of knowledge is compared to a boat that must be repaired at sea:
‘we are like sailors who on the open sea must reconstruct their ship but
are never able to start afresh from the bottom…’. Any part can be
replaced, provided there is enough of the rest on which to stand. The image
opposes that according to which knowledge must rest upon foundations,
thought of as themselves immune from criticism, and transmitting their
immunity to other propositions by a kind of laying-on of hands.”
11 And so perhaps “introspective logic” would be a more apt name than
internal logic.</p>
      <sec id="sec-3-1">
        <title>Non-monotonicity</title>
        <p>At this point we can already carry out some simple cases of
nonmonotonic reasoning. For instance, the default that B’s
are typically F’s (birds typically fly) can be captured like
this: if one doesn’t already (as in a moment ago) know that
a given bird doesn’t fly, then assume it does. In
active-logic notation this can be written as follows:
(∀x)(∀t) [ {Bird(x) &amp; ~KB(~Flies(x),t-1)} → Flies(x) ]
Then given Bird(tweety), all it takes to infer that Tweety
can fly is ~KB(~Flies(tweety),t-1), which comes instantly
from negative introspection – unless one does already
know Tweety cannot fly. No fuss, no muss – no need for
complex consistency checks or internal model-building;
conclusions are held as long as they are held, and can be
surrendered when evidence so suggests.14
Thus, one might later on come to believe Tweety is a
penguin – whether by observation or simply additional
inference. This will then appear as a (direct) contradiction in the
KB: two beliefs of the form P and ~P will both be present
at the same time-step. Which brings us to the next
subsection.</p>
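<p>The Tweety default can be run concretely. In this toy encoding (string wffs, our own function name), negative introspection is realized as a direct absence check on the previous step's KB:</p>
<preformat>
```python
# Birds-fly default via negative introspection, over string wffs.
def apply_default(kb_prev, kb_now):
    """If Bird(x) held and ~Flies(x) was NOT a belief at the previous
    step, conclude Flies(x); known counterevidence blocks the default."""
    out = set(kb_now)
    for wff in kb_prev:
        if wff.startswith("Bird("):
            x = wff[5:-1]
            if f"~Flies({x})" not in kb_prev:   # i.e., ~KB(~Flies(x), t-1)
                out.add(f"Flies({x})")
    return out

kb1 = apply_default({"Bird(tweety)"}, {"Bird(tweety)"})
print("Flies(tweety)" in kb1)   # True: the default fires

kb2 = apply_default({"Bird(opus)", "~Flies(opus)"},
                    {"Bird(opus)", "~Flies(opus)"})
print("Flies(opus)" in kb2)     # False: the default is blocked
```
</preformat>
<p>No consistency check over the whole KB is needed: the only question asked is whether ~Flies(x) was present a moment ago.</p>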
      </sec>
      <sec id="sec-3-2">
        <title>Contradictions</title>
        <p>
          Contradictions are virtually inevitable in commonsense
reasoning (Perlis 1997). While this is generally considered
a major nuisance for CSR, it can actually be a boon. Here
12 Many issues arise here that we do not have space to address, such as: to
which wffs P are the introspection rules applied (if care is not taken, the
KB will quickly become swamped). A much longer paper in preparation
will deal with this.
13 A t-belief is simply any belief in the KB at time t.
14 Of course, an agent can also remain in doubt, or even be deliberately
tentative (such as with probabilities and during learning; see
          <xref ref-type="bibr" rid="ref18">(Getoor&amp;Taskar 2007)</xref>
          ).
is how an I-logic can benefit (in the specific form of active
logic): If the wffs P and ~P both appear as t-beliefs, then
neither is inherited as a (t+1)-belief, and instead Contra(t,
P, ~P) is inferred as a (t+1)-belief. Thus the agent retains in
the evolving present the fact that there had been an earlier
contradiction, but is no longer directly subject to it, and ex
contradiction quodlibet (from a contradiction all follows)
is thereby disarmed.15
Thus instead of being a logician’s anathema, contradictions
can be a robot’s best friend, helping it adjust its KB to
come more into line with reality. Contradictions simply
remain undiscovered in the KB until they are discovered
(in the P, ~P form) over time – and then defused. This is a
very different approach from more customary
paraconsistent logics, most of which skirt around the edges of a
contradiction – rather than acknowledge it and use it to
make changes to the KB – or in effect assume they can all
be hunted out in advance.16
In the case of Tweety above, new information that she is a
penguin and does not fly will provide (say at time-step t) a
direct contradiction between Flies(tweety) and
~Flies(tweety), which then at time t+1 will result in the KB
having neither of these inherited from step t, but instead
will have an assertion that such a contradiction did arise at
time t. If the agent has further information – such as that
penguins are a subclass of birds, and that subclass
properties are more trustworthy17 – then ~Flies(tweety) can be
reinstated. If not, then the agent remains in doubt.
It is our contention that this sort of fluctuating
conflict-resolution over time is the only option for an actual agent
engaged in reasoning as the world evolves.
        </p>
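<p>A sketch of the Contra mechanism just described, again over string wffs with "~" for negation; the function name and the rendering of Contra(t, P, ~P) as a string are our illustrative choices:</p>
<preformat>
```python
# Inheritance with contradiction handling: a clashing pair P, ~P is
# withheld at t+1 and replaced by Contra(t, P, ~P); all else inherits.
def inherit_with_contra(kb, t):
    clashing = {p for p in kb
                if ("~" + p) in kb
                or (p.startswith("~") and p[1:] in kb)}
    out = {p for p in kb if p not in clashing}
    for p in sorted(clashing):
        if not p.startswith("~"):
            out.add(f"Contra({t},{p},~{p})")
    return out

kb = {"Flies(tweety)", "~Flies(tweety)", "Bird(tweety)"}
kb = inherit_with_contra(kb, t=7)
print("Flies(tweety)" in kb)    # False: withheld
print("Bird(tweety)" in kb)     # True: unrelated beliefs are untouched
print("Contra(7,Flies(tweety),~Flies(tweety))" in kb)  # True
```
</preformat>
<p>Note that ex contradictione quodlibet never gets a foothold: by the time anything could be derived from the pair, neither member is still a current belief.</p>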
      </sec>
      <sec id="sec-3-3">
        <title>Semantics and Pragmatics</title>
        <p>
          In an I-logic, semantics can take on an entirely new aspect,
where the agent can exert control and both determine and
reason about what its expressions do or don’t stand for.18
This is one of the most powerful aspects of introspection
that we have noted so far. In effect, one can reason about
one’s own expressions – simply by means of
introspectively examining past beliefs and subexpressions thereof. One
15 To be sure, whatever circumstances that produced P and ~P may do so
again, so this is not a panacea. But it can be shown (Miller 1993) that
under reasonably broad conditions this too will resolve into a stable state.
16 E.g., see
          <xref ref-type="bibr" rid="ref40">(Roos 1992)</xref>
          for a more traditional E-logic treatment; and
          <xref ref-type="bibr" rid="ref3">(Anderson et al 2013)</xref>
          for more on an active logic approach.
17 Such a rule has been implemented in one of our active logic programs.
18 That is, this refers to meanings the agent assigns to its expressions,
quite apart from what a logic-designer may have in mind. Note that the
recent Facebook robot-incident of “inventing a new language” is not of
this sort at all: those robots did not assign meanings to anything, either in
the original English or in their later made-up phrases. See
          <xref ref-type="bibr" rid="ref42">(http://www.newsweek.com/2017/08/18/ai-facebook-artificialintelligence-machine-learning-robots-robotics-646944.html )</xref>
          can even assign new expressions, if for instance a new
entity is observed, or if one infers that two entities were being
conflated as one (as in the cases of ambiguity or of
misidentification).
        </p>
        <p>
          In fact, AI systems are generally notorious for altogether
ignoring the expression/meaning distinction, as in: Joe is a
person and also we just now used “Joe” to refer to him.
People can and do (and must) note and make use of the
difference between language and what language refers to.
Our artificial agents need to be able to do the same;
otherwise they can hardly be said to know anything
          <xref ref-type="bibr" rid="ref33">(Perlis
2016)</xref>
          , let alone reason about errors. With all the recent
successes in NLP (mostly coming from deep learning), still
there is almost no language-like introspection, no meanings
associated with words in a way that allows reasoning, let
alone adjusting meanings.
        </p>
        <p>On the other hand, introspection allows representation of
beliefs (at least at previous steps) as objects that can be
reasoned about. This has numerous ramifications, which
for lack of space we can only briefly allude to in the rest of
this section.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Ambiguity and Misidentification</title>
        <p>A potentially ambiguous expression (say, “Jean’s car”) can
be recognized as such (e.g., by noticing a direct
contradiction – “this is Jean’s car, and the key to Jean’s car isn’t the
key to this car”). This in turn triggers an effort to resolve
the contradiction. Maybe Jean has two cars (ambiguity); or
maybe this is the wrong key or that is not her car at all
(misidentification).</p>
        <p>The latter case is especially interesting, for it requires some
expression to represent an object (the wrong key or wrong
car), but not the expression that had been used a moment
ago. Miller and Perlis (Miller&amp;Perlis 1996) propose a
special active-logic function-symbol tfitb to produce a new
name on demand, for the “thing formerly interpreted to be”
something else.</p>
      </sec>
      <sec id="sec-3-5">
        <title>Focal points</title>
        <p>
          A related idea comes up in planning, especially multiagent
planning. It may be important to identify an entity that
another agent is likely to similarly identify – for instance a
good location to meet up or to leave a message, or an
“obvious” item to pick out of a long list (e.g., the first, last, or
middle one). This in turn may require coming up with a
new expression that was not previously in one’s ontology.
In
          <xref ref-type="bibr" rid="ref23">(Kraus et al 2000)</xref>
          an approach to this is given using
active logic.
        </p>
      </sec>
      <sec id="sec-3-6">
        <title>Pragmatics</title>
        <p>
          In conversation, all sorts of assumptions arise and are
confirmed or dispelled, often by means of further
conversation. Thus NLP-dialog is a prime example of beliefs
coming and going during reasoning. Here is one example
dialog, in which reasoning involves inferences that evolve
over time, that has been implemented in active logic
          <xref ref-type="bibr" rid="ref37">(Purang et al 1996)</xref>
          :
(A) Kathy: Are the roses fresh?
(B) Bill: They are in the fridge.
(C) Bill: But they are not fresh.
        </p>
        <p>At some point prior to (C), Bill supposes Kathy will draw
from (B) the implicature that the roses are fresh, so in (C)
he dispels that inaccuracy. Thus Bill has to reason about
the effects of the ongoing conversation and make
adjustments to it.</p>
      </sec>
      <sec id="sec-3-7">
        <title>The One Wise Man Problem</title>
        <p>
          Much has been made of the Three-Wise-Men problem –
see
          <xref ref-type="bibr" rid="ref14 ref22">(Konolige 1984; Elgot-Drapkin&amp;Perlis 1990)</xref>
          . A
realistic treatment has to take into account the passage of time as
the wise men think; and this can be done in traditional
temporal logic, as long as the wise men themselves are not
required to use that same logic. But suppose we do want to
capture the reasoning of such an agent; for instance – to
make the problem especially simple – the King who wants
to assure himself that his one wise man is not an idiot. So
the King proposes this problem to his wise man: “Is 15 a
prime number?” Being no genius himself, the King has to
think for a while before deciding the answer is “no” – and if
by then the wise man has not yet answered, the King can
start looking for a replacement. But to do this reasoning
(which involves introspection), the King will need I-logic,
and in particular an I-logic that closely tracks time.
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>What Am I Doing?</title>
      <p>It is important that an agent not only plan and take actions,
but that it also know when it is in fact doing so. Otherwise
strange behaviors can result. In one of our robotic studies
recently, robot Alice was programmed to point and say “I
see Julia” whenever it heard an utterance containing the
word “Julia” (actually, it was doing no actual
word-processing at the time, but simply matching the input
sound-stream to a stored one). So it got itself into a loop,
hearing “Julia” from its own loudspeaker and then pointing
and repeating the same phrase over and over.</p>
      <p>
        But taking a cue from neuropsychology,19 we were able
to encode a rule for noting one’s own activity: whenever an
action is undertaken, Do(x) is inferred (recall the Lunch
example), and at the next step Doing(x) can be inferred,
and inherited as long as the activity is still underway.20 We
have implemented this in a grounded way, so that when
Alice undertakes to speak she infers that she is engaged in
19 The so-called efference copy, see
        <xref ref-type="bibr" rid="ref11">(Brody et al 2015)</xref>
        .
20 This is a different method from that used in
        <xref ref-type="bibr" rid="ref9">(Bringsjord et al 2015)</xref>
        where voice recognition appears to take precedence over recall of one’s
own actions.
a speaking action (but also checks what she hears to make
sure it matches her expected speech).
      </p>
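<p>The Do/Doing bookkeeping can be sketched as follows; the function name, the string encoding, and the still_running callback are our assumptions, not Alice's actual implementation:</p>
<preformat>
```python
# Efference-copy-style rule: when Do(x) is a t-belief, Doing(x) is added
# at t+1 and inherited only while the activity is still underway.
def note_own_action(kb_prev, kb_now, still_running):
    out = set(kb_now)
    for wff in kb_prev:
        if wff.startswith("Do("):          # "Doing(" does not match this prefix
            out.add("Doing(" + wff[3:])
    return {w for w in out
            if not (w.startswith("Doing(") and not still_running(w[6:-1]))}

kb1 = note_own_action({"Do(speak)"}, set(), still_running=lambda x: True)
print("Doing(speak)" in kb1)   # True: she knows she is speaking

kb2 = note_own_action(kb1, kb1, still_running=lambda x: False)
print("Doing(speak)" in kb2)   # False: dropped once the action ends
```
</preformat>
<p>With Doing(speak) in the KB, a rule suppressing reactions to one's own loudspeaker output would break the loop described above.</p>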
    </sec>
    <sec id="sec-5">
      <title>Reasoned Learning</title>
      <p>Machine learning (ML) has taken center stage in recent
years, and for good reason: it has made justly fabled
strides, and surely will be a major part of any future
general-purpose AI. But alone it is insufficient. The practices
usually referred to as ML are ones of habituation or
training. A human turns a trainable system on, allows it to train,
perhaps applies it, and later turns it off; in itself, traditional
ML has little if any autonomy.</p>
      <p>
        But a general-purpose AI (robotic or otherwise) will need
to decide what to learn, and when and how, and whether
learning is working and/or should stop. Moreover, as noted
in the Introduction, cultural (symbolic) transmission is also
a major source of learning.21 And finally, a system will
need to know what it has or hasn’t already learned.22
An I-logic (particularly, active logic) – in keeping a history
of its own KB over time – can potentially examine that
history, infer that it has (or lacks) certain capabilities, and
then decide whether to activate an appropriate ML process;
see
        <xref ref-type="bibr" rid="ref15">(Elgot-Drapkin, et al 1991)</xref>
        for a brief introduction.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Related Work</title>
      <p>
        Ray Reiter
        <xref ref-type="bibr" rid="ref39">(Reiter 2001)</xref>
        considers numerous issues that
arise in commonsense reasoning (CSR) when an agent’s
deliberations occur within a dynamic setting, and in
particular, how a formal logic might be used by an agent to do its
own reasoning, and have that reasoning keep up with
changing events (pp.163-164). Reiter succeeds in isolating
various themes surrounding this: omniscience, internal
contradictions, and so on. But in the end he advocates
instead the “external design stance.” Action languages
        <xref ref-type="bibr" rid="ref17">(Gelfond&amp;Lifschitz 1998)</xref>
are another firmly E-logical
approach, again suitable for external analysis of
an agent but not for real-time use by an agent, let alone by
one with a potentially inconsistent KB; the same holds for
temporal action logics (TAL; see Doherty 1998) and the
temporal logic of actions
        <xref ref-type="bibr" rid="ref24">(TLA; see Lamport 1994)</xref>
        .
In a survey of commonsense reasoning (Davis 2017), the
E- and I- distinction is also raised (under different
terminology); but, like Reiter, Davis focuses primarily on the external
stance. A survey on robot deliberation
        <xref ref-type="bibr" rid="ref21">(Ingrand&amp;Ghallab
2017)</xref>
        does not address this distinction.
21 See also
        <xref ref-type="bibr" rid="ref25">(Levesque 2017)</xref>
        .
22 But again see
        <xref ref-type="bibr" rid="ref18">(Getoor&amp;Taskar 2007)</xref>
        for another approach.
      </p>
      <p>
        Levesque and Lakemeyer
        <xref ref-type="bibr" rid="ref26">(Levesque&amp;Lakemeyer 2000, pp
195-196)</xref>
        argue that attending to internal inference
mechanisms to avoid omniscience makes behavioral predictions
impossible. They deal with omniscience instead by
enlargements of the semantics to allow “non-standard world
states” that keep out undesired agent-beliefs. But it is
unclear what predictions one could hope to make, given an
agent with thousands of explicit beliefs, other than ones of
such generality as to be virtually useless about that
particular agent’s behavior. Will it complete a given task (even a
purely inferential one) within ten days? One surely cannot
expect anything other than a careful examination of the
robot’s actual processing to reveal such results.
      </p>
      <p>
        On the other hand, Richard Weyhrauch and Carolyn
Talcott
        <xref ref-type="bibr" rid="ref45 ref45 ref46">(Weyhrauch 1980; Weyhrauch&amp;Talcott 1990, 1994;
Talcott 2003)</xref>
        initiated the FOL approach (one instance of
an I-logic) which aimed at providing reasoning
mechanisms for actual use by an agent; however this effort has
remained in a fragmentary state. An interesting addendum
to FOL is WristWatch
        <xref ref-type="bibr" rid="ref47">(Weyhrauch &amp; Talcott 1997)</xref>
        —a
dynamic context from which to answer questions about
time, specifically about the ever-changing meanings of the
constants now and then as updated by their “tick” inference
rule. Weyhrauch and Talcott speculate about supplying a
robot with WristWatch embedded into FOL as its
mechanism to reason about time.
      </p>
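<p>The “tick” idea — that the denotation of now changes at every inference step — is the same device that drives active logic. A minimal sketch (our illustration, not WristWatch’s actual mechanism) of such a rule, where Now(t) is replaced by Now(t+1) at each step while other beliefs are inherited:</p>

```python
# Illustrative "tick"-style inference rule: at each step the belief Now(t)
# yields Now(t+1) and is itself NOT inherited; other beliefs carry forward.
# Names and encoding are ours, not taken from WristWatch or FOL.

def tick(kb):
    """One inference step over a set of string-encoded beliefs."""
    new_kb = set()
    for belief in kb:
        if belief.startswith("Now("):
            t = int(belief[4:-1])
            new_kb.add(f"Now({t + 1})")   # the internal clock advances
        else:
            new_kb.add(belief)            # ordinary beliefs are inherited
    return new_kb

kb = {"Now(0)", "meeting_at(3)"}
for _ in range(3):
    kb = tick(kb)
print(sorted(kb))   # ['Now(3)', 'meeting_at(3)']
```

Because Now(t) is an ordinary belief, the agent can also reason *about* it — e.g., compare Now(t) with meeting_at(3) to notice a deadline approaching.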
      <p>
        Pei Wang's Non-Axiomatic Logic (aka NARS) provides a
(term-logic based) reasoning system which aims to be
finite, real-time and open
        <xref ref-type="bibr" rid="ref43">(Wang 2013)</xref>
        . It shares some
features with active logic, in that it is non-monotonic, allows
for self-reference and is intended to be situated (in that
knowledge is not disembodied but should be based on the
agent's experience). While Chapter 9 of
        <xref ref-type="bibr" rid="ref43">(Wang 2013)</xref>
        addresses potential meta-cognition in his system, no
particular mechanisms for monitoring an ongoing reasoning
process seem to be specified. Gestures toward such
mechanisms are made (by, e.g., referencing "doubt" and "wait"
operations), but we are not aware of any attempt to
operationalize these. Later iterations of NARS
        <xref ref-type="bibr" rid="ref44">(Wang&amp;Hammer
2015; Hammer et al 2016)</xref>
        address temporality and
recognize the problem of assuming that "the reasoning system
itself is outside the flow of time"
        <xref ref-type="bibr" rid="ref44">(Wang&amp;Hammer
2015)</xref>
        . The temporality in this system differs from active
logic, however, in that the flow of time is not itself seen as
an object of reasoning.
      </p>
      <p>
        Jacek Malec and his group (Asker&amp;Malec 2005) extended
active logic and proposed a labeled deductive system
(LDS) which attaches a label to every well-formed
formula. LDS allows the inference rules to analyze and modify
labels, or even trigger on specific conditions defined on the
labels. They demonstrated the use of LDS by formalizing
models of short-term memory, followed up by studying
several scenarios (Heins 2009). In related work,
        <xref ref-type="bibr" rid="ref31">(Nowaczyk
2006)</xref>
        extends active logic to partial planning situations.
An interesting middle-ground is taken in TRL – timed
reasoning logic – see
        <xref ref-type="bibr" rid="ref1 ref1 ref2">(Alechina et al 2004a,b;
Agnotes&amp;Alechina 2007)</xref>
        . While TRL remains at the E-logic
level, it can express fairly detailed aspects of internal
processing. In that respect it is similar to the meta-level
steplogics in
        <xref ref-type="bibr" rid="ref14">(Elgot-Drapkin&amp;Perlis 1990)</xref>
        . Because of its more
limited expressive power, TRL tends to be decidable. On
the other hand, the semantics given in
        <xref ref-type="bibr" rid="ref4">(Anderson, et al
2008)</xref>
        appears to offer a compelling psychologically
plausible alternative. But it is noteworthy that none of these
address the agent-controlled-semantics issue above.
      </p>
      <p>
        The planning community is beginning to acknowledge the
importance of taking planning-time into account as part of
the planning process; see for instance
        <xref ref-type="bibr" rid="ref19 ref27">(Ghallab et al 2016;
Lin et al 2015)</xref>
        . The earliest published work we are aware
of on this is
        <xref ref-type="bibr" rid="ref30">(Nirkhe et al 1991)</xref>
        .
      </p>
      <p>A recent article (Tenorth&amp;Beetz 2017) discusses complex
interactions between robotic control, knowledge
representations at various levels, and reasoning over those
representations, including temporal reasoning. While the
intention is to provide robots with inferential abilities, the
approach appears to remain in the E-logic framework.</p>
    </sec>
    <sec id="sec-7">
      <title>Conclusion: Reasoning is a Process</title>
      <p>A reasoner is engaged in reasoning, and makes decisions
during (and as part of) that reasoning, such as whether to
continue along present lines, or try a new tack, or give up,
or seek assistance. That is, a reasoning agent itself is
engaged in some version of what we have called I-logic. On
the other hand, the study of reasoning can of course
proceed at many levels and in many forms.</p>
      <p>It may be premature – despite many decades of work
(including some by ourselves) – to try to pin down precise
specifications (i.e., in an E-logic) of broad CSR behaviors.
We know so little about the notion of intelligence at this point
that it may be more useful to gain much more experience with
reasoning behavior itself (that is, via I-logics that can
actually be used by automated agents/robots). At least, this is
the perspective we are exploring here.</p>
      <p>
        An analogy with
        <xref ref-type="bibr" rid="ref36">(Polya 1945)</xref>
        is tempting. While
mathematical logic is the very epitome of E-logic (fully focused
on entailment/consequence), it largely ignores the situation
of actual mathematician-reasoners who question axioms,
decide to change problems, and are keenly aware of (and
make use of) their progress or lack of it over time
        <xref ref-type="bibr" rid="ref33">(Perlis,
2016)</xref>
        . Polya’s advice is aimed at the latter, with practical
in-the-moment strategies to attend to. And while
mathematical logic has been extraordinarily successful in its own
right, it has had relatively little impact on, and offered limited
insight into, mathematical practice overall.
      </p>
      <p>We repeat from our Introduction: The single most salient
departure that I-logics make from E-logics is that of taking
into account the actual process of inferring as something
that itself takes time. This departure provides a very rich
set of tools that we hope to have illustrated here.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Agnotes</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Alechina</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>2007</year>
          .
          <article-title>The dynamics of syntactic knowledge</article-title>
          .
          <source>Journal of Logic and Computation</source>
          , February 2007.
          <string-name>
            <surname>Alechina</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Logan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Whitsey</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2004a</year>
          .
          <article-title>A complete and decidable logic for resource-bounded agents</article-title>
          .
          <source>In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS</source>
          <year>2004</year>
          ). ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Alechina</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Logan</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Whitsey</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2004b</year>
          .
          <article-title>Modelling communicating agents in timed reasoning logics</article-title>
          .
          <source>Proceedings, 9th European Conference (JELIA) - Lecture Notes in AI 3229</source>
          , Springer.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomaa</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grant</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>An approach to human-level commonsense reasoning</article-title>
          . In K. Tanaka,
          <string-name>
            <given-names>F.</given-names>
            <surname>Berto</surname>
          </string-name>
          , E. Mares, and
          <string-name>
            <given-names>F.</given-names>
            <surname>Paoli</surname>
          </string-name>
          (eds.),
          <source>Paraconsistency: Logic and Applications</source>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gomaa</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grant</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Active logic semantics for a single agent in a static world</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>172</volume>
          :
          <fpage>1045</fpage>
          -
          <lpage>63</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Anderson</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Logic, self-awareness and selfimprovement: The metacognitive loop and the problem of brittleness</article-title>
          .
          <source>Journal of Logic and Computation</source>
          .
          <volume>15</volume>
          (
          <issue>1</issue>
          )
          <string-name>
            <surname>Asker</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Malec</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2005</year>
          .
          <article-title>Reasoning with limited resources: Active logics expressed as labeled deductive systems</article-title>
          .
          <source>Bulletin of the Polish Academy of Sciences, Technical Sciences</source>
          , Vol.
          <volume>53</volume>
          , No.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          1,
          <issue>2005</issue>
          , pp.
          <fpage>69</fpage>
          -
          <lpage>78</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Baral</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Jicheng</surname>
            <given-names>Z.</given-names>
          </string-name>
          <year>2008</year>
          .
          <article-title>Non-monotonic temporal logics that facilitate elaboration tolerant revision of goals</article-title>
          .
          <source>AAAI</source>
          <year>2008</year>
          :
          <fpage>406</fpage>
          -4
          <string-name>
            <surname>Barringer</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fisher</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gabbay</surname>
            <given-names>DM</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gough</surname>
            <given-names>G</given-names>
          </string-name>
          , editors.
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <source>Advances in temporal logic. Springer Science &amp; Business Media; 2013 Nov</source>
          <volume>11</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Bringsjord</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Licato</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Govindarajulu</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ghosh</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Sen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Real robots that pass human tests of self-consciousness.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>In Proceedings of IEEE the 24th International Symposium on Robots and Human Interactive</source>
          Communications.
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cox</surname>
            ,
            <given-names>M.T.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Incorporating elements of a processual self into active logic</article-title>
          . In M. Waser (Ed.),
          <article-title>Implementing selves with safe motivational systems and selfimprovement: Papers from the Spring Symposium</article-title>
          , AAAI Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Brody</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Who's talking? Efference copy and a robot's sense of agency</article-title>
          .
          <source>AAAI Fall Symposium</source>
          <year>2015</year>
          , Arlington VA,
          <year>2015</year>
          .
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Logical formalizations of commonsense reasoning: a survey</article-title>
          .
          <source>J. Artificial Intelligence Research</source>
          ,
          <volume>59</volume>
          ,
          <fpage>651</fpage>
          -
          <lpage>723</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Dell'Aglio</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <given-names>Della</given-names>
            <surname>Valle</surname>
          </string-name>
          , E.,
          <string-name>
            <surname>Van Harmelen</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Bernstein</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Stream reasoning: A survey and outlook</article-title>
          . Data Science, in press.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Delgrande</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peppas</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Woltran</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>AGM-Style Belief Revision of Logic Programs under Answer Set Semantics</article-title>
          .
          <source>LPNMR</source>
          <year>2013</year>
          :
          <fpage>264</fpage>
          -
          <lpage>276</lpage>
          Diller,
          <string-name>
            <given-names>H.</given-names>
            ,
            <surname>Adrian</surname>
          </string-name>
          <string-name>
            <surname>Haret</surname>
          </string-name>
          , Thomas Linsbichler, Stefan Rümmele, Stefan Woltran,
          <year>2015</year>
          .
          <article-title>An Extension-Based Approach to Belief Revision in Abstract Argumentation</article-title>
          .
          <source>IJCAI</source>
          <year>2015</year>
          :
          <fpage>2926</fpage>
          -
          <lpage>2932</lpage>
          Doherty,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>1998</year>
          .
          <article-title>TAL: Temporal Action Logics language specification and tutorial</article-title>
          .
          <source>Electronic Transactions on Artificial Intelligence</source>
          ,
          <volume>2</volume>
          (
          <issue>3-4</issue>
          ),
          <fpage>273</fpage>
          -
          <lpage>306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <string-name>
            <surname>Elgot-Drapkin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1990</year>
          .
          <article-title>Reasoning situated in time, I: basic concepts</article-title>
          .
          <source>J. of Experimental and Theoretical AI</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <string-name>
            <surname>Elgot-Drapkin</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1991</year>
          .
          <article-title>Memory, reason and time: the step-logic approach</article-title>
          . In R. Cummins and
          <string-name>
            <surname>J.</surname>
          </string-name>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <string-name>
            <surname>Gärdenfors</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <year>2003</year>
          .
          <article-title>(editor) Belief revision</article-title>
          . Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <string-name>
            <surname>Gelfond</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lifschitz</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <year>1998</year>
          .
          <article-title>Action languages</article-title>
          .
          <source>Electronic transactions on AI</source>
          , v.
          <volume>3</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <string-name>
            <surname>Getoor</surname>
            ,
            <given-names>L</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Taskar</surname>
          </string-name>
          , B. (editors)
          <year>2007</year>
          .
          <article-title>Introduction to Statistical Relational Learning</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <string-name>
            <surname>Ghallab</surname>
            ,
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nau</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Traverso</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2016</year>
          .
          <source>Automated Planning and Acting</source>
          . Cambridge Univ. Press.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <string-name>
            <surname>Goldsmith</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sloan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Szorenyi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Turán</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2004</year>
          .
          <article-title>Theory revision with queries: Horn, read-once, and parity formulas</article-title>
          .
          <source>AIJ</source>
          <volume>156</volume>
          (
          <issue>2</issue>
          ):
          <fpage>139</fpage>
          -
          <lpage>176</lpage>
          Gonzalez,
          <string-name>
            <given-names>G.</given-names>
            ,
            <surname>Baral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            , and
            <surname>Cooper</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>2002</year>
          .
          <article-title>Modeling multimedia displays using action based temporal logic</article-title>
          .
          <source>VDB</source>
          <year>2002</year>
          :
          <fpage>141</fpage>
          -
          <lpage>155</lpage>
          Hammer,
          <string-name>
            <given-names>P.</given-names>
            ,
            <surname>Lofthouse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            , and
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <surname>P.</surname>
          </string-name>
          <year>2016</year>
          .
          <article-title>The OpenNARS implementation of the non-axiomatic reasoning system</article-title>
          .
          <source>International Conference on Artificial General Intelligence</source>
          . Springer International Publishing Heins,
          <string-name>
            <surname>T.</surname>
          </string-name>
          <year>2009</year>
          .
          <article-title>A case study of active logic</article-title>
          .
          <source>Master's thesis</source>
          , Department of Computer Science, Lund University.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <string-name>
            <surname>Ingrand</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Ghallab</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Deliberation for autonomous robots: A survey</article-title>
          .
          <source>Artificial Intelligence</source>
          , v 247.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          <string-name>
            <surname>Konolige</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <year>1984</year>
          .
          <article-title>A deduction model of belief and its logics</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lehmann</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1986</year>
          .
          <article-title>Knowledge, Belief and Time</article-title>
          .
          <source>ICALP</source>
          <year>1986</year>
          :
          <fpage>186</fpage>
          -
          <lpage>195</lpage>
          Kraus S.,
          <string-name>
            <surname>Rosenschein</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Fenster</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>Exploiting focal points among alternative solutions: two approaches</article-title>
          .
          <source>Annals of Mathematics and Artificial Intelligence</source>
          ,
          <volume>28</volume>
          (
          <issue>1-4</issue>
          ):
          <fpage>187</fpage>
          -
          <lpage>258</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <string-name>
            <surname>Lamport</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>The temporal logic of actions</article-title>
          .
          <source>ACM Transactions on Programming Languages and Systems (TOPLAS)</source>
          ,
          <volume>16</volume>
          (
          <issue>3</issue>
          ),
          <fpage>872</fpage>
          -
          <lpage>923</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Common Sense, the Turing Test, and the Quest for Real AI</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Lakemeyer</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>The Logic of Knowledge Bases</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>C. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolobov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kamar</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Horvitz</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Metareasoning for planning under uncertainty</article-title>
          .
          <source>arXiv:1505.00399v1</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27b">
        <mixed-citation>
          <string-name>
            <surname>Miller</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1996</year>
          .
          <article-title>Automated inference in active logics</article-title>
          .
          <source>J Applied Non-classical Logics</source>
          ,
          <volume>6</volume>
          (
          <issue>1</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          <string-name>
            <surname>Mueller</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Commonsense Reasoning, 2nd edition</article-title>
          . Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <string-name>
            <surname>Nilsson</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>1983</year>
          .
          <article-title>Artificial intelligence prepares for 2001</article-title>
          .
          <source>AI Magazine</source>
          ,
          <volume>4</volume>
          (
          <issue>4</issue>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <string-name>
            <surname>Nirkhe</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1991</year>
          .
          <article-title>Fully deadline-coupled planning: One step at a time</article-title>
          .
          <source>International Symposium on Methodologies for Intelligent Systems (ISMIS 1991)</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          <string-name>
            <surname>Nowaczyk</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2006</year>
          .
          <article-title>Partial planning for situated agents based on active logic</article-title>
          .
          <source>Workshop on Logics for Resource Bounded Agents, ESSLLI</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31b">
        <mixed-citation>
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1997</year>
          .
          <article-title>Sources of, and exploiting, inconsistency: preliminary report</article-title>
          .
          <source>Journal of Applied Non-Classical Logics</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2000</year>
          .
          <article-title>The role(s) of belief in AI</article-title>
          . In J. Minker (ed.),
          <source>Logic-Based AI</source>
          , Kluwer.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>The five dimensions of reasoning in the wild</article-title>
          .
          <source>AAAI 2016</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          <string-name>
            <surname>Pnueli</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>1977</year>
          .
          <article-title>The temporal logic of programs</article-title>
          .
          <source>18th Annual Symposium on Foundations of Computer Science</source>
          , pp.
          <fpage>46</fpage>
          -
          <lpage>57</lpage>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          <string-name>
            <surname>Polya</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>1945</year>
          .
          <article-title>How to Solve It</article-title>
          . Princeton Univ. Press.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          <string-name>
            <surname>Purang</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gurney</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Perlis</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <year>1996</year>
          .
          <source>AAAI Spring Symposium</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          <string-name>
            <surname>Rajan</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Saffiotti</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          . (editors)
          <year>2017</year>
          .
          <article-title>Special issue on AI and robotics</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>247</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          <string-name>
            <surname>Reiter</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>2001</year>
          .
          <article-title>Knowledge in Action</article-title>
          . MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          <string-name>
            <surname>Roos</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          <year>1992</year>
          .
          <article-title>A logic for reasoning with inconsistent knowledge</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>57</volume>
          ,
          <fpage>69</fpage>
          -
          <lpage>103</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          <string-name>
            <surname>Sloan</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Turán</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          <year>1999</year>
          .
          <article-title>On theory revision with queries</article-title>
          .
          <source>COLT 1999</source>
          :
          <fpage>41</fpage>
          -
          <lpage>52</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref41b">
        <mixed-citation>
          <string-name>
            <surname>Talcott</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>2003</year>
          .
          <article-title>FOL: Towards an architecture for building autonomous agents from building blocks of first order logic</article-title>
          . Slides from talk at U Maryland. http://www-formal.stanford.edu/FOL/03jan-umd.ppt
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          <string-name>
            <surname>Tenorth</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          and
          <string-name>
            <surname>Beetz</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Representations for robot knowledge in the KnowRob framework</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>247</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <year>2013</year>
          .
          <article-title>Non-axiomatic logic: A model of intelligent reasoning</article-title>
          . World Scientific.
        </mixed-citation>
      </ref>
      <ref id="ref44">
        <mixed-citation>
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Hammer</surname>
          </string-name>
          <year>2015</year>
          .
          <article-title>Issues in temporal and causal inference</article-title>
          .
          <source>International Conference on Artificial General Intelligence</source>
          . Springer.
        </mixed-citation>
      </ref>
      <ref id="ref45">
        <mixed-citation>
          <string-name>
            <surname>Weyhrauch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <year>1980</year>
          .
          <article-title>Prolegomena to a theory of mechanized formal reasoning</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>13</volume>
          ,
          <fpage>133</fpage>
          -
          <lpage>170</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref45b">
        <mixed-citation>
          <string-name>
            <surname>Weyhrauch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Talcott</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>1990</year>
          .
          <article-title>Towards a theory of mechanizable theories: I FOL contexts - the extensional view</article-title>
          .
          <source>European Conference on Artificial Intelligence (ECAI).</source>
        </mixed-citation>
      </ref>
      <ref id="ref46">
        <mixed-citation>
          <string-name>
            <surname>Weyhrauch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Talcott</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>1994</year>
          .
          <article-title>The logic of FOL Systems: formulated in set theory</article-title>
          . In
          <source>Logic, Language, and Computation</source>
          .
        </mixed-citation>
      </ref>
      <ref id="ref47">
        <mixed-citation>
          <string-name>
            <surname>Weyhrauch</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Talcott</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <year>1997</year>
          .
          <article-title>WristWatch - an FOL theory of time</article-title>
          . http://www-formal.stanford.edu/FOL/w.ps
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>