=Paper=
{{Paper
|id=Vol-2052/paper16
|storemode=property
|title=The Internal Reasoning of Robots
|pdfUrl=https://ceur-ws.org/Vol-2052/paper16.pdf
|volume=Vol-2052
|authors=Don Perlis,Justin Brody,Sarit Kraus,Michael Miller
|dblpUrl=https://dblp.org/rec/conf/commonsense/PerlisBKM17
}}
==The Internal Reasoning of Robots==
Don Perlis, Justin Brody, Sarit Kraus, Michael Miller
University of Maryland, Goucher College, Bar Ilan University, Bethesda MD
perlis@cs.umd.edu, justin.brody@goucher.edu, sarit@cs.biu.ac.il, mjmiller@gmail.com
Abstract

We argue for the value of examining the internal processes that robots might actually use to draw inferences in a timely way in a dynamic world. This requires a significantly different way of thinking about logic and reasoning, which in turn bears on some traditional logic-related problems such as omniscience and reasoning in the presence of a contradiction, as well as on a wide variety of other AI issues. A non-standard internally-evolving notion of time seems to be the key that unlocks other tools.

Introduction

We teeter on the edge of the age of general-purpose robots. It thus becomes ever more important that commonsense reasoning (CSR) examine in some detail just how such a robot will actually think, i.e., produce inferences over time (as it plans, decides, assesses, questions, learns, explores, updates, reconsiders, etc.). In particular, robots will need to keep their reasoning abreast of at least some aspects of the evolving world, including the passage of time and how they are progressing with regard to their own (also evolving) goals.1

(Work primarily supported by the U.S. Office of Naval Research.)

1 While we recognize that Markov decision processes (MDPs) and related technical tools are standard items in much of current (often highly-structured special-task) robotic work, general-purpose robots will be bombarded with "culturally supplied" information from other agents, signage, online, and so on, and will need to reason in real-time with such information. Hence a knowledge base (KB) managed in large measure by inferential processes seems unavoidable.

On the surface much of CSR may seem to be aiming at just these issues.2 But the bulk of such work follows what Ray Reiter has called the "external design stance" (Reiter 2001, pp. 292-293): that of a designer-scientist "entirely external to … [and] … looking down on some world inhabited by an agent." Indeed, a lot of this work is very relevant and has led to major advances in our understanding: situation calculus, nonmonotonic reasoning, and much more. Still, the external stance is nevertheless a very highly idealized abstraction that creates an unworkable barrier regarding a robot's internal reasoning, and in addition faces huge hurdles such as omniscience, contradiction-intolerance, and more.

2 See for instance (Rajan&Saffiotti 2017) for very recent work.

This paper attempts to shed light on that barrier and those hurdles, and to highlight an alternative that drives a sharp wedge between two notions of logic: (i) the standard "external" kind (E-logics), which specify features from afar via closure under (some form of) consequence or entailment relation, and (ii) "internal" ones (I-logics), which represent (and indeed can actually be used for) the inferential processing undertaken by an agent over time. (We especially focus on active logic, which is perhaps the most developed form of I-logic so far. Active logic grew out of ideas in (Elgot-Drapkin&Perlis 1990), and has been continually investigated ever since (Nirkhe et al 1991; Miller&Perlis 1996; Kraus et al 2000; Anderson et al 2008; Brody et al 2014; Brody&Perlis 2015).)

As we will see, some of the issues faced by E-logics (e.g., omniscience) simply go away in an I-logic approach. In addition, we have found a wide array of unexpected benefits of such an approach, tying CSR to many other parts of AI. Thus the present paper is also a kind of progress report, pulling together many aspects of our attempt to look under the robotic hood, to craft appropriate logic mechanisms to go there, and to explore applications across AI. As such, it will have a large number of short sections; we beg the reader's indulgence, for we see this as the most useful way to communicate the range of these ideas compactly.

The single most salient departure that I-logics make from E-logics is that of taking into account the actual process of inferring as something that itself takes time. Thus when a conclusion is inferred, it has become a later time than prior to reaching that conclusion. This time-stratification spreads successive inferences out and leaves a self-updating record of an agent's evolving beliefs up until the present moment (which itself then moves ahead one more step, and so on indefinitely). Secondarily, this stratification then provides a very simple yet far-reaching form of introspection: looking back at one's beliefs of past moments and drawing conclusions bearing on everything from non-monotonicity and contradiction-handling, to ambiguity resolution, agent control of semantics, and awareness of one's own actions. Third, the notions of axiom, theorem, and entailment are no longer very informative: beliefs come and go – still due to (various forms of) inference, but including evolving time and the ability to give up (i.e., disinherit) beliefs that are judged as no longer appropriate.

Active logic in particular posits an unending3 sequence of time-steps, at each of which the knowledge base (KB) has a finite number of wffs, considered as the beliefs that the reasoning agent holds (at that step); the contents of the KB then fluctuate in time, and there is no final state where the agent arrives at its "finished" belief-set. It is the agent's behavior through time that is of interest.

3 In concert with Nilsson's notion of an agent with a lifetime of its own (Nilsson 1983).

Elementary Example: Go to Lunch

A robot needs to get to a noon lunch date, and it is now 11am. How can it ever decide to start walking? The problem is that, given Now(11:00), standard logics will treat this as an axiom and so the robot will never realize the time has changed, e.g., that it has now become 11:30 and it should start walking.4 Clearly it is essential that the robot be able to update its belief as to what time it is.

4 If lunch for a robot sounds silly, the reader is invited to imagine that the task instead is to approach and disarm a bomb at noon (when local civilians will have been safely moved away).

An example of the desired behavior is illustrated below; starred items on each line indicate beliefs newly formed at the corresponding time-step:

Time   Evolving belief set
11:00  Now(11:00)*; Now(11:30) → Do(walk)*
11:01  Now(11:01)*; Now(11:30) → Do(walk)
…
11:30  Now(11:30)*; Now(11:30) → Do(walk)
11:31  Now(11:31)*; Now(11:30) → Do(walk); Do(walk)*

At time 11:31 it has just inferred Do(walk).5 Notice that beliefs of the form Now(t) come and go, whereas the "plan" to walk starting at 11:30 continues to be inherited.6

5 If one wants to be picky, perhaps this should have been inferred a little earlier, say at 11:29, so that the walking can actually start by time 11:30. Here we are ignoring such details, and also the granularity of time steps.

6 After 11:30, there is no need to continue inheriting the plan; current implementations of active logic do not take advantage of this "garbage collection" but we expect our next version to do so.

A "clock" inference rule (along with Modus Ponens in the last two steps) can achieve this: from Now(t) infer Now(t+1):

t: Now(t)
------------------
t+1: Now(t+1)
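To make this concrete, here is a minimal runnable sketch in Python (ours, purely illustrative – the function step and the tuple encoding of wffs are not part of any published active-logic system) of a single time-step combining the clock rule, Modus Ponens against the current time, and default inheritance, applied to the Lunch example:

```python
# Minimal active-logic sketch (illustrative only, not a published
# implementation). A belief is a tuple; time advances one step per call.

def step(now, beliefs):
    """One active-logic time-step over a set of wff-tuples:
         ("now", t)             -- the current-time belief
         ("if-now", t, action)  -- Now(t) -> Do(action)
         ("do", action)         -- a drawn conclusion
    Returns (new_now, new_beliefs)."""
    new = set()
    for b in beliefs:
        if b[0] == "now":
            continue                      # Now(t) is never inherited
        new.add(b)                        # default: inherit everything else
        if b[0] == "if-now" and b[1] == now:
            new.add(("do", b[2]))         # Modus Ponens against Now(t)
    new.add(("now", now + 1))             # clock rule: Now(t) |- Now(t+1)
    return now + 1, new

# Go-to-Lunch, with times as minutes (11:00 = 660, 11:30 = 690):
now, kb = 660, {("now", 660), ("if-now", 690, "walk")}
while ("do", "walk") not in kb:
    now, kb = step(now, kb)
print(now)  # 691 (i.e., 11:31): Do(walk) appears one step after Now(11:30)
```

The design point is that inference and time advance together: Do(walk) cannot appear before the step after Now(11:30) is believed, exactly as in the table above.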
While this may seem simple enough, it radically changes the notion of a logic from an external specification (E-logic) of a system in another world, to an internal mechanism (I-logic) operating within and as part of that world. In particular, the example is written in the notation of active logic, the I-logic approach that we have been pursuing.

We next offer three clarifications to avoid confusion between E- and I-logics.

This Is Not Your Grandmother's Temporal Logic

Temporal logics are well known.7 But, in virtually all cases, they are not properly temporal – that is, they do not vary with time. In fact, they are examples of E-logics, taking an external timeless stance even while looking in on a world that may evolve in time. In effect, temporal logics have a frozen permanent now from which they can express facts about what is, will be, or was the case at various specified moments. But inferences made using such logics do not correspond to anything changing within the world being explored.

7 For standard approaches, see (Pnueli 1977; Baral&Zhao 2008; Gonzalez et al 2002; Barringer et al 2013; Kraus&Lehmann 1986).

Yet a wealth of beneficial connections arise between a properly temporal (I-logic) version of CSR and much of the rest of AI – e.g., NLP, perception, robotics, planning. As noted, this paper attempts to bring together a wide range of such benefits as well as provide motivation for the underlying logical apparatus, especially in the active logic form of I-logic. In effect, time-change is the root out of which all the rest flows. In particular, it dispenses with omniscience quite trivially: an agent believes only what it has had time to come to believe so far; anything else it may come to believe only later on (as further inferences are drawn). Such an agent certainly does not believe (contain in its KB) all wffs that are entailed by its current beliefs. Indeed, current beliefs may well be inconsistent – more on that below.

This Is Not Your Grandfather's Belief Revision

Belief revision8 provides a possible way to view the above clock rule: insert Now(11:30) as an update, which triggers relaxation of the KB – removal of Now(11:00) among other changes. Yet that last phrase ("among other changes") is where E-logic reveals one of its main hurdles: standard notions of belief revision – being based on a notion of closure under consequence – cannot serve as a mechanism for a robot to use, simply because such closure in general is very expensive (in most cases non-terminating or even undecidable). This is the omniscience problem, and is universally recognized as unrealistic: producing consequences is time-consuming.9

8 See, e.g., (Gardenfors 2003; Sloan&Turan 1999; Goldsmith et al 2004; Delgrande et al 2013; Diller et al 2015) for traditional E-logic approaches.

9 This is sometimes embraced as a necessary evil (Reiter 2001); or dealt with via specialized semantics (Levesque&Lakemeyer 2000), which however does not adequately address or ameliorate the time-consumption aspect.
Traditional (E-logic) belief revision also suffers from "recency prejudice" (Perlis 1997, 2000), in which newly acquired information is taken to have a firm validity that pre-existing beliefs must yield to. Yet it is hard to think of a case in which a new item P should take precedence over one's entire KB. The reasons for preferring P would surely in large measure be deeply embedded in that very KB as part of one's understanding of many relevant aspects of the world. Thus P and the KB (including information as to where this new P came from) would need to "fight it out" as to whether to accept P or not; and any conclusion could vary over time as the agent devotes more thought to the matter (and/or may decide to seek more information).

Goodbye to Axioms

Very little in CSR can reasonably be taken as firmly given over an agent's lifetime. Perhaps some mathematical concepts, perhaps some definitions. But more commonly, we hold beliefs for a while and then relax them if sufficient counterevidence arises. Or, in many cases, we already have that evidence, in the form of other beliefs to the effect that something is in flux (the time, an airplane's location, and so on); sometimes change is the rule. It is hard then to find much to take as axiomatic. Here are two more examples.

(1) Your eleven-year-old son tells you that Barack Obama is 6'8" tall. You do not take this as a fact; on the contrary – although you may not have any specific height in mind for Obama – you do believe 6'8" is sufficiently unusual (and presidents are sufficiently in the news) that it would have been remarked on a lot and you would have heard it before. So you discount the information from your son. But if your son then tells you that Obama has been slouching so as to disguise his height ever since his twenties, and that he is in fact 6'8", would you still be so sure he is wrong?

(2) You hear the TV meteorologist say that the temperature dropped to 1 degree below zero last night; and you accept this. But you would not be especially startled to learn later that the meteorologist has misread her notes and that the low was 1 degree above zero; or that the thermometer had given a false reading.

In each case, many background assumptions are in effect. At this point one might be tempted to opt for probabilities. But while the latter clearly have an important role to play in AI, they need not come in quite here. Instead, we often simply reserve judgment, or suspend a previous judgment. And again, I-logics are vehicles for this real-time ongoing sort of reasoning. Indeed, an agent can only reason with what it has at hand.10

10 See for instance the Oxford Reference on Neurath's boat – "The powerful image conjured up by Neurath, in his Anti-Spengler (1921), whereby the body of knowledge is compared to a boat that must be repaired at sea: 'we are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom…'. Any part can be replaced, provided there is enough of the rest on which to stand. The image opposes that according to which knowledge must rest upon foundations, thought of as themselves immune from criticism, and transmitting their immunity to other propositions by a kind of laying-on of hands."

I-logic (at least in its active-logic form) not only brings many benefits but (perhaps surprisingly) is not particularly mired in the weeds of implementational details. This is not to say that all such issues are now fully resolved – this is a long work very much still in progress. But looking under the robotic hood, so to speak, is essential if we are to come to grips with how CSR can actually take place in robotic creations coming in the (seemingly quite near) future.

Thus instead of axioms, at any moment, our artificial agent has a specific collection of beliefs (stored in memory), and this collection changes as inferences are drawn, perceptions made, and so on. Among these changes – and central to most of the distinct features of active logic – is the updating of the present time as in the clock rule. There is no notion of inferential closure; the current beliefs are simply whatever has been inferred/perceived and kept so far (i.e., inherited to the present time).

A belief can fail to inherit for a variety of reasons. No belief of the form Now(t) is inherited – it is replaced by Now(t+1). Other failures of inheritance are illustrated in various cases below. But more importantly we now turn to the power of introspective reasoning that becomes possible in I-logics endowed with a notion of evolving time.

Introspection Is a Many-Splendored Thing

Introspection is one of the most valuable tools that come almost for free in an I-logic.11 It in turn facilitates powerful methods for detecting and defusing contradictions, managing nonmonotonic inference, reasoning about and adjusting semantics, tracking actions, and much more. In this and several sections that follow, we explain and illustrate a number of these ideas.

11 And so perhaps "introspective logic" would be a more apt name than internal logic.

Given a belief P at time t, an agent ought to be able to note later on (say at time t+1) that it had that belief earlier. This can be achieved in active logic by means of a rule such as the following (positive introspection), where the KB-predicate symbol refers to the agent's own knowledge base:
t: P
---------------
t+1: KB(P,t)

Similarly, another rule (negative introspection) can provide the result that one did not just previously have a given belief:12

t: …
-----------------
t+1: ~KB(P,t) [if P is not present at the previous step]

12 Many issues arise here that we do not have space to address, such as: to which wffs P are the introspection rules applied (if care is not taken, the KB will quickly become swamped). A much longer paper in preparation will deal with this.

These two rules are trivial to implement and cheap to run, involving no more than a linear-time lookup at time t+1 to see what wffs are or are not among the t-beliefs.13 Yet a surprising number of capabilities flow from this, as expanded upon in the next several subsections.

13 A t-belief is simply any belief in the KB at time t.
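In the same illustrative Python encoding used in the Lunch section (ours, not the authors' implementation), both rules reduce to a membership test over the previous step's belief set; restricting negative introspection to a set of queried wffs reflects the swamping concern of footnote 12:

```python
# Positive and negative introspection (illustrative sketch; wffs are
# strings, kb_prev is the set of t-beliefs).

def introspect(kb_prev, t, queries):
    """Return the (t+1)-beliefs produced by introspection:
    KB(P, t) for each P in kb_prev (positive), and ~KB(P, t) for each
    queried P absent from kb_prev (negative). Restricting the negative
    rule to `queries` keeps the KB from being swamped."""
    new = {("KB", p, t) for p in kb_prev}
    new |= {("not-KB", p, t) for p in queries if p not in kb_prev}
    return new

print(introspect({"Bird(tweety)"}, 5, {"~Flies(tweety)"}))
# yields KB(Bird(tweety), 5) and ~KB(~Flies(tweety), 5), as tuples
```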
Non-monotonicity

At this point we can already carry out some simple cases of nonmonotonic reasoning. For instance, the default that B's are typically F's (birds typically fly) can be captured like this: if one doesn't already (as in a moment ago) know that a given bird doesn't fly, then assume it does. In active-logic notation this can be written as follows:

∀x [ (∀t) {Bird(x) & ~KB(~Flies(x),t-1)} → Flies(x) ]

Then given Bird(tweety), all it takes to infer that Tweety can fly is ~KB(~Flies(tweety),t-1), which comes instantly from negative introspection – unless one does already know Tweety cannot fly. No fuss, no muss – no need for complex consistency checks or internal model-building; conclusions are held as long as they are held, and can be surrendered when evidence so suggests.14

14 Of course, an agent can also remain in doubt, or even be deliberately tentative (such as with probabilities and during learning; see (Getoor&Taskar 2007)).
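As a hedged illustration (again in our toy Python encoding, with strings standing in for wffs), the default amounts to: fire Flies(x) whenever Bird(x) is believed and ~Flies(x) was absent a step ago:

```python
# Default "birds typically fly" via negative introspection (sketch).
# The default fires when the exception was absent a step ago -- no
# consistency check or model-building required.

def bird_default(kb_prev):
    """From Bird(x) plus ~KB(~Flies(x), t-1), conclude Flies(x)."""
    new = set()
    for wff in kb_prev:
        if wff.startswith("Bird("):
            x = wff[len("Bird("):-1]
            if f"~Flies({x})" not in kb_prev:   # negative introspection
                new.add(f"Flies({x})")
    return new

print(bird_default({"Bird(tweety)"}))                    # {'Flies(tweety)'}
print(bird_default({"Bird(tweety)", "~Flies(tweety)"}))  # set()
```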
Thus, one might later on come to believe Tweety is a penguin – whether by observation or simply additional inference. This will then appear as a (direct) contradiction in the KB: two beliefs of the form P and ~P will both be present at the same time-step. Which brings us to the next subsection.

Contradictions

Contradictions are virtually inevitable in commonsense reasoning (Perlis 1997). While this is generally considered a major nuisance for CSR, it can actually be a boon. Here is how an I-logic can benefit (in the specific form of active logic): If the wffs P and ~P both appear as t-beliefs, then neither is inherited as a (t+1)-belief and instead Contra(t, P, ~P) is inferred as a (t+1)-belief. Thus the agent retains in the evolving present the fact that there had been an earlier contradiction, but is no longer directly subject to it, and ex contradictione quodlibet (from a contradiction all follows) is thereby disarmed.15

15 To be sure, whatever circumstances produced P and ~P may do so again, so this is not a panacea. But it can be shown (Miller 1993) that under reasonably broad conditions this too will resolve into a stable state.

Thus instead of being a logician's anathema, contradictions can be a robot's best friend, helping it adjust its KB to come more into line with reality. Contradictions simply remain undiscovered in the KB until they are discovered (in the P, ~P form) over time – and then defused. This is a very different approach from more customary paraconsistent logics, most of which skirt around the edges of a contradiction – rather than acknowledge it and use it to make changes to the KB – or in effect assume they can all be hunted out in advance.16

16 E.g., see (Roos 1992) for a more traditional E-logic treatment; and (Anderson et al 2013) for more on an active logic approach.

In the case of Tweety above, new information that she is a penguin and does not fly will provide (say at time-step t) a direct contradiction between Flies(tweety) and ~Flies(tweety), which then at time t+1 will result in the KB having neither of these inherited from step t, but instead will have an assertion that such a contradiction did arise at time t. If the agent has further information – such as that penguins are a subclass of birds, and that subclass properties are more trustworthy17 – then ~Flies(tweety) can be reinstated. If not, then the agent remains in doubt.

17 Such a rule has been implemented in one of our active logic programs.

It is our contention that this sort of fluctuating conflict-resolution over time is the only option for an actual agent engaged in reasoning as the world evolves.
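A minimal sketch of this defusing step, in the same illustrative Python encoding (the Contra tuple here is our rendering of the Contra(t, P, ~P) belief):

```python
# Defusing a direct contradiction (sketch): if P and ~P are both
# t-beliefs, neither is inherited; Contra(t, P, ~P) is added instead.

def defuse(kb_prev, t):
    neg = lambda p: p[1:] if p.startswith("~") else "~" + p
    clashing = {p for p in kb_prev if neg(p) in kb_prev}
    contras = {("Contra", t, p, neg(p))
               for p in clashing if not p.startswith("~")}
    return (kb_prev - clashing) | contras

kb = {"Flies(tweety)", "~Flies(tweety)", "Bird(tweety)"}
print(defuse(kb, 7))
# Bird(tweety) survives; Flies(tweety) and ~Flies(tweety) are replaced
# by ('Contra', 7, 'Flies(tweety)', '~Flies(tweety)')
```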
Semantics and Pragmatics

In an I-logic, semantics can take on an entirely new aspect, where the agent can exert control and both determine and reason about what its expressions do or don't stand for.18 This is one of the most powerful aspects of introspection that we have noted so far. In effect, one can reason about one's own expressions – simply by means of introspectively examining past beliefs and subexpressions thereof. One can even assign new expressions, if for instance a new entity is observed, or if one infers that two entities were being conflated as one (as in the cases of ambiguity or of misidentification).

18 That is, this refers to meanings the agent assigns to its expressions, quite apart from what a logic-designer may have in mind. Note that the recent Facebook robot-incident of "inventing a new language" is not of this sort at all: those robots did not assign meanings to anything, either in the original English or in their later made-up phrases. See http://www.newsweek.com/2017/08/18/ai-facebook-artificial-intelligence-machine-learning-robots-robotics-646944.html

In fact, AI systems are generally notorious for altogether ignoring the expression/meaning distinction, as in: Joe is a person and also we just now used "Joe" to refer to him. People can and do (and must) note and make use of the difference between language and what language refers to. Our artificial agents need to be able to do the same; otherwise they can hardly be said to know anything (Perlis 2016), let alone reason about errors. With all the recent successes in NLP (mostly coming from deep learning), still there is almost no language-like introspection, no meanings associated with words in a way that allows reasoning, let alone adjusting meanings.

On the other hand, introspection allows representation of beliefs (at least at previous steps) as objects that can be reasoned about. This has numerous ramifications, which for lack of space we can only briefly allude to in the rest of this section.

Ambiguity and Misidentification

A potentially ambiguous expression (say, "Jean's car") can be recognized as such (e.g., by noticing a direct contradiction – "this is Jean's car, and the key to Jean's car isn't the key to this car"). This in turn triggers an effort to resolve the contradiction. Maybe Jean has two cars (ambiguity); or maybe this is the wrong key or that is not her car at all (misidentification).

The latter case is especially interesting, for it requires some expression to represent an object (the wrong key or wrong car), but not the expression that had been used a moment ago. Miller and Perlis (Miller&Perlis 1996) propose a special active-logic function-symbol tfitb to produce a new name on demand, for the "thing formerly interpreted to be" something else.
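A possible rendering of the idea (purely our own sketch; the real tfitb is a function symbol inside the logic, not a program-level name generator) is to coin a fresh constant indexed by the old name and the time-step at which the misidentification was noticed:

```python
# Fresh names on demand (sketch of the tfitb idea: a new term for the
# "thing formerly interpreted to be" something else).

import itertools
_fresh = itertools.count(1)

def tfitb(old_name, t):
    """Coin a new constant for the object that, up to step t, had been
    (mis)identified as `old_name`; the old name survives for whatever
    it really denotes."""
    return f"tfitb{next(_fresh)}({old_name},{t})"

# The wrong key from the Jean's-car example gets its own name:
print(tfitb("key_of(jeans_car)", 12))   # tfitb1(key_of(jeans_car),12)
```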
Focal points

A related idea comes up in planning, especially multiagent planning. It may be important to identify an entity that another agent is likely to similarly identify – for instance a good location to meet up or to leave a message, or an "obvious" item to pick out of a long list (e.g., the first, last, or middle one). This in turn may require coming up with a new expression that was not previously in one's ontology. In (Kraus et al 2000) an approach to this is given using active logic.

Pragmatics

In conversation, all sorts of assumptions arise and are confirmed or dispelled, often by means of further conversation. Thus NLP-dialog is a prime example of beliefs coming and going during reasoning. Here is one example dialog, in which reasoning involves inferences that evolve over time, that has been implemented in active logic (Purang et al 1996):

(A) Kathy: Are the roses fresh?
(B) Bill: They are in the fridge.
(C) Bill: But they are not fresh.

At some point prior to (C), Bill supposes Kathy will draw from (B) the implicature that the roses are fresh, so in (C) he dispels that inaccuracy. Thus Bill has to reason about the effects of the ongoing conversation and make adjustments to it.

The One Wise Man Problem

Much has been made of the Three-Wise-Men problem – see (Konolige 1984; Elgot-Drapkin&Perlis 1990). A realistic treatment has to take into account the passage of time as the wise men think; and this can be done in traditional temporal logic, as long as the wise men themselves are not required to use that same logic. But suppose we do want to capture the reasoning of such an agent; for instance – to make the problem especially simple – the King who wants to assure himself that his one wise man is not an idiot. So the King proposes this problem to his wise man: "Is 15 a prime number?" Being no genius himself, the King has to think for a while before deciding the answer is "no" – and if by then the wise man has not yet answered, the King can start looking for a replacement. But to do this reasoning (which involves introspection), the King will need I-logic, and in particular an I-logic that closely tracks time.

What Am I Doing?

It is important that an agent not only plan and take actions, but that it also know when it is in fact doing so. Otherwise strange behaviors can result. In one of our recent robotic studies, robot Alice was programmed to point and say "I see Julia" whenever it heard an utterance containing the word "Julia" (actually, it was doing no actual word-processing at the time, but simply matching the input sound-stream to a stored one). So it got itself into a loop, hearing "Julia" from its own loudspeaker and then pointing and repeating the same phrase over and over.

But taking a cue from neuropsychology,19 we were able to encode a rule for noting one's own activity: whenever an action is undertaken, Do(x) is inferred (recall the Lunch example), and at the next step Doing(x) can be inferred, and inherited as long as the activity is still underway.20 We have implemented this in a grounded way, so that when Alice undertakes to speak she infers that she is engaged in a speaking action (but also checks what she hears to make sure it matches her expected speech).

19 The so-called efference copy; see (Brody et al 2015).

20 This is a different method from that used in (Bringsjord et al 2015), where voice recognition appears to take precedence over recall of one's own actions.
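A rough sketch of such a rule in our running Python encoding (the predicate names Do and Doing follow the text; still_underway is an assumed hook into the robot's effectors, invented here for illustration):

```python
# Knowing what one is doing (sketch): Do(x) at t yields Doing(x) at t+1,
# inherited while the activity continues.

def track_actions(kb_prev, still_underway):
    new = set()
    for b in kb_prev:
        if b[0] == "Do":                            # action undertaken at t
            new.add(("Doing", b[1]))                # ... so Doing(x) at t+1
        elif b[0] == "Doing" and still_underway(b[1]):
            new.add(b)                              # inherit while active
    return new

kb = track_actions({("Do", "speak")}, lambda a: True)
# Alice's loop is broken: hearing "Julia" while ("Doing", "speak") is
# believed can be treated as expected self-produced speech, not a trigger.
print(kb)   # {('Doing', 'speak')}
```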
Reasoned Learning

Machine learning (ML) has taken center stage in recent years, and for good reason: it has made justly fabled strides, and surely will be a major part of any future general-purpose AI. But alone it is insufficient. The practices usually referred to as ML are ones of habituation or training. A human turns a trainable system on, allows it to train, perhaps applies it, and later turns it off; in itself, traditional ML has little if any autonomy.

But a general-purpose AI (robotic or otherwise) will need to decide what to learn, and when and how, and whether learning is working and/or should stop. Moreover, as noted in the Introduction, cultural (symbolic) transmission is also a major source of learning.21 And finally, a system will need to know what it has or hasn't already learned.22

21 See also (Levesque 2017).

22 But again see (Getoor&Taskar 2007) for another approach.

An I-logic (particularly, active logic) – in keeping a history of its own KB over time – can potentially examine that history, infer that it has (or lacks) certain capabilities, and then decide whether to activate an appropriate ML process; see (Elgot-Drapkin et al 1991) for a brief introduction.
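As a speculative sketch (our own; the predicates Failed and CanDo are invented here for illustration, not drawn from the paper), such a decision might scan the KB history for repeated task failures with no recorded capability:

```python
# Deciding whether to learn (speculative sketch). The agent scans its
# KB history for repeated failures at a task with no recorded capability.

def should_learn(history, task):
    """history: dict mapping time-step -> set of belief tuples."""
    failures = sum(("Failed", task) in kb for kb in history.values())
    capable = any(("CanDo", task) in kb for kb in history.values())
    return failures >= 2 and not capable

history = {1: {("Failed", "grasp")}, 2: {("Failed", "grasp")}, 3: set()}
if should_learn(history, "grasp"):
    print("activate an ML process for 'grasp'")  # then monitor progress
```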
Related Work

Ray Reiter (Reiter 2001) considers numerous issues that arise in commonsense reasoning (CSR) when an agent's deliberations occur within a dynamic setting, and in particular, how a formal logic might be used by an agent to do its own reasoning, and have that reasoning keep up with changing events (pp. 163-164). Reiter succeeds in isolating various themes surrounding this: omniscience, internal contradictions, and so on. But in the end he advocates instead the "external design stance." Action languages (Gelfond&Lifschitz 1998) are another firmly E-logical approach, thus again suitable for external analysis of an agent but not for real-time use by an agent, let alone by one with a potentially inconsistent KB; the same holds for temporal action logics (TAL; see Doherty 1998) and the temporal logic of actions (TLA; see Lamport 1994).

In a survey of commonsense reasoning (Davis 2017), the E- and I- distinction is also raised (under different terminology); but, like Reiter, Davis focuses primarily on the external stance. A survey on robot deliberation (Ingrand&Ghallab 2017) does not address this distinction.

Levesque and Lakemeyer (Levesque&Lakemeyer 2000, pp. 195-196) argue that attending to internal inference mechanisms to avoid omniscience makes behavioral predictions impossible. They deal with omniscience instead by enlargements of the semantics to allow "non-standard world states" that keep out undesired agent-beliefs. But it is unclear what predictions one could hope to make, given an agent with thousands of explicit beliefs, other than ones of such generality as to be virtually useless about that particular agent's behavior. Will it complete a given task (even a purely inferential one) within ten days? One surely cannot expect anything other than a careful examination of the robot's actual processing to reveal such results.

On the other hand, Richard Weyhrauch and Carolyn Talcott (Weyhrauch 1980; Weyhrauch&Talcott 1990, 1994; Talcott 2003) initiated the FOL approach (one instance of an I-logic), which aimed at providing reasoning mechanisms for actual use by an agent; however this effort has remained in a fragmentary state. An interesting addendum to FOL is WristWatch (Weyhrauch&Talcott 1997) – a dynamic context from which to answer questions about time, specifically about the ever-changing meanings of the constants now and then as updated by their "tick" inference rule. Weyhrauch and Talcott speculate about supplying a robot with WristWatch embedded into FOL as its mechanism to reason about time.

Pei Wang's Non-Axiomatic Logic (aka NARS) provides a (term-logic based) reasoning system which aims to be finite, real-time and open (Wang 2013). It shares some features with active logic, in that it is non-monotonic, allows for self-reference, and is intended to be situated (in that knowledge is not disembodied but should be based on the agent's experience). While Chapter 9 of (Wang 2013) addresses potential meta-cognition in his system, no particular mechanisms for monitoring an ongoing reasoning process seem to be specified. Gestures toward such mechanisms are made (by, e.g., referencing "doubt" and "wait" operations), but we are not aware of any attempt to operationalize these. Later iterations of NARS (Wang&Hammer 2015; Hammer et al 2016) address temporality and recognize the problem of assuming that "the reasoning system itself is outside the flow of time" (Wang&Hammer 2015). The temporality in this system differs from active logic, however, in that the flow of time is not itself seen as an object of reasoning.

Jacek Malec and his group (Asker&Malec 2005) extended active logic and proposed a labeled deductive system (LDS) which attaches a label to every well-formed formula. LDS allows the inference rules to analyze and modify labels, or even trigger on specific conditions defined on the labels. They demonstrated the use of LDS by formalizing models of short-term memory, followed up by studying several scenarios (Heins 2009). In related work, (Nowaczyk 2006) extends active logic to partial planning situations.
An interesting middle ground is taken in TRL – timed reasoning logic – see (Alechina et al 2004a,b; Ågotnes&Alechina 2007). While TRL remains at the E-logic level, it can express fairly detailed aspects of internal processing. In that respect it is similar to the meta-level step-logics in (Elgot-Drapkin&Perlis 1990). Because of more limited expressive power, TRL tends to be decidable. On the other hand, the semantics given in (Anderson et al 2008) appears to offer a compelling psychologically plausible alternative. But it is noteworthy that none of these address the agent-controlled-semantics issue above.

The planning community is beginning to acknowledge the importance of taking planning-time into account as part of the planning process; see for instance (Ghallab et al 2016; Lin et al 2015). The earliest published work we are aware of on this is (Nirkhe et al 1991).

A recent article (Tenorth&Beetz 2017) discusses complex interactions between robotic control, knowledge representations at various levels, and reasoning over those representations, including temporal reasoning. While the intention is to provide robots with inferential abilities, the approach appears to remain in the E-logic framework.

Conclusion: Reasoning is a Process

A reasoner is engaged in reasoning, and makes decisions during (and as part of) that reasoning, such as whether to continue along present lines, or try a new tack, or give up, or seek assistance. That is, a reasoning agent itself is engaged in some version of what we have called I-logic. On the other hand, the study of reasoning can of course proceed at many levels and in many forms.

It may be premature – despite many decades of work (including some by ourselves) – to try to pin down precise specifications (i.e., in an E-logic) of broad CSR behaviors. We know so little of the notion of intelligence at this point that it may be more useful to get lots more experience with reasoning behavior itself (that is, via I-logics that can actually be used by automated agents/robots). At least, this is the perspective we are exploring here.

An analogy with (Polya 1945) is tempting. While mathematical logic is the very epitome of E-logic (fully focused on entailment/consequence), it largely ignores the situation of actual mathematician-reasoners who question axioms, decide to change problems, and are keenly aware of (and make use of) their progress or lack of it over time (Perlis 2016). Polya's advice is aimed at the latter, with practical in-the-moment strategies to attend to. And while mathematical logic has been extraordinarily successful in its own right, it has afforded relatively mild impact or insight into mathematical practice overall.

We repeat from our Introduction: The single most salient departure that I-logics make from E-logics is that of taking into account the actual process of inferring as something that itself takes time. This departure provides a very rich set of tools that we hope to have illustrated here.

References

Ågotnes, T. and Alechina, N. 2007. The dynamics of syntactic knowledge. Journal of Logic and Computation, February 2007.

Alechina, N., Logan, B., and Whitsey, M. 2004a. A complete and decidable logic for resource-bounded agents. In Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004). ACM Press.

Alechina, N., Logan, B., and Whitsey, M. 2004b. Modelling communicating agents in timed reasoning logics. Proceedings, 9th European Conference (JELIA), Lecture Notes in AI 3229. Springer.

Anderson, M., Gomaa, W., Grant, J., and Perlis, D. 2013. An approach to human-level commonsense reasoning. In K. Tanaka, F. Berto, E. Mares, and F. Paoli (eds.), Paraconsistency: Logic and Applications. Springer.

Anderson, M., Gomaa, W., Grant, J., and Perlis, D. 2008. Active logic semantics for a single agent in a static world. Artificial Intelligence 172: 1045-63.

Anderson, M. and Perlis, D. 2005. Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness. Journal of Logic and Computation 15(1).

Asker, M. and Malec, J. 2005. Reasoning with limited resources: Active logics expressed as labeled deductive systems. Bulletin of the Polish Academy of Sciences, Technical Sciences, Vol. 53, No. 1, pp. 69-78.

Baral, C. and Zhao, J. 2008. Non-monotonic temporal logics that facilitate elaboration tolerant revision of goals. AAAI 2008: 406-4.

Barringer, H., Fisher, M., Gabbay, D.M., and Gough, G. (editors) 2013. Advances in Temporal Logic. Springer Science & Business Media.

Bringsjord, S., Licato, J., Govindarajulu, N., Ghosh, R., and Sen, A. 2015. Real robots that pass human tests of self-consciousness. In Proceedings of the IEEE 24th International Symposium on Robot and Human Interactive Communication.

Brody, J., Cox, M.T., and Perlis, D. 2014. Incorporating elements of a processual self into active logic. In M. Waser (ed.), Implementing Selves with Safe Motivational Systems and Self-Improvement: Papers from the Spring Symposium. AAAI Press.

Brody, J. and Perlis, D. 2015. Who's talking? Efference copy and a robot's sense of agency. AAAI Fall Symposium 2015, Arlington, VA.
Davis, E. 2017. Logical formalizations of commonsense reasoning: a survey. J. Artificial Intelligence Research, 59, 651-723.

Dell'Aglio, D., Della Valle, E., Van Harmelen, F., and Bernstein, A. 2017. Stream reasoning: A survey and outlook. Data Science, in press.

Delgrande, J., Peppas, P., and Woltran, S. 2013. AGM-style belief revision of logic programs under answer set semantics. LPNMR 2013: 264-276.

Diller, M., Haret, A., Linsbichler, T., Rümmele, S., and Woltran, S. 2015. An extension-based approach to belief revision in abstract argumentation. IJCAI 2015: 2926-2932.

Doherty, P. 1998. TAL: Temporal Action Logics. Electronic Transactions on Artificial Intelligence, 2(3-4), 273-306.

Elgot-Drapkin, J. and Perlis, D. 1990. Reasoning situated in time, I: basic concepts. J. of Experimental and Theoretical AI.

Elgot-Drapkin, J., Miller, M., and Perlis, D. 1991. Memory, reason and time: the step-logic approach. In R. Cummins and J. Pollock (eds.), Philosophy and AI: Essays at the Interface. MIT Press.

Gärdenfors, P. (editor) 2003. Belief Revision. Cambridge University Press.

Gelfond, M. and Lifschitz, V. 1998. Action languages. Electronic Transactions on AI, v. 3.

Getoor, L. and Taskar, B. (editors) 2007. Introduction to Statistical Relational Learning. MIT Press.

Ghallab, M., Nau, D., and Traverso, P. 2016. Automated Planning and Acting. Cambridge Univ. Press.

Goldsmith, J., Sloan, R., Szörényi, B., and Turán, G. 2004. Theory revision with queries: Horn, read-once, and parity formulas. AIJ 156(2): 139-176.

Gonzalez, G., Baral, C., and Cooper, P. 2002. Modeling multimedia displays using action based temporal logic. VDB 2002: 141-155.

Hammer, P., Lofthouse, T., and Wang, P. 2016. The OpenNARS implementation of the non-axiomatic reasoning system. International Conference on Artificial General Intelligence. Springer.

Heins, T. 2009. A case study of active logic. Master's thesis, Department of Computer Science, Lund University.

Ingrand, F. and Ghallab, M. 2017. Deliberation for autonomous robots: A survey. Artificial Intelligence, v. 247.

Konolige, K. 1984. A deduction model of belief and its logics. PhD dissertation, Stanford University.

Kraus, S. and Lehmann, D. 1986. Knowledge, belief and time. ICALP 1986: 186-195.

Kraus, S., Rosenschein, J.S., and Fenster, M. 2000. Exploiting focal points among alternative solutions: two approaches. Annals of Mathematics and Artificial Intelligence, 28(1-4): 187-258.

Lamport, L. 1994. The temporal logic of actions. ACM Transactions on Programming Languages and Systems (TOPLAS), 16(3), 872-923.

Levesque, H. 2017. Common Sense, the Turing Test, and the Quest for Real AI. MIT Press.

Levesque, H. and Lakemeyer, G. 2000. The Logic of Knowledge Bases. MIT Press.

Lin, C., Kolobov, A., Kamar, E., and Horvitz, E. 2015. Metareasoning for planning under uncertainty. arXiv:1505.00399v1.

Miller, M. 1993. A view of one's past, and other aspects of reasoned change in belief. PhD dissertation, University of Maryland.

Miller, M. and Perlis, D. 1996. Automated inference in active logics. J. Applied Non-Classical Logics, 6(1).

Mueller, E. 2015. Commonsense Reasoning, 2nd edition. Morgan Kaufmann.

Nilsson, N. 1983. Artificial intelligence prepares for 2001. AI Magazine, 4(4).

Nirkhe, M., Kraus, S., and Perlis, D. 1991. Fully deadline-coupled planning: One step at a time. International Symposium on Methodologies for Intelligent Systems (ISMIS 1991).

Nowaczyk, S. 2006. Partial planning for situated agents based on active logic. Workshop on Logics for Resource Bounded Agents, ESSLLI.

Perlis, D. 1997. Sources of, and exploiting, inconsistency: preliminary report. Journal of Applied Non-Classical Logics.

Perlis, D. 2000. The role(s) of belief in AI. In J. Minker (ed.), Logic-Based AI. Kluwer.

Perlis, D. 2016. The five dimensions of reasoning in the wild. AAAI-2016.

Pnueli, A. 1977. The temporal logic of programs. In 18th Annual Symposium on Foundations of Computer Science, pp. 46-57. IEEE.

Polya, G. 1945. How to Solve It. Princeton Univ. Press.

Purang, K., Gurney, J., and Perlis, D. 1996. AAAI Spring Symposium.

Rajan, K. and Saffiotti, A. (editors) 2017. Special issue on AI and robotics. Artificial Intelligence, v. 247.

Reiter, R. 2001. Knowledge in Action. MIT Press.

Roos, N. 1992. A logic for reasoning with inconsistent knowledge. Artificial Intelligence, 57, 69-103.

Sloan, R. and Turán, G. 1999. On theory revision with queries. COLT 1999: 41-52.

Talcott, C. 2003. FOL: Towards an architecture for building autonomous agents from building blocks of first order logic. http://www-formal.stanford.edu/FOL/03jan-umd.ppt (Slides from talk at U. Maryland.)

Tenorth, M. and Beetz, M. 2017. Representations for robot knowledge in the KnowRob framework. Artificial Intelligence, v. 247.

Wang, P. 2013. Non-Axiomatic Logic: A Model of Intelligent Reasoning. World Scientific.

Wang, P. and Hammer, P. 2015. Issues in temporal and causal inference. International Conference on Artificial General Intelligence. Springer.

Weyhrauch, R. 1980. Prolegomena to a theory of mechanized formal reasoning. Artificial Intelligence, 13, 133-170.

Weyhrauch, R. and Talcott, C. 1990. Towards a theory of mechanizable theories: I. FOL contexts – the extensional view. European Conference on Artificial Intelligence (ECAI).

Weyhrauch, R. and Talcott, C. 1994. The logic of FOL systems: formulated in set theory. In Logic, Language, and Computation, 119-132.

Weyhrauch, R. and Talcott, C. 1997. WristWatch – an FOL theory of time. http://www-formal.stanford.edu/FOL/w.ps