<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Internal Reasoning of Robots</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Don</forename><surname>Perlis</surname></persName>
							<email>perlis@cs.umd.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Goucher College</orgName>
								<orgName type="institution" key="instit1">University of Maryland</orgName>
								<orgName type="institution" key="instit2">Bar Ilan University</orgName>
								<address>
									<settlement>Bethesda</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Justin</forename><surname>Brody</surname></persName>
							<email>justin.brody@goucher.edu</email>
							<affiliation key="aff0">
								<orgName type="department">Goucher College</orgName>
								<orgName type="institution" key="instit1">University of Maryland</orgName>
								<orgName type="institution" key="instit2">Bar Ilan University</orgName>
								<address>
									<settlement>Bethesda</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sarit</forename><surname>Kraus</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Goucher College</orgName>
								<orgName type="institution" key="instit1">University of Maryland</orgName>
								<orgName type="institution" key="instit2">Bar Ilan University</orgName>
								<address>
									<settlement>Bethesda</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Miller</surname></persName>
							<email>mjmiller@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Goucher College</orgName>
								<orgName type="institution" key="instit1">University of Maryland</orgName>
								<orgName type="institution" key="instit2">Bar Ilan University</orgName>
								<address>
									<settlement>Bethesda</settlement>
									<region>MD</region>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Internal Reasoning of Robots</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C60FBFF9085ABDEFB0CBC08CFF74D952</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:00+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We argue for the value of examining the internal processes that robots might actually use to draw inferences in a timely way in a dynamic world. This requires a significantly different way of thinking about logic and reasoning, which in turn bears on some traditional logic-related problems such as omniscience and reasoning in the presence of a contradiction, as well as on a wide variety of other AI issues. A nonstandard internally-evolving notion of time seems to be the key that unlocks other tools.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>We teeter on the edge of the age of general-purpose robots. It thus becomes ever more important that commonsense reasoning (CSR) examine in some detail just how such a robot will actually think, i.e., produce inferences over time (as it plans, decides, assesses, questions, learns, explores, updates, reconsiders, etc.). In particular, robots will need to keep their reasoning abreast of at least some aspects of the evolving world, including the passage of time and how they are progressing with regard to their own (also evolving) goals. <ref type="bibr">1</ref> On the surface much of CSR may seem to be aiming at just these issues. <ref type="bibr">2</ref> But the bulk of such work follows what Ray Reiter has called the "external design stance" (Reiter 2001, pp. 292-293): that of a designer-scientist "entirely external to … [and] … looking down on some world inhabited by an agent." Indeed, a lot of this work is very relevant and has led to major advances in our understanding: situation calculus, nonmonotonic reasoning, and much more. Still, the external stance is a highly idealized abstraction that creates an unworkable barrier regarding a robot's internal reasoning, and in addition faces huge hurdles such as omniscience, contradiction-intolerance, and more.</p><p>Work primarily supported by the U. S. Office of Naval Research. <ref type="bibr">1</ref> While we recognize that Markov decision processes (MDPs) and related technical tools are standard items in much of current (often highly-structured special-task) robotic work, general-purpose robots will be bombarded with "culturally supplied" information from other agents, signage, online sources, and so on, and will need to reason in real-time with such information. 
Hence a knowledge base (KB) managed in large measure by inferential processes seems unavoidable.</p><p>2 See for instance (Rajan&amp;Saffiotti 2017) for very recent work.</p><p>This paper attempts to shed light on that barrier and those hurdles, and to highlight an alternative that drives a sharp wedge between two notions of logic: (i) the standard "external" kind (E-logics) that specify features from afar via closure under (some form of) consequence or entailment relation, and (ii) "internal" ones (I-logics) that represent (and indeed can actually be used for) the inferential processing undertaken by an agent over time. (We especially focus on active logic, which is perhaps the most developed form of I-logic so far. Active logic grew out of ideas in (Elgot-Drapkin&amp;Perlis 1990), and has been continually investigated ever since (Nirkhe et al 1991; Miller&amp;Perlis 1996; Kraus et al 2000; Anderson et al 2008; Brody et al 2014; Brody&amp;Perlis 2015).)</p><p>As we will see, some of the issues faced by E-logics (e.g., omniscience) simply go away in an I-logic approach. In addition, we have found a wide array of unexpected benefits of such an approach that tie CSR to many other parts of AI. Thus the present paper is also a kind of progress report, pulling together many aspects of our attempt to look under the robotic hood, to craft appropriate logic mechanisms to go there, and to explore applications across AI. As such, it will have a large number of short sections; we beg the reader's indulgence, for we see this as the most useful way to communicate the range of these ideas compactly.</p><p>The single most salient departure that I-logics make from E-logics is that of taking into account the actual process of inferring as something that itself takes time. Thus by the time a conclusion is inferred, it is already later than when that inference began. 
This time-stratification spreads successive inferences out and leaves a self-updating record of an agent's evolving beliefs up until the present moment (which itself then moves ahead one more step, and so on indefinitely). Second, this stratification then provides a very simple yet far-reaching form of introspection: looking back at one's beliefs of past moments and drawing conclusions bearing on everything from non-monotonicity and contradiction-handling, to ambiguity resolution, agent control of semantics, and awareness of one's own actions. Third, the notions of axiom and theorem and entailment are no longer very informative: beliefs come and go -still due to (various forms of) inference, but including evolving time and the ability to give up (i.e., disinherit) beliefs that are judged as no longer appropriate.</p><p>Active logic in particular posits an unending 3 sequence of time-steps, at each of which the knowledge base (KB) has a finite number of wffs, considered as the beliefs that the reasoning agent holds (at that step); the contents of the KB then fluctuate in time, and there is no final state where the agent arrives at its "finished" belief-set. It is the agent's behavior through time that is of interest.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Elementary Example: Go to Lunch</head><p>A robot needs to get to a noon lunch date, and it is now 11am. How can it ever decide to start walking? The problem is that, given Now(11:00), standard logics will treat this as an axiom and so the robot will never realize the time has changed, e.g., that it has now become 11:30 and it should start walking. <ref type="bibr">4</ref> Clearly it is essential that the robot be able to update its belief as to what time it is. At time 11:31 it has just inferred Do(walk). <ref type="bibr">5</ref> Notice that beliefs of the form Now(t) come and go, whereas the "plan" to walk starting at 11:30 continues to be inherited. <ref type="bibr">6</ref> A "clock" inference rule (along with Modus Ponens in the last two steps) can achieve this: from Now(t) infer Now(t+1):</p><p>t: Now(t) ------------------t+1: Now(t+1) <ref type="bibr">3</ref> In concert with Nilsson's notion of an agent with a lifetime of its own (Nilsson 1983). <ref type="bibr">4</ref> If lunch for a robot sounds silly, the reader is invited to imagine that the task instead is to approach and disarm a bomb at noon (when local civilians will have been safely moved away). <ref type="bibr">5</ref> If one wants to be picky, perhaps this should have been inferred a little earlier, say at 11:29, so that the walking can actually start by time 11:30.</p><p>Here we are ignoring such details, and also the granularity of time steps. <ref type="bibr">6</ref> After 11:30, there is no need to continue inheriting the plan; current implementations of active logic do not take advantage of this "garbage collection" but we expect our next version to do so.</p><p>While this may seem simple enough, it radically changes the notion of a logic from an external specification (E-logic) of a system in another world, to an internal mechanism (I-logic) operating within and as part of that world. 
In particular, the example is written in the notation of active logic, the I-logic approach that we have been pursuing.</p><p>We next offer three clarifications to avoid confusion between E- and I-logics.</p><p>This Is Not Your Grandmother's Temporal Logic</p><p>Temporal logics are well known. <ref type="bibr">7</ref> But, in virtually all cases, they are not properly temporal -that is, they do not vary with time. In fact, they are examples of E-logics, taking an external timeless stance even while looking in on a world that may evolve in time. In effect, temporal logics have a frozen permanent now from which they can express facts about what is, will be, or was the case at various specified moments. But inferences made using such logics do not correspond to anything changing within the world being explored.</p><p>Yet a wealth of beneficial connections arise between a properly temporal (I-logic) version of CSR and much of the rest of AI -e.g., NLP, perception, robotics, planning. As noted, this paper attempts to bring together a wide range of such benefits as well as provide motivation for the underlying logical apparatus, especially in the active logic form of I-logic. In effect, time-change is the root out of which all the rest flows. In particular, it dispenses with omniscience quite trivially: an agent believes only what it has had time to come to believe so far; anything else it may come to believe only later on (as further inferences are drawn). Such an agent certainly does not believe (contain in its KB) all wffs that are entailed by its current beliefs. Indeed, current beliefs may well be inconsistent -more on that below.</p></div>
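The clock rule and the lunch example can be sketched in a few lines of code. This is a toy illustration of our own, not the authors' implementation; the minute-based time encoding and the Plan/Do predicate spellings are assumptions made for the sketch.

```python
def step(kb):
    """One active-logic time-step: advance the clock, inherit other
    beliefs, and fire the walk plan once its time has arrived."""
    nxt = set()
    for belief in kb:
        if belief.startswith("Now("):
            t = int(belief[4:-1])
            nxt.add(f"Now({t + 1})")   # clock rule: from Now(t), infer Now(t+1)
        else:
            nxt.add(belief)            # everything else is inherited by default
    # modus-ponens-like trigger: Plan(walk,690) together with Now(690)
    # yields Do(walk) at the next step (690 minutes = 11:30)
    if "Plan(walk,690)" in kb and "Now(690)" in kb:
        nxt.add("Do(walk)")
    return nxt

kb = {"Now(660)", "Plan(walk,690)"}    # 660 minutes = 11:00
for _ in range(31):                     # let the agent reason until 11:31
    kb = step(kb)

assert "Do(walk)" in kb                 # the robot decided to start walking
assert "Now(691)" in kb                 # and it knows the time has moved on
```

Note that no belief of the form Now(t) survives a step unchanged, whereas the plan is inherited until it fires, mirroring the inheritance behavior described above.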
<div xmlns="http://www.tei-c.org/ns/1.0"><head>This Is Not Your Grandfather's Belief Revision</head><p>Belief revision 8 provides a possible way to view the above clock rule: insert Now(11:30) as an update, which triggers relaxation of the KB -removal of Now(11:00) among other changes. Yet that last phrase ("among other changes") is where E-logic reveals one of its main hurdles: standard notions of belief revision -being based on a notion of closure under consequence -cannot serve as a mechanism for a robot to use, simply because such closure in general is very expensive (in most cases non-terminating or even undecidable). This is the omniscience problem, and is universally recognized as unrealistic: producing consequences is time-consuming.<ref type="foot" target="#foot_2">9</ref> </p><p>Traditional (E-logic) belief revision also suffers from "recency prejudice" (Perlis 1997, 2000), in which newly acquired information is taken to have a firm validity that preexisting beliefs must yield to. Yet it is hard to think of a case in which a new item P should take precedence over one's entire KB. The reasons for preferring P would surely in large measure be deeply embedded in that very KB as part of one's understanding of many relevant aspects of the world. Thus P and the KB (including information as to where this new P came from) would need to "fight it out" as to whether to accept P or not; and any conclusion could vary over time as the agent devotes more thought to the matter (and/or may decide to seek more information).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Goodbye to Axioms</head><p>Very little in CSR can reasonably be taken as firmly given over an agent's lifetime. Perhaps some mathematical concepts, perhaps some definitions. But more commonly, we hold beliefs for a while and then relax them if sufficient counterevidence arises. Or, in many cases, we already have that evidence, in the form of other beliefs to the effect that something is in flux (the time, an airplane's location, and so on); sometimes change is the rule. It is hard then to find much to take as axiomatic. Here are two more examples.</p><p>(1) Your eleven-year-old son tells you that Barack Obama is 6'8" tall. You do not take this as a fact; on the contrary -although you may not have any specific height in mind for Obama -you do believe 6'8" is sufficiently unusual (and presidents are sufficiently in the news) that it would have been remarked on a lot and you would have heard it before. So you discount the information from your son. But if your son then tells you that Obama has been slouching so as to disguise his height ever since his twenties, and that he is in fact 6'8", would you still be so sure he is wrong?</p><p>(2) You hear the TV meteorologist say that the temperature dropped to 1 degree below zero last night; and you accept this. But you would not be especially startled to learn later that the meteorologist has misread her notes and that the low was 1 degree above zero; or that the thermometer had given a false reading. In each case, many background assumptions are in effect. At this point one might be tempted to opt for probabilities. But while the latter clearly have an important role to play in AI, they need not come in quite here. Instead, we often simply reserve judgment, or suspend a previous judgment.</p><p>And again, I-logics are vehicles for this real-time ongoing sort of reasoning. 
Indeed, an agent can only reason with what it has at hand.<ref type="foot" target="#foot_3">10</ref> I-logic (at least in its active-logic form) not only brings many benefits but (perhaps surprisingly) is not particularly mired in the weeds of implementational details. This is not to say that all such issues are now fully resolved -this is a long work very much still in progress. But looking under the robotic hood, so to speak, is essential if we are to come to grips with how CSR can actually take place in robotic creations coming in the (seemingly quite near) future.</p><p>Thus instead of axioms, at any moment, our artificial agent has a specific collection of beliefs (stored in memory) and this collection changes as inferences are drawn, perceptions made, and so on. Among these changes -and central to most of the distinct features of active logic -is the updating of the present time as in the clock rule. There is no notion of inferential closure; the current beliefs are simply whatever has been inferred/perceived and kept so far (i.e., inherited to the present time).</p><p>A belief can fail to inherit for a variety of reasons. No belief of the form Now(t) is inherited -it is replaced by Now(t+1). Other failures of inheritance are illustrated in various cases below. But more importantly, we now turn to the power of introspective reasoning that becomes possible in I-logics endowed with a notion of evolving time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introspection Is a Many-Splendored Thing</head><p>Introspection is one of the most valuable tools that come almost for free in an I-logic. 11 It in turn facilitates powerful methods for detecting and defusing contradictions, managing nonmonotonic inference, reasoning about and adjusting semantics, tracking actions, and much more. In this and several sections that follow, we explain and illustrate a number of these ideas. Given a belief P at time t, an agent ought to be able to note later on (say at time t+1) that it had that belief earlier. This can be achieved in active logic by means of a rule such as the following (positive introspection), where the KB-predicate symbol refers to the agent's own knowledge base: t: P ---------------t+1: KB(P,t) Similarly, another rule (negative introspection) can provide the result that one did not just previously have a given belief:<ref type="foot" target="#foot_5">12</ref> t: … -----------------t+1: ~KB(P,t) [if P is not present at the previous step] These two rules are trivial to implement and cheap to run, involving no more than a linear-time lookup at time t+1 to see what wffs are or are not among the t-beliefs. <ref type="bibr">13</ref> Yet a surprising number of capabilities flow from this, as expanded upon in the next several subsections.</p></div>
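The two introspection rules above amount to cheap lookups over the previous step's (finite) belief set. The following sketch is our own toy encoding, with beliefs as wff strings; note that we restrict negative introspection to wffs the agent actually asks about, since "everything not believed" is an infinite set.

```python
def introspect(kb_t, t, queries):
    """Introspective beliefs about step t, computed by direct lookup
    over the step-t KB (a finite set of wff strings)."""
    beliefs = set()
    for p in kb_t:                       # positive introspection:
        beliefs.add(f"KB({p},{t})")      #   t: P  |-  t+1: KB(P,t)
    for q in queries:                    # negative introspection, only for
        if q not in kb_t:                # wffs the agent is actually asking
            beliefs.add(f"~KB({q},{t})") # about:  t+1: ~KB(q,t)
    return beliefs

prev = {"Bird(tweety)"}
out = introspect(prev, 3, {"~Flies(tweety)"})
assert "KB(Bird(tweety),3)" in out
assert "~KB(~Flies(tweety),3)" in out
```

Both rules cost no more than a scan of the step-t beliefs, matching the linear-time claim in the text.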
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Non-monotonicity</head><p>At this point we can already carry out some simple cases of nonmonotonic reasoning. For instance, the default that B's are typically F's (birds typically fly) can be captured like this: if one doesn't already (as in a moment ago) know that a given bird doesn't fly, then assume it does. In active-logic notation this can be written as follows:</p><formula xml:id="formula_0">∀x [ (∀t) {Bird(x) &amp; ~KB(~Flies(x),t-1)} → Flies(x) ]</formula><p>Then given Bird(tweety), all it takes to infer that Tweety can fly is ~KB(~Flies(tweety),t-1), which comes instantly from negative introspection -unless one does already know Tweety cannot fly. No fuss, no muss -no need for complex consistency checks or internal model-building; conclusions are held as long as they are held, and can be surrendered when evidence so suggests. <ref type="bibr">14</ref> Thus, one might later on come to believe Tweety is a penguin -whether by observation or simply additional inference. This will then appear as a (direct) contradiction in the KB: two beliefs of the form P and ~P will both be present at the same time-step. Which brings us to the next subsection.</p></div>
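The birds-fly default can be sketched directly: conclude Flies(x) whenever Bird(x) held at the previous step and ~Flies(x) was absent from it (i.e., negative introspection succeeds). This is our own toy encoding with wffs as strings, not the paper's implementation.

```python
def apply_default(kb_t):
    """Default rule: Bird(x) & ~KB(~Flies(x),t) yields Flies(x)."""
    new = set(kb_t)                       # beliefs inherited unchanged
    for b in kb_t:
        if b.startswith("Bird("):
            x = b[5:-1]                   # extract the individual's name
            if f"~Flies({x})" not in kb_t:  # negative introspection: a
                new.add(f"Flies({x})")      # simple absence check
    return new

kb = apply_default({"Bird(tweety)"})
assert "Flies(tweety)" in kb              # default fires

kb2 = apply_default({"Bird(opus)", "~Flies(opus)"})
assert "Flies(opus)" not in kb2           # blocked by prior knowledge
```

No consistency check over the whole KB is needed: the default consults only what was (or wasn't) believed a moment ago.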
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Contradictions</head><p>Contradictions are virtually inevitable in commonsense reasoning (Perlis 1997). While this is generally considered a major nuisance for CSR, it can actually be a boon. Here is how an I-logic can benefit (in the specific form of active logic): If the wffs P and ~P both appear as t-beliefs, then neither is inherited as a (t+1)-belief and instead Contra(t, P, ~P) is inferred as a (t+1)-belief. Thus the agent retains in the evolving present the fact that there had been an earlier contradiction, but is no longer directly subject to it, and ex contradictione quodlibet (from a contradiction, anything follows) is thereby disarmed. <ref type="bibr">15</ref> Thus instead of being a logician's anathema, contradictions can be a robot's best friend, helping it adjust its KB to come more into line with reality. Contradictions simply remain undiscovered in the KB until they are discovered (in the P, ~P form) over time -and then defused. This is a very different approach from more customary paraconsistent logics, most of which skirt around the edges of a contradiction -rather than acknowledge it and use it to make changes to the KB -or in effect assume they can all be hunted out in advance. <ref type="bibr">16</ref> In the case of Tweety above, new information that she is a penguin and does not fly will provide (say at time-step t) a direct contradiction between Flies(tweety) and ~Flies(tweety), which then at time t+1 will result in the KB having neither of these inherited from step t, but instead will have an assertion that such a contradiction did arise at time t. If the agent has further information -such as that penguins are a subclass of birds, and that subclass properties are more trustworthy<ref type="foot" target="#foot_10">17</ref> -then ~Flies(tweety) can be reinstated. 
If not, then the agent remains in doubt.</p><p>It is our contention that this sort of fluctuating conflict-resolution over time is the only option for an actual agent engaged in reasoning as the world evolves.</p></div>
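The contradiction-defusing inheritance step can be sketched as follows (our own toy version, with negation written as a leading "~" on a wff string): when P and ~P are both t-beliefs, neither is inherited, and a Contra record appears instead.

```python
def inherit(kb_t, t):
    """Inheritance to step t+1: a direct contradiction P, ~P blocks
    both conjuncts and leaves a Contra(t,P,~P) record in their place."""
    blocked, contras = set(), set()
    for p in kb_t:
        neg = p[1:] if p.startswith("~") else "~" + p
        if neg in kb_t:                    # direct contradiction detected
            pos = p[1:] if p.startswith("~") else p
            blocked |= {p, neg}
            contras.add(f"Contra({t},{pos},~{pos})")
    return (kb_t - blocked) | contras

kb = inherit({"Flies(tweety)", "~Flies(tweety)", "Bird(tweety)"}, 7)
assert "Flies(tweety)" not in kb and "~Flies(tweety)" not in kb
assert "Contra(7,Flies(tweety),~Flies(tweety))" in kb
assert "Bird(tweety)" in kb                # unrelated beliefs still inherit
```

Ex contradictione quodlibet never gets a foothold, because the contradictory pair exists together for only a single step; the Contra record then remains available for later repair (e.g., reinstating ~Flies(tweety) once the penguin subclass is trusted).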
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Semantics and Pragmatics</head><p>In an I-logic, semantics can take on an entirely new aspect, where the agent can exert control and both determine and reason about what its expressions do or don't stand for. <ref type="bibr">18</ref> This is one of the most powerful aspects of introspection that we have noted so far. In effect, one can reason about one's own expressions -simply by means of introspectively examining past beliefs and subexpressions thereof. One can even assign new expressions, if for instance a new entity is observed, or if one infers that two entities were being conflated as one (as in the cases of ambiguity or of misidentification).</p><p>In fact, AI systems are generally notorious for altogether ignoring the expression/meaning distinction, as in: Joe is a person and also we just now used "Joe" to refer to him. People can and do (and must) note and make use of the difference between language and what language refers to. Our artificial agents need to be able to do the same; otherwise they can hardly be said to know anything (Perlis 2016), let alone reason about errors. With all the recent successes in NLP (mostly coming from deep learning), still there is almost no language-like introspection, no meanings associated with words in a way that allows reasoning, let alone adjusting meanings.</p><p>On the other hand, introspection allows representation of beliefs (at least at previous steps) as objects that can be reasoned about. This has numerous ramifications, which for lack of space we can only briefly allude to in the rest of this section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Ambiguity and Misidentification</head><p>A potentially ambiguous expression (say, "Jean's car") can be recognized as such (e.g., by noticing a direct contradiction -"this is Jean's car, and the key to Jean's car isn't the key to this car"). This in turn triggers an effort to resolve the contradiction. Maybe Jean has two cars (ambiguity); or maybe this is the wrong key or that is not her car at all (misidentification).</p><p>The latter case is especially interesting, for it requires some expression to represent an object (the wrong key or wrong car), but not the expression that had been used a moment ago. Miller and Perlis (Miller&amp;Perlis 1996) propose a special active-logic function-symbol tfitb to produce a new name on demand, for the "thing formerly interpreted to be" something else.</p></div>
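A minimal sketch of a tfitb-style repair, inspired by the Miller and Perlis proposal above (the naming scheme and belief encoding are our own assumptions): on detecting a misidentification, mint a fresh term for the "thing formerly interpreted to be" the old referent and rewrite the offending beliefs to use it.

```python
import itertools

_fresh = itertools.count(1)

def tfitb(old_name):
    """Mint a new term for the 'thing formerly interpreted to be' old_name."""
    return f"tfitb{next(_fresh)}({old_name})"

# Belief state: we took this car to be Jean's, but the key opens some
# other object entirely -- a misidentification, not an ambiguity.
kb = {"OwnedBy(car1,jean)", "Opens(key1,car1)"}

# Repair: the Opens-belief was about the wrong object, so re-point it
# at a freshly named entity while leaving the ownership belief intact.
other = tfitb("car1")
kb = {b.replace("car1", other) if b.startswith("Opens") else b for b in kb}

assert other == "tfitb1(car1)"
assert f"Opens(key1,{other})" in kb
assert "OwnedBy(car1,jean)" in kb
```

The point is that the agent can now reason separately about Jean's car and the misidentified object, because each has its own expression.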
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Focal points</head><p>A related idea comes up in planning, especially multiagent planning. It may be important to identify an entity that another agent is likely to similarly identify -for instance a good location to meet up or to leave a message, or an "obvious" item to pick out of a long list (e.g., the first, last, or middle one). This in turn may require coming up with a new expression that was not previously in one's ontology. In (Kraus et al 2000) an approach to this is given using active logic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Pragmatics</head><p>In conversation, all sorts of assumptions arise and are confirmed or dispelled, often by means of further conversation. Thus NLP-dialog is a prime example of beliefs coming and going during reasoning. Here is one example dialog, implemented in active logic (Purang et al 1996), in which reasoning involves inferences that evolve over time: (A) Kathy: Are the roses fresh? (B) Bill: They are in the fridge. (C) Bill: But they are not fresh. At some point prior to (C), Bill supposes Kathy will draw from (B) the implicature that the roses are fresh, so in (C) he dispels that inaccuracy. Thus Bill has to reason about the effects of the ongoing conversation and make adjustments to it.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The One Wise Man Problem</head><p>Much has been made of the Three-Wise-Men problem -see (Konolige 1984; Elgot-Drapkin&amp;Perlis 1990). A realistic treatment has to take into account the passage of time as the wise men think; and this can be done in traditional temporal logic, as long as the wise men themselves are not required to use that same logic. But suppose we do want to capture the reasoning of such an agent; for instance -to make the problem especially simple -the King who wants to assure himself that his one wise man is not an idiot. So the King proposes this problem to his wise man: "Is 15 a prime number?" Being no genius himself, the King has to think for a while before deciding the answer is "no" -and if by then the wise man has not yet answered, the King can start looking for a replacement. But to do this reasoning (which involves introspection), the King will need I-logic, and in particular an I-logic that closely tracks time.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>What Am I Doing?</head><p>It is important that an agent not only plan and take actions, but that it also know when it is in fact doing so. Otherwise strange behaviors can result. In one of our recent robotic studies, robot Alice was programmed to point and say "I see Julia" whenever it heard an utterance containing the word "Julia" (actually, it was doing no actual word-processing at the time, but simply matching the input sound-stream to a stored one). So it got itself into a loop, hearing "Julia" from its own loudspeaker and then pointing and repeating the same phrase over and over.</p><p>But taking a cue from neuropsychology,<ref type="foot" target="#foot_12">19</ref> we were able to encode a rule for noting one's own activity: whenever an action is undertaken, Do(x) is inferred (recall the Lunch example), and at the next step Doing(x) can be inferred, and inherited as long as the activity is still underway. <ref type="bibr">20</ref> We have implemented this in a grounded way, so that when Alice undertakes to speak she infers that she is engaged in a speaking action (but also checks what she hears to make sure it matches her expected speech).</p></div>
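The Do/Doing self-monitoring rule just described can be sketched as one more step function (our toy version; the still_underway check stands in for whatever grounded sensing, such as Alice's speech-matching, reports that the action continues).

```python
def step(kb, still_underway):
    """One step of the self-monitoring rule: Do(x) at t yields Doing(x)
    at t+1, and Doing(x) is inherited only while the action continues."""
    nxt = set()
    for b in kb:
        if b.startswith("Do("):               # Do(x) at t ...
            nxt.add("Doing(" + b[3:])         # ... becomes Doing(x) at t+1
        elif b.startswith("Doing("):
            if still_underway(b[6:-1]):       # inherit while still active
                nxt.add(b)
        else:
            nxt.add(b)                        # other beliefs inherit as usual
    return nxt

kb = step({"Do(speak)"}, lambda x: True)
assert kb == {"Doing(speak)"}                 # agent knows it is speaking

kb = step(kb, lambda x: False)                # speech has finished
assert "Doing(speak)" not in kb
```

An agent with this belief could break Alice's loop: hearing "Julia" while Doing(speak) holds is expected self-produced sound, not a fresh trigger.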
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Reasoned Learning</head><p>Machine learning (ML) has taken center stage in recent years, and for good reason: it has made justly fabled strides, and surely will be a major part of any future general-purpose AI. But alone it is insufficient. The practices usually referred to as ML are ones of habituation or training. A human turns a trainable system on, allows it to train, perhaps applies it, and later turns it off; in itself, traditional ML has little if any autonomy.</p><p>But a general-purpose AI (robotic or otherwise) will need to decide what to learn, and when and how, and whether learning is working and/or should stop. Moreover, as noted in the Introduction, cultural (symbolic) transmission is also a major source of learning. <ref type="bibr">21</ref> And finally, a system will need to know what it has or hasn't already learned. <ref type="bibr">22</ref> An I-logic (particularly, active logic) -in keeping a history of its own KB over time -can potentially examine that history, infer that it has (or lacks) certain capabilities, and then decide whether to activate an appropriate ML process; see (Elgot-Drapkin, et al 1991) for a brief introduction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>Ray Reiter (Reiter 2001) considers numerous issues that arise in commonsense reasoning (CSR) when an agent's deliberations occur within a dynamic setting, and in particular, how a formal logic might be used by an agent to do its own reasoning, and have that reasoning keep up with changing events (pp. 163-164). Reiter succeeds in isolating various themes surrounding this: omniscience, internal contradictions, and so on. But in the end he advocates instead the "external design stance." Action languages (Gelfond&amp;Lifschitz 1998) are another firmly E-logical approach that is thus again suitable for external analysis of an agent but not for real-time use by an agent, let alone by one with a potentially inconsistent KB; the same holds for temporal action logics (TAL; see Doherty 1998) and the temporal logic of actions (TLA; see Lamport 1994).</p><p>In a survey of commonsense reasoning (Davis 2017), the E- and I-distinction is also raised (under different terminology); but, like Reiter, Davis focuses primarily on the external stance. A survey on robot deliberation (Ingrand&amp;Ghallab 2017) does not address this distinction. <ref type="bibr">21</ref> See also (Levesque 2017). <ref type="bibr">22</ref> But again see (Getoor&amp;Taskar 2007) for another approach. Levesque and Lakemeyer (Levesque&amp;Lakemeyer 2000, pp. 195-196) argue that attending to internal inference mechanisms to avoid omniscience makes behavioral predictions impossible. They deal with omniscience instead by enlargements of the semantics to allow "non-standard world states" that keep out undesired agent-beliefs. But it is unclear what predictions one could hope to make, given an agent with thousands of explicit beliefs, other than ones of such generality as to be virtually useless about that particular agent's behavior. Will it complete a given task (even a purely inferential one) within ten days? 
One surely cannot expect anything other than a careful examination of the robot's actual processing to reveal such results.</p><p>On the other hand, Richard Weyhrauch and Carolyn Talcott (Weyhrauch 1980; Weyhrauch&amp;Talcott 1990, 1994; Talcott 2003) initiated the FOL approach (one instance of an I-logic) which aimed at providing reasoning mechanisms for actual use by an agent; however, this effort has remained in a fragmentary state. An interesting addendum to FOL is WristWatch (Weyhrauch&amp;Talcott 1997) -a dynamic context from which to answer questions about time, specifically about the ever-changing meanings of the constants now and then as updated by their "tick" inference rule. Weyhrauch and Talcott speculate about supplying a robot with WristWatch embedded into FOL as its mechanism to reason about time. Pei Wang's Non-Axiomatic Logic (aka NARS) provides a (term-logic based) reasoning system which aims to be finite, real-time, and open (Wang 2013). It shares some features with active logic, in that it is non-monotonic, allows for self-reference, and is intended to be situated (in that knowledge is not disembodied but should be based on the agent's experience). While Chapter 9 of (Wang 2013) addresses potential meta-cognition in his system, no particular mechanisms for monitoring an ongoing reasoning process seem to be specified. Gestures toward such mechanisms are made (by, e.g., referencing "doubt" and "wait" operations), but we are not aware of any attempt to operationalize these. Later iterations of NARS (Wang&amp;Hammer 2015; Hammer et al 2016) address temporality and recognize the problem of assuming that "the reasoning system itself is outside the flow of time" (Wang&amp;Hammer 2015). The temporality in this system differs from active logic, however, in that the flow of time is not itself seen as an object of reasoning. 
Jacek Malec and his group (Asker&amp;Malec 2005) extended active logic and proposed a labeled deductive system (LDS) that attaches a label to every well-formed formula. LDS allows the inference rules to analyze and modify labels, or even to trigger on specific conditions defined on the labels. They demonstrated the use of LDS by formalizing models of short-term memory, and followed this up by studying several scenarios (Heins 2009). In related work, (Nowaczyk 2006) extends active logic to partial-planning situations.</p><p>An interesting middle ground is taken in TRL (timed reasoning logic); see (Alechina et al 2004a,b; Agnotes&amp;Alechina 2007). While TRL remains at the E-logic level, it can express fairly detailed aspects of internal processing. In that respect it is similar to the meta-level step-logics in (Elgot-Drapkin&amp;Perlis 1990). Because of its more limited expressive power, TRL tends to be decidable. On the other hand, the semantics given in (Anderson et al 2008) appears to offer a compelling, psychologically plausible alternative. But it is noteworthy that none of these address the agent-controlled-semantics issue above.</p><p>The planning community is beginning to acknowledge the importance of taking planning-time into account as part of the planning process; see for instance (Ghallab et al 2016; Lin et al 2015). The earliest published work we are aware of on this is (Nirkhe et al 1991).</p><p>A recent article (Tenorth&amp;Beetz 2017) discusses complex interactions between robotic control, knowledge representations at various levels, and reasoning over those representations, including temporal reasoning. While the intention is to provide robots with inferential abilities, the approach appears to remain within the E-logic framework.</p></div>
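Several of the mechanisms surveyed above (time-stamped beliefs, a WristWatch-style "tick" rule that advances now at each inference step, and distrust of directly contradictory pairs) can be illustrated compactly. The following toy Python sketch is our own illustrative reconstruction under stated assumptions, not code from active logic, FOL/WristWatch, NARS, or any other cited system; the class and rule names are hypothetical.

```python
# Illustrative sketch (not from any cited system): a minimal active-logic-style
# reasoner in which inference itself consumes time. `tick` and the Now(t)
# belief follow the WristWatch idea discussed above; the rest is our own
# simplification.

class ActiveLogicAgent:
    def __init__(self, beliefs):
        self.t = 0
        self.kb = set(beliefs)          # the t-beliefs currently held
        self.kb.add(("Now", 0))

    def tick(self):
        """WristWatch-style rule: Now(t) is replaced by Now(t+1) each step."""
        self.kb.discard(("Now", self.t))
        self.t += 1
        self.kb.add(("Now", self.t))

    def step(self):
        """One inference step: apply modus ponens once, handle direct
        contradictions by distrusting both sides, then advance time."""
        derived = set()
        for f in self.kb:
            if isinstance(f, tuple) and f[0] == "implies" and f[1] in self.kb:
                derived.add(f[2])
        distrusted = {f for f in self.kb if ("not", f) in self.kb}
        self.kb -= distrusted | {("not", f) for f in distrusted}
        self.kb |= derived
        self.tick()

agent = ActiveLogicAgent({"p", ("implies", "p", "q"), ("implies", "q", "r")})
agent.step()   # derives q at t=1
agent.step()   # derives r at t=2: the two-step chain takes two time steps
assert ("Now", 2) in agent.kb and "r" in agent.kb
```

The point of the sketch is only the departure named in the Introduction: deriving r from p requires two applications of modus ponens, and the agent's own Now-belief has moved forward by two ticks when r arrives.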
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion: Reasoning is a Process</head><p>A reasoner is engaged in reasoning, and makes decisions during (and as part of) that reasoning, such as whether to continue along present lines, try a new tack, give up, or seek assistance. That is, a reasoning agent is itself engaged in some version of what we have called I-logic. On the other hand, the study of reasoning can of course proceed at many levels and in many forms.</p><p>It may be premature - despite many decades of work (including some by ourselves) - to try to pin down precise specifications (i.e., in an E-logic) of broad CSR behaviors. We know so little about the notion of intelligence at this point that it may be more useful to gain much more experience with reasoning behavior itself (that is, via I-logics that can actually be used by automated agents/robots). At least, this is the perspective we are exploring here.</p><p>An analogy with (Polya 1945) is tempting. While mathematical logic is the very epitome of E-logic (fully focused on entailment/consequence), it largely ignores the situation of actual mathematician-reasoners, who question axioms, decide to change problems, and are keenly aware of (and make use of) their progress or lack of it over time (Perlis 2016). Polya's advice is aimed at the latter, with practical in-the-moment strategies to attend to. And while mathematical logic has been extraordinarily successful in its own right, it has had relatively mild impact on, and offered limited insight into, mathematical practice overall.</p><p>We repeat from our Introduction: the single most salient departure that I-logics make from E-logics is that of taking into account the actual process of inferring as something that itself takes time. 
This departure provides a very rich set of tools that we hope to have illustrated here.</p></div>			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_0">For standard approaches, see <ref type="bibr" target="#b36">(Pnueli 1977;</ref> Baral&amp;Zhao 2008; <ref type="bibr" target="#b22">Gonzalez et al 2002;</ref> <ref type="bibr" target="#b8">Barringer et al 2013;</ref> Kraus&amp;Lehmann 1986).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_1"> See, e.g., (Gardenfors 2003; Sloan&amp;Turan 1999;<ref type="bibr" target="#b21">Goldsmith et al 2004;</ref><ref type="bibr" target="#b13">Delgrande et al 2013;</ref><ref type="bibr" target="#b14">Diller et al 2015)</ref> for traditional E-logic approaches.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_2">This is sometimes embraced as a necessary evil <ref type="bibr" target="#b37">(Reiter 2001</ref>); or dealt with via specialized semantics (Levesque&amp;Lakemeyer 2000), which, however, does not adequately address or ameliorate the time-consumption aspect.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_3">See for instance the Oxford Reference on Neurath's boat: "The powerful image conjured up by Neurath, in his Anti-Spengler (1921), whereby the body of knowledge is compared to a boat that must be repaired at sea: 'we are like sailors who on the open sea must reconstruct their ship but are never able to start afresh from the bottom…'. Any part can be replaced, provided there is enough of the rest on which to stand. The image opposes that according to which knowledge must rest upon foundations, thought of as themselves immune from criticism, and transmitting their immunity to other propositions by a kind of laying-on of hands."</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_4">And so perhaps "introspective logic" would be a more apt name than internal logic.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="12" xml:id="foot_5">Many issues arise here that we do not have space to address, such as: to which wffs P should the introspection rules be applied? (If care is not taken, the KB will quickly become swamped.) A much longer paper in preparation will deal with this.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="13" xml:id="foot_6">A t-belief is simply any belief in the KB at time t.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="14" xml:id="foot_7">Of course, an agent can also remain in doubt, or even be deliberately tentative (such as with probabilities and during learning; see (Getoor&amp;Taskar 2007)).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="15" xml:id="foot_8">To be sure, whatever circumstances produced P and ~P may do so again, so this is not a panacea. But it can be shown <ref type="bibr" target="#b31">(Miller 1993</ref>) that under reasonably broad conditions this too will resolve into a stable state.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="16" xml:id="foot_9">E.g., see<ref type="bibr" target="#b37">(Roos 1992)</ref> for a more traditional E-logic treatment; and<ref type="bibr" target="#b3">(Anderson et al 2013)</ref> for more on an active logic approach.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="17" xml:id="foot_10">Such a rule has been implemented in one of our active logic programs.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="18" xml:id="foot_11">That is, this refers to meanings the agent assigns to its expressions, quite apart from what a logic-designer may have in mind. Note that the recent Facebook robot incident of "inventing a new language" is not of this sort at all: those robots did not assign meanings to anything, either in the original English or in their later made-up phrases. See (http://www.newsweek.com/2017/08/18/ai-facebook-artificialintelligence-machine-learning-robots-robotics-646944.html)</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="19" xml:id="foot_12">The so-called efference copy; see (Brody et al 2015).</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="20" xml:id="foot_13">This is a different method from that used in <ref type="bibr" target="#b9">(Bringsjord et al 2015)</ref>, where voice recognition appears to take precedence over recall of one's own actions.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The dynamics of syntactic knowledge</title>
		<author>
			<persName><forename type="first">T</forename><surname>Agnotes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Alechina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<date type="published" when="2007-02">February 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A complete and decidable logic for resource-bounded agents</title>
		<author>
			<persName><forename type="first">N</forename><surname>Alechina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Logan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Whitsey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS</title>
				<meeting>the Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2004">2004a</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Modelling communicating agents in timed reasoning logics</title>
		<author>
			<persName><forename type="first">N</forename><surname>Alechina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Logan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Whitsey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings, 9 th European Conference (JELIA) -Lecture Notes in AI 3229</title>
				<meeting>9 th European Conference (JELIA) -Lecture Notes in AI 3229</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004b</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An approach to human-level commonsense reasoning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Gomaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Paraconsistency: Logic and Applications</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Tanaka</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Berto</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Mares</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Paoli</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Active logic semantics for a single agent in a static world</title>
		<author>
			<persName><forename type="first">M</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Gomaa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Grant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">172</biblScope>
			<biblScope unit="page" from="1045" to="1063" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Logic, self-awareness and selfimprovement: The metacognitive loop and the problem of brittleness</title>
		<author>
			<persName><forename type="first">M</forename><surname>Anderson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Logic and Computation</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Reasoning with limited resources: Active logics expressed as labeled deductive systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Asker</surname></persName>
		</author>
		<author>
			<persName><forename type="middle">J</forename><surname>Malec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the Polish Academy of Sciences, Technical Sciences</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="69" to="78" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Non-monotonic temporal logics that facilitate elaboration tolerant revision of goals</title>
		<author>
			<persName><forename type="first">C</forename><surname>Baral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AAAI</title>
		<imprint>
			<biblScope unit="volume">2008</biblScope>
			<biblScope unit="page" from="406" to="410" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Advances in temporal logic</title>
		<author>
			<persName><forename type="first">H</forename><surname>Barringer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fisher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Gabbay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gough</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013-11-11">2013</date>
			<publisher>Springer Science &amp; Business Media</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Real robots that pass human tests of self-consciousness</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bringsjord</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Licato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Govindarajulu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE 24th International Symposium on Robot and Human Interactive Communication</title>
				<meeting>the IEEE 24th International Symposium on Robot and Human Interactive Communication</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Who&apos;s talking? Efference copy and a robot&apos;s sense of agency</title>
		<author>
			<persName><forename type="first">J</forename><surname>Brody</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Fall Symposium 2015</title>
				<meeting><address><addrLine>Arlington VA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Logical formalizations of commonsense reasoning: a survey</title>
		<author>
			<persName><forename type="first">E</forename><surname>Davis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Artificial Intelligence Research</title>
		<imprint>
			<biblScope unit="volume">59</biblScope>
			<biblScope unit="page" from="651" to="723" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Stream reasoning: A survey and outlook</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dell'aglio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Della Valle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Van Harmelen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bernstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Science</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note>in press</note>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">AGM-Style Belief Revision of Logic Programs under Answer Set Semantics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Delgrande</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Peppas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Woltran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">LPNMR</title>
		<imprint>
			<biblScope unit="volume">2013</biblScope>
			<biblScope unit="page" from="264" to="276" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">An Extension-Based Approach to Belief Revision in Abstract Argumentation</title>
		<author>
			<persName><forename type="first">H</forename><surname>Diller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adrian</forename><surname>Haret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Linsbichler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Rümmele</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Woltran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IJCAI</title>
		<imprint>
			<biblScope unit="volume">2015</biblScope>
			<biblScope unit="page" from="2926" to="2932" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">TAL: Temporal Action Logics Language Specification and Tutorial</title>
		<author>
			<persName><forename type="first">P</forename><surname>Doherty</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronic Transactions on Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">3-4</biblScope>
			<biblScope unit="page" from="273" to="306" />
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Memory, reason and time: the step-logic approach</title>
		<author>
			<persName><forename type="first">J</forename><surname>Elgot-Drapkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Philosophy and AI: Essays at the Interface</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Cummins</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Pollock</surname></persName>
		</editor>
		<imprint>
			<publisher>MIT Press</publisher>
			<date type="published" when="1990">1990. 1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Gärdenfors</surname></persName>
		</author>
		<title level="m">Belief revision</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Action languages</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gelfond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lifschitz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Electronic transactions on AI</title>
				<imprint>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">3</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Introduction to Statistical Relational Learning</title>
		<editor>
			<persName><forename type="first">L</forename><surname>Getoor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Taskar</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Automated Planning and Acting</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ghallab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Nau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Traverso</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>Cambridge Univ. Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Theory revision with queries: Horn, read-once, and parity formulas</title>
		<author>
			<persName><forename type="first">J</forename><surname>Goldsmith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sloan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Szorenyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Turán</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AIJ</title>
		<imprint>
			<biblScope unit="volume">156</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="139" to="176" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Modeling multimedia displays using action based temporal logic</title>
		<author>
			<persName><forename type="first">G</forename><surname>Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Baral</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cooper</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">VDB</title>
		<imprint>
			<biblScope unit="page" from="141" to="155" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The OpenNARS implementation of the non-axiomatic reasoning system</title>
		<author>
			<persName><forename type="first">P</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lofthouse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial General Intelligence</title>
				<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23a">
	<monogr>
		<title level="m" type="main">A case study of active logic</title>
		<author>
			<persName><surname>Heins</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
		<respStmt>
			<orgName>Department of Computer Science, Lund University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Master&apos;s thesis</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Deliberation for autonomous robots: A survey</title>
		<author>
			<persName><forename type="first">F</forename><surname>Ingrand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghallab</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">247</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">A deduction model of belief and its logics</title>
		<author>
			<persName><forename type="first">K</forename><surname>Konolige</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1984">1984</date>
		</imprint>
		<respStmt>
			<orgName>Stanford University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD dissertation</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Knowledge, Belief and Time</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Lehmann</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICALP</title>
		<imprint>
			<biblScope unit="volume">1986</biblScope>
			<biblScope unit="page" from="186" to="195" />
			<date type="published" when="1986">1986</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Exploiting focal points among alternative solutions: two approaches</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Rosenschein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Fenster</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Annals of Mathematics and Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="issue">1-4</biblScope>
			<biblScope unit="page" from="187" to="258" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">The temporal logic of actions</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lamport</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Programming Languages and Systems (TOPLAS)</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="872" to="923" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">Common Sense, the Turing Test, and the Quest for Real AI</title>
		<author>
			<persName><forename type="first">H</forename><surname>Levesque</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">The Logic of Knowledge Bases</title>
		<author>
			<persName><forename type="first">H</forename><surname>Levesque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lakemeyer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2000">2000</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">A view of one&apos;s past, and other aspects of reasoned change</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miller</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1993">1993</date>
		</imprint>
		<respStmt>
			<orgName>University of Maryland</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD dissertation</note>
</biblStruct>

<biblStruct xml:id="b31a">
	<analytic>
		<title level="a" type="main">Metareasoning for planning under uncertainty</title>
		<author>
			<persName><forename type="first">C</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kolobov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Kamar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Horvitz</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1505.00399v1</idno>
	</analytic>
	<monogr>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31b">
	<analytic>
		<title level="a" type="main">Automated inference in active logics</title>
		<author>
			<persName><forename type="first">M</forename><surname>Miller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">Commonsense Reasoning</title>
		<author>
			<persName><forename type="first">E</forename><surname>Mueller</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
			<publisher>Morgan Kauffman</publisher>
		</imprint>
	</monogr>
	<note>2 nd edition</note>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Artificial intelligence prepares for 2001</title>
		<author>
			<persName><forename type="first">N</forename><surname>Nilsson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI Magazine</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="1983">1983</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Fully deadline-coupled planning: One step at a time</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nirkhe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kraus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Symposium on Methodologies for Intelligent Systems (ISMIS)</title>
		<imprint>
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Partial planning for situated agents based on active logic</title>
		<author>
			<persName><forename type="first">S</forename><surname>Nowaczyk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ESSLLI</title>
		<imprint>
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
	<note>The five dimensions of reasoning in the wild; D. Perlis, Sources of, and exploiting, inconsistency: preliminary report (J. Minker, ed.)</note>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">The temporal logic of programs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Pnueli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Foundations of Computer Science</title>
		<imprint>
			<date type="published" when="1977">1977</date>
			<biblScope unit="page" from="46" to="57" />
		</imprint>
	</monogr>
	<note>AAAI Spring Symposium (K. Purang, J. Gurney, D. Perlis)</note>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">A logic for reasoning with inconsistent knowledge</title>
		<author>
			<persName><forename type="first">N</forename><surname>Roos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="1992">1992</date>
			<biblScope unit="volume">57</biblScope>
			<biblScope unit="page" from="69" to="103" />
		</imprint>
	</monogr>
	<note>R. Reiter, Knowledge in Action, MIT Press, 2001; K. Rajan and A. Saffiotti, eds., special issue on AI and robotics, 2017</note>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">On theory revision with queries</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sloan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Turán</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">COLT</title>
		<imprint>
			<biblScope unit="page" from="41" to="52" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Talcott</surname></persName>
		</author>
		<ptr target="http://www-formal.stanford.edu/FOL/03jan-umd.ppt" />
		<title level="m">FOL: Towards an architecture for building autonomous agents from building blocks of first order logic</title>
		<imprint>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
	<note>Slides from talk at U. Maryland</note>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Representations for robot knowledge in the KnowRob framework</title>
		<author>
			<persName><forename type="first">M</forename><surname>Tenorth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Beetz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">247</biblScope>
		</imprint>
	</monogr>
	<note>P. Wang, Non-axiomatic Logic: A Model of Intelligent Reasoning, World Scientific, 2013</note>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Prolegomena to a theory of mechanized formal reasoning</title>
		<author>
			<persName><forename type="first">R</forename><surname>Weyhrauch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<date type="published" when="1980">1980</date>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="133" to="170" />
		</imprint>
	</monogr>
	<note>P. Wang and P. Hammer, Issues in temporal and causal inference, International Conference on Artificial General Intelligence, Springer, 2015</note>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Towards a theory of mechanizable theories: I. FOL contexts - the extensional view</title>
		<author>
			<persName><forename type="first">R</forename><surname>Weyhrauch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Talcott</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">European Conference on Artificial Intelligence (ECAI)</title>
				<imprint>
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">The logic of FOL Systems: formulated in set theory</title>
		<author>
			<persName><forename type="first">R</forename><surname>Weyhrauch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Talcott</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Logic, Language, and Computation</title>
				<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="page" from="119" to="132" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Weyhrauch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Talcott</surname></persName>
		</author>
		<ptr target="http://www-formal.stanford.edu/FOL/w.ps" />
		<title level="m">WristWatch - an FOL theory of time</title>
				<imprint>
			<date type="published" when="1997">1997</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
