<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Steps Towards Commonsense-Driven Belief Revision in the Event Calculus</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nikoleta</forename><surname>Tsampanaki</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<address>
									<settlement>FORTH</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giorgos</forename><surname>Flouris</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<address>
									<settlement>FORTH</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Theodore</forename><surname>Patkos</surname></persName>
							<email>patkos@ics.forth.gr</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Computer Science</orgName>
								<address>
									<settlement>FORTH</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Steps Towards Commonsense-Driven Belief Revision in the Event Calculus</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">ED333EE62BA4E7371AECFEC803ADF19C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recent extensions of the Event Calculus resulted in powerful formalisms, able to reason about a multitude of commonsense phenomena in causal domains, involving epistemic notions, functional fluents and probabilistic aspects, among others. Surprisingly, little attention has been paid to the problem of automatically revising (correcting) a Knowledge Base when an observation contradicts predictions regarding the world. Despite mature work on the related belief revision field, adapting such results for the case of action theories is non-trivial. This paper reports on ongoing work for addressing this problem by proposing a generic framework in the context of the Event Calculus, along with ASP encodings of the revision algorithm.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Action languages are well-established logical theories for reasoning about the dynamics of changing worlds, aiming at "formally characterizing the relationship between the knowledge, the perception and the action of autonomous agents" <ref type="bibr" target="#b7">[Van Harmelen et al., 2007]</ref>. One of the most prominent action languages is the Event Calculus <ref type="bibr" target="#b3">[Kowalski and Sergot, 1986;</ref><ref type="bibr" target="#b4">Miller and Shanahan, 2002]</ref>, which incorporates certain useful features for representing causal and narrative information that differentiate it from other similar formalisms. The Event Calculus explicitly represents temporal knowledge, enabling reasoning about the effects of a narrative of events along a time line. It also relies on a non-monotonic treatment of events, in the sense that by default there are no unexpected effects or event occurrences.</p><p>Powerful extensions of the main formalism have been developed to accommodate, for instance, epistemic extensions <ref type="bibr" target="#b4">[Miller et al., 2013;</ref><ref type="bibr" target="#b4">Ma et al., 2013;</ref><ref type="bibr">Patkos and Plexousakis, 2009]</ref>, probabilistic uncertainty <ref type="bibr" target="#b6">[Skarlatidis et al., 2015;</ref><ref type="bibr" target="#b1">D'Asaro et al., 2017]</ref> or knowledge derivations with non-binary-valued fluents <ref type="bibr" target="#b4">[Miller et al., 2013]</ref>. Moreover, progress in generalizing the stable model semantics used in Answer Set Programming (ASP) has opened the way for the reformulation of Event Calculus axiomatizations into logic programs that can be executed with efficient ASP solvers <ref type="bibr" target="#b2">[Ferraris et al., 2011]</ref>. 
This allowed for exploiting state-of-the-art tools that outperform previous SAT- or logic-programming-based solvers in almost all classes of problems related to practical applications <ref type="bibr" target="#b3">[Lee and Palla, 2012]</ref>.</p><p>However, to the best of our knowledge, little work has been done on supporting belief change in the Event Calculus, in cases when the new information contradicts the already inferred knowledge. Specifically, the existing non-epistemic extensions accommodate belief update, which concerns beliefs that change as the result of the realization that the world has changed through some action. The epistemic extensions further focus on modeling the notions of knowledge, thus supporting belief expansion, where newly acquired information can enrich the belief state of agents about aspects that were previously considered unknown. Yet, the ability to accommodate, through proper revisions, sensed information that contradicts existing beliefs is not supported. This problem is more general than belief expansion, or even diagnosis, as it not only intends to identify the reasons that explain the contradictions, but also to suggest proper modifications of the belief state of the agent under certain, potentially domain-dependent, criteria <ref type="bibr" target="#b0">[Alchourron et al., 1985]</ref>.</p><p>In this paper, we present steps towards a formal method for accommodating belief revision on top of Event Calculus axiomatizations. We consider both the epistemic and non-epistemic case, relying on the possible-worlds representation to give formal semantics to an agent's belief state. We formalize notions of commonsense revisions that take into consideration different knowledge states, such as factual (or observed), inferred and unknown beliefs. Finally, we present a methodology and an ASP encoding that can implement the formalism. 
The current framework is based on certain simplifying assumptions, such as deterministic domains and lack of state constraints (state axioms), which limit its breadth. Yet, this work can form the substrate for further extensions concerning a richer set of commonsense features, such as default beliefs, non-determinism, introspective belief changes, non-binary aspects etc., along with formal results showing that it is generic enough to be applied to different Event Calculus dialects.</p><p>Example: Consider the classical Yale Shooting scenario, where a loaded gun is fired against a living, walking turkey. An observer may believe that, after the shot, the turkey is dead. If future observations contradict her beliefs, e.g., by noticing that the turkey is still walking, the observer will need to assess different potential revisions of her belief state: can it be that she was so mistaken and the shooter did not fire the gun in the first place? Or is it just that the initial, default belief about the gun being loaded was not accurate? Moreover, how would the revisions be affected if the initial state of the gun is unknown?</p><p>Although simplistic, this setting of the Yale Shooting scenario can be generalized to account for different levels of commonsense inferences, some of which may be domain-independent, e.g., revising aspects that were initially unknown rather than aspects that have already been observed, while others may be domain-dependent, e.g., considering certain observations as being less reliable than others. 
For such types of domains, we develop in the sequel a formal methodology for revising the belief state of an agent, taking into consideration commonsense and epistemic notions.</p><p>It should be noted that, even though the belief change literature has been used as a source of inspiration for addressing the problem, the related technical results cannot be directly applied, as they are based on assumptions that are not relevant for our setting (e.g., monotonicity of the underlying representation language). Thus, our approach leverages only the related methodologies, establishing connections between our ideas and the corresponding ideas from belief change.</p><p>The rest of the paper is structured as follows: in Section 2 we review the basics of the Event Calculus that will be needed in the next sections. Sections 3 and 4 give the theoretical underpinnings of our methodology for the non-epistemic and the epistemic case respectively, describing the problem (and the proposed solution) in formal terms. In Section 5 we describe the implementation of the methodology in ASP, while in Section 6 we discuss related work. The paper concludes in Section 7 with remarks about our next steps.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Preliminaries</head><p>Our account of change and causality is based on the discrete time Event Calculus axiomatized in <ref type="bibr" target="#b5">[Mueller, 2015]</ref>, while the modeling of possible worlds for representing epistemic notions is inspired by the epistemic extension of the Functional Event Calculus (EF EC) <ref type="bibr" target="#b4">[Miller et al., 2013]</ref>. In particular, we consider a sort E for events (variables e, e′, e 1 , ...), a sort F for fluents (f, f′, f 1 , ...) and a sort T for timepoints (t, t′, t 1 , ...), which is restricted to the integers. <ref type="foot" target="#foot_0">1</ref> The key predicates are HoldsAt() ⊆ F × T denoting the truth value of a fluent at a particular timepoint, Happens() ⊆ E × T capturing the occurrence of events, Initiates() ⊆ E × F × T and T erminates() ⊆ E × F × T , denoting that an event e causes fluent f to become true or false respectively in the next timepoint. The notions of cause, effect and inertia are captured in the DEC domain-independent set of axioms (see <ref type="bibr" target="#b5">[Mueller, 2015]</ref>). As we restrict our considerations to deterministic domains for the time being, we do not axiomatize fluents that are released from the law of inertia.</p><p>In order to support epistemic reasoning, we introduce two new sorts, in the style of EF EC: a sort W for representing possible worlds (variables w, w′, w 1 , ...) and a sort I for instants (variables i, i′, i 1 , ...). The idea is to represent time as a system of parallel lines, where each world is understood as an identifier for a possible time line. Finally, we assume that the constant W a of sort W signifies the actual world.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Revisions in the Non-Epistemic Case</head></div>
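To make the key predicates concrete, the following is a minimal Python sketch of discrete Event Calculus progression: fluents obey inertia unless an occurring event initiates or terminates them. This is an illustrative stand-in (with hypothetical helper names), not the paper's implementation, which encodes the DEC axioms in ASP.

```python
def progress(state, narrative, initiates, terminates, t):
    """Compute the fluent state at t+1 from the state at t."""
    new_state = dict(state)  # inertia: truth values persist by default
    for e in narrative.get(t, []):
        for f in initiates(e, state, t):
            new_state[f] = True       # Initiates(e, f, t)
        for f in terminates(e, state, t):
            new_state[f] = False      # Terminates(e, f, t)
    return new_state

def run(initial, narrative, initiates, terminates, horizon):
    """Return the trajectory of states from timepoint 0 to `horizon`."""
    states = [dict(initial)]
    for t in range(horizon):
        states.append(progress(states[-1], narrative, initiates, terminates, t))
    return states

# Yale shooting domain: Shoot terminates Alive and Loaded only if Loaded holds.
def yale_initiates(e, s, t):
    return ["Loaded"] if e == "Load" else []

def yale_terminates(e, s, t):
    return ["Loaded", "Alive"] if e == "Shoot" and s.get("Loaded") else []

states = run({"Alive": True, "Loaded": True}, {1: ["Shoot"]},
             yale_initiates, yale_terminates, horizon=3)
```

With the gun loaded at timepoint 0 and a shot at timepoint 1, the turkey is no longer alive from timepoint 2 onwards, matching the expected DEC inference.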
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Revision Setting and Principles</head><p>The representation of a dynamic domain requires the coupling of domain-independent and domain-dependent axioms with our knowledge about the world and the related narrative. Overall, we define a Knowledge Base as follows:</p><formula xml:id="formula_0">Definition 1 A Knowledge Base (KB) capturing a dynamic domain is defined as Φ = DEC ∧ Σ ∧ Γ 0 ∧ ∆ ∧ Ω where</formula><p>• DEC is the conjunction of the Discrete Event Calculus domain-independent axioms,</p><p>• Σ is the conjunction of the domain-dependent axioms,</p><p>• Γ 0 is the initial knowledge, i.e., a conjunction of ground (¬)HoldsAt(F i , 0) axioms at timepoint 0,</p><p>• ∆ is the narrative of actions, i.e., a conjunction of ground Happens(E i , T j ) axioms,</p><p>• Ω is a conjunction of unique name axioms.</p><p>Domain axioms in ∆ can be partially defined and then minimized to address the Frame Problem and related issues. Γ 0 axioms cannot be partially defined, as we assume complete world knowledge initially (this assumption will be lifted in Section 4). We denote by Φ |= φ the fact that Φ implies φ.</p><p>We assume that from time to time we observe some part of the world, i.e., we obtain the truth value of certain fluents. Our current assumption is that observations can only contain a conjunction of (¬)HoldsAt() statements. We denote by Γ T an observation obtained at timepoint T . Now let's turn our attention to the problem of revising a KB Φ with an observation Γ T . We follow the Principle of Consistency Maintenance <ref type="bibr" target="#b0">[Dalal, 1988]</ref>, which requires that the result of revising Φ with Γ T should be consistent. 
In addition, we make the standard assumption that is expressed by the Principle of Primacy of New Information <ref type="bibr" target="#b0">[Dalal, 1988]</ref> (and formalized by the postulate of success in the AGM postulates <ref type="bibr" target="#b0">[Alchourron et al., 1985]</ref>), which states that the new observation should always be entailed after the revision.</p><p>The special characteristics of the Event Calculus force us to introduce two new principles. The first is the Principle of Persistence of Background Knowledge, which states that the revision process will only affect the initial knowledge (Γ 0 ) and/or the narrative (∆). Thus, the domain-independent (DEC) and domain-dependent (Σ) axioms, as well as the unique name axioms (Ω), should not be affected. This avoids issues associated with the problem of learning the domain from observations, which is not within the scope of this paper.</p><p>The second new requirement is the Principle of Disallowing Proactive Change, which, informally, states that we cannot use an observation referring to time T in order to add events in the narrative beyond that timepoint. Essentially, this limits the direct effects of an observation (and the corresponding revision) to past timepoints, even though such effects may also have indirect ramifications related to the truth value of fluents in the future.</p><p>Finally, we adopt the Principle of Minimal Change <ref type="bibr" target="#b3">[Katsuno and Mendelzon, 1991]</ref> (also known as the Principle of Persistence of Prior Knowledge <ref type="bibr" target="#b0">[Dalal, 1988]</ref>), which states that the new KB should be as "close" as possible to the original KB; in other words, from all the possible change results (revision candidates) that satisfy the other principles, we should choose the one that retains "the most information".</p></div>
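The two Event-Calculus-specific principles can be checked mechanically. Below is a hedged Python sketch over a simplified KB representation (a dict of components, and narratives as sets of (event, timepoint) pairs); all names are illustrative, not taken from the paper's ASP implementation.

```python
def respects_persistence(kb, kb_prime):
    """Principle of Persistence of Background Knowledge: DEC, Sigma and
    Omega must be identical; only Gamma0 and Delta may change."""
    return (kb["DEC"] == kb_prime["DEC"]
            and kb["Sigma"] == kb_prime["Sigma"]
            and kb["Omega"] == kb_prime["Omega"])

def respects_no_proactive_change(delta, delta_prime, T):
    """Principle of Disallowing Proactive Change: events added to or
    retracted from the narrative may only refer to timepoints t <= T,
    where T is the timepoint of the observation."""
    changed = set(delta) ^ set(delta_prime)   # symmetric difference
    return all(t <= T for (_event, t) in changed)
```

For instance, dropping a Shoot event at timepoint 1 in response to an observation at timepoint 3 is allowed, whereas inserting an event at timepoint 5 is not.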
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">The Revision Operator</head><p>Following the above principles, we can formally define the set of revision candidates as follows:</p><formula xml:id="formula_1">Definition 2 Given a KB Φ = DEC ∧ Σ ∧ Γ 0 ∧ ∆ ∧ Ω and an observation Γ T , a KB Φ′ is a revision candidate of Φ with Γ T iff: • Φ′ is of the form Φ′ = DEC ∧ Σ ∧ Γ′ 0 ∧ ∆′ ∧ Ω (Principle of Persistence of Background Knowledge). • No formula in (∆ \ ∆′) ∪ (∆′ \ ∆) refers to timepoints t &gt; T (Principle of Disallowing Proactive Change).</formula><p>• Φ′ is a consistent KB (Principle of Consistency).</p><p>• Φ′ |= Γ T (Principle of Primacy of New Information).</p><p>The set of all revision candidates of Φ with Γ T will be denoted by RC(Φ, Γ T ).</p><p>Note that Definition 2 imposes that the part DEC ∧ Σ ∧ Ω of all revision candidates is identical to the corresponding part of the original KB (following the Principle of Persistence of Background Knowledge), and also formalizes all other principles (except for the Principle of Minimal Change). The latter is not considered because RC(Φ, Γ T ) is meant to represent all the conceivably possible revision results, not the optimal ones. The notion of minimal change is often subjective, context- and/or domain-dependent, so we chose to include it as a separately configurable component of our framework.</p><p>To formalize the Principle of Minimal Change, we will use the standard approach of introducing a preference relation ≺ T Φ . The idea is that if Φ 1 ≺ T Φ Φ 2 , Φ 1 is strictly more preferred than Φ 2 for the result of the revision of Φ with an observation at timepoint T . Note that this is different from the relations among interpretations <ref type="bibr" target="#b3">[Katsuno and Mendelzon, 1991]</ref> and formulas <ref type="bibr" target="#b3">[Gardenfors and Makinson, 1988]</ref> that have been used elsewhere for the same purpose. 
Establishing the connection between our preference relation and these works is part of our future work. For now, it suffices to assume that ≺ T Φ is well-founded (so that we can always find a minimal element in a non-empty set). Further properties (e.g., totality, transitivity) may improve algorithmic efficiency in identifying the optimal solution, but this is irrelevant for now.</p><p>We are now ready to define the revision operator. Intuitively, the idea is that we select those elements of RC(Φ, Γ T ) that are minimal with respect to ≺ T Φ . In case multiple minimal elements exist, their disjunction is taken. It is also interesting to note that, in the special case when RC(Φ, Γ T ) = ∅, we do not revise the KB; this can happen, e.g., when the observation itself is inconsistent or when there is no way to satisfy the observation without changing background knowledge, such as the domain axioms. Formally: Definition 3 The revision operator • is a binary operator, defined as follows:</p><formula xml:id="formula_2">• Φ • Γ T = Φ if RC(Φ, Γ T ) = ∅. • Φ • Γ T = {Φ′ | Φ′ ∈ RC(Φ, Γ T ) and there is no Φ′′ ∈ RC(Φ, Γ T ) such that Φ′′ ≺ T Φ Φ′} otherwise.</formula></div>
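Definition 3 amounts to selecting the minimal candidates under the preference relation, or leaving the KB untouched when the candidate set is empty. A minimal sketch, assuming candidates are opaque values and the preference relation is induced by a numeric cost function (illustrative names only):

```python
def revise(phi, candidates, cost):
    """Definition 3 (sketch): return the cost-minimal elements of
    RC(phi, observation); if that set is empty, do not revise."""
    if not candidates:
        return [phi]                  # no admissible revision: keep the KB
    best = min(cost(c) for c in candidates)
    # several minimal candidates => their "disjunction" (here: a list)
    return [c for c in candidates if cost(c) == best]
```

When two candidates tie for the minimum, both are returned, mirroring the disjunction taken in the definition.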
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Defining a Specific Preference Relation</head><p>To define the preference relation more precisely, we will leverage a cost-based model which assesses minimality based on the amount of information lost or modified from the original KB in order to accommodate the observation. In particular, considering two KBs Φ, Φ′, the cost to move from Φ to Φ′ will be defined on the basis of the formulas that can be inferred by one of these KBs but not the other. To formalise this, we first define the following sets:</p><formula xml:id="formula_3">Modified Knowledge M K T (Φ, Φ′) = {HoldsAt(F, T′) | T′ ≤ T and either Φ |= HoldsAt(F, T′) and Φ′ |= ¬HoldsAt(F, T′), or Φ |= ¬HoldsAt(F, T′) and Φ′ |= HoldsAt(F, T′)}.</formula><p>This represents all fluents whose truth value was changed during the transition from Φ to Φ′, up to T .</p><formula>New Events N E T (Φ, Φ′) = {Happens(E, T′) | T′ ≤ T , Φ |= ¬Happens(E, T′) and Φ′ |= Happens(E, T′)}.</formula><p>This represents all events that we had to add in the narrative of Φ to accommodate the observation, up to T .</p><formula xml:id="formula_4">Lost Events LE T (Φ, Φ′) = {Happens(E, T′) | T′ ≤ T , Φ |= Happens(E, T′) and Φ′ |= ¬Happens(E, T′)}.</formula><p>This represents all events that we had to retract from the narrative of Φ to accommodate the observation, up to T . Note that the above definitions do not consider the consequences of changes for future timepoints, i.e., beyond a certain timepoint T . This will be used to ignore any future repercussions of our changes, considering only the changes up to the timepoint of the observation.</p><p>The cost between two KBs Φ, Φ′ up to the timepoint T is defined as:</p><formula xml:id="formula_5">cost T (Φ, Φ′) = w M K • |M K T (Φ, Φ′)| + w N E • |N E T (Φ, Φ′)| + w LE • |LE T (Φ, Φ′)|,</formula><p>where w M K , w N E , w LE are the corresponding weights associated to each change (a parameter of our model). Now, the ≺ T Φ relation can be easily defined as follows:</p><formula xml:id="formula_6">Φ 1 ≺ T Φ Φ 2 iff cost T (Φ, Φ 1 ) &lt; cost T (Φ, Φ 2 ).</formula><p>It is trivial to show that this relation is well-founded, as required by the definition.</p></div>
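The cost model above can be sketched in Python over explicit entailment tables: a dict mapping (fluent, timepoint) pairs to truth values, and a set of (event, timepoint) pairs for the narrative. This is a simplification for illustration; the framework itself is defined over logical entailment, and the names are assumptions.

```python
def cost(kb, kb_prime, T, w_mk=1, w_ne=2, w_le=2):
    """Weighted cost of moving from kb to kb_prime, counting changes
    only up to timepoint T (the timepoint of the observation)."""
    fluents, events = kb
    fluents_p, events_p = kb_prime
    # MK: fluents whose truth value differs between the two KBs, up to T
    mk = {(f, t) for (f, t) in fluents
          if t <= T and fluents[(f, t)] != fluents_p.get((f, t))}
    # NE / LE: events added to / retracted from the narrative, up to T
    ne = {(e, t) for (e, t) in events_p - events if t <= T}
    le = {(e, t) for (e, t) in events - events_p if t <= T}
    return w_mk * len(mk) + w_ne * len(ne) + w_le * len(le)
```

For example, flipping one fluent (weight 1) and retracting one event (weight 2) yields a cost of 3 under the default weights.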
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4">Discussion on the Preference Function</head><p>Note that the above definition of the cost function corresponds to the non-epistemic version that our current implementation supports. Yet, we also consider various alternative options:</p><p>• Partitioning fluents and/or events into different "importance categories", each with its own weight, w 1 , . . . , w n . This way, e.g., two different "new events" would cost differently, depending on the weight of the involved event. For this case, an aggregation function would be required, combining w i with the weight (w M K , w N E , w LE ) of the corresponding change category. This could help accommodate the case where default fluents can more easily be changed than non-default ones, or cases where specific fluents cost more (in terms of epistemic effort or practical consequences) to change. • Another option would be to consider a degradation of the cost over time. This can support the intuition that it should be more expensive to change knowledge about past timepoints than knowledge about more recent timepoints. Again, this could be supported with an appropriate aggregation function, combining the weight coming from the type of change (w M K , w N E , w LE ), the weight coming from the fluent or event (w i ) and the weight coming from the timepoint where the corresponding change occurred. • Further, the theory only requires a relative ordering of the different revision candidates. Even though this can be reduced to comparing the results of numerical functions, like the above, it is still possible to use a purely qualitative comparison method, or even hybrid methods combining qualitative and quantitative components. 
For this early version of the work, we chose the simpler quantitative approach, which is also more amenable to the implementation method chosen (see Section 5).</p><p>The above ideas can be neatly formalized by considering a weight function associating a "weight" to each possible formula (HoldsAt(), Happens()) and computing the cost as the total "weight" of the formulas implied by Φ but not Φ′ (or vice versa).</p><p>Despite its early stage of development, the proposed ≺ T Φ relation has several intuitively desirable formal properties. First, we show that "fewer" changes (with respect to the standard set-theoretic subset relation) are better than "more":</p><formula xml:id="formula_7">Proposition 1 Consider three KBs Φ, Φ 1 , Φ 2 . Set C T (Φ i ) = {φ | Φ |= φ, Φ i ⊭ φ and φ refers to a timepoint t ≤ T }, for i = 1, 2. If C T (Φ 1 ) ⊂ C T (Φ 2 ), then Φ 1 ≺ T Φ Φ 2 .</formula><p>As a corollary, we get that not changing a KB is always cheaper than changing it, and this will happen whenever the observation does not contradict our expectations:</p><formula xml:id="formula_8">Proposition 2 Φ ≺ T Φ Φ′ for all Φ, Φ′, T . Proposition 3 If Φ ∈ RC(Φ, Γ T ), then Φ • Γ T = Φ. Proposition 4 If Φ |= Γ T , then Φ • Γ T = Φ.</formula><p>Example (cont.) Returning to the Yale shooting example described before, the observer's KB Φ Y ale can be described by the following axiomatization, stating that the gun is loaded at timepoint 0 and fired at timepoint 1.</p><formula xml:id="formula_9">Initiates(Load, Loaded, t) (3.1) HoldsAt(Loaded, t) → T erminates(Shoot, Loaded, t) (3.2) HoldsAt(Loaded, t) → T erminates(Shoot, Alive, t) (3.3) HoldsAt(Alive, 0) (3.4) HoldsAt(Loaded, 0) (3.5) Happens(Shoot, 1) (3.6) That is, Σ = (3.1) ∧ (3.2) ∧ (3.3), Γ 0 = (3.4) ∧ (3.5), ∆ = (3.6) and Φ Y ale = DEC ∧ Σ ∧ Γ 0 ∧ ∆ ∧ Ω.</formula><p>Assume now that the observer receives information that contradicts her current inferences, e.g., Γ 3 = HoldsAt(Alive, 3) (note that Φ Y ale |= ¬HoldsAt(Alive, 3)). A possible reaction to this observation would be that the observer was mistaken and the shooter did not fire the gun. That is, ∆′ = ∅ and Γ′ 0 = Γ 0 . So, a revision candidate of Φ Y ale , following Definition 2, would be</p><formula xml:id="formula_10">Φ′ Y ale = DEC ∧ Σ ∧ Γ′ 0 ∧ ∆′ ∧ Ω.</formula><p>Another possible revision would be that the observer was mistaken and the gun was not loaded in the first place. That is, ∆′′ = ∆ and Γ′′ 0 = (3.4) ∧ ¬HoldsAt(Loaded, 0).</p></div>
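The per-category weights and temporal degradation discussed above can be realized by a single per-change weight function. One way to sketch it (the multiplicative aggregation and the `half_life` discount are illustrative assumptions, not fixed by the framework):

```python
def change_weight(kind, symbol, t, T, kind_w, symbol_w, half_life=None):
    """Weight of a single change: base weight for its change type
    (MK/NE/LE), scaled by the importance of the fluent or event, and
    optionally discounted so that older changes (small t) cost more to
    revise than recent ones."""
    w = kind_w[kind] * symbol_w.get(symbol, 1.0)
    if half_life is not None:
        w *= 2.0 ** ((T - t) / half_life)   # doubles every `half_life` steps back
    return w
```

With no discount and no per-symbol weight this reduces to the flat scheme of Subsection 3.3; with a half-life of 2 timepoints, a change at timepoint 0 assessed at T = 2 costs twice its base weight.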
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The revision candidate of</head><formula xml:id="formula_11">Φ Y ale is now Φ′′ Y ale = DEC ∧ Σ ∧ Γ′′ 0 ∧ ∆ ∧ Ω.</formula><p>Consequently, Φ′ Y ale , Φ′′ Y ale ∈ RC(Φ Y ale , Γ 3 ). Note that many other KBs are included in RC(Φ Y ale , Γ 3 ), but all of them would introduce more changes (with regards to the subset relation) than these two (see also Proposition 1), so they are not presented due to lack of space.</p><p>To find the ≺ 3 Φ -minimal element(s) of RC(Φ Y ale , Γ 3 ), we only need to compare Φ′ Y ale with Φ′′ Y ale . The weights associated with the cost function were set to w M K = 1, w N E = 2, w LE = 2, so the corresponding costs are:</p><formula xml:id="formula_12">cost 3 (Φ Y ale , Φ′ Y ale ) = 6, cost 3 (Φ Y ale , Φ′′ Y ale ) = 4, so Φ′′ Y ale ≺ 3 Φ Φ′ Y ale and Φ′′ Y ale is the revision result.</formula></div>
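These two costs can be checked by hand-simulating the Yale shooting trajectories of the original KB and the two candidates, then applying the cost function with w M K = 1, w N E = 2, w LE = 2. A self-contained sketch (the paper's implementation performs this via ASP optimization; names are illustrative):

```python
def trajectory(loaded0, shoot):
    """Fluent values of Alive/Loaded at timepoints 0..3, with an optional
    Shoot event at timepoint 1 taking effect at timepoint 2."""
    alive, loaded, vals = True, loaded0, {}
    for t in range(4):
        vals[("Alive", t)], vals[("Loaded", t)] = alive, loaded
        if shoot and t == 1 and loaded:
            alive, loaded = False, False
    return vals

phi = (trajectory(True, True), {("Shoot", 1)})       # original KB
phi_prime = (trajectory(True, False), set())         # candidate 1: no shot
phi_dprime = (trajectory(False, True), {("Shoot", 1)})  # candidate 2: gun unloaded

def cost(kb, kb_prime, T, w_mk=1, w_ne=2, w_le=2):
    (fl, ev), (fl_p, ev_p) = kb, kb_prime
    mk = sum(1 for k in fl if k[1] <= T and fl[k] != fl_p[k])  # modified fluents
    ne = sum(1 for (_e, t) in ev_p - ev if t <= T)             # added events
    le = sum(1 for (_e, t) in ev - ev_p if t <= T)             # retracted events
    return w_mk * mk + w_ne * ne + w_le * le
```

Candidate 1 flips four fluent values (Alive and Loaded at timepoints 2 and 3) and retracts one event, giving cost 4 + 2 = 6; candidate 2 flips four fluent values (Loaded at 0 and 1, Alive at 2 and 3) with no event change, giving cost 4, so it wins.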
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Revisions in the Epistemic Case</head><p>The high demands that are imposed on autonomous systems in real domains have led to variations of Event Calculus theories that can support reasoning with partial world knowledge. Such epistemic extensions can accommodate both known and unknown fluents, using a special type of "sense" actions to acquire new knowledge, which by definition only affect the belief state of the agent, causing no effect on the state of the domain. In this section, we discuss how revision of beliefs can be achieved when the sensed information contradicts existing knowledge. We rely on the approach introduced in the recent EF EC dialect that implemented an adaptation of the possible worlds model to give formal semantics to belief predicates.</p><p>In EF EC, the function &lt;&gt;: W × I → T is introduced to map world/instant pairs to timepoints. Timepoint &lt; W, I &gt; represents instant I in possible world W , where: ∀t∃w, i.t =&lt; w, i &gt; (DOX1) The time lines believed to be accessible at any given moment are captured by the relation K ⊆ W × W, which represents the accessibility relation between possible worlds, as in modal logics. As usual, we formally define belief of some fluent f at some timepoint as the fact that this fluent has the same truth value in all worlds that are accessible from the actual world:</p><formula xml:id="formula_13">Bel(f, &lt; W a , i &gt;) ≡ (DOX2) ∀wK(w, W a ) → HoldsAt(f, &lt; w, i &gt;) BelN ot(f, &lt; W a , i &gt;) ≡ (DOX3) ∀wK(w, W a ) → ¬HoldsAt(f, &lt; w, i &gt;) BelW h(f, &lt; W a , i &gt;) ≡ (DOX4) Bel(f, &lt; W a , i &gt;) ∨ BelN ot(f, &lt; W a , i &gt;)</formula><p>In contrast to EF EC though, we do not consider K to be an equivalence relation. 
<ref type="foot" target="#foot_1">2</ref> Instead, in order to model belief rather than knowledge, we only assume that K is serial, which is equivalent to stating that the agent cannot believe contradictions (also known as the Consistency Axiom):</p><formula xml:id="formula_14">∀i.Bel(f, &lt; W a , i &gt;) → ¬BelN ot(f, &lt; W a , i &gt;)(DOX5) ∀i.BelN ot(f, &lt; W a , i &gt;) → ¬Bel(f, &lt; W a , i &gt;)(DOX6)</formula><p>Notice that from the above axioms we do not assume that the actual world is accessible too (K is not reflexive). As a result, erroneous beliefs can still be inferred, requiring a revision mechanism whenever observations (that reflect W a ) do not comply with the agent's beliefs.</p><p>Finally, we need to define a domain-independent axiom to ensure the existence of possible worlds in the initial state.</p><formula xml:id="formula_15">∀f.¬BelW h(f, &lt; W a , 0 &gt;) → (DOX7) ∃w 1 , w 2 .K(w 1 , W a ) ∧ K(w 2 , W a ) ∧ HoldsAt(f, &lt; w 1 , 0 &gt;) ∧ ¬HoldsAt(f, &lt; w 2 , 0 &gt;)</formula><p>Notice that according to our assumption of never losing knowledge (there is no non-determinism), it is sufficient to preserve the number of possible worlds generated at the initial timepoint while reasoning, since there is no way of generating more worlds. 
We do not need to eliminate worlds either, in order to allow for reasoning about the past.</p><p>Based on the aforementioned formalization of belief, we extend the definition of a KB to accommodate lack of knowledge at the initial and at future timepoints: Definition 4 An epistemic KB is defined as eΦ = DEC ∧ DOX ∧ Σ ∧ eΓ 0 ∧ ∆ ∧ Ω, where</p><p>• DEC is the conjunction of the Discrete Event Calculus domain-independent axioms,</p><p>• DOX is the conjunction of the epistemic axioms to support beliefs,</p><p>• Σ is the conjunction of the domain-dependent axioms,</p><p>• eΓ 0 is the initial beliefs, i.e., a conjunction of ground Bel(F i , &lt; W a , 0 &gt;), BelN ot(F j , &lt; W a , 0 &gt;) axioms at timepoint 0,</p><p>• ∆ is the narrative of actions, i.e., a conjunction of ground Happens(E i , &lt; w, I j &gt;) axioms,</p><p>• Ω is a conjunction of unique name axioms.</p><p>The definition for eΦ is more general than the one given for Φ. Specifically, if we assume complete world knowledge at timepoint 0, a single possible world is generated, making eΦ equivalent to Φ.</p><p>As before, ∆ axioms can be partially defined and then minimized to address the Frame Problem and related issues. eΓ 0 axioms can be partially defined as well; fluents that are unknown at time instant 0 generate the set of possible worlds according to axiom (DOX7).</p><p>The case of revision in the epistemic Event Calculus is virtually identical to the case of the non-epistemic one. Thus, whatever we discussed in Section 3 can be applied here as well. The main difference in the epistemic case is that we can now provide a more fine-grained preference relation, taking special provisions for the case where a fluent whose value was originally unknown, became known (true or false). To do so, an approach similar to the one used in Subsection 3.3 could be used, namely, identifying the sets of formulas of the form Bel(. . . ), BelN ot(. . . ), BelW h(. . . 
) that are implied/not implied by eΦ, eΦ′ respectively.</p><p>In particular, considering two epistemic KBs eΦ, eΦ′, the cost to move from eΦ to eΦ′ will be defined on the basis of the formulas that can be inferred by one of these KBs but not the other. To formalise this, we first define the following sets:</p><formula xml:id="formula_16">Modified Knowledge M K T (eΦ, eΦ′) = {Bel(F, &lt; W a , T′ &gt;) | T′ ≤ T and either eΦ |= Bel(F, &lt; W a , T′ &gt;) and eΦ′ |= BelN ot(F, &lt; W a , T′ &gt;), or eΦ |= BelN ot(F, &lt; W a , T′ &gt;) and eΦ′ |= Bel(F, &lt; W a , T′ &gt;)}.</formula><p>This represents all Bel() or BelN ot() statements whose truth value was changed during the transition from eΦ to eΦ′, up to T .</p><formula xml:id="formula_17">New Knowledge N K T (eΦ, eΦ′) = {Bel(F, &lt; W a , T′ &gt;) | T′ ≤ T and either eΦ |= ¬BelW h(F, &lt; W a , T′ &gt;) and eΦ′ |= Bel(F, &lt; W a , T′ &gt;), or eΦ |= ¬BelW h(F, &lt; W a , T′ &gt;) and eΦ′ |= BelN ot(F, &lt; W a , T′ &gt;)}.</formula><p>This represents all ¬BelW h() statements whose unknown value was changed to known during the transition from eΦ to eΦ′, up to T .</p><p>Lost Knowledge LK T (eΦ, eΦ′) = {¬BelW h(F, &lt; W a , T′ &gt;) | T′ ≤ T and either eΦ |= Bel(F, &lt; W a , T′ &gt;) and eΦ′ |= ¬BelW h(F, &lt; W a , T′ &gt;), or eΦ |= BelN ot(F, &lt; W a , T′ &gt;) and eΦ′ |= ¬BelW h(F, &lt; W a , T′ &gt;)}. 
This represents all Bel() or BelN ot() statements whose truth value was changed to unknown during the transition from eΦ to eΦ′, up to T .</p><formula xml:id="formula_18">New Events N E T (eΦ, eΦ′) = {Happens(E, &lt; W a , T′ &gt;) | T′ ≤ T , eΦ |= ¬Happens(E, &lt; W a , T′ &gt;) and eΦ′ |= Happens(E, &lt; W a , T′ &gt;)}.</formula><p>This represents all events that we had to add in the narrative of eΦ to accommodate the observation, up to T .</p><formula xml:id="formula_19">Lost Events LE T (eΦ, eΦ′) = {Happens(E, &lt; W a , T′ &gt;) | T′ ≤ T , eΦ |= Happens(E, &lt; W a , T′ &gt;) and eΦ′ |= ¬Happens(E, &lt; W a , T′ &gt;)}.</formula><p>This represents all events that we had to retract from the narrative of eΦ to accommodate the observation, up to T .</p><p>The cost between two KBs eΦ, eΦ′ up to the timepoint T is defined as:</p><formula xml:id="formula_20">cost T (eΦ, eΦ′) = w M K • |M K T (eΦ, eΦ′)| + w N K • |N K T (eΦ, eΦ′)| + w LK • |LK T (eΦ, eΦ′)| + w N E • |N E T (eΦ, eΦ′)| + w LE • |LE T (eΦ, eΦ′)|,</formula><p>where w M K , w N K , w LK , w N E , w LE are the corresponding weights associated to each change (a parameter of our model). Now, the ≺ T Φ relation can be easily defined as follows:</p><formula xml:id="formula_21">eΦ 1 ≺ T Φ eΦ 2 iff cost T (eΦ, eΦ 1 ) &lt; cost T (eΦ, eΦ 2 ).</formula><p>It is trivial to show that this relation is well-founded, as required by the definition.</p></div>
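The epistemic cost adds two categories for gaining and losing knowledge. A hedged sketch over three-valued belief tables, where each (fluent, timepoint) pair maps to "T" (Bel), "F" (BelNot) or "U" (unknown, i.e., ¬BelWh); the table encoding and all names are illustrative stand-ins for entailment of the belief predicates.

```python
def ecost(bel, bel_prime, ev, ev_prime, T,
          w_mk=1, w_nk=1, w_lk=2, w_ne=2, w_le=2):
    """Weighted epistemic cost of moving from (bel, ev) to
    (bel_prime, ev_prime), counting changes up to timepoint T."""
    mk = nk = lk = 0
    for (f, t), a in bel.items():
        b = bel_prime[(f, t)]
        if t > T or a == b:
            continue
        if a in ("T", "F") and b in ("T", "F"):
            mk += 1          # MK: belief flipped between Bel and BelNot
        elif a == "U":
            nk += 1          # NK: unknown became known
        else:
            lk += 1          # LK: known became unknown
    ne = sum(1 for (_e, t) in ev_prime - ev if t <= T)   # added events
    le = sum(1 for (_e, t) in ev - ev_prime if t <= T)   # retracted events
    return w_mk * mk + w_nk * nk + w_lk * lk + w_ne * ne + w_le * le
```

With the default weights, learning the value of one unknown fluent costs 1 while forgetting a known one costs 2, reflecting that gaining knowledge is cheaper than losing it.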
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Implementation</head><p>The proposed framework was implemented for both the non-epistemic and the epistemic case, using the architecture shown in Figure <ref type="figure">1</ref>. <ref type="foot" target="#foot_2">3</ref> The figure shows the loop of steps performed whenever a new observation arrives, along with the corresponding input/output modules (rulesets). There are two main reasoning steps, interconnected via two Java programs: the first reasoning step generates one answer set in the non-epistemic case and multiple answer sets in the epistemic case, each denoting a possible world (see Section 4), and the second step generates cost-optimal revisions. Reasoning is performed by implementing the Event Calculus axiomatizations as ASP rules and by utilizing the Clingo reasoner <ref type="bibr" target="#b3">[Gebser et al., 2011]</ref>. Similarly, the domain-dependent axioms, the epistemic axioms, the main program, the meta-program, the new observations and the current beliefs are all implemented in ASP. The current beliefs module contains the running information about the world state and the narrative of actions. A Java parser intervenes between the two reasoning steps to transform the information contained in the generated answer sets, i.e., the possible worlds, into the agent's belief predicates. These are introduced in ASP form in the revision meta-program, which produces the revision results, based on the DOX axioms described in Section 4. Finally, a Java parser handles all other modules needed for automating the process and for connecting the reasoning to the outside world.</p><p>The most important module is the revision meta-program, which implements the revision algorithm. In case of inconsistency, it takes as input the new observation and the result from the Java parser, which expresses the running belief state. 
The meta-program computes the revision using the cost-optimal revisions generator, implemented as a logic program. (Figure <ref type="figure">1</ref>: The reasoning loop for revising the KB of an agent.) Roughly, this program generates combinations of fluents in the initial state, as well as combinations of event occurrences at each timepoint, for every possible world, keeping only the combinations that lead to a consistent KB and are consistent with the new observation (revision candidates). The cost of each revision candidate is calculated based on the cost function described in Section 4; this is implemented with ASP rules that penalize each truth value or event that differs from the output of the first reasoning step. Finally, an optimization statement filters out all non-optimal revision candidates (answer sets). At the end, the program returns the disjunction of the optimal candidates.</p><p>Returning to the Yale shooting example described before, an output of the program in the epistemic case is discussed next. The initial beliefs are that the turkey is alive at timepoint 0 and a shot happens at timepoint 1, but we do not know whether the gun is loaded at timepoint 0. Thus, we do not know whether the turkey is alive at timepoint 3. Assume now that we receive the contradicting information that the turkey is alive at timepoint 3. Figure <ref type="figure" target="#fig_2">2</ref> presents the optimal revision that accommodates this observation into our knowledge base, based on the following weights:</p><formula xml:id="formula_22">w_MK = 1, w_NK = 1, w_LK = 2, w_NE = 2, w_LE = 2.</formula><p>More specifically, the optimal revision is to assume that we were mistaken and the gun was not loaded in the first place. This yields the least possible cumulative cost, as we gain knowledge of the state of the turkey at timepoints 2 and 3, as well as of the state of the gun at timepoints 0 and 1. 
Had we accepted the revision that the shooter did not fire the gun, we would lose knowledge and event happenings at various timepoints and, as a result, the total cost would be greater.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Related Work</head><p>Belief change (also known as belief revision) is a mature field of study dealing with the adaptation of a KB in the face of new information <ref type="bibr" target="#b0">[Alchourron et al., 1985]</ref>. Traditionally, two types of change have been considered: revision and update <ref type="bibr" target="#b3">[Katsuno and Mendelzon, 1991]</ref>. Revision deals with cases where the new information is some observation or refinement of our knowledge about the world, whereas update deals with cases where the adaptation is dictated by some action or event that changed the world itself. The case of belief update is inherently captured by the semantics and reasoning of the Event Calculus, where one can explicitly declare events, as well as their effects and preconditions. However, studies on revising action theories when observations of the actual world are inconsistent with the theory's predictions about the world's state remain limited.</p><p>Most works in the classical belief change literature deal with the so-called classical logics <ref type="bibr" target="#b3">[Flouris et al., 2006]</ref>, which have certain nice properties, both in terms of semantics (monotonicity, compactness, inclusion of the classical tautological implication, etc.) and in terms of syntax (closure under the usual operators ∧, ∨, ¬, etc.). Extensions of these theories to ontological languages <ref type="bibr" target="#b3">[Flouris et al., 2006;</ref><ref type="bibr" target="#b5">Qi and Du, 2009]</ref>, or to compact and monotonic logics in general <ref type="bibr" target="#b5">[Ribeiro et al., 2013]</ref>, have been considered as well. 
However, to the best of our knowledge, no such study exists for non-monotonic formalisms, partly because many non-monotonic formalisms (most notably, defeasible logic, default logic and paraconsistent logics) have inherent ways to reason under inconsistency without trivializing inference. Thus, technical results from the related literature are not directly applicable in our setting.</p><p>Studies that account for epistemic considerations in the Event Calculus are more closely related to ours. More specifically, the EFEC variant introduced in <ref type="bibr" target="#b4">[Miller et al., 2013;</ref><ref type="bibr" target="#b4">Ma et al., 2013]</ref> is the first to rely on the possible-worlds semantics to reason about knowledge. EFEC supports a multitude of features, such as reasoning about the future and the past, or dealing with non-determinism and concurrency. Our work utilizes the same underlying structures to formalize the treatment of epistemic notions, extending them with the ability to revise contradicting knowledge, although it currently supports a significantly more limited set of features. In <ref type="bibr">[Patkos and Plexousakis, 2009]</ref>, a different epistemic extension of discrete-time Event Calculus theories is presented, using a deduction-oriented model of knowledge.</p><p>Beyond the Event Calculus, possible-worlds-based epistemic extensions for reasoning about actions and knowledge have been developed in the context of other calculi. The first approach that inspired this direction of research is due to <ref type="bibr" target="#b4">[Moore, 1985]</ref>, who presented a Kripke-like formulation of the epistemic notions of modal logic in action languages by reifying possible worlds as situations. 
<ref type="bibr" target="#b6">[Scherl and Levesque, 2003]</ref> adapted this framework to the Situation Calculus, using possible situations to specify how the mental state of an agent should change with ordinary and sense actions, providing also a solution to the frame problem for knowledge. Other studies introduced further features: <ref type="bibr">[Thielscher, 2000]</ref> adapted the model in the context of the Fluent Calculus, <ref type="bibr">[Scherl, 2003]</ref> covered concurrent actions, while <ref type="bibr">[Kelly and Pearce, 2008]</ref> introduced epistemic modalities for groups of agents. Non-possible-worlds-based epistemic action frameworks include <ref type="bibr" target="#b4">[Morgenstern, 1987;</ref><ref type="bibr" target="#b1">Demolombe and Pozos-Parra, 2000;</ref><ref type="bibr" target="#b6">Son and Baral, 2001;</ref><ref type="bibr" target="#b5">Petrick and Levesque, 2002;</ref><ref type="bibr" target="#b8">Vassos and Levesque, 2007;</ref><ref type="bibr" target="#b4">Liu and Lakemeyer, 2009]</ref>. In all these frameworks, knowledge is assumed to be always correct, and observations that contradict inferred knowledge lead to inconsistency.</p><p>The ability to deal with belief changes has lately started to gain interest within other action languages, as in <ref type="bibr" target="#b6">[Shapiro et al., 2011]</ref> and <ref type="bibr" target="#b6">[Schwering et al., 2015]</ref> in the Situation Calculus, but without taking time into account. <ref type="bibr" target="#b8">[Van Zee et al., 2015]</ref> developed a new action formalism for the revision of temporal belief bases; even though it is related to our work, <ref type="bibr" target="#b8">[Van Zee et al., 2015]</ref> does not directly address the problem of revising theories in the Event Calculus, but instead defines a new logic of action that is closer to propositional logic, thereby allowing technical results from the belief change literature to apply directly in their framework.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Conclusion</head><p>This paper reported on early work towards a formal framework for changing Event Calculus theories in the face of new (and potentially unexpected) observations. Such a framework is necessary in cases where an intelligent agent observes, or otherwise becomes aware of, information that contradicts what was expected by the underlying theory. Even though the rich technical results from the belief change literature are not generally applicable to our setting, we leveraged some key ideas and adapted them for our purposes. Our approach was based on a set of principles and a preference relation that models the well-known Principle of Minimal Change.</p><p>We are currently working on extending the framework with more features and providing a more efficient implementation, along with a more generic preference relation. We are working to accommodate default knowledge, irrelevant fluents, degradation of the cost over time and other domain-specific features. Our implementation will be extended to allow easy parameterization and customization of the preference relation, even at run-time, in order to experiment with the behaviour of different preferences and preference families.</p><p>We also plan to establish stronger connections with existing results from belief change (e.g., satisfaction of certain postulates, or connections between our preference relation and various selection functions or orderings that have been used in other contexts), thereby gaining a more thorough understanding of the properties of the proposed framework. 
Further, even though our theoretical framework is generic enough to support more complex flavours of action theories and DEC, our implementation will need to be significantly extended to support different Event Calculus dialects and a richer set of commonsense features such as non-determinism, state constraints, and introspective belief changes.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>and either Φ |= HoldsAt(F, T ) and Φ |= ¬HoldsAt(F, T ), or Φ |= ¬HoldsAt(F, T ) and Φ |= HoldsAt(F, T )}. This represents all HoldsAt() statements whose truth value was changed during the transition from Φ to Φ , up to T . New Events N E T (Φ, Φ ) = {Happens(E, T ) | T ≤ T , Φ |= ¬Happens(E, T ) and Φ |= Happens(E, T )}.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>5) and ∆ = (3.6), whereas the remaining components of Φ Y ale follow from Definition 1. It can be shown that Φ Y ale |= ¬HoldsAt(Alive, 2).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The output of the Java agent, divided in two sections. The upper section in blue represents the original belief state, and the lower section in green represents the revised belief state of the agent.</figDesc><graphic coords="6,54.00,540.68,243.01,133.26" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">In the sequel, variables start with a small letter, and constants with a capital letter. Wherever not explicitly stated, variables are assumed to be universally quantified.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">Note also that the current setting only permits belief statements that refer to the state of world fluent at the same time point. In other words, we do not represent the beliefs at some timepoint about the state of fluents at another timepoint. Such statements are supported in EF EC and will be considered for future extensions of our axiomatization.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://www.ics.forth.gr/isl/ CS17BelRevPaper/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On the logic of theory change: Partial meet contraction and revision functions</title>
		<author>
			<persName><forename type="first">Alchourron</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI-88</title>
				<imprint>
			<date type="published" when="1985">1985. 1985. 1988. 1988</date>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="475" to="479" />
		</imprint>
	</monogr>
	<note>Investigations into a theory of knowledge base revision: Preliminary report</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A simple and tractable extension of situation calculus to epistemic logic</title>
		<author>
			<persName><forename type="first">D</forename></persName>
		</author>
		<author>
			<persName><forename type="first">Asaro</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">LPNMR-17</title>
				<imprint>
			<date type="published" when="2000">2017. 2017. 2000. 2000</date>
			<biblScope unit="page" from="515" to="524" />
		</imprint>
	</monogr>
	<note>ISMIS-00</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Stable models and circumscription</title>
		<author>
			<persName><surname>Ferraris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">175</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="236" to="263" />
			<date type="published" when="2011">2011. 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Reformulating the situation calculus and the event calculus in the general theory of stable models and in answer set programming</title>
		<author>
			<persName><surname>Flouris</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">On the difference between updating a knowledge base and revising it</title>
				<editor>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Kelly</surname></persName>
		</editor>
		<editor>
			<persName><surname>Pearce</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="1986">2006. 2006. 1988. 1988. 2011. 2011. 1991. 1991. 2008. 2008. 1986. 1986. 2012. 2012</date>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="571" to="620" />
		</imprint>
	</monogr>
	<note>JAIR</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Some alternative formulations of the event calculus. Computational Logic: Logic Programming and Beyond, Essays in Honour</title>
		<author>
			<persName><forename type="first">Lakemeyer</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lakemeyer</surname></persName>
		</author>
		<author>
			<persName><surname>Ma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Formal Theories of the Commonsense World</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Hobbs</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Moore</surname></persName>
		</editor>
		<imprint>
			<publisher>Morgenstern</publisher>
			<date type="published" when="1985">2009. 2009. 2013. 2013. 2002. 2002. 2013. 1985. 1985. 1987. 1987</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="319" to="358" />
		</imprint>
	</monogr>
	<note>IJCAI-87</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Reasoning with knowledge, action and time in dynamic and uncertain domains</title>
		<author>
			<persName><forename type="first">;</forename><forename type="middle">E T</forename><surname>Mueller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mueller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Patkos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Plex</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Petrick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Levesque</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><surname>Du</surname></persName>
		</author>
		<author>
			<persName><surname>Ribeiro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Minimal change: Relevance and recovery revisited</title>
				<editor>
			<persName><forename type="first">Levesque</forename><surname>Petrick</surname></persName>
		</editor>
		<meeting><address><addrLine>San Francisco, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers Inc</publisher>
			<date type="published" when="2002">2015. 2015. 2009. 2009. 2002. 2002. 2009. 2009. 2013. 2013. 2003. 2003</date>
			<biblScope unit="volume">02</biblScope>
			<biblScope unit="page" from="1" to="39" />
		</imprint>
	</monogr>
	<note>Knowledge, action, and the frame problem</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
	<title level="a" type="main">Reasoning about the interaction of knowledge, time and concurrent actions in the situation calculus</title>
		<author>
			<persName><forename type="first">;</forename><forename type="middle">R B</forename><surname>Scherl</surname></persName>
		</author>
		<author>
			<persName><surname>Scherl</surname></persName>
		</author>
		<author>
			<persName><surname>Schwering</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI-03</title>
				<imprint>
			<publisher>Thielscher</publisher>
			<date type="published" when="2000">2003. 2003. 2015. 2015. 2011. 2011. 2015. 2001. 2001. 2000. 2000</date>
			<biblScope unit="volume">175</biblScope>
			<biblScope unit="page" from="109" to="120" />
		</imprint>
	</monogr>
	<note>Artificial Intelligence. M. Thielscher. Representing the knowledge of a robot. In KR-00</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Handbook of Knowledge Representation</title>
		<author>
			<persName><surname>Van Harmelen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007. 2007</date>
			<publisher>Elsevier Science</publisher>
			<pubPlace>San Diego, USA</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Progression of situation calculus action theories with incomplete information</title>
		<author>
			<persName><surname>Van Zee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI-15</title>
				<imprint>
			<date type="published" when="2007">2015. 2015. 2007. 2007</date>
			<biblScope unit="page" from="3250" to="3256" />
		</imprint>
	</monogr>
	<note>IJCAI-07</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
