<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Modelling Assumption-Based Reasoning Using Contexts</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Mark</forename><surname>Jago</surname></persName>
							<email>mtw@cs.nott.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department" key="dep1">School of Computer Science</orgName>
								<orgName type="department" key="dep2">Department of Philosophy</orgName>
								<orgName type="institution">University of Nottingham</orgName>
								<address>
									<settlement>Nottingham</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Modelling Assumption-Based Reasoning Using Contexts</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D735A0206D5CB3FA602F6C0EE62B0D99</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:05+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose a two-fold use of context in reasoning about agents. The first concerns the modelling of beliefs which are entertained only within the scope of some assumption. We illustrate this use by describing a logical model of an agent which reasons in a natural deduction style by making assumptions to derive new formulas. The second use of context we propose here concerns the modelling of reasoning as a step-by-step temporal process, allowing us to model agents whose resources are temporally bounded. In such a setting, the beliefs ascribed to an agent need not be closed under consequence, as they are in many traditional epistemic logics.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>This essay is concerned with the modelling of agents who make assumptions and use them in their deductive reasoning. This work differs from other formal accounts of contextual reasoning in that it deals with resource bounded agents, namely agents which require computational resources to deduce new beliefs from old. We take as an illustration an agent who reasons in a natural deduction style, but who takes time to reach its conclusions.</p><p>We treat an assumption as a sentence entertained by an agent for a particular purpose. An assumption is not a belief, yet has the psychological effect of a belief whilst the assumption is entertained. In assuming a formula, one reasons and behaves as if it were the case (this is, then, something like the psychological notion of pretence).</p><p>We use the notion of context to model this process of making assumptions. Following <ref type="bibr" target="#b3">[4]</ref>, a context is a localised set of beliefs, connected to other contexts by inter-context bridge rules. Each context represents the epistemic consequences of an act of pretence (i.e., of assumption making) on behalf of the agent; for example, the context in which an agent makes the assumption that the moon is made of green cheese contains the sentences that the agent would believe, were it to consider that assumption to be true.</p><p>Contexts are a suitable tool for this purpose as they can be embedded, thus allowing one to model the making of assumptions within assumptions. In the single agent setting of this paper, we reserve one special context, the empty context, as a model of the agent's actual beliefs, i.e. those sentences entertained with no assumption whatever. 
In a more general multi-agent setting, we will have a unit context for each agent i, corresponding to each agent i's assumption-free beliefs.</p><p>We should perhaps point out that this is a different use of the term context from that which most frequently appears in the philosophical literature, which denotes certain aspects of the world (an example is Kaplan's <ref type="bibr" target="#b4">[5]</ref> use of such contexts in his analysis of pure indexicals). Here, contexts are comprised of psychological entities (which may be thought of as specific concepts, sentences of an internal language of thought or "mentalese", . . . ).</p><p>We argue that epistemic logics which utilise traditional possible worlds semantics are not suited to this kind of contextual reasoning. An agent may well assume a formula which contradicts its current beliefs; indeed, this is the essence of drawing conclusions by reductio ad absurdum. However, traditional epistemic logics based on classical consequence relations become trivial in the presence of such inconsistencies. This suggests a paraconsistent approach. Paraconsistent logics feature in epistemic frameworks that make use of impossible possible worlds, such as Levesque's logic of explicit and implicit belief.</p><p>However, we also wish to model resource bounded agents; we are particularly interested in agents which take time to reach their conclusions. We should therefore avoid what Perlis et al <ref type="bibr">[3, p.1]</ref> call a "final tray approach" to modelling an agent's beliefs, which tells us what an agent will eventually believe, given its initial beliefs and the rules it reasons with, but not what it believes at a particular point during that reasoning process.</p><p>While such approaches are applicable in many situations, there are cases in which a primary point of the model is to determine whether the agent can reach a given conclusion in a period of time. 
We may cite examples such as verifying security protocols and modelling the behaviour of theorem provers. In our example we discuss an agent who reasons in a natural deduction style, producing one new formula at a time. If we were to assume the synchronous deductive closure of an agent's beliefs within any context, then our agent would be able to reach all conclusions that it possibly could immediately; this defeats the point of modelling, which is to establish whether such an agent could reach a certain conclusion in a set time.</p><p>We take an approach which differs from traditional possible worlds-based epistemic logics. We take a belief to be part of the internal state of an agent, which allows it to act given the appropriate desires and intentions (i.e., goals and plans). We represent the internal state of an agent as a finite set of sentences in some logical language. Whilst we are primarily thinking of rule-based agents, we can accommodate other styles of agent programming by providing a translation of values for program variables into logical formulas, for example. We take as our model of an agent a function which describes how the agent's beliefs change from one deductive cycle to the next. Given a set of initial beliefs, we get a sequence of sets of sentences, each set representing the internal state of the agent at that point in time (figure <ref type="figure" target="#fig_0">1</ref>). We refer to each such set as a timeslice. In addition to modelling how the agent's actual beliefs evolve over a period of time, we can model the beliefs it would come to in the hypothetical circumstance of believing some formula φ. We term these the agent's φ-beliefs and we say they constitute the φ-context. We combine our model with a partitioning of the agent's internal state (at a moment of time) into contexts. This allows us to separate the agent's beliefs entertained under different assumptions, and thus to model multiple assumptions concurrently. 
The context containing the sentences that would eventually be believed if φ were to be believed provides our model of assuming that φ is the case. By placing a total ordering on assumptions that can be made, we can view these contexts as a sequence. Since each context corresponds to assuming some formula, such an ordering simply amounts to a total order on formulas. Since assumptions can be made within assumptions, we can have contexts within contexts, as shown in figure <ref type="figure">2</ref>.</p><formula xml:id="formula_0">beliefs at t = 1 → beliefs at t = 2 → beliefs at t = 3 → beliefs at t = 4 → . . . (each arrow an application of inf)</formula><p>We arrive at a model consisting of a grid of sets of sentences, with rows representing assumption contexts and columns representing timeslices of those contexts, as shown in figure <ref type="figure" target="#fig_1">3</ref>.</p><p>We use a labelled language L trl whose syntax is suited to describing such models. We make use of a set of context letters C and a set of timeslice labels T. Suppose the agent we are modelling derives formulas in a propositional language L over ¬, ∧, ∨ and →. Well-formed formulas of L trl then consist of a label l, which describes a context at a timepoint, and a body, which is a formula taken from the agent's language L. A label l is written as a pair (c, t) where c ∈ C * and t ∈ T (where c is some sequence c 1 , . . . , c n ). Well formed L trl formulas are thus of the form (c, t) : φ. We assume that T is totally ordered; for our purposes, we can take T to be the set of natural numbers. We write rules connecting contexts and timeslices in the style of natural deduction inference rules as follows:</p><formula xml:id="formula_1">(c, t) : φ 1 . . . (c′, t) : φ n
(c′′, t + 1) : ψ</formula><p>Here, c, c′, c′′ are variables ranging over contexts, t ranges over timeslices and φ 1 , . . . , φ n , ψ range over well formed formulas of the agent's language L. 
Note that, while each φ i (i ≤ n) and ψ may belong to a distinct context, rules always connect a timeslice t of the relevant context to the successor timeslice t + 1. Thus, all rules conform to the step logic paradigm, i.e., the idea of modelling inference as a step-by-step process.</p></div>
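The grid of contexts and timeslices, and the step-by-step application of rules, can be sketched in code. The following is a minimal illustration in Python, not the paper's own apparatus: formulas are nested tuples, a context label is a tuple of assumed formulas, and a rule maps the grid at time t to labelled conclusions at t + 1. All function names here are our own.

```python
# Formulas as nested tuples; a context label is a tuple of assumed formulas.
def conj(f, g): return ('and', f, g)

def step(grid, rules):
    """grid: {context label: frozenset of formulas holding at time t}.
    Returns the grid at time t + 1 by pooling every rule's conclusions."""
    new = {c: set() for c in grid}
    for rule in rules:
        for c, phi in rule(grid):
            new.setdefault(c, set()).add(phi)
    return {c: frozenset(fs) for c, fs in new.items()}

def mon(grid):
    # MON: every c-formula at t is still a c-formula at t + 1.
    return [(c, phi) for c, fs in grid.items() for phi in fs]

def and_elim(grid):
    # ∧elimL / ∧elimR: from phi ∧ psi at t, derive phi and psi at t + 1.
    out = []
    for c, fs in grid.items():
        for f in fs:
            if f[0] == 'and':
                out += [(c, f[1]), (c, f[2])]
    return out

p, q = ('atom', 'p'), ('atom', 'q')
t1 = {(): frozenset({conj(p, q)})}      # (•, 1) : p ∧ q
t2 = step(t1, [mon, and_elim])          # (•, 2) : p ∧ q, p, q
```

Iterating `step` yields exactly the sequence of timeslices pictured in figure 1, one set of sentences per deductive cycle.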
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Modelling a Natural Deduction Style Reasoner</head><p>In this section, we introduce as a working example an agent which reasons in a natural deduction style. Such an agent is required to make and later cancel assumptions in order to derive new formulas. Our aim is to model this process using contexts connected by assumption rules, similar to the bridge rules of context logic. Moreover, we model the agent's deliberation as a step-by-step process of rule application. We again use contexts to represent this process, this time connected by temporal rules, similar to the rules of step logic <ref type="bibr" target="#b2">[3]</ref>.</p><p>We thus make a dual use of the notion of context to represent both temporal and assumption information. It is important to note that contexts representing assumptions and contexts representing temporal states are not distinct; rather, each context represents both temporal information and information about assumptions. A third use of contexts would be to use distinct contexts to represent different agents. Although we do not pursue this interpretation here, the formal apparatus it requires is already present in the logical framework we present in section 5 below.</p><p>We begin by looking at how temporal rules relate timeslices. Intuitively, each timeslice label t ∈ T corresponds to a timepoint in a linear, non-branching temporal structure and the timeslice labelled by t contains the beliefs of our agent at that time (within some context). As a convenient shorthand, we will often write t + 1 to denote the successor of t, although we assume no need for arithmetical operations. Temporal rules may connect a timeslice labelled with t to one labelled with t′ only when t &lt; t′. 
This respects the fact that causation, as a matter of metaphysical necessity, is forwards causation and thus our agent can only reason forwards in time.</p><p>Simple operations that our agent can perform from one timeslice to the next include: ∧-introduction and elimination, ¬¬-elimination and ∨-introduction. These inferences are simple as they operate on formulas in a single context. Example temporal rules are listed below:</p><formula xml:id="formula_2">(c, t) : φ   (c, t) : ψ
(c, t + 1) : φ ∧ ψ   ∧ int

(c, t) : φ ∧ ψ
(c, t + 1) : φ   ∧ elimL

(c, t) : φ ∧ ψ
(c, t + 1) : ψ   ∧ elimR

(c, t) : ¬¬φ
(c, t + 1) : φ   ¬ elim</formula><p>To illustrate how these temporal rules connect timeslices, figure <ref type="figure" target="#fig_2">4</ref> below shows an agent's beliefs evolving from φ ∧ ¬¬ψ at some time t. It is important to note that the formulas which hold at a timeslice only do so on the basis of their appearing in the consequent of some rule. In particular, there is nothing inherently monotonic about the Timed Reasoning framework being described here. ∧ int tells us, for example, that φ ∧ ψ will be a t + 1-formula when φ and ψ are both t-formulas, but we have not said whether φ or ψ are themselves t + 1-formulas. It may be, for example, that each timeslice (of a particular context) can contain only a fixed number of formulas, due to an agent's memory restrictions. Such a case would clearly be nonmonotonic; that is, Γ ⊢ φ would not ensure that Γ ∪ ∆ ⊢ φ for any sets of formulas Γ, ∆ and formula φ, for formulas derived from ∆ might take up space in some future timeslice required to store formulas of Γ. 
However, since we require the natural deduction agent we present here to reason monotonically, we need to add an extra temporal rule to ensure just that:</p><formula xml:id="formula_3">(c, t) : φ
(c, t + 1) : φ   MON</formula><p>We now introduce assumption contexts and the corresponding bridge rules, which we will refer to as assumption rules. We take as our examples reasoning by reductio ad absurdum and →-introduction, as both require a reasoner to make an assumption which is later withdrawn. To begin with, consider using reductio ad absurdum to derive ¬(φ ∧ ¬φ); one first assumes φ ∧ ¬φ, derives each conjunct using ∧-elimination and, on noting the contradiction, establishes that the assumed formula cannot be the case.</p><p>We use contexts to model the making of assumptions: formulas entertained within a context are thus those within the scope of the corresponding assumption. It is therefore useful to label such contexts with the assumptions they correspond to, i.e., the very formula assumed in making the assumption. Thus, we take C, our set of context symbols, to be L, the internal language of our agent. Contexts are thus labelled with sequences of well formed formulas of L. We denote the empty context by •.</p><p>We must ensure that, when a formula φ features in a context label c, φ also appears as a formula in that context. In addition, we must make sure that c formulas are also c′ formulas whenever c is a subsequence of c′. This is so we respect the fact that we may use the formulas holding in a context c in any context which we opened from within c. The rule CON licenses (c, t) : φ whenever φ ∈ c, while the rule INHERIT licenses the step from (c 1 , t) : φ to (c 2 c 1 c 3 , t + 1) : φ, for any context label c 2 c 1 c 3 containing c 1 as a subsequence. Embedded contexts are represented as sequences read left-to-right; thus, if the sequence φ, (ψ ∨ χ) were used to label a context, that context would represent the agent as having assumed φ and then, within the scope of that assumption, assuming ψ ∨ χ as well. 
As mentioned above, cφ denotes the concatenation to the right of φ to c (where c is some context label). Thus, cφ can be used to label the context which represents the agent assuming φ from the c-context. We may then write a rule for reductio ad absurdum as follows:</p><formula xml:id="formula_4">(cφ, t) : χ   (cφ, t) : ¬χ
(c, t + 1) : ¬φ   ¬ int</formula><p>We may read this rule as saying: having assumed φ from a context c and having derived a contradiction (i.e., some formula χ and its negation) in cφ, we may infer ¬φ in c at the next timeslice. Figure <ref type="figure">5</ref> shows how we model an agent using reductio ad absurdum to derive ¬(φ ∧ ¬φ).</p><p>An agent derives an implication in much the same way. Having assumed φ in a context c and having then derived ψ, one may infer φ → ψ in the original context c at the next timeslice: </p><formula xml:id="formula_5">(cφ, t) : ψ
(c, t + 1) : φ → ψ   → int</formula></div>
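The rules of this section can be sketched together as one step function over the context grid. The following Python sketch uses our own tuple encoding, not the paper's syntax: CON seeds a context with its own assumptions, INHERIT copies outer-context formulas into embedded contexts, ¬int closes an assumption by reductio, and →int discharges an assumption into an implication.

```python
def neg(f): return ('not', f)
def conj(f, g): return ('and', f, g)

def step(grid):
    """One deductive cycle: grid maps context labels (tuples of assumed
    formulas) at time t to the sets holding at t + 1."""
    new = {c: set(fs) for c, fs in grid.items()}            # MON
    for c, fs in grid.items():
        for f in fs:
            if f[0] == 'and':                               # ∧elimL / ∧elimR
                new[c] |= {f[1], f[2]}
            if f[0] == 'not' and f[1][0] == 'not':          # ¬¬elim
                new[c].add(f[1][1])
        if c:
            if any(neg(f) in fs for f in fs):               # ¬int: a contradiction
                new[c[:-1]].add(neg(c[-1]))                 # under phi yields ¬phi
            for f in fs:                                    # →int: psi under phi
                new[c[:-1]].add(('imp', c[-1], f))          # yields phi → psi
    for c in grid:
        new[c] |= set(c)                                    # CON
        for c2 in grid:                                     # INHERIT: c-formulas
            if len(c2) > len(c) and c2[:len(c)] == c:       # hold in embedded
                new[c2] |= grid[c]                          # contexts
    return new

# Derive ¬(p ∧ ¬p) by assuming p ∧ ¬p, as in the reductio example:
p = ('atom', 'p')
a = conj(p, neg(p))
grid = {(): set(), (a,): set()}
for _ in range(3):
    grid = step(grid)
assert neg(a) in grid[()]
```

After three cycles the empty context holds ¬(p ∧ ¬p): the assumption is seeded by CON, its conjuncts are split by ∧-elimination, and ¬int discharges the contradiction into the outer context, mirroring figure 5.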
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Adequacy of the Model</head><p>We now show that our framework is adequate as a model of the natural deduction agent we have described, in that, for any propositional formula derivable in standard natural deduction, there is a corresponding labelled formula derivable using our labelled logic. Recall that "•" is a context label which denotes the empty context and that the formulas appearing in a context label represent assumptions used to derive the formulas which hold in that context. Thus, formulas which hold in • at some timepoint t hold with no assumptions being made. If we think of the formulas holding in a context c as what the agent's beliefs would be, were it to believe the formulas in c, then we may think of the formulas holding in • as the agent's actual beliefs.</p><p>Fig. 5. Reductio ad absurdum. Fig. <ref type="figure" target="#fig_3">6</ref>. → introduction.</p><p>In this section, we will show that, for any (unlabelled) propositional formula φ of the agent's internal language L derivable from a set of (unlabelled) propositional formulas Γ using standard natural deduction rules, it is possible to derive a labelled formula (•, t) : φ, for some timepoint t, from a labelled version of Γ . We concentrate on an agent with an internal language L ¬,∧ over ¬, ∧ and a set of propositional letters P only (of course, the other connectives could be introduced by definition in the usual way). We will refer to the set of bridge rules we have introduced for these connectives, namely ∧ int , ∧ elimL , ∧ elimR , ¬ int and ¬ elim , as well as the rules MON, INHERIT and CON, as R.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 1 (Derivation).</head><p>A labelled formula (c, t) : φ is derivable from a set of labelled formulas Γ l using the rules in R, written Γ l ⊢ R (c, t) : φ, if there is a sequence of labelled formulas (c 1 , t 1 ) : φ 1 , . . . , (c k , t k ) : φ k such that:</p><p>1. each formula in the sequence is either a member of Γ l , or is obtained from earlier formulas in the sequence by applying one of the bridge rules in R; and 2. the last labelled formula in the sequence is (c, t) : φ.</p><p>Let Γ be a set of unlabelled formulas and φ an unlabelled formula; we write Γ ⊢ nd φ when φ can be derived from Γ using the usual rules of natural deduction. We now form a set of labelled formulas Γ l = {(•, 1) : φ | φ ∈ Γ } which corresponds to placing each φ ∈ Γ in the empty (i.e., assumption-free) context at time 1.</p><p>Theorem 1. Let Γ be a set of unlabelled propositional formulas and φ be an unlabelled propositional formula in the agent's language L ¬,∧ . Then Γ ⊢ nd φ iff for some t, Γ l ⊢ R (•, t) : φ, where</p><formula xml:id="formula_7">Γ l = {(•, 1) : ψ | ψ ∈ Γ }.</formula><p>The proof can be found in <ref type="bibr" target="#b5">[6]</ref>.</p><p>Corollary 1. For any finite set of labelled formulas Γ l and labelled formula φ l , it is decidable whether Γ l ⊢ R φ l .</p></div>
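Definition 1's notion of derivation can be sketched as a bounded forward-chaining search, here restricted to the ∧ fragment for brevity. The encoding and function names are ours, not the paper's: labelled formulas are triples (c, t, phi), and Γ l is placed in the empty context at time 1.

```python
def conj(f, g): return ('and', f, g)

def derive(gamma_l, goal, max_passes=5):
    """Forward-chain MON and ∧elim over labelled formulas (c, t, phi);
    return a witnessing sequence once `goal` appears, or None. Bounding
    the number of passes makes the check terminate, echoing Corollary 1."""
    seq = list(gamma_l)
    for _ in range(max_passes):
        for (c, t, phi) in list(seq):
            conclusions = [(c, t + 1, phi)]                 # MON
            if phi[0] == 'and':                             # ∧elimL / ∧elimR
                conclusions += [(c, t + 1, phi[1]),
                                (c, t + 1, phi[2])]
            for lf in conclusions:
                if lf not in seq:
                    seq.append(lf)
        if goal in seq:
            return seq
    return None

p, q = ('atom', 'p'), ('atom', 'q')
gamma_l = [((), 1, conj(p, q))]          # Gamma_l = {(•, 1) : p ∧ q}
assert derive(gamma_l, ((), 2, p)) is not None
```

Each pass of the loop corresponds to one application of rules across a timeslice, so the returned sequence is a derivation in the sense of Definition 1.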
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Semantics</head><p>In this section, we introduce a semantics with respect to which the rules R presented above are sound and complete. To establish completeness, we introduce the notions of a sufficient model and of a minimal model. To keep notation uniform, we use φ, ψ to denote arbitrary unlabelled formulas and Γ to denote a set thereof; the superscript l denotes labelled versions, e.g., φ l and Γ l denote an arbitrary labelled formula and a set thereof, labelled as described above.</p><p>Definition 2 (Models). Let Σ ⊂ L * be a finite set of finite sequences of formulas over L and inf be a function of type</p><formula xml:id="formula_8">Σ −→ N −→ ℘(L).

A model M is a tuple ⟨Σ, inf , {m c t | c ∈ C * , t ∈ N}⟩ where:

1. each m c t is a finite set of formulas in L such that m c t+1 = inf (c, t);
2. inf satisfies:

φ 1 ∧ φ 2 ∈ m c t =⇒ φ 1 , φ 2 ∈ inf (c, t)
¬¬φ ∈ m c t =⇒ φ ∈ inf (c, t)
φ ∈ m c t =⇒ φ ∈ inf (c, t)
φ, ¬φ ∈ m cψ t =⇒ ¬ψ ∈ inf (c, t)
φ ∈ c =⇒ φ ∈ inf (c, t)
φ ∈ m c t =⇒ φ ∈ inf (c′, t), where c′ = · · · c · · ·

When a model M</formula><p>minimally (w.r.t. ⊆) satisfies these conditions, we say that it is a minimal model.</p><p>Intuitively, inf (c, t) looks at a timeslice t and infers which formulas should be in the context c at the next timeslice. Definition 3 (Satisfaction). We say a labelled formula of the form (c, t) : φ is satisfied by a model M , written M |= (c, t) : φ, iff φ ∈ m c t .</p></div>
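The single-context closure clauses of Definition 2 can be sketched directly (the cross-context clauses need the whole grid and are omitted here). Given the time-t formula set m of a context c, the sketch computes the least inf(c, t) the listed conditions force; the tuple encoding is ours, not the paper's.

```python
def neg(f): return ('not', f)

def inf_local(c, m):
    """Least set forced by the local clauses of Definition 2 for context c
    (c is a tuple of assumed formulas) and time-t formula set m."""
    out = set(m)                       # phi ∈ m  =>  phi ∈ inf(c, t)
    for f in m:
        if f[0] == 'and':              # phi1 ∧ phi2 ∈ m  =>  phi1, phi2 ∈ inf(c, t)
            out |= {f[1], f[2]}
        if f[0] == 'not' and f[1][0] == 'not':
            out.add(f[1][1])           # ¬¬phi ∈ m  =>  phi ∈ inf(c, t)
    out |= set(c)                      # phi ∈ c  =>  phi ∈ inf(c, t)
    return out

p, q = ('atom', 'p'), ('atom', 'q')
m1 = {('and', p, neg(neg(q)))}         # m at time t
m2 = inf_local((), m1)                 # forces p and ¬¬q at t + 1
m3 = inf_local((), m2)                 # a further step forces q
```

Since each clause only ever adds subformulas of what is already present, the minimal model is obtained by taking exactly these forced sets and nothing more, which is what the minimality condition (w.r.t. ⊆) demands.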
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 4 (Validity and Entailment).</head><p>A labelled formula φ l is (i) Σ-valid, written |= Σ φ l , iff M |= φ l for all models M over Σ and (ii) a Σ-consequence of a set of labelled formulas Γ l , written Γ l |= Σ φ l , whenever φ l is satisfied by all models M over Σ which also satisfy every ψ l ∈ Γ l .</p><p>Since different derivations may rely on making different sequences of assumptions, there is a special class of models, comprising only those that ensure all such assumptions are represented. We call such models sufficient for that particular derivation. Note that we could not simply assume that Σ contains all possible sequences over L in models in this class, for this would allow the possibility of infinitely many formulas being introduced to a local state by the inf condition, which would violate the definition of each m c t as a finite set. Instead, we use the following definition of sufficiency: Definition 5 (Sufficient Models). A model M is sufficient for a pair ⟨Γ l , φ l ⟩ if either Γ l ⊬ R φ l or else there exists a derivation of Γ l ⊢ R φ l containing a sequence of labelled formulas (c 1 , t 1 ) : φ 1 , . . . , (c k , t k ) : φ k such that each c i (i ≤ k) is in Σ, the sequence set of M . Given a semantic sequent Γ l |= Σ φ l , we say that Σ is sufficient (for that sequent) iff every model M over Σ is sufficient for ⟨Γ l , φ l ⟩. We can now establish that the rules R are sound and complete w.r.t. the semantics just proposed. We begin with the following lemma: Lemma 1. Let M be minimal for Γ l and sufficient for ⟨Γ l , (c, t) : φ⟩. Then for every formula φ, φ ∈ m c t iff Γ l ⊢ R (c, t) : φ.</p><p>Again, the proof can be found in <ref type="bibr" target="#b5">[6]</ref>.</p></div>
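The sufficiency condition of Definition 5 amounts to a simple membership check, which the following sketch (in our tuple encoding, with our own names) makes concrete: a set Σ of context labels suffices for a derivation when every context label occurring among the derivation's labelled formulas lies in Σ.

```python
def sufficient(sigma_set, derivation):
    """derivation: a list of labelled formulas (c, t, phi); sigma_set: a set
    of context labels. True iff every context used in the derivation is
    represented in sigma_set."""
    return all(c in sigma_set for (c, _t, _phi) in derivation)

p = ('atom', 'p')
a = ('and', p, ('not', p))
# A reductio-shaped derivation: work in the a-context, conclude in •.
deriv = [((a,), 1, a), ((a,), 2, p), ((a,), 2, ('not', p)), ((), 3, ('not', a))]
assert sufficient({(), (a,)}, deriv)
assert not sufficient({()}, deriv)
```

This is why Σ must be chosen with the intended derivation in mind: a model whose Σ omits the assumption context (a,) has no local states in which the reductio can be carried out.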
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Theorem 2 (Soundness &amp; Completeness).</head><p>There is a sufficient Σ such that</p><formula xml:id="formula_9">Γ l ⊢ R φ l iff Γ l |= Σ φ l .</formula><p>Proof. Soundness is standard: clearly, the rules in R preserve validity. For completeness, suppose Γ l |= Σ φ l . By Definitions 2 and 5, there is a minimal model</p><formula xml:id="formula_10">M over Σ of Γ l such that M |= φ l . By Lemma 1, Γ l ⊢ R φ l .

Corollary 2. φ is a classical consequence of Γ iff Γ l |= Σ (•, t) : φ for some t.

Corollary 3. It is decidable whether Γ l |= Σ φ l .</formula><p>Before we conclude, we briefly look at how the framework presented here can incorporate goal-centered behaviour to avoid some of the indeterminacy involved in choosing assumptions. The agent we have described so far is nondeterministic in the sense that it begins by guessing which assumptions to make; our model begins with the corresponding contexts and derives the consequences.</p><p>In this sense, we have a model of the consequences of making some assumption, rather than a model of the reasons an agent may have for doing so.</p><p>To incorporate this latter notion, we can add goal contexts to our framework. In the setting of our natural deduction reasoner, a goal context contains the formula or formulas which the agent would like to try to derive. For example, if a goal is to derive φ → ψ, one can begin by assuming φ. Within the φ-context, one's subgoal is to derive ψ. Thus, each context will have a goal context associated with it. We can refine the behaviour of our model by introducing bridge rules between goal contexts, labelled with a sequence ending in a unique goal symbol G, and assumption contexts. We should stress, however, that this section is not intended as a full account of a deterministic natural deduction theorem prover. 
It is offered as a demonstration of how propositional attitudes other than beliefs, such as desires (goals) and intentions (plans), may be included within the Timed Reasoning Logics framework. Figure <ref type="figure">7</ref> below shows a model of an agent incorporating these rules. Goal contexts are represented as the smaller rectangles below the assumption contexts they correspond to; we have left out formulas such as (φ ∧ φ) ∧ φ.</p><p>Semantically, a model will still contain a set m c t for each context in Σ at each timeslice t, but we may now calculate which contexts are required in Σ for a sufficient model M , independently of the rules in R. Given a goal α, we set Σ = σ(α), where σ : L trl −→ ℘(L trl ) satisfies the following recursive equations:</p><formula xml:id="formula_11">σ(p) = ∅<label>(1)</label>
σ(φ → ψ) = {φ} ∪ σ(ψ)<label>(2)</label></formula><p>A full specification of the semantics for goal-assumption interaction is left for future work.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. Timeslices</figDesc></figure>
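The goal-decomposition function σ of equations (1)-(2) can be sketched directly; the tuple encoding is ours. σ collects the assumptions the agent would make while pursuing an implication goal, so σ(φ → (ψ → φ)) yields {φ, ψ}, exactly the contexts the →-introduction example requires.

```python
def sigma(goal):
    """Assumptions needed to pursue `goal`, per equations (1)-(2)."""
    if goal[0] == 'imp':                 # sigma(phi → psi) = {phi} ∪ sigma(psi)
        return {goal[1]} | sigma(goal[2])
    return set()                         # sigma(p) = ∅

p, q = ('atom', 'p'), ('atom', 'q')
goal = ('imp', p, ('imp', q, p))         # the goal phi → (psi → phi)
assert sigma(goal) == {p, q}
```

Because σ strips one implication per recursive call, it terminates on any finite formula, so the context set Σ it produces is always finite, as Definition 2 requires.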
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 2. Embedding contexts; Fig. 3. A grid of assumption contexts and timeslices (t = 1, t = 2, t = 3)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Temporal rules</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 6</head><label>6</label><figDesc>Figure 6 below shows a model of an agent deriving φ → (ψ → φ) (formulas of the form φ ∧ φ have been left out of the diagram to save space).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Goal context rules</head><label></label><figDesc>(cG, t) : φ → ψ / (cφ, t + 1) : φ and (cG, t) : φ → ψ / (cφG, t + 1) : ψ. (Note that the former replaces the earlier rule (c, t) : φ for φ ∈ c, and that the INHERIT rule (section 3) must now only apply when neither context is a goal context.)</figDesc></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This research is supported by the AHRB. Thanks to Natasha Alechina, Thorsten Altenkirch, Brian Logan and Sieuwert van Otterloo for commenting on earlier drafts of the paper.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Fig. <ref type="figure">7</ref>. Reasoning with goal contexts</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Summary</head><p>We have argued that contexts are suitable tools for reasoning about agents in at least two ways. Firstly, they can be used to model assumptions, thus providing a finer-grained account than those in which a formula is either believed or disbelieved. Secondly, contexts can represent temporal stages, or timeslices, of an agent's reasoning process. This is a crucial notion if we aim to model resource bounded agents. A further application of context is to model multiple agents, each agent being modelled in its own context. Such an approach is explored in <ref type="bibr" target="#b0">[1]</ref> and <ref type="bibr" target="#b1">[2]</ref>. Future work aims to explore a full semantics for interaction between beliefs (including assumptions), goals/desires and plans/intentions and to look at real-world applications of assumption-making in reasoning, with games such as Cluedo as a test case.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A complete and decidable logic for resource-bounded agents</title>
		<author>
			<persName><forename type="first">N</forename><surname>Alechina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Logan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Whitsey (Jago)</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004)</title>
				<meeting>Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004)</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2004-07">July 2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Modelling communicating agents in timed reasoning logics</title>
		<author>
			<persName><forename type="first">Natasha</forename><surname>Alechina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brian</forename><surname>Logan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Whitsey (Jago)</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">JELIA</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">José Júlio</forename><surname>Alferes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">João Alexandre</forename><surname>Leite</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="volume">3229</biblScope>
			<biblScope unit="page" from="95" to="107" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A preliminary excursion into Step-Logics</title>
		<author>
			<persName><forename type="first">J</forename><surname>Drapkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Perlis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the SIGART International Symposium on Methodologies for Intelligent Systems</title>
				<meeting>the SIGART International Symposium on Methodologies for Intelligent Systems</meeting>
		<imprint>
			<date type="published" when="1986">1986</date>
			<biblScope unit="page" from="262" to="269" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Local models semantics, or contextual reasoning = locality + compatibility</title>
		<author>
			<persName><forename type="first">C</forename><surname>Ghidini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giunchiglia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">127</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="221" to="259" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Demonstratives</title>
		<author>
			<persName><forename type="first">David</forename><surname>Kaplan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Themes from Kaplan</title>
				<editor>
			<persName><forename type="first">J</forename><surname>Almog</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Perry</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Wettstein</surname></persName>
		</editor>
		<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="1989">1989</date>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="481" to="563" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Timed reasoning logics: An example</title>
		<author>
			<persName><forename type="first">M</forename><surname>Whitsey (Jago)</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the Logic and Communication in Multi-Agent Systems workshop</title>
				<meeting>of the Logic and Communication in Multi-Agent Systems workshop<address><addrLine>LCMAS</addrLine></address></meeting>
		<imprint>
			<publisher>Loria</publisher>
			<date type="published" when="2004">2004. 2004</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
