<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Artificial Intelligence</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Modelling Assumption-Based Reasoning Using Contexts</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mark Jago</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computer Science &amp; Department of Philosophy, University of Nottingham, Nottingham</institution>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2001</year>
      </pub-date>
      <volume>127</volume>
      <issue>2</issue>
      <abstract>
        <p>We propose a two-fold use of context in reasoning about agents. The first concerns the modelling of beliefs which are entertained only within the scope of some assumption. We illustrate this use by describing a logical model of an agent which reasons in a natural deduction style by making assumptions to derive new formulas. The second use of context we propose here concerns the modelling of reasoning as a step-by-step temporal process, allowing us to model agents whose resources are temporally bounded. In such a setting, the beliefs ascribed to an agent need not be closed under consequence, as they are in many traditional epistemic logics.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>This essay is concerned with the modelling of agents who make assumptions
and use them in their deductive reasoning. This work differs from other formal
accounts of contextual reasoning in that it deals with resource-bounded agents,
namely agents which require computational resources to deduce new beliefs
from old. We take as an illustration an agent who reasons in a natural deduction
style, but who takes time to reach its conclusions.</p>
      <p>We treat an assumption as a sentence entertained by an agent for a particular
purpose. An assumption is not a belief, yet has the psychological effect of a
belief whilst the assumption is entertained. In assuming a formula, one reasons
and behaves as if it were the case (this is, then, something like the psychological
notion of pretence).</p>
      <p>We use the notion of context to model this process of making assumptions.
Following [4], a context is a localised set of beliefs, connected to other
contexts by inter-context bridge rules. Each context represents the epistemic
consequences of an act of pretence (i.e., of assumption making) on behalf of the
agent; for example, the context in which an agent makes the assumption that
the moon is made of green cheese contains the sentences that the agent would
believe, were it to consider that assumption to be true.</p>
      <p>This research is supported by the AHRB. Thanks to Natasha Alechina, Thorsten Altenkirch,
Brian Logan and Sieuwert van Otterloo for commenting on earlier drafts of the paper.</p>
      <p>Contexts are a suitable tool for this purpose as they can be embedded, thus
allowing one to model the making of assumptions within assumptions. In the
single agent setting of this paper, we reserve one special context, the empty
context, as a model of the agent’s actual beliefs, i.e. those sentences entertained
with no assumption whatever. In a more general multi-agent setting, we will
have a unit context for each agent i, corresponding to each agent i’s
assumption-free beliefs.</p>
      <p>We should perhaps point out that this is a different use of the term context
from that which most frequently appears in the philosophical literature, which
denotes certain aspects of the world (an example is Kaplan’s [5] use of such
contexts in his analysis of pure indexicals). Here, contexts are comprised of
psychological entities (which may be thought of as specific concepts, sentences
of an internal language of thought or “mentalese”, . . . ).</p>
      <p>We argue that epistemic logics which utilise traditional possible worlds
semantics are not suited to this kind of contextual reasoning. An agent may
well assume a formula which contradicts its current beliefs; indeed, this is the
essence of drawing conclusions by reductio ad absurdum. However, traditional
epistemic logics based on classical consequence relations become trivial in the
presence of such inconsistencies. This suggests a paraconsistent approach.
Paraconsistent logics feature in epistemic frameworks that make use of impossible
possible worlds, such as Levesque’s logic of explicit and implicit belief.</p>
      <p>However, we also wish to model resource bounded agents; we are
particularly interested in agents which take time to reach their conclusions. We should
therefore avoid what Perlis et al [3, p.1] call a “final tray approach” to
modelling an agent’s beliefs, which tells us what an agent will eventually believe,
given its initial beliefs and the rules it reasons with, but not what it believes at a
particular point during that reasoning process.</p>
      <p>While such approaches are applicable in many situations, there are cases
in which a primary point of the model is to determine whether the agent can
reach a given conclusion in a period of time. We may cite examples such as
verifying security protocols and modelling the behaviour of theorem provers. In
our example we discuss an agent who reasons in a natural deduction style, producing
one new formula at a time. If we were to assume the synchronous deductive
closure of an agent’s beliefs within any context then our agent would be able to
reach all conclusions that it possibly could immediately; this defeats the point
of modelling, which is to establish whether such an agent could reach a certain
conclusion in a set time.</p>
      <p>We take an approach which differs from traditional possible worlds-based
epistemic logics. We take a belief to be part of the internal state of an agent, which
allows it to act given the appropriate desires and intentions (i.e., goals and plans).
We represent the internal state of an agent as a finite set of sentences in some
logical language. Whilst we are primarily thinking of rule-based agents, we can
accommodate other styles of agent programming by providing a translation of
values for program variables into logical formulas, for example. We take as our
model of an agent a function which describes how the agent’s beliefs change
from one deductive cycle to the next. Given a set of initial beliefs, we get a
sequence of sets of sentences, each set representing the internal state of the agent
at that point in time (figure 1).</p>
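      <p>To make the picture concrete, the deductive-cycle model just described can be sketched in Python. This is a schematic illustration only, not part of the formal framework: the function names and the toy inference rule are our own invention.</p>

```python
# Hypothetical sketch: an agent modelled as a function from one belief
# state (a finite set of sentences) to the next deductive cycle's state.
def run_agent(initial_beliefs, step, cycles):
    """Return the sequence of timeslices produced by iterating `step`."""
    timeslices = [frozenset(initial_beliefs)]
    for _ in range(cycles):
        timeslices.append(frozenset(step(timeslices[-1])))
    return timeslices

def conj_step(beliefs):
    """Toy inference function: keep the old beliefs and add conjunctions.

    (A genuinely resource-bounded agent producing one formula per cycle
    would need a selection strategy; here we simply add everything
    derivable by one application of the rule.)
    """
    new = set(beliefs)
    for a in beliefs:
        for b in beliefs:
            new.add(("and", a, b))  # formulas encoded as nested tuples
    return new

history = run_agent({"p", "q"}, conj_step, 2)
```

      <p>Running the function from an initial belief set yields the sequence of belief sets pictured in figure 1, one set per deductive cycle.</p>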
      <p>beliefs
at t = 1
in-f
beliefs
at t = 2
in-f
beliefs
at t = 3
in-f
beliefs
at t = 4
. . .</p>
      <p>We refer to each such set at a timeslice. In addition to modelling how the
agent’s actual beliefs evolve over a period of time, we can model the beliefs it
would come to in the hypothetical circumstance of believing some formula φ .
We term these the agent’s φ -beliefs and we say they constitute the φ -context.
We combine our model with a partitioning of the agent’s internal state (at a
moment of time) into contexts. This allows us to separate the agent’s beliefs
entertained under different assumptions, and thus to model multiple assumptions
concurrently. The context containing the sentences that would eventually be
believed if φ were to be believed provides our model of assuming that φ is the
case. By placing a total ordering on assumptions that can be made, we can view
these contexts as a sequence. Since each context corresponds to assuming some
formula, such an ordering simply amounts to a total order on formulas. Since
assumptions can be made within assumptions, we can have contexts within
contexts, as shown in figure 2.</p>
      <p>We arrive at a model consisting of a grid of sets of sentences, with rows
representing assumption contexts and columns representing timeslices of those
contexts, as shown in figure 3.</p>
      <p>We use a labelled language Ltrl whose syntax is suited to describing such
models. We make use of a set of context letters C and a set of timeslice labels
T. Each context is labelled with a finite sequence c¯ = ⟨c1, . . . , cn⟩ where each
ci ∈ C (i.e. context labels are words whose alphabet is C). We denote the set
of all context labels by C∗. For convenience, we write c¯k to denote the result of
concatenating some k ∈ C to a sequence c¯ ∈ C∗.</p>
      <p>Suppose the agent we are modelling derives formulas in a propositional
language L over ¬, ∧, ∨ and →. Well-formed formulas of Ltrl then consist of a
label l, which describes both a context at a timepoint, and a body, which is a
formula taken from the agent’s language L. A label l is written as a pair (c¯, t)
where c¯ ∈ C∗ and t ∈ T (where c¯ is some sequence c1, . . . , cn). Well-formed
Ltrl formulas are thus of the form (c¯, t) : φ. We assume that T is totally ordered;
for our purposes, we can take T to be the set of natural numbers. We write rules
connecting contexts and timeslices in the style of natural deduction inference
rules as follows:
(c¯, t) : φ1    . . .    (c¯′, t) : φn
(c¯″, t + 1) : ψ
Here, c¯, c¯′, c¯″ are variables ranging over contexts, t ranges over timeslices and
φ1, . . . , φn, ψ range over well-formed formulas of the agent’s language L. Note
that, while each φi (i ≤ n) and ψ may belong to a distinct context, rules always
connect a timeslice t of the relevant context to the successor timeslice t + 1. Thus,
all rules conform to the step logic paradigm, i.e., the idea of modelling inference
as a step-by-step process.</p>
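      <p>The labelled syntax lends itself to a schematic rendering in code. The following Python sketch is our own invention (not part of the formal system): a context label is a tuple of formulas, T is modelled by the integers, and ∧int is given as one example of a rule taking premises at t to a conclusion at t + 1.</p>

```python
# Hypothetical encoding of labelled formulas (c¯, t) : φ.
from typing import NamedTuple

class Labelled(NamedTuple):
    context: tuple   # c¯ ∈ C*; the empty tuple () is the empty context "·"
    time: int        # t ∈ T, modelled by the natural numbers
    body: object     # φ, a formula of the agent's language L

def and_int(p1: Labelled, p2: Labelled):
    """∧int as a step rule: premises at t license a conclusion at t + 1."""
    if p1.context == p2.context and p1.time == p2.time:
        return Labelled(p1.context, p1.time + 1, ("and", p1.body, p2.body))
    return None  # rule does not apply

concl = and_int(Labelled((), 0, "p"), Labelled((), 0, "q"))
```

      <p>Here `concl` is the labelled formula (· , 1) : p ∧ q, obtained from (· , 0) : p and (· , 0) : q.</p>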
    </sec>
    <sec id="sec-2">
      <title>Modelling a Natural Deduction Style Reasoner</title>
      <p>In this section, we introduce as a working example an agent which reasons in
a natural deduction style. Such an agent is required to make and later cancel
assumptions in order to derive new formulas. Our aim is to model this process
using contexts connected by assumption rules (similar to the bridge rules of
context logic). Moreover, we model the agent’s deliberation as a step-by-step
process of rule application. We again use contexts to represent this process, this
time connected by temporal rules, similar to the rules of step logic [3].</p>
      <p>We thus make a dual use of the notion of context to represent both temporal
and assumption information. It is important to note that contexts representing
assumptions and contexts representing temporal states are not distinct; rather,
each context represents both temporal information and information about
assumptions. A third use of contexts would be to use distinct contexts to represent
different agents. Although we do not pursue this interpretation here, the formal
apparatus it requires is already present in the logical framework we present in
section 5 below.</p>
      <p>We begin by looking at how temporal rules relate timeslices. Intuitively, each
timeslice label t ∈ T corresponds to a timepoint in a linear, non-branching
temporal structure and the timeslice labelled by t contains the beliefs of our
agent at that time (within some context). As a convenient shorthand, we will
often write t + 1 to denote the successor of t, although we assume no need for
arithmetical operations. Temporal rules may connect a timeslice labelled with t
to one labelled with t′ only when t &lt; t′. This respects the fact that causation,
as a matter of metaphysical necessity, runs forwards in time, and thus our agent
can only reason forwards in time.</p>
      <p>Simple operations that our agent can perform from one timeslice to the next
include: ∧-introduction and elimination, ¬¬-elimination and ∨-introduction.
These inferences are simple as they operate on formulas in a single context.
Example temporal rules are listed below:
(c¯, t) : φ    (c¯, t) : ψ
(c¯, t + 1) : φ ∧ ψ    (∧int)
(c¯, t) : φ ∧ ψ
(c¯, t + 1) : φ    (∧elimL)
(c¯, t) : φ ∧ ψ
(c¯, t + 1) : ψ    (∧elimR)
(c¯, t) : ¬¬φ
(c¯, t + 1) : φ    (¬elim)
To illustrate how these temporal rules connect timeslices, figure 4 below shows
an agent’s beliefs evolving from φ ∧ ¬¬ψ at some time t:</p>
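      <p>As an informal sketch (with invented encodings: conjunctions as `("and", φ, ψ)` tuples, negations as `("not", φ)`), a single application of the temporal rules maps the t-formulas of one timeslice to the (t + 1)-formulas of the next, echoing the derivation from φ ∧ ¬¬ψ in figure 4:</p>

```python
# Hypothetical one-step application of the temporal rules ∧int, ∧elimL,
# ∧elimR and ¬elim within a single context: t-formulas in, (t+1)-formulas out.
def temporal_step(beliefs):
    out = set()
    for f in beliefs:
        if isinstance(f, tuple) and f[0] == "and":   # ∧elimL / ∧elimR
            out.add(f[1])
            out.add(f[2])
        if (isinstance(f, tuple) and f[0] == "not"
                and isinstance(f[1], tuple) and f[1][0] == "not"):
            out.add(f[1][1])                         # ¬elim
    for a in beliefs:
        for b in beliefs:
            out.add(("and", a, b))                   # ∧int
    return out

# From φ ∧ ¬¬ψ at t we get φ and ¬¬ψ at t + 1, and ψ only at t + 2.
# Note that nothing here makes old formulas persist: persistence is the
# job of a separate rule (MON in the text).
t0 = {("and", "p", ("not", ("not", "q")))}
t1 = temporal_step(t0)
t2 = temporal_step(t1)
```
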
      <p>It is important to note that the formulas which hold at a timeslice only do so
on the basis of their appearing in the consequent of some rule. In particular, there
is nothing inherently monotonic about the Timed Reasoning framework being
described here. ∧int tells us, for example, that φ ∧ ψ will be a t + 1-formula
when φ and ψ are both t-formulas but we have not said whether φ or ψ are
themselves t + 1 formulas. It may be the case, for example, that each timeslice
(of a particular context) may only contain a fixed number of formulas, due to
an agent’s memory restrictions. Such a case would clearly be nonmonotonic;
that is, Γ ⊢ φ would not ensure that Γ ∪ Δ ⊢ φ for any sets of formulas Γ, Δ
and formula φ, for formulas derived from Δ might take up space in some future
timeslice required to store formulas of Γ. However, since we require the natural
deduction agent we present here to reason monotonically, we need to add an
extra temporal rule to ensure just that:</p>
      <p>(c¯, t) : φ
(c¯, t + 1) : φ    (MON)</p>
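      <p>The memory-bounded, nonmonotonic case mentioned above can also be sketched in code. The cap, the rule ordering and the encoding are all our own assumptions, chosen purely to make the failure of monotonicity visible:</p>

```python
# Hypothetical sketch of the memory-bounded case: each timeslice holds at
# most CAP formulas, so persisting extra premises (Δ) can crowd out the
# formulas of Γ needed later -- inference becomes nonmonotonic.
CAP = 2

def bounded_step(beliefs):
    """One deductive cycle over an ordered belief list, truncated to CAP."""
    new = list(beliefs)
    for f in beliefs:
        if isinstance(f, tuple) and f[0] == "and" and f[1] not in new:
            new.append(f[1])   # ∧elimL
    return new[:CAP]           # memory limit drops the later formulas

gamma = [("and", "p", "q")]
delta = ["r", "s"]
```

      <p>From Γ alone the agent derives p at the next cycle, but with Δ stored first, p no longer fits in memory: Γ ⊢ φ without Γ ∪ Δ ⊢ φ.</p>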
      <p>We now introduce assumption contexts and the corresponding bridging rules,
which we will refer to as assumption rules. We take as our examples reasoning
by reductio ad absurdum and →-introduction, as both require a reasoner to make
an assumption which is later withdrawn. To begin with, consider using reductio
ad absurdum to derive ¬(φ ∧ ¬φ ); one first assumes φ ∧ ¬φ , derives each
conjunct using ∧-elimination and, on noting the contradiction, establishes that the
assumed formula cannot be the case.</p>
      <p>We use contexts to model the making of assumptions: formulas entertained
within that context are thus those within the scope of the corresponding
assumption. It is therefore useful to label such contexts with the assumptions they
correspond to, i.e., the very formula assumed in making the assumption. Thus,
we take C, our set of context symbols, to be L, the internal language of our
agent. Contexts are thus labelled with sequences of well formed formulas of L.
We denote the empty context by · .</p>
      <p>We must ensure that, when a formula φ features in a context label c¯, φ
also appears as a formula in that context. In addition, we must make sure that
c¯ formulas are also c¯′ formulas whenever c¯ is a subsequence of c¯′. This is so
we respect the fact that we may use the formulas holding in a context c¯ in any
context which we opened from within c¯:
(c¯, t) : φ  (where φ ∈ c¯)    (CON)
(c¯1, t) : φ
(c¯2c¯1c¯3, t + 1) : φ    (INHERIT)</p>
      <p>Embedded contexts are represented as sequences read left-to-right; thus if
the sequence ⟨φ, (ψ ∨ χ)⟩ were used to label a context, that context would
represent the agent as having assumed φ and then, within the scope of that
assumption, assuming ψ ∨ χ as well. As mentioned above, c¯φ denotes the
concatenation of φ to the right of c¯ (where c¯ is some context label). Thus, c¯φ can be used to
label the context which represents the agent assuming φ from the c¯-context. We
may then write a rule for reductio ad absurdum as follows:
(c¯φ, t) : χ    (c¯φ, t) : ¬χ
(c¯, t + 1) : ¬φ    (¬int)
We may read this rule as saying: having assumed φ from a context c¯ and having
derived a contradiction (i.e., some formula χ and its negation) in c¯φ , we may
infer ¬φ in c¯ at the next timeslice. Figure 5 shows how we model an agent using
reductio ad absurdum to derive ¬(φ ∧ ¬φ ).</p>
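      <p>A schematic rendering of the ¬int rule may help. In this sketch (our own hypothetical encoding, not the paper's formalism), a context label is the tuple of formulas assumed so far, and a contradiction in a context discharges its most recent assumption into the parent context:</p>

```python
# Hypothetical sketch of ¬int: deriving some χ and ¬χ in context c¯φ at
# timeslice t licenses ¬φ in c¯ at t + 1.
def neg_int(contexts):
    """Map {context_label: formulas} at t to the new formulas at t + 1."""
    derived = {}
    for label, formulas in contexts.items():
        if not label:          # the empty context: nothing to discharge
            continue
        phi, parent = label[-1], label[:-1]
        if any(("not", chi) in formulas for chi in formulas):
            derived.setdefault(parent, set()).add(("not", phi))
    return derived

# Assume φ ∧ ¬φ, with both conjuncts already derived in that context:
phi_and_not_phi = ("and", "p", ("not", "p"))
ctx = {(phi_and_not_phi,): {phi_and_not_phi, "p", ("not", "p")}}
new = neg_int(ctx)
```

      <p>Here `new` places ¬(φ ∧ ¬φ) in the empty context, mirroring the reductio derivation just described.</p>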
      <p>An agent derives an implication in much the same way. Having assumed φ
in a context c¯ and having then derived ψ , one may infer φ → ψ in the original
context c¯ at the next timeslice:</p>
      <p>(c¯φ, t ) : ψ
(c¯, t + 1) : φ → ψ
→int
Figure 6 below shows a model of an agent deriving φ → (ψ → φ ) (formulas of
the form φ ∧ φ have been left out of the diagram to save space).
</p>
    </sec>
    <sec id="sec-3">
      <title>Adequacy of the Model</title>
      <p>We now show that our framework is adequate as a model of the natural
deduction agent we have described in that, for any propositional formula derivable in
standard natural deduction, there is a corresponding labelled formula derivable
using our labelled logic. Recall that “· ” is a context label which denotes the
empty context and that the formulas appearing in a context label represent
assumptions used to derive the formulas which hold in that context. Thus,
formulas which hold in · at some timepoint t hold with no assumptions being made. If
we think of formulas holding in a context c¯ to be what the agent’s beliefs would
be, were it to believe the formulas in c¯, then we may think of the formulas
holding in · as the agent’s actual beliefs.</p>
      <p>In this section, we will show that, for any (unlabelled) propositional formula
φ of the agent’s internal language L derivable from a set of (unlabelled)
propositional formulas Γ using standard natural deduction rules, it is possible to derive
a labelled formula (· , t) : φ , for some timepoint t, from a labelled version of Γ .
We concentrate on an agent with an internal language L¬,∧ over ¬, ∧ and a set
of propositional letters P only (of course, the other connectives could be
introduced by definition in the usual way). We will refer to the set of bridge rules we
have introduced for these connectives, namely ∧int, ∧elimL, ∧elimR, ¬int and
¬elim, as well as the rules MON, INHERIT and CON, as R.
Definition 1 (Derivation). A labelled formula (c¯, t) : φ is derivable from a set
of labelled formulas Γl using the rules in R, written Γl ⊢R (c¯, t) : φ, if there is
a sequence of labelled formulas (c¯1, t1) : φ1, . . . , (c¯k, tk) : φk such that:
1. each formula in the sequence is either a member of Γl, or is obtained from
Γl by applying one of the bridge rules in R; and
2. the last labelled formula in the sequence is (c¯, t) : φ.</p>
      <p>Let Γ be a set of unlabelled formulas and φ an unlabelled formula; we write
Γ ⊢nd φ when φ can be derived from Γ using the usual rules of natural
deduction. We now form a set of labelled formulas Γl = {(· , 1) : φ | φ ∈ Γ}, which
corresponds to placing each φ ∈ Γ in the empty (i.e., assumption-free) context
at time 1.</p>
      <p>Theorem 1. Let Γ be a set of unlabelled propositional formulas and φ be an
unlabelled propositional formula in the agent’s language L¬,∧. Then Γ ⊢nd φ
iff for some t, Γl ⊢R (· , t) : φ, where Γl = {(· , 1) : ψ | ψ ∈ Γ}.</p>
      <p>The proof can be found in [6].</p>
      <p>Corollary 1. For any finite set of labelled formulas Γl and a labelled formula
φl, it is decidable whether Γl ⊢R φl.</p>
    </sec>
    <sec id="sec-4">
      <title>Semantics</title>
      <p>In this section, we introduce a semantics with respect to which the rules R
presented above are sound and complete. To establish completeness, we introduce
the notions of a sufficient model and of a minimal model. To keep notation
uniform, we use φ, ψ to denote arbitrary unlabelled formulas and Γ to denote a set
thereof; the superscript l denotes labelled versions, e.g., φ l and Γ l denote an
arbitrary labelled formula and a set thereof, labelled as described above.
Definition 2 (Models). Let Σ ⊂ L∗ be a finite set of finite sequences of
formulas over L and inf be a function of type Σ → N → ℘(L). A model M is a
tuple ⟨Σ, inf, {mtc¯ | c¯ ∈ C∗, t ∈ N}⟩ where:
1. each mtc¯ is a finite set of formulas in L such that mt+1c¯ = inf (c¯, t)
2. inf satisfies:
φ1 ∧ φ2 ∈ mtc¯ =⇒ φ1, φ2 ∈ inf (c¯, t)
¬¬φ ∈ mtc¯ =⇒ φ ∈ inf (c¯, t)
φ ∈ mtc¯ =⇒ φ ∈ inf (c¯, t)
φ, ¬φ ∈ mtc¯ψ =⇒ ¬ψ ∈ inf (c¯, t)
φ ∈ c¯ =⇒ φ ∈ inf (c¯, t)
φ ∈ mtc¯ =⇒ φ ∈ inf (c¯′, t), where c¯′ contains c¯ as a subsequence
When a model M minimally (w.r.t. ⊆) satisfies these conditions, we say that it
is a minimal model.</p>
      <p>Intuitively, inf (c¯, t) looks at a timeslice t and infers which formulas should be
in the context c¯ at the next timeslice.</p>
      <p>Definition 3 (Satisfaction). We say a labelled formula of the form (c¯, t) : φ is
satisfied by a model M , written M |= (c¯, t) : φ , iff φ ∈ mtc¯.</p>
      <p>Definition 4 (Validity and Entailment). A labelled formula φl is (i) Σ-valid,
written |=Σ φl, iff M |= φl for all models M over Σ; and (ii) a Σ-consequence
of a set of labelled formulas Γl, written Γl |=Σ φl, whenever φl is satisfied by
all models M over Σ which also satisfy every ψl ∈ Γl.</p>
      <p>Since different derivations may rely on making different sequences of
assumptions, there is a special class of models, comprising only those that ensure
all such assumptions are represented. We call such models sufficient for that
particular derivation. Note that we could not simply assume that Σ contains all
possible sequences over L in models in this class, for this would allow the
possibility of infinitely many formulas being introduced to a local state by the inf
conditions, which would violate the definition of each mtc¯ as a finite set. Instead,
we use the following definition of sufficiency:
Definition 5 (Sufficient Models). A model M is sufficient for a pair ⟨Γl, φl⟩
if either Γl ⊬R φl or else there exists a derivation Γl ⊢R φl containing a
sequence of labelled formulas (c¯1, t1) : φ1, . . . , (c¯k, tk) : φk with each c¯i ∈
Σ. Given a semantic sequent Γl |=Σ φl, we say that Σ is sufficient (for
that sequent) iff every model M over Σ is sufficient for ⟨Γl, φl⟩.
We can now establish that the rules R are sound and complete w.r.t. the
semantics just proposed. We begin with the following lemma:
Lemma 1. Let M be minimal for Γl and sufficient for ⟨Γl, (c¯, t) : φ⟩. Then for
every formula φ, φ ∈ mtc¯ iff Γl ⊢R (c¯, t) : φ.</p>
      <p>Again, the proof can be found in [6].</p>
    </sec>
    <sec id="sec-5">
      <title>Soundness and Completeness</title>
      <p>Theorem 2 (Soundness &amp; Completeness). There is a sufficient Σ such that
Γl ⊢R φl iff Γl |=Σ φl.</p>
      <p>Proof. Soundness is standard: clearly, the rules in R preserve validity. For
completeness, suppose Γl |=Σ φl. By definitions 2 and 5, there is a minimal model
M over Σ of Γl such that M |= φl. By Lemma 1, Γl ⊢R φl.</p>
      <p>Corollary 2. φ is a classical consequence of Γ iff Γl |=Σ (· , t) : φ for some t.</p>
      <p>Corollary 3. It is decidable whether Γl |=Σ φl.</p>
    </sec>
    <sec id="sec-6">
      <title>Reasoning with Goals</title>
      <p>Before we conclude, we briefly look at how the framework presented here can
incorporate goal-centred behaviour to avoid some of the indeterminacy
involved in choosing assumptions. The agent we have described so far is
nondeterministic in the sense that it begins by guessing which assumptions to make;
our model begins with the corresponding contexts and derives the consequences.
In this sense, we have a model of the consequences of making some assumption,
rather than a model of the reasons an agent may have for doing so.</p>
      <p>To incorporate this latter notion, we can add goal contexts to our framework.
In the setting of our natural deduction reasoner, a goal context contains the
formula or formulas which the agent would like to try to derive. For example, if a
goal is to derive φ → ψ , one can begin by assuming φ . Within the φ -context,
one’s subgoal is to derive ψ . Thus, each context will have a goal context
associated with it. We can refine the behaviour of our model by introducing bridge
rules between goal contexts, labelled with a sequence ending in a unique goal
symbol G, and assumption contexts:
(c¯G, t) : φ → ψ
(c¯φ, t + 1) : φ
(c¯G, t) : φ → ψ
(c¯φG, t + 1) : ψ
(Note that the former replaces the earlier rule CON, i.e. (c¯, t) : φ for φ ∈ c¯, and that the
INHERIT rule (section 3) must now only apply when neither context is a goal
context.)</p>
      <p>We should stress, however, that this section is not intended as a full account
of a deterministic natural deduction theorem prover. It is offered as a
demonstration of how propositional attitudes other than beliefs, such as desires (goals) and
intentions (plans) may be included within the Timed Reasoning Logics
framework. Figure 7 below shows a model of an agent incorporating these rules. Goal
contexts are represented as the smaller rectangles below the assumption contexts
they correspond to — we have left out formulas such as (φ ∧ φ ) ∧ φ .</p>
      <p>Semantically, a model will still contain a set mtc¯ for each context in Σ at
each timeslice t, but we may now calculate which contexts are required in Σ
for a sufficient model M , independently of the rules in R. Given a goal α , we
set Σ = σ(α), where σ : Ltrl → ℘(Ltrl) satisfies the following recursive
equations:</p>
      <p>σ(p) = ∅    (1)
σ(φ → ψ) = {φ} ∪ σ(ψ)    (2)
A full specification of the semantics for goal–assumption interaction is left for
future work.</p>
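      <p>The recursive equations (1)–(2) translate directly into code. In this sketch the tuple encoding of implications, and the treatment of every non-implication as atomic, are our own assumptions:</p>

```python
# Hypothetical rendering of equations (1)-(2): given a goal formula,
# sigma collects the assumptions whose contexts a sufficient model needs.
def sigma(goal):
    """Implications are nested tuples ("imp", phi, psi); all else is atomic."""
    if isinstance(goal, tuple) and goal[0] == "imp":
        _, phi, psi = goal
        return {phi} | sigma(psi)   # σ(φ → ψ) = {φ} ∪ σ(ψ)
    return set()                    # σ(p) = ∅

# For the goal φ → (ψ → φ) of figure 6, both φ and ψ must be assumable:
goal = ("imp", "p", ("imp", "q", "p"))
```
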
    </sec>
    <sec id="sec-7">
      <title>Summary</title>
      <p>
        We have argued that contexts are suitable tools for reasoning about agents in at
least two ways. Firstly, they can be used to model assumptions, thus providing
a finer grained account than those in which a formula is either believed or
disbelieved. Secondly, contexts can represent temporal stages, or timeslices, of an
agent’s reasoning process. This is a crucial notion if we aim to model resource
bounded agents. A further application of context is to model multiple agents,
each agent being modelled in its own context. Such an approach is explored in
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and [2].
      </p>
      <p>Future work aims to explore a full semantics for interaction between
beliefs (including assumptions), goals/desires and plans/intentions and to look at
real-world applications of assumption-making in reasoning, with games such as
Cluedo as a test case.</p>
    </sec>
    <sec id="sec-8">
      <title>References</title>
      <p>1. N. Alechina, B. Logan, and M. Whitsey (now Jago). A complete and decidable logic for
resource-bounded agents. In Proc. Third International Joint Conference on Autonomous
Agents and Multi-Agent Systems (AAMAS 2004). ACM Press, July 2004.
2. N. Alechina, B. Logan, and M. Whitsey (now Jago). Modelling communicating
agents in timed reasoning logics. In José Júlio Alferes and João Alexandre Leite, editors,
JELIA, volume 3229 of Lecture Notes in Computer Science, pages 95–107. Springer, 2004.
3. J. Drapkin and D. Perlis. A preliminary excursion into Step-Logics. In Proceedings of the
SIGART International Symposium on Methodologies for Intelligent Systems, pages 262–269,
1986.
4. C. Ghidini and F. Giunchiglia. Local models semantics, or contextual reasoning = locality +
compatibility. Artificial Intelligence, 127(2):221–259, 2001.
5. D. Kaplan. Demonstratives. In J. Almog, J. Perry, and H. Wettstein, editors, Themes from
Kaplan, chapter 17, pages 481–563. Oxford University Press, New York, 1989.
6. M. Whitsey (now Jago). Timed reasoning logics: An example. In Proc. of the Logic and
Communication in Multi-Agent Systems workshop (LCMAS 2004). Loria, 2004.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>N.</given-names>
            <surname>Alechina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Logan</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Whitsey (now Jago)</surname>
          </string-name>
          <article-title>A complete and decidable logic for resource-bounded agents</article-title>
          . In Proc. Third International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2004). ACM Press, July 2004.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>