Intentional Reasoning as Non-monotonic Reasoning

José Martín Castro-Manzano¹, Axel Arturo Barceló-Aspeitia¹, and Alejandro Guerra-Hernández²

¹ Instituto de Investigaciones Filosóficas
Universidad Nacional Autónoma de México
Circuito Mario de la Cueva s/n Ciudad Universitaria, México, D.F., México, 04510
jmcmanzano@hotmail.com, abarcelo@minerva.filosoficas.unam.mx

² Departamento de Inteligencia Artificial
Universidad Veracruzana
Sebastián Camacho No. 5, Xalapa, Ver., México, 91000
aguerra@uv.mx



      Abstract. Intentional reasoning is also logical reasoning. Since it is a
      dynamic process that involves reasoning from beliefs, goals and time, it
      requires both a temporal semantics and a non-monotonic behavior. In
      this work we propose a model of intentional reasoning as a case of
      non-monotonic reasoning, and we show the consistency and soundness of
      the system.

      Keywords: Defeasible logic, temporal logic, BDI logic.


1   Introduction
Intentional reasoning is also a form of logical reasoning. Moreover, it is a
dynamic process that uses beliefs and intentions, but also time. It has mainly
been modeled via BDI logics, for instance [18,20,22]; however, there are two
fundamental problems with such approaches: first, human reasoning is not and
should not be monotonic [15], and thus the logical models should be
non-monotonic; second, intentional states should respect temporal norms, and so
the logical models need to be temporal as well. Therefore, the proof process of
intentional reasoning has to have some control over time and has to take into
account a form of non-monotonic inference using beliefs and intentions.
    In the current state of the art, defeasible logics have mainly been
developed to reason about beliefs [17] and have barely been used to reason
about temporal structures [11]; intentional logics, on the other hand, have
mostly been used to reason about intentional states and temporal behavior, but
most of them are monotonic. In this work we propose intentional reasoning as a
particular case of non-monotonic reasoning. In order to solve the double
problem mentioned above, our main contribution is the adaptation and extension
of the CTL_{AgentSpeak(L)} [13] semantics with a non-monotonic framework. Thus,
we propose a defeasible temporal logic that deals with the non-monotonicity of
intentions while



taking care of temporal structures. We also show the consistency and soundness
of the system.
    In Section 2 we discuss the case of intentional reasoning as non-monotonic
reasoning, and in Section 3 we present a non-monotonic framework for
intentional reasoning. In Section 4 we present the system and establish its
consistency and soundness. Finally, in Section 5 we discuss the results and
mention future work.


2   Intentional reasoning is non-monotonic reasoning

The BDI models based upon Bratman's theory [4] tend to interpret intentions
as a single fragment [18,20,22], while Bratman's richer framework distinguished
three classes of intentions: deliberative, non-deliberative and policy-based.
Policy-based intentions are of particular importance given their structure
and behavior: they have the form of rules and behave like plans. These remarks
are relevant because the existing formalisms, despite recognizing the intimate
relationship between plans and intentions, seem to forget that intentions
behave like plans.
    As Bratman has argued, plans are intentions as well [4]. In this way we can
take policy-based intentions to be structures te : ctx ← body [2] (see
Table 1). Now, consider the next example for the sake of argument:
on(X, Y ) ← put(X, Y ). This intention tells us that, for an agent to achieve
on(a, b), it typically has to put a on b. If such an agent is immersed in a
dynamic environment, the agent will of course try, typically, to put a on b;
nevertheless, a rational agent will only do so as long as it is possible.
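    To make this defeasible reading concrete, the following minimal Python
sketch (all names here are ours, purely illustrative) encodes a policy-based
intention te : ctx ← body that is kept only while its context is still
believed, which is exactly the "typically, but only as long as it is possible"
behavior just described:

```python
from dataclasses import dataclass

@dataclass
class Intention:
    te: str                        # goal / triggering event, e.g. "on(a,b)"
    ctx: frozenset = frozenset()   # context: literals that must be believed
    body: tuple = ()               # body: subgoals/actions, e.g. ("put(a,b)",)

def still_committed(phi: Intention, beliefs: set) -> bool:
    """Single-minded commitment: keep pursuing phi while its context holds."""
    return phi.ctx <= beliefs

# on(X,Y) <- put(X,Y), instantiated: pursue on(a,b) while clear(b) is believed
on_ab = Intention(te="on(a,b)", ctx=frozenset({"clear(b)"}), body=("put(a,b)",))

beliefs = {"clear(b)"}
assert still_committed(on_ab, beliefs)       # typically: the agent puts a on b
beliefs.discard("clear(b)")                  # the environment changes...
assert not still_committed(on_ab, beliefs)   # ...and the intention is defeated
```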
    Therefore, it is quite natural to talk about intentions that are
maintained typically but not absolutely. And so, it is reasonable to conclude
that intentions, and particularly policy-based ones, allow defeasible
intentional reasoning [10]. However, the current BDI models are monotonic, and
non-monotonic logics are barely used to reason about time [11] or intentional
states. Thus, a defeasible temporal logic that deals with the non-monotonicity
of intentions while taking care of temporal structures has not been developed
yet.
    Thus, for example, standard first-order logic is an instance of monotonic
atemporal reasoning, while default logic [19] is an instance of non-monotonic
atemporal reasoning. In turn, BDI logic [18,20,22] is an example of temporal
but monotonic reasoning. Our proposal is a case of temporal and non-monotonic
reasoning.
    Traditional BDI models [5,6,14,18,20,22] formalize intentional reasoning
in a monotonic way, while our proposal aims to do it non-monotonically. This is
not only needed but also justified, since intentions imply pro-activity,
inertia (once an intention has been adopted, it resists being abandoned) and
admissibility (once an intention has been adopted, the agent will not consider
contradictory options). Therefore, intentions and intentional reasoning require
a notion of commitment (given the principle of pro-activity), a notion of
consistency (given the admissibility criterion) and a notion of retractability
(given the notion of inertia). These requirements mean that intentions need
mechanisms of commitment, consistency and defeasibility which, in turn, allow
research on



intentions in terms of revision or non-monotonicity: just as changes of beliefs
require a theory of belief revision [1] or a non-monotonic logic [17], changes
of intentions require a theory of intention revision or a non-monotonic logic
of intention.


3   Non-monotonic framework
Despite enormous advances in this area, if we take into account the
philosophical foundations of rational agency [4], it is not hard to see that
most BDI logics fail to grasp all the properties of intentions: functional
properties like pro-activity, admissibility and inertia; descriptive properties
like partiality, hierarchy and dynamism; and, of course, the normative
properties: internal consistency, strong consistency and means-end coherence.
The explanation of these properties can be found in [4]. Following these ideas
we propose the next framework:
Definition 1 (Non-monotonic intentional framework) A non-monotonic intentional
framework is a tuple ⟨B, I, F_B, F_I, ⊢, |∼, ⊬, ∼|, ≻⟩ where:
 – B denotes the belief base.
 – I denotes the set of intentions.
 – F_B ⊆ B denotes the basic beliefs.
 – F_I ⊆ I denotes the basic intentions.
 – ⊢ and ⊬ are strong consequence relations.
 – |∼ and ∼| are weak consequence relations.
 – ≻ ⊆ I² s.t. ≻ is acyclic.
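    As a reading aid, Definition 1 can be rendered as a data structure. The
following Python sketch (our encoding, not part of the paper's formalism)
keeps the four consequence relations abstract as callables and checks the
structural conditions, including the acyclicity of the order over intentions:

```python
from dataclasses import dataclass
from typing import Callable, Set, Tuple

Formula = str  # placeholder for belief/intention formulas

def is_acyclic(edges: Set[Tuple[Formula, Formula]]) -> bool:
    """DFS check that the preference relation has no cycles."""
    graph: dict = {}
    for a, b in edges:
        graph.setdefault(a, set()).add(b)
    visiting, done = set(), set()
    def dfs(n) -> bool:
        if n in done:
            return True
        if n in visiting:          # back edge: a cycle
            return False
        visiting.add(n)
        ok = all(dfs(m) for m in graph.get(n, ()))
        visiting.discard(n)
        done.add(n)
        return ok
    return all(dfs(n) for n in list(graph))

@dataclass
class IntentionalFramework:
    B: Set[Formula]                        # belief base
    I: Set[Formula]                        # set of intentions
    FB: Set[Formula]                       # basic beliefs,   F_B ⊆ B
    FI: Set[Formula]                       # basic intentions, F_I ⊆ I
    strong: Callable[[Formula], bool]      # ⊢  strong provability
    not_strong: Callable[[Formula], bool]  # ⊬  strong non-provability
    weak: Callable[[Formula], bool]        # |~ weak provability
    not_weak: Callable[[Formula], bool]    # ~| weak non-provability
    order: Set[Tuple[Formula, Formula]]    # ≻ ⊆ I², required to be acyclic

    def well_formed(self) -> bool:
        """Structural conditions of Definition 1."""
        return self.FB <= self.B and self.FI <= self.I \
            and is_acyclic(self.order)
```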
    With the help of this framework we can start to represent the
non-monotonic nature of intentional reasoning. We assume a commitment strategy
embedded in the agent architecture, i.e., we assume the inertia of intentions
by a fixed mechanism that is single-minded, because if there is no commitment
or the agent is blindly committed, there is no sense in talking about inertia
[12,13], i.e., in reconsidering intentions.
    As usual, B denotes the belief base, whose elements are literals. F_B
stands for the beliefs that are considered basic, and similarly F_I stands for
the intentions considered basic. Each intention φ ∈ I is a structure
te : ctx ← body, where te represents the goal of the intention (so we preserve
pro-activity), ctx a context and the rest denotes the body. When ctx or body
are empty we write te : ⊤ ← ⊤ or just te.
    We also preserve internal consistency by requiring the context of an
intention, ctx(φ), to be such that ctx(φ) ∈ B, and by letting te be the head of
the intention. So, strong consistency is implied by internal consistency
(given that strong consistency is ctx(φ) ∈ B). Means-end coherence is implied
by admissibility, and the hierarchy of intentions is represented by the order
relation ≻, which we require to be acyclic in order to resolve conflicts
between intentions. Again, all these features can be found in [4]. With this
framework we can arrange a notion of inference where we say that φ is strongly
(weakly) derivable from a sequence ∆ iff there is a proof of ∆ ⊢ φ (∆ |∼ φ),
and that φ is not strongly (weakly) provable iff there is a proof of ∆ ⊬ φ
(∆ ∼| φ), where ∆ = ⟨B, I⟩.



4     NCTL_{AgentSpeak(L)}

As we said above, different logics have been proposed to characterize the
rational behavior of agents. The most common are BDI systems [18,20,22], in
which the behavior of the agents is specified in terms of changes in their
mental states. These logics are used to reason about rational agents, but not
to program them. We also have programming languages that have been proposed to
reduce the gap between the theory (the logical specification) and the practice
(the implementation). In this work we adopt AgentSpeak(L) [18] because it has a
well-defined operational semantics. The problem, however, is that this
operational semantics excludes modalities which are important to represent
intentional states.
    To avoid this problem we use CTL_{AgentSpeak(L)} [13] as a logical tool
for the formal specification. Of course, initially, the approach is similar to
a BDI^{CTL} system defined after B^{KD45} D^{KD} I^{KD} with the temporal
operators: next (◯), eventually (♦), always (□), until (U), optional (E),
inevitable (A), and so on, defined after CTL* [7,9]. In this section we present
the syntax and semantics of CTL_{AgentSpeak(L)}.


4.1    Syntax of AgentSpeak(L)

An agent ag is formed by a set of plans ps and beliefs bs (grounded literals).
Each plan has the form te : ctx ← h. The context ctx of a plan is a literal or
a conjunction of literals. A non-empty plan body h is a finite sequence of
actions A(t1, . . . , tn), goals g (achieve ! or test ? an atomic formula
P(t1, . . . , tn)), or belief updates u (addition + or deletion −). ⊤ denotes
empty elements, e.g., plan bodies, contexts, intentions. The triggering events
te are updates (additions or deletions) of beliefs or goals. The syntax is
shown in Table 1.


           ag   ::= bs ps                      h    ::= h1 ; ⊤ | ⊤
           bs   ::= b1 . . . bn (n ≥ 0)        h1   ::= a | g | u | h1 ; h1
           ps   ::= p1 . . . pn (n ≥ 1)        at   ::= P(t1 , . . . , tn ) (n ≥ 0)
           p    ::= te : ctx ← h               a    ::= A(t1 , . . . , tn ) (n ≥ 0)
           te   ::= +at | −at | +g | −g        g    ::= !at | ?at
           ctx  ::= ctx1 | ⊤                   u    ::= +b | −b
           ctx1 ::= at | ¬at | ctx1 ∧ ctx1

                 Table 1. Syntax of AgentSpeak(L), adapted from [2].
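    The grammar of Table 1 translates directly into an abstract syntax. The
following Python sketch (the constructor names are ours) encodes the
categories of Table 1 and builds the plan +!on(a,b) : clear(b) ← put(a,b)
from the running example:

```python
from dataclasses import dataclass, field
from typing import List, Tuple, Union

@dataclass
class Atom:                    # at ::= P(t1, ..., tn), n >= 0
    functor: str
    args: Tuple[str, ...] = ()

@dataclass
class Literal:                 # ctx1 ::= at | ¬at (conjunctions as lists)
    atom: Atom
    positive: bool = True

@dataclass
class Goal:                    # g ::= !at | ?at
    atom: Atom
    kind: str                  # "achieve" (!) or "test" (?)

@dataclass
class BeliefUpdate:            # u ::= +b | -b
    atom: Atom
    add: bool

@dataclass
class Action:                  # a ::= A(t1, ..., tn), n >= 0
    name: str
    args: Tuple[str, ...] = ()

Step = Union[Action, Goal, BeliefUpdate]   # h1 ::= a | g | u | h1 ; h1

@dataclass
class TriggerEvent:            # te ::= +at | -at | +g | -g
    event: Union[Atom, Goal]
    add: bool

@dataclass
class Plan:                    # p ::= te : ctx <- h
    te: TriggerEvent
    ctx: List[Literal] = field(default_factory=list)  # empty list = ⊤
    h: List[Step] = field(default_factory=list)       # empty list = ⊤

@dataclass
class Agent:                   # ag ::= bs ps
    bs: List[Atom]
    ps: List[Plan]

# +!on(a,b) : clear(b) <- put(a,b).
p = Plan(TriggerEvent(Goal(Atom("on", ("a", "b")), "achieve"), add=True),
         ctx=[Literal(Atom("clear", ("b",)))],
         h=[Action("put", ("a", "b"))])
```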




4.2    Semantics of AgentSpeak(L)

The operational semantics of AgentSpeak(L) is defined by a transition system,
as shown in Figure 1, between configurations ⟨ag, C, M, T, s⟩, where:



                                             26
[Figure 1 depicts the reasoning cycle as a transition system over the steps
ProcMsg, SelEv, RelPl, ApplPl, SelAppl, AddIM, SelInt, ExecInt and ClrInt,
whose transitions are labeled by the semantic rules (SelEv1–2, Rel1–2,
Appl1–2, SelAppl, SelInt1–2, AddIM, AchvGl, TestGl1–2, AddBel, DelBel,
Action, ExtEv, IntEv, ClrInt1–3).]

          Fig. 1. The interpreter for AgentSpeak(L) as a transition system.


 – ag is an agent program formed by beliefs bs and plans ps.
 – An agent circumstance C is a tuple ⟨I, E, A⟩ where I is the set of
   intentions {i, i′, . . .} s.t. each i ∈ I is a stack of partially
   instantiated plans p ∈ ps; E is a set of events {⟨te, i⟩, ⟨te′, i′⟩, . . .},
   s.t. each te is a triggering event and each i is an intention (internal
   event) or an empty intention (external event); and A is a set of actions to
   be performed by the agent in the environment.
 – M is a tuple ⟨In, Out, SI⟩ that works as a mailbox, where In is the mailbox
   of the agent, Out is a list of messages to be delivered by the agent and
   SI is a register of suspended intentions (intentions that wait for an
   answer message).
 – T is a tuple ⟨R, Ap, ι, ε, ρ⟩ that registers temporary information: R is
   the set of relevant plans given a certain triggering event; Ap is the set
   of applicable plans (the subset of R s.t. bs |= ctx); and ι, ε and ρ
   register, respectively, the intention, the event and the plan currently
   under consideration during the agent execution.
 – The label s ∈ {SelEv, RelPl, ApplPl, SelAppl, SelInt, AddIM, ExecInt,
   ClrInt, ProcMsg} indicates the current step in the reasoning cycle of the
   agent.

    Under such semantics, a run is a set Run = {(σi, σj) | Γ ⊢ σi → σj} where
Γ is the transition system defined by the AgentSpeak(L) operational semantics
and σi, σj are agent configurations.
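    Operationally, a run can be read as the set of consecutive configuration
pairs produced by the transition system. A small sketch, assuming Γ is given
as a step function over configurations (all names here are illustrative):

```python
from typing import Callable, Optional, Set, Tuple

Config = str   # placeholder for a configuration ⟨ag, C, M, T, s⟩

def run(gamma: Callable[[Config], Optional[Config]],
        sigma0: Config, max_steps: int = 1000) -> Set[Tuple[Config, Config]]:
    pairs: Set[Tuple[Config, Config]] = set()
    sigma = sigma0
    for _ in range(max_steps):
        nxt = gamma(sigma)       # one step of the reasoning cycle
        if nxt is None:          # no applicable transition: the run ends
            break
        pairs.add((sigma, nxt))
        sigma = nxt
    return pairs
```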

4.3   Syntax of BDI^{CTL}_{AS(L)}

CTL_{AgentSpeak(L)} may be seen as an instance of BDI^{CTL}. Similar approaches
have been developed for other programming languages [8]. The idea is to define
the BDI^{CTL} semantics in terms of AgentSpeak(L) structures. So, we need a
language able to express temporal and intentional states; thus, we first
require some way to express these features.



Definition 2 (Syntax of BDI^{CTL}_{AS(L)}) If φ is an AgentSpeak(L) atomic
formula, then BEL(φ), DES(φ) and INT(φ) are well-formed formulas of
BDI^{CTL}_{AS(L)}.
      To specify the temporal behavior we use CTL* as follows.

Definition 3 (BDI^{CTL}_{AS(L)} temporal syntax) Every BDI^{CTL}_{AS(L)}
formula is a state formula s:
 – s ::= φ | s ∧ s | ¬s
 – p ::= s | ¬p | p ∧ p | Ep | Ap | ◯p | ♦p | □p | p U p

4.4   Semantics of BDI^{CTL}_{AS(L)}
Initially, the semantics of BEL, DES and INT is adopted from [3]. So, we use
the following function:

             agoals(⊤)    = {}
             agoals(i[p]) = {at} ∪ agoals(i)   if p = +!at : ctx ← h
             agoals(i[p]) = agoals(i)          otherwise

which gives us the set of atomic formulas (at) attached to achievement goals
(+!), where i[p] denotes the stack of intentions i with the plan p at its top.
Definition 4 (BDI^{CTL}_{AS(L)} semantics) The operators BEL, DES and INT are
defined in terms of an agent ag and its configuration ⟨ag, C, M, T, s⟩:

           BEL_⟨ag,C,M,T,s⟩(φ) ≡ φ ∈ bs

           INT_⟨ag,C,M,T,s⟩(φ) ≡ φ ∈ ⋃_{i∈C_I} agoals(i) ∨ φ ∈ ⋃_{⟨te,i⟩∈C_E} agoals(i)

           DES_⟨ag,C,M,T,s⟩(φ) ≡ ⟨+!φ, i⟩ ∈ C_E ∨ INT(φ)

where C_I denotes current intentions and C_E suspended intentions.
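    The agoals function and Definition 4 admit a direct operational reading.
The following Python sketch (our simplified encoding of configurations; the
field names and string conventions are assumptions of ours) computes agoals
over a stack of plans and evaluates the BEL, DES and INT conditions:

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class Plan:
    te: str                       # e.g. "+!on(a,b)" for an achievement goal
    ctx: Set[str] = field(default_factory=set)
    body: Tuple[str, ...] = ()

Intention = List[Plan]            # a stack of partially instantiated plans

def agoals(i: Intention) -> Set[str]:
    """Atoms attached to achievement goals (+!at) in the stack i[p]."""
    if not i:                                    # agoals(⊤) = {}
        return set()
    p, rest = i[-1], i[:-1]                      # p sits at the top of i[p]
    at = {p.te[2:]} if p.te.startswith("+!") else set()
    return at | agoals(rest)

@dataclass
class Circumstance:
    I: List[Intention]                # current intentions C_I
    E: List[Tuple[str, Intention]]    # (suspended) events C_E: (te, i)

def BEL(bs: Set[str], phi: str) -> bool:
    return phi in bs                  # BEL(φ) ≡ φ ∈ bs

def INT(C: Circumstance, phi: str) -> bool:
    return any(phi in agoals(i) for i in C.I) or \
           any(phi in agoals(i) for _, i in C.E)

def DES(C: Circumstance, phi: str) -> bool:
    return any(te == "+!" + phi for te, _ in C.E) or INT(C, phi)
```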

4.5   The system NBDI^{CTL}_{AS(L)}
By now we have a defeasible framework for intentions that lacks temporal
representation, while the BDI temporal model described above grasps the
temporal representation but lacks non-monotonicity. The next step is to
propose a system, denoted NBDI^{CTL}_{AS(L)} because of its non-monotonic
behavior. An intention φ in NBDI^{CTL}_{AS(L)} is, of course, a structure
g : ctx ← body where g is the head, ctx is the context and body is the body of
the rule. We will denote an intention φ with head g by φ[g]. Also, a negative
intention is denoted by φ[g^c], i.e., the intention φ with ¬g as its head.
    The semantics of this theory requires a Kripke structure K = ⟨S, R, V⟩
where S is the set of agent configurations, R is an accessibility relation
defined after the transition system Γ, and V is a valuation function that maps
agent configurations to the propositions true in those states.



Definition 5 Let K = ⟨S, Γ, V⟩, then:
 – S is a set of agent configurations c = ⟨ag, C, M, T, s⟩.
 – Γ ⊆ S² is a total relation, i.e., for every c ∈ S there is a c′ ∈ S s.t.
   (c, c′) ∈ Γ.
 – V is a valuation s.t.:
     - V_BEL(c, φ) = BEL_c(φ) where c = ⟨ag, C, M, T, s⟩.
     - V_DES(c, φ) = DES_c(φ) where c = ⟨ag, C, M, T, s⟩.
     - V_INT(c, φ) = INT_c(φ) where c = ⟨ag, C, M, T, s⟩.
 – Paths are sequences of configurations c0, . . . , cn s.t. ∀i, (ci, ci+1) ∈ Γ.
   We use xi to indicate the i-th state of path x. Then:
S1 K, c |= BEL(φ) ⇔ φ ∈ V_BEL(c)
S2 K, c |= DES(φ) ⇔ φ ∈ V_DES(c)
S3 K, c |= INT(φ) ⇔ φ ∈ V_INT(c)
S4 K, c |= Eφ ⇔ there is a path x = c, c1, . . . in K s.t. K, x |= φ
S5 K, c |= Aφ ⇔ for every path x = c, c1, . . . in K, K, x |= φ
P1 K, x |= φ ⇔ K, x0 |= φ, where φ is a state formula
P2 K, x |= ◯φ ⇔ K, x1 |= φ
P3 K, x |= ♦φ ⇔ K, xn |= φ for some n ≥ 0
P4 K, x |= □φ ⇔ K, xn |= φ for all n
P5 K, x |= φ U ψ ⇔ either ∃k ≥ 0 s.t. K, xk |= ψ and K, xj |= φ for all
   0 ≤ j < k, or K, xj |= φ for all j ≥ 0
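    Over a single finite path, clauses P2–P5 can be evaluated mechanically.
A sketch follows, with the state-formula cases S1–S3 abstracted into a toy
valuation holds (everything here is an illustration of ours, not the paper's
implementation):

```python
from typing import Callable, List

Config = str                                   # placeholder configurations
holds: Callable[[Config, str], bool] = lambda c, s: s in c   # toy valuation

def next_(x: List[Config], s: str) -> bool:
    """P2: s holds at the second state x1."""
    return len(x) > 1 and holds(x[1], s)

def eventually(x: List[Config], s: str) -> bool:
    """P3: s holds at some state xn, n >= 0."""
    return any(holds(c, s) for c in x)

def always(x: List[Config], s: str) -> bool:
    """P4: s holds at every state xn."""
    return all(holds(c, s) for c in x)

def until(x: List[Config], s: str, t: str) -> bool:
    """P5 (weak until): t is reached at some k with s holding before it,
    or s holds along the whole path."""
    for k in range(len(x)):
        if holds(x[k], t):
            return all(holds(x[j], s) for j in range(k))
    return all(holds(c, s) for c in x)
```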
    As we saw in Section 3, we have four cases of proof: if the sequence
establishes ∆ ⊢ φ, we say φ is strongly provable; if it establishes ∆ ⊬ φ, we
say φ is not strongly provable; if it establishes ∆ |∼ φ, we say φ is weakly
provable; and if it establishes ∆ ∼| φ, then φ is not weakly provable.
Definition 6 (Proof) A proof of φ from ∆ is a finite sequence of beliefs and
intentions satisfying:
1. ∆ ⊢ φ iff
  1.1. A□(INT(φ)) or
  1.2. A□(∃φ[g] ∈ F_I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) ⊢ ψ[g′])
2. ∆ |∼ φ iff
  2.1. ∆ ⊢ φ or
  2.2. ∆ ⊬ ¬φ and
    2.2.1. ♦E(INT(φ) U ¬BEL(ctx(φ))) or
    2.2.2. ♦E(∃φ[g] ∈ I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) |∼ ψ[g′]) and
      2.2.2.1. ∀γ[g^c] ∈ I, γ[g^c] fails at ∆ or
      2.2.2.2. ψ[g′] ≻ γ[g^c]
3. ∆ ⊬ φ iff
  3.1. ♦E(INT(¬φ)) and
  3.2. ♦E(∀φ[g] ∈ F_I : ¬BEL(ctx(φ)) ∨ ∃ψ[g′] ∈ body(φ) ⊬ ψ[g′])
4. ∆ ∼| φ iff
  4.1. ∆ ⊬ φ and
  4.2. ∆ ⊢ ¬φ or
    4.2.1. A¬(INT(φ) U ¬BEL(ctx(φ))) and
    4.2.2. A□(∀φ[g^c] ∈ I : ¬BEL(ctx(φ)) ∨ ∃ψ[g′] ∈ body(φ) ∼| ψ[g′]) or
      4.2.2.1. ∃γ[g^c] ∈ I s.t. γ[g^c] succeeds at ∆ and
      4.2.2.2. ψ[g′] ⊁ γ[g^c]
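    To see how the four cases interact, the following Python sketch implements
a propositional approximation of Definition 6: the temporal prefixes (A□, ♦E,
A¬) are abstracted away, a seen set is added (our addition) to block cyclic
arguments, and the order ≻ is simplified to a relation over heads; the rule
encoding is entirely ours.

```python
from dataclasses import dataclass
from typing import FrozenSet, Set, Tuple

@dataclass(frozen=True)
class Rule:                      # te : ctx <- body, with literal heads
    head: str
    ctx: FrozenSet[str] = frozenset()
    body: FrozenSet[str] = frozenset()

def neg(g: str) -> str:
    return g[1:] if g.startswith("~") else "~" + g

def strong(g: str, FI: FrozenSet[Rule], B: FrozenSet[str]) -> bool:
    """Item 1 (⊢): a basic intention for g whose context is believed and
    whose whole body is strongly provable (assumes non-circular bodies)."""
    return any(r.head == g and r.ctx <= B and
               all(strong(b, FI, B) for b in r.body) for r in FI)

def weak(g: str, I: FrozenSet[Rule], FI: FrozenSet[Rule],
         B: FrozenSet[str], order: Set[Tuple[str, str]],
         seen: FrozenSet[str] = frozenset()) -> bool:
    """Item 2 (|~): strongly provable, or not strongly refuted and supported
    by a rule all of whose attackers fail or are inferior."""
    if g in seen:                        # block cyclic arguments
        return False
    seen = seen | {g}
    if strong(g, FI, B):                 # 2.1
        return True
    if strong(neg(g), FI, B):            # 2.2: ¬g must not be strongly proved
        return False
    for r in I:
        if r.head != g or not r.ctx <= B:
            continue
        if not all(weak(b, I, FI, B, order, seen) for b in r.body):   # 2.2.2
            continue
        attackers = [a for a in I if a.head == neg(g) and a.ctx <= B]
        if all((g, a.head) in order or                    # 2.2.2.2: g ≻ g^c
               not weak(a.head, I, FI, B, order, seen)    # 2.2.2.1: a fails
               for a in attackers):
            return True
    return False

# Items 3 (⊬) and 4 (~|) behave as the contradictories of items 1 and 2,
# mirroring Propositions 2 and Corollary 2 of the next subsection.
def not_strong(g, FI, B): return not strong(g, FI, B)
def not_weak(g, I, FI, B, order): return not weak(g, I, FI, B, order)
```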



4.6   Consistency
Now we suggest a square of opposition in order to describe some desired prop-
erties of the system.
Proposition 1 (Subalterns1) If ⊢ φ then |∼ φ.
Proof. Let us assume that ⊢ φ but not |∼ φ, i.e., ∼| φ. Then, given ⊢ φ, we
have two general cases. Case 1: given the initial assumption that ⊢ φ, by
Definition 6 item 1.1, we have A□(INT(φ)). Now, given the second assumption,
i.e., that ∼| φ, by Definition 6 item 4.1, we have ⊬ φ. And so, ♦E(INT(¬φ)),
and thus, by the temporal semantics, we get ¬φ; however, given the initial
assumption, we also obtain φ, which is a contradiction.
    Case 2: given the assumption that ⊢ φ, by Definition 6 item 1.2, we have
A□(∃φ[g] ∈ F_I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) ⊢ ψ[g′]). Now, given the
second assumption, that ∼| φ, we also have ⊬ φ, and so we obtain
♦E(∀φ[g] ∈ F_I : ¬BEL(ctx(φ)) ∨ ∃ψ[g′] ∈ body(φ) ⊬ ψ[g′]), and thus we can
obtain ∀φ[g] ∈ F_I : ¬BEL(ctx(φ)) ∨ ∃ψ[g′] ∈ body(φ) ⊬ ψ[g′], which is
¬(∃φ[g] ∈ F_I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) ⊢ ψ[g′]), a contradiction. □
Corollary 1 (Subalterns2) If ∼| φ then ⊬ φ.
Proposition 2 (Contradictories1) There is no φ s.t. ⊢ φ and ⊬ φ.
Proof. Assume there is a φ s.t. ⊢ φ and ⊬ φ. If ⊬ φ then, by Definition 6
item 3.1, ♦E(INT(¬φ)). Thus, by the corresponding semantics, we can obtain ¬φ.
However, given that ⊢ φ, it also follows that φ, which is a contradiction. □
Corollary 2 (Contradictories2) There is no φ s.t. |∼ φ and ∼| φ.
Proposition 3 (Contraries) There is no φ s.t. ⊢ φ and ∼| φ.
Proof. Assume there is a φ such that ⊢ φ and ∼| φ. By Proposition 1, it
follows that |∼ φ, but this contradicts the assumption that ∼| φ, by
Corollary 2. □
Proposition 4 (Subcontraries) For all φ, either |∼ φ or ⊬ φ.
Proof. Assume it is not the case that for all φ either |∼ φ or ⊬ φ. Then there
is a φ s.t. ∼| φ and ⊢ φ. From ∼| φ it follows, by Corollary 1, that ⊬ φ. By
Proposition 2, this contradicts ⊢ φ. □
    So, gathering the results, we get the next square of opposition, where c
denotes contradictories, s subalterns, k contraries and r subcontraries:

                         ⊢ φ ----k---- ∼| φ
                           |  \      /  |
                           s    c  c    s
                           |  /      \  |
                         |∼ φ ----r---- ⊬ φ



Corollary 3 (Coherence) If ⊬ φ then it is false that ⊢ φ. And if ∼| φ then it
is false that |∼ φ.

   Proposition 1 and Corollary 1 represent supraclassicality; Proposition 2
and Corollary 2 stand for consistency; and the remaining statements specify
the coherence of the square and, thus, the overall coherence of the system.


4.7   Soundness

Now, the idea is to show that the framework is sound with respect to its
semantics. Thus, as usual, we will need some notions of satisfaction and
validity.

Definition 7 (Satisfaction) A formula φ is true in K iff φ is true in all config-
urations σ in K. This is to say, K |= φ ⇔ K, σ |= φ for all σ ∈ S.

Definition 8 (Run of an agent in a model) Given an initial configuration β, a
transition system Γ and a valuation V, K_Γ^β = ⟨S_Γ^β, R_Γ^β, V⟩ denotes a run
of an agent in a model.

Definition 9 (Validity) A formula φ ∈ BDI^{CTL}_{AS(L)} is true for any agent
run in Γ iff ∀K_Γ^β : K_Γ^β |= φ.

   Further, we will denote (∃K_Γ^β |= φ U ¬BEL(ctx(φ))) ∨ |= φ by |≈ φ. We can
observe, moreover, that |= φ implies |≈ φ and that ≈| φ implies =| φ. With
these remarks we should find a series of translations s.t.:

                  ⊢ φ  -->  ∀K_Γ^β |= φ  -->  |= φ
                    |                           |
                    v                           v
                  |∼ φ  ------------------->  |≈ φ

Proposition 5 The following relations hold:

                 a) ⊢ φ implies |= φ         b) |∼ φ implies |≈ φ

Proof. Base case. Taking ∆_i as a sequence with i = 1, we have two basic cases.
    Case a) If we assume ⊢ φ, we have two subcases. The first subcase is given
by Definition 6 item 1.1: we have A□(INT(φ)). This means, by Definition 5
items P4 and S5 and Definition 4, that for all paths and all states
φ ∈ C_I ∨ C_E. We can represent this expression, by way of a translation, in
terms of runs. Since paths and states are sequences of agent configurations,
we have that ∀K_Γ^β |= φ, which implies |= φ. The second subcase is given by
Definition 6 item 1.2, which in terms of runs means that for all runs
∃φ[g] ∈ F_I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) ⊢ ψ[g′]. Since ∆_1 is a single
step, body(φ) = ⊤ and for all runs BEL(ctx(φ)), i.e., ctx(φ) ∈ F_B. Then
∀K_Γ^β |= φ which, as above, implies |= φ.
    Case b) Let us suppose |∼ φ. Then we have two subcases. The first one is
given by Definition 6 item 2.1. So we have ⊢ φ which, as we showed above,
already implies |= φ. On the other hand, by item 2.2, we have ⊬ ¬φ and two
alternatives. The first alternative, item 2.2.1, is ♦E(INT(φ) U ¬BEL(ctx(φ))).
Thus we can reduce this expression, by way of Definition 5 items P3 and S4, to
a translation in terms of runs: ∃K_Γ^β |= φ U ¬BEL(ctx(φ)), which implies
|≈ φ. The second alternative comes from item 2.2.2,
♦E(∃φ[g] ∈ I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) |∼ ψ[g′]), which in terms of
runs means that for some run ∃φ[g] ∈ I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) |∼
ψ[g′]; but ∆_1 is a single step, and thus body(φ) = ⊤. Thus, there is a run in
which ∃φ[g] ∈ I : BEL(ctx(φ)), i.e., ∃K_Γ^β |= φ U ¬BEL(ctx(φ)) by using the
weak case of Definition 5 item P5. Thus, by addition,
(∃K_Γ^β |= φ U ¬BEL(ctx(φ))) ∨ |= φ, and therefore |≈ φ.
    Inductive case. Case a) Let us assume that for n ≤ k, if ∆_n ⊢ φ then
∆ |= φ, and consider ∆_{n+1}. Suppose ∆_n ⊢ φ; then we have two alternatives.
The first, by Definition 6 item 1.1, is that we have an intention φ s.t.
ctx(φ) = body(φ) = ⊤. Since body(φ) is empty, it trivially holds at n, and by
the induction hypothesis, body(φ) ⊆ ∆_{n+1}, and thus |= φ. The second, by
Definition 6 item 1.2, is that for all runs ∃φ[g] ∈ I : BEL(ctx(φ)) ∧
∀ψ[g′] ∈ body(φ) ⊢ ψ[g′]. Thus, for all runs n, ∀ψ[g′] ∈ body(φ) ⊢ ψ[g′], and
so by the induction hypothesis body(φ) ⊆ ∆_{n+1}, i.e., ∆ ⊢ ψ[g′]. Therefore
|= φ.
    Case b) Let us assume that for n ≤ k, if ∆_n |∼ φ then ∆ |≈ φ, and
consider ∆_{n+1}. Assume ∆_n |∼ φ. We have two alternatives. The first is
given by Definition 6 item 2.1, i.e., ⊢ φ, which already implies |= φ. The
second alternative is given by item 2.2, ∆ ⊬ ¬φ, and two subcases:
♦E(INT(φ) U ¬BEL(ctx(φ))) or ♦E(∃φ[g] ∈ I : BEL(ctx(φ)) ∧ ∀ψ[g′] ∈ body(φ) |∼
ψ[g′]). If we consider the first subcase, there are runs n which comply with
the definition of |≈ φ. In the remaining subcase we have ∀ψ[g′] ∈ body(φ) |∼
ψ[g′]; since body(φ) ⊆ ∆_n, by the induction hypothesis ∆ |∼ ψ[g′], and thus
∆_{n+1} |∼ φ, i.e., |≈ φ. □
     Moreover, we can find a series of translations for the remaining fragments:


  ∼| φ  -->  ∃K_Γ^β |= ¬φ ∧ ∀K_Γ^β |= ¬(φ U ¬BEL(ctx(φ)))  -->  ≈| φ
    |                                                             |
    v                                                             v
   ⊬ φ  ---------------------------------------------------->  =| φ

Proposition 6 The following relations hold:

                 a) ⊬ φ implies =| φ         b) ∼| φ implies ≈| φ

Proof. Base case. Taking ∆_i as a sequence with i = 1, we have two basic cases.
    Case a) If we assume ⊬ φ, by Definition 6 item 3.1 we have ♦E(INT(¬φ)).
Since ∆_1 is a single step, we already have its translation ∃K_Γ^β |= ¬φ,
which implies =| φ.
    Case b) Let us suppose ∼| φ. By Definition 6 item 4.1 we can use the
single condition ⊬ φ, but as we showed above, ⊬ φ has a translation
∃K_Γ^β |= ¬φ; and by item 4.2.1, A¬(INT(φ) U ¬BEL(ctx(φ))), which in terms of
runs means ∀K_Γ^β |= ¬(φ U ¬BEL(ctx(φ))). Thus, by conjunction of these last
translations, ≈| φ.
    Inductive case. Case a) Let us assume that for n ≤ k, if ∆_n ⊬ φ then
∆ =| φ, and consider ∆_{n+1}. Take ∆_n ⊬ φ. By Definition 6 item 3.2, we have
♦E(INT(¬φ)), which can be translated directly to a run of size n s.t.
∃K_Γ^β |= ¬φ; by the induction hypothesis, body(φ) ⊆ ∆_{n+1}, and thus =| φ.
    Case b) Let us assume that for n ≤ k, if ∆_n ∼| φ then ∆ ≈| φ, and
consider ∆_{n+1}. Suppose ∼| φ. By Definition 6 item 4.1 we can use the single
condition ⊬ φ, and as we showed above, it has a translation ∃K_Γ^β |= ¬φ.
Further, by Definition 6 items 4.2 and 4.2.1 we can extract, respectively,
∆_n ⊢ ¬φ and A¬(INT(φ) U ¬BEL(ctx(φ))), i.e., for ∆_n,
∀K_Γ^β |= ¬(φ U ¬BEL(ctx(φ))). Thus, by the induction hypothesis and by
conjunction of these last translations, ∆_{n+1} ∼| φ, and thus ≈| φ. □


5   Conclusion
We proposed a logic to deal with the non-monotonicity of intentional reasoning
while taking care of temporal structures. We did this by extending a
defeasible framework with a temporal semantics, and we observed that the
resulting system preserves supraclassicality, consistency and soundness.
    The relevance of this work becomes clear once we notice that, although
intentions have received a lot of attention, their dynamic features have not
been studied completely [21]. There are formal theories of intentional
reasoning [5,6,14,18,20,22], but very few of them consider the revision of
intentions [21] or the non-monotonicity of intentions [10] as legitimate
research topics, which we find odd since the foundational theory guarantees
that such research is legitimate and necessary [4]. Recent works confirm the
status of this emerging area [10,16,21].
    Finally, as part of our current work, we are trying to find relations
between the notion of inference of this system and a notion of revision.
Moreover, since our model is related to AgentSpeak(L), an implementation may
follow organically.

Acknowledgements. The authors would like to thank the anonymous review-
ers for their useful comments and precise corrections. The first author is sup-
ported by the CONACyT scholarship 214783.


References
1. Alchourrón, C. E., Gärdenfors, P., Makinson, D.: On the logic of theory change:
   Partial meet contraction and revision functions. Journal of Symbolic Logic (1985)
   50 510–530
2. Bordini, R.H., Wooldridge, M., Hübner, J.F.: Programming Multi-Agent Systems
   in AgentSpeak using Jason (Wiley Series in Agent Technology). John Wiley & Sons
   (2007)



3. Bordini, R.H., Moreira, Á.F.: Proving BDI Properties of Agent-Oriented Program-
   ming Languages. Annals of Mathematics and Artificial Intelligence (2004) 42 197–
   226
4. Bratman, M.E.: Intention, Plans, and Practical Reason. Cambridge University Press
   (1999)
5. Chen, X., Liu, G.: A logic of intention. Proceedings of the 16th Interna-
   tional Joint Conference on Artificial Intelligence (1999) 1 172–177
6. Cohen, P., Levesque, H.: Intention is choice with commitment. Artificial
   Intelligence (1990) 42(3) 213–261
7. Clarke, E.M. Jr., Grumberg, O., Peled, D.A.: Model Checking. MIT Press (1999)
8. Dastani, M., van Riemsdijk, M.B., Meyer, J.C.: A grounded specification
   language for agent programs. Proceedings of the 6th International Joint
   Conference on Autonomous Agents and Multiagent Systems (AAMAS '07) (2007)
   1–8
9. Emerson, E.A.: Temporal and modal logic. In: Handbook of Theoretical Com-
   puter Science, Elsevier Science Publishers (1995) 995–1072
10. Governatori, G., Padmanabhan, V., Sattar, A.: A Defeasible Logic of Policy-
    based Intentions. Proceedings of the 15th Australian Joint Conference on
    Artificial Intelligence: Advances in Artificial Intelligence. LNAI 2557,
    Springer Verlag (2002)
11. Governatori, G., Terenziani, P.: Temporal Extensions to Defeasible Logic. Proceed-
   ings of the 20th Australian joint conference on Advances in artificial intelligence
   (2007) 476–485
12. Guerra-Hernández, A., Castro-Manzano, J.M., El-Fallah-Seghrouchni, A.: Toward
   an AgentSpeak(L) Theory of Commitment and Intentional Learning. Proceedings
   of the 7th Mexican International Conference on Artificial Intelligence: Advances
   in Artificial Intelligence MICAI 2008, LNCS, vol. 5317, 848–858, Springer-Verlag,
   Berlin Heidelberg, (2008)
13. Guerra-Hernández, A., Castro-Manzano, J.M., El-Fallah-Seghrouchni, A.: CTL
   AgentSpeak(L): a Specification Language for Agent Programs. J. Algorithms (2009)
   64(1) 31–40
14. Konolige, K., Pollack, M. E.: A representationalist theory of intentions. Proceed-
   ings of International Joint Conference on Artificial Intelligence (IJCAI-93) (1993)
   390–395
15. Nute, D.: Defeasible logic. INAP 2001, LNAI 2543, Springer-Verlag (2003)
    151–169
16. Icard, T., Pacuit, E., Shoham, Y.: Joint revision of belief and intention.
    Proceedings of the Twelfth International Conference on the Principles of
    Knowledge Representation and Reasoning (2010)
17. Prakken, H., Vreeswijk, G.: Logics for defeasible argumentation. In D. Gabbay
   and F. Guenthner (eds.), Handbook of Philosophical Logic, second edition, Vol 4,
   Kluwer Academic Publishers (2002) 219-318
18. Rao, A.S.: AgentSpeak(L): BDI agents speak out in a logical computable language.
   In: de Velde, W.V., Perram, J.W. (eds.) MAAMAW. LNCS, vol. 1038, Springer,
   Heidelberg (1996) 42–55
19. Reiter, R.: A logic for default reasoning. Artificial Intelligence, (1980) 13 81-132
20. Singh, M.P., Rao, A.S., Georgeff, M.P.: Formal Methods in DAI: Logic-Based Rep-
   resentation and Reasoning. In: Multiagent Systems: A Modern Approach to Dis-
   tributed Artificial Intelligence, MIT Press, Cambridge (1999) 331–376
21. van der Hoek, W., Jamroga, W., Wooldridge, M.: Towards a theory of intention
   revision. Synthese (2007) 155(2) 265–290
22. Wooldridge, M.: Reasoning about Rational Agents. MIT Press (2000)



