    An Introduction to Intention Revision: Issues
                   and Problems

                           José Martı́n Castro-Manzano

                        Instituto de Investigaciones Filosóficas
                     Universidad Nacional Autónoma de México
    Circuito Mario de la Cueva s/n Ciudad Universitaria, 04510 Coyoacán, México
                               jmcmanzano@hotmail.com
                          http://www.filosoficas.unam.mx



       Abstract. The change of beliefs on the basis of new information has
       been widely studied; however, the change of other mental states, and
       particularly of intentions, has received less attention. Although there
       are philosophical and formal theories about intentions, few of them
       consider the revision of intentions. We suggest introductory guidelines
       to define a research program for the revision of intentions, taking into
       account that: (i) intentions are intimately related to the beliefs and
       desires of agents immersed in a dynamic world; (ii) intentions are
       directly related to planning; and (iii) a reconsideration function is needed.

       Key words: Intention, reconsideration, BDI, agents.


1    Introduction
Belief revision is a paradigmatic research program: it is a relatively new area
of research that joins two disciplines, computer science and philosophy. Ever
since programmers began working with databases, they have faced the problem of
updating their information. On the other hand, certain philosophers have dealt
with the change of information within epistemic structures. So we can identify,
respectively, two important moments in the history of this research program: one
in [6]; the other in [9] and in [12]. A general theory can be found in [1]. This
last approach constitutes the core of any program of belief revision.
    Thus, although the change of beliefs on the basis of new information has
been studied with success during the last 25 years, the dynamics of other
mental states, and particularly of intentions, has received less attention [10].
Certainly, there are philosophical and formal theories of intention [2], [3],
[4], [5], [11], [13], but few of them, if any, consider the possibility of the
revision of intentions [10].
    In this work we suggest some general and introductory guidelines in order
to define a program for intention revision. We think this topic is important
because (i) intentions are intimately related to the beliefs and desires of
agents immersed in a dynamic world; (ii) intentions are directly related to
planning; and (iii) a reconsideration function is needed.





    The general background of this work assumes the theories of intention as
represented by [2], [3], [4]; and the belief revision program as represented by [1].
    The rest of the paper is organized as follows: in section 2 we describe
what we mean by intention revision and discuss some methodological problems.
In section 3 we discuss some issues regarding the problem of representation.
In section 4 we adapt and suggest some general postulates for the revision
of intentions. Finally, in section 5 we discuss the ideas of this introduction
and give some details about future work.


2      Intention revision
We can study intentions from two general perspectives. One is internal, e.g.,
what intentions are and how they behave; the other is external, regarding the
problems intentions generate, e.g., how they relate to other mental states and
how those relations can be modelled. We will follow this double approach.

2.1     Internal perspective
For our introduction we will need an approach based upon the BDI model of
rational agency [16], [13]. This model receives its name from the use of Beliefs,
Desires and Intentions to model the rationality of agents. Intuitively,
beliefs correspond to the information the agent has about itself and its
environment. Desires correspond to the motivational part of the agent: what the
agent wants to see accomplished. Finally, intentions correspond to the
deliberative part and consist of the desires the agent is committed to achieve.
    Intentions, as an irreducible component of the BDI model [2], have certain
features that, taken together, make them different from beliefs and desires:

    – Pro-activity. Intentions are pro-active, they move the agent to achieve a
      goal [2]. In this sense, intentions are conduct-controlling components. It is
      important to note, however, that intentions are not equal to desires. Both
      intentions and desires are pro-attitudes, but intentions imply commitment
      and consistency, while desires do not.
    – Inertia. Intentions also possess inertia, that is to say, once an intention has
      been adopted, it resists being abandoned. If an intention were adopted and
      immediately abandoned, we would have to say the intention was never really
      adopted; however, if the reason that generated the intention disappears, it is
      rational to abandon the intention [2].
    – Admissibility. Intentions also provide a filter of admissibility. Once an in-
      tention has been adopted, it constrains the agent's future practical reasoning:
      while the agent holds a particular intention, it will not consider contradic-
      tory options. Thus, intentions provide a filter [2], [4].

    In this way, we can say that intentions require a notion of commitment (given
the principle of pro-activity), a notion of consistency (given the admissibility
criterion) and a notion of retractability (given the notion of inertia).





    Plans, insofar as they are sets of actions, are intentions, and in this sense
they share the same properties: they are conduct-controlling, they have inertia
and they work as inputs for further practical reasoning [2]. Moreover, plans have
certain features:

 – Plans are partial. Plans are partial, not complete, because agents lack
   complete information about the state of the world, e.g., the environment is
   not accessible.
 – Plans are not static. Plans cannot be static structures because the
   environment of the agent is dynamic.
 – Plans are hierarchical. Plans contain means-ends reasoning that has to
   follow an ordered process.

      But plans also require the following features:
 – Internal consistency. Plans must be executable.
 – Strong consistency. Plans must be consistent with the agent’s beliefs.
 – Means-ends coherence. The means-ends reasoning of the plan must be
   consistent with the global ends of the plan.
    These last features lead us to a further problem: intentions are not isolated
mental states [10]. Modifying intentions implies modifying beliefs, and sometimes
modifying beliefs may modify intentions. In this sense, strong consistency shows
us that beliefs and intentions maintain certain relationships: the asymmetry
thesis. Bratman [2] considers these relations as principles of rationality:

 – Intention-belief inconsistency. It is irrational for an agent to intend φ
   and believe at the same time that it will not achieve φ.
 – Intention-belief incompleteness. It is rational for an agent to intend φ
   and at the same time not believe that it will achieve φ.
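These two principles can be checked mechanically on a toy agent state. The sketch below is only an illustration under the assumption, not made in the text, that beliefs and intentions are propositional literals represented as strings, with "~" marking negation.

```python
# Toy check of the asymmetry thesis over propositional literals.
# Assumption (not from the paper): literals are strings and "~p" negates "p".

def neg(p):
    """Return the negation of a literal."""
    return p[1:] if p.startswith("~") else "~" + p

def intention_belief_inconsistent(intentions, beliefs):
    """Irrational: the agent intends p while believing it will not achieve p."""
    return any(neg(p) in beliefs for p in intentions)

def intention_belief_incomplete(intentions, beliefs):
    """Rational: the agent intends p without believing it will achieve p."""
    return any(p not in beliefs for p in intentions)
```

On this reading, an agent that intends put(b,c) while believing ~put(b,c) violates the first principle, while an agent that intends put(b,c) with no belief about the outcome satisfies the second.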

    Thus, we can say that the notions of consistency and retractability are not
exclusive to beliefs, and that the difficulty of considering intention revision
lies in the relation between intentions and beliefs.

2.2     External perspective
Based on Bratman, Cohen and Levesque [4] suggested seven ideas (or problems)
that a theory of intention must take into account:
 – 1. Intentions pose problems for agents, who need to determine ways of achiev-
   ing them.
 – 2. Intentions provide a filter for adopting other intentions, which must not
   conflict.
 – 3. Agents track the success of their intentions, and are inclined to try again
   if their attempts fail.
 – 4. Agents believe their intentions are possible.





    – 5. Agents do not believe they will not bring about their intentions.
    – 6. Under certain circumstances, agents believe they will bring about their
      intentions.
    – 7. Agents need not intend all the expected side effects of their intentions.

    With these criteria, Cohen and Levesque construct a formal theory of in-
tention based on the notion of persistent goal (according to them, an intention
is a form of persistent goal [4]). However, this theory does not deal with the
dynamics of intentions [10], [15]. The dynamics of intentions should deal with
the problem of how an agent adopts and abandons intentions and what changes
these processes produce in other BDI components. The dynamics of intentions
requires a theory of intention revision, in the same way the changes in beliefs
require a theory of belief revision. So, we modestly add the following postulate
to the criteria of Cohen and Levesque:

    – 8. Agents can retract their intentions when such intentions present problems.

Broadly speaking, this idea is the one that constitutes the core of intention
revision.


2.3     What is intention revision? An example

Let us see, by way of an example, what intention revision is. Assume our agent is
immersed in an environment that is inaccessible, non-deterministic, episodic,
discrete and dynamic [14]. Furthermore, suppose that the agent has certain beliefs
and intentions (state α) and that, eventually, it desires to achieve a certain
state of the world (state β); we represent this situation with the black arrow in
figure 1.




                    Fig. 1. States of the agent and the environment





    In this way, the agent generates an intention of the form put(B, C). Now,
given the properties of the environment, let us suppose the agent perceives the
state γ (denoted by the red arrow) where it is not the case that free(C).
Therefore, the intention will fail, the set of intentions will become inconsistent
and the goals of the agent will not be achieved.
    Let us see this situation in a more precise way. Suppose that we have an
agent with a database of intentions (and beliefs) that includes the following data:

 – p1 !put(x, y).
 – p2 +!put(x, y) :- free(x).
 – p3 +!put(x, y) :- free(y).
 – p4 +!put(x, y) :- !move(x).

where !φ stands for an intention formula and +φ for an addition of a formula. If
the database is equipped with some inference engine, the next formula is required
to accomplish the intention:

 – p5 free(x).

Now, suppose that it is the case that x is not free. This means that we have to
add the negation of p5 to the database. But then the set of intentions becomes
inconsistent in an intuitive sense. If we want to keep the database consistent,
which is a sound methodology, we need to revise the database. This implies that
some of the intentions may have to be retracted; however, we do not need to
revise the whole set of intentions, for that would be an unnecessary loss of time
and information. Thus, we have to choose which formulas (i.e., intentions) to
retract.
    The problem of intention revision is thus twofold: first, intentions are
intimately related to other mental states (like beliefs and desires); and second,
logic by itself is not sufficient to determine which intentions should be
retracted. These problems lead us to take into account that the change of
intentions is associated with changes in beliefs, and that we require extra-logical
concepts to deal with these changes.
    To complicate the setting even more, beliefs and intentions have certain log-
ical consequences: when retracting intentions we have to choose which conse-
quences (beliefs or intentions) we have to retract.
    But to maintain consistency and the maximum number of accomplished
intentions, should we revise all the intentions? The answer is no because, as
we will see, the costs in time and memory would be huge.
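The blocks-world example can be sketched as follows. The representation (literal strings with "~" for negation and "!" marking intentions) and the retraction policy (drop only the contradicted formula) are simplifying assumptions made for illustration, not the paper's proposal.

```python
# Sketch of the blocks-world example: an intention base that becomes
# inconsistent when the percept "~free(c)" arrives.

def neg(p):
    """Negation of a literal, marked with a leading '~'."""
    return p[1:] if p.startswith("~") else "~" + p

def add_percept(base, fact):
    """Add a fact; if it contradicts the base, retract the contradicted
    formula first (a revision); otherwise just add it (an expansion)."""
    clash = neg(fact)
    if clash in base:
        return (base - {clash}) | {fact}
    return base | {fact}

base = {"!put(b,c)", "free(b)", "free(c)"}   # p1 plus its preconditions
revised = add_percept(base, "~free(c)")      # the target is no longer free
```

Notice that the intention !put(b,c) survives this revision even though its precondition failed; deciding whether it, too, must be retracted is precisely the selection problem.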


2.4   Some methodological problems with intention revision

When dealing with intention revision some methodological problems appear: one
related to representation, one related to inference, and finally, one related to
a selection function.





    – The problem of representation. How should intentions be repre-
      sented? Most databases work with facts and rules of some kind. The lan-
      guage used to represent intentions (together with beliefs and desires) may
      be related to some logical formalism (for instance, first order logic). This
      problem is, therefore, twofold: what language should we use to represent our
      data? And is this language adequate to relate the BDI components within a
      context of revision?
    – The problem of the consequences. What is the relation between the
      elements represented as facts and the elements that are inferred? This rela-
      tion is sensitive to the database. In some cases the inferred elements have
      some special status in comparison with the facts; whether we can distinguish
      these differences depends on the representation we use.
    – The problem of the selection function. How should we choose which
      elements to retract? Logic by itself is not sufficient to decide which intentions
      should be maintained and which should be retracted. We need a heuristic to
      determine this selection. One idea is that the loss of information should be
      minimal, for instance, by way of an ordering [7].


3      Models to represent intentional states

We will use a propositional model considering that the elements of the intentional
system are propositional formulas. Of course, even with this representation we
can have several alternatives. First, we have to pick an appropriate language (for
instance, databases may be represented in a Prolog style). In this introduction
we will work with a first order language.
    We assume that the language L is closed under the operators ¬, ∧, ∨, ⇒, eval-
uated in a boolean way. We use φ, ψ, . . . as propositional variables in L. The
language L not only accepts what is explicitly represented in the database, but
also, the consequences of it. Thus, another factor we have to determine is: which
logical system should govern the set of intentions? In practice, the answer to this
question depends on what mechanism of inference is coupled with the database;
however, when doing this theoretical analysis, we will proceed by declaring the
general functions of revision. So, for this introduction, we will use a classical
propositional logic.


3.1     Sets of intentional states

The easiest way to represent an intentional state is by using well-formed formulas
(wff) of L. According to this, we can define a set of intentional states (intentional
set, from now on) through a set Σ of wff of L that satisfy the axiom of generalized
reflexivity (C): if Σ ⊢ φ then φ ∈ Σ. The condition C assures us that Σ is closed
under logical consequence. By the properties of classical logic, whenever Σ is
inconsistent, then for all φ, Σ ⊢ φ. We will denote this with Σ⊥. This means
that there is an intentional set that is inconsistent.





    There is a very close correspondence between intentional sets and possible
worlds models. For any set WΣ of possible worlds we can define a corresponding
intentional set Σ as the set of those sentences that are true in all worlds in WΣ .
From a computational point of view, however, intentional sets are much more
tractable than possible worlds models.
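This correspondence is easy to state concretely. In the minimal sketch below, a possible world is simply the set of atoms true in it, and the corresponding intentional set collects the sentences (restricted here to atoms, a deliberate simplification) true in every world of WΣ.

```python
# Sketch of the correspondence between possible-worlds models and
# intentional sets: given a set of worlds (each a set of true atoms),
# the corresponding intentional set contains the atoms true in all of them.

def intentional_set(worlds, atoms):
    """Atoms true in every world of the model."""
    return {a for a in atoms if all(a in w for w in worlds)}
```

For instance, with the worlds {p, q} and {p}, only p belongs to the corresponding intentional set.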


3.2    Intentional bases

Nevertheless, we have to consider that some intentions are not basic, but inferred.
It is not possible to express this distinction through intentional sets, for the set
theoretic representation does not provide markers or flags to indicate which
intentions are basic and which are inferred. Moreover, it seems that when we
make intentional changes we do not change the whole set of intentions, but
a finite subset of it. Formally, this idea can be represented by letting BΣ be
a base for an intentional set Σ if and only if BΣ is a finite subset of Σ and
Cn(BΣ) = Σ. Then, we introduce the functions for intention revision on bases of
intentions (intentional bases from now on). The distinction between intentional
set and intentional base allows us to generate and distinguish different structures,
e.g., assume two intentional bases BΣ and B′Σ such that Cn(BΣ) = Cn(B′Σ)
but BΣ ≠ B′Σ. If we want to implement intention revision systems, intentional
bases are easier to handle than intentional sets.
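The base/set distinction can be made concrete with a small consequence operator. The sketch below closes a finite base under modus ponens only (a deliberate simplification of Cn) and exhibits two distinct bases that generate the same intentional set, mirroring the BΣ ≠ B′Σ example.

```python
# Two different finite bases with the same closure. The tuple
# ("imp", a, b) encodes the implication a => b; Cn here is closure
# under modus ponens only, a simplification of full consequence.

def Cn(base):
    """Close a finite base under modus ponens."""
    closed = set(base)
    changed = True
    while changed:
        changed = False
        for f in list(closed):
            if isinstance(f, tuple) and f[0] == "imp" \
                    and f[1] in closed and f[2] not in closed:
                closed.add(f[2])
                changed = True
    return closed

b1 = {"p", ("imp", "p", "q")}         # q is inferred, not basic
b2 = {"p", "q", ("imp", "p", "q")}    # q is basic
```

Here Cn(b1) = Cn(b2) although b1 ≠ b2: the set-theoretic closure cannot tell a basic intention from an inferred one, which is exactly why bases carry more information.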


4     Postulates for intention revision

When dealing with intention revision there are two main strategies that may
be followed: to present the construction of the revision process in an explicit
manner, or to formulate the general conditions such constructions must satisfy.
The first approach consists in developing algorithms that compute the revision
functions; the second consists in describing the postulates that define the
functions, from which the algorithms can later be developed.
    In this introduction we will follow the second approach. The formulations
of the postulates will be given through a series of ideas and conditions. The
heuristic behind them is similar to that of belief revision: the intentional changes
should provide (i) a maximum of preservation of information (i.e., a minimum
change in the intentions) and (ii) consistency.
    Intention revision should occur when a new piece of information that is in-
consistent with the database is added to the system, in such a way that the
resulting set is inconsistent. But this change is not the only one that may occur.
Depending on how intentions are represented and which intentions are accepted,
different intentional changes are possible. We can distinguish four intentional
changes, three of them similar to belief changes:

 – Expansion. A new formula φ is added to a Σ together with the logical
   consequences of the addition. The system that results from expanding Σ by
   a sentence φ will be denoted as Σ ⊕ φ.





    – Revision. A new formula φ that is inconsistent with Σ is added, but in order
      to maintain consistency in the resulting system, some of the old formulas in
      Σ have to be deleted. This is denoted by Σ ⊛ φ.
    – Contraction. Some formula φ in Σ is retracted without adding any new
      facts. In order to keep the system closed under logical consequence, some
      further members of Σ may have to be deleted. This will be denoted by Σ ⊖ φ.
    – Reconsideration. A new formula φ is added to Σ, but eventually such
      formula has to be contracted or revised. This is denoted by Σ ⊗ φ.

    Expansions are closed under logical consequence (i.e., the expansion of an
intentional set with a new formula is Σ ⊕ φ = {ψ | Σ ∪ {φ} ⊢ ψ}); however, it is
not possible to give a similar characterization of the other changes. The problem
of revision, contraction and reconsideration has its roots in the lack of purely
logical reasons to accomplish these processes. Thus, we can have different ways
to research, specify and verify them.
    For the time being, we will assume that the intentional sets model intentional
bases. In what follows we will formulate some postulates for intention revision.
The motivation behind these postulates (adapted from [1]) is that when we
modify our intentions we have to keep the change of intentions to a minimum
and we have to maintain consistency. For an agent, obtaining information has
costs, and the environment in which it is immersed is dynamic; for these reasons,
unnecessary losses of information and time have to be avoided. On the other
hand, we also require a compromise, for memory space is not free. This is an
optimization heuristic; and although it is possible to give a quantitative
definition of the loss of time or information, doing so is hard and impractical
for our purposes. Instead, we will follow another specification: given that
intentions are hierarchical plans [2], we believe that when retracting intentions
we must retract the ones lower in the hierarchy; and given that reconsideration
reduces the time of revision, we believe we have to retract intentions on the
basis of general rules [8]. In what follows, we will specify the postulates for
intention revision considering these ideas.
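The hierarchy heuristic can be phrased as a selection function: among the retraction candidates, choose the intention lowest in the plan hierarchy, in the spirit of an entrenchment ordering [7]. The plan names and ranks below are purely illustrative assumptions.

```python
# Entrenchment-style selection: retract the candidate with the lowest
# rank in the plan hierarchy first. Names and ranks are illustrative.

def choose_retraction(candidates, rank):
    """Pick the least entrenched (lowest-ranked) retraction candidate."""
    return min(candidates, key=rank)

# Toy hierarchy: the top-level plan outranks its subsidiary steps.
rank = {"!travel": 2, "!book_flight": 1, "!pack": 0}.get
```

Under this ordering a subsidiary step such as !pack is sacrificed before the top-level plan !travel, which is one way to keep the loss of information minimal.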

4.1     Postulates for revision
For intention revision, the first postulate requires closure:
Postulate 1 (⊛1) For any formula φ and any intentional set Σ, Σ ⊛ φ is an
intentional set.
The second postulate guarantees that the input sentence is accepted in the re-
vision:
Postulate 2 (⊛2) φ ∈ Σ ⊛ φ.
A revision process should occur when the input φ contradicts what is already in
Σ, that is, ¬φ ∈ Σ. However, in order to have the revision function defined for
all inputs, we can easily extend it to cover the case when ¬φ ∉ Σ. Thus, revision
is identified with expansion:





Postulate 3 (⊛3) Σ ⊛ φ ⊆ Σ ⊕ φ.

Postulate 4 (⊛4) If ¬φ ∉ Σ, then Σ ⊕ φ ⊆ Σ ⊛ φ.

The purpose of a revision is to produce a new consistent intentional set. Thus
Σ ⊛ φ should be consistent, unless φ is logically impossible:
Postulate 5 (⊛5) Σ ⊛ φ = Σ⊥ if and only if ⊢ ¬φ.
We also require equivalence:
Postulate 6 (⊛6) If ⊢ φ ⇔ ψ, then Σ ⊛ φ = Σ ⊛ ψ.
The postulates (⊛1) to (⊛6) are the basic postulates for revision. The final two
conditions concern composite intention revisions. The idea is that if Σ ⊛ φ is a
revision of Σ and Σ ⊛ φ is to be changed by a further formula ψ, such a change
should be made by expansions of Σ ⊛ φ whenever possible. The minimal change
of Σ to include both φ and ψ, that is, Σ ⊛ (φ ∧ ψ), ought to be the same as the
expansion of Σ ⊛ φ by ψ, so long as ψ does not contradict the intentions in Σ ⊛ φ:
Postulate 7 (⊛7) Σ ⊛ (φ ∧ ψ) ⊆ (Σ ⊛ φ) ⊕ ψ.

Postulate 8 (⊛8) If ¬ψ ∉ Σ ⊛ φ, then (Σ ⊛ φ) ⊕ ψ ⊆ Σ ⊛ (φ ∧ ψ).

When ¬ψ ∈ Σ ⊛ φ, then (Σ ⊛ φ) ⊕ ψ is Σ⊥.

4.2   Postulates for contraction
We also need closure:
Postulate 9 (⊖1) For any formula φ and any intentional set Σ, Σ ⊖ φ is an
intentional set.
Because Σ ⊖ φ is formed from Σ by giving up some intentions, no new intentions
should appear:
Postulate 10 (⊖2) Σ ⊖ φ ⊆ Σ.
When φ ∉ Σ, the optimization heuristic requires that nothing be retracted:
Postulate 11 (⊖3) If φ ∉ Σ, then Σ ⊖ φ = Σ.
The formula to be contracted should not be a logical consequence of the inten-
tions in Σ ⊖ φ:
Postulate 12 (⊖4) If ⊬ φ, then φ ∉ Σ ⊖ φ.
From (⊖1) to (⊖4) it follows that if φ ∈ Σ, then (Σ ⊖ φ) ⊕ φ ⊆ Σ. In other
words, if we first retract φ and then add φ again to the resulting intentional set,
no intentions are accepted that were not accepted in the original intentional set.
The optimization heuristic demands that as many intentions as possible should
be kept in Σ ⊖ φ. So, we need recovery:





Postulate 13 (⊖5) If φ ∈ Σ, then Σ ⊆ (Σ ⊖ φ) ⊕ φ.
This postulate enables us to undo contractions and, although it is controversial,
we will assume it for the sake of this introduction. The sixth postulate is
analogous to (⊛6):
Postulate 14 (⊖6) If ⊢ φ ⇔ ψ, then Σ ⊖ φ = Σ ⊖ ψ.
These are the basic postulates for intention contraction. Again, two further
postulates for contractions with respect to conjunctions will be added. The
motivations for these postulates are similar to those of (⊛7) and (⊛8).
Postulate 15 (⊖7) Σ ⊖ φ ∩ Σ ⊖ ψ ⊆ Σ ⊖ (φ ∧ ψ).

Postulate 16 (⊖8) If φ ∉ Σ ⊖ (φ ∧ ψ), then Σ ⊖ (φ ∧ ψ) ⊆ Σ ⊖ φ.
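On the same literal-level model (still with no closure under consequence), contraction is simply removal, which is enough to see Postulates 10 to 13 at work; recovery (Postulate 13) holds trivially here because nothing else is removed along with the contracted formula.

```python
# Literal-level sketch of contraction; inclusion, vacuity, success and
# recovery (Postulates 10-13) can be checked directly on finite sets.

def contract(sigma, phi):
    """Contraction: remove the formula, nothing else."""
    return sigma - {phi}

def expand(sigma, phi):
    """Expansion: plain addition of the input."""
    return sigma | {phi}
```

In a model with full logical closure, recovery is far less innocent, which is why the text flags Postulate 13 as controversial.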

4.3   Postulates for reconsideration
The postulates above are adaptations of [1]. The following are different in some
respects. The first postulate requires closure as well:
Postulate 17 (⊗1) For any formula φ and any intentional set Σ, Σ ⊗ φ is an
intentional set.
Reconsideration leads to revision [2] or contraction:
Postulate 18 (⊗2) (Σ ⊗ φ ⊆ Σ ⊛ φ) ∨ (Σ ⊗ φ ⊆ Σ ⊖ φ).
The purpose of a reconsideration is to produce a new consistent intentional set:
Postulate 19 (⊗3) Σ ⊗ φ = Σ⊥ if and only if ⊢ ¬φ.
We also require equivalence:
Postulate 20 (⊗4) If ⊢ φ ⇔ ψ, then Σ ⊗ φ = Σ ⊗ ψ.
This is the basic set of postulates for reconsideration. We also have the
following ideas: the reconsideration Σ ⊗ φ should be done by expansions
whenever possible, and the minimal change of Σ to include φ and ψ should be
the same as the expansion of Σ ⊗ φ by ψ.
Postulate 21 (⊗5) Σ ⊗ (φ ∧ ψ) ⊆ (Σ ⊗ φ) ⊕ ψ.

Postulate 22 (⊗6) If ¬ψ ∉ Σ ⊗ φ, then (Σ ⊗ φ) ⊕ ψ ⊆ Σ ⊗ (φ ∧ ψ).

Postulate 23 (⊗7) Σ ⊗ φ ∩ Σ ⊗ ψ ⊆ Σ ⊗ (φ ∧ ψ).

   We now display some results regarding intention revision, but first, we require
some definitions: an intention φ is abandoned if and only if φ is retracted from
Σ either by a contraction or a revision. And an intention φ is continued if and
only if φ ∈ (Σ ⊗ φ) ⊕ φ.
   The following results are straightforward.





Proposition 1 The following statements hold:
 – 1. If φ is reconsidered, then φ is abandoned or continued.

                      Σ ⊗ φ ⇒ (Σ ⊛ φ ∨ Σ ⊖ φ) ∨ ((Σ ⊗ φ) ⊕ φ)

 – 2. Inconsistency of reconsideration results from the inconsistency of inten-
   tions.
                               Σ⊥ ⇒ Σ ⊗ φ = Σ⊥
 – 3. Reconsidering a consistent Σ with the current intentions does not remove
   any intention.
                                 (Σ ⊗ φ) ⊕ φ = Σ
 – 4. Successful reconsideration produces an intentional set.

                                       Σ⊗φ=Σ

Proof. To prove statement 1, assume φ is reconsidered. By ⊗2, it follows that
Σ ⊛ φ ∨ Σ ⊖ φ, which means φ is abandoned; by addition (∨-introduction), φ is
abandoned or continued. Statement 2 follows from ⊗3. Statement 3 results from
the definition of a continued intention. Statement 4 follows from ⊗1. □
    At this point, we have presented some issues and problems of intention revi-
sion by isolating intentions from other mental states. In the next proposition we
will try to relate intentions and beliefs through the reconsideration function. To
see the next results, recall that it is irrational for an agent to intend φ and
believe at the same time that it will not achieve φ: this is intention-belief
inconsistency. To avoid this inconsistency, an agent must abandon intentions
that are impossible to achieve. And recall that it is rational for an agent to
intend φ and at the same time not believe that it will achieve φ: this is
intention-belief incompleteness. To accomplish this property, an agent must
continue its intentions.
    The following representation theorems require some auxiliary definitions: we
say an agent believes φ, BELφ, if φ ∈ Σ; and an agent has an intention to φ,
INTENDφ, if !φ ∈ Σ.

Proposition 2 The following statements hold:
 – 1. Reconsideration implies intention-belief incompleteness.

                           ⊢ Σ ⊗ φ ⇒ INTENDφ ∧ ¬BEL¬φ

 – 2. Reconsideration avoids intention-belief inconsistency.

                           ⊬ Σ ⊗ φ ⇒ INTENDφ ∧ BEL¬φ

Proof. Assume Σ ⊗ φ. Furthermore, assume that INTENDφ is also given. We
have two options: the intention is possible or impossible to achieve.
For statement 1: if the intention is possible to achieve after reconsideration, then
φ ∈ (Σ ⊗ φ) ⊕ φ, which means the agent can continue its intention. Thus, φ ∈ Σ.
For statement 2: if the intention is impossible to achieve, then Σ⊥ , which means





that the reconsideration is inconsistent; but an inconsistent reconsideration
cannot be the case given ⊗3. □
    Intuitively, this means that if an agent reconsiders, such an agent is closer
to rationality by following the intention-belief incompleteness property, because
the agent continues intentions that are possible to achieve. The agent also keeps
away from irrationality by avoiding intention-belief inconsistency: since after
reconsideration the agent cannot have inconsistent reconsiderations, it has to
drop intentions that are not possible to achieve.

5    Conclusions
Let us sum up some of the main ideas and results of this introduction:
 – A) Agents can retract their intentions when such intentions present prob-
   lems.
 – B) If an agent reconsiders an intention, such intention is abandoned or con-
   tinued.
 – C) If an agent reconsiders, such agent is closer to rationality by following the
   intention-belief incompleteness property and by avoiding intention-belief
   inconsistency.
So, we have sketched some general and introductory guidelines for intention
revision by following the theories of intention and by considering that intentions
are not isolated and are related to planning. We are aware these ideas lead to
more problems. Some of the open problems we do not want to leave unmentioned
are the following:
 – How do we relate the topic of this introduction with a non-monotonic logic?
   Since we can reconsider intentions, we have the possibility to relate our
   proposal with a non-monotonic consequence relation. Recall from section 2,
   for instance, that from a state α we want to achieve a state β through some
   execution of intentions, formally:

                                   α : p1 , . . . , pn
                                           β

   but eventually it happens that some plan pi fails, which leads to intention
   revision. Future work requires the treatment of this situation.
 – How do we relate the BDI components and temporal logic with the postulates
   we have proposed? One of the problems that we have presented is that,
   although we provide an abstract definition of the revision functions, we do
   not take into account the role of time within the reasoning process. Another
   problem is that we have considered intentions in an isolated way; this is
   necessary nonetheless, since intentions are irreducible components of the
   BDI architecture [2]; however, it is not sufficient. We have to relate the
   functions to other mental states through bridge rules of the form:

                            B1 , . . . , Bn    α : p1 , . . . , pn
                                            β





   that specify the change of states given certain beliefs (Bi ) and intentions.
   But we also have to construct representation theorems, such as proposition
   2, in order to relate different formalisms.
 – What is the role of desires within this specification? The BDI architecture
   also requires desires. We know intentions are the desires the agents have
   committed to achieve. How does the change of desires affect the intentional
   changes?
 – Which programming language would be adequate to model our proposal? An-
   other problem is that our approach in this introduction is close to an abstract
   logical specification, but far from implementation. Future work requires an
   integration of this proposal with an implementation.
    The introduction we have presented here does not claim to be exhaus-
tive. On the contrary, we believe that the issues and problems we have shown
are complex enough to be solved only in extensions of this work; but also,
we believe they are clear enough to open a research program on intention
revision.

Acknowledgements. The author would like to thank the anonymous reviewers
and Dr. Axel Barceló for their helpful comments and precise corrections. The
author is supported by the CONACyT scholarship 214783.


References
1. Alchourrón, C. E., Gärdenfors, P., Makinson, D.: On the logic of theory change:
   partial meet contraction and revision functions. Journal of Symbolic Logic, 50, 510-
   530 (1985).
2. Bratman, M.: Intention, Plans, and Practical Reason. Harvard University Press,
   Cambridge (1987).
3. Bratman, M. E., Israel, D. J., Pollack, M. E.: Plans and resource-bounded practical
   reasoning. Computational Intelligence, 4, 349-355 (1988).
4. Cohen, P., Levesque, H.: Intention is choice with commitment. Artificial Intelligence
   42(3), 213-261 (1990).
5. Dignum, F., Meyer, J.-J. Ch., Wieringa, R. J., Kuiper, R.: A modal approach to
   intentions, commitments and obligations: Intention plus commitment yields obliga-
   tion. In M. A. Brown, J. Carmo (eds.), Deontic logic, agency and normative systems,
   pp. 80-97, Springer-Verlag (1996).
6. Fagin, R., Ullman, J. D., Vardi, M. Y.: On the semantics of updates in databases.
   Proceedings of Second ACM SIGACT-SIGMOD, Atlanta, 352-365 (1983).
7. Gärdenfors, P., Makinson, D.: Revisions of knowledge systems using epistemic en-
   trenchment. In Proceedings of the Second Conference on Theoretical Aspects of
   Reasoning about Knowledge, M. Vardi (ed.), Los Altos, CA: Morgan Kaufmann
   (1988).
8. Guerra-Hernández A., Castro-Manzano, J.M., El-Fallah-Seghrouchni, A.: CTLA-
   gentSpeak(L): a Specification Language for Agent Programs. Journal of Algorithms
   in Cognition, Informatics and Logic, (2009).
9. Harper, W. L.: Rational conceptual change. In PSA 1976, East Lansing, Mich:
   Philosophy of Science Association (1977).





10. Hoek, W. van der, Jamroga, W., Wooldridge, M.: Towards a theory of intention
   revision. Synthese, Springer-Verlag (2007).
11. Konolige, K., Pollack, M. E.: A representationalist theory of intentions. In Proceed-
   ings of International Joint Conference on Artificial Intelligence (IJCAI-93), 390-395,
   San Mateo: Morgan Kaufmann (1993).
12. Levi, I.: The Enterprise of Knowledge. MIT Press, Cambridge, Massachusetts
   (1980).
13. Rao, A.S., Georgeff, M.P.: Modelling Rational Agents within a BDI-Architecture.
   In Huhns, M.N., Singh, M.P., (eds.) Readings in Agents, pp. 317-328. Morgan Kauf-
   mann (1998)
14. Russell, S. J., Norvig, P.: Artificial Intelligence. A modern approach. Prentice Hall,
   New Jersey, USA (1995).
15. Singh, M.P.: A critical examination of the Cohen-Levesque Theory of Intentions.
   In Proceedings of the European Conference on Artificial Intelligence (1992).
16. Wooldridge, M.: Introduction to Multiagent Systems. John Wiley and Sons, Ltd.
   (2001).



