<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>An Introduction to Intention Revision: Issues and Problems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>José Martín Castro-Manzano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Instituto de Investigaciones Filosóficas, Universidad Nacional Autónoma de México</institution>
          <addr-line>Circuito Mario de la Cueva s/n, Ciudad Universitaria, 04510 Coyoacán</addr-line>
          ,
          <country country="MX">Mexico</country>
        </aff>
      </contrib-group>
      <abstract>
<p>The change of beliefs on the basis of new information has been widely studied; however, the change of other mental states, and particularly of intentions, has received less attention. Although there are philosophical and formal theories of intention, few of them consider the revision of intentions. We suggest introductory guidelines to define a research program for the revision of intentions, considering that: (i) intentions are intimately related to the beliefs and desires of agents immersed in a dynamic world; (ii) intentions are directly related to planning; and (iii) a reconsideration function is needed.</p>
      </abstract>
      <kwd-group>
        <kwd>Intention</kwd>
        <kwd>reconsideration</kwd>
        <kwd>BDI</kwd>
        <kwd>agents</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
Belief revision is a paradigmatic research program: it is a relatively new area
of research that joins two disciplines: computer science and philosophy. Ever since
programmers began dealing with databases, they have faced the problem of keeping their
information up to date. On the other hand, certain philosophers dealt with the change of
information within epistemic structures. So, we can identify, respectively, two
important moments in the history of this research program: one in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]; and the
other in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. A general theory can be found in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This last approach
constitutes the core for any program of belief revision.
      </p>
      <p>
        Thus, although the change of beliefs on the basis of new information has
been widely studied with success during the last 25 years, the dynamic process
of other mental states has received less attention, and particularly, intentions
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Certainly, there are philosophical and formal theories of intention [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] but few of them, if any, consider the possibility of the revision
of intentions [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
<p>In this work we suggest some general and introductory guidelines in order
to define a program for intention revision. We think this topic is important
because (i) intentions are intimately related to the beliefs and desires of
agents immersed in a dynamic world; (ii) intentions are directly related to
planning; and (iii) a reconsideration function is needed.</p>
      <p>
        The general background of this work assumes the theories of intention as
represented by [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]; and the belief revision program as represented by [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
<p>The rest of the paper is organized as follows: in section 2 we describe
what we mean by intention revision and discuss some methodological
problems. In section 3 we discuss some issues regarding the problem of
representation. In section 4 we adapt and suggest some general postulates for the revision
of intentions. Finally, in section 5 we discuss the ideas of this introduction and
give some details about future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Intention revision</title>
<p>We can study intentions from two general perspectives. One is internal, e.g., what
intentions are and how they behave; the other is external, regarding the problems
intentions generate, e.g., how they relate to other mental states and how
those relations can be modelled. We will follow this double approach.</p>
      <p>
Internal perspective.
For our introduction we will need an approach based upon the BDI model of
rational agency [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. This model receives its name from the use of Beliefs,
Desires and Intentions in order to model the rationality of agents. Intuitively,
beliefs correspond to the information the agent has about itself and its
environment. Desires correspond to the motivational part of the agent, what the
agent wants to see accomplished. Finally, intentions correspond to the
deliberative part and consist in the desires the agent is committed to achieve.
      </p>
      <p>
Intentions, as an irreducible component of the BDI model [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], have certain
features that, taken together, make them different from beliefs and desires:
- Pro-activity. Intentions are pro-active: they move the agent to achieve a
goal [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In this sense, intentions are conduct-controlling components. It is
important to note, however, that intentions are not equal to desires. Both
intentions and desires are pro-attitudes, but intentions imply commitment
and consistency, while desires do not.
- Inertia. Intentions also possess inertia, that is to say, once an intention has
been adopted, it resists being abandoned. If an intention were adopted and
immediately abandoned, we would have to say the intention was never taken;
however, if the reason that generated the intention disappears, it is rational
to abandon the intention [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
- Admissibility. Intentions also provide a filter of admissibility. Once an
intention has been adopted, it constrains future practical reasoning: while
the agent holds a particular intention, the agent will not consider
contradictory options. Thus, intentions provide a filter [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
<p>In this way, we can say that intentions require a notion of commitment (given
the principle of pro-activity), a notion of consistency (given the admissibility
criterion) and a notion of retractability (given the notion of inertia).</p>
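<p>To fix intuitions, the three features above can be put into a toy sketch. The following Python fragment is an illustration only, under strong assumptions: the class, the predicate strings, and the string convention linking reasons to intentions are invented here and are not part of the theory:</p>

```python
# Toy illustration of pro-activity, inertia and admissibility (all names and
# the "reason for" convention are assumptions made for this sketch only).

class Agent:
    def __init__(self, beliefs, desires):
        self.beliefs = set(beliefs)    # information about the world
        self.desires = set(desires)    # motivational part, no commitment yet
        self.intentions = set()        # desires the agent commits to

    def adopt(self, desire):
        # Admissibility: an option contradicting a held intention is filtered out.
        if ("not " + desire) in self.intentions:
            return False
        if desire in self.desires:
            self.intentions.add(desire)  # commitment: pro-activity
            return True
        return False

    def update_beliefs(self, new_beliefs):
        # Inertia with rational abandonment: an intention is kept until the
        # reason (supporting belief) that generated it disappears.
        self.beliefs = set(new_beliefs)
        self.intentions = {i for i in self.intentions
                           if ("reason for " + i) in self.beliefs}

agent = Agent(beliefs={"reason for tidy(room)"}, desires={"tidy(room)"})
agent.adopt("tidy(room)")     # intention adopted and held
agent.update_beliefs(set())   # motivating reason gone: rational to abandon
```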
      <p>
Plans, insofar as they are sets of actions, are intentions and in this sense
share the same properties: they are conduct-controlling, they have inertia
and they work as inputs for future practical reasoning [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Moreover, plans have
certain features of their own:
- Plans are partial. Plans are partial, not complete, because they lack
complete information about the state of the world, e.g., the environment is
not accessible.
- Plans are not static. Plans cannot be static structures because the
environment of the agent is dynamic.
- Plans are hierarchical. Plans contain means-ends reasons that have to
be ordered.
      </p>
      <sec id="sec-2-1">
<title>Plans also require the following features:</title>
<p>- Internal consistency. Plans must be executable.
- Strong consistency. Plans must be consistent with the agent's beliefs.
- Means-ends coherence. The means-ends reasoning of the plan must be
consistent with the global ends of the plan.</p>
        <p>
These last features lead us to consider another problem: intentions
are not isolated mental states [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Modifying intentions implies modifying
beliefs, and sometimes modifying beliefs may modify intentions. In this sense,
strong consistency shows us that beliefs and intentions maintain certain
relationships: the asymmetry thesis. Bratman [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] considers these relations as principles
of rationality:
- Intention-belief inconsistency. It is irrational for an agent to intend φ
and believe at the same time that it will not achieve φ.
- Intention-belief incompleteness. It is rational for an agent to intend φ
and at the same time not believe that it will achieve φ.
        </p>
<p>Thus, we can say that the notions of consistency and retractability are not
exclusive to beliefs, and that the difficulty of intention revision lies
in the relation between intentions and beliefs.</p>
        <p>
External perspective.
Based on Bratman, Cohen and Levesque [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ] suggested seven ideas - or problems -
that a theory of intention must take into account:
- 1. Intentions pose problems for agents, who need to determine ways of
achieving them.
- 2. Intentions provide a filter for adopting other intentions, which must not
conflict.
- 3. Agents track the success of their intentions, and are inclined to try again
if their attempts fail.
        </p>
<p>- 4. Agents believe their intentions are possible.
- 5. Agents do not believe they will not bring about their intentions.
- 6. Under certain circumstances, agents believe they will bring about their
intentions.
- 7. Agents need not intend all the expected side effects of their intentions.</p>
        <p>
With these criteria, Cohen and Levesque construct a formal theory of
intention based on the notion of persistent goal (according to them, an intention
is a form of persistent goal [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]). However, this theory does not deal with the
dynamics of intentions [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ], [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. The dynamics of intentions should deal with
the problem of how an agent adopts and abandons intentions and what changes
these processes produce in other BDI components. The dynamics of intentions
requires a theory of intention revision, in the same way that changes in beliefs
require a theory of belief revision. So, we modestly add the following postulate to
the criteria of Cohen and Levesque:
        </p>
<p>- 8. Agents can retract their intentions when such intentions present problems.
Broadly speaking, this idea is the one that constitutes the core of intention
revision.</p>
        <p>
What is intention revision? An example.
Let us see, by way of an example, what intention revision is. Assume our agent is
immersed in an environment that is inaccessible, non-deterministic, episodic, discrete
and dynamic [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ]. Furthermore, suppose that the agent has certain beliefs and
intentions and that, eventually, it desires to achieve a certain state of the
world - we represent this situation with the black arrow in figure 1.
        </p>
<p>In this way, the agent generates an intention of the form put(B, C). Now,
given the properties of the environment, let us suppose the agent perceives a
state - denoted by the red arrow - where it is not the case that free(C).
Therefore, the intention will fail, the set of intentions will become inconsistent
and the goals of the agent will not be achieved.</p>
<p>Let us see this situation in a more precise way. Suppose that we have an
agent with a database of intentions - and beliefs - that includes the following data:
- p1: !put(x, y).
- p2: +!put(x, y) : free(x).
- p3: +!put(x, y) : free(y).
- p4: +!put(x, y) : !move(x).
where ! stands for an intention formula and + for the addition of a formula. If
the database is equipped with some inference engine, the next formula is required
to accomplish the intention:</p>
<p>- p5: free(x).</p>
<p>Now, suppose that it is the case that x is not free. This means that we have to
add the negation of p5 to the database. But then the set of intentions becomes
inconsistent in an intuitive sense. If we want to keep the database consistent,
which is a sound methodology, we need to revise the database. This implies that
some of the intentions may have to be retracted; however, we do not need to
revise the whole set of intentions, for that would be an unnecessary loss of time
and information. Thus, we have to choose which formulas - i.e., intentions - to
retract.</p>
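<p>The scenario above can be sketched operationally. The following fragment is an illustration only: the priority ordering and the purely syntactic contradiction test are assumptions we introduce here, since the selection heuristic is deliberately left open:</p>

```python
# A sketch of the revision scenario above: an intention base that becomes
# inconsistent when the negation of a required fact is added, and a naive
# selection function that retracts only lower-priority formulas.

def contradicts(f, g):
    # Syntactic negation test: "free(c)" vs "not free(c)" (an assumption).
    return f == "not " + g or g == "not " + f

# Intention/belief base: formula -> priority (higher = more entrenched).
base = {
    "!put(b, c)": 3,   # the intention itself
    "free(c)": 1,      # fact required to accomplish it (p5-like)
}

def revise(base, new_formula, priority):
    """Add new_formula; retract any lower-priority formula it contradicts."""
    keep = {f: p for f, p in base.items()
            if not (contradicts(f, new_formula) and p <= priority)}
    keep[new_formula] = priority
    return keep

# The agent perceives that c is not free:
base = revise(base, "not free(c)", priority=2)
# "free(c)" (priority 1) is retracted; the intention (priority 3) survives,
# so the agent can re-plan rather than drop the whole database.
```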
<p>The problem of intention revision is thus twofold: first, intention revision is
intimately related to other mental states (like beliefs and desires); and second,
logic by itself is not sufficient to determine which intentions should
be retracted. These problems lead us to take into account that the change of
intentions is associated with changes in beliefs, and that we require extra-logical
concepts to deal with these changes.</p>
        <p>To complicate the setting even more, beliefs and intentions have certain
logical consequences: when retracting intentions we have to choose which
consequences (beliefs or intentions) we have to retract.</p>
<p>But to maintain consistency and the maximum number of intentions
accomplished, should we revise all the intentions? The answer is no because, as
we will see, the costs in time and memory would be huge.</p>
        <p>
Some methodological problems with intention revision.
When dealing with intention revision some methodological problems appear: one
related to representation, one related to inference, and finally, one related to
a selection function.
- The problem of representation. How should intentions be
represented? Most databases work with facts and rules of some kind. The
language used to represent intentions - together with beliefs and desires - may
be related to some logical formalism (for instance, first-order logic). This
problem is, therefore, double: what language should we use to represent our
data? And is this language adequate to relate the BDI components within a
context of revision?
- The problem of the consequences. What is the relation between the
elements represented as facts and the elements that are inferred? This
relation is sensitive to the database. In some cases the elements that have been
inferred have some special status in comparison with the facts; however,
depending on which representation we use, we will be able to distinguish these
differences.
- The problem of the selection function. How should we choose which
elements to retract? Logic by itself is not sufficient to decide which intentions
should be maintained and which should be retracted. We need a
heuristic to determine this selection. One idea is that the loss of information
should be minimal, for instance, by way of an ordering [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Models to represent intentional states</title>
<p>We will use a propositional model, considering that the elements of the intentional
system are propositional formulas. Of course, even with this representation we
have several alternatives. First, we have to pick an appropriate language (for
instance, databases may be represented in a Prolog style). In this introduction
we will work with a first-order language.</p>
<p>We assume that the language L is closed under the operators ¬, ∧, ∨, ⇒,
evaluated in a boolean way. We use φ, ψ, ... as propositional variables in L. The
language L not only accepts what is explicitly represented in the database, but
also its consequences. Thus, another factor we have to determine is: which
logical system should govern the set of intentions? In practice, the answer to this
question depends on what mechanism of inference is coupled with the database;
however, in this theoretical analysis, we will proceed by declaring the
general revision functions. So, for this introduction, we will use classical
propositional logic.</p>
<p>Sets of intentional states.
The easiest way to represent an intentional state is by using well-formed formulas
(wffs) of L. According to this, we can define a set of intentional states (intentional
set, from now on) as a set K of wffs of L that satisfies the axiom of generalized
reflexivity (C): if K ⊢ φ then φ ∈ K. The condition C assures us that K is closed
under logical consequence. By the properties of classical logic, whenever K is
inconsistent, then for all φ, K ⊢ φ. We will denote this set by K⊥. This means
that there is an intentional set that is inconsistent.</p>
<p>There is a very close correspondence between intentional sets and possible
worlds models. For any set W of possible worlds we can define a corresponding
intentional set as the set of those sentences that are true in all worlds in W.
From a computational point of view, however, intentional sets are much more
tractable than possible worlds models.</p>
<p>Intentional bases.
Nevertheless, we have to consider that some intentions are not basic, but inferred.
It is not possible to express this distinction through intentional sets, for the set
theoretic representation does not provide markers or flags to indicate which
intentions are basic and which are inferred. Moreover, it seems that when we
make intentional changes we do not change the whole set of intentions, but
a finite subset of it. Formally, this idea can be represented by letting B be
a base for an intentional set K if and only if B is a finite subset of K and
Cn(B) = K. Then, we introduce the functions for intention revision in bases of
intentions (intentional bases from now on). The distinction between intentional
set and intentional base allows us to generate and distinguish different structures,
e.g., assume two intentional bases B and B′ such that Cn(B) = Cn(B′)
but B ≠ B′. If we want to implement intentional revision systems, intentional
bases are easier to handle than intentional sets.</p>
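<p>The distinction between bases and sets can be sketched as follows; the consequence operator Cn is simulated by a toy modus ponens closure over atoms and (premise, conclusion) rules, an assumption made purely for illustration:</p>

```python
# Sketch: two syntactically different finite bases with the same logical
# closure, illustrating B != B' while Cn(B) = Cn(B').

def cn(base):
    """Close a set of atoms and (premise, conclusion) rules under modus ponens."""
    facts = {f for f in base if isinstance(f, str)}
    rules = {f for f in base if isinstance(f, tuple)}
    changed = True
    while changed:
        changed = False
        for prem, concl in rules:
            if prem in facts and concl not in facts:
                facts.add(concl)
                changed = True
    return facts | rules

b1 = {"free(c)", ("free(c)", "can_put(b, c)")}
b2 = {"free(c)", "can_put(b, c)", ("free(c)", "can_put(b, c)")}

assert b1 != b2            # distinct finite bases
assert cn(b1) == cn(b2)    # the same intentional set
```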
    </sec>
    <sec id="sec-4">
      <title>Postulates for intention revision</title>
<p>When dealing with intention revision there are two main strategies that may
be followed: to present in an explicit manner the construction of the process
of revision, or to formulate the general ideas needed to realize such constructions. The
first approach consists in developing algorithms that compute the revision
functions; the second consists in describing the postulates that define the
functions, in order to later develop the algorithms.</p>
<p>In this introduction we will follow the second approach. The
postulates will be formulated through a series of ideas and conditions. The
heuristic behind them is similar to that of belief revision: intentional changes
should provide (i) a maximum of preservation of information (i.e., a minimum
change in the intentions) and (ii) consistency.</p>
<p>Intention revision should occur when a new piece of information that is
inconsistent with the database is added to the system, in such a way that the
resulting set is inconsistent. But this change is not the only one that may occur.
Depending on how intentions are represented and what intentions are accepted,
different intentional changes are possible. We can distinguish four intentional
changes, three of them similar to belief changes:
- Expansion. A new formula φ is added to an intentional set K together with the
logical consequences of the addition. The system that results from expanding K by
a sentence φ will be denoted by K + φ.
- Revision. A new formula φ that is inconsistent with K is added, but in order
to maintain consistency in the resulting system, some of the old formulas in
K have to be deleted. This is denoted by K ∗ φ.
- Contraction. Some formula φ in K is retracted without adding any new
facts. In order to maintain the system closed under logical consequence, some
other members of K may have to be deleted. This will be denoted by K − φ.
- Reconsideration. A new formula φ is added to K, but eventually such a
formula has to be contracted or revised. This is denoted by K ⊛ φ.</p>
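<p>As a rough sketch, the four changes can be simulated over a finite base of literals, where consistency simply means that no literal occurs together with its negation; the function names and the fallback policy for reconsideration are assumptions of this sketch, not definitions taken from the theory:</p>

```python
# Toy versions of expansion, revision, contraction and reconsideration on a
# finite set of string literals ("p" vs "not p"); closure is left implicit.

def neg(f):
    return f[4:] if f.startswith("not ") else "not " + f

def consistent(k):
    return all(neg(f) not in k for f in k)

def expand(k, f):
    return k | {f}

def contract(k, f):
    return {g for g in k if g != f}

def revise(k, f):
    # Delete the old formula that clashes with f, then expand.
    return expand({g for g in k if g != neg(f)}, f)

def reconsider(k, f):
    # Add f, but fall back to revision when plain expansion is inconsistent.
    e = expand(k, f)
    return e if consistent(e) else revise(k, f)

k = {"free(c)"}
assert expand(k, "clear(b)") == {"free(c)", "clear(b)"}
assert not consistent(expand(k, "not free(c)"))          # naive expansion fails
assert reconsider(k, "not free(c)") == {"not free(c)"}   # falls back to revision
```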
<p>Expansions are closed under logical consequence (i.e., the expansion of the
intentional set K with a new formula φ is K + φ = {ψ | K ∪ {φ} ⊢ ψ}); however, it is
not possible to give a similar characterization of the other changes. The problem
with revision, contraction and reconsideration has its roots in the lack of purely
logical reasons to accomplish these processes. Thus, we can have different ways
to research, specify and verify them.</p>
      <p>
For the time being, we will assume that intentional sets model intentional
bases. In what follows we will formulate some postulates for intention revision.
The motivation behind these postulates (adapted from [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]) is that when we
modify our intentions we have to keep the change of intentions to a minimum
and we have to maintain consistency. For an agent, obtaining information
implies costs, and the environment in which it is immersed is dynamic; for these
reasons the unnecessary loss of information and time has to be avoided. On the
other hand, we also require compromise, for space in memory is not free.
This is an optimization heuristic; and although it is possible to give a quantitative
definition of the loss of time or information, doing so is hard and impractical for our
purposes. Instead, we will follow another specification: given that intentions are
hierarchical plans [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], we believe that when retracting intentions we must retract
the ones with a lesser hierarchy; and given that reconsideration reduces the time
of revision, we believe we have to retract intentions on the basis of general
rules [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In what follows, we will specify the postulates for intention revision
considering these ideas.
      </p>
<p>Postulates for revision.
For intention revision, the first postulate requires closure:</p>
      <sec id="sec-4-1">
<title>Postulate 1 (∗1) For any formula φ and any intentional set K, K ∗ φ is an intentional set.</title>
        <p>The second postulate guarantees that the input sentence is accepted in the
revision:
Postulate 2 (∗2) φ ∈ K ∗ φ.</p>
<p>A revision process should occur when the input φ contradicts what is already in
K, that is, ¬φ ∈ K. However, in order to have the revision function defined for
all inputs, we can easily extend it to cover the case when ¬φ ∉ K. Thus, revision
is identified with expansion:
Postulate 3 (∗3) K ∗ φ ⊆ K + φ.</p>
<p>Postulate 4 (∗4) If ¬φ ∉ K, then K + φ ⊆ K ∗ φ.</p>
        <p>Postulate 5 (∗5) K ∗ φ = K⊥ if and only if ⊢ ¬φ.</p>
        <p>Postulate 6 (∗6) If ⊢ φ ↔ ψ, then K ∗ φ = K ∗ ψ.</p>
<p>The postulates (∗1) to (∗6) are the basic postulates for revision. The final two
conditions concern composite intention revisions. The idea is that, if K ∗ φ is a
revision of K and K ∗ φ is to be changed by a further formula ψ, such a change
should be made by expansions of K ∗ φ whenever possible. The minimal change
of K to include both φ and ψ, that is, K ∗ (φ ∧ ψ), ought to be the same as the
expansion of K ∗ φ by ψ, so long as ψ does not contradict the intentions in K ∗ φ:
Postulate 7 (∗7) K ∗ (φ ∧ ψ) ⊆ (K ∗ φ) + ψ.
Postulate 8 (∗8) If ¬ψ ∉ K ∗ φ, then (K ∗ φ) + ψ ⊆ K ∗ (φ ∧ ψ).</p>
<p>Postulates for contraction.
Since K − φ is formed from K by giving up some intentions, no new intentions
should be accepted. The first contraction postulates require closure and inclusion:
Postulate 9 (−1) For any formula φ and any intentional set K, K − φ is an
intentional set.
Postulate 10 (−2) K − φ ⊆ K.
When φ ∉ K, the optimization heuristic requires that nothing has to be
retracted:
Postulate 11 (−3) If φ ∉ K, then K − φ = K.</p>
<p>The formula to be contracted should not be a logical consequence of the
intentions in K − φ:
Postulate 12 (−4) If ⊬ φ, then φ ∉ K − φ.</p>
<p>From (−1) to (−4) it follows that if φ ∈ K, then (K − φ) + φ ⊆ K. In other
words, if we first retract φ and then add φ again to the resulting intentional set, no
intentions are accepted that were not accepted in the original intentional set.
The optimization heuristic demands that as many intentions as possible should
be kept in K − φ. So, we need recovery:
Postulate 13 (−5) If φ ∈ K, then K ⊆ (K − φ) + φ.</p>
<p>This postulate enables us to undo contractions, and although it is controversial,
we will assume it for the sake of this introduction. The sixth postulate is analogous
to (∗6):
Postulate 14 (−6) If ⊢ φ ↔ ψ, then K − φ = K − ψ.
These postulates are the basic set of postulates for intention contraction. Again,
two further postulates for contractions with respect to conjunctions will be
added. The motivations for these postulates are similar to (∗7) and (∗8).</p>
        <p>
Postulate 15 (−7) (K − φ) ∩ (K − ψ) ⊆ K − (φ ∧ ψ).
Postulate 16 (−8) If φ ∉ K − (φ ∧ ψ), then K − (φ ∧ ψ) ⊆ K − φ.
The postulates above are adaptations of [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The following are different in some
respects. The first postulate for reconsideration requires closure as well:
        </p>
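<p>The controversial status of recovery (postulate 13) can be illustrated with finite bases: if a contraction removes not only the formula but also its derivational support, expanding with the formula again does not restore that support. The rule format and the contraction policy here are assumptions of this sketch:</p>

```python
# Sketch: base contraction can violate recovery.  We contract by removing a
# formula together with any rule concluding it (an illustrative policy), then
# expand with the same formula again: the auxiliary rule is not recovered.

def expand(base, f):
    return base | {f}

def contract(base, f):
    # Remove f and any (premise, conclusion) rule whose conclusion is f.
    return {g for g in base
            if g != f and not (isinstance(g, tuple) and g[1] == f)}

k = {"free(c)", ("free(c)", "can_put(b, c)"), "can_put(b, c)"}
k_minus = contract(k, "can_put(b, c)")
recovered = expand(k_minus, "can_put(b, c)")

assert k_minus == {"free(c)"}       # the rule support went away with the formula
assert not k.issubset(recovered)    # recovery fails: K is not within (K - f) + f
```

For logically closed intentional sets the recovery postulate can be assumed, as the text does; the sketch only shows why it is debated once finite bases enter the picture.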
      </sec>
      <sec id="sec-4-2">
<title>Postulate 17 (⊛1) For any formula φ and any intentional set K, K ⊛ φ is an intentional set.</title>
        <sec id="sec-4-2-1">
          <title>Reconsideration leads to revision [2] or contraction:</title>
<p>Postulate 18 (⊛2) (K ⊛ φ = K ∗ φ) ∨ (K ⊛ φ = K − φ).</p>
          <p>The purpose of a reconsideration is to produce a new consistent intentional set:
Postulate 19 (⊛3) K ⊛ φ = K⊥ if and only if ⊢ ¬φ.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>We also require equivalence:</title>
<p>Postulate 20 (⊛4) If ⊢ φ ↔ ψ, then K ⊛ φ = K ⊛ ψ.</p>
<p>This set of postulates is the basic set of postulates for reconsideration. We also
have the following ideas: the reconsideration should be done by expansions
whenever possible, and the minimal change of K to include φ and ψ should be
the same as the expansion of K ⊛ φ by ψ.</p>
<p>Postulate 21 (⊛5) K ⊛ φ ⊆ K + φ.</p>
          <p>Postulate 22 (⊛6) If ¬φ ∉ K, then K + φ ⊆ K ⊛ φ.</p>
          <p>Postulate 23 (⊛7) K ⊛ (φ ∧ ψ) ⊆ (K ⊛ φ) + ψ.</p>
<p>We now display some results regarding intention revision, but first we require
some definitions: an intention φ is abandoned if and only if φ is retracted from K,
either by a contraction or a revision; and an intention φ is continued if and
only if φ ∈ K ⊛ φ.</p>
<p>The following results are straightforward.</p>
          <p>Proposition 1 The following statements hold:
- 1. If φ is reconsidered, then φ is abandoned or continued.
- 2. Inconsistency of a reconsideration results from the inconsistency of the
intentions.
- 3. Reconsidering a φ consistent with the current intentions does not remove
any intention.
- 4. Successful reconsideration produces an intentional set.</p>
          <p>Proof. To prove statement 1, assume φ is reconsidered. By (⊛2), it follows that
K ⊛ φ = K ∗ φ or K ⊛ φ = K − φ; in the second case φ is abandoned, and in the
first case φ is continued. Statement 2 follows from (⊛3). Statement 3 results from
the definition of a continued intention. Statement 4 follows from (⊛1).</p>
<p>At this point, we have presented some issues and problems of intention
revision by isolating intentions from other mental states. In the next proposition we
will try to relate intentions and beliefs through the reconsideration function. To
see the next results, recall that it is irrational for an agent to intend φ and believe
at the same time that it will not achieve φ: this is intention-belief inconsistency.
To avoid this inconsistency, an agent must abandon intentions that are impossible to
achieve. And recall that it is rational for an agent to intend φ and at the same
time not believe that it will achieve φ: this is intention-belief incompleteness. To
accomplish this property, an agent must continue its intentions.</p>
<p>The following representation theorems require some auxiliary definitions: we
say an agent believes φ, BEL φ, if φ ∈ K; and an agent has an intention to φ,
INTEND φ, if !φ ∈ K.</p>
<p>Proposition 2 The following statements hold:
- 1. Reconsideration implies intention-belief incompleteness.
- 2. Reconsideration avoids intention-belief inconsistency.</p>
<p>Proof. Assume K ⊛ φ. Furthermore, assume that INTEND φ is also given. We
have two options: the intention is possible or impossible to achieve.
For statement 1: if the intention is possible to achieve after reconsideration, then
φ ∈ K ⊛ φ, which means the agent can continue its intention without yet
believing that φ will be achieved.
For statement 2: if the intention is impossible to achieve, then the
reconsideration would be inconsistent, but an inconsistent reconsideration cannot
be the case, given the consistency postulate (⊛3).</p>
<p>Intuitively, this means that if an agent reconsiders, such an agent is closer to
rationality by following the intention-belief incompleteness property, because the
agent continues intentions that are possible to achieve. Also, the agent stays away
from irrationality by avoiding intention-belief inconsistency, because after
reconsideration the agent cannot have inconsistent reconsiderations, and so
has to drop intentions that are not possible to achieve.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
<p>Let us sum up some of the main ideas and results of this introduction:
- A) Agents can retract their intentions when such intentions present
problems.
- B) If an agent reconsiders an intention, such an intention is abandoned or
continued.
- C) If an agent reconsiders, such an agent is closer to rationality by following the
intention-belief incompleteness property and by avoiding intention-belief
inconsistency.</p>
<p>So, we have sketched some general and introductory guidelines for intention
revision by following the theories of intention and by considering that intentions
are not isolated and are related to planning. We are aware that these ideas lead to
more problems. Some of the open problems we do not want to fail to mention
are the following:
- How do we relate the topic of this introduction to a non-monotonic logic?
Since we can reconsider intentions, we have the possibility of relating our
proposal to a non-monotonic consequence relation. Recall from section 2,
for instance, that from one state we want to achieve another state through some
execution of intentions, formally a sequence of plans:</p>
      <p>
p1, . . . , pn
but eventually it happens that some plan pi fails, which leads to intention
revision. Future work requires the treatment of this situation.
- How do we relate the BDI components and temporal logic to the postulates
we have proposed? One of the problems we have presented is that,
although we provide an abstract definition of the revision functions, we do
not take into account the role of time within the reasoning process. Another
problem is that we have considered intentions in an isolated way; this is
necessary, since intentions are irreducible components of the
BDI architecture [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]; however, it is not sufficient. We have to relate the revision
functions to other mental states through bridge rules of the form
B1, . . . , Bn / p1, . . . , pn
that specify the change of states given certain beliefs (Bi) and intentions (pi).
But we also have to construct representation theorems, such as proposition
2, in order to relate different formalisms.
- What is the role of desires within this specification? The BDI architecture
also requires desires. We know intentions are the desires the agents have
committed to achieve. How does the change of desires affect intentional
change?
- Which programming language would be adequate to model our proposal?
Another problem is that our approach in this introduction is closer to
an abstract logical specification, but it is far from implementation. Future
work requires an integration of this proposal with an implementation.
      </p>
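<p>The first open problem above, plan failure triggering revision, can be sketched as follows (an illustrative Python fragment under assumed names; the paper proposes no concrete mechanism): an agent executes a sequence of plans and, on the first failure, hands the remaining plans to a revision function instead of blindly continuing.</p>

```python
# Illustrative sketch (assumed names; not from the paper): executing a
# sequence of plans p1, ..., pn; if some plan pi fails, execution stops
# and intention revision is triggered over the remaining plans, which is
# the non-monotonic step in the agent's reasoning.

def execute(plans, revise):
    """Run plans in order; on the first failure, hand the rest to revise."""
    for i, plan in enumerate(plans):
        if not plan():
            return revise(plans[i:])
    return "intention achieved"

plans = [lambda: True, lambda: False, lambda: True]
print(execute(plans, lambda rest: "revising %d plan(s)" % len(rest)))
# prints: revising 2 plan(s)
```

<p>Here the revision function is a mere placeholder; a serious treatment would decide, per the reconsideration function discussed above, whether the failed intention is abandoned or pursued through an alternative plan.</p>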
      <p>The introduction we have presented here does not claim to be
exhaustive. On the contrary, we believe that the issues and problems we have shown
are complex enough to be addressed in extensions of this work; but also,
we believe they are clear enough to open a research program on intention
revision.</p>
      <p>Acknowledgements. The author would like to thank the anonymous reviewers
and Dr. Axel Barcelo for their helpful comments and precise corrections. The
author is supported by the CONACyT scholarship 214783.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Alchourron</surname>
            ,
            <given-names>C. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gardenfors</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makinson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>On the logic of theory change: partial meet contraction and revision functions</article-title>
          .
          <source>Journal of Symbolic Logic</source>
          ,
          <volume>50</volume>
          ,
          <fpage>510</fpage>
          -
          <lpage>530</lpage>
          (
          <year>1985</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Bratman</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <source>Intention, Plans, and Practical Reason</source>
          . Harvard University Press, Cambridge (
          <year>1987</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Bratman</surname>
            ,
            <given-names>M. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Israel</surname>
            ,
            <given-names>D. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pollack</surname>
            ,
            <given-names>M. E.</given-names>
          </string-name>
          :
          <article-title>Plans and resource-bounded practical reasoning</article-title>
          .
          <source>Computational Intelligence</source>
          ,
          <volume>4</volume>
          ,
          <fpage>349</fpage>
          -
          <lpage>355</lpage>
          (
          <year>1988</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Cohen</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Levesque</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Intention is choice with commitment</article-title>
          .
          <source>Artificial Intelligence</source>
          <volume>42</volume>
          (
          <issue>3</issue>
          ),
          <fpage>213</fpage>
          -
          <lpage>261</lpage>
          (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dignum</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meyer</surname>
            ,
            <given-names>J.-J. Ch.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wieringa</surname>
            ,
            <given-names>R. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kuiper</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>A modal approach to intentions, commitments and obligations: Intention plus commitment yields obligation</article-title>
          . In M. A.
          <string-name>
            <surname>Brown</surname>
          </string-name>
          , J. Carmo (eds.),
          <source>Deontic logic, agency and normative systems</source>
          , pp.
          <fpage>80</fpage>
          -
          <lpage>97</lpage>
          , Springer-Verlag (
          <year>1996</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Fagin</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ullman</surname>
            ,
            <given-names>J. D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vardi</surname>
            ,
            <given-names>M. Y.</given-names>
          </string-name>
          :
          <article-title>On the semantics of updates in databases</article-title>
          .
          <source>Proceedings of Second ACM SIGACT-SIGMOD, Atlanta</source>
          ,
          <fpage>352</fpage>
          -
          <lpage>365</lpage>
          (
          <year>1983</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Gardenfors</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Makinson</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Revisions of knowledge systems using epistemic entrenchment</article-title>
          .
          <source>In Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge</source>
          , M. Vardi (ed.), Los Altos, CA: Morgan Kaufmann (
          <year>1988</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Guerra-Hernandez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Castro-Manzano</surname>
            ,
            <given-names>J.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>El-Fallah-Seghrouchni</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>CTLAgentSpeak(L): a Specification Language for Agent Programs</article-title>
          .
          <source>Journal of Algorithms in Cognition, Informatics and Logic</source>
          , (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Harper</surname>
            ,
            <given-names>W. L.</given-names>
          </string-name>
          :
          <article-title>Rational conceptual change</article-title>
          .
          <source>PSA 1976</source>
          , East Lansing, Mich.: Philosophy of Science Association (
          <year>1977</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>van der Hoek</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jamroga</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Towards a theory of intention revision</article-title>
          .
          <source>Synthese</source>
          , Springer-Verlag (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Konolige</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pollack</surname>
            ,
            <given-names>M. E.</given-names>
          </string-name>
          :
          <article-title>A representationalist theory of intentions</article-title>
          .
          <source>In Proceedings of International Joint Conference on Artificial Intelligence (IJCAI-93)</source>
          ,
          <fpage>390</fpage>
          -
          <lpage>395</lpage>
          , San Mateo: Morgan Kaufmann(
          <year>1993</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Levi</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>The Enterprise of Knowledge</article-title>
          . MIT Press, Cambridge, Massachusetts(
          <year>1980</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Rao</surname>
            ,
            <given-names>A.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Georgeff</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          :
          <article-title>Modelling Rational Agents within a BDI-Architecture</article-title>
          . In
          <string-name>
            <surname>Huhns</surname>
            ,
            <given-names>M.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          (eds.) Readings in Agents, pp.
          <fpage>317</fpage>
          -
          <lpage>328</lpage>
          . Morgan Kaufmann (
          <year>1998</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Russell</surname>
            ,
            <given-names>S. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Norvig</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Artificial Intelligence: A Modern Approach</article-title>
          . Prentice Hall, New Jersey, USA (
          <year>1995</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Singh</surname>
            ,
            <given-names>M.P.</given-names>
          </string-name>
          :
          <article-title>A critical examination of the Cohen-Levesque Theory of Intentions</article-title>
          .
          <source>In Proceedings of the European Conference on Artificial Intelligence</source>
          (
          <year>1992</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Wooldridge</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Introduction to Multiagent Systems</article-title>
          . John Wiley and Sons, Ltd. (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>