<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">An Introduction to Intention Revision: Issues and Problems</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">José</forename><forename type="middle">Martín</forename><surname>Castro-Manzano</surname></persName>
							<email>jmcmanzano@hotmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Instituto de Investigaciones Filosóficas</orgName>
								<orgName type="institution">Universidad Nacional Autónoma de México</orgName>
								<address>
									<addrLine>Circuito Mario de la Cueva s/n, Ciudad Universitaria</addrLine>
									<postCode>04510</postCode>
									<settlement>Coyoacán</settlement>
									<country key="MX">México</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">An Introduction to Intention Revision: Issues and Problems</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">B3ABA28185821F8F92738661D72138F5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:24+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Intention</term>
					<term>reconsideration</term>
					<term>BDI</term>
					<term>agents</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The change of beliefs on the basis of new information has been widely studied; however, the change of other mental states, and particularly of intentions, has received less attention. Although there are philosophical and formal theories about intentions, few of them consider the revision of intentions. We suggest introductory guidelines to define a research program for the revision of intentions, considering that: (i) intentions are intimately related to the beliefs and desires of agents immersed in a dynamic world; (ii) intentions are directly related to planning; and (iii) a reconsideration function is needed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Belief revision is a paradigmatic research program: it is a relatively new area of research that joins two disciplines: computer science and philosophy. Ever since programmers dealt with databases, they have faced the problem of updating their information. On the other hand, certain philosophers dealt with the change of information within epistemic structures. So we can identify, respectively, two important moments in the history of this research program: one in <ref type="bibr" target="#b5">[6]</ref>; the other in <ref type="bibr" target="#b8">[9]</ref> and in <ref type="bibr" target="#b11">[12]</ref>. A general theory can be found in <ref type="bibr" target="#b0">[1]</ref>. This last approach constitutes the core of any program of belief revision.</p><p>Thus, although the change of beliefs on the basis of new information has been studied with success during the last 25 years, the dynamic process of other mental states, and particularly of intentions, has received less attention <ref type="bibr" target="#b9">[10]</ref>. Certainly, there are philosophical and formal theories of intention <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>, <ref type="bibr" target="#b4">[5]</ref>, <ref type="bibr" target="#b10">[11]</ref>, <ref type="bibr" target="#b12">[13]</ref>, but few of them, if any, consider the possibility of the revision of intentions <ref type="bibr" target="#b9">[10]</ref>.</p><p>In this work we suggest some general and introductory guidelines in order to define a program for intention revision. 
We think this topic is important because (i) intentions are intimately related to the beliefs and desires of agents immersed in a dynamic world; (ii) intentions are directly related to planning; and (iii) a reconsideration function is needed.</p><p>The general background of this work assumes the theories of intention as represented by <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b2">[3]</ref>, <ref type="bibr" target="#b3">[4]</ref>; and the belief revision program as represented by <ref type="bibr" target="#b0">[1]</ref>.</p><p>The rest of the paper is organized as follows: in section 2 we explain what we mean by intention revision and describe some methodological problems. In section 3 we discuss some issues regarding the problem of representation. In section 4 we adapt and suggest some general postulates for the revision of intentions. Finally, in section 5 we discuss the ideas of this introduction and give some details about future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Intention revision</head><p>We can study intentions from two general perspectives. One is internal, e.g., what intentions are and how they behave; the other is external, regarding the problems intentions generate, e.g., how they relate to other mental states and how those relations can be modelled. We will follow this double approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Internal perspective</head><p>For our introduction we will need an approach based upon the BDI model of rational agency <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b12">[13]</ref>. This model receives its name from the use of Beliefs, Desires and Intentions in order to model the rationality of agents. Intuitively, beliefs correspond to the information the agent has about itself and its environment. Desires correspond to the motivational part of the agent, what the agent wants to see accomplished. Finally, intentions correspond to the deliberative part and consist of the desires the agent is committed to achieve.</p><p>Intentions, as an irreducible component of the BDI model <ref type="bibr" target="#b1">[2]</ref>, have certain features that, taken together, make them different from beliefs and desires:</p><p>-Pro-activity. Intentions are pro-active: they move the agent to achieve a goal <ref type="bibr" target="#b1">[2]</ref>. In this sense, intentions are conduct-controlling components. It is important to note, however, that intentions are not equal to desires. Both intentions and desires are pro-attitudes, but intentions imply commitment and consistency, while desires do not. -Inertia. Intentions also possess inertia, that is to say, once an intention has been adopted, it resists being abandoned. If the intention were adopted and immediately abandoned, we would have to say the intention was never taken; however, if the reason that generated the intention disappears, it is rational to abandon the intention <ref type="bibr" target="#b1">[2]</ref>. -Admissibility. Intentions also provide a filter of admissibility. Once an intention has been adopted, it constrains future practical reasoning: while the agent holds a particular intention, the agent will not consider contradictory options. 
Thus, intentions provide a filter <ref type="bibr" target="#b1">[2]</ref>, <ref type="bibr" target="#b3">[4]</ref>.</p><p>In this way, we can say that intentions require a notion of commitment (given the principle of pro-activity), a notion of consistency (given the admissibility criterion) and a notion of retractability (given the notion of inertia).</p><p>Plans, insofar as they are sets of actions, are intentions and in this sense share the same properties: they are conduct-controlling, they have inertia and they work as inputs for future practical reasoning <ref type="bibr" target="#b1">[2]</ref>. Moreover, plans have certain further features:</p><p>-Plans are partial. Plans are partial, and not complete, because they lack complete information about the state of the world, e.g., the environment is not accessible. -Plans are not static. Plans cannot be static structures because the environment of the agent is dynamic. -Plans are hierarchical. Plans contain means-ends reasoning that has to follow an ordered process.</p><p>But plans also require the following features:</p><p>-Internal consistency. Plans must be executable.</p><p>-Strong consistency. Plans must be consistent with the agent's beliefs.</p><p>-Means-ends coherence. The means-ends reasoning of the plan must be consistent with the global ends of the plan.</p><p>These last features lead us to consider another problem: intentions are not isolated mental states <ref type="bibr" target="#b9">[10]</ref>. Modifying intentions implies modifying beliefs, and sometimes modifying beliefs may modify intentions. In this sense, strong consistency shows us that beliefs and intentions maintain certain relationships: the asymmetry thesis. Bratman <ref type="bibr" target="#b1">[2]</ref> considers these relations as principles of rationality:</p><p>-Intention-belief inconsistency. It is irrational for an agent to intend φ and believe at the same time that it will not achieve φ. -Intention-belief incompleteness. 
It is rational for an agent to intend φ and at the same time not believe that it will achieve φ.</p><p>Thus, we can say that the notions of consistency and retractability are not exclusive to beliefs, and that the difficulty of intention revision lies in the relation between intentions and beliefs.</p></div>
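The two principles of the asymmetry thesis can be phrased as simple checks over an agent's belief and intention sets. The following sketch is our own illustration, not part of the original text; the `Agent` class and the `~` prefix for negation are assumptions made for this sketch only.

```python
# Minimal sketch of Bratman's asymmetry thesis as checks on a BDI agent.
# All names (Agent, the "~" negation prefix) are illustrative assumptions.

class Agent:
    def __init__(self, beliefs, intentions):
        self.beliefs = set(beliefs)        # what the agent believes
        self.intentions = set(intentions)  # what the agent intends

    def violates_intention_belief_inconsistency(self, phi):
        # Irrational: intending phi while believing ~phi (phi will not hold).
        return phi in self.intentions and ("~" + phi) in self.beliefs

    def exhibits_intention_belief_incompleteness(self, phi):
        # Rational: intending phi without yet believing phi will be achieved.
        return phi in self.intentions and phi not in self.beliefs

agent = Agent(beliefs={"free(c)"}, intentions={"put(b,c)"})
print(agent.violates_intention_belief_inconsistency("put(b,c)"))   # False
print(agent.exhibits_intention_belief_incompleteness("put(b,c)"))  # True
```

The second check returning True is the rational situation described above: the agent intends put(b,c) without yet believing it will be achieved.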
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">External perspective</head><p>Based on Bratman, Cohen and Levesque <ref type="bibr" target="#b3">[4]</ref> suggested seven ideas (or problems) that a theory of intentions must take into account:</p><p>-1. Intentions pose problems for agents, who need to determine ways of achieving them. -2. Intentions provide a filter for adopting other intentions, which must not conflict. -3. Agents track the success of their intentions, and are inclined to try again if their attempts fail. -4. Agents believe their intentions are possible.</p><p>-5. Agents do not believe they will not bring about their intentions. -6. Under certain circumstances, agents believe they will bring about their intentions. -7. Agents need not intend all the expected side effects of their intentions.</p><p>With these criteria, Cohen and Levesque construct a formal theory of intention based on the notion of persistent goal (according to them, an intention is a form of persistent goal <ref type="bibr" target="#b3">[4]</ref>). However, this theory does not deal with the dynamics of intentions <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b14">[15]</ref>. The dynamics of intentions should deal with the problem of how an agent adopts and abandons intentions and what changes these processes produce in other BDI components. The dynamics of intentions requires a theory of intention revision, in the same way changes in beliefs require a theory of belief revision. So, we modestly add the following postulate to the criteria of Cohen and Levesque:</p><p>-8. Agents can retract their intentions when such intentions present problems.</p><p>Broadly speaking, this idea constitutes the core of intention revision.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">What is intention revision? An example</head><p>Let us see, by way of an example, what intention revision is. Assume our agent is immersed in an environment that is inaccessible, non-deterministic, episodic, discrete and dynamic <ref type="bibr" target="#b13">[14]</ref>. Furthermore, suppose that the agent has certain beliefs and intentions (state α) and that, eventually, it desires to achieve a certain state of the world (state β) -we represent this situation with the black arrow in figure <ref type="figure" target="#fig_0">1</ref>. In this way, the agent generates an intention of the form put(B, C). Now, given the properties of the environment, let us suppose the agent perceives the state γ -which is denoted by the red arrow-where it is not the case that free(C). Therefore, the intention will fail, the set of intentions will become inconsistent and the goals of the agent will not be achieved.</p><p>Let us see this situation in a more precise way. Suppose that we have an agent with a database of intentions -and beliefs-that includes the following data:</p><formula xml:id="formula_0">-p 1 !put(x, y). -p 2 +!put(x, y) : −free(x). -p 3 +!put(x, y) : −free(y). -p 4 +!put(x, y) : −!move(x).</formula><p>where !φ stands for an intention formula and +φ for the addition of a formula. If the database is equipped with some inference engine, the next formula is required to accomplish the intention:</p><p>p 5 free(x). Now, suppose that it is the case that x is not free. This means that we have to add the negation of p 5 to the database. But then the set of intentions becomes inconsistent in an intuitive sense. If we want to keep the database consistent, which is a sound methodology, we need to revise the database. This implies that some of the intentions may have to be retracted; however, we do not need to revise the whole set of intentions, for that would be an unnecessary loss of time and information. 
Thus, we have to choose which formulas -i.e., intentions-to retract.</p><p>The problem of intention revision is thus twofold: first, because intentions are intimately related to other mental states (like beliefs and desires); and second, because logic by itself is not sufficient to determine which intentions should be retracted. These problems lead us to take into account that the change of intentions is associated with changes in beliefs, and that we require extra-logical concepts to deal with these changes.</p><p>To complicate the setting even more, beliefs and intentions have certain logical consequences: when retracting intentions we have to choose which consequences (beliefs or intentions) we have to retract.</p><p>But to maintain consistency and the maximum number of accomplished intentions, should we revise all the intentions? The answer is no, because, as we will see, the costs in time and memory would be huge.</p></div>
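The failing put(B, C) example can be rendered as a toy database of intentions paired with the beliefs they presuppose. The data structures below (goal, precondition-set pairs) are an illustrative assumption of ours, not the paper's formalism.

```python
# Toy rendering of the example: an intention fails once one of its
# preconditions is no longer believed. Structures are assumptions.

beliefs = {"free(b)", "free(c)"}
intentions = [("put(b,c)", {"free(b)", "free(c)"})]  # (goal, preconditions)

def failed_intentions(beliefs, intentions):
    # An intention fails when some precondition is no longer believed.
    return [goal for goal, pre in intentions if not pre <= beliefs]

print(failed_intentions(beliefs, intentions))  # []

# The agent perceives state gamma: C is no longer free.
beliefs.discard("free(c)")
beliefs.add("~free(c)")
print(failed_intentions(beliefs, intentions))  # ['put(b,c)']
```

After the percept, the database holds both an intention that presupposes free(c) and the belief ~free(c): the intuitive inconsistency that calls for revision.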
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Some methodological problems with intention revision</head><p>When dealing with intention revision some methodological problems appear: one related to representation, one related to inference, and one related to a selection function.</p><p>-The problem of representation. How should intentions be represented? Most databases work with facts and rules of some kind. The language used to represent intentions -together with beliefs and desires-may be related to some logical formalism (for instance, first order logic). This problem is, therefore, twofold: what language should we use to represent our data? And is this language adequate to relate the BDI components within a context of revision? -The problem of the consequences. What is the relation between the elements represented as facts and the elements that are inferred? This relation is sensitive to the database. In some cases the elements that have been inferred have some special status in comparison with the facts; however, only with an adequate representation will we be able to distinguish these differences.</p><p>-The problem of the selection function. How should we choose which elements to retract? Logic by itself is not sufficient to decide which intentions should be maintained and which should be retracted. We need a heuristic to determine this selection. One idea is that the loss of information should be minimal, for instance, by way of an ordering <ref type="bibr" target="#b6">[7]</ref>.</p></div>
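The last idea, a selection guided by an ordering, can be sketched as follows. The rank values are invented for illustration and the paper leaves the heuristic abstract; we assume an entrenchment-style ranking in which subplans sit lower than the plans they serve.

```python
# Sketch of a selection function based on an assumed ranking of intentions:
# retract only the lowest-ranked conflicting intentions, so the loss of
# information is minimal. Ranks are illustrative, not from the paper.

def select_retractions(conflicting, rank):
    # Among conflicting intentions, pick the lowest-ranked ones to retract.
    lowest = min(rank[i] for i in conflicting)
    return {i for i in conflicting if rank[i] == lowest}

# Subplans sit lower in the hierarchy, so they are retracted first.
rank = {"put(b,c)": 1, "move(b)": 0}
print(select_retractions({"put(b,c)", "move(b)"}, rank))  # {'move(b)'}
```

The ordering does the extra-logical work the text calls for: logic alone cannot distinguish which of the two conflicting intentions to give up.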
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Models to represent intentional states</head><p>We will use a propositional model, considering that the elements of the intentional system are propositional formulas. Of course, even with this representation we can have several alternatives. First, we have to pick an appropriate language (for instance, databases may be represented in a Prolog style). In this introduction we will work with a first order language. We assume that the language L is closed under the operators ¬, ∧, ∨, ⇒, evaluated in a boolean way. We use φ, ψ, ... as propositional variables in L. The language L accepts not only what is explicitly represented in the database, but also its consequences. Thus, another factor we have to determine is: which logical system should govern the set of intentions? In practice, the answer to this question depends on what mechanism of inference is coupled with the database; however, for this theoretical analysis, we will proceed by declaring the general revision functions. So, for this introduction, we will use classical propositional logic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Sets of intentional states</head><p>The easiest way to represent an intentional state is by using well-formed formulas (wff) of L. According to this, we can define a set of intentional states (intentional set, from now on) through a set Σ of wff of L that satisfies the axiom of generalized reflexivity (C): if Σ ⊢ φ then φ ∈ Σ. The condition C assures us that Σ is closed under logical consequence. By the properties of classical logic, whenever Σ is inconsistent, then for all φ, Σ ⊢ φ. We will denote this with Σ ⊥ . This means that there is exactly one inconsistent intentional set.</p><p>There is a very close correspondence between intentional sets and possible worlds models. For any set W Σ of possible worlds we can define a corresponding intentional set Σ as the set of those sentences that are true in all worlds in W Σ . From a computational point of view, however, intentional sets are much more tractable than possible worlds models.</p></div>
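The closure condition (C) and the collapse into Σ ⊥ can be simulated with a brute-force stand-in for the consequence relation. Representing formulas as Python boolean expressions is an assumption of this sketch; the paper leaves the inference engine abstract.

```python
# Brute-force consequence check: Sigma |- phi iff every valuation that
# satisfies the whole base also satisfies phi. Formulas are encoded as
# Python boolean expressions over atom names (a sketch assumption).

from itertools import product

def entails(base, phi, atoms):
    for vals in product([False, True], repeat=len(atoms)):
        env = dict(zip(atoms, vals))
        if all(eval(b, {}, env) for b in base) and not eval(phi, {}, env):
            return False
    return True

atoms = ["p", "q"]
print(entails(["p", "p and q"], "q", atoms))  # True: q is in Cn of the base
print(entails(["p", "not p"], "q", atoms))    # True: an inconsistent set entails everything
```

The second call illustrates Σ ⊥ : once the base is unsatisfiable, every formula is vacuously entailed, so only one inconsistent intentional set exists.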
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Intentional bases</head><p>Nevertheless, we have to consider that some intentions are not basic, but inferred. It is not possible to express this distinction through intentional sets, for the set-theoretic representation does not provide markers or flags to indicate which intentions are basic and which are inferred. Moreover, it seems that when we make intentional changes we do not change the whole set of intentions, but a finite subset of it. Formally, this idea can be represented by letting B Σ be a base for an intentional set Σ if and only if B Σ is a finite subset of Σ and Cn(B Σ ) = Σ. Then, we introduce the functions for intention revision on bases of intentions (intentional bases from now on). The distinction between intentional set and intentional base allows us to generate and distinguish different structures, e.g., assume two intentional bases B Σ and B′ Σ such that Cn(B Σ ) = Cn(B′ Σ ) but B Σ ≠ B′ Σ . If we want to implement intentional revision systems, intentional bases are easier to handle than intentional sets.</p></div>
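The distinction can be seen concretely: two syntactically different bases may generate the same intentional set. Below, Cn is approximated by a brute-force valuation check restricted to literals, which is a simplification made only for this sketch.

```python
# Two distinct finite bases with the same closure. Cn is approximated by a
# brute-force consequence check restricted to literals (sketch assumption).

from itertools import product

def closure(base, atoms):
    # Cn(base), restricted to the literals over the given atoms.
    def entails(phi):
        return all(eval(phi, {}, dict(zip(atoms, v)))
                   for v in product([False, True], repeat=len(atoms))
                   if all(eval(b, {}, dict(zip(atoms, v))) for b in base))
    lits = atoms + ["not " + a for a in atoms]
    return {l for l in lits if entails(l)}

atoms = ["p", "q"]
b1 = ["p", "q"]
b2 = ["p and q"]
print(closure(b1, atoms) == closure(b2, atoms))  # True: same intentional set
print(b1 == b2)                                  # False: different bases
```

This is exactly the situation Cn(B Σ ) = Cn(B′ Σ ) with B Σ ≠ B′ Σ : revision systems that operate on bases can treat b1 and b2 differently even though they determine the same intentional set.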
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Postulates for intention revision</head><p>When dealing with intention revision there are two main strategies that may be followed: to present in an explicit manner the construction of the process of revision, or to formulate the general conditions such constructions must satisfy. The first approach consists in developing algorithms that compute the functions; the second consists in describing the postulates that define the functions, in order to further develop the algorithms.</p><p>In this introduction we will follow the second approach. The formulations of the postulates will be given through a series of ideas and conditions. The heuristic behind them is similar to that of belief revision: intentional changes should provide (i) a maximum of preservation of information (i.e., a minimum change in the intentions) and (ii) consistency.</p><p>Intention revision should occur when a new piece of information that is inconsistent with the database is added to the system in such a way that the resulting set is inconsistent. But this change is not the only one that may occur. Depending on how intentions are represented and what intentions are accepted, different intentional changes are possible. We can distinguish four intentional changes, three of them similar to belief changes:</p><p>-Expansion. A new formula φ is added to Σ together with the logical consequences of the addition. The system that results from expanding Σ by a sentence φ will be denoted as Σ ⊕ φ.</p><p>-Revision. A new formula φ that is inconsistent with Σ is added, but in order to maintain consistency in the resulting system, some of the old formulas in Σ have to be deleted. This is denoted by Σ ∗ φ. -Contraction. Some formula φ in Σ is retracted without adding any new facts. In order to maintain the system closed under logical consequence, some members of Σ must be deleted. This will be denoted by Σ ⊖ φ. -Reconsideration. 
A new formula φ is added to Σ, but eventually such a formula has to be contracted or revised. This is denoted by Σ ⊗ φ.</p><p>Expansions are closed under logical consequence (i.e., the expansion of the intentional set with a new formula is Σ ⊕ φ = {ψ | Σ ∪ {φ} ⊢ ψ}); however, it is not possible to give a similar characterization of the other changes. The problem of revision, contraction and reconsideration has its roots in the lack of purely logical reasons to accomplish these processes. Thus, we can have different ways to research, specify and verify them.</p><p>For the time being, we will assume that the intentional sets model intentional bases. In what follows we will formulate some postulates for intention revision. The motivation behind these postulates (adapted from <ref type="bibr" target="#b0">[1]</ref>) is that when we modify our intentions we have to keep the change of intentions to a minimum and we have to maintain consistency. For an agent, obtaining information implies costs, and the environment in which it is immersed is dynamic; for these reasons the unnecessary loss of information and time has to be avoided. On the other hand, we also require compromise, for memory space is not free. This is an optimization heuristic; and although it is possible to give a quantitative definition of the loss of time or information, doing so is hard and impractical for our purposes. Instead, we will follow another specification: given that intentions are hierarchical plans <ref type="bibr" target="#b1">[2]</ref>, we believe that when retracting intentions we must retract the ones with a lower hierarchy; and given that reconsideration reduces the time of revision, we believe we have to retract intentions on the basis of general rules <ref type="bibr" target="#b7">[8]</ref>. In what follows, we will specify the postulates for intention revision considering these ideas.</p></div>
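The four intentional changes can be sketched over a finite base of literals. Working with bases rather than deductively closed sets, and marking negation with a `~` prefix, are simplifying assumptions of this illustration.

```python
# The four intentional changes on a finite base of literals (a sketch;
# real intentional sets are deductively closed).

def neg(phi):
    # Negation by a "~" prefix: an encoding assumption for this sketch.
    return phi[1:] if phi.startswith("~") else "~" + phi

def expand(sigma, phi):      # Sigma (+) phi: add the formula
    return sigma | {phi}

def contract(sigma, phi):    # Sigma (-) phi: retract the formula
    return sigma - {phi}

def revise(sigma, phi):      # Sigma (*) phi: retract ~phi, then add phi
    return expand(contract(sigma, neg(phi)), phi)

def reconsider(sigma, phi, still_justified):  # Sigma (x) phi
    # Add phi, but retract it again if its justification is gone.
    s = revise(sigma, phi)
    return s if still_justified(phi) else contract(s, phi)

sigma = {"put(b,c)", "free(c)"}
print(sorted(revise(sigma, "~free(c)")))  # ['put(b,c)', '~free(c)']
```

Revising by ~free(c) removes the contradicting belief free(c) before accepting the input, which is the minimal-change behaviour the postulates below make precise.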
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Postulates for revision</head><p>For intention revision, the first postulate requires closure: Postulate 1 (∗1) For any formula φ and any intentional set Σ, Σ ∗ φ is an intentional set.</p><p>The second postulate guarantees that the input sentence is accepted in the revision:</p><formula xml:id="formula_1">Postulate 2 (∗2) φ ∈ Σ ∗ φ.</formula><p>A revision process should occur when the input φ contradicts what is already in Σ, that is, ¬φ ∈ Σ. However, in order to have the revision function defined for all inputs, we can easily extend it to cover the case when ¬φ ∉ Σ. In that case, revision is identified with expansion:</p><formula xml:id="formula_2">Postulate 3 (∗3) Σ ∗ φ ⊆ Σ ⊕ φ. Postulate 4 (∗4) If ¬φ ∉ Σ, then Σ ⊕ φ ⊆ Σ ∗ φ.</formula><p>The purpose of a revision is to produce a new consistent intentional set. Thus Σ ∗ φ should be consistent, unless φ is logically impossible:</p><p>Postulate 5 (∗5) Σ ∗ φ = Σ ⊥ if and only if ⊢ ¬φ.</p><p>We also require equivalence:</p><formula xml:id="formula_3">Postulate 6 (∗6) If ⊢ φ ⇔ ψ, then Σ ∗ φ = Σ ∗ ψ.</formula><p>The postulates (∗1) to (∗6) are the basic postulates for revision. The final two conditions concern composite intention revisions. The idea is that, if Σ ∗ φ is a revision of Σ and Σ ∗ φ is to be changed by a further formula ψ, such a change should be made by expansions of Σ ∗ φ whenever possible. The minimal change of Σ to include both φ and ψ, that is, Σ ∗ (φ ∧ ψ), ought to be the same as the expansion of Σ ∗ φ by ψ, so long as ψ does not contradict the intentions in Σ ∗ φ:</p><formula xml:id="formula_4">Postulate 7 (∗7) Σ ∗ (φ ∧ ψ) ⊆ (Σ ∗ φ) ⊕ ψ. Postulate 8 (∗8) If ¬ψ ∉ Σ ∗ φ, then (Σ ∗ φ) ⊕ ψ ⊆ Σ ∗ (φ ∧ ψ). When ¬ψ ∈ Σ ∗ φ, then (Σ ∗ φ) ⊕ ψ is Σ ⊥ .</formula></div>
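Two of these postulates, success (∗2) and consistency (∗5), can be checked mechanically against a naive revision operator. The operator below works over finite literal bases and is an assumed toy model of ours, not the abstract function the postulates define.

```python
# Checking (*2) success and (*5) consistency for a naive revision over
# literal bases; the operator is a toy stand-in, not the paper's function.

def neg(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

def revise(sigma, phi):          # Sigma (*) phi over a literal base
    return (sigma - {neg(phi)}) | {phi}

def consistent(sigma):
    # A literal base is consistent if it contains no complementary pair.
    return all(neg(phi) not in sigma for phi in sigma)

sigma = {"free(c)", "put(b,c)"}
r = revise(sigma, "~free(c)")
print("~free(c)" in r)   # (*2): the input is accepted -> True
print(consistent(r))     # (*5): the result is consistent -> True
```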
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Postulates for contraction</head><p>We also need closure:</p><p>Postulate 9 (⊖1) For any formula φ and any intentional set Σ, Σ ⊖ φ is an intentional set.</p><p>Because Σ ⊖ φ is formed from Σ by giving up some intentions, no new intentions should appear:</p><formula xml:id="formula_5">Postulate 10 (⊖2) Σ ⊖ φ ⊆ Σ.</formula><p>When φ ∉ Σ, the optimization heuristic requires that nothing be retracted:</p><formula xml:id="formula_6">Postulate 11 (⊖3) If φ ∉ Σ, then Σ ⊖ φ = Σ.</formula><p>The formula to be contracted should not be a logical consequence of the intentions in Σ ⊖ φ:</p><formula xml:id="formula_7">Postulate 12 (⊖4) If ⊬ φ, then φ ∉ Σ ⊖ φ.</formula><p>From (⊖1) to (⊖4) it follows that if φ ∈ Σ, then (Σ ⊖ φ) ⊕ φ ⊆ Σ. In other words, if we first retract φ and then add φ again to the resulting intentional set, no intentions are accepted that were not accepted in the original intentional set.</p><p>The optimization heuristic demands that as many intentions as possible be kept in Σ ⊖ φ. So, we need recovery:</p><formula xml:id="formula_8">Postulate 13 (⊖5) If φ ∈ Σ, then Σ ⊆ (Σ ⊖ φ) ⊕ φ.</formula><p>This postulate enables us to undo contractions, and although it is controversial, we will assume it for the sake of this introduction. The sixth postulate is analogous to (∗6):</p><formula xml:id="formula_9">Postulate 14 (⊖6) If ⊢ φ ⇔ ψ, then Σ ⊖ φ = Σ ⊖ ψ.</formula><p>These postulates are the basic set of postulates for intention contraction. Again, two further postulates for contractions with respect to conjunctions will be added. The motivations for these postulates are similar to those of (∗7) and (∗8).</p><formula xml:id="formula_10">Postulate 15 (⊖7) (Σ ⊖ φ) ∩ (Σ ⊖ ψ) ⊆ Σ ⊖ (φ ∧ ψ). Postulate 16 (⊖8) If φ ∉ Σ ⊖ (φ ∧ ψ), then Σ ⊖ (φ ∧ ψ) ⊆ Σ ⊖ ψ.</formula></div>
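The recovery postulate (⊖5) can be observed on the same toy model: retracting φ and then expanding by φ again loses nothing. For deductively closed sets recovery is controversial, as noted above; for this naive base operator (our own sketch) it holds trivially.

```python
# Checking recovery, postulate (-5), for a naive contraction over literal
# bases: Sigma is a subset of (Sigma (-) phi) (+) phi whenever phi is in
# Sigma. The operators are sketch assumptions, not the paper's functions.

def contract(sigma, phi):   # Sigma (-) phi
    return sigma - {phi}

def expand(sigma, phi):     # Sigma (+) phi
    return sigma | {phi}

sigma = {"put(b,c)", "free(c)"}
restored = expand(contract(sigma, "free(c)"), "free(c)")
print(sigma <= restored)    # recovery holds: True
```

For closed sets the controversy arises because contracting φ may force retracting formulas that imply φ, and re-adding φ need not bring those back; the flat base hides this effect.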
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Postulates for reconsideration</head><p>The postulates above are adaptations of <ref type="bibr" target="#b0">[1]</ref>. The following are different in some respects. The first postulate requires closure as well:</p><p>Postulate 17 (⊗1) For any formula φ and any intentional set Σ, Σ ⊗ φ is an intentional set.</p><p>Reconsideration leads to revision <ref type="bibr" target="#b1">[2]</ref> or contraction:</p><formula xml:id="formula_11">Postulate 18 (⊗2) (Σ ⊗ φ ⊆ Σ ∗ φ) ∨ (Σ ⊗ φ ⊆ Σ ⊖ φ).</formula><p>The purpose of a reconsideration is to produce a new consistent intentional set:</p><p>Postulate 19 (⊗3) Σ ⊗ φ = Σ ⊥ if and only if ⊢ ¬φ.</p><p>We also require equivalence:</p><formula xml:id="formula_12">Postulate 20 (⊗4) If ⊢ φ ⇔ ψ, then Σ ⊗ φ = Σ ⊗ ψ.</formula><p>This is the basic set of postulates for reconsideration. We also have the following ideas. The reconsideration Σ ⊗ φ should be done by expansions whenever possible. And the minimal change of Σ to include φ and ψ should be the same as the expansion of Σ ⊗ φ by ψ.</p><p>Postulate 21</p><formula xml:id="formula_13">(⊗5) Σ ⊗ (φ ∧ ψ) ⊆ (Σ ⊗ φ) ⊕ ψ. Postulate 22 (⊗6) If ¬ψ ∉ Σ ⊗ φ, then (Σ ⊗ φ) ⊕ ψ ⊆ Σ ⊗ (φ ∧ ψ). Postulate 23 (⊗7) (Σ ⊗ φ) ∩ (Σ ⊗ ψ) ⊆ Σ ⊗ (φ ∧ ψ).</formula><p>We now display some results regarding intention revision, but first we require some definitions: an intention φ is abandoned if and only if φ is retracted from Σ either by a contraction or a revision; and an intention φ is continued if and only if φ ∈ (Σ ⊗ φ) ⊕ φ.</p><p>The following results are straightforward. If an intention is impossible to achieve, the reconsideration would be inconsistent; but an inconsistent reconsideration cannot be the case given ⊗5, so such an intention must be abandoned.</p><p>Intuitively, this means that if an agent reconsiders, such an agent is closer to rationality by following the intention-belief incompleteness property, because the agent continues intentions that are possible to achieve. 
And the agent moves away from irrationality by avoiding intention-belief inconsistency, because after reconsideration the agent cannot have an inconsistent reconsideration, and so the agent has to drop intentions that are impossible to achieve.</p></div>
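Postulate (⊗2), that a reconsideration is contained in the corresponding revision or contraction, can also be checked mechanically on the toy operators over literal bases. The boolean `justified` flag is an assumption of ours, standing in for whatever condition triggers the retraction.

```python
# Checking postulate (x2) -- a reconsideration yields a subset of the
# revision or of the contraction -- for toy operators over literal bases.

def neg(phi):
    return phi[1:] if phi.startswith("~") else "~" + phi

def revise(sigma, phi):
    return (sigma - {neg(phi)}) | {phi}

def contract(sigma, phi):
    return sigma - {phi}

def reconsider(sigma, phi, justified):
    # Revise by phi; if the justification is gone, retract phi again.
    s = revise(sigma, phi)
    return s if justified else contract(s, phi)

sigma = {"free(c)", "put(b,c)"}
for justified in (True, False):
    s = reconsider(sigma, "~free(c)", justified)
    print(s <= revise(sigma, "~free(c)") or s <= contract(sigma, "~free(c)"))
    # prints True in both cases: (x2) holds for this toy model
```

The two branches mirror the definitions above: a continued intention survives as in a revision, an abandoned one disappears as in a contraction.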
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusions</head><p>Let us sum up some of the main ideas and results of this introduction:</p><p>-A) Agents can retract their intentions when such intentions present problems. -B) If an agent reconsiders an intention, such an intention is abandoned or continued. -C) If an agent reconsiders, such an agent is closer to rationality by following the intention-belief incompleteness property and by avoiding intention-belief inconsistency.</p><p>So, we have sketched some general and introductory guidelines for intention revision by following the theories of intention and by considering that intentions are not isolated and are related to planning. We are aware these ideas lead to more problems. Some of the open problems we do not want to fail to mention are the following:</p><p>-How do we relate the topic of this introduction to a non-monotonic logic? Since we can reconsider intentions, we have the possibility of relating our proposal to a non-monotonic consequence relation. Recall from section 2, for instance, that from a state α we want to achieve a state β through some execution of intentions, formally: α : p 1 , . . . , p n ⊢ β; but eventually it happens that some plan p i fails, which leads to intention revision. Future work requires the treatment of this situation. -How do we relate the BDI components and temporal logic to the postulates we have proposed? One of the problems is that, although we provide an abstract definition of the revision functions, we do not take into account the role of time within the reasoning process. Another problem is that we have considered intentions in an isolated way; this is necessary, since intentions are irreducible components of the BDI architecture <ref type="bibr" target="#b1">[2]</ref>; however, it is not sufficient. 
We have to relate the functions to other mental states through bridge rules of the form B 1 , . . . , B n / (α : p 1 , . . . , p n ⊢ β) that specify the change of states given certain beliefs (B i ) and intentions. But we also have to construct representation theorems, such as Proposition 2, in order to relate different formalisms. -What is the role of desires within this specification? The BDI architecture also requires desires. We know intentions are the desires the agents have committed to achieve. How does the change of desires affect the intentional changes? -Which programming language would be adequate to model our proposal? Another problem is that our approach in this introduction is close to an abstract logical specification, but far from implementation. Future work requires an integration of this proposal with an implementation.</p><p>The introduction we have presented here does not claim to be exhaustive. On the contrary, we believe that the issues and problems we have shown are complex enough to be solved within extensions of this work; but also, we believe they are clear enough to open a research program on intention revision.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. States of the agent and the environment</figDesc><graphic coords="4,180.12,457.49,255.13,183.76" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgements. The author would like to thank the anonymous reviewers and Dr. Axel Barceló for their helpful comments and precise corrections. The author is supported by the CONACyT scholarship 214783.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proposition 1</head><p>The following statements hold:</p><p>-1. If φ is reconsidered, then φ is abandoned or continued.</p><p>-2. Inconsistency of reconsideration results from the inconsistency of intentions.</p><p>Reconsidering a Σ that is consistent with the current intentions does not remove any intention. At this point, we have presented some issues and problems of intention revision by isolating intentions from other mental states. In the next proposition we will try to relate intentions and beliefs through the reconsideration function. To see the next results, recall that it is irrational for an agent to intend φ and believe at the same time that it will not achieve φ: this is intention-belief inconsistency. To avoid this inconsistency, an agent must abandon intentions that are impossible to achieve. And recall that it is rational for an agent to intend φ and at the same time not believe that it will achieve ¬φ: this is intention-belief incompleteness. To accomplish this property, an agent must continue its intentions.</p><p>The following representation theorems require some auxiliary definitions: we say an agent believes φ, BELφ, if φ ∈ Σ; and an agent has an intention to φ,</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Proposition 2</head><p>The following statements hold:</p><p>-1. Reconsideration implies intention-belief incompleteness.</p><p>Proof. Assume Σ ⊗ φ. Furthermore, assume that INTENDφ is also given. We have two options: the intention is possible or impossible to achieve. For statement 1: if the intention is possible to achieve after reconsideration, then φ ∈ (Σ ⊗ φ) ⊕ φ, which means the agent can continue its intention. Thus, φ ∈ Σ. For statement 2: if the intention is impossible to achieve, then Σ is inconsistent, which means</p></div>			</div>
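Proposition 2.1 (reconsideration implies intention-belief incompleteness) can be illustrated with a small executable check. This is a toy model under assumed semantics, not the paper's proof: the names believes and reconsidered_intentions are hypothetical, and only positive literals "p" with negation "~p" are handled.

```python
# Toy check of intention-belief incompleteness after reconsideration
# (illustrative semantics only, not the paper's formal proof).

def believes(beliefs, phi):
    """BELphi holds when phi is in the agent's belief base Sigma."""
    return phi in beliefs

def reconsidered_intentions(intentions, beliefs, phi):
    """After reconsideration, phi is continued only if the agent does
    not believe its negation; otherwise phi is abandoned."""
    impossible = believes(beliefs, "~" + phi)
    return intentions - {phi} if impossible else intentions | {phi}

# Whenever phi survives reconsideration, the agent does not believe
# ~phi: intention-belief incompleteness holds in every case below.
for beliefs in ({"~p"}, {"q"}, set()):
    result = reconsidered_intentions({"p"}, beliefs, "p")
    if "p" in result:
        assert not believes(beliefs, "~p")
```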
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">On the logic of theory change: partial meet contraction and revision functions</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Alchourrón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Gärdenfors</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Makinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Symbolic Logic</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page" from="510" to="530" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Intention, Plans, and Practical Reason</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bratman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1987">1987</date>
			<publisher>Harvard University Press</publisher>
			<pubPlace>Cambridge</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Plans and resource-bounded practical reasoning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Bratman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Israel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Pollack</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="349" to="355" />
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Intention is choice with commitment</title>
		<author>
			<persName><forename type="first">P</forename><surname>Cohen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Levesque</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="213" to="261" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A modal approach to intentions, commitments and obligations: Intention plus commitment yields obligation</title>
		<author>
			<persName><forename type="first">F</forename><surname>Dignum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-J</forename><forename type="middle">Ch</forename><surname>Meyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Wieringa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kuiper</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Deontic logic, agency and normative systems</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Brown</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Carmo</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1996">1996</date>
			<biblScope unit="page" from="80" to="97" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">On the semantics of updates in databases</title>
		<author>
			<persName><forename type="first">R</forename><surname>Fagin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ullman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Y</forename><surname>Vardi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second ACM SIGACT-SIGMOD Symposium on Principles of Database Systems</title>
				<meeting>Second ACM SIGACT-SIGMOD<address><addrLine>Atlanta</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1983">1983</date>
			<biblScope unit="page" from="352" to="365" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Revisions of knowledge systems using epistemic entrenchment</title>
		<author>
			<persName><forename type="first">P</forename><surname>Gärdenfors</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Makinson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second Conference on Theoretical Aspects of Reasoning about Knowledge</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Vardi</surname></persName>
		</editor>
		<meeting>the Second Conference on Theoretical Aspects of Reasoning about Knowledge<address><addrLine>Los Altos, CA</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1988">1988</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">CTL AgentSpeak(L): a Specification Language for Agent Programs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Guerra-Hernández</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Castro-Manzano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>El-Fallah-Seghrouchni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Algorithms: Cognition, Informatics and Logic</title>
		<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Rational conceptual change</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Harper</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">PSA 1976</title>
				<meeting><address><addrLine>East Lansing, Mich</addrLine></address></meeting>
		<imprint>
			<publisher>Philosophy of Science Association</publisher>
			<date type="published" when="1977">1977</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Towards a theory of intention revision</title>
		<author>
			<persName><forename type="first">W</forename><surname>van der Hoek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Jamroga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Springer-Verlag</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A representationalist theory of intentions</title>
		<author>
			<persName><forename type="first">K</forename><surname>Konolige</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Pollack</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of International Joint Conference on Artificial Intelligence (IJCAI-93)</title>
				<meeting>International Joint Conference on Artificial Intelligence (IJCAI-93)<address><addrLine>San Mateo</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1993">1993</date>
			<biblScope unit="page" from="390" to="395" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">The Enterprise of Knowledge</title>
		<author>
			<persName><forename type="first">I</forename><surname>Levi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1980">1980</date>
			<publisher>MIT Press</publisher>
			<pubPlace>Cambridge, Massachusetts</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Modelling Rational Agents within a BDI-Architecture</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Rao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Georgeff</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Readings in Agents</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Huhns</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Singh</surname></persName>
		</editor>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="317" to="328" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Norvig</surname></persName>
		</author>
		<title level="m">Artificial Intelligence: A Modern Approach</title>
				<meeting><address><addrLine>New Jersey, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Prentice Hall</publisher>
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A critical examination of the Cohen-Levesque Theory of Intentions</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the European Conference on Artificial Intelligence</title>
				<meeting>the European Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">An Introduction to MultiAgent Systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001</date>
			<publisher>John Wiley and Sons, Ltd</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
