<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Reasoning about Goals in BDI Agents: the PRACTIONIST Framework</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>DINFO-University of Palermo</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>ICAR-Italian National Research Council</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>R&amp;D Laboratory - ENGINEERING Ingegneria Informatica S.p.A</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>SET - Université de Technologie de Belfort-Montbéliard</institution>
          ,
          <country country="FR">France</country>
        </aff>
        <aff id="aff4">
          <label>4</label>
          <institution>Vito Morreale</institution>
        </aff>
      </contrib-group>
      <fpage>187</fpage>
      <lpage>194</lpage>
      <abstract>
        <p>Goals provide more stable abstractions than others (e.g. user stories) in the analysis and design of software applications. Thus, the PRACTIONIST framework supports a goal-oriented approach for developing agent systems according to the Belief-Desire-Intention (BDI) model. In this paper we describe the goal model of PRACTIONIST agents, in terms of the general structure and the relations among goals. Furthermore we show how PRACTIONIST agents use their goal model to reason about goals during their deliberation process and means-ends reasoning as well as while performing their activities.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        With the increasing management complexity and
maintenance cost of advanced information systems, attention in
recent years has fallen on self-* systems and particularly on
the autonomic computing approach and autonomic systems. In
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] the authors argue that adopting a design approach that supports
the definition of a space of possible behaviours related to the
same function is one of the ways to make a system autonomic.
Then the system should be able to select at runtime the best
behaviour on the basis of the current situation. Goals can be
used as an abstraction to model the functions around which
the systems can autonomously select the proper behaviour.
      </p>
      <p>In this view, the explicit representation of goals and the
ability to reason about them play an important role in several
requirements analysis and modelling techniques, especially
when adopting the agent-oriented paradigm.</p>
      <p>
        In this area, one of the most popular and successful agent
models is the BDI [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which derives from the philosophical
tradition of practical reasoning first developed by Bratman [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
It states that agents decide, moment by moment, which actions
to perform in order to pursue their goals. Practical reasoning
involves a deliberation process, to decide what states of affairs
to achieve, and a means-ends reasoning, to decide how to
achieve them.
      </p>
      <p>
        Nevertheless, there is a gap between BDI theories and
several implementations [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Indeed, most of existing BDI agent
platforms (e.g. JACK [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], JAM [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]) generally use goals instead
of desires. Moreover, the actual implementations of mental
states differ somewhat from their original semantics: desires
(or goals) are treated as event types (such as in AgentSpeak(L)
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]) or procedures (such as in 3APL [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]) and intentions are
executing plans. Therefore the deliberation process and
means-ends reasoning are not well separated, as being committed to
an intention (ends) is the same as executing a plan (means).
      </p>
      <p>Moreover, some available BDI agent platforms do not
support the explicit representation and implementation of goals
or desires with their properties and relations, but they deal with
them in a procedural and event-based fashion. As a result,
while such an explicit representation of goals provides useful
and stable abstractions when analysing and designing
agent-based systems, there is a gap between the products of those
phases and what development frameworks support.</p>
      <p>
        According to Winikoff et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], “by omitting the declarative
aspect of goals the ability to reason about goals is lost”. What
is actually lost is the ability to know if goals are impossible,
achieved, incompatible with other goals, and so forth. This in
turn can support the commitment strategies of agents and their
ability to autonomously drop, reconsider, replace or pursue
goals.
      </p>
      <p>
        However, some other BDI agent platforms deal with
declarative goals. Indeed, in JADEX goals are explicitly represented
according to a generic model, enabling the agents to handle
their life cycle and to reason about them [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Nevertheless, the
model defined in JADEX does not deal with relations among
goals.
      </p>
      <p>
        The PRACTIONIST framework [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] adopts a goal-oriented
approach to develop BDI agents and stresses the separation
between the deliberation process and the means-ends
reasoning, with the abstraction of goal used to formally define both
desires and intentions during the deliberation phase. Indeed,
in PRACTIONIST a goal is considered as an analysis, design,
and implementation abstraction compliant to the semantics
described in this paper. In other words, PRACTIONIST agents
can be programmed in terms of goals, which then will be
related to either desires or intentions according to whether
some specific conditions are satisfied or not.
      </p>
      <p>After a brief overview of the general structure of
PRACTIONIST agents and their execution model (section II), this
paper addresses the definition of the goal model (section III).
We also describe how PRACTIONIST agents are able to
reason about available goals according to their goal model,
current beliefs, desires, and intentions (see section IV). All
aforementioned issues and the proposed model are fully
implemented in the PRACTIONIST framework and available when
developing applications by using the goal-oriented approach
and the concepts described in this paper (section V). Finally,
in section VI we present a simple example that illustrates the
definition and the usage of goals and their relations.</p>
    </sec>
    <sec id="sec-2">
      <title>II. PRACTIONIST AGENTS</title>
      <p>
        The PRACTIONIST framework aims at supporting the
programmer in developing BDI agents and is built on top of JADE
[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], a widespread platform that implements the FIPA (http://www.fipa.org)
specifications. Therefore, our agents are deployed within JADE
containers and their main cycle is implemented by means of
a JADE cyclic behaviour.
      </p>
      <p>A PRACTIONIST agent is a software component endowed
with the following elements:
• a set of perceptions and the corresponding perceptors that
listen to some relevant external stimuli;
• a set of beliefs representing the information the agent
has got about both its internal state and the external
environment;
• a set of goals the agent wishes or wants to pursue. They
represent some states of affairs to bring about or activities
to perform and will be related to either its desires or
intentions (see below);
• a set of goal relations the agent uses during the
deliberation process and means-ends reasoning;
• a set of plans that are the means to achieve its intentions;
• a set of actions the agent can perform to act over its
environment; and
• a set of effectors that actually execute the actions.</p>
      <p>
        Beliefs, plans, and the execution model are briefly described
in this section, while goals are the subject of this paper and are
presented in the following sections. However, for a detailed
description of the structure of PRACTIONIST agents, the
reader should refer to [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
        The BDI model refers to beliefs instead of knowledge, as
beliefs are not necessarily true, while knowledge usually refers
to something that is true [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. According to this, an agent may
believe true something that is false from the other agents’ or
the designer’s point of view, but the idea is just to provide the
agents with a subjective window over the world.
      </p>
      <p>
        Therefore each PRACTIONIST agent is endowed with a
prolog belief base, where beliefs are asserted, removed, or
entailed through inference on the basis of KD45 modal logic
rules [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] and user-defined formulas. Currently the
PRACTIONIST framework supports two Prolog engines, i.e.
SWI-Prolog (http://www.swi-prolog.org) and one derived from tuProlog (http://tuprolog.alice.unibo.it).
      </p>
      <p>In the PRACTIONIST framework plans represent an
important container in which developers define the actual behaviors
of agents.</p>
      <p>Each agent may own a declared set of plans (the plan
library), each specifying the course of acts the agent will
undertake in order to pursue its intentions, or to handle
incoming perceptions, or to react to changes of its beliefs.</p>
      <p>PRACTIONIST plans have a set of slots that are used
by agents during the means-ends reasoning and the actual
execution of agent activities. Some of these slots are: the
trigger event, which defines the event (i.e. goals, perceptions,
and belief updating) each plan is supposed to handle; the
context, a set of conditions that must hold before the plan can
be actually performed; the body, which includes the acts the
agent performs during the execution of the plan.</p>
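      <p>The interplay of these three slots during plan selection can be sketched roughly as follows. This is an illustrative approximation only: the Plan class and the matchesTrigger, contextHolds, and body operations are hypothetical names, not the framework's actual API.</p>

```java
// Illustrative sketch: a plan exposes a trigger-event test, a context
// test, and a body. All names here are hypothetical, not PRACTIONIST API.
public class PlanSlotsDemo {
    interface Event { String label(); }

    static class GoalEvent implements Event {
        private final String goalName;
        GoalEvent(String goalName) { this.goalName = goalName; }
        public String label() { return "goal:" + goalName; }
    }

    static abstract class Plan {
        // trigger event: which events this plan is supposed to handle
        abstract boolean matchesTrigger(Event e);
        // context: conditions that must hold before the plan can run
        abstract boolean contextHolds();
        // body: the acts performed while executing the plan
        abstract void body();
    }

    static boolean executed = false;

    static boolean demo() {
        Plan fillHolePlan = new Plan() {
            boolean matchesTrigger(Event e) { return e.label().equals("goal:FillHole"); }
            boolean contextHolds() { return true; } // e.g. "a hole has been found"
            void body() { executed = true; }
        };
        Event e = new GoalEvent("FillHole");
        // a plan is practical if its trigger matches, applicable if its context holds
        if (fillHolePlan.matchesTrigger(e) && fillHolePlan.contextHolds()) {
            fillHolePlan.body();
        }
        return executed;
    }

    public static void main(String[] args) {
        System.out.println(demo()); // prints: true
    }
}
```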
      <p>Through their perceptors, agents search for stimuli
(perceptions) from the environment and transform them into (external)
events, which in turn are put into the Event Queue (figure
1). Such a queue also contains internal events, which are
generated when either an agent is committed to a goal or
some beliefs are updated. The former type of internal events is
particularly important in PRACTIONIST agents, as described
in the following sections.</p>
      <p>The main cycle of a PRACTIONIST agent is implemented
within a cyclic behaviour, which consists of the following
steps.</p>
      <p>1) it selects and extracts an event from the queue, according
to a proper Event Selection logic;
2) it handles the selected event through the following
means-ends reasoning process: (i) the agent figures out
the practical plans, which are those plans whose trigger
event matches the selected event (Options in figure
1); (ii) among the practical plans, the agent detects the
applicable ones, which are those plans whose context
is believed true, and selects one of them (the main plan);
(iii) it builds the intended means, which will contain the
main plan and the other alternative practical plans. In case
of a goal event, it updates the corresponding intended means
stack; otherwise it creates a new intended means stack.</p>
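      <p>The filtering performed in step 2 can be sketched as follows. The code is an illustrative approximation of the described process, not the framework's implementation; the Plan class and the intendedMeans method are hypothetical names.</p>

```java
// Sketch of the means-ends step: keep the practical plans (trigger matches
// the event), then the applicable ones (context believed true); the first
// is taken as the main plan and the rest remain as alternatives.
// All names are illustrative.
import java.util.ArrayList;
import java.util.List;

public class MeansEndsDemo {
    static class Plan {
        final String name; final String trigger; final boolean contextTrue;
        Plan(String name, String trigger, boolean contextTrue) {
            this.name = name; this.trigger = trigger; this.contextTrue = contextTrue;
        }
    }

    // returns the applicable plans for the event, main plan first
    static List<Plan> intendedMeans(List<Plan> library, String event) {
        List<Plan> applicable = new ArrayList<>();
        for (Plan p : library) {
            if (p.trigger.equals(event)  // practical: trigger matches the event
                    && p.contextTrue) {  // applicable: context is believed true
                applicable.add(p);
            }
        }
        return applicable; // element 0 = main plan, the rest are alternatives
    }

    public static void main(String[] args) {
        List<Plan> library = List.of(
                new Plan("moveNorth", "goal:FindTile", true),
                new Plan("scanGrid", "goal:FindTile", false),
                new Plan("fillHole", "goal:FillHole", true));
        System.out.println(intendedMeans(library, "goal:FindTile").get(0).name); // prints: moveNorth
    }
}
```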
      <p>It should be noted that every intended means stack can
contain several intended means, each able to handle a given
event, possibly through several alternative means.</p>
      <p>Moreover all intended means stacks are concurrently
executed, in order to provide the agents with the capability of
performing several activities (perhaps referring to related or
non-related objectives) in parallel. When executing each stack,
the top level intended means is in turn executed, by performing
its main plan. If it fails for some reason, one of the alternative
plans is then performed, until the corresponding ends (related
to the triggering event) is achieved.</p>
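      <p>This fallback behaviour can be sketched as a simple loop; the code below is an illustrative toy that models each plan as a boolean-returning supplier, which is an assumption of the sketch rather than the framework's design.</p>

```java
// Sketch of executing one intended means: run the main plan and, on
// failure, fall back to the alternative plans until one succeeds or
// none are left. Illustrative names only.
import java.util.List;
import java.util.function.BooleanSupplier;

public class IntendedMeansDemo {
    // each plan is modelled as a supplier returning true on success
    static boolean execute(List<BooleanSupplier> plans) {
        for (BooleanSupplier plan : plans) {
            if (plan.getAsBoolean()) return true; // the ends is achieved
            // otherwise try the next alternative plan
        }
        return false; // no plan achieved the ends
    }

    public static void main(String[] args) {
        boolean ok = execute(List.of(
                () -> false,  // main plan fails
                () -> true)); // first alternative succeeds
        System.out.println(ok); // prints: true
    }
}
```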
      <p>During the execution of a plan, several acts can be
performed, such as desiring to bring about some states of affairs
or to perform some action, adding or removing beliefs, sending
ACL messages, and so forth. Particularly, desiring to pursue
a goal triggers a deliberation/filtering process, in which the
agent figures out whether that goal must be actually pursued
or not, on the basis of the goal model declared for that agent.</p>
      <p>The interaction among intended means belonging to
different stacks can occur at a goal level, since each plan could wait
for the success/failure of some goal that the agent is pursuing
through another intended means.</p>
    </sec>
    <sec id="sec-4">
      <title>III. GOAL MODEL</title>
      <p>In the PRACTIONIST framework, a goal is an objective
to pursue and we use it as a means to transform desires into
intentions through the satisfaction of some properties. In other
words, our agents are programmed in terms of goals, which
then will be related to either desires or intentions according
to whether some specific conditions are satisfied or not.</p>
      <p>Formally, a PRACTIONIST goal g is defined as follows:
g = ⟨σg, πg⟩
(1)
where:
• σg is the success condition of the goal g;
• πg is the possibility condition of the goal g, stating
whether g can be achieved or not.</p>
      <p>Since we consider such elements as local properties of
goals, in the PRACTIONIST framework we defined them as
operations that have to be implemented for each kind of goal
(figure 3).</p>
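      <p>A minimal sketch of this idea, assuming a toy belief base, could look as follows; the interface and method names are hypothetical and do not reproduce the actual operations of figure 3.</p>

```java
// Sketch of a goal as the pair <success condition, possibility condition>,
// both evaluated against the agent's beliefs. Illustrative names only.
import java.util.function.Predicate;

public class GoalDemo {
    interface BeliefBase { boolean believes(String proposition); }

    static class Goal {
        final Predicate<BeliefBase> success;     // sigma_g
        final Predicate<BeliefBase> possibility; // pi_g
        Goal(Predicate<BeliefBase> success, Predicate<BeliefBase> possibility) {
            this.success = success; this.possibility = possibility;
        }
        boolean succeeded(BeliefBase b) { return success.test(b); }
        boolean possible(BeliefBase b)  { return possibility.test(b); }
    }

    public static void main(String[] args) {
        // the agent only believes it is holding a tile
        BeliefBase b = p -> p.equals("holding(tile)");
        Goal holdTile = new Goal(
                bb -> bb.believes("holding(tile)"), // success condition
                bb -> true);                        // assumed always possible here
        System.out.println(holdTile.succeeded(b)); // prints: true
        System.out.println(holdTile.possible(b));  // prints: true
    }
}
```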
      <p>In order to describe the goal model, we first provide some
definitions about the properties of goals.</p>
      <p>Definition 1 A goal g1 is inconsistent with a goal g2
(g1⊥g2) if and only if when g1 succeeds, then g2 fails.</p>
      <p>Definition 2 A goal g1 entails a goal g2 or equivalently g2
is entailed by g1 (g1 → g2) if and only if when g1 succeeds,
then also g2 succeeds.</p>
      <p>Definition 3 A goal g1 is a precondition of a goal g2
(g1 ↦ g2) if and only if g1 must succeed in order for it to be
possible to pursue g2.</p>
      <p>Definition 4 A goal g1 depends on a goal g2 (g1 ↪ g2) if
and only if g2 is a precondition of g1 and g2 must remain successful
while pursuing g1.</p>
      <p>Therefore the dependence is a stronger form of precondition.
Both definitions let us specify that some goals must be
successful before (and during, in case of dependency) pursuing
some other goals (refer to section IV for more details).</p>
      <p>Now, given a set G of goals and based on the above
definitions, it is also possible to define some relations
between those goals.</p>
      <p>Definition 5 The inconsistency Γ ⊆ G × G is a binary
symmetric relation on G, defining goals that are inconsistent
with each other. Formally,
Γ = {(gi, gj), i, j = 1, ..., |G| : gi ⊥ gj} .
(2)</p>
      <p>When two goals are inconsistent with each other, it might
be useful to specify that one is preferred to the other. We
denote that gi is preferred to gj with gi ≻ gj.</p>
      <p>Definition 6 The relation of preference Γ′ ⊆ Γ defines the
pairs of goals (gi, gj) where gi ⊥ gj and gi ≻ gj. Formally,
Γ′ = {(gi, gj) ∈ Γ : gi ≻ gj} .
(3)</p>
      <p>Therefore if there is no preference between two inconsistent
goals, the corresponding pair does not belong to the set Γ′.
Moreover, since several goals can be pursued in parallel,
there is no need to prefer some goal to another goal if they
are not inconsistent with each other.</p>
      <p>Definition 7 The entailment Ξ ⊆ G × G is a binary relation
on G, defining which goals entail other goals. Formally,
Ξ = {(gi, gj), i, j = 1, ..., |G| : gi → gj} .
(4)</p>
      <p>Definition 8 The precondition set Π ⊆ G × G is a binary
relation on G, defining which goals are preconditions of other
goals. Formally,
Π = {(gi, gj), i, j = 1, ..., |G| : gi ↦ gj} .
(5)</p>
      <p>Definition 9 The dependence Δ ⊆ G × G is a binary relation
on G, defining which goals depend on other goals. Formally,
Δ = {(gi, gj), i, j = 1, ..., |G| : gi ↪ gj} .
(6)</p>
      <p>Finally, on the basis of the above properties and relations
we can now define the structure of the goal model of
PRACTIONIST agents as follows:
GM = ⟨G, Γ, Γ′, Ξ, Π, Δ⟩
(7)
where:
• G is the set of goals the agent could pursue;
• Γ is the inconsistency relation among goals;
• Γ′ is the preference relation among inconsistent goals;
• Ξ is the entailment relation among goals;
• Π is the precondition relation among goals;
• Δ is the dependence relation among goals.</p>
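      <p>As an illustration of how such relations could be stored and queried, the following toy class keeps the inconsistency and preference relations as sets of ordered pairs over goal names; it is a sketch under that assumption, not the framework's GoalModel.</p>

```java
// Toy goal model holding the inconsistency relation (symmetric) and the
// preference relation (a subset of the inconsistency pairs), as in the
// definitions above. Illustrative names only.
import java.util.AbstractMap.SimpleEntry;
import java.util.HashSet;
import java.util.Map.Entry;
import java.util.Set;

public class GoalModelDemo {
    final Set<Entry<String, String>> inconsistency = new HashSet<>();
    final Set<Entry<String, String>> preference = new HashSet<>();

    void declareInconsistent(String g1, String g2) {
        // the inconsistency relation is symmetric
        inconsistency.add(new SimpleEntry<>(g1, g2));
        inconsistency.add(new SimpleEntry<>(g2, g1));
    }

    void declarePreferred(String gi, String gj) {
        // only pairs of inconsistent goals belong to the preference relation
        if (inconsistency.contains(new SimpleEntry<>(gi, gj)))
            preference.add(new SimpleEntry<>(gi, gj));
    }

    boolean inconsistent(String g1, String g2) {
        return inconsistency.contains(new SimpleEntry<>(g1, g2));
    }

    boolean preferred(String gi, String gj) {
        return preference.contains(new SimpleEntry<>(gi, gj));
    }

    public static void main(String[] args) {
        GoalModelDemo gm = new GoalModelDemo();
        gm.declareInconsistent("achieve(p)", "achieve(not p)");
        gm.declarePreferred("achieve(p)", "achieve(not p)");
        System.out.println(gm.inconsistent("achieve(not p)", "achieve(p)")); // prints: true
        System.out.println(gm.preferred("achieve(p)", "achieve(not p)"));    // prints: true
    }
}
```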
      <p>Let GM = ⟨G, Γ, Γ′, Ξ, Π, Δ⟩ be the goal model of a
PRACTIONIST agent α and, at a given time, let G′ ⊆ G be the
set of its active goals, which are those goals that the agent is
already committed to.</p>
      <p>Suppose that α starts its deliberation process and generates
the goal g = ⟨σg, πg⟩ as an option. Therefore the agent would
like to commit to g, that is, its desire is to bring about the goal
g. However, since an agent will not be able to achieve all its
desires, it performs the following process in the context of its
deliberation phase (figure 2): the agent checks if it believes
that the goal g is possible and not inconsistent (see definition
1) with its active goals (belonging to G′).</p>
      <p>If both conditions hold the desire to pursue g will be
promoted to an intention. Otherwise, in case of inconsistency
among g and some active goals, the desire to pursue g will
become an intention only if g is preferred to such inconsistent
goals, which will in turn be dropped.</p>
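      <p>This promotion step can be sketched as follows. The code is an illustrative approximation of the described filter, with the inconsistency and preference relations passed in as predicates; all names are hypothetical.</p>

```java
// Sketch of the deliberation filter: an option g is promoted to an
// intention if it is believed possible and either no active goal is
// inconsistent with it, or g is preferred to every inconsistent active
// goal (which is then dropped). Illustrative names only.
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public class DeliberationDemo {
    static boolean deliberate(String g,
                              boolean possible,
                              List<String> activeGoals,
                              BiPredicate<String, String> inconsistent,
                              BiPredicate<String, String> preferred) {
        if (!possible) return false; // the desire is not promoted
        List<String> toDrop = new ArrayList<>();
        for (String active : activeGoals) {
            if (inconsistent.test(g, active)) {
                if (!preferred.test(g, active)) return false; // not promoted
                toDrop.add(active); // g is preferred: drop the inconsistent goal
            }
        }
        activeGoals.removeAll(toDrop);
        activeGoals.add(g); // the desire to pursue g becomes an intention
        return true;
    }

    public static void main(String[] args) {
        List<String> active = new ArrayList<>(List.of("avoid(p)"));
        BiPredicate<String, String> inc = (a, b) ->
                (a.equals("maintain(p)") && b.equals("avoid(p)"))
                        || (a.equals("avoid(p)") && b.equals("maintain(p)"));
        BiPredicate<String, String> pref = (a, b) -> a.equals("maintain(p)");
        boolean promoted = deliberate("maintain(p)", true, active, inc, pref);
        System.out.println(promoted + " " + active); // prints: true [maintain(p)]
    }
}
```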
      <p>
        In any case, if the desire to pursue g is promoted to an
intention, before starting the means-ends reasoning, the agent
α checks if it believes that the goal g succeeds (that is, if it
believes that the success condition σg holds) or whether the
goal g is entailed (see definition 2) by some of the current
active goals. If both of the above conditions do not hold,
the agent will perform the means-ends reasoning, by either
selecting a plan from a fixed plan library or dynamically
generating a plan and finally executing it (details on this
means-ends reasoning can be found in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]).
      </p>
      <p>Indeed, if the goal g succeeds or is entailed by some current
active goals (i.e. some other means is working to achieve a
goal that entails the goal g), there is no reason to pursue it.
Therefore, the agent does not need to make any means-ends
reasoning to figure out how to pursue the goal g.</p>
      <p>Otherwise, before starting the means-ends reasoning, if
some declared goals are precondition for g, the agent will
first desire to pursue such goals and then the goal g.</p>
      <p>
        In the PRACTIONIST framework, as a default, an agent
will continue to maintain an intention until it believes that
either such an intention has been achieved or it is no longer
possible to achieve the intention. This commitment strategy to
intention is called single-minded commitment [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. In order to
perform such a behaviour, the agent continuously checks whether it
believes that the goal g has succeeded and whether the goal
g is still possible.
      </p>
      <p>Moreover, the agent checks whether some dependee goal
no longer succeeds. If so, it will desire to pursue such a goal and
then continue pursuing the goal g. When all dependee goals
succeed, the agent resumes the execution of the plan.</p>
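      <p>One step of this monitoring loop can be sketched as follows; it is an illustrative approximation of the single-minded commitment check described above, with hypothetical names.</p>

```java
// Sketch of one monitoring step: keep the intention while the goal has
// not succeeded and is still believed possible; if a dependee goal no
// longer succeeds, pursue it first. Illustrative names only.
import java.util.List;
import java.util.function.Predicate;

public class CommitmentDemo {
    enum Status { ACHIEVED, IMPOSSIBLE, PURSUE_DEPENDEE, CONTINUE }

    static Status step(String goal,
                       List<String> dependees,
                       Predicate<String> succeeded,
                       Predicate<String> possible) {
        if (succeeded.test(goal)) return Status.ACHIEVED;   // drop: achieved
        if (!possible.test(goal)) return Status.IMPOSSIBLE; // drop: impossible
        for (String dep : dependees)
            if (!succeeded.test(dep)) return Status.PURSUE_DEPENDEE; // re-desire it
        return Status.CONTINUE; // keep executing the current plan
    }

    public static void main(String[] args) {
        // FillHole depends on HoldTile: if the tile was lost, HoldTile no
        // longer succeeds and must be pursued again before continuing.
        Status s = step("FillHole", List.of("HoldTile"), g -> false, g -> true);
        System.out.println(s); // prints: PURSUE_DEPENDEE
    }
}
```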
      <p>In order to be able to recover from plan failures and try
other means to achieve an intention, if the selected plan fails
or is no longer appropriate to achieve the intention, then the
agent selects one of the applicable alternative plans within the
same intended means and executes it.</p>
    </sec>
    <sec id="sec-5">
      <title>IV. REASONING ABOUT GOALS</title>
      <p>In this section we show how the goal elements previously
defined are used by PRACTIONIST agents when reasoning
about goals during their deliberation process and the
means-ends reasoning. We also highlight the actual relations between
them and mental attitudes, i.e. desires and intentions.</p>
      <p>In PRACTIONIST agents goals and their properties are
defined on the basis of what agents believe. Thus, an agent will
believe that a goal g = hσg, πgi has succeeded if it believes
that its success condition σg is true. The same holds for the
other properties.</p>
      <p>It is important to note that, in PRACTIONIST, desires and
intentions are mental attitudes towards goals, which are in
turn considered as descriptions of objectives. Thus, referring
to a goal, an agent can just relate it to a desire, which it is
not committed to because of several possible reasons (e.g. it
believes that the goal is not possible). On the other hand, a
goal can be related to an intention, that is, the agent is actually
and actively committed to pursue it.</p>
      <p>If none of the alternative plans was able to successfully
pursue the goal g, the agent takes into consideration the goals
that entail g. Thus the agent selects one of them and considers
it as an option, processing it in the way described in this
section, from deliberation to means-ends reasoning.</p>
      <p>If there is no plan to pursue alternative goals, the
achievement of the intention has failed, as the agent has no other ways
to pursue its intention. Thus, according to the agent's beliefs, the
goal was possible, but the agent was not able to pursue it (i.e.
there are no plans).</p>
    </sec>
    <sec id="sec-6">
      <title>V. THE SUPPORT FOR THE GOAL MODEL IN THE PRACTIONIST FRAMEWORK</title>
      <p>In order to provide the PRACTIONIST framework with the
support for the definition/handling of agent goal models and
the capabilities for reasoning about goals, we identified and
fulfilled the following requirements:
• registration of the goals that each agent could try to
pursue during its life cycle;
• registration of the relations among such goals;
• checking whether two goals are inconsistent and which
the preferred one is (if any);
• getting the list of goals that entail a given goal;
• getting the list of goals that are precondition of a given
goal;
• getting the list of goals which a given goal depends on.</p>
      <p>A proper ad-hoc search algorithm explores the goal model
and answers the queries, on the basis of both declared and
implicit relations. Indeed, implicit relations (especially
inconsistency and entailment) can be inferred from the semantics
of some built-in goals, such as state goals (e.g. achieve(ϕ),
cease(ϕ), maintain(ϕ), and avoid(ϕ), where ϕ is a closed
formula of FOL). Therefore, the goal reasoner takes into
account implicit relations such as achieve(ϕ)⊥achieve(¬ϕ),
achieve(ϕ)⊥cease(ϕ), maintain(ϕ)⊥avoid(ϕ), and so
forth.</p>
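      <p>Such implicit inconsistencies could be inferred along the following lines; this is an illustrative encoding of state goals as (kind, formula) pairs, not the goal reasoner's actual implementation.</p>

```java
// Sketch of inferring implicit inconsistency between built-in state goals:
// achieve(phi) is inconsistent with achieve(not phi) and with cease(phi),
// and maintain(phi) with avoid(phi). Illustrative encoding only.
public class ImplicitRelationsDemo {
    record StateGoal(String kind, String formula) {}

    static boolean oneWay(StateGoal a, StateGoal b) {
        if (a.kind().equals("achieve") && b.kind().equals("achieve")
                && b.formula().equals("not " + a.formula())) return true;
        if (a.kind().equals("achieve") && b.kind().equals("cease")
                && a.formula().equals(b.formula())) return true;
        if (a.kind().equals("maintain") && b.kind().equals("avoid")
                && a.formula().equals(b.formula())) return true;
        return false;
    }

    // inconsistency is symmetric, so check both directions
    static boolean inconsistent(StateGoal a, StateGoal b) {
        return oneWay(a, b) || oneWay(b, a);
    }

    public static void main(String[] args) {
        StateGoal g1 = new StateGoal("achieve", "holeFilled");
        StateGoal g2 = new StateGoal("cease", "holeFilled");
        System.out.println(inconsistent(g1, g2)); // prints: true
        System.out.println(inconsistent(g2, g1)); // prints: true
    }
}
```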
      <p>Figure 3 shows the actual structure of the GoalModel
that each agent owns (PRACTIONISTAgent is the
abstract class that has to be extended when developing
PRACTIONIST agents). Such a model stores
information about declared goals (with their internal properties,
i.e. success and possibility condition) and the four types
of relations these goals are involved in. Specifically the
interface GoalRelation provides the super interface
for all goal relations supported by the PRACTIONIST
framework (i.e. EntailmentRel, InconsistencyRel,
DependencyRel, and PreconditionRel) and defines
the operation verifyRel, whose purpose is to check each
specific relation.</p>
      <p>In order to exploit the features provided by the goal model
and understand if a given goal the agent desires to pursue is
inconsistent with or implied by some active goals, the agent
must have information about such active goals and whether
they are related to either desires or intentions. Therefore, each
PRACTIONIST agent owns an ActiveGoalsHandler
component, which, with the aid of the GoalModel, has the
responsibility of keeping track of all executing intended means
stacks with the corresponding waiting and executing goals and
managing requests made by the agent.</p>
      <p>Thus, at any given time, the ActiveGoalsHandler is
aware of current desires and intentions of the agent, referring
them to active goals.</p>
    </sec>
    <sec id="sec-7">
      <title>VI. AN EXAMPLE</title>
      <p>In this section we present the Tileworld example to illustrate
how to use the goal model presented in this paper and the
support provided by the PRACTIONIST framework.</p>
      <p>
        The Tileworld example was initially introduced in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] as
a system with a highly parameterized environment that could
be used to investigate the reasoning in agents. The original
Tileworld consists of a grid of cells on which tiles, obstacles
and holes (of different size and point value) can exist. Each
agent can move up, down, left or right within the grid to pick
up and move tiles in order to fill the holes. Each hole has an
associated score, which is awarded to the agent that has filled
the hole. The main goal of the agent is to score as many points
as possible.
      </p>
      <p>Tileworld simulations are dynamic and the environment
changes continually over time. Since this environment is
highly parameterized, the experimenter can alter various
aspects of it through a set of available “knobs”, such as the
rate at which new holes appear (dynamism), the rate at
which obstacles appear (hostility), difference in hole scores
(variability of utility), and so forth.</p>
      <p>Such applications, with a potentially high degree of
dynamism, can benefit from the adoption of a goal-oriented
design approach, where the abstraction of goal is used to
declaratively represent agents’ objectives and states of affairs
that can be dynamically achieved through some means.</p>
      <p>Figure 4 shows the Tileworld environment, where new
agents can be added or removed and the corresponding
parameters can be dynamically changed.</p>
      <p>In our Tileworld demonstrator two types of agents were
developed, the Tileworld Management Agent (TWMA) and
the Tileworld Player Agent (TWPA): the former is the agent
that manages and controls the environment, by creating and
destroying tiles, holes and obstacles, according to the parameters
set by the user; the latter is the agent moving within the
grid and whose primary goal is to maximize its score by filling
holes with tiles. A player agent does not get any notification
about environment changes (i.e. by the management agent),
but it can ask for such information (e.g. what the current
state of a cell is) by means of sensing actions, in order to
adopt the best strategy on the basis of the current state of
the environment. In fact, for each state of the environment
(e.g. static, dynamic, very dynamic, etc.) at least one strategy is
provided. All the strategies are implemented through plans that
share the same goal and differ in their operative conditions
(i.e. the context).</p>
      <p>It should be noted that, since PRACTIONIST agents are
endowed with the ability of dynamically building plans starting
from a given goal and a set of available actions, some strategies
could be generated on-the-fly by taking into account emerging
situations.</p>
      <p>The player agent has beliefs about its position, its score,
the objects that are placed into the grid, the state of the
environment, etc.</p>
      <p>The TWPA top level goal is to score as many points as
possible, but to do this, it has to register itself with the
manager, look for the holes and for the tiles, hold a tile, and
fill a hole.</p>
      <p>We designed the TWPA by adopting the goal-oriented
approach described in this paper and directly implemented
its goal-related entities (i.e. goals and relations) thanks to the
support provided by the PRACTIONIST framework. In figure
5 a fragment of the goal model of the TWPA is shown as a
UML class diagram with dependencies stereotyped with the
name of the goal relations. Actually some relations only hold
under certain conditions and the diagram does not show such
details.</p>
      <p>According to the diagram, the TWPA has to be
registered with the TWMA before increasing its
score (the goal ScorePoints depends on the goal
RegisterWithManager). Moreover, in order to score
points, the TWPA has to fill as many holes as possible (the
goal FillHole entails the goal ScorePoints). But, in
order to fill a hole, the TWPA has to hold a tile and find a
hole (the goal FillHole depends on the goal HoldTile
and requires the goal FindHole as a precondition); finally,
the TWPA has to find the tile to hold (the goal HoldTile
has the goal FindTile as a precondition).</p>
      <p>According to the above-mentioned description, the following
source code from the TWPAgent class shows how goals
and relations among them are added to the agent and thus
how to create the goal model through the PRACTIONIST
framework:
protected void initialize()
{
    ...
    GoalModel gm = getGoalModel();
    // Goal declaration
    gm.add(new RegisterWithManager());
    gm.add(new ScorePoints());
    gm.add(new HoldTile());
    gm.add(new FindTile());
    gm.add(new FillHole(getBeliefBase()));
    gm.add(new FindHole());
    // relations among goals
    gm.add(new Dep_ScorePoints_RegisterWithManager());
    gm.add(new Ent_ScorePoints_FillHole());
    gm.add(new Dep_FillHole_HoldTile());
    gm.add(new Pre_HoldTile_FindTile());
    gm.add(new Pre_FillHole_FindHole());
    ...
}</p>
      <p>In order to better understand how the above-mentioned
relations are implemented, the following source code shows
the precondition relation between the goals HoldTile and
FindTile:
public class Pre_HoldTile_FindTile
    implements PreconditionRel
{
    public Goal verifyRel(Goal goal1, Goal goal2)
    {
        if ((goal1 instanceof HoldTile) &amp;&amp;
            (goal2 instanceof FindTile))
            return new FindTile();
        return null;
    }
}</p>
      <p>It should be noted that the plan to pursue the goal
FillHole does not need to include the statements to desire
either the dependee (i.e. HoldTile) or precondition (i.e.
FindHole) goals, as shown in the following code fragment:
public class FillHolePlan extends GoalPlan
{
    public void body() throws PlanExecutionException
    {
        String posPred = "pos(obj1: X,obj2: Y)";
        AbsPredicate pos =
            getBeliefBase().retrieveAbsPredicate(
                AbsPredicateFactory.create(posPred));
        int xPos = pos.getInteger("obj1");
        int yPos = pos.getInteger("obj2");
        doAction(new ReleaseTileAction(xPos, yPos,
            twaServer.getHoleValue(xPos, yPos)));
    }
}</p>
      <p>When the player agent desires to pursue a goal, it checks
if this goal is involved in some relations and in that case
it reasons about them during the deliberation, means-ends,
and intention reconsideration processes. Thus, developers only
need to specify goals and relations among them at design
time.</p>
      <p>The Tileworld domain highlights how the PRACTIONIST
goal model is particularly adequate to model dynamic
environments in a very declarative manner.</p>
      <p>As an example, when the TWPA desires to fill a hole (i.e.</p>
      <p>FillHole), according to the defined goal model and the VII. CONCLUSIONS AND FUTURE WORK
semantics described in section 2, the agent automatically will In the PRACTIONIST framework, desires and intentions are
check if it just holds a tile (i.e. HoldTile); if not, such a mental attitudes towards goals, which are in turn considered
goal will be desired. On the other hand, the agent will check as descriptions of objectives.
if it has found a hole (i.e. FindHole) and again, if not, it In this paper we described how a declarative representation
will desire that. of goals can support the definition of desires and intentions in</p>
      <p>Moreover, when pursuing the goal FillHole, the agent PRACTIONIST agents. It also supports the detection and the
will continuously check the success of all its dependee goals resolution of conflicts among agents’ objectives and activities.
(i.e. HoldTile) and maintain them in case of failure. This results in a reduction of the gap between BDI theories
It should be noted that the plan to pursue the goal and several available implementations.</p>
      <p>We also described how goals and relations are used by
PRACTIONIST agents during their deliberation process and
the execution of their activities; in particular, we described how
agents manage these activities by using the support for the
goal model shown in the previous sections.</p>
      <p>It should be noted that, unlike several BDI and non-BDI
agent platforms, the PRACTIONIST framework supports the
declarative definition of goals and the relations among them,
as described in this paper. This gives agents the ability to believe
whether goals are impossible, already achieved, incompatible with
other goals, and so forth. This in turn supports the commitment
strategies of agents and their ability to autonomously drop,
reconsider, replace or pursue intentions related to active goals.</p>
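      <p>As a hedged illustration of how such beliefs can drive commitment, the following sketch keeps an intention only while its goal is believed neither achieved nor impossible, in the spirit of single-minded commitment. The Status enum and the reconsider method are illustrative names, not PRACTIONIST's API.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class CommitmentSketch {
    enum Status { ACTIVE, ACHIEVED, IMPOSSIBLE }

    // Drop intentions whose goals are believed achieved or impossible;
    // keep the rest (single-minded-style commitment).
    static List<String> reconsider(Map<String, Status> beliefs,
                                   List<String> intentions) {
        List<String> kept = new ArrayList<>();
        for (String goal : intentions)
            if (beliefs.get(goal) == Status.ACTIVE)
                kept.add(goal);
        return kept;
    }

    public static void main(String[] args) {
        Map<String, Status> beliefs = Map.of(
            "FillHole", Status.ACTIVE,
            "HoldTile", Status.ACHIEVED,
            "FindHole", Status.IMPOSSIBLE);
        List<String> kept = reconsider(beliefs,
            List.of("FillHole", "HoldTile", "FindHole"));
        System.out.println(kept);  // prints "[FillHole]"
    }
}
```
</p>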
      <p>The ability of PRACTIONIST agents to reason about goals
and the relations among them (as described in section IV)
lets programmers implicitly specify different behaviours for
different circumstances, without having to explicitly code such
behaviours: agents figure out the right activity to
perform on the basis of the current state and the relations
among their potential objectives.</p>
      <p>Goals can be adopted throughout the whole development
process. Thus, we are defining a development methodology
where goals play a central role and maintain the same
semantics from early requirements to the implementation phase.</p>
      <p>As part of our future work, we aim to extend the
proposed model with further properties of goals and relations
among them. Finally, we aim to apply the concepts and
the model described in this paper to the development of
real-world applications based on BDI agents.</p>
      <p>Acknowledgments. This work is partially supported by
the Italian Ministry of Education, University and Research
(MIUR) through the project PASAF.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lapouchnian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liaskos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mylopolous</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yu</surname>
          </string-name>
          , “
          <article-title>Towards requirements-driven autonomic systems design</article-title>
          ,”
          <source>Proceedings of the 2005 Workshop on Design and Evolution of Autonomic Application Software</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          ,
          <year>2005</year>
          , ACM Press, New York, NY, USA.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Georgeff</surname>
          </string-name>
          , “
          <article-title>BDI agents: from theory to practice</article-title>
          ,”
          <source>in Proceedings of the First International Conference on Multi-Agent Systems</source>
          . San Francisco, CA: MIT Press,
          <year>1995</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>319</lpage>
          . [Online]. Available: http://www.uni-koblenz.de/f˜ruit/LITERATURE/rg95.ps.gz
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Bratman</surname>
          </string-name>
          ,
          <source>Intention, Plans, and Practical Reason</source>
          . Cambridge, MA: Harvard University Press,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Winikoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Padgham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Harland</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Thangarajah</surname>
          </string-name>
          , “
          <article-title>Declarative &amp; procedural goals in intelligent agent systems</article-title>
          ,” in KR,
          <year>2002</year>
          , pp.
          <fpage>470</fpage>
          -
          <lpage>481</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>P.</given-names>
            <surname>Busetta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Rnnquist</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hodgson</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Lucas</surname>
          </string-name>
          , “
          <article-title>Jack intelligent agents - components for intelligent agents in java</article-title>
          ,”
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Huber</surname>
          </string-name>
          , “
          <article-title>Jam: A bdi-theoretic mobile agent architecture</article-title>
          .” in Agents,
          <year>1999</year>
          , pp.
          <fpage>236</fpage>
          -
          <lpage>243</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          , “
          <article-title>AgentSpeak(L): BDI agents speak out in a logical computable language</article-title>
          ,” in Seventh European Workshop on Modelling Autonomous Agents in a Multi-Agent World, R. van Hoe, Ed., Eindhoven
          , The Netherlands,
          <year>1996</year>
          . [Online]. Available: citeseer.ist.psu.edu/article/rao96agentspeakl.html
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>K. V.</given-names>
            <surname>Hindriks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. S.</given-names>
            <surname>de Boer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>van der Hoek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.-J. Ch.</given-names>
            <surname>Meyer</surname>
          </string-name>
          , “
          <article-title>Agent programming in 3APL</article-title>
          ,”
          <source>Autonomous Agents and Multi-Agent Systems</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>401</lpage>
          ,
          <year>1999</year>
          , publisher: Kluwer Academic Publishers, Netherlands.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Braubach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pokahr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Lamersdorf</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Moldt</surname>
          </string-name>
          , “
          <article-title>Goal representation for bdi agent systems</article-title>
          ,” in
          <source>Second International Workshop on Programming Multiagent Systems: Languages and Tools</source>
          , 7
          <year>2004</year>
          , pp.
          <fpage>9</fpage>
          -
          <lpage>20</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Morreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bonura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Francaviglia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cossentino</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Gaglio</surname>
          </string-name>
          , “
          <article-title>Practionist: a new framework for bdi agents</article-title>
          ,”
          <source>in Proceedings of the Third European Workshop on Multi-Agent Systems (EUMAS'05)</source>
          ,
          <year>2005</year>
          , p.
          <fpage>236</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bellifemine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Poggi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Rimassa</surname>
          </string-name>
          , “
          <article-title>JADE - a FIPAcompliant agent framework</article-title>
          ,”
          <source>in Proceedings of the Practical Applications of Intelligent Agents</source>
          ,
          <year>1999</year>
          . [Online]. Available: http://jmvidal.cse.sc.edu/library/jade.pdf
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Chellas</surname>
          </string-name>
          ,
          <source>Modal Logic: An Introduction</source>
          . Cambridge: Cambridge University Press,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Rao</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Georgeff</surname>
          </string-name>
          , “
          <article-title>Modeling rational agents within a BDI-architecture</article-title>
          ,”
          <source>in Proceedings of the 2nd International Conference on Principles of Knowledge Representation and Reasoning</source>
          . Morgan Kaufmann publishers Inc.: San Mateo, CA, USA,
          <year>1991</year>
          , pp.
          <fpage>473</fpage>
          -
          <lpage>484</lpage>
          . [Online]. Available: http://citeseer.nj.nec.com/rao91modeling.html
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>M. E.</given-names>
            <surname>Pollack</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Ringuette</surname>
          </string-name>
          , “
          <article-title>Introducing the tileworld: Experimentally evaluating agent architectures</article-title>
          ,”
          <source>National Conference on Artificial Intelligence</source>
          , pp.
          <fpage>183</fpage>
          -
          <lpage>189</lpage>
          ,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>