<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>José Cascalho</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Helder Coelho</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff1">
          <label>1</label>
          <institution>Departamento de Informática, Faculdade de Ciências da Universidade de Lisboa, Bloco C6</institution>
          ,
          <addr-line>Piso 3, Campo Grande, 1749-016 Lisboa</addr-line>
          ,
          <country country="PT">Portugal</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Universidade dos Açores, 9701-851 Angra do Heroísmo</institution>
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this paper we discuss how to tune agents' behaviours by explicitly modifying a set of elements previously defined and included in the agents' architecture. By tuning, we mean influencing the agents' world view, changing their preferences and even modifying their beliefs about which goals are possible. These elements, which we call attributes, such as urgency, insistence and intensity, are able to modify agents' priorities with regard to resource consumption, to modify the evaluation of the implicit costs of action execution, and even to change the agents' view of their capability to execute an action. In a preliminary experimental evaluation made in a multi-agent system environment, a modified predator-prey workbench, we show how the attributes are important elements for improving the predators' global efficiency. In the final discussion, we argue for the benefits of having this set of attributes, which allows agents to be selected and tuned by stakeholders to cope with environmental odds, as a team manager would do.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>In this work we discuss the use of attributes to characterise different agent behaviours. These
attributes, intensity, insistence, importance and urgency, are embedded in the agents’ architecture,
and we use them to change the agents’ decision process. For example, the attribute insistence
measures how much effort an agent is willing to make to achieve a goal. A high value of insistence
means that the agent persists in achieving a specific goal during a predefined time.</p>
      <p>
        These attributes come from previous work [
        <xref ref-type="bibr" rid="ref1">3</xref>
        ][
        <xref ref-type="bibr" rid="ref3">5</xref>
        ] and have already been used to
modulate the decision process in cognitive agents [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ][
        <xref ref-type="bibr" rid="ref14">16</xref>
        ].
      </p>
      <p>
        In this paper we explore these attributes of the agents’ architecture in a predator-prey
environment [
        <xref ref-type="bibr" rid="ref9">11</xref>
        ]. We create predators with different values of those attributes and test the predators’
global performance by measuring the average time the predators stay alive in the world.
      </p>
      <p>We argue that the different behaviours that can be created with different attribute values
(characterised by the properties of the environment the agents belong to) can be used to define agent
types, which could help a stakeholder select the right set of agents from a predefined pool, as a way to
improve the system’s performance.</p>
      <p>Our thesis is that agents, like humans, need to be different (or behave differently) to tackle
different situations: for instance, an emergency, where an energetic response is important, versus a
trading scenario, where the different proposals from different traders need to be evaluated carefully.</p>
      <p>[Figure 1: schematic UML representation of the architecture — goals linked by AND- and OR-connectors, the beliefs (accomplishment, condition, can-do, know-how, means-end, preferences, impossibility) and the attributes intensity, insistence and urgency, each with an Evaluate() operation.]</p>
      <sec id="sec-1-2">
        <title>The agents’ mind architecture</title>
        <sec id="sec-1-2-2">
          <title>Goals, beliefs and attributes</title>
          <p>
            The agents’ mind architecture has a set of goals which are linked by an OR-connector or an
AND-connector (AND/OR decomposition [
            <xref ref-type="bibr" rid="ref15">17</xref>
            ]). These connectors allow the agents’ designer to
create a tree (AND/OR tree) which makes it possible to define ‘and-goals‘ (the children of an
AND-connector node) or ‘or-goals‘ (the children of an OR-connector). Both constitute alternative paths
to satisfy a goal. The former means that a goal is satisfied if all the and-goals are executed
successfully (representing the decomposition of a problem into sub-problems, all of which must be solved),
while the latter means that, to satisfy the goal, it suffices that one of the or-goals is satisfied.
          </p>
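          <p>The AND/OR decomposition above can be sketched in a few lines of Python. This is a minimal illustration, not the paper’s implementation; the goal names and the achieved flag (standing in for the accomplishment belief) are assumptions:</p>

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Goal:
    name: str
    connector: str = "LEAF"            # "AND", "OR" or "LEAF"
    children: List["Goal"] = field(default_factory=list)
    achieved: bool = False             # set by the accomplishment belief

def satisfied(goal: Goal) -> bool:
    # An and-goal needs all children satisfied; an or-goal needs only one.
    if goal.connector == "AND":
        return all(satisfied(c) for c in goal.children)
    if goal.connector == "OR":
        return any(satisfied(c) for c in goal.children)
    return goal.achieved

# G1 = AND(G2, OR(G3a, G3b)): satisfying G2 and one or-goal satisfies G1.
g3a, g3b = Goal("G3a"), Goal("G3b", achieved=True)
g1 = Goal("G1", "AND", [Goal("G2", achieved=True),
                        Goal("G3", "OR", [g3a, g3b])])
print(satisfied(g1))  # True
```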
          <p>In figure 1 a schematic representation of the architecture shows the different classes and their
main connections. Next, we succinctly explain some of the beliefs supporting the agent’s decision
process:
• The accomplishment belief defines the conditions under which a goal is satisfied.
• The know-how belief links a goal to an Action, i.e. a plan to satisfy a goal. In our
architecture this corresponds to a sequence of atomic actions.
• The condition belief defines the (pre-)conditions that support the execution of the
goals and the actions of a plan.
• The cando belief evaluates the agents’ internal capabilities. The difference between a
condition belief and a cando belief is that the former is part of the external condition for the
goal’s (or action’s) execution, while the latter is an evaluation of the agents’
ability (an agent can have the conditions but not the ability to reach a goal)¹. The attribute
intensity is evaluated with respect to the agents’ ability to execute or not execute a goal.
• The preferences belief defines the order in which a sub-goal is chosen. The urgency
influences this order.
• The means-end belief tests the conditions for a goal execution (see below for a detailed
explanation of the role of this belief).
¹Both beliefs evaluate whether the goal can be satisfied if the action associated with the know-how belief is executed.</p>
          <p>Let’s now describe the main cycle (reasoning-cycle) of the agent’s mind. The agent keeps a
pointer to the goal he is executing. He starts by selecting the goal at the top of the tree,
the start node. In the means-end belief, a test is made to evaluate whether a goal is satisfied
or not, by calling the accomplishment belief. If it is satisfied, he looks for another goal in the
AND/OR tree. Otherwise, he checks whether the condition belief is true and verifies whether there is a plan
to execute the goal (know-how belief). Finally, the agent evaluates the Impossibility belief
and the Cando belief. If all conditions return true, he selects an action to execute, following the
plan attached to the goal.</p>
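          <p>The cycle just described can be summarised as follows. This is a hedged sketch only: the belief checks are hypothetical callables standing in for the architecture’s belief objects, and returning None stands for the NULL action:</p>

```python
def reasoning_cycle(goal, beliefs):
    """One pass of the reasoning-cycle for the currently selected goal."""
    if beliefs["accomplished"](goal):
        return None                          # satisfied: look for another goal
    if not beliefs["condition"](goal):       # external (pre-)conditions
        return None
    plan = beliefs["know_how"](goal)         # a plan, or None if unknown
    if plan is None or beliefs["impossible"](goal) or not beliefs["can_do"](goal):
        return None
    return plan[0]                           # next action of the attached plan

# Example belief set: the goal is unsatisfied, feasible, and has a plan.
beliefs = {"accomplished": lambda g: False,
           "condition":    lambda g: True,
           "know_how":     lambda g: ["move_north", "move_east"],
           "impossible":   lambda g: False,
           "can_do":       lambda g: True}
print(reasoning_cycle("G8", beliefs))  # move_north
```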
          <p>An agent searches for a new goal in the AND/OR tree when at least one of the conditions
enumerated above returns false.</p>
          <p>The leaves, or terminal nodes (having no child nodes), of the goals-tree might have variables
which must be instantiated. If this is the case, inside the means-end belief an instantiation
policy is used to order the list of possible instantiations. This instantiation policy is supported by the
preference belief, which orders the options based on predefined criteria².</p>
          <p>After each reasoning-cycle, in which the mental state is updated, an agent will execute an action
(possibly a NULL action).</p>
          <p>[Figure 2: the goal-tree used in the experiments — G1 at the root; G2 (search prey) and G3 (chase prey); G4 (quick chase), G5 (medium-speed chase) and G6 (slow chase) with know-how beliefs k4–k6; G7 (chase and communicate) and G8 (chase), decomposed into G9 (approach with communication), G10 (chase with communication), G11 (approach without communication) and G12 (chase without communication) with know-how beliefs k9–k12; AND- and OR-connectors and instantiation translations are marked.]</p>
          <p>In figure 2 we present the goal-tree we used in the experiments. An agent walks through the
goal-tree looking for a goal that is not satisfied and is executable. When he has the conditions to achieve the
goal, he uses the plan attached to that goal and executes the actions in the plan. An evaluation
of success is kept. When a goal succeeds (the accomplishment belief returns true) he stops the
searching activity and, in the next cycle, restarts the search from the start node of the
tree. If the search for the conditions to select a goal fails, the agent returns a null action.</p>
          <p>Intensity measures the willingness of the agents to spend their energy when satisfying a goal.
Energy is a resource, so we can say that intensity measures how much of a resource an agent is
willing to spend to satisfy a goal. In the meta-model we link the intensity attribute to the belief
‘CanDo‘. This means that for each goal’s plan, and possibly for each action inside that plan, an
evaluation is made of the possibility of executing that plan or action with respect to the agent’s
willingness to spend those resources. Note that ‘intensity‘ does not judge whether an
action is worthwhile. Agents get a constant value of intensity and this characterises their behaviour.
²The instantiation policy is related to the attribute importance, which is not discussed in this paper.</p>
          <p>The insistence attribute is added to the agent’s means-end analysis in the architecture (see figure
1). For each goal, an agent defines a plan as a sequence of steps to accomplish that goal. Insistence measures
the agent’s persistence toward a goal, i.e. how much time an agent will try to satisfy a goal. If
after a number of attempts a goal is not yet accomplished, the agent reconsiders the strategy to
accomplish that goal, marking the previous strategy as a failure and searching for another one.
Agents receive a constant value of insistence and this characterises their behaviour.</p>
          <p>The urgency acts as a regulator of the agents’ behaviour. A set of context variables, usually
related to distress situations, is used to calculate the urgency value. As urgency increases, the
agents change (amplify or contract) the effects of intensity or insistence (i.e.
with urgency equal to 0.9, an agent with an intensity of 0.5 could behave as an agent
with an intensity of 0.8). Instead of being defined as a constant value, urgency depends
dynamically on environment parameters. These parameters can also change dynamically,
which means that an agent is able to adapt in real time to changes in the environment.</p>
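          <p>The amplification/contraction effect can be illustrated with a simple linear rule. The text does not give the exact formula, so the rule and the gain parameter below are assumptions chosen only to reproduce the 0.5-behaves-as-0.8 example:</p>

```python
def effective_intensity(intensity: float, urgency: float,
                        gain: float = 0.75) -> float:
    # Illustrative assumption: urgency above 0.5 amplifies intensity,
    # urgency below 0.5 contracts it; the result is clipped to [0, 1].
    return max(0.0, min(1.0, intensity + gain * (urgency - 0.5)))

print(round(effective_intensity(0.5, 0.9), 3))  # 0.8: high urgency amplifies
print(round(effective_intensity(0.5, 0.1), 3))  # 0.2: low urgency contracts
```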
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Experimental setup</title>
      <p>
        We adapted a pursuit simulator [
        <xref ref-type="bibr" rid="ref9">11</xref>
        ] by adding the energy resource to the predators. This new
parameter changes the way the simulator ends a game, i.e., a game ends after the death of all
predators3 before they could catch all preys launched in the game’s beginning. When a new game
starts, the energy is restored to initial values. In the original simulator, an unspecified number of
predators tries to catch one or more prey agents. A game in the simulator is defined by episodes
and cycles. In each cycle the simulator receives information through sockets about the moves of
predator and prey agents, and messages exchanged among predators. An episode ends when all
prey is caught and a new episode starts with all predator and prey agents randomly repositioned
in the field. Data about how long agents take to catch all prey agents are kept as a statistical
measurement of predator efficiency.
      </p>
      <p>
        The pursuit domain was introduced in [2]. It has been widely used testbed for multi-agent
systems. Several variations of the original descriptions have been studied over the years (see [
        <xref ref-type="bibr" rid="ref8">10</xref>
        ]
for more details). The domain we used in our experiments consists of a discrete, grid world of
20X20, in which the predators catch a prey when predator and prey share the same cell (one of the
predefined capture criteria in the simulator [
        <xref ref-type="bibr" rid="ref9">11</xref>
        ]). Predators and prey can move to north, south,
east and west or stay in their position. They consume more energy when they move than when
they maintain their position. They can send messages to each other4. Finally, their total energy is
incremented by a certain predefined amount after they catch a prey.
      </p>
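      <p>The energy bookkeeping described above can be sketched as follows. The numeric costs, the capture reward and the survival threshold are illustrative assumptions, not the simulator’s actual values:</p>

```python
MOVE_COST, STAY_COST = 2.0, 0.5       # moving costs more than staying
CAPTURE_REWARD = 30.0                 # predefined amount gained per capture
SURVIVAL_THRESHOLD = 0.0              # predators die below this level

def step_energy(energy: float, moved: bool, caught_prey: bool) -> float:
    # One simulator cycle of energy accounting for a predator.
    energy -= MOVE_COST if moved else STAY_COST
    if caught_prey:
        energy += CAPTURE_REWARD
    return energy

def alive(energy: float) -> bool:
    return energy > SURVIVAL_THRESHOLD

e = step_energy(10.0, moved=True, caught_prey=False)
print(e, alive(e))  # 8.0 True
```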
      <p>In the experimental environment the density of prey agents is 0.03 (12 preys per 400
cells) and the number of predators corresponds to 1/3 (4 predators for 12 preys) or 2/3 (8
predators for 12 preys) of the total number of preys. To measure the global performance of all the agents,
for each experiment we record the number of episodes they survive. We ran the simulator about
80 times per experiment, i.e. for each set of agents, we ran the workbench about 80 times.</p>
    </sec>
    <sec id="sec-3">
      <title>Experiments description</title>
      <p>In the next sections we explain the experiments we made to test the attributes
intensity, insistence and urgency in the pursuit domain (see figure 3). We divided the
experiments into two parts. In the first part we evaluated the attributes intensity and urgency
applied to the prey search. Those experiments concern the selection of the goals G4, G5
or G6 of the goals-tree presented in figure 2. Four predators chased twelve random preys (each prey
randomly selects one of the possible moves: north, south, east, west or stay). Selecting
behaviours controlled by the attributes urgency and intensity improved the global system
performance by 50%⁵.
³They die after their energy decreases below a survival threshold.
⁴In this setup, an agent doesn’t consume energy by sending messages.</p>
      <sec id="sec-3-1">
        <title>Selecting search behaviours (G2)</title>
      </sec>
      <sec id="sec-3-2">
        <title>Selecting chase behaviours (G3)</title>
        <sec id="sec-3-2-1">
          <title>Intensity and Urgency</title>
        </sec>
        <sec id="sec-3-2-2">
          <title>Intensity and Insistence</title>
          <p>In the second part, we tested the attributes insistence and intensity in a new setup where
preys, instead of moving randomly, run away from the predators, but with different speeds⁶. In
this part of the experiment we used the right part of the goals-tree, the goal pairs G9, G10 and
G11, G12.</p>
          <p>
            Searching for preys with the attributes intensity and urgency.
Intensity measures the ‘potential‘ an agent applies to satisfy a goal [
            <xref ref-type="bibr" rid="ref10">12</xref>
            ]. We link this attribute
to resource consumption. A high-intensity agent (IA) is then a predator that selects a strategy
corresponding to a high energy-consuming behaviour. The agent selects the goals differently,
based on the energy he has and on the value of his intensity attribute. This selection is made by the
‘Preference belief‘ following these premises:
• An agent is given a certain level of intensity i.
• The goals G4, G5 and G6 correspond to plans with different levels of energy consumption per
time step (ets), where G4 has the highest consumption and G6 the lowest.
• The agent divides etsG4 by his total energy, multiplies by i, and selects G4 if the result falls in
the interval below:
          </p>
          <p>0% ≤ i ∗ etsG4 &lt; 5% → G4
• He does the same for the other two goals, with the following intervals:
5% ≤ i ∗ etsG5 &lt; 10% → G5
10% ≤ i ∗ etsG6 &lt; 50% → G6
• Finally, he doesn’t move if the consumption of even the least-consuming goal is above 50%:
i ∗ etsG6 ≥ 50% → None</p>
          <p>Using two parameters from the context, the optimal number of preys to be caught
and the average capture time of all preys in the episodes, the agent calculates the value of urgency
to satisfy the goal ‘searching a prey‘.</p>
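          <p>The selection rules above can be transcribed directly; the ets values and the energy level in the example below are illustrative numbers, not taken from the experiments:</p>

```python
def select_chase_goal(intensity, energy, ets):
    """Pick G4/G5/G6 by the intensity-weighted consumption bands above."""
    r = {g: intensity * ets[g] / energy for g in ("G4", "G5", "G6")}
    if r["G4"] < 0.05:
        return "G4"                    # quick chase
    if 0.05 <= r["G5"] < 0.10:
        return "G5"                    # medium-speed chase
    if 0.10 <= r["G6"] < 0.50:
        return "G6"                    # slow chase
    return None                        # above 50% even for G6: don't move

ets = {"G4": 8.0, "G5": 5.0, "G6": 2.0}    # G4 highest consumption per step
print(select_chase_goal(0.5, 100.0, ets))  # G4 (0.5*8/100 = 4%, below 5%)
print(select_chase_goal(1.0, 12.0, ets))   # G6 (the G4 and G5 bands fail)
```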
          <p>The urgency value is calculated using the following expression:</p>
          <p>urgency = weight ∗ (bias + 1 − Nr.PreyCaught/PreyTarget) + (1 − weight) ∗ (bias + Nr.Cycles/CyclesAverage − 1),
where bias gives a ‘value of reference‘ for the urgency when the number of preys caught is
equal to the value PreyTarget and the number of cycles in the episode is equal to the value
CyclesAverage.
⁵See [4] for a full description of this experiment.
⁶We defined three types of preys, quick, normal and slow, to which we ascribed different numbers of moves per cycle.</p>
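          <p>The expression can be transcribed directly as a function; the weight and bias values below are illustrative defaults, since the text does not fix them here:</p>

```python
def urgency(n_prey_caught, prey_target, n_cycles, cycles_average,
            weight=0.5, bias=0.5):
    # First term grows when fewer preys than the target have been caught;
    # second term grows when the episode runs longer than the average.
    return (weight * (bias + 1 - n_prey_caught / prey_target)
            + (1 - weight) * (bias + n_cycles / cycles_average - 1))

# At the reference point (target met, average-length episode) urgency == bias.
print(urgency(12, 12, 100, 100))  # 0.5
```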
          <p>Our goal was to allow the agent to select the goal that reveals the greatest efficacy. When
applying the attribute urgency to the preference belief, the agent does the following:
• If the urgency is low, the agent tends to save resources, so he will use the strategies with
the smaller energy/steps rate.
• If the urgency is high, the agent must use all the resources he has to solve the problem
at hand quickly, so he is willing to select the best strategy even if he has to spend the rest of
his resources.</p>
          <p>
            The scenario with the attributes urgency and intensity increased the agents’ performance by
50% compared with the scenario with just the attribute intensity [
            <xref ref-type="bibr" rid="ref2">4</xref>
            ]. Urgency rationalizes
energy consumption by letting agents select behaviours with higher consumption rates only in
urgent situations.
          </p>
          <p>Chasing preys with the attributes intensity and insistence.
In these experiments we created new preys which run away from predators. We added three types of
preys, each with a different speed (average number of moves per time step⁷).</p>
          <p>The agents not only choose the preys to catch but also select the direction they use when
approaching preys. The preferences belief determines which prey is selected, in this case the
nearest prey⁸, i.e. an agent instantiates the goals ‘approach prey‘ and ‘chase prey‘ by giving values to
these two parameters. For example, an agent can select the nearest prey, number ’1’, with the
‘north chase direction‘.</p>
          <p>Furthermore, agents have two sources of information: vision and other agents’ messages. A predator,
when chasing a prey (goal G10), broadcasts a message with information about his position
relative to the prey he is chasing. When the other agents receive this message, they check whether they
see the predator who sent it and, if so, they calculate the position of the
prey he is chasing⁹. This increments the number of preys each agent can see.</p>
          <p>The insistence measures the persistence an agent has toward a goal. After selecting a prey to
chase, a predator chases that prey for a maximum number of cycles (parameter chase prey max cycles).
If he doesn’t catch the prey within this time interval, he gives up catching that prey and tries
to catch another one. Moreover, when an agent gives up, he inserts the prey number in a list of
failures, the preys’ dark list. Another parameter (dark list time interval prey) defines how long
the information about a failure is kept in that list. While this information is in the list, the
agent won’t chase that prey again.</p>
          <p>The agent uses the following expression to calculate the maximum time he chases the same prey,
chase prey max cycles(prey) = distance to prey ∗ e^(insistence∗k),
and the following expression to decide how long to keep the information about a failed attempt to catch a specific prey,
dark list time interval(prey) = max interval ∗ insistence^k ∗ e^(1−insistence^k).
The parameter max interval is a constant with value 40. The parameter k is equal to 3 or 4 (k3 or
k4). These values were selected so that chase prey max cycles lies between 1 and
208 cycles (k = 3) or between 11 and 512 cycles (k = 4), and dark list time interval
lies between 0 and 40 cycles.</p>
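          <p>The two expressions can be checked numerically. Here max interval = 40 and k ∈ {3, 4} as in the text, while the distance values are illustrative:</p>

```python
import math

def chase_prey_max_cycles(distance_to_prey, insistence, k):
    # Maximum number of cycles a predator keeps chasing the same prey.
    return distance_to_prey * math.exp(insistence * k)

def dark_list_time_interval(insistence, k, max_interval=40.0):
    # How long a failed prey stays on the dark list before being retried.
    return max_interval * insistence**k * math.exp(1 - insistence**k)

print(round(chase_prey_max_cycles(1.0, 0.0, 3)))   # 1: minimal insistence
print(round(chase_prey_max_cycles(10.0, 1.0, 3)))  # 201: maximal insistence
print(round(dark_list_time_interval(1.0, 3)))      # 40: failures kept longest
```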
          <p>Finally, in this experiment, the intensity is used to decide whether agents chasing a prey broadcast
a message with the information about its relative location (selecting between the branches G7 or
G8 of the goal-tree). With intensity equal to 1.0, it is certain that they broadcast the relative
position of all the preys they chase. The probability of doing so decreases as the value
of intensity decreases.
⁷All preys move slower than predators, but among them there are 4 quick preys, 4 slow preys and
4 with a speed in between.
⁸This decision is part of the agents’ preferences, which can be changed, but this is not discussed in this paper.
⁹There isn’t an absolute position in the world, so predators only determine the prey position relative to
their own position.</p>
          <p>Several experiments were made with different values of insistence. Table 1 summarises the
results. In the table we verify that the best result is obtained for insistence equal to 1.0 and k = 3,
for all 8 predators (see the last row in the table). We expected that the effect of insistence could
improve the performance, because this attribute gives agents the capability to chase a prey for a
long time. On the other hand, we suspected that if an agent had too much insistence
he could harm his global performance, because he would eventually lose almost all of his energy
trying to capture a quick prey. So we changed the parameter k from k3 to k4 (last column in the
table), extending the time the agent chases a prey without quitting. We found that now,
with insistence equal to 1.0, the mean becomes 8.0. Comparing the means, we also observe that
insistence equal to 0.6 now gives the best result and surprisingly surpasses the previous value for k3 with
insistence equal to 1.0, supporting our suspicion.</p>
          <p>When changing k3 to k4, the predators never quit chasing preys after fixing on them. They
only stop chasing a prey if the prey disappears (for some reason they lose its trace) or if the
prey is caught. Figures 4, 5 and 6 show the number of times a predator quit chasing a prey
because he surpassed the threshold defined by the insistence. We notice that for insistence 1.0 with
k4, predators seldom quit chasing preys. A quite different scenario is observed for insistence 1.0
with k3 and for insistence 0.6 with k4 (figures 4 and 5 respectively).</p>
          <p>The effect of intensity.
In the table of figure 5.4 the results for different values of intensity are presented. We notice that
the results improve as the intensity increases. We conclude that the agents’ messages about the
positions of the preys being chased are important in this specific experimental setup.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>In the predator-prey scenario, the use of attributes proved extremely useful in the search for good
solutions. Intensity, insistence and urgency were tested, and their values significantly influenced
the global performance of the system. This opens avenues for deeper research into the possible types
of agents that could be defined with the help of these attributes, and into extending these results to
different applications where the same attributes could be implemented.</p>
      <p>[Figures 4–6: number of times predators quit chasing a prey, for insistence 1.0 (k3), insistence 0.6 (k4) and insistence 1.0 (k4).]</p>
      <p>Indeed, for a stakeholder, several properties must exist and be maintained in a system,
especially in the presence of distress situations. Along with the usually desired performance,
the ability to avoid catastrophic situations or the need to satisfy a real-time response are also
considered important system requirements [1]. We argue that the set of attributes presented in this
paper can be used as a tool for agents’ adaptation to the environment, in order to satisfy the desired
goals of a stakeholder. There are two ways in which a stakeholder can benefit from having these
attributes in the agents’ definition. First, they can be aggregated to create agent types. For example,
suppose we characterise a social agent as an agent who always communicates his goals to others
when he has a high intensity (he has the Intensity* attribute presented in table 6). If
we add the Insistence attribute to this social agent, we have a social agent who selects
his goals carefully. Second, stakeholders can tune one or more of these attributes to get agents’
behaviour closer to their needs.</p>
      <p>As a long-term goal, we propose to create multi-agent systems in which sets of different agent
types are tested under different simulated scenarios. The best performing sets could be selected to
be used later in real-time environments. The selection among different sets could then be made
automatically, changing the team of agents in different scenarios.</p>
      <p>
        The role of attributes in a cognitive agents’ architecture has been discussed by Sloman [
        <xref ref-type="bibr" rid="ref13">15</xref>
        ][
        <xref ref-type="bibr" rid="ref14">16</xref>
        ].
Following Corrêa [
        <xref ref-type="bibr" rid="ref5">7</xref>
        ][
        <xref ref-type="bibr" rid="ref6">8</xref>
        ], in our previous work [
        <xref ref-type="bibr" rid="ref3">5</xref>
        ][
        <xref ref-type="bibr" rid="ref1">3</xref>
        ], the attributes are associated with the definition
of a mental state, and it is explained how these attributes can increase the plasticity of the agents’
reasoning process.
      </p>
      <p>The two key roles of the attributes used in the agents’ architecture are the following:
• To allow control over how the agents use their resources.
• To provide different ways of satisfying goals in different contexts.</p>
      <p>For example, the insistence is related to how many times an agent persists with a specific strategy
to satisfy a goal, while intensity discriminates the cases in which agents can execute a specific
action to satisfy a goal, by linking its execution capability to the energy resources at their disposal in
the execution context.</p>
      <p>Insistence: an agent with a high insistence persists in achieving a goal. If he fails, he refuses
to satisfy that goal again for a period δf(x), where f(x) depends on the time the agent
spent trying to achieve that goal.</p>
      <p>Intensity*: an agent communicates preys’ positions only if his intensity is above a threshold.
The rationale behind this is the following: an agent communicates only when he knows that the
probability of achieving the goal alone is small.</p>
      <p>Intensity: the probability that an agent communicates preys’ positions increases with the
value of his intensity attribute. The rationale behind this is the following: an agent communicates
his goals to the other agents to increase his probability of success.</p>
      <p>
        Emotions are often mentioned along with the issues of agents’ adaptation
to changing environments and the control of the use of their resources [
        <xref ref-type="bibr" rid="ref10">12</xref>
        ]. Those and another related topics
are also relevant in our research. For example in [
        <xref ref-type="bibr" rid="ref12">14</xref>
        ], emotion is regarded as a mechanism capable
of creating action tendencies, which is related to our notion of agents’ types. In [
        <xref ref-type="bibr" rid="ref7">9</xref>
        ] it is suggested
the use of affective control states in which the same type of attributes we used are applied to control
goals selection embedded in goal motivations.
      </p>
      <p>
        Finally, the flexible selection of strategies which deal with time pressure is discussed in [
        <xref ref-type="bibr" rid="ref11">13</xref>
        ],
which is related to what we did in our work, i.e., the on-line selection of the best strategy with respect to
resource consumption.
      </p>
    </sec>
    <sec id="sec-5">
      <title>References</title>
      <p>[1] Victor Basili, Paolo Donzelli, and Sima Asgari. A unified model of dependability: Capturing
dependability in context. IEEE Software, 21(6):19–25, 2004.</p>
      <p>[2] M. Benda, V. Jagannathan, and R. Dodhiawala. On optimal cooperation of knowledge sources.
Technical report, Boeing Artificial Intelligence Center, Boeing Computer Services, 1985.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Jose</given-names>
            <surname>Cascalho</surname>
          </string-name>
          , Luis Antunes, and
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>Toward a motivated bdi using attributes embedded in mental states</article-title>
          .
          <source>In XI Conferencia de la Asociación Española para la Inteligencia Artificial (CAEPIA</source>
          <year>2005</year>
          ), volume
          <volume>2</volume>
          , pages
          <fpage>215</fpage>
          -
          <lpage>224</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Jose</given-names>
            <surname>Cascalho</surname>
          </string-name>
          , Luis Antunes, Milton Correa, and
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>Characterising agents' behaviours:selecting goal strategies based on attributes</article-title>
          .
          <source>In Matthias Klusch</source>
          , Micahel Rovatsos, and Terry Payne, editors,
          <source>Cooperative Information Agents X</source>
          , volume
          <volume>4149</volume>
          , pages
          <fpage>402</fpage>
          -
          <lpage>415</lpage>
          . Springer,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>José</given-names>
            <surname>Cascalho</surname>
          </string-name>
          , Leonel Nobrega, Milton Correa, and
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>Exploring the mechanisms behind a bdi-like architecture</article-title>
          .
          <source>In Conceptual Modeling Simulation Conference</source>
          , pages
          <fpage>153</fpage>
          -
          <lpage>158</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Conte</surname>
          </string-name>
          .
          <source>Cognitive and Social Action</source>
          . UCL Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Corrêa</surname>
          </string-name>
          and
          <string-name>
            <given-names>H.</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>From mental states and architectures to agents' programming</article-title>
          . In H. Coelho, editor,
          <source>Proceedings of the Sixth Iberoamerican Conference in Artificial Intelligence</source>
          , volume
          <volume>1484</volume>
          <source>of Lecture Notes in Artificial Intelligence</source>
          , pages
          <fpage>64</fpage>
          -
          <lpage>75</lpage>
          . Springer-Verlag,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Milton</given-names>
            <surname>Corrêa</surname>
          </string-name>
          and
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>Collective mental states in an extended mental states framework</article-title>
          .
          <source>In International Conference on Collective Intentionality IV, Certosa di Pontignano</source>
          , pages
          <fpage>13</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Davis</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Lewis</surname>
          </string-name>
          .
          <article-title>Affect and affordance: Architectures without emotion</article-title>
          . In AAAI, editor,
          <source>AAAI Spring symposium</source>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Haynes</surname>
          </string-name>
          and
          <string-name>
            <given-names>Sandip</given-names>
            <surname>Sen</surname>
          </string-name>
          .
          <article-title>Evolving behavioral strategies in predators and prey</article-title>
          . In Sandip Sen, editor,
          <source>IJCAI-95 Workshop on Adaptation and Learning in Multiagent Systems</source>
          , pages
          <fpage>32</fpage>
          -
          <lpage>37</lpage>
          , Montreal, Quebec, Canada, 20-25
          <year>1995</year>
          . Morgan Kaufmann.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kok</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Vlassis</surname>
          </string-name>
          .
          <article-title>The pursuit domain package</article-title>
          .
          <source>Technical report</source>
          , Informatics Institute, University of Amsterdam, The Netherlands,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Luís</given-names>
            <surname>Morgado</surname>
          </string-name>
          and
          <string-name>
            <given-names>Graça</given-names>
            <surname>Gaspar</surname>
          </string-name>
          .
          <article-title>Emotion based adaptive reasoning for resource bounded agents</article-title>
          .
          <source>In AAMAS '05: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems</source>
          , pages
          <fpage>921</fpage>
          -
          <lpage>928</lpage>
          , New York, NY, USA,
          <year>2005</year>
          . ACM Press.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Sanguk</given-names>
            <surname>Noh</surname>
          </string-name>
          and
          <string-name>
            <given-names>Piotr J.</given-names>
            <surname>Gmytrasiewicz</surname>
          </string-name>
          .
          <article-title>Flexible multi-agent decision making under time pressure</article-title>
          .
          <source>IEEE Transactions on Systems, Man and Cybernetics, Part A</source>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E.</given-names>
            <surname>Oliveira</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Sarmento</surname>
          </string-name>
          .
          <article-title>Emotional valence-based mechanisms and agent personality</article-title>
          .
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sloman</surname>
          </string-name>
          .
          <article-title>Motives, Mechanisms and Emotions</article-title>
          . In
          <string-name>
            <given-names>M.A.</given-names>
            <surname>Boden</surname>
          </string-name>
          , editor,
          <source>The Philosophy of Artificial Intelligence</source>
          , pages
          <fpage>231</fpage>
          -
          <lpage>247</lpage>
          . Oxford University Press,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sloman</surname>
          </string-name>
          .
          <article-title>Varieties of affect and the cogaff architecture schema</article-title>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Angelo</given-names>
            <surname>Susi</surname>
          </string-name>
          , Anna Perini, and
          <string-name>
            <given-names>John</given-names>
            <surname>Mylopoulos</surname>
          </string-name>
          .
          <article-title>The tropos metamodel and its use</article-title>
          .
          <source>Informatica</source>
          ,
          <volume>29</volume>
          (
          <issue>4</issue>
          ):
          <fpage>401</fpage>
          -
          <lpage>408</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>