<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">TUNING BEHAVIOURS USING ATTRIBUTES EMBEDDED IN AN AGENTS&apos; ARCHITECTURE</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">José</forename><surname>Cascalho</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Universidade dos Açores</orgName>
								<address>
									<postCode>9701-851</postCode>
									<settlement>Angra do Heroísmo</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Helder</forename><surname>Coelho</surname></persName>
							<email>hcoelho@di.fc.ul.pt</email>
							<affiliation key="aff1">
<orgName type="department">Departamento de Informática, Faculdade de Ciências</orgName>
								<orgName type="institution">Universidade de Lisboa</orgName>
								<address>
									<addrLine>Bloco C6, Piso 3, Campo Grande</addrLine>
									<postCode>1749-016</postCode>
									<settlement>Lisboa</settlement>
									<country key="PT">Portugal</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">TUNING BEHAVIOURS USING ATTRIBUTES EMBEDDED IN AN AGENTS&apos; ARCHITECTURE</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">A4B3F0CE2A7CC99BED8FF096E80991C5</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T14:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper we discuss how to tune agents' behaviours by explicitly modifying a set of elements previously defined and included in the agents' architecture. By tuning, we mean influencing agents' world view, changing their preferences and even modifying their beliefs about which goals are possible. These elements, which we call attributes, such as urgency, insistence and intensity, are able to modify agents' priorities with regard to resource consumption, to modify the evaluation of the implicit costs of action execution and even to change agents' view of their capabilities to execute an action.</p><p>In a preliminary experimental evaluation made in a multi-agent system environment, a modified predator-prey workbench, we show how the attributes are important elements in trying to improve the predators' global efficiency.</p><p>In the final discussion, we argue for the benefits of having this set of attributes, which allows agents to be selected and modified by stakeholders to cope with environmental odds, as a team manager would do.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>In this work we discuss the use of attributes to characterise different agent behaviours. These attributes, intensity, insistence, importance and urgency, are embedded in the agents' architecture and we use them to change the agents' decision process. For example, the attribute insistence measures how much effort an agent is willing to make to achieve a goal. A high value of insistence means that an agent persists in achieving a specific goal during a predefined time.</p><p>These attributes come from previous work <ref type="bibr">[3][5]</ref> and have already been used to modulate the decision process in cognitive agents <ref type="bibr" target="#b14">[15]</ref> <ref type="bibr" target="#b15">[16]</ref>.</p><p>In this paper we explore these attributes of the agents' architecture in a predator-prey environment <ref type="bibr" target="#b10">[11]</ref>. We create predators with different values of those attributes and test the predators' global performance by measuring the average time the predators stay alive in the world.</p><p>We argue that the different behaviours that can be created using different attributes (characterised by the properties of the environment to which the agents belong) can be used to define agent types that could help a stakeholder select the right set of agents from a predefined set, as a way to improve the system's performance.</p><p>Our thesis is that agents, like humans, need to be different (or behave differently) to tackle different situations: for example, an emergency, where an energetic response is important, versus a trading scenario, where there is a need to evaluate carefully the different proposals from different traders.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">The Agent's architecture</head><p>Our agents form a goal-governed system <ref type="bibr" target="#b5">[6]</ref>, that is, the agents have explicitly represented goals which they want to satisfy. We are interested in studying the agents' inner mechanism for selecting behaviours. The agents' mind architecture has a set of goals which are linked by an OR-connector or an AND-connector (AND/OR decomposition <ref type="bibr" target="#b16">[17]</ref>). These connectors allow the agents' designer to create a tree (AND/OR tree) which makes it possible to define 'and-goals' (the children of an AND-connector node) or 'or-goals' (the children of an OR-connector). Both constitute alternative paths to satisfy a goal. The former means that a goal can be satisfied only if all the and-goals are executed successfully (representing the decomposition of a problem into sub-problems, all of which must be solved), while the latter means that, to satisfy the goal, it suffices that only one of the or-goals be satisfied.</p><p>In figure 1 a schematic representation of the architecture shows the different classes and their main connections. Next, we explain succinctly some of the beliefs supporting the agent's decision process:</p><p>• The accomplishment belief defines the conditions in which a goal is satisfied.</p><p>• The know-how belief links a goal to an Action, i.e. a plan to satisfy a goal. In our architecture this corresponds to a sequence of atomic actions.</p><p>• The condition belief defines the (pre-)conditions that support the execution of the goals and the actions of a plan.</p><p>• The cando belief evaluates the agents' internal capabilities. 
The difference between a condition belief and a cando belief is that the former is part of the external conditions for the goal's (or action's) execution, while the latter is an evaluation of the agent's ability (an agent can have the conditions but not the ability to reach a goal) <ref type="foot" target="#foot_0">1</ref>. The attribute intensity is evaluated with respect to the agent's ability to execute or not execute a goal.</p><p>• The preferences belief defines the order in which a sub-goal is chosen. The urgency influences this order.</p><p>• The means-end belief tests the conditions for a goal's execution (see below for a detailed explanation of the role of this belief).</p><p>Let us now describe the main cycle (the reasoning-cycle) of the agent's mind. The agent keeps a pointer to the goal that is currently executing. The agent starts by selecting the goal at the top of the tree, the start node. In the means-end belief, a test is made to evaluate whether a goal is satisfied or not by calling the accomplishment belief. If it is satisfied, he looks for another goal in the AND/OR tree. Otherwise, he checks whether the condition belief is true and verifies whether there is a plan to execute the goal (know-how belief). Finally the agent evaluates the impossibility belief and the cando belief. If all conditions return true, he selects an action to execute, following the plan attached to the goal.</p><p>An agent searches for a new goal in the AND/OR tree when at least one of the conditions enumerated above returns false.</p><p>The leaves, or terminal nodes (having no child nodes), in the goals-tree might have variables which must be instantiated. If this is the case, inside the means-end belief an instantiation policy is used to order the list of possible instantiations. 
This instantiation policy is supported by the preference belief, which orders the options based on predefined criteria<ref type="foot" target="#foot_1">2</ref>.</p><p>After each reasoning-cycle, in which the mental state is updated, an agent will execute an action (possibly a NULL action). In figure <ref type="figure" target="#fig_1">2</ref> we present the goal-tree we used in the experiments. An agent walks through the goal-tree looking for a goal that is not satisfied and is executable. When he has the conditions to achieve the goal, he uses the plan attached to that goal and executes the actions in the plan. An evaluation of success is kept. When a goal succeeds (the accomplishment belief returns true) he stops the searching activity and, in the next cycle, restarts the search at the start node of the tree. If the search for the conditions to select a goal fails, the agent returns a null action.</p></div>
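As an illustration, the reasoning-cycle's walk over the AND/OR tree can be sketched in Python; the class and function names below are ours, not part of the architecture's implementation, and the belief checks are reduced to boolean flags:

```python
# Illustrative sketch of the reasoning-cycle over an AND/OR goal tree.
# Goal and find_executable are hypothetical names, not the paper's code.

class Goal:
    def __init__(self, name, connector=None, children=(), satisfied=False,
                 condition=True, know_how=True, cando=True, impossible=False):
        self.name = name
        self.connector = connector      # "AND", "OR", or None for a leaf
        self.children = list(children)
        self.satisfied = satisfied      # accomplishment belief
        self.condition = condition      # condition belief
        self.know_how = know_how        # a plan is attached to the goal
        self.cando = cando              # internal ability evaluation
        self.impossible = impossible    # impossibility belief

    def executable(self):
        # Every belief must hold before an action of the plan is selected.
        return (not self.satisfied and self.condition and self.know_how
                and self.cando and not self.impossible)

def find_executable(goal):
    """Walk the tree from the start node and return the first unsatisfied,
    executable leaf goal, or None (the agent executes a NULL action)."""
    if goal.satisfied:
        return None
    if not goal.children:
        return goal if goal.executable() else None
    for child in goal.children:
        found = find_executable(child)
        if found is not None:
            return found    # OR: one child suffices; AND: next pending child
    return None
```

A satisfied child is skipped and the search continues with its siblings; when no leaf passes all the belief checks, the cycle yields a null action, matching the description above.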
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Role of attributes</head><p>Intensity measures the willingness of the agents to spend their energy when satisfying a goal. Energy is a resource, so we can say that intensity measures how much of a resource an agent is willing to spend to satisfy a goal. In the meta-model we link the intensity attribute to the 'CanDo' belief. This means that for each goal's plan, and possibly for each action inside that plan, an evaluation is made of the possibility of executing that plan or action with respect to the agent's willingness to spend those resources. Note that 'intensity' does not judge the worth of an action. Agents are given a constant value of intensity and this will characterise their behaviour.</p><p>The insistence attribute is added to the agent's means-end analysis in the architecture (see figure <ref type="figure" target="#fig_0">1</ref>). For each goal, an agent defines a plan as a sequence of steps to accomplish the goal. Insistence measures the agent's persistence toward a goal, i.e. how much time an agent will try to satisfy a goal. If after a number of attempts a goal is not yet accomplished, the agent reconsiders the strategy to accomplish that goal, marking the previous strategy as a failure and searching for another strategy. Agents receive a constant value of insistence and this will characterise their behaviour.</p><p>The urgency acts as a regulator of the agents' behaviour. A set of context variables, usually related to distress situations, is used to calculate the urgency value. With the increase of urgency, the agents will change (amplify or contract) the effects of intensity or insistence (e.g. with urgency equal to 0.9, an agent with a value of intensity equal to 0.5 could behave as an agent with a value of intensity equal to 0.8). Instead of being defined as a constant value, urgency depends dynamically on environment parameters. 
These parameters can also change dynamically, which means that an agent is able to adapt in real time to changes in the environment.</p></div>
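The amplification of intensity by urgency can be sketched as follows; the text gives the example (urgency 0.9 lifts an intensity of 0.5 to roughly 0.8) but no formula, so the linear gain below is an illustrative assumption, not the authors' model:

```python
# Hypothetical modulation of intensity by urgency.  The gain is chosen so
# that modulate(0.5, 0.9) is approximately 0.8, matching the example in
# the text; any monotone amplification would serve the same purpose.

GAIN = 2.0 / 3.0

def modulate(intensity, urgency):
    """Amplify intensity as urgency grows, clamped to the [0, 1] range."""
    return min(1.0, intensity + GAIN * urgency * intensity)
```

Because the context variables feeding urgency are re-evaluated each cycle, the effective intensity changes at run time even though the base intensity is a constant of the agent.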
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experimental setup</head><p>We adapted a pursuit simulator <ref type="bibr" target="#b10">[11]</ref> by adding the energy resource to the predators. This new parameter changes the way the simulator ends a game, i.e., a game ends after the death of all predators<ref type="foot" target="#foot_2">3</ref> before they could catch all the preys launched at the game's beginning. When a new game starts, the energy is restored to its initial value. In the original simulator, an unspecified number of predators tries to catch one or more prey agents. A game in the simulator is defined by episodes and cycles. In each cycle the simulator receives information through sockets about the moves of predator and prey agents, and the messages exchanged among predators. An episode ends when all preys are caught, and a new episode starts with all predator and prey agents randomly repositioned in the field. Data about how long agents take to catch all preys are kept as a statistical measurement of predator efficiency.</p><p>The pursuit domain was introduced in <ref type="bibr" target="#b1">[2]</ref>. It has been a widely used testbed for multi-agent systems. Several variations of the original description have been studied over the years (see <ref type="bibr" target="#b9">[10]</ref> for more details). The domain we used in our experiments consists of a discrete 20x20 grid world, in which the predators catch a prey when predator and prey share the same cell (one of the predefined capture criteria in the simulator <ref type="bibr" target="#b10">[11]</ref>). Predators and preys can move north, south, east or west, or stay in their position. They consume more energy when they move than when they maintain their position. They can send messages to each other<ref type="foot" target="#foot_3">4</ref>. 
Finally, their total energy is incremented by a certain predefined amount after they catch a prey.</p><p>In the experimental environment the density of prey agents is equal to 0.03 (12 preys per 400 cells) and the number of predators corresponds to 1/3 (4 predators for 12 preys) or 2/3 (8 predators for 12 preys) of the total number of preys. To measure the global performance of all the agents, for each experiment we record the number of episodes they survive. We run the simulator about 80 times per experiment, i.e. for each different set of agents we run the workbench about 80 times.</p></div>
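The energy rules of the adapted simulator can be sketched as simple bookkeeping; the numeric values below are illustrative assumptions (the paper does not state the costs, the capture reward or the survival threshold):

```python
# Hypothetical energy bookkeeping for a predator, following the rules of
# the setup: moving costs more than staying, catching a prey restores a
# predefined amount, and a predator dies below a survival threshold.
# All constants are illustrative, not taken from the paper.

MOVE_COST, STAY_COST = 2.0, 0.5
CATCH_REWARD = 30.0
SURVIVAL_THRESHOLD = 0.0

def step_energy(energy, moved, caught_prey=False):
    """Return the predator's energy after one simulator cycle."""
    energy -= MOVE_COST if moved else STAY_COST
    if caught_prey:
        energy += CATCH_REWARD
    return energy

def alive(energy):
    return energy >= SURVIVAL_THRESHOLD
```

Sending messages is free in this setup (footnote 4), so no cost term appears for communication.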
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Experiments description</head><p>In the next sections we explain the different experiments we made to test the different attributes, intensity, insistence and urgency, in the pursuit domain (see figure <ref type="figure" target="#fig_2">3</ref>). We divided our experiments into two parts. In the first part we evaluated the attributes intensity and urgency associated with the prey search. Those experiments are related to the selection of the goals G4, G5 or G6 of the goals-tree presented in figure <ref type="figure" target="#fig_1">2</ref>. Four predators chased twelve random preys (the preys select randomly one of the possible movements: north, south, east, west or don't move). Selecting behaviours controlled by the attributes urgency and intensity contributed to a 50% improvement in the global system performance<ref type="foot" target="#foot_4">5</ref>. In the second part, we tested the attributes insistence and intensity in a new setup where the preys, instead of moving randomly, run away from the predators, but with different speeds<ref type="foot" target="#foot_5">6</ref>. In this part of the experiment we used the right part of the goals-tree: the goal pairs G9, G10 and G11, G12.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Searching preys with attributes intensity and urgency</head><p>Intensity measures the 'potential' an agent applies to satisfy a goal <ref type="bibr" target="#b11">[12]</ref>. We link this attribute to resource consumption. Thus a high-intensity agent (IA) is a predator that selects a strategy corresponding to a high energy-consuming behaviour. The agent selects the goals differently based on the energy he has and his value of the intensity attribute. This selection is made by the 'Preference belief' following these premises:</p><p>• An agent is given a certain level of intensity i.</p><p>• To the goals G4, G5 and G6 correspond plans with different levels of energy consumption per time step (ets), where G4 has the highest consumption and G6 the smallest.</p><p>• The agent divides ets G4 by his total energy, multiplies by i, and if the relative consumption falls in the following interval he selects G4: 0% ≤ i * ets G4 &lt; 5% → G4</p><p>• He does the same for the other two goals, with the percentages and goals following the order of the expressions below:</p><formula xml:id="formula_0">5% ≤ i * ets G5 &lt; 10% → G5 10% ≤ i * ets G6 &lt; 50% → G6</formula><p>• Finally, he doesn't move if the energy consumption is above 50% for the least consuming goal:</p><formula xml:id="formula_1">i * ets G6 ≥ 50% → None</formula><p>Using two parameters from the context, the optimum value for the number of preys to be caught and the average capture time of all preys in the episodes, the agent calculates the value of urgency to satisfy the goal 'searching a prey'. The urgency value is calculated using the expression urgency = weight * (bias + 1 − Nr.PreyCaught/PreyTarget) + (1 − weight) * (bias + Nr.Cycles/CyclesAverage − 1), where bias gives a 'value of reference' for the urgency when the number of preys caught is equal to the value PreyTarget and the number of cycles in the episode is equal to the value CyclesAverage.</p><p>Our goal was to allow the agent to select the goal that reveals the greatest efficacy. 
While applying the attribute urgency to the preference belief, the agent does the following:</p><p>• If the urgency is low, the agent tends to save resources, and so he will use the strategies with the smaller energy/steps rate.</p><p>• If the urgency is high, the agent must use all the resources he has to quickly solve the problem at hand, so he is willing to select the best strategy even if he has to spend all the resources he has left.</p><p>The scenario with the attributes urgency and intensity increased the agents' performance by 50% when compared with the scenario with just the attribute intensity <ref type="bibr" target="#b3">[4]</ref>. Urgency rationalises energy consumption by letting agents select behaviours with higher consumption rates only in urgent situations.</p></div>
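The selection bands and the urgency expression above can be sketched as follows; function and parameter names are ours, not the simulator's, and the literal reading of the bands leaves gaps (an agent whose costs fall in no band stays put):

```python
# Sketch of the goal-selection bands for the search goals G4, G5, G6 and
# of the urgency expression from this section.  Names are illustrative.

def select_search_goal(i, energy, ets):
    """ets maps each search goal to its energy consumption per time step,
    with ets['G4'] > ets['G5'] > ets['G6'].  A goal is selected when its
    relative cost i * ets[g] / energy falls in its band; otherwise the
    agent doesn't move (returns None)."""
    cost = {g: i * ets[g] / energy for g in ('G4', 'G5', 'G6')}
    if cost['G4'] < 0.05:
        return 'G4'
    if 0.05 <= cost['G5'] < 0.10:
        return 'G5'
    if 0.10 <= cost['G6'] < 0.50:
        return 'G6'
    return None  # energy too scarce even for the cheapest plan

def urgency(prey_caught, prey_target, cycles, cycles_avg,
            weight=0.5, bias=0.5):
    # Equals `bias` when prey_caught == prey_target and
    # cycles == cycles_avg, i.e. the 'value of reference'.
    return (weight * (bias + 1 - prey_caught / prey_target)
            + (1 - weight) * (bias + cycles / cycles_avg - 1))
```

Urgency then modulates which band the agent is willing to enter: low urgency keeps him in the cheap G6 band, high urgency admits the expensive G4 strategy.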
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Chasing preys with intensity and insistence attributes</head><p>In these experiments we created new preys which run away from predators. We added three types of preys, each with a different speed (the average number of moves per time step<ref type="foot" target="#foot_6">7</ref>).</p><p>The agents not only choose the preys to catch but also select the direction they use when approaching preys. The preferences belief determines which prey is selected, in this case the nearest prey<ref type="foot" target="#foot_7">8</ref>, i.e. an agent instantiates the goals 'approach prey' and 'chase prey' by giving values to these two parameters. For example, an agent can select the nearest prey, with number '1', and the 'north chase direction'.</p><p>Furthermore, agents have two sources of information: vision and other agents' messages. A predator, when chasing a prey (goal G10), broadcasts a message with information about his position relative to the prey he is chasing. When the other agents receive this message, they check whether they can see the predator who sent the message and, if so, they calculate the position of the prey he is chasing<ref type="foot" target="#foot_8">9</ref>. This increases the number of preys each agent can see.</p><p>The insistence measures the persistence an agent has toward a goal. After selecting a prey to chase, a predator chases that prey for a maximum number of cycles (parameter chase prey max cycles). If he doesn't catch the prey within this time interval, he gives up catching that prey and tries to catch another one. Besides, when an agent gives up, he inserts the prey's number in a list of failures (the preys' dark list). Another parameter (dark list time interval prey) defines how long the information about a failure is kept in that list. 
While this information is in the list, the agent won't chase that prey again.</p><p>The agent uses the following expression to calculate the maximum time he chases the same prey, chase prey max cycles = distance to prey * e^(insistence * k), and the following expression to determine how long the information about the failed attempt to catch a specific prey is kept,</p><formula xml:id="formula_2">dark list time interval prey = max interval * insistence^k * e^((1−insistence) * k)</formula><p>The parameter max interval is a constant with value 40. The parameter k is equal to 3 or 4 (k3 or k4). These values were selected to allow the values of chase prey max cycles to be between 1 and 208 cycles (k = 3) or between 11 and 512 cycles (k = 4), and the values of dark list time interval to be between 0 and 40 cycles. Finally, in this experiment, the intensity is used to decide whether agents chasing a prey broadcast a message with information about its relative location (selecting between the branches G7 or G8 of the goal-tree). With intensity equal to 1.0, it is certain that they broadcast the relative position of all the preys they chase. The probability of doing so decreases with decreasing values of intensity.</p></div>
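The two expressions above were garbled in extraction; the forms below are our reconstruction, with the exponent placement inferred from the reported ranges (the dark-list interval stays in [0, 40] and reaches 40 at insistence 1; the chase limit reaches roughly e^2.4 ≈ 11 for insistence 0.6 with k = 4):

```python
# Reconstructed insistence expressions; exponent placement is an inference
# from the reported value ranges, not a formula confirmed by the paper.
import math

MAX_INTERVAL = 40  # cycles; the constant `max interval` from the paper

def chase_prey_max_cycles(distance_to_prey, insistence, k):
    """Maximum number of cycles a predator keeps chasing the same prey."""
    return distance_to_prey * math.exp(insistence * k)

def dark_list_time_interval(insistence, k):
    """Cycles a failed prey stays on the dark list; bounded by MAX_INTERVAL
    and increasing in insistence, reaching the bound at insistence = 1."""
    return MAX_INTERVAL * insistence ** k * math.exp((1 - insistence) * k)
```

With these forms, a more insistent predator both chases longer and blacklists a failed prey for longer, which matches the qualitative behaviour described in the section.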
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">The effect of insistence</head><p>Several experiments were made with different values of insistence. Table <ref type="table" target="#tab_1">1</ref> summarises the results. In the table we verify that the best result is obtained for insistence equal to 1.0 and k = 3, for all 8 predators (see the last row of the table). We expected that insistence could improve the performance, because this attribute gives agents the capability to chase a prey for a large amount of time. On the other hand, we suspected that if an agent had too much insistence, he could harm his global performance, because he would eventually lose almost all of his energy trying to capture a quick prey. So we changed the parameter k from k3 to k4 (last column in the table), extending the time the agent would chase a prey without quitting. We found that, with insistence equal to 1.0, the mean became 8.0. Comparing the means, we also observe that insistence equal to 0.6 now gives the best result and surprisingly surpasses the previous value for k3 and insistence equal to 1.0, thus supporting our suspicion. When changing k3 to k4, with insistence 1.0 the predators never quit chasing preys after fixing on them. They will only stop chasing a prey if the prey disappears (for some reason they lose its trace) or if the prey is caught. Figures 4, 5 and 6 show the number of times a predator quit chasing a prey because he surpassed the threshold defined by the insistence. We notice that for insistence 1.0 with k4, predators seldom quit chasing preys. A quite different scenario is observed for insistence 1.0 with k3 and for insistence 0.6 with k4 (figures 4 and 5 respectively).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4">The effect of intensity</head><p>In table <ref type="table" target="#tab_2">2</ref> the results for different values of intensity are presented. We notice that the results improve as the intensity gets higher. We conclude that the agents' messages about the positions of the preys being chased are important in this specific experimental setup.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>In the predator-prey scenario, the use of attributes becomes extremely useful in searching for good solutions. Intensity, insistence and urgency were tested, and their values significantly influenced the global performance of the system. This opens avenues for deeper research into the possible types of agents that could be defined with the help of these attributes, and also for extending these results to different applications where the same attributes could be implemented. Indeed, for a stakeholder, several properties must exist and be maintained in a system, especially in the presence of distress situations. Along with the usually desired performance, the ability to avoid catastrophic situations and the need to satisfy real-time responses are also considered important system requirements <ref type="bibr" target="#b0">[1]</ref>. We argue that the set of attributes presented in this paper can be used as a tool for agents' adaptation to the environment, in order to satisfy the desired goals of a stakeholder. There are two ways in which a stakeholder can benefit from having these attributes in the agents' definition. First, they can be aggregated to create agent types. For example, suppose we characterise a social agent as an agent who always communicates his goals to others when having a high intensity (he has defined the Intensity* attribute presented in table <ref type="table">6</ref>). If we add to this social agent the Insistence attribute, we have a social agent who selects his goals carefully. Second, stakeholders can tune one or more of these attributes to bring the agents' behaviour closer to their needs.</p><p>As a long-term goal we propose to create multi-agent systems in which sets of different agent types are tested under different simulated scenarios. The best performing sets could be selected to be used later in real-time environments. 
The selection among different sets could then be made automatically, changing the team of agents in different scenarios.</p><p>The role of attributes in a cognitive agents' architecture has been discussed by Sloman <ref type="bibr">[15][16]</ref>. Following Correa <ref type="bibr" target="#b6">[7]</ref>[8], in our previous work <ref type="bibr" target="#b4">[5]</ref>[3], the attributes are associated with the definition of a mental state, and it is explained how these attributes can increase the plasticity of the agents' reasoning process.</p><p>The two key roles of the attributes used in the agents' architecture are the following:</p><p>• To allow control over how the agents use their resources.</p><p>• To provide different ways of satisfying goals in different contexts.</p><p>For example, insistence is related to how many times an agent persists with a specific strategy to satisfy a goal, while intensity discriminates the cases in which agents can execute a specific action to satisfy a goal, by linking its execution capability to the energy resources at their disposal in the execution context.</p><p>Table <ref type="table">3</ref> gives different definitions of agents' attributes using contextual parameters:</p><p>• Insistence: an agent with a high insistence persists in achieving a goal. If he fails, he refuses to satisfy that goal again for a period δf(x), where f(x) depends on the time the agent spent trying to achieve that goal.</p><p>• Intensity: an agent communicates preys' positions only if his intensity is above a threshold. The rationale is that an agent communicates only when he knows the probability of achieving the goal alone is small.</p><p>• Intensity*: the probability of an agent communicating preys' positions increases with the value of his intensity attribute. The rationale is that an agent communicates his goals to the other agents to increase his probability of success.</p><p>Emotions are often mentioned along with the issues of agents' adaptation to changing environments and the control of their resources <ref type="bibr" target="#b11">[12]</ref>. Those and other related topics are also relevant in our research. For example, in <ref type="bibr" target="#b13">[14]</ref> emotion is regarded as a mechanism capable of creating action tendencies, which is related to our notion of agent types. In <ref type="bibr" target="#b8">[9]</ref> the use of affective control states is suggested, in which the same type of attributes we used are applied to control goal selection embedded in goal motivations. Finally, the flexible selection of strategies under time pressure is discussed in <ref type="bibr" target="#b12">[13]</ref>, which is related to what we did in our work, i.e., the on-line selection of the best strategy with respect to resource consumption.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The agent's architecture.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The AND/OR tree of goals.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The experiments.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Urgency</head><label>1</label><figDesc>urgency = weight * (bias + 1 − Nr.PreyCaught/PreyTarget) + (1 − weight) * (bias + Nr.Cycles/CyclesAverage − 1)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :Figure 5 :</head><label>45</label><figDesc>Figure 4: Nr. of times predators quit chasing preys without catching them (insistence=1.0, k3).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Nr. of times predators quit chasing preys without catching them (insistence=1.0, k4).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>Average number of episodes per game, for a total of 70 games with k=k3 and k=k4. All five experiments had 8 predators and 12 preys.</figDesc><table><row><cell>(Nr.Pred)xInsistence</cell><cell>Mean (k=3)</cell><cell>Mean (k=4)</cell></row><row><cell>(8)x0.3</cell><cell>14.3</cell><cell>-</cell></row><row><cell>(4)x0.3 (4)x0.6</cell><cell>19.2</cell><cell>-</cell></row><row><cell>(8)x0.6</cell><cell>18.1</cell><cell>23.2</cell></row><row><cell>(4)x0.6 (4)x1.0</cell><cell>16.9</cell><cell>-</cell></row><row><cell>(8)x1.0</cell><cell>21.2</cell><cell>8.0</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>Average number of episodes per game for 70 games. All experiments had 8 predators, with insistence 1.0 and intensity equal to 0.5 and 0.8.</figDesc><table><row><cell>(Nr.Pred)xIntensity</cell><cell>Mean</cell><cell>Standard deviation</cell></row><row><cell>(8)x0.5</cell><cell>8.27</cell><cell>6.55</cell></row><row><cell>(8)x0.8</cell><cell>13.31</cell><cell>14.78</cell></row><row><cell>(8)x1.0</cell><cell>21.2</cell><cell>19.3</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Both beliefs evaluate whether the goal can be satisfied when the action associated with the know-how belief is executed.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">The instantiation policy is related to the attribute importance which is not discussed in this paper.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">They die after their energy decreases below a survival threshold.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">In this setup, an agent doesn't consume energy by sending messages</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">see<ref type="bibr" target="#b3">[4]</ref> for a fully description of this experiment</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">We define three type of preys, quick, normal and slow preys, to which we ascribed different number of moves per cycle rate</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">All prey move more slowly than predators, but among them there are 4 quick prey, 4 slow prey and 4 with an intermediate speed.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_7">This decision is part of the agents' policy, which can be changed, but this is not discussed in this paper.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_8">There is no absolute position reference in the world, so predators can only determine a prey's position relative to their own.</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A unified model of dependability: Capturing dependability in context</title>
		<author>
			<persName><forename type="first">Victor</forename><surname>Basili</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Donzelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sima</forename><surname>Asgari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Software</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="19" to="25" />
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">On optimal cooperation of knowledge sources</title>
		<author>
			<persName><forename type="first">M</forename><surname>Benda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Jagannathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dodhiawala</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1985">1985</date>
		</imprint>
		<respStmt>
			<orgName>Boeing Artificial Intelligence Center, Boeing Computer Services</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Toward a motivated BDI using attributes embedded in mental states</title>
		<author>
			<persName><forename type="first">Jose</forename><surname>Cascalho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luis</forename><surname>Antunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Helder</forename><surname>Coelho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">XI Conferencia de la Asociación Española para la Inteligencia Artificial</title>
				<meeting><address><addrLine>CAEPIA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="215" to="224" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Characterising agents&apos; behaviours: selecting goal strategies based on attributes</title>
		<author>
			<persName><forename type="first">Jose</forename><surname>Cascalho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luis</forename><surname>Antunes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Milton</forename><surname>Correa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Helder</forename><surname>Coelho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Cooperative Information Agents X</title>
				<editor>
			<persName><forename type="first">Matthias</forename><surname>Klusch</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Michael</forename><surname>Rovatsos</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Terry</forename><surname>Payne</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2006">2006</date>
			<biblScope unit="volume">4149</biblScope>
			<biblScope unit="page" from="402" to="415" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Exploring the mechanisms behind a BDI-like architecture</title>
		<author>
			<persName><forename type="first">Jose</forename><surname>Cascalho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leonel</forename><surname>Nobrega</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Milton</forename><surname>Correa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Helder</forename><surname>Coelho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Conceptual Modeling Simulation Conference</title>
				<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="153" to="158" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Cognitive and Social Action</title>
		<author>
			<persName><forename type="first">C</forename><surname>Castelfranchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Conte</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1995">1995</date>
			<publisher>UCL Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">From mental states and architectures to agents&apos; programming</title>
		<author>
			<persName><forename type="first">M</forename><surname>Corrêa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Coelho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Iberoamerican Conference in Artificial Intelligence</title>
		<title level="s">Lecture Notes in Artificial Intelligence</title>
		<editor>
			<persName><forename type="first">H</forename><surname>Coelho</surname></persName>
		</editor>
		<meeting>the Sixth Iberoamerican Conference in Artificial Intelligence</meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="volume">1484</biblScope>
			<biblScope unit="page" from="64" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Collective mental states in an extended mental states framework</title>
		<author>
			<persName><forename type="first">Milton</forename><surname>Corrêa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Helder</forename><surname>Coelho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Collective Intentionality IV, Certosa di Pontignano</title>
				<imprint>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="13" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Affect and affordance: Architectures without emotion</title>
		<author>
			<persName><forename type="first">D</forename><surname>Davis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">C</forename><surname>Lewis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI, editor, AAAI Spring symposium</title>
				<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Evolving behavioral strategies in predators and prey</title>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Haynes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sandip</forename><surname>Sen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IJCAI-95 Workshop on Adaptation and Learning in Multiagent Systems</title>
				<editor>
			<persName><forename type="first">Sandip</forename><surname>Sen</surname></persName>
		</editor>
		<meeting><address><addrLine>Montreal, Quebec, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann</publisher>
			<biblScope unit="page" from="20" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">The pursuit domain package</title>
		<author>
			<persName><forename type="first">J</forename><surname>Kok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Vlassis</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<pubPlace>The Netherlands</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Informatics Institute, University of Amsterdam</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Emotion based adaptive reasoning for resource bounded agents</title>
		<author>
			<persName><forename type="first">Luís</forename><surname>Morgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Graça</forename><surname>Gaspar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAMAS &apos;05: Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems</title>
				<meeting><address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="921" to="928" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Flexible multi-agent decision making under time pressure</title>
		<author>
			<persName><forename type="first">Sanguk</forename><surname>Noh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Piotr</forename><forename type="middle">J</forename><surname>Gmytrasiewicz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Systems, Man and Cybernetics, Part A</title>
				<imprint>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Emotional valence-based mechanisms and agent personality</title>
		<author>
			<persName><forename type="first">E</forename><surname>Oliveira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sarmento</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Motives, Mechanisms and Emotions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sloman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Philosophy of Artificial Intelligence</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Boden</surname></persName>
		</editor>
		<imprint>
			<publisher>Oxford University Press</publisher>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="231" to="247" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Varieties of affect and the cogaff architecture schema</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sloman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The tropos metamodel and its use</title>
		<author>
			<persName><forename type="first">Angelo</forename><surname>Susi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anna</forename><surname>Perini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Mylopoulos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Informatica</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="401" to="408" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
