<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Minority Game: A Logic-Based Approach in TuCSoN</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Enrico Oliva</string-name>
          <email>enrico.oliva@unibo.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mirko Viroli</string-name>
          <email>mirko.viroli@unibo.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Omicini</string-name>
          <email>andrea.omicini@unibo.it</email>
        </contrib>
      </contrib-group>
      <fpage>181</fpage>
      <lpage>186</lpage>
      <abstract>
        <p>Minority Game is receiving increasing interest because it models emergent properties of complex systems of rational entities, such as the evolution of financial markets. As such, Minority Game provides a simple yet stimulating scenario for system simulation. In this paper, we present a logic-based approach to the Minority Game whose goal is to overcome the well-known limits of the equation model in the verification of system behaviour. We realise the social system simulation using a novel MAS metamodel based on agents and artifacts, where agent rationality is obtained through a BDI architecture. To this end, we adopt the TuCSoN infrastructure for agent coordination, and its logic-based tuple centre abstractions as artifact representatives. By implementing Minority Game over TuCSoN, we show some of the benefits of the artifact model in terms of flexibility and controllability of the simulation. A number of parameters can affect the behaviour of a Minority Game simulation: such parameters are explicitly represented in the coordination artifact, so that they can be tuned during the simulation. In particular, we show experiments where memory size and number of wrong moves are adopted as the tuning parameters.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>I. INTRODUCTION</title>
      <p>
        Minority Game (MG) is a mathematical model that takes
inspiration from the “El Farol Bar” problem introduced by
Brian Arthur (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ). It is based on a simple scenario where at each
step a set of agents perform a boolean vote which conceptually
splits them in two classes: the agents in the smaller class win.
In this game, a rational agent keeps track of previous votes
and victories, and has the goal of winning throughout the steps
of the game—for which a rational strategy has to be figured
out. Several studies have shown that, although very simple, this
model takes into account crucial aspects of some interesting
complex systems coupling rationality with emergence: e.g.
bounded rationality, heterogeneity, competition for limited
resources, and so on. For instance, MG is a good model to study
market fluctuation, as an emergent property resulting from
interactions propagating from micro scale (agent interaction)
to macro scale (collective behaviour).
      </p>
      <p>
        As shown by (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ), a multiagent system (MAS) can be used
to realise a MG simulation—there, BDI agents provide for
rationality and planning. An agent-based simulation is
particularly useful when the simulated systems include autonomous
entities that are diverse, thus making it difficult to exploit the
traditional framework of mathematical equations.
      </p>
      <p>
        The Minority Game is a social simulation that aims at
reproducing a simplified human social scenario. A (human)
society is composed of different kinds of people with different
behaviours, and its composition affects the progress of the
game. In principle, a logic-based approach built on BDI agents
makes it easier to explicitly model a variety of diverse social
behaviours. Also, in this scenario, argumentation theory (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) is
useful to model the information exchange and sharing between
humans/agents so as to improve the agent reasoning abilities,
as well as to provide a more realistic simulation of a society.
      </p>
      <p>
        In this paper we proceed along this direction, and adopt a
novel MAS meta-model based on the notion of artifact (
        <xref ref-type="bibr" rid="ref4">4</xref>
        ).
The notion of artifact is inspired by Activity Theory (
        <xref ref-type="bibr" rid="ref5">5</xref>
        ): it
represents those abstractions living in the MAS environment
that provide a function, which agents can exploit to achieve
individual and social goals. The engineering principles promoted
by this meta-model make it possible to flexibly balance the
computational burden of the whole system between autonomy
of the agents and the designed behaviour of artifacts.
      </p>
      <p>
        In order to implement MG simulations we adopt the
TuCSoN infrastructure for agent coordination (
        <xref ref-type="bibr" rid="ref6">6</xref>
        ), which
introduces tuple centres as artifact representatives. A tuple centre
is a programmable coordination medium living in the MAS
environment, used by agents interacting by exchanging tuples
(logic tuples in the case of TuCSoN logic tuple centres). As
we are not concerned much with the mere issues of agent
intelligence, we rely here on a weak form of rationality,
through logic-based agents adopting pre-compiled plans called
operating instructions (
        <xref ref-type="bibr" rid="ref7">7</xref>
        ).
      </p>
      <p>By implementing MG over TuCSoN, we can experiment
with flexibility and controllability of the artifact model, and
see if and how they apply to the simulation – in particular,
artifacts allow for a greater level of controllability with respect
to agents. To this end, in this paper we show how the model
allows some coordination parameters to be changed during the
run of a simulation with no need to stop the agents: this can
be useful e.g. to change the point of equilibrium, controlling
the collective behaviour resulting from interactions propagating
from the entities at the micro level.</p>
      <p>The remainder of this paper is organised as follows. First,
we introduce the general simulation framework based on
agents and artifacts. Then, we provide the reader with some
relevant details of the Minority Game. Some quantitative
results of MG simulations focussing on system dynamics and
run-time changes are presented, just before final remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>II. THE TuCSoN FRAMEWORK FOR SIMULATION</title>
      <p>
        The architecture proposed for MAS simulation is based on
TuCSoN (
        <xref ref-type="bibr" rid="ref6">6</xref>
        ), an infrastructure for the coordination of MASs. TuCSoN
provides agents with an environment made of logic tuple centres,
which are logic-based programmable tuple spaces. The language
used to program the coordination behaviour of tuple centres is
ReSpecT, which specifies how a tuple centre has to react to an
observable event (e.g. when a new tuple is inserted) and how it
has to accordingly change the tuple-set state (
        <xref ref-type="bibr" rid="ref8">8</xref>
        ). Tuple centres are a possible incarnation of the
coordination artifact notion (
        <xref ref-type="bibr" rid="ref9">9</xref>
        ), representing a device that persists independently of the
agent life-cycle and provides services that let agents
participate in social activities.
      </p>
      <p>In our simulation framework we adopt logic-based agents,
namely, agents built in a logic programming style, keeping a
knowledge base (KB) of facts and acting according to rules—rules
and facts thus forming a logic theory. The implementation is
based on tuProlog technology for Java-Prolog integration, and
relies on its inference capabilities for agent rationality.
Agents roughly follow the BDI architecture (as shown in
Figure 2), as the KB models agent beliefs while rules model
agent intentions.</p>
      <p>To coordinate agents we take inspiration from natural
systems like ant colonies, where coordination is achieved
through the mediation of the environment: our objective is to
have a possibly large and dynamic set of agents that coordinate
with each other through the environment while pursuing their
goals.</p>
      <p>Externally, we can observe overall system parameters by
inspecting the environment, namely, the tuple centres agents
interact with. In this way we can try different system
behaviours by changing only the coordination behaviour of the
environment. Furthermore, during the simulation we can change
some coordination parameters (expressed as tuples in a tuple
centre), programming and then observing the transition of the
whole system either to a new point of equilibrium or to a
divergence.</p>
      <p>Three kinds of agents are used in our simulation: player
agents, monitor agents and tuning agents (as depicted in
Figure 1); all the agents share the same coordination artifact.
The agent types differ in their role and behaviour: player
agents play MG, the monitor agent is an observer of interactions
which visualises the progress of the system, and the tuning
agent can change some rules or parameters of coordination,
driving the simulation to new states. Note that the main
advantage of dynamically tuning parameters, instead of running
different simulations, lies in the possibility of tackling
emergent aspects which would not necessarily appear in new
runs.</p>
      <p>The main control loop of a player agent is a sequence of
actions: observing the world (perception), updating its KB
(effects), scheduling the next intention (precondition), and
elaborating and executing a plan (action). This structure is
depicted in Figure 2. Moreover, in order to connect agent mental
states with interactions, we use the concept of action
preconditions and perception effects as usual.</p>
      <p>Fig. 1. TuCSoN Simulation Framework for MG</p>
      <p>Fig. 2. Agent Architecture</p>
      <p>III. MINORITY GAME</p>
      <p>
        MG was introduced and first studied by (
        <xref ref-type="bibr" rid="ref10">10</xref>
        ) as a means to evaluate a simple model where agents compete
through adaptation for finite resources. MG is a mathematical
representation of the ‘El Farol Bar’ problem introduced by (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), providing an example of inductive reasoning in scenarios
of bounded rationality. The game consists of an odd number N of
agents: at each discrete time step t of the game, an agent i
takes an action a<sub>i</sub>(t), either 1 or −1. Agents taking
the minority action win, whereas the majority loses. After a
round, the total action result is calculated as:
      </p>
      <p>A(t) = ∑<sub>i=1</sub><sup>N</sup> a<sub>i</sub>(t)</p>
      <p>In order to take decisions, agents adopt strategies. A
strategy is a choosing device that takes as input the last m
winning results, and provides the action (1 or −1) to perform in
the next time step. The parameter m is the size of the memory of
past results (in bits), and 2<sup>m</sup> is therefore the
number of possible past histories, which defines the number of
entries of a strategy.</p>
      <p>The typical strategy implementation is as follows. Each
agent carries a sequence of 2<sup>m</sup> actions, called a
strategy: e.g. for m = 3, a strategy has 2<sup>3</sup> entries,
such as [+1, +1, −1, −1, +1, −1, +1, +1]. The information on the
past m wins is stored by recording the success of the − group if
A(t) &gt; 0, or of the + group if A(t) &lt; 0. Such a past
history is mapped onto the natural number that results by
reading − as 0 and + as 1. Such a number is used as the
position, in the sequence, of the next action to take: for
instance, if [−, +, −] is the past winning group, we read it as
010 (that is, 2), and accordingly pick the decision in position
2 inside [+1, +1, −1, −1, +1, −1, +1, +1], that is −1.</p>
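      <p>To make the strategy lookup concrete, the mapping from past winners to a strategy position can be sketched in a few lines. This is an illustrative Python fragment, not the paper's tuProlog code; the function name is ours.</p>
      <p>
```python
# Decode a past-history window into a strategy position, reading the
# winning group '-' as bit 0 and '+' as bit 1 (most recent last).
def history_to_index(history):
    index = 0
    for winner in history:          # e.g. ['-', '+', '-'] reads as 010
        index = index * 2 + (1 if winner == '+' else 0)
    return index

# A strategy for m = 3 is a table of 2**3 = 8 actions.
strategy = [+1, +1, -1, -1, +1, -1, +1, +1]

# The worked example from the text: [-, +, -] reads as 010 = 2,
# and position 2 of the strategy holds action -1.
idx = history_to_index(['-', '+', '-'])
action = strategy[idx]
```
      </p>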
      <p>Each agent actually carries a number s ≥ 2 of strategies.</p>
      <p>During the game the agent evaluates all its strategies according
to their success, and hence at each step it decides based on
the most successful strategy so far. Figure 3 shows a typical
evolution of the game.</p>
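      <p>The full decision step (choosing the currently best-scoring strategy, casting a vote, and rewarding the strategies that guessed the minority) can be sketched as follows. This is an illustrative Python fragment under our own naming; the paper's players are logic-based agents over TuCSoN.</p>
      <p>
```python
def play_round(agents, history, m):
    """One MG round. Each agent is a dict with 'strategies' (s tables
    of 2**m actions) and 'scores' (one virtual score per strategy)."""
    idx = int(''.join(history[-m:]), 2)   # last m winners as bits
    votes = []
    for a in agents:
        # pick the strategy with the best virtual score so far
        best = max(range(len(a['strategies'])), key=lambda k: a['scores'][k])
        votes.append(a['strategies'][best][idx])
    total = sum(votes)                    # total action A(t)
    minority = -1 if total > 0 else 1     # the minority side wins
    for a in agents:                      # reward strategies that guessed it
        for k, strat in enumerate(a['strategies']):
            if strat[idx] == minority:
                a['scores'][k] += 1
    return total, minority
```
      </p>
      <p>With an odd number of agents the total can never be zero, so the minority side is always well defined.</p>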
      <p>
One of the most important applications of MG is in
market models: (
        <xref ref-type="bibr" rid="ref11">11</xref>
        ) use MG as a coarse-grained model for
financial markets to study their fluctuation phenomena and
statistical properties. Even though the model is coarse-grained
and provides an over-simplified micro-scale description, it
nevertheless captures the most relevant features of system
interaction, and generates collective properties that are quite
similar to those of the real system.
      </p>
      <p>
        Another point of view, presented e.g. by (
        <xref ref-type="bibr" rid="ref12">12</xref>
        ), considers the
MG as a point in the space of Resource Allocation Games (RAGs).
      </p>
      <p>In that work, a generalisation of MG is presented that relaxes
the constraints on the number of resources, studying how the
system behaves within a given range.</p>
      <sec id="sec-2-1">
        <title>A. MG Logic-Based Approach</title>
        <p>MG can be considered a social simulation that aims to
reproduce a simplified human scenario. Each (human) agent,
in this scenario, must make a choice under the global minority
rule. In order to study a system composed of different
kinds of players with different behaviours, we here adopt
a logic-based approach to build the players. In this way,
it is possible to observe particular social behaviours which
would otherwise remain hidden in the approximation of the
mathematical model.</p>
        <p>
          A more recent paper (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) observes that MG players could be
naturally modelled as agents with a full BDI model, and adopts
a new adaptive stochastic MG with dynamically evolving
strategies in the simulation. We can then apply our simulation
framework, with Logic Agents and Coordination Artifacts,
to test the MG from a logic-based point of view, and to
experiment with some dynamic tuning strategy.
        </p>
        <p>
          The next step is to consider players as in an
argumentation scenario (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ), where agents have the ability to exchange
arguments in order to make their own choices or to
persuade others to change theirs.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>B. MG Performance</title>
        <p>In order to track the performance of an MG system,
the most interesting quantity is the variance, defined as
σ<sup>2</sup> = ⟨(A(t) − ⟨A(t)⟩)<sup>2</sup>⟩, where ⟨·⟩ denotes
the time average: it shows the variability of the bets around the
average value ⟨A(t)⟩. In particular, the normalised version of
the variance, ρ = σ<sup>2</sup>/N, is considered.</p>
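        <p>As a concrete reading of the definition, the variance and its normalised version can be computed from a recorded series of A(t) values. This is an illustrative Python fragment, not part of the TuCSoN framework; the function name is ours.</p>
        <p>
```python
def mg_variance(A_series, N):
    """sigma^2 is the mean squared deviation of A(t) from its time
    average; rho = sigma^2 / N is the normalised version."""
    mean_A = sum(A_series) / len(A_series)
    sigma2 = sum((a - mean_A) ** 2 for a in A_series) / len(A_series)
    return sigma2, sigma2 / N
```
        </p>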
        <p>Generally speaking, variance is the inverse of global
efficiency: as variance decreases, agent coordination improves,
so that more agents win. Variance is affected in interesting ways
by the parameters of the model, such as the number of agents (N),
the memory size (m) and the number of strategies (s): in
particular, the fluctuation of variance is shown to depend only
on the ratio α = 2<sup>m</sup>/N between the number of possible
histories and the number N of agents.</p>
        <p>For large values of α—the number of agents is small with
respect to the number of possible histories—the outcomes are
seemingly random: the reason for this is that the information
that agents observe about the past history is too complex for
their limited processing abilities.</p>
        <p>When new agents are added, fluctuations decrease and
agents perform better than they would by choosing randomly
(random play gives ρ = 1); this happens around α ≈ 1/2, as
visible in the results of our simulation in Figure 4—the game
enters a regime where the losing group is close to N/2, hence we
might say that coordination is performing well.</p>
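        <p>The random-play reference ρ = 1 follows from the fact that the sum of N independent ±1 votes has variance N. A quick seeded check, as an illustrative Python fragment with names of our own choosing:</p>
        <p>
```python
import random

def random_game_rho(N, steps, seed=42):
    """Estimate rho for agents that vote uniformly at random."""
    rng = random.Random(seed)
    A = [sum(rng.choice((-1, 1)) for _ in range(N)) for _ in range(steps)]
    mean_A = sum(A) / steps
    sigma2 = sum((a - mean_A) ** 2 for a in A) / steps
    return sigma2 / N   # expected to be close to 1 for random play

rho = random_game_rho(N=101, steps=2000)
```
        </p>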
        <p>If the number of agents increases further, fluctuations
rapidly grow beyond the level of random agents and the game
enters the crowded regime. For low values of α the value of
σ<sup>2</sup>/N is very large: it scales like
σ<sup>2</sup>/N ≈ α<sup>−1</sup>.</p>
        <p>The results of other observations suggest that the behaviour
of MG can be classified in two phases: an information-rich
asymmetric phase, and an unpredictable or symmetric phase.</p>
        <p>A phase transition is located where σ2/N attains its minimum
(αc = 1/2), and it separates the symmetric phase with α &lt; αc
from an asymmetric phase with α &gt; αc.</p>
        <p>All these cases have been observed with the TuCSoN
simulation framework described in the next section.</p>
        <p>IV. THE SIMULATION FRAMEWORK</p>
        <p>The construction of MG simulations with MASs is based
on the TuCSoN framework and on tuProlog as the inferential
engine used to program logic agents. The main innovative aspect
of this MG simulation is the possibility of studying the
evolution of the system with particular and different kinds of
agent behaviour at the micro level, imposed as coordination
parameters which are changed on-the-fly.</p>
      </sec>
      <sec id="sec-2-3">
        <title>A. Operating Instructions</title>
        <p>
          Each agent has an internal plan, structured as an algebraic
composition of allowed actions (with their preconditions) and
perceptions (with their effects), that enables the agent to use
the coordination artifact to play the MG. This plan can be
seen as Operating Instructions (
          <xref ref-type="bibr" rid="ref7">7</xref>
          ), a formal description based
on Labelled Transition Systems (LTS) that the agent reads to
understand what its step-by-step behaviour should be. Through
an inference process, the agent accordingly chooses the next
action to execute, thus performing the cycle described in
Section II.
        </p>
        <p>Operating instructions are expressed by the following
theory:
% pre=Preconditions
% eff=Effects
% act=Action
% per=Perception
firststate(agent(first,[])).
definitions([
def(first,[],...),
%definition of the main control loop
def(main,[S],
[act(out(play(X)),pre(choice(S,X))),
per(in(result(Y)),eff(res(Y))),
agent(main,[S])]
),
...
]).</p>
        <p>The first part of the operating instructions is expressed
by term first, where the agent reads the game parameters that
are stored in the KB, and randomly creates its own set of
strategies.</p>
        <p>In the subsequent part, main, the agent executes its main
cycle. It first puts tuple play(X) in the tuple space, where
X = ±1 is the agent's vote. The precondition of this action,
choice(S,X), is used to bind X in the KB to the
value currently chosen by the agent according to strategy S.
Then, the agent gets the overall result of the game in tuple
result(Y) and applies it to its KB. After this perception,
the cycle is iterated again.</p>
      </sec>
      <sec id="sec-2-4">
        <title>B. Tuple Centre Behaviour</title>
        <p>The interaction protocol between agents and the
coordination artifact is then simply structured as follows. First,
each agent puts the tuple for its vote. When the tuples for all
agents have been received, the tuple centre checks them, computes
the result of the game—either 1 or −1 is winning—and prepares
a result tuple to be read by agents.</p>
        <p>The ReSpecT program for this behaviour is loaded in the
tuple centre by a configuration agent at bootstrap, through
operation set_spec(). The following ReSpecT reaction
is fired when an agent inserts tuple play(X), and triggers
the whole behaviour:
reaction(out(play(X)),(
%read the last value of count
in_r(count(Y)),
Z is Y+1,
%calculate the partial result
in_r(sum(M)),
V is M+X,
out_r(sum(V)),
%store the new value of count
out_r(count(Z))
%this insertion will be caught by the next reaction
)).</p>
        <p>This reaction considers the bet (X), counts the bets (Z),
and computes the partial result of the game (V). When
all the agents have played, the artifact produces the tuple
winner(Result,Turn,NumberOfLoss,MemorySize,last/more)
which is the main tuple of MG coordination.
reaction(out_r(count(X)),(
%check if all agents have already played
rd_r(numag(Num)),
X=:=Num,
in_r(totcount(T)),
Turn is T+1,
rd_r(game(G)),
%read the result of the game
in_r(sum(Result)),
%reset the sum value
out_r(sum(0)),
rd_r(countsession(CS)),
in_r(count(Y)),
%reset the count value
out_r(count(0)),
%calculate variance
in_r(qsum(SQ)),
NSQ is Result*Result+SQ,
out_r(qsum(NSQ)),
%calculate mean
in_r(totsum(R)),
NewS is R+Result,
out_r(totsum(NewS)),
rd_r(numloss(NumberOfLoss)),
rd_r(mem(MemorySize)),
% put out the tuple with the result
out_r(winner(Result,Turn,NumberOfLoss,
MemorySize,G)),
out_r(totcount(Turn))
)).</p>
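        <p>The chain of reactions above (accumulating bets, detecting when all agents have played, resetting the counters, and emitting the winner tuple) can be mirrored by a small stateful sketch. This is an illustrative Python fragment with names of our own; the actual coordination logic is the ReSpecT code shown.</p>
        <p>
```python
class MGCentre:
    """Emulates the tuple centre's reaction chain for one round."""
    def __init__(self, num_agents):
        self.num_agents = num_agents  # numag(_) tuple
        self.count = 0                # count(_) tuple
        self.total = 0                # sum(_) tuple
        self.turn = 0                 # totcount(_) tuple
        self.winner = None            # winner(...) tuple, set per round

    def play(self, x):                # corresponds to out(play(X))
        self.count += 1
        self.total += x
        if self.count == self.num_agents:   # all agents have played
            self.turn += 1
            result = self.total
            self.count, self.total = 0, 0   # reset for the next round
            self.winner = (result, self.turn)
```
        </p>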
        <p>Fig. 5. Interface of the Monitor Agent</p>
        <p>The winner tuple contains the result of the game
(Result), the number of steps (Turn), two tuning
parameters (NumberOfLoss and MemorySize) and one constant
communicating to agents whether they have to stop or to play
further (last/more). Figure 5 reports the graphical interface
of the monitor agent, which during its lifetime reads the
winner tuple and draws the variance.</p>
      </sec>
      <sec id="sec-2-5">
        <title>C. Tuning the Simulation</title>
        <p>In a classical MG simulation there are a number of
parameters that can affect the system behaviour, which are
explicitly represented in the tuple centre in the form of tuples:
the number of agents numag(X), the memory size mem(X), and the
number of strategies numstr(X). In our framework, we have
introduced as a further parameter the number of wrong moves after
which a single agent should recalculate its own strategies,
represented as a tuple numloss(X). Such a threshold is
useful to break the symmetry in the strategy space
when the system is in a pathological state, i.e., when all
agents have the same behaviour and the game oscillates from
minimum to maximum value.</p>
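        <p>The effect of the numloss threshold (discarding a strategy after too many consecutive wrong moves and regenerating it at random) might be sketched as follows. This is an illustrative Python fragment; the counters and names are ours, not the paper's.</p>
        <p>
```python
import random

def after_move(agent, strategy_index, won, numloss, m, rng=random):
    """Track consecutive losses per strategy; past the threshold,
    replace the strategy with a fresh random one (breaking symmetry)."""
    if won:
        agent['losses'][strategy_index] = 0
    else:
        agent['losses'][strategy_index] += 1
        if agent['losses'][strategy_index] >= numloss:
            agent['strategies'][strategy_index] = [
                rng.choice((-1, 1)) for _ in range(2 ** m)]
            agent['losses'][strategy_index] = 0
```
        </p>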
        <p>In our framework, it is possible to dynamically tune
the coordination rules by changing the numloss and mem
coordination parameters, which are stored as tuples in the
coordination artifact. The simulation architecture built in this
way allows for the on-the-fly change of some game configuration
parameters—such as the size of agent memory—with no need to
stop the simulation and re-program the agents.</p>
        <p>By changing the parameters, the tuning agent can drive the
system from one equilibrium state to another, by controlling
agent strategies, the size of the memory, or the number of
losses that an agent can accept before discarding a strategy.
This agent observes the system variance, and decides whether and
how to change the tuning parameters: the reference variance is
calculated by first making agents play the game randomly—
see Figure 4. The new values of the parameters are stored in the
tuple centre through tuples numloss(NumberOfLoss) and
mem(MemorySize); the coordination rules then react and
update the information that will be read by the agents.</p>
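        <p>The tuning agent's feedback loop (observing the normalised variance, comparing it with the random-play reference, and publishing new parameter tuples) can be sketched as follows. This is an illustrative Python fragment; write_tuple stands in for the TuCSoN out primitive, and the adjustment policy shown is ours.</p>
        <p>
```python
def tuning_step(observed_rho, reference_rho, mem, numloss, write_tuple):
    """If normalised variance exceeds the random-play reference,
    nudge a coordination parameter and publish the tuples."""
    if observed_rho > reference_rho:
        mem = mem + 1                 # e.g. enlarge agent memory
        write_tuple(('mem', mem))
        write_tuple(('numloss', numloss))
    return mem, numloss
```
        </p>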
      </sec>
      <sec id="sec-2-6">
        <title>D. Simulation Results</title>
        <p>The results of the tuned simulation in Figures 6 and 7 show
how the system changes its equilibrium state and achieves
a better value of variance.2 In this simulation the tuning
agent is played by a human who observes the evolution of
the system and acts through the tuning interface to change
the coordination parameters, such as the threshold of losses and
the memory size, hopefully finding new and better configurations.
The introduction of the threshold of losses in the agent
behaviour is useful when the game is played by few agents: this
parameter enables system evolution and better cooperative
behaviour among agents.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>V. CONCLUSION</title>
      <p>In this paper, we aim at introducing new perspectives on
agent-based simulation by adopting a novel MAS meta-model
based on agents and artifacts, and by applying it to Minority
Game simulation. We implement and study MG over the
TuCSoN coordination infrastructure, and show some benefits
of the artifact model in terms of flexibility and controllability
of the simulation. In particular, in this work we focus on the
possibility of building a feedback loop on the coordination rules,
driving the system to a new and better equilibrium state. Many
related agent simulation tools actually exist: as this paper is a
starting point, we plan to perform a systematic comparison
of their expressiveness and features. In the future, we are
interested in constructing an intelligent and adaptive tuning
agent with a BDI architecture, substituting the human agent
in driving the evolution over time of the system behaviour.</p>
    </sec>
    <sec id="sec-4">
      <title>VI. ACKNOWLEDGEMENTS</title>
      <p>The first author of this paper, Enrico Oliva, would like
to warmly thank Dr. Peter McBurney and the Department
of Computer Science at University of Liverpool for their
scientific support and their hospitality during his stay in
Liverpool, when this paper was mostly written.</p>
      <p>2In Figure 6, the first phase of equilibrium is followed by a second one
obtained by changing the threshold parameter to S = 5. Finally, a third phase
is obtained by changing the size of the memory to m = 5.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>W. B.</given-names>
            <surname>Arthur</surname>
          </string-name>
          , “
          <article-title>Inductive reasoning and bounded rationality (the El Farol problem)</article-title>
          ,”
          <source>American Economic Review</source>
          , vol.
          <volume>84</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>406</fpage>
          -
          <lpage>411</lpage>
          , May
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>W.</given-names>
            <surname>Renz</surname>
          </string-name>
          and
          <string-name>
            <given-names>J.</given-names>
            <surname>Sudeikat</surname>
          </string-name>
          , “
          <article-title>Modeling Minority Games with BDI agents - a case study</article-title>
          ,” in
          <source>Multiagent System Technologies</source>
          , ser. LNCS,
          <string-name>
            <given-names>T.</given-names>
            <surname>Eymann</surname>
          </string-name>
          , F. Klu¨gl, W. Lamersdorf,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klusch</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. N.</given-names>
            <surname>Huhns</surname>
          </string-name>
          , Eds. Springer,
          <year>2005</year>
          , vol.
          <volume>3550</volume>
          , pp.
          <fpage>71</fpage>
          -
          <lpage>81</lpage>
          , 3rd German Conference (MATES
          <year>2005</year>
          ), Koblenz, Germany,
          <fpage>11</fpage>
          -
          <lpage>13</lpage>
          Sept.
          <year>2005</year>
          . Proceedings. [Online]. Available: http: //www.springerlink.com/link.asp?id=y62q174g56788gh8
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Parsons</surname>
          </string-name>
          and
          <string-name>
            <given-names>P.</given-names>
            <surname>McBurney</surname>
          </string-name>
          , “
          <article-title>Argumentation-based communication between agents</article-title>
          ,” in
          <source>Communication in Multiagent Systems</source>
          , ser. Lecture Notes in Computer Science, M.-P. Huget, Ed., vol.
          <volume>2650</volume>
          . Springer,
          <year>2003</year>
          , pp.
          <fpage>164</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Viroli</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          , “
          <article-title>Programming MAS with artifacts</article-title>
          ,” in
          <source>Programming Multi-Agent Systems</source>
          , ser. LNAI,
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Bordini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dastani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Dix</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>El Fallah</surname>
          </string-name>
          Seghrouchni, Eds. Springer, Mar.
          <year>2006</year>
          , vol.
          <volume>3862</volume>
          , pp.
          <fpage>206</fpage>
          -
          <lpage>221</lpage>
          , 3rd International Workshop (PROMAS
          <year>2005</year>
          ),
          <source>AAMAS</source>
          <year>2005</year>
          , Utrecht,
          <source>The Netherlands, 26 July</source>
          <year>2005</year>
          .
          Revised and Invited Papers
          . [Online]. Available: http://www.springerlink.com/openurl.asp?genre= article&amp;issn=
          <fpage>0302</fpage>
          -
          <lpage>9743</lpage>
          &amp;volume=
          <volume>3862</volume>
          &amp;spage=
          <fpage>206</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          , and E. Denti, “
          <article-title>Activity Theory as a framework for MAS coordination</article-title>
          ,” in
          <source>Engineering Societies in the Agents World III</source>
          , ser. LNCS, P. Petta,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tolksdorf</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F.</given-names>
            <surname>Zambonelli</surname>
          </string-name>
          , Eds. Springer-Verlag,
          <year>Apr</year>
          .
          <year>2003</year>
          , vol.
          <volume>2577</volume>
          , pp.
          <fpage>96</fpage>
          -
          <lpage>110</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          and
          <string-name>
            <given-names>F.</given-names>
            <surname>Zambonelli</surname>
          </string-name>
          , “
          <article-title>Coordination for Internet application development</article-title>
          ,”
          <source>Autonomous Agents and Multi-Agent Systems</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>3</issue>
          , pp.
          <fpage>251</fpage>
          -
          <lpage>269</lpage>
          , Sept.
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Viroli</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Ricci</surname>
          </string-name>
          , “
          <article-title>Instructions-based semantics of agent mediated interaction</article-title>
          ,” in
          <source>3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004)</source>
          ,
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sierra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sonenberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Tambe</surname>
          </string-name>
          , Eds., vol.
          <volume>1</volume>
          . New York, USA: ACM, 19-23 July
          <year>2004</year>
          , pp.
          <fpage>102</fpage>
          -
          <lpage>109</lpage>
          . [Online]. Available: http://portal.acm.org/citation.cfm?id=1018409.1018737
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Denti</surname>
          </string-name>
          , “
          <article-title>Formal ReSpecT</article-title>
          ,”
          <source>Electronic Notes in Theoretical Computer Science</source>
          , vol.
          <volume>48</volume>
          , pp.
          <fpage>179</fpage>
          -
          <lpage>196</lpage>
          , June
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Omicini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ricci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Viroli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Tummolini</surname>
          </string-name>
          , “
          <article-title>Coordination artifacts: Environment-based coordination for intelligent agents</article-title>
          ,” in
          <source>3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004)</source>
          ,
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sierra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sonenberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Tambe</surname>
          </string-name>
          , Eds., vol.
          <volume>1</volume>
          . New York, USA: ACM, 19-23 July
          <year>2004</year>
          , pp.
          <fpage>286</fpage>
          -
          <lpage>293</lpage>
          . [Online]. Available: http://portal.acm.org/citation.cfm?id=1018409.1018752
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Challet</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , “
          <article-title>Emergence of cooperation and organization in an evolutionary game</article-title>
          ,”
          <source>Physica A: Statistical and Theoretical Physics</source>
          , vol.
          <volume>246</volume>
          , no.
          <issue>3-4</issue>
          , pp.
          <fpage>407</fpage>
          -
          <lpage>418</lpage>
          , Dec.
          <year>1997</year>
          . [Online]. Available: http://dx.doi.org/10.1016/S0378-4371(97)00419-6
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Challet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Marsili</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Y.-C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , “
          <article-title>Modeling market mechanism with minority game</article-title>
          ,”
          <source>Physica A: Statistical and Theoretical Physics</source>
          , vol.
          <volume>276</volume>
          , no.
          <issue>1-2</issue>
          , pp.
          <fpage>284</fpage>
          -
          <lpage>315</lpage>
          , Feb.
          <year>2000</year>
          . [Online]. Available: http://dx.doi.org/10.1016/S0378-4371(99)00446-X
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>H. V. D.</given-names>
            <surname>Parunak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brueckner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sauter</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Savit</surname>
          </string-name>
          , “
          <article-title>Effort profiles in multi-agent resource allocation</article-title>
          ,” in
          <source>1st International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2002)</source>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Castelfranchi</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. L.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , Eds. Bologna, Italy: ACM, 15-19 July
          <year>2002</year>
          , pp.
          <fpage>248</fpage>
          -
          <lpage>255</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N. R.</given-names>
            <surname>Jennings</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sierra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sonenberg</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Tambe</surname>
          </string-name>
          , Eds.,
          <source>3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004)</source>
          . New York, USA: ACM, 19-23 July
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>