<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Helder Coelho</string-name>
          <email>hcoelho@di.fc.ul.pt</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>João Balsa</string-name>
          <email>jbalsa@di.fc.ul.pt</email>
        </contrib>
      </contrib-group>
      <abstract>
        <p>One of the major puzzles in performing multi-agent-based simulations is the validity of their results. Optimisation of simulation parameters can lead to results that can be deceitful, optimistic, or plainly wrong. When the issue at stake is inherently complex, which is frequently the case with social phenomena, the search for emergent outcomes is closely related to macro effects deriving from micro behaviours, and the drawing of valid conclusions from the analysis of the observed results should be done with extra care. Multi-agent-based social simulation is increasingly used not only to understand and explain phenomena, but also to predict outcomes and even to prescribe measures to be adopted by collective (public or private) entities. The notion that conclusions of simulation studies will be applied to real social settings brings an added responsibility to the researcher. Principled methodologies are needed that can minimise the ad hoc nature of experimentation. In this paper, we present a set of methodological principles to explore the space of possible designs involved in simulation experiments. Principles are needed not only for the design of agents and the societies they are immersed in, but also for the design of models of simulations themselves. Several techniques are shown that can provide an increasingly broad covering of the space of possible experiment designs. We also explore some alternatives on how to progressively complexify particular mechanisms.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>more complex, and is still in need of some principles with which to guide the researchers in such a
way that strengthens confidence in the obtained results, and their analysis.</p>
      <p>
        So, in this paper we propose a draft set of methodological principles with which to guide
exploratory simulations of Social Science phenomena. This methodology builds on other MAS
and MABS methodologies to address all levels of complexity in such a simulation, namely, the agent
cognitive level, the societal level, and the experimental (simulation) level itself. The leitmotiv of
this methodology is complexity. We need to explore complex systems to get to
know them, not to simplify them to a point where we can easily know them. To this end, we build on
our vision of the MAD methodology (the back and forth journeys in design proposed in [
        <xref ref-type="bibr" rid="ref7">12</xref>
        ]), complement
it with more recent developments on individual decision in the BVG (Beliefs-Values-Goals) choice
framework [
        <xref ref-type="bibr" rid="ref1">6</xref>
        ], and a schematic vision of exploratory simulation we addressed in [5].
      </p>
      <p>
        The development of this methodology was based on the tax compliance scenario as inspiration
and applicational support [
        <xref ref-type="bibr" rid="ref3">3, 4, 8</xref>
        ]). We should note that the kind of activities that e*plore involves
by no means eases the task of the developer and simulator. What it does is to explicitly consider
the structure of the development of the several models. The result of this exploration of the space
of possible models could be compared in terms of complexity and effort with the usual process of
sequential development, programming, and refinement of one model. However, instead of looking
for the model, it considers design options and lists alternative models. In these alternative
scenarios, somewhat simplified visions of the problem are studied. Admittedly, this involves the
risk that some necessary complexity is lost in the separation of characteristics. Still, no single one of
these models is the absolute answer to the proposed problem. In the exploration of these individual
models and their variability, we aim at getting deeper insight into the several facets of the target
phenomenon, so that a unified view can be built, modelled and simulated.
      </p>
      <p>The rest of the paper is organised as follows. In the next section we address some of the most
representative methodologies for experimentation in MAS and MABS, and focus on their evolution.
We then summarise the idea of exploratory simulation as proposed in the literature and enumerate
and discuss the persistent methodological problems still to be found despite all systematisation
efforts. We then present our first attempt at a unifying methodology for (exploratory,
multi-agent-based) social simulation. Section 5 discusses the purpose of social simulation, and recommends
prudence on the generalisation of its findings. The following section discusses the methodological
steps in depth, focussing especially on evaluation. Section 6.1 takes on Sloman’s idea of
exploration of design space in this context, and proposes cumulative ways of covering design space by
manipulating model design. Finally, section 7 enumerates the steps of the methodology, before we
produce some concluding remarks.</p>
    </sec>
    <sec id="sec-2">
      <title>Methodologies for Development of Multi-Agent Systems and Multi-Agent-Based Simulation</title>
      <p>
        Recently, serious efforts have been made to build a solid methodology for
deploying multi-agent systems (MAS). Perhaps the most accomplished and influential of these efforts is
Gaia, by Wooldridge et al. [
        <xref ref-type="bibr" rid="ref20">25</xref>
        ]. Gaia is part of the coming of age of the MAS area, in that it attempts
to establish a set of concepts and principles for building a system and its components that is general
and comprehensive, and apt to deal with the enormous growth of agent systems we have
witnessed.
      </p>
      <p>In Gaia, the founding idea is that a MAS is a computational organisation consisting of several
interacting roles. Gaia is proposed from an engineering standpoint, which is clear from the domain
characteristics adopted. However, some of those characteristics are not adequate when we take on
a more scientific stance. Gaia assumes that “the goal is to obtain a system that maximises some
global quality measure (...) [and] is not intended for systems that admit the possibility of true
conflict.” [25, page 286]</p>
      <p>
        In this light, we start our search for a more general methodology for social simulation, having
Cohen’s 1991 MAD (Modelling, Analysis and Design) [
        <xref ref-type="bibr" rid="ref8">13</xref>
        ] in mind. Cohen was worried about
defining the general lines of an experimental method for artificial intelligence. Controlled
experiments are designed to suggest or provide evidence for theories that can explain differences in the
performance of systems. Acknowledging that empirical results are seldom general, Cohen insisted
that nothing prevents the researcher from “inventing general theories as interpretations of results
of studies in simulation testbeds, and nothing prevents (...) from designing additional studies to
test predictions of these theories in several simulation testbeds” [21, page 39].
      </p>
      <p>
        MAD (Modelling, Analysis and Design) involves seven activities [
        <xref ref-type="bibr" rid="ref8">13</xref>
        ]: (1) evaluate the
environmental factors that affect behaviour; (2) model the causal relations between system design, its
environment, and its behaviour; (3) design or redesign a system (or part of one); (4) predict how
the system will behave; (5) run experiments to test predictions; (6) explain unexpected results
and modify the models and design of the system; and (7) generalise models to classes of systems,
environments and behaviours.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref7">12</xref>
        ] we have critically addressed this methodology from a systems development standpoint:
to program is not only to code either formal or informal descriptions, so we have proposed to slide
Cohen’s ecology triangle along a line that could be travelled back and forth, as we depict in figure
1.
      </p>
      <p>[Figure 1: Cohen’s triangle slid along a line travelled back and forth, between Formalisations and Systems]</p>
      <p>
        In [5] we readdressed this methodology and confronted it with Gilbert’s methodology for
computational simulation [
        <xref ref-type="bibr" rid="ref12">17</xref>
        ]: (1) identify a “puzzle,” a question whose answer is unknown; (2) definition
of the target of modelling; (3) normally, some observations of the target are necessary, to provide
the parameters and initial conditions of the model; (4) after developing the model (probably in
the form of a computer program), the simulation is executed, and its results are registered; (5)
verification assures the model is correctly developed; (6) validation ensures that the behaviour of
the model corresponds to the behaviour of the target; and (7) finally, the sensitivity analysis tells
how sensitive the model is to small changes in the parameters and initial conditions.
      </p>
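      <p>As a toy illustration (ours, not Gilbert’s), the last four steps of this cycle can be sketched in Python; the model, parameter names and tolerances below are hypothetical placeholders, not anything from the methodology itself:</p>

```python
import random
import statistics

# Minimal sketch (ours, not Gilbert's code) of steps (4)-(7) of the cycle:
# run the model, verify it, validate it against observations, analyse sensitivity.
# The toy model and all parameter names are our own placeholders.

def run_model(params, seed=0, steps=100):
    """Toy target: a quantity that drifts towards a mean, with noise."""
    rng = random.Random(seed)
    level = params["initial_level"]
    for _ in range(steps):
        level += params["drift"] * (params["mean"] - level) + rng.gauss(0, params["noise"])
    return level

def verify(params):
    """Verification: the model is correctly built (a noiseless run must hit the mean)."""
    calm = dict(params, noise=0.0)
    return abs(run_model(calm) - params["mean"]) < 1e-3

def validate(params, observed_mean, tol=0.1):
    """Validation: model behaviour corresponds to observations of the target."""
    runs = [run_model(params, seed=s) for s in range(30)]
    return abs(statistics.mean(runs) - observed_mean) < tol

def sensitivity(params, key, deltas):
    """Sensitivity analysis: response of the outcome to small parameter changes."""
    return {d: run_model(dict(params, **{key: params[key] + d})) for d in deltas}

params = {"initial_level": 0.2, "mean": 0.6, "drift": 0.1, "noise": 0.02}
print(verify(params), validate(params, 0.6))
```

      <p>The point of the sketch is only the shape of the cycle: verification checks the program against its own specification, validation checks it against the target, and sensitivity analysis spans a parameter over small perturbations.</p>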
      <p>
        Both methodologies are quite similar, but in MAD there is no return to the original phenomenon.
While Cohen’s emphasis is on the system, Gilbert is more concerned with the original phenomenon
to be modelled and simulated. In [5], we proposed some methodological principles with which to
confront the results of simulations, and proposed a merge between extended MAD and a description
of exploratory simulation, crossed with the idea of heterogeneous agents with an individual
choice framework, that took the experiment designer inside the whole methodological scheme. The
key idea is not to mask complexity away from experimentation with complex models and systems.
The existing methodologies are not capable of dealing with the complexity contained in today’s
exploratory simulations (ES) with agent-based social systems. This concern (see also [
        <xref ref-type="bibr" rid="ref5">10</xref>
        ]) comes
from the best of reasons: today’s agent technology, together with the increased computational
power available, brought the social scientists to tackle new problems (or scaled up old problems),
through computational simulations, that they would not dream of until recently. The existing
methodologies are too focussed on realising a system tuned for a given purpose, whereas in ES that
purpose is too vague and complex to be defined from the start.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Exploratory Simulation</title>
      <p>
        The notions of agent and computational simulation are the master beams of the new complexity
science [
        <xref ref-type="bibr" rid="ref10">15</xref>
        ]. Computational simulation is methodologically appropriate when a social phenomenon
is not directly accessible [
        <xref ref-type="bibr" rid="ref14">19</xref>
        ]. One of the reasons for this inaccessibility is the target phenomenon
being so complex that the researcher cannot grasp its relevant elements. Simulation is based on a
phenomenon more observable than the target one. Often, the study of the model is as interesting
as the study of the phenomenon itself, and the model becomes a legitimate object of research [
        <xref ref-type="bibr" rid="ref9">14</xref>
        ].
There is a shift from the focus of research of natural societies (the behaviour of a society model can
be observed “in vitro” to test the underlying theory) to the artificial societies themselves (study
of possible societies).
      </p>
      <p>[Figure 2: the experimenter and his/her intuitions inside the simulation scheme]</p>
      <p>
        The questions to be answered cease to be “what happened?” and “what
may have happened?” and become “what are the necessary conditions for a given result to be
obtained?,” and cease to have a purely descriptive character to acquire a prescriptive one. A new
stance can be synthesised, and designated “exploratory simulation” [
        <xref ref-type="bibr" rid="ref9">14</xref>
        ]. The prescriptive character
(exploration) cannot be simplistically reduced to an optimisation, just as the descriptive character
is not a simple reproduction of the real social phenomena.
      </p>
      <p>
        In this methodological stance, the position of the experimenter becomes central, which reinforces
the need to define common ground between him/her and the mental content of the agents in the
simulation (see figure 2). Hales [
        <xref ref-type="bibr" rid="ref15">20</xref>
        ] claims that experimentation in artificial societies demands
new methods, different from traditional induction and deduction. Like Axelrod says: “Simulation
is a third form of making science. (...) While induction can be used to discover patterns in data,
and deduction can be used to find consequences of assumptions, the modelling of simulations can
be used as an aid to intuition” [7, page 24].
      </p>
      <p>
        However, as Casti stresses [
        <xref ref-type="bibr" rid="ref6">11</xref>
        ], there are difficulties in concretising the verification process: the
goal of these simulation models is not to make predictions, but to obtain more knowledge and
insight. In [5], we emphasised the fact that theories, explanations and hypotheses are being
constructed, not only given and tested. Simulation is precisely the search for theories and hypotheses.
These come from conjectures, through metaphors, intuitions, etc. Even evaluation needs intuitions
from the designer to lead to new hypotheses and explanations. This process allows the agent’s
choices to approximate the model that is provided as reference. Perhaps this model is not as
accurate as it should be, but it can always be replaced by another, and the whole process of simulation
can provide insights about this other model.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Persistent Methodological Problems</title>
      <p>In this section we summarise the problems that persist after all these methodological undertakings
that have crossed the last decade or so whilst this multi-disciplinary area of multi-agent-based
exploratory social simulation was being delineated, and its goals and possibilities were better
understood. Next, we will claim that the area as a whole is ready to go further and propose solutions
for real world (target system) problems and questions.
</p>
      <sec id="sec-4-1">
        <title>Validity and Significance of Results</title>
        <p>
          All modellers, simulators and experiments are worried about the validity and significance of the
models they build and use. Unfortunately, as we have seen from the comparison between the two
methodologies above, once the models are built, tested and deployed, the experimenter may tend
to look at them as being the real system, and forget they are still only models. And so, outcomes
of the MABS are still outcomes of a simulation, not necessarily similar or representative of how the
world would react in the same conditions. This was the criticism behind the proposal of Extended
MAD [
          <xref ref-type="bibr" rid="ref7">12</xref>
          ], but as more and more models and simulations are being created and explored, we notice
that this basically flawed stance must still be pointed out and fought against. Promises can kill a
research programme, and social simulation is still in its infancy and needs to be protected.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>The Role of the Observer/Experimenter</title>
        <p>
          Another persistent issue is the place and role of the experiment designer. Discrepancies between
the notions of causality and correlation may lead to poor interpretations of the modelling efforts.
Since a recurrent issue of exploratory simulation is emergence, and this concept depends on what
the observer is expecting (or, more formally, can demonstrate to be derivable) from the system
design, there are several issues to be addressed. In truth, they have been mentioned by several
authors in the literature and public addresses, perhaps only not systematically. We will provide
some illustrations of the importance of this issue:
• Axelrod defended in [
          <xref ref-type="bibr" rid="ref2">7</xref>
          ] that models and simulations should be described in such a way so as
to be reproducible and indeed reproduced by different people, in an effort to ensure validation
of experiment designs and their outcomes;
• Gilbert described [
          <xref ref-type="bibr" rid="ref13">18</xref>
          ] several varieties of emergence, including ‘second order emergence,’ in
which agents themselves recognised emergent features of the society and this influenced their
behaviour, while Antunes et al. [3] introduced a micro-level ‘perception’ of a macro-level
measure as influencing individual agent’s behaviour;
• Campos et al [
          <xref ref-type="bibr" rid="ref5">10</xref>
          ] enumerate seven roles for experimenters in a multi-agent simulation. Many
before have argued the necessity of the ‘tester’ role being played by a different individual
from the ‘designer’ or ‘developer.’ This set of roles does not stress this necessity, but goes far
beyond it in specialising the roles involved in experimentation.
        </p>
      </sec>
      <sec id="sec-4-3">
        <title>Exploring Design Spaces</title>
        <p>
          The notion of exploration of the design space against the niche space was introduced in MAS
by Aaron Sloman [
          <xref ref-type="bibr" rid="ref17 ref18">22, 23</xref>
          ] to clarify how one can find a solution (architecture) for a particular
problem. Stemming from broad but shallow agent architectures, designs are proposed and tested
against original specifications, and finally, some variations introduced to check how the specific
architecture adapted to the niche space it was developed for. In most MABS simulations reported
in the literature, this last step is not performed, and again the reader is left with the notion that
the way models were built was either the only or the best possible design. This brings us back to
the concern about exemplification instead of demonstration.
        </p>
        <p>However, the picture gets even darker when we consider not only agent design, but also
experiment design. It could be said that we are exploring a multi-dimensional region using only
two-dimensional tools. Any kind of variation could be introduced by considering any other relevant
dimension, and we must possess the means with which to assess the relevance of the features under
examination and their consequences for the outcome of experiments.
</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>The Purpose of Agent-Based Exploratory Simulation</title>
      <p>The dramatic effect of considering ill, biased, or flawed methodological principles for complex
simulations becomes apparent when we consider its possible purposes. Many of these are often only
implicitly considered, so it is important to stress all of them here.</p>
      <p>
        1. By building computational models, scientists are forced to operationalise the concepts and
mechanisms they use for their formulations. This point is very important as we are in a
cross-cultural field, and terminology and approaches can differ a lot from one area to another;
2. The first and many times only purpose of many simulations is to get to understand better
some complex phenomenon. In MABS, ‘understand’ means to describe, to model, to program,
to manipulate, to explore, to have a hands-on approach to the definition of a phenomenon or
process;
3. Another purpose of exploratory simulation is to experiment with the models, formulate
conjectures, test theories, explore alternatives of design but also of definitions, rehearse different
approaches to design, development, carry out explorations of different relevance of perceived
features, compare consequences of possible designs, test different initial conditions and
simulation parameters, explore ‘what-if’ alternatives. In sum, go beyond observed phenomena
and established models, and play with the simulation while letting imagination run free;
4. With MABS, we ultimately aim to explain a given phenomenon, usually from the real social
world. The sense of explaining is linked to causality more than to correlation. As Gilbert [
        <xref ref-type="bibr" rid="ref13">18</xref>
        ]
says, we need explanation not only at the macro level, but also at the individual level. Our
explanation of the phenomena we observe in simulation is solid because we must make the
effort of creating and validating the mechanisms at the micro level, by providing solid and
valid reasons for individual behaviours;
5. When we achieve such a level of understanding, we are able to predict how our models
react to change, and this prediction is verifiable in the real phenomenon, through empirical
observations. It is important to stress that even empirical observations presuppose a model
(which data were collected, which questionnaires were used, etc.). A recent effort that may
prove very useful in understanding the complexities of this process is the Model to Model
workshop series [1, 2];
6. Finally, when we have enough confidence in the validity and predictive capability of our simulation
system, we are ready to help rehearse new policies and prescribe measures to be
applied to the real phenomenon with real actors. It is obvious that no rigour can be spared
when a simulation program achieves this point, and initial restrained application is highly
recommended.
      </p>
    </sec>
    <sec id="sec-6">
      <title>How to Conduct Agent-Based Exploratory Simulation</title>
      <p>In the most interesting social simulations, agents are autonomous, in that individual agents have
their own reasons for the choices they make and the behaviours they display. Simulations are hence
run with a heterogeneous set of agents, closely resembling what happens in real social systems, where
individuality and heterogeneity are key features. So, individual action is situated, adaptive,
multidimensional, complex. If individual autonomy produces additional complexity in MAS, emergent,
collective and global behaviour derived from the interactions of dozens of agents renders the whole
outcome of simulations even more complex and unpredictable.</p>
      <p>An important feature of social simulation is that researchers are usually not concerned only with
the overall trajectories of the system, much less with their aggregated evaluation alone (in terms of averages
or other statistical measures). Equilibria, non-equilibria, phase transitions, attractors, etc. are as
important as observing the individual trajectory of given agents, and examining its reasons and
causes. This is important both to validate the model at individual and global levels, and
because the whole dynamics of the system and its components is influenced by the micro-macro
link.</p>
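      <p>A minimal sketch of such two-level observation (our illustration; the behaviour rule and all numbers are arbitrary placeholders) records a macro aggregate alongside the micro trajectories of selected agents, with the macro value fed back into individual behaviour:</p>

```python
import random
import statistics

# Hedged sketch (ours): record macro aggregates and micro trajectories together.
# The behaviour rule and all numbers are arbitrary placeholders.

class Agent:
    def __init__(self, rng):
        self.rng = rng
        self.state = rng.random()                 # some individual measure

    def step(self, macro_signal):
        # micro behaviour nudged by a macro-level signal: a crude micro-macro link
        self.state += 0.1 * (macro_signal - self.state) + self.rng.gauss(0, 0.05)

def simulate(n_agents=50, steps=20, watched=(0, 1), seed=42):
    rng = random.Random(seed)
    agents = [Agent(rng) for _ in range(n_agents)]
    macro_series, micro_series = [], {i: [] for i in watched}
    for _ in range(steps):
        macro = statistics.mean(a.state for a in agents)     # macro measure
        macro_series.append(macro)
        for i in watched:                                    # micro trajectories
            micro_series[i].append(agents[i].state)
        for a in agents:
            a.step(macro)
    return macro_series, micro_series

macro, micro = simulate()
print(len(macro), sorted(micro))
```

      <p>Both series are needed: the macro series to look for equilibria or transitions, and the micro series to examine the reasons behind individual trajectories.</p>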
      <p>In e*plore, the important phases are: to determine what characteristics are important and what
measures (values) are to be taken at both those levels; what is the appropriate design of the individual
cognitive apparatus and of the inter-personal relationship channels (other methodologies such as
Extended MAD or Gaia might prove useful for this); what roles the experiment designer will play and
how his/her beliefs are represented inside the simulation; how to perform translation (specification,
coding, validation, etc.) along the lines of a new (hyper-)triangle (much more complex than the one
in figure 1) and complement it with complex dynamic evaluations; and how to design models, agents,
systems, experiments and simulations, in order to travel along the space of models, cover problem
characteristics, and evaluate the truthfulness of a certain agent design. All this while keeping in mind
that we are looking for a solution for a problem in the real world.
</p>
      <sec id="sec-6-1">
        <title>Systematically Traversing Design Space</title>
        <p>
          According to Gilbert [
          <xref ref-type="bibr" rid="ref13">18</xref>
          ], it was Epstein and Axtell [
          <xref ref-type="bibr" rid="ref11">16</xref>
          ] who pioneered the technique of starting with a
simple model and refining it. This can be considered an adaptation of Sloman’s increasing depth
in his broad but shallow agent models, but this time applied to the whole MAS and not only the
individual agent. In this section we propose that, when we need to explore the space of possible
designs, several techniques can be used to ensure complete and comprehensive coverage. We have
been using these ideas in the tax compliance scenario [
          <xref ref-type="bibr" rid="ref3">3, 4, 8</xref>
          ], where we aim to gain a deeper
insight into individual and collective behaviour involved in tax evasion and better support and
confidence for our exploratory ideas.
        </p>
        <p>While we propose the following techniques as a way of consecutively enriching and rehearsing
new agent and societal models, we also apply them to the exploration of the design space of
experiments themselves. Variations of the models involved in experiments depend on an enormous
number of features to be repeatedly fixed and spanned over their domain. These include initial
conditions, parameters, realistic estimates of missing numbers, etc., but we also have to consider higher
order decisions, such as mechanisms that can change/update/vary those parameters, and even
interconnections among those mechanisms. All of these are design options to be made, and their
validity must be strengthened by convenient exploration around them.</p>
        <p>[Figure 3: techniques for traversing design space: refining (e.g. Sugarscape), tiling, adding up, choosing, enlarging]</p>
        <p>Figure 3 illustrates how a set of models can be designed and composed to comprehensively
cover the space of possible designs. Models evolve from models by means of several different
techniques and their combinations: refining, tiling, adding up, choosing, enlarging, etc. These are
all standard techniques used in the development of models and systems. Exploration through these
techniques involves moving from one model to another by introducing variability in the models’
characteristics, be they parameters and variables, or objects, agents and environments, or
social mechanisms (for interaction, protocols, dynamic structures), or even experiment
design-related aspects.</p>
        <p>In a short explanation of these techniques, we will refer to the object of variation as a
“mechanism.” A mechanism can be simply seen as a variable that represents some concept, or a complex
set of social rules that the model includes. So, a mechanism is not necessarily individual, and
the variability we propose must be applied to all parts of design (individual agent, environment,
interactions between agents, societal rules and even experiment design). So, refining involves
substituting some simple mechanism for a slightly more complex one. Tiling means to explore some
design alternative by covering the whole space of possibilities for a given mechanism. Adding up
involves the summation of two or more models developed in parallel, addressing different
aspects of the target phenomenon. Choosing is the inverse of adding up, to give up some model
or some characteristics of a model that do not seem promising for the overall solution. Enlarging
means to augment a model by adding new features, and relating them to the existing ones.</p>
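        <p>These five techniques can be given a schematic reading in code. In the hedged Python sketch below (our formulation, not code from the paper), a model is simply a dictionary of named mechanisms, and each technique becomes an operation producing new models from old ones:</p>

```python
import copy

# Illustrative sketch (our formulation): a model as a dict of mechanisms,
# and the five techniques as operations producing new models from old ones.

def refine(model, mechanism, richer):
    """Refining: substitute a simple mechanism for a slightly more complex one."""
    out = copy.deepcopy(model)
    out[mechanism] = richer
    return out

def tile(model, mechanism, domain):
    """Tiling: cover the whole space of possibilities for a given mechanism."""
    return [refine(model, mechanism, value) for value in domain]

def add_up(model_a, model_b):
    """Adding up: merge two models developed in parallel (b wins on clashes)."""
    return {**model_a, **model_b}

def choose(model, keep):
    """Choosing: give up characteristics that do not seem promising."""
    return {k: v for k, v in model.items() if k in keep}

def enlarge(model, new_features):
    """Enlarging: augment a model with new features."""
    return {**model, **new_features}

base = {"decision": "expected-utility", "history": None}
variants = tile(base, "history", [1, 5, "full"])     # three tiled variants
social = enlarge(base, {"perception": "neighbourhood"})
print(len(variants), "perception" in social)
```

        <p>Combinations of these operations then generate the family of models whose exploration covers the design space; the mechanism names above are, of course, only placeholders.</p>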
        <p>
          The idea behind this strategic exploration of the experiment design space is to build up theory
from the exploration of models. As an example, consider our experiments on tax compliance [
          <xref ref-type="bibr" rid="ref3">3, 4, 8</xref>
          ].
Existing theoretical models were plainly unsatisfactory. On the other hand, we had no solid
empirical data with which to calibrate and ultimately validate our models. So, we opted for a strategy
of mimicking the standard mainstream model (which we called Ec0) with which we recorded a set
of base data against which to compare the outcome of subsequent models. Then, we successively
introduced new models with specific characteristics, either at the micro (individual) or at the macro
(societal) levels, with some reasons, conjectures or intuitions behind them. So, Ecτ0 introduced an expanded history
in the individual decision; Ec1 proposed agent individuality, whereas Ec2 postulated individual
adaptivity; Ec3∗ introduced sociality, being the first model where the individual decision depends on a
social perception; Ec3∗i explored one particular type of interaction, imitation; and finally Ec4∗
postulated social heterogeneity, with different agent breeds in a conflictual relation. Other models are still
being shaped, such as Ec?∗k, a model where perception is limited to a k-sized neighbourhood. This
tentative coverage of our problem and model space uses several combined techniques of figure 3.</p>
        <p>When building up experimental designs, it is usual to defend and adopt the so-called KISS (“keep it
simple, stupid!”) principle [
          <xref ref-type="bibr" rid="ref2">7</xref>
          ]. In some sense, Sloman’s “broad but shallow” design principle starts
off from this principle. Still, models must never be simpler than they should be. The solution for this
tension is to take the shallow design and increasingly deepen (or thicken, as we proposed in Kyoto
for WCSS’06) it while gaining insight and understanding about the problem at hand. The idea
is to explore the design of agents, (interactions), (institutions), societies and finally experiments
(including simulations and analysis of their outcomes) by making the initially simple (and simplistic)
particular notion used increasingly more complex, dynamic, and rooted in substantiated facts.
As Moss argued in his WCSS’06 plenary presentation, “Arbitrary assumptions must be relaxed in a
way that reflects some evidence.” This complex movement involves the experimenter him/herself,
and according to Moss includes “qualitative micro validation and verification (V&amp;V), numerical
macro V&amp;V, top-down verification, bottom-up validation,” all of this whereas facing that “equation
models are not possible, due to finite precision of computers.”
        </p>
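        <p>To give the flavour of such a baseline, the following Python sketch (our parameterisation and naming, not the exact Ec0) implements a mainstream expected-utility evasion rule: an agent evades whenever the expected gain from evading, given an audit probability and a penalty rate, is positive:</p>

```python
import random

# Hedged sketch of a mainstream-style baseline (our parameterisation and naming,
# not the exact Ec0 of the paper): an agent evades iff the expected gain from
# evading, under audit probability and penalty rate, is positive.

def decides_to_evade(income, tax_rate, audit_prob, penalty_rate):
    gain_if_unaudited = tax_rate * income                # tax not paid
    loss_if_audited = penalty_rate * tax_rate * income   # fine on evaded tax
    expected_gain = (1 - audit_prob) * gain_if_unaudited - audit_prob * loss_if_audited
    return expected_gain > 0

def compliance_rate(n_agents=1000, tax_rate=0.3, audit_prob=0.05,
                    penalty_rate=2.0, seed=1):
    rng = random.Random(seed)
    evaders = sum(
        decides_to_evade(rng.uniform(10, 100), tax_rate, audit_prob, penalty_rate)
        for _ in range(n_agents)
    )
    return 1 - evaders / n_agents

# With a low audit probability, this baseline predicts that every agent evades:
# the kind of unsatisfactory result that motivates the richer Ec models.
print(compliance_rate())
```

        <p>Recording the outcomes of such a baseline gives the base data against which the subsequent, successively enriched models can be compared.</p>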
        <p>A possible sequence of deepening a concept representing some agent feature (say parameter c,
standing for honesty, income, or whatever) could be to consider it initially a constant, then a
variable, then assign it some random distribution, then some empirically validated random distribution,
then include a dedicated mechanism for calculating c, then an adaptive mechanism for calculating c,
then to substitute c altogether for a mechanism, and so on and so forth. This sequence illustrates
some of the combinations of techniques depicted in figure 3.</p>
        <p>We can synthesise the steps of the e*plore methodology:
i. identify the subject to be investigated, by stating specific items, features or marks;
ii. survey the state of the art across the several scientific areas involved to provide context. The
idea is to enlarge coverage before narrowing the focus; focusing prematurely on solutions may
prevent an in-depth understanding of the problems;
iii. propose a definition of the target phenomenon. Pay attention to its operationality;
iv. identify relevant aspects in the target phenomenon, in particular, list individual and collective
measures with which to characterise it;
v. if available, collect observations of the relevant features and measures;
vi. develop the appropriate models to simulate the phenomenon. Use the features you uncovered
and program adequate mechanisms for individual agents, for interactions among agents, for
probing and observing the simulation. Be careful to base behaviours in reasons that can
be supported on appropriate individual motivations. Develop visualisation and data
recording tools. Document every design option thoroughly. Run the simulations, collect results,
compute selected measures;
vii. return to step iii, and calibrate everything : your definition of the target, of adequate measures,
of all the models, verify your designs, validate your models by using the selected measures.</p>
        <p>Watch the individual trajectories of selected agents, as well as collective behaviours;
viii. introduce variation in your models: in initial conditions and parameters, in individual and
collective mechanisms, and in measures. Return to step v;
ix. after enough exploration of the design space has been performed, use your best models to propose
predictions. Confirm them with past data, or collect data and validate the predictions. Go back
to the appropriate step to ensure rigour;
x. make a generalisation effort and propose theories and/or policies. Apply them to the target
phenomenon. Watch global and individual behaviours. Recalibrate.</p>
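        <p>A hypothetical skeleton (the function names are illustrative placeholders, not prescribed by e*plore) of the cycle formed by steps vi–viii, iterating model building, simulation, measurement, calibration, and variation:</p>

```python
# Hypothetical skeleton of the e*plore cycle; the stand-in functions
# below exist only so the loop runs end to end.

def run_eplore(definition, variations, max_rounds=3):
    """Iterate: build model -> simulate -> measure -> calibrate -> vary."""
    history = []
    for _ in range(max_rounds):
        for params in variations:
            model = build_model(definition, params)   # step vi
            results = simulate(model)                 # step vi
            measures = compute_measures(results)      # step vi
            history.append((params, measures))
        definition = calibrate(definition, history)   # step vii
        variations = vary(variations)                 # step viii
    return history

def build_model(definition, params):
    return {"def": definition, "params": params}

def simulate(model):
    # Toy dynamics: the parameter scales a three-step trajectory.
    return [model["params"]["c"] * t for t in range(3)]

def compute_measures(results):
    return {"mean": sum(results) / len(results)}

def calibrate(definition, history):
    return definition  # no-op placeholder

def vary(variations):
    # One variation technique: perturb each parameter set.
    return [{"c": p["c"] * 1.1} for p in variations]

log = run_eplore("tax evasion", [{"c": 1.0}], max_rounds=2)
```

        <p>The point of the skeleton is the control flow: every pass through the loop widens the covered region of the experiment design space before the models are recalibrated.</p>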
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Concluding Remarks</title>
      <p>When embracing a new project on the dynamics of tax evasion, we were struck by the difficulty of
designing the MAS models and simulation experiments in such a way that the results of
our investigation could be reliable enough to provide solid cues on how to act on the real-world
side of the problem. We crossed this concern with our earlier approaches to methodological principles
for the design and deployment of MAS, to outline a set of steps that allow us to think holistically, and
in a complex way, about the carrying out of social simulation experiments.</p>
      <p>The e*plore methodology goes beyond other proposals in MAS because it takes a step back
from the core of action and looks at the experimentation process as a whole, in which the researcher
has a role and intentions. This is why it starts from broad, multi-disciplinary research
on the issue at hand, and proposes many cycles in the development process, to ensure not
only verification and validation but also comprehensive coverage of the experiment design space.
This is accomplished through the use of several variation techniques, but its foundations lie not
only on the researcher’s experience, rigour, and honesty, but also on intuition and creativity. At this stage of
our proposal, we cannot offer better guidance for traversing that space, since its cartography is not
available and its topology is too complex.</p>
      <sec id="sec-7-1">
        <title>References</title>
        <p>[1] Model to Model Workshop, March 31-April 1, 2003, Marseille, France. http://cfpm.org/m2m/.</p>
        <p>[2] Second Model to Model Workshop, September 16-19, 2004, Valladolid, Spain. www.insisoc.org/ESSA04/M2M2.htm.</p>
        <p>[3] Luis Antunes, João Balsa, Luis Moniz, Paulo Urbano, and Catarina Roseta Palma. Tax compliance in a simulated heterogeneous multi-agent society. In Jaime Simão Sichman and Luis Antunes, editors, Multi-Agent-Based Simulation VI, volume 3891 of LNAI. Springer-Verlag, 2006.</p>
        <p>[4] Luis Antunes, João Balsa, Ana Respício, and Helder Coelho. Tactical exploration of tax compliance decisions in multi-agent based simulation. In Luis Antunes and Keiki Takadama, editors, Proc. MABS 2006, 2006.</p>
        <p>[5] Luis Antunes and Helder Coelho. On how to conduct experiments with self-motivated agents. In Gabriela Lindemann, Daniel Moldt, and Mario Paolucci, editors, Regulated Agent-Based Social Systems: First International Workshop, RASTA 2002, volume 2934 of LNAI. Springer-Verlag, 2004.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Luis</given-names>
            <surname>Antunes</surname>
          </string-name>
          , João Faria, and
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          .
          <article-title>Improving choice mechanisms within the BVG architecture</article-title>
          .
          <source>In Intelligent Agents VII, Proc. of ATAL</source>
          <year>2000</year>
          , volume
          <volume>1986</volume>
          <source>of LNAI. Springer-Verlag</source>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Robert</given-names>
            <surname>Axelrod</surname>
          </string-name>
          .
          <article-title>Advancing the art of simulation in the social sciences</article-title>
          .
          <source>In Rosaria Conte</source>
          , Rainer Hegselmann, and Pietro Terna, editors,
          <source>Simulating Social Phenomena</source>
          , volume
          <volume>456</volume>
          <source>of LNEMS</source>
          . Springer,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [8] João Balsa, Luis Antunes, Ana Respício, and Helder Coelho.
          <article-title>Autonomous inspectors in tax compliance simulation</article-title>
          .
          <source>In Proc. 18th European Meeting on Cybernetics and Systems Research</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Federico</given-names>
            <surname>Bergenti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Marie-Pierre</given-names>
            <surname>Gleizes</surname>
          </string-name>
          , and Franco Zambonelli, editors.
          <source>Methodologies and Software Engineering for Agent Systems: The Agent-Oriented Software Engineering Handbook</source>
          . Kluwer Ac. Press,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [10]
          <string-name>
            <surname>André M. C. Campos</surname>
          </string-name>
          ,
          <string-name>
            <surname>Anne M. P. Canuto</surname>
          </string-name>
          , and
          <string-name>
            <surname>Jorge H. C. Fernandes</surname>
          </string-name>
          .
          <article-title>Towards a methodology for developing agent-based simulations: The masim methodology</article-title>
          .
          <source>In Proc. AAMAS</source>
          <year>2004</year>
          , pages
          <fpage>1494</fpage>
          -
          <lpage>1495</lpage>
          ,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>John L.</given-names>
            <surname>Casti</surname>
          </string-name>
          .
          <article-title>Would-be business worlds</article-title>
          .
          <source>Complexity</source>
          ,
          <volume>6</volume>
          (
          <issue>2</issue>
          ),
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Helder</given-names>
            <surname>Coelho</surname>
          </string-name>
          , Luis Antunes, and
          <string-name>
            <given-names>Luis</given-names>
            <surname>Moniz</surname>
          </string-name>
          .
          <article-title>On agent design rationale</article-title>
          .
          <source>In Proc. XI Brazilian Symposium on AI. SBC and LIA</source>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Paul R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          .
          <article-title>A Survey of the Eighth National Conference on AI: Pulling together or pulling apart</article-title>
          ?
          <source>AI Magazine</source>
          ,
          <volume>12</volume>
          (
          <issue>1</issue>
          ):
          <fpage>16</fpage>
          -
          <lpage>41</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Rosaria</given-names>
            <surname>Conte</surname>
          </string-name>
          and
          <string-name>
            <given-names>Nigel</given-names>
            <surname>Gilbert</surname>
          </string-name>
          .
          <article-title>Introduction: computer simulation for social theory</article-title>
          . In Artificial Societies:
          <article-title>the computer simulation of social life</article-title>
          . UCL Press,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Rosaria</given-names>
            <surname>Conte</surname>
          </string-name>
          , Rainer Hegselmann, and
          <string-name>
            <given-names>Pietro</given-names>
            <surname>Terna</surname>
          </string-name>
          . Introduction:
          <article-title>Social simulation - a new disciplinary synthesis</article-title>
          .
          <source>In Simulating Social Phenomena</source>
          , volume
          <volume>456</volume>
          <source>of LNEMS</source>
          . Springer,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Joshua M.</given-names>
            <surname>Epstein</surname>
          </string-name>
          and
          <string-name>
            <given-names>Robert</given-names>
            <surname>Axtell</surname>
          </string-name>
          .
          <article-title>Growing artificial societies</article-title>
          .
          <source>The Brookings Institution</source>
          and The MIT Press, Washington, D.C. and Cambridge, MA (resp.),
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Nigel</given-names>
            <surname>Gilbert</surname>
          </string-name>
          .
          <article-title>Models, processes and algorithms: Towards a simulation toolkit</article-title>
          . In Ramzi Suleiman, Klaus G. Troitzsch, and Nigel Gilbert, editors,
          <source>Tools and Techniques for Social Science Simulation. Physica-Verlag, Heidelberg</source>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Nigel</given-names>
            <surname>Gilbert</surname>
          </string-name>
          .
          <article-title>Varieties of emergence</article-title>
          .
          <source>In Proc. Agent</source>
          <year>2002</year>
          :
          <article-title>Social agents: ecology, exchange, and evolution</article-title>
          , Chicago,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Nigel</given-names>
            <surname>Gilbert</surname>
          </string-name>
          and Jim Doran, editors. Simulating Societies:
          <article-title>the computer simulation of social phenomena</article-title>
          . UCL Press, London,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>David</given-names>
            <surname>Hales</surname>
          </string-name>
          .
          <article-title>Tag Based Co-operation in Artificial Societies</article-title>
          .
          <source>PhD thesis</source>
          , Univ. Essex,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Steve</given-names>
            <surname>Hanks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Martha E.</given-names>
            <surname>Pollack</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Paul R.</given-names>
            <surname>Cohen</surname>
          </string-name>
          .
          <article-title>Benchmarks, test beds, controlled experimentation, and the design of agent architectures</article-title>
          .
          <source>AI Magazine</source>
          ,
          <volume>14</volume>
          (
          <issue>4</issue>
          ),
          <year>Winter 1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Aaron</given-names>
            <surname>Sloman</surname>
          </string-name>
          .
          <article-title>Prospects for AI as the general science of intelligence</article-title>
          .
          <source>In Proc. of AISB'93</source>
          . IOS Press,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>Aaron</given-names>
            <surname>Sloman</surname>
          </string-name>
          .
          <article-title>Explorations in design space</article-title>
          .
          <source>In Proc. of the 11th European Conference on Artificial Intelligence</source>
          ,
          <year>1994</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>William R.</given-names>
            <surname>Swartout</surname>
          </string-name>
          and
          <string-name>
            <given-names>Robert</given-names>
            <surname>Balzer</surname>
          </string-name>
          .
          <article-title>On the inevitable intertwining of specification and implementation</article-title>
          .
          <source>Communications of ACM</source>
          ,
          <volume>25</volume>
          (
          <issue>7</issue>
          ):
          <fpage>438</fpage>
          -
          <lpage>440</lpage>
          ,
          <year>1982</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Wooldridge</surname>
          </string-name>
          ,
          <string-name>
            <surname>Nicholas R. Jennings</surname>
          </string-name>
          , and
          <string-name>
            <given-names>David</given-names>
            <surname>Kinny</surname>
          </string-name>
          .
          <article-title>The GAIA methodology for agent-oriented analysis and design</article-title>
          .
          <source>Journal of Autonomous Agents and Multi-Agent Systems</source>
          ,
          <volume>3</volume>
          (
          <issue>3</issue>
          ),
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>