<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Exploring the Use of Metaheuristic Search to Infer Models of Dynamic System Behaviour</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>James R. Williams</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Simon Poulding</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Richard F. Paige</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Fiona A. C. Polack</string-name>
          <email>fionag@cs.york.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, University of York</institution>
          ,
          <addr-line>Deramore Lane, York, YO10 5GH</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
<p>As software systems become more pervasive and increase in both size and complexity, the requirement for systems to be able to self-adapt to changing environments becomes more important. This paper describes a model-based approach for adapting to dynamic runtime environments using metaheuristic optimisation techniques. The metaheuristics exploit metamodels that capture the important components in the adaptation process. Firstly, a model of the environment's behaviour is extracted using a combination of inference and search. The model of the environment is then used in the discovery of a model of optimal system behaviour, i.e. how the system should best behave in response to the environment. The system is then updated based on this model. This paper focuses on investigating the extraction of a behaviour model of the environment and describes how our previous work can be utilised for the adaptation stage. We contextualise the approach using an example and analyse different ways of applying the metaheuristic algorithms for discovering an optimal model of the case study's environment.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Software that can adapt to changes in its environment without, or with
minimal, human intervention is a key challenge in current software development [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
One method of deciding upon the most appropriate adaptation to perform is to
utilise metaheuristic optimisation techniques. These techniques may be used to
efficiently locate (near-)optimal solutions by assessing how close candidate
solutions are to solving a problem and using this information to guide the search over
the space of all possible solutions [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Therefore, by reformulating system
adaptation as an optimisation problem, these techniques can be applied.
      </p>
      <p>
        Previous work by Ramirez and Cheng [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] uses metaheuristics to discover
optimal system configurations at runtime. These configurations are represented as
a graph of interconnected system components. A genetic algorithm is used to
activate and deactivate links between components in the hunt for an optimal
configuration, using sensory information from the environment to evaluate
candidate configurations. Earlier work by Goldsby and Cheng [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] uses metaheuristics
to discover optimal component behaviour models (state machines). Performance
issues, related to an expensive fitness function and the time taken to generate
models, were cited as the reason that this technique is only effective at design time [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        The main contribution of this paper is a model- and search-based approach
to self-adaptation. We focus in this paper on adaptation at the component level,
providing a fine-grained level of adaptation whilst aiming to overcome the
performance issues found in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Rather than encoding static information from the
environment in fitness functions (as in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]), our approach uses metaheuristics to
extract a model of the environment's dynamic behaviour. From this environment
model, we apply a second metaheuristic to discover a model of the optimal
system/component behaviour. In order to do this, we use a generic, search-amenable
representation of models, making our approach applicable to any problem
domain where the adaptable parts of the system, and the environment's behaviour,
can be represented as a model. It is worth noting that modelling the behaviour
of the environment may be very challenging and expensive. Our previous
research demonstrated how we can discover optimal system models [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In this
paper we focus on exploring the use of metaheuristics to extract a model of the
environment's behaviour. We investigate ways in which we might improve the
metaheuristic's efficacy in order to make this approach applicable at runtime.
      </p>
      <p>
To illustrate our approach, we aim to adapt our "Super Awesome Fighter"
(SAF) computer game [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] to maintain the challenge of the game as a player
learns new strategies. The environment to which the SAF system must adapt is
thus a human player, for which a behaviour model is extracted. The objective
is to evolve the game, in terms of providing suitable opponents, and tailor
it for that player. We define maximal gameplay enjoyment and challenge as a
condition where, irrespective of skill level, a player never plays an opponent that
they have no chance of beating, nor fights an opponent that they find easy to
defeat. The system component being adapted, therefore, is the opponent.
      </p>
      <p>The paper is structured as follows. Section 2 describes our adaptation approach and
introduces the SAF game, whilst highlighting how our previous work can be
used to discover optimal system models. Section 3 then presents our approach
to extracting models from an environmental behaviour trace at runtime, and
describes how this can be applied to SAF. We evaluate the efficacy of our approach
applied to SAF in section 4, examining ways to improve the performance of the
metaheuristic. The paper concludes in section 5 by discussing limitations and
highlighting future work.</p>
    </sec>
    <sec id="sec-2">
      <title>Adaptation through Optimisation</title>
      <p>
        Figure 1 shows our runtime adaptation approach, which combines the use of models
and metaheuristic search. The first stage in adapting runtime system behaviour
in response to a dynamic environment is to extract a model from the behaviour
trace of the environment. Utilising a meaningful representation of the sensory
information can enable better runtime decision making [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This data model
guides a metaheuristic search algorithm in extracting a model of the behaviour
of the environment (e.g. a set of rules, or a finite state machine). Having
determined a model of the environment's behaviour, we can use a second metaheuristic
algorithm to discover a model of the optimal system/component behaviour in
response to the environment. In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] we demonstrate how to automatically
discover models with desirable characteristics using metaheuristic optimisation. To
do this, we defined a way to search over the space of models conforming to
a given metamodel of a textual language. We have extended this technique to
all models (textual or graphical) by defining an integer-based,
metaheuristic-search-amenable representation of models that can encode any model
conforming to any given metamodel [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. After mapping this encoding to a model, it
can be evaluated by a problem-specific fitness function. This representation
allows us to easily apply metaheuristic optimisation techniques to any problem
where the solution is a model; therefore, by defining metamodels for the
adaptable components in a system, we can discover optimal configurations in response
to environment changes.
      </p>
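To make the encoding concrete, the sketch below shows one way an integer string could be decoded into a model, in the style of grammatical evolution: each gene selects the next construction choice modulo the number of options. This is a hypothetical simplification of our own devising, not the representation of [11]; the toy "metamodel" (rules of condition, move, and fight) and all names are illustrative.

```python
# Hypothetical sketch: decoding an integer string (genotype) into a model
# (phenotype). Each gene picks the next option modulo the number of choices.
# The genome must be long enough for the choices it triggers.
def decode(genome):
    conditions = ["always", "near", "far", "weaker", "stronger"]
    moves = ["walk_towards", "run_away", "crouch"]
    fights = ["punch_high", "kick_low", "block_high"]
    rules = []
    genes = iter(genome)
    n_rules = next(genes) % 3 + 1          # first gene: number of rules (1..3)
    for _ in range(n_rules):
        cond = conditions[next(genes) % len(conditions)]
        move = moves[next(genes) % len(moves)]
        fight = fights[next(genes) % len(fights)]
        rules.append((cond, move, fight))
    return rules

model = decode([4, 7, 3, 11, 2, 0, 1])
```

Because any integer string maps to some model, standard integer-vector operators (crossover, mutation) can be applied without producing structurally invalid models.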
      <p>We now describe the example through which we explore our adaptation approach.</p>
      <p>
        Example: SAF
Super Awesome Fighter1 has been developed to illustrate MDE concepts to
high-school students. The game is played between two human-specified fighters,
or one human-specified fighter and a pre-defined opponent. Fighters are
specified using a bespoke language, the Fighter Description Language, a language
developed using MDE technologies
(Xtext2 [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]), meaning that each fighter in SAF is a model. SAF logs
the state at each time step in a fight for playback purposes.
      </p>
      <p>Fig. 1. The process of extracting a model of the environment and using it
to adapt system behaviour.</p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] we exploited the FDL to
derive opponents with desirable behaviour using metaheuristic optimisation
algorithms. In the context of SAF, the solution space is all instances of the FDL
grammar. The optimisation algorithm is guided through the search space by
assigning a fitness to each candidate solution it examines, i.e. how close that
candidate solution is to solving the designated problem. The grammatical
evolution algorithm [
        <xref ref-type="bibr" rid="ref7 ref9">7, 9</xref>
        ] is used to instantiate candidate fighters which are then
evaluated against some desirable criteria. The goal was to discover opponents
that would be 'interesting and challenging' for the player, which was defined as
1 super-awesome-fighter.appspot.com
2 www.eclipse.org/Xtext
      </p>
      <sec id="sec-2-2">
        <p>an opponent who wins 80% of its fights against other fighters (the
human's goal is to iteratively improve their fighter so that it beats the opponents).
We demonstrated that it was possible to discover these desirable fighters, thus
concluding that the optimisation algorithm was effective. Although performed
in a purely illustrative context (a simple computer game), the principles extend
to any domain where searching for optimal models is of interest.</p>
        <p>The process of selecting opponents for a player can be seen as analogous to
adaptation: the opponent is the adaptable component, and the game aims to
adapt the opponent in order to keep the player interested. Assuming the game
is directly interactive (unlike SAF, where the player is already represented by a
model), it is possible to extract a model of the player's behaviour. This model
can then be used to select appropriate opponents to pit against the human. In
other words, we use a model of the human behaviour to discover an optimal
model of the adaptable system behaviour. It is purely due to the nature of our
case study that both of these models conform to the same metamodel (FDL):
the important point is that both the environment and the adaptable part of
the system can be represented by models. The technique presented in this paper
is therefore applicable to adapting system behaviour for any system whose
adaptable components can be represented as models, provided that these models
can be evaluated to guide the optimisation algorithm.</p>
        <p>As our approach, when applied to SAF, will be to extract and discover FDL
models, we now briefly overview the FDL.</p>
        <p>Fighter Description Language FDL conforms to the metamodel shown in
figure 2. The language allows players to specify their character's Personality,
i.e. the punch and kick power and reach, and define their character's behaviour
using a set of Rules. Each rule is composed of a Condition, a MoveAction, and a
FightAction. Listing 1.1 shows an example of a fighter described using FDL. It
also illustrates some of the more complex parts of the language (which are not
shown in figure 2). Firstly, there is the choose construct, which allows players
to specify a set of actions (movement or fighting) from which the game selects one at
random. Secondly, FDL supports composite conditions using the and and or
constructs. Finally, there is a special Condition called always. A rule with this
condition is always applicable; if none of the other rules' conditions are valid, the
always rule is applied.
JackieChan {
  kickPower = 7
  punchPower = 5
  kickReach = 3
  punchReach = 9
  far [ run_towards punch_high ]
  near and stronger [ choose ( stand crouch ) kick_high ]
  weaker [ run_away choose ( block_high block_low )]
  always [ walk_towards punch_high ]
}</p>
        <p>Listing 1.1. An example character defined using FDL.</p>
        <p>Fig. 2. The FDL metamodel. A Bot (name : String) has a Personality composed
of Characteristics (punchReach, punchPower, kickReach, kickPower : Int) and a
Behaviour composed of Rules. Each Rule has an optional MoveAction, FightAction,
and Condition, typed respectively by the enumerations MoveActionType (walk_towards,
walk_away, run_towards, run_away, jump, crouch, stand), FightActionType (block_low,
block_high, punch_low, punch_high, kick_low, kick_high), and ConditionType (always,
near, far, much_stronger, stronger, even, weaker, much_weaker).</p>
        <p>At runtime, each player's rules are examined sequentially in the order they
were defined, and the first rule whose condition(s) are applicable in the current
game state is selected for execution.</p>
        <p>We now describe our metaheuristic-based approach to extracting a model of
the environment and describe how it can be applied to SAF.</p>
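This first-applicable-rule semantics can be sketched as follows. The sketch is our own simplification, not SAF's implementation: rules are (conditions, actions) pairs, and the game state is modelled as the set of currently applicable condition names.

```python
# Sketch of sequential rule selection: scan rules in definition order and
# return the actions of the first rule whose conditions all hold; "always"
# matches any state.
def select_rule(rules, state):
    for conditions, actions in rules:
        if all(c == "always" or c in state for c in conditions):
            return actions
    return None

# The JackieChan fighter from Listing 1.1 (action choices elided).
jackie_chan = [
    (["far"], ("run_towards", "punch_high")),
    (["near", "stronger"], ("crouch", "kick_high")),
    (["weaker"], ("run_away", "block_high")),
    (["always"], ("walk_towards", "punch_high")),
]

chosen = select_rule(jackie_chan, {"near", "weaker"})  # third rule fires
```

Note that because the always rule terminates the list, a fighter with such a rule performs some action in every game state.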
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Model Extraction using Search</title>
      <p>
        Our approach proposes that information of interest expressed by the
environment be captured in a data model. This model becomes the starting point of a
metaheuristic search algorithm which attempts to extract a model of the
environment's behaviour. This model is then used to discover optimal system behaviour
(see section 2 and [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]). This section overviews our environment behaviour model
extraction process and describes its application in the context of SAF.
      </p>
      <p>
        Model Extraction Process
Extracting a model from a corpus of data can be achieved in numerous ways,
such as game theory or inference from domain knowledge [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. In this paper, we
examine an extraction process based on metaheuristic optimisation. The process
assumes the existence of two metamodels: one that meaningfully captures data
coming from the environment; and another that represents the environment's
behaviour, which could, for example, be a state machine or a domain-specific
model. The information contained in the environment data model is used to
guide the metaheuristic algorithm in inferring the model of the environment's behaviour.
In order to infer the behaviour model, we utilise our generic, search-amenable
representation of models [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. For model extraction purposes, we search over the
space of environment behaviour models (defined by the metamodel) using the
environment data model to inform the search. We now describe this approach
using SAF as a contextualising example.
      </p>
      <p>SAF: Extracting Player Behaviour
Our goal of making SAF self-adapting is met by trying to infer an FDL
model of the way the human is playing the game, and using this model to select
appropriate opponents. In SAF's current form, players specify their behaviour
using an FDL model, which may not accurately reflect human behaviour in a
more interactive, reactive game. To address this, our experiments (section 4)
create mutations of the human model as a method of introducing noise into the
data to better simulate the non-uniform behaviour of a human player. Having
the original FDL model of the actual human, however, does allow us to validate
the extracted model by comparing it against the human model (see section 4).</p>
      <p>
        Our extraction workflow is a series of model management operations, and
is illustrated (with respect to the experimentation) in figure 3. To increase the
chances of finding optimal solutions, we defined a simplified (but equivalent)
version of the FDL metamodel which is more amenable3 to our model-search
technique. We have defined a two-way model-to-model transformation from our
simplified metamodel to the FDL metamodel. During the evaluation of a
candidate solution, it is transformed into FDL for use in SAF. The model
management operations, and the model-search tool, are all written using the Epsilon4
languages, and are chained together using Epsilon's orchestration framework [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
The Environment SAF produces a log file for each fight that takes place,
where each game state is formatted as follows:
      </p>
      <p>P1X; P1H; P1M; P1F; P2X; P2H; P2M; P2F
where PN refers to either player one or player two; X is the player's position; H
is the player's health; and M and F are the movement action and fight
action that the player is performing. Some actions take more than one time step.
In these cases, SAF logs the player as being blocked and stops the player from
performing another action. The contents of the log files are the behaviour trace
being produced by the environment.</p>
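Reading one such log entry can be sketched as below. The field separator and field names follow the format just described; the concrete values and the dictionary layout are our own illustrative choices, not SAF's log-reading code.

```python
# Hypothetical parser for one SAF log entry of the form
#   P1X;P1H;P1M;P1F;P2X;P2H;P2M;P2F
# (position, health, movement action, fight action for each player).
def parse_state(line):
    f = line.strip().split(";")
    return {
        "p1": {"x": int(f[0]), "health": int(f[1]), "move": f[2], "fight": f[3]},
        "p2": {"x": int(f[4]), "health": int(f[5]), "move": f[6], "fight": f[7]},
    }

state = parse_state("10;85;walk_towards;punch_high;13;90;crouch;block_low")
```

A whole fight trace is then simply the list of parsed states, one per time step.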
      <p>We have defined a data metamodel (available at www.jamesrobertwilliams.
co.uk/adaptation) which captures the information contained in the logs in a
more useful way: frequency counts of conditions and actions. In every game
state, there are always two conditions that are applicable. This is due to the
fact that conditions come in two distinct categories: those related to health
(e.g. much weaker, stronger) and those related to distance from the opponent
3 The FDL metamodel is reference-heavy, a feature which the model-search tool
struggles with. We believe this refactoring process could be automated but leave this for
future work.
4 www.eclipse.org/epsilon
(far, near). Exactly one condition from each category will be applicable in each
game state. Therefore, our data model captures the log file information in two
ways. Firstly, it captures the frequencies of actions against single conditions, and
secondly, it captures the frequencies of actions against pairs of conditions. The
two conditions applicable in each game state are calculated using the same rules
that SAF uses. For example, near is applicable when |P1X - P2X| ≤ 5.</p>
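Building the condition-pair frequency counts can be sketched as follows. Only the near threshold (|P1X - P2X| ≤ 5) comes from the text; the health rule and the sample states are illustrative assumptions, and the real data model also tracks single-condition frequencies, omitted here.

```python
from collections import Counter

# Sketch: for each logged state, compute the two applicable conditions (one
# distance-related, one health-related) and count which action pair the
# player performed under that condition pair.
def conditions(p1x, p2x, p1h, p2h):
    distance = "near" if abs(p1x - p2x) <= 5 else "far"   # rule from the text
    # Illustrative health rule; SAF's real thresholds may differ.
    health = "stronger" if p1h > p2h else ("even" if p1h == p2h else "weaker")
    return (distance, health)

pair_counts = Counter()
states = [  # (p1x, p2x, p1h, p2h, (move, fight)) per time step
    (10, 13, 85, 90, ("walk_towards", "punch_high")),
    (10, 30, 85, 90, ("run_towards", "punch_high")),
    (10, 12, 95, 90, ("crouch", "kick_high")),
]
for p1x, p2x, p1h, p2h, actions in states:
    pair_counts[(conditions(p1x, p2x, p1h, p2h), actions)] += 1
```

The resulting counter is the condition-pair view of the data model: it records how often each action pair was observed under each pair of applicable conditions.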
      <p>Logging two conditions in every state makes manually inferring the player's
behaviour rules very tricky. When defining behaviour rules in FDL, the player is
not required to specify two conditions, and so a rule whose only condition is near
may be executed in any of the health-related conditions. Therefore, although
manual inference of the behaviour rules initially seemed trivial, further inspection
suggested that it is not, and therefore might be a good candidate for metaheuristic
optimisation.</p>
      <p>Partial Model Extraction The logs produced by SAF only give information
about the behaviour of the fighter, and not its personality. This information
cannot be extracted from the data model5 and so we propose a two-stage
extraction process. These stages are SAF-specific but may prove useful for other
case studies. Firstly, we extract the behaviour rules from the data model using
a genetic algorithm (GA), and then we use a hill climbing algorithm to discover
the personality parameters and thus complete the environment behaviour model.
We use a GA to discover the rules because the behaviour rules are complex and
interrelated, meaning that there are likely to be many local optima, which makes
the problem unsuitable for hill climbing. To discover the personality parameters,
however, the search space is much smaller and contains fewer local optima, and
so hill climbing is used.</p>
      <p>To evaluate the fitness of a candidate solution in the GA, we compare its
behaviour with respect to the information contained in the data model. Each
condition-pair entry in the data model is passed to the candidate solution as
if it were the game state. The first rule that is applicable with respect to the
pair of conditions is selected and 'executed' the same number of times that the
condition pair appears in the data model. The rule is 'executed' multiple times
because it may contain action choices and therefore result in different actions.
The frequencies of the resulting move and fight actions are compared against
the frequencies found in the data model. The fitness is calculated as the sum of
squares of the distances between the target frequencies and the actual frequencies.
We also reward partial matches (e.g. where the move action matches but the
fight action does not).</p>
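The core of this fitness calculation can be sketched as below. This is a simplified reconstruction: the candidate is modelled as a function from a condition pair to a single action pair (so choose constructs would yield varying actions over repeated calls), and the partial-match reward mentioned above is omitted.

```python
# Sketch of the GA fitness: replay each condition-pair entry of the data
# model against the candidate, tally the actions it produces, and sum the
# squared differences from the observed action frequencies (lower is better).
def fitness(candidate_actions, data_model):
    total = 0
    for cond_pair, observed in data_model.items():
        n = sum(observed.values())          # occurrences of this condition pair
        action = candidate_actions(cond_pair)
        produced = {action: n}              # deterministic candidate, simplified
        for a in set(observed) | set(produced):
            total += (observed.get(a, 0) - produced.get(a, 0)) ** 2
    return total

# One condition-pair entry: observed action frequencies out of 10 occurrences.
data_model = {("near", "weaker"): {("run_away", "block_high"): 8,
                                   ("crouch", "block_low"): 2}}
score = fitness(lambda cp: ("run_away", "block_high"), data_model)
```

Here the candidate always answers with the majority action, so the residual error comes only from the two minority occurrences it fails to reproduce.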
      <p>Population Seeding For many metaheuristic optimisation algorithms, the
effectiveness of the algorithm can depend on where in the solution space the
algorithm begins. For population-based metaheuristic algorithms, such as GAs, it is
5 Thorough analysis of the log file can actually shed some light on the personality, for
instance, by tracking the distance moved when running, or analysing the effects of
punches that make contact. This, however, is out of scope for this paper.
common to create an initial population of diverse candidate solutions in order
to promote exploration of the search space.</p>
      <p>We can, however, make use of the information contained in the data model
and attempt to create an initial population which starts the search in an area of
the solution space that is closer to the optimal solution, thereby improving the
performance of the search. This process is called seeding, and is not SAF-specific.
We have implemented two different types of seeding, which we compare in section
4. Although seeding can be useful, it may bias the search towards sub-optimal
solutions. As such, when seeding we also include some other solutions created
randomly. We assess the effectiveness of these seeding strategies in section 4.
Random Seeding Our random seeding algorithm creates a set of simple candidate
solutions (fighters) by randomly selecting entries from the data model and creating
behaviour rules. Up to 4 composite-condition rules are created from randomly
selected condition-pair entries in the data model, and up to 3 single-condition
rules are created by randomly selecting single-condition entries from the data
model. Once the set of rules has been created, the fighter is mapped down to its
integer string form and added to the initial population.
'Intelligent' Seeding Instead of randomly selecting entries from the data model,
we can do a small amount of inference using domain knowledge to create the
rules. For example, if we have two condition-pair entries which have different
action sets, we can create a composite-condition rule which uses FDL's choice
construct to select between the actions.</p>
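The random seeding strategy can be sketched as follows. The "up to 4" and "up to 3" rule counts and the trailing always rule come from the description above; the fixed placeholder actions are an illustrative assumption (the real seeder derives actions from the data model), as is the whole function shape.

```python
import random

# Sketch of random population seeding: build a simple candidate fighter by
# sampling condition entries from the data model and turning them into rules.
def seed_fighter(pair_entries, single_entries, rng):
    rules = []
    # Up to 4 composite-condition rules from condition-pair entries.
    for cond_pair in rng.sample(pair_entries, k=min(4, len(pair_entries))):
        rules.append((list(cond_pair), ("walk_towards", "punch_high")))
    # Up to 3 single-condition rules from single-condition entries.
    for cond in rng.sample(single_entries, k=min(3, len(single_entries))):
        rules.append(([cond], ("run_away", "block_high")))
    rules.append((["always"], ("stand", "block_low")))  # fallback rule
    return rules

rng = random.Random(42)
fighter = seed_fighter([("near", "weaker"), ("far", "stronger")], ["far"], rng)
```

Each seeded fighter would then be mapped to its integer string form before being placed in the GA's initial population, alongside some purely random individuals to limit seeding bias.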
      <p>Model Completion As previously mentioned, it may not always be
possible to infer the entire environment behaviour model, as is the case in SAF. In
this instance, we propose using a different metaheuristic optimisation algorithm:
hill climbing. This algorithm attempts to find the correct allocation of
personality parameters for the partial model that resulted from the GA. As SAF
is aware of the previous opponents that the player has fought, we can use them
to evaluate candidate personality parameters. The partial model is, therefore,
updated with the candidate personality and then pitted against the same opponents
that produced the data in the original data model. The fitness of the personality
is calculated as the distance between the resulting data model and the original
data model.</p>
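The personality search can be sketched as a basic hill climber. The evaluation function below is a stand-in: where the paper re-fights the previous opponents and measures the distance between data models, the sketch uses a simple parameter-distance function; parameter names match FDL, the 1..10 range and neighbourhood move are illustrative assumptions.

```python
import random

# Sketch of hill climbing over personality parameters: repeatedly perturb
# one parameter and keep the change if the evaluated distance decreases.
def hill_climb(evaluate, params, steps, rng):
    best = dict(params)
    best_score = evaluate(best)
    for _ in range(steps):
        candidate = dict(best)
        key = rng.choice(list(candidate))
        candidate[key] = max(1, min(10, candidate[key] + rng.choice([-1, 1])))
        score = evaluate(candidate)
        if best_score > score:          # lower distance is better
            best, best_score = candidate, score
    return best

target = {"kickPower": 7, "punchPower": 5, "kickReach": 3, "punchReach": 9}
def distance(p):  # stand-in for re-fighting opponents and comparing data models
    return sum(abs(p[k] - target[k]) for k in target)

start = {"kickPower": 5, "punchPower": 5, "kickReach": 5, "punchReach": 5}
result = hill_climb(distance, start, steps=200, rng=random.Random(1))
```

Because each move is only accepted on improvement, the returned personality is never worse than the starting one; with few parameters and a smooth landscape this converges quickly, which is why hill climbing suffices here where the rule search needed a GA.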
      <p>We now illustrate this approach by investigating whether or not it is able to
successfully extract human models of increasing complexity.</p>
    </sec>
    <sec id="sec-4">
      <title>Exploring the Effectiveness of Search</title>
      <p>We now evaluate our approach by attempting to successfully extract the models
of three input fighters. The fighters (available at www.jamesrobertwilliams.
co.uk/adaptation) are of increasing complexity: the simple fighter uses only</p>
      <sec id="sec-4-21">
        <p>atomic conditions and no action choice; the medium fighter has some rules with
composite conditions but still no action choice; and the complex fighter has rules
with both composite conditions and action choices.</p>
        <p>Fig. 3. The experimental workflow of the environment behaviour model extractor.
An opponent generator produces random opponents (FDL); these, together with the
human player model (FDL) and the mutants produced by the mutant creator, are
input to fights in SAF, which produce a fight trace data model. The data model is
input to a population seeding algorithm and to the FDL rule extractor (GA), which
produces a partial FDL model; personality hill climbing then completes the FDL
model, which is validated by semantic comparison against the input model.</p>
        <p>To make the problem more challenging and simulate the non-uniform
behaviour of real humans, we also automatically create new instances of each fighter
with slight stochastic variations, called mutants. For instance, we might mutate
the condition of a particular rule to produce variable behaviour. When creating
the data model, one of the mutants is selected randomly to participate in the
fight. This produces a noisier data model, which is intended to simulate more
realistic behaviour.</p>
        <p>Following the process outlined in figure 3, we investigate six scenarios for
each of the three human models, analysing both the effects of mutants and the
effects of population seeding:
1. #mutants = 0; population seeding = none
2. #mutants = 0; population seeding = random
3. #mutants = 0; population seeding = intelligent
4. #mutants = 5; population seeding = none
5. #mutants = 5; population seeding = random
6. #mutants = 5; population seeding = intelligent</p>
        <p>For fairness, each experiment is executed ten times, each with a different
seed to the pseudo-random number generator. The environment is simulated by
fighting the human (and mutants) against 50 random opponents.</p>
        <p>Algorithm Settings The GA is configured with a population of size 10 and
executed for 20 generations, and the personality hill climbing algorithm is
executed for 50 generations. The complete set of parameters used for the
metaheuristic algorithms is available at www.jamesrobertwilliams.co.uk/adaptation. The
parameter values selected for a metaheuristic technique play a crucial role in
its efficacy. At this stage, we are not concerned with efficiency (although this is
obviously extremely important for adaptation at runtime) and so no substantial
effort was made to tune the parameters to this problem. Our goal is to determine the
feasibility of using this approach to adapt components at runtime, and to analyse
whether population seeding is a potential method of improving the performance.
Response In order to validate whether or not we successfully extracted the
human model, we devised a semantic fighter comparator. A structural comparison
of the fighters would not be enough, as different combinations of rules and
personalities can result in equivalent behaviour. Our semantic comparator,
therefore, aims to determine whether the extracted model is semantically equivalent to the
input model: in other words, when fighting against the same opponents, do they
perform equally well? In order to calculate a semantic similarity score, the extracted
model and the input model both fight against 100 randomly generated
opponents. Each opponent is fought ten times, and the number of times that the
human model (extracted or input) wins is noted. If the human has mutants,
one of the mutants is selected randomly for each individual fight. The semantic
similarity score is then calculated as:</p>
        <p>score = ( Σ_{i=1..100} |#WIN_i^input - #WIN_i^extracted| ) / 100
where #WIN_i^model is the number of times out of ten that the model beat
opponent i. Scores range between 0 (semantically equivalent) and 10 (semantically
dissimilar).</p>
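The score is thus the mean absolute difference in per-opponent win counts, which can be sketched directly (using three opponents instead of 100 for brevity; the win counts are made up):

```python
# Sketch of the semantic similarity score: both models fight the same
# opponents ten times each; the score is the mean absolute difference in
# win counts (0 = behave equally well, 10 = maximally dissimilar).
def similarity(wins_input, wins_extracted):
    assert len(wins_input) == len(wins_extracted)
    diffs = (abs(a - b) for a, b in zip(wins_input, wins_extracted))
    return sum(diffs) / len(wins_input)

# Win counts (out of ten fights) against three sample opponents.
score = similarity([10, 7, 2], [8, 7, 5])
```

A score of 0 does not require identical fighters, only fighters that win equally often against every shared opponent, which is exactly the behavioural equivalence the comparator is after.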
        <p>It is worth emphasising that this semantic comparison would not occur at
runtime (indeed, it would not be possible); it is used here purely as a way of
validating the approach. In addition to a semantic similarity score, we log the
time taken for the experiment to execute from start to finish (i.e. following the
entire path through figure 3).</p>
        <p>Results
Figure 4 shows the results of each experiment. The increase in difficulty caused
by adding mutants is immediately apparent when examining the results of the
simple and medium humans. Without mutants the metaheuristics find near-optimal
solutions (when seeded), but struggle when mutants are introduced.
Unexpectedly, the addition of mutants actually improved the performance for the
complex human; the variation in the results suggests that the fitness landscape
is noisy, which could contribute to this phenomenon.</p>
        <p>Fig. 4. Semantic comparison scores for each experiment (no seed, random seed,
and intelligent seed, with and without mutants) for the simple, medium, and
complex humans.</p>
        <p>The simple and medium human results highlight the effect of seeding. The
GA performs poorly without being seeded, likely because the randomly initialised
population contains many more complex solutions than simple ones. This
randomness appears to be advantageous for the complex human, where it is better
to omit seeding than to perform it, highlighting that our seeding techniques may</p>
        <p>Simple Human Medium Human Complex Human
need re-examining. Furthermore, the random seed outperforms the intelligent
seed for a similar reason { the intelligent seeder creates much more complex
solutions using choice and composite rule conditions which are not needed by the
two simpler humans. We performed a three-way analysis of variance (ANOVA)
and found that all three factors (human complexity, mutants, and seeding) have
a signi cant e ect (p &lt; 0:05).</p>
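        <p>As a hypothetical illustration of the seeding strategies compared above, consider initial populations of integer-encoded individuals (as in grammatical evolution). The function below is our own sketch, not the paper's seeder: the "simple" strategy biases toward shorter (less complex) individuals, whereas the "random" strategy samples lengths uniformly.</p>

```python
import random

def seed_population(pop_size, strategy="random", max_len=10):
    # Each individual is a list of integers standing in for an
    # encoded behaviour model. "simple" takes the minimum of two
    # uniform draws, biasing the population toward short individuals.
    pop = []
    for _ in range(pop_size):
        if strategy == "simple":
            length = min(random.randint(1, max_len), random.randint(1, max_len))
        else:
            length = random.randint(1, max_len)
        pop.append([random.randint(0, 255) for _ in range(length)])
    return pop

simple_pop = seed_population(50, strategy="simple")
random_pop = seed_population(50, strategy="random")
```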
        <p>Execution times for the entire extraction process ranged from one
to ten minutes (median five). This, however, includes the semantic validation
phase, which would not occur at runtime. Furthermore, no effort has been made to
tune the performance of the implementation of the metaheuristics or the
integer-to-model transformation that they use. While this approach may not be fast enough
for systems requiring near-instantaneous adaptation, a tuned implementation
may adapt sufficiently quickly for systems where the environment changes more
slowly (such as the change in the ability of a game player considered here).</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Discussion</title>
      <p>The aim of this paper was to explore the potential of utilising metaheuristics to
enable efficient component-based adaptation at runtime. We presented a
model-driven approach to adapting systems at runtime which utilises metaheuristic
optimisation in order to, firstly, extract a model of the environment, and secondly,
discover the optimal system response. Our focus was on using metaheuristic
optimisation techniques to extract a model of the environment's behaviour, and we
illustrated using a case study the positive effects of seeding the metaheuristic.</p>
      <p>One issue is performance. Search can be expensive to perform and may not
be appropriate for systems whose adaptation time is very small. Conversely,
search can be useful in some time-dependent scenarios (such as adaptation), as
it can always return a 'good enough' solution at any point in its execution (as
opposed to a constructive approach to model inference, which is unable to return
a solution until it completes).</p>
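      <p>This 'anytime' property can be sketched as follows. The hill climber below is a minimal illustration of our own devising (not the paper's algorithm): it always holds a best-so-far solution, so it can be interrupted at any deadline and still return a usable answer.</p>

```python
import random
import time

def anytime_hill_climb(fitness, initial, neighbour, budget_s=0.05):
    # Keep a best-so-far solution at all times; stop when the
    # time budget (the adaptation deadline) runs out.
    best, best_f = initial, fitness(initial)
    deadline = time.monotonic() + budget_s
    while deadline - time.monotonic() > 0:
        cand = neighbour(best)
        cand_f = fitness(cand)
        if cand_f > best_f:  # accept only improvements
            best, best_f = cand, cand_f
    return best, best_f

# Toy landscape: maximise -(x - 3)^2 over the integers.
best, best_f = anytime_hill_climb(
    fitness=lambda x: -(x - 3) ** 2,
    initial=0,
    neighbour=lambda x: x + random.choice([-1, 1]),
)
# converges to x == 3 (best_f == 0) well within the budget
```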
      <p>There is much work that we would like to investigate in the future: further
analysis of seeding strategies and search parameters to improve performance;
optimisation of the search algorithms; extracting an environment model based
on time (e.g. player behaviour may change at different times, such as the start
or end of a game); analysis of the realism that mutants introduce; application of the
techniques to other systems; and comparison against other model extraction techniques.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>This research was supported by the EPSRC, through the Large-Scale Complex
IT Systems project, EP/F001096/1, and the Dynamic Adaptive Automated
Software Engineering project, EP/J017515/1. The authors would like to thank Louis
Rose and Dimitrios Kolovos for their advice and feedback.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>B. H.</given-names>
            <surname>Cheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>de Lemos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Giese</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Inverardi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Magee</surname>
          </string-name>
          .
          <article-title>Software engineering for self-adaptive systems: A research roadmap</article-title>
          .
          <source>In LNCS 5525</source>
          , pages
          <fpage>1</fpage>
          -
          <lpage>26</lpage>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>S.</given-names>
            <surname>Efftinge</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Voelter</surname>
          </string-name>
          . oAW xText:
          <article-title>A framework for textual DSLs</article-title>
          .
          <source>In Proc. Workshop on Modelling, Eclipse Con</source>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Goldsby</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. H. C.</given-names>
            <surname>Cheng</surname>
          </string-name>
          . Avida-MDE:
          <article-title>a digital evolution approach to generating models of adaptive software behavior</article-title>
          .
          <source>Proc. GECCO'08</source>
          , pages
          <fpage>1751</fpage>
          -
          <lpage>1758</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          .
          <article-title>The current state and future of search based software engineering</article-title>
          .
          <source>In Proc. FOSE '07</source>
          , pages
          <fpage>342</fpage>
          -
          <lpage>357</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>M.</given-names>
            <surname>Harman</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. F.</given-names>
            <surname>Jones</surname>
          </string-name>
          .
          <article-title>Search based software engineering</article-title>
          .
          <source>Information and Software Technology</source>
          ,
          <volume>43</volume>
          (
          <issue>14</issue>
          ):
          <fpage>833</fpage>
          -
          <lpage>839</lpage>
          ,
          <year>December 2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Kolovos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Rose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Paige</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcia-Dominguez</surname>
          </string-name>
          .
          <article-title>The Epsilon book</article-title>
          .
          <source>Unpublished</source>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>M.</given-names>
            <surname>O'Neill</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Ryan</surname>
          </string-name>
          .
          <article-title>Grammatical evolution</article-title>
          .
          <source>IEEE Trans. Evol. Comput.</source>
          ,
          <volume>5</volume>
          (
          <issue>4</issue>
          ):
          <fpage>349</fpage>
          -
          <lpage>358</lpage>
          ,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Ramirez</surname>
          </string-name>
          and
          <string-name>
            <given-names>B. H. C.</given-names>
            <surname>Cheng</surname>
          </string-name>
          .
          <article-title>Evolving models at run time to address functional and non-functional adaptation requirements</article-title>
          .
          <source>In Proc. MART 2009</source>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>C.</given-names>
            <surname>Ryan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Collins</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>O'Neill</surname>
          </string-name>
          .
          <article-title>Grammatical evolution : Evolving programs for an arbitrary language</article-title>
          .
          <source>In LNCS 1391</source>
          , pages
          <fpage>83</fpage>
          -
          <lpage>96</lpage>
          . Springer,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>M.</given-names>
            <surname>Sanchez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Barrero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Villalobos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D.</given-names>
            <surname>Deridder</surname>
          </string-name>
          .
          <article-title>An Execution Platform for Extensible Runtime Models</article-title>
          .
          <source>In Proc. MART 2008</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Paige</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Kolovos</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F. A. C.</given-names>
            <surname>Polack</surname>
          </string-name>
          .
          <article-title>Search-based model driven engineering</article-title>
          .
          <source>Technical Report YCS-2012-475</source>
          , Department of Computer Science, University of York,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Williams</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Poulding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Rose</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. F.</given-names>
            <surname>Paige</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F. A. C.</given-names>
            <surname>Polack</surname>
          </string-name>
          .
          <article-title>Identifying desirable game character behaviours through the application of evolutionary algorithms to model-driven engineering metamodels</article-title>
          .
          <source>LNCS</source>
          ,
          <volume>6956</volume>
          :
          <fpage>112</fpage>
          -
          <lpage>126</lpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>