<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Extracting Physics from Blended Platformer Game Levels</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Adam Summerville</string-name>
          <email>asummerville@cpp.edu</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anurag Sarkar</string-name>
          <email>sarkar.an@northeastern.edu</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sam Snodgrass</string-name>
          <email>sam@modl.ai</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Joseph Osborn</string-name>
          <email>joseph.osborn@pomona.edu</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>California State Polytechnic University</institution>
          ,
          <country country="US">USA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Northeastern University</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Pomona College</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>Several recent PCGML methods have focused on generating game levels and content that blend the properties of multiple games. However, these works ignore the fact that blended levels must in some way have blended physics models that enable playable levels. In this work, we present an approach for extracting jump physics models for such blended game domains. We make use of variational autoencoders (VAEs) trained on level data from six platformers, encoded using a previously introduced path and affordance vocabulary. Our results show that the extraction model is able to reasonably recreate the original physics models when given ground truth paths, and is able to produce physics models that reliably allow an agent to play the generated levels. We also find promising indications that blended physics models behave intuitively, falling between the physics models of the original games being blended.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        While methods for procedural content generation via
machine learning (PCGML) (Summerville et al. 2018) were
initially motivated by wanting to generate novel content
in the style of existing games such as Super Mario Bros.
        <xref ref-type="bibr" rid="ref13 ref13 ref4 ref4 ref5 ref5 ref8">(Summerville and Mateas 2016; Guzdial and Riedl 2016a;
Snodgrass and Ontan˜o´n 2017)</xref>
        and The Legend of Zelda
        <xref ref-type="bibr" rid="ref3">(Summerville and Mateas 2015)</xref>
        , a new body of work has
emerged that focuses on PCGML techniques that seek to
leverage trained models to blend existing game domains
and/or generate new domains altogether. This has produced
works that leverage more creative PCGML approaches such
as domain transfer
        <xref ref-type="bibr" rid="ref13 ref4 ref5">(Snodgrass and Ontanon 2016;
Snodgrass 2019)</xref>
        , model blending
        <xref ref-type="bibr" rid="ref10 ref13 ref4 ref5 ref6">(Guzdial and Riedl 2016b;
Sarkar and Cooper 2018)</xref>
        , computational creativity
        <xref ref-type="bibr" rid="ref10 ref6">(Guzdial and Riedl 2018)</xref>
        , training on multiple domains to learn
blended domains
        <xref ref-type="bibr" rid="ref1 ref12">(Sarkar, Yang, and Cooper 2019)</xref>
        or a
combination of the above
        <xref ref-type="bibr" rid="ref11">(Snodgrass and Sarkar 2020)</xref>
        .
      </p>
      <p>
        While some works have included path information, there
has been no notion of completing the circle, i.e., do the
physics latent within generated paths encode a physics
model that would allow for playing the level? And if so, how
does one extract these latent physics models? Further, while
working within the domain of a single game might make
this unnecessary (e.g., one could just use the original Mario physics
when generating Mario levels), in blended domains there
is no ground truth physics model to fall back upon.
Recently,
        <xref ref-type="bibr" rid="ref11">Sarkar et al. (2020)</xref>
        trained generative models for
such blended domains by leveraging a new path and
affordance vocabulary that enabled generation of blended
levels with paths and jumps. In this work, we directly extend
this work by leveraging the jumps found in these
generated blended levels to extract physics models for different
blended domains. We do this by first generating levels
targeting specific games and game blends, with special
attention to generating paths that encode the directionality of the
path. We then extract physics models that could reasonably
have created the generated paths. We test this procedure by
comparing the extracted physics to the ground truth physics,
and examine the physics of blended domains, seeing how the
physics alter with respect to the level geometry.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        Most prior techniques for procedural content generation via
machine learning (PCGML) (Summerville et al. 2018) have
focused on learning models for a single game. Such
methods have involved using autoencoders
        <xref ref-type="bibr" rid="ref8">(Jain et al. 2016)</xref>
        ,
LSTMs
        <xref ref-type="bibr" rid="ref13 ref4 ref5">(Summerville and Mateas 2016)</xref>
        , GANs (Volz et
al. 2018), Bayes Nets
        <xref ref-type="bibr" rid="ref13 ref4 ref5 ref8">(Guzdial and Riedl 2016a)</xref>
        , n-grams
        <xref ref-type="bibr" rid="ref2">(Dahlskog, Togelius, and Nelson 2014)</xref>
        and Markov
models (Snodgrass and Ontañón 2017) for learning generative
models for games such as Super Mario Bros., The Legend
of Zelda and Kid Icarus. In an effort to address
generalization and lack of data as well as wanting to discover and
create new game domains (similar to e.g. the game blending
framework of
        <xref ref-type="bibr" rid="ref3">Gow and Corneli (2015)</xref>
        ), more recent works
have built models that work with multiple games and
domains at the same time. This has included domain
transfer
        <xref ref-type="bibr" rid="ref13 ref4 ref5">(Snodgrass and Ontanon 2016; Snodgrass 2019)</xref>
        , game
generation
        <xref ref-type="bibr" rid="ref10 ref6">(Guzdial and Riedl 2018)</xref>
        and game blending
        <xref ref-type="bibr" rid="ref1 ref10 ref11 ref12 ref6">(Sarkar and Cooper 2018; Sarkar, Yang, and Cooper 2019;
Snodgrass and Sarkar 2020)</xref>
        . Recent work
        <xref ref-type="bibr" rid="ref11">(Sarkar et al.
2020)</xref>
        built on these latter game blending approaches by
extending their domain from two to six games, introducing a
path and affordance vocabulary and training on levels
annotated with A* paths derived from the jump arcs of the
respective games. This enabled generation of blended
levels spanning all the games while also containing traversable
paths and jumps. In this paper, we utilize the paths and jumps
in the blended levels generated by this latter approach to
extract physics models for the blended domains.
      </p>
      <p>
        Such physics models have not been the subject of much
prior PCGML work with a majority of prior PCGML
research focusing on learning models of game levels and only
a few attempting to learn models of game physics and game
rules.
        <xref ref-type="bibr" rid="ref7">Guzdial, Li, and Riedl (2017</xref>
        ) presented an approach
termed game engine search for learning the rules of Super
Mario Bros. using video gameplay data. Summerville,
Osborn, and Mateas (2017) learned a hybrid automaton
describing the jump physics in Mario. Similarly, Summerville
et al. (2017) used data from a Nintendo Entertainment
System (NES) emulator to learn automata describing the jump
physics of a large number of NES platformer games. To our
knowledge, our work is the first to extract such physics
models for blended game domains.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Level Data and Representation</title>
      <p>
        For our approach, we used six classic NES platformer games
– Super Mario Bros., Super Mario Bros. II: The Lost
Levels, Ninja Gaiden, Metroid, Mega Man, and Castlevania – all
represented using the path and affordance vocabulary
introduced in
        <xref ref-type="bibr" rid="ref11">(Sarkar et al. 2020)</xref>
        , which in turn was derived
using the Video Game Level Corpus (Summerville et al. 2016)
and the Video Game Affordance Corpus
        <xref ref-type="bibr" rid="ref1">(Bentley and
Osborn 2019)</xref>
        . Because these games have disparate
vocabularies of tiles, we need a common language to describe all of
the levels – solid, climbable, passable, powerup,
hazard, moving, portal, collectable, and breakable. These
affordances can be combined – e.g., a breakable brick would be
“breakable+solid” – which leads to 14 unique combinations
(see
        <xref ref-type="bibr" rid="ref11">(Sarkar et al. 2020)</xref>
        for a more detailed description).
      </p>
      <p>
        A key difference between the level representations found
here and in the earlier work of
        <xref ref-type="bibr" rid="ref11">Sarkar et al. (2020)</xref>
        is the
representation found here includes not just path information
but also the directionality of the path – the starting and
ending position found in a segment have a special
representation. This allows the downstream physics extraction process
to extract the correct physics as paths are not necessarily
bi-directional (e.g., very large falls should be represented as
such, and not very high jumps). See Figure 1 for an example.
      </p>
      <p>To account for differences in sizes and dimensions of the
levels in each game, we used a uniform segment size of
15×32 for all games, adding vertical padding as required.
We focused on horizontal sections of levels, thereby
ignoring the vertical sections found in Ninja Gaiden, Metroid and
Mega Man. After a filtering process to discard duplicate
segments and segments mixing discrete rooms, we ended up
with 1907 segments for Mario (SMB) (referring to both
versions of Mario mentioned above), 504 segments for Ninja
Gaiden, 1833 segments for Metroid, 924 segments for Mega
Man and 775 segments for Castlevania.</p>
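      <p>The uniform-size step above can be sketched as follows; the rows-of-characters segment format and the “-” empty tile are illustrative assumptions, not the paper’s actual data format:</p>

```python
def pad_segment(segment, target_h=15, empty="-"):
    """Pad a level segment (a list of equal-width tile rows) with empty
    rows on top until it reaches the uniform target height (15 here).
    Keeping the existing rows at the bottom preserves ground alignment."""
    width = len(segment[0])
    padding = [empty * width] * (target_h - len(segment))
    return padding + segment

# Toy 10-row, 32-column segment of solid ("X") tiles.
padded = pad_segment(["X" * 32] * 10)
```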
    </sec>
    <sec id="sec-4">
      <title>Generative Model</title>
      <p>
        For generating levels from which to extract physics, we used
a Gated Recurrent Unit Variational Autoencoder
(GRU-VAE), implemented using PyTorch
        <xref ref-type="bibr" rid="ref9">(Paszke et al. 2017)</xref>
        . The
encoder consisted of 3 hidden layers of size 1024 while
the decoder had 2 hidden layers of size 256—both using a
dropout rate of 50%. To help with convergence, the weight on
the variational loss was annealed linearly from 0 to 0.05
over the first 5 epochs, with the rest of the
training continuing at that weight – for a total of 50 epochs using
the Adam optimizer and a learning rate of 10<sup>−5</sup>. At decoding
time, the decoder is initialized with a latent embedding and
then decodes in an auto-regressive manner with sampling.
For each generation, we sampled 10 segments and kept the
one with the lowest perplexity (highest likelihood).
      </p>
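      <p>The sample-and-keep-the-best decoding step can be sketched as below; the <monospace>decode</monospace> callable stands in for the stochastic GRU-VAE decoder and its (tokens, per-token log-probabilities) return shape is an assumption of ours:</p>

```python
import math

def generate_segment(decode, latent, n_samples=10):
    """Decode the same latent several times and keep the most likely
    sample, as in the text (10 samples, lowest perplexity kept).
    `decode` is a stand-in stochastic decoder assumed to return
    (tokens, per-token log-probabilities) for a latent vector."""
    best, best_ppl = None, float("inf")
    for _ in range(n_samples):
        tokens, log_probs = decode(latent)
        # Perplexity = exp(-mean log-likelihood): lower means more likely.
        ppl = math.exp(-sum(log_probs) / len(log_probs))
        if ppl < best_ppl:
            best, best_ppl = tokens, ppl
    return best, best_ppl
```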
    </sec>
    <sec id="sec-5">
      <title>Physics Extraction</title>
      <p>To extract the physics, we must first define the “physics” of
a static level. In part, this seems ill-posed, as a static level
cannot have a conventional physics model: there is no
notion of time. However, while this seems like an intractable
problem, we believe that for several platformer games, there
is an implicit correlation between horizontal position and
time – e.g., a speedrunner of Mario is almost always moving
to the right as quickly as they possibly can. In fact, the A*
agent that we use to simulate “playing” the levels also
operates under this assumption. Thus, we think it is reasonable to
relax the physics models from a notion of y position versus
time to a relation of y position to x position – with the
understanding that the x position is supposed to be constantly
progressing in the direction of the goal. It is important to
note that the “physics” model we are extracting actually
supports an infinite number of different possible physics
models – changing the maximal x speed will result in different
physics models. Some games have much slower horizontal
speeds (Castlevania has a maximal horizontal speed of 3.7
tiles per second), while others have much faster speeds
(Super Mario Bros. has a maximal horizontal speed of 10 tiles
per second) – the rest of the games we looked at have speeds
of around 5.5 tiles per second. If one wished to take these
extracted physics and use them in a playable game, the
different x speeds would result in different feeling games, but
somewhere in the 4 to 10 tiles per second range would result
in games playable by humans.</p>
      <table-wrap id="tab1">
        <label>Table 1</label>
        <caption>
          <p>The extracted “physics” model in contrast to the standard physics model found in the games.</p>
        </caption>
        <table>
          <thead>
            <tr><th/><th>Standard Physics Model</th><th>Extracted Physics Model</th></tr>
          </thead>
          <tbody>
            <tr><td>Parameters</td><td>Impulse (∂y/∂t), Gravity (∂²y/∂t²)</td><td>Impulse (∂y/∂x), Gravity (∂²y/∂x²)</td></tr>
            <tr><td>Assumptions</td><td>Player has control over height of jump; player can alter horizontal position during jump</td><td>Player takes the highest possible jump; player is always moving at maximum horizontal speed</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-5-3">
        <title>Assumptions</title>
        <p>We also note that a large number of platformer games
allow for the player to control the arc of the jump based
on how long they hold the jump button – in this work,
Super Mario Bros., Metroid, and Mega Man all allow for this,
while Castlevania and Ninja Gaiden do not – and this
notion of player control is not contained within the static maps.
Again, we make the simplifying assumption that higher
jumps are preferred – we want to determine the frontier of
what space is reachable. Table 1 describes the “physics” in
contrast to the standard physics found in the game.</p>
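        <p>Because the extracted model relates y position to x position rather than to time, choosing a maximal horizontal speed picks out a concrete per-second physics via x = v·t. A minimal sketch of that conversion (the function and parameter names are ours):</p>

```python
def to_time_physics(impulse_x, gravity_x, tiles_per_second):
    """Convert the x-parameterized model y(x) = i*x + g*x^2 into a
    time-parameterized one by substituting x = v*t, which gives
    y(t) = (i*v)*t + (g*v^2)*t^2. Any v in roughly the 4-10
    tiles/second range discussed in the text stays human-playable."""
    v = tiles_per_second
    return impulse_x * v, gravity_x * v * v  # per-second impulse, gravity

# e.g., convert an x-parameterized jump using Castlevania-like speed.
imp_t, grav_t = to_time_physics(1.0, -0.125, 3.7)
```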
        <p>One final note – many games have different physics
models when the player is falling in their jump versus when
they are in the rising portion of their jump – e.g., in Super
Mario Bros. gravity can more than double when the player
is falling. As such, we learn a separate gravity value for the
rising and falling portions of a jump.</p>
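        <p>The resulting jump model – a quadratic in x with separate rising and falling gravities that switch at the apex – can be sketched as follows; variable names and the tile-per-step sampling are our assumptions:</p>

```python
def jump_arc(impulse, g_rise, g_fall, max_x):
    """Trace the frontier jump arc y(x) = impulse*x + g*x^2, switching
    from the rising gravity to the (often stronger) falling gravity at
    the apex. y is height in tiles above the takeoff point; gravities
    are negative."""
    apex_x = -impulse / (2 * g_rise)          # where dy/dx = 0
    apex_y = impulse * apex_x + g_rise * apex_x ** 2
    arc = []
    x = 0.0
    while x <= max_x:
        if x <= apex_x:                       # rising portion
            y = impulse * x + g_rise * x ** 2
        else:                                 # fall from the apex with g_fall
            dx = x - apex_x
            y = apex_y + g_fall * dx ** 2
        arc.append((x, y))
        x += 1.0
    return arc
```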
      </sec>
      <sec id="sec-5-4">
        <title>Extraction</title>
        <p>Having defined the physics model of a static level, we
now discuss the process for extracting said physics model.
To determine how the path represents the player’s position
through time, a Breadth-First Search is performed,
beginning from the start position and progressing until the end
position is found. This provides a coarse notion of the
progression of the path. The path is then followed and the
algorithm described in Figure 2 is used to separate the portions of
the path that are (1) grounded, (2) jumping, and (3) falling.</p>
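        <p>The breadth-first ordering step can be sketched as below; representing path tiles as (x, y) pairs and using 8-connectivity (jump arcs move diagonally) are assumptions of ours:</p>

```python
from collections import deque

def order_path(path_tiles, start, end):
    """Order an unordered set of path tiles from start to end with a
    breadth-first search, recovering the coarse progression of the
    path. Assumes the path is connected from start to end."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == end:
            break
        cx, cy = cur
        for dx in (-1, 0, 1):                 # 8-connected neighbors
            for dy in (-1, 0, 1):
                nxt = (cx + dx, cy + dy)
                if nxt in path_tiles and nxt not in came_from:
                    came_from[nxt] = cur
                    frontier.append(nxt)
    ordered, cur = [], end
    while cur is not None:                    # walk parents back from the end
        ordered.append(cur)
        cur = came_from[cur]
    return ordered[::-1]
```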
        <p>Once segmented, the segments are filtered to remove
noisy jumps:</p>
        <list list-type="bullet">
          <list-item>
            <p>Any jump or fall of two or fewer data points is removed –
these are too small to derive any useful physics from.</p>
          </list-item>
          <list-item>
            <p>Any segment that moves more than 2 tiles in a single step
is removed – these represent “broken” paths and are likely
to represent a corruption of the physics.</p>
          </list-item>
        </list>
        <fig id="fig2">
          <label>Figure 2</label>
          <caption>
            <p>The segment extraction algorithm, which separates a path into grounded (landed), jumping, and falling portions.</p>
          </caption>
          <preformat>
function SegmentExtraction(path) → jumps, falls
    l ← path[0]
    gp ← onGround(l)
    yp ← l.y
    jumps ← []
    falls ← []
    jumping ← not gp
    falling ← not gp
    seg ← [l]
    for l in path[1:] do
        g ← onGround(l)
        y ← l.y
        seg.append(l)
        if g and not gp then                 ▷ Landed
            jumping ← False
            falling ← False
            falls.append(seg)
            seg ← [l]
        else if not g and y &gt; yp then         ▷ In jump
            if not jumping then
                jumping ← True
                falling ← False
                seg ← [seg[-1]]
            end if
        else if not g and y &lt; yp then         ▷ Falling
            if jumping then
                jumps.append(seg)
            end if
            if not falling then
                falling ← True
                jumping ← False
                seg ← [seg[-1]]
            end if
        end if
        gp ← g
        yp ← y
    end for
end function</preformat>
        </fig>
        <p>For each x position in a jump, the highest
corresponding y value is recorded – e.g., if a jump consists of
[[0, 0], [0, 1], [1, 1], [1, 2]] then the highest recorded
positions per x value are [[0, 1], [1, 2]]. This is done for all jumps
and the statistics for the x positions found across all jumps
are calculated. Jumps are then scored by how many of their
y positions agree with the P percentile y values across all
jumps. P is then a hyperparameter that can be tuned to
determine what one expects to see from the jumps – given that
3 of the games have variable height jumps, our inductive
bias is that jumps higher than the median should be selected
given that we wish to find the upper extents of possible
jumps. We filter jumps that have more than 50%
disagreement with the P percentile jump. In the next section, we
discuss the criterion for the selection of P. Finally, given
the filtered jumps and falls, we perform an Ordinary Least
Squares regression where the dependent variable is y
position and the independent variables are x (corresponding to
Impulse) and x<sup>2</sup> (corresponding to Gravity).</p>
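        <p>The final regression step can be sketched with NumPy as below; the no-intercept design (a jump starts at y = 0 at takeoff) is an assumption of ours:</p>

```python
import numpy as np

def fit_jump_parameters(points):
    """Ordinary least squares fit of y = impulse*x + gravity*x^2 over
    the (x, y) points of the filtered jumps: y is the dependent
    variable, and x (Impulse) and x^2 (Gravity) are the predictors."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    X = np.column_stack([x, x ** 2])          # design matrix: [x, x^2]
    (impulse, gravity), *_ = np.linalg.lstsq(X, y, rcond=None)
    return impulse, gravity
```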
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Evaluation/Discussion</title>
      <p>To evaluate the extracted physics, there are a number of
concerns:</p>
      <list list-type="order">
        <list-item>
          <p>Faithfulness of the Generated Physics – Do the paths
contained in segments generated targeting a specific game
domain faithfully recreate the physics found in the
original segments?</p>
        </list-item>
        <list-item>
          <p>Validity of the Extraction Process – Is the extraction
process capable of reconstructing the original jump
parameters from the training data (where the paths were
generated from an agent using the original parameters)?</p>
        </list-item>
        <list-item>
          <p>Interpretability of Blended Physics – Do the segments
found in interpolations between the original games result
in physics that are interpolated between the games?</p>
        </list-item>
      </list>
      <sec id="sec-6-2">
        <title>Faithfulness to Original Physics</title>
        <p>To assess whether the original physics can be extracted, we
first use the extraction process on the training segments –
these should have the physics flawlessly encoded within
them. We ran a hyperparameter grid search over the P
percentile to ascertain what percentile leads to the most
accurate physics model. We assess the accuracy of the physics
model by calculating the Sum of Squared Error (SSE) for
y values per x value for the jump models produced by the
physics models extracted for the original segments. P =
75% led to the lowest total SSE summed over all of the
games, although different games had differing values (from
Castlevania at 60% to Super Mario Bros. at 80%).
However, the mean value of the percentiles was 72.6%, so we
feel comfortable with using the 75th percentile jumps for
the physics model (which confirms our inductive bias that
we wanted jumps higher than the median).</p>
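        <p>The per-x SSE used to score candidate percentiles can be sketched as below; representing arcs as mappings from x position to y height is our assumption:</p>

```python
def jump_sse(true_arc, extracted_arc):
    """Sum of squared error between two jump arcs, computed per x
    value over the x positions the arcs share, as in the grid
    search over the P percentile."""
    shared = set(true_arc) & set(extracted_arc)
    return sum((true_arc[x] - extracted_arc[x]) ** 2 for x in shared)

# Toy arcs: {x: y} mappings for a true and an extracted jump.
true_arc = {0: 0.0, 1: 1.75, 2: 3.0, 3: 3.75, 4: 4.0}
extracted = {0: 0.0, 1: 1.5, 2: 3.0, 3: 3.5, 4: 3.5}
```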
        <p>Of course, in some sense, the aesthetics of the
reconstructed jump arcs are more important than the error – most
importantly, do the extracted jumps result in the same
maneuvering and reachability as the true physics? Figure 3
shows the true jump arcs in comparison with the jump arcs
extracted from the original segments. We see that the
extracted jump for Metroid (orange) reaches the same heights,
but does not reach quite as far horizontally, due to a higher
falling gravity. We see that the extracted jump for Mario
(red) is a bit short in height (reaching only 3.5 tiles high
instead of 4 tiles in height) but has the same horizontal space
covered. Mega Man (light blue) has a slightly different arc,
but reaches the exact same height and has the same
horizontal space. Ninja Gaiden’s (dark blue) extracted jump falls
short of the true jump both in height (reaching a maximum
of 3.8 tiles instead of 4) and in distance (9 tiles instead of
10). Finally, Castlevania’s extracted jump has the same
horizontal reach, but reaches higher (2.8 tiles instead of 2 tiles).</p>
        <p>Generally, these jumps would support much of the same
gameplay, although the height differences for Super Mario
Bros. and Ninja Gaiden would need to be bumped up to
the nearest whole number of tiles to have the same
gameplay. With this, we feel satisfied that the physics extraction
process works well enough to faithfully extract physics that
would support playing the game, and we turn our attention
to the generated levels, to see how faithfully they are able to
represent the physics.</p>
      </sec>
      <sec id="sec-6-3">
        <title>Latent Reconstructions</title>
        <p>To evaluate the generated physics, we sampled the
generative model to produce level segments drawn from the
region of the latent space corresponding to each game.
To do this, we first obtained the latent encoding for every
level segment from a given game. We then calculated the
mean and standard deviation for these encodings. Finally,
we sampled 2000 encodings from a normal distribution with
the calculated parameters – these encodings were then
decoded into level segments. As a note, the level segments
were sometimes lacking in a beginning and ending (due to
the stochastic nature of the generation process) – these
segments were excluded from the physics extraction process as
it is impossible to determine the progression of the
generated path – in all, this led to the dropping of 326 segments in
total (3.26% of the generated segments) with Metroid
having the highest proportion of corrupted segments (8.1%).
The physics models were extracted using the same 75th
percentile criterion as computed in the original levels (no
hyperparameter search). Table 2 shows the RMSEs between
the physics models extracted from the original and
generated segments when compared with the actual game physics.
Both errors are broadly comparable (and the error is in fact lower
for generated segments in the case of Super Mario Bros.).</p>
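        <p>The per-game sampling procedure described above can be sketched as follows; the per-dimension (diagonal) normal fit is our reading of “mean and standard deviation for these encodings”:</p>

```python
import numpy as np

def sample_game_latents(encodings, n=2000, seed=0):
    """Fit a per-dimension normal to one game's latent encodings and
    draw fresh latents from it: encode every training segment, take
    the mean and standard deviation, then sample n new encodings
    (2000 in the text) to be decoded into level segments."""
    encodings = np.asarray(encodings, dtype=float)
    mu = encodings.mean(axis=0)
    sigma = encodings.std(axis=0)
    rng = np.random.default_rng(seed)
    return rng.normal(mu, sigma, size=(n, encodings.shape[1]))
```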
        <p>As noted above, the aesthetics of the reconstructed jump
arcs are as, if not more, important than the errors. Figure 4
shows the true jump arcs in comparison with the jump arcs
extracted from the generated segments. We see that the
extracted jump for Metroid (orange) reaches the same height,
but does not reach quite as far horizontally, due to a higher
fall gravity. We also see that the extracted jump for Mario
(red) reaches the same height but extends horizontally.</p>
        <p>We note that for the rest of the games, the generated arcs
show a regression towards the physics of Super Mario Bros.
The generated Mega Man and Castlevania arcs are nearly
identical to the Super Mario Bros. arc. Finally, we see that
the generated Ninja Gaiden (dark blue) arc is very similar
to the one extracted from the original segments, reaching not
quite as high but having the same horizontal extent.</p>
        <p>Again, generally, these physics would support the same
gameplay – in fact Super Mario Bros. would be playable as
is with no intervention with the model extracted from the
generated levels (unlike the model extracted from the
original segments). Also, while the models for Castlevania and
Mega Man are more lenient for the generated extractions
than the true physics, the levels would be playable with the
extracted models.</p>
      </sec>
      <sec id="sec-6-4">
        <title>Blended Physics</title>
        <p>Unlike the above categories, there is no direct comparison
to see how well the extracted physics recreate the original
physics – instead visual inspection is the best way to
assess the interpolated physics. Figure 5 shows the physics
extracted from interpolations between different games. To get
the interpolations, we take 10 segments from each game,
encode them into the latent space, interpolate between all pairs
of encodings at 25%-75%, 50%-50%, 75%-25%, and then
decode 20 times (since the decoding process is stochastic,
the same encoding can produce different segments). We note
that for most pairs, the interpolated physics seems to settle
into a jump that is actually unlike the exemplars, but
relatively stable across the blends – a jump that reaches about
3 tiles in height and 8 tiles in width (which is actually quite
similar to the jump of Mega Man). This jump is somewhat
average across the games (although a jump of 3.5 in height
and 9 in width would be closer to average), so it seems that
most blends actually go through a sort of in-between average
space that just encodes generic platformerness as opposed
to any real per-game-pair specific physics. That being said,
Metroid – being the most extreme of the physics – does tend
to have some blends that incorporate its higher and longer
jumps – namely, blends with Castlevania (Figure 5h), Mega
Man (Figure 5j) and Ninja Gaiden (Figure 5b).</p>
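        <p>The interpolation step can be sketched as below; the function name is ours, and in the full procedure each blended latent would be decoded 20 times, since the stochastic decoder can produce different segments from the same encoding:</p>

```python
import numpy as np

def blended_latents(z_a, z_b, weights=(0.25, 0.5, 0.75)):
    """Linearly interpolate between two games' latent encodings at
    the 25%-75%, 50%-50%, and 75%-25% blend points used in the
    text, returning one blended latent per weight."""
    z_a, z_b = np.asarray(z_a, dtype=float), np.asarray(z_b, dtype=float)
    return [(1 - w) * z_a + w * z_b for w in weights]
```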
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Conclusion and Future Work</title>
      <p>In this paper, we presented a method for extracting “physics”
from static levels of the kind often used in PCGML level
generation. We compared the extractions from both ground
truth training examples and generations to the ground truth
physics. In addition, we explored the physics found within
blended domains, with some promising examples of blended
physics.</p>
      <p>In the future, we would like to expand this work to explore
different level orientations (e.g., vertical levels found in Kid
Icarus). We would also like to explore the inverse process –
given a physics model, generate levels that are playable.</p>
      <fig id="fig5">
        <label>Figure 5</label>
        <caption>
          <p>Extracted physics for interpolations between pairs of games: (a) SMB and Ninja Gaiden; (b) Ninja Gaiden and Metroid; (c) SMB and Metroid; (d) Ninja Gaiden and Castlevania; (e) SMB and Castlevania; (f) Ninja Gaiden and Mega Man; (g) SMB and Mega Man; (h) Metroid and Castlevania; (i) Mega Man and Castlevania; (j) Metroid and Mega Man.</p>
        </caption>
      </fig>
      <p>Snodgrass, S., and Ontañón, S. 2017. Learning to generate
video game maps using Markov models. IEEE Transactions
on Computational Intelligence and AI in Games (TCIAIG).
Snodgrass, S., and Sarkar, A. 2020. Multi-domain level
generation and blending with sketches via example-driven
BSP and variational autoencoders. In Fifteenth International
Conference on the Foundations of Digital Games (FDG).
Snodgrass, S. 2019. Levels from sketches with
example-driven binary space partition. In Fifteenth Conference on
Artificial Intelligence and Interactive Digital Entertainment
(AIIDE).</p>
      <p>Summerville, A., and Mateas, M. 2015. Sampling Hyrule:
Sampling probabilistic machine learning for level
generation. Tenth International Conference on the Foundations of
Digital Games (FDG).</p>
      <p>Summerville, A., and Mateas, M. 2016. Super Mario as a
string: Platformer level generation via LSTMs. Proceedings
of 1st International Joint Conference of DiGRA and FDG.
Summerville, A. J.; Snodgrass, S.; Mateas, M.; and
Ontañón, S. 2016. The VGLC: The video game level corpus.
In Seventh Workshop on Procedural Content Generation at
First Joint International Conference of DiGRA and FDG.
Summerville, A.; Osborn, J.; Holmgård, C.; and Zhang,
D. W. 2017. Mechanics automatically recognized via
interactive observation: Jumping. In Twelfth International
Conference on the Foundations of Digital Games (FDG), 1–10.
Summerville, A.; Snodgrass, S.; Guzdial, M.; Holmgård, C.;
Hoover, A. K.; Isaksen, A.; Nealen, A.; and Togelius, J.
2018. Procedural content generation via machine learning
(PCGML). IEEE Transactions on Games (ToG).
Summerville, A.; Osborn, J.; and Mateas, M. 2017. CHARDA:
Causal hybrid automata recovery via dynamic analysis. In
26th International Joint Conference on Artificial
Intelligence (IJCAI).</p>
      <p>Volz, V.; Schrum, J.; Liu, J.; Lucas, S. M.; Smith, A.; and
Risi, S. 2018. Evolving Mario levels in the latent space of
a deep convolutional generative adversarial network. In
Genetic and Evolutionary Computation Conference (GECCO),
221–228. ACM.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <surname>Bentley</surname>
            ,
            <given-names>G. R.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Osborn</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>The videogame affordances corpus</article-title>
          .
          <source>2019 Experimental AI in Games Workshop (EXAG).</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <surname>Dahlskog</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Togelius</surname>
            , J.; and Nelson,
            <given-names>M. J.</given-names>
          </string-name>
          <year>2014</year>
          .
          <article-title>Linear levels through n-grams</article-title>
          .
          <source>Proceedings of the 18th International Academic MindTrek.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <surname>Gow</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Corneli</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2015</year>
          .
          <article-title>Towards generating novel games using conceptual blending</article-title>
          .
          <source>In Eleventh Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name>
            <surname>Guzdial</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2016a</year>
          .
          <article-title>Game level generation from gameplay videos</article-title>
          .
          <source>In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).</source>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name>
            <surname>Guzdial</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2016b</year>
          .
          <article-title>Learning to blend computer game levels</article-title>
          .
          <source>In Seventh International Conference on Computational Creativity (ICCC).</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <string-name>
            <surname>Guzdial</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Automated game design via conceptual expansion</article-title>
          .
          <source>In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).</source>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <string-name>
            <surname>Guzdial</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Riedl</surname>
            ,
            <given-names>M. O.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Game engine learning from video</article-title>
          .
          <source>In 26th International Joint Conference on Artificial Intelligence (IJCAI)</source>
          ,
          <fpage>3707</fpage>
          -
          <lpage>3713</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Isaksen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Holmgård</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Togelius</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>Autoencoders for level generation, repair, and recognition</article-title>
          .
          <source>In ICCC Workshop on Computational Creativity and Games.</source>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <string-name>
            <surname>Paszke</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Gross</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Chintala</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Chanan</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>DeVito</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Desmaison</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Antiga</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Lerer</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <year>2017</year>
          .
          <article-title>Automatic differentiation in PyTorch</article-title>
          .
          <source>In Conference on Neural Information Processing Systems (NeurIPS) Autodiff Workshop.</source>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <string-name>
            <surname>Sarkar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2018</year>
          .
          <article-title>Blending levels from different games using LSTMs</article-title>
          .
          <source>In 2018 Experimental AI in Games Workshop (EXAG).</source>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <string-name>
            <surname>Sarkar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Summerville</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Snodgrass</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Bentley</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Osborn</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <year>2020</year>
          .
          <article-title>Exploring level blending across platformers via paths and affordances</article-title>
          .
          <source>In Sixteenth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).</source>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <string-name>
            <surname>Sarkar</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ; and
          <string-name>
            <surname>Cooper</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2019</year>
          .
          <article-title>Controllable level blending between games using variational autoencoders</article-title>
          .
          <source>In 2019 Experimental AI in Games Workshop (EXAG).</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <string-name>
            <surname>Snodgrass</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , and
          <string-name>
            <surname>Ontañón</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <year>2016</year>
          .
          <article-title>An approach to domain transfer in procedural content generation of two-dimensional videogame levels</article-title>
          .
          <source>In Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference (AIIDE).</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>