=Paper= {{Paper |id=Vol-2862/paper15 |storemode=property |title=Entity Embedding as Game Representation |pdfUrl=https://ceur-ws.org/Vol-2862/paper15.pdf |volume=Vol-2862 |authors=Nazanin Yousefzadeh Khameneh,Matthew Guzdial |dblpUrl=https://dblp.org/rec/conf/aiide/KhamenehG20 }}
                                Entity Embedding as Game Representation

                              Nazanin Yousefzadeh Khameneh and Matthew Guzdial
                        Department of Computing Science, Alberta Machine Intelligence Institute (Amii)
                                               University of Alberta, Canada
                                             {nyousefz, guzdial}@ualberta.ca




                           Abstract

Procedural content generation via machine learning (PCGML) has shown success at producing new video game content with machine learning. However, the majority of the work has focused on the production of static game content, including game levels and visual elements. There has been much less work on dynamic game content, such as game mechanics. One reason for this is the lack of a consistent representation for dynamic game content, which is key for a number of statistical machine learning approaches. We present an autoencoder for deriving what we call "entity embeddings", a consistent way to represent different dynamic entities across multiple games in the same representation. In this paper we introduce the learned representation, along with some evidence towards its quality and future utility.

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

                        Introduction

Generating game content using Machine Learning (ML) models trained on existing data is referred to as Procedural Content Generation via Machine Learning (PCGML). PCGML has shown success in both game development and technical games research (Karth and Smith 2017; Summerville et al. 2018). Most of the prior work has focused on level generation and not mechanic generation or entire game generation. One reason is that there are few available shared representation frameworks across games. Games vary significantly from one another, across multiple game systems, game genres, and particular instances of a genre. We also cannot use the mechanics of just one individual game, as modern machine learning approaches rely on training data sizes that far surpass what a single game could provide. Given all of this, it is difficult to represent functional pieces of different games in the same representation. A suitable shared data representation for different games would make it possible to do different PCGML-related tasks more easily, like novel game generation, novel game mechanic generation, transferring knowledge between games, automated reasoning over games, and so on.
   There is not a large body of prior work in data representation for dynamic game content in PCGML. Guzdial and Riedl presented a method in which a new graph-like game representation is learned in order to generate new games by recombining existing ones (Guzdial and Riedl 2018). However, the graph-like representation is not well-suited to statistical machine learning methods. Osborn et al. proposed a system to derive a complete model of game structure and rules as a new representation for games (Osborn, Summerville, and Mateas 2017). This approach has only been proposed, not yet implemented. Recently, there have been some projects on game generation with PCGML using the Video Game Description Language (VGDL) (Machado et al. 2019); however, we note that all of the dynamic knowledge had to be hand-authored, rather than learned from existing games.
   Ideally, we would like to have a representation that would allow us to represent machine-learned knowledge of dynamic game elements that is suitable to statistical machine learning tasks. In this paper we aim to learn such a representation of dynamic game entities as low dimensional vectors in which mechanical information is preserved. We call this approach entity embedding. This entity embedding attempts to capture the functional (mechanics) similarities between entities, not the aesthetic (appearance) ones.
   In this paper, we use a Variational AutoEncoder (VAE) to re-represent Atari games with an entity embedding in a lower dimension representation. We evaluate our approach with some similarity measures in comparison to a K-nearest neighbors-inspired baseline. In addition, we demonstrate some qualitative examples of the potential applications of this representation.

                       Related Work

In this section we focus on related PCGML approaches, and related deep neural network (DNN) based approaches for modeling the dynamic elements of games.
   In the introduction we identified that the majority of PCGML approaches have been applied to level generation or the generation of other static content. However, there are some prior examples that touched on the generation of dynamic game entities. Guzdial et al. (Guzdial, Li, and Riedl 2017) introduced an approach to learning the rules of a game from gameplay video and then applied these learned
rules to rule generation (Guzdial and Riedl 2018). Similarly, Summerville et al. employed a causal learning framework to learn certain semantic properties of various game entities (Summerville et al. 2017). They later proposed applying this as part of a pipeline to generate new games (Osborn, Summerville, and Mateas 2017). Recently, Bentley and Osborn (Bentley and Osborn 2019) presented a corpus of such semantic game properties, which could be employed in PCGML. However, this dynamic information would have to be hand-authored and then applied to a PCGML problem, as in the work of Machado et al. (Machado et al. 2019). Comparatively, we seek to re-represent learned dynamic game information in a smaller, latent representation which can then be used for PCGML tasks.
   In this work we rely on a Variational AutoEncoder (VAE). VAEs have been applied to many other PCGML level generation tasks. They have been used to generate levels for Super Mario Bros. (Jain et al. 2016; Guzdial et al. 2018) and Lode Runner (Thakkar et al. 2019). Sarkar et al. employed a VAE to learn two different game level distributions and then generate content from in-between these learned distributions (Sarkar, Yang, and Cooper 2020). We also rely on learning content from multiple games; however, our content is a representation of dynamic game entities instead of structural information.
   Outside of PCGML, there exists work in learning to model dynamic elements of games to help automated game playing tasks. Ha and Schmidhuber presented their approach "World Models", which used a VAE as part of its pipeline to learn a forward model, a way of predicting what will happen next according to the mechanics of a world (Ha and Schmidhuber 2018). These "World Models" were helpful in improving agent performance on playing the modelled games, but the learned representation of dynamic elements was far too large to use as input in a PCGML process. Similarly, Go-Explore uses a VAE-like model for determining whether it has been to a particular state (Ecoffet et al. 2019). However, the goal is simply to use the latent embedding as a distance function comparing game states, not as a representation for PCGML. Most recently, Kim et al. (Kim et al. 2020) presented an approach to use an augmented Generative Adversarial Neural Network (GAN) to model an entire game by separately modeling dynamic and static entities. We similarly seek to model dynamic entities with a DNN, but we focus on modeling the entities of multiple games instead of having a model trained to recreate one specific game. Further, just as with the World Models approach, the representation is too large to apply a PCGML approach to it.

                    System Overview

In this paper, we develop a method for embedding entities from multiple games in a 25-dimension latent vector. We focus on the domain of Atari games to test this approach, as the games are relatively simple while still being more complex than hand-authored games in the VGDL, which makes hand-authoring knowledge from them non-viable. We identify this dynamic information automatically from these Atari games by running the rule learning algorithm introduced by Guzdial et al. (Guzdial, Li, and Riedl 2017). We then collect information for each entity based on the learned ruleset by vectorizing the learned rule information for each entity. Finally, we train our VAE with this vectorized representation to obtain the latent space.
   Our trained VAE gives us our entity embedding as points in a learned 25-dimensional latent space. In this representation we can represent changes over entities as vectors (one entity at one point becoming another entity at another point), and whole games as graphs or point clouds (where each point is an entity in the game). We anticipate that these compact representations will make PCGML work that involves mechanical, dynamic, or semantic information far easier.

Ruleset Learning
Since our goal is to have our representation reflect the semantics (mechanics) of the entities, we decided to obtain game rules to collect this information. Thus we make use of the game engine learning algorithm from Guzdial et al. (Guzdial, Li, and Riedl 2017) to learn rulesets for each game. The algorithm tries to predict the next frame with a current engine (a sequence of rules); if the predicted frame is sufficiently similar to the original one the engine remains the same, otherwise it optimizes the current engine via search. Each rule consists of conditional facts and effect facts. The facts are percept-like representations that denote individual atomic units of knowledge about the game (Ugur, Sahin, and Öztop 2009). For each rule to fire, all the conditional facts must be true. Upon firing, the rule replaces one fact (the pre-effect) with another fact (the post-effect). This allows the rules to model changes in a game, like movement, entities appearing and disappearing, and changes in entity states. The result is a learned engine that represents the mechanics of the game and from which it is possible to simulate the whole game (Guzdial and Riedl 2018). Below is the list of the types of facts we use in this paper:
• Animation contains the SizeX and SizeY of the entity.
• VelocityX indicates the velocity of the entity horizontally.
• VelocityY indicates the velocity of the entity vertically.
• PositionX is the position of an entity in the x dimension of a frame.
• PositionY is the position of an entity in the y dimension of a frame.
   For example, in the following rule (Rule X), entity A's speed in the x direction will change from 0 to 5 when its conditional facts match the current game state:
RULE X:
VelocityXFact: [A, 0]→VelocityXFact: [A, 5]
VelocityXFact: [A, 0]
VelocityYFact: [A, 0]
AnimationFact: [A, (8, 4, 3)]
PositionXFact: [A, 79]
PositionYFact: [A, 17]

VelocityXFact: [B, 0]
VelocityYFact: [B, 0]
AnimationFact: [B, (5, 6, 3)]
                                                Figure 1: Architecture of our VAE


PositionXFact: [B, 93]
PositionYFact: [B, 42]
etc.
For more information on the engine learning process please see (Guzdial, Li, and Riedl 2017).

Dataset
In this paper, we made use of two Atari games, Centipede and Space Invaders, as represented in the Arcade Learning Environment (ALE) (Bellemare et al. 2013). We chose these two Atari games since both have similar mechanics in which the player is a fighter who shoots at enemies. We ran the game engine learning algorithm on roughly 100 frames of each game to obtain the game rules. Each rule consists of some conditional facts and an effect. These conditional facts describe mechanical features of entities like size, velocity, position and so on. The effect is made up of a pre-effect and post-effect, also describing mechanical features. In frames where the conditional facts are true (i.e. the mechanical features exactly match), that rule fires, meaning that the post-effect replaces the pre-effect (e.g. the velocity of an entity changes).
   After obtaining the game rules we ran a parser through each rule to save the mechanical information of each game entity as integers in an individual vector of shape (1x8). For example, entity 'A' in game 'B' is represented as a vector which contains EntityID: A, SizeX, SizeY, VelocityX, VelocityY, PositionX, PositionY and GameID: B. We note that different in-game entities would generate multiple instances of this representation. Further, velocity and position values had to be integers as they were measured over the space of pixels. Our goal was to represent each mechanical state that each game entity (EntityID) could legally be in according to the game rules.
   During development we used two different representations of our dataset. First we used a one-hot encoding for GameID and EntityID while all other features remained integers. Although we have fewer than 100 EntityIDs and only 2 GameIDs, we chose 100 and 10 as the one-hot encoding sizes for EntityIDs and GameIDs, respectively, because of potential future studies with more entities or games. In the second representation we apply one-hot encoding to all features. All of the features are greater than zero except VelocityX and VelocityY, which can be negative. We convert each entity to a vector of shape 1x1600 (8x200). Because the absolute values of the features are always less than 100, we chose 200 as the one-hot encoding size (to represent both positive and negative values). We found that the second representation's results far exceeded the first for our evaluations and so we only report those.

Training
Autoencoders are efficient tools for dimensionality reduction. These tools approximate a latent structure of a feature set. We need to reduce the dimensionality of the entities in order to learn an entity representation with less variance. We decided to make use of a Variational AutoEncoder (VAE), as it would allow us to learn a more consistent latent space and sample novel entities from this learned distribution. We applied the VAE to our dataset to learn the parameters of a probability distribution to represent our entities. This makes it possible to sample from this distribution and generate new entities. Since it is a generative model, we can apply this feature to PCGML tasks like generating entities similar to the input, blending entities in the latent space, and so on. We tried various VAE architectures for training. We obtained the final model empirically, which we visualize in Figure 1. As the figure demonstrates, our architecture has one fully connected hidden layer with ReLU activation in the encoder, which then feeds into a 25-dimensional embedding layer with ReLU activation. The decoder architecture is an inverse of the encoder, starting with a fully connected layer with Sigmoid activation. We trained this model with the Adam optimizer with a learning rate of 0.001 and binary cross-entropy as our loss function. We implemented this model in Keras.

Generation
The decoder generates 1x1600 vectors. We then recover our one-hot representation from the decoder's output by finding the largest value in each of the eight 200-wide segments (indices 0-199, 200-399, ..., 1400-1599) and replacing it with 1 and all others with 0. Thus, we can generate entirely novel outputs not previously seen during training.
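The one-hot scheme just described can be made concrete with a short sketch. The following minimal numpy illustration assumes (since the paper does not spell it out) that a signed integer value v is placed at index v + 100 within its feature's 200-wide segment; decoding mirrors the per-segment argmax step applied to the decoder's raw output.

```python
import numpy as np

N_FEATURES = 8   # EntityID, SizeX, SizeY, VelocityX, VelocityY, PositionX, PositionY, GameID
BIN_SIZE = 200   # per-feature one-hot width, covering values in (-100, 100)

def encode_entity(features):
    """One-hot encode an 8-integer feature vector into a 1x1600 vector.
    The v + 100 offset for placing signed values is an assumed convention."""
    vec = np.zeros(N_FEATURES * BIN_SIZE, dtype=np.float32)
    for i, v in enumerate(features):
        assert abs(v) < 100, "features are assumed to lie in (-100, 100)"
        vec[i * BIN_SIZE + (v + 100)] = 1.0
    return vec

def decode_entity(vec):
    """Recover integer features by taking the argmax of each 200-wide
    segment (as done with the decoder's raw output) and undoing the offset."""
    segments = vec.reshape(N_FEATURES, BIN_SIZE)
    return [int(s.argmax()) - 100 for s in segments]
```

Under this assumed scheme, `decode_entity(encode_entity(f))` recovers `f` exactly for any in-range integer feature vector.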
             Figure 2: Visualization of comparisons of 20% of our test entities against our VAE output and our baselines.


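The evaluation below relies on Jaccard and Euclidean distance over entity vectors. As a hedged sketch: Euclidean distance follows the standard formula, while for the Jaccard variant we assume each vector is treated as a set of (position, value) pairs, one plausible reading of "overlap" between fixed-length feature vectors; the paper describes both measures only in prose.

```python
import numpy as np

def euclidean_distance(a, b):
    """Square root of the sum of squared element-wise differences."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.sqrt(np.sum((a - b) ** 2)))

def jaccard_distance(a, b):
    """1 - |A & B| / |A | B|, with each vector viewed as a set of
    (position, value) pairs -- an assumed construction, since plain value
    sets would discard which positions agree."""
    set_a = set(enumerate(a))
    set_b = set(enumerate(b))
    return 1.0 - len(set_a & set_b) / len(set_a | set_b)
```

Lower is better for both measures: identical vectors score 0 under each.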
                        Evaluation

The entire purpose of this new entity embedding representation is to accurately represent the semantic information in a more compact representation. Therefore, accuracy is key. In order to evaluate the accuracy of our VAE we ran an evaluation to compare the performance of our VAE on a held-out test set of 10% of our available data. As a baseline, we were inspired by K-Nearest Neighbors and so selected the most similar entity from the training dataset to each entity in the test set. To determine the most similar entity we applied two different similarity measures as below:
• Jaccard similarity, which measures the similarity of two sets of data based on their overlap.
• Euclidean distance, which computes the square root of the sum of squared differences between elements of the two vectors.
   We found the most similar training entity to each test entity with these two methods. We then compare the VAE reconstruction of the test entity and the selected training entity with the original entity. To do this we consider (A) the number of equal values and (B) the difference of unequal values between the original and predicted entity vectors. We employ the Euclidean and Jaccard distance functions again as comparison metrics. Lower values are better for both metrics, as they indicate fewer differences. This is notably quite a strong baseline, given that many entities in the training and test sets have substantial overlap. We also compare the VAE with Principal Component Analysis (PCA), which is an unsupervised, non-parametric statistical technique used for dimensionality reduction in machine learning. If the VAE is able to perform similarly to or better than the closest training instance, on a test instance it has never seen before, this will indicate that the VAE has learned an accurate entity representation.

       Metric         Jaccard Distance     Euclidean Distance
       VAE                0.0937                5.6364
       PCA                0.2291                18.0278
   SE Euclidean           0.2013                2.8881
    SE Jaccard            0.1388                6.0629

Table 1: Comparison of our two distance functions over our test data, comparing the output of our VAE to our baselines of the most similar training entity (SE) and PCA according to the two different distance functions.

                          Results

The mean values across our test set for our distance scores are in Table 1. In the table we refer to the most similar entity in the training set to a test entity as "SE". Thus "SE Euclidean" is the most similar entity in the training set to a particular test entity according to the Euclidean distance function. As is shown in the table, our VAE outperforms the other methods when we use the Jaccard distance. However, the SE Euclidean baseline performs better when we use Euclidean distance, though notably we outperform the SE Jaccard baseline even with the Euclidean distance function. This indicates that the VAE outputs and the original entities share more equal feature values, but the individual feature values at times have larger variation compared to the closest entity in the training data. Furthermore, our proposed VAE outperforms PCA, which is another dimensionality reduction method. We demonstrate the distance scores for individual randomly selected entities from our test set in Figure 2. As is shown, the majority of the values of the VAE are lower compared with the baselines, except a few outliers. It is important to note that the VAE generates the exact same entity roughly 50% of the time for the test set.

                          Figure 3: Eight random variations made to entity (center) in the latent space.

                             Figure 5: Comparing the average entity between two
                          existing entities in terms of the vector representation and in
                                                  the latent space.

                  Qualitative Examples

In this section we display the distribution of entities in the latent space using the t-Distributed Stochastic Neighbor Embedding (t-SNE) technique. This technique is for dimensionality reduction and is well suited for the visualization of high-dimensional datasets (Maaten and Hinton 2008). In addition, we explore the latent space by presenting some qualitative examples.

     Figure 4: t-SNE Visualization of our latent space.

   We provide a t-SNE visualization in Figure 4. The representation depicts the distribution of entities in the projected 2D space. Note that clusters correspond to the two games in our dataset, verifying the power of the model in discriminating the entities based on the GameID feature. We also note that this indicates that we can represent games as clusters of points in this space. We hope to explore the possible
Figure 6: Two randomly selected frames of the games Centipede and Space Invaders. Entities in blue circles are the Alien and
  Mushroom from Space Invaders and Centipede respectively, both destroyable entities. Entities with red circles are player
                   entities. The tables indicate the Euclidean distance of these entities in the latent space.


research direction of this feature in the future.
   We also examine some qualitative examples to explore entity interpolation in the latent space. To do this, we randomly chose a pair of entities. We then calculate the average of both the vector and latent representations of the pair. As is shown in Figure 5, the latent average is more like taking features from each entity while the original average is just the mean of two entity vectors (all numbers rounded down for the average). This indicates that our latent space is not replicating the geometric information presented in the vector representation.
   Our second qualitative example is to analyze the surroundings of an entity inside the latent space. We first add various normal random vectors in the range (-0.2 to 0.2) to a randomly selected entity's embedding. We chose this range since the entity does not change perceptibly with lower ranges. Figure 3 displays the randomly selected original entity and 8 random neighbor entities around it. The random variations seem quite consistent in terms of sizes and IDs. This indicates that similarly shaped entities with similar IDs are closer together inside the latent space. This also applies to velocity and position features.
   Our third qualitative example indicates that entities with similar mechanical features from different games are more similar in the latent space compared to entities with different mechanical features from the same game. This indicates that our latent space places mechanically similar entities closer to one another.

                      Future Work

We trained a VAE on mechanical features of entities in order to derive an entity embedding. We argue that this entity embedding can be a shared representation that enables various PCGML-related tasks. We list potential directions for future work below.
• Entity Blending: We can generate new entities based on existing ones, as is shown in the Qualitative Examples section. We plan to run another study to analyze whether we can use generated entities to generate new types of rules or levels of an existing game, or entirely new games.
• Transfer Learning: As we discussed in the Ruleset Learning section, the embedding is based on mechanical features of entities in each rule of a game. Each rule references a set of entities (a group of conditional facts) together in a frame which causes an effect. We might expect similar mechanical effects if we have entities from another, different game with a similar latent representation. We anticipate a need for another study to investigate this.
• Extending the Dataset: We trained this model on around 100 frames of two Atari games with similar mechanics. Extending our dataset by adding extracted rules of other similar and dissimilar games is another potential future direction.

                       Conclusions

In this paper we presented an approach to derive an entity embedding, a latent representation of in-game entities based on their mechanical information, via a Variational AutoEncoder (VAE). We discussed how we trained this VAE, and evaluated the entity embedding in terms of its accuracy at representing unseen test entities. We found that the VAE outperformed our K-Nearest Neighbor-inspired baselines in most cases, indicating a general learned embedding. We hope that this representation will lead to new applications of PCGML involving game mechanics.

                     Acknowledgements

We acknowledge the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), and the Alberta Machine Intelligence Institute (Amii).
                       References
Bellemare, M. G.; Naddaf, Y.; Veness, J.; and Bowling, M. 2013. The arcade learning environment: An evaluation platform for general agents. Journal of Artificial Intelligence Research 47:253–279.
Bentley, G. R., and Osborn, J. C. 2019. The videogame affordances corpus. In 2019 Experimental AI in Games Workshop.
Ecoffet, A.; Huizinga, J.; Lehman, J.; Stanley, K. O.; and Clune, J. 2019. Go-explore: A new approach for hard-exploration problems. arXiv preprint arXiv:1901.10995.
Guzdial, M., and Riedl, M. 2018. Automated game design via conceptual expansion. In Fourteenth Artificial Intelligence and Interactive Digital Entertainment Conference.
Guzdial, M.; Reno, J.; Chen, J.; Smith, G.; and Riedl, M. 2018. Explainable PCGML via game design patterns. arXiv preprint arXiv:1809.09419.
Guzdial, M.; Li, B.; and Riedl, M. O. 2017. Game engine learning from video. In IJCAI, 3707–3713.
Ha, D., and Schmidhuber, J. 2018. World models. arXiv preprint arXiv:1803.10122.
Jain, R.; Isaksen, A.; Holmgård, C.; and Togelius, J. 2016. Autoencoders for level generation, repair, and recognition. In Proceedings of the ICCC Workshop on Computational Creativity and Games.
Karth, I., and Smith, A. M. 2017. WaveFunctionCollapse is constraint solving in the wild. In Proceedings of the 12th International Conference on the Foundations of Digital Games, 1–10.
Kim, S. W.; Zhou, Y.; Philion, J.; Torralba, A.; and Fidler, S. 2020. Learning to simulate dynamic environments with GameGAN. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1231–1240.
Maaten, L. v. d., and Hinton, G. 2008. Visualizing data using t-SNE. Journal of Machine Learning Research 9(Nov):2579–2605.
Machado, T.; Gopstein, D.; Nealen, A.; and Togelius, J. 2019. Pitako: Recommending game design elements in Cicero. In 2019 IEEE Conference on Games (CoG), 1–8. IEEE.
Osborn, J. C.; Summerville, A.; and Mateas, M. 2017. Automated game design learning. In 2017 IEEE Conference on Computational Intelligence and Games (CIG), 240–247. IEEE.
Sarkar, A.; Yang, Z.; and Cooper, S. 2020. Controllable level blending between games using variational autoencoders. arXiv preprint arXiv:2002.11869.
Summerville, A.; Behrooz, M.; Mateas, M.; and Jhala, A. 2017. What does that ?-block do? Learning latent causal affordances from Mario play traces. In Workshops at the Thirty-First AAAI Conference on Artificial Intelligence.
Summerville, A.; Snodgrass, S.; Guzdial, M.; Holmgård, C.; Hoover, A. K.; Isaksen, A.; Nealen, A.; and Togelius, J. 2018. Procedural content generation via machine learning (PCGML). IEEE Transactions on Games 10(3):257–270.
Thakkar, S.; Cao, C.; Wang, L.; Choi, T. J.; and Togelius, J. 2019. Autoencoder and evolutionary algorithm for level generation in Lode Runner. In 2019 IEEE Conference on Games (CoG), 1–4. IEEE.
Ugur, E.; Sahin, E.; and Öztop, E. 2009. Affordance learning from range data for multi-step planning. In EpiRob.