<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Towards Automatic Extraction of Tile Types from Level Images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Sam</forename><surname>Snodgrass</surname></persName>
							<email>s.snodgrass@northeastern.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">Northeastern University</orgName>
								<address>
									<settlement>Boston</settlement>
									<region>MA</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Towards Automatic Extraction of Tile Types from Level Images</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">BA3764F9CFB52A5AEB28720AB04B636C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:39+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years, the use of machine learning for procedural content generation (PCGML) has been growing. These PCGML approaches require a training corpus of levels, often annotated or represented in some abstracted way. Due to the manual effort required to annotate or translate a sufficient training corpus, most PCGML techniques have only been explored in a handful of domains. In this paper we take a step towards addressing this core issue of PCGML by presenting an unsupervised method for automatically extracting a representation for a level domain, given only images of the levels. This approach is a move towards making PCGML more broadly applicable by reducing the effort required to create a training corpus. We evaluate our approach by comparing the automatically extracted tile representation against existing PCGML training level corpus representations.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>Procedural content generation via machine learning (PCGML) <ref type="bibr" target="#b7">(Summerville et al. 2018</ref>) is a growing field of research that automatically extracts models from existing game content, and uses those learned models to generate new content. These approaches rely on a corpus of training data from which to estimate their models. However, the creation of such training data (often through manual annotation or domain-specific scripts) can require a large time commitment as well as expert domain knowledge in order to reason about the representation of the data for a given domain. This requirement of annotating training data is in direct opposition to one of the core benefits of PCGML: reducing the amount of domain knowledge that must be encoded by users. Consequently, most PCGML approaches have only been tested in a handful of domains where training data is readily available (e.g., Super Mario Bros. (Guzdial and Riedl 2016; Snodgrass and Ontañón 2016b; Summerville and Mateas 2016), The Legend of Zelda (Summerville and Mateas 2015), and Lode Runner and Kid Icarus <ref type="bibr">(Snodgrass and Ontañón 2016b))</ref>.</p><p>In this paper we begin research into relieving PCGML techniques' reliance on manual annotations and domain-specific knowledge from users. We present a proof-of-concept unsupervised approach for extracting a representative set of tile types from video game level images, which can then be used to represent levels from the given game. Our unsupervised approach attempts to find groups of functionally similar objects using only positional and structural level information. 
Our goal is to further increase the usability of PCGML techniques and broaden the applicability of such techniques to new domains by reducing the amount of domain knowledge required to explore a new domain.</p><p>The remainder of this paper is organized as follows: first, we discuss the relevant related work; we then present our approach for extracting tile sets; next, we present our experimental set-up, including the domain in which we test our approach and how we evaluate our approach; then we present and discuss our results; finally, we close by drawing our conclusions, and suggesting avenues of future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Related Work</head><p>Most current PCGML techniques for level generation require annotated or abstracted training levels, often derived from level images <ref type="bibr" target="#b7">(Summerville and Mateas 2016;</ref><ref type="bibr">Snodgrass and Ontañón 2016b;</ref><ref type="bibr">Summerville and Mateas 2015)</ref>. A notable exception is Guzdial and Riedl's approach <ref type="bibr">(Guzdial and Riedl 2016)</ref>, which leverages a spritesheet and gameplay videos to automatically identify structures and construct its own internal graphical representation of the levels. This approach is able to leverage its representation to create remarkable results. Additionally, the use of gameplay videos can be reduced to using a static level image, where each frame is either considered separately as a level or concatenated with the others to form a full level. Therefore, methods that rely on gameplay videos can also benefit from an unsupervised representation-learning approach.</p><p>We are not the first to recognize the tension created by PCGML's need for annotated training data. <ref type="bibr" target="#b7">Summerville et al. (Summerville et al. 2016</ref>) created and maintain the Video Game Level Corpus, a repository of video game levels represented in a variety of formats, including graphical and tile-based, for the purpose of video game research. Despite these efforts, there are a vast number of video games, and it is infeasible to convert many of their levels using the current manual or domain-specific methods. Others have attempted to sidestep the need for annotated training data in new domains by combining models from various domains (Guzdial and Riedl 2018) or by transferring a learned model from a domain that has training data to another domain with more limited training data <ref type="bibr" target="#b6">(Snodgrass and Ontanon 2016a)</ref>. However, these approaches still require training data for some (ideally functionally similar) domain, and thus do not address the root of the problem. Some have explored the role of different structures and tile types in different game domains through interaction. In the General Video Game AI competition <ref type="bibr">(Pérez-Liébana et al. 2016)</ref>, the various agents needed to analyze and determine what the different elements in the given levels were without prior knowledge. The agents in this case were able to interact with the provided level and build up a world model in this way. The most closely related work is that of Summerville et al. <ref type="bibr" target="#b7">(Summerville et al. 2017</ref>), which tries to determine what the elements in a Super Mario Bros. level do by analyzing gameplay traces and in-game events, and clustering player and object interactions modeled as probabilistic events. Note that each of these approaches relies on agent interactions to extract the function of the tiles, whereas we are first interested in seeing how far we can go by analyzing only the structural elements of the level to determine the functional groupings.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Figure <ref type="figure">1</ref>: This figure shows the flow of our approach. We start with a set of level images; extract a set of unique sprites from those images; re-represent the levels using a unique identifier associated with each unique sprite; train a model on those represented levels; perform clustering on the sprites using the distance between the trained distributions as the metric, yielding a set of clusters corresponding to tile types; and finally we represent the input levels using the extracted tile types.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Approach</head><p>In this section we present our approach for automatically determining a representative set of tile types for a domain, given a set of level images. At a high level, our approach works in three phases: first, we automatically label the images using the set of unique sprites found in the level images; next, we train a Markov random field <ref type="bibr" target="#b0">(Cross and Jain 1983)</ref> model on the levels, treating each of those unique sprites as a temporary tile type; finally, we cluster the sprites using the distances between their learned probability distributions.</p><p>Figure <ref type="figure">1</ref> shows the flow of our approach. We discuss each of the above stages in more detail below.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Labeling</head><p>In this stage we first parse the input level images in order to extract a set of unique x × y pixel sprites. We then treat each of those unique sprites as a temporary tile type, and use them to re-represent the input levels in an intermediate tile-based representation, resulting in a set of tile-based levels that can be passed to the next stage of the pipeline.</p></div>
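One way to realize this labeling stage can be sketched in Python. This is a minimal illustration rather than the authors' implementation; the 16×16 default tile size and the use of exact pixel equality to define sprite uniqueness are our assumptions:

```python
import numpy as np

def extract_tiles(level_image, tile_w=16, tile_h=16):
    """Slice a level image (an H x W or H x W x C array) into a grid of
    tile-sized sprites and assign each unique sprite an integer ID."""
    h, w = level_image.shape[:2]
    sprite_ids = {}  # sprite pixel bytes -> temporary tile-type ID
    grid = np.zeros((h // tile_h, w // tile_w), dtype=int)
    for r in range(h // tile_h):
        for c in range(w // tile_w):
            sprite = level_image[r * tile_h:(r + 1) * tile_h,
                                 c * tile_w:(c + 1) * tile_w]
            key = sprite.tobytes()  # exact pixel match defines uniqueness
            if key not in sprite_ids:
                sprite_ids[key] = len(sprite_ids)
            grid[r, c] = sprite_ids[key]
    return grid, sprite_ids
```

The returned grid is the intermediate tile-based level representation passed to the training stage; `sprite_ids` maps each unique sprite back to its temporary tile type.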
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Training</head><p>In this stage we train a statistical model on the newly created tile levels. Many machine learning approaches could be used here to extract a generalized representation of the input levels, but in our experiments we leverage a Markov random field approach <ref type="bibr" target="#b0">(Cross and Jain 1983)</ref>.</p><p>We train a Markov random field using a neighborhood of the four surrounding sprites in the level, as shown in Figure 2. Using the Markov random field, we estimate P(c|t) from the set of levels represented in the tile format described above, where c is a configuration of surrounding tile types at a given position in the level, and t is the tile type at the center of that configuration. A similar modeling approach using Markov random fields has previously been used by Snodgrass and Ontañón (2016b) to model and generate game levels. The key difference here is that Snodgrass and Ontañón estimated P(t|c) in order to capture the proper placement of tile types within a level and replicate it during generation, whereas we estimate P(c|t) so that we can more easily compare which configurations and patterns occur around specific tile types, thus allowing us to reason more directly about how different tile types occur with different patterns in the input levels.</p></div>
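Estimating P(c|t) from the tile grids amounts to counting neighborhood configurations per tile type and normalizing. A rough sketch follows (not the paper's code; the (up, down, left, right) ordering of the configuration and the out-of-bounds padding symbol are assumptions):

```python
from collections import Counter, defaultdict

def estimate_p_config_given_tile(grid, pad=-1):
    """Estimate P(c | t): for each tile type t, the distribution over
    4-neighbor configurations c = (up, down, left, right).
    Cells outside the level are represented by the `pad` symbol."""
    rows, cols = len(grid), len(grid[0])
    counts = defaultdict(Counter)  # t -> Counter over configurations

    def at(r, c):
        return grid[r][c] if 0 <= r < rows and 0 <= c < cols else pad

    for r in range(rows):
        for c in range(cols):
            t = grid[r][c]
            config = (at(r - 1, c), at(r + 1, c), at(r, c - 1), at(r, c + 1))
            counts[t][config] += 1
    # normalize counts into conditional probabilities P(c | t)
    return {t: {cfg: n / sum(ctr.values()) for cfg, n in ctr.items()}
            for t, ctr in counts.items()}
```

Each tile type's distribution is sparse (only observed configurations receive mass), which matters for the distance computation in the clustering stage.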
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Clustering</head><p>In this stage we cluster the sprites based on the learned probability distributions corresponding to each sprite. To achieve the clustering, we leverage a hierarchical clustering approach implemented in R <ref type="bibr" target="#b3">(Maechler et al. 2013</ref>). We use a hierarchical clustering approach so that we can easily inspect the clusters at varying levels of granularity.</p><p>For our distance metric, we compute the total variation distance (Verdú 2014) between the probability distributions learned for the given sprites. The total variation distance can be thought of as the maximum distance between two probability distributions for any one event. Specifically, we compute max_{c ∈ C} |P(c|t_i) − P(c|t_j)|, where P denotes the probability distributions trained by the MRF, t_i and t_j are the tiles for which the distributions are being compared, and C is the set of all possible surrounding tile configurations.</p><p>This hierarchical clustering approach results in a dendrogram where each leaf corresponds to a unique sprite. Thus, once the clustering is complete we can experiment with cutting the tree at different heights to investigate the clusters resulting from that cut. It is beneficial to be able to explore different granularities of clusters for our analysis, but other common methods for estimating the ideal number of clusters can be employed in place of manual inspection (e.g., the average silhouette method <ref type="bibr" target="#b2">(Kaufman and Rousseeuw 2009)</ref>). Additionally, other common clustering techniques could be employed here, such as k-medoids or DBSCAN.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Experimental Evaluation</head><p>In this section we discuss our experimental design, including the domain explored, the evaluation metrics used, and the results of our experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Domain</head><p>We test our approach with the first level of Super Mario Bros., a platforming game that has commonly been used as a testbed in PCG <ref type="bibr" target="#b4">(Marino, Reis, and Lelis 2015;</ref><ref type="bibr">Shaker et al. 2011;</ref><ref type="bibr" target="#b5">Mawhorter and Mateas 2010)</ref> and PCGML <ref type="bibr" target="#b1">(Dahlskog, Togelius, and Nelson 2014;</ref><ref type="bibr">Guzdial and Riedl 2016;</ref><ref type="bibr">Snodgrass and Ontañón 2016b;</ref><ref type="bibr" target="#b7">Summerville and Mateas 2016)</ref>. We use only the first level in our experiments in order to explore the feasibility of our approach by using a limited number of unique sprites. Many current PCGML approaches that have been tested in this domain have leveraged a tile-based representation of the levels, and there have been several different tile representations used with varying degrees of fidelity. In our experiments, we compare our automatically extracted tile sets against two manually-defined tile sets:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>• Simple Manual: This is the tile set used by the VGLC to represent levels in this domain. It consists of 11 tile types; it abstracts the different enemy types to a single tile type, and represents above-ground levels without treetops, bridges, or moving platforms. For the level we use in our experiments, 7 of these tile types are used.</p><p>• Complex Manual: This is the tile set used by Snodgrass and Ontañón in their more recent work <ref type="bibr">(Snodgrass and Ontañón 2016b)</ref>. This tile set consists of 45 tile types. It distinguishes between the different enemy types, distinguishes blocks based on their contents, and is able to represent all above-ground, castle, and underground levels. Figure <ref type="figure" target="#fig_2">3</ref> shows a section of a level represented in this format. For the level we use in our experiments, 15 of these tile types are needed.</p><p>The mappings of the set of unique extracted sprites to these sets of tile types can be seen in Tables 1 (left) and 2 (left), respectively.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation Methods</head><p>Using the approach outlined previously, we extract a set of unique sprites from the input level image, and then cluster them in order to automatically determine different sets of tile types. We evaluate the results of our clustering by investigating the cluster statistics and through manual inspection of the clusters themselves. For the cluster statistics, we compute the average silhouette of the clusters produced by cutting the dendrogram from the hierarchical clustering method at several heights. This measures how well, on average, each sprite fits within its own cluster versus other clusters. We also note the largest and smallest cluster sizes for each clustering. This metric shows us the spread of the clusters and can help inform users about what a desirable number of clusters may be. For the manual inspection, we explore the differences between the found clusters and the manually defined tile types. For our evaluation we cut the dendrogram to produce 7 clusters and 15 clusters, so that we can compare these clustering results to the manually defined 7 and 15 tile types that have been used previously to represent the chosen level.</p></div>
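The average silhouette used here can be computed directly from a precomputed distance matrix. A small sketch using the standard silhouette definition (not code from the paper; the convention of scoring singleton clusters as 0 is an assumption):

```python
import numpy as np

def average_silhouette(dmat, labels):
    """Average silhouette width from a precomputed distance matrix.
    For each point: a = mean distance to its own cluster (excluding
    itself), b = lowest mean distance to any other cluster, and the
    silhouette is (b - a) / max(a, b). Singleton clusters score 0."""
    dmat, labels = np.asarray(dmat, float), np.asarray(labels)
    scores = []
    for i in range(len(labels)):
        same = (labels == labels[i])
        same[i] = False
        if not same.any():  # singleton cluster
            scores.append(0.0)
            continue
        a = dmat[i][same].mean()
        b = min(dmat[i][labels == other].mean()
                for other in set(labels) - {labels[i]})
        scores.append((b - a) / max(a, b))
    return float(np.mean(scores))
```

A value near 1 means sprites sit much closer to their own cluster than to any other; values near 0 indicate overlapping clusters.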
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Results</head><p>Tables <ref type="table" target="#tab_2">1 and 2</ref> show the manually defined tile sets and the sets of sprites represented by each tile type (left) and the clusters found by our approach (right). In many cases our approach clusters tiles that are functionally similar. For example, in both settings, a cluster is found containing many of the brick and question mark tiles. Additionally, the pipe-top tiles are accurately grouped together in the simple setting (albeit with a background tile), and are accurately split in the complex setting. This is encouraging, as it shows that distinctions can be made automatically with only structural information (to some extent), but there are clear issues. Cluster 1 (in the simple setting) contains a mix of background sprites and enemies, and in general background sprites are interspersed with many of the clusters. This may be because while the background sprites all perform similar functions, the individual sprites (e.g., cloud sections, bush sections, etc.) often only appear in specific configurations, and thus have a very sparse (and very distinct) probability distribution, which makes them more likely to get grouped in with other sprites. For example, in the simple setting the bush and hill background sprites get clustered with the bottom pipe sections. Notably, all of these sprites typically appear just above the ground sprites. This behavior is reflected in the complex setting as well. 
This suggests a shortcoming of the distance metric used for clustering, and potentially an insufficiency in using only the positional information of the sprites.</p><p>In the future, more informative distance metrics could be considered, which may encompass the frequencies of the tiles' appearances or perhaps the shapes of contiguous tiles, similar to Guzdial and Riedl's clustering approach (Guzdial and Riedl 2016).</p><p>Table <ref type="table" target="#tab_3">3</ref> shows the average silhouette of the clusters, as well as the maximum and minimum cluster sizes. Recall that the average silhouette of a clustering approximates the spread of the clusters, and a larger silhouette means that the elements within a cluster are more closely related. As expected, with more clusters the average silhouette typically decreases. An interesting exception here is that the silhouette when k = 15 (for the complex setting) is larger than when k = 7 (for the simple setting). This indicates that when k = 15 we get tighter, more similar clusters. This is somewhat reflected in our manual inspection of the clustering results discussed above.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Limitations and Future Work</head><p>As this was an initial step, there are two major limitations that we hope to address in the future. First, while our clustering approach was able to group some functionally similar sprites together (e.g., bricks, pipes), it struggled with the grouping of others (most notably the background sprites: sky, cloud, hill, and bush). This is partially a shortcoming of the distance metric and clustering employed, as several of the background sprites appear in configurations similar to other sprites (e.g., the hills and the pipes). We aim to first explore more robust modeling and distance metric options. However, as Guzdial et al. discuss <ref type="bibr">(Guzdial, Sturtevant, and Li 2016)</ref>, we likely need both static and dynamic analysis (i.e., analysis of the structural and gameplay elements, respectively) to get a complete understanding of the domain. Therefore, once we more fully explore our static analysis options (i.e., distance metrics, clustering options, modeling approaches), we will incorporate dynamic analysis. Dynamic analysis will help our model reason more clearly about the functionality of the sprites, and cluster them more cohesively. Incorporating dynamic analysis can easily undermine the goal of our work (i.e., reducing reliance on domain knowledge and domain-specific scripts) by requiring a specific agent for each domain. To avoid this potential tension, we will explore the use of reinforcement learning agents, specifically those used in the General Video Game AI competition, which may be able to learn how to play the game without requiring much domain knowledge.</p><p>The second limitation of our work is that we have thus far only applied it to one domain, and only a fraction of that domain. To address this, we will first apply our approach to a larger subset of Super Mario Bros. 
levels, including those with different visual sprite sets, such as castle and underground levels. We are also interested in exploring our approach in domains unexplored by PCGML techniques thus far, such as Metroid, which do not have a defined set of tiles used by researchers that can be treated as the "ground truth" as we did in the Super Mario Bros. domain. This will help us explore the robustness of our refinements and force us to devise and explore more general methods of evaluation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusions</head><p>In this paper we presented a proof-of-concept approach for automatically extracting a set of representative tile types from a set of input level images without requiring domain or design knowledge from the user. This approach is meant as a step towards alleviating the reliance of PCGML users on domain-specific scripts and manual annotation when creating training data. Using our approach, we automatically extracted two sets of tile types and found some similarities between the extracted tile sets and manually defined tile sets of the same size. The found clusters also contained quite a bit of noise and did not perfectly delineate the sprites by functionality. In the future we will explore methods for defining cleaner clusters, and further explore how this automatic approach performs more broadly, both in other domains and paired with a variety of PCGML techniques.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>P(S_{x,y} | S_{x-1,y}, S_{x+1,y}, S_{x,y-1}, S_{x,y+1})</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: This figure shows the network structure used when training our Markov random field approach. The red cell indicates the current tile and the blue cells indicate the surrounding configuration or neighborhood.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: This figure shows a section of a Super Mario Bros. level represented using the complex manual tile set.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Input Levels Sprite Extractor Unique Sprites Represent Levels Re-represented Levels PCGML Trained Tile Distributions Clustering Identified Clusters/Extracted Tile Types</head><label></label><figDesc></figDesc><table><row><cell>A:</cell></row><row><cell>B:</cell></row><row><cell>C:</cell></row><row><cell>D:</cell></row><row><cell>E:</cell></row><row><cell>F:</cell></row><row><cell>G:</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 :</head><label>1</label><figDesc>This table shows the Simple Manual set tile types and their corresponding sprites from the training level (left), and the Simple Clustering set of automatically extracted tile types determined via clustering (right). The arrangement of the clustering results does not indicate a relation to the manual tile type.</figDesc><table><row><cell></cell><cell>Simple Manual</cell><cell></cell><cell>Simple Clustering</cell></row><row><cell>Tile Type</cell><cell>Sprites</cell><cell>Cluster</cell><cell>Sprites</cell></row><row><cell>Empty</cell><cell></cell><cell>1</cell><cell></cell></row><row><cell>Enemy</cell><cell></cell><cell>Cluster 2</cell><cell></cell></row><row><cell>?-Block</cell><cell></cell><cell>Cluster 3</cell><cell></cell></row><row><cell>Brick</cell><cell></cell><cell>Cluster 4</cell><cell></cell></row><row><cell>Solid</cell><cell></cell><cell>Cluster 5</cell><cell></cell></row><row><cell>Left Pipe</cell><cell></cell><cell>Cluster 6</cell><cell></cell></row><row><cell>Right Pipe</cell><cell></cell><cell>Cluster 7</cell><cell></cell></row><row><cell></cell><cell>Complex Manual</cell><cell></cell><cell>Complex Clustering</cell></row><row><cell>Tile Type</cell><cell>Sprites</cell><cell>Cluster</cell><cell>Sprites</cell></row><row><cell>Empty</cell><cell></cell><cell>Cluster 1</cell><cell></cell></row><row><cell>Flagpole</cell><cell></cell><cell>Cluster 2</cell><cell></cell></row><row><cell>Goomba</cell><cell></cell><cell>Cluster 3</cell><cell></cell></row><row><cell>?-Block</cell><cell></cell><cell>Cluster 4</cell><cell></cell></row><row><cell>Brick</cell><cell></cell><cell>Cluster 5</cell><cell></cell></row><row><cell>Powerup</cell><cell></cell><cell>Cluster 6</cell><cell></cell></row><row><cell>Solid</cell><cell></cell><cell>Cluster 7</cell><cell></cell></row><row><cell>Extra Life</cell><cell></cell><cell>Cluster 
8</cell><cell></cell></row><row><cell>Top-Left Pipe</cell><cell></cell><cell>Cluster 9</cell><cell></cell></row><row><cell>Top-Right Pipe</cell><cell></cell><cell>Cluster 10</cell><cell></cell></row><row><cell>Coin Brick</cell><cell></cell><cell>Cluster 11</cell><cell></cell></row><row><cell>Star Block</cell><cell></cell><cell>Cluster 12</cell><cell></cell></row><row><cell>Left Pipe</cell><cell></cell><cell>Cluster 13</cell><cell></cell></row><row><cell>Right Pipe</cell><cell></cell><cell>Cluster 14</cell><cell></cell></row><row><cell>Koopa</cell><cell></cell><cell>Cluster 15</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2 :</head><label>2</label><figDesc>This table shows the Complex Manual set tile types and their corresponding sprites from the training level (left), and the Complex Clustering set of automatically extracted tile types determined via clustering (right). The arrangement of the clustering results does not indicate a relation to the manual tile type.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3 :</head><label>3</label><figDesc>Cluster statistics for the clusters found by cutting the dendrogram for various numbers of clusters. 7 and 15 are used in the rest of our evaluation because the manually defined tile sets represent the chosen training level with 7 (simple) and 15 (complex) tile types.</figDesc><table><row><cell>k</cell><cell cols="3">Avg. Silhouette Max. Size Min. Size</cell></row><row><cell>2</cell><cell>0.3778221</cell><cell>46</cell><cell>7</cell></row><row><cell>7</cell><cell>0.1788397</cell><cell>21</cell><cell>1</cell></row><row><cell>11</cell><cell>0.1791865</cell><cell>21</cell><cell>1</cell></row><row><cell>15</cell><cell>0.1825074</cell><cell>20</cell><cell>1</cell></row><row><cell>20</cell><cell>0.09250035</cell><cell>16</cell><cell>1</cell></row><row><cell>26</cell><cell>0.08664249</cell><cell>10</cell><cell>1</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Markov random field texture models</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">R</forename><surname>Cross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Jain</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Pattern Analysis and Machine Intelligence</title>
				<imprint>
			<date type="published" when="1983">1983</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="25" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Combinatorial creativity for procedural content generation via machine learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dahlskog</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Togelius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Nelson</surname></persName>
		</author>
		<author>
			<persName><surname>Guzdial</surname></persName>
		</author>
		<author>
			<persName><surname>Guzdial</surname></persName>
		</author>
		<author>
			<persName><surname>Guzdial</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference</title>
				<imprint>
			<date type="published" when="2014">2014. 2016. 2018. 2016</date>
			<biblScope unit="volume">3</biblScope>
		</imprint>
	</monogr>
	<note>Experimental AI in Games Workshop</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Finding groups in data: an introduction to cluster analysis</title>
		<author>
			<persName><forename type="first">L</forename><surname>Kaufman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Rousseeuw</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>John Wiley &amp; Sons</publisher>
			<biblScope unit="volume">344</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">cluster: Cluster Analysis Basics and Extensions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Maechler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rousseeuw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Struyf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hubert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hornik</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An empirical evaluation of evaluation metrics of procedurally generated Mario levels</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Marino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">M</forename><surname>Reis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Lelis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Eleventh Artificial Intelligence and Interactive Digital Entertainment Conference</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Procedural level generation using occupancy-regulated extension</title>
		<author>
			<persName><forename type="first">P</forename><surname>Mawhorter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mateas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Intelligence and Games (CIG), 2016 IEEE Conference on</title>
				<imprint>
			<date type="published" when="2010">2010. 2010. 2016. 2011</date>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="332" to="347" />
		</imprint>
	</monogr>
	<note>The 2010 mario AI championship: Level generation track</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">An approach to domain transfer in procedural content generation of twodimensional videogame levels</title>
		<author>
			<persName><forename type="first">S</forename><surname>Snodgrass</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ontanon</surname></persName>
		</author>
		<author>
			<persName><surname>Snodgrass</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Twelfth Artificial Intelligence and Interactive Digital Entertainment Conference</title>
				<editor>
			<persName><forename type="first">A</forename></persName>
		</editor>
		<editor>
			<persName><forename type="first">Mateas</forename></persName>
		</editor>
		<meeting><address><addrLine>Summerville</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2016a. 2016b. 2015</date>
		</imprint>
	</monogr>
	<note>Sampling hyrule: Sampling probabilistic machine learning for level generation</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">What does that?-block do? learning latent causal affordances from mario play traces</title>
		<author>
			<persName><forename type="first">A</forename><surname>Summerville</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mateas</surname></persName>
		</author>
		<author>
			<persName><surname>Summerville</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Seventh Workshop on Procedural Content Generation at First Joint International Conference of DiGRA and FDG</title>
				<imprint>
			<publisher>Citeseer</publisher>
			<date type="published" when="2014">2016. 2016. 2017. 2018. 2014</date>
			<biblScope unit="page" from="1" to="3" />
		</imprint>
	</monogr>
	<note>Total variation distance and the distribution of relative information</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
