<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Using a Genetic Algorithm to Evolve a D* Search Heuristic</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andrew</forename><surname>Giese</surname></persName>
							<email>gieseanw@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Dayton</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jennifer</forename><surname>Seitzer</surname></persName>
							<email>seitzer@udayton.edu</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Dayton</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Using a Genetic Algorithm to Evolve a D* Search Heuristic</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">29A0DA48F3C24A3FB71E132D5E1B3903</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T20:16+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Evolutionary computation (EC) is the sub-discipline of artificial intelligence that iteratively derives solutions using techniques from genetics. In this work, we present a genetic algorithm that evolves a heuristic static evaluation function (SEF) to be used in the real-time search navigation scheme of an autonomous agent. This coupling of algorithmic techniques (GAs with real-time search by autonomous agents) makes for interesting formalistic and implementation challenges. Genetic evolution requires a fitness function to guide convergence of the solution being created. Thus, as part of this work, we present a fitness function that measures the efficacy of a generated static evaluation function. We present algorithmic and formalistic designs, implementation details, and performance results of this multi-layered software endeavor.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Introduction</head><p>The A* algorithm is a greedy best-first search algorithm used to calculate an optimal path between two points, or nodes, on a graph <ref type="bibr" target="#b1">(Hart et al. 1968)</ref>. The algorithm can be adapted to run in real time by restarting execution as new environment information becomes available; this variant is called a Dynamic A* (D*) search. A* uses a static evaluation function (SEF) that applies heuristics to find a path of state transformations from a start state to a goal state. The SEF assesses the merit of each child state as it is generated, assigning it a numeric value based on information about that state. This score lets A* direct its search by prioritizing the expansion of child nodes that could potentially lead to a goal state while neglecting child nodes that are less likely to do so.</p><p>In a real-time environment, information about the actual goal state is unavailable for any given iteration of the search, because the goal lies beyond the agent's sensor range. Therefore, the SEF must direct the search toward the most promising states, anticipating system information as it becomes available. In our work, the SEF seeks to maximize some aspects of an agent's state while minimizing others. By evolving a weight on each "aspect-variable", we create, offline, a highly effective SEF that can anticipate obstacles and challenges that occur in real time during the execution of D*. In this paper we present the offline use of a genetic algorithm to evolve an optimal SEF for the real-time execution of A*.</p><p>Additionally, we use the simulator that will eventually benefit from this optimized SEF to provide feedback in the evolution process. 
That is, the simulator serves as the fitness function for the evolving SEF.</p><p>This work is novel in that it combines evolutionary computation via genetic algorithms with the use and refinement of a heuristic for the D* algorithm. Genetic algorithms have been applied in diverse domains such as bioinformatics <ref type="bibr" target="#b2">(Hill 2005)</ref>, gaming <ref type="bibr" target="#b4">(Lucas 2006)</ref>, music composition <ref type="bibr">(Weale 2004)</ref>, and circuit optimization <ref type="bibr" target="#b8">(Zhang 2006)</ref>. Additionally, D* has been studied and developed in theory <ref type="bibr" target="#b6">(Ramalingam 1996)</ref> as well as in specific applications such as robotics <ref type="bibr" target="#b3">(Koenig 2005)</ref>. We use the broad exploration that genetic algorithms afford to find an optimal (or near-optimal) heuristic function for a derivative of the traditional AI search, A*.</p></div>
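The search scheme described above can be sketched in a few lines of Python. The following is a minimal, illustrative sketch on a toy grid with an assumed uniform arc cost; it is not the Mario implementation, and the grid and heuristic are assumptions made for demonstration:

```python
import heapq

def a_star(start, goal, neighbors, heuristic):
    """Greedy best-first search: always expand the open-list node with the
    lowest f = g (arc costs accumulated so far) + h (the SEF's estimate)."""
    open_list = [(heuristic(start), 0, start, [start])]
    best_g = {start: 0}
    while open_list:
        f, g, node, path = heapq.heappop(open_list)
        if node == goal:
            return path
        for child in neighbors(node):
            child_g = g + 1  # uniform arc cost between neighboring states
            if child_g < best_g.get(child, float("inf")):
                best_g[child] = child_g
                heapq.heappush(
                    open_list,
                    (child_g + heuristic(child), child_g, child, path + [child]))
    return None  # goal unreachable

# Toy 5x5 grid: 4-connected moves, Manhattan distance as the heuristic
def grid_neighbors(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

path = a_star((0, 0), (4, 4),
              grid_neighbors,
              lambda p: abs(p[0] - 4) + abs(p[1] - 4))
```

With an admissible heuristic such as Manhattan distance, the returned path is a shortest one (here, 8 moves, 9 nodes).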
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The A* and D* Algorithms</head><p>In this work, the evolution of a static evaluation function using a genetic algorithm is applied to an autonomous agent operating in an environment provided by Infinite Mario Bros., an open-source, faithful recreation of Nintendo's Super Mario World. The agent (Mario) uses a real-time execution of an A* search, called D*, to direct its movement through the environment to ultimately reach the goal (the end of the level). Mario may use information about what is currently visible onscreen, but beyond that nothing is known, making calculation of an actual path to the goal impossible. Therefore, the SEF of the D* must direct Mario toward states that are on the path to the goal.</p><p>Mario has a total of thirteen distinct action combinations that allow him to negotiate the environment. These are move left, move right, duck, and two others, jump and speed, that can be used in combination with the other actions and each other. Jump allows movement along the y-axis and can be used in combination with right, left, and duck. Speed allows for faster movement left or right, and higher jumps. This means that from any state there could be up to thirteen child nodes. Since the agent must operate at 24 frames per second, it is allotted approximately 40 milliseconds to perceive its current state, decide what to do next, and return a chosen action. With up to thirteen child nodes from any node in the search tree, any algorithm that decides what Mario is going to do next must do so quickly and efficiently. A "brute force" approach that analyzes all possible children was infeasible given available computing machinery, so a dynamic D* search is more appropriate.</p><p>The SEF of a D* search uses information about a state to direct the search efficiently. For Mario, much information is immediately available from each percept provided by the environment. 
This information includes the position of Mario, the amount of damage Mario has taken, the positions and types of enemies onscreen, and the positions and types of obstacles onscreen. Other information can be tracked over time, such as number of kills, X velocity, Y velocity, coins collected, and time remaining. The task of our system was to discover what effects, if any, the values of these variables should have on the SEF's valuation of a node in the search graph. A high value for a variable might proportionally increase the cost to transition to that state, or conversely proportionally decrease it.</p></div>
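The valuation just described reduces to a weighted sum over the tracked state variables, with the weights being the quantities the system must discover. A minimal sketch follows; the variable names and values are illustrative assumptions, not the system's actual identifiers:

```python
def sef_score(weights, state):
    """Sum of weight * variable over the tracked state variables.
    Lower scores mark more promising nodes."""
    return sum(w * state[name] for name, w in weights.items())

# Hypothetical weights in [-1, 1] and a hypothetical node's state
weights = {"x_pos": -0.5, "x_vel": -0.8, "damage": 1.0}
state = {"x_pos": 10.0, "x_vel": 2.0, "damage": 1.0}
score = sef_score(weights, state)  # -0.5*10 + -0.8*2 + 1.0*1 = -5.6
```

A negative weight makes a large variable value lower the node's cost (desirable, e.g. rightward progress), while a positive weight raises it (undesirable, e.g. damage taken).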
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The System</head><p>In 2009 and 2010, Julian Togelius of the ICE-GIC held a competition in which entrants created an autonomous agent (bot) to play Markus Persson's Infinite Mario Bros. as well as possible. "Best" in this sense means the distance a bot could travel within a given level and time limit. If two bots finished a level they were awarded equal scores, but if neither finished, the bot that travelled furthest was deemed better. In both iterations of the competition, the same bot was victorious. This bot was written by Robin Baumgarten <ref type="bibr">(Baumgarten)</ref>. Robin's bot used a D* search coupled with an accurate method for expanding child nodes, and a human-generated static evaluation function for the D*. Our system is a heavily modified version of Robin's, with the majority of the A* rewritten for legibility and efficiency while the means of producing child nodes was mostly preserved.</p><p>Twenty-four times per second, the environment provides the agent with a percept that includes the locations and types of all sprites on the screen, including Mario. The agent must return an action that the environment then effects upon Mario. For each percept received, the agent runs an A* search for 39ms or until it has planned out to 15 levels of the search tree. The agent keeps an internal representation of the world, and tracks Mario's x and y velocities among other things not provided by each percept.</p><p>After ensuring that its internal representation is consistent with the environment-provided one, the agent begins an A* search from Mario's current position and velocity. Children are generated by considering which actions are available to Mario at any node. That is, a child state reflects where Mario would be and how fast he would be moving if he performed action A from node M. 
(Figure <ref type="figure">1</ref>) A child state also informs the search of whether Mario would take damage, die, kill an enemy, collect a coin, etc. upon performing action A from node M. A static evaluation function provides weights on Mario's X position, X velocity, Y position, Y velocity, damage taken, whether Mario is carrying a shell, Mario's X position multiplied by X velocity, and Mario's Y position multiplied by Y velocity. These weights are values between -1 and 1. After multiplying the weights by their associated state variables, the sum of products forms the final SEF score for that node.</p><p>Figure <ref type="figure">1</ref> This SEF score is an estimate of the amount of work required to reach a goal state from the current node, so nodes with lower SEF scores are preferable. As the A* algorithm dictates, the level of the search tree at which the node was discovered is also added to the score. This is the "greedy" part of an A* search, where not just any solution is desired, but the best solution. For Mario, the cost to transition from one state to another is uniform; all neighboring states have the same arc cost. Adding the sum of arc costs into the SEF score for a node represents the "work" required to reach that node in the graph, so that if the same node is reached by two separate paths, the shorter path is favored. Since the D* search operates in a partially observable world, an admissible heuristic is difficult to discern, hence the motivation for a genetic algorithm to search for the optimal weights to apply in the SEF.</p><p>During the D* search, child nodes are generated from the current state of the agent. Generated children are placed on an open list sorted from lowest to highest score. The child with the lowest score is taken from that list, and its children are generated. 
This process repeats until the agent has searched for 39ms or has searched 14 states (an empirically chosen number), at which point it returns the action that leads to the best path for the currently available information.</p><p>The values for the weights used in the agent's SEF are deemed "unknown" to the system, and are provided as parameters by an external entity, in this case a genetic algorithm. The genetic algorithm is implemented as defined in <ref type="bibr">(Russell and Norvig 2003)</ref>. The chromosome being evolved is an array of 8 floating-point values, each between -1.0 and 1.0. The mutation rate was 1%. Each generation of chromosomes was tested for fitness by running a simulation on a training level where the agent used the chromosome's genes as the weights on the state-variables evaluated by the SEF in a D* search. The fitness of a chromosome was the sum of Mario's distance travelled and, if he completed the level, the time remaining. A higher fitness score indicates a more fit chromosome. This is in contrast to the static evaluation function, where a lower score indicates a more desirable state.</p><p>The test level that each bot was scored on had a variety of characteristics. The most important of these is that the level was short. As each chromosome needed to be used in an actual bot, a single fitness test could last upwards of a minute even if the bot finished the level successfully. A short level guaranteed that if a bot was going to finish a level, it could do so without much time spent. The second characteristic of the level was an imposed time limit, which places an upper bound on the possible time a bot could spend in a level. 
Slow bots, bots that stood still, or bots that got stuck therefore all required a maximum of N seconds to evaluate.</p><p>An ideal level must also contain the challenges and obstacles that a full level would have at its maximum difficulty. These include portions with a high volume of enemies, some of which cannot be destroyed by landing on their heads; portions with Bullet Bill towers of varying heights; gaps of varying width; pipes with Piranha Plants leaping out of them; and portions with mixtures of these scenarios.</p></div>
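The GA loop described in this section can be sketched as follows. The fitness function here is a toy stand-in for the real simulator run (which scores distance travelled plus remaining time), and the elitist truncation selection shown is an illustrative assumption; the paper follows the Russell and Norvig formulation:

```python
import random

GENES = 8  # one weight per SEF state-variable, each in [-1.0, 1.0]

def random_chromosome(rng):
    return [rng.uniform(-1.0, 1.0) for _ in range(GENES)]

def crossover(a, b, rng):
    # single-point crossover between two parents
    cut = rng.randrange(1, GENES)
    return a[:cut] + b[cut:]

def mutate(chrom, rng, rate=0.01):
    # each gene independently re-randomized with 1% probability
    return [rng.uniform(-1.0, 1.0) if rng.random() < rate else g for g in chrom]

def evolve(fitness, generations=50, pop_size=10, children=5, seed=0):
    rng = random.Random(seed)
    pop = [random_chromosome(rng) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)  # higher fitness is better
        parents = ranked[: pop_size // 2]
        kids = [mutate(crossover(*rng.sample(parents, 2), rng), rng)
                for _ in range(children)]
        pop = ranked[: pop_size - children] + kids  # elitist replacement
    return max(pop, key=fitness)

# Toy stand-in for the simulator-backed fitness test; peaks when every gene is 0.5
def toy_fitness(chrom):
    return -sum((g - 0.5) ** 2 for g in chrom)

best = evolve(toy_fitness, seed=0)
```

Because the top chromosomes survive each generation unchanged, the best fitness in the population never decreases across generations.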
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Optimization to D* and GA</head><p>The D* search still performed sub-optimally on the available computing hardware, so the search tree needed to be pared down. Paring the tree followed a simple formula: if two child nodes received the same score from the SEF, the first child node was kept and the other discarded. To pare the tree further, the maximum degree of a node was reduced from 13 to 11 by discounting nodes reachable through the agent's ducking actions. To avoid bias in the reproduction phase of the genetic algorithm, a generated and tested chromosome was added to the population only if its fitness score was unique or, failing that, its genes were unique among the chromosomes with the same fitness. Without this precaution, a glut of identical chromosomes with the same score could unfairly skew the parental selection process.</p></div>
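The two rules above, deduplicating child nodes by SEF score and admitting only distinct chromosomes into the population, can be sketched as follows (the function names and data layout are illustrative assumptions):

```python
def prune_children(children, sef):
    """Keep only the first child per distinct SEF score; discard later duplicates."""
    seen, kept = set(), []
    for child in children:
        score = sef(child)
        if score not in seen:
            seen.add(score)
            kept.append(child)
    return kept

def admit(population, chromosome, fitness):
    """Admit a chromosome only if its fitness is unique in the population,
    or its genes are unique among chromosomes sharing that fitness."""
    same_fitness = [c for c, f in population if f == fitness]
    if any(c == chromosome for c in same_fitness):
        return False  # exact duplicate: would skew parental selection
    population.append((chromosome, fitness))
    return True
```

For example, two chromosomes with the same fitness are both admitted as long as their genes differ, but a byte-for-byte duplicate is rejected.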
<div xmlns="http://www.tei-c.org/ns/1.0"><head>The Experiment</head><p>The experiment was conducted across two iterations of the genetic algorithm. For the first, a starting population of 10 chromosomes instantiated with random values was created. A total of 800 generations were iterated over, with five children produced per generation. The test level had a time limit of 36 in-game seconds (~26 seconds in real time) and a length of 300 blocks (~4800 pixels). The level's "seed" used by the level-generation engine was 4, and the difficulty was set to 15. The program execution lasted over 20 hours.</p><p>After this initial iteration completed, five of the top-scoring agents were used as the starting population for the second iteration of the genetic algorithm. The level length was increased three-fold to 900 blocks (~14400 pixels), the time limit was set to 100 in-game seconds, the "seed" to 65, and the difficulty remained 15. 320 generations were evaluated, again with five children produced per generation. As this test level's length and time limit were much larger than in the first iteration, the execution time grew to about 30 hours.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Results</head><p>The technique of using a GA to evolve the SEF of a D* search allowed the system to generate an effective SEF in the absence of a priori knowledge about what makes one agent state more desirable than another. The results of this experiment demonstrate little to no direct correlation between individual weights and bot scores, implying that a trial-and-error search by a human would be difficult and time-consuming.</p><p>For the initial iteration of the GA, over three thousand unique bots were evaluated over the course of 800 generations. 285 of those tied for the top score of 3955. Notably, the first bot to score this amount was produced during the 11th generation of the GA.</p><p>The weights used in the bot's SEF that the GA iterated on varied greatly. Figures <ref type="figure">2 and 3</ref> show typical scatter plots for the values of weights over the course of the GA's execution.</p><p>Figure <ref type="figure">2</ref> Figure <ref type="figure">3</ref> The weights on state-variables may appear to converge quickly to a few values and remain there. However, over time the amount of variance for any weight does not decline. Figure <ref type="figure">4</ref> shows a plot of the amount of variation for each weight grouped by 50 generations. No clear decline in the standard deviation among populations of weight values is present.</p><p>Figure <ref type="figure">4</ref> The scores that bots received likewise reached a local maximum early (generation 11), and were unable to improve thereafter (Figure <ref type="figure">5</ref>). Figure <ref type="figure">5</ref> Upon examining possible correlations between weight values and bot scores, a similar quandary is encountered: bots received top scores regardless of the weight values (Figures <ref type="figure">6 and 7</ref>), save for the case of the weight on X Position multiplied by X Velocity (Figure <ref type="figure">8</ref>). 
In the case of the weight on the agent's X Position multiplied by its X Velocity, a negative weight correlates with a higher score, and every positive weight yielded a score of 0.0 or less (a negative score indicates the agent travelled backwards).</p><p>Figure <ref type="figure">6</ref> Figure <ref type="figure">7</ref> Figure <ref type="figure">8</ref> For the second iteration of the GA, results similar to the first iteration were obtained. However, only 3 bots out of a population of over a thousand shared the top score. Figure <ref type="figure" target="#fig_0">9</ref> presents the distribution of scores that bots received during the course of execution. Since the initial population of this iteration comprised top-scoring bots of the first iteration, it is understandable that so many bots scored so well so early; however, a clear ceiling to the scores is visible, indicating the algorithm likely could not escape a local maximum. Although the weight values produced by the genetic algorithm for the top bots were distributed across the gamut of possible values, the end result was in fact bots that performed roughly as well as Robin Baumgarten's bot that won the ICE-GIC competition two years in a row.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusions</head><p>In this work, we presented the novel technique of using a genetic algorithm as an offline meta-search for an optimal static evaluation function to be used by the D* search of a real-time autonomous agent. The end results were static evaluation function parameters that, when used in the SEF of a real-time agent, enabled the agent to perform as well as the current best in its environment.</p><p>The fact that our agent performed as well as the current best is significant because we made very few assumptions about the valuation of agent states in the static evaluation function; the algorithms presented in this paper automated that task. The implication is that similar techniques could be employed for autonomous agents in other, possibly real-world, environments with high confidence in the end result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Future Work</head><p>The work presented here has much potential for expansion. Future work should include utilizing parallel computing clusters such as Beowulf to take advantage of the natural independence between the analyses of population members by the GA's fitness function, as well as between the evaluations of nodes on the open list by the D* algorithm's SEF. This capability would allow not only a deeper D* search, but also shorter generations in the genetic algorithm and therefore the ability to run the algorithm for more generations in the same amount of time.</p><p>Potential future work could also include employing pattern-matching techniques to identify a discrete set of distinct scenarios an agent would encounter. An agent could then utilize a separate SEF for each scenario.</p><p>Continuing the notion of pattern matching, further research would focus on generating probability tables for the likelihood of scenarios occurring after one another. Knowing the probability of the next scenario would allow an agent to make an accurate prediction of an optimal path before receiving its next percept.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 9</head><label>9</label><figDesc>Figure 9</figDesc><graphic coords="5,55.50,516.00,237.75,143.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 10 Figure 11</head><label>1011</label><figDesc>Figure 10</figDesc><graphic coords="5,320.25,307.25,237.75,143.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 12</head><label>12</label><figDesc>Figure 12</figDesc><graphic coords="5,320.25,492.25,237.75,143.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Table 1 displays a comparison of bot scores between Robin's bot (AStarAgent) and our bot (AStarAgentEvolved) for a variety of levels whose difficulties were set to 15.</figDesc><table /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Infinite Super Mario AI</title>
		<author>
			<persName><forename type="first">Robin</forename><surname>Baumgarten</surname></persName>
		</author>
		<ptr target="http://www.doc.ic.ac.uk/~rb1006/projects:marioai" />
		<imprint>
			<date type="published" when="2009-09-09">9 September 2009. 8 February 2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A Formal Basis for the Heuristic Determination of Minimum Cost Paths</title>
		<author>
			<persName><forename type="first">Peter</forename><forename type="middle">E</forename><surname>Hart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nils</forename><forename type="middle">J</forename><surname>Nilsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bertram</forename><surname>Raphael</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems Science and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="100" to="107" />
			<date type="published" when="1968">1968</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Genetic algorithm for large-scale maximum parsimony phylogenetic analysis of proteins</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Lundgren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fredriksson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">B</forename><surname>Schiöth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biochimica et Biophysica Acta</title>
		<imprint>
			<biblScope unit="volume">1725</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="19" to="29" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Fast Replanning for Navigation in Unknown Terrain</title>
		<author>
			<persName><forename type="first">S</forename><surname>Koenig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Likhachev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Robotics</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="354" to="363" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Evolutionary computation and games</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lucas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kendall</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Computational Intelligence Magazine</title>
		<imprint>
			<biblScope unit="page" from="10" to="18" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">An introduction to genetic algorithms</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mitchell</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1998">1998</date>
			<publisher>MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">An incremental algorithm for a generalization of the shortest-path problem</title>
		<author>
			<persName><forename type="first">G</forename><surname>Ramalingam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Reps</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Algorithms</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="267" to="305" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">Stuart</forename><forename type="middle">J</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Peter</forename><surname>Norvig</surname></persName>
		</author>
		<title level="m">Artificial Intelligence: A Modern Approach</title>
				<meeting><address><addrLine>Upper Saddle River</addrLine></address></meeting>
		<imprint>
			<publisher>Prentice Hall/Pearson Education</publisher>
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Pseudocoevolutionary Genetic Algorithms for Power Electronic Circuits Optimization</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Lo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="590" to="598" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
