<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fractional Genetic Programming for a More Gradual Evolution</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Artur</forename><surname>Rataj</surname></persName>
							<email>arataj@iitis.gliwice.pl</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Theoretical and Applied Computer Science</orgName>
								<address>
<addrLine>Bałtycka 5</addrLine>
									<settlement>Gliwice</settlement>
									<country key="PL">Poland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fractional Genetic Programming for a More Gradual Evolution</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DD68CFCE6E46F0C5859F1F37C7379B0C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T18:47+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>genetic programming</term>
					<term>real-coded genetic algorithm</term>
					<term>evolutionary method</term>
					<term>gradualism</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We propose a softening of a genetic program by means of so-called fractional instructions. Thanks to their adjustable strengths, a new instruction can be introduced into a program gradually, and the other instructions may gradually adapt to the new member. In this way, a transformation of one candidate into another can be continuous. Such an approach makes it possible to take advantage of the properties of real-coded genetic algorithms, but in the realm of genetic programming. We show that the approach can be successfully applied to a precise generalisation of functions, including those exhibiting periodicity.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>One of the basic concepts in genetics is quantitative inheritance, i.e. a gradual regulation of the strength of a single genetic trait. Such an inheritance is realised by, e.g., encoding a single trait with several genes whose effects add up to strengthen the trait.</p><p>Quantitative inheritance enables an effective, exploitative search of the candidate space: an offspring similar to its parents has a high chance of being viable in similar environmental conditions, and thus of being the next step in a walk through the candidate space. The similarity might make it harder for the offspring to reach a substantially different environment, but the need to live in a radically different environment rarely arises. Consider, e.g., the formation of limbs in animals: they evolved from fins in a long, exploitative transition that employed slight changes in gene expression and a gradual appearance of new genes. Indeed, it has recently been shown that merely varying the expression of certain genes in fish makes their fins more limb-like <ref type="bibr" target="#b4">[5]</ref>. Such graduality enabled a progressive change of habitat from sea to land, while offspring remained able to live in an environment similar to that of their parents.</p><p>The concept of graduality, or continuity, of the candidate space is the basis of real-coded genetic algorithms <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b13">14]</ref>, which support a number of dedicated genetic operators <ref type="bibr" target="#b6">[7]</ref>.</p><p>We propose genetic programs in a form that is halfway between a tree representation <ref type="bibr" target="#b2">[3]</ref> and a linear list <ref type="bibr" target="#b10">[11]</ref>: a linear sequence of functions (instructions) communicates using a stack, as introduced by <ref type="bibr" target="#b11">[12]</ref>. The basic difference of our approach lies in the graduality of the stack operations, constructed to support the discussed continuity of instructions. The continuity is similar to that of <ref type="bibr" target="#b12">[13]</ref>. There are two basic differences, though:</p><p>-There is no need for special continuous equivalents of common operations like addition or multiplication; the original functions can be used directly. The graduality is provided elsewhere, by a specially constructed stack.</p><p>-A program does not strictly follow a tree structure, as the value of a node can be reused by a number of other nodes, which promotes the reusability of instruction results and in effect reduces redundancy. Like the instructions themselves, the dependencies outside the tree hierarchy can also form gradually.</p><p>The paper is organised as follows: Sect. 2 describes fractional stack operations. On the basis of these definitions, fractional instructions are discussed in Sect. 3. These build a fractional program, described in Sect. 4. A method of evolving such programs is introduced in Sect. 5. Section 6 presents some tests. Finally, there is a discussion in Sect. 7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">A Fractional Stack</head><p>Let the stack, in order to be compatible with the fractional instructions, have a continuous length, so that it supports fractional pushing and popping. Thanks to this, a weak operation, i.e. one having a low strength, may interact with the stack so that there is only a minimal change to the stack state.</p><p>Let each element on the stack have a real value x and also a real length. The length is specified by the strength s of a push operation push(s, x), and contributes to the total length of the stack, which is distinct from the total number of elements in the stack, i.e. the stack size. A pop operation y = pop(s), having the strength s, shortens the stack length by s, by removing 0 or more top elements from the stack, and possibly also shortening the length of the element f that is on the top after that removal. The popped value y is a mean of the values of the elements removed or shortened, weighted by the respective length of each element removed and by the amount by which the length of f was shortened. Let us consider an example. Two operations push(0.6, 10) and push(1.2, 20) led to a stack whose top fragment is shown in Fig. <ref type="figure" target="#fig_0">1(a)</ref>. A subsequent pop(1.5) operation, in order to shorten the stack by the proper value, needed to remove the top element and shorten the neighbouring element by 0.3, as seen in Fig. <ref type="figure" target="#fig_0">1(b)</ref>. The value returned by that operation is the weighted average</p><formula xml:id="formula_0">(1.2 • 20 + 0.3 • 10) / (1.2 + 0.3) = 18.</formula><p>Let i be the index of an element in the stack, 0 for the bottommost element and S − 1 for the topmost one, where S is the total number of elements in the stack. Then let L i be the length of the ith element in the stack.</p></div>
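The push/pop semantics above can be sketched in a few lines of Python. The class name, method names and the (length, value) element representation are assumptions for illustration; the paper specifies only the operations push(s, x) and y = pop(s).

```python
class FractionalStack:
    """A sketch of the fractional stack: elements carry a real length."""

    def __init__(self):
        # Each element is a [length, value] pair; index 0 is the bottom.
        self.elems = []

    def push(self, s, x):
        """push(s, x): append an element of real length s and value x."""
        if s > 0:
            self.elems.append([s, x])

    def pop(self, s):
        """pop(s): shorten the stack by total length s; return the
        length-weighted mean of the removed (or shortened) parts."""
        remaining = s
        weighted_sum = 0.0
        while remaining > 0 and self.elems:
            length, value = self.elems[-1]
            if length <= remaining:
                # Remove the whole top element.
                weighted_sum += length * value
                remaining -= length
                self.elems.pop()
            else:
                # Only shorten the top element f by the remaining amount.
                weighted_sum += remaining * value
                self.elems[-1][0] = length - remaining
                remaining = 0.0
        return weighted_sum / s

    def total_length(self):
        return sum(length for length, _ in self.elems)
```

Replaying the example from the text, push(0.6, 10) followed by push(1.2, 20) and pop(1.5) removes the whole top element and 0.3 of the one below it, returning the weighted average 18.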
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Fractional Instructions</head><p>Given the stack operations, the definitions of the fractional instructions are straightforward. The arguments of such an instruction are always popped from the stack, and its possible result is pushed back onto the stack. The length of each of the stack operations is given by the instruction strength. A low-strength instruction can only minimally modify the stack contents, fulfilling in this way the graduality criterion. In particular, a zero-strength instruction does nothing.</p><p>Let the set of instructions be limited to several stack-handling operations and a handful of arithmetic operations. Each instruction's mnemonic is preceded by the instruction's strength. Table <ref type="table">1</ref> lists the definitions of all instructions. Arguments of binary operators are evaluated from left to right. As seen, s POP does not use the popped value; the whole role of the instruction is to modify the stack state. We introduce s COPY n:x, which multiplies the lengths of the n elements at the stack top by x. The instruction is modelled after the instruction DUP, an equivalent of COPY 1:2, used to simplify programs implemented in stack-based languages like FORTH, FIFTH <ref type="bibr" target="#b7">[8]</ref> or Java bytecode <ref type="bibr" target="#b9">[10]</ref>. In our case of fractional instructions, COPY can act as an adapter for arguments passed between operations of different strengths.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">A Fractional Program</head><p>Before the program is executed, its N i input values X i , i = 0, 1, . . . N i − 1, are pushed to the stack with a sequence of operations (1 + p) PUSH X i . The fixed Table <ref type="table">1</ref>. A list of basic instructions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mnemonic / stack operations</head><formula xml:id="formula_1">s PUSH x : push(s, x)
s POP : pop(s)
s COPY n:x : l 0 = L S−1 , v 0 = pop(l 0 ); l 1 = L S−1 , v 1 = pop(l 1 ); . . . ; l n−1 = L S−1 , v n−1 = pop(l n−1 ); push(x l n−1 , v n−1 ); push(x l n−2 , v n−2 ); . . . ; push(x l 0 , v 0 )
s ADD : push(s, pop(s) + pop(s))
s MUL : push(s, pop(s) • pop(s))
s NEG : push(s, −pop(s))
s INV : push(s, 1 / pop(s))
s SIN : push(s, sin pop(s))</formula><p>value p ≥ 0 is a padding that decreases the probability of a stack underflow: the program may instead have a chance of popping an input value, or a part of it, again. This is the only case of an instruction with a strength greater than 1, and such an instruction never occurs in a program itself. We will use a padding of 1 in the tests.</p><p>If it is unlikely that a program needs a very long stack to solve a given problem, the stack size might have a constant limit imposed, so that stack overflows become possible. This way, programs that are likely too complex are deemed invalid, which favours simpler solutions and also saves time by preventing further exploitation of such programs.</p><p>A program, after finishing, yields a single output value Y, which is equal to whatever is left in the stack, excluding its possible bottommost part, if any, never used by the program, as that part contains unprocessed input values. Thus,</p><formula xml:id="formula_2">Y = pop( Σ k=0 .. S−1 L k − min(L u , N i ) )<label>(1)</label></formula><p>where L u is the global minimum length of the stack across all pop(s) operations executed within the program, or ∞ if there were no such operations. Note that if the first instruction of a program is 1 COPY N i :z, and all of the following instructions have the strength z, then for given X i and any z &gt; 0 the program yields exactly the same Y. This shows that an instruction retains its characteristics at any strength, aside from a different level of interaction with the stack. What matters, then, is the ratio between the strengths of the instructions that communicate using the same stack, not the absolute values of these strengths.</p></div>
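With all strengths equal to 1, the fractional stack degenerates into an ordinary stack, which is enough to sketch the execution model of a program built from the instructions of Table 1. The tuple encoding of instructions and the helper name run are assumptions for illustration; the program below is the 9-instruction ideal generaliser of Fig. 5.

```python
import math

def run(program, x):
    """Execute a unit-strength fractional program on one input value x."""
    stack = [x]  # the input value X_0, pushed before execution
    for op in program:
        name = op[0]
        if name == "PUSH":
            stack.append(op[1])
        elif name == "ADD":
            stack.append(stack.pop() + stack.pop())
        elif name == "MUL":
            stack.append(stack.pop() * stack.pop())
        elif name == "NEG":
            stack.append(-stack.pop())
        elif name == "INV":
            stack.append(1.0 / stack.pop())
        elif name == "SIN":
            stack.append(math.sin(stack.pop()))
    return stack[-1]  # the single output value Y

# The ideal program of Fig. 5, computing g(x) = 8(1 + sin(0.59x + 4.6)).
G = [("PUSH", 0.59), ("MUL",), ("PUSH", 4.6), ("ADD",), ("SIN",),
     ("PUSH", 1.0), ("ADD",), ("PUSH", 8.0), ("MUL",)]
```

For any x, run(G, x) agrees with 8(1 + sin(0.59x + 4.6)) up to floating-point error.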
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Evolutionary Method</head><p>We have developed an evolutionary method that dynamically adjusts the exploration/exploitation scheme, as is often practised in genetic programming <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b8">9]</ref>. The exploitation employs a novel technique that emphasises the advantage of gradual instructions. To keep this paper focused, we do not go into the complexities of maintaining a population of candidates.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1">Exploration -a new candidate</head><p>A candidate program has a fixed length P n and a fixed maximum stack size P s , both roughly matched to the function to be fitted, so that overly complex solutions are impossible.</p><p>There are several types of instructions to pick from at random. Priority is given to s PUSH x, as otherwise most candidates would turn out to be invalid due to stack underflows. In the tests, this type is chosen with the probability P PUSH = 0.3, while each of the 7 other types is chosen with P ¬PUSH = 0.1. The pushed value is drawn using a uniform distribution spanning the reasonable limits −10, 10 . The arguments of s COPY n:x are chosen using a uniform distribution as well. The ranges are n ∈ 1, 2 , as this accommodates the typical number of values popped by an instruction, and x ∈ 0, 2 , which does not get too large, as sequences of s COPY n:x may effectively enlarge the upper limit of multiplying the lengths of stack elements.</p></div>
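The exploration step can be sketched as follows. The initial strength of a freshly drawn instruction is not stated in the text and is assumed here to be 1; all helper names are illustrative.

```python
import random

# 8 instruction types: PUSH with probability 0.3, the rest with 0.1 each.
TYPES = ["PUSH", "POP", "COPY", "ADD", "MUL", "NEG", "INV", "SIN"]
WEIGHTS = [0.3] + [0.1] * 7

def random_instruction(rng=random):
    """Draw one instruction as a (strength, type, argument...) tuple."""
    kind = rng.choices(TYPES, weights=WEIGHTS, k=1)[0]
    s = 1.0  # assumed initial strength of an explored instruction
    if kind == "PUSH":
        return (s, kind, rng.uniform(-10.0, 10.0))  # value in <-10, 10>
    if kind == "COPY":
        n = rng.randint(1, 2)      # n in <1, 2>
        x = rng.uniform(0.0, 2.0)  # length multiplier x in <0, 2>
        return (s, kind, (n, x))
    return (s, kind)

def random_program(P_n, rng=random):
    """A new exploration candidate of fixed length P_n."""
    return [random_instruction(rng) for _ in range(P_n)]
```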
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2">Exploitation -tuning of a candidate</head><p>The gradual exploitation shows the flexibility of fractional programs. An instruction may have its strength gradually reduced until it disappears, or another instruction may appear with a strength near 0. Both such a deletion and such an insertion may have an arbitrarily small influence on the program, as very small strengths translate to only a minimal interaction with the stack.</p><p>The exploitation is a random walk in the candidate space, beginning with a new candidate created by exploration. A backtrack to the previous position occurs if the mean square error (MSE) of the new candidate's fit to a function g(x) is not better than the respective MSE of the previous candidate, or if the new candidate is invalid because of a stack underflow or overflow, or because of an arithmetic error like a division by zero. Let a single step be called a modification Δ.</p><p>We are unsure which Δ is best for a given problem. Thus, we introduce a variability of its parameters: they are picked using probability distributions before each step. Let these parameters be as follows: a minimum threshold Θ min and an instruction perturbance level Φ, both chosen from 0, 1 using a uniform distribution. Θ min controls how many instructions are modified on average within Δ. The idea is that it would sometimes be enough to modify only a single instruction within Δ, and it might even be advantageous to do so if exactly one instruction needs to be modified in order to obtain a better candidate; parallel modifications of other instructions might create an unwanted drift in the candidate space in such a case. Yet, sometimes more instructions need to be modified at once, in order to overcome the Hamming wall <ref type="bibr" target="#b1">[2]</ref>.
Also, computation time is obviously crucial in genetic programming, and the random walk might at times be faster if more instructions are modified per Δ. We allow for each case by making Θ min variable. Φ stems from a similar reasoning: it translates to how much an instruction is modified on average within Δ. Sometimes a fast walk is needed, e.g. when a candidate is far from the optimum point in the candidate space. Yet, sometimes small, precise steps are needed instead, e.g. when the exploited candidate oscillates very closely around the optimum point.</p><p>Within a single Δ, let each of the instructions I be modified as follows:</p><p>1. Pick Θ ∈ 0, 1 using a uniform distribution. If Θ &lt; Θ min , then do not modify I. Go to 2 only otherwise. 2. Let a strength perturbance Φ s = δ s Φ and a value perturbance</p><formula xml:id="formula_3">Φ v = δ v Φ</formula><p>control, respectively, how much an instruction strength and an instruction argument can be perturbed. In the tests we picked fixed δ s = δ v = 1/20 as a trade-off between exploitation speed and precision. 3. Let s be the current strength of I, and s′ the modified one. Let</p><formula xml:id="formula_4">s′ = max(0, min(1, s + uni(−Φ s /2, Φ s /2))),<label>(2)</label></formula><p>where uni(a, b) picks a random value in the range a, b using a uniform distribution. 4. If s′ = 0, then let I be removed. Go to 5 only otherwise. 5. If I is of the type s PUSH v, let the pushed value v be modified to become v′:</p><formula xml:id="formula_5">v′ = max(−10, min(10, v + uni(−Φ v /2, Φ v /2))).<label>(3)</label></formula><p>6. If I is of the type s COPY n:m, let the length multiplier m be modified to become m′:</p><formula xml:id="formula_7">m′ = max(0, min(2, m + uni(−Φ v /2, Φ v /2))).<label>(4)</label></formula><p>Note that v′ and m′ are clipped to the same ranges as when a new instruction is created during the exploration.</p><p>After the instructions are modified, and if r &gt; 0 instructions have been removed during that process, then let r new instructions be picked, exactly as during the exploration, but let these new instructions be randomly inserted into the program, so that it retains the original number of instructions P n . Let the strength of each be Φ s , that is, a small value related to the maximum single modification of an instruction strength.</p></div>
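The per-instruction perturbations of Eqs. (2)-(4) amount to a clipped uniform jitter. A minimal sketch, with all function names assumed:

```python
import random

def uni(a, b, rng=random):
    """uni(a, b): a uniformly distributed value from <a, b>."""
    return rng.uniform(a, b)

def perturb_strength(s, phi_s, rng=random):
    # Eq. (2): strengths are clipped to <0, 1>.
    return max(0.0, min(1.0, s + uni(-phi_s / 2, phi_s / 2, rng)))

def perturb_push_value(v, phi_v, rng=random):
    # Eq. (3): pushed values stay within <-10, 10>.
    return max(-10.0, min(10.0, v + uni(-phi_v / 2, phi_v / 2, rng)))

def perturb_copy_multiplier(m, phi_v, rng=random):
    # Eq. (4): COPY length multipliers stay within <0, 2>.
    return max(0.0, min(2.0, m + uni(-phi_v / 2, phi_v / 2, rng)))

# Per-step parameters: Theta_min and Phi are drawn uniformly from <0, 1>,
# and phi_s = phi_v = Phi / 20, as delta_s = delta_v = 1/20 in the tests.
```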
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3">Adapting the exploration/exploitation ratio</head><p>Let the fitting quality of a candidate C to g(x) be MSE(C). Let the best candidate so far be B; if there is none, assume MSE(B) = ∞.</p><p>The search loop consists of two phases: exploration, which creates a new candidate T, and then exploitation of T, to an extent related to the ratio of the quality of T to that of B. If, after the whole exploitation stage, MSE(T) &lt; MSE(B), then T becomes the new B. The loop ends if MSE(B) ≤ MSE max , where MSE max is a fixed value, chosen depending on the minimum acceptable quality of the candidate we want to find.</p><p>The mentioned extent decides on the exploration/exploitation balance, and also on the distribution of the computation time over the exploitations of candidates T, given their relative quality. As MSE(T) may decrease during the exploitation, the extent is dynamically updated using the current value of MSE(T). The extent is represented by a maximum failure count (MFC): the maximum possible number of subsequent exploitation steps that do not result in finding a better candidate. Thus, if a new path to improving T is found, the path is given a chance, irrespective of the length of the exploitation of the candidate so far. If MFC is reached, the exploitation is terminated, and the search loop returns to the exploration phase as described. MFC is computed for a new T and, to adapt it dynamically as discussed, also whenever the exploitation of T results in a candidate with a lower MSE(T).</p><p>Let us consider a formula for MFC. Let its minimum value be MFC min , so that any candidate is given a moderate chance to improve, irrespective of that candidate's quality. We want, though, to give a considerably greater MFC to candidates whose initial quality is estimated to be decent.
Let the formula have a fixed parameter t r &gt; 0 to reflect that:</p><formula xml:id="formula_9">MFC = ⌊ t r (tanh(t s (1 − MSE(T)/MSE(B))) + 1) + MFC min + 1/2 ⌋<label>(5)</label></formula><p>The hyperbolic tangent serves as a threshold function, centred around MSE(T)/MSE(B) = 1. The threshold steepness is regulated by t s = 10. Let MFC min and t r be adjusted in a test on a relatively complex function, in order to tune the evolutionary method towards more advanced tasks. We will use the set of samples g L (x), described in detail in Sect. 6. The parameters in question will be tuned to minimise the average computation time t of the evolutionary method. To save time, each test will be terminated whenever t = 1000. Figure <ref type="figure">2</ref>(a) shows that MFC min = 500 is approximately optimal. Having fixed that value, let us test a range of values of t r : the respective diagram in Fig. <ref type="figure">2(b)</ref> shows an optimum of about t r = 1500.</p><p>A diagram of (5), using the adjusted parameters, is shown in Fig. <ref type="figure">3</ref>.</p></div>
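The extracted form of Eq. (5) is ambiguous, so the sketch below encodes one plausible reading: a tanh threshold centred at MSE(T)/MSE(B) = 1 that assigns roughly MFC min to clearly worse candidates and up to about 2 t r + MFC min to decent ones, with the trailing + 1/2 read as rounding to an integer count.

```python
import math

def mfc(mse_t, mse_b, t_r=1500, t_s=10, mfc_min=500):
    """One plausible reading of Eq. (5): maximum failure count for a
    candidate T with error mse_t, given the best-so-far error mse_b."""
    ratio = mse_t / mse_b
    # tanh threshold centred at ratio = 1, steepness t_s; + 1/2 rounds.
    return int(t_r * (math.tanh(t_s * (1.0 - ratio)) + 1.0)
               + mfc_min + 0.5)
```

With the tuned values t r = 1500 and MFC min = 500, a candidate as good as the best one (ratio 1) gets 2000 exploitation failures before being abandoned, while a clearly worse one gets only the minimum of 500.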
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Tests</head><p>Let us take advantage of the property of the presented method that it is not limited to polynomial approximation, and choose a function to generalise g(x) that contains a periodic function. Let us also make g(x) relatively non-trivial for an evolutionary approach, by making the diagrams of its sub-components not resemble g(x); this may make it less likely that such a sub-component alone would be considered interesting enough in the exploitation stage to be gradually completed by other sub-components. We will also model g(x) so that along a certain part of its domain it is very similar to the function y = x². The parabola will serve as a honey-pot that attempts to distract our evolutionary method with a deep and, thanks to the symbolic simplicity of the honey-pot, easily reachable local minimum. Let g(x) be</p><formula xml:id="formula_11">y = 8(1 + sin(0.59x + 4.6)).<label>(6)</label></formula><p>The diagram in Fig. <ref type="figure">4</ref> confirms that the sub-functions of g(x) are dissimilar to g(x), fitting-wise. An ideal generalising program is shown in Fig. <ref type="figure" target="#fig_3">5</ref>. There are 9 instructions in the program, but we will allow a larger P n = 15, as it is unlikely that such a compact candidate would be found, especially without preceding intermediate stages with a larger number of instructions. The program requires only two elements in the stack, but let the stack size be P s = 10 for reasons similar to the above.</p><p>Let there be two sets of samples for the evolutionary method. A 'short' set g S (x) contains 30 samples of g(x) only from within a domain fragment S = 0.1, 3.6 , and a 'long' set g L (x) contains 30 samples from L = 0.1, 13.6 .
Both fragments start from 0.1 rather than 0, so that a smaller percentage of candidates become invalid because of a possible division by zero.</p><p>To convey the computational efficiency, each test will be limited to a maximum computation time of 1000 seconds, and the number of tests that succeeded in fulfilling this criterion will be given.</p></div>
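The two sample sets can be sketched as follows; the paper fixes 30 samples per set but not their spacing, so uniform spacing over each domain fragment is an assumption.

```python
import math

def g(x):
    """The target function of Eq. (6)."""
    return 8.0 * (1.0 + math.sin(0.59 * x + 4.6))

def sample_set(lo, hi, n=30):
    """n uniformly spaced (x, g(x)) samples over <lo, hi> (assumed)."""
    step = (hi - lo) / (n - 1)
    return [(lo + i * step, g(lo + i * step)) for i in range(n)]

g_S = sample_set(0.1, 3.6)    # the 'short' set, domain fragment S
g_L = sample_set(0.1, 13.6)   # the 'long' set, domain fragment L
```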
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1">Extrapolation of an inflection</head><p>We see that g S (x) is very similar to the honey-pot, and as the honey-pot lacks any inflection point, the hint to the evolutionary method that g(x) has an inflection point within S is rather subtle. Let us use the presented method to extrapolate g(x) outside S, to see if the method is sensitive to that subtle hint. Let MSE max = 1 • 10 −5 be rather small, as g S (x) is easy to fit by the discussed method.</p><p>Out of the 20 tests run, 6 met the 1000 seconds criterion. Their generalising diagrams are illustrated in Fig. <ref type="figure" target="#fig_4">6</ref>. Five of these could be described as reasonable y = g(x) extrapolations, given the limited domain fragment covered by g S (x); various interpretations of the slopes, of a stationary point and of an inflection point are seen. We see that the evolutionary method was able to propose interpretations very different from the honey-pot function. The sixth extrapolation contains garbage to the left of g S (x).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2">Extrapolation of periodicity</head><p>The set g L (x) presents a strong hint of being periodic. We expect the genetic programs to finely extrapolate that periodicity. Let MSE max = 1 • 10 −4 be larger than the error threshold used in the previous test, as g L (x) turns out to be relatively more difficult to fit. This time, all of the 20 tests run met the 1000 seconds criterion. The generalising diagrams of the first 10 tests are illustrated in Fig. <ref type="figure" target="#fig_5">7</ref>. We see a visually almost perfect fit, despite the relaxed value of MSE max . The fitting error shows, though, that the generalising programs are not identical to g(x), and that the fitting quality decreases on average as the extrapolation distance increases. Browsing the code of some of the respective final programs, depicted in Fig. <ref type="figure">8</ref>, confirms that g(x) appears to be simpler than the generalising programs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">Discussion</head><p>If we look at the code of the final candidates, shown in Fig. <ref type="figure">8</ref>, it is seen that they still contain instructions of very different strengths, even though the generalised function could be defined only in terms of instructions having a binary strength of 0 or 1. The fractional property of the programs thus serves not only the purpose of a continuous evolution, but also extends the expressivity of the candidates. Consider e.g. the following program: 1 PUSH x; 0.5 MUL, which computes x², even though x is pushed only once and no power instruction is present.</p><p>As the programs utilise common algebraic operators and common mathematical functions, the question might be raised whether elegant symbolic equations could be extracted from them. If a generalising program appears to fit closely, yet represents a much more complex equation than the generalised function, as in the case of the test in Sect. 6.2, then perhaps a decrease of MSE max might refine the programs up to the point of making some of the fractional instructions effectively almost disappear, by making their strengths very close to zero. An algorithm simplifying a symbolic equation might then be applied: it would hypothesise that such instructions are redundant, and would remove them completely before simplifying the rest of the program in order to extract a symbolic equation.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>An example of a fractional pop(s): (a) before, and (b) after the operation.</figDesc></figure>
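The claim that 1 PUSH x; 0.5 MUL computes x² can be checked with a minimal fractional pop: MUL at strength 0.5 pops two halves of the same length-1 element, each yielding the value x. The helper below is a simplified sketch covering just this case.

```python
def mul_half_strength(x):
    """Run the program '1 PUSH x; 0.5 MUL' on a tiny fractional stack."""
    stack = [[1.0, x]]          # 1 PUSH x: one element, length 1, value x
    def pop(s):
        length, value = stack[-1]
        if length > s:          # only shorten the top element
            stack[-1][0] = length - s
            return value
        stack.pop()             # remove the whole top element
        return value            # (whole-element case; enough for this demo)
    a = pop(0.5)                # first argument of 0.5 MUL
    b = pop(0.5)                # second argument of 0.5 MUL
    stack.append([0.5, a * b])  # push(0.5, a * b)
    return stack[-1][1]
```

Both pops return x, so the pushed result is x · x, confirming the squaring behaviour without any power instruction.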
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 and Fig. 3 .</head><label>2, 3</label><figDesc>Fig. 3. A diagram of the maximum failure count, given the ratio of MSE between the exploited candidate and the best one.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. A diagram of g(x), some of its sub-components, and the honey-pot parabola y = x².</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. A program representing exactly g(x).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Fig. 6 .</head><label>6</label><figDesc>Fig. 6. A diagram of g(x), and of a set of 6 functions fitting g S (x) at MSE max = 1 • 10 −5 .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Fig. 7 .</head><label>7</label><figDesc>Fig. 7. A diagram of g(x), of a set of 10 functions fitting g L (x) at MSE max = 1 • 10 −4 , and of the fitting error δy of each.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc></figDesc><table><row><cell>0</cell><cell>&lt;1&gt; PUSH 0.59</cell></row><row><cell>1</cell><cell>&lt;1&gt; MUL</cell></row><row><cell>2</cell><cell>&lt;1&gt; PUSH 4.6</cell></row><row><cell>3</cell><cell>&lt;1&gt; ADD</cell></row><row><cell>4</cell><cell>&lt;1&gt; SIN</cell></row><row><cell>5</cell><cell>&lt;1&gt; PUSH 1.0</cell></row><row><cell>6</cell><cell>&lt;1&gt; ADD</cell></row><row><cell>7</cell><cell>&lt;1&gt; PUSH 8.0</cell></row><row><cell>8</cell><cell>&lt;1&gt; MUL</cell></row></table></figure>
		</body>
		<back>

			<div type="funding">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This work has been supported under the project "An active creation of a model of space using a 3D scanner and an autonomous robot", N N516 440738.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0" />			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The exploration/exploitation tradeoff in dynamic cellular genetic algorithms</title>
		<author>
			<persName><forename type="first">E</forename><surname>Alba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Dorronsoro</surname></persName>
		</author>
	</analytic>
	<monogr>
<title level="j">IEEE Transactions on Evolutionary Computation</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="126" to="142" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Charbonneau</surname></persName>
		</author>
		<idno>NCAR/TN-451+STR</idno>
		<title level="m">Release notes for pikaia 1.2</title>
				<meeting><address><addrLine>Boulder, Colorado</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2002">2002</date>
		</imprint>
		<respStmt>
			<orgName>NCAR Technical Note</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Tech. rep.</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A representation for the adaptive generation of simple sequential programs</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">L</forename><surname>Cramer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">1st International Conference on Genetic Algorithms</title>
				<meeting><address><addrLine>Pittsburgh, PA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Carnegie-Mellon University</publisher>
			<date type="published" when="1986">1986</date>
			<biblScope unit="page" from="183" to="187" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On evolutionary exploration and exploitation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Eiben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schippers</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Fundamenta Informaticae</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="issue">1-4</biblScope>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Hoxd13 Contribution to the Evolution of Vertebrate Appendages</title>
		<author>
			<persName><forename type="first">R</forename><surname>Freitas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gómez-Marín</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Wilson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Casares</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Gómez-Skarmeta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Developmental Cell</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="1219" to="1229" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Real-coded genetic algorithms, virtual alphabets, and blocking</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Goldberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Complex Systems</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="139" to="167" />
			<date type="published" when="1990">1990</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A taxonomy for the crossover operator for real-coded genetic algorithms: An experimental study</title>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lozano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Sánchez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="309" to="338" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">FIFTH: A stack based GP language for vector processing</title>
		<author>
			<persName><forename type="first">K</forename><surname>Holladay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">A</forename><surname>Robbins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Von Ronne</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">EuroGP&apos;07 Proceedings of the 10th European Conference on Genetic Programming</title>
				<meeting><address><addrLine>Valencia, Spain</addrLine></address></meeting>
		<imprint>
			<publisher>Springer-Verlag</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="102" to="113" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Auto-tuning strategy for evolutionary algorithms: balancing between exploration and exploitation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Soft Computing</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="157" to="168" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Java virtual machine specification</title>
		<author>
			<persName><forename type="first">T</forename><surname>Lindholm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Yellin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1999">1999</date>
			<publisher>Addison-Wesley Longman Publishing Co., Inc</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Genetic programming with linear representation: a survey</title>
		<author>
			<persName><forename type="first">M</forename><surname>Oltean</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Groşan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Dioşan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Mihȃilȃ</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal on Artificial Intelligence Tools</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="197" to="238" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Stack-based genetic programming</title>
		<author>
			<persName><forename type="first">T</forename><surname>Perkis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE World Congress on Computational Intelligence</title>
				<meeting><address><addrLine>Orlando, FL, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="1994">1994</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="148" to="153" />
		</imprint>
	</monogr>
	<note>Proceedings of the First IEEE Conference on Evolutionary Computation</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Continuously evolving programs in genetic programming using gradient descent</title>
		<author>
			<persName><forename type="first">W</forename><surname>Smart</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Second Asian-Pacific Workshop on Genetic Programming</title>
				<editor>
			<persName><forename type="first">R</forename><forename type="middle">I</forename><surname>Mckay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">B</forename><surname>Cho</surname></persName>
		</editor>
		<meeting>The Second Asian-Pacific Workshop on Genetic Programming<address><addrLine>Cairns, Australia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Genetic algorithms for real parameter optimization</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Wright</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Foundations of Genetic Algorithms</title>
				<imprint>
			<date type="published" when="1990">1990</date>
			<biblScope unit="page" from="205" to="218" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
