<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Coimbra (PT) / Online</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Paintings-from-Polygons: Simulated Annealing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Redouane Dahmani</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sven Boogmans</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Arne Meijs</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daan van den Berg</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Informatics Institute, University of Amsterdam</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Institute for Interdisciplinary Studies (IIS), University of Amsterdam</institution>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Master Information Studies, University of Amsterdam</institution>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Minor Programmeren, University of Amsterdam</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Paintings-from-polygons is an optimization problem in which the objective is to approximate a target bitmap by optimally arranging a fixed number of semi-opaque colored polygons. In a recent report, two optimization algorithms showed strong performance on the task, but the third, simulated annealing, failed miserably. The authors conjectured its poor performance to be due to the specific cooling schedule used. In this study, we use the same problem instances but parameterize simulated annealing with eight new cooling schedules known from the literature: Geman &amp; Geman with different c-parameter settings, geometric, linear, cosine, linear reheat, sigmoid, and staircase. We find that the previous conjecture was right: the performance of simulated annealing on this problem critically relies on its cooling schedule, and can be quite successful once parameterized properly. Moreover, we find a consistent 'performance hierarchy' in which most of the cooling schedules' performance appears related to their average temperatures only.</p>
      </abstract>
      <kwd-group>
        <kwd>Paintings-from-Polygons</kwd>
        <kwd>Computational creativity</kwd>
        <kwd>Evolutionary art</kwd>
        <kwd>Paintings</kwd>
        <kwd>Polygons</kwd>
        <kwd>Simulated annealing</kwd>
        <kwd>Cooling schedules</kwd>
        <kwd>NP-hard</kwd>
        <kwd>Optimization</kwd>
        <kwd>Heuristic algorithm</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        As a discipline, computational creativity has flourished in recent history. Subjects such as
casual creation, photo quality classification, music genre detection, evolutionary architecture and
visual imagery have gone through substantial development in the past decade [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ][
        <xref ref-type="bibr" rid="ref3">3</xref>
        ][
        <xref ref-type="bibr" rid="ref4">4</xref>
        ][5][6].
Many of these are in some way related to optimization algorithms such as hillclimbing, plant
propagation, simulated annealing or genetic algorithms, which have all proven successful in
both theoretical and practical settings [7][8][9][10].
      </p>
      <p>Being located on the intersection of computational creativity and algorithmic optimization,
the paintings-from-polygons (PFP) problem’s objective is to approximate a given target bitmap
by arranging a fixed number of semi-opaque colored polygons. On initialization, the polygon
constellation’s properties are assigned random values, but the numbers of polygons and vertices
are fixed (to 250 and 1000 in this study) throughout an experiment [12]. From there,
heuristic algorithms can change a polygon’s color, move one of its vertices, transfer a vertex from
one polygon to the next, or mutate the drawing index of the polygons, a parameter that
determines the rendering sequence when generating an output bitmap. By iteratively applying
these mutations to the polygon constellation and rendering afterwards, a target bitmap, usually
a painting, is approximated ever closer (Fig. 1). The objective value for each rendered bitmap
is the difference to the target bitmap, expressed as the mean squared error (MSE) over the color
channels:
MSE = ( ∑_{i=1}^{w⋅h⋅3} (c_i − t_i)² ) / (w ⋅ h)   (1)</p>
      <p>Here, w and h are the dimensions of the target bitmap in pixels, c_i is the value of a
red, green or blue (RGB) channel in the bitmap rendered from the polygon constellation, and
t_i is the value of the corresponding RGB-channel for the target bitmap. Since the
maximum difference between two RGB channels is 255, and there are three channels in one pixel,
the maximum MSE is 255² ⋅ 3 = 195075. The minimum MSE of zero is reached if the rendered
polygon constellation and the target bitmap are identical, a situation that is highly unlikely for
any realistic number of vertices. A simplified form of PFP was proven to be NP-hard in 2020
[13]. Although this technically speaking guarantees nothing about the full PFP-problem, it has
no known subexponential exact methods, and thereby provides a suitable testing ground for
heuristic algorithms. It also gives rise to some interesting questions about the complexity¹ of
visual imagery, and produces aesthetically pleasing results.</p>
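Equation 1 translates directly into a few lines of NumPy. The following is a minimal sketch of ours; the function and variable names are illustrative and not taken from the experiment's source code:

```python
import numpy as np

def mse(rendered: np.ndarray, target: np.ndarray) -> float:
    """Eq. 1: squared channel differences, summed over all w*h*3 channels,
    divided by the number of pixels w*h. Both arguments are h x w x 3 uint8
    bitmaps; the result lies in [0, 255^2 * 3] = [0, 195075]."""
    h, w, _ = target.shape
    # cast to a signed type first, so 0 - 255 does not wrap around in uint8
    diff = rendered.astype(np.int64) - target.astype(np.int64)
    return float((diff ** 2).sum() / (w * h))
```

For identical bitmaps this yields 0.0, and for an all-black bitmap against an all-white target it yields the maximum of 195075.0.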
      <p>There are 256^(3⋅(180⋅240)) ≈ 7.93 ⋅ 10^312,107 different possible bitmaps of 180 by 240 pixels, the
default size in the experiments. A maximum of 180 ⋅ 240 ⋅ 4 = 172,800 vertices is therefore
sufficient to exactly reproduce any given target bitmap of the aforementioned dimensions [12]. However,
Paauw and Van den Berg also give a lower bound of 39,328 vertices, the lowest number that
can give rise to more different constellations than the number of possible bitmaps of 180 by 240
pixels. These numbers are a long way beyond computational efficiency, and assuming the state
space is not completely convex, the problem could well be intractable, and heuristic algorithms,
such as simulated annealing, could provide a feasible way forward.</p>
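The size of this bitmap space is easy to verify with a back-of-the-envelope computation (an illustration of ours, not part of the original study):

```python
from math import log10

# log10 of the number of distinct 180x240 RGB bitmaps: 256 possible values
# for each of the 3 * 180 * 240 color channels.
exponent = 3 * 180 * 240 * log10(256)
mantissa = 10 ** (exponent % 1)
print(f"{mantissa:.2f}e{int(exponent)}")  # → 7.93e312107
```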
      <p>In their seminal study, Paauw and Van den Berg used a stochastic hillclimber, simulated
annealing, and plant propagation on paintings-from-polygons [12]. These three algorithms
all retained their best found objective values (or ‘fitness’) throughout the runs of 10^6 function
evaluations (Fig. 1, lower part). The hillclimber and plant propagation achieved relatively good
results, but the same could not be said for simulated annealing. In this study however, we’ll
take care of that.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Simulated Annealing</title>
      <p>Introduced by Kirkpatrick, Gelatt &amp; Vecchi in 1983 and independently by Černý in 1985, the
simulated annealing algorithm draws inspiration directly from statistical mechanics, the
central discipline of condensed matter physics [7][14][15]. In the physical annealing process, a
desirable low energy state of a metal is reached by optimizing the spatial configuration of the
atoms. To reach such a configuration, dislocated atoms need to move to vacant sites, which
can be facilitated by cooling the liquid metal slowly. Higher temperatures increase the atoms’
movement, but also increase the number of expected vacant sites due to the increase of mixing
entropy. Simulated annealing emulates this process for combinatorial optimization problems,
even up to the exact Boltzmann factor (Eq. 2) used for ‘movement’ through a combinatorial
state space. As such, it is one of the few heuristic algorithms in the field that is more than just
a metaphor; it truly pries at the barrier between physics and information science. Good
solutions to combinatorial problems such as timetabling, the wiring of electronic systems, the job
shop scheduling problem and the traveling salesman problem have been found earlier using
simulated annealing [14][7][16][17][18].
¹The term is intentionally loosely applied here; it could mean ‘Kolmogorov complexity’ or ‘RLE-complexity’, amongst others.</p>
      <p>For many heuristic algorithms, parameterization is “essential for good algorithm
performance” [19], and simulated annealing is no exception. It has proven quite sensitive to
parameter changes in an early study by Christopher Skiścim and Bruce Golden, who also illustrated
the difficulty of cooling down slowly on the one hand, but not too slowly on the other [20]. A
great number of cooling schedules have been tested throughout history, varying from linear
temperature decrease to schedules that also allow reheating [21]. The logarithmic schedule by
Geman &amp; Geman however, is the only one that has theoretically been proven to reach a global
optimum [22][23]. But the proof critically relies on the number of iterations going to infinity,
as aptly elaborated on by Kenneth Boese and Andrew Kahng: a ‘where-you-are’ schedule like
Geman &amp; Geman’s, which needs infinite computation time, is practically useless [21]. This
is the critical mistake Paauw and Van den Berg made on paintings-from-polygons, and even
though they did retain the ‘best-so-far’ constellations throughout the runs, their intermediate
and final results from simulated annealing were of abominable quality compared to their other
two algorithms [12]. According to the authors, its failure is “easily explained from its high
temperature", a claim that immediately warrants a follow-up investigation. In this study, we will
try nine different cooling schedules, see to what extent the authors were right, and whether
any improvement is possible.</p>
      <p>For now, we specifically do not address the other two algorithms from the earlier study, and
with good reason: PPA has been shown to be largely parameter insensitive and unbiased ([24][25][26],
ongoing work), and the stochastic hillclimber’s behaviour is so closely akin to simulated
annealing with Geman &amp; Geman at c = 1 that a separate quantitative parametrization analysis
seems somewhat superfluous. A future study of qualitative parameters however (i.e. the
algorithmic components) is still an open avenue waiting to be explored.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Simulated Annealing on Paintings-from-Polygons</title>
      <p>A simulated annealing run on the paintings-from-polygons problem starts with randomly
distributing v vertices over p polygons (v = 1000, p = 250 in this study). Each polygon is first
assigned three vertices, after which the remaining vertices are randomly distributed. Then, all
the vertices are assigned random coordinate values within the dimensions of the target bitmap.
Every polygon is randomly assigned color and opacity (eq.: ‘alpha’) values within the bounds
of its four RGBA-channels. Each iteration, a randomly selected mutation is performed on the
polygon constellation. There are four different mutation types, each of which has equal
probability of occurring:
1. The change color mutation randomly chooses one polygon and one of its RGBA
channels, and assigns it a new random value between 0 and 255 (inclusive).
2. The move vertex mutation randomly chooses one vertex from one polygon and assigns
it new random coordinates 0 ≤ x &lt; width and 0 ≤ y &lt; height.
3. The transfer vertex mutation randomly chooses two polygons, p1 and p2, in which p1
is not a triangle. A randomly selected vertex is then deleted from p1, and a new vertex
is placed somewhere on the line between two neighboring vertices in p2.
4. The change drawing index mutation randomly chooses a polygon and assigns its
drawing index a new random value 0 ≤ index &lt; p. If the new index is lower than the
current index, all indexes of the in-between polygons will be increased by one. If the new
index is higher than the current index, all indexes of the in-between polygons above the
new index will be decreased by one.</p>
      <p>After each iteration, the new constellation is rendered and the new MSE is calculated. If the
new constellation has a lower MSE (equiv.: a smaller pixel-by-pixel difference with the target
bitmap), the mutation is accepted and this new constellation will be the starting point for the
next iteration. Mutations that lead to a constellation with a higher MSE are not immediately
rejected though. Analogous to the annealing metaphor, they might still be accepted, depending
on the magnitude of the difference and the temperature. The acceptance probability is given
by the Boltzmann factor
P(accept) = e^(−ΔMSE / T_i)   (2)
in which ΔMSE is the difference in MSE between the new and the old constellation, and T_i
is the temperature at iteration i retrieved from the cooling schedule. Notice the correlation
between the MSE difference and the acceptance probability: a greater increase in MSE is still
acceptable as long as the temperature T_i is still high. Correct parameterization of the cooling
schedule is therefore of critical importance to both annealing and simulated annealing. For
PFP however, it appears that the average temperature alone predicts an upper bound on the
performance, irrespective of the actual trajectory a schedule might traverse.</p>
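The acceptance step described above can be sketched in a few lines of Python. This is a minimal illustration of ours; the function name and signature are not taken from the study's repository:

```python
import math
import random

def accept(mse_old: float, mse_new: float, t_i: float) -> bool:
    """Metropolis-style acceptance: always accept improvements; accept a
    deterioration with probability e^(-delta_MSE / T_i) (Eq. 2)."""
    delta = mse_new - mse_old
    if delta <= 0:      # lower (or equal) MSE: always accept
        return True
    if t_i <= 0:        # zero temperature degenerates to pure hillclimbing
        return False
    return random.random() < math.exp(-delta / t_i)
```

At a very high temperature T_i almost any deterioration is accepted; as T_i approaches zero, the behaviour approaches that of the stochastic hillclimber.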
    </sec>
    <sec id="sec-4">
      <title>4. Cooling Schedules</title>
      <p>The first three cooling schedules used in this study are of the Geman &amp; Geman type, in which
the temperature at iteration i is given by
T_i = c / log(i + 1)   (3)
In the seminal study, its parameter was set to c = 195075, as it is the maximal possible error
barrier that separates two local minima in this problem [12]. In this study, we replicate the
experiment with its original settings, but also add two new parameter settings for this schedule:
c = 50 and c = 1. The number of iterations n = 10^6, lowering the temperature from
T_1 ≈ 648025 to T_n ≈ 32512 for c = 195075. For c = 50, the temperature descends from
T_1 ≈ 166 to T_n ≈ 8, and for c = 1, it ranges from T_1 ≈ 3 to T_n ≈ 0.17 (Fig. 2).</p>
      <p>[Fig. 2 caption fragment: ... the higher its average temperature. Note that the vertical scale for the top row is much larger, due to the extremities in the Geman &amp; Geman schedule.]</p>
      <p>In the linear cooling schedule, the temperature T_i is defined as:
T_i = T_1 ⋅ (1 − i/n)   (4)
where in this study n = 10^6 and T_1 = 1000, resulting in a linear decrease of temperature
throughout the iterations (Fig. 2). The staircase cooling schedule also starts at T_1 = 1000, but
drops its temperature once every 10^5 iterations, resulting in a temperature of 0 for the final
10^5 iterations. The motivation for the staircase cooling schedule stems from a 2016 publication by
Strobl and Barker, who observed that temperatures should be kept constant for some duration
of time in order to reach thermal equilibrium [27].</p>
      <p>In the sigmoidal cooling schedule, temperature follows a sigmoidal decrease given by
T_i = 1000 / (1 + e^(10^−5 ⋅ (i − 5⋅10^5)))   (5)
This cooling schedule decreases slowly in temperature during the run, with its steepest decline,
its maximum absolute derivative, at 5 ⋅ 10^5 iterations. It comes from a website with cooling
schedules once maintained by Brian T. Luke. It appears to have gone offline², but the schedule
(and the author) can still be found in a set of public slides at Stanford University [29]. The
geometric cooling schedule has a history of giving “satisfactory results” on a wide variety of
optimization problems [30]. Its temperature   is defined as:</p>
      <p>T_{i+1} = α ⋅ T_i
where in this study, α is fixed at 0.99999, with T_1 = 1000.</p>
      <p>In a study on best-so-far solutions by Boese and Kahng, optimal cooling schedules were
shown to be non-monotonically decreasing [21]. As the simulated annealing algorithm used
in this PFP study also retains its best-so-far value, periodical reheat might provide an alluring
alternative to more traditional schedules. The linear reheat cooling schedule linearly lowers its
temperature to zero in epochs of 10^5 iterations, but then reheats to half the previous epoch’s
temperature, as can be seen in figure 2. This results in nine reheating cycles, with their
respective peaks being T_i = {1000, 500, ..., 3.90625} at i = {0, 10^5, ..., 9 ⋅ 10^5}. The cosine cooling
schedule follows a simple cosine function, defined as
T_i = 500 ⋅ cos(i / 16753) + 500
which starts at T_1 = 1000 and ends at T_n = 0.</p>
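For reference, most of the schedules above can be sketched as follows. This is our reconstruction: all function names are ours, and we assume a base-10 logarithm in the Geman &amp; Geman schedule, since that choice reproduces the quoted temperature ranges; the staircase schedule is omitted because its exact step heights are not specified here.

```python
import math

N = 10**6      # iterations per run
T1 = 1000.0    # starting temperature for the non-Geman schedules

def geman(i: int, c: float) -> float:
    """Geman & Geman-type schedule; iterations start at i = 1.
    Base-10 log reproduces the quoted ranges (e.g. c=1: T_1≈3, T_n≈0.17)."""
    return c / math.log10(i + 1)

def linear(i: int) -> float:
    return T1 * (1 - i / N)                 # Eq. 4

def geometric(i: int, alpha: float = 0.99999) -> float:
    return T1 * alpha ** (i - 1)            # closed form of T_{i+1} = alpha * T_i

def sigmoid(i: int) -> float:
    return T1 / (1 + math.exp(1e-5 * (i - 5 * 10**5)))   # Eq. 5

def cosine(i: int) -> float:
    return 500 * math.cos(i / 16753) + 500  # 9.5 cycles, from 1000 down to 0

def linear_reheat(i: int, epoch: int = 10**5) -> float:
    peak = T1 / 2 ** (i // epoch)           # each epoch reheats to half the previous peak
    return peak * (1 - (i % epoch) / epoch)
```

A quick check against the quoted values: geman(1, 195075) ≈ 648025, geman(10**6, 195075) ≈ 32512, and sigmoid reaches its midpoint of 500 at i = 5 ⋅ 10^5.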
    </sec>
    <sec id="sec-5">
      <title>5. Experiments and Results</title>
      <p>The fraction i/16753 ensures 9.5 oscillation cycles in the 10^6 iterations (notice that 16753 ≈ 10^6 / (2π ⋅ 9.5)), and the constant term +500 at the end of the equation ensures that the cosine function starts at T_1 = 1000 and ends at T_n = 0.</p>
      <p>The performance of the different cooling schedules is evaluated by performing five runs of
simulated annealing with each cooling schedule on each of the nine different 180x240 target
bitmaps: “Salvator Mundi”, “Lady with an Ermine” and “Mona Lisa” painted by Leonardo Da
Vinci between 1490 and 1503, “The Starry Night” by Van Gogh in 1889, Portrait of composer
Johann Sebastian Bach by Elias Gottlieb Haussmann in 1746, “The Kiss” by Gustav Klimt in 1908,
“Composition with Red, Blue and Yellow” by Mondriaan in 1930, “The Persistence of Memory”
by Dali in 1931, and “Convergence” by Pollock in 1952. Each run consisted of 10^6 iterations³,
with an unchanging total of 250 polygons and 1000 vertices (the experiment’s source code and
data are publicly accessible [32]). For every combination of cooling schedule and target bitmap,
the normalized average performance is calculated as the mean final MSE of the five runs</p>
      <sec id="sec-5-1">
        <p>²BTL’s cooling schedules can still be found through the waybackmachine [28].</p>
        <p>³“[Function evaluations are the gold standard for measuring the performance of heuristic algorithms]” [31], but
in our case, the number of iterations corresponds exactly to the number of evaluations, which is typical for simulated
annealing.</p>
        <p>in which m is the average MSE of the five runs after 10^6 iterations, m_min the lowest MSE of the
five, and m_max the highest. Notice that a low normalized MSE corresponds to a high
performance.</p>
        <p>After completing all 405 runs, a firm pattern in the performance of the cooling schedules
emerges (Fig. 3). The cooling schedule of Geman &amp; Geman with c = 1 performs best on all
paintings, immediately followed by the geometric, linear reheat and Geman &amp; Geman with
c = 50. These top four best performing cooling schedules form a group with relatively similar
performance when compared to the other cooling schedules. Fifth, sixth and seventh are the
sigmoidal, linear and cosine cooling schedules, followed by the poor performance of the
staircase. Very much in last place is the high-c Geman &amp; Geman cooling schedule used in
the study of Paauw and Van den Berg [12]. It is by far the worst possible choice out of these
nine.</p>
        <p>The performance hierarchy of the cooling schedules can be characterized more rigorously
than by its final MSE order alone. It turns out that seven of the nine cooling schedules’ average log
temperatures and final MSEs are bound by a linear equation⁴ for each painting, with the error
of fit between 0.11 and 0.04 on all paintings. Two schedules however, are different. These are
the linear reheat schedule, with an error of fit 6 to 11 times greater, and the geometric cooling
schedule with an error of fit 6 to 44 times greater, depending on the specific painting. What is
remarkable, however, is that these errors are larger in the downward direction.</p>
        <p>In other words, when the final MSE and the temperature are compared, all schedules show
roughly the same performance, independent of their actual shape. The only two exceptions
are the linear reheat and geometric cooling schedules, which perform better than other
schedules with the same average temperature. A future experiment, with all cooling schedules
normalized to the same average temperature, should either confirm or disconfirm whether these</p>
      </sec>
      <sec id="sec-5-2">
        <p>⁴Polynomial equations on nonlog data yield comparable results.</p>
        <p>differences are substantial.</p>
        <p>Another remarkable phenomenon can be witnessed in the structural development of the
polygon constellations of this experiment. After random initialization, a typical constellation5
holds about 111 triangles, 71 quadrangles, 40 pentagons, 18 hexagons and smaller numbers of
7-gons to 12-gons. But a typical end constellation has up to 28% fewer quadrangles, penta-
and hexagons, and very small numbers of larger polygons, up to as many as 35 vertices. Most
striking however, is that the number of triangles increases by about 16%, to roughly 128. It
is unknown what causes this increase, but it is thought that triangles ‘facilitate’ optimization
through their relative mobility in vertex position. This however, still requires further
confirmation.</p>
        <p>A final interesting phenomenon is that the three best performing cooling schedules appear
to exert an influence on the relation between a polygon’s opacity and its drawing index. For
the three best performing cooling schedules, a higher polygon drawing index corresponds to a
lower opacity (Fig. 4). When fitting a parabola, its steepness increases 3 to 24 times for the best
three cooling schedules, suggesting that successful optimization runs produce more transparent
polygons on top. Again, further analysis could (dis)confirm the consistency of this effect.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>It appears that Paauw and Van den Berg were right in their assessment that the poor
performance of their simulated annealing was due to the high temperatures used at the time. As
shown in this study, the success of the algorithm depends largely on the average temperature
of the schedule only, a few interesting exceptions aside. Furthermore, successful runs seem to
increase the number of triangles in a constellation, and decrease the opacity of its lastly drawn
polygons. To what extent these properties are more typical to the problem or to simulated
annealing itself still remains a subject for future endeavours.</p>
      <p>⁵Here, ’a typical constellation’ means: values averaged over all constellations in the experiment.</p>
      <p>[5] A. Neumann, B. Alexander, F. Neumann, Evolutionary image transition and painting
using random walks, Evolutionary Computation (2020) 1–34.
[6] J. Berg, N. G. A. Berggren, S. A. Borgeteien, C. R. A. Jahren, A. Sajid, S. Nichele, Evolved art
with transparent, overlapping, and geometric shapes, in: Symposium of the Norwegian
AI Society, Springer, 2019, pp. 3–15.
[7] S. Kirkpatrick, C. D. Gelatt, M. Vecchi, Optimization by simulated annealing., Science
220(4598) (1983) 671–680. doi:10.1126/science.220.4598.671.
[8] J. Sleegers, D. van den Berg, Plant propagation &amp; hard hamiltonian graphs, Evo* 2020
(2020) 10.
[9] M. Sulaiman, A. Salhi, A. Khan, S. Muhammad, W. Khan, On the theoretical analysis of
the plant propagation algorithms, Mathematical Problems in Engineering 2018 (2018).
[10] A. E. Eiben, J. E. Smith, Introduction to Evolutionary Computing, first ed., Springer, Berlin,
2003. doi:10.1007/978-3-662-05094-1.
[11] JSB, Paintings-from-polygons movie clip, 2020. https://www.youtube.com/watch?v=
u91gRGY8ElQ (Last accessed on May 23rd, 2020).
[12] M. Paauw, D. Van den Berg, Paintings, polygons and plant propagation, in:
International Conference on Computational Intelligence in Music, Sound, Art and Design (Part
of EvoStar), Springer, 2019, pp. 84–97.
[13] D. van den Berg, Simplified paintings-from-polygons is np-hard, Evo* 2020 (2020) 15.
[14] V. Černý, Thermodynamical approach to the traveling salesman problem: An efficient
simulation algorithm, Journal of optimization theory and applications 45 (1985) 41–51.
[15] D. S. Johnson, C. R. Aragon, L. A. McGeoch, C. Schevon, Optimization by simulated
annealing: An experimental evaluation; part i, graph partitioning, Operations research
37 (1989) 865–892.
[16] D. Abramson, M. Krishnamoorthy, H. Dang, et al., Simulated annealing cooling schedules
for the school timetabling problem, Asia Pacific Journal of Operational Research 16 (1999)
1–22.
[17] P. J. Van Laarhoven, E. H. Aarts, J. K. Lenstra, Job shop scheduling by simulated annealing,</p>
      <p>Operations research 40 (1992) 113–125.
[18] R. A. Rutenbar, Simulated annealing algorithms: An overview, IEEE Circuits and Devices
magazine 5 (1989) 19–26.
[19] S. K. Smit, A. E. Eiben, Comparing parameter tuning methods for evolutionary algorithms,
in: 2009 IEEE congress on evolutionary computation, IEEE, 2009, pp. 399–406.
[20] C. C. Skiścim, B. L. Golden, Optimization by simulated annealing: A preliminary
computational study for the tsp, in: Proceedings of the 15th conference on Winter
SimulationVolume 2, IEEE Press, 1983, pp. 523–535.
[21] K. D. Boese, A. B. Kahng, Best-so-far vs. where-you-are: implications for optimal
finitetime annealing, Systems &amp; control letters 22 (1994) 71–78.
[22] S. Geman, D. Geman, Stochastic relaxation, gibbs distributions, and the bayesian
restoration of images, IEEE Transactions on pattern analysis and machine intelligence (1984)
721–741.
[23] Y. Nourani, B. Andresen, A comparison of simulated annealing cooling strategies., Journal
of Physics A: Mathematical and General 31 (1998) 8373–8385. doi:10.1088/0305-4470/
31/41/011.
[24] M. de Jonge, D. van den Berg, Plant propagation parameterization: Ofspring &amp; population
size, Evo* 2020 (2020) 19.
[25] M. de Jonge, D. van den Berg, Parameter sensitivity patterns in the plant propagation
algorithm, IJCCI 2020: Proceedings of the 12th International Joint Conference on
Computational Intelligence (2020).
[26] W. Vrielink, D. van den Berg, Fireworks algorithm versus plant propagation algorithm.,
in: IJCCI, 2019, pp. 101–112.
[27] M. A. Strobl, D. Barker, On simulated annealing phase transitions in phylogeny
reconstruction, Molecular phylogenetics and evolution 101 (2016) 46–55.
[28] B. T. Luke, [via wayback machine] simulated annealing cooling schedules, 2019. http:
//web.archive.org/web/20190620071317/http://www.btluke.com/simanf1.html (Last
accessed on May 23rd, 2020).
[29] S. University, Stats 318: Lecture, 2020. https://statweb.stanford.edu/~candes/teaching/
stats318/Handouts/Annealing.pdf (Last accessed on May 23rd, 2020).
[30] P. N. Strenski, S. Kirkpatrick, Analysis of finite length annealing schedules, Algorithmica
6 (1991) 346–366.
[31] T. Bartz-Beielstein, C. Doerr, J. Bossek, S. Chandrasekaran, T. Eftimov, A. Fischbach,
P. Kerschke, M. Lopez-Ibanez, K. M. Malan, J. H. Moore, et al., Benchmarking in
optimization: Best practice and open issues, arXiv preprint arXiv:2007.03488 (2020).
[32] R. Dahmani, Reddy’s repo, 2020. https://github.com/RedRedouane/
Paintings-from-Polygons-Simulated-Annealing (Last accessed on May 23rd, 2020).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Colton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>McCormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Berns</surname>
          </string-name>
          , E. Petrovskaya,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cook</surname>
          </string-name>
          ,
          <article-title>Adapting and enhancing evolutionary art for casual creation</article-title>
          , in: International Conference on Computational Intelligence in Music, Sound,
          <source>Art and Design (Part of EvoStar)</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>17</fpage>
          -
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Banal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ciesielski</surname>
          </string-name>
          ,
          <article-title>A deep learning neural network for classifying good and bad photos</article-title>
          , in: International Conference on Computational Intelligence in Music, Sound,
          <source>Art and Design (Part of EvoStar)</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F.</given-names>
            <surname>Heerde</surname>
          </string-name>
          , I. Vatolkin, G. Rudolph,
          <article-title>Comparing fuzzy rule based approaches for music genre classification</article-title>
          , in: International Conference on Computational Intelligence in Music, Sound,
          <source>Art and Design (Part of EvoStar)</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>35</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Muehlbauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Burry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <article-title>An aesthetic-based fitness measure and a framework for guidance of evolutionary design in architecture</article-title>
          , in: International Conference on Computational Intelligence in Music, Sound,
          <source>Art and Design (Part of EvoStar)</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>134</fpage>
          -
          <lpage>149</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>