<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Towards an FCA-based Recommender System for Black-Box Optimization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Josefine Asmus</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Borchmann</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivo F. Sbalzarini</string-name>
<email>ivos@mpi-cbg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dirk Walther</string-name>
<email>Dirk.Walther@tu-dresden.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Advancing Electronics Dresden (cfAED)</institution>
          ,
          <addr-line>Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>MOSAIC Group, Center for Systems Biology Dresden (CSBD), Max Planck Institute of Molecular Cell Biology and Genetics</institution>
          ,
          <addr-line>Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>TU Dresden, Theoretical Computer Science</institution>
          ,
          <addr-line>Dresden</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Black-box optimization problems are of practical importance throughout science and engineering. Hundreds of algorithms and heuristics have been developed to solve them. However, none of them outperforms any other on all problems. The success of a particular heuristic is always relative to a class of problems. So far, these problem classes are elusive and it is not known what algorithm to use on a given problem. Here we describe the use of Formal Concept Analysis (FCA) to extract implications about problem classes and algorithm performance from databases of empirical benchmarks. We explain the idea in a small example and show that FCA produces meaningful implications. We further outline the use of attribute exploration to identify problem features that predict algorithm performance.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>1 Introduction</title>
      <p>
        Optimization problems are ubiquitous in science and engineering, including
optimizing the parameters of a model [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], finding an optimal statistical estimator,
or finding the best operating conditions for an electronic circuit or a biochemical
network. An optimization problem is defined by a parameter space spanned by
the variables to be optimized, and an objective function defined over that space.
The goal is to find points in the parameter space where the objective function
has an extremum. In a black-box problem, the function is not known, but it
can be evaluated point-wise. The objective function is hence given as an oracle,
which, given a point in parameter space as an input, returns the objective
function value at that point. Most practical applications in science and engineering
are black-box problems, where the objective function may comprise running a
numerical computer simulation or performing a laboratory measurement in order
to, e.g., determine how well an electronic circuit works.
      </p>
      <p>
        Owing to their practical importance, hundreds of algorithms and heuristics
have been developed to (approximately) solve black-box optimization problems.
The “No Free Lunch Theorem” [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] states that no algorithm performs better (i.e.,
finds the extremum in fewer iterations or finds a better extremum) than any other
algorithm on all problems. Algorithm performance is hence relative to a problem
or a class of problems. It is, however, unknown what these problem classes are,
and how one should choose a “good” algorithm given a certain problem instance.
      </p>
      <p>
        We aim at developing a recommender system for proposing efficient
algorithms for a given black-box optimization problem. Formal Concept Analysis
(FCA) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] has previously been shown useful in recommender systems for rating
systems and advertisement [
        <xref ref-type="bibr" rid="ref4 ref8">4, 8</xref>
        ]. Here we present the idea of using attribute
exploration in FCA to find discriminative attributes of empirical benchmark data
of different algorithms tested on different problems. We show that FCA produces
meaningful implications about problem features and algorithms that imply good
performance. Some of the implications found confirm hypotheses that are
commonly known in the optimization community, others are novel. We also outline
the use of attribute exploration to discover sets of problem features that are
particularly predictive for algorithm performance. This is work in progress and
we are not presenting a final solution. We do, however, believe that FCA will
provide a powerful tool on the way towards a generic recommender system and
problem classification for black-box optimization.
      </p>
    </sec>
    <sec id="sec-2">
<title>2 Black-box Optimization Problems</title>
      <p>We consider real-valued scalar black-box problems, modeled as an oracle function
f : R^d → R mapping a d-tuple x of real numbers (problem parameters) to a
scalar real number f(x) (objective function value). The function f is not assumed
to be explicitly known. A black-box optimization problem entails finding extremal
points of f using only point-wise function evaluations. The iterative sequence of
evaluations used to progress toward an extremum is called the search strategy.</p>
      <p>
        The Black-Box Optimization Benchmark (BBOB) test suite is a standard
collection of test functions used to empirically benchmark different optimization
heuristics [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The suite contains 24 benchmark functions with known ground
truth for parameter spaces of different dimensions. Figure 1 plots four example
functions in two dimensions.
      </p>
      <p>Fig. 1. Four example BBOB functions in two dimensions: (a) Sphere, (b) Rastrigin separable, (c) Weierstrass, (d) Katsuura.</p>
      <p>A problem instance is a triple (f, d, ε), where f is an instance of a black-box
function, d &gt; 0 is the parameter space dimension, and ε ∈ R≥0 a tolerance.
Solving a problem instance (f, d, ε) means finding a d-tuple x ∈ R^d such that
|f_min − f(x)| &lt; ε, where f_min is the global minimum of f. Note that the
ground-truth global minimum f_min is known for the benchmark functions, but not in a
practical problem instance.</p>
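      <p>As a minimal illustration of these definitions, the following sketch models a problem instance with a hypothetical Sphere oracle and a naive random search; the oracle, the search loop, and all parameter values are illustrative assumptions, not part of the BBOB setup.</p>
      <preformat>
# Minimal sketch of a problem instance (f, d, eps): the optimizer may only
# evaluate the oracle point-wise. Sphere stands in for the unknown objective;
# all names and parameter values here are illustrative assumptions.
import random

def sphere(x):                 # oracle: returns f(x) and nothing else
    return sum(xi * xi for xi in x)

def solved(f, x, f_min, eps):
    """(f, d, eps) is solved by x iff |f_min - f(x)| &lt; eps."""
    return abs(f_min - f(x)) &lt; eps

# the simplest possible search strategy: pure random search
d, eps, f_min = 2, 0.1, 0.0    # f_min is known only for benchmark functions
best = min((tuple(random.uniform(-5, 5) for _ in range(d))
            for _ in range(10000)), key=sphere)
print(solved(sphere, best, f_min, eps))   # True for most runs at eps = 0.1
      </preformat>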
    </sec>
    <sec id="sec-3">
<title>3 An FCA-based Recommender System</title>
      <p>
We propose a system for recommending suitable algorithms for a given black-box
optimization problem based on FCA. Similar recommender systems, albeit
not based on FCA, have been suggested [
        <xref ref-type="bibr" rid="ref15 ref3 ref9">3, 9, 15</xref>
        ], and FCA has been applied
to collaborative recommender systems for high-dimensional ratings [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] and in
contextual advertising [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. To the best of our knowledge, the use of FCA in a
recommender system for black-box optimization has not been attempted so far.
      </p>
      <sec id="sec-3-1">
        <title>3.1 Performance Measures</title>
        <p>
          A run of an algorithm solving a problem instance is called successful. The
performance of an algorithm a on a problem instance p is measured by the expected
running time (ERT) [
          <xref ref-type="bibr" rid="ref1 ref3">1, 3</xref>
          ]:
        </p>
        <p>ERT(a, p) = N⁺(a, p) + ((1 − ρ⁺(a, p)) / ρ⁺(a, p)) · N⁻(a, p),
where N⁺(a, p) and N⁻(a, p) are the average numbers of function evaluations
for successful and unsuccessful runs of a on p, respectively, and ρ⁺(a, p) is the
ratio of the number of successful runs over the number of all runs of a on p.</p>
        <p>The problem instances (f_j, d, ε), where j ∈ {1, …, 24}, d ∈ {2, 3, 5, 10, 20, 40},
and ε ∈ E = {10³, 10¹, 10⁻¹, 10⁻³, 10⁻⁵, 10⁻⁸}, have been used to test over
112 algorithms; the dataset is available online at http://coco.gforge.inria.fr.
Note that the algorithms treat the benchmark functions as black boxes, i.e.,
they only evaluate the functions, whereas the actual function remains hidden
from the algorithm. The known global minimum is only used to evaluate the
performance of the algorithms.</p>
        <p>Often, algorithms cannot solve problem instances for every tolerance value
within a certain number of function evaluations; see Table 1. To obtain a
performance measure also in these cases, we introduce a penalty of 10⁶ for every
tolerance level that the algorithm could not solve. This penalized ERT (pERT)
of an algorithm a on an instance f of a benchmark function in d dimensions for
a set E of tolerances is here defined as:</p>
        <p>pERT(a, f, d, E) = ERT(a, (f, d, ε_best)) / d + 10⁶ · |{ε ∈ E | ERT(a, (f, d, ε)) = ∞}|,
where ε_best is the smallest value in E such that ERT(a, (f, d, ε)) ≠ ∞.</p>
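        <p>A direct transcription of the two performance measures, together with the binning of pERT values into the binary performance attributes Pi used in Sec. 3.3, can be sketched as follows; the run statistics are hypothetical and infinite ERT encodes an unsolved tolerance level. This is a sketch, not the reference BBOB post-processing.</p>
        <preformat>
# Sketch of ERT and pERT as defined above, plus the binning of pERT values
# into 10 intervals that yields the binary attributes Pi of Sec. 3.3.
# All run statistics are hypothetical; math.inf marks an unsolved tolerance.
import math

def ert(n_succ, n_fail, rho):
    """ERT(a, p) = N+ + ((1 - rho+) / rho+) * N-; infinite if rho+ = 0."""
    if rho == 0.0:
        return math.inf
    return n_succ + (1.0 - rho) / rho * n_fail

def pert(ert_by_tol, d, penalty=1e6):
    """pERT = ERT at the smallest solved tolerance, divided by d, plus a
    penalty for every tolerance level in E that was never solved."""
    unsolved = sum(1 for e in ert_by_tol.values() if math.isinf(e))
    solved_tols = [t for t, e in ert_by_tol.items() if not math.isinf(e)]
    if not solved_tols:                      # nothing solved at all
        return penalty * unsolved
    eps_best = min(solved_tols)              # smallest tolerance reached
    return ert_by_tol[eps_best] / d + penalty * unsolved

def performance_attribute(p, p_min, p_max, n_bins=10):
    """Map a pERT value to one of the attribute names P0, ..., P9."""
    i = int((p - p_min) / (p_max - p_min) * n_bins)
    return "P" + str(min(i, n_bins - 1))

# hypothetical numbers: ERT per tolerance level for one algorithm, d = 10
E = {1e3: 300.0, 1e1: 900.0, 1e-1: math.inf, 1e-3: math.inf,
     1e-5: math.inf, 1e-8: math.inf}
print(pert(E, d=10))                         # 90.0 + 4e6 penalties
        </preformat>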
      </sec>
      <sec id="sec-3-2">
        <title>3.2 Benchmark Data Used</title>
        <p>
          Table 1 shows the performance values of four algorithms on the benchmark
functions Sphere and Rastrigin separable in 10 dimensions. The algorithm
(1+1)-CMA-ES implements an evolution strategy with covariance matrix
adaptation [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]. We also include two variants of Particle Swarm Optimization (PSO):
standard PSO [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] and PSO-BFGS, a hybrid of PSO and the
Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. The fourth algorithm is the classical
Simplex algorithm [
          <xref ref-type="bibr" rid="ref16 ref17">16, 17</xref>
          ]. Each algorithm can solve a problem instance for the
Sphere function to any value of the tolerance, whereas on the separable Rastrigin
function this is the case only for PSO, albeit with higher ERT. The last column
shows the pERT values, which account for any failed tolerance level with a penalty.
        </p>
        <p>Table 1. ERT per tolerance level and pERT of the four algorithms on the Sphere and separable Rastrigin functions in 10 dimensions.</p>
        <p>
          Functions can be characterized by features. Several high-level features have
been hypothesized to predict black-box optimization algorithm performance,
such as multi-modality, global structure, and separability [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. Table 2 lists the
extent to which different BBOB test functions exhibit these features.
        </p>
        <p>Table 2. High-level features (multi-modality, global structure, separability) of the BBOB test functions.</p>
        <p>Multi-modality refers to the number of local minima of a function.
Global structure refers to the structure that emerges when only considering
minima (in a minimization problem), i.e., the point set left after deleting all
non-optimal points. Separability specifies whether a function is a Cartesian product
of one-dimensional functions. For instance, the Sphere function (Fig. 1(a)) is
unimodal and, therefore, has no global structure (only one point remains when
restricting to minima) and is highly separable. The separable Rastrigin function
(cf. Fig. 1(b)) is separable, highly multimodal, and has a strong global structure
(the set of local minima forms a convex funnel).</p>
      </sec>
      <sec id="sec-3-3">
<title>3.3 Results</title>
        <p>
          Different numerical measures have been suggested to estimate the features of a
black-box function from sample evaluations. These include fitness distance
correlations [
          <xref ref-type="bibr" rid="ref13">13</xref>
          ] and ICoFiS, a method based on information content [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ].
Furthermore, in [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ] it is shown that in several cases it is possible to predict the high-level
features of Table 2 from low-level features that can be computed from samples
only. Based on these observations, we develop an FCA-based recommender
system that suggests well-performing algorithms for a given black-box optimization
problem by extracting implications from the BBOB database. The recommender
system consists of three steps; cf. Algorithm 3.1. In Step 1, features of the
black-box function that is given as an input are estimated by drawing samples from
the function. In Step 2, these features are used to determine the features of
algorithms that perform well on such functions. In Step 3, a specific algorithm is
chosen that has these algorithm features.
        </p>
        <p>Algorithm 3.1 Recommender System
Input: unknown black-box function from which samples can be drawn
Output: algorithm that is suitable for optimizing the input function
Step 1: estimate features of the input function
Step 2: infer features of algorithms that do well on functions with such features
Step 3: pick an algorithm that has these features</p>
        <p>Determining good algorithm features
(Step 2) is done using implications that form the background knowledge of the
recommender system. The premise of an implication contains algorithm features
together with features of benchmark functions, whereas the conclusion consists
of attributes corresponding to the performance of the algorithms when solving
the functions. The implications are obtained from the BBOB database by FCA.
More precisely, we build a formal context K = (G, M, I), where G is the set of
objects consisting of all pairs of benchmark functions and algorithms, M is the
set of attributes consisting of the features of algorithms and benchmark functions
together with the attributes assigned to performance values, and I is the
incidence relation between the objects in G and the attributes in M. We obtain the
implications from the context K by computing the canonical base (or another
implicational base) of K.</p>
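        <p>The encoding of K and the validity condition for an implication can be sketched directly; the excerpt below is a hypothetical three-object fragment, not the full Table 3, and computing the canonical base itself would require an FCA library or the closure-based algorithms of [6].</p>
        <preformat>
# Sketch of the formal context K = (G, M, I): objects are (function, algorithm)
# pairs, each mapped to its attribute set. The rows are a hypothetical excerpt,
# not the full Table 3. An implication A → B is valid in K iff every object
# whose attribute set contains A also contains B.
context = {
    ("Sphere", "PSO"):         {"PSO", "sep high", "globstruct none", "P0"},
    ("Rastrigin sep.", "PSO"): {"PSO", "sep high", "multimod high",
                                "globstruct strong", "P0"},
    ("Weierstrass", "PSO"):    {"PSO", "sep none", "multimod high", "P6"},
}

def is_valid(context, premise, conclusion):
    return all(conclusion.issubset(attrs)
               for attrs in context.values() if premise.issubset(attrs))

print(is_valid(context, {"PSO", "sep high"}, {"P0"}))   # True in this excerpt
        </preformat>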
        <p>We illustrate our approach in Table 3. The objects are pairs of benchmark
functions and algorithms, as introduced in Sec. 3.2. The attributes are features
of algorithms, performance attributes Pi, for i ∈ {0, 5, 6, 7} (to obtain a binary
table, we partition the range of pERT values into 10 intervals and introduce
attribute names Pi for the intervals), and the function features (Table 2). As
algorithm features we use the respective names of the algorithms together with
the features deterministic and hybrid (stating that the algorithm is a combination
of different search strategies). We consider Table 3 as a formal context and
compute its canonical base to obtain the implications:</p>
        <p>Table 3. Formal context of benchmark-function/algorithm pairs with their algorithm features, function features, and performance attributes.</p>
        <p>{ (1+1)-CMA-ES, multimod high } → { P6 }
{ PSO, sep high } → { P0 }
{ PSO, sep none, multimod high } → { P6 }
{ PSO-BFGS, sep high, globstruct strong, multimod high } → { P6 }
{ sep high, globstruct none } → { P0 }
{ sep none, multimod high, globstruct none, Alg hybrid } → { P5 }
{ globstruct med } → { P6 }</p>
        <p>
          Based on this background knowledge, the recommender system would, e.g.,
suggest to use the algorithm PSO for functions with high separability. If, additionally,
the function does not exhibit a global structure (e.g., the function is unimodal),
then any algorithm would do well. For functions with high multi-modality, no
separability, and no global structure, the system would recommend a hybrid
algorithm over (1+1)-CMA-ES, confirming previous observations [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
        </p>
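        <p>A sketch of how Steps 2 and 3 could use such a knowledge base: fire every implication whose premise is covered by the estimated function features joined with a candidate algorithm's own attributes, and keep the algorithms for which a good performance attribute (here assumed to be P0, the best pERT bin) is implied. The implication list and algorithm profiles below are the hypothetical ones from above.</p>
        <preformat>
# Sketch of Steps 2 and 3: match implications against the estimated function
# features joined with each algorithm's attributes. We assume P0 denotes the
# best pERT bin; a full system would iterate the implications to a closure.
IMPLICATIONS = [
    ({"PSO", "sep high"}, {"P0"}),
    ({"sep high", "globstruct none"}, {"P0"}),
    ({"sep none", "multimod high", "globstruct none", "Alg hybrid"}, {"P5"}),
]

def recommend(function_features, algorithm_profiles, target="P0"):
    good = []
    for name, profile in algorithm_profiles.items():
        facts = function_features | profile   # premises can mix both kinds
        implied = set()
        for premise, conclusion in IMPLICATIONS:
            if premise.issubset(facts):
                implied |= conclusion
        if target in implied:
            good.append(name)
    return good

profiles = {"PSO": {"PSO"}, "PSO-BFGS": {"PSO-BFGS", "Alg hybrid"}}
print(recommend({"sep high", "globstruct none"}, profiles))
# ['PSO', 'PSO-BFGS']: with high separability and no global structure,
# both qualify, matching the discussion above.
        </preformat>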
        <p>We note that the quality of the recommendations made by the system
depends on the “quality” of the implications constituting the background
knowledge. To obtain better recommendations (Step 3), suitable features of the input
function need to be efficiently computable (Step 1) and the implications need to
form a basis of a sufficient benchmark set (Step 2).</p>
      </sec>
    </sec>
    <sec id="sec-4">
<title>4 Discovering Function Features</title>
      <p>
        It is not clear that the function features currently used in the optimization
community define meaningful problem classes with respect to expected algorithm
performance. It is an open and non-trivial problem to discover new features for
this purpose. We sketch a method that could assist an expert in finding new
features using attribute exploration from FCA [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. This goal-oriented approach
may be more practical than an undirected synthesis of new features.
      </p>
      <p>In attribute exploration, we start from an initial formal context K and a set
S of valid implications of K. Then, we compute new implications A → B such
that A → B is valid in K, but does not follow from S, where A and B are sets
of attributes. In case such an implication can be found, a (human) expert has to
confirm or reject A → B. In the former case, A → B is added to S. If the expert
rejects A → B, s/he has to provide a counterexample that is then added to K.
If no new implications can be computed anymore, the search terminates.</p>
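      <p>In code, this loop can be sketched as below; next_undecided_implication is a hypothetical helper standing in for the closure computations of [6] that propose a valid implication of K not entailed by S, and the expert is modeled as a callback.</p>
      <preformat>
# Sketch of the attribute exploration loop. The helper
# next_undecided_implication(context, S) is hypothetical: it must return a
# valid implication of the context that does not follow from S, or None.
def attribute_exploration(context, S, expert, next_undecided_implication):
    S = set(S)
    while True:
        imp = next_undecided_implication(context, S)
        if imp is None:
            return S                      # no new implications: terminate
        accepted, counterexample = expert(imp)
        if accepted:
            S.add(imp)                    # expert confirms the implication
        else:
            obj, attrs = counterexample   # expert rejects and provides a
            context[obj] = attrs          # counterexample, added to K
      </preformat>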
      <p>Consider the context K_features = (G, M, I), where G is the set of benchmark
functions, M the set of function features, and the incidence relation I represents
the fact that a function has a certain feature. Applying attribute exploration
directly to K_features would not work, as we are seeking new attributes (features),
and not new objects (benchmark functions). Since objects and attributes
behave symmetrically in every formal context, however, we can apply attribute
exploration to the dual context K_features⁻¹ = (M, G, I⁻¹).</p>
      <p>Implications from K_features⁻¹ are of the form { f_{i_1}, …, f_{i_j} } → { f_{i_{j+1}} }, f_{i_k} ∈ G.
Presenting such implications to an expert means asking whether all features that
f_{i_1}, …, f_{i_j} have in common are also features of the function f_{i_{j+1}}. A counterexample
would then correspond to a new feature that f_{i_1}, …, f_{i_j} have in common, but
that is not shared by f_{i_{j+1}}. These are the features we are looking for.</p>
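      <p>Under the dict-of-sets encoding used in the earlier sketches, passing to the dual context amounts to transposing the incidence relation; a minimal sketch:</p>
      <preformat>
# Dual context (M, G, I^-1): transpose the incidence relation so that the
# features become the objects and the functions become the attributes.
def dual(context):
    transposed = {}
    for obj, attrs in context.items():
        for attr in attrs:
            transposed.setdefault(attr, set()).add(obj)
    return transposed
      </preformat>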
      <p>To illustrate the method, suppose that the context K_features contains the
same functions as in Table 3, but the feature multi-modality as the only
attribute. Then, using our approach, the expert would need to judge the
implication { Rastrigin } → { Weierstrass }, and s/he could reject the implication by
providing a “new” feature global structure strong, which the separable Rastrigin
function has, but the Weierstrass function has not (counterexample).</p>
    </sec>
    <sec id="sec-5">
<title>5 Conclusions</title>
      <p>We have outlined the use of FCA in building a recommender system for
black-box optimization. The implications resulting from the small, illustrative example
presented here are meaningful in that they both confirm known facts and suggest
new ones. Attribute exploration could moreover be a powerful tool for discovering
new features that are more predictive of the expected performance.</p>
      <p>The ideas presented here are work in progress. Ongoing and future work
will address the computability/decidability of features on black-box functions,
and evaluate the proposed FCA-based recommender system using the BBOB
database. We will investigate novel function and algorithm features, and compare
the results obtained by FCA with results from clustering and regression analysis.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A.</given-names>
            <surname>Auger</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          .
          <article-title>Performance evaluation of an advanced local search evolutionary algorithm</article-title>
          .
          <source>In Proc. of CEC-2005</source>
          , pp.
          <volume>1777</volume>
–
          <issue>1784</issue>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>A.</given-names>
            <surname>Auger</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          .
          <article-title>Benchmarking the (1+1)-CMA-ES on the BBOB-2009 function testbed</article-title>
          .
          <source>In Proc. of GECCO-2009</source>
          , pp.
          <volume>2459</volume>
–
          <fpage>2466</fpage>
          . ACM,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Mersmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Trautmann</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Preuss</surname>
          </string-name>
          .
          <article-title>Algorithm selection based on exploratory landscape analysis and cost-sensitive learning</article-title>
          .
          <source>In Proc. of GECCO12</source>
          , pp.
          <volume>313</volume>
–
          <fpage>320</fpage>
          . ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>P.</given-names>
            <surname>du Boucher-Ryan</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Bridge</surname>
          </string-name>
          .
          <article-title>Collaborative recommending using formal concept analysis</article-title>
          .
          <source>Knowledge-Based Systems</source>
          ,
          <volume>19</volume>
          (
          <issue>5</issue>
          ):
          <volume>309</volume>
–
          <fpage>315</fpage>
          ,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>M.</given-names>
            <surname>El-Abd</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. S.</given-names>
            <surname>Kamel</surname>
          </string-name>
          .
          <article-title>Black-box optimization benchmarking for noiseless function testbed using particle swarm optimization</article-title>
          .
          <source>In Proc. of GECCO-2009</source>
          , pp.
          <volume>2269</volume>
–
          <fpage>2274</fpage>
          . ACM,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>B.</given-names>
            <surname>Ganter</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Wille</surname>
          </string-name>
          .
          <source>Formal Concept Analysis: Mathematical Foundations</source>
          . Springer,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Finck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ros</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Auger</surname>
          </string-name>
          , et al.
          <article-title>Real-parameter black-box optimization benchmarking 2009: Noiseless functions</article-title>
definitions.
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>D. I.</given-names>
            <surname>Ignatov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Kuznetsov</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Poelmans</surname>
          </string-name>
          .
          <article-title>Concept-based biclustering for internet advertisement</article-title>
          .
          <source>In ICDM Workshop</source>
          , pp.
          <volume>123</volume>
          –
          <fpage>130</fpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>O.</given-names>
            <surname>Mersmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischl</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Trautmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Preuss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Weihs</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Rudolph</surname>
          </string-name>
          .
          <article-title>Exploratory landscape analysis</article-title>
          .
          <source>In Proc. of GECCO-11</source>
          , pp.
          <volume>829</volume>
–
          <fpage>836</fpage>
          . ACM,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>O.</given-names>
            <surname>Mersmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Preuss</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H.</given-names>
            <surname>Trautmann</surname>
          </string-name>
          .
          <article-title>Benchmarking evolutionary algorithms: Towards exploratory landscape analysis</article-title>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11. C. L. Muller, R. Ramaswamy,
          <string-name>
            <given-names>and I. F.</given-names>
            <surname>Sbalzarini</surname>
          </string-name>
          .
          <article-title>Global parameter identi cation of stochastic reaction networks from single trajectories</article-title>
          .
          <source>In Advances in Systems Biology</source>
          , volume
          <volume>736</volume>
          , chapter
          <issue>28</issue>
          , pp.
          <volume>477</volume>
–
          <fpage>498</fpage>
          . Springer,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Müller</surname>
          </string-name>
          and
          <string-name>
            <given-names>I. F.</given-names>
            <surname>Sbalzarini</surname>
          </string-name>
          .
          <article-title>A tunable real-world multi-funnel benchmark problem for evolutionary optimization (and why parallel island models might remedy the failure of CMA-ES on it)</article-title>
          .
          <source>In Proc. of IJCCI-2009</source>
          , pp.
          <volume>248</volume>
–
          <issue>253</issue>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Müller</surname>
          </string-name>
          and
          <string-name>
            <given-names>I. F.</given-names>
            <surname>Sbalzarini</surname>
          </string-name>
          .
          <article-title>Global characterization of the CEC 2005 fitness landscapes using fitness-distance analysis</article-title>
          .
          <source>In Proc. EvoStar</source>
          , volume
          <volume>6624</volume>
          of Lecture Notes in Computer Science, pp.
          <volume>294</volume>
–
          <fpage>303</fpage>
          . Springer,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>M.</given-names>
            <surname>Muñoz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kirley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Halgamuge</surname>
          </string-name>
          .
          <article-title>Exploratory landscape analysis of continuous space optimization problems using information content</article-title>
          .
          <source>IEEE Trans. Evol. Comput.</source>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Muñoz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kirley</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Halgamuge</surname>
          </string-name>
          .
          <article-title>A meta-learning prediction model of algorithm performance for continuous optimization problems</article-title>
          .
          <source>In Proc. of PPSN 2012</source>
          , pp.
          <volume>226</volume>
–
          <fpage>235</fpage>
          . Springer,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Nelder</surname>
          </string-name>
          and
          <string-name>
            <given-names>R.</given-names>
            <surname>Mead</surname>
          </string-name>
          .
          <article-title>A simplex method for function minimization</article-title>
          .
          <source>The Computer Journal</source>
          ,
          <volume>7</volume>
          (
          <issue>4</issue>
          ):
          <volume>308</volume>
–
          <fpage>313</fpage>
          ,
          <year>1965</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>L.</given-names>
            <surname>Pal</surname>
          </string-name>
          .
          <article-title>Comparison of multistart global optimization algorithms on the BBOB noiseless testbed</article-title>
          .
          <source>In Proc. of GECCO-2013</source>
          , pp.
          <volume>1153</volume>
–
          <issue>1160</issue>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>C.</given-names>
            <surname>Voglis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. S.</given-names>
            <surname>Piperagkas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. E.</given-names>
            <surname>Parsopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Papageorgiou</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Lagaris</surname>
          </string-name>
          .
          <article-title>MEMPSODE: comparing particle swarm optimization and differential evolution within a hybrid memetic global optimization framework</article-title>
          .
          <source>In Proc. of GECCO-2012</source>
          , pp.
          <volume>253</volume>
–
          <fpage>260</fpage>
          . ACM,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Wolpert</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. G.</given-names>
            <surname>Macready</surname>
          </string-name>
          .
          <article-title>No free lunch theorems for optimization</article-title>
          .
          <source>IEEE Trans. Evol. Comput.</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ):
          <volume>67</volume>
–
          <fpage>82</fpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>