<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>December</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>Method of Fractal Structuring as an Evolutionary Method of Global Optimization</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Maryna Antonevych</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitaliy Snytyuk</string-name>
          <email>snytyuk@knu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Natalia Tmienova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>24 B. Havrylyshyna Str., Kyiv, 04116</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>0</volume>
      <fpage>1</fpage>
      <lpage>03</lpage>
      <abstract>
<p>Decision-making processes in the modern world are based on solving optimization problems. The variety of such problems, of the corresponding objective functions, and of the domains in which optimal solutions are sought motivates both the development of new optimization methods and the improvement of known ones. This paper proposes a new method of fractal structuring, an evolutionary method from the category of soft computing. A feature of this method is a quick and in-depth exploration of the area in which a local extremum is located and in which the global optimum may also be found. The fractal structuring method is developed for one-dimensional, two-dimensional and n-dimensional objective functions. First experiments demonstrate the promise and effectiveness of the method and indicate possibilities for its improvement. Keywords: optimization problem, function, evolutionary method, method of fractal structuring.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
A large number of modern practical problems belong to the class of constraint satisfaction problems
[1]. The objective functions in such problems are, as a rule, non-differentiable and/or multi-extremal
dependencies. The use of classical methods of continuous, and in many cases discrete,
optimization is impossible [
        <xref ref-type="bibr" rid="ref1">2</xref>
]. Combinatorial optimization methods, evolutionary algorithms, etc., are
used to solve such problems. The functional dependencies can be given in tabular or algorithmic form; in
this case evolutionary algorithms are most often preferred.
      </p>
<p>Indeed, the use of these algorithms does not require strict constraints on the objective functions, but it does not
guarantee finding a global optimum, although under certain conditions convergence in
probability holds. The obtained solutions are considered suboptimal.</p>
      <p>
Historically, the first methods of evolutionary optimization were genetic algorithms and evolutionary strategies [
        <xref ref-type="bibr" rid="ref2 ref3">3, 4</xref>
]. These methods allowed optimization problems to be considered differently and
expanded the subject base of optimization technologies. They were based on the ideas
of natural evolution. In particular, genetic algorithms traditionally use the principle that better parents
tend to have better children: two parent potential solutions are involved in generating potential
offspring solutions. In evolutionary strategies, potential offspring solutions are generated around a
single parent solution. This is the main difference between genetic algorithms and evolutionary
strategies in the generation of offspring solutions.
      </p>
<p>Our hypothesis is that involving more potential parent solutions in generating successive potential
solutions will improve the accuracy of the solutions and speed up the convergence of
optimum search algorithms. This progress will be achieved by an in-depth study of promising potential
solutions and by reducing the number of algorithm steps in unpromising directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. A brief description of modern applications of evolutionary technologies</title>
<p>John Holland used the ideas of the genetic algorithm to study and optimize play against one-armed
and two-armed bandits (slot machines). H.-P. Schwefel and Ingo Rechenberg sought the part with
the least resistance in a wind tunnel.</p>
      <p>
The first results obtained using evolutionary methods testified to
the prospects of this area. And the No Free Lunch Theorem [
        <xref ref-type="bibr" rid="ref4">5</xref>
        ] became the theoretical basis for the
development of a set of optimization methods, each of which showed the best results in solving certain
problems with a certain structure of the source data.
      </p>
      <p>
Modern technologies of evolutionary modeling are, as a rule, focused on the further development of
the theory of evolutionary optimization and on its practical application. In the first direction we will note
only a few known results. In particular, in the field of evolutionary algorithms there is the well-known
school of the Indian professor Kalyanmoy Deb. Recent studies of this school are aimed at solving
multicriteria optimization problems using additional options that allow relevant problems to be solved
more accurately and quickly [
        <xref ref-type="bibr" rid="ref5 ref6">6, 7</xref>
        ]. An excellent overview of multicriteria optimization methods using
such options in evolutionary algorithms, in particular with the choice of informative factors and data
normalization methods, is proposed in [
        <xref ref-type="bibr" rid="ref7 ref8">8, 9</xref>
        ].
      </p>
      <p>
        Hybrid evolutionary methods, which combine technologies of fuzzy set theory, particle swarm
optimization and genetic algorithms, are proposed in [
        <xref ref-type="bibr" rid="ref9">10</xref>
        ]. Two more works are devoted to the
development of new methods of evolutionary optimization: the evolutionary method based on centers
of mass [
        <xref ref-type="bibr" rid="ref10">11</xref>
        ] and the method of deformed stars [
        <xref ref-type="bibr" rid="ref11">12</xref>
]. The latter method is based on the hypothesis that
involving more potential parent solutions in the generation of potential descendant solutions
makes the search for the global optimum more informative and allows a deeper study
of promising potential solutions.
      </p>
      <p>Thus, the results of this brief review indicate the continued development of the theory of
evolutionary optimization and practical applications of evolutionary methods.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The method of fractal structuring in one-dimensional case</title>
<p>The well-known optimization problem in the one-dimensional case can be mathematically formulated as
follows:</p>
      <p>maximize f(x)
subject to x ∈ D ⊂ R,     (1)
where x is a solution in the feasible region D, and D is some segment [a, b].</p>
<p>There are no restrictions on the function f(x) in the general case. The function f(x) can be given
analytically, in tabular form, or algorithmically. The proposed method consists of the following steps.</p>
<p>Step 1. Initialize the method parameters: n, t ← 0, m_i ← 7, i = 1, …, n {n is the number of potential
parent solutions in the population, t is the iteration number, m_i is the number of
offspring solutions of the i-th parent solution}.</p>
<p>Step 2. Generate n uniformly distributed potential solutions x_i of problem (1) on the segment
[a, b], i = 1, …, n (population P_t).</p>
<p>Step 3. For each solution x_i, find the value f(x_i), i = 1, …, n.</p>
<p>Step 4. For each x_i, create offspring solutions x_i^{j_i} ← x_i + ξ(N(0, σ_i)), j_i = 1, …, m_i, i = 1, …, n, where
ξ(N(0, σ_i)) is a normally distributed random variable with mean 0 and standard deviation σ_i.</p>
<p>Step 4.1. s_L ← 0, s_R ← 0, m_L ← 0, m_R ← 0.</p>
<p>Step 4.2. For each j_i = 1, …, m_i:</p>
      <p>Step 4.2.1. If x_i^{j_i} &lt; x_i, then {s_L ← s_L + x_i^{j_i}, m_L ← m_L + 1}, otherwise
{s_R ← s_R + x_i^{j_i}, m_R ← m_R + 1}.</p>
<p>Step 4.3. x*_L ← (1/m_L)·s_L, x*_R ← (1/m_R)·s_R.</p>
<p>Step 4.4. If f(x*_L) &gt; f(x*_R), then x_i^H ← x*_L, else x_i^H ← x*_R.</p>
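<p>Steps 4.1-4.4 can be sketched in Python as follows (a minimal illustration; the function name best_centroid and the toy objective are ours):</p>

```python
def best_centroid(parent, offspring, f):
    """Steps 4.1-4.4 (sketch): average the offspring lying to the left and
    to the right of the parent separately, then keep the better centroid."""
    left = [x for x in offspring if x < parent]     # contributes to s_L, m_L
    right = [x for x in offspring if x >= parent]   # contributes to s_R, m_R
    candidates = [sum(s) / len(s) for s in (left, right) if s]  # x*_L, x*_R
    return max(candidates, key=f)                   # x_i^H

# toy check with f(x) = -(x - 2)^2 and parent 0
f = lambda x: -(x - 2.0) ** 2
xH = best_centroid(0.0, [-1.0, -0.5, 0.5, 1.0, 1.5], f)
```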
<p>Step 4.5. Write x_i^H to the temporary population of offspring solutions P_t^test.</p>
<p>Step 5. Write the elements of P_t and P_t^test to the population P_t^in, find the values of the function f for
the elements of the population P_t^test, arrange the elements of the population P_t^in in descending order of
the values of the function f, and select the n most promising potential solutions.</p>
<p>Step 6. If the stop condition is not fulfilled, the iterative process continues (return to step 3). If
the stop condition is satisfied, then the potential solution corresponding to the
maximum value of the function is the solution of problem (1).</p>
<p>Traditionally, in a classic evolutionary strategy, each potential parent solution generates the same
number of potential solutions, no matter how promising that solution may be. Below, a new procedure
is proposed, according to which not all parent solutions are able to generate offspring solutions,
and the number of offspring solutions differs among parent solutions.</p>
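<p>The whole one-dimensional procedure can be sketched as follows (a hedged sketch: the parameter values, the clipping of offspring to [a, b] and the fixed iteration count used as the stop condition are our choices):</p>

```python
import random

random.seed(1)  # reproducible sketch

def fractal_structuring_1d(f, a, b, n=10, m=7, sigma=None, iters=50):
    """Steps 1-6 of the one-dimensional method, as we read them."""
    sigma = sigma if sigma is not None else (b - a) / n    # Step 1
    pop = [random.uniform(a, b) for _ in range(n)]         # Step 2
    for _ in range(iters):
        heirs = []                                         # P_t^test
        for x in pop:
            kids = [min(max(x + random.gauss(0.0, sigma), a), b)
                    for _ in range(m)]                     # Step 4
            left = [k for k in kids if k < x]
            right = [k for k in kids if k >= x]
            cands = [sum(s) / len(s) for s in (left, right) if s]
            heirs.append(max(cands, key=f))                # Steps 4.1-4.5
        pop = sorted(pop + heirs, key=f, reverse=True)[:n] # Step 5
    return pop[0]                                          # Step 6

best = fractal_structuring_1d(lambda x: -(x - 1.0) ** 2, -5.0, 5.0)
```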
    </sec>
    <sec id="sec-4">
      <title>4. The method of fractal structuring in two-dimensional case</title>
<p>In the two-dimensional case, the search problem
maximize f(x1, x2)
subject to x = (x1, x2) ∈ D ⊂ R²     (2)
is considered, where x is a solution in the feasible region D, and D is some rectangle
D = {(x1, x2) | x1 ∈ [p1; p2], x2 ∈ [q1; q2]}, p1, p2, q1, q2 ∈ R.</p>
<p>The properties of the function f are the same as in the previous, one-dimensional case. The
corresponding method consists of the following steps.</p>
<p>Step 1. Initialize the parameters L ← 1, T ← T_max, t ← 0, n {L is a parameter of the method; T, as in
similar methods, plays the role of temperature and is initially equal to a large number T_max; t is the
iteration number; n is the number of potential parent solutions in the population}.</p>
<p>Step 2. Generate n point-solutions P_t = {(x1^1, x2^1), (x1^2, x2^2), …, (x1^n, x2^n)} uniformly distributed in D.</p>
      <p>Step 2.1. Find the values of the function f at the points of P_t and obtain f1, f2, …, fn.</p>
<p>Step 3. Plot virtual circles with centers at the points of P_t and radii r1, r2, …, rn. We require that
the circles be placed completely in D. Initially all radii are considered equal to each
other, r = min{(p2 − p1), (q2 − q1)}/n.</p>
<p>Step 4. Let L ← L/(t + 1). For each i-th point from P_t we generate 7 offspring solutions. If (x1^i, x2^i)
is a potential parent solution, the coordinates of an offspring solution are
x1^Hk = random(x1^i − 3L·r_i; x1^i + 3L·r_i),
x2^Hk = x2^i + ξ·√(r_i² − (x1^Hk − x1^i)²),
ξ = random{−1; 1}, k = 1, …, 7, i = 1, …, n.</p>
      <p>Parent solutions and offspring solutions with coordinates (x1^Hk, x2^Hk), k = 1, …, 7, are recorded to the
population P_v.</p>
      <p>The population P_v consists of 7n + n = 8n potential solutions (fig. 1). Arrange them in descending
order of the objective function values.</p>
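<p>Step 4's generation of offspring on a virtual circle can be sketched as follows (our reading of the partly damaged formulas; we clamp the strip half-width 3·L·r to r so the square root stays real, which is an assumption on our part):</p>

```python
import math, random

random.seed(0)

def circle_offspring(x1, x2, r, L, k=7):
    """Two-dimensional Step 4 (sketch): draw the first coordinate in a strip
    around the parent, then place the point exactly on the circle of radius r."""
    half = min(3.0 * L * r, r)          # clamp keeps the radicand nonnegative
    kids = []
    for _ in range(k):
        dx = random.uniform(-half, half)                        # x1^Hk - x1^i
        x2h = x2 + random.choice((-1.0, 1.0)) * math.sqrt(r * r - dx * dx)
        kids.append((x1 + dx, x2h))
    return kids

kids = circle_offspring(0.0, 0.0, r=1.0, L=0.25, k=7)
```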
<p>Step 4 aims to explore the area around a potential solution of the optimization problem. This
approach does not protect us from getting stuck at a local extremum. To prevent this, we suggest the following
steps:</p>
<p>Step 5. Select the 2n best (upper) potential solutions from the population P_v. Form 2n pairs (i, j),
i, j = random{1, 2, …, m}, i ≠ j, and obtain 2n new potential solutions:
x^l = ((x1^i + x1^j)/2, (x2^i + x2^j)/2), l = 1, …, 2n.</p>
<p>If a pair of promising solutions are close together, their average value will allow a deeper
exploration of the area around these solutions (it is assumed that they are at short distances from each other)
and possibly the finding of a better solution.</p>
      <p>If the solutions are far from each other, then finding their average value is an attempt to expand the
search area and, as in the previous case, it is possible to find a better solution.</p>
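<p>Step 5 can be sketched as follows (pair_midpoints is our name; pairs are drawn without replacement so that i ≠ j):</p>

```python
import random

random.seed(0)

def pair_midpoints(best, n_pairs):
    """Step 5 (sketch): pair up promising solutions at random and take
    coordinate-wise midpoints to probe the area between them."""
    mids = []
    while len(mids) < n_pairs:
        i, j = random.sample(range(len(best)), 2)   # guarantees i != j
        (a1, a2), (b1, b2) = best[i], best[j]
        mids.append(((a1 + b1) / 2.0, (a2 + b2) / 2.0))
    return mids

mids = pair_midpoints([(0.0, 0.0), (2.0, 2.0), (4.0, 0.0)], n_pairs=4)
```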
<p>Step 6. Select the 2n worst solutions from the population P_v. We also find the average value fave of the
objective function over the n best solutions. For each of the worst solutions (x1^l, x2^l), take the
following steps:</p>
      <p>Step 6.1. Give a small increment (Δx1^l, Δx2^l), generated by a uniform distribution,
where:
Δx1^l ∈ (x1^l − (p2 − p1)/n; x1^l + (p2 − p1)/n),
Δx2^l ∈ (x2^l − (q2 − q1)/n; x2^l + (q2 − q1)/n), l = 1, …, 2n.</p>
<p>If f(x1^l + Δx1^l, x2^l + Δx2^l) &gt; fave, then (x1^H, x2^H) = (x1^l + Δx1^l, x2^l + Δx2^l) is a new potential solution and it
is recorded to the population P_w. Otherwise, take a random number r ∈ (0, 1), and if
r ≤ P(min(Δx1^l, Δx2^l)) = exp(−min(Δx1^l, Δx2^l)/T),
then (x1^H, x2^H) is recorded to the population P_w.</p>
      <p>If r &gt; P(min(Δx1^l, Δx2^l)), we move on to the next solution from the set of the worst.
If the set of the worst solutions is exhausted and the stop criterion is not met, then T ← T/2, t ← t + 1,
and we go to step 4. If the stop criterion is met, the algorithm ends. Thus, the
population of the new epoch is formed from the best solutions obtained from the 2n elements of the
population P_w, the 2n solutions calculated in step 5, and the 8n solutions from the population P_v, so that their
total number equals n.</p>
      <p>5. The method of fractal structuring in the n-dimensional case</p>
<p>In the n-dimensional case, the optimization problem
maximize f(x1, x2, …, xm)
subject to x = (x1, x2, …, xm) ∈ D ⊂ R^m     (3)
is considered, where x is a solution in the feasible region D,
and D is some rectangular hyperparallelepiped
D = {(x1, x2, …, xm) | xi ∈ [ai, bi], i = 1, …, m}, ai, bi ∈ R, i = 1, …, m.</p>
<p>In some cases, data normalization is applied and the area D is the hypercube
D = {(x1, x2, …, xm) | xi ∈ [0, 1], i = 1, …, m}.</p>
      <p>The following algorithm for solving problem (3) is proposed.</p>
      <p>Step 1. Perform the initialization of the algorithm parameters. The iteration number (of the population of
potential solutions) is t = 1.</p>
      <p>Step 2. Generate a sample of points uniformly distributed in the hypercube:
P_t = {(a1^1, a2^1, …, am^1), …, (a1^n, a2^n, …, am^n)}, ai^j ∈ (0, 1), i = 1, …, m, j = 1, …, n.</p>
      <p>Step 2.1. Find the values of the function f at the points of the sample P_t and obtain f1, f2, …, fn. We
will require that each hypersphere lie completely inside the hypercube [0, 1]^n. The equation of such
hyperspheres:
Σ_{i=1..m} (xi − ai^j)² = r², j = 1, …, n.</p>
      <p>Step 4. For each j-th hypersphere we generate 7 offspring solutions (points) that lie on its
surface and are the centers of hyperspheres with radius r ← r/(t + 1). To find such a point, we generate
a uniformly distributed random number k = random{1, 2, …, m} and a random vector
(x1, x2, …, x_{k−1}, x_{k+1}, …, xm) such that
xi ∈ (ai^j − r, ai^j + r), i ≠ k, and Σ_{i=1..m, i≠k} (xi − ai^j)² ≤ r²,
and then calculate
xk = ak^j + random{−1; 1}·√(r² − Σ_{i=1..m, i≠k} (xi − ai^j)²).</p>
      <p>We obtain a point with coordinates (x1, x2, …, xm) lying on the parent hypersphere. Denote
its elements bi^{jl} = xi, where i determines the coordinate, j is the number of the parent solution, and l is
the number of the offspring solution. The number of population elements will be 8n.</p>
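<p>The surface-point generation of Step 4 can be sketched as follows (a sketch under our reading of the formulas; the rejection loop that keeps the radicand nonnegative is our implementation choice):</p>

```python
import math, random

random.seed(0)

def sphere_surface_point(center, r):
    """n-dimensional Step 4 (sketch): draw all coordinates but the k-th near
    the centre, rejecting draws that leave the radicand negative, then solve
    for the k-th coordinate so the point lies exactly on the hypersphere."""
    m = len(center)
    k = random.randrange(m)
    while True:
        d = [random.uniform(-r, r) if i != k else 0.0 for i in range(m)]
        s = sum(x * x for x in d)               # sum over i != k (d[k] is 0)
        if s <= r * r:
            d[k] = random.choice((-1.0, 1.0)) * math.sqrt(r * r - s)
            return [c + x for c, x in zip(center, d)]

p = sphere_surface_point([0.5, 0.5, 0.5], r=0.1)
```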
<p>The next steps of the algorithm ensure the "diversity" of the population of potential
solutions and help to avoid getting stuck at local optima. Next, we propose data transformations that play
the role of mutations, as well as focus on a more detailed study of promising areas and a random search
over a wide range of unpromising solutions.</p>
<p>Step 5. If a pair of prospective solutions are close together, examining their average and the values
around it will allow a deeper exploration of the prospective area (assuming they are at a short
distance from each other) and possibly the finding of a better solution. If the solutions are far from each other,
then finding their average value is an attempt to expand the search area and, as in the previous case, it
is possible to find a better solution. Let a = (a1, a2, …, am) and b = (b1, b2, …, bm) be promising potential
parent solutions. Then the point that lies in the middle of the segment connecting the points a and
b has the following coordinates</p>
<p>c = ((a1 + b1)/2, (a2 + b2)/2, …, (am + bm)/2).</p>
<p>Suppose, too, that the best offspring solutions may lie at the midpoints of the sides of the rectangle for which
the segment connecting the points a and b is a diagonal. To generate them, we generate a random number
r ∈ {−1, 1}, where −1 corresponds to the solution a and 1 corresponds to the solution b, and a random
number q ∈ {1, 2, …, m}. The descendant vector is generated as follows:
c = (−(r − 1)a1/2 + (r + 1)b1/2, …, (aq + bq)/2, …, −(r − 1)am/2 + (r + 1)bm/2).</p>
<p>(Figure legend: − parents; − offsprings.)</p>
<p>Thus, as a result of generating potential offspring solutions, the first n points will lie at the middle
of the main diagonal of the hypercube (rectangular hyperparallelepiped) whose diagonal ends are
the parent potential solutions, and the other points will lie at the middles of the side edges of such a hypercube
or hyperparallelepiped. This way of generating potential offspring solutions allows us to explore the
area between the best solutions, as well as to test the hypothesis that parent solutions may be improved
by changing one of the coordinates.</p>
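<p>The two kinds of offspring described above can be sketched as follows (the function name is ours; for the edge point, r = −1 keeps the coordinates of a and r = 1 those of b, with only coordinate q averaged):</p>

```python
import random

random.seed(0)

def diagonal_and_edge_midpoints(a, b):
    """n-dimensional Step 5 (sketch): the midpoint of segment ab, plus one
    point that keeps all coordinates of a (r = -1) or b (r = 1) except a
    random coordinate q, which is averaged."""
    c_mid = [(ai + bi) / 2.0 for ai, bi in zip(a, b)]
    r = random.choice((-1, 1))
    q = random.randrange(len(a))
    c_edge = list(a if r == -1 else b)
    c_edge[q] = (a[q] + b[q]) / 2.0     # only coordinate q is averaged
    return c_mid, c_edge

mid, edge = diagonal_and_edge_midpoints([0.0, 0.0], [2.0, 4.0])
```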
<p>Step 6. Just as an even better solution may lie between the best potential solutions, so, given
the relief of many polyextremal functions, the best solution may be found among the worst potential solutions.
So, as in the two-dimensional case, let us choose the 2n worst solutions from the set P_v.
Similarly, we find the average value fave of the objective function over the n best solutions. For each of
the worst potential solutions (x1^l, x2^l, …, xm^l), follow these steps.</p>
<p>Step 6.1. Let's give a small random increment (Δx1^l, Δx2^l, …, Δxm^l), where
Δxi^l ∈ (xi^l − 1/n, xi^l + 1/n), if D = {(x1, x2, …, xm) | xi ∈ [0, 1], i = 1, …, m},
and
Δxi^l ∈ (xi^l − (bi − ai)/n, xi^l + (bi − ai)/n), if D = {(x1, x2, …, xm) | xi ∈ [ai, bi], i = 1, …, m}, ai, bi ∈ R, i = 1, …, m,
l = 1, …, 2n.</p>
<p>It is also possible to generate a random number p = random{1, 2, …, m} and provide a random
increment of only one coordinate:
Δxp^l ∈ (xp^l − 1/n, xp^l + 1/n), or Δxp^l ∈ (xp^l − (bp − ap)/n, xp^l + (bp − ap)/n).
The other coordinates of the potential solution remain unchanged, i.e. Δxq^l = 0, q = 1, …, m, q ≠ p.</p>
      <p>So, if f(x1^l + Δx1^l, x2^l + Δx2^l, …, xm^l + Δxm^l) &gt; fave, then
(x1^H, x2^H, …, xm^H) = (x1^l + Δx1^l, x2^l + Δx2^l, …, xm^l + Δxm^l)
is a new potential solution that is recorded to the population P_w. Otherwise, generate a
uniformly distributed random number r ∈ (0, 1), and if
r ≤ P(min(Δx1^l, Δx2^l, …, Δxm^l)) = exp(−min(Δx1^l, Δx2^l, …, Δxm^l)/T),
then (x1^H, x2^H, …, xm^H) = (x1^l + Δx1^l, x2^l + Δx2^l, …, xm^l + Δxm^l) is written to the new population P_w. If
r &gt; P(min(Δx1^l, Δx2^l, …, Δxm^l)), we move on to the next of the 2n worst solutions.</p>
<p>If the set of the worst solutions has been exhausted and the stop condition is not met, then reduce the
temperature, T ← T/2, t ← t + 1, and go to step 4.</p>
      <p>If the stop condition is satisfied, the algorithm ends.</p>
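<p>The acceptance rule for the worst solutions, with its halving temperature, behaves like simulated annealing; a minimal sketch (accept_worse is our name):</p>

```python
import math, random

def accept_worse(delta, T):
    """Annealing-style acceptance (sketch): an unsuccessful increment delta
    is still accepted with probability exp(-delta/T); halving T each epoch
    makes such acceptance ever less likely."""
    return random.random() <= math.exp(-delta / T)

random.seed(0)
hot = sum(accept_worse(1.0, T=100.0) for _ in range(1000))   # high temperature
random.seed(0)
cold = sum(accept_worse(1.0, T=0.01) for _ in range(1000))   # low temperature
```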
<p>Thus, the population of the new iteration is formed from the best solutions obtained from the 8n solutions of the
population P_v, the 2n solutions from the population P_w, and the n solutions obtained in step 5.</p>
      <p>6. Algorithm for improving the process of formation of the offspring solutions population</p>
      <p>
        According to the conditions of the implementation of the evolutionary strategy [
        <xref ref-type="bibr" rid="ref12 ref13 ref14">13-15</xref>
], we initially
assume that each parent potential solution can have the same number of offspring solutions, which is
a reason for the slow convergence of the algorithm. In order to speed it up, we propose to use
the following hypotheses.
      </p>
<p>Hypothesis 1. It is more probable that a better offspring solution lies near a better parent solution
than near a worse one.</p>
<p>Hypothesis 2. To find a better offspring solution, it is rational to generate more offspring solutions
in the neighborhood of a better parent solution than of a worse one.</p>
      <p>We will propose an appropriate procedure for generating offspring solutions and verify it at the end
of the study.</p>
<p>Suppose there are n parent solutions and we need to obtain 7n potential offspring solutions.
Let the set of potential solutions contain the following elements:</p>
      <sec id="sec-4-1">
        <title>Find the values of the objective function at points:</title>
<p>x^l = (x1^l, x2^l, …, xm^l), l = 1, …, n.</p>
        <p>f^l = f(x^l) = f(x1^l, x2^l, …, xm^l), l = 1, …, n.</p>
<p>Arrange the sequence {x^l}, l = 1, …, n, in descending order of the values f^l. Divide this sequence in one
of the ratios (50:50, 60:40, or 70:30), where the first number is the percentage of the best
solutions that are kept and the second is the percentage of the worst solutions that are removed. Let z be
the number of remaining solutions.</p>
<p>Let's perform a normalization of the values f^l, l = 1, …, n (for example, so that they are nonnegative and sum to one).
Then the l-th parent solution x^l will have N_l = [f^l · 7n] offspring solutions, l = 1, …, z.</p>
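<p>The allocation of offspring quotas can be sketched as follows (the sum-to-one normalization is one possible choice, since the text leaves the formula open; it presumes nonnegative fitness values):</p>

```python
def offspring_quota(fitness, total):
    """Section 6 (sketch): normalize the kept parents' fitness values to sum
    to one and give parent l the quota N_l = floor(f_l * total), so better
    parents generate more offspring."""
    s = sum(fitness)
    return [int(f / s * total) for f in fitness]

# z = 3 parents kept, 7n = 21 offspring to distribute
quota = offspring_quota([5.0, 3.0, 2.0], total=21)
```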
        <p>The magnitude of the parent potential solution neighborhood in which potential offspring solutions
will be generated is determined by the researcher and depends on the standard deviation value.</p>
        <p>The following hypothesis is to be studied.</p>
        <p>Hypothesis 3. Over time, the value of standard deviation used to generate potential offspring
solutions for the best parents should decrease, and for the worst parents should increase.</p>
        <p>
          The realization of this hypothesis will be aimed at finding the best solution in the neighborhood of
the best parent solutions, and the large value of standard deviation will play the role of mutations in
evolutionary algorithms and allow us to explore a wider area with the prospect of finding a global
optimum [
          <xref ref-type="bibr" rid="ref15">16</xref>
          ].
        </p>
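<p>One illustrative way to realize Hypothesis 3 (the schedule itself is entirely our assumption, not taken from the text):</p>

```python
def sigma_schedule(rank, n, t, sigma0=1.0):
    """Hypothesis 3 as an illustrative schedule: the standard deviation used
    to generate offspring shrinks with time t for the better-ranked half of
    the parents and grows for the worse-ranked half, mimicking mutation."""
    if rank < n // 2:
        return sigma0 / (1.0 + t)   # exploit around good parents
    return sigma0 * (1.0 + t)       # explore around bad parents

s_best = sigma_schedule(0, 10, t=5)
s_worst = sigma_schedule(9, 10, t=5)
```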
      </sec>
    </sec>
    <sec id="sec-5">
      <title>7. Experimental verification of the obtained results</title>
      <p>The developed new method of fractal structuring requires a large number of experiments that would
confirm its effectiveness. In this paper, we present the results of only one of the simplest experiments
for the one-dimensional optimization problem. However, further experiments for more complex cases
also confirm the viability of the method.</p>
<p>Consider the problem:
maximize f(x) = sin(10πx)/(2x) + (x − 1)^4, x ∈ [0.5; 2.5].</p>
      <sec id="sec-5-1">
        <title>The graph of this function is shown in Fig. 3.</title>
        <p>The dynamics of the objective function by iterations of the fractal structuring method is shown in
Fig. 4.</p>
<p>Although the stop criterion allowed a maximum of 100 iterations, the
algorithm found the global maximum fmax = 5.062389479647202 at the point
xmax = 2.499984685195675 in 9 iterations.</p>
<p>Experiments with other algorithms showed that the optimal value of the objective function was
found by the method of deformed stars in 14 iterations and by the genetic algorithm in 50 iterations, while the
evolutionary strategy method did not reach the global optimum in 50 iterations. Such results
convincingly support the fractal structuring method.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>8. Conclusion</title>
<p>In this paper, we propose a new method of fractal structuring. Features of its realization for the
one-dimensional, two-dimensional and n-dimensional optimization cases are considered. In
addition, a procedure has been developed to identify promising parent solutions and to determine the
number of solutions generated by each of them. The method demonstrates convincing results
regarding its effectiveness. The method is parametric and allows the search to be conducted in a given region. Its main
idea is a fractal search around some areas. The obtained results testify to the fast convergence of the
fractal structuring algorithm and to its considerable accuracy.</p>
    </sec>
    <sec id="sec-7">
      <title>9. References</title>
      <p>[1] H. Hnatiienko, Choice Manipulation in Multicriteria Optimization Problems, in: Proceedings of
Selected Papers of the XIX International Scientific and Practical Conference "Information
Technologies and Security" (ITS 2019), 2019, pp. 234–245.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hnatiienko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Tmienova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kruglov</surname>
          </string-name>
          ,
          <article-title>Methods for Determining the Group Ranking of Alternatives for Incomplete Expert Rankings</article-title>
          , in: S. Shkarlet,
          <string-name>
            <given-names>A.</given-names>
            <surname>Morozov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Palagin</surname>
          </string-name>
          . (Eds.),
          <source>Mathematical Modeling and Simulation of Systems (MODS'</source>
          <year>2020</year>
          ),
          <source>MODS 2020, Advances in Intelligent Systems and Computing</source>
          , Springer, Cham, volume
          <volume>1265</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>217</fpage>
          -
          <lpage>226</lpage>
. doi:10.1007/978-3-030-58124-4_21.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Holland</surname>
          </string-name>
          , Genetic Algorithms, Scientific American, volume
          <volume>267</volume>
          , no.
          <issue>1</issue>
          ,
<year>1992</year>
          , pp.
          <fpage>66</fpage>
          -
          <lpage>73</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H.-P.</given-names>
            <surname>Schwefel</surname>
          </string-name>
          , Numerical Optimization of Computer Models, Wiley, Chichester,
          <year>1981</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Wolpert</surname>
          </string-name>
, W. G. Macready,
          <article-title>No Free Lunch Theorems for Optimization</article-title>
          ,
          <source>IEEE transactions on evolutionary computation 1</source>
          ,
          <year>1997</year>
          , p.
          <fpage>67</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Deb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Pratap</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Meyarivan</surname>
          </string-name>
,
          <article-title>A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II</article-title>
          , in:
          <source>Proceedings of IEEE transactions on evolutionary computation</source>
          , vol.
          <volume>6</volume>
          , no.
          <issue>2</issue>
          ,
<year>2002</year>
          , pp.
          <fpage>182</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Deb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <article-title>An Evolutionary Many-Objective Optimization Algorithm Using Reference-PointBased Nondominated Sorting Approach, Part I: Solving Problems With Box Constraints</article-title>
          ,
          <source>in: Proceedings of IEEE Transactions on Evolutionary Computation</source>
          , volume
          <volume>18</volume>
          , no.
          <issue>4</issue>
          ,
<year>2014</year>
          , pp.
          <fpage>577</fpage>
          -
          <lpage>601</lpage>
          , doi:10.1109/TEVC.
          <year>2013</year>
          .
          <volume>2281535</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>B.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. N.</given-names>
            <surname>Browne</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yao</surname>
          </string-name>
          ,
          <article-title>A Survey on Evolutionary Computation Approaches to Feature Selection</article-title>
          ,
          <source>IEEE Transactions on Evolutionary Computation</source>
          , vol.
          <volume>20</volume>
          , no.
          <issue>4</issue>
          ,
          <year>2016</year>
          , pp.
          <fpage>606</fpage>
          -
          <lpage>626</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ishibuchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Trivedi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Srinivasan</surname>
          </string-name>
          ,
          <article-title>A Survey of Normalization Methods in Multiobjective Evolutionary Algorithms</article-title>
          ,
          <source>IEEE Transactions on Evolutionary Computation</source>
          , vol.
          <volume>25</volume>
          , no.
          <issue>6</issue>
          ,
          <year>2021</year>
          , pp.
          <fpage>1028</fpage>
          -
          <lpage>1048</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Valdez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Melin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Castillo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Montiel</surname>
          </string-name>
          ,
          <article-title>A New Evolutionary Method with a Hybrid Approach Combining Particle Swarm Optimization and Genetic Algorithms using Fuzzy Logic for Decision Making</article-title>
          ,
          <source>in: Proceedings of 2008 IEEE Congress on Evolutionary Computation (IEEE World Congress on Computational Intelligence)</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>1333</fpage>
          -
          <lpage>1339</lpage>
          , doi:10.1109/CEC.2008.4630968.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.-A.</given-names>
            <surname>Mejía-de-Dios</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mezura-Montes</surname>
          </string-name>
          ,
          <article-title>A New Evolutionary Optimization Method Based on Center of Mass</article-title>
          , in:
          <string-name>
            <given-names>K.</given-names>
            <surname>Deep</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Salhi</surname>
          </string-name>
          (Eds.),
          <source>Decision Science in Action</source>
          , Asset Analytics (Performance and Safety Management), Springer, Singapore,
          <year>2019</year>
          , doi:10.1007/978-981-13-0860-4_6.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>N.</given-names>
            <surname>Tmienova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Snytyuk</surname>
          </string-name>
          ,
          <article-title>Method of deformed stars for global optimization</article-title>
          ,
          <source>in: Proceedings of the 2020 IEEE 2nd International Conference on System Analysis &amp; Intelligent Computing (SAIC)</source>
          , Kyiv, Ukraine,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>4</lpage>
          . doi:10.1109/SAIC51296.2020.9239208.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Auger</surname>
          </string-name>
          and
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <article-title>A restart CMA evolution strategy with increasing population size</article-title>
          ,
          <source>in: Proceedings of the IEEE Congress on Evolutionary Computation (CEC 2005)</source>
          , IEEE Press,
          <year>2005</year>
          , pp.
          <fpage>1769</fpage>
          -
          <lpage>1776</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>P. R.</given-names>
            <surname>Sampaio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hansen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Brockhoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Auger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Atamna</surname>
          </string-name>
          ,
          <article-title>A methodology for building scalable test problems for continuous constrained optimization</article-title>
          ,
          <source>Gaspard Monge Program for Optimisation (PGMO), Paris-Saclay</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>R.</given-names>
            <surname>Mallipeddi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Suganthan,</surname>
          </string-name>
          <article-title>Differential evolution with ensemble of constraint handling techniques for solving CEC 2010 benchmark problems</article-title>
          ,
          <source>in: Proceedings of the IEEE Congress on Evolutionary Computation, CEC</source>
          <year>2010</year>
          ,
          <string-name>
            <given-names>Barcelona</given-names>
            <surname>Spain</surname>
          </string-name>
          ,
          <year>2010</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:10.1109/CEC.2010.5586330.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Maniyappan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Umeda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Maki</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Akimoto</surname>
          </string-name>
          ,
          <article-title>Effectiveness and mechanism of broaching-to prevention using global optimal control with evolution strategy (CMA-ES)</article-title>
          ,
          <source>Journal of Marine Science and Technology</source>
          ,
          <year>2020</year>
          , doi:10.1007/s00773-020-00743-4.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>