<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Modifications of Self-Organizing Migrating Algorithm</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Radka Poláková</string-name>
          <email>radka.polakova@fpf.slu.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniel Valenta</string-name>
          <email>daniel.valenta@fpf.slu.cz</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>ITAT'25: Information Technologies - Applications and Theory</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Silesian University in Opava, Faculty of Philosophy and Science, Institute of Computer Science</institution>
          ,
          <addr-line>746 01 Opava</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This paper deals with global optimization and focuses on the Self-Organizing Migrating Algorithm (SOMA). Three new versions of SOMA are proposed: the first two introduce different mechanisms to maintain population diversity and the third combines both approaches. The algorithms were tested on the CEC2014 benchmark set at two levels of dimension, D = 10 and D = 30. The results of the experiments suggest that two of the proposed modifications can improve the performance of the original SOMA-T3 variant in many cases.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>CEUR Workshop Proceedings, ISSN 1613-0073.</p>
    </sec>
    <sec id="sec-3">
      <title>2. SOMA Algorithm</title>
      <p>SOMA (Self-organizing Migrating Algorithm) was introduced in 2000 by Zelinka and Lampinen [3].
The algorithm can be viewed as a model inspired by the cooperative hunting behavior of animal packs.</p>
      <p>It works with a population of points from the search space. At the beginning of the run, the algorithm
generates a random initial population and evaluates the objective (optimized) function at each of its points.
Then it works in cycles, so-called migrations. After each migration, the algorithm checks whether the
stopping condition is met; when it is, the algorithm returns the best solution found.</p>
      <p>In each migration, each point travels towards the best point of the population, which is named the
leader. The point can also travel past the leader. The length of the path depends on the parameters of the
algorithm. Each point visits several places (positions) on its way to the leader; we can say that the
point jumps along the line (its path). The best of the visited positions is the new candidate position
of this jumping point. The member of the population is moved to this candidate position if it is
better than the original position of the member (in the sense of the optimized function value). A migration
includes such jumping, and possible moving, for each member of the population except the best point of
the population, the leader.</p>
      <p>The movement of a member towards the leader need not be straight; it can move only in some dimensions.
This is controlled by a parameter called PRT, a number between 0 and 1, more precisely a number
of the interval [0, 1]. PRT determines the probability that the movement of a point towards the leader is
performed in a given dimension. Before computing new candidate positions for a population member, a PRTVector is
computed. It consists of D elements, and each element is from the set {0, 1}. The relevant member is
moved only in the dimension(s) where the PRTVector has the value 1 in the corresponding position(s). First, a
vector of random numbers (from a uniform distribution) is generated. Then, wherever this vector has a lower
value than PRT, the PRTVector has the value 1 at the corresponding place; at the other places, there are zeros.</p>
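      <p>The generation of the PRTVector can be sketched as follows (a minimal NumPy sketch; the function and variable names are illustrative, not taken from the reference implementation):</p>
      <preformat>
```python
import numpy as np

def make_prt_vector(dim, prt, rng):
    # 1 in every dimension where the uniform random sample falls
    # below PRT, 0 elsewhere; the member moves only where there is a 1.
    return (prt > rng.random(dim)).astype(float)

rng = np.random.default_rng(42)
print(make_prt_vector(5, 0.3, rng))  # a binary vector of length 5
```
      </preformat>
      <p>With PRT close to 1, almost all dimensions are perturbed; with PRT close to 0, almost none are.</p>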
      <p>The population member Mig jumps along the line determined by its position and the leader position,
moving from its position towards the leader position in steps. The Step is an algorithm parameter, a
number from the interval (0, 1), and the length of each jump of the point is Step × (Leader − Mig). As already
written, the movement is done only in some dimensions (according to the PRTVector). The recommended
value of Step is 0.11 or 0.33.</p>
      <p>The next parameter of the algorithm is called PathLength. It determines the maximal length of the
entire path from a population member towards the leader. The maximum length is PathLength × (Leader − Mig).
The recommended value of PathLength ranges from 1.1 to 3.</p>
      <p>Let us now look at how the new position is calculated in more detail. First, new candidate positions are
calculated for all members of the population during a migration. Then each point is moved in a single
step to a better position (only in the dimensions selected by the PRTVector), so that all points move together.</p>
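      <p>The candidate positions of one member on its way towards the leader can be sketched as follows (NumPy; the names are illustrative):</p>
      <preformat>
```python
import numpy as np

def jump_positions(mig, leader, step, n_jumps, prt_vec):
    # Candidate positions on the way to the leader: the j-th jump moves
    # the member by j * Step of the (leader - mig) vector, but only in
    # the dimensions where the PRT vector equals 1.
    return [mig + (leader - mig) * j * step * prt_vec
            for j in range(1, n_jumps + 1)]

mig = np.array([0.0, 0.0])
leader = np.array([10.0, 10.0])
prt_vec = np.array([1.0, 0.0])           # move only in the first dimension
positions = jump_positions(mig, leader, 0.33, 3, prt_vec)
print(positions[-1])                     # approximately [9.9, 0.0]
```
      </preformat>
      <p>The best of these candidates then becomes the member's new position if it improves on the original one.</p>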
      <p>The last parameter is the size of the population, NP. It is recommended to set NP to 10 when the
dimension of the solved problem is less than 100; otherwise, a population of
20-50 members is recommended.</p>
      <p>The duration of the algorithm run is determined by the number of migrations or by the number of
evaluations of the optimized function [3].</p>
    </sec>
    <sec id="sec-3-1">
      <title>2.1. Migration Strategies</title>
      <p>The authors of the algorithm proposed several different migration strategies [3]. The one described in the
previous paragraphs is called AllToOne. The other migration strategies are specified as follows:
• AllToOneRand: The leader is selected at random rather than based on the optimized function
value, introducing an element of stochasticity.
• AllToOneAdaptive: In this migration strategy, the point is moved immediately after its
better position is found (not at the end of the whole migration as in AllToOne). So, for the remaining
members, the new position of the point can already be the leader if it is better than the previous
leader.
• AllToAll: Each point migrates towards all other members of the population. The point is then moved to
the best position found over all of its migration paths if that position is better than its original position. This
strategy is computationally demanding.
• AllToAllAdaptive: Similar to AllToAll, but each individual immediately moves to a better
position found during its migration path and then migrates towards the next point of the population
from its new position, instead of waiting to find the best position over all of its migration
paths.</p>
    </sec>
    <sec id="sec-4">
      <title>3. SOMA-T3 Algorithm</title>
      <p>The SOMA-T3 algorithm is a very efficient modification of the Self-Organizing Migrating Algorithm. The
algorithm is described in detail in Algorithm 1. The changes from the original SOMA algorithm follow
(the authors' implementation in Python [10] was very helpful in identifying them precisely).</p>
      <p>It uses a dynamic setting of the parameters PRT and Step. Both are set separately for each individual of
the population, PRT according to equation (1) and Step according to equation (2). One can see that
PRT is small at the beginning of an algorithm run and almost 0.95 at the end of the run.
In contrast, Step decreases as the algorithm runs.</p>
      <p>PRT = 0.05 + 0.90 × (FEs / MaxFEs),   (1)
Step = 0.15 − 0.08 × (FEs / MaxFEs),   (2)
where FEs is the current number of evaluations of the optimized function and MaxFEs is their maximum number.</p>
      <p>Such a dynamic setting of the parameters results in the modification of only a few dimensions at the
beginning of the run, and the jumps along the aforementioned line are relatively large due to a high
Step, which supports broad exploration. Later in the run, PRT increases and Step decreases, so more
dimensions are updated, but the jumps are smaller. This helps the algorithm smoothly shift from exploring
the search space to fine-tuning the best solutions found.</p>
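      <p>Under this reading of equations (1) and (2), the two schedules can be sketched as follows (the function names are illustrative):</p>
      <preformat>
```python
def prt_schedule(fes, max_fes):
    # Eq. (1): PRT grows from 0.05 at the start towards 0.95 at the end.
    return 0.05 + 0.90 * (fes / max_fes)

def step_schedule(fes, max_fes):
    # Eq. (2): Step shrinks from 0.15 at the start towards 0.07 at the end.
    return 0.15 - 0.08 * (fes / max_fes)

for fes in (0, 50_000, 100_000):
    print(fes, round(prt_schedule(fes, 100_000), 3),
          round(step_schedule(fes, 100_000), 3))
```
      </preformat>
      <p>Early in the run, few dimensions move but with large jumps; late in the run, most dimensions move with small jumps.</p>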
      <p>The algorithm does not use the parameter PathLength. It uses the parameter Step with its original
meaning together with a parameter Njumps. The number Njumps indicates how many positions on the
path from the original position of a point towards the leader position (in some coordinates; with step
Step × (Leader − Mig), where Mig is the position of the currently migrated point of the population) the algorithm
explores.</p>
      <p>The SOMA-T3 algorithm also does not use only one population of points to solve an optimization
problem. It uses a relatively large population, but it migrates only a part of this population, and in each
cycle this part is different. It has a population P of NP points. For each migration, it randomly chooses
a population M of m members from P. And finally, only the best n members of M make the final
population N and migrate. The leader is not one of them. In this algorithm, k members are randomly
selected from P to form a population K of possible leaders; this is done separately for each migrant.
The best point of K is the Leader.</p>
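      <p>The selection of migrants and of a leader can be sketched as follows (a simplified NumPy sketch for one migration; in the full algorithm the leader pool K is redrawn for every migrant, and the names here are illustrative):</p>
      <preformat>
```python
import numpy as np

def select_for_migration(fitness, m, n, k, rng):
    # Draw m random candidates from P and keep the n best as migrants;
    # draw k random leader candidates from P, their best point is Leader.
    # Minimization: smaller fitness is better.
    cand = rng.choice(len(fitness), size=m, replace=False)
    migrants = sorted(cand, key=lambda i: fitness[i])[:n]
    pool = rng.choice(len(fitness), size=k, replace=False)
    leader = min(pool, key=lambda i: fitness[i])
    return migrants, leader

rng = np.random.default_rng(7)
fitness = rng.random(100)                 # NP = 100 points
migrants, leader = select_for_migration(fitness, m=10, n=5, k=15, rng=rng)
print(len(migrants))
```
      </preformat>
      <p>Only the n selected migrants spend function evaluations in the migration, which keeps the large population P relatively cheap to maintain.</p>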
    </sec>
    <sec id="sec-5">
      <title>4. Three Modifications of SOMA-T3</title>
      <p>We studied the SOMA-T3 algorithm with the goal of improving its efficiency. Our first modification
lies in the following. We wanted to strengthen the exploration of the algorithm, but also to have an
algorithm very similar to SOMA-T3 at the end of the search. We decided to add a tool that allows us to
extend the length of the whole path the point traverses from its original position when it migrates at
the beginning of the algorithm, and to cut this path at the end of the run. Originally, we proposed a
number equal to c × (1 − FEs / MaxFEs), where c is a fixed constant; it decreases from the beginning to the end of the algorithm run.
But such a number without any randomness may, of course, produce behavior which is not suitable for
all solved problems. So, we multiply it by a random number rand() between 0 and 1 and get the
final factor g from equation (3).</p>
      <p>g = c × rand() × (1 − FEs / MaxFEs).   (3)</p>
      <p>Algorithm 1: Pseudocode of the SOMA-T3 Algorithm
1: Initialize the problem dimension D, the population P of size NP (here NP = 100), the maximum number
of function evaluations MaxFEs, the domain (search space x_i ∈ [lb_i, ub_i] for each dimension i ∈
{1, …, D}) and the parameters m, n, k, where:
1. m represents the number of individuals selected randomly from population P; these m individuals
make population M (here m = 10),
2. n indicates the number of individuals selected from the population M according to the optimized
function value to make population N for migration (here n = 5),
3. k is the number of individuals randomly chosen from the population P to serve as potential
leaders during migration (here k = 15).
2: Initialize Njumps, which is the number of positions of a migrant on its path towards the leader, here
Njumps = 45.
3: Evaluate the optimized function in all members of P; the best point of P is the current
solution of the problem.
4: while FEs &lt;= MaxFEs do
5:   M = ∅
6:   M ← random selection (with uniform distribution)
     of m migrant candidates from P
7:   N ← n best migrants from M
     according to the optimized function value
8:   for each Mig from N do
9:     K ← random selection (with uniform
       distribution) of k leader candidates from P
10:    Leader ← best from K
       according to the optimized function value
11:    if Leader == Mig then
12:      Leader ← second best from K
         according to the optimized function value
13:    Compute PRT according to (1)
       and Step according to (2)
14:    Generate the PRTVector using PRT
15:    Compute the Njumps candidate positions of Mig according to (4) and evaluate them
16:    Move Mig to the best of these positions if it is better than the original position of Mig
17:  Update the best solution found so far
18: return the best solution found</p>
      <p>Then, when the Njumps new positions of a migrant are computed, the vector which was originally
added to the migrant in SOMA-T3 to get the new positions, see equation (4), is multiplied by g, see
equation (5).</p>
      <p>Mig_new = Mig + (Leader − Mig) × j × Step × PRTVector,   (4)
Mig_new = Mig + (Leader − Mig) × j × Step × PRTVector × g.   (5)
In both equations, (4) and (5), j is each member of the set {1, 2, …, Njumps}.</p>
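      <p>One migrant's candidate positions under rule (5) might be sketched as follows (NumPy; the constant c and all names are illustrative choices):</p>
      <preformat>
```python
import numpy as np

def rl_jump_positions(mig, leader, step, n_jumps, prt_vec,
                      fes, max_fes, c, rng):
    # Eq. (3): g shrinks (on average) as the run progresses; rand()
    # adds the randomness discussed in the text.
    g = c * rng.random() * (1.0 - fes / max_fes)
    # Eq. (5): the SOMA-T3 jump vector of eq. (4) is scaled by g.
    return [mig + (leader - mig) * j * step * prt_vec * g
            for j in range(1, n_jumps + 1)]

# Late in the run (fes == max_fes) the factor g is 0, so every
# candidate coincides with the migrant's original position.
pos = rl_jump_positions(np.zeros(2), np.ones(2), 0.11, 3, np.ones(2),
                        fes=1000, max_fes=1000, c=3.0,
                        rng=np.random.default_rng(0))
print(pos[0])  # [0. 0.]
```
      </preformat>
      <p>Early in the run, g can exceed 1 and extend the path; near the end, g approaches 0 and cuts it.</p>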
      <p>The SOMA-T3 algorithm with the tool just described implemented in it is called SOMA-T3-RL in the
following (RL stands for random length).</p>
      <p>Our second modification of the SOMA-T3 algorithm deals with another possibility of maintaining
population diversity. In the original algorithm, the authors guarantee a sufficient degree of diversity of
the population by working with only a part of the population in each migration. Our approach relies on a
smaller population and a refreshment of some points when the search process stagnates.</p>
      <p>We proposed changing randomly selected points of the population (but not several of the best ones)
to random positions in the search space when the solution of the problem does not improve over several
consecutive migrations. When the diversity of the population is maintained in this way, it is
possible to work with a smaller population. This is also suitable because the smaller population saves
optimized function evaluations, compensating for the evaluations spent on the members whose positions are changed when it is needed.</p>
      <p>So, we reduced the population size to less than about two-thirds of the original and introduced three new parameters,
threshold, nchanged, and nbest. We store the last two solutions (of the solved problem; the solution is the
best point of the population at a given time) and detect whether the last solution is equal to
the penultimate one. If they are equal for more than threshold times, we change the positions of nchanged
randomly selected points of the population, but not the nbest best points. This modification of the
SOMA-T3 algorithm is called SOMA-T3-DIV in the following.</p>
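      <p>The stagnation check and refresh can be sketched as follows (NumPy; threshold, nchanged, and nbest are the labels used above, and the function name is illustrative):</p>
      <preformat>
```python
import numpy as np

def refresh_if_stagnating(pop, fitness, best_history,
                          threshold, nchanged, nbest,
                          lower, upper, rng):
    # If the best-so-far value was identical over more than `threshold`
    # consecutive migrations, re-randomize `nchanged` randomly selected
    # points, never touching the `nbest` best ones (minimization).
    if len(best_history) > threshold and all(
            h == best_history[-1] for h in best_history[-threshold:]):
        order = np.argsort(fitness)          # best points first
        candidates = order[nbest:]           # the nbest best are protected
        chosen = rng.choice(candidates, size=nchanged, replace=False)
        dim = pop.shape[1]
        pop[chosen] = lower + rng.random((nchanged, dim)) * (upper - lower)
        return True
    return False
```
      </preformat>
      <p>In the full algorithm this check would run once per migration, and the refreshed points would also need re-evaluation of the optimized function.</p>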
      <p>The third modification, called SOMA-T3-RL-DIV, merges the approaches of the first two variants
into a single algorithm. It incorporates the random path length mechanism of SOMA-T3-RL, which
adjusts the migration distance during the run, and the population refresh strategy of SOMA-T3-DIV,
which replaces selected individuals when stagnation is detected. This combination allows the algorithm
to use both strategies at the same time and potentially benefit from their joint effect.</p>
    </sec>
    <sec id="sec-6">
      <title>5. Experiments</title>
      <p>We tested four algorithms: the original SOMA-T3 algorithm and the three modified versions described
above, SOMA-T3-RL and SOMA-T3-DIV, as well as the combination of both modified variants,
SOMA-T3-RL-DIV. In this initial stage of our experiments, we adopted the same parameter settings as used in
SOMA-T3 to ensure a consistent baseline for comparison. For SOMA-T3-DIV and SOMA-T3-RL-DIV, we
reduced the population size to less than about two-thirds of the original. This reduction is feasible because population
diversity is maintained by randomly changing the positions of selected points while keeping the best
points unchanged. As a result, with the same number of function evaluations, the search can in
some sense be longer and can obtain better results.</p>
      <p>The evaluation is carried out on the CEC2014 benchmark set [9], which consists of 30 functions
designed for the competition of single-objective optimization algorithms with real parameters. These
functions include a mix of unimodal (f1-f3), simple multimodal (f4-f16), hybrid (f17-f22), and
composition (f23-f30) types and offer varying levels of difficulty and complexity.</p>
      <p>Each algorithm was run 51 times for each configuration, with MaxFEs = D × 10^4. In total, this
resulted in 4 × 2 × 30 × 51 runs (count of algorithms × count of dimensions × count of functions × count of
runs), that is, 12,240 runs overall. The count of runs for a combination (algorithm, dimension, function)
was chosen according to [9], and the dimensions 10 and 30 are also prescribed in it; in the future, we
plan to test dimensions 50 and 100 as well.</p>
      <p>All functions share the same search space, [−100, 100], for every dimension. The global minimum
values range from 100 to 3000: f1 has a minimum of 100, f2 of 200, f3 of 300, and so on, up to f30
with a minimum of 3000. This makes computing the difference between an algorithm's solution and
the true minimum straightforward. Therefore, in the results below, we report only this difference.</p>
      <p>All four algorithms tested have several parameters. Some of these parameters are set the same across
all algorithms: m, n, k, and Njumps, with values m = 10, n = 5, k = 15, and Njumps = 45, following the
original SOMA-T3 settings. Other parameters, such as threshold, nchanged, and nbest, are specific only
to SOMA-T3-DIV and SOMA-T3-RL-DIV. The parameter NP differs for some of the algorithms tested.
The specific settings of the last mentioned parameters are listed in Table 1. In this table, SOMA-T3-RL,
SOMA-T3-DIV, and SOMA-T3-RL-DIV are marked as RL, DIV, and RL-DIV, respectively.</p>
      <p>The testing was done on a MacBook Pro 14” with an Apple M3 Pro chip (12-core CPU, 18-core GPU)
and 36GB of RAM. As a basis, we used the SOMA_T3A-python implementation (version 2), available on
GitHub [10], modified to support the CEC2014 benchmark functions – together with Python version
3.13.2, NumPy version 2.3.1, and the Opfunu [11] library (with CEC2014 functions) version 1.0.4.</p>
      <p>In summary, we tested the three modified versions of the algorithm by running each of them 51
times on the 30 CEC2014 benchmark functions, using the implementation provided by the Opfunu
library (version 1.0.4) [11]. The same procedure was applied to the original unmodified algorithm,
SOMA-T3. Moreover, all of these experiments were performed for both dimensions D = 10 and
D = 30, always using the parameters described above.</p>
    </sec>
    <sec id="sec-7">
      <title>6. Results</title>
      <p>This section provides a detailed presentation and interpretation of the results obtained in our experiments
performed on the 30 CEC2014 benchmark functions in dimensions 10 and 30. All results obtained were
corrected as prescribed in [9]: if a result was less than 10^−8, it was substituted by 0.</p>
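      <p>This correction rule can be expressed as a tiny helper (the naming is ours; the threshold follows [9]):</p>
      <preformat>
```python
def corrected_error(result, optimum):
    # Report the difference from the known optimum; differences
    # below 1e-8 are treated as zero, as prescribed in [9].
    err = result - optimum
    return 0.0 if 1e-8 > err else err

print(corrected_error(100.000000001, 100.0))  # 0.0  (below the threshold)
print(corrected_error(150.0, 100.0))          # 50.0
```
      </preformat>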
      <p>Some statistics calculated from the results of our experiments are presented in Tables 2, 3, 4, and 5. In
these tables, SOMA-T3-RL, SOMA-T3-DIV, and SOMA-T3-RL-DIV are also marked as RL, DIV, and
RL-DIV, respectively. Tables 2 and 3 summarize the comparison of each proposed algorithm (SOMA-T3-RL,
SOMA-T3-DIV, SOMA-T3-RL-DIV) with the original algorithm SOMA-T3 in dimension D = 10.
Tables 4 and 5 show the summary of the same comparison in dimension D = 30. For each combination of
dimension, function, and algorithm, we computed the minimum and median of the 51 results obtained.
All minimums are shown in Tables 2 and 4; all medians are shown in Tables 3 and 5.</p>
      <p>In these four tables, the better statistics are underlined. We always compare a statistic (minimum, median)
of a SOMA-T3 modification (SOMA-T3-RL, SOMA-T3-DIV, SOMA-T3-RL-DIV) with the relevant statistic
of the original SOMA-T3 algorithm; that is the reason why the underlined numbers appear only in the
last three columns. The last line in each of these four tables summarizes the count of wins and the count of
losses for each of the algorithms (SOMA-T3-RL, SOMA-T3-DIV, SOMA-T3-RL-DIV) when comparing
the given statistic in the given dimension.</p>
      <p>When we look at Table 2 (the numbers in the table are minimums in dimension D = 10), we can
see that the SOMA-T3-DIV and SOMA-T3-RL-DIV algorithms have more wins (20, 21) than losses (7, 6).
On the other hand, the SOMA-T3-RL algorithm fares differently in comparison with SOMA-T3
in this dimension: the count of wins is 7 and the count of losses is 22. When we discuss the medians
in dimension D = 10 (see Table 3), the results of the comparisons are very similar. SOMA-T3-DIV
and SOMA-T3-RL-DIV have more wins than losses, while the opposite is true for SOMA-T3-RL. When
comparing the minimums (see Table 4) and medians (see Table 5) of the results in dimension D = 30,
the counts of wins and losses for the algorithms tested differ only slightly. They are very similar to the
results in the tables for dimension D = 10.</p>
      <p>Without additional statistics, we can conclude that the proposed modifications SOMA-T3-DIV (which
uses a tool for preserving population diversity that differs from the one used by the original algorithm)
and SOMA-T3-RL-DIV (an algorithm in which we implemented both proposed tools) of the SOMA-T3
algorithm are sufficiently effective. SOMA-T3-RL does not appear to be as effective as expected. Here
we can observe the difference in the improvement of effectiveness when randomness is restricted to a
single direction, and when it is not restricted and diversity is supported by points across the whole
search space.</p>
      <p>In order to know the true difference in the effectiveness of the original algorithm and the proposed
algorithms, we computed 180 Wilcoxon rank-sum statistical tests. All statistical tests were performed at
the significance level of 0.05. The results of the statistical tests performed are shown in Table 6. In this
table, RL indicates SOMA-T3-RL, DIV is SOMA-T3-DIV, and the label RL-DIV means SOMA-T3-RL-DIV.
When + appears in the table, it means that the respective modified algorithm is statistically better than
the original, − means that the original algorithm works statistically better than the modified one, and ≈
means that both compared algorithms work with statistically the same effectiveness.</p>
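      <p>One comparison cell of such a table can be reproduced as follows (a sketch using scipy.stats.ranksums on synthetic data; the helper name and the synthetic results are illustrative, not our measured values):</p>
      <preformat>
```python
import numpy as np
from scipy.stats import ranksums

def compare_runs(modified, original, alpha=0.05):
    # '+' : modified statistically better (smaller errors, minimization),
    # '-' : original statistically better, '≈' : no significant difference.
    stat, p = ranksums(modified, original)
    if p > alpha:
        return "≈"
    return "+" if np.median(original) > np.median(modified) else "-"

rng = np.random.default_rng(3)
orig_errors = rng.normal(1.0, 0.1, 51)   # 51 synthetic final errors
mod_errors = rng.normal(0.5, 0.1, 51)    # clearly smaller synthetic errors
print(compare_runs(mod_errors, orig_errors))  # +
```
      </preformat>
      <p>Running such a comparison for each function, dimension, and modification yields the 180 tests mentioned above.</p>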
      <p>When we compare the original algorithm SOMA-T3 and SOMA-T3-RL in dimension D = 10, we
find that the results of SOMA-T3-RL are statistically better than the results of SOMA-T3 for only 2 of the
functions tested, while the results of SOMA-T3-RL are statistically worse than the results of SOMA-T3 for
10 functions of the benchmark set. So, the conclusion should be that SOMA-T3 is better than SOMA-T3-RL
in D = 10. In dimension D = 30, the situation is very similar: SOMA-T3 is statistically better than
SOMA-T3-RL 13 times and SOMA-T3-RL is statistically better than SOMA-T3 6 times. The following
conclusion should be drawn: the SOMA-T3 algorithm is more effective than the SOMA-T3-RL algorithm
also in dimension D = 30.</p>
      <p>When we look at the columns for the comparison of SOMA-T3 and SOMA-T3-DIV, we can see that
the numbers at the bottom of the columns are different. In dimension D = 10, the SOMA-T3-DIV
algorithm is better than the SOMA-T3 algorithm for 12 functions. On the other hand, the SOMA-T3
algorithm is more effective than the SOMA-T3-DIV algorithm for only 5 functions from the benchmark
set. A similar situation appears for dimension D = 30: SOMA-T3-DIV is better than SOMA-T3 for
16 functions and SOMA-T3 wins only on 6 of the functions tested. Looking at the last two columns of
the table, we can conclude that the combined use of both proposed tools does not yield significantly
different results compared to the implementation of only the DIV mechanism. SOMA-T3-RL-DIV
wins 17 times in dimension D = 10 and 12 times in dimension D = 30. On the other hand, the last
comparison looks as follows: SOMA-T3 wins 4 times and 6 times in dimensions D = 10 and D = 30,
respectively.</p>
      <p>When we summarize the comparison of SOMA-T3 and SOMA-T3-RL, the share of functions where
SOMA-T3-RL is statistically better than the original algorithm is equal to or less than 20 % in each
tested dimension. However, when we consider the two other comparisons, SOMA-T3-DIV and
SOMA-T3-RL-DIV, we find that SOMA-T3-DIV and SOMA-T3-RL-DIV are more effective than
the original SOMA-T3 algorithm for more than half of the functions in a given tested dimension:
SOMA-T3-DIV in D = 30 and SOMA-T3-RL-DIV in D = 10. In the other dimension, they are both
better than SOMA-T3 in more than 33 % of the tested functions. Overall, in both dimensions together,
SOMA-T3-DIV and SOMA-T3-RL-DIV are worse than the original SOMA-T3 algorithm only in
about 16 % of the benchmark optimization problems. The results of these two algorithms are of similar
or better effectiveness than the original SOMA-T3 in the rest of the benchmark problems, that is, in more
than 80 % of the tested problems.</p>
    </sec>
    <sec id="sec-8">
      <title>7. Conclusion</title>
      <p>In this article, we focus on the SOMA optimization algorithm, in particular the SOMA-T3 variant,
which we modified to improve optimization performance. Three algorithms based on SOMA-T3 were
proposed: SOMA-T3-RL, SOMA-T3-DIV, and a combination of both, SOMA-T3-RL-DIV.</p>
      <p>The results show that SOMA-T3-RL (with random length of the migration vector) does not bring
significant improvements and in some cases leads to a slight decrease in performance compared to
the original algorithm. However, its performance remains largely comparable to that of the original
SOMA-T3, suggesting that this modification does not introduce instability or consistent deterioration.</p>
      <p>In contrast, SOMA-T3-DIV (with a mechanism for maintaining population diversity) demonstrates
stronger and more robust improvements in both tested dimensions. The combined variant,
SOMA-T3-RL-DIV, also achieves competitive results, reaching a similar or even better balance of improvements
compared to SOMA-T3-DIV.</p>
      <p>In general, the proposed modifications confirm that the most effective strategy is to enhance
population diversity (DIV). The combined variant SOMA-T3-RL-DIV provides results similar to SOMA-T3-DIV
and in some cases is even slightly better, showing that the random length mechanism can bring a small
additional benefit when used in combination with population diversity maintenance.</p>
      <p>However, space for further improvement remains, suggesting that additional modifications or combinations of the proposed mechanisms are worth investigating in future work.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>This work was supported by the Silesian University in Opava under the Student Funding Plan, project
SGS/9/2024.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R.</given-names>
            <surname>Poláková</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Valenta</surname>
          </string-name>
          , jSO and GWO Algorithms Optimize Together, in Conference Information Technologies - Applications and Theory, Slovakia,
          <year>2022</year>
          , pp.
          <fpage>139</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Zelinka</surname>
          </string-name>
          ,
          <article-title>SOMA-self-organizing migrating algorithm</article-title>
          , in New Optimization Techniques in Engineering, Springer, Berlin, Heidelberg,
          <year>2004</year>
          , pp.
          <fpage>167</fpage>
          -
          <lpage>217</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>I.</given-names>
            <surname>Zelinka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lampinen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Nolle</surname>
          </string-name>
          ,
          <article-title>SOMA-self-organizing migrating algorithm</article-title>
          ,
          <source>in Mendel 6th International Conference on Soft Computing</source>
          , Brno, Czech Republic, pp
          <fpage>167</fpage>
          -
          <lpage>217</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Zelinka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lampinen</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L.</given-names>
            <surname>Nolle</surname>
          </string-name>
          ,
          <article-title>SOMA: self-organizing migration algorithm</article-title>
          ,
          <source>in Proc. of the 13th International Conference on Process Control'01</source>
          ,
          <year>2001</year>
          , pp.
          <fpage>100</fpage>
          -
          <lpage>105</lpage>
          . ISBN 8022715425.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L.</given-names>
            <surname>Škanderová</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Fabian</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Zelinka</surname>
          </string-name>
          ,
          <article-title>Self-adapting self-organizing migration algorithm</article-title>
          ,
          <source>SWEVO</source>
          , vol.
          <volume>51</volume>
          ,
          <year>2019</year>
          . ISSN 2210-6510. doi:10.1016/j.swevo.2019.100593.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Škanderová</surname>
          </string-name>
          ,
          <article-title>Self-organizing migrating algorithm: review, improvements and comparison</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          , vol.
          <volume>56</volume>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>172</lpage>
          ,
          <year>2023</year>
          . doi:10.1007/s10462-022-10167-8.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Q. B.</given-names>
            <surname>Diep</surname>
          </string-name>
          ,
          <article-title>Self-Organizing Migrating Algorithm Team To Team Adaptive - SOMA T3A</article-title>
          ,
          <source>in Proc. 2019 IEEE Congress on Evolutionary Computation (CEC)</source>
          , Wellington, New Zealand,
          <year>2019</year>
          , pp.
          <fpage>1182</fpage>
          -
          <lpage>1187</lpage>
          . doi: 10.1109/CEC.2019.8790202.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Volná</surname>
          </string-name>
          ,
          <article-title>Neuronové sítě a genetické algoritmy [Neural networks and genetic algorithms]</article-title>
          .
          <source>Ostrava: Ostravská univerzita v Ostravě</source>
          ,
          <year>1998</year>
          . ISBN 80-7042-762-0.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. Y.</given-names>
            <surname>Qu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P. N.</given-names>
            <surname>Suganthan</surname>
          </string-name>
          ,
          <article-title>Problem definitions and evaluation criteria for the CEC 2014 special session and competition on single objective real-parameter numerical optimization</article-title>
          , Zhengzhou University, China, and Nanyang Technological University, Singapore,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] diepquocbao,
          <source>SOMA-T3A-Python</source>
          , GitHub repository,
          <year>2022</year>
          . [Online]. Available: https://github.com/diepquocbao/SOMA-T3A-Python
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N. V.</given-names>
            <surname>Thieu</surname>
          </string-name>
          ,
          <article-title>Opfunu: An Open-source Python Library for Optimization Benchmark Functions</article-title>
          ,
          <source>Journal of Open Research Software</source>
          , vol.
          <volume>12</volume>
          ,
          <year>2024</year>
          . doi: 10.5334/jors.508.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Wolpert</surname>
          </string-name>
          and
          <string-name>
            <given-names>W. G.</given-names>
            <surname>Macready</surname>
          </string-name>
          ,
          <article-title>No Free Lunch Theorems for Optimization</article-title>
          ,
          <source>IEEE Transactions on Evolutionary Computation</source>
          , vol.
          <volume>1</volume>
          , no.
          <issue>1</issue>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>82</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Senkerik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kadavy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Janku</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pluhacek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guzowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pekař</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Matušů</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Viktorin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Smołka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Byrski</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Z.</given-names>
            <surname>Oplatkova</surname>
          </string-name>
          ,
          <article-title>Maximizing Efficiency: A Comparative Study of SOMA Algorithm Variants and Constraint Handling Methods for Time Delay System Optimization</article-title>
          ,
          <year>2023</year>
          , pp.
          <fpage>1821</fpage>
          -
          <lpage>1829</lpage>
          . doi: 10.1145/3583133.3596417.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Kadavy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Viktorin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pluhacek</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kovář</surname>
          </string-name>
          ,
          <article-title>On modifications towards improvement of the exploitation phase for SOMA algorithm with clustering-aided migration and adaptive perturbation vector control</article-title>
          ,
          <source>in Proc. 2021 IEEE Symposium Series on Computational Intelligence (SSCI)</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi: 10.1109/SSCI50451.2021.9659916.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Matusikova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pluhacek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kadavy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Viktorin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Senkerik</surname>
          </string-name>
          ,
          <article-title>Exploring adaptive components of SOMA</article-title>
          ,
          <source>in Proc. 2023 Genetic and Evolutionary Computation Conference Companion (GECCO)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1803</fpage>
          -
          <lpage>1811</lpage>
          . doi: 10.1145/3583133.3596421.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Škanderová</surname>
          </string-name>
          ,
          <article-title>Self-organizing migrating algorithm: review, improvements and comparison</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          , vol.
          <volume>56</volume>
          , pp.
          <fpage>101</fpage>
          -
          <lpage>172</lpage>
          ,
          <year>2023</year>
          . doi: 10.1007/s10462-022-10167-8.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>