<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>X (D. Uzlov);</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Handling outliers in swarm algorithms: a review</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Dmytro Uzlov</string-name>
          <email>dmytro.uzlov@karazin.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yehor Havryliuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan Hushchyn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Volodymyr Strukov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergiy Yakovlev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>V.N. Karazin Kharkiv National University</institution>
          <addr-line>4 Svobody Sq., Kharkiv, 61022</addr-line>
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Swarm optimization algorithms, inspired by the collective behavior of biological swarms, are a promising tool for optimizing complex systems where traditional methods are often ineffective. However, outliers can significantly disrupt the search for an optimal solution. The study of methods for detecting and handling outliers in swarm algorithms, such as particle swarm optimization (PSO), is therefore an urgent task with significant potential to improve the efficiency and reliability of these algorithms in practical applications such as drone control systems, financial systems, and environmental monitoring and modeling systems. This article examines the problem of outliers in swarm optimization algorithms, taking PSO as an example. It surveys existing approaches to managing outliers, including adaptive methods, methods using swarm topologies, and hybrid algorithms, and analyzes the advantages and disadvantages of each approach. Particular attention is paid to promising new directions, such as combining neural networks and reinforcement learning with swarm methods to develop more efficient and adaptive swarm algorithms. The article is aimed at researchers and practitioners in the field of optimization who are interested in improving the efficiency and reliability of swarm algorithms.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Swarm optimization algorithms, inspired by biological swarms, are crucial for solving complex
problems in fields like engineering, economics, and medicine. Despite their power, these algorithms
often suffer from early convergence, where particles get stuck in local optima due to outliers. This
article reviews methods for detecting and managing outliers in swarm systems, using PSO as an
example, and analyzes adaptive methods, swarm topologies, and hybrid algorithms. It also explores
emerging solutions like neural networks and reinforcement learning hybridization, which could
enhance swarm algorithms' ability to avoid local optima and find global solutions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], the authors emphasize that an "outlier is strange data values that stand out from datasets".
From this definition, outliers in swarm systems can be represented as particles in the swarm that do
not follow the expected swarm behavior, such as particles that move much faster or slower than
other particles in the swarm. Such particles have all the characteristics inherent in the standard
definition of an outlier: they can deviate significantly from the swarm trajectory, interfere with other
particles, and prevent them from moving toward the optimal solution. This leads to slower swarm
convergence or suboptimal results.
      </p>
      <p>
        The issue of outliers in swarm algorithms is underexplored but crucial, as handling them
effectively could greatly enhance swarm convergence, benefiting many modern applications. Swarm
systems, such as drone control systems, are widely used in a variety of areas, including transportation systems [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
search and rescue operations [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], and other industries. In such systems, rogue drones can not only
slow down the convergence of the swarm, but also lead to a loss of control over individual drones,
which can cause safety hazards and involve damage to property or people.
      </p>
      <p>
        In financial systems where swarm systems such as PSO are used to optimize investment
portfolios, outliers can cause unpredictable fluctuations in portfolio value [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. These fluctuations can
negatively affect the stability and predictability of investments, creating additional risks for financial
markets.
      </p>
      <p>
        Outliers can have severe consequences in control systems [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], leading to unpredictable behavior
and potential damage. In environmental models, they can cause inaccurate predictions. There is no
universal solution for handling outliers in swarm systems, as hyperparameters in swarm algorithms
greatly impact their efficiency and stability. Tailoring strategies to specific problems and exploring
alternative outlier control methods are key research priorities. Developing effective detection and
management techniques for outliers in swarm systems is crucial and requires innovative approaches.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. State of the art</title>
      <p>The outlier problem, though well-known in statistics, is especially crucial in swarm algorithms.
Viewing swarm particles as data points frames the outlier issue as a statistical anomaly detection
problem, where outliers signal an imbalance between exploration and exploitation. Traditional
methods, like removing outliers [6], are not always viable, especially in applications like drone
systems where losing a unit is not acceptable. Thus, exploring alternative outlier management strategies is essential.</p>
      <p>Recent studies have proposed a wide range of variations of the PSO algorithm aimed at solving
various problems and improving the basic algorithm. These variations include adaptive approaches,
where hyperparameters dynamically change during the optimization process, considering the
current state of the swarm and the characteristics of the problem. Researchers also consider methods
that utilize swarm topologies (in a standard PSO, one of the topologies shown in Figure 1 is often
used) [7], which affect the information exchange between particles, allowing for more efficient
exploration of the solution space and avoiding early convergence. In addition, hybrid algorithms
combining PSO with other optimization methods, such as genetic algorithms or bee swarm methods,
are actively being investigated to combine the advantages of different approaches [7, 8].</p>
      <p>The research papers [7, 8] provide an overview of such modern variations of PSO, their operating
principles, and their purpose. The further analysis of these variations presented in this paper allows
a deeper understanding of their advantages and disadvantages, and helps determine the optimal
direction for future research on the outlier problem, not only in PSO but in the domain of swarm
algorithms in general, toward a solution that will increase the efficiency and reliability of swarm
algorithms in various practical problems.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Managing outliers in swarm algorithms</title>
      <sec id="sec-4-1">
        <title>4.1. Outliers causes</title>
        <p>Outliers in swarm systems, as in any other systems where they occur, are stochastic in nature. We
propose to study outliers in swarm systems using one of the most common and simplest
implementations of swarm intelligence algorithms for function optimization, PSO [9], as an example.
Determining PSO hyperparameters involves dealing with outlier particles that deviate from the
swarm's general trend, making PSO well suited for studying outliers in swarm algorithms.</p>
        <p>Among the main reasons for the occurrence of outliers in PSO algorithms are the following:
1. Initialization: some particles may start moving with initial values of position and velocity
that are far from optimal
2. Particle divergence: particles explore the search space and may deviate from the swarm
3. Stochasticity: the algorithm is inherently random, potentially leading to some particles
having significantly different positions or velocities than the rest of the swarm
4. Inappropriate hyperparameters: the behavior of PSO is strongly influenced by the choice of
its hyperparameters</p>
        <p>The initial position and velocity of particles in the simplest implementation are determined
according to a uniform distribution in the search area. They also affect the speed of finding the
optimal solution by a swarm. There are effective methods for solving this problem [10, 11].</p>
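        <p>For reference, the simplest PSO loop with uniform initialization of positions and velocities can be sketched as follows. The sphere objective, search bounds, and the particular hyperparameter values are illustrative assumptions, not taken from the cited works:</p>

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):
    """Objective: f(x) = sum(x_i^2), global minimum 0 at the origin."""
    return float(np.sum(x * x))

# Canonical hyperparameters: inertia weight w and the cognitive/social
# acceleration coefficients c1, c2 (values chosen for illustration).
n, dim, iters = 20, 2, 200
w, c1, c2 = 0.72, 1.49, 1.49
lo, hi = -5.0, 5.0

# Uniform initialization of positions and velocities over the search area,
# as in the simplest implementation described in the text.
pos = rng.uniform(lo, hi, (n, dim))
vel = rng.uniform(-1.0, 1.0, (n, dim))
pbest = pos.copy()
pbest_val = np.array([sphere(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    # Velocity update: inertia + attraction to personal and global bests.
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([sphere(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(sphere(gbest))  # should be close to 0
```

        <p>Every cause listed above is visible in this loop: the uniform draw fixes the starting spread, the random factors r1 and r2 inject stochasticity, and w, c1, c2 decide how far individual particles can stray from the swarm trend.</p>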
        <p>Such causes of outliers as particle divergence, stochasticity, and inappropriate hyperparameters
are related to the choice of parameters such as the inertia weight and acceleration coefficients [9];
these govern the degree of exploration and exploitation of particles and thereby change the behavior
of agents. Choosing such parameters is usually a separate task when implementing PSO for a given
objective function. For a static objective function, suitable parameters can be selected in advance;
when the objective function changes dynamically, this approach becomes suboptimal, because each
change can lead to unexpected results under a fixed choice of initial parameters. To address the
problem of swarm outliers, and to avoid a hard trade-off between the exploratory and exploitative
behavior of a particle, many variations of PSO were developed. One of them is adaptive PSO.</p>
        <p>Adaptive PSO (APSO) [12] has better search efficiency than the standard PSO, can perform a
global search over the entire search space with a higher convergence rate, and can automatically
control inertia weights, acceleration factors, and other algorithmic parameters at runtime, thereby
improving search efficiency and performance simultaneously. In addition, this algorithm can
influence a single swarm particle with the global best found value to "force" it out of the likely local
optimum.</p>
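        <p>The full evolutionary-state machinery of APSO [12] is beyond a short sketch, but the core idea of runtime control of the inertia weight can be illustrated with a simplified, hypothetical rule driven by the swarm's recent success rate. This toy rule is an assumption made here for illustration, not the actual scheme of [12]:</p>

```python
def adapt_inertia(w, success_rate, w_min=0.4, w_max=0.9):
    """Toy adaptive rule: keep exploring (higher w) while many particles
    still improve their personal bests, refine (lower w) when few do.
    Illustrative only; not the evolutionary-state scheme of APSO."""
    target = w_min + (w_max - w_min) * success_rate
    return w + 0.2 * (target - w)  # smooth 20% step toward the target

# Example: a swarm where most particles still improve keeps a high inertia
# weight, while a stagnating swarm drifts toward the exploitative minimum.
w_explore = adapt_inertia(0.7, success_rate=0.9)
w_exploit = adapt_inertia(0.7, success_rate=0.1)
```

        <p>The appeal of such runtime control is that the same mechanism that speeds up convergence also reins in outlier particles: when the swarm stagnates, the shrinking inertia weight limits how far a divergent particle can travel per step.</p>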
        <p>Like any other variation mentioned in this article, APSO has its own set of limitations and
potential problems. The most common ones include [12]:
• Over-adaptation
• Complexity
• Convergence
• Performance</p>
        <p>Although APSO is widely utilized among PSO variants, it may not represent the optimal
approach for all application scenarios.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. An overview of existing PSO variations</title>
        <p>APSO is by far the most common modification of the standard algorithm, being more flexible and
more versatile, but it does not solve all problems and creates new disadvantages. We will consider
other modifications of PSO further.</p>
        <p>Meetu Jain, Vibha Saihjpal, Narinder Singh, and Satya Bir Singh note the following
variations of the PSO algorithm in their work, each aimed at overcoming a specific limitation of
standard PSO [8]:
1. Fuzzy adaptive PSO algorithm – improves PSO optimization capabilities
2. Homogeneous particle swarm optimizer (HPSO) – a modified version for solving
multiobjective optimization problems
3. Hybrid PSO with ranking, selection and mean square error criterion (STPSO) – combines
PSO with statistical methods to solve stochastic optimization problems
4. Evolutionary modified PSO – improves search efficiency
5. Improved PSO algorithm (IPSO) – improves search efficiency
6. Fully informed particles in PSO – improves performance</p>
        <p>The authors of another paper [7], in addition to a general review of the PSO algorithm and its
principles, provide a brief overview of the most recent PSO review documents, as well as a list of
recent publications with PSO variations and their limitations. The main PSO variants the article
focuses on include the following:
1. Cooperative PSO – solving the outliers’ problem through particle cooperation
2. Multi-swarm PSO – improves exploration avoiding local optima convergence
3. Hybrid PSO – improves performance in dynamic object layout problems
4. Binary PSO – can optimize both continuous and discrete functions</p>
        <p>The article also provides an additional list of other PSO variations that are less popular or less
efficient. These variations of PSO are usually aimed at eliminating a specific drawback of the
standard algorithm, for example, the possibility of hitting a local optimum. But each of them also has
its own drawbacks. For example, adding new parameters to the algorithm increases the complexity
of the initial model setup and generally complicates the system with more hyperparameters, and, in
addition, requires additional computational costs [7].</p>
        <p>The PSO variations in [7, 8] show ongoing evolution and improvement for solving diverse
optimization problems, but their sheer number suggests that each offers a solution specialized to
particular problem types rather than a universal one.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Comparative analysis of the PSO modifications</title>
        <p>The analysis of recent publications on the topic [7, 8] shows that many researchers' efforts are
focused on the development and improvement of the PSO algorithm. As a result of intensive
scientific activity, a wide range of modifications and hybridizations of the basic PSO algorithm have
been proposed, each of which has its own strengths and weaknesses. Based on the analysis of [7, 8],
the authors of this article present a qualitative comparison of PSO variations (Table 1).</p>
        <p>While these PSO variations offer improvements over the standard PSO, there is limited evidence
of their effectiveness in real-world applications. Current literature indicates that no PSO variant is
universally optimal or improves PSO without introducing new constraints. Thus, selecting a specific
algorithm requires careful consideration of the problem's specifics, convergence speed, and solution
accuracy, necessitating further research and experimentation.</p>
        <table-wrap id="tbl1">
          <label>Table 1</label>
          <caption>
            <p>Qualitative comparison of PSO variations, based on [7, 8].</p>
          </caption>
          <table>
            <thead>
              <tr><th>Variation</th><th>Description</th><th>Limitations</th></tr>
            </thead>
            <tbody>
              <tr><td>Standard PSO</td><td>Simple and resource-efficient.</td><td>Particles can get stuck in local optima.</td></tr>
              <tr><td>Adaptive PSO (APSO)</td><td>Balances exploration and exploitation by dynamically adjusting the inertia weight, improving convergence speed and solution quality.</td><td>Increased complexity due to the dynamic inertia weight, which requires additional tuning compared to fixed-parameter PSO options.</td></tr>
              <tr><td>Cooperative PSO (CPSO)</td><td>Aimed at solving the problem of outliers through the cooperation of the particles.</td><td>Particles can get stuck in local optima.</td></tr>
              <tr><td>Gaussian PSO (GPSO)</td><td>Only requires specifying the number of particles before use.</td><td>Not suitable for tasks where setting specific parameters is critical for optimal performance.</td></tr>
              <tr><td>Concurrent PSO (CONPSO)</td><td>Improves convergence performance compared to the original PSO.</td><td>Increased computational complexity due to parallel operations.</td></tr>
              <tr><td>Binary PSO (BPSO)</td><td>Can optimize both continuous and discrete functions.</td><td>Not always as effective as specialized algorithms for specific discrete tasks.</td></tr>
              <tr><td>Bare-Bones PSO</td><td>Eliminates the velocity formula, making it simpler.</td><td>Not always as effective in complex, multidimensional search spaces where velocity control is important.</td></tr>
              <tr><td>Fully Informed PSO (FIPS)</td><td>Particles are influenced by all neighbors, not just the best one.</td><td>Increased computational costs due to the consideration of information from all neighbors.</td></tr>
              <tr><td>Binary PSO for Classification</td><td>Designed for classification tasks, showing promising results compared to machine learning methods.</td><td>Not suitable for other types of optimization problems.</td></tr>
              <tr><td>Fuzzy Adaptive PSO (FAPSO)</td><td>Uses a fuzzy system to adapt the inertia weight, improving convergence.</td><td>Requires customization of the fuzzy system and additional expertise.</td></tr>
              <tr><td>Guided PSO (GPSO)</td><td>Specially designed to recognize facial emotions; demonstrates promising accuracy.</td><td>Limited applicability to problem areas other than facial emotion detection.</td></tr>
              <tr><td>Self-Regulating PSO (SRPSO)</td><td>Includes human learning strategies to improve exploration and exploitation processes.</td><td>Requires careful adjustment of self-regulation mechanisms for optimal performance.</td></tr>
              <tr><td>Improved PSO (IPSO)</td><td>Solves the problems of slow convergence and limitations of the basic PSO when planning the trajectory of a mobile robot.</td><td>Tends to generalize poorly to other problem areas or to show unstable performance in different scenarios.</td></tr>
              <tr><td>Genotype-Phenotype Modified Binary PSO (GPMBPSO)</td><td>Designed to solve the Knapsack problem, offering improved performance compared to BPSO.</td><td>Increased complexity due to genotype-phenotype mapping, which may require additional computing resources.</td></tr>
              <tr><td>Modified Binary PSO (MBPSO)</td><td>Outperforms the original BPSO algorithm.</td><td>Not always suitable for other types of optimization problems; requires adaptation to a specific task.</td></tr>
              <tr><td>Hybrid PSO (HPSO)</td><td>Combines PSO with other algorithms (e.g., genetic algorithms) to improve performance on dynamic object layout problems.</td><td>Increased complexity due to hybridization, requiring more computation.</td></tr>
              <tr><td>STPSO (Stochastic PSO)</td><td>Hybridizes PSO with statistical methods to solve stochastic optimization problems.</td><td>Increased complexity due to hybridization, requiring more computation.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <sec id="sec-4-3-38">
          <p>Having examined this, the authors believe that it is advisable to consider ways of improving
the PSO algorithm other than the above approaches. One such alternative is to integrate PSO
with neural networks (NN), which would combine the advantages of both approaches. PSO can be
used to optimize the NN architecture, find optimal values of weights and thresholds, or find optimal
hyperparameters. In turn, NNs can be used to model complex nonlinear dependencies and improve
PSO's ability to find solutions.</p>
          <p>In addition, combining PSO with NNs can be especially useful in problems where many
parameters need to be considered or where the objective function is complex and multimodal. In
such cases, NNs can help PSO avoid local optima and find more accurate solutions (resolving outliers’
problem as well).</p>
          <p>The use of hybrid approaches that combine PSO with NN may grant new opportunities for solving
complex optimization problems and improving the efficiency of existing solutions. To confirm this
hypothesis, additional research and experiments are needed to assess the potential of this approach
and determine its advantages and disadvantages.</p>
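          <p>To make the hypothesized PSO-NN coupling concrete, one direction is to let PSO search a network's weight space directly instead of gradient descent. The sketch below is purely illustrative: the toy 2-2-1 network, the XOR objective, the bounds, and the hyperparameters are all assumptions made here, not a method proposed in the cited papers.</p>

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy dataset: XOR, a small multimodal fitting problem.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

def loss(w):
    """MSE of a tiny 2-2-1 network defined by the flat weight vector w."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(X @ W1 + b1)   # hidden layer, tanh activation
    out = h @ W2 + b2          # linear output
    return float(np.mean((out - y) ** 2))

# Standard PSO over the 9-dimensional weight space.
n, dim = 30, 9
pos = rng.uniform(-2, 2, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_val = pos.copy(), np.array([loss(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(300):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.72 * vel + 1.49 * r1 * (pbest - pos) + 1.49 * r2 * (gbest - pos)
    pos += vel
    vals = np.array([loss(p) for p in pos])
    better = vals < pbest_val
    pbest[better], pbest_val[better] = pos[better], vals[better]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(loss(gbest))
```

          <p>The swarm's population-based search is what gives it a chance on multimodal losses like this one; the open question raised in the text is how the NN side can, in turn, steer the swarm away from outliers and local optima.</p>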
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Deep learning models used over PSO</title>
      <p>Modifying and hybridizing PSO is not the only way to improve it. Paper [8] reviews how integrating
collective intelligence, like self-organization and swarm intelligence, can enhance deep learning. The
authors explore using these principles to address deep learning challenges, such as combining
cellular automata with neural networks for image processing and rethinking reinforcement learning
with self-organizing agents. The authors identify four main areas of deep learning that have begun
to incorporate the ideas of collective intelligence:
1. Image processing
2. Deep Reinforcement Learning (DRL)
3. Multi-agent learning
4. Meta learning</p>
      <p>Based on these studies, it can be assumed that introducing a reinforcement learning model
for outlier optimization in PSO is a promising direction for future research to overcome the
limitations of other algorithm modifications. For instance, one potential advantage of integrating
reinforcement learning with PSO is that introducing neural network models into swarm operation
may mitigate the problem of overfitting. Reinforcement learning algorithms are designed to learn
optimal policies that generalize well to new environments [13], so it is reasonable to expect such
integration to support effective adaptation to various optimization problems.</p>
      <p>Let us consider an example of a potential application of the hybrid PSO-DRL approach.
The GPT models, such as ChatGPT by OpenAI, which have become a modern breakthrough in the
field of artificial intelligence, use deep neural networks with many parameters, which makes their
training and tuning a complex and resource-intensive process [14]. Using PSO to optimize the GPT
model architecture can help find the optimal number of layers, neurons in each layer, and types of
connections between them. This will reduce the number of model parameters, speed up its training,
and improve its ability to generate text.</p>
      <p>In addition, PSO can be used to optimize hyperparameters like learning rate, data set size, and
regularization, balancing training speed and model accuracy. DRL can help model complex
relationships between parameters and performance, enhancing PSO's efficiency in finding optimal
solutions. Some studies, like [15], have explored combining these methods, showing that the
introduced parameter adaptation method based on reinforcement learning (RLAM) improves PSO's
convergence rate and outperforms other variants. However, RLAM increases computational
complexity, complicates implementation, and risks overfitting. Despite these challenges, combining
PSO and DRL could effectively optimize GPT models, improving performance, reducing resource
use, and speeding up development. It is also promising to combine the modified PSO algorithm with
reinforcement learning models. Such integration has the prerequisites for further improving the
convergence property of the modified algorithm. Optimal policies in reinforcement learning
algorithms are obtained by maximizing the reward signal, which can be used to control the search
process in an adaptive algorithm [16].</p>
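      <p>As a concrete, greatly simplified illustration of the reward-driven control described above (a stand-in sketch only, not the RLAM method of [15]), an epsilon-greedy bandit can select the inertia weight each iteration, rewarded by the improvement it yields; the candidate weights and the dummy reward model are assumptions made here:</p>

```python
import numpy as np

rng = np.random.default_rng(1)

# Epsilon-greedy bandit over candidate inertia weights: a minimal stand-in
# for RL-based parameter adaptation, rewarded by per-iteration improvement.
actions = [0.4, 0.6, 0.9]          # candidate inertia weights
q = np.zeros(len(actions))         # estimated value of each action
counts = np.zeros(len(actions))
eps = 0.2

def select_action():
    """Explore with probability eps, otherwise pick the best-valued action."""
    if rng.random() < eps:
        return int(rng.integers(len(actions)))
    return int(np.argmax(q))

def update(a, reward):
    """Incremental mean update of the chosen action's value estimate."""
    counts[a] += 1
    q[a] += (reward - q[a]) / counts[a]

# Dummy environment standing in for one PSO iteration: here, by assumption,
# lower inertia yields larger improvement of the global best value.
for _ in range(500):
    a = select_action()
    reward = (1.0 - actions[a]) + rng.normal(0, 0.05)
    update(a, reward)

print(actions[int(np.argmax(q))])
```

      <p>In a real hybrid, the reward would be the observed change in the swarm's global best per iteration, and the learned policy would replace hand-tuned parameter schedules.</p>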
      <p>However, in the context of such integration, it is also worth noting potential limitations:
• Additional complexity of initialization
• Setting additional hyperparameters
• Over-fitting of the RL model</p>
      <p>In addition, determining the best strategy for integrating the reinforcement learning algorithm
with PSO for a particular optimization problem, as well as finding another possible method for
combining the two algorithms that could reduce the number of algorithm limitations while
improving the performance of the standard PSO, requires additional research. The implementation
of this approach, as well as experimental confirmation or refutation of its advantages and
disadvantages, is the subject of the author's future research.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>Outliers in particle swarm optimization are a major challenge, influenced by factors like velocity,
position, and acceleration coefficients. Addressing their causes can improve convergence speed,
accuracy, stability, and reliability in complex search spaces. This article examines the causes,
concepts, and solutions to outlier issues in swarm optimization, using PSO as an example, focusing on
methods that enhance convergence and reduce outliers, including adaptive methods, swarm
topologies, and hybrid algorithms.</p>
      <p>For example, the adaptive particle swarm method balances exploration and fast convergence but
is more complex than standard PSO. Cooperative PSO aids particle cooperation but can get stuck in
local optima, while binary PSO handles both continuous and discrete parameters but may be less
efficient than specialized algorithms. It has been determined that none of the existing variations of
PSO is a universal solution; each comes with its own limitations.</p>
      <p>It has been proposed to use hybrid approaches combining PSO with neural networks and
reinforcement learning, which may grant new opportunities for solving complex optimization
problems and improving the efficiency of existing solutions. In contrast to purely algorithmic
solutions, the use of neural networks in combination with the particle swarm method (or its
variations) would be appropriate for obtaining a positive practical result in drone control systems
or financial systems, for which other algorithm variations are not optimal for one reason or another.</p>
      <p>Further research will be aimed at studying and experimentally confirming or refuting the
advantages and disadvantages of the proposed approach, as well as developing a new method for
effective detection and management of outliers in swarm systems, using PSO as an example.</p>
      <p>[6] T.W. Gress, J. Denvir, J.I. Shapiro. "Effect of removing outliers on statistical inference: implications to interpretation of experimental data in medical research." Marshall Journal of Medicine, vol. 4, issue 2.9. 2018. doi: 10.18590/mjm.</p>
      <p>[7] T. M. Shami, A. A. El-Saleh, M. Alswaitti, Q. Al-Tashi, M. A. Summakieh, S. Mirjalili. "Particle swarm optimization: A comprehensive survey." IEEE Access, vol. 10, pp. 10031-10061. 2022. doi: 10.1109/ACCESS.2022.3142859.</p>
      <p>[8] M. Jain, V. Saihjpal, N. Singh, S.B. Singh. "An overview of variants and advancements of PSO algorithm." Appl. Sci., vol. 12, no. 17: 8392. 2022. doi: 10.3390/app12178392.</p>
      <p>[9] J. Kennedy, R. Eberhart. "Particle swarm optimization." Proceedings of IEEE International Conference on Neural Networks, vol. IV, pp. 1942-1948. 1995. doi: 10.1109/ICNN.1995.488968.</p>
      <p>[10] X. Hu, R. Shonkwiler, M. Spruill. "Random Restarts in Global Optimization." Georgia Tech Library, Georgia Institute of Technology: School of Mathematics. 2009. URL: http://hdl.handle.net/1853/31310.</p>
      <p>[11] M. Barad. "Design of Experiments (DOE) — A Valuable Multi-Purpose Methodology." Applied Mathematics, vol. 5, no. 14, pp. 2120-2129. 2014. doi: 10.4236/am.2014.514206.</p>
      <p>[12] Z-H. Zhan, J. Zhang, Y. Li, H.S-H. Chung. "Adaptive Particle Swarm Optimization." IEEE Transactions on Systems, Man, and Cybernetics, vol. 39, no. 6, pp. 1362-1381. 2009. doi: 10.1109/TSMCB.2009.2015956.</p>
      <p>[13] J. Shuford. "Deep Reinforcement Learning: Unleashing the Power of AI in Decision-Making." Journal of Artificial Intelligence General Science JAIGS, vol. 1, issue 1. 2024. URL: https://www.researchgate.net/publication/378335647_ARTICLE_INFO_Deep_Reinforcement_Learning_Unleashing_the_Power_of_AI_in_Decision-Making.</p>
      <p>[14] A. Birhane, A. Kasirzadeh, D. Leslie. "Science in the age of large language models." Nature Reviews Physics, vol. 5, pp. 277-280. 2023. doi: 10.1038/s42254-023-00581-4.</p>
      <p>[15] S. Yin, M. Jin, H. Lu. "Reinforcement-learning-based parameter adaptation method for particle swarm optimization." Complex Intell. Syst., vol. 9, pp. 5585-5609. 2023. doi: 10.1007/s40747-023-01012-8.</p>
      <p>[16] L. P. Kaelbling, M. L. Littman, A. W. Moore. "Reinforcement Learning: A Survey." Journal of Artificial Intelligence Research, vol. 4, pp. 237-285. 1996. doi: 10.1613/jair.301.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Misinem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Bakar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. R.</given-names>
            <surname>Hamdan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Z. A.</given-names>
            <surname>Nazri</surname>
          </string-name>
          .
          <article-title>"A rough set outlier detection based on particle swarm optimization"</article-title>
          ,
          <source>10th international conference on intelligent systems design and applications</source>
          , Cairo, Egypt,
          <year>2010</year>
          . pp.
          <fpage>1021</fpage>
          -
          <lpage>1025</lpage>
          . doi:
          <volume>10</volume>
          .1109/ISDA.
          <year>2010</year>
          .
          <volume>5687054</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Schiano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. M.</given-names>
            <surname>Kornatowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cencetti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Floreano</surname>
          </string-name>
          .
          <article-title>"Reconfigurable drone system for transportation of parcels with variable mass and size."</article-title>
          <source>IEEE Robotics and Automation Letters</source>
          , vol.
          <volume>7</volume>
          , no.
          <issue>4</issue>
          , pp.
          <fpage>12150</fpage>
          -
          <lpage>12157</lpage>
          , Oct.
          <year>2022</year>
          . doi: 10.1109/LRA.2022.3208716.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Nathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rakesh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kurmi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Bimber</surname>
          </string-name>
          .
          <article-title>"Drone swarm strategy for the detection and tracking of occluded targets in complex environments."</article-title>
          <source>Communications Engineering</source>
          , vol.
          <volume>2</volume>
          ,
          <issue>55</issue>
          ,
          <year>2023</year>
          . doi: 10.1038/s44172-023-00104-0.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S. G.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Malan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. P.</given-names>
            <surname>Engelbrecht</surname>
          </string-name>
          .
          <article-title>"Carry trade portfolio optimization using particle swarm optimization."</article-title>
          <source>2014 IEEE Congress on Evolutionary Computation (CEC)</source>
          . Beijing, China, pp.
          <fpage>3051</fpage>
          -
          <lpage>3058</lpage>
          .
          <year>2014</year>
          . doi: 10.1109/CEC.2014.6900497.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hamidi</surname>
          </string-name>
          .
          <article-title>"Control system design using particle swarm optimization (PSO)."</article-title>
          <source>International Journal of Soft Computing and Engineering (IJSCE): Blue Eyes Intelligence Engineering &amp; Sciences Publication</source>
          , vol.
          <volume>1</volume>
          , issue 6, pp.
          <fpage>2231</fpage>
          -
          <lpage>2307</lpage>
          .
          <year>2012</year>
          . URL: https://www.ijsce.org/wpcontent/uploads/papers/v1i6/F0280111611.pdf.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>