<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Distribute load among concurrent servers</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Denys Bakhtiiarov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bohdan Chumachenko</string-name>
          <email>bohdan.chumachenko@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksandr Lavrynenko</string-name>
          <email>oleksandrlavrynenko@tks.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>CPITS-II 2024: Workshop on Cybersecurity Providing in Information and Telecommunication Systems II</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Aviation University</institution>
          ,
          <addr-line>1 Kosmonavta Komarova ave., 03058 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>State Scientific and Research Institute of Cybersecurity Technologies and Information Protection</institution>
          ,
          <addr-line>3 Maksym Zaliznyak, 03142 Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>260</fpage>
      <lpage>266</lpage>
      <abstract>
        <p>A technical implementation option for load balancing among concurrently operating application servers is proposed to mitigate the risk of overload amid substantial unpredictable fluctuations in the request flow to the application system and the variable processing durations on each application server. A structural-functional model for load balancing inside the server line of the application system is delineated, designed to operate under conditions where the incoming request flow from clients is random, unexpected, non-stationary, and pulsing. A scheme is proposed that shapes the flow of requests to the application server line, ensuring the alignment of the stationarity intervals of this flow with the intervals of discrete control for equalizing server load factors. A technological framework for load balancing on application servers is proposed, equalizing the load factors of the application system servers by redistributing, in real time, a portion of the incoming request traffic from more heavily loaded servers to those with lighter loads.</p>
      </abstract>
      <kwd-group>
        <kwd>request</kwd>
        <kwd>application</kwd>
        <kwd>server</kwd>
        <kwd>client</kwd>
        <kwd>load balancing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In practice, when utilizing computerized real-time
application systems like ‘client/server’ that permit remote
access for clients via the Internet, such as various interactive
help systems, the effectiveness is assessed by the value of
τs—the average service duration of each stream of customer
requests entering the application system input. A reduced
value indicates that the consumer is likely to receive a
response to their request more promptly [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. At low request
flow intensities, queues at the application system’s input are
virtually nonexistent, thereby making τs directly contingent
upon the performance of the server hardware hosting the
application software. Issues occur when the volume of
incoming requests is misaligned with the processing speed
of the server infrastructure, leading to the accumulation of
unprocessed requests, which in turn results in an
unacceptable increase in service request duration and, in
certain instances, the loss of some requests. Given the high
intensity of request flow in several applications, it is
essential to partition it in real-time into parallel
demultiplexed substreams and execute their concurrent
online processing utilizing a series of application servers
with identical functionality, as illustrated in Fig. 1.
Before a user’s request is processed by an
application server, it is initially received by the request
redirection server (step 1), which employs a block to
ascertain the application server number currently designated
for the request and allocates the request stream in real time
among the line servers (steps 2 and 3), implementing
the distribution strategy outlined below. The
request redirection server transmits the IP address of the
subsequent application server, as determined by the
distribution method, to the user terminal (step 4), and
subsequently readies itself to handle a new request from
another user, advancing to step 1. The user utilizes the IP
address of the designated application server to retrieve the
online result of processing his request from that server
(step 5). The designated server resolves the application issue
and transmits the outcome to the user (step 6) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
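        <p>The six-step exchange of Fig. 1 can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the class name, the IP addresses, and the round-robin fallback used as the distribution strategy are all hypothetical placeholders for the method developed later in the paper.</p>

```python
# Sketch of the request redirection flow of Fig. 1, steps 1-4.
# The round-robin policy below is only a stand-in for the paper's
# adaptive distribution strategy.
from itertools import cycle

class RedirectionServer:
    """Receives each client request (step 1), determines the next
    application server (steps 2-3), and returns its IP (step 4)."""

    def __init__(self, server_ips):
        self._servers = cycle(server_ips)  # placeholder distribution strategy

    def handle_request(self, request_id):
        ip = next(self._servers)           # server-definition block
        return {"request": request_id, "redirect_to": ip}

rs = RedirectionServer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
print(rs.handle_request(1))  # {'request': 1, 'redirect_to': '10.0.0.1'}
```

        <p>The client then uses the returned IP to fetch its result directly from the designated application server (steps 5 and 6).</p>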
      <p>Specifically, Fig. 1 illustrates that a series of specialized
application software and hardware servers process client
requests concurrently. Choosing the number of servers in
the configuration should align the request traffic intensity
with the application system’s performance. Nonetheless, the
issues get intricate when addressing an erratic and
unpredictable influx of requests, characterized by
substantial fluctuations in both intensity and duration. In
this scenario, due to erratic variations in request volume and
the uncertain processing times by application servers, these
servers, in the absence of specific interventions, experience
uneven and arbitrary loading—resulting in some servers
becoming overloaded and consequently losing requests,
while others remain underutilized. Unforeseen variations in
the volume of requests directed to any application server
can impede request processing due to potential transient
server overloads.</p>
      <p>
        0000-0003-3298-4641 (D. Bakhtiiarov);
0000-0002-0354-2206 (B. Chumachenko);
0000-0002-3285-7565 (O. Lavrynenko);
0000-0001-9412-7413 (V. Chupryn);
0000-0003-2244-262X (V. Antonov)
© 2024 Copyright for this paper by its authors. Use permitted under
Creative Commons License Attribution 4.0 International (CC BY 4.0).
      </p>
      <p>
        Consequently, there is both theoretical
interest in developing a mechanism for load balancing on
application servers, specifically a dynamic load balancing
approach among collaborating application servers in
real-time. This method’s implementation aims to avert potential
short-term overloads of individual application servers
during their operation, thereby fostering the sustainable
functioning of the application system amid uncertainties in
the dynamics of the aforementioned environmental factors.
The suggested technique must assure the stability of the
request distribution process, considering the dynamics of
unforeseen fluctuations in this flow. The theoretical
foundation of this strategy is explained in [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3–5</xref>
        ]. This paper
presents a potential option for its technical implementation,
the core of which is as follows. The application system
hardware depicted in Fig. 1 comprises a software server
(the request redirection server with its server-definition unit) that concurrently and
autonomously manages multiple application servers. This
software server facilitates a real-time adaptive distribution
of requests among the application servers to maintain a
more uniform load during unpredictable surges in request
flow.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Main Part</title>
      <p>
        The theoretical foundation of the employed load balancing
method is delineated in [
        <xref ref-type="bibr" rid="ref1 ref2 ref6">1, 2, 6</xref>
        ]. This paper presents a
potential option for its technical implementation, the core
of which is as follows. The application system comprises a
series of application servers that must function concurrently
and autonomously, with a software server that facilitates
real-time adaptive distribution of request flow among the
application servers to achieve more or less uniform load
balancing. The parameters of the examined load balancing
technology are established through the resolution of the
boundary value problem associated with the analytical
design of the relevant regulator, utilizing the synthesis of
the corresponding R. Bellman functional and iterative
numerical integration of the derived tuning equation. The
implemented technical solution facilitates nearly uniform
loading of server equipment under the specified conditions
while maintaining an acceptable average waiting time for
service requests with the minimal necessary server
resources.
      </p>
      <sec id="sec-2-1">
        <title>2.1. System model for load balancing on servers</title>
        <p>
          This work introduces a structural and functional model for
load balancing throughout the server line of the application
system, designed to operate under conditions where the
incoming request flow from clients is random, unexpected,
non-stationary, and pulsing. Server load balancing entails
the real-time redistribution of incoming request flows from
heavily loaded application servers to those with lighter
loads, thereby achieving a more uniform distribution of load
across the servers. Fig. 2 illustrates this model as a series of
numbered blocks, each representing a certain functional
component of the model’s structure [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
Fig. 2 uses the following designations for functional blocks:
1—smoothing of an input request stream; 2—creation of
quasi-stationary segments of incoming request traffic at
time intervals ∆ti—smoothing steps (as the formation
process is executed as a stepwise iterative procedure with a
step ∆ti, while monitoring fluctuations in the intensity of
the incoming request flow); 3—demultiplexing of the
resulting input stream of requests at each smoothing
interval ∆ti; 4—configurator of smoothing and alignment
procedures (referring to the process of synchronizing the
current values of load factors for application servers seen in
Fig. 2), executed by software-controlled clock generators; 5—
assessing the current values of the intensity of the generated
input request stream at each smoothing interval ∆ti; 6—
buffering requests (establishing a queue of requests for
processing by the i-th application server) at the input of the
i-th application server; 7—evaluating the current values of
the load factor of the i-th application server at each
alignment step; 8—determining a singular matrix of
regulatory relationships among the variables to be aligned
(i.e., between load factors on servers) at each alignment step;
9—ascertaining the precise values of the resource allocation
(i.e., the amount of requests) to be allocated among the input
queues of application servers at each stage of the alignment;
10—data processing of the relevant issue; A—incoming
request stream; B—produced flow of requests; C—query
substreams post-demultiplexing. Fig. 2 illustrates that to
create quasi-stationary traffic segments, the non-stationary
incoming request stream is initially smoothed and
structured accordingly. The created input stream is
demultiplexed, and the resulting parallel substreams are
allocated to the application system’s servers based on the
established load-balancing method. The primary objective
of balancing is to attain the most accurate estimate of the
uniform load across the application system servers. In other
words, under conditions of unpredictable fluctuations in
incoming traffic and varying request processing times by
each server, the balancing algorithm must operate to ensure
that the generated quasi-stationary traffic segments receive
approximately equal load factors across all servers. The
model illustrated in Fig. 2 is founded on the adaptive
principle of reallocating demultiplexed subflows of requests
among application servers through real-time monitoring of
fluctuations in the current intensity of the incoming request
stream and the existing load levels of the application
servers. Consequently, this paradigm necessitates the
real-time implementation of the following three processes:
        </p>
        <sec id="sec-2-1-1">
          <title>The three processes</title>
          <p>1) The establishment of an incoming request flow to attain a more uniform temporal distribution, thereby preventing short-term overloads in the application server line.</p>
          <p>2) The demultiplexing of the incoming request stream into several concurrently operating subflows, corresponding to the number of application servers in the line.</p>
          <p>3) The equalization of the current application server load factors, which diminishes the likelihood of short-term overload on any individual server.</p>
          <p>Consider the characteristics of each of these processes in turn.</p>
        </sec>
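        <p>One step of this three-process cycle can be sketched as follows. This is a minimal illustration of the model's principle, not the authors' algorithm: the function name, the round-robin demultiplexing, and the single-request transfer rule are simplifying assumptions.</p>

```python
# A minimal sketch of one smoothing/alignment step dt_i of the Fig. 2 model:
# demultiplex the smoothed batch among n servers, then move work from the
# most to the least loaded queue (a one-request stand-in for block 9).

def balance_step(incoming, queues, capacities):
    """One iteration: distribute `incoming` requests over server `queues`
    and move work from the most to the least loaded queue."""
    n = len(queues)
    # demultiplexing: spread the smoothed batch across the n substreams
    for i, req in enumerate(incoming):
        queues[i % n].append(req)
    # equalization: load factor = queue length relative to server bandwidth
    loads = [len(q) / c for q, c in zip(queues, capacities)]
    hi, lo = loads.index(max(loads)), loads.index(min(loads))
    if hi != lo and len(queues[hi]) > 1:
        queues[lo].append(queues[hi].pop())   # redistribute one request
    return loads

queues = [[], [], []]
balance_step(list(range(7)), queues, capacities=[1.0, 1.0, 2.0])
print([len(q) for q in queues])  # [2, 2, 3]: the faster third server takes more
```

        <p>In the paper itself, the amount transferred at each step is computed by the regulator synthesized in Section 2.4 rather than fixed at one request.</p>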
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Establishment of the incoming request flow</title>
        <p>
          For the proper functioning of this load-balancing method,
the incoming request traffic must be transformed into a
series of quasi-stationary segments representing a discrete
random process, which can be partially refined by
specialized averaging techniques. The load balancing
technology on the application system’s servers necessitates
the accurate structuring of request flow, specifically to
maintain the consistency between the stationary intervals
of this flow, ∆Tc, and the intervals of the discrete control
process for equalizing server load factors, τk. Some traffic
shaping technologies do not allow for this possibility. The
“token bucket” method [
          <xref ref-type="bibr" rid="ref6 ref8">6, 8</xref>
          ] has a notable constraint in its
applicability, being suitable solely for scenarios where
actual traffic exhibits the traits of a stationary random
process. Nevertheless, actual traffic and its derivatives must
be regarded as a non-stationary discontinuous process,
rendering the straight application of the “token bucket”
method, along with other established traffic generating
techniques, in adaptive load redistribution systems on
servers, largely unjustifiable. This study presents a
structural and functional framework for the development of
request flow, intended as a component of adaptive
load-balancing technology for parallel servers within the
application system. This diagram is illustrated in Fig. 3.
Fig. 3 employs the following designations for functional
blocks: 1—the request queue buffer at the input of the
application system (i.e., the input request storage); 2—the
parameter (generator) defining the size of the smoothing
step; 3—the measurement of the number of requests
received at the input of the balancing system during a single
smoothing step duration; 4—generator of virtual events to
transmit the request via the gateway (token generator); 5—
repository of virtual events for the request sent through the
gateway (“bucket of tokens”); 6—gateway for routing
requests to the input of the demultiplexer; 7—demultiplexer
for the input stream of requests. Fig. 3 illustrates that the
foundation of this approach is the ‘buckets of tokens’
method, but with some adjustments and enhancements that
facilitate its application in the processing of non-stationary
request flows. In this scenario, the request gateway 6
functions as a gate, allowing requests from the input
queue to pass to the demultiplexer only when the fill level of the
‘bucket’ of virtual events permits the request to traverse the
‘bucket’, achieving the average flow rate at the current
smoothing step. The velocity of the token generator 4 is
contingent upon the strength of the incoming request
stream. Based on the intensity measurements conducted by
meter 3 at each smoothing step, the configuration of the
token generator is executed. Consequently, we acquire
quasi-stationary segments of the generated request flow.
The applicability of this traffic generation strategy is
restricted to instances where it is possible to:
        </p>
        <sec id="sec-2-2-1">
          <title>Applicability conditions</title>
          <p>1) Establish time intervals, referred to as stationarity intervals (∆Tc), during which the average flow rate (Rc) at the input of the load balancing system remains almost constant.</p>
          <p>2) Ensure a regulated magnitude of pulsations in the smoothed stream of queries.</p>
          <p>The implementation of this traffic processing scheme is
warranted if it can transform a non-stationary flow, marked
by unpredictable average speeds and fluctuating volumes,
into a series of quasi-stationary process segments with
defined maximum current thresholds. This transformation
enables the implementation of discrete control. The token
bucket technique is extensively discussed in the literature,
albeit within rather limited domains of applicability. The
operational architecture of this algorithm is altered to
facilitate its integration into the load-balancing system
circuit.</p>
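          <p>The modified scheme of Fig. 3 can be sketched as follows. This is a simplified reading, not the authors' code: the class interface and the recalibration rule (rate = measured arrivals per smoothing step) are illustrative assumptions.</p>

```python
# Sketch of the adapted token-bucket shaper of Fig. 3: unlike the classical
# stationary version, the token generator rate (block 4) is re-calibrated at
# every smoothing step from the intensity measured by block 3, so the output
# forms quasi-stationary segments of the non-stationary input flow.

class AdaptiveTokenBucket:
    def __init__(self, bucket_size):
        self.bucket_size = bucket_size   # block 5: token repository capacity
        self.tokens = 0.0
        self.rate = 0.0                  # block 4: token generator rate

    def calibrate(self, arrivals, step):
        # blocks 2-3: average intensity measured over one smoothing step dt_i
        self.rate = arrivals / step

    def admit(self, step):
        # block 6: the gateway passes as many queued requests as tokens allow
        self.tokens = min(self.bucket_size, self.tokens + self.rate * step)
        passed = int(self.tokens)
        self.tokens -= passed
        return passed

tb = AdaptiveTokenBucket(bucket_size=10)
tb.calibrate(arrivals=8, step=1.0)   # intensity measured on this step
print(tb.admit(step=1.0))            # 8 requests pass to the demultiplexer
```

          <p>The bucket size bounds the residual pulsations of the smoothed stream (condition 2 above), while the per-step recalibration tracks the non-stationary average rate (condition 1).</p>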
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Demultiplexing the incoming request stream</title>
        <p>Demultiplexing the incoming request stream from
application system clients is essential when the
performance of a single application server is inadequate to
effectively process this stream, necessitating the utilization
of multiple parallel application servers with identical
functionality. One can select from many ways of stream
demultiplexing. The most straightforward option is to allocate
requests from the incoming stream uniformly across
application system servers. In this instance, the disparity in
request processing times would result in certain servers
experiencing temporary overloads, leading to request
losses, while other application servers operate under
capacity. Consequently, it is prudent to execute the
demultiplexing of the input stream precisely as described below.</p>
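        <p>The contrast drawn above can be sketched as follows. This is an illustration of the argument, not the paper's algorithm: the least-loaded rule shown is a hypothetical stand-in for the backlog-aware allocation developed in the next section.</p>

```python
# Uniform round-robin demultiplexing versus backlog-aware demultiplexing.
# With unequal processing times, uniform allocation keeps feeding a server
# that already has a backlog; a backlog-aware rule steers new requests away.

def demux_uniform(requests, n):
    subflows = [[] for _ in range(n)]
    for i, r in enumerate(requests):
        subflows[i % n].append(r)
    return subflows

def demux_least_loaded(requests, backlogs):
    subflows = [[] for _ in backlogs]
    work = list(backlogs)                 # current queue lengths per server
    for r in requests:
        j = work.index(min(work))         # pick the least-loaded server
        subflows[j].append(r)
        work[j] += 1
    return subflows

print([len(s) for s in demux_uniform(range(6), 3)])               # [2, 2, 2]
print([len(s) for s in demux_least_loaded(range(6), [4, 0, 0])])  # [0, 3, 3]
```

        <p>With a pre-existing backlog of four requests on the first server, the uniform scheme would still send it two more, while the backlog-aware scheme sends it none.</p>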
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Load balancing on application servers</title>
        <p>
          The processing time for each request is an unpredictable
variable, resulting in real-time fluctuations of application
server load factors. Under these circumstances, balancing
server load factors is recommended. Fig. 4 illustrates the
structural and functional framework of load balancing on
application servers.
Fig. 4 uses the subsequent designations for functional
blocks: 1—settler (generator) of the alignment step
magnitude; 2—buffer for the request queue at the server
application input; 3—assessment of the current value of the
server application load factor (evaluations are conducted at
each alignment step); 4—calculation of the determinant of
the matrix of regulatory connections among server
applications (resulting from the resolution of the
configuration equation); 5—computation of the determinant
of the resource share ∆ (specifically, the number of requests
to be redistributed at each alignment step among each
server application). The load balancing process is a
deliberate iterative procedure for the real-time
redistribution of requests inside the request queue buffers
for processing at the inputs of each application server. A
specific quantity of requests is extracted from one server’s
queue and subsequently transferred to another server’s
queue by the established alignment procedure. This
redistribution aims to diminish the disparity between the
load factor values of the servers comprising the line,
facilitating load balancing across each server in the line. The
technique operates so that at each alignment step,
determined by setter 1 based on the measured current load
values of each server, it ascertains the current state of the
control link matrix 4 (as a result of the incremental
solution). This matrix delineates the direction of request
redistribution across server pairs, while the resource share
determinant of 5, derived from measurements of current
incoming request traffic intensity, specifies the number of
requests to be transferred from one server to another. This
publication does not include a formal synthesis of the
adaptive system controller that executes load balancing on
application servers. A synthesis was specifically conducted
in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The principles of analytical regulator theory are
presented in references [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref9">9–14</xref>
          ]. Only the subsequent
information should be noted. The objective of synthesizing
an adaptive controller with a specified quantity of
application servers is to mitigate the risk of server
equipment overload and to maintain the stability of the load
balancing process amidst the unpredictable duration of
request processing by each server. The objective of
synthesizing such a regulator pertains to the established
boundary value problem of analytically designing
regulators to minimize the R. Bellman functional within the
realm of continuous dynamic control systems for entities
characterized by ordinary first-order linear differential
equations. The application of the synthesis results
facilitated a more uniform loading of the server equipment
and ensured the requisite stability and length of the
balancing procedure despite the aforementioned
unanticipated events. The trajectory of traffic flow
regulation is dictated by the suitably constructed R. Bellman
functional. The role of monitoring trends in variations in
processed flow intensity on servers is executed through the
incremental integration of the relevant differential tuning
equation. In the analytical design of the controller, the
structure of the Bellman function was defined, enabling the
formulation of the tuning equation, the specification of the
function, and the derivation of the appropriate Bellman
equation. The task of designing a controller is simplified to
solving the Riccati equation, a matrix quadratic equation
essential for determining the matrix component of the
Bellman function. Substituting the identified matrix into the
control expression yields the final formulation for the
required controller. A regulator is synthesized to maintain a
consistent trajectory of state changes in the regulation
object’s phase space C2, adhering to defined quality
parameters of the transient process. The controller must
observe both the variations in the intensity of incoming
request flows and the dynamics of the transient process of
load factor equalization to minimize control errors while
considering constraints that maintain the stability of the
control system. Initial parameters of the equalization
system: the number of servers in the queue and the
attenuation coefficient for the Bellman function α. The
design of this regulator must address the following inherent
physical constraints.
        </p>
        <p>Physical constraint 1: s1 + s2 + s3 + … + sn ≤ F (1), where F represents the
total bandwidth of the application server line,
F = f1 + f2 + f3 + … + fn = const; f1, f2, f3, …, fn are the
server bandwidths; and s1, s2, s3, …, sn are the flow
intensities of requests at the inputs of application servers.</p>
        <p>Physical constraint 2: the unpredictability of request
flow ripples.</p>
        <p>Physical constraint 3: ambiguity regarding the
processing duration of each specific request by each
application server. The efficiency of the load balancing
procedure on the servers, from a physical perspective, is the
aggregate of the squares of the discrepancies in the load
factors of each pair of application servers. This quantity
should be minimized, as a value of zero indicates that the load
factors of all servers in the line are identical. Adhering
to the aforementioned constraints will decrease the risk of
server traffic overflow.</p>
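        <p>The constraint and the efficiency criterion just described can be written out directly. The notation follows the text; the function names are mine, introduced only for illustration.</p>

```python
# Physical constraint 1 (capacity inequality) and the balancing criterion:
# the sum of squared pairwise load-factor differences, which is zero exactly
# when every server in the line carries the same load factor.
from itertools import combinations

def satisfies_constraint(s, f):
    # s1 + s2 + ... + sn <= F, with F = f1 + f2 + ... + fn = const
    return sum(s) <= sum(f)

def imbalance(loads):
    return sum((a - b) ** 2 for a, b in combinations(loads, 2))

f = [10.0, 10.0, 20.0]          # server bandwidths f_i
s = [8.0, 9.0, 15.0]            # input flow intensities s_i
print(satisfies_constraint(s, f))        # True: 32 <= 40
print(imbalance([0.8, 0.9, 0.75]))       # positive: loads unequal
print(imbalance([0.8, 0.8, 0.8]))        # 0.0: perfectly balanced
```

        <p>The regulator of this section redistributes requests at each alignment step so as to drive this imbalance measure toward zero.</p>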
      </sec>
      <sec id="sec-2-5">
        <title>2.5. Essential Factors for Operating PHP Applications Across Multiple Servers</title>
        <p>
          Having addressed load balancing, the subsequent
pertinent inquiry is: how are sessions managed? Sessions
enable programs to circumvent the stateless characteristic
of HTTP and retain information across multiple requests
(e.g., authentication status and shopping cart contents).
PHP, by default, retains sessions on the server’s disk that
processes the user’s request. For instance, when User A
submits a request to Server B, a session for User A is
established and retained on Server B (Fig. 5) [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ].
Nonetheless, when requests are distributed among
numerous servers, this setup is likely to lead to
malfunctioning functionality. For instance, consumers may
discover their shopping cart is unexpectedly empty midway
through the process; they may be arbitrarily redirected to
the login page; or they may realize that all their responses
in a survey have been erased while completing it. Two
alternatives exist to mitigate this: centrally stored sessions
and sticky sessions. Centrally Stored Sessions. Sessions may
be centrally saved via a caching server (e.g., Redis or
Memcached), a database (e.g., MySQL or PostgreSQL), or a
shared filesystem (e.g., NFS or GlusterFS). The optimal
choice among these options is a caching server, for two
reasons: caching servers are in-memory key-value stores,
offering superior responsiveness compared to SQL
databases; and because sessions are written at the
conclusion of every request, an SQL-backed store must
perform a database write on each request, which may
result in table locking and sluggish
write operations. When centrally storing sessions, it is
imperative to ensure that the session store does not become
a singular point of failure. This can be circumvented by
configuring the store in a clustered arrangement.
Consequently, if one server in the cluster fails, it is not
catastrophic, as another can be incorporated to substitute it
[
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. Persistent Sessions. An alternative to session caching
is Session Stickiness, also known as Session Persistence.
User queries are routed to the same server for the duration
of their session. Although it may initially appear to be a
sound concept, there are several possible downsides:
will hot spots emerge within the cluster? What occurs
when a server is inaccessible, overloaded, or
requires an upgrade? Consequently, we do not endorse this
strategy.
        </p>
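        <p>The centrally stored option can be sketched as follows. This is a minimal illustration, not a production setup: a plain dictionary stands in for the shared cache cluster (Redis or Memcached), and the class and key names are hypothetical. In practice, each application server would point its PHP session handler at the shared store instead of local disk.</p>

```python
# Sketch of centrally stored sessions: every server in the line reads and
# writes session state through one shared store, so a user's cart survives
# being redirected to a different server. The dict stands in for Redis.
import time

class CentralSessionStore:
    def __init__(self, ttl_seconds=3600):
        self._store = {}          # stand-in for the shared cache cluster
        self._ttl = ttl_seconds

    def save(self, session_id, data):
        self._store[session_id] = (time.time() + self._ttl, dict(data))

    def load(self, session_id):
        entry = self._store.get(session_id)
        if entry is None or entry[0] < time.time():
            return {}             # expired or unknown: start a fresh session
        return entry[1]

# Any server in the line now sees the same session state:
store = CentralSessionStore()
store.save("sess-42", {"user": "A", "cart": ["book"]})   # written on server B
print(store.load("sess-42")["cart"])                      # read on server C
```

        <p>Clustering the store itself, as noted above, keeps it from becoming a single point of failure.</p>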
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusions</title>
      <p>
        In several application systems, such as ‘client/server’, which
exhibit high traffic intensity, the processing of client
requests is executed by a series of concurrently operating
application servers. Owing to the erratic fluctuations in
request flow and the variable duration of their processing
by application servers, these servers, unless specific
measures are implemented, experience random and uneven
loading—resulting in some servers becoming overloaded
and consequently losing requests, while others remain
underutilized. In [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a formal balancing method was
developed to avert potential short-term overloads of
application servers during their operation, thereby
promoting the sustainable functioning of the application
system amidst uncertainties in the dynamics of the
aforementioned factors. This study presents a potential
option for the technical implementation of this strategy.
      </p>
      <p>The structural-functional model of load balancing for
the application system’s server line is delineated, and
designed to operate in conditions where the incoming
request flow from clients is random, unexpected,
non-stationary, and pulsating. The model utilizes the adaptive
principle of reallocating demultiplexed request sub-streams
across application servers through real-time monitoring of
fluctuations in the incoming request stream intensity and
the current load levels of the application servers. This
paradigm necessitates the implementation of the following
three processes:</p>
      <sec id="sec-3-1">
        <title>The three processes</title>
        <p>1) Establishment of the incoming request flow to prevent short-term server line overloads.</p>
        <p>2) Demultiplexing the incoming request stream into multiple parallel substreams based on the number of application servers in the line.</p>
        <p>3) Equalization of the current load factor values of application servers.</p>
        <p>The formation of an incoming request stream to the
application server line is examined. It is demonstrated that
the proper functioning of this load-balancing method
requires the incoming request traffic to be converted into a
sequence of quasi-stationary segments representing a
discrete random process. It is essential to align the intervals
of stationarity of this request flow with the intervals of the
discrete control steps for equalizing the load factor values of
application servers. A modification of the established
technological approach for packet traffic creation, referred
to as the “bucket of tokens”, is proposed. The token
generator’s performance is determined by the intensity of
the incoming request stream. Specifically, based on the
intensity measurements conducted by the meter at each
smoothing step, the token generator is calibrated.
Consequently, we acquire quasi-stationary segments of the
generated request flow.</p>
        <p>A technological technique for load balancing on
application servers has been created, characterized as a
deliberate iterative procedure for the real-time
redistribution of requests stored in the buffers of request
queues at the entry points of each application server. This
redistribution aims to diminish the disparity between the
load factor values of the servers constituting the line. The
implemented balancing algorithm enables a specified
number of application servers to mitigate the risk of
short-term server overloads and ensures the stability of the
load-balancing process amidst the unpredictable duration of
request processing by each server.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><given-names>D.</given-names> <surname>Bakhtiiarov</surname></string-name>,
          <string-name><given-names>G.</given-names> <surname>Konakhovych</surname></string-name>,
          <string-name><given-names>O.</given-names> <surname>Lavrynenko</surname></string-name>,
          <article-title>An Approach to Modernization of the Hata and COST 231 Model for Improvement of Electromagnetic Compatibility in Premises for Navigation and Motion Control Equipment</article-title>,
          in:
          <source>5th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC)</source>
          (<year>2018</year>)
          <fpage>271</fpage>-<lpage>274</lpage>.
          doi: 10.1109/MSNMC.2018.8576260.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F.</given-names>
            <surname>Xia</surname>
          </string-name>
          , et al.,
          <article-title>Community-based Event Dissemination with Optimal Load Balancing</article-title>
          ,
          <source>IEEE Trans. Comput.</source>
          <volume>64</volume>
          (
          <issue>7</issue>
          ) (
          <year>2015</year>
          )
          <fpage>1857</fpage>
          -
          <lpage>1869</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nahir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Orda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Raz</surname>
          </string-name>
          ,
          <article-title>Schedule First, Manage Later: Network-Aware Load Balancing</article-title>
          ,
          <source>Proc. IEEE INFOCOM</source>
          (
          <year>2013</year>
          )
          <fpage>510</fpage>
          -
          <lpage>514</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>Doncel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Aalto</surname>
          </string-name>
          ,
          <string-name><given-names>U.</given-names> <surname>Ayesta</surname></string-name>,
          <article-title>Economies of Scale in Parallel-Server Systems</article-title>
          ,
          <source>Proc. IEEE INFOCOM</source>
          (
          <year>2017</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>O.</given-names>
            <surname>Veselska</surname>
          </string-name>
          , et al.,
          <article-title>A Wavelet-Based Steganographic Method for Text Hiding in an Audio Signal</article-title>
          , Sensors,
          <volume>22</volume>
          (
          <issue>15</issue>
          ) (
          <year>2022</year>
          )
          <fpage>5832</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><given-names>R.</given-names> <surname>Odarchenko</surname></string-name>, et al.,
          <article-title>Empirical Wavelet Transform in Speech Signal Compression Problems</article-title>,
          in:
          <source>IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&amp;T)</source>
          (<year>2021</year>)
          <fpage>599</fpage>-<lpage>602</lpage>.
          doi: 10.1109/PICST54195.2021.9772156.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Boger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Fraga</surname>
          </string-name>
          ,
          <string-name><given-names>E.</given-names> <surname>Alchieri</surname></string-name>,
          <article-title>Reconfigurable Scalable State Machine Replication</article-title>,
          <source>LADC</source>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name><given-names>A.</given-names> <surname>Schiper</surname></string-name>,
          <article-title>Achieving High-Throughput State Machine Replication in Multi-Core Systems</article-title>,
          <source>ICDCS</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><given-names>O.</given-names> <surname>Lavrynenko</surname></string-name>, et al.,
          <article-title>Protected Voice Control System of UAV</article-title>,
          in:
          <source>IEEE 5th International Conference Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD)</source>
          (<year>2019</year>)
          <fpage>295</fpage>-<lpage>298</lpage>.
          doi: 10.1109/APUAVD47061.2019.8943926.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><given-names>O.</given-names> <surname>Solomentsev</surname></string-name>, et al.,
          <article-title>A Procedure for Failures Diagnostics of Aviation Radio Equipment</article-title>,
          in:
          <source>Proceedings - International Conference on Advanced Computer Information Technologies (ACIT)</source>
          (<year>2023</year>)
          <fpage>100</fpage>-<lpage>103</lpage>.
          doi: 10.1109/ACIT58437.2023.10275337.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bakhtiiarov</surname>
          </string-name>
          , et al.,
          <article-title>Method of Binary Detection of Small Unmanned Aerial Vehicles</article-title>,
          in:
          <source>Cybersecurity Providing in Information and Telecommunication Systems</source>,
          vol.
          <volume>3654</volume>
          (<year>2024</year>)
          <fpage>312</fpage>-<lpage>321</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Marandi</surname>
          </string-name>
          , et al.,
          <article-title>Filo: Consolidated Consensus as a Cloud Service</article-title>,
          <source>ATC</source>
          (<year>2016</year>).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Poke</surname>
          </string-name>
          ,
          <string-name><given-names>T.</given-names> <surname>Hoefler</surname></string-name>,
          <article-title>DARE: High-Performance State Machine Replication on RDMA Networks</article-title>,
          <source>HPDC</source>
          (
          <year>2015</year>
          )
          <fpage>107</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Performance Optimization for State Machine Replication based on Application Semantics</article-title>,
          <source>J. Syst. Software</source>,
          <volume>122</volume>(C) (
          <year>2016</year>
          )
          <fpage>96</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. R.</given-names>
            <surname>Lorch</surname>
          </string-name>
          , et al.,
          <article-title>Leveraging Lightweight Virtual Machines to Easily and Efficiently Construct Fault-Tolerant Services</article-title>
          ,
          <source>NSDI</source>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>