<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Distribute load among concurrent servers</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Denys</forename><surname>Bakhtiiarov</surname></persName>
							<email>bakhtiiaroff@tks.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1 Kosmonavta Komarova ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="department">State Scientific and Research Institute of Cybersecurity Technologies and Information Protection</orgName>
								<address>
									<addrLine>3 Maksym Zaliznyak</addrLine>
									<postCode>03142</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Bohdan</forename><surname>Chumachenko</surname></persName>
							<email>bohdan.chumachenko@npp.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1 Kosmonavta Komarova ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksandr</forename><surname>Lavrynenko</surname></persName>
							<email>oleksandrlavrynenko@tks.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1 Kosmonavta Komarova ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Chupryn</surname></persName>
							<email>volodymyr.chupryn@npp.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1 Kosmonavta Komarova ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Veniamin</forename><surname>Antonov</surname></persName>
							<email>veniamin.antonov@npp.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<addrLine>1 Kosmonavta Komarova ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Distribute load among concurrent servers</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">26AFFD28EE7282287C166E19F8AA725A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>request</term>
					<term>application</term>
					<term>server</term>
					<term>client</term>
					<term>load balancing</term>
					<term>0000-0003-3298-4641 (D. Bakhtiiarov)</term>
					<term>0000-0002-0354-2206 (B. Chumachenko)</term>
					<term>0000-0002-3285-7565 (O. Lavrynenko)</term>
					<term>0000-0001-9412-7413 (V. Chupryn)</term>
					<term>0000-0003-2244-262X (V. Antonov)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A technical implementation option for load balancing among concurrently operating application servers is proposed to mitigate the risks of overload amid substantial unpredictable fluctuations in the request flow to the application system and the variable processing durations of each application server. The structural-functional model for load balancing inside the server line of the application system is delineated; it is designed to operate under conditions where the incoming request flow from clients is random, unexpected, non-stationary, and pulsing. A scheme is proposed that generates the flow of requests to the application server line so that the stationary intervals of this flow are aligned with the intervals of the discrete control process for equalizing server load factors. A technological framework for load balancing on application servers is proposed that equalizes the load factors of the application system's servers by redistributing, in real time, a portion of the incoming request traffic from more heavily loaded servers to less loaded ones.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In practice, when utilizing computerized real-time 'client/server' application systems that permit remote access for clients via the Internet, such as various interactive help systems, effectiveness is assessed by the value of τs, the average service duration of each stream of customer requests entering the application system input. A lower value indicates that the consumer is likely to receive a response to their request more promptly <ref type="bibr" target="#b0">[1]</ref>. At low request flow intensities, queues at the application system's input are virtually nonexistent, making τs directly contingent upon the performance of the server hardware hosting the application software. Issues arise when the volume of incoming requests is misaligned with the processing speed of the server infrastructure: unprocessed requests accumulate, resulting in an unacceptable increase in request service duration and, in certain instances, the loss of some requests. Given the high intensity of the request flow in several applications, it is essential to partition it in real time into parallel demultiplexed substreams and execute their concurrent online processing on a line of application servers with identical functionality, as illustrated in Fig. <ref type="figure" target="#fig_0">1</ref>.</p><p>Before a user's request is processed by an application server, it is first received by the request redirection server (step 1), which employs a dedicated block to determine the number of the application server currently designated for the request and allocates the request stream among the line servers in real time (steps 2 and 3) according to the distribution strategy outlined below. 
The request redirection server transmits the IP address of the designated application server, as determined by the distribution method, to the user terminal (step 4), and then readies itself to handle a new request from another user, returning to step 1. The user utilizes the IP address of the designated application server to retrieve the online result of processing their request from that server (step 5). The designated server resolves the application task and transmits the outcome to the user (step 6) <ref type="bibr" target="#b1">[2]</ref>.</p><p>Specifically, Fig. <ref type="figure" target="#fig_0">1</ref> illustrates that a line of specialized application software and hardware servers processes client requests concurrently. The number of servers in the configuration should be chosen to align the request traffic intensity with the application system's performance. Nonetheless, the problem becomes intricate when addressing an erratic and unpredictable influx of requests, characterized by substantial fluctuations in both intensity and duration. In this scenario, due to erratic variations in request volume and the uncertain processing times of application servers, these servers, in the absence of specific interventions, experience uneven and arbitrary loading: some servers become overloaded and consequently lose requests, while others remain underutilized. Unforeseen variations in the volume of requests directed to any application server can impede request processing due to potential transient server overloads. Consequently, there is both theoretical and practical interest in developing a mechanism for load balancing on application servers, specifically a dynamic load balancing approach among collaborating application servers in real time. 
This method's implementation aims to avert potential short-term overloads of individual application servers during their operation, thereby fostering the sustainable functioning of the application system amid uncertainties in the dynamics of the aforementioned environmental factors. The suggested technique must ensure the stability of the request distribution process, considering the dynamics of unforeseen fluctuations in this flow. The theoretical foundation of this strategy is explained in <ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref>. This paper presents a potential option for its technical implementation, the core of which is as follows. The application system hardware depicted in Fig. 1 comprises a software server (ROM server + server definition unit) that concurrently and autonomously manages multiple application servers. This software server facilitates a real-time adaptive distribution of requests among the application servers to maintain a more uniform load during unpredictable surges in request flow.</p></div>
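The six-step redirection flow described above can be sketched in miniature. This is an illustrative sketch only, not the paper's adaptive method: the class name, the server IPs, and the least-loaded selection rule are all assumptions introduced here.

```python
class RedirectionServer:
    """Hypothetical sketch of the request redirection server of Fig. 1:
    it receives a client request (step 1), selects an application server
    (steps 2-3), and returns that server's IP to the client (step 4)."""

    def __init__(self, servers):
        # servers: list of (ip, current_load_factor) pairs, load in [0, 1]
        self.servers = dict(servers)

    def redirect(self):
        # One possible distribution strategy (an assumption, not the
        # paper's regulator): pick the currently least-loaded server.
        return min(self.servers, key=self.servers.get)

router = RedirectionServer([("10.0.0.1", 0.8), ("10.0.0.2", 0.3), ("10.0.0.3", 0.5)])
print(router.redirect())  # → "10.0.0.2", the least-loaded server's IP
```

The client would then contact the returned IP directly (steps 5 and 6), leaving the redirection server free to serve the next request.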
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Main Part</head><p>The theoretical foundation of the employed load balancing method is delineated in <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b5">6]</ref>. This paper presents a potential option for its technical implementation, the core of which is as follows. The application system comprises a series of application servers that must function concurrently and autonomously, with a software server that facilitates real-time adaptive distribution of request flow among the application servers to achieve more or less uniform load balancing. The parameters of the examined load balancing technology are established through the resolution of the boundary value problem associated with the analytical design of the relevant regulator, utilizing the synthesis of the corresponding R. Bellman functional and iterative numerical integration of the derived tuning equation. The implemented technical solution facilitates nearly uniform loading of server equipment under the specified conditions while maintaining an acceptable average waiting time for service requests with the minimal necessary server resources.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">System model for load balancing on servers</head><p>This work introduces a structural and functional model for load balancing throughout the server line of the application system, designed to operate under conditions where the incoming request flow from clients is random, unexpected, non-stationary, and pulsing. Server load balancing entails the real-time redistribution of incoming request flows from heavily loaded application servers to those with lighter loads, thereby achieving a more uniform distribution of load across the servers. Fig. <ref type="figure" target="#fig_1">2</ref> illustrates this model as a series of numbered blocks, each representing a particular functional component of the model's structure <ref type="bibr" target="#b6">[7]</ref>; the block designations are given in the caption of Fig. <ref type="figure" target="#fig_1">2</ref>. The figure shows that, to create quasi-stationary traffic segments, the non-stationary incoming request stream is initially smoothed and structured accordingly. 
The created input stream is demultiplexed, and the resulting parallel substreams are allocated to the application system's servers in accordance with the established load-balancing method. The primary objective of balancing is to attain the closest possible approximation to uniform load across the application system servers. In other words, under conditions of unpredictable fluctuations in incoming traffic and varying request processing times on each server, the balancing algorithm must operate so that, within the generated quasi-stationary traffic segments, all servers receive approximately equal load factors. The model illustrated in Fig. <ref type="figure" target="#fig_1">2</ref> is founded on the adaptive principle of reallocating demultiplexed subflows of requests among application servers through real-time monitoring of fluctuations in the current intensity of the incoming request stream and the existing load levels of the application servers. Consequently, this paradigm necessitates the real-time implementation of the following three processes:</p><p>1) The shaping of the incoming request flow to attain a more uniform temporal distribution, thereby preventing short-term overloads in the application server line.</p><p>2) The demultiplexing of the incoming request stream into several concurrently operating subflows, corresponding to the number of application servers in the line.</p><p>3) The equalization of the current application server load factors, which diminishes the likelihood of short-term overload on any individual server. Let us examine the characteristics of each of these processes.</p></div>
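The three processes enumerated above can be sketched as a single discrete control step (smoothing is assumed to have happened upstream). Everything here is hypothetical scaffolding: `balance_step`, its queue representation, and the one-request-at-a-time equalization rule stand in for the paper's regulator-based procedure.

```python
from collections import deque

def balance_step(queues, batch):
    """One control step over n server queues: demultiplex an already
    smoothed batch of requests (process 2), then equalize queue lengths
    pairwise (process 3). A sketch, not the paper's regulator."""
    # Process 2: demultiplex, sending each request to the shortest queue
    for req in batch:
        min(queues, key=len).append(req)
    # Process 3: equalize, moving one request from the longest queue to
    # the shortest while the imbalance exceeds one request
    while len(longest := max(queues, key=len)) - len(shortest := min(queues, key=len)) > 1:
        shortest.append(longest.pop())
    return [len(q) for q in queues]

queues = [deque() for _ in range(3)]
print(balance_step(queues, list(range(10))))  # → [4, 3, 3]
```

The returned queue lengths differ by at most one request, i.e. the load factors within the quasi-stationary segment are approximately equal, which is the stated objective of the balancing.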
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Establishment of the incoming request flow</head><p>For the proper functioning of this load-balancing method, the incoming request traffic must be transformed into a series of quasi-stationary segments representing a discrete random process, which can be partially refined by specialized averaging techniques. The load balancing technology on the application system's servers necessitates the accurate structuring of the request flow, specifically to maintain the consistency between the stationary intervals of this flow, ∆Ts, and the intervals of the discrete control process for equalizing server load factors, τk. Some traffic-shaping technologies do not allow for this possibility. The "token bucket" method <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b7">8]</ref> has a notable constraint on its applicability, being suitable solely for scenarios where actual traffic exhibits the traits of a stationary random process. Nevertheless, actual traffic and its derivatives must be regarded as a non-stationary discontinuous process, rendering the direct application of the "token bucket" method, along with other established traffic-shaping techniques, in adaptive load redistribution systems on servers largely unjustifiable. This study presents a structural and functional scheme for the shaping of the request flow, intended as a component of adaptive load-balancing technology for parallel servers within the application system. This diagram is illustrated in Fig. 
<ref type="figure" target="#fig_2">3</ref>. It employs the following designations for functional blocks: 1-the request queue buffer at the input of the application system (i.e., the input request storage); 2-the setter (generator) defining the size of the smoothing step; 3-the meter of the number of requests received at the input of the balancing system during a single smoothing step; 4-the generator of virtual events authorizing transmission of a request via the gateway (token generator); 5-the repository of virtual events authorizing transmission of a request via the gateway (the "bucket of tokens"); 6-the gateway routing requests to the input of the demultiplexer; 7-the demultiplexer of the input stream of requests. Fig. <ref type="figure" target="#fig_2">3</ref> illustrates that the foundation of this approach is the "token bucket" method, but with adjustments and enhancements that facilitate its application to the processing of non-stationary request flows. In this scheme, the request gateway 6 functions as a gate, allowing requests from the input queue to pass to the demultiplexer only when the fill level of the "bucket" of virtual events permits a request to traverse it, thereby achieving the average flow rate at the current smoothing step. The rate of the token generator 4 depends on the intensity of the incoming request stream: based on the intensity measurements conducted by meter 3 at each smoothing step, the token generator is retuned. Consequently, we acquire quasi-stationary segments of the generated request flow. The applicability of this traffic-shaping strategy is restricted to cases where it is possible to:</p><p>1) Establish time intervals, referred to as stationarity intervals (∆Tc), during which the average flow rate (Rc) at the input of the load balancing system remains almost constant. 
2) Keep the magnitude of pulsations in the smoothed stream of requests within regulated bounds.</p><p>The implementation of this traffic processing scheme is warranted if it can transform a non-stationary flow, marked by unpredictable average speeds and fluctuating volumes, into a series of quasi-stationary process segments with defined maximum current thresholds. This transformation enables the implementation of discrete control. The token bucket technique is extensively discussed in the literature, albeit within rather limited domains of applicability. The operational architecture of this algorithm is altered here to facilitate its integration into the load-balancing system circuit.</p></div>
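As a rough illustration of the modified scheme, here is a minimal discrete-time token bucket whose rate is retuned at each smoothing step from the measured arrival intensity. The class and its parameters are assumptions introduced for illustration; the actual tuning in the paper comes from the regulator synthesis described later.

```python
class AdaptiveTokenBucket:
    """Sketch of the modified 'bucket of tokens' scheme of Fig. 3, with
    the simplifying assumption that time advances in discrete smoothing
    steps. Retuning the token rate from the measured arrival intensity
    yields quasi-stationary output segments."""

    def __init__(self, depth):
        self.depth = depth   # bucket capacity, i.e. the maximum burst
        self.tokens = 0.0
        self.rate = 0.0      # tokens added per smoothing step

    def retune(self, arrivals_last_step):
        # Meter (block 3) drives the token generator (block 4):
        # the rate tracks the measured intensity of the last step.
        self.rate = float(arrivals_last_step)

    def admit(self, n_waiting):
        # Gateway (block 6): pass as many queued requests as tokens allow.
        self.tokens = min(self.depth, self.tokens + self.rate)
        passed = min(n_waiting, int(self.tokens))
        self.tokens -= passed
        return passed

bucket = AdaptiveTokenBucket(depth=5)
bucket.retune(arrivals_last_step=3)
print(bucket.admit(n_waiting=10))  # → 3; the remaining requests stay queued
```

The bucket depth bounds the pulsation magnitude of the smoothed stream (condition 2 above), while the per-step retuning keeps the average output rate close to the average input rate within each stationarity interval (condition 1).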
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Demultiplexing the incoming request stream</head><p>Demultiplexing the incoming request stream from application system clients is essential when the performance of a single application server is inadequate to process this stream effectively, necessitating the utilization of multiple parallel application servers with identical functionality. One can select from many methods of stream demultiplexing. The most straightforward option is to allocate requests from the incoming stream uniformly across the application system servers. In this case, however, the disparity in request processing times would result in certain servers experiencing temporary overloads, leading to request losses, while other application servers operate under capacity. Consequently, it is prudent to execute the demultiplexing of the input stream precisely as described below.</p></div>
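The contrast drawn above, uniform allocation versus allocation that accounts for load, can be made concrete. Both functions and the per-request cost function are hypothetical; the paper's own demultiplexing strategy is the regulator-driven one described in the next subsection.

```python
def uniform_demux(requests, n):
    """Round-robin split of the incoming stream (the 'straightforward
    option' the text warns about): server i gets every n-th request
    regardless of how loaded it currently is."""
    return [requests[i::n] for i in range(n)]

def load_aware_demux(requests, service_time, n):
    """Load-aware split (a sketch of the alternative): each request goes
    to the server with the least pending work. service_time is a
    hypothetical per-request cost function."""
    pending = [0.0] * n
    out = [[] for _ in range(n)]
    for req in requests:
        i = pending.index(min(pending))  # least-loaded server
        out[i].append(req)
        pending[i] += service_time(req)
    return out

reqs = list(range(6))
print(uniform_demux(reqs, 3))                          # → [[0, 3], [1, 4], [2, 5]]
print(load_aware_demux(reqs, lambda r: 1 + r % 3, 3))  # → [[0, 3, 4], [1, 5], [2]]
```

With unequal service times the round-robin split gives every server the same request count but unequal work, whereas the load-aware split gives unequal counts but far more even pending work, which is exactly the disparity the text describes.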
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Model training</head><p>The processing time for each request is an unpredictable variable, resulting in real-time fluctuations of the application server load factors. Under these circumstances, balancing the server load factors is recommended. Fig. <ref type="figure" target="#fig_3">4</ref> illustrates the structural and functional framework of load balancing on application servers. The load balancing process is a deliberate iterative procedure for the real-time redistribution of requests among the request queue buffers at the inputs of each application server: a specific quantity of requests is extracted from one server's queue and transferred to another server's queue in accordance with the established alignment procedure. This redistribution aims to diminish the disparity between the load factor values of the servers comprising the line, thereby balancing the load across each server in the line. At each alignment step, set by the step setter 1, the technique ascertains, based on the measured current load values of each server, the current state of the control-link matrix 4 (obtained as an incremental solution). This matrix delineates the direction of request redistribution across server pairs, while the resource-share determiner 5, drawing on measurements of the current incoming request traffic intensity, specifies the number of requests to be transferred from one server to another. This publication does not include a formal synthesis of the adaptive system controller that executes load balancing on application servers; such a synthesis was conducted in <ref type="bibr" target="#b0">[1]</ref>. 
The principles of analytical regulator theory are presented in references <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>. Only the following should be noted. The objective of synthesizing an adaptive controller for a specified number of application servers is to mitigate the risk of server equipment overload and to maintain the stability of the load balancing process amid the unpredictable duration of request processing by each server. Synthesizing such a regulator reduces to the established boundary value problem of analytically designing regulators that minimize the R. Bellman functional, within the realm of continuous dynamic control systems for objects described by ordinary first-order linear differential equations. The application of the synthesis results facilitated a more uniform loading of the server equipment and ensured the requisite stability and duration of the balancing procedure despite the aforementioned unanticipated events. The trajectory of traffic flow regulation is dictated by a suitably constructed R. Bellman functional, and the monitoring of trends in the intensity of the flow processed by the servers is performed through incremental integration of the relevant differential tuning equation. In the analytical design of the controller, the structure of the Bellman function was defined, enabling the formulation of the tuning equation and the derivation of the corresponding Bellman equation. The task of designing the controller thus reduces to solving the Riccati equation, a matrix quadratic equation essential for determining the matrix component of the Bellman function. 
Substituting the identified matrix into the control expression yields the final formulation of the required controller. The regulator is synthesized to maintain a consistent trajectory of state changes in the regulation object's phase space C2, adhering to defined quality parameters of the transient process. The controller must observe both the variations in the intensity of incoming request flows and the dynamics of the transient process of load factor equalization, so as to minimize control errors while respecting constraints that maintain the stability of the control system. The initial parameters of the equalization system are the number of servers in the line and the attenuation coefficient α of the Bellman function. The design of this regulator must address the following inherent physical restrictions. Physical constraint 1:</p><formula xml:id="formula_0">s_1 + s_2 + s_3 + ... + s_n ≤ F_Σ, (1)</formula><p>where F_Σ represents the total bandwidth of the application server line,</p><formula xml:id="formula_1">F_Σ = f_1 + f_2 + f_3 + ... + f_n = const,</formula><p>f_1, f_2, f_3, ..., f_n are the server bandwidths, and s_1, s_2, s_3, ..., s_n are the flow intensities of requests at the inputs of the application servers. Physical constraint 2: the unpredictability of request flow ripples.</p><p>Physical constraint 3: ambiguity regarding the processing duration of each specific request by each application server. The efficiency criterion of the load balancing procedure on the servers is, from a physical perspective, the aggregate of the squares of the discrepancies between the load factors of each pair of application servers. This quantity should be minimized, since a value of zero indicates that the load factors of all servers in the line are identical. Adhering to the aforementioned constraints decreases the risk of server traffic overflow.</p></div>
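Under our reading of the text, the efficiency criterion (the sum of squared pairwise load-factor discrepancies, minimized subject to constraint (1)) can be written as a functional; the symbols ρ_i for the load factor of server i and the horizon T are notational assumptions introduced here:

```latex
J \;=\; \int_{0}^{T} \sum_{i<j} \bigl(\rho_i(t) - \rho_j(t)\bigr)^{2}\,dt \;\to\; \min,
\qquad \text{subject to}\quad s_1 + s_2 + \dots + s_n \le F_{\Sigma}.
```

J vanishes exactly when all ρ_i(t) coincide, matching the statement that a zero value of the criterion means identical load factors across the server line.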
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.5.">Essential Factors for Operating PHP Applications Across Multiple Servers</head><p>Having addressed load balancing, the next pertinent question is: how are sessions managed? Sessions enable programs to circumvent the stateless nature of HTTP and retain information across multiple requests (e.g., authentication status and shopping cart contents). By default, PHP stores sessions on the disk of the server that processes the user's request. For instance, when User A submits a request to Server B, a session for User A is established and retained on Server B (Fig. <ref type="figure">5</ref>) <ref type="bibr" target="#b10">[11]</ref>. Nonetheless, when requests are distributed among numerous servers, this setup is likely to break functionality: users may discover their shopping cart is unexpectedly empty midway through checkout, be arbitrarily redirected to the login page, or find that all their answers in a survey have been erased while completing it. Two alternatives exist to mitigate this: centrally stored sessions and sticky sessions. Centrally stored sessions. Sessions may be stored centrally in a caching server (e.g., Redis or Memcached), a database (e.g., MySQL or PostgreSQL), or a shared filesystem (e.g., NFS or GlusterFS). Of these options, a caching server is the best choice, for two reasons: it is an in-memory key-value store, providing far better responsiveness than an SQL database; and sessions are written once, at the conclusion of a request, whereas an SQL database would be written to on every request, which can result in table locking and sluggish write operations. When storing sessions centrally, it is imperative to ensure that the session store does not become a single point of failure. 
This can be avoided by configuring the store in a clustered arrangement: if one server in the cluster fails, it is not catastrophic, as another can be incorporated to substitute it <ref type="bibr" target="#b14">[15]</ref>. Sticky sessions. An alternative to centrally stored sessions is session stickiness, also known as session persistence: a user's requests are routed to the same server for the duration of their session. Although this may initially appear attractive, there are several possible downsides: will load hot spots emerge within the cluster? What happens when a server is inaccessible, overloaded, or requires an upgrade? Consequently, I do not endorse this strategy.</p></div>
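The idea behind centrally stored sessions can be illustrated with a minimal stand-in. In production this role would be played by Redis or Memcached; here a plain in-memory dictionary with TTLs is a hypothetical substitute that shows the key property: every application server sharing the store sees the same sessions.

```python
import time

class CentralSessionStore:
    """Minimal stand-in for a central session backend. The class, its
    method names, and the TTL default are assumptions for illustration,
    not any real library's API."""

    def __init__(self, ttl_seconds=1800):
        self.ttl = ttl_seconds
        self._data = {}  # session_id -> (expires_at, payload)

    def save(self, session_id, payload):
        # Written once, at the conclusion of a request
        self._data[session_id] = (time.time() + self.ttl, payload)

    def load(self, session_id):
        # Any server in the line can read the session back
        expires_at, payload = self._data.get(session_id, (0, None))
        return payload if time.time() < expires_at else None

store = CentralSessionStore()
store.save("sess-A", {"cart": ["item-1"]})  # written by one server
print(store.load("sess-A"))                 # → {'cart': ['item-1']}, read by another
```

Because the store is shared, the failure scenarios described above (an emptied cart, a surprise logout) disappear, at the cost of making the store itself a component that must be clustered.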
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conclusions</head><p>In several application systems, such as 'client/server' systems with high traffic intensity, the processing of client requests is executed by a series of concurrently operating application servers. Owing to the erratic fluctuations in request flow and the variable duration of request processing by the application servers, these servers, unless specific measures are implemented, experience random and uneven loading: some servers become overloaded and consequently lose requests, while others remain underutilized. In <ref type="bibr" target="#b0">[1]</ref>, a formal balancing method was developed to avert potential short-term overloads of application servers during their operation, thereby promoting the sustainable functioning of the application system amid uncertainties in the dynamics of the aforementioned factors. This study presents a potential option for the technical implementation of this strategy.</p><p>The structural-functional model of load balancing for the application system's server line is delineated; it is designed to operate in conditions where the incoming request flow from clients is random, unexpected, non-stationary, and pulsating. The model utilizes the adaptive principle of reallocating demultiplexed request substreams across application servers through real-time monitoring of fluctuations in the incoming request stream intensity and the current load levels of the application servers. This paradigm necessitates the implementation of the following three processes: 1) shaping of the incoming request flow to prevent short-term server line overloads; 2) demultiplexing of the incoming request stream into multiple parallel substreams according to the number of application servers in the line; 3) equalization of the current load factor values of the application servers.</p><p>The formation of the incoming request stream to the application server line is examined. 
It is demonstrated that the proper functioning of this load-balancing method requires the incoming request traffic to be converted into a sequence of quasi-stationary segments representing a discrete random process. It is essential to align the stationarity intervals of this request flow with the intervals of the discrete control steps for equalizing the load factor values of the application servers. A modification of the established technological approach to packet traffic shaping, referred to as the "token bucket", is proposed. The token generator's rate is determined by the intensity of the incoming request stream: based on the intensity measurements conducted by the meter at each smoothing step, the token generator is calibrated. Consequently, we acquire quasi-stationary segments of the generated request flow.</p><p>A technological procedure for load balancing on application servers has been created, characterized as a deliberate iterative procedure for the real-time redistribution of requests stored in the request queue buffers at the entry points of each application server. This redistribution aims to diminish the disparity between the load factor values of the servers constituting the line. The implemented balancing algorithm enables a specified number of application servers to mitigate the risk of short-term server overloads and ensures the stability of the load-balancing process amid the unpredictable duration of request processing by each server.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Generalized structural and functional model for the allocation of user requests among application servers</figDesc><graphic coords="2,72.00,72.00,381.00,238.56" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Structural and functional paradigm for load balancing between concurrently operating servers of the applied information system. Fig. 2 uses the following designations for functional blocks: 1-smoothing of the input request stream; 2-creation of quasi-stationary segments of incoming request traffic at time intervals ∆ti-smoothing steps (the formation process is executed as a stepwise iterative procedure with step ∆ti, while monitoring fluctuations in the intensity of the incoming request flow); 3-demultiplexing of the resulting input stream of requests at each smoothing interval ∆ti; 4-configurator of the smoothing and alignment procedures (i.e., synchronization of the current load factor values of the application servers shown in Fig. 2), executed by software-controlled clock generators; 5-assessment of the current intensity of the formed input request stream at each smoothing interval ∆ti; 6-buffering of requests (forming a queue of requests for processing by the i-th application server) at the input of the i-th application server; 7-evaluation of the current load factor of the i-th application server at each alignment step; 8-determination of a singular matrix of regulatory relationships among the variables to be aligned (i.e., between the load factors of the servers) at each alignment step; 9-determination of the precise resource shares (i.e., the number of requests) to be allocated among the input queues of the application servers at each alignment step; 10-processing of the relevant task; A-incoming request stream; B-formed flow of requests; B-request sub-streams after demultiplexing. Fig. 2 illustrates that, to create quasi-stationary traffic segments, the non-stationary incoming request stream is first smoothed and structured accordingly. 
The formed input stream is demultiplexed, and the resulting parallel sub-streams are allocated to the application system's servers according to the established load-balancing method. The primary objective of balancing is to approximate a uniform load across the application system's servers as closely as possible. In other words, under unpredictable fluctuations in incoming traffic and varying request processing times on each server, the balancing algorithm must operate to ensure</figDesc><graphic coords="3,72.00,72.00,372.24,227.52" type="bitmap" /></figure>
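The demultiplexing of the formed stream into per-server sub-streams (block 3 above) can be illustrated with a short sketch. Routing each request to the currently shortest queue is an assumed simplification; the paper's demultiplexer may use a different rule.

```python
from collections import deque


def demultiplex(requests, num_servers):
    """Split one formed request stream into parallel sub-streams, one per
    application server, steering each request to the currently shortest
    queue. Illustrative sketch: the least-loaded routing rule is an
    assumption, not the paper's specified demultiplexer."""
    queues = [deque() for _ in range(num_servers)]
    for req in requests:
        target = min(queues, key=len)  # pick the least-loaded sub-stream
        target.append(req)
    return queues
```

With a quasi-stationary input segment this keeps the sub-stream lengths within one request of each other, which is the starting point the subsequent alignment procedure refines.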
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Structural and functional diagram of the request processing pipeline by a series of application servers</figDesc><graphic coords="4,72.00,151.80,365.40,177.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Structural and functional framework for load balancing on application servers. Fig. 4 uses the following designations for functional blocks: 1-setter (generator) of the alignment step magnitude; 2-buffer for the request queue at the application server input; 3-assessment of the current value of the application server load factor (evaluations are conducted at each alignment step); 4-calculation of the determinant of the matrix of regulatory connections among the application servers (obtained by solving the configuration equation); 5-computation of the determinant of the resource share ∆ (specifically, the number of requests to be redistributed at each alignment step between the application servers). The load balancing process is a deliberate iterative procedure for the real-time redistribution of requests held in the request queue buffers at the inputs of each application server. A specific quantity of requests is extracted from one server's queue and transferred to another server's queue according to the established alignment procedure. This redistribution aims to diminish the disparity between the load factor values of the servers comprising the line, thereby balancing the load on each server. The technique operates so that at each alignment step, determined by setter 1 from the measured current load values of each server, it ascertains the current state of the control link matrix 4 (as a result of the incremental solution). This matrix delineates the direction of request redistribution across server pairs, while the resource-share determinant 5, derived from measurements of the current incoming request traffic intensity, specifies the number of requests to be transferred from one server to another. 
This publication does not include a formal synthesis of the adaptive system controller that executes load balancing on the application servers; such a synthesis was conducted in <ref type="bibr" target="#b0">[1]</ref>. The principles of analytical regulator theory are presented in references <ref type="bibr" target="#b8">[9]</ref><ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref><ref type="bibr" target="#b13">[14]</ref>. Only the following should be noted. The objective of synthesizing</figDesc><graphic coords="5,72.00,75.00,354.00,256.20" type="bitmap" /></figure>
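One alignment step of the iterative redistribution described above can be sketched as follows. This is a hypothetical pairwise simplification of the matrix-based rule (blocks 4 and 5): instead of solving the configuration equation, it moves half the load-factor gap, in whole requests, from the most-loaded to the least-loaded server; the names and the half-imbalance share are assumptions.

```python
from collections import deque


def alignment_step(queues, capacities):
    """One discrete alignment step: estimate each server's load factor
    (queue length relative to its service capacity), then shift a share
    of queued requests from the most-loaded to the least-loaded server
    to shrink the spread. Pairwise simplification of the paper's
    matrix-based redistribution; the half-gap share is an assumption."""
    loads = [len(q) / c for q, c in zip(queues, capacities)]
    hi = max(range(len(loads)), key=loads.__getitem__)
    lo = min(range(len(loads)), key=loads.__getitem__)
    # Resource share delta: half the load-factor gap, in whole requests.
    delta = int((loads[hi] - loads[lo]) * capacities[lo] / 2)
    for _ in range(min(delta, len(queues[hi]))):
        queues[lo].append(queues[hi].popleft())
    return loads
```

Repeating this step at each interval set by the step generator yields the deliberate iterative equalization described in the conclusions, without assuming anything about per-request service times.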
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Basic load balancer schematic</figDesc><graphic coords="6,72.00,321.72,353.40,195.00" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">An Approach to Modernization of the Hat and COST 231 Model for Improvement of Electromagnetic Compatibility in Premises for Navigation and Motion Control Equipment</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bakhtiiarov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Konakhovych</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Lavrynenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/MSNMC.2018.8576260</idno>
	</analytic>
	<monogr>
		<title level="m">5th International Conference on Methods and Systems of Navigation and Motion Control (MSNMC)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="271" to="274" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Community-based Event Dissemination with Optimal Load Balancing</title>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Comput</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1857" to="1869" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Schedule First Manage Later: Network-Aware Load Balancing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Nahir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Orda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Raz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. IEEE INFOCOM</title>
				<meeting>IEEE INFOCOM</meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="510" to="514" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Economies of Scale in Parallel-Server Systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Doncel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Aalto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Ayesta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. IEEE INFOCOM</title>
				<meeting>IEEE INFOCOM</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Wavelet-Based Steganographic Method for Text Hiding in an Audio Signal</title>
		<author>
			<persName><forename type="first">O</forename><surname>Veselska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">15</biblScope>
			<biblScope unit="page">5832</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Empirical Wavelet Transform in Speech Signal Compression Problems</title>
		<author>
			<persName><forename type="first">R</forename><surname>Odarchenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/PICST54195.2021.9772156</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="599" to="602" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Reconfigurable Scalable State Machine Replication</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Boger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Fraga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Alchieri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">LADC</title>
		<imprint>
			<biblScope unit="page" from="1" to="8" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Achieving High-Throughput State Machine Replication in Multi-Core Systems</title>
		<author>
			<persName><forename type="first">N</forename><surname>Santos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Schiper</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICDCS</title>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Protected Voice Control System of UAV</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lavrynenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/APUAVD-47061.2019.8943926</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 5th International Conference Actual Problems of Unmanned Aerial Vehicles Developments (APUAVD)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="295" to="298" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A Procedure for Failures Diagnostics of Aviation Radio Equipment</title>
		<author>
			<persName><forename type="first">O</forename><surname>Solomentsev</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACIT58437.2023.10275337</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings-International Conference on Advanced Computer Information Technologies</title>
				<meeting>-International Conference on Advanced Computer Information Technologies</meeting>
		<imprint>
			<publisher>ACIT</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="100" to="103" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Method of Binary Detection of Small Unmanned Aerial Vehicles</title>
		<author>
			<persName><forename type="first">D</forename><surname>Bakhtiiarov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybersecurity Providing in Information and Telecommunication Systems</title>
		<imprint>
			<biblScope unit="volume">3654</biblScope>
			<biblScope unit="page" from="312" to="321" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Marandi</surname></persName>
		</author>
		<title level="m">Filo: Consolidated Consensus as a Cloud Service</title>
				<imprint>
			<publisher>ATC</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">DARE: High-Performance State Machine Replication on RDMA Networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Poke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hoefler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">HPDC</title>
		<imprint>
			<biblScope unit="page" from="107" to="118" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Performance Optimization for State Machine Replication based on Application Semantics</title>
		<author>
			<persName><forename type="first">W</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Syst. Software</title>
		<imprint>
			<biblScope unit="volume">122</biblScope>
			<biblScope unit="issue">C</biblScope>
			<biblScope unit="page" from="96" to="109" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Lorch</surname></persName>
		</author>
		<title level="m">Leveraging Lightweight Virtual Machines to Easily and Efficiently Construct Fault-Tolerant Services</title>
				<imprint>
			<publisher>NSDI</publisher>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
