<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Stateful cluster leader failover models and methods based on Replica State Discovery Protocol</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Toliupa</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maksym Kotov</string-name>
          <email>maksym_kotov@ukr.net</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Buchyk</string-name>
          <email>buchyk@knu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Juliy Boiko</string-name>
          <email>boiko_julius@ukr.net</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Shtanenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Military Institute of Telecommunication and Information Technologies named after the Heroes of Kruty, Street of Princes of Ostrozki 45/1</institution>
          ,
          <addr-line>Kyiv, 01011</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>60 Volodymyrska St., Kyiv, 01033</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1919</year>
      </pub-date>
      <abstract>
        <p>High availability is a cornerstone of fault tolerance in production clusters. The following article presents novel methods and models for achieving rapid cluster leader failover based on the Replica State Discovery Protocol (RSDP). Firstly, RSDP is described and evaluated as a method of achieving consensus within homogeneous multiagent distributed systems. This paper provides a novel mathematical model that describes the internal procedure of the said protocol. Additionally, diagrams and algorithm steps are provided to further simplify integration of RSDP into modern Decentralized Coordination Networks (DCNs). Secondly, a new state reducer is developed that enables a synchronized leader election process. Its mathematical model and code implementation written in JavaScript are provided and comply with the established RSDP extension interface. Evaluation and implications of the newly created leader election protocol are provided to further expand the horizons of DCN coordination. Lastly, this article explores the practical implications of the mentioned state reducer in the context of stateful cluster leader failover. Three different approaches and models based on the proposed consensus algorithm to mitigate spontaneous critical events are modeled and assessed. Based on mathematical models of failure probability, failover duration, and communication overhead, the said approaches were compared, and recommendations for their application were provided. Overall, this article is aimed at further development of RSDP and describes novel approaches towards relevant coordination issues inside clusters with high demands for availability and fault tolerance.</p>
      </abstract>
      <kwd-group>
        <kwd>distributed computing</kwd>
        <kwd>Decentralized Coordination Networks (DCNs)</kwd>
        <kwd>Replica State Discovery Protocol (RSDP)</kwd>
        <kwd>cluster state management models</kwd>
        <kwd>cluster failover management models</kwd>
        <kwd>leader election protocol based on RSDP and deterministic operations</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Being tasked with the complex design of a modern distributed system inevitably leads to a myriad
of convoluted architectural decisions aimed at achieving high availability and fault tolerance.
Throughout the history of computer science and the Internet technology industry, the
cornerstone problem has been threefold: consistency, availability, and partition tolerance, also known as the CAP theorem [1-7].</p>
      <sec id="sec-1-1">
        <title>The CAP theorem</title>
        <p>
          At its core, the theorem asserts that a service can possess only two of three properties: it can be either
consistent and available, available and tolerant to partitioning, or consistent and tolerant to
partitioning [
          <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6 ref7">1-7</xref>
          ]. It has to be stated, though, that business-centric approaches have produced variants of
the said theorem that weigh resource utilization, complexity, and service quality instead during
the decision-making process.
        </p>
        <p>
          Nevertheless, the crux is the same: it is assumed that no model, method, approach, or
methodology exists that can completely satisfy every property of this group. That assumption still
holds strong, since mechanisms that comply with one subset directly obstruct efforts to
achieve the other [
          <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6 ref7">1-7</xref>
          ].
        </p>
        <p>It has been known throughout the history of research into building reliable systems that
trying to achieve fault tolerance while relying on a single instance is futile. This can be attributed to
the following reasons:
1. No matter how thoroughly the source code is tested, extremely rare race conditions can
still happen when dealing with external devices or even replicated systems.
2. Physical destruction of the datacenter will nullify any logically sound and fault-tolerant
mechanism that was preliminarily implemented. That is especially problematic during wartime or a nation-wide crisis.
3. Even if we assume that the software is logically consistent, a random radiation ray could flip
some bits in the system and lead to a catastrophe.</p>
      </sec>
      <sec id="sec-1-2">
        <title>Maintenance downtime</title>
        <p>At some point, the running instance will simply have to be put under maintenance, and this
will effectively stop its operation for a while.</p>
        <p>
          While building cloud systems that require high availability, an architect eventually has to
consider replication and clusterization techniques [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref8 ref9">8-12</xref>
          ]. Within the context of this article, we
differentiate between these terms in the following way:
        </p>
      </sec>
      <sec id="sec-1-3">
        <title>Replication</title>
        <p>
          is a distributed topology of a homogeneous multiagent system, where each
instance is an equal participant of the said network. Each replica may have the same set of initial parameters,
program code, logic, and ongoing state. Therefore, replicated environments provide rapid
disaster recovery, since every other instance can effectively take over the responsibilities of the
one that failed [
          <xref ref-type="bibr" rid="ref10 ref8 ref9">8-10</xref>
          ].
        </p>
      </sec>
      <sec id="sec-1-4">
        <title>Clusterization</title>
        <p>
          refers to a distributed system organization where the overall state is still
synchronized and coordinated, but the purposes and concurrent tasks on different nodes
differ. The common purpose of building clustered systems is state splitting. Other
examples may include hot and warm standby servers that replicate events from the main
machine but still follow the orders from a leader and are usually restricted in their
functionality [
          <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
          ].
        </p>
        <p>Therefore, the purpose of this article is to model multiple approaches towards coordinating
replicated and clustered decentralized networks. As a result of the conducted evaluation, a set of
practical recommendations is proposed to simplify the decision-making process during the system
design stage.</p>
        <p>Additionally, it is the intent of this paper to develop a mathematical model for the Replica State
Discovery Protocol, which serves as a framework for performing cluster-wide state synchronization
and coordination. RSDP provides a basis upon which a set of logical extensions could be built to
achieve various consensus effects.</p>
        <p>Using the mentioned consensus basis, multiple leader election mechanisms were developed and
modeled. Each of these mechanisms is characterized by a set of unique security, efficiency, and
resilience properties, allowing it to be tailored to the needs of a specific environment. The said properties
were compared based on probability and computational complexity assessments, and as a result,
recommendations for their application were provided.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Replica State Discovery Protocol</title>
      <p>
        The fundamental problem in managing resilience through redundancy is coordination. Since the
managed objects are by definition separated, they usually do not have any common memory location
that would allow them to successfully establish a synchronization algorithm based on classical
concurrency control mechanisms such as locks, mutexes, or semaphores [
        <xref ref-type="bibr" rid="ref16 ref17 ref18">16-18</xref>
        ].
      </p>
      <p>
        There are quite a few solutions that allow for both immediate and eventually consistent
consensus-achieving. An example of the first would be total order broadcast, or, in other words,
complete replication of events in the original order. Eventually-consistent algorithms tend to take
the process in steps to reach consensus regarding a proposed value. Examples of such algorithms
include Raft, Paxos, Ring, ZooKeeper, and many others [
        <xref ref-type="bibr" rid="ref19 ref20 ref21">19-21</xref>
        ].
      </p>
      <p>
        In that regard, the Replica State Discovery Protocol could be called one of the eventually
consistent algorithms. RSDP provides not only the basis for achieving consensus but also serves as a
distributed coordination framework, allowing for various extensions and handling cluster events.
The difference between classical leader election or consensus-reaching protocols and RSDP is the
intention and flexibility they provide. The former usually concentrate on the process of voting for a
single common value. In the meantime, RSDP provides a foundation not only for single-state
consensus but also for synchronizing complex state setups and merges [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <sec id="sec-2-1">
        <title>2.1. Local Area Network Simulation based on AMQP</title>
        <p>
          RSDP at its core was initially designed on the basis of a local area network simulation built on
the Advanced Message Queuing Protocol. The details of its implementation, efficiency, security,
implications, and resilience are outlined in a separate article, but for the purpose of theoretical
context, a few words have to be said to cover potential questions regarding the reliability of RSDP
and its message passing process [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ].
        </p>
        <p>
          First and foremost, the said network simulation is built on top of the message queueing protocol.
In that context, AMQP stands as one of the most popular solutions in coordinating and routing
complex message network topologies and is continuously gaining momentum in the field of research
and engineering [
          <xref ref-type="bibr" rid="ref23 ref24 ref25">23-25</xref>
          ].
        </p>
        <p>Figure 1 shows the conceptual operation basis of AMQP and its components:</p>
        <p>
          The fundamental idea of AMQP is the separation of client and server, which are called producer
and consumer. Instead of direct communication, the message goes through the broker and its queue,
thus allowing for alleviating direct dependency between clients and servers. Additionally, AMQP
describes the achievement of fundamental communication properties such as resilience, durability,
congestion control, security, etc. [
          <xref ref-type="bibr" rid="ref23 ref24 ref25">23-25</xref>
          ].
        </p>
        <p>Local Area Network Simulation (SLAN) leverages these capabilities to establish a secure, resilient,
and isolated LAN-like environment. In its basis, SLAN describes the provisioning of the two basic
communication media: direct communication links and a broadcast link.</p>
        <p>Figure 2 shows the interaction media of the SLAN:</p>
        <p>The SLAN operation basis relies on the two main capabilities described within the context of AMQP:
fanning a message out to all bound queues (broadcast communication) and sending a message to a single bound
queue by its routing key (direct communication routing).</p>
        <p>
          AMQP defines the durability, mirroring, and quorum mechanisms for its queues and is considered
to be a well-documented, tested, resilient, and flexible foundation for communication media. SLAN,
and by implication RSDP, rely on these properties to build their own abstraction layers [
          <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
          ].
        </p>
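        <p>The two SLAN communication media can be illustrated with a minimal in-memory sketch (the class and method names below are illustrative and not part of the SLAN specification; a real deployment would back these channels with AMQP exchanges and queues):</p>

```javascript
// Minimal in-memory sketch of the two SLAN media: a broadcast link
// (fanout to every bound queue) and direct links (routing-key lookup).
// Names are illustrative; a real SLAN builds these on AMQP primitives.
class SlanBroker {
  constructor() {
    this.queues = new Map(); // routingKey -> array of buffered messages
  }
  // Bind a queue under a routing key (one queue per replica address).
  bind(routingKey) {
    if (!this.queues.has(routingKey)) this.queues.set(routingKey, []);
  }
  // Direct communication link: deliver to a single bound queue.
  send(routingKey, message) {
    const queue = this.queues.get(routingKey);
    if (queue) queue.push(message);
  }
  // Broadcast link: deliver to every bound queue.
  broadcast(message) {
    for (const queue of this.queues.values()) queue.push(message);
  }
  // Drain all messages buffered for a routing key.
  consume(routingKey) {
    const queue = this.queues.get(routingKey) || [];
    return queue.splice(0, queue.length);
  }
}
```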
      </sec>
      <sec id="sec-2-2">
        <title>2.2. RSDP phases and consensus process</title>
        <p>
          RSDP has its own dedicated article that describes every operation, state change, and
consensus-oriented process in detail [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ]. This section provides a succinct overview of the RSDP phases with some
amendments and clarifications for the operation sequences.
        </p>
        <p>To provide capabilities for cluster-wide state operations, RSDP describes its lifecycle in a few
phases. Each phase is respectively responsible for the introduction of a new node, state sharing, and final state
derivation. The last phase is responsible for handling the shutdown lifecycle event.</p>
        <p>
          Since each distinct phase somehow interferes with the cluster state stored on each replica
individually, every instance has a concurrency control mechanism based on a dedicated
synchronization mutex. That mutex prevents multiple simultaneous cluster events from interfering
with each other by restricting stored state access to a single active phase [
          <xref ref-type="bibr" rid="ref16 ref17 ref18">16-18</xref>
          ].
During the initial (“DEBATES”) phase, a new replica acquires the mutex and broadcasts a message
announcing its presence in the cluster. This message is sent through the broadcast channel and is
meant to be received by all cluster members. The new
replica would then buffer answers from the cluster members and perform an aggregation of the state
as dictated by the state reducers, before sharing the derived state through a broadcast
exchange to all the members. The final operation of the initial phase is to release the mutex and wait
for any other cluster-wide events.
During the “SHARE” phase, each replica broadcasts its current state to the
cluster. Share messages would then be buffered for a configured amount of time to avoid redundant
replica reloads. After the timeout elapses, the protocol engine would acquire the mutex, and the
share messages would be validated and aggregated, giving a holistic view of the cluster-wide state.
Subsequently, the protocol engine would release the mutex and wait for any other occurring events
in the system.
During the “CLOSE” phase, when a replica goes down,
it signals the others; the remaining members of the cluster would then first acquire the mutex, perform the
necessary state updates, and release the mutex. No additional synchronization is necessary, as the
protocol assumes that every operation performed on the state is deterministic and made within the
scope of a set of clean functions that do not rely on side effects.
        </p>
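        <p>The mutex-guarded phase handling described above can be sketched as follows; this is a simplified single-process model with assumed names, not the actual protocol engine:</p>

```javascript
// Sketch of the per-replica mutex that serializes RSDP phases.
// Each phase acquires the lock, mutates the stored state, and releases it,
// so concurrent cluster events cannot interleave their state access.
class PhaseMutex {
  constructor() {
    this.tail = Promise.resolve(); // chain of pending critical sections
  }
  // Run fn exclusively; later phases queue up behind earlier ones.
  runExclusive(fn) {
    const result = this.tail.then(fn);
    this.tail = result.catch(() => {}); // keep the chain alive on errors
    return result;
  }
}

class Replica {
  constructor(address) {
    this.address = address;
    this.state = { members: [] };
    this.mutex = new PhaseMutex();
  }
  // A share phase: the buffered messages are aggregated under the lock.
  sharePhase(shareMessages) {
    return this.mutex.runExclusive(() => {
      const members = new Set(this.state.members);
      for (const msg of shareMessages) members.add(msg.address);
      this.state.members = [...members].sort();
      return this.state;
    });
  }
}
```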
      </sec>
      <sec id="sec-2-3">
        <title>2.3. RSDP mathematical model</title>
        <p>Let ℛ = {r₁, r₂, …, rₙ} be the set of replicas in the distributed system. The replicas communicate
over a network represented as a graph G = (ℛ, E), where E ⊆ ℛ × ℛ denotes the set of
communication channels between replicas.</p>
        <p>Each replica rᵢ maintains a local state sᵢ, which is an element of the global state space S. The
global state of the system is a tuple s = (s₁, s₂, …, sₙ). Message exchanges drive state
transitions.</p>
        <p>The protocol defines the following messages:
• HELLO — broadcast by a replica rᵢ to announce its presence in the cluster;
• STATUS(sⱼ) — sent by a replica rⱼ in response, carrying its current state; its transfer
is denoted as STATUS(sⱼ): rⱼ → rᵢ.</p>
        <p>During the initiation, each replica rᵢ broadcasts HELLO to all other replicas.
Upon receiving HELLO from rᵢ, a replica rⱼ replies with STATUS(sⱼ) containing its
current state sⱼ.</p>
        <p>∀rⱼ ∈ ℛ, the initiating replica aggregates the received states into an aggregated
state s_agg. The aggregation function A combines individual states:
s_agg = A({sⱼ ∣ STATUS(sⱼ) received})</p>
        <sec id="sec-2-3-1">
          <title>Remaining replicas adjust their states to reflect the departure:</title>
          <p>During the share phase, each replica updates its state from the received aggregated states:
sⱼ ← U(sⱼ, {s_agg ∣ SHARE(s_agg) received}).
When a replica rᵢ shuts down, it broadcasts a CLOSE message: CLOSE: rᵢ → ℛ ∖ {rᵢ}.
∀rⱼ ∈ ℛ ∖ {rᵢ}, sⱼ ← D(sⱼ, rᵢ),
where D is a function that removes references to rᵢ from sⱼ.</p>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>2.4. RSDP formal definition and properties</title>
        <p>Define S as the set of possible states for a replica. Each state sᵢ may include:
• Membership list Mᵢ ⊆ ℛ;
• Resource utilization uᵢ ∈ ℝ⁺;
• Other reducer- and application-specific data.</p>
        <sec id="sec-2-4-1">
          <title>Define ℳ as the set of all possible messages:</title>
          <p>Each message m belongs to
ℳ = {HELLO, STATUS, SHARE, CLOSE} × Payload.
The aggregation function A combines multiple states:
A({s₁, s₂, …, sₙ}) = ⋃ₖ Reducerₖ({s₁, s₂, …, sₙ})</p>
        </sec>
        <sec id="sec-2-4-2">
          <title>This could be defined as:</title>
          <p>For membership lists: M_agg = ⋃ᵢ Mᵢ;
For resource utilization: u_agg = (1 / |{u₁, u₂, …, uₙ}|) · Σᵢ uᵢ.
The state update function U updates the local state based on received aggregated states:
sᵢ ← U(sᵢ, {s_agg¹, s_agg², …, s_aggᵏ})</p>
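          <p>Under these definitions, the union and averaging reducers can be sketched as follows (the function names and the state shape are assumptions for illustration):</p>

```javascript
// Sketch of the aggregation function A: each reducer combines one
// state component, and A merges the per-reducer results into s_agg.
// Membership lists are merged by set union.
function aggregateMembership(states) {
  const members = new Set();
  for (const s of states) for (const m of s.members) members.add(m);
  return [...members].sort();
}

// Resource utilization is merged by arithmetic mean.
function aggregateUtilization(states) {
  const total = states.reduce((sum, s) => sum + s.utilization, 0);
  return total / states.length;
}

// A({s1, ..., sn}): combine the per-component reducer outputs.
function aggregate(states) {
  return {
    members: aggregateMembership(states),
    utilization: aggregateUtilization(states),
  };
}
```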
        </sec>
        <sec id="sec-2-4-3">
          <title>This may involve:</title>
          <p>Updating membership lists: Mᵢ ← ⋃ M_agg;</p>
        </sec>
        <sec id="sec-2-4-4">
          <title>Adjusting resource utilization estimates;</title>
        </sec>
        <sec id="sec-2-4-5">
          <title>Updating any other application-specific data.</title>
        </sec>
        <sec id="sec-2-4-6">
          <title>Concurrency control</title>
          <p>To prevent race conditions, a dedicated mutex is used during critical sections of the
protocol, particularly during state updates.
Let μᵢ be a mutex for replica rᵢ. Then, state updates are performed under the lock μᵢ:
lock(μᵢ); sᵢ ← A({s₁, s₂, …, sₙ}); unlock(μᵢ)</p>
          <p>As was previously stated, RSDP is based on eventual consistency, where all replicas converge to
the same state after a finite number of message exchanges.</p>
          <p>For any two replicas rᵢ and rⱼ, their states sᵢ and sⱼ satisfy:
lim(t→∞) Pr(sᵢ(t) = sⱼ(t)) = 1</p>
          <p>The protocol ensures that messages are eventually delivered and state updates occur. If a message
m is sent from rᵢ to rⱼ, then m will be delivered to rⱼ after some finite delay δ. In case some messages
are lost, RSDP defines repeatable synchronization sessions as a contingency.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Leader Election Reducer for the RSDP</title>
      <p>
        The logic of storing, retrieving, validating, aggregating, and updating a state is defined in a set of preconfigured
reducers. The original RSDP article describes the benefits of such a modular approach.
Additionally, it provides a thorough explanation of the interface that every reducer must follow to
successfully integrate with the protocol, including the list of defined methods [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Not every method has to operate on shared
states. For example, the discovery of replica members does not require a dedicated
implementation, since it can be derived from the sender addresses.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Mathematical model of the leader election process</title>
        <p>The leader election reducer is an extension that performs a replicated, deterministic, and holistic decision on
the trusted entity based on incoming cluster events. Having discussed the RSDP foundation and the
basis for the leader election reducer, let us define an abstract description of the consensus process.</p>
        <sec id="sec-3-1-1">
          <title>Assume</title>
          <p>A = {a₁, a₂, …, aₙ}: the set of all replica addresses;
a_self ∈ A: the address of the current replica instance, a_self = aᵢ;
M ⊆ A: the current set of replica members known to the replica;
L ∈ M: the address of the current leader.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Initially:</title>
          <p>M = ∅;
L = ⊥ (temporarily undefined).</p>
          <p>The status aggregation method accepts, as an input, a list of status messages
m_status = [M¹_STATUS, M²_STATUS, …, Mⁿ_STATUS], where each M_STATUS contains a sender address
M_STATUS.address ∈ A. During its execution it performs the following operations:
Extract addresses: A′ = {Mᵢ_STATUS.address ∣ i = 1, 2, …, n};
Sort addresses: arrange A′ in ascending order according to a total order (≤) on addresses A,
A_sorted = Sort(A′); this operation could also include additional sorting criteria.</p>
          <p>Determine leader: L′ = max(A_sorted);
Initialize state (if M is ∅ or L is ⊥): M ← A_sorted, L ← L′.</p>
          <p>As a result of its execution, the new state components are returned as the following tuple:
replicaMembers = A_sorted, currentLeader = L′.</p>
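          <p>The status aggregation steps above can be sketched as follows (the function name and message shape are reconstructions, not the exact reducer code):</p>

```javascript
// Sketch of status aggregation: extract sender addresses, sort them
// deterministically, and pick the highest address as the leader.
function aggregateStatusState(statusMessageBuffer) {
  // Extract addresses: A' = { M_STATUS.address }
  const addresses = statusMessageBuffer.map((m) => m.address);
  // Sort by the total order on addresses so every replica agrees.
  const sorted = [...new Set(addresses)].sort();
  // Determine leader: L' = max(A_sorted)
  const leader = sorted[sorted.length - 1];
  return { replicaMembers: sorted, currentLeader: leader };
}
```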
          <p>Method isLeader(L′) expects an L′ ∈ A as an input and performs the
following transformation:</p>
          <p>isLeader = {true if L′ = a_self; false if L′ ≠ a_self}</p>
          <p>Method aggregateShareState(shareMessageBuffer) expects a list of share messages m_share =
[M¹_SHARE, M²_SHARE, …, Mⁿ_SHARE], where each M_SHARE contains the replicaMembers and
currentLeader components. Below are multiple approaches to aggregate the state components
replicaMembers and currentLeader. The first approach relies on the latest source of truth:</p>
          <p>Select last message: M_last = Mⁿ_SHARE;</p>
        </sec>
        <sec id="sec-3-1-3">
          <title>Extract state components:</title>
          <p>a. M′ = M_last.replicaMembers;
b. L′ = M_last.currentLeader.</p>
          <p>As a result, it returns replicaMembers = M′, currentLeader = L′.</p>
          <p>The second approach computes electoral points
rather than relying on the latest message and could be described as follows: each participant
assigns points to the candidates. Consider a set of share messages defined as
m_share = [M¹_SHARE, M²_SHARE, …, Mⁿ_SHARE], where every Mⱼ ∈ m_share contains a ranked list of replica
members Rⱼ = [rⱼ,₁, rⱼ,₂, …, rⱼ,ₖ], where kⱼ = |Rⱼ| is the number of candidates in Mⱼ, with rⱼ,₁ being
the most desired leader and rⱼ,ₖ being the least desired leader.</p>
          <p>Initialize a score set P with a zero score for every candidate.
For each message Mⱼ ∈ m_share:
o For each candidate rⱼ,ᵢ at position i in Rⱼ, compute the weight wⱼ,ᵢ = 2^(kⱼ − i);
o Update the candidate's score: p(rⱼ,ᵢ) ← p(rⱼ,ᵢ) + wⱼ,ᵢ.</p>
          <p>The third approach selects the most frequent shared state. Let ℋ = ∅ be a multiset of hashed state components.</p>
          <p>For each message Mⱼ ∈ m_share:
• Extract state components:
o MRⱼ = Mⱼ.replicaMembers;
o Lⱼ = Mⱼ.currentLeader.
• Form the state tuple:
o Tⱼ = (MRⱼ, Lⱼ).
• Compute the hash of the state tuple:
o hⱼ = h(Tⱼ), where h is a hash function.
• Add hⱼ to ℋ:
o ℋ ← ℋ ∪ {hⱼ}.</p>
          <p>Identify the hash h* with the highest frequency in ℋ:
h* = arg max(frequency(hⱼ, ℋ)).</p>
          <p>Find T* = (M′, L′) such that h(T*) = h*.</p>
          <p>As a result, it returns replicaMembers = M′, currentLeader = L′.</p>
        </sec>
        <sec id="sec-3-1-4">
          <title>Determine the leader:</title>
          <p>Let p_c ∈ P be the total score for candidate c, where P is the set of total scores for each candidate; then
p_c = Σⱼ Σᵢ δ(rⱼ,ᵢ, c) · wⱼ,ᵢ,
where δ(rⱼ,ᵢ, c) is the Kronecker delta function:
δ(rⱼ,ᵢ, c) = {1 if rⱼ,ᵢ = c; 0 if rⱼ,ᵢ ≠ c}.</p>
          <p>o Identify the candidate L′ with the highest total score: L′ = arg max(p_c).
o In case of a tie, apply a deterministic tie-breaker, such as selecting the candidate with
the highest address according to the total order ≤ on A.</p>
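          <p>The electoral-points scoring and tie-breaking described above can be sketched as follows (the function name, and the assumption that each share message carries a ranked replicaMembers list with the most desired leader first, are illustrative):</p>

```javascript
// Sketch of electoral-points aggregation: position i in a ranked list of
// k candidates earns weight 2^(k - i); ties break to the highest address.
function aggregateByElectoralPoints(shareMessageBuffer) {
  const scores = new Map(); // candidate -> total points
  for (const msg of shareMessageBuffer) {
    const ranked = msg.replicaMembers; // most desired leader first
    const k = ranked.length;
    ranked.forEach((candidate, index) => {
      const weight = 2 ** (k - (index + 1)); // w = 2^(k - i), i is 1-based
      scores.set(candidate, (scores.get(candidate) || 0) + weight);
    });
  }
  // L' = arg max(p_c), with a deterministic tie-breaker on the address.
  let leader = null;
  for (const [candidate, score] of scores) {
    if (
      leader === null ||
      score > scores.get(leader) ||
      (score === scores.get(leader) && candidate > leader)
    ) {
      leader = candidate;
    }
  }
  return { replicaMembers: [...scores.keys()].sort(), currentLeader: leader };
}
```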
          <p>As a result, it returns replicaMembers = M′, currentLeader = L′.</p>
          <p>The state validation method expects a set M′ ⊆ A and a
leader L′ ∈ A. During its execution it performs the following operations:</p>
          <p>Validate members: areValidMembers = (M′ ≠ ∅) ∧ (a_self ∈ M′);</p>
          <p>Validate leader: isLeaderValid = L′ ∈ M′.</p>
        </sec>
        <sec id="sec-3-1-5">
          <title>Output could be described then as:</title>
          <p>(areValidMembers ∧ isLeaderValid) ⟹ {replicaMembers: M′, currentLeader: L′};
¬(areValidMembers ∧ isLeaderValid) ⟹ ∅.</p>
        </sec>
        <sec id="sec-3-1-6">
          <title>Reload determination</title>
          <p>The reload-check method expects M′ ⊆ A and L′ ∈ A. During its execution it performs the following operations:</p>
          <p>Compare members: membersChanged = (M′ ≠ M);
Compare leader: leaderChanged = (L′ ≠ L);</p>
          <p>Determine reload necessity: shouldReload = membersChanged ∨ leaderChanged.</p>
        </sec>
        <sec id="sec-3-1-7">
          <title>Returns a Boolean that indicates whether a client should be notified about the state change.</title>
          <p>Method updateState(replicaMembers, currentLeader) expects an M′ ⊆ A and an L′ ∈ A. During its
execution it performs: if (M′ ≠ ∅ ∧ L′ ∈ M′) then (M ← M′, L ← L′).</p>
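          <p>The validation, reload-check, and update steps can be sketched together as a simplified reducer object (the validateState and shouldReload names are reconstructions; updateState follows the text):</p>

```javascript
// Sketch of the validation, reload-check, and update steps of the leader
// election reducer, over local state M (replicaMembers) and L (currentLeader).
const reducer = {
  replicaMembers: [],
  currentLeader: null,

  // areValidMembers and isLeaderValid must both hold.
  validateState(selfAddress, members, leader) {
    const areValidMembers = members.length > 0 && members.includes(selfAddress);
    const isLeaderValid = members.includes(leader);
    return areValidMembers && isLeaderValid
      ? { replicaMembers: members, currentLeader: leader }
      : null;
  },

  // shouldReload = membersChanged OR leaderChanged.
  shouldReload(members, leader) {
    const membersChanged =
      JSON.stringify(members) !== JSON.stringify(this.replicaMembers);
    const leaderChanged = leader !== this.currentLeader;
    return membersChanged || leaderChanged;
  },

  // if (M' is non-empty AND L' in M') then (M <- M', L <- L').
  updateState(members, leader) {
    if (members.length > 0 && members.includes(leader)) {
      this.replicaMembers = members;
      this.currentLeader = leader;
    }
  },
};
```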
          <p>The close aggregation method depends on the implementation of the
leader election function and whether it is completely deterministic. If the sorting is done with an
inclusion of a locally asserted context, this method is supposed to trigger the initial phase of the
RSDP to achieve consistency.</p>
        </sec>
        <sec id="sec-3-1-8">
          <title>Close message aggregation</title>
          <p>Otherwise, it expects a list of close messages m_close = [M¹_CLOSE, M²_CLOSE, …, Mⁿ_CLOSE], where each
M_CLOSE has a sender address M_CLOSE.address ∈ A.</p>
        </sec>
        <sec id="sec-3-1-9">
          <title>During its execution it performs the following operations:</title>
          <p>Extract closing addresses: A_cl = {Mᵢ_CLOSE.address ∣ i = 1, 2, …, n};
Update replica members: M ← M ∖ A_cl;
Recalculate leader: L ← max(M) if M ≠ ∅, else ⊥.</p>
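          <p>The close aggregation steps can be sketched as follows (the function name follows the naming pattern of the other reducer methods and is therefore an assumption):</p>

```javascript
// Sketch of close-message aggregation: remove departed replicas and
// recalculate the leader deterministically from the remaining members.
function aggregateCloseState(closeMessageBuffer, replicaMembers) {
  // A_cl = { M_CLOSE.address }
  const closing = new Set(closeMessageBuffer.map((m) => m.address));
  // M <- M \ A_cl
  const remaining = replicaMembers.filter((a) => !closing.has(a));
  // L <- max(M) if M is non-empty, else the leader is undefined.
  const leader = remaining.length > 0
    ? remaining.reduce((max, a) => (a > max ? a : max))
    : null;
  return { replicaMembers: remaining, currentLeader: leader };
}
```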
          <p>The different approaches towards the implementation of aggregateShareState(shareMessageBuffer)
outlined in this section contribute different properties to the election process. The method based on
the latest source of truth is characterized by low computational intensity; while reasonable in
trusted and stable environments, it is susceptible to network congestion or failures. This approach is
not suitable for networks that have strict requirements for Byzantine fault tolerance.</p>
          <p>The method based on a popular vote provides higher resilience to both intentional and unintentional
discrepancies during the consensus process. It is a suitable approach for systems that operate in
an unstable or restricted environment. It is additionally characterized by increased computational
complexity, though this could be reduced by taking into account only scalar values to avoid hashing
overhead. This approach is the most resilient among the others against intentionally hostile behavior,
since, to perform an action, one has to gather the majority of votes.</p>
          <p>Finally, the last method provides a solution based on the electoral points approach. It is the most
stable and resilient option among the three in the context of unstable connections, due to
its ability to downgrade votes that have lost some portion of the state and hence have a limited view
of the global state set. It is, however, not resilient towards intentionally hostile actions, since the state
set cardinality could be artificially inflated.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Implementation of the leader election reducer</title>
        <p>The following section describes a practical implementation of the leader election reducer using
JavaScript and the reducer interface defined by the RSDP. The
following algorithm assumes that every cluster member can trust the environment and implements
the first approach from the previous section. Such an assumption is a common case when building
coordinated replication between internal services to achieve high availability. From this point on, we
will refer to this implementation as the Leader Election Reducer (LER).</p>
        <p>Figure 4 shows the initial aggregation implementation logic:</p>
        <p>The initial state of the LER, as defined by the model, is comprised of an empty set of replica
members and an undefined leader. This serves as an example of an abstract derived reducer subset,
since it does not have an initial state to share with the cluster.</p>
        <p>The status aggregation method hence relies not on the data provided by the cluster members but on
the messages themselves and their metadata. Its operation is simple: the new leader is defined as the
replica that has the highest address. The sorting operation here is not redundant, since, to achieve
determinism, every node must have the same dataset and ordering. Subsequently, isLeader
abstracts out the internal store and provides a simple answer to the client as to whether it is a leader.</p>
        <p>The validation method is responsible for verifying the consistency of the data received
from the aggregated state. A leader is valid if it is in the set of known members, and the member set is
valid if it is not empty. Then, the reload-check method decides whether the state has changed
and whether the client should be notified. At last, updateState simply sets a new state if it was
provided.</p>
        <p>Figure 6 shows the close aggregation implementation logic:</p>
        <p>The close aggregation method's primary goal is to deterministically determine a new set of cluster members and a leader. The
leader election logic here is the same as during the initial aggregation. RSDP continuously
resynchronizes the state of the cluster, so any discrepancies caused by lost messages will
eventually be resolved due to the principle of eventual consistency.</p>
        <p>To conclude, RSDP provides a well-defined model for an arbitrary logical extension.
Further research could lead to different invariants of this protocol that could potentially be
applicable in decentralized environments.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Leader election reducer duration and failure probability</title>
        <p>Failure probability and consensus duration are two of the most important metrics for a
consensus-achieving algorithm. The following section provides mathematical models that describe these
properties. We will start with a model of the time required to reach a consensus. The following
assumptions are made:</p>
        <p>Let n be the total number of replicas in the system.</p>
        <p>Let d be the maximum one-way network delay between any two replicas.</p>
        <p>Let   be the maximum time a replica takes to process a message.</p>
        <p>Phases to achieve initial consensus include “DEBATES” and “SHARE”.</p>
        <p>All message delays and processing times are bounded and known.</p>
        <p>The SLAN layer provides guarantees of message delivery.</p>
        <sec id="sec-3-3-1">
          <title>Consensus duration</title>
          <p>Assuming additionally that the probability of a communication media coordinator failure is
negligible, during the “DEBATES” phase each replica sends a “HELLO” message to all other replicas;
the time for a “HELLO” message to reach the other replicas is d. After receiving a “HELLO” message,
each replica processes it and sends back a “STATUS” message, and the time for a “STATUS” message
to reach the original sender is d.</p>
          <p>For a replica to receive “STATUS” messages from all others, the time is d + (n − 1)tp + d =
2d + (n − 1)tp, and since there are n − 1 replicas sending “STATUS” messages, processing them
takes (n − 1)tp.</p>
          <p>Subsequently, the total “DEBATES” phase time (TDEBATES) is:</p>
          <p>TDEBATES = 2d + (n − 1)tp + (n − 1)tp = 2d + 2tp(n − 1) (1)</p>
          <p>During the “SHARE” phase, after aggregating the received “STATUS” messages, each replica
broadcasts a “SHARE” message to all others. The time for a “SHARE” message to reach the other
replicas is d. After that, each replica processes incoming “SHARE” messages from n − 1 replicas in
time (n − 1)tp.</p>
          <p>Then the total “SHARE” phase time (TSHARE) is:</p>
          <p>TSHARE = d + (n − 1)tp (2)</p>
          <p>Hence, the total time to reach consensus (Tconsensus) is:</p>
          <p>Tconsensus = TDEBATES + TSHARE = (2d + 2tp(n − 1)) + (d + (n − 1)tp) = 3d + 3tp(n − 1) = 3(d + tp(n − 1)) (3)</p>
          <p>It is obvious then that the consensus-achieving time depends linearly on the number of cluster
members. Additionally, the network delay d and processing time tp are critical factors, but since the
protocol is built on top of deterministic principles and pure functions, tp should be negligible.</p>
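<p>The derivation above can be checked numerically; a small sketch reproducing equations (1)-(3), with arbitrary illustrative parameter values:</p>

```python
# Numeric sketch of the consensus-duration model: T_consensus = 3(d + tp(n - 1)).
# Parameter values below are illustrative, not taken from the article.

def t_debates(n, d, tp):
    # Equation (1): 2d + 2tp(n - 1)
    return 2 * d + 2 * tp * (n - 1)

def t_share(n, d, tp):
    # Equation (2): d + tp(n - 1)
    return d + tp * (n - 1)

def t_consensus(n, d, tp):
    # Equation (3): sum of both phases, i.e. 3(d + tp(n - 1))
    return t_debates(n, d, tp) + t_share(n, d, tp)

# Linear growth in cluster size: the tp term scales with n - 1.
d, tp = 0.005, 0.001  # 5 ms max one-way delay, 1 ms max processing time
for n in (3, 5, 9):
    print(n, round(t_consensus(n, d, tp), 4))
```
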
        </sec>
        <sec id="sec-3-3-2">
          <title>Failure probability</title>
          <p>As for the failure probability, the following assumptions are made:
• Let pl be the probability that a message is lost.
• Let pf be the probability that a replica fails during the consensus process.
• Message losses and replica failures are independent.
• Each replica must receive “HELLO”, “STATUS”, and “SHARE” messages from all the replicas.
• A replica successfully participates if it can send and receive all required messages.</p>
          <p>The probability that a single message is successfully transmitted is:</p>
          <p>Pmsg = 1 − pl (4)</p>
          <p>During the consensus-achieving stages, each replica must send n − 1 “HELLO” and n − 1
“SHARE” messages. Consequently, every replica expects to receive n − 1 “HELLO”, n − 1 “STATUS”,
and n − 1 “SHARE” messages. Then the total number of messages received per replica could be
represented as:</p>
          <p>Mtotal = 3(n − 1) = 3n − 3 (5)</p>
          <p>Given the total number of messages required to successfully achieve consensus, the probability
that a replica successfully sends and receives all messages is:</p>
          <p>Preplica = (1 − pf) × (Pmsg)^Mtotal (6)</p>
          <p>Consequently, the probability that all replicas will successfully participate is:</p>
          <p>Pall replicas = (Preplica)^n = [(1 − pf) × (1 − pl)^(3n−3)]^n (7)</p>
          <p>Then the probability of consensus failure for the first election method is:</p>
          <p>Plast state failure = 1 − [(1 − pf) × (1 − pl)^(3n−3)]^n (8)</p>
          <p>The probability of failure depends on key characteristics of the network and the underlying
infrastructure. It is obvious that such an approach is suitable only in cases of stable network
connections. The SLAN layer provides delivery recovery mechanisms but does not solve every issue
related to message loss. Since the vote-based methods do not require successful participation of
every node, the probability could be reevaluated in the following way. Let q be the minimum number
of replicas required for consensus (the quorum). For a simple majority:</p>
          <p>q = ⌈n/2⌉ (9)</p>
        </sec>
        <sec id="sec-3-3-3">
          <title>Quorum participation probability</title>
          <p>Consequently, the probability that at least q replicas successfully participate is:</p>
          <p>Pquorum consensus = Σ (from k = q to n) C(n, k) × (Preplica)^k × (1 − Preplica)^(n−k) (10)</p>
        </sec>
        <sec id="sec-3-3-4">
          <title>Consensus failure probability</title>
          <p>Then the probability of consensus failure is:</p>
          <p>Pvote failure = 1 − Pquorum consensus (11)</p>
          <p>Evidently, vote-based approaches are significantly more resilient than the method based on the
last-state decision. Such systems can withstand partial failure of participating nodes during the
consensus process.</p>
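<p>The contrast between the last-state failure probability and the vote-based one can be illustrated with a short numeric sketch (parameter values are arbitrary):</p>

```python
# Sketch of the failure-probability models: last-state election (eq. 8)
# versus quorum-based election (eqs. 9-11). Values are illustrative.
import math

def p_replica(n, pf, pl):
    # Eq. (6): a replica survives and all 3(n - 1) of its messages get through.
    return (1 - pf) * (1 - pl) ** (3 * n - 3)

def p_last_state_failure(n, pf, pl):
    # Eq. (8): the first method needs every replica to participate.
    return 1 - p_replica(n, pf, pl) ** n

def p_vote_failure(n, pf, pl):
    # Eqs. (9)-(11): only a simple-majority quorum q must participate.
    q = math.ceil(n / 2)
    pr = p_replica(n, pf, pl)
    p_quorum = sum(
        math.comb(n, k) * pr ** k * (1 - pr) ** (n - k)
        for k in range(q, n + 1)
    )
    return 1 - p_quorum

# With 1% message loss and 0.1% replica failure, the quorum-based
# method tolerates partial participation and fails far less often.
n, pf, pl = 5, 0.001, 0.01
print(p_last_state_failure(n, pf, pl))
print(p_vote_failure(n, pf, pl))
```
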
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Stateful Cluster Failover Models</title>
      <p>
A stateful cluster in the context of this article is a distributed system where each node holds its own
subset of the system state. The subset might be either a unique portion unknown to every other
cluster member or, as a more common case, a subset of another member's state. Such clusters typically
establish a leader-follower model to achieve high consistency [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
        ].
      </p>
      <p>
While the entire cluster follows a single leader, that leader becomes a single point of failure. The
principles of fault tolerance in this context require establishing a failover mechanism as a
contingency. During normal operation, cluster nodes should be probed and tested to detect
any issues promptly. As soon as a critical event on the leader node is detected, the mechanism
switches to the active phase of achieving consensus. The entire network has to agree upon a new
leader of the cluster to continue its operation [
        <xref ref-type="bibr" rid="ref28 ref29 ref30">28-30</xref>
        ].
      </p>
      <p>The following list includes common definitions used to model every subsequent failover method
and their properties:
• N: Number of instances in the cluster.
• M: Number of external observers (Methods 2 and 3).
• h: Health-check interval between instances (Method 1).
• ho: Health-check interval by observers (Methods 2 and 3).
• Td: Failure detection time.
• Tp: Time to perform the failover procedure.
• Tc: Consensus achieving time (an independent parameter).
• pi: Probability of an instance failing during the observation window.
• po: Probability of an observer failing during the observation window.
• Pconsensus: Probability of failure in achieving consensus.
• Tobs: Total observation time.
• Cm: Average size of a message (in bytes).</p>
      <p>Each node or observer sends N − 1 (or M − 1) messages three times during consensus. In the
following subsections, each failover topology is described in terms of failover delay, total failure
probability, and communication overhead. The automatic state adjustment provided by LER serves
as an optimization phase to avoid going through the entire consensus cycle every time a node leaves
the cluster.</p>
      <sec id="sec-4-1">
        <title>4.1. Self-regulated mutual health evaluation</title>
        <p>
          We will first evaluate a model based on a single logical plane. Each node in such a cluster is
responsible for operational execution, monitoring, and governance. In such topology, every node
must have a communication link with every other in the system to successfully achieve consensus
and monitor other instances [
          <xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>
          ].
        </p>
        <p>The cluster could be preconfigured to initiate health probes at a specified interval but with
different initial timestamp shifts based on a node's position in the network. This allows the repetitive
status probes to be utilized efficiently and decreases failure detection time. Additionally, since every
node conducts the monitoring, the network can tolerate up to N − 1 failed nodes, where N is the node
set cardinality.</p>
        <p>In that regard, LER provides all the necessary data needed to establish successful monitoring and
election in an automated way. The capabilities of LER already provide data for dynamic node
discovery; hence, every node has a list of the cluster members that it must probe. LER automatically
adjusts the states of nodes in a cluster as soon as some subset leaves, but a new leader election cycle
could also be triggered in case the nodes suffered critical events.</p>
        <p>Each instance performs health checks on every other instance at intervals of h. The expected time
for the first instance to detect a failure is the minimum time any instance takes to detect it. Assuming
health checks are uniformly distributed, the expected detection time is:</p>
        <p>E[Td1] = h / (2N) (12)</p>
        <p>After detecting the failure, instances need to reach consensus. The consensus achieving time (Tc) is
considered an independent parameter to simplify the model and concentrate on the factors directly
tied to the topology. The total time from failure occurrence to failover completion (Tf1) is the sum of
detection time, consensus time, and failover procedure time:</p>
        <p>Tf1 = E[Td1] + Tc + Tp = h/(2N) + Tc + Tp (13)</p>
        <p>The probability that all instances fail simultaneously (Pall instances) could be represented simply as
an exponent of a single instance failure:</p>
        <p>Pall instances = pi^N (14)</p>
        <p>Then the total failure probability Pf1 would be described in terms of the consensus (Pconsensus) and
all-instances (Pall instances) failure probabilities:</p>
        <p>Pf1 = Pconsensus + Pall instances − (Pconsensus × Pall instances) (15)</p>
        <p>Each instance sends health checks to N − 1 other instances, so Mhealth = N × (N − 1) health-check
messages are exchanged per interval h. During consensus, each instance sends N − 1 messages three
times. The overall number of messages sent during consensus is:</p>
        <p>Mconsensus = N × (N − 1) × 3 (16)</p>
        <p>The total number of messages exchanged during the observation period (Tobs) is:</p>
        <p>Mtotal1 = (Tobs/h × Mhealth) + Mconsensus (17)</p>
        <p>Total communication overhead in bytes:</p>
        <p>Overhead1 = Mtotal1 × Cm (18)</p>
        <p>These are the models that are tied directly to the system failover properties. But to achieve a
holistic view, consensus time and maintenance time models must be included.</p>
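<p>Equations (12)-(18) can be combined into a single numeric sketch (all parameter values below are illustrative):</p>

```python
# Sketch of the self-regulated mutual health evaluation model,
# equations (12)-(18). Parameter values are illustrative.

def method1_metrics(n, h, tc, tp_proc, pi, p_consensus, t_obs, c_m):
    e_td1 = h / (2 * n)                                # (12) expected detection time
    t_f1 = e_td1 + tc + tp_proc                        # (13) total failover time
    p_all = pi ** n                                    # (14) all instances fail at once
    p_f1 = p_consensus + p_all - p_consensus * p_all   # (15) total failure probability
    m_health = n * (n - 1)                             # health checks per interval h
    m_consensus = n * (n - 1) * 3                      # (16) consensus messages
    m_total = (t_obs / h) * m_health + m_consensus     # (17) total messages
    overhead = m_total * c_m                           # (18) bytes over Tobs
    return {"T_f1": t_f1, "P_f1": p_f1, "Overhead1": overhead}

# 5 nodes, 1 s health checks, observed for an hour, 256-byte messages.
print(method1_metrics(n=5, h=1.0, tc=0.03, tp_proc=0.05,
                      pi=0.01, p_consensus=0.001, t_obs=3600, c_m=256))
```

<p>Note the N × (N − 1) health-check term: the per-interval message count grows quadratically with cluster size, which dominates the overhead for large N.</p>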
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Centralized observer health monitoring</title>
        <p>
          The topology with a centralized observer includes a set of stateful worker nodes in a cluster and a
single coordinating machine that manages the entire network. This topology introduces the division
between operational and control planes and thus fosters the single responsibility principle in the
system [
          <xref ref-type="bibr" rid="ref33 ref34">33, 34</xref>
          ].
        </p>
        <p>Figure 8 shows the topology of a cluster monitored and coordinated by a single observer:</p>
        <p>Let us consider the failure detection time Td2. Assume that the observer performs health checks
on all instances at intervals of ho. Then the expected time to detect a failure is half the health-check
interval:</p>
        <p>E[Td2] = ho/2 (19)</p>
        <sec id="sec-4-2-1">
          <title>Failover time</title>
          <p>Since there is no consensus process among instances, the total failover time is:</p>
          <p>Tf2 = E[Td2] + Tp = ho/2 + Tp (20)</p>
          <p>The system relies on a single observer; its failure directly impacts the system's ability to perform
failover. The probability of all instances failing (Pall instances) is the same as in the first method.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>Total failure probability</title>
          <p>The total failure probability includes the observer failing and all instances failing:</p>
          <p>Pf2 = po + Pall instances − (po × Pall instances) (21)</p>
          <p>Moving on to the communication overhead model, the observer sends health checks to all N
instances. Messages per health-check interval ho:</p>
          <p>Mhealth = N (22)</p>
          <p>Then the total messages over the observation time Tobs:</p>
          <p>Mtotal2 = (Tobs/ho) × Mhealth (23)</p>
        </sec>
        <sec id="sec-4-2-3">
          <title>Communication overhead</title>
          <p>Consequently, the total communication overhead in bytes:</p>
          <p>Overhead2 = Mtotal2 × Cm (24)</p>
          <p>This model imposes smaller communication overhead and reduces coordination complexity but
suffers from a single point of failure.</p>
        </sec>
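<p>A corresponding sketch for equations (19)-(24), again with illustrative parameter values:</p>

```python
# Sketch of the centralized-observer model, equations (19)-(24).
# Illustrative values; the single observer is itself a failure point.

def method2_metrics(n, ho, tp_proc, po, pi, t_obs, c_m):
    e_td2 = ho / 2                     # (19) expected detection time
    t_f2 = e_td2 + tp_proc             # (20) no consensus phase among instances
    p_all = pi ** n                    # same as in the first method
    p_f2 = po + p_all - po * p_all     # (21) observer or all instances fail
    m_total = (t_obs / ho) * n         # (22)-(23): N messages per interval ho
    overhead = m_total * c_m           # (24) bytes over Tobs
    return {"T_f2": t_f2, "P_f2": p_f2, "Overhead2": overhead}

print(method2_metrics(n=5, ho=1.0, tp_proc=0.05, po=0.01,
                      pi=0.01, t_obs=3600, c_m=256))
```

<p>Compared with the first method, the message count here is linear in N, but the failure probability is dominated by the po term of the lone observer.</p>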
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Distributed observer health assessment</title>
        <p>
The following topology is also comprised of two distinct execution planes. In such
networks, a consensus algorithm is used between the observers themselves to decide on the
responsible node that must coordinate the worker plane [
          <xref ref-type="bibr" rid="ref35 ref36 ref37">35-37</xref>
          ].
        </p>
        <p>Figure 9 shows the topology of a cluster monitored and coordinated by a group of observers:</p>
        <p>Let us consider the failure detection time Td3. With M observers, the expected time to detect a
failure is:</p>
        <p>E[Td3] = ho/(2M) (25)</p>
        <p>Observers need to reach consensus after detecting a failure, which takes Tc. Then the total time
from failure occurrence to failover completion is:</p>
        <p>Tf3 = E[Td3] + Tc + Tp = ho/(2M) + Tc + Tp (26)</p>
        <p>The probability that all observers fail simultaneously is Pall observers = po^M. Given that the
probability of all instances failing (Pall instances) is the same as in the previous methods, the total
failure probability (Pf3) is:</p>
        <p>Pf3 = Pconsensus + Pall observers + Pall instances − (Pconsensus × Pall observers × Pall instances) (27)</p>
        <p>Each of the M observers sends health checks to all N instances. Then Mhealth is:</p>
        <sec id="sec-4-3-1">
          <title>Communication overhead</title>
          <p>Mhealth = M × N (28)</p>
          <p>Additionally, each observer sends M − 1 messages three times during consensus:</p>
          <p>Mconsensus = M × (M − 1) × 3 (29)</p>
          <p>Then the total number of messages during the observation period is:</p>
          <p>Mtotal3 = (Tobs/ho × Mhealth) + Mconsensus (30)</p>
          <p>Consequently, the communication overhead (Overhead3) is:</p>
          <p>Overhead3 = Mtotal3 × Cm (31)</p>
          <p>This model provides a balance between communication overhead, complexity, separation of
concerns, and fault tolerance.</p>
        </sec>
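<p>Equations (25)-(31) admit the same kind of numeric sketch (illustrative values; the Pf3 formula follows the model stated above):</p>

```python
# Sketch of the distributed-observer model, equations (25)-(31).
# Parameter values are illustrative.

def method3_metrics(n, m, ho, tc, tp_proc, po, pi, p_consensus, t_obs, c_m):
    e_td3 = ho / (2 * m)               # (25) M observers share the probing
    t_f3 = e_td3 + tc + tp_proc        # (26) detection + consensus + failover
    p_all_obs = po ** m                # all observers fail at once
    p_all_inst = pi ** n               # same as in the previous methods
    p_f3 = (p_consensus + p_all_obs + p_all_inst
            - p_consensus * p_all_obs * p_all_inst)    # (27)
    m_health = m * n                   # (28) health checks per interval ho
    m_consensus = m * (m - 1) * 3      # (29) observer-plane consensus messages
    m_total = (t_obs / ho) * m_health + m_consensus    # (30)
    overhead = m_total * c_m           # (31) bytes over Tobs
    return {"T_f3": t_f3, "P_f3": p_f3, "Overhead3": overhead}

# Three observers over five workers: detection is faster than with a
# single observer, and no lone observer failure stalls the failover.
print(method3_metrics(n=5, m=3, ho=1.0, tc=0.03, tp_proc=0.05,
                      po=0.01, pi=0.01, p_consensus=0.001,
                      t_obs=3600, c_m=256))
```

<p>The consensus cost scales with the observer count M rather than the worker count N, which is what keeps the overhead between the two previous extremes.</p>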
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Achieving consensus within the confines of a Decentralized Coordination Network based on a
homogeneous multiagent system is a critical process that is gaining momentum in the research field
due to the ever-growing sizes of distributed systems and requirements for high availability.
Within the context of this article, a novel leader election protocol was created and modeled based on
the Replica State Discovery Protocol.</p>
      <p>One of the primary goals of this article is to introduce a leader election state reducer as a logical
extension of RSDP to address the rapid failover problem. As a result, multiple viable approaches were
proposed towards building such a reducer that are based on different properties of common state
aggregation. The first proposed method of leader election is based upon the supposition that the
network is controlled, and fault tolerance against malicious action during consensus is not expected.
That is a reasonable expectation since RSDP is built on top of LAN simulation, which in turn is based
on AMQP provider. Such providers often come with a set of authentication and authorization
mechanisms of their own. This method is characterized by its low computational overhead and
finality characteristics in case of an extremely dynamic network.</p>
      <p>The following two methods are based on quorum approach towards handling the leader election
process. The method based on a popular vote is the most suitable approach to ensure resilience
against both intentional and accidental failures during consensus interactions. Popular vote is a
common solution in networks that require Byzantine fault tolerance.</p>
      <p>The third proposed leader election method based on RSDP relies, at its foundation, on a
weighted election algorithm. The approach is characterized by a greater degree of resilience in highly
congested and unreliable networks where random packet losses occur frequently, due to its ability
of partial inclusion.</p>
      <p>Given the mathematical models and graphs for the three different cluster failover models and
approaches, it is fair to conclude that none of them is objectively superior in every plane of
comparison. The method involving self-regulated mutual health evaluation is most suitable when
avoiding additional infrastructure and minimizing critical event detection time are the most
influential metrics of successful system operation. It is worth noting, though, that its communication
overhead grows quadratically with the number of cluster members.</p>
      <p>The second method, based on centralized observer health monitoring, is an appropriate solution
in cases where low infrastructure and communication overhead costs are a primary decision
factor. Since the entire stability of the system depends on a single centralized external observer, the
very same observer becomes a single point of failure, which could lead to a disaster when high
availability is a hard requirement.</p>
      <p>Lastly, a method based on distributed observer health assessment serves as a trade-off between
high availability, infrastructure cost incurrence, communication overhead, and failover delay by
offloading the decision-making process to the parallel distributed layer of coordination. Such an
approach is mostly suitable for current cloud infrastructure demands due to its flexibility and clearly
established separation of concerns.</p>
      <p>Overall, this article aims to inspire further research in the complex, exciting, and
extremely relevant field of distributed computing and management. The implications and results of
this research help bolster the security of modern critical infrastructure by describing novel ways of
achieving resilience through redundancy and the distribution of responsibility. The provided
mathematical and graphical models support the complex decision-making process and reduce
possible risks when choosing an appropriate model and approach for rapid cluster failover.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Gilbert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Lynch</surname>
          </string-name>
          ,
          <article-title>The CAP theorem</article-title>
          ,
          <source>Computer</source>
          , vol.
          <volume>45</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>30</fpage>
          -
          <lpage>36</lpage>
          , Feb.
          <year>2012</year>
          . doi:
          <volume>10</volume>
          .1109/
          <string-name>
            <surname>MC</surname>
          </string-name>
          .
          <year>2011</year>
          .
          <volume>389</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kleppmann</surname>
          </string-name>
          ,
          <article-title>A critique of the CAP theorem</article-title>
          ,
          <source>arXiv:1509.05393v2 [cs.DC]</source>
          ,
          <year>2015</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.1509.05393.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>E.</given-names>
            <surname>Brewer</surname>
          </string-name>
          ,
          <article-title>A certain freedom: thoughts on the CAP theorem</article-title>
          ,
          <source>PODC '10: Proceedings of the 29th ACM SIGACT-SIGOPS symposium on Principles of distributed computing</source>
          , p.
          <fpage>335</fpage>
          ,
          <year>2010</year>
          . doi:
          <volume>10</volume>
          .1145/1835698.1835701.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E. A.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bateni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lohstroh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Menard</surname>
          </string-name>
          ,
          <article-title>Quantifying and generalizing the CAP theorem</article-title>
          ,
          <source>arXiv:2109.07771v1 [cs.DC]</source>
          ,
          <year>2021</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.2109.07771.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Muñoz-Escoí</surname>
          </string-name>
          , et al,
          <article-title>CAP theorem: revision of its related consistency models</article-title>
          ,
          <source>The Computer Journal</source>
          , vol.
          <volume>62</volume>
          , no.
          <issue>6</issue>
          , pp.
          <fpage>943</fpage>
          -
          <lpage>960</lpage>
          ,
          <year>June 2019</year>
          . doi:
          <volume>10</volume>
          .1093/comjnl/bxy142.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Lewis-Pye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Roughgarden</surname>
          </string-name>
          ,
          <article-title>Resource pools and the CAP theorem</article-title>
          , arXiv:
          <year>2006</year>
          .
          <article-title>10698v1 [cs</article-title>
          .DC],
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .48550/arXiv.
          <year>2006</year>
          .
          <volume>10698</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Frank</surname>
          </string-name>
          , R. U. Pedersen,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Larsson</surname>
          </string-name>
          ,
          <article-title>The CAP theorem versus databases with relaxed ACID properties</article-title>
          ,
          <source>Proceedings of the 8th International Conference on Ubiquitous Information Management and Communication (ICUIMC '14)</source>
          , article no.
          <issue>78</issue>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          , Jan.
          <year>2014</year>
          . doi:
          <volume>10</volume>
          .1145/2557977.2557981.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          <article-title>[8] Fault-tolerance by replication in distributed systems</article-title>
          ,
          <source>Reliable Software Technologies AdaEurope '96 (Ada-Europe</source>
          <year>1996</year>
          ), conference paper, pp.
          <fpage>38</fpage>
          -
          <lpage>57</lpage>
          , Jan.
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>K. P.</given-names>
            <surname>Birman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Joseph</surname>
          </string-name>
          ,
          <article-title>Exploiting replication in distributed systems</article-title>
          ,
          <source>NASA Contractor Report CR-186410</source>
          , Jan.
          <year>1989</year>
          . doi: NASA-CR-
          <volume>186410</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ciciani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Dias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Analysis of replication in distributed database systems</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>261</lpage>
          , Jun.
          <year>1990</year>
          . doi:
          <volume>10</volume>
          .1109/69.54723.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zaharia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Chowdhury</surname>
          </string-name>
          ,
          <string-name>
            <surname>T. Das</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Dave</surname>
            , J. Ma,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>McCauley</surname>
            ,
            <given-names>M. J.</given-names>
          </string-name>
          <string-name>
            <surname>Franklin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <string-name>
            <surname>Shenker</surname>
            ,
            <given-names>I. Stoica</given-names>
          </string-name>
          ,
          <article-title>Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing</article-title>
          ,
          <source>Proceedings of the 9th USENIX Symposium on Networked Systems Design and -14</source>
          , University of California, Berkeley,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Cristian</surname>
          </string-name>
          ,
          <article-title>Understanding fault-tolerant distributed systems</article-title>
          ,
          <source>Communications of the ACM</source>
          , vol.
          <volume>34</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>56</fpage>
          -
          <lpage>78</lpage>
          , Feb.
          <year>1991</year>
          . doi:
          <volume>10</volume>
          .1145/102792.102801.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>A.</given-names>
            <surname>Luntovskyy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Shubyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Maksymuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Klymash</surname>
          </string-name>
          ,
          <article-title>Highly-distributed systems: What is inside?</article-title>
          ,
          <source>Proceedings of the 2020 IEEE International Conference on Problems of Infocommunications. Science</source>
          and
          <string-name>
            <surname>Technology (PIC S&amp;T)</surname>
          </string-name>
          , Kharkiv, Ukraine, Oct.
          <year>2020</year>
          . doi:
          <volume>10</volume>
          .1109/PICST51311.
          <year>2020</year>
          .
          <volume>9467890</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Pai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aron</surname>
          </string-name>
          , G. Banga,
          <string-name>
            <given-names>M.</given-names>
            <surname>Svendsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Druschel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zwaenepoel</surname>
          </string-name>
          , E. Nahum,
          <article-title>Localityaware request distribution in cluster-based network servers</article-title>
          ,
          <source>ACM SIGOPS Operating Systems Review</source>
          , vol.
          <volume>32</volume>
          , no.
          <issue>5</issue>
          , pp.
          <fpage>205</fpage>
          -
          <lpage>216</lpage>
          , Oct.
          <year>1998</year>
          . doi:
          <volume>10</volume>
          .1145/384265.291048.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pedrosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Korupolu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Oppenheimer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Tune</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wilkes</surname>
          </string-name>
          ,
          <article-title>Large-scale cluster management at Google with Borg</article-title>
          ,
          <source>EuroSys '15: Proceedings of the Tenth European Conference on Computer Systems, article no. 18</source>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          , Apr.
          <year>2015</year>
          . doi: 10.1145/2741948.2741964.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Holub</surname>
          </string-name>
          ,
          <article-title>The mutex and lock management</article-title>, in
          <source>Taming Java Threads</source>, Apress, Berkeley, CA,
          <year>2000</year>. doi: 10.1007/978-1-4302-1129-7_3.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>M.</given-names>
            <surname>Walmsley</surname>
          </string-name>
          ,
          <article-title>Semaphores</article-title>, in
          <source>Multi-Threaded Programming in C++</source>, Springer, London,
          <year>2000</year>. doi: 10.1007/978-1-4471-0725-5_5.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Plagnol</surname>
          </string-name>
          ,
          <article-title>Beyond mutexes, semaphores, and critical sections</article-title>, in
          <source>Embedded Real Time -</source>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Petrescu</surname>
          </string-name>
          ,
          <article-title>Replication in Raft vs Apache Zookeeper</article-title>, in Soft Computing Applications (SOFA
          <year>2020</year>),
          <source>Advances in Intelligent Systems and Computing</source>, vol.
          <volume>1438</volume>, Springer,
          <year>2023</year>, pp.
          <fpage>426</fpage>-<lpage>435</lpage>. doi: 10.1007/978-3-030-55556-4_44.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>H.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Mortier</surname>
          </string-name>
          ,
          <article-title>Paxos vs Raft: Have we reached consensus on distributed consensus?</article-title>, in
          <source>Proceedings of the 7th Workshop on Principles and Practice of Consistency for Distributed Data (PaPoC '20)</source>, article no. 8, pp.
          <fpage>1</fpage>-<lpage>9</lpage>, Apr.
          <year>2020</year>. doi: 10.1145/3380787.3393681.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>F.</given-names>
            <surname>Junqueira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Reed</surname>
          </string-name>
          ,
          <source>ZooKeeper: Distributed Process Coordination</source>, O'Reilly Media, Inc.,
          <year>2013</year>.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kotov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Toliupa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Nakonechnyi</surname>
          </string-name>
          ,
          <article-title>Replica State Discovery Protocol Based on Advanced Message Queuing Protocol</article-title>,
          <source>Cybersecurity: Education, Science, Technique</source>, vol.
          <volume>3</volume>, no.
          <issue>23</issue>,
          <year>2024</year>. doi: 10.28925/2663-4023.2024.23.156171.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>N.</given-names>
            <surname>Naik</surname>
          </string-name>
          ,
          <article-title>Choice of effective messaging protocols for IoT systems: MQTT, CoAP, AMQP and HTTP</article-title>, in
          <source>2017 IEEE International Systems Engineering Symposium (ISSE)</source>, Vienna, Austria, Oct.
          <year>2017</year>, pp.
          <fpage>426</fpage>-<lpage>435</lpage>. doi: 10.1109/SysEng.2017.8088251.
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Fernandes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. C.</given-names>
            <surname>Lopes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. J. P. C.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          , and
          <string-name><given-names>S.</given-names> <surname>Ullah</surname></string-name>,
          <article-title>Performance evaluation of RESTful web services and AMQP protocol</article-title>, in
          <source>2013 Fifth International Conference on Ubiquitous and Future Networks (ICUFN)</source>, Da Nang, Vietnam, Jul.
          <year>2013</year>. doi: 10.1109/ICUFN.2013.6614932.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>N. Q.</given-names>
            <surname>Uy</surname>
          </string-name>
          and
          <string-name>
            <given-names>V. H.</given-names>
            <surname>Nam</surname>
          </string-name>
          ,
          <article-title>A comparison of AMQP and MQTT protocols for Internet of Things</article-title>, in
          <source>2019 6th NAFOSTED Conference on Information and Computer Science (NICS)</source>, Hanoi, Vietnam, Dec.
          <year>2019</year>. doi: 10.1109/NICS48868.2019.9023812.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>A.</given-names>
            <surname>Prajapati</surname>
          </string-name>
          ,
          <article-title>AMQP and beyond</article-title>, in
          <source>2021 International Conference on Smart Applications, Communications and Networking (SmartNets)</source>, Glasgow, United Kingdom, Sep.
          <year>2021</year>. doi: 10.1109/SmartNets50376.2021.9555419.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>I. N.</given-names>
            <surname>McAteer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. I.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Baig</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Hannay</surname>
          </string-name>
          ,
          <article-title>Security vulnerabilities and cyber threat analysis of the AMQP protocol for the Internet of Things</article-title>, in
          <source>Australian Information Security Management Conference</source>,
          <year>2017</year>. ISBN: 978-0-6481270-8-6.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Stanik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Höger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Kao</surname>
          </string-name>
          ,
          <article-title>Failover pattern with a self-healing mechanism for high availability cloud solutions</article-title>, in
          <source>2013 International Conference on Cloud Computing and Big Data</source>, Fuzhou, China, Dec.
          <year>2013</year>. doi: 10.1109/CLOUDCOM-ASIA.2013.63.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>W.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          and
          <string-name><given-names>J.</given-names> <surname>Zhang</surname></string-name>,
          <article-title>An optimized multi-Paxos protocol with centralized failover mechanism for cloud storage applications</article-title>, in Collaborative Computing: Networking, Applications and Worksharing (CollaborateCom
          <year>2018</year>),
          <source>Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering</source>, vol.
          <volume>268</volume>, Feb.
          <year>2019</year>, pp.
          <fpage>610</fpage>-<lpage>625</lpage>. doi: 10.1007/978-3-030-30146-8_41.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>P.</given-names>
            <surname>Somasekaram</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Calinescu</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Buyya</surname>
          </string-name>
          ,
          <article-title>High-availability clusters: A taxonomy, survey, and future directions</article-title>,
          <source>Journal of Systems and Software</source>, vol.
          <volume>187</volume>, article 111208, May
          <year>2022</year>. doi: 10.1016/j.jss.2021.111208.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          C. K.,
          <article-title>High-availability (HA) PostgreSQL Cluster with Patroni</article-title>,
          <source>Medium</source>, Jan. 14,
          <year>2024</year>. URL: https://medium.com/@chriskevin_80184/high-availability-ha-postgresql-cluster-with-patroni1af7a528c6be.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          Percona Distribution for PostgreSQL,
          <article-title>High availability</article-title>,
          <source>Percona Documentation</source>, version 15.8. URL: https://docs.percona.com/postgresql/15/solutions/high-availability.html.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <article-title>Introduction to pg_auto_failover</article-title>,
          <source>pg_auto_failover Documentation</source>. URL: https://pg-autofailover.readthedocs.io/en/main/intro.html.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>L.</given-names>
            <surname>Fittl</surname>
          </string-name>
          ,
          <article-title>Introducing pg_auto_failover: Open source extension for automated failover and high-availability in PostgreSQL</article-title>,
          <source>Microsoft Azure Blog</source>, May 6,
          <year>2019</year>. URL: https://opensource.microsoft.com/blog/2019/05/06/introducing-pg_auto_failover-postgresqlopen-source-extension-automated-failover-high-availability/.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>A.E.</given-names>
            <surname>Nocentino</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Weissman</surname>
          </string-name>
          ,
          <article-title>Kubernetes Architecture</article-title>, in
          <source>SQL Server on Kubernetes</source>, Apress, Berkeley, CA,
          <year>2021</year>. doi: 10.1007/978-1-4842-7192-6_3.
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <article-title>Cluster Architecture</article-title>,
          <source>Kubernetes Documentation</source>
          . URL: https://kubernetes.io/docs/concepts/architecture/.
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>L.</given-names>
            <surname>Larsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Gustafsson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Klein</surname>
          </string-name>
          , and
          <string-name>
            <given-names>E.</given-names>
            <surname>Elmroth</surname>
          </string-name>
          ,
          <article-title>Decentralized Kubernetes Federation Control Plane</article-title>, in
          <source>2020 IEEE/ACM 13th International Conference on Utility and Cloud Computing (UCC)</source>, Leicester, UK,
          <year>2020</year>, pp.
          <fpage>354</fpage>-<lpage>359</lpage>. doi: 10.1109/UCC48980.2020.00056.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>