<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Comparing Cloud and On-Premises Kubernetes: Insights into Networking and Storage Tooling</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Jakob Koller</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Böhm</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bamberg</institution>
          ,
          <addr-line>An der Weberei 5, Bamberg, 96047</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>Kubernetes (K8s) is nowadays a well-recognized platform for automating the deployment, scaling, and management of containerized workloads. However, running K8s in an on-premises environment comes with unique challenges, because several crucial components are typically managed by cloud providers. These challenges are especially pronounced in network-specific load-balancing services and storage management. For running K8s in an on-premises environment, these functionalities must be provided additionally. To see how open-source tools compare to their cloud counterparts, we conducted experiments evaluating MetalLB and Cilium for networking, as well as Ceph &amp; Rook and Longhorn for storage. The results showed that MetalLB and Cilium could achieve networking results similar to cloud-based K8s. For storage, Ceph &amp; Rook outperformed their cloud counterpart, whereas Longhorn delivered inferior results.</p>
      </abstract>
      <kwd-group>
        <kwd>Kubernetes</kwd>
        <kwd>On-premises</kwd>
        <kwd>Container orchestration</kwd>
        <kwd>Networking</kwd>
        <kwd>Storage</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>This work provides insights into the feasibility, performance, and usability of on-premises solutions
compared to their cloud counterparts.</p>
      <p>The remainder of the paper is structured as follows: Section 2 discusses the current research on
on-premises K8s. Section 3 provides an overview of existing tools that enable K8s functionality in
on-premises environments. In Section 4, we leverage these tools to conduct our experiment. Finally, we
critically review the experiment in Section 5 and conclude our work in Section 6.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Several related approaches already discuss and evaluate solutions for cloud-equivalent
on-premises K8s. Packard et al. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] discussed building and running a K8s cluster in an on-premises
environment. They partly discussed the networking components of their cluster, focusing on
external connectivity to the Internet. In addition, they highlighted their solution for the storage aspect.
However, they only described a use-case-based proof of concept with InfluxDB
(https://www.influxdata.com/) and K8s. Also, they considered only a small subset of network and
storage tools and did not provide a comparison or reasoning for their tool selection. Ruiz et al. [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] used an on-premises K8s cluster to test their custom,
QoS-aware autoscaling of deployments in different cluster setups. They provided a custom load balancer
to address network challenges and did not use publicly available open-source solutions. Tackling
storage-related aspects of on-premises K8s was not in scope. Manaouil and Lebre [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] discussed K8s in
the domain of edge computing, especially the applicability of geographically distributed K8s clusters.
For testing, they used a basic cluster setup that was not designed for production usage. Mondal et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] set up a
K8s cluster from scratch without any auxiliary tooling. The deployments were only internally exposed.
Hence, a solution for load balancing was not discussed. Böhm and Wirtz [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] compared different K8s
distributions based on their performance characteristics. However, they considered only baseline
functionality, which needs to be extended for a production-like environment.
      </p>
      <p>None of the related works discussed a cloud-equivalent on-premises setup considering both network
and storage aspects. Specifically, no empirical and quantitative evaluation yet shows the outcome
when K8s nodes fail. Consequently, this work addresses these gaps by providing an overview of
tooling and its functional comparability to cloud-based K8s.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Tooling</title>
      <p>This section discusses the available tooling for networking and storage. To identify an eligible set
of tools for evaluation, we first followed the findings of the previously discussed related
work (Section 2). Additionally, we enriched the set of tools by searching the internet for further
solutions. We eliminated solutions that do not contribute to our goal of cloud equivalence from a feature
perspective. Furthermore, we do not consider tools with a minor reputation, incomplete documentation,
or inactive development.</p>
      <sec id="sec-3-1">
        <title>3.1. Networking</title>
        <p>Our research revealed that two classes of solutions exist for providing load balancing for services:
dedicated load balancers and load balancing via the Container Network Interface (CNI). We selected two
representative tools for the identified categories: MetalLB as the load balancer and Cilium as the CNI.</p>
        <p>
          MetalLB. One of the most mature and widely used load balancers for on-premises K8s environments
is MetalLB [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. It operates in Layer 2 mode or Border Gateway Protocol (BGP) mode. BGP is a routing protocol
used to exchange network reachability information between systems, enabling efficient traffic
distribution and scalability in complex network environments [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. In Layer 2 mode, MetalLB uses the Address Resolution Protocol (ARP) and the Neighbor Discovery
Protocol (NDP) to assign multiple IP addresses to a single machine. However, this mode has one
significant limitation: the incoming traffic from outside the cluster is limited by the bandwidth of a
single node. All incoming traffic must pass through this node before being distributed across the
cluster, creating a potential bottleneck. The second mode uses BGP to establish direct BGP peering
sessions between the router and the different nodes. This allows a BGP-compliant router to forward
traffic directly to the appropriate node, bypassing the single-node bottleneck (https://metallb.io/).
        </p>
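        <p>For illustration, a minimal Layer 2 configuration could look as follows. This is a hedged sketch
assuming MetalLB v0.13+ with its CRD-based configuration; the pool name and address range are
placeholders, not values from our setup.</p>
        <preformat>
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: example-pool          # illustrative name
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # example range from the local subnet
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: example-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - example-pool            # announce IPs from the pool above via ARP/NDP
        </preformat>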
        <p>Cilium. Unlike MetalLB, which is solely a load balancer, Cilium is primarily a CNI with additional
features. One of these features is the capability to provide load balancing for services. Similar to MetalLB,
it supports both Layer 2 and BGP mode. Since K8s requires a CNI by design, using the integrated load
balancer from Cilium eliminates the need to install additional tools to provide load balancing
(https://docs.cilium.io/en/stable/network/lb-ipam/#services).</p>
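        <p>A comparable sketch for Cilium’s LB-IPAM in Layer 2 mode, assuming a recent Cilium release
with L2 announcements enabled in the agent configuration; names and the CIDR are placeholders, and
the pool field is spelled blocks or cidrs depending on the version.</p>
        <preformat>
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: example-pool
spec:
  blocks:
    - cidr: 192.168.1.0/28    # example range handed out to LoadBalancer services
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: example-l2-policy
spec:
  loadBalancerIPs: true       # announce the allocated IPs via ARP/NDP
        </preformat>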
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Storage</title>
        <p>Longhorn. Developed by Rancher, Longhorn is an open-source block storage system. It claims to be
lightweight and reliable, which can be important in resource-constrained environments. It provides
incremental snapshots of the block storage, automated backups to secondary storage, replication of block
storage across multiple nodes or even data centers, non-disruptive upgrades, and an intuitive dashboard
(https://longhorn.io/docs/1.7.2/).</p>
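        <p>As a sketch of how Longhorn’s replication is typically requested, the following StorageClass asks
for three replicas per volume; the parameter names follow the Longhorn documentation, while the class
name and values are illustrative.</p>
        <preformat>
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-replicated   # illustrative name
provisioner: driver.longhorn.io
allowVolumeExpansion: true
parameters:
  numberOfReplicas: "3"       # keep three copies across nodes
  staleReplicaTimeout: "30"   # minutes before a stale replica is rebuilt
        </preformat>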
        <p>Ceph &amp; Rook. Rook allows Ceph (https://ceph.io/) storage to be used natively in K8s. Ceph is a
highly scalable distributed storage solution for block storage, object storage, and shared file systems.
Rook is a framework that automates the deployment and management of Ceph to provide self-managing,
self-scaling, and self-healing storage. Rook achieves this by using K8s resources to deploy, configure,
provision, upgrade, and monitor Ceph. Like Longhorn, Ceph can automatically replicate data to other
available nodes to protect the cluster from data loss
(https://rook.io/docs/rook/latest/Getting-Started/intro/).</p>
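        <p>A trimmed sketch of how a replicated block pool and a matching StorageClass are declared with
Rook; the resource kinds follow the Rook documentation, but the sketch omits the CSI secret parameters
a complete definition needs, and the names are illustrative.</p>
        <preformat>
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replicapool
  namespace: rook-ceph
spec:
  failureDomain: host         # place replicas on distinct nodes
  replicated:
    size: 3                   # three copies of every object
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool
  csi.storage.k8s.io/fstype: ext4
        </preformat>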
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Functional Comparison</title>
      <p>This section presents the functional comparison between the cloud-based and on-premises K8s setups.
We describe the experimental environment and procedure and present, analyze, and discuss the results.</p>
      <sec id="sec-4-1">
        <title>4.1. Experimental Environment</title>
        <p>The test environment for the on-premises setup consists of five locally hosted Virtual Machines (VMs)
distributed across multiple machines. Each VM is equipped with 2 vCPUs, 4 GB of RAM, a 40 GB SSD
boot disk, and an additional 12 GB disk for the storage experiment. These five VMs are configured as a
highly available K8s cluster using K3s (https://k3s.io/) as the K8s distribution and Cilium
(https://cilium.io/) as the base CNI.</p>
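        <p>A hedged sketch of how such a cluster could be bootstrapped (the exact commands and flags are
assumptions based on the K3s and Cilium documentation, not recorded from our setup); K3s ships its
own CNI and ServiceLB, which are disabled here so that Cilium and, later, a dedicated load balancer
can take over:</p>
        <preformat>
# First server: highly available K3s with embedded etcd, without the
# bundled CNI (flannel) and without the bundled ServiceLB/Traefik.
curl -sfL https://get.k3s.io | sh -s - server \
  --cluster-init \
  --flannel-backend=none \
  --disable-network-policy \
  --disable=servicelb,traefik

# Remaining servers join the first one (SERVER_IP/TOKEN are placeholders).
curl -sfL https://get.k3s.io | K3S_TOKEN=$TOKEN sh -s - server \
  --server https://$SERVER_IP:6443

# Install Cilium as the base CNI using the Cilium CLI.
cilium install
        </preformat>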
        <p>We use DigitalOcean’s managed K8s service for the cloud environment. The cluster is configured
with the same specifications as the on-premises setup. For storage, the managed cluster automatically
uses DigitalOcean’s managed Block Storage service. The experiment is performed 5 times for each tool
to mitigate the influence of random variation.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Network Experiment</title>
        <p>The goal of the network experiment is to evaluate the load-balancing components of the cluster. In
the following, we describe our experiment design and architecture. Afterward, we present the results
revealed by the experiment.</p>
        <p>[Figure 1: Experiment architectures. (a) Cloud K8s with a cloud-managed load balancer;
(b) On-Premises K8s, Case 1; (c) On-Premises K8s, Case 2.]</p>
        <sec id="sec-4-2-1">
          <title>4.2.1. Network Experiment Design</title>
          <p>The experiment architecture involves a simple client-server interaction. As seen in Figure 1, a
Golang-based HTTP server application is deployed in the cluster, with two replicas running on separate
worker nodes. The workload exposes an endpoint that is routed out of the cluster using an ingress
controller exposed via the IP address provided by the load balancer. A client application from a different
network sends an HTTP GET request to the server’s endpoint every 100 milliseconds and logs the response.
If downtime is detected, the application measures its duration and counts the failed requests.</p>
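          <p>A minimal sketch of such a client in Go (the paper does not publish its client code; the URL,
timeout, and log format here are assumptions):</p>
          <preformat>
// probe.go: poll the load-balanced endpoint every 100 ms and record
// how long an outage lasts and how many requests it costs.
package main

import (
	"log"
	"net/http"
	"time"
)

func main() {
	url := "http://203.0.113.10/" // placeholder: IP announced by the load balancer
	// Timeout shorter than the 100 ms interval so probes never overlap.
	client := http.Client{Timeout: 90 * time.Millisecond}

	var failed int
	var downSince time.Time

	for range time.Tick(100 * time.Millisecond) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
		}
		if err != nil || resp.StatusCode != http.StatusOK {
			failed++
			if downSince.IsZero() {
				downSince = time.Now() // outage starts at the first failure
			}
			continue
		}
		if !downSince.IsZero() {
			log.Printf("downtime %v, failed requests %d", time.Since(downSince), failed)
			downSince, failed = time.Time{}, 0
		}
	}
}
          </preformat>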
          <p>Figure 1 (a) illustrates the experiment architecture of the cloud
experiment. In this setup, we simulate a node failure by shutting down a node containing the workload
while logging the client’s responses. The managed load balancer should ensure that the workload is
dynamically redistributed to the healthy node. For the on-premises environment, we need to slightly
change the experiment architecture, because the tools are configured in Layer 2 mode. As already
highlighted in Section 3, in Layer 2 mode, only one node in the cluster holds the lease for the service
IP address at any given time. If traffic arrives at the node that holds the current lease, it is
forwarded to the appropriate Pod running the workload. This fundamental difference in traffic routing
introduces challenges for direct comparisons with the cloud setup. To address these challenges and
ensure a fair evaluation, we split the experiment for the on-premises environment into two distinct
cases:
1. Worker Node Holding the Lease: As illustrated in the first case in Figure 1, the worker node
holds the service IP lease and hosts one of the workloads. When this node fails, the lease for the
IP and the workload are interrupted, which is closer to the experiment performed in the cloud
environment. However, it is not fully representative, since the IP lease does not necessarily have to
be announced by the worker node.
2. Master Node Holding the Lease: In the second case in Figure 1, the master node holds the
service IP lease. Simulating a node failure on the master requires the load balancer to elect
a new node to announce the IP lease. However, comparing this to the cloud environment would
be unfair, as we do not interrupt one of the workloads.</p>
          <p>For reference, we also tested the case where we simulated the failure on the worker node while the
master held the lease. In combination with the other two cases, this helps us precisely isolate the
downtime created by the load balancer itself.</p>
        </sec>
        <sec id="sec-4-2-2">
          <title>4.2.2. Network Experiment Results</title>
          <p>For the cloud environment, the client reported an average of 2.60 (σ = 0.89) failed requests, resulting
in a downtime of 2.77 seconds (σ = 0.57). A small number of failed requests is to be expected, since
requests were still being sent as the failed node was going offline.</p>
          <p>In the on-premises environment, we can see that both Cilium and MetalLB returned similar results for
the first case of the experiment, where we simulate a node failure on the worker node with a workload
and the IP lease. Both MetalLB and Cilium had an average of ≈ 75 failed requests. If we compare this to
our reference case, where we have an average of 74.40 (σ = 2.30) failed requests, we can see that these
values are similar. This shows that the observed downtime does not necessarily come from the load
balancer, but from other cluster internals. The real impact of the load balancer on the failed requests
can be observed in the second case of the experiment. Here, MetalLB showed no failed requests at all,
whereas Cilium showed an average of 12.40 failed requests, but with a high standard deviation of 13.88.</p>
          <p>[Results table — columns: Failure Simulated, Current Lease, Total Requests, Failed Requests,
Downtime (sec.); the individual values were not recovered.]</p>
        </sec>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Storage Experiment</title>
        <p>Similar to the network experiment, we designed an experiment to evaluate the performance and
reliability of the storage tooling. The following section outlines the design and architecture of the
storage experiment, followed by a presentation and analysis of the results obtained.</p>
        <sec id="sec-4-3-1">
          <title>4.3.1. Storage Experiment Design</title>
          <p>To evaluate the different tools for supporting distributed storage in K8s, we developed a custom K8s
workload consisting of an application written in Golang and a MySQL database. At the start of the
experiment, the application establishes a connection to the database and writes a unique character
sequence to it (Figure 2). Afterward, the application enters a loop, validating the character sequence
every 100 milliseconds.</p>
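          <p>A condensed sketch of this workload’s logic in Go (the paper does not publish its code; the DSN,
table, and schema are placeholders), using the database/sql package with the go-sql-driver/mysql driver:</p>
          <preformat>
// validate.go: write a unique token once, then re-read it every 100 ms
// and log how long the database is unreachable during rescheduling.
package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root:secret@tcp(mysql:3306)/testdb") // placeholder DSN
	if err != nil {
		log.Fatal(err)
	}

	// Step 1: write a unique character sequence.
	token := time.Now().Format("20060102150405.000000000")
	if _, err := db.Exec("CREATE TABLE IF NOT EXISTS probe (token VARCHAR(64))"); err != nil {
		log.Fatal(err)
	}
	if _, err := db.Exec("INSERT INTO probe (token) VALUES (?)", token); err != nil {
		log.Fatal(err)
	}

	// Steps 2/3: validate every 100 ms and measure the reconnection time.
	var downSince time.Time
	for range time.Tick(100 * time.Millisecond) {
		var got string
		err := db.QueryRow("SELECT token FROM probe WHERE token = ?", token).Scan(&amp;got)
		if err != nil &amp;&amp; downSince.IsZero() {
			downSince = time.Now()
		} else if err == nil &amp;&amp; !downSince.IsZero() {
			log.Printf("reconnected after %v, data intact: %v", time.Since(downSince), got == token)
			downSince = time.Time{}
		}
	}
}
          </preformat>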
          <p>The experiment is divided into two scenarios. In the first scenario, the workload is simply rescheduled
to a different node. In the second scenario, a simulated node failure is performed by manually shutting
down a node. Both scenarios allow us to measure how long it takes for the Golang application to
re-establish a connection to the database. We also verify whether the previously written data remains
intact after each scenario, as sketched below.</p>
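          <p>One plausible way to trigger the two scenarios (the paper does not state the exact commands;
the node name is a placeholder):</p>
          <preformat>
# Scenario 1: reschedule the workload by draining the node that hosts it.
kubectl drain worker-1 --ignore-daemonsets --delete-emptydir-data

# Scenario 2: simulate a node failure by shutting the node down out-of-band,
# e.g., powering off the VM from the hypervisor rather than via kubectl.
          </preformat>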
          <p>[Figure 2: Storage experiment architecture. Step 1: write and validation — the Go app writes to
MySQL on Worker Node 1. Step 2: simulate node failure and reschedule — MySQL moves to Worker Node 2.
Step 3: re-validation.]</p>
        </sec>
        <sec id="sec-4-3-2">
          <title>4.3.2. Storage Experiment Results</title>
          <p>In the cloud environment, the first scenario, involving rescheduling the MySQL deployment, resulted in
the Golang application losing its connection to the database for an average of 7.42 seconds (σ = 0.40).
However, once the connection was re-established, the application successfully validated the data in
the database. The second scenario, simulating a node failure, encountered some issues. After the node
crash, K8s automatically rescheduled the deployment to another node. However, the Pod failed to start
and remained stuck in the creating phase. The root cause was that the Block Storage volume was still
attached to the failed node, preventing it from being remounted to the new node.</p>
          <p>For the on-premises environment, the results of the first scenario were similar to the cloud setup.
With Rook &amp; Ceph installed as the storage provider, the Golang application lost its connection for an
average of 4.02 seconds (σ = 0.40). In contrast, using Longhorn took significantly longer, with an
average downtime of 20.08 seconds (σ = 1.63). The second scenario in the on-premises environment
produced results similar to those observed in the cloud. After the node failure, the Pod could not be
started because the volume remained attached to the failed node, preventing it from being remounted.
This issue was consistent across both Rook &amp; Ceph and Longhorn.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Discussion</title>
      <p>The evaluation of the network tooling showed that the load-balancing capabilities of an on-premises
environment are similar to those of a cloud environment. However, the results also showed cases with an
increased number of failed requests in the on-premises environment compared to the cloud
environment. As highlighted, most of these failed requests do not come from the load balancer but
from other cluster internals, which require further investigation.</p>
      <p>For storage, the experiment showed that Rook &amp; Ceph can achieve results similar to their cloud
counterpart. Longhorn, however, performed worse, with significantly longer recovery times. The reason
for this discrepancy requires further investigation. In the second scenario, all tested tools (Rook &amp;
Ceph, Longhorn, and the cloud environment) experienced the same issue: the inability to
remount the storage volume to a new node due to its attachment to the failed node. Notably, Longhorn’s
documentation acknowledges this as expected behavior that requires administrative intervention
(https://longhorn.io/docs/1.7.2/high-availability/node-failure/).</p>
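      <p>For context, one manual remediation we would expect for this failure mode, based on the general
Kubernetes CSI model rather than this paper’s procedure, is to remove the stale VolumeAttachment
object once the failed node is confirmed down, allowing the volume to be attached to the new node:</p>
      <preformat>
# Find the attachment still bound to the failed node.
kubectl get volumeattachments

# Delete the stale attachment so the CSI driver can re-attach the volume
# elsewhere. Only safe once the failed node is verifiably offline.
kubectl delete volumeattachment ATTACHMENT_NAME
      </preformat>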
      <p>The proposed experiment and its results are subject to a few limitations. First, the selection of tools
was limited to the most popular ones. Numerous alternative tools exist, each with potentially distinct
capabilities, limitations, and performance characteristics. Second, the network tools were only tested
in the Layer 2 configuration. Using the BGP configuration instead should, in theory, be preferable.
Furthermore, the comparison between cloud and on-premises tooling was based on selected functional
experiments. A more comprehensive benchmark may provide more detailed results.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>This paper presented an approach to compare on-premises and cloud-based K8s environments,
specifically focusing on network and storage tooling. To answer our research question, we conclude that
it is possible to achieve comparable functionality in on-premises environments in the areas of storage
and networking using appropriate tools. MetalLB and Ceph &amp; Rook delivered results comparable to
the cloud environment, demonstrating similar performance in terms of load balancing and storage
functionality. While our research provided valuable insights, there is still room for improvement. For
our future work, we want to expand the number of tools tested for a more comprehensive evaluation.
Additionally, testing the network tools in BGP mode instead of Layer 2 mode could uncover a performance
increase. Finally, expanding the scope to include other areas of K8s, such as monitoring, automated
deployment, and authentication, would offer a more comprehensive understanding of the trade-offs
between cloud-based and on-premises K8s setups.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-4 and DeepL Write to check grammar
and spelling and to paraphrase and reword. After using these tools/services, the authors carefully
reviewed and edited the suggested changes to ensure that the intended meaning of the content remained
unchanged. The authors take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Rodriguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Buyya</surname>
          </string-name>
          ,
          <article-title>Container-based cluster orchestration systems: A taxonomy and future directions</article-title>
          ,
          <source>Software: Practice and Experience</source>
          <volume>49</volume>
          (
          <year>2019</year>
          )
          <fpage>698</fpage>
          -
          <lpage>719</lpage>
          . URL: https://onlinelibrary.wiley. com/doi/10.1002/spe.2660. doi:
          <volume>10</volume>
          .1002/spe.2660.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>P.</given-names>
            <surname>Kayal</surname>
          </string-name>
          , Kubernetes in Fog Computing:
          <article-title>Feasibility Demonstration, Limitations and Improvement Scope : Invited Paper, in: 2020 IEEE 6th World Forum on Internet of Things (WF-IoT)</article-title>
          , IEEE, New Orleans, LA, USA,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:
          <volume>10</volume>
          .1109/WF-IoT48130.
          <year>2020</year>
          .
          <volume>9221340</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Takahashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Aida</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tanjo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A Portable</given-names>
            <surname>Load</surname>
          </string-name>
          <article-title>Balancer for Kubernetes Cluster</article-title>
          ,
          <source>in: Proceedings of the International Conference on High Performance Computing in Asia-Pacific Region</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , Chiyoda Tokyo Japan,
          <year>2018</year>
          , pp.
          <fpage>222</fpage>
          -
          <lpage>231</lpage>
          . URL: https://dl.acm.org/doi/10.1145/ 3149457.3149473. doi:
          <volume>10</volume>
          .1145/3149457.3149473.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Packard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stubbs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Drake</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Garcia</surname>
          </string-name>
          , Real-World,
          <article-title>Self-Hosted Kubernetes Experience</article-title>
          ,
          <source>in: Practice and Experience in Advanced Research Computing</source>
          ,
          <string-name>
            <surname>ACM</surname>
          </string-name>
          , Boston MA USA,
          <year>2021</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3437359.3465603. doi:
          <volume>10</volume>
          .1145/3437359.3465603.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Ruiz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. P.</given-names>
            <surname>Pueyo</surname>
          </string-name>
          , J. Mateo-Fornes,
          <string-name>
            <given-names>J. V.</given-names>
            <surname>Mayoral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. S.</given-names>
            <surname>Tehas</surname>
          </string-name>
          ,
          <source>Autoscaling Pods on an On-Premise Kubernetes Infrastructure QoS-Aware, IEEE Access 10</source>
          (
          <year>2022</year>
          )
          <fpage>33083</fpage>
          -
          <lpage>33094</lpage>
          . URL: https://ieeexplore.ieee.org/document/9732997/. doi:
          <volume>10</volume>
          .1109/ACCESS.
          <year>2022</year>
          .
          <volume>3158743</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>K.</given-names>
            <surname>Manaouil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lebre</surname>
          </string-name>
          ,
          <article-title>Kubernetes and the Edge?</article-title>
          ,
          <source>PhD Thesis</source>
          , Inria
          <string-name>
            <surname>Rennes-Bretagne Atlantique</surname>
          </string-name>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Mondal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <surname>H. M. D. Kabir</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>Tian</surname>
            ,
            <given-names>H.-N.</given-names>
          </string-name>
          <string-name>
            <surname>Dai</surname>
          </string-name>
          ,
          <article-title>Kubernetes in IT administration and serverless computing: An empirical study and research challenges</article-title>
          ,
          <source>The Journal of Supercomputing</source>
          <volume>78</volume>
          (
          <year>2022</year>
          )
          <fpage>2937</fpage>
          -
          <lpage>2987</lpage>
          . URL: https://link.springer.
          <source>com/10.1007/s11227-021-03982-3</source>
          . doi:
          <volume>10</volume>
          .1007/ s11227-021-03982-3.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.</given-names>
            <surname>Böhm</surname>
          </string-name>
          , G. Wirtz, Profiling Lightweight Container Platforms:
          <article-title>MicroK8s and</article-title>
          K3s in Comparison to Kubernetes,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>B.</given-names>
            <surname>Johansson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ragberger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nolte</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Papadopoulos</surname>
          </string-name>
          ,
          <article-title>Kubernetes Orchestration of High Availability Distributed Control Systems</article-title>
          , in: 2022
          <source>IEEE International Conference on Industrial Technology (ICIT)</source>
          , IEEE, Shanghai, China,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICIT48603.
          <year>2022</year>
          .
          <volume>10002757</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Rekhter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hares</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A Border</given-names>
            <surname>Gateway</surname>
          </string-name>
          <article-title>Protocol 4 (BGP-4</article-title>
          ),
          <source>RFC 4271</source>
          ,
          <year>2006</year>
          . URL: https: //www.rfc-editor.
          <source>org/info/rfc4271. doi:10</source>
          .17487/RFC4271.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>