<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Seamless Migration of Containerized Stateful Applications in Orchestrated Edge Systems</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Roman Kudravcev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sebastian Böhm</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Bamberg</institution>
          ,
          <addr-line>An der Weberei 5, Bamberg, 96047</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>20</fpage>
      <lpage>21</lpage>
      <abstract>
<p>Edge and fog computing have established an innovative approach in the distributed systems context, enhancing traditional cloud computing by improving latency, bandwidth utilization, and data protection. To orchestrate such environments, dynamic changes in load, like the number of edge devices, must be considered and often result in the need to migrate services to other nodes. This also requires the seamless migration of highly available stateful services to handle such changes. In this paper, we propose a tool for seamless service migration that addresses critical issues like state management, networking, and service availability. We perform a real-world experiment and quantitatively evaluate resource utilization, response time, and availability during the migration and idle states of the involved Kubernetes clusters. We show that it is possible to provide a tool that almost achieves a seamless migration experience with high availability to enable changes in orchestrated edge environments.</p>
      </abstract>
      <kwd-group>
        <kwd>Edge Computing</kwd>
        <kwd>Kubernetes</kwd>
        <kwd>Service Migration</kwd>
        <kwd>Workload Migration</kwd>
        <kwd>Orchestration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Edge and fog computing have emerged as innovative distributed computing paradigms, extending
traditional cloud infrastructure capabilities. They aim to meet the growing demand for real-time,
latency-sensitive applications and data processing closer to the source, such as Internet of Things (IoT)
devices [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These paradigms significantly enhance traditional cloud infrastructure by improving
latency, bandwidth utilization, and data protection, enabling efficient real-time applications [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Cloud-edge orchestration is essential for managing workloads effectively across cloud, edge, and IoT layers.
Ensuring efficient application placement, reducing latency, and optimizing resource usage is critical. By
dynamically assigning workloads and making policy-driven decisions, orchestrators enable scalability,
fault tolerance, and seamless operation, even in complex, distributed environments [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Despite its
importance, cloud-edge orchestration faces several challenges, including service placement. Service
placement is a key challenge in edge and fog computing. It involves determining the optimal deployment
of services or applications within a distributed network to minimize latency, energy consumption, and
bandwidth usage while maximizing resource availability and ensuring network reliability [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. However,
dynamic changes in load, the number of edge devices, and their locations often require services to be
migrated to alternative nodes to maintain performance [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This migration is especially challenging for
applications that cannot tolerate downtime, for example those that require continuous
availability or state persistence during migration (stateful applications).
      </p>
      <p>While other studies have considered service placement and the migration of stateless
applications, stateful migration and request forwarding during the migration process have not yet been
implemented or evaluated, and there is no comprehensive literature on this particular problem. Hence, this paper’s
objective and contribution are designing, implementing, and validating a tool for seamless application
migration in edge and fog computing environments. To address this gap, the proposed tool integrates
self-developed methods and existing technologies to address critical issues such as state management,
networking, and service availability. A key aspect of this research is benchmarking and experimental
evaluation, performed to assess the performance and effectiveness of the tool during migration. The
benchmarking process measures system availability, resource utilization, and migration time, which
provides a comprehensive analysis of the tool’s impact before and during migration.</p>
      <p>We take a Design Science Research (DSR) approach. We focus on developing and evaluating a
migration tool for edge orchestration environments. The tool is implemented to address key challenges such
as state management and availability. Its effectiveness is validated through quantitative benchmarking.
Metrics such as availability, response time, resource utilization, and migration duration are measured to
assess its efficiency and impact on system performance.</p>
      <p>The rest of the paper is organized as follows: First, Section 2 discusses existing approaches to migration
in cloud-edge orchestration and the current state of research. After that, we cover the concepts behind the
tools used for the migration tool (Section 3), which helps to understand how the individual components
work. Then, we examine the implementation of the migration tool, the individual
problems that need to be solved for a successful migration, and the tools used to solve them (Section 4).
We then evaluate the migration tool, looking at predefined metrics and comparing them to the system
in an idle state (Section 5). Finally, we review the tool and the evaluation (Section 6) and outline plans
for further experimental studies (Section 7).</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Ma et al. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] propose a framework for efficient live migration of edge services using Docker containers,
leveraging their layered storage architecture to minimize the amount of data transferred during the migration.
However, the work focuses on single-container migrations. It does not address orchestrated
multi-container setups or stateful applications that require state management.
      </p>
      <p>
        Similarly, Kaur et al. [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] investigate live migration of containerized microservices across Kubernetes
(K8s) clusters. Their solution ensures uninterrupted communication with the migrated services using
Traefik ingress controllers and DNS redirection. While effective, this approach relies on manual
configuration and primarily targets stateless workloads. It leaves gaps in automating migration for
stateful applications and handling dynamic resource management in orchestrated environments.
      </p>
      <p>This research addresses these limitations with an automated tool for stateful and stateless migration in
orchestrated edge systems. The proposed solution focuses on seamless state management and real-time
request forwarding to ensure minimal downtime and consistent performance during migrations.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Background</title>
      <sec id="sec-3-1">
        <title>3.1. Migration</title>
        <p>
          Migration generally encompasses moving data, applications, or systems from one environment to
another. Typical scenarios include moving workloads between cloud platforms, data centers, or storage
systems. In edge computing, workload migration is used to offload workloads from cloud data centers
to edge nodes to improve latency and bandwidth for end users. Stateful and stateless
application migration are particularly in focus. Different strategies for workload migration are available, with
benefits and downsides regarding migration time and performance degradation, as shown by [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ].
        </p>
        <p>Each strategy requires careful planning to address compatibility and security and to minimize downtime.
This holds especially when multi-tier applications, consisting of databases (DBs) and caches, must be
moved with minimal downtime.</p>
        <p>
          The previously mentioned stateful application stacks store information about past interactions,
such as session data or DB records. To ensure continuity and proper functionality, migrating stateful
application stacks involves transferring both the application and its state. This includes maintaining
service availability while migrating DBs and session data [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
        <p>
          On the other hand, stateless applications retain no information about previous interactions and treat
each request independently. Migrating stateless applications is more manageable because only the
application itself needs to be transferred, with no state synchronization or data migration required [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Tunneling</title>
        <p>Tunneling is a networking technique that allows data intended for private networks to travel securely
over public networks. It creates a direct connection between two networks by encapsulating data packets, allowing
them to traverse networks that do not natively support the original protocol. This encapsulation also
supports encrypted communication to ensure data security.</p>
        <p>
          Tunneling is commonly used in Virtual Private Networks (VPNs) to bypass firewalls, support
unsupported protocols, and establish secure connections. It simplifies communication between networks
without requiring extensive configuration or routing through multiple servers [
          <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
          ].
        </p>
        <p>
          However, tunneling has its drawbacks. Encapsulation consumes resources, which can slow down
communication. In addition, while packets are encrypted, tunnels bypass firewalls. This poses a
security risk if unauthorized access is gained. Proper management is essential to mitigate these
vulnerabilities [
          <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Implementation</title>
      <sec id="sec-4-1">
        <title>4.1. Networking</title>
        <p>A secure connection is essential for the migration process. Both environments must communicate to
replicate the DB to the target environment. Since the DBs may contain sensitive data, unauthorized
external access must be prohibited. Therefore, we use a tunneling tool to connect the container
orchestration platform clusters. We solved this problem with Submariner1. Submariner is an
open-source project that enables seamless networking between Pods and Services across multiple K8s clusters,
regardless of whether they run on-premises or in the cloud. It provides cross-cluster Layer 3 (L3)
connectivity using encrypted or unencrypted connections, realized with a tunnel using Virtual
eXtensible LAN (VXLAN). This allows workloads in different clusters to communicate as if they were on
the same network. Submariner is designed to be Container Network Interface (CNI) agnostic to ensure compatibility
with various K8s networking setups.</p>
        <p>
          After migration, handling requests sent to the source environment is critical, often because IoT
devices have not yet been updated to point to the target environment. We assume that devices that
receive a response from the origin environment will also receive information about the target for
subsequent requests, as similarly discussed in [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]. To ensure uninterrupted service, we implemented a
mechanism to forward requests from the origin to the target environment, assuming the target has
the current DB state and primary role. The solution uses an HTTP reverse proxy deployed in the
source environment. This proxy intercepts HTTP requests, replicates their details (method, headers,
body), and forwards them to the target environment. It then returns the target response to the client,
preserving headers and status codes for transparency. Although designed for HTTP, this approach
can be adapted for TCP or UDP traffic: the proxy forwards raw byte streams for TCP and individual
datagrams for UDP. This flexible forwarding mechanism ensures seamless communication across
protocols.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Migration</title>
        <p>To enable the migration of our system, we developed a tool written in the programming language Go.
This tool scans the resources within a K8s cluster, such as deployments, services, ingress routes, config
maps, and secrets. It then checks whether these resources already exist in the target environment. If
they do not, the tool cleans up the resources by removing unique identifiers (e.g., creation dates and
IDs) before applying them to the new cluster.</p>
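        <p>A minimal sketch of this clean-up step, operating on a decoded manifest represented as a generic map. The stripped field names are standard Kubernetes object metadata; the function name and sample values are illustrative, not the tool's actual code:</p>

```go
package main

import "fmt"

// stripClusterSpecific removes server-assigned identifiers from a decoded
// manifest so the resource can be re-applied to the target cluster.
func stripClusterSpecific(obj map[string]interface{}) {
	meta, ok := obj["metadata"].(map[string]interface{})
	if !ok {
		return
	}
	for _, f := range []string{"uid", "resourceVersion", "creationTimestamp", "generation", "managedFields"} {
		delete(meta, f)
	}
	// The status stanza is owned by the source cluster's controllers and
	// is not portable either.
	delete(obj, "status")
}

func main() {
	dep := map[string]interface{}{
		"kind": "Deployment",
		"metadata": map[string]interface{}{
			"name":              "message-store",
			"uid":               "4f1c2a9b",
			"resourceVersion":   "12345",
			"creationTimestamp": "2025-01-01T00:00:00Z",
		},
		"status": map[string]interface{}{"readyReplicas": 1},
	}
	stripClusterSpecific(dep)
	meta := dep["metadata"].(map[string]interface{})
	_, hasUID := meta["uid"]
	_, hasStatus := dep["status"]
	fmt.Println(meta["name"], hasUID, hasStatus)
}
```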
        <p>For data migration, we focus on SQL DBs, specifically PostgreSQL, which is the most popular SQL
DB.2 In container orchestration platforms like K8s, SQL DBs are typically deployed using stateful sets
or operator-managed DB clusters. Operator-managed clusters handle tasks such as high availability,
scaling, rolling updates, resource management, and security, making them a robust choice for database
management.3</p>
        <p>We used the CloudNativePG3 operator for testing. Many PostgreSQL operators, including
CloudNativePG, support bootstrapping a new DB from an existing one. During migration, the target DB cluster
is bootstrapped as a replica of the source DB, which continues to run as the primary DB. Once the target
environment is synchronized with the source, applications and data are fully migrated. At this point,
the roles of the clusters must be switched: the replica is promoted to primary, and the original primary
is demoted to a replica. This ensures the target environment can handle writes without errors since
replicas typically do not allow writes. In CloudNativePG, this process includes synchronized demotion
and promotion of DB clusters.3 The process is similar for PostgreSQL, deployed in a StatefulSet, but
requires additional manual steps. First, we enable logical replication on the source DB so the target can
replicate it. Next, we enable the publishing of changes on the source. The DB schema is then imported
from the source, and the target DB subscribes to the source’s publication. Once replication is complete,
we can switch to the target DB by disabling the subscription.</p>
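        <p>The manual StatefulSet path can be sketched as the ordered SQL statements below, generated here in Go. The publication, subscription, and connection-string values are illustrative placeholders; the source instance must additionally run with wal_level set to logical:</p>

```go
package main

import "fmt"

// migrationSQL returns, in order, the statements for the manual path
// described above: publish all changes on the source, subscribe on the
// target (after importing the schema, e.g. with pg_dump --schema-only),
// and drop the subscription at cutover so writes go to the target only.
func migrationSQL(pub, sub, sourceConn string) []string {
	return []string{
		// Run on the source DB:
		fmt.Sprintf("CREATE PUBLICATION %s FOR ALL TABLES;", pub),
		// Run on the target DB:
		fmt.Sprintf("CREATE SUBSCRIPTION %s CONNECTION '%s' PUBLICATION %s;", sub, sourceConn, pub),
		// Run on the target DB once replication has caught up:
		fmt.Sprintf("DROP SUBSCRIPTION %s;", sub),
	}
}

func main() {
	for _, stmt := range migrationSQL("migration_pub", "migration_sub",
		"host=source-db dbname=app user=replicator") {
		fmt.Println(stmt)
	}
}
```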
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evaluation and Results</title>
      <sec id="sec-5-1">
        <title>5.1. Experimental Setup and Design</title>
        <p>To evaluate the migration’s impact on the environments and key metrics, we designed a controlled
experimental setup to ensure that the results were reproducible and consistent.4 Therefore, we used
two Ubuntu 20.04 Virtual Machines (VMs) with 2 vCPUs, 4 GB memory and an SSD with a capacity
of 30 GB each. Both VMs run on-premises on one physical host machine with Kernel–based Virtual
Machine (KVM) as hypervisor and containerd as container runtime. These VMs were split into two
single-node clusters: an origin cluster as the migration source and a target cluster as the destination.</p>
        <p>The setup consists of three components: a resource utilization collector, a test application, and a client
application. The resource utilization collector used K8s’ Metrics API5 to monitor cluster-wide CPU and
memory usage each second, storing the data with timestamps in an SQLite DB. The test application is a
message store with a REST API for storing, retrieving, and deleting messages. It uses a PostgreSQL
DB managed by the CloudNativePG (CNPG) operator. A message contains a timestamp, a unique ID, and a payload so
we can later check which messages may not have been received. The client application is a lightweight
HTTP client that simulates an IoT device and is configured to send requests to the test application’s
REST API. The client supports adjustable request rates and GET/POST ratios. It logs the details of
each request, namely HTTP method, message content, success status, timestamp, and response time.
After all requests have been sent, the client retrieves the stored messages from the test application to
determine message loss, availability, and average response time. A GET request is considered failed
if the client receives an HTTP status code outside the 200 range. The number of failed POST requests
is determined by checking whether the DB contains the unique message sent by the client.</p>
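        <p>These two failure checks can be sketched as follows; the function names and sample data are illustrative, not the client application's actual code:</p>

```go
package main

import "fmt"

// getFailed reports whether a GET request counts as failed, i.e. its HTTP
// status code lies outside the 2xx range.
func getFailed(status int) bool {
	return status < 200 || status > 299
}

// failedPosts counts the unique message IDs the client sent that never
// appeared in the DB after the experiment.
func failedPosts(sent, stored []string) int {
	seen := make(map[string]bool, len(stored))
	for _, id := range stored {
		seen[id] = true
	}
	failed := 0
	for _, id := range sent {
		if !seen[id] {
			failed++
		}
	}
	return failed
}

func main() {
	fmt.Println(getFailed(503))                                           // a 5xx response counts as failed
	fmt.Println(failedPosts([]string{"a", "b", "c"}, []string{"a", "c"})) // message "b" was lost
}
```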
        <p>Our evaluation consists of two scenarios. The first scenario measures the idle load without
migration to establish a baseline for performance and resource utilization under normal conditions. The
second scenario evaluates the load during migration. By testing these two scenarios, we can compare
performance and resource utilization between the two scenarios. We can also observe and evaluate
availability and downtime during a migration. For the idle scenario, we deploy the resource utilization collector and the
test application on the origin cluster and start making requests with our implemented client. For testing
during migration, we deploy the resource utilization collector on both clusters to get an overview of the
resource utilization of both clusters and deploy the test application on the origin environment. Then,
we start making requests with our implemented client and start our migration process.
2https://survey.stackoverflow.co/2024/technology#1-databases
3https://cloudnative-pg.io/
4Tool and resources for the performed experiment available online: https://github.com/romankudravcev/clustershift-benchmark
5https://kubernetes.io/docs/tasks/debug/debug-cluster/resource-metrics-pipeline/</p>
        <p>[Figure 1: CPU and memory utilization of the origin and target clusters in idle and migration states.]</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Experimental Results</title>
        <p>Figure 1 shows the CPU and memory utilization of the origin and target clusters during the idle
and migration states. Idle corresponds to normal cluster load with no migration components, while
migration includes deploying the migration components and executing the migration process. In the
origin cluster, at about 30 seconds, a small spike in memory usage can be observed. This reflects the
deployment of the Submariner Broker and Operator, which requires about 30 MB additional memory.
A similar increase can be observed in the target cluster at roughly 40 seconds. The CPU utilization
also experiences an increase due to the deployment of the Submariner resources. It increases from
approximately 6.85% to a peak of 11.75% on the source cluster and from 3.45% to a peak of 7.55%
on the target cluster. At 100 seconds the most significant jump in both CPU and memory utilization
is noticed, caused by the replication of the PostgreSQL DB. A memory spike of 430 MB on the origin
cluster and 605 MB on the target cluster is observed. Both clusters experience a peak memory load of
approximately 2200 MB, which remains until the end of the experiment. On the CPU side, the origin
cluster peaks at 70% utilization, while the target cluster peaks at 60%. Both peaks are also caused by
the replication of the DB. After the completion of the migration process, the replica DB is promoted to
primary, and the original primary is demoted and decoupled from the target DB. This results in a drop
in the CPU utilization to about 8% on the origin cluster and 13% on the target cluster.</p>
        <p>Figure 2 shows the response time of our deployed test application comparing idle and migration
states for GET and POST requests. In the idle state, the average response time for GET requests is 34.5
ms (σ = 2.83). During migration, the average response time for a GET request increases to 35.9 ms
(σ = 6.16). For POST requests, the average response time during idle is 133.47 ms (σ = 69.76), while
during migration, it decreases slightly to 118.17 ms (σ = 74.23). Notably, the migration process causes
a considerable number of outliers for GET requests.</p>
        <p>To evaluate availability and downtime during the migration we check the number of failed requests
and their timestamps. A total of 2165 requests were sent, with approximately 70% being POST requests
(1515) and 30% (650) being GET requests. Out of 650 GET requests, 12 failed (1.85%). For POST
requests, 29 out of 1515 failed (1.91%). Overall, 41 out of the 2165 sent requests failed, resulting in an
availability of 98.11%. The total downtime during the migration was 4.66 seconds, calculated
from the timestamps of the failed requests.</p>
        <p>[Figure 2: Response times (ms) of GET and POST requests in idle and migration states.]</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion</title>
      <p>The results show that Submariner has minimal impact on CPU and memory utilization, supporting its
use in migration scenarios. DB replication, on the other hand, caused more significant spikes in resource
usage. This is expected because replication requires additional resources to transfer large amounts of
data and ensure data consistency. However, it is important to consider clusters with higher idle loads, as
the replication process could potentially overload such clusters. Response time differences are minor for
this use case and can be considered negligible. They become relevant when clients do not automatically update
their connections based on the response received from the application. In this case, clients continue
routing through the reverse proxy until the target cluster’s IP is manually configured. This results in an
overhead since we forward each request not just once per client, but until the IP switch to the target
cluster is completed. This configuration directly affects the overall response latency. An availability
rate above 98% confirms that the tool is suitable for a migration use case. The duration of the downtime
should not be influenced by the size of the DB. The application startup times will most likely affect the
downtime because, at boot-up, the target DB must be ready for write operations.</p>
      <p>The proposed experiment and the obtained results are subject to a few limitations. Firstly, not all types
of data were considered during the migration. In particular, no user context (i.e., state of the test
application) or session data, such as a Redis store, was included in the test application. The evaluation
was also limited to an SQL DB managed by an operator, which only covers a small portion of use cases.
This is a proof of concept, and we are only showing general applicability. Finally, there are also threats to
the validity of our experimental setup, for example application startup and network stability, which could
impact the results.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion and Future Work</title>
      <p>This paper showed an implementation of how a tool for migrating orchestrated environments could
be built and an evaluation of this tool. To highlight the contribution of our tool, we conclude that we
achieved an almost seamless application migration in edge and fog computing environments with an
availability of over 98%. With our tool, we are able to establish a secure tunnel between our environments
using Submariner, migrate stateful applications by replicating PostgreSQL DBs, and forward incoming
traffic by rerouting it with a reverse proxy.</p>
      <p>While our implementation and evaluation of this tool provided important insights, it also highlighted
issues that require further investigation. The evaluation of this tool was performed using single-node
clusters, which does not reflect reality. In future experiments, it would be interesting to benchmark
multi-node clusters to see whether the tool’s results are comparable. In order to handle traffic forwarding in a
more efficient and generic way, different proxy solutions or service meshes, which also sound promising,
should be investigated in future work. It would also be interesting to look at migrating different DB
types, such as NoSQL or key-value stores, to cover a wider variety of storage methods.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used DeepL and ChatGPT-4 for grammar checking, spell
checking, and rephrasing. After using these tools/services, the authors reviewed and edited the content as
needed and take full responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>K.</given-names>
            <surname>Cao</surname>
          </string-name>
          , Y. Liu,
          <string-name>
            <given-names>G.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>An Overview on Edge Computing Research</article-title>, <source>IEEE Access</source> <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>85714</fpage>
          -
          <lpage>85728</lpage>
          . doi:10.1109/ACCESS.2020.2991734.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Salaht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Desprez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lebre</surname>
          </string-name>
          ,
          <article-title>An Overview of Service Placement Problem in Fog and Edge Computing</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>53</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>35</lpage>
          . URL: https://dl.acm.org/doi/10.1145/3391196. doi:10.1145/3391196.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Böhm</surname>
          </string-name>
          , G. Wirtz,
          <article-title>Cloud-Edge Orchestration for Smart Cities: A Review of Kubernetes-based Orchestration Architectures</article-title>
          ,
          <source>EAI Endorsed Transactions on Smart Cities</source>
          <volume>6</volume>
          (
          <year>2022</year>
          )
          <article-title>e2</article-title>
          . doi:10.4108/eetsc.v6i18.1197.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Service placement strategies in mobile edge computing based on an improved genetic algorithm</article-title>
          ,
          <source>Pervasive and Mobile Computing</source>
          <volume>105</volume>
          (
          <year>2024</year>
          )
          <article-title>101986</article-title>
          . doi:10.1016/j.pmcj.2024.101986.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>C.-H.</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Varghese</surname>
          </string-name>
          ,
          <article-title>Resource Management in Fog/Edge Computing: A Survey on Architectures, Infrastructure, and Algorithms</article-title>
          ,
          <source>ACM Computing Surveys</source>
          <volume>52</volume>
          (
          <year>2020</year>
          )
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          . doi:10.1145/3326066.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Ma</surname>
          </string-name>
          , S. Yi,
          <string-name>
            <given-names>N.</given-names>
            <surname>Carter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Efficient Live Migration of Edge Services Leveraging Container Layered Storage</article-title>, <source>IEEE Transactions on Mobile Computing</source>
          <volume>18</volume>
          (
          <year>2019</year>
          )
          <fpage>2020</fpage>
          -
          <lpage>2033</lpage>
          . URL: https://ieeexplore.ieee.org/document/8470949/. doi:10.1109/TMC.2018.2871842.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kaur</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guillemin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Sailhan</surname>
          </string-name>
          ,
          <article-title>Live migration of containerized microservices between remote Kubernetes Clusters</article-title>
          , in:
          <source>IEEE INFOCOM 2023 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS)</source>
          , IEEE, Hoboken, NJ, USA,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . URL: https://ieeexplore.ieee.org/document/10225858/. doi:10.1109/INFOCOMWKSHPS57453.2023.10225858.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. S. E.</given-names>
            <surname>Ng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Sripanidkulchai</surname>
          </string-name>
          ,
          <article-title>Workload-aware live storage migration for clouds</article-title>
          , in:
          <source>Proceedings of the 7th ACM SIGPLAN/SIGOPS International Conference on Virtual Execution Environments</source>
          , ACM, Newport Beach, California, USA,
          <year>2011</year>
          , pp.
          <fpage>133</fpage>
          -
          <lpage>144</lpage>
          . doi:10.1145/1952682.1952700.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <article-title>A Survey on Service Migration in Mobile Edge Computing</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>23511</fpage>
          -
          <lpage>23528</lpage>
          . doi:10.1109/ACCESS.2018.2828102.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>F.</given-names>
            <surname>Barbarulo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Puliafito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Virdis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Mingozzi</surname>
          </string-name>
          ,
          <article-title>Extending ETSI MEC Towards Stateful Application Relocation Based on Container Migration</article-title>
          , in:
          <source>2022 IEEE 23rd International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM)</source>
          , IEEE, Belfast, United Kingdom,
          <year>2022</year>
          , pp.
          <fpage>367</fpage>
          -
          <lpage>376</lpage>
          . doi:10.1109/WoWMoM54355.2022.00035.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Aqun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Guanqun</surname>
          </string-name>
          ,
          <article-title>Research on tunneling techniques in virtual private networks</article-title>
          , in:
          <source>WCC 2000 - ICCT 2000: 2000 International Conference on Communication Technology Proceedings (Cat. No.00EX420)</source>
          , volume
          <volume>1</volume>
          ,
          <year>2000</year>
          , pp.
          <fpage>691</fpage>
          -
          <lpage>697</lpage>
          . doi:10.1109/ICCT.2000.889294.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <article-title>IP in IP Tunneling</article-title>
          , RFC
          <volume>1853</volume>
          ,
          <year>1995</year>
          . URL: https://www.rfc-editor.org/info/rfc1853. doi:10.17487/RFC1853.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoagland</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Krishnan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thaler</surname>
          </string-name>
          ,
          <article-title>Security Concerns with IP Tunneling</article-title>
          , RFC
          <volume>6169</volume>
          ,
          <year>2011</year>
          . URL: https://www.rfc-editor.org/info/rfc6169. doi:10.17487/RFC6169.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Saad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Alawieh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. T.</given-names>
            <surname>Mouftah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gulder</surname>
          </string-name>
          ,
          <article-title>Tunneling techniques for end-to-end vpns: generic deployment in an optical testbed environment</article-title>
          ,
          <source>IEEE Communications Magazine</source>
          <volume>44</volume>
          (
          <year>2006</year>
          )
          <fpage>124</fpage>
          -
          <lpage>132</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>U.</given-names>
            <surname>Bulkan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Dagiuklas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Iqbal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M. S.</given-names>
            <surname>Huq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Al-Dulaimi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rodriguez</surname>
          </string-name>
          ,
          <article-title>On the Load Balancing of Edge Computing Resources for On-Line Video Delivery</article-title>
          ,
          <source>IEEE Access</source>
          <volume>6</volume>
          (
          <year>2018</year>
          )
          <fpage>73916</fpage>
          -
          <lpage>73927</lpage>
          . doi:10.1109/ACCESS.2018.2883319.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>