<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Bottom-Up Resource Orchestration in Edge Computing: A Pod Profile-Aware Agent-Based Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marija Gojković</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Melanie Schranz</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Alpen-Adria University</institution>
          ,
          <addr-line>Klagenfurt</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lakeside Labs GmbH</institution>
          ,
          <addr-line>Klagenfurt</addr-line>
          ,
          <country country="AT">Austria</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
<p>Modern distributed systems face growing challenges in scheduling workloads across heterogeneous cloud-edge infrastructures. Advanced pod orchestration techniques (pod cloning, dependency-aware scheduling, and parallel pod processing) are crucial for improving resource utilization, scalability, and fault tolerance. Pod cloning replicates workloads to handle spikes or failures, dependency management enforces correct task sequencing, and Kubernetes-native parallelism distributes tasks across concurrent pods. Despite their benefits, these strategies are seldom unified in adaptive, bio-inspired schedulers. This paper presents an emergent scheduler integrating cloning, dependency resolution, and parallelism within a swarm intelligence framework based on the Artificial Bee Colony (ABC) algorithm. Modeling the cluster as a multi-agent ecosystem, pods are treated as food sources managed via ABC's employed, onlooker, and scout phases. This enables decentralized decision-making that dynamically adjusts cloning, enforces dependencies, and tunes parallelism in response to real-time cluster states. Evaluated on a simulated edge-cloud testbed against random assignment, dependency-agnostic best-fit, and a static ABC baseline, our scheduler achieves superior latency and deadline satisfaction rates.</p>
      </abstract>
      <kwd-group>
        <kwd>multiagent systems</kwd>
        <kwd>edge computing</kwd>
        <kwd>bottom-up resource orchestration</kwd>
        <kwd>edge micro data centers</kwd>
        <kwd>dependency-aware scheduling</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Efficient workload scheduling across heterogeneous cloud-edge systems is a growing challenge in
modern distributed environments. Key pod orchestration strategies—cloning, dependency-aware
scheduling, and parallel pod processing—optimize resource use, scalability, and fault tolerance. Pod
cloning dynamically replicates workloads to handle traffic spikes or failures, dependency management
ensures correct task sequencing, and parallel processing accelerates execution via Kubernetes-native
mechanisms [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Despite addressing challenges like resource contention and coordination latency,
these strategies remain underutilized in adaptive schedulers, especially those leveraging bio-inspired
algorithms. This paper integrates pod cloning, dependency resolution, and parallelization into an
emergent scheduler based on the Artificial Bee Colony (ABC) swarm intelligence algorithm [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. ABC’s
decentralized resource allocation suits the self-organizing needs of modern infrastructures [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We
analyze how pod management techniques affect satisfaction rates. By combining ABC optimization
with Kubernetes pod controls, our scheduler dynamically adjusts cloning, parallelism, and dependency
handling, balancing overhead with performance—vital for real-time edge deployments.
      </p>
      <p>The paper is structured as follows: Section 2 reviews related work. Section 3 details pod optimization
strategies. Section 4 describes the system model and scheduler. Section 5 covers the simulation setup,
system behavior analysis, and key findings, and Section 6 concludes with future
directions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Efficient cluster resource management has inspired approaches such as the oversubscription framework
in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], maximizing utilization. Extending to edge platforms, [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] orchestrate workloads for
multitenant IoT services with varying SLOs, but overlook pod interdependencies—critical to performance.
Microservice interdependencies, shown in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] to degrade performance if unmanaged, are often ignored
by platforms like Kubernetes, which separate deployment and routing despite their latency correlation
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Resource allocation in multi-clouds [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and communication reduction [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] improve efficiency but
neglect pod-level coordination. Scaling strategies like in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] vertically aggregate per-task signals and
horizontally replicate tasks, yet remain system-specific and overlook fine-grained interdependencies.
      </p>
      <p>
        Autoscaling remains central [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ],
distinguishing between horizontal scaling (adjusting pod
count) and vertical scaling (adding resources),
but often reacts to metrics rather than SLOs.
Horizontal scaling, as in Kubernetes’ HPA [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ],
dynamically responds to load but mainly operates
at or before the edge. Elastic replica
management [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] enhances QoS and efficiency,
resembling pod cloning in our context, but still avoids
orchestration beyond the edge.
      </p>
      <p>Figure 1: Pod cloning in the emergent scheduler. (a) Conceptual framework illustrating clone generation. (b) Execution workflow showing how cloned pods are scheduled and completed.</p>
      <p>
        We focus on fog-layer orchestration, selecting master agents on the cloud side and leveraging
slack resources from rigid pods to support elastic workloads—an area underexplored by current
autoscaling. Edge-edge collaboration [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] and fog computing [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] further highlight the need for
low-latency, flexible deployment in future systems.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Types of Pod Optimization Strategies</title>
      <p>Pod cloning, dependencies, and parallel
processing are key to optimizing workload
management across distributed cloud-edge
environments. These strategies boost resource utilization,
reliability, and performance in interconnected systems,
enabling applications to adapt to fluctuating
demands and network conditions.</p>
      <p>Figure 2: Key pod orchestration concepts in the emergent scheduler. (a) Parallel pod processing in the emergent scheduler, where pods process simultaneously based on shared constraints. (b) Inter-pod dependencies modeled in the emergent scheduler, including sequencing and coordination requirements.</p>
      <p>Pod Dependencies and Dynamic Resource Management: Managing pod dependencies
ensures correct and efficient execution in Kubernetes. In single deployments, independent pods
enable horizontal scaling and resilience [15]. In multi-deployment setups, dependencies—e.g., a
frontend relying on a backend or database—are critical. Kubernetes manages these via Services,
DNS, and network policies [15], enabling multi-tier applications to remain scalable and fault-tolerant.
Dynamic resource management techniques, such as pod cloning, integrate with dependency
handling to enable autoscaling [16].</p>
      <p>Thus, applications adjust replicas based on workload changes while preserving dependency constraints.</p>
      <p>Pod Cloning and Autoscaling: Autoscaling dynamically adjusts computational resources
to meet demand [16]. In Kubernetes, pod cloning replicates
instances to scale horizontally during workload spikes, while
load balancing distributes tasks for performance and fault
tolerance. Cloning introduces scheduling complexity, as all
replicas must respect original dependency rules. In complex
workloads, this requires QoS-aware scheduling, as in [17], to
ensure correct execution sequences. Techniques like
topological sorting [18] help maintain execution order and optimize performance.</p>
      <p>Figure 3: Overview of pod processing logic in the emergent scheduler, integrating standard, cloned, parallel, and dependent pod behaviors.</p>
      <p>Parallel Pod Processing for Enhanced Performance:
Parallel pod processing runs multiple pod instances concurrently, similar to task parallelism [19].
Computationally intensive tasks are split into subtasks and executed in parallel,
improving throughput and reducing latency—vital in areas
like real-time analytics and AI training. This relies on application-level parallelism, as Kubernetes does
not parallelize within a single pod. Developers must implement multithreading or multiprocessing [20]
to exploit concurrent execution effectively.</p>
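      <p>The application-level parallelism described above can be sketched in plain Python with the standard concurrent.futures module. All names here (process_chunk, parallel_sum_of_squares) are illustrative, not part of the paper's system:</p>

```python
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    """Illustrative subtask: a partial sum of squares over one slice of the input."""
    return sum(x * x for x in chunk)

def parallel_sum_of_squares(data, n_workers=4):
    # Split the workload into roughly equal chunks, then execute them concurrently.
    # Threads are used here for a portable sketch; CPU-bound production code
    # would typically use ProcessPoolExecutor to sidestep the GIL.
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        return sum(pool.map(process_chunk, chunks))

print(parallel_sum_of_squares(list(range(1000))))
```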
      <p>Efficient orchestration combines parallel processing for speed, cloning for scalability and fault
tolerance, and dependency management for correctness. Together, these strategies enable robust
performance in complex, distributed systems.</p>
    </sec>
    <sec id="sec-4">
      <title>4. System Model</title>
      <p>
        We adopt the discrete-time, agent-based simulation framework with all scheduling policies described
in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Our emergent scheduler therefore comprises a master agent, a worker agent, and dynamically
arriving pod agents. Pods are defined by type (rigid or elastic), resource demand, execution requirements,
and queueing tolerance. The worker manages CPU and RAM allocation, using scheduling policies to
accept or reject pods based on availability. For elastic pods, peer selection is performed via random,
best-match, and a bottom-up resource orchestration algorithm inspired by the Artificial Bee Colony
approach. The master maintains pod queues, coordinates scheduling, and employs a retry mechanism
with a tunable parameter to balance rigid and elastic workloads.
      </p>
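      <p>The bee-colony-inspired peer selection for elastic pods can be sketched as fitness-proportional sampling over candidate EMDCs, mimicking ABC's onlooker phase. The fitness measure (free CPU plus free RAM) and all names below are our illustrative assumptions, not the exact formulation of the scheduler in [2]:</p>

```python
import random

def select_peer(emdcs, rng=random):
    """Pick a candidate EMDC with probability proportional to its free capacity,
    in the spirit of the onlooker phase of the Artificial Bee Colony algorithm.
    `emdcs` maps a peer id to (free_cpu, free_ram); fitness is their sum."""
    fitness = {peer: cpu + ram for peer, (cpu, ram) in emdcs.items()}
    total = sum(fitness.values())
    if total == 0:  # no slack anywhere: fall back to a uniform pick
        return rng.choice(list(emdcs))
    r = rng.uniform(0, total)  # roulette-wheel selection
    acc = 0.0
    for peer, f in fitness.items():
        acc += f
        if r <= acc:
            return peer
    return peer  # numerical edge case: return the last candidate

peers = {"emdc-a": (4, 8), "emdc-b": (0, 0), "emdc-c": (2, 2)}
print(select_peer(peers))
```

Random and best-match peer selection drop in as trivial variants of the same interface (uniform choice, or argmax over the fitness map).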
      <p>
        To handle the pod processing methods outlined in Section 3, we implement a logic layer that
pre-processes incoming pods. This step resolves their complexities—parallelism, cloning, and
dependencies—transforming them into the sequential input format required by the emergent scheduler [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Parallel Pod Processing: If a pod p requires parallel processing, it is divided into n sub-pods [19]
p1, p2, . . . , pn, where n is randomly selected from the range (1,10). Since each pod must be processed
by a single Edge Micro Data Center (EMDC), all its sub-pods are routed to the same queue, as described
in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. To satisfy the sequential input requirement, the sub-pods are randomly ordered before insertion.
      </p>
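      <p>This pre-processing step can be sketched under the stated rules (n drawn from 1 to 10, all sub-pods bound for a single queue, random insertion order). The dataclass fields and the even split of the resource demand are illustrative assumptions:</p>

```python
import random
from dataclasses import dataclass

@dataclass
class SubPod:
    parent: str   # id of the original pod p
    index: int    # position i in p_1 .. p_n
    cpu: float    # this sub-pod's share of the parent's resource demand

def split_into_subpods(pod_id, cpu_demand, rng=random):
    """Divide a parallel pod into n sub-pods and return them in random order,
    ready for insertion into a single EMDC queue."""
    n = rng.randint(1, 10)
    subpods = [SubPod(pod_id, i, cpu_demand / n) for i in range(1, n + 1)]
    rng.shuffle(subpods)  # satisfy the scheduler's sequential input format
    return subpods

queue = split_into_subpods("pod-42", cpu_demand=8.0)
print(len(queue))
```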
      <p>Pod Cloning: To simulate dynamic resource availability [16], the scheduler supports processing
of cloned pods. If cloning is triggered, p is replicated c times, producing p1, p2, . . . , pc, with c
randomly chosen between 1 and 10. Clones are placed in the same queue and randomly ordered for
sequential scheduling. Each clone is assigned an observer, o1, o2, . . . , oc, which monitors its status.
Once one clone (e.g., p1) finishes, its observer notifies the others, causing them to terminate—mimicking
unexpected resource release in the emergent scheduler.</p>
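      <p>The observer mechanism can be sketched as a shared completion flag: the first clone to finish claims it, and every other observer then terminates its clone. This is a plain-Python illustration of the idea, not the simulator's actual implementation:</p>

```python
class CloneGroup:
    """Observers o_1..o_c share one flag; the first finished clone wins."""

    def __init__(self, clone_ids):
        self.clones = {cid: "running" for cid in clone_ids}
        self.winner = None

    def notify_finished(self, clone_id):
        # Observer callback: the first completion terminates all sibling
        # clones, mimicking unexpected resource release in the scheduler.
        if self.winner is not None:
            return  # a sibling already finished; nothing left to do
        self.winner = clone_id
        for cid in self.clones:
            self.clones[cid] = "done" if cid == clone_id else "terminated"

group = CloneGroup(["p1", "p2", "p3"])
group.notify_finished("p2")
print(group.clones)
```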
      <p>Pod Dependencies: When a pod p depends on other pods, meaning it cannot start until its prerequisites
finish, dependencies are modeled as a DAG [18], with pods as nodes and dependencies as directed edges.
For example, if p3 depends on p1 and p2, edges go from p1 to p3 and p2 to p3.</p>
      <p>To ensure the scheduler’s sequential input respects dependencies, we apply topological sorting [21].
This produces a linear order in which each pod appears after all its prerequisites. For example, if p4
depends on p2 and p3, and p2 depends on p1, the resulting order could be p1, p2, p3, p4, guaranteeing
that all constraints are satisfied before execution.</p>
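      <p>This ordering step can be sketched with Kahn's algorithm [21]; edges point from a prerequisite to the pod that depends on it, matching the DAG orientation above:</p>

```python
from collections import deque

def topo_order(pods, edges):
    """Return the pods ordered so that each appears after all its prerequisites.
    `edges` is a list of (prerequisite, dependent) pairs."""
    indegree = {p: 0 for p in pods}
    children = {p: [] for p in pods}
    for pre, dep in edges:
        children[pre].append(dep)
        indegree[dep] += 1
    ready = deque(p for p in pods if indegree[p] == 0)  # pods with no prerequisites
    order = []
    while ready:
        p = ready.popleft()
        order.append(p)
        for dep in children[p]:
            indegree[dep] -= 1
            if indegree[dep] == 0:
                ready.append(dep)
    if len(order) != len(pods):
        raise ValueError("dependency cycle: no valid schedule exists")
    return order

# p4 depends on p2 and p3; p2 depends on p1
print(topo_order(["p1", "p2", "p3", "p4"],
                 [("p1", "p2"), ("p2", "p4"), ("p3", "p4")]))
```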
    </sec>
    <sec id="sec-5">
      <title>5. System Behavior Analysis</title>
      <p>
        This section analyzes how system performance evolves under different scheduling strategies and
workload conditions. Through agent-based simulation, we evaluate the dynamic behavior of a distributed
edge environment subjected to varying pod elasticity levels, inter-pod dependencies, and traffic
intensities. The goal is to reveal how these factors influence pod satisfaction rate, i.e., how many of
the available resources in the EMDC a pod can use (as rigid) and reuse (as elastic) before its assigned
waiting time runs out [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        Simulation Setup: The simulation environment is built using the MESA agent-based modeling
framework in Python [22]. Its modular design enables custom agent classes with specific behaviors and
decentralized execution, allowing each agent to operate independently. MESA’s built-in tools facilitate
modeling complex, interactive systems in a scalable way. Simulations run for 12,000 discrete time steps,
tracking pod satisfaction rate under varying pod arrival rates λ from 0.55 (light load) to 0.75 and 0.95
(heavy load). This progression allows us to evaluate the scalability and robustness of the bottom-up
scheduling strategy as the system approaches saturation. Two main factors are explored: pod elasticity
and pod coordination constraints. We compare datasets with 30% elastic pods and with 70% elastic pods.
Elastic pods provide scheduling flexibility but can also introduce overhead or contention with rigid
workloads. To examine how coordination complexity interacts with elasticity, three pod-level features
are included: parallel processing (10% of pods, grouped randomly between 1 and 10), cloning (10%),
and inter-pod dependencies (20%). These extensions are evaluated against baseline scenarios from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ],
which reflect elastic scheduling without additional pod dependencies.
      </p>
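      <p>The load levels can be read as per-step arrival probabilities. A stripped-down, framework-free sketch of such a discrete-time loop is shown below; the real study uses the MESA framework [22], and the capacity and service-time parameters here are illustrative, with agent behavior reduced to a simple admission check:</p>

```python
import random

def run(arrival_rate, steps=12_000, capacity=10, service=12, seed=1):
    """Bernoulli pod arrivals with probability `arrival_rate` per step; a pod is
    satisfied if a slot is free, and each accepted pod holds its slot for
    `service` steps. Returns the fraction of satisfied pods."""
    rng = random.Random(seed)
    busy_until = []          # release times of occupied slots
    satisfied = arrived = 0
    for t in range(steps):
        busy_until = [r for r in busy_until if r > t]  # free finished slots
        if rng.random() < arrival_rate:
            arrived += 1
            if len(busy_until) < capacity:
                busy_until.append(t + service)
                satisfied += 1
    return satisfied / arrived if arrived else 1.0

print(f"lambda=0.55 -> {run(0.55):.3f}, lambda=0.95 -> {run(0.95):.3f}")
```

Even this toy loop reproduces the qualitative trend studied here: satisfaction degrades as λ pushes the offered load past capacity.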
      <p>Insights from Experimental Results: The results in Fig. 4–5 show how pod elasticity, inter-pod
dependencies, and varying λ influence scheduling performance in terms of pod satisfaction rate.
This directly reflects the cost of introducing pod-dependency processing (compared against
non-dependent pods), as well as the synchronization, i.e., pod coordination, costs.</p>
      <p>Our analysis proceeds along three key dimensions: (1) the impact of pod elasticity, (2) the effect of
increasing λ, and (3) the role of pod coordination constraints such as parallel processing, cloning, and
inter-pod dependencies. These perspectives together provide a detailed understanding of the trade-offs
involved in bottom-up scheduling across diverse edge workload scenarios.</p>
      <p>Satisfaction rate: (1) In panel (a), Fig. 4 (30% elastic pods), which includes pod dependencies, both
the best-match and swarm intelligence (SI) strategies initially underperform compared to the random
baseline. In panel (b), with independent pods, SI performs between random and best-match—a trend
that continues as λ increases. (2) In panel (a), Fig. 5 (70% elastic pods), SI outperforms both baselines
after timestep 5000, maintaining a satisfaction rate above 95%. For independent pods (panel b), all
strategies achieve nearly perfect satisfaction, close to 100%, with minimal differences across schedulers.</p>
      <p>Elastic pods improve performance notably under high load and elasticity, but only with adaptive
scheduling. The SI method shows strong resilience and consistently outperforms others in complex
scenarios. For simpler, independent workloads, advanced strategies offer little extra advantage.</p>
      <p>Impacts and Observations: The simulation study reveals that pod elasticity and coordination
requirements significantly influence satisfaction rates under varying traffic conditions. At moderate to
high λ, elastic pods enable more flexible resource allocation, improving satisfaction rates—especially
under the SI strategy. The addition of coordination mechanisms—such as parallel processing, cloning,
and inter-pod dependencies—introduces extra complexity and can moderately reduce satisfaction in
some scenarios. Nevertheless, the SI strategy shows resilience, maintaining relatively high satisfaction
even under complex workloads. Interestingly, dependencies can sometimes improve stability in SI by
preventing overly aggressive pod placement, resulting in more consistent satisfaction rates.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>Efficient workload scheduling across heterogeneous cloud-edge infrastructures is increasingly critical
for modern distributed systems. Pod-level orchestration techniques—cloning, parallel processing, and
dependency-aware execution—are essential for managing complexity, scalability, and fault tolerance,
yet remain underutilized in decentralized schedulers. This work integrates these mechanisms into
a swarm-intelligence-based scheduler using the Artificial Bee Colony (ABC) algorithm. Simulations
show that pod elasticity and coordination requirements strongly impact performance under high load.
Elastic pods improve satisfaction rates with the SI strategy, though they may increase queue lengths.
Best-match effectively controls queues but can struggle with slack estimation errors and complex
coordination. Notably, dependencies like parallelism and sequencing can reduce queue buildup by
moderating placement aggressiveness, providing a self-regularizing effect for SI. No single strategy
dominates: best-match fits tightly constrained settings, while SI is more robust in dynamic, elastic, and
uncertain environments. These insights guide the design of resilient, adaptive orchestration systems
balancing flexibility, efficiency, and stability amid evolving workloads.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was performed in the course of the EU-project ACES supported by EU’s Horizon Europe
under the grant agreement No. 101093126 (HORIZON-CL4-2022-DATA-01-02).</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used ChatGPT-4 and Grammarly for grammar and spelling checking. After using these tools, the authors reviewed and edited the content as needed and take full responsibility for the publication’s content.</p>
      <p>Telecommunication Systems (MASCOTS), 2020, pp. 1–8. doi:10.1109/MASCOTS50786.2020.9285953.
[15] Kubernetes, Pods, 2025. URL: https://kubernetes.io/docs/concepts/workloads/pods/.
[16] T. Lorido-Botran, R. N. Calheiros, R. M. Rodriguez, R. Buyya, C. Vecchiola, Autoscaling in the cloud: A survey, IEEE Transactions on Services Computing 8 (2015) 947–969. doi:10.1109/TSC.2014.2350938.
[17] J. Yu, R. Buyya, K. Ramamohanarao, Workflow scheduling algorithms for service-oriented cloud computing with blending of deadline and budget constraints, Proceedings of the 2008 IEEE International Symposium on Cluster Computing and the Grid (CCGRID) (2008) 427–436. doi:10.1109/CCGRID.2008.46.
[18] S. Even, Graph Algorithms, Cambridge University Press, 2011. Chapters on Directed Acyclic Graphs.
[19] S. Manvi, G. Shyam, Cloud Computing: Concepts and Technologies, 1st ed., CRC Press, 2021. doi:10.1201/9781003093671.
[20] P. S. Pacheco, An Introduction to Parallel Programming, Morgan Kaufmann, 2011.
[21] T. H. Cormen, C. E. Leiserson, R. L. Rivest, C. Stein, Introduction to Algorithms, 3rd ed., MIT Press, 2009. Section 22.4: Topological Sort.
[22] D. Masad, J. L. Kazil, Mesa: An agent-based modeling framework, 2015. URL: https://github.com/projectmesa/mesa/blob/master/CITATION.bib.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Kubernetes</surname>
          </string-name>
          ,
          <source>Fine parallel processing using a work queue</source>
          ,
          <year>2025</year>
          . URL: https://kubernetes.io/docs/tasks/job/fine-parallel-processing-work-queue/.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ghasemi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schranz</surname>
          </string-name>
          ,
          <article-title>Bottom-up resource orchestration in edge computing: An agent-based modeling approach</article-title>
          ,
          <source>in: 2024 IEEE 12th International Conference on Intelligent Systems (IS)</source>
          , IEEE,
          <year>2024</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Umlauft</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Gojkovic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Harshina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Schranz</surname>
          </string-name>
          ,
          <article-title>Bottom-up bio-inspired algorithms for optimizing industrial plants</article-title>
          .,
          <source>in: ICAART (1)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>59</fpage>
          -
          <lpage>70</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Garraghan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Rose: Cluster resource scheduling via speculative over-subscription</article-title>
          ,
          <source>in: 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS)</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>949</fpage>
          -
          <lpage>960</lpage>
          . doi:10.1109/ICDCS.2018.00096.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F.</given-names>
            <surname>Guim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Metsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Moustafa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Verrall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Carrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Cadenelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Doria</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Ghadie</surname>
          </string-name>
          , R. G. Prats,
          <article-title>Autonomous lifecycle management for resource-eficient workload orchestration for green edge computing</article-title>
          ,
          <source>IEEE Transactions on Green Communications and Networking</source>
          <volume>6</volume>
          (
          <year>2022</year>
          )
          <fpage>571</fpage>
          -
          <lpage>582</lpage>
          . doi:10.1109/TGCN.2021.3127531.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <article-title>Collaborative deployment and routing of industrial microservices in smart factories</article-title>
          ,
          <source>IEEE Transactions on Industrial Informatics</source>
          <volume>20</volume>
          (
          <year>2024</year>
          )
          <fpage>12758</fpage>
          -
          <lpage>12770</lpage>
          . doi:10.1109/TII.2024.3424347.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ji</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <article-title>Edge-edge collaboration based micro-service deployment in edge computing networks</article-title>
          ,
          <source>in: 2023 IEEE Wireless Communications and Networking Conference (WCNC)</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/WCNC55385.2023.10119013.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H. X.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhu</surname>
          </string-name>
          , M. Liu, Graph-phpa:
          <article-title>Graph-based proactive horizontal pod autoscaling for microservices using lstm-gnn</article-title>
          ,
          <source>in: 2022 IEEE 11th International Conference on Cloud Networking (CloudNet)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>237</fpage>
          -
          <lpage>241</lpage>
          . doi:10.1109/CloudNet55617.2022.9978781.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>W.</given-names>
            <surname>Lv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Microservice deployment in edge computing based on deep Q-learning</article-title>
          ,
          <source>IEEE Transactions on Parallel and Distributed Systems</source>
          <volume>33</volume>
          (
          <year>2022</year>
          )
          <fpage>2968</fpage>
          -
          <lpage>2978</lpage>
          . doi:10.1109/TPDS.2022.3150311.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Rządca</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Findeisen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Świderski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Broniek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kusmierek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Nowak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Strack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Witusowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wilkes</surname>
          </string-name>
          ,
          <article-title>Autopilot: Workload autoscaling at Google</article-title>
          ,
          <source>Proceedings of the Fifteenth European Conference on Computer Systems</source>
          (
          <year>2020</year>
          ). URL: https://api.semanticscholar.org/CorpusID:218489692.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Pramesti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. I.</given-names>
            <surname>Kistijantoro</surname>
          </string-name>
          ,
          <article-title>Autoscaling based on response time prediction for microservice application in Kubernetes</article-title>
          ,
          <source>in: 2022 9th International Conference on Advanced Informatics: Concepts, Theory and Applications (ICAICTA)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/ICAICTA56449.2022.9932943.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L. H.</given-names>
            <surname>Phuc</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-A.</given-names>
            <surname>Phan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>Traffic-aware horizontal pod autoscaler in Kubernetes-based edge computing infrastructure</article-title>
          ,
          <source>IEEE Access</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>18966</fpage>
          -
          <lpage>18977</lpage>
          . doi:10.1109/ACCESS.2022.3150867.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Towards cost-efficient edge intelligent computing with elastic deployment of container-based microservices</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          )
          <fpage>102947</fpage>
          -
          <lpage>102957</lpage>
          . doi:10.1109/ACCESS.2020.2998767.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Fahs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Pierre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Elmroth</surname>
          </string-name>
          ,
          <article-title>Voilà: Tail-latency-aware fog application replicas autoscaler</article-title>
          ,
          <source>in: 2020 28th International Symposium on Modeling, Analysis, and Simulation of Computer and Telecommunication Systems (MASCOTS)</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>