<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Mathematical model of the cloud infrastructure life cycle</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tomas Sochor</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergii Lysenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleh Bondaruk</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Bondar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liudmyla Koretska</string-name>
          <email>koretskal@khmnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>European Research University</institution>
          ,
          <addr-line>Ostrava</addr-line>
          ,
          <country country="CZ">Czech Republic</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Khmelnitsky National University</institution>
          ,
          <addr-line>Khmelnitsky, Instytutska street 11, 29016</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1883</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>This paper presents a formalized mathematical model for managing the life cycle of cloud infrastructure. The proposed model incorporates a multi-layered architecture that includes the infrastructure, cloud services management, and IT governance levels, capturing both automated and human-centric control processes. Using finite automata and process algebra, the model supports a structured representation of service states, control actions, and transition dynamics. It enables real-time lifecycle automation, SLA compliance enforcement, and adaptive response to dynamic workloads. The effectiveness of the model is validated through a series of experiments simulating real-world scenarios, demonstrating high service availability, resource optimization, and system scalability. This work contributes to the formalization and automation of cloud infrastructure lifecycle management, providing a foundation for further tool development and integration into cloud orchestration platforms.</p>
      </abstract>
      <kwd-group>
        <kwd>Cloud infrastructure lifecycle</kwd>
        <kwd>mathematical modeling</kwd>
        <kwd>finite automata</kwd>
        <kwd>lifecycle automation</kwd>
        <kwd>resource management</kwd>
        <kwd>service level agreement</kwd>
        <kwd>cloud service orchestration</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The rapid development of digital technologies and the increasing demand for scalable computing
resources have led to the widespread adoption of cloud computing as a fundamental paradigm for
delivering IT services [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ]. Cloud infrastructure has become an integral component of modern
information systems, providing on-demand access to computational power, storage, and networking
capabilities [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4-6</xref>
        ]. However, the management of cloud infrastructure throughout its entire life cycle,
from design and deployment to scaling, maintenance, and decommissioning, presents a complex
challenge that requires systematic and formalized approaches [
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7-9</xref>
        ].
      </p>
      <p>
        Despite significant progress in cloud infrastructure automation and orchestration, there is still a
lack of comprehensive mathematical models that accurately describe the dynamics of its life cycle.
Such models are essential for predicting system behavior, optimizing resource allocation, ensuring
reliability, and supporting decision-making in cloud infrastructure management [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10-12</xref>
        ]. Existing
approaches often rely on empirical or heuristic methods, which may not fully capture the underlying
processes or support rigorous analysis and verification [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
        ].
      </p>
      <p>
        This paper proposes a formal mathematical model of the cloud infrastructure life cycle, based on
discrete-state representations and process algebra principles. The model reflects the sequential and
parallel transitions between different infrastructure states, taking into account provisioning
constraints, resource dependencies, and performance metrics. The proposed approach enables the
analysis of system evolution over time and supports the development of tools for automated lifecycle
management in cloud environments [
        <xref ref-type="bibr" rid="ref16 ref17 ref18">16-18</xref>
        ].
      </p>
      <p>The remainder of this paper is structured as follows: Section II reviews related work in cloud
infrastructure modeling. Section III presents the formal definition of the proposed model. Section IV
provides case studies and evaluation results. Finally, Section V concludes the paper and outlines
directions for future research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>Resource management is a core aspect of cloud computing and virtualization. It involves the
allocation, coordination, release, and monitoring of cloud resources to ensure efficient and effective
system performance. Due to the virtualized, heterogeneous, and multi-tenant nature of cloud
environments, managing such resources is inherently complex. Challenges arise from uncertainty,
diversity, large-scale infrastructure, and unpredictable workloads generated by numerous users.
These factors hinder accurate global state estimation and workload forecasting, making resource
management highly demanding. To address these issues, cloud resource management requires
autonomous and adaptive strategies to optimize resource utilization while avoiding over- and
underprovisioning.</p>
      <p>Numerous centralized resource management approaches have been proposed in the literature,
primarily adopting centralized architectures to support cloud applications.</p>
      <p>
        The article [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] proposes a Petri net-based model for resource scheduling and auto-scaling in
elastic cloud computing environments. The goal is to improve the allocation of virtual resources
dynamically based on the workload demands. By modeling the system behavior using Colored Timed
Petri Nets (CTPNs), the authors enable precise representation and simulation of complex cloud
operations, including task arrival, resource provisioning, and scaling decisions. The model helps
ensure optimal use of resources while maintaining service level agreements (SLAs). Performance
evaluation shows that the proposed model supports efficient auto-scaling and task scheduling,
enhancing system responsiveness and resource utilization.
      </p>
      <p>Article [20] reviews and analyzes various policies and mechanisms aimed at improving resource
management in cloud computing from a performance perspective. It highlights key challenges such
as efficient resource allocation, load balancing, scalability, and energy efficiency. The authors explore
existing strategies, including virtual machine (VM) migration, resource scheduling, and
energy-aware management, and discuss their impact on system performance. The paper also identifies gaps
in current research and suggests directions for developing more intelligent and adaptive resource
management solutions to optimize cloud infrastructure performance.</p>
      <p>The article [21] presents a proactive, self-managing framework for resource allocation in cloud
computing environments. The framework uses autonomic computing principles to monitor, analyze,
and manage cloud resources dynamically, aiming to improve performance, scalability, and resource
utilization for service-based applications. By predicting workloads and adjusting resources in
advance, the approach minimizes latency and service disruptions, contributing to more efficient and
intelligent cloud infrastructure management.</p>
      <p>Article [22] presents a cost-aware, elastic caching strategy for cloud environments using a
Time-To-Live (TTL) based approach. The authors develop a mathematical model to dynamically adjust the
cache size in response to workload changes, aiming to balance performance (hit rate) and operational
cost. Their TTL-based mechanism is lightweight and adaptable, allowing cloud providers to
provision cache resources elastically while keeping expenses under control. Through theoretical
analysis and simulations, the proposed method demonstrates improved cost-efficiency and cache
performance compared to traditional static or heuristic-based approaches.</p>
      <p>The article [23] proposes a self-adaptive resource allocation approach for cloud-based software
services using an iterative Quality of Service (QoS) prediction model. The method continuously
monitors service performance and predicts future QoS values to dynamically adjust resource
allocation. It combines machine learning techniques with feedback control to respond to changing
workloads and maintain SLA compliance. The proposed approach enhances resource efficiency and
service reliability, as shown through experimental validation on real-world cloud service scenarios.</p>
      <p>Article [24] provides a comprehensive review of cloud resource management techniques within
the context of Industry 4.0, focusing on the integration of cloud computing with smart manufacturing
and industrial automation. The authors categorize and evaluate various resource provisioning,
scheduling, and optimization strategies, highlighting their relevance to real-time and data-intensive
industrial applications. The paper also discusses key challenges, such as latency, scalability, energy
efficiency, and security, that arise when deploying cloud solutions in Industry 4.0 environments. The
review identifies gaps in current research and suggests future directions for intelligent, adaptive, and
decentralized resource management systems.</p>
      <p>The article [25] presents a hybrid resource allocation algorithm for cloud computing that
combines the Shuffled Frog Leaping Algorithm (SFLA) and the Cuckoo Search (CS) algorithm. The
proposed hybrid approach aims to enhance resource utilization, minimize execution time, and reduce
energy consumption in cloud environments. By integrating the exploration capability of Cuckoo
Search with the local search efficiency of SFLA, the algorithm achieves better optimization results
than using either technique alone. Simulation results demonstrate that the hybrid method
outperforms traditional approaches in terms of task completion time and resource efficiency.</p>
      <p>Article [26] introduces an autonomic task scheduling algorithm designed to handle dynamic
workloads in cloud computing environments using an effective load balancing technique. The
proposed method enables the cloud system to automatically adapt to workload changes by
distributing tasks efficiently across virtual machines (VMs), ensuring improved resource utilization,
reduced response time, and enhanced system performance. The algorithm incorporates
self-management features inspired by autonomic computing principles, allowing it to operate with
minimal human intervention. Experimental results show that the approach significantly improves
load distribution and task execution efficiency compared to traditional scheduling methods.</p>
      <p>Article [27] presents a distributed edge computing framework enhanced with artificial
intelligence (AI) to support Internet of Things (IoT) applications. The proposed solution focuses on
decentralized decision-making and resource management at the edge of the network, aiming to
reduce latency, improve scalability, and ensure efficient service delivery. AI techniques are
integrated to enable smart task offloading, real-time adaptation, and intelligent data processing. The
framework supports dynamic environments typical of IoT scenarios and demonstrates improved
performance in terms of latency, energy efficiency, and computational load distribution.</p>
      <p>Paper [28] presents a method for efficient resource provisioning in cloud systems by leveraging
workload prediction techniques. The approach forecasts future resource demands using historical
workload data, enabling proactive allocation of computing resources. By accurately predicting
workload fluctuations, the system reduces resource wastage and improves overall cloud performance
and cost efficiency. Experimental results validate the model's ability to enhance scalability and
responsiveness compared to reactive provisioning methods.</p>
      <p>Article [28] proposes a resource provisioning mechanism for cloud systems that leverages
workload clustering combined with the Biogeography-Based Optimization (BBO) technique. The
approach clusters workloads with similar resource demands to optimize resource allocation more
effectively. BBO is employed to find optimal provisioning solutions, improving resource utilization
and reducing operational costs. The method aims to enhance scalability and efficiency in dynamic
cloud environments by adapting resource allocation based on workload patterns. Experimental
results demonstrate improved performance over traditional provisioning strategies.</p>
      <p>Analysis has shown that there is a strong need in construction of a formal model to represent the
entire lifecycle of cloud infrastructure, encompassing the provisioning, operation, scaling,
monitoring, and decommissioning phases of cloud services. The model has to address the key
challenges in cloud lifecycle management, including dynamic scaling, SLA policy enforcement, and
adaptive control in response to variable workloads and system events.</p>
      <p>The model comprises three levels:
• the infrastructure layer (servers, networks, storage);
• the cloud services lifecycle management layer;
• the IT management level (according to ITIL, with human intervention).</p>
      <p>Let us consider a formalized mathematical model that takes these levels into account:</p>
      <p>CI = ⟨S, M, H, R⟩,  A_i = (Q_i, Σ_i, δ_i, q_{i,0}, F_i),</p>
      <p>where S is the set of cloud services, M the set of automated lifecycle control processes, H the set of human-centric (ITIL) processes, and R the set of infrastructure resources; the behavior of each service s_i ∈ S is described by a finite automaton A_i.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Cloud service lifecycle models</title>
      <p>To address the problem of increasing the efficiency of cloud infrastructure life cycle management, we describe its mathematical model. The system has a multi-level hierarchical organization of management and includes:
• S = {s_1, s_2, …, s_n}, a set of cloud services, each of which is an aggregated set of resources;
• M = {m_1, m_2, …, m_k}, a set of automated control processes of the life cycle (monitoring, scaling, recovery, billing, etc.);
• H = {h_1, h_2, …, h_l}, a set of human-centric IT management processes (e.g., change approval, incident management).</p>
      <p>The functioning of the life cycle of a cloud service s_i ∈ S is described as a finite automaton:
A_i = (Q_i, Σ_i, δ_i, q_{i,0}, F_i),
where Q_i is the set of service states; Σ_i is the set of events/triggers (deploy, scale, pause, resume, fail, recover, terminate); δ_i: Q_i × Σ_i → Q_i is the transition function; q_{i,0} is the initial state; and F_i is the set of final states.</p>
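      <p>As an illustration, the lifecycle automaton A_i can be sketched as a transition table. This is a minimal sketch: the concrete state names ("created", "active", etc.) are assumptions chosen to match the triggers listed above, not names fixed by the model.</p>

```python
# Sketch of a service lifecycle as a finite automaton A_i = (Q, Sigma, delta, q0, F).
# State names are illustrative assumptions; events come from the paper's trigger list.
TRANSITIONS = {
    ("created", "deploy"): "active",
    ("active", "scale"): "scaling",
    ("scaling", "resume"): "active",
    ("active", "pause"): "paused",
    ("paused", "resume"): "active",
    ("active", "fail"): "error",
    ("error", "recover"): "active",
    ("active", "terminate"): "terminated",
}

def step(state, event):
    """Apply the transition function delta; reject undefined transitions."""
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

# A deploy -> scale -> resume -> terminate trace:
s = "created"
for e in ["deploy", "scale", "resume", "terminate"]:
    s = step(s, e)
print(s)  # terminated
```

      <p>Modeling the automaton as an explicit table makes illegal transitions (e.g., scaling a terminated service) fail fast, which mirrors the verification benefit of the formal model.</p>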
      <p>Let us define a formal model that describes the behavior of the control system in the form of an
automaton (a state machine or another type), which defines the rules for controlling processes m ∈ M,
where each process is a control mechanism superimposed on the service automata A_i.</p>
      <p>Specifically, the system monitoring function, which returns the metric vector (load, response
time, power consumption, etc.), is:</p>
      <p>monitor: S × T → ℝ^k.</p>
      <p>The scaling function is described by a rule of the form:
monitor(s_i, t) ≥ θ ⇒ scale ∈ Σ_i ⇒ δ_i(q_active, scale_out) = q_scaling,
that is, when a monitored metric reaches the threshold θ, a scaling event is generated and the service automaton moves from the active state to the scaling state.</p>
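      <p>The monitoring function and the threshold-based scaling rule can be sketched together. The metric names, sample values, and the threshold θ below are illustrative assumptions; a real system would query a monitoring backend.</p>

```python
# Sketch of monitor: S x T -> R^k and the threshold scaling rule above.
def monitor(service, t):
    """Return a metric vector (load, response time, power) for a service at time t."""
    samples = {0: {"load": 0.42, "resp_ms": 120, "power_w": 65},
               1: {"load": 0.91, "resp_ms": 340, "power_w": 80}}
    return samples[t]  # stub data standing in for a monitoring backend

SCALE_THRESHOLD = 0.8  # assumed theta in the rule monitor(s, t) >= theta

def scaling_event(service, t):
    """Emit a 'scale_out' event when the load metric crosses the threshold."""
    return "scale_out" if monitor(service, t)["load"] >= SCALE_THRESHOLD else None

print(scaling_event("s1", 0), scaling_event("s1", 1))  # None scale_out
```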
      <p>Let us consider the IT management level, namely the process of human-centric control.
Processes h ∈ H affect the transitions of the automata A_i indirectly, through approvals and
interaction with people. We formalize this through the function:
approve: Σ → {true, false},
where approve(e) = true means that the event e is allowed according to ITIL/organizational process
policies.</p>
      <p>For example, the deploy(s_i) event is only possible when approve(deploy) = true and the responsible human process confirms it, confirm(h, s_i) = true.</p>
      <p>Let us consider the infrastructure layer, which operates with a set of resources R organized in ensembles (managed pools of homogeneous resources). Assume E = {E_1, E_2, …, E_m} ⊆ P(R) is a set of ensembles, where each E_j ⊆ R is a pool of homogeneous resources.</p>
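      <p>The deploy precondition, approval gate plus available capacity, can be sketched as follows. The data structures and function names (`approvals`, `can_deploy`) are illustrative assumptions, not part of the formal model.</p>

```python
# Sketch of the deploy(s_i) precondition: the human-approval function approve(e)
# must return True AND an ensemble with free capacity must exist.
ensembles = {"E1": {"cpu_free": 8}, "E2": {"cpu_free": 1}}
approvals = {"deploy": True, "terminate": False}  # set by the ITIL change process

def approve(event):
    return approvals.get(event, False)

def can_deploy(cpu_needed):
    """deploy is allowed only if approved and some ensemble can host the service."""
    has_capacity = any(e["cpu_free"] >= cpu_needed for e in ensembles.values())
    return approve("deploy") and has_capacity

print(can_deploy(4))   # True
print(can_deploy(16))  # False
```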
      <p>We then describe each ensemble by its capacity C(E_j) = {c_1, c_2, …, c_p} (CPU, memory,
network); by a distribution strategy φ: S → E that assigns services to ensembles; and by rules
of self-organization (autonomous management).</p>
      <p>Let us describe the general compositional model by combining all levels.</p>
      <sec id="sec-3-1">
        <title>General compositional model</title>
        <p>Each service s_i ∈ S is linked to resources through its assignment φ(s_i) ∈ E and is influenced by the automated control processes in M and the manual (ITIL) processes in H.</p>
        <p>The condition for the correct functioning of a service is set by the rule:
∀ s_i : ∃ E_j : φ(s_i) = E_j ∧ C(E_j) ≥ req(s_i),
that is, every service must be assigned to an ensemble whose capacity covers the service's resource requirement.</p>
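        <p>This correctness rule is directly checkable. Below is a minimal sketch, assuming a single scalar capacity per ensemble; the concrete names (`capacity`, `placement`, `requirement`) are assumptions.</p>

```python
# Check of the rule: forall s exists E: phi(s) = E and C(E) >= req(s).
capacity = {"E1": 16, "E2": 8}           # C(E_j), e.g., vCPUs per ensemble
placement = {"s1": "E1", "s2": "E2"}     # phi: S -> E
requirement = {"s1": 10, "s2": 4}        # req(s_i)

def placement_correct():
    """True iff every service sits in an ensemble with sufficient capacity."""
    return all(
        placement.get(s) in capacity and capacity[placement[s]] >= requirement[s]
        for s in requirement
    )

print(placement_correct())  # True
```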
        <p>Then the integral state function of the system is:</p>
        <p>G(t) = (⋃_{i=1}^{n} s_i(t), ⋃_{j=1}^{k} m_j(t), ⋃_{h=1}^{l} h_h(t), ⋃_{j=1}^{m} E_j(t)).</p>
        <sec id="sec-3-1-1">
          <title>3.1. Cloud service model</title>
          <p>Let's describe a mathematical model of a cloud service that takes into account the aggregation of
resources into higher-level services, the life cycle states of a cloud service, the hierarchy of service
types (IaaS, PaaS, SaaS), as well as management automation based on virtualization and resource
pools.</p>
          <p>Let R = {r_1, r_2, …, r_n} be a set of physical resources (CPU, RAM, storage, network
adapters, etc.) and V = {v_1, v_2, …, v_m} a set of virtualized resources derived from R via the
virtualization function f_v: P(R) → P(V).</p>
          <p>This means that real resources are aggregated into pools of virtualized resources, which later
become elements of cloud services.</p>
          <p>Next, let us define a cloud service CS as a tuple:</p>
          <p>CS = ⟨id, T_CS, type, V_CS, SLA, state(t)⟩,
where id is the unique identifier of the service; T_CS is the service life period; type ∈ {IaaS, PaaS, SaaS} is the type of service (infrastructure, platform, software); V_CS is the set of virtualized resources that make up the service; SLA is the set of requirements for the service (Service Level Agreement); and state(t) is the state of the service in time (determined by the life cycle).</p>
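          <p>The CS tuple maps naturally onto a record type. A minimal sketch as a dataclass follows; the field names and the sample values are assumptions, chosen to mirror the tuple components above.</p>

```python
# The cloud-service tuple CS = <id, T, type, V, SLA, state(t)> as a dataclass.
from dataclasses import dataclass

@dataclass
class CloudService:
    service_id: str        # unique identifier
    lifetime_s: int        # service life period T_CS
    service_type: str      # "IaaS" | "PaaS" | "SaaS"
    resources: list        # virtualized resources making up the service
    sla: dict              # service level requirements
    state: str = "defined" # lifecycle state at the current time

svc = CloudService("s1", 3600, "IaaS", ["vcpu-0", "vmem-0"], {"uptime": 0.999})
print(svc.service_type, svc.state)  # IaaS defined
```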
          <p>The life cycle of a cloud service is presented in the form of a finite automaton:</p>
          <p>A_CS = (Q, Σ, δ, q_0, F),
where Q is the set of life cycle states, Σ the set of events, δ: Q × Σ → Q the function of transition between states, q_0 the initial state, and F the set of final states; the transitions are defined by the life cycle stages.</p>
          <p>To reduce the complexity of administration, resources are aggregated into autonomous pools:</p>
          <p>P = {P_1, P_2, …, P_k}, P_j ⊆ V.</p>
          <p>Each pool hides the hardware implementation and is managed using its own self-organization policies.
We then define the aggregation function g: P(V) → P and the assignment function a: CS → P,
which determines which resource pool supports a particular service.</p>
          <p>The composition of services, reflecting the IaaS/PaaS/SaaS hierarchy, is given by functions of the form:</p>
          <p>comp: P(CS_IaaS) → CS_PaaS,  comp′: P(CS_PaaS) → CS_SaaS.</p>
          <p>Let us consider the process of automating event processing by a cloud service. Automation is implemented through event functions. The request function has the form:</p>
          <p>request: U × Σ × Params → CS,</p>
          <p>where U is the set of users and Params the service parameters (CPU, RAM, lifetime, etc.).</p>
          <p>The function of reaction to events is then:</p>
          <p>react: CS × Σ → CS′.</p>
          <p>The description of the model of functioning of a cloud service includes determining the time limit of its existence. Let t_start be the time of activation of the service and t_end = t_start + T_CS, where T_CS is the service life period. Then at time t the service is active if:
t ∈ [t_start, t_end] ⇒ state(t) ≠ terminated.</p>
          <p>After this time has elapsed, the service is automatically destroyed, and the resources are returned to the pool.</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>3.2. Cloud service lifecycle model</title>
          <p>t &gt; t_end ⇒ state(t) = terminated, V_CS → P.</p>
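          <p>The lifetime rule, active within [t_start, t_end], then terminated with resources returned to the pool, can be sketched as follows. All names and time values are illustrative assumptions.</p>

```python
# Sketch of the lifetime rule: t > t_end => state = terminated, V_CS -> P.
pool = ["v3", "v4"]
service = {"resources": ["v1", "v2"], "state": "active"}
t_start, lifetime = 100, 50
t_end = t_start + lifetime

def tick(t):
    """Destroy the service once its life period elapses; release its resources."""
    if t > t_end and service["state"] != "terminated":
        service["state"] = "terminated"
        pool.extend(service["resources"])  # resources go back to the pool
        service["resources"] = []

tick(120)
print(service["state"], pool)  # active ['v3', 'v4']
tick(151)
print(service["state"], pool)  # terminated ['v3', 'v4', 'v1', 'v2']
```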
          <p>Based on the above description of the life cycle of a cloud service, it is possible to build a formalized
mathematical model that describes the stages, parameters, actions and states of the service during its
existence. This model is based on the concept of a finite automaton with state preservation, a
parameterized transition graph and a controlled set of control operations.</p>
          <p>Let us denote a cloud service as an object S defined by the set of life cycle states Q = {q_0, q_1, q_2, q_3, q_4, q_F}.</p>
          <p>The service is then described by the tuple:</p>
          <p>S = ⟨Q, Σ, δ, P, C, O, q_0, q_F⟩,
where Q is the set of states, Σ the set of events, δ: Q × Σ → Q the transition function, P the instance parameters, C the configuration, O the set of control operations, q_0 the initial state, and q_F the final state (completion, return of resources).</p>
          <p>Each control operation is a function:</p>
          <p>o: S × Params → S′,
which transforms the current state of the service instance.</p>
          <p>Each instance is determined by a set of parameters p ∈ P: its configuration, its allocated resources, and the policies applied to it. The transition function that displays the reaction to events is δ: Q × Σ → Q. For the main events e ∈ Σ, the set of events is given in Table 1.</p>
          <p>Let us describe the process of autonomous control. Take g: Q × M_t → O × Params as a function of
recommendation or automatic execution, where g(q, metrics) returns which control operation should
be performed and with what parameters if a deviation from the SLA is detected.</p>
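          <p>The recommendation function g(q, metrics) can be sketched as a rule that maps an SLA deviation to a control operation. The SLA target, the metric name, and the chosen operation are assumptions for illustration.</p>

```python
# Sketch of g: (state, metrics) -> (operation, params), or None when the SLA holds.
SLA = {"resp_ms_max": 200}  # assumed SLA target for response time

def recommend(state, metrics):
    """Return (operation, params) if an SLA deviation is detected, else None."""
    if state == "operating" and metrics["resp_ms"] > SLA["resp_ms_max"]:
        return ("scale_out", {"add_instances": 1})
    return None

print(recommend("operating", {"resp_ms": 350}))  # ('scale_out', {'add_instances': 1})
print(recommend("operating", {"resp_ms": 90}))   # None
```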
          <p>The service template is described by a tuple of the form:</p>
          <p>T = ⟨G, W, {o_i}_{i=1}^{n}⟩,
where G is a graph of components and the relationships between them, W is a sequence of steps to create an instance, and {o_i} are the management operations.</p>
          <p>Let R_inst = {r_1, r_2, …, r_k} be the resources allocated to an instance.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Instance completion rule</title>
        <p>In the case of completion q → q_F, the following rule is used: ∀ r_i ∈ R_inst, r_i → P, i.e., every resource allocated to the instance is returned to the pool.</p>
        <sec id="sec-3-2-1">
          <title>3.3 Formalization of requirements for automation of lifecycle management of cloud environments</title>
          <p>Automation of the management of the life cycle of a cloud environment is a key factor in ensuring
its efficiency, reliability, scalability, and continuity. Unlike traditional IT systems, cloud services
operate in a dynamic, multi-user environment with a high degree of virtualization and variability of
configurations, which necessitates a clear formalized approach to defining automation requirements.</p>
          <p>The purpose of requirements formalization is to provide a unified model for managing cloud
resources and services, the ability to machine interpret component life cycles, support adaptive
management and autonomous response to events, and ensure consistency between SLA policy
parameters, instance configurations, and management plans.</p>
          <p>Automation should provide a response to internal and external events in real time:
• start/stop of the service;
• load change;
• violation of the service level management policy;
• depletion of resources;
• changes in security policies [30, 31].</p>
          <p>Let us formalize the process of automating the management of the life cycle of the cloud environment:
∀ e ∈ E, ∃ a ∈ A: a = f(e),  Plan = ⟨E, f, P⟩,
where E is the set of events, f: E → A is the mapping of events to actions, and P is the execution parameters.</p>
          <p>The configuration of the cloud environment is defined in the form of formalized templates or specifications T = ⟨C, D, R⟩, where:
• C is a set of components (servers, storages, networks);
• D is the dependencies between them (a graph or tree);
• R is the restrictions and rules (security, availability, scaling policies) [32, 33].</p>
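          <p>Machine interpretation of a template ⟨C, D, R⟩ amounts to computing a deployment order from the dependency graph D. A minimal sketch using the standard library's topological sorter; the component names and the rule set are assumptions.</p>

```python
# Sketch of template interpretation: derive a deployment order from D.
from graphlib import TopologicalSorter

components = {"net", "storage", "vm", "app"}          # C
dependencies = {                                      # D: component -> prerequisites
    "vm": {"net", "storage"},
    "app": {"vm"},
}
rules = {"vm": {"min_replicas": 1}}                   # R (illustrative constraint)

# Prerequisites come first, so deployment can proceed without dangling references.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g., ['net', 'storage', 'vm', 'app'] (net/storage may swap)
```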
          <p>We will understand that templates support machine interpretation and automatic deployment.</p>
          <p>Management must then take into account the state of the service q ∈ Q, where Q = {q_0, q_1, …, q_n}
is the set of states (definition, publishing, instantiating, operating, termination), and δ: Q × E → Q is an
event-based state transition function.</p>
          <p>Thus, such formalization makes it possible to automate:
• activation/deactivation of services;
• change of management plans depending on the state;
• control of the end of the life cycle.</p>
          <p>The system has the ability to automatically scale based on monitoring data:</p>
          <p>scale: S × {CPU, RAM, network, …} → S′.</p>
          <p>The execution condition is:
∃ m: value(m) &gt; θ ⇒ scale(S),
i.e., scaling is triggered when some monitored metric exceeds its threshold.</p>
          <p>Let us consider the process of ensuring the policy of service level management. Support and
automatic control of the implementation of service level agreements (SLAs) consist in monitoring
critical parameters, automatic notification of violations, and applying corrective actions.</p>
          <p>Formally, the process of ensuring the service level management policy is described as:
∀ m_i: m_i(t) ∈ [a_i, b_i] for every check interval Δt,
where m_i is the metric, [a_i, b_i] is the allowable range, and Δt is the time interval of the check.</p>
          <p>Let us consider the process of automated logging and auditing. Each action should generate a log
entry: who initiated the action; when; to which resource it was applied; and the result of the action.</p>
          <p>Formally, the process of automated logging and auditing is presented as follows:</p>
          <p>L = {(t_i, u_i, a_i, r_i, s_i)}_{i=1}^{N},
where t_i is the time, u_i the user or service, a_i the action, r_i the resource, and s_i the execution status.</p>
          <p>Finally, the functional model of automated control, formalized as a system, is described by a tuple:</p>
          <p>ACS = ⟨E, A, Q, M, L, SLA, T⟩,
where E is the set of events, A the set of actions, Q the set of states, M the monitoring data, L the audit log, SLA the service level policies, and T the configuration templates.</p>
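          <p>The audit log L = {(t_i, u_i, a_i, r_i, s_i)} can be sketched as an append-only list of records. The record type and query shown here are assumptions for illustration.</p>

```python
# Sketch of the audit log: every action appends (time, user, action, resource, status).
from typing import NamedTuple

class LogEntry(NamedTuple):
    t: int
    user: str
    action: str
    resource: str
    status: str

log = []

def audit(t, user, action, resource, status):
    """Append one immutable audit record to the log."""
    log.append(LogEntry(t, user, action, resource, status))

audit(100, "admin", "deploy", "s1", "ok")
audit(160, "autoscaler", "scale_out", "s1", "ok")

# Query: all actions applied to resource "s1".
print([e.action for e in log if e.resource == "s1"])  # ['deploy', 'scale_out']
```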
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <p>The analysis of stability considers four factors:
• load;
• hardware problems;
• interference between services;
• monitoring and error handling.</p>
      <p>System stability in this context can be interpreted as the ability of a cloud service to operate for a
long time without noticeable failures or drops in performance, even under variable loads or
unforeseen circumstances. Stability can be assessed by several main criteria:</p>
      <p>To assess the stability of each stage of the cloud service lifecycle, it is important to analyze the
transitions between different states (for example, from the "Active" state to the "Error" or "Scaling"
state). Key factors that can affect stability:</p>
      <p>Peak load analysis (for example, autoscaling can cause overload during the scaling process).
Detecting errors or malfunctions in hardware components.</p>
      <p>Multiple services can interact with each other through shared resources (e.g., network, storage).</p>
      <p>The system must be able to detect errors at each stage and transition to recovery mode without
major delays or performance degradation.</p>
      <p>An important part of stability is the ability to perform operations autonomously at each stage.
For example, autoscaling should work without human intervention, ensuring that a stable state is
maintained under changing conditions. Statistical analysis of automatic actions (for example, how
long it takes to scale or restore) helps to verify whether the system can automatically adapt to new
conditions.</p>
      <p>Detecting and analyzing responses to critical events, such as errors (the "Error" state) or high
loads, can provide insight into the system's ability to recover or restore service stability.</p>
      <sec id="sec-4-1">
        <title>4.1. Analysis of the effectiveness of the model</title>
        <p>System effectiveness is determined by the ability of a cloud service to meet user requirements and
adhere to service level management policies, ensuring high availability and performance with
minimal resource consumption.</p>
        <p>To assess the effectiveness of the model, one can measure:
• service response time under different loads;
• system performance at different stages of its lifecycle, such as activation, scaling, or recovery;
• resource usage (CPU, memory, network resources) at each stage;
• time taken to perform operations, i.e., how quickly the system transitions between states (e.g., time from startup to activation, or scaling time).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Resource optimization</title>
        <p>Resource allocation analysis is an important step in performance analysis. It includes the assessment of indicators such as:
• effective use of memory, processor time, disk space, and network;
• how effectively resources are scaled when the load changes;
• energy consumption during the implementation of various stages of the life cycle, especially during the scaling and recovery stages.</p>
        <p>The key point is compliance with the service level management policy. The effectiveness of the
model is assessed through service availability (analysis of the compliance of the service uptime with
the service level management policy requirements), response time performance (analysis of the
compliance of the service response time with the requirements), latency and recovery time (analysis
of the system recovery time after failures or malfunctions).</p>
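        <p>The compliance checks listed above, availability as an uptime fraction and response-time conformance, reduce to simple arithmetic. The targets and sample values below are assumptions for illustration.</p>

```python
# Sketch of SLA compliance checks: availability and response-time conformance.
uptime_s, window_s = 2591000, 2592000   # ~30-day window with ~1000 s of downtime
availability = uptime_s / window_s      # fraction of time the service was up

resp_ms = [120, 180, 150, 90, 210, 130]  # sampled response times (assumed)
target = {"availability": 0.999, "resp_ms_max": 200}

# Fraction of samples meeting the response-time requirement.
resp_ok = sum(1 for r in resp_ms if r <= target["resp_ms_max"]) / len(resp_ms)

print(availability >= target["availability"])  # True
print(round(resp_ok, 3))                       # 0.833
```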
        <p>Efficiency also includes optimizing resource costs, including infrastructure costs (analysis of the
cost of deployment, scaling, support, and service closure), as well as resource utilization optimization
(analysis of optimal resource utilization at each stage of the lifecycle).</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Analysis of the stability and efficiency of the functioning of the cloud service life cycle model. Experimental setup</title>
        <p>Analysis of the stability and efficiency of the cloud service lifecycle model is a comprehensive process
that includes assessing performance, resource efficiency, compliance with service level management
policies, and ensuring uninterrupted system operation. The use of mathematical modeling,
monitoring, and optimization methods allows you to obtain accurate data to ensure high quality user
service in the cloud environment.</p>
        <p>To analyze the stability and efficiency of the cloud service lifecycle model, a series of experiments
was performed, covering various stages of the service lifecycle (definition, offering, subscription and
instantiation, production process, scaling, and termination).</p>
        <p>The experiment was based on an analysis of the stability of the cloud service during the execution
of various stages of the life cycle, an assessment of the effectiveness of resource management, as well
as the implementation of service level management policies and compliance with parameters.</p>
        <p>The list of hypotheses of the experiment is given in Table 2. In summary:
• Hypothesis 1: a cloud service with properly configured scaling and resource management processes continues to function stably, even with changing load or minor failures at the infrastructure level.
• Hypothesis 2: resource loss during transitions between different system states (e.g., from "Active" to "Scaling") remains low while service level management policies are adhered to.
• Hypothesis 3: system efficiency (compliance with service level management policies and request processing speed) increases under conditions of dynamic scaling and adaptive load management.</p>
        <p>The purpose of the experiment is to test whether the stability and efficiency conditions of the
cloud service are met within the life cycle model. Particular attention is paid to stability during
scaling and under high loads, resource efficiency (CPU, memory, network usage), implementation of
service level management policies (response time, service availability), as well as the costs of
supporting and maintaining the service.</p>
        <p>The experiment was conducted in a test cloud environment that matches the characteristics of a
real cloud provider. The environment included:</p>
        <p>Virtual machines for running applications.</p>
        <p>Virtual storage resources and network resources.</p>
        <p>Automated systems for scaling and load management.</p>
        <p>Monitoring tools to collect data on load, resource usage, and service level management policy
enforcement.</p>
        <p>The experimental procedure included the following steps:
1. Initialization, which involved choosing a cloud service type (e.g., big data processing service,
web hosting).
2. Stability testing, in which different loads were imposed on the system (for example, different
numbers of simultaneous requests and resource loads), and the system's response time to the
load was assessed, including an analysis of transient processes (scaling, startup, shutdown).
3. Performance testing, which included estimating scaling time when the load changes,
estimating resource costs (CPU, memory, network) during the execution of various stages of
the lifecycle, and estimating recovery time after an error or overload.
4. Analysis of the implementation of service level management policies, which included an
assessment of service availability and response time to requests, as well as an analysis of the
implementation of service level management policies under high load conditions.
5. Scaling and adaptability, which included testing dynamic scaling under varying loads, as well
as evaluating performance during transitions between different stages (scaling, recovery).</p>
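        <p>The stability-testing step above can be sketched as a small load driver that issues concurrent requests and records per-request response times. The sketch below uses a hypothetical local stub in place of the real cloud endpoint; the request count and concurrency level are illustrative assumptions.</p>

```python
# Sketch of the stability-testing step: impose N concurrent requests on a
# service and record per-request response times. The service here is a
# hypothetical local stub; a real test would call the deployed endpoint.

import time
from concurrent.futures import ThreadPoolExecutor

def service_stub(request_id: int) -> str:
    """Stand-in for the cloud service under test."""
    time.sleep(0.001)  # simulated processing time
    return f"ok-{request_id}"

def run_load_test(n_requests: int, concurrency: int):
    """Fire n_requests with the given concurrency; return response times (s)."""
    def timed_call(i):
        t0 = time.perf_counter()
        service_stub(i)
        return time.perf_counter() - t0
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(timed_call, range(n_requests)))

timings = run_load_test(n_requests=50, concurrency=10)
print(f"max response time: {max(timings) * 1000:.1f} ms")
```

Repeating the run at increasing concurrency levels produces the load/response-time series that the transient-process analysis works from.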
      </sec>
      <sec id="sec-4-4">
        <title>4.4. Experimental results</title>
        <p>Let us consider the system stability results.</p>
        <p>Load denotes the number of simultaneous requests arriving at the cloud service.
Response time is the average time the service takes to respond to a request.
Execution time is the time required to process a request and return a response.</p>
        <p>CPU usage corresponds to the average percentage of CPU usage during the test.
Memory usage indicates the average memory usage of the service.</p>
        <p>Let us consider the results of the experiments on the enforcement of service level management
policies (SLA).</p>
        <p>Response time is the average time it takes for a service to respond to a request.</p>
        <p>Availability reflects the percentage of time during which the service is available and operating
without interruption.</p>
        <p>Service Level Management Policy Compliance reflects the percentage of service level agreement
fulfillment, where higher values indicate greater compliance with the terms of the service level
agreements.</p>
        <p>Disaster recovery time is the time required to recover from system failures, availability
disruptions, or degraded performance.</p>
        <p>Service availability exceeded 99.95%, which is above the standard service level management
policy requirements for cloud services.</p>
        <p>Recovery time after failures did not exceed 3 minutes, which meets the requirements of the service
level management policy for critical services.</p>
        <p>Response times remained stable, ensuring that the service level management policy target of
1 second for 90% of requests was met.</p>
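        <p>The availability and response-time checks above reduce to simple computations over a monitoring log. The sketch below is illustrative: the log records are invented sample data, and the nearest-rank rule is one possible percentile estimator.</p>

```python
# Sketch: checking the availability and response-time targets reported above
# against a monitoring log. The log records are invented sample data.

def p90(values):
    """90th percentile via the nearest-rank rule on sorted values."""
    s = sorted(values)
    return s[max(0, int(0.9 * len(s)) - 1)]

# (response_time_s, succeeded) per request
log = [(0.2, True), (0.4, True), (0.8, True), (0.9, True),
       (0.3, True), (0.5, True), (0.6, True), (0.7, True),
       (1.4, True), (0.1, False)]

availability = sum(ok for _, ok in log) / len(log) * 100
p90_latency = p90([t for t, _ in log])

meets_availability = availability >= 99.95  # availability target from the text
meets_latency = p90_latency <= 1.0          # 1 s for 90% of requests

print(f"availability: {availability:.1f}%  p90 latency: {p90_latency:.2f} s")
```

With the real monitoring log in place of the sample records, the two boolean checks directly express the SLA conditions evaluated in the experiment.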
        <p>The results of high-load stability and recovery time are presented in Table 5.</p>
        <p>Let us consider the results of the experiments on scaling and adaptability.</p>
        <p>The change in load is defined as the percentage increase in the load on the system during the
test.</p>
        <p>Scaling time refers to the time it takes for the system to scale when the load changes.</p>
        <p>Responsiveness defines the improvement in throughput after autoscaling, expressed as a
percentage.</p>
        <p>The load reduction time reflects the time it takes for the load to decrease after a peak in requests,
when the load returns to normal levels.</p>
        <p>When the load increased, the system automatically scaled within 1-3 minutes, depending on the
type of resource being scaled (for example, adding virtual machines or expanding memory).</p>
        <p>Adaptive control was implemented in the system, which allowed the scaling strategy to be
dynamically changed depending on the actual load.</p>
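        <p>The adaptive control described above can be sketched as a threshold-based scaling policy whose scale-out threshold shifts with the observed load trend. The thresholds, instance bounds, and trend rule below are illustrative assumptions, not the provider's actual policy.</p>

```python
# Sketch of a threshold-based autoscaling decision with a simple adaptive
# element: the scale-out threshold tightens when the recent load is rising.
# All thresholds and instance bounds are illustrative assumptions.

def scaling_decision(cpu_history, instances, min_inst=1, max_inst=20):
    """Return the new instance count given recent CPU utilization samples."""
    current = cpu_history[-1]
    rising = len(cpu_history) >= 2 and cpu_history[-1] > cpu_history[-2]
    # Scale out earlier when the load is trending upward.
    scale_out_at = 0.70 if rising else 0.80
    scale_in_at = 0.30
    if current > scale_out_at and instances < max_inst:
        return instances + 1
    if current < scale_in_at and instances > min_inst:
        return instances - 1
    return instances

# Rising load trips the lower (adaptive) scale-out threshold:
print(scaling_decision([0.60, 0.75], instances=4))  # → 5 (scales out)
print(scaling_decision([0.80, 0.25], instances=4))  # → 3 (scales in)
```

Evaluating such a rule on each monitoring tick yields the 1–3 minute scaling reactions reported in the experiments.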
        <p>The results of the scaling and adaptability experiments are presented in Table 6.</p>
        <p>The results of the experiments showed that the tested cloud service demonstrated high stability
during normal operation, and also effectively recovered after minor failures.</p>
        <p>The system was able to efficiently use resources, ensuring stable service operation under high
loads.</p>
        <p>All key aspects of the service level management policy, including availability, response time, and
recovery time, were met at or above the standard requirements.</p>
        <p>Automatic scaling and adaptive resource management provide high efficiency and reduced
infrastructure costs, which is important for cloud services with dynamic loads.</p>
        <p>These results indicate a high level of stability and efficiency of the proposed cloud service lifecycle
model, making it suitable for use in real-world conditions.</p>
      </sec>
      <sec id="sec-conclusion">
        <title>Conclusion</title>
        <p>In this work, we developed a comprehensive and formal mathematical model to represent the entire
lifecycle of cloud infrastructure, encompassing the provisioning, operation, scaling, monitoring, and
decommissioning phases of cloud services. The model integrates discrete-state automata,
virtualization abstractions, and hierarchical management levels, from infrastructure to ITIL-based
manual controls, thereby offering a unified framework for understanding and automating the
behavior of cloud systems.</p>
        <p>Through the use of formal methods, the proposed model addresses key challenges in cloud
lifecycle management, including dynamic scaling, SLA policy enforcement, and adaptive control in
response to variable workloads and system events. The integration of monitoring functions, SLA
validation mechanisms, and automated decision-making capabilities ensures not only the operational
reliability of cloud services but also their compliance with business-level requirements.</p>
        <p>The experimental evaluation confirms that the model supports high system stability and
efficiency under real-world conditions. Specifically, it demonstrated the capability to maintain over
99.9% availability, handle up to 500 concurrent requests with minimal latency variation, and perform
automated recovery and scaling within strict time constraints. These results underline the potential
of the model to reduce resource waste, enhance service responsiveness, and minimize manual
intervention, especially during peak load or failure events.</p>
        <p>Additionally, the model facilitates fine-grained performance analysis, such as resource
consumption tracking, SLA compliance measurement, and reaction time monitoring for lifecycle
transitions. This enables cloud administrators to make data-driven decisions, optimize infrastructure
costs, and improve the quality of service provided to end users.</p>
        <p>Future work may focus on integrating this model into active orchestration tools, extending it with
predictive analytics for preemptive scaling, and applying it to hybrid and edge-cloud environments.
Moreover, further refinement of the SLA-driven automation policies can lead to even more resilient
and self-adaptive cloud platforms.</p>
        <p>In conclusion, the proposed formal model represents a significant step toward the systematic and
automated management of cloud infrastructure lifecycles, paving the way for smarter, more efficient,
and resilient cloud computing systems.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Grammarly for grammar and spelling
checking, and DeepL Translate for translating some phrases into English. After using these
tools/services, the authors reviewed and edited the content as needed and take full responsibility for
the publication's content.</p>
      <p>[20] M. Bansal, S. K. Malik, S. K. Dhurandher, I. Woungang, Policies and mechanisms for enhancing
the resource management in cloud computing: A performance perspective, International Journal
of Grid and Utility Computing 11 (2020) 345–366.
[21] T. Bhardwaj, H. Upadhyay, S. C. Sharma, An autonomic resource allocation framework for
service-based cloud applications: A proactive approach, in: M. Pant, T. K. Sharma, R. Arya, B. C.
Sahana, H. Zolfagharinia (Eds.), Soft Computing: Theories and Applications, vol. 1154, Springer,
2020, pp. 1045–1058.
[22] D. Carra, G. Neglia, P. Michiardi, Elastic provisioning of cloud caches: A cost-aware TTL
approach, IEEE/ACM Transactions on Networking 28 (2020) 1283–1296.
[23] X. Chen, H. Wang, Y. Ma, X. Zheng, L. Guo, Self-adaptive resource allocation for cloud-based
software services based on iterative QoS prediction model, Future Generation Computer
Systems 105 (2020) 287–296.
[24] B. K. Dewangan, A. Agarwal, T. Choudhury, A. Pasricha, S. Chandra Satapathy, Extensive
review of cloud resource management techniques in industry 4.0: Issue and challenges,
Software: Practice and Experience (2020) 1–20.
[25] P. Durgadevi, S. Srinivasan, Resource allocation in cloud computing using SFLA and Cuckoo
search hybridization, International Journal of Parallel Programming 48 (2020) 549–565.
[26] F. Ebadifard, S. M. Babamir, Autonomic task scheduling algorithm for dynamic workloads
through a load balancing technique for the cloud-computing environment, Cluster Computing
(2020) 1–27.
[27] G. Fragkos, E. E. Tsiropoulou, S. Papavassiliou, Artificial intelligence enabled distributed edge
computing for Internet of Things applications, in: Proceedings of the 2020 16th International
Conference on Distributed Computing in Sensor Systems (DCOSS), IEEE, 2020, pp. 450–457.
[28] L. J. Gadhavi, M. D. Bhavsar, Efficient resource provisioning through workload prediction in the
cloud system, in: Smart Trends in Computing and Communications, Springer, Singapore, 2020,
pp. 317–325.
[29] M. Ghobaei-Arani, A workload clustering based resource provisioning mechanism using
biogeography based optimization technique in the cloud based systems, Soft Computing 25
(2020) 3813–3830.
[30] S. Lysenko, O. Savenko, K. Bobrovnikova, A. Kryshchuk, B. Savenko, Information technology
for botnets detection based on their behaviour in the corporate area network, Communications
in Computer and Information Science 718 (2017) 166–181.
[31] G. Markowsky, O. Savenko, S. Lysenko, A. Nicheporuk, The technique for metamorphic viruses'
detection based on its obfuscation features analysis, CEUR Workshop Proceedings 2104 (2018)
680–687.
[32] O. Pomorova, O. Savenko, S. Lysenko, A. Kryshchuk, K. Bobrovnikova, A technique for the
botnet detection based on DNS-traffic analysis, Communications in Computer and Information
Science 522 (2015) 127–138.
[33] O. Pomorova, O. Savenko, S. Lysenko, A. Kryshchuk, K. Bobrovnikova, Anti-evasion technique
for the botnets detection based on the passive DNS monitoring and active DNS probing,
Communications in Computer and Information Science 608 (2016) 83–95.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Goncalves</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. T.</given-names>
            <surname>Endo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rodrigues</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Sadok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kelner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Curescu</surname>
          </string-name>
          ,
          <article-title>Resource allocation based on redundancy models for high availability cloud</article-title>
          ,
          <source>Computing</source>
          <volume>102</volume>
          (
          <year>2020</year>
          ) 43
          <fpage>63</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hajisami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. X.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Younis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pompili</surname>
          </string-name>
          ,
          <article-title>Elastic resource provisioning for increased energy efficiency and resource utilization in cloud-RANs</article-title>
          ,
          <source>Computer Networks</source>
          <volume>172</volume>
          (
          <year>2020</year>
          )
          <fpage>107170</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R. B.</given-names>
            <surname>Halima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kallel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Gaaloul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Maamar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jmaiel</surname>
          </string-name>
          ,
          <article-title>Toward a correct and optimal timeaware cloud resource allocation to business processes</article-title>
          ,
          <source>Future Generation Computer Systems</source>
          <volume>112</volume>
          (
          <year>2020</year>
          ) 751
          <fpage>766</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Hamzaoui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Duthil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Courboulay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Medromi</surname>
          </string-name>
          ,
          <article-title>A survey on the current challenges of energy-efficient cloud resources management</article-title>
          ,
          <source>SN Computer Science</source>
          <volume>1</volume>
          (
          <year>2020</year>
          ) 1
          <fpage>28</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H. O.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Azizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Shojafar</surname>
          </string-name>
          ,
          <article-title>Priority, network and energy-aware placement of IoT-based application services in fog-cloud environments</article-title>
          ,
          <source>IET Communications</source>
          <volume>14</volume>
          (
          <year>2020</year>
          ) 2117
          <fpage>2129</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>de Laat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <article-title>Concurrent container scheduling on heterogeneous clusters with multi-resource constraints</article-title>
          ,
          <source>Future Generation Computer Systems</source>
          <volume>102</volume>
          (
          <year>2020</year>
          ) 562
          <fpage>573</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Decomposition based cloud resource demand prediction using extreme learning machines</article-title>
          ,
          <source>Journal of Network and Systems Management</source>
          <volume>28</volume>
          (
          <year>2020</year>
          ) 1775
          <fpage>1793</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <article-title>CSL-driven and energy-efficient resource scheduling in cloud data center</article-title>
          ,
          <source>The Journal of Supercomputing</source>
          <volume>76</volume>
          (
          <year>2020</year>
          ) 481
          <fpage>498</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Liaqat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. H.</given-names>
            <surname>Ab Hamid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Toseef</surname>
          </string-name>
          , U. Shoaib,
          <string-name>
            <given-names>R. L.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <article-title>Federated cloud resource management: Review and discussion</article-title>
          ,
          <source>Journal of Network and Computer Applications</source>
          <volume>77</volume>
          (
          <year>2017</year>
          ) 87
          <fpage>105</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <article-title>Social spider foraging-based optimal resource management approach for future cloud</article-title>
          ,
          <source>The Journal of Supercomputing</source>
          <volume>76</volume>
          (
          <year>2020</year>
          )
          <fpage>1880</fpage>
          <lpage>1902</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>N.</given-names>
            <surname>Gholipour</surname>
          </string-name>
          , E. Arianyan,
          <string-name>
            <given-names>R.</given-names>
            <surname>Buyya</surname>
          </string-name>
          ,
          <article-title>A novel energy-aware resource management technique using joint VM and container consolidation approach for green computing in cloud data centers</article-title>
          ,
          <source>Simulation Modelling Practice and Theory</source>
          <volume>104</volume>
          (
          <year>2020</year>
          )
          <fpage>102127</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S. S.</given-names>
            <surname>Gill</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tuli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. N.</given-names>
            <surname>Toosi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cuadrado</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Garraghan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bahsoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Lutfiyya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sakellariou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Rana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dustdar</surname>
          </string-name>
          , R. Buyya,
          <article-title>ThermoSim: Deep learning-based framework for modeling and simulation of thermal-aware resource management for cloud computing environments</article-title>
          ,
          <source>Journal of Systems and Software</source>
          <volume>166</volume>
          (
          <year>2020</year>
          )
          <fpage>110596</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Abrol</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Nature-inspired metaheuristics in cloud: A review</article-title>
          ,
          <source>in: ICT Systems and Sustainability</source>
          , Springer, Singapore,
          <year>2020</year>
          , pp.
          <fpage>13</fpage>
          <lpage>34</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Alfakih</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Hassan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gumaei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Savaglio</surname>
          </string-name>
          , G. Fortino,
          <article-title>Task offloading and resource allocation for mobile edge computing by deep reinforcement learning based on SARSA</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          ) 54074
          <fpage>54084</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Apostolopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Tsiropoulou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Papavassiliou</surname>
          </string-name>
          ,
          <article-title>Risk-aware data offloading in multiserver multi-access edge computing environment</article-title>
          ,
          <source>IEEE/ACM Transactions on Networking</source>
          <volume>28</volume>
          (
          <year>2020</year>
          ) 1405
          <fpage>1418</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>P. A.</given-names>
            <surname>Apostolopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. E.</given-names>
            <surname>Tsiropoulou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Papavassiliou</surname>
          </string-name>
          ,
          <article-title>Cognitive data offloading in mobile edge computing for internet of things</article-title>
          ,
          <source>IEEE Access</source>
          <volume>8</volume>
          (
          <year>2020</year>
          ) 55736
          <fpage>55749</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ascigil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tasiopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. K.</given-names>
            <surname>Phan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sourlas</surname>
          </string-name>
          , I. Psaras, G. Pavlou,
          <article-title>Resource provisioning and allocation in function-as-a-service edge-clouds</article-title>
          ,
          <source>IEEE Transactions on Services Computing</source>
          <volume>1374</volume>
          (
          <year>2021</year>
          ) 1
          <fpage>14</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Asghari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Sohrabi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Yaghmaee</surname>
          </string-name>
          ,
          <article-title>A cloud resource management framework for multiple online scientific workflows using cooperative reinforcement learning agents</article-title>
          ,
          <source>Computer Networks</source>
          <volume>179</volume>
          (
          <year>2020</year>
          )
          <fpage>107340</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Babu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Samuel</surname>
          </string-name>
          ,
          <article-title>Petri net model for resource scheduling with auto scaling in elastic cloud</article-title>
          ,
          <source>International Journal of Networking and Virtual Organisations</source>
          <volume>22</volume>
          (
          <year>2020</year>
          ) 462
          <fpage>477</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>