<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Optimizing the Energy Consumption of On-site Private Cloud Computing Platforms</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anatol Lozinskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anatoly Gladun</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of information technology and systems of the National Academy of Sciences of Ukraine</institution>
          ,
          <addr-line>40 Akademik Glushkov Avenue, Kyiv, 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Cloud computing offers advantages such as self-service and the dynamic redistribution of resource structures, which necessitate effective algorithms for the optimal use of hardware infrastructure. Recent studies highlight the problems of resource redistribution in the software layers of on-site private cloud computing platforms and its effect on total energy consumption. Existing approaches to mathematical modeling use combinatorial optimization, game theory, and artificial intelligence, but lack an integrated approach to energy consumption optimization. The proposed approach rests on an original algorithm built on a multi-level architecture developed by the author. This architecture divides the platform's objects into seven functional layers, from the physical hardware infrastructure to the SaaS application layer. The algorithm minimizes electricity consumption by isolating critical parameters of the process of redistributing resource structures. The optimization problem is segmented into subproblems, allowing efficient load placement and energy savings. The model optimizes the power consumption of on-site private cloud computing platforms through a well-known approach: reducing the number of hardware nodes involved. The idea is to switch off not only individual computing nodes but also cooling nodes when they are idle. The resource allocation problem for on-site private cloud computing platforms is formulated mathematically as multi-dimensional bin packing. The algorithm employs dynamic programming to maximize load placement density and reduce the number of hardware nodes. The model supports both virtual machines and software containers. The paper presents the author's original algorithm, Optimization of Programmable Infrastructure Resources (OPIR), focused on reducing electricity consumption through combinatorial optimization.</p>
      </abstract>
      <kwd-group>
        <kwd>cloud computing</kwd>
        <kwd>on-site private cloud computing platforms</kwd>
        <kwd>energy optimization</kwd>
        <kwd>multi-layer system architecture</kwd>
        <kwd>resource allocation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        A large-scale challenge for modern power supply is the rapid growth of data center (DC) workloads driven by big data storage and analytics tasks, which in turn drives the growth of the hardware infrastructures of cloud computing platforms. At the present stage of development, the number of hardware nodes in cloud computing platforms already reaches tens of thousands [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ][
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Several advantages of cloud computing, such as self-service and the dynamic redistribution of resource structures in software layers, require the development of effective algorithms and methods for the optimal use of hardware infrastructure.
      </p>
    </sec>
    <sec id="sec-2">
      <title>1. Related Surveys</title>
      <p>
        A detailed study [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] provides an overview of numerous developments in energy efficiency over the last decade across computing technologies, from chip microarchitecture to data-center scale. It brings to the fore the issue of energy efficiency at the next stage of the development of computing technology. The energy wall becomes the barrier around which computing systems will develop across the entire spectrum of implementations. Modern research focuses on overcoming the energy barrier through various ways of implementing close interaction between energy-efficient hardware and energy-aware software technologies.
      </p>
      <p>
        Numerous studies and scientific publications highlight the problems of redistributing resource structures in the software layers of cloud computing platforms from various perspectives on the organization of cloud computing. Indeed, individual subsystem components affect the overall power consumption of a cloud platform. By highlighting the complexity of resource redistribution in individual subsystems, these works provide an understanding of the existing possibilities for optimizing the power consumption of cloud computing platforms in general. The paper [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] highlights the problem of sharing processor resources: competition among several consumers for computing speed arises when their workloads are placed on the same computing node. To avoid such conflicts, an algorithm is proposed for planning the hot migration of running virtual machines between nodes of the hardware cluster. The paper [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] concludes that the technology of hot migration of virtual machines can serve another purpose: concentrating virtual machines on a smaller number of computing nodes in the cluster. The work [6] highlights the problem of resource utilization from the point of view of providing consumers with reliable access to the storage services of the Amazon cloud platform. The paper presents the capabilities of the distributed Dynamo system in providing highly reliable data storage and availability through built-in algorithms for distributed duplication of the network access and storage subsystems. The analysis suggests that reducing the amount of physical storage equipment can substantially reduce a cloud platform's power consumption. The paper [7] examines the organization of data transfer in DCs with computing clusters. Based on the data presented in that work, it can be concluded that managing the number of involved network devices (switches, routers) in the DC network infrastructure can likewise substantially reduce a cloud platform's power consumption. The paper [8] studies the hot migration of virtual machines between blade servers within an IBM BladeCenter chassis to control load and temperature conditions. We can therefore conclude that the consolidation of virtual machines also affects the capacity of the cooling systems needed to remove the heat dissipated by the computing nodes. The paper [9] presents a subsystem for automating the scaling of the software infrastructure of cloud computing platforms, the Policy Keeper orchestration service. Hot migration of virtual machines and software containers is performed based on machine learning scripts, policies, and algorithms, showing that the complex scaling processes of cloud software infrastructures can be successfully handled by orchestration tools. The work [10] relates the load level of physical computing nodes to the power consumption of heat-dissipation equipment. Migration of running virtual machines is considered as a mechanism for managing the load of blade servers; controlling the operation of heat-removal equipment yields a significant optimization of energy consumption. The work [11] presents a classification of cloud deployment scenarios by form of ownership and location. A cloud deployment that belongs to one enterprise, serves only users of that enterprise, and is located on the enterprise's own premises is classified as an on-site private cloud. Further in the text, the term on-site private cloud computing platform (OPCCP, or Platform) will be used to refer to cloud computing platforms deployed in this scenario. More complex scenarios for organizing cloud computing can be considered as combinations and interactions of the OPCCPs of several enterprises, including scenarios where shared access to a platform's cloud services is provided. Therefore, the study of energy consumption optimization for OPCCPs remains relevant for other cloud computing scenarios.
      </p>
      <p>The current state of the application of mathematical methods in the field of cloud computing is covered in the work [12]. Existing approaches to the mathematical modeling of individual cloud computing processes use methods of combinatorial optimization, game theory, artificial intelligence, etc. However, the algorithms presented in those papers focus on individual subsystems and do not model the power-consumption target function by simultaneously isolating the essential factors of all OPCCP subsystems. Moreover, the presented algorithms mainly address the consolidation of virtual machines or software containers and do not delve into the details of the resources and processes involved in the implementation of the OPCCP architecture.</p>
    </sec>
    <sec id="sec-3">
      <title>2. The proposed approach</title>
      <p>In the author's view, only an understanding of the holistic picture of the operation of an OPCCP, from the physical infrastructure of the data center up to the highest level of SaaS cloud services, allows for a significant improvement in power-minimization algorithms. An original algorithm for optimizing power consumption can be created by highlighting critical parameters through the analysis of internal system processes, based on the multi-level architecture of cloud computing platforms (Architecture) developed by the author. Understanding the components of the Platform and the principles by which they interact allows the researcher to identify the parameters significant for optimizing energy consumption. A detailed explanation of the organization, purpose, properties, and constituent elements of each layer of the Architecture is presented in the paper [13].</p>
      <sec id="sec-3-1">
        <title>2.1. The redistribution process of structure of resources</title>
        <p>The basis for presenting a holistic picture of the structure and internal processes of the OPCCP is the author's Architecture. The layering method used in the Architecture divides functionally similar components into seven distinct layers. The lowest layer of the Architecture is the physical hardware infrastructure, that is, a set of different physical nodes (PN): computing nodes, data storage systems, network equipment, uninterruptible power supplies, and cooling units. All these elements of the Architecture consume electricity, heat up, and require cooling, which also adds its share to the total energy consumption of the Platform. The rest of the Architecture consists of software layers (operating systems, software platforms, and products). Deploying and launching elements in the software layers involves hardware-layer resources (processor cores, RAM, data storage space, network bandwidth). In effect, the process of redistributing OPCCP resource structures is determined by the life cycles of creating, operating, and deleting the constituent elements of the software layers. The resources available to the operating system deployed on each individual computing server are limited by its constituent physical units (processor cores, RAM, data storage drives). Figure 1 shows a revised representation of the Architecture considering the Kubernetes technology [14], which allows the implementation of flexible and elastic container-based software infrastructures.</p>
        <p>It is precisely to present resources as an abstract pool (the main property of cloud computing) that an OPCCP is built on a computing cluster (Cluster). In the Cluster, PNs are added and removed transparently (without interrupting customer service), thanks to the Architecture's flexible ability to redistribute the resource structures of the software layers between PNs. Thus, a separate cloud service (CS) is implemented as a distributed information system capable of increasing and decreasing the volume of resources it needs in response to service requests to perform the ordered actions.</p>
        <p>Analyzing the possibility of increasing the resources of the Architecture, we assume that there are no restrictions on the number of PNs in the OPCCP. Accordingly, the power consumption of the OPCCP equals the total power consumption of all the involved PNs. Obviously, the total resource volume is limited by the total amount of PN resources in the OPCCP. We also consider the standard IT-industry practice of producing specialized PNs focused on a particular type of function or resource (computing speed, data storage, network interaction, uninterruptible power supply, heat dissipation). For brevity in the figures and in the text, we classify PN types by their resource orientation: computing (CPN), data storage (SPN), network (NPN), uninterruptible power supply (PPN), and heat dissipation (DPN). The spatial location of equipment in the data-center server room also affects the power consumption of the physical infrastructure. Let us model the situation with standardized PN enclosures placed in typical mounting cabinets, stacked one above the other, allowing flexible combinations of spatial placement options. The cabinets in the machine room are placed in parallel rows to create "cold" and "hot" convection cooling corridors. A solution is possible in which the machine room is equipped with DPNs of different heat-dissipation capacities: for the whole machine room, for a cold corridor of several cabinets, or for a separate cabinet. Thus, a heat-dissipation zone is the set of PNs whose cooling is provided by one DPN. The number of cabinets, PNs, and their types varies according to the supplier's needs regarding the operation of the OPCCP.</p>
        <p>Given the local nature of the OPCCP network architecture, it is reasonable to assume that network latency is constant and small enough to be neglected. The above structure of the network architecture and the principle of equipment location in the data center therefore make it possible not to analyze the impact of network activity on electricity consumption. From the author's point of view, the network hardware architecture of an OPCCP should be built on flagship models of layer-3 switches, which allow varying the number and bandwidth of ports and programmatically controlling the configuration of virtual network segments.</p>
        <p>The general structure and dynamics of the redistribution of OPCCP resources in the implementation of an individual CS depend on the composition of its software elements. Within the framework of the model, four types of elements of the software layers of the Architecture are involved in the process of redistributing resource structures: 1) the operating system of the PN (OS); 2) an operating system service (OSS); 3) a virtual machine (VM); 4) a software container (SC). Cloud services are implemented by compositions of OSSs, as well as by compositions of such compositions. Further in the text, where appropriate, the OS, OSS, VM, and SC elements will be designated by the general term Load.</p>
        <p>The author's idea for solving the problem of optimizing the energy consumption of an OPCCP is to segment the total population of PNs into two types of zones in the machine room. The first type is a subset of PNs whose power supply is provided by one PPN, forming a power supply zone (PSZ). The second type is a subset of PNs whose heat dissipation is provided by one DPN, forming a cooling zone (CZ). The idea is, in the absence of loads, to power off not only individual CPNs but also PPNs and DPNs, since the power consumption of a DPN is commensurate with the power consumption of the PNs inside its CZ. One CZ can cover several PSZs, depending on the implementation of the data center. Minimization of electricity consumption is achieved by minimizing the number of involved individual PNs as well as entire PSZs and CZs. Preference is given to placing loads in the active zones; only when the involved zones have no free resources is an additional zone activated, since the DPN has the highest level of power consumption.</p>
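        <p>The zone-preference rule described above can be sketched as follows. This is an illustrative fragment, not the paper's implementation: the function name and zone fields are assumptions made for the example.</p>

```python
# Illustrative sketch of the zone-selection rule: prefer an already-active
# cooling zone; activate an additional zone only when no active zone has
# enough free capacity (powering on a DPN is the most expensive step).

def select_zone(zones, demand):
    """zones: list of dicts {"active": bool, "free": int}; demand in resource units."""
    # 1) Try active zones first; pick the tightest fit to keep packing dense.
    candidates = [z for z in zones if z["active"] and z["free"] >= demand]
    if candidates:
        return min(candidates, key=lambda z: z["free"] - demand)
    # 2) Otherwise activate the smallest sufficient inactive zone.
    inactive = [z for z in zones if not z["active"] and z["free"] >= demand]
    if not inactive:
        return None  # no capacity anywhere in the Platform
    zone = min(inactive, key=lambda z: z["free"])
    zone["active"] = True
    return zone
```

        <p>A load needing 3 units lands in an active zone with 4 free units rather than waking an idle zone with 10, which is exactly the consolidation preference stated above.</p>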
        <p>Splitting the common set of CPNs into CZ subsets, which in turn are divided into PSZ subsets, allows dividing the optimization problem into subproblems of limited data size. Let us represent the individual elements of the resource structure as packages packed one into another. This yields the idea of an algorithm for packing the multi-level nesting of elements of the software layers into the corresponding elements of the lower layers of the Architecture: PSZs are packed into CZs; one level higher, CPNs are packed into PSZs; one level higher still, loads are packed into CPNs. We consider the structural elements heterogeneous in terms of redistribution technology (VM or SC) and the composition of resource types (computing instances, data storage volume). A general representation of the idea of redistributing resource structures is shown in Fig. 2.</p>
        <p>Also, in order to achieve energy savings, the algorithm for redistributing OPCCP resource structures should perform the following subtasks: 1) selection of the target CZ; 2) selection of the target PSZ; 3) selection of the target CPN; 4) selection of loads according to the resource orientation of the target CPN or SPN; 5) packing the loads into the selected CPN or SPN; 6) shutdown of unused CPNs and, where applicable, of all infrastructure PNs in unused PSZs and CZs; 7) elimination of load-placement dispersion across the machine room of the DC.</p>
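        <p>Subtasks 1) to 5) above can be sketched as one nested selection pass over the zone hierarchy; subtasks 6) and 7) would run separately after load removals. This is a hypothetical illustration: the data layout and helper name are assumptions, not the OPIR implementation.</p>

```python
# Sketch of subtasks 1-5: walk CZ -> PSZ -> CPN, preferring already-active
# elements at every level, and pack the load into the first fitting node.

def opir_place(load, cooling_zones):
    """load: {"type", "need"}; cooling_zones: nested CZ/PSZ/CPN dicts."""
    # sorted(..., key=not active) puts active zones/nodes first (False < True)
    for cz in sorted(cooling_zones, key=lambda z: not z["active"]):     # 1) target CZ
        for psz in sorted(cz["psz"], key=lambda z: not z["active"]):    # 2) target PSZ
            for cpn in sorted(psz["cpn"], key=lambda n: not n["on"]):   # 3) target CPN
                if cpn["type"] == load["type"] and cpn["free"] >= load["need"]:  # 4)
                    cpn["free"] -= load["need"]                         # 5) pack the load
                    cpn["on"] = psz["active"] = cz["active"] = True
                    return cpn
    return None  # no fitting node: an additional zone must be provisioned
```

        <p>Because active elements sort first at every level, a new CPN (and, transitively, a new PSZ or CZ) is powered on only after the active ones run out of fitting capacity.</p>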
        <p>We assume that the type of microprocessors in the PNs is the same. The computing power of the elements of the Architecture is measured by the number of involved microprocessor cores, in pieces (pcs). We assume that the type of RAM in the CPNs is the same; the unit of measurement of the amount of RAM is the byte (Byte), as is the unit of measurement of the amount of data storage. Finally, the result of the target function of the optimization model is the power consumption of the OPCCP, measured in watts (W). It should be noted that typical operating systems of minimal configuration are deployed in CPNs and VMs as needed as a deployment environment for OSSs.</p>
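        <p>With these units fixed, the target function reduces to a sum in watts over the powered-on nodes, including the PPN and DPN infrastructure nodes. The wattage values below are arbitrary placeholders for illustration, not measured figures.</p>

```python
# Minimal model of the target function: total Platform power is the sum of
# the draw of every powered-on PN, of all five types. Values are placeholders.

NODE_POWER_W = {"CPN": 350, "SPN": 250, "NPN": 120, "PPN": 90, "DPN": 1400}

def platform_power(nodes):
    """nodes: iterable of (pn_type, powered_on) pairs; result in watts (W)."""
    return sum(NODE_POWER_W[t] for t, on in nodes if on)
```

        <p>Under these placeholder values, turning off one idle DPN saves as much as turning off four computing nodes, which motivates the zone-level shutdown in the model.</p>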
      </sec>
      <sec id="sec-3-2">
        <title>2.2. Resource allocation mechanism</title>
        <p>The problem of optimizing the redistribution of OPCCP resource structures arises from the need to effectively support the main property of a CS: the elasticity of cloud computing. Elasticity provides the ability to change the amount of OPCCP resources involved in individual CSs over time. In accordance with the dynamics of individual consumer needs, the total volume and structure of the redistribution of OPCCP resources change constantly. An increase or decrease in the number of service requests to an individual CS in the SaaS application layer triggers a downward sequence of requests between the layers of the OPCCP architecture, causing the automatic allocation or release of CS compute resources in the IaaS layer so that the volume of involved infrastructure resources matches the current level and intensity of customer service.</p>
        <p>The life cycle of OPCCP infrastructure facilities takes place in the IaaS (Infrastructure as a Service) layer. More precisely, requests for load placement are performed by the Resource Allocation Planning (RAP) OSS subsystem. When choosing the target PN for load placement, the RAP uses a sequential enumeration algorithm, which leads to uncontrolled dispersion of loads among the CPNs in the machine room.</p>
        <p>The mechanism for the optimal selection of a CPN can be implemented in the form of a separate Resource Control and Management (RCM) service. Fig. 3 shows how the RCM receives information about the current state of the redistribution of OPCCP resources. To provide the RCM with operational information about the current state of resource allocation, a software agent (SA) is placed on each CPN. Each SA notifies the RCM of every successful resource deployment or release event on its local CPN.</p>
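        <p>The SA-to-RCM flow can be sketched as a state table keyed by node identifier, updated on each agent message. The row fields below are assumptions made for illustration, not the paper's NST schema.</p>

```python
# Sketch of the Nodes States Table (NST) kept by the RCM: each SA message
# overwrites the row of one CPN, so the table always holds the latest state.

nst = {}  # node_id -> current state row

def on_sa_event(node_id, psz, cz, free_cores, free_ram, powered_on):
    """Handle a deploy/release notification from the SA on one CPN."""
    nst[node_id] = {
        "psz": psz, "cz": cz,
        "free_cores": free_cores, "free_ram": free_ram,
        "on": powered_on,
    }

def free_capacity(cz):
    """Cores still available in one cooling zone, as currently seen by the RCM."""
    return sum(r["free_cores"] for r in nst.values() if r["cz"] == cz and r["on"])
```

        <p>Because every deployment or release replaces the node's row, aggregate queries such as free_capacity always reflect the current resource-allocation structure, which is the real-time property the text attributes to the NST.</p>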
        <p>The calculation of the optimal choice of target CPN or SPN for load placement should be performed by the RCM based on information about the current state of each PN in the Platform's hardware cluster. The RCM needs to know the power-supply state of each CPN and SPN, the PSZ and CZ to which each belongs, what resources are currently placed on it, and what amount of resources is still available for placing new loads. The data of each message from an SA is entered by the RCM into the general Nodes States Table (NST). Thus, the dynamics of changes in the NST reflect the current state of the OPCCP resource-allocation structure in real time. The data collected in the NST is processed by the optimal resource-allocation algorithm integrated into the RCM. To ensure the optimal allocation of resources, the author proposes a sequence of actions that together make up the algorithm for Optimizing Programmable Infrastructure Resources (OPIR), represented by a flowchart in Fig. 4.</p>
      </sec>
      <sec id="sec-3-3">
        <title>2.3. Algorithm for resource allocation optimization</title>
        <p>The OPIR algorithm is activated by events of redistribution of OPCCP resource structures. There are two such events: 1) a load is placed into the allocation queue; 2) a load is removed (shut down).</p>
        <p>We pack loads into the target CPN using an appropriate mathematical method to achieve maximum load-placement density. Load placements are processed in queue order. The queue is formed on a first-come, first-served basis; loads can be assigned priority values, and priority loads are processed first.</p>
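        <p>The queue discipline just described, first-come-first-served within a priority level with higher priorities served first, can be sketched with the standard library heap. The function names are illustrative assumptions.</p>

```python
# Placement queue sketch: a min-heap ordered by (-priority, arrival number),
# so higher-priority loads come out first and ties keep arrival order (FCFS).

import heapq
import itertools

_counter = itertools.count()  # monotone arrival stamp, breaks priority ties
_queue = []

def enqueue(load, priority=0):
    # heapq is a min-heap, so the priority is negated: larger -> served earlier
    heapq.heappush(_queue, (-priority, next(_counter), load))

def dequeue():
    return heapq.heappop(_queue)[2] if _queue else None
```

        <p>Enqueuing "a", "b", then an urgent load with priority 5 yields the urgent load first and then "a" and "b" in arrival order.</p>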
        <p>Thus, the OPIR algorithm not only minimizes the number of involved CPNs by tightly packing the load queue onto them, but also ensures the consolidation (concentration) of loads in the machine-room space by reducing the number of active hardware nodes and turning off PSZs and CZs.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>3. Mathematical methods overview</title>
      <p>The Architecture presents the OPCCP as an integral system composed of sets of elements with their inherent properties and parameters. Elements of the Architecture should be understood as separate services. Connections between individual services implement other services of a higher level of complexity: compositions of services.</p>
      <sec id="sec-4-1">
        <title>3.1. The packaging problem</title>
        <p>The author presents the solution to the problem of redistributing OPCCP resources as a multi-dimensional bin packing problem. The NP-completeness of a large number of problems, including the bin packing problem, has been established. The bin packing problem is NP-hard [15], i.e., no polynomial-time exact algorithm for it is known: growth in the amount of input data leads to an exponential increase in the execution time of an exact algorithm. Nevertheless, the packing problem is very common in practice, and heuristic algorithms are usually used to solve problems of this class. There are two approaches to circumventing the exponential dependence: first, the amount of input data can be limited to reduce computation time; second, one can seek a polynomial algorithm that finds a non-optimal but close-to-optimal solution.</p>
        <p>Approximate methods for the bin packing problem include the greedy algorithm, genetic algorithms, the ant colony algorithm, simulated annealing, etc. These algorithms are, as a rule, of polynomial complexity, and the price paid is an approximate result.</p>
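        <p>As an example of this heuristic class, First-Fit Decreasing runs in polynomial time and, for one-dimensional bin packing, is known to use at most 11/9·OPT + 6/9 bins. A minimal one-dimensional sketch, assuming identical node capacities:</p>

```python
# First-Fit Decreasing: sort loads in decreasing size, put each load into the
# first open bin (node) where it fits, opening a new bin only when necessary.

def first_fit_decreasing(loads, capacity):
    """loads: list of demands; capacity: identical bin size. Returns bins' contents."""
    bins = []  # each bin is [remaining capacity, [placed loads]]
    for load in sorted(loads, reverse=True):
        for b in bins:
            if b[0] >= load:       # first bin with room
                b[0] -= load
                b[1].append(load)
                break
        else:
            bins.append([capacity - load, [load]])  # open a new bin
    return [b[1] for b in bins]
```

        <p>Packing the demands 4, 8, 1, 4, 2, 1 into nodes of capacity 10 uses two fully utilized nodes, illustrating how a polynomial heuristic can still reach a dense placement.</p>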
        <p>The exact methods of solving the bin packing problem include three main methods of discrete programming:
• the dual simplex method, a numerical method for solving linear programs that is an improved version of the simplex method. The introduction of an additional constraint allows obtaining an optimal integer solution; an example of the implementation of this idea is the Gomory algorithm. The main disadvantage of the method is poor convergence to an integer solution;
• the branch and bound method, which comes down to building a tree of possible options, determining a bound estimate of the solution for each vertex of the tree, and cutting off unpromising vertices;
• the dynamic programming method, based on Bellman's principle of optimality. This method was further developed in the method of sequential analysis of options.</p>
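        <p>To make the third option concrete, Bellman-style dynamic programming can solve one packing subproblem exactly: choosing the subset of queued loads that fills a single node as densely as possible, a 0/1 knapsack over the node's free capacity. This sketch is an illustration of the technique, not the paper's algorithm.</p>

```python
# Bellman recurrence over capacities: best[c] is the maximum total load
# achievable with capacity c. Complexity O(len(loads) * capacity).

def densest_fill(loads, capacity):
    """Return the densest achievable fill of one node of the given capacity."""
    best = [0] * (capacity + 1)
    for load in loads:
        # iterate capacities in reverse so each load is used at most once
        for c in range(capacity, load - 1, -1):
            best[c] = max(best[c], best[c - load] + load)
    return best[capacity]
```

        <p>For loads 5, 4, 3 and a node with 9 free cores, the optimum 9 (5 + 4) is found exactly, whereas a greedy pass taking 5 then 3 would leave a core unused.</p>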
        <p>The analysis of the resource-structure redistribution processes shows that, among the possible redistribution options, it is necessary to find the configuration that provides the optimal value of the target function. The problem is therefore enumerative and can be reduced to combinatorial optimization problems.</p>
      </sec>
      <sec id="sec-4-2">
        <title>3.2. Selection of mathematical method</title>
        <p>In the worst case (i.e., for some initial data, including the most unfavorable), the listed methods for solving the bin packing problem have exponential complexity; this determines the complexity class of the problem. The first two exact methods are discarded in view of the shortcomings noted above, and we will apply the dynamic programming method to solve the problem [16].</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>4. Scaling the OPIR algorithm</title>
      <p>The OPIR algorithm is implemented in the form of a VM image file in the Open Virtualization Format (OVF), a common standard for cloud platforms. This solution allows us to use cloud capabilities to scale the resources involved by the OPIR algorithm. Depending on the scale of the hardware cluster, with different orders of magnitude in the number of PNs, either part of the resources of the main OPCCP node or the resources of a separate CPN can be allocated for placing the VM with OPIR. For OPCCP clusters in which the number of PNs reaches tens of thousands of units, the algorithm can be implemented on a VM software cluster with resources allocated across designated PNs of the cluster.</p>
    </sec>
    <sec id="sec-6">
      <title>Summary and Further Work</title>
      <p>The article presents for the first time the author's original algorithm, Optimization of Programmable Infrastructure Resources (OPIR), with the target function of reducing electricity consumption. The target function of minimizing the electricity consumption of an on-site private cloud computing platform operates by analyzing resource redistribution options in order to determine the optimal placement configuration of virtual machines, software containers, and operating-system-level services inside the computing cluster of the data center. The author formulates the resource-structure redistribution problem in on-site private cloud computing platforms mathematically as a multi-dimensional bin packing problem.</p>
      <p>For future work, the author plans to investigate the application of the dynamic programming method for solving the problem of packing virtual machines and software containers onto target computing nodes.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author used the Microsoft Office 365 built-in service Copilot for grammar and spelling checking. After using this tool, the author reviewed and edited the content as needed and took full responsibility for the publication's content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><given-names>I.</given-names> <surname>Baldini</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Castro</surname></string-name>,
          <string-name><given-names>K.</given-names> <surname>Chang</surname></string-name>,
          <string-name><given-names>P.</given-names> <surname>Cheng</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Fink</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Ishakian</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Mitchell</surname></string-name>,
          <string-name><given-names>V.</given-names> <surname>Muthusamy</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Rabbah</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Slominski</surname></string-name>,
          <article-title>Serverless computing: current trends and open problems</article-title>,
          in: S. Chaudhary, G. Somani, R. Buyya (Eds.),
          <source>Research advances in cloud computing</source>,
          volume <volume>474</volume>, Springer, Singapore,
          <year>2017</year>, pp. <fpage>1</fpage>-<lpage>20</lpage>.
          doi:<pub-id pub-id-type="doi">10.1007/978-981-10-5026-8_1</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><given-names>M.</given-names> <surname>Tirmazi</surname></string-name>,
          <string-name><given-names>A.</given-names> <surname>Barker</surname></string-name>,
          <string-name><given-names>N.</given-names> <surname>Deng</surname></string-name>,
          <string-name><given-names>M. E.</given-names> <surname>Haque</surname></string-name>,
          <string-name><given-names>Z. G.</given-names> <surname>Qin</surname></string-name>,
          <string-name><given-names>S.</given-names> <surname>Hand</surname></string-name>,
          <string-name><given-names>M.</given-names> <surname>Harchol-Balter</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Wilkes</surname></string-name>,
          <article-title>Borg: the next generation</article-title>,
          in: <source>Proceedings of the Fifteenth European Conference on Computer Systems</source>,
          EuroSys '20, Association for Computing Machinery, New York, NY,
          <year>2020</year>, pp. <fpage>1</fpage>-<lpage>14</lpage>.
          doi:<pub-id pub-id-type="doi">10.1145/3342195.3387517</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><given-names>R.</given-names> <surname>Muralidhar</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Borovica-Gajic</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Buyya</surname></string-name>,
          <article-title>Energy efficient computing systems: Architectures, abstractions and modeling to techniques and standards</article-title>,
          CoRR abs/2007.09976 (<year>2022</year>).
          doi:<pub-id pub-id-type="doi">10.48550/arXiv.2007.09976</pub-id>. arXiv:2007.09976.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><given-names>J.</given-names> <surname>Ahn</surname></string-name>,
          <string-name><given-names>C.</given-names> <surname>Kim</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Han</surname></string-name>,
          <string-name><given-names>Y.-R.</given-names> <surname>Choi</surname></string-name>,
          <string-name><given-names>J.</given-names> <surname>Huh</surname></string-name>,
          <article-title>Dynamic virtual machine scheduling in clouds for architectural shared resources</article-title>,
          in: <source>Proceedings of the 4th USENIX conference on Hot Topics in Cloud Computing</source>,
          HotCloud'12, USENIX Association, Berkeley, CA,
          <year>2012</year>, pp. <fpage>75</fpage>-<lpage>80</lpage>.
          doi:<pub-id pub-id-type="doi">10.5555/2342763.2342782</pub-id>.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><given-names>A.</given-names> <surname>Beloglazov</surname></string-name>,
          <string-name><given-names>R.</given-names> <surname>Buyya</surname></string-name>,
          <article-title>Energy efficient resource management in virtualized cloud data centers</article-title>,
          in: <source>Proceedings of the 2010 10th IEEE/ACM International Conference on Cluster,</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>