<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Enhancing Cloud Energy Efficiency through Predictive Machine Learning for Inter- and Intra-Data Center VM Consolidation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Santina Capalbo</string-name>
          <email>santina.capalbo@dimes.unical.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugenio Cesario</string-name>
          <email>eugenio.cesario@unical.it</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Paolo Lindia</string-name>
          <email>paolo.lindia@dimes.unical.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Federica Lobello</string-name>
          <email>federica.lobello@unical.it</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Vinci</string-name>
          <email>andrea.vinci@icar.cnr.it</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>DICES Department, University of Calabria</institution>
          ,
          <addr-line>87036, Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>DIMES Department, University of Calabria</institution>
          ,
          <addr-line>87036, Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Institute for High-Performance Computing and Networking (ICAR), CNR-National Research Council of Italy</institution>
          ,
          <addr-line>87036, Rende</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The rapid expansion of Cloud Computing and large-scale data centers has resulted in a substantial increase in energy consumption, primarily due to hardware operation and cooling requirements. This rise in power usage has significantly elevated operational costs, making energy efficiency a critical concern for data center management. Virtual Machine (VM) consolidation is a well-established strategy to address these challenges by reducing the number of active physical servers while ensuring compliance with Service Level Agreements (SLAs). However, the effectiveness of consolidation heavily depends on the accurate prediction of VM resource demands. This paper proposes a consolidation approach, encompassing both intra- and inter-data center strategies, for energy-aware VM allocation across physical servers. The system leverages predictive machine learning models to forecast the future computational needs of individual VMs. By anticipating these demands, the framework dynamically allocates VMs across the servers of the considered data centers to optimize server utilization and minimize energy consumption, without compromising performance or SLA compliance. Preliminary experimental results demonstrate that the proposed approach significantly reduces overall power consumption, particularly when guided by machine learning-driven workload forecasting.</p>
      </abstract>
      <kwd-group>
        <kwd>Cloud energy optimization</kwd>
        <kwd>machine learning for energy efficiency</kwd>
        <kwd>green and sustainable computing</kwd>
        <kwd>Virtual Machines (VMs) consolidation</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent years, Cloud Computing has spread rapidly, becoming a strategic resource for companies
and organizations. This paradigm avoids the costs associated with installing and maintaining physical
infrastructure, allowing companies to focus their resources on innovation and process optimization. An
increasing number of companies and scientific communities are transferring their data, software, and
services to the cloud, taking advantage of a scalable infrastructure that avoids the costs and complexity
associated with directly managing hardware and software. This phenomenon has been facilitated by
the adoption of the pay-as-you-go model, which, together with the spread of high-performance data
centers and fast networks, has contributed to the rapid spread of on-demand computing in industry and
research [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Nevertheless, this expansion has been accompanied by a considerable increase in
operating costs, mainly due to the enormous energy consumption required by data centers, which need
large amounts of power not only to run the hardware but also to keep it cool. With the evolution of
cloud technologies, the scientific community has intensified its efforts to develop more energy-efficient
solutions, promoting more sustainable management models. Among the most promising strategies are
task scheduling optimization, better resource allocation, and the intelligent use of Virtual Machine
consolidation techniques [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In this context, Power Usage Effectiveness (PUE) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] has become a
fundamental metric for measuring the energy efficiency of data centers. Introduced by The Green
Grid consortium (www.thegreengrid.org/), the PUE is defined as the ratio between the total energy
consumption of a data center and the energy used exclusively for IT equipment (servers, storage,
switches, etc.). A PUE value of 1 is ideal, indicating that all energy is used only to power IT devices,
without waste. In practice, modern data centers aim to maintain values below 1.5, thanks to more
effective cooling solutions, virtualization technologies, and advanced monitoring systems.
      </p>
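As an illustration of the metric, the PUE ratio defined above can be computed from two meter readings; the function name and the sample values below are hypothetical, chosen only to show the arithmetic.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT-equipment energy.

    A value of 1.0 is ideal (all energy powers IT devices); modern data
    centers aim for values below 1.5.
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT-equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical quarterly meter readings (kWh)
print(pue(1_250_000, 1_000_000))  # → 1.25
```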
      <p>In this study, we propose an approach to perform energy-efficient allocations of virtual machines
in a Cloud environment through predictive machine learning models. In particular, virtual machine
allocation across servers is performed through a two-level consolidation strategy involving both
inter-data center and intra-data center optimizations. The approach involves continuous monitoring,
predictive modeling, and periodic reallocation to reduce the number of active servers while ensuring
performance and compliance with SLAs. The approach aims to reduce the total energy
consumption by working at two levels. First, the proposed strategy prioritizes VM allocations to physical
servers hosted by data centers with better PUE values, giving lower priority to less efficient data centers.
Second, it leverages predictive machine learning models to anticipate future CPU
demands and optimize VM consolidation inside each data center, with the aim of further reducing the
overall energy consumption.</p>
      <p>The rest of the paper is organized as follows. Section 2 provides an overview of existing approaches for
energy-efficient virtual machine allocation found in the literature. Section 3 presents our approach, which
leverages machine learning models to drive virtual machine allocation considering the energy efficiency
of data centers. Section 4 reports an analysis of the experimental results obtained by comparing
different scenarios and presents an assessment of the effectiveness and efficiency of the proposed
approach. Finally, Section 5 concludes the work by summarizing the results obtained and
proposing directions for the continuation of research activities.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        Several works have addressed the issue of reducing energy consumption in data centers. An overview
of the most relevant proposed approaches is provided below, with a focus on optimizing task scheduling,
resource allocation, and virtual machine consolidation, both intra- and inter-data center. In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] the
authors predict resource utilization – CPU, memory, and network usage – in cloud data centers by
proposing a combined CNN-LSTM model. Data are first processed using Vector Autoregression
(VAR), followed by a CNN (Convolutional Neural Network), which extracts complex features from
the components of VM resource utilization. These features are then fed into an LSTM (Long Short-Term
Memory) network to generate the final predictions. A different strategy for predicting workload and
resource utilization – CPU and RAM – time series is presented in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which proposes a combined
model of Bi-directional LSTM and Grid LSTM (BG-LSTM) to capture bidirectional dependencies and
extract and concatenate features from both the time and frequency domains. In [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] the authors predict
data center Power Usage Effectiveness (PUE) by comparing the performance of three deep learning
models: Multilayer Perceptron (MLP), Resilient Backpropagation-based Deep Neural Network (DNN),
and Attention-based Long Short-Term Memory (LSTM). To this end, they identify the input features
that have a strong influence on the variation of the PUE through Sobol Sensitivity Analysis, and
define the relationship between them through a Hinton diagram. To reduce energy consumption by
minimizing the number of active servers through consolidation and balanced usage of multidimensional
resources – CPU, RAM, and bandwidth – in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] the authors propose a hybrid Virtual Machine Placement
(VMP) algorithm that integrates an improved permutation-based genetic algorithm (IGA-POP) with a
multidimensional resource-aware best-fit allocation strategy. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] the authors propose an approach
for resource provisioning – CPU, memory, and storage space – in a cloud environment. They combine
the Imperialist Competition Algorithm (ICA) with K-means to cluster workloads based on their Quality
of Service (QoS) attributes and then use a decision tree algorithm to determine resource provisioning.
The authors in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] propose a technique for VM allocation through the Enhance-Modified Best Fit
Decreasing (E-MBFD) algorithm, which first sorts VMs in decreasing order of their CPU utilization and
then allocates them to physical machines after verifying that they have sufficient available resources.
The resulting allocations are validated through an Artificial Neural Network (ANN). In order to perform
dynamic consolidation of VMs, in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] the authors propose two algorithms to detect overloaded and
underloaded hosts, considering energy consumption and the number of migrations when dealing with
underloaded host detection. To predict short-term CPU utilization, they employ the Gray-Markov
model on accumulated host data. The authors in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] propose the Online Multi Resource Feed-forward
Neural Network (OM-FNN) model to simultaneously forecast multiple resource demands from running
VMs, combined with the Tri-adaptive Differential Evolution (TaDE) algorithm to optimize its predictor.
The system clusters tasks based on their predicted resource requirements to facilitate VM autoscaling
and allocation to energy-efficient physical hosts. The Online VM Prediction-based Multi-objective Load
Balancing (OP-MLB) framework, proposed in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], follows a three-phase process. In the first phase,
tasks are assigned to VMs according to their resource requirements. In the second phase, forecasts of
resource utilization are made by an online predictor based on neural networks. Finally, in the third
phase, VMs are allocated and migrated using a multi-objective algorithm which prevents underload and
overload of physical machines. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] the authors propose a combination of an Adaptive Neuro-Fuzzy
Inference System (ANFIS) to predict workload patterns and an Advanced Ant Colony Optimization
(AACO) technique to optimize resource allocation. By doing so, resources are dynamically reconfigured
in response to real-time feedback. To minimize the consumption of “brown” energy and maximize the
use of renewable energy, [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] proposes a model for inter-data center VM migration. The approach involves
determining VM migration requests, performing routing and spectrum allocation over the elastic
optical network (EON) infrastructure, and allocating computing resources. Four heuristic algorithms are
introduced to determine which VMs to migrate and to which data center. With the same goal, the authors
in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] propose an algorithm for inter-DC migration over the elastic optical network infrastructure.
To reduce the consumption of brown energy, they introduce the Sliding-Window Lower Confidence
Bound (SW-LCB) algorithm, based on the multi-armed bandit (MAB) formulation. Additionally, to
enhance the cost efficiency of optical network devices and migration, they incorporate optical grooming
techniques. In [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], the Green Energy Oriented Virtual-machine Migration Algorithm (GEOVMA) is
proposed, aiming to minimize the combination of average response time, brown energy consumption,
and carbon emissions. This goal is achieved by dynamically migrating VMs between data centers
powered by renewable energy, employing the policy of Minimum Migration Time. The authors in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]
propose two multi-phase algorithms for load balancing and optimizing task scheduling across VMs
distributed in multiple data centers. The max select load balancing (MSLB) algorithm does not consider
communication delay and bandwidth, whereas the max select load balance with communication delay
(MSLBCD) algorithm takes both into account.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Proposed approach</title>
      <p>
        This section presents an architecture that leverages machine learning models to enable energy-efficient
allocation and migration of virtual machines (VMs) across (inter-) and within (intra-) data centers.
The proposed system, illustrated in Figure 1, consists of multiple geographically distributed data
centers interconnected via an Elastic Optical Network (EON) infrastructure [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. Each data center
is characterized by a distinct energy efficiency level, quantified using the Power Usage Effectiveness
(PUE) metric. PUE is defined as PUE = (Total DC Power Consumption) / (IT Facilities Power Consumption), providing a standardized measure of
energy efficiency (further details can be found in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]). Notably, PUE values may vary over time due to
factors such as seasonal changes or fluctuations in the availability of locally generated (e.g., renewable)
energy.
      </p>
      <p>Each data center comprises different key components, such as the Physical and Virtual Machines, the VM
Monitor, the VM Consumption Modeler, and the VM Migration Manager. The Physical and Virtual Machines
are, respectively, the servers and the VMs that execute client tasks. VMs can be migrated between
servers within the same data center or across external ones by the VM Migration Manager which,
in managing server loads, prioritizes data centers with a better efficiency indicator and switches off
inactive servers to save energy. The VM Monitor is a module that continuously records the CPU usage
of VMs over time, supplying crucial data for modeling and analysis. The VM Consumption Modeler
employs machine learning algorithms to build a predictive model for each VM to forecast its future
CPU usage by analyzing resource usage patterns. Different types of algorithms can be used, such
as classification models, regression models, or neural networks. As stated above, the VM Migration
Manager is responsible for periodic energy-efficient VM consolidation, using the predictions from
the VM Consumption Modeler.</p>
      <p>
        Virtual machine allocation across servers is performed through a two-level consolidation strategy
involving both inter-data center and intra-data center optimizations, as outlined below. First, each
virtual machine vm is assigned to the most energy-efficient data center, determined in terms of PUE
values, available to host vm. Once a data center is selected, an intra-data center consolidation is executed
to allocate all VMs onto the minimal number of servers required to satisfy performance constraints.
Idle servers are then switched off to reduce overall energy consumption. In particular, the intra-data
center VM consolidation follows the methodology described in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The process involves continuous
monitoring and logging of resource utilization. At regular intervals, this data is analyzed to build
predictive models of VM resource demands. These forecasts are then used to plan VM migrations that
minimize the number of active servers. Unused servers are transitioned into low-power modes, thereby
improving energy efficiency while maintaining compliance with Service Level Agreements (SLAs). To
prevent SLA violations caused by unforeseen demand spikes, a safety margin is enforced by capping
server utilization at a threshold θ &lt; 1.0, leaving a buffer of 1 − θ for unexpected demands.
      </p>
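The two-level policy described above can be sketched in a few lines of code. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the packing heuristic is a simple first-fit-decreasing pass, and demands are expressed as a fraction of one server's capacity, capped at the threshold θ.

```python
THETA = 0.75  # server utilization cap; a 1 - THETA buffer absorbs demand spikes

def place_vms(vm_demands, datacenters):
    """Two-level placement sketch.

    vm_demands: {vm_id: predicted CPU demand as a fraction of one server}.
    datacenters: list of (pue, n_servers) tuples.
    Returns {vm_id: (dc_index, server_index)}.
    """
    placement = {}
    # Level 1: try data centers in ascending PUE order (most efficient first).
    order = sorted(range(len(datacenters)), key=lambda i: datacenters[i][0])
    loads = {i: [0.0] * datacenters[i][1] for i in range(len(datacenters))}
    # Level 2: first-fit-decreasing packing within each data center.
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for dc in order:
            slot = next((s for s, load in enumerate(loads[dc])
                         if load + demand <= THETA), None)
            if slot is not None:
                loads[dc][slot] += demand
                placement[vm] = (dc, slot)
                break
    return placement

demands = {"vm1": 0.5, "vm2": 0.4, "vm3": 0.3, "vm4": 0.2}
dcs = [(1.85, 2), (1.25, 2)]  # (PUE, number of servers); values hypothetical
print(place_vms(demands, dcs))
```

In this toy run all four VMs fit in the low-PUE data center (index 1), so both servers of the high-PUE data center stay idle and can be switched off.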
      <p>[Figure 1: Overview of the proposed architecture. Three data centers (DC1, PUE = 1.85; DC2, PUE = 1.25; DC3, PUE = 1.45) host physical machines (Server 1 … Server N) running virtual machines; each data center includes a VM Monitor and a VM Consumption Modeler producing VM consumption models, with migrations also possible toward external data centers.]</p>
    </sec>
    <sec id="sec-4">
      <title>4. Experimental results</title>
      <p>In this section, we present some results obtained from the proposed energy-aware approach tested
on synthetic data, and we describe the ad-hoc data generator exploited to produce these data. The
experiments are aimed at evaluating the potential energy savings derived from the migration of virtual
machines from servers hosted by data centers with low efficiency indicators to data centers with
higher efficiencies. The objective is to evaluate whether the migration process guided by machine
learning models can reduce the energy consumption of a Cloud system compared to a non-energy-aware
scenario (random approach), while still meeting performance requirements and ensuring reliable
service quality for users. The effectiveness of the proposed approach in terms of energy savings is
demonstrated through experiments carried out in a simulated cloud environment. In the following, we
present the preliminary results achieved so far.</p>
      <sec id="sec-4-1">
        <title>4.1. Synthetic data generation and experimental settings</title>
        <p>An ad-hoc data generator was developed to simulate resource-usage traces for virtual machines, in
order to analyze synthetic workloads. Generative usage patterns have been built through underlying
probability distributions. In particular, four normal distributions N_i, for i = 1, …, 4, have been defined.
Each distribution has a distinct mean μ_i and standard deviation σ_i, which generate data samples from
distinct, non-overlapping data ranges. Each virtual machine has a reference time period which is
segmented into four temporal windows, with a unique distribution applied to each temporal window.
Then, by a random assignment process, one distribution is associated with each virtual machine,
generating data for each specific time window. In this way, four distinct usage patterns are created and
subsequently assigned to each virtual machine. This allows the generator to capture variability in CPU demand over time.</p>
        <p>
          The experimental environment set up for our tests is composed of three data centers, each one hosting
40 physical servers (a total of 120 servers, distributed across three data centers). Moreover, 600 virtual
machines are running to deal with users’ requests, distributed among the servers of the three data
centers. The efficiency indicator of each data center is expressed in terms of Power Usage Effectiveness
(PUE). PUE values have been taken from publicly available data provided by Google Data Centers [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
Specifically, the 2024 PUE Yearly Report [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ] is divided by quarters (three months), with each data center
having different PUE values for each quarter. In our experiments, Data Center 1 and 3 have a better
average PUE than Data Center 2. Resource usage was sampled every six hours over each day for an
entire year, resulting in a dataset composed of 878,400 instances. The value of θ has been fixed at
θ = 1 for the Oracle scenario and at θ = 0.75 for the ML-based one.
        </p>
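The dataset size follows directly from the sampling scheme: 600 VMs sampled every six hours (four samples per day) over the 366 days of 2024.

```python
# Dataset size implied by the experimental settings
vms = 600
samples_per_day = 24 // 6  # one sample every six hours
days = 366                 # 2024 is a leap year
print(vms * samples_per_day * days)  # → 878400 instances
```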
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Energy consumption results</title>
        <p>
          This subsection analyzes the energy performance of three different VM migration scenarios, exploited
as benchmarks to highlight the effectiveness and drawbacks of various migration strategies in terms of
energy efficiency. A detailed description of each scenario is provided below:
• Oracle. This scenario describes a perfect (ideal) case in which the VM Migration Manager queries
an "oracle" to foresee the upcoming CPU requirements of each virtual machine. Having this
information in advance allows the system to plan and execute optimal VM migrations ahead
of time, to reduce energy usage. When allocating virtual machines to physical servers, priority
is given to those located in data centers with higher energy efficiency, with the aim of further
improving the optimization of total consumption. While this is not achievable in real-world
settings, the Oracle scenario acts as a theoretical benchmark, illustrating the best (theoretically)
achievable result if future demands could be predicted without errors.
• ML-based. In this scenario, virtual machine migrations are driven by predictions provided by
the VM Migration Manager, based on the expected usage of the virtual machines. To estimate
CPU demands, we adopted a machine learning model, specifically a Multi-layer Perceptron [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]
[
          <xref ref-type="bibr" rid="ref20">20</xref>
          ], which allows us to accurately predict future loads. This predictive approach simulates a
more realistic environment than the Oracle-driven scenario, enabling us to assess how close
predictive models can come to optimal energy performance even without perfect knowledge
of the future. Furthermore, in order to further optimize overall energy consumption, the VM
Migration Manager exploits these forecasts to allocate the virtual machines on the physical
servers, giving priority to physical machines hosted by the most efficient data centers.
• Random. This is a baseline scenario in which virtual machines are randomly allocated among
available servers, without taking into account the efficiency of data centers. This is an uninformed
approach, in which relocations are carried out without considering either the expected demands of
the virtual machines or the energy efficiency of the data centers, thus making both the assignment
to physical servers and the choice of destination data center completely random. The Random
scenario serves as a benchmark for evaluating the benefits of forecast-based strategies in terms of
energy efficiency, highlighting how much informed decisions can improve the results compared
to a purely stochastic approach.
        </p>
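The ML-based scenario relies on a Multi-layer Perceptron forecasting each VM's future CPU demand. A minimal sketch of such a forecaster is shown below; the lag-window feature construction, hidden-layer size, and the synthetic trace are illustrative assumptions, not the configuration used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def make_lagged(series, lags=4):
    """Turn a CPU-usage trace into (lagged-window features, next-value) pairs."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    y = np.array(series[lags:])
    return X, y

rng = np.random.default_rng(0)
# Synthetic CPU-% trace with a daily-like periodic component plus noise
trace = 50 + 10 * np.sin(np.arange(400) / 8) + rng.normal(0, 2, 400)

X, y = make_lagged(trace)
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X[:300], y[:300])          # train on the first part of the trace
forecast = model.predict(X[300:])    # forecast the remaining 96 steps
```

Forecasts like these feed the VM Migration Manager, which packs VMs onto the fewest servers in the most efficient data centers.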
        <p>The energy efficiency of the proposed approach is illustrated in Figure 2, which shows the total
energy consumption of the three data centers during the entire test period, comparing the three different
scenarios. The total energy consumption is computed as the cumulative energy needed by the system
to perform computational and VM migration tasks. The figure shows that, as expected, the Oracle
scenario is the best case in terms of energy savings. In fact, since Data Center 1 and Data Center 3 have
a higher efficiency indicator than Data Center 2, the consolidation of virtual machines is performed by
prioritizing their allocation to the most efficient data centers, leaving some servers in Data Center 2
turned off and, as a result, allowing for a reduced utilization of the least efficient data center. Similarly,
for the ML-based strategy, it is evident that by applying the logic in which virtual machine allocation
prioritizes servers located in more energy-efficient data centers, Data Center 2 (i.e., the least efficient)
ends up consuming a lower amount of energy. This is because it contains powered-off servers, which
contributes to a lower overall energy consumption. On the other hand, the Random scenario shows
that the total energy consumption is higher than that computed in the other two scenarios. In fact,
since the consolidation of virtual machines occurs randomly and uniformly across the servers and the
choice of the destination data center is also random, all three data centers have approximately the same
energy consumption.</p>
        <p>[Figure 2: Total energy consumption (kWh) of Data Center 1, Data Center 2, and Data Center 3 under the Oracle, ML-based, and Random approaches.]</p>
        <p>Figure 3 shows the total number of virtual machine migrations between servers, i.e., the number
of times virtual machines are moved from a source server to a destination server located in different
data centers, for the three different study scenarios during the entire simulation period considered.
From the figure, we can observe that the number of migrations in the Oracle scenario is lower than
those computed in the other two scenarios. In fact, in the Oracle case, virtual machines are primarily
distributed across Data Center 1 and Data Center 3, which are the most energy-efficient. In contrast, the
machine learning-based approach results in a higher number of migrations. This is mainly due to the
lower server load threshold (θ = 0.75), which leads to a more conservative consolidation strategy. As a
result, Data Center 2 is utilized more frequently than in the Oracle scenario, increasing the total number
of migrations. Finally, in the Random scenario, the number of migrations is higher than in Oracle and
lower than in ML-based. This could be due to the fact that, since not only the choice of the destination
server is random, but also the data center to which it belongs, some virtual machines tend to migrate
more often between servers in the same data center and less between servers in different data centers.</p>
        <p>[Figure 3: Total number of inter-data center VM migrations under the Oracle, ML-based, and Random approaches.]</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
      <p>This work presented a predictive machine learning-based approach for enhancing cloud energy efficiency
through VM consolidation, both within and across data centers. By leveraging predictive models to
estimate future CPU usage, the proposed system enables efficient VM allocations that reduce the number
of active physical servers and prioritize those hosted in energy-efficient data centers, i.e., the data
centers characterized by lower PUE. Experimental results, based on synthetic workloads, demonstrate
that the proposed strategy significantly lowers energy consumption compared to uninformed (random)
allocation strategies. The incorporation of the Power Usage Effectiveness (PUE) metric enhanced the
energy-aware allocation policy, contributing to a more sustainable cloud infrastructure. Future work
will focus on extending the approach with a policy that further optimizes the number of VM
migrations, especially those that occur between different data centers. The energy-aware model will
also take into account more complex and real-world network topologies. Finally, the approach will be
extended by integrating additional resource dimensions, e.g., memory and GPUs.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work was supported by the "PNRR MUR project PE0000013-FAIR", the "ICSC National Centre
for HPC, Big Data and Quantum Computing" (CN00000013) within the NextGenerationEU program,
by European Union - NextGenerationEU - National Recovery and Resilience Plan (Piano Nazionale di
Ripresa e Resilienza, PNRR) - Project: “SoBigData.it - Strengthening the Italian RI for Social Mining
and Big Data Analytics” - Prot. IR0000013 - n. 3264, 28/12/2021, and by the research project “INSIDER:
INtelligent ServIce Deployment for advanced cloud-Edge integRation” granted by the Italian Ministry of
University and Research (MUR) within the PRIN 2022 program and European Union - Next Generation
EU (grant n. 2022WWSCRR, CUP H53D23003670006).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bharany</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. I.</given-names>
            <surname>Khalaf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Abdulsahib</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Al Humaimeedy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>Aldhyani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Maashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Alkahtani</surname>
          </string-name>
          ,
          <article-title>A systematic survey on energy-efficient techniques in sustainable cloud computing</article-title>
          ,
          <source>Sustainability</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>6256</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Katal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dahiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Choudhury</surname>
          </string-name>
          ,
          <article-title>Energy efficiency in cloud computing data centers: a survey on software technologies</article-title>
          ,
          <source>Cluster Computing</source>
          <volume>26</volume>
          (
          <year>2023</year>
          )
          <fpage>1845</fpage>
          -
          <lpage>1875</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ouhame</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ullah</surname>
          </string-name>
          ,
          <article-title>An efficient forecasting approach for resource utilization in cloud data center using CNN-LSTM model</article-title>
          ,
          <source>Neural Computing and Applications</source>
          <volume>33</volume>
          (
          <year>2021</year>
          )
          <fpage>10043</fpage>
          -
          <lpage>10055</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Cesario</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lindia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lobello</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vinci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zarin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Capalbo</surname>
          </string-name>
          ,
          <article-title>Improving cloud energy eficiency through machine learning models</article-title>
          ,
          <source>in: 2025 33rd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing (PDP)</source>
          ,
          <year>2025</year>
          , pp.
          <fpage>247</fpage>
          -
          <lpage>251</lpage>
          . doi:
          <pub-id pub-id-type="doi">10.1109/PDP66500.2025.00041</pub-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. S.</given-names>
            <surname>Abohamama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hamouda</surname>
          </string-name>
          ,
          <article-title>A hybrid energy-aware virtual machine placement algorithm for cloud environments</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>150</volume>
          (
          <year>2020</year>
          )
          <fpage>113306</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H.-A.</given-names>
            <surname>Ounifi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gherbi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kara</surname>
          </string-name>
          ,
          <article-title>Deep machine learning-based power usage effectiveness prediction for sustainable cloud infrastructures</article-title>
          ,
          <source>Sustainable Energy Technologies and Assessments</source>
          <volume>52</volume>
          (
          <year>2022</year>
          )
          <fpage>101967</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Integrated deep learning method for workload and resource prediction in cloud systems</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>424</volume>
          (
          <year>2021</year>
          )
          <fpage>35</fpage>
          -
          <lpage>48</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shahidinejad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghobaei-Arani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Masdari</surname>
          </string-name>
          ,
          <article-title>Resource provisioning using workload clustering in cloud computing environment: a hybrid approach</article-title>
          ,
          <source>Cluster Computing</source>
          <volume>24</volume>
          (
          <year>2021</year>
          )
          <fpage>319</fpage>
          -
          <lpage>342</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Shalu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Artificial neural network-based virtual machine allocation in cloud computing</article-title>
          ,
          <source>Journal of Discrete Mathematical Sciences and Cryptography</source>
          <volume>24</volume>
          (
          <year>2021</year>
          )
          <fpage>1739</fpage>
          -
          <lpage>1750</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.-Y.</given-names>
            <surname>Hsieh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Buyya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Y.</given-names>
            <surname>Zomaya</surname>
          </string-name>
          ,
          <article-title>Utilization-prediction-aware virtual machine consolidation approach for energy-efficient cloud data centers</article-title>
          ,
          <source>Journal of Parallel and Distributed Computing</source>
          <volume>139</volume>
          (
          <year>2020</year>
          )
          <fpage>99</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>D.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>A proactive autoscaling and energy-efficient VM allocation framework using online multi-resource neural network for cloud data center</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>426</volume>
          (
          <year>2021</year>
          )
          <fpage>248</fpage>
          -
          <lpage>264</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Saxena</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Buyya</surname>
          </string-name>
          ,
          <article-title>OP-MLB: an online VM prediction-based multi-objective load balancing framework for resource management at cloud data center</article-title>
          ,
          <source>IEEE Transactions on Cloud Computing</source>
          <volume>10</volume>
          (
          <year>2021</year>
          )
          <fpage>2804</fpage>
          -
          <lpage>2816</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>A novel approach for energy consumption management in cloud centers based on adaptive fuzzy neural systems</article-title>
          ,
          <source>Cluster Computing</source>
          <volume>27</volume>
          (
          <year>2024</year>
          )
          <fpage>14515</fpage>
          -
          <lpage>14538</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ansari</surname>
          </string-name>
          ,
          <article-title>Energy-aware virtual machine management in inter-datacenter networks over elastic optical infrastructure</article-title>
          ,
          <source>IEEE Transactions on Green Communications and Networking</source>
          <volume>2</volume>
          (
          <year>2018</year>
          )
          <fpage>305</fpage>
          -
          <lpage>315</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>F. S.</given-names>
            <surname>Amri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <article-title>Energy-aware inter-data center VM migration over elastic optical networks</article-title>
          ,
          <source>in: GLOBECOM 2023 - 2023 IEEE Global Communications Conference</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>5421</fpage>
          -
          <lpage>5426</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Vatsal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <article-title>Virtual machine migration based algorithmic approach for safeguarding environmental sustainability by renewable energy usage maximization in cloud data centres</article-title>
          ,
          <source>International Journal of Information Technology</source>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. C. M.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Rath</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. R.</given-names>
            <surname>Parida</surname>
          </string-name>
          ,
          <article-title>Efficient load balancing techniques for multi-datacenter cloud milieu</article-title>
          ,
          <source>International Journal of Information Technology</source>
          <volume>14</volume>
          (
          <year>2022</year>
          )
          <fpage>979</fpage>
          -
          <lpage>989</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Google</surname>
          </string-name>
          ,
          <article-title>Google data center PUE performance</article-title>
          ,
          <year>2025</year>
          . URL: https://datacenters.google/efficiency/, last accessed 11 June 2025.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>N.</given-names>
            <surname>AL-Rousan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. A.</given-names>
            <surname>Mat Isa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Mat Desa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>AL-Najjar</surname>
          </string-name>
          ,
          <article-title>Integration of logistic regression and multilayer perceptron for intelligent single and dual axis solar tracking systems</article-title>
          ,
          <source>International Journal of Intelligent Systems</source>
          <volume>36</volume>
          (
          <year>2021</year>
          )
          <fpage>5605</fpage>
          -
          <lpage>5669</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Agirre-Basurko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Ibarra-Berastegi</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Madariaga</surname>
          </string-name>
          ,
          <article-title>Regression and multilayer perceptron-based models to forecast hourly O3 and NO2 levels in the Bilbao area</article-title>
          ,
          <source>Environmental Modelling &amp; Software</source>
          <volume>21</volume>
          (
          <year>2006</year>
          )
          <fpage>430</fpage>
          -
          <lpage>446</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>