<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Methods for Proactive Resource Scaling in Kubernetes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Rolik</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Omelchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>CEUR Workshop Proceedings (CEUR-WS.org)</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National Technical University of Ukraine “Igor Sikorsky Kyiv Polytechnic Institute”</institution>
          ,
          <addr-line>Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <fpage>27</fpage>
      <lpage>28</lpage>
      <abstract>
        <p>The article considers the issue of predicting workloads in a cluster for use in the proactive scaling of computing resources. Although classical prediction methods have a sufficient level of accuracy, their use on the scale of hundreds of different workloads requires manual data preprocessing and model tuning. The new generation of prediction methods is more versatile, including those capable of independently detecting seasonalities, trends, and anomalies. The paper considers applying these methods to provide accurate predictions for workloads without significant manual intervention. Given the current trend of using microservice architecture, where there are many unique workloads, this attribute can be helpful. Numerous research papers focus on the subject of proactive scaling, exploring statistical approaches and artificial intelligence-based methods. However, most of these studies primarily assess the accuracy of the models while overlooking an essential aspect, which is universality. Universality refers to a model's capacity to handle diverse workload patterns without requiring manual adjustments. The primary focus is to investigate the feasibility of using these methods as a comprehensive solution for automated scaling.</p>
      </abstract>
      <kwd-group>
        <kwd>Network architecture</kwd>
        <kwd>deep learning</kwd>
        <kwd>training</kwd>
        <kwd>small data</kwd>
        <kwd>hardware</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The emergence of orchestrators such as Kubernetes, Nomad, and others has dramatically simplified
many aspects of computing resource management and made significant changes in approaches to
infrastructure development, mainly through the introduction of the containerization paradigm [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Containers can significantly reduce the time it takes for an application to become available compared to virtualization, and
optimize resource utilization while improving application performance [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. This is achieved through the
absence of a guest operating system and the use of cgroups to manage allocated resources.
      </p>
      <p>Orchestration and containerization gave impetus to the development and popularization of
microservice architecture. A microservice is a lightweight application whose functionality follows the
principle of single responsibility. This separation of functions makes it possible to scale system
components separately and allocate cluster resources more granularly.</p>
      <p>
        In particular, these solutions, together with the decomposition of the system into microservices,
provide significant opportunities for automating resource management, including computing. Scaling,
in particular automatic scaling, is one of the most effective tools for managing the cluster's computing
resources and maintaining the required level of service quality. Individual applications, groups of
applications, or the entire cluster can be scaled horizontally and vertically. Scaling approaches can be
reactive, proactive, and hybrid, which includes components of both previous approaches [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>Proactive scaling approaches can be broadly divided into time-series-based and machine-learning-based.
Time series approaches are easier to interpret and do not require much time or computational
resources. The main drawback of the approach is that prediction accuracy relies heavily on the
selected metric and on how well historical data is pre-processed.</p>
      <p>
        In the work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], the authors proposed a solution for proactive scaling based on ARIMA, using the
Hyndman-Khandakar algorithm for more accurate selection of model parameters. The accuracy of the
model was tested on the example of web requests to the Wikipedia server, obtaining an accuracy rate
of 91%. It is worth noting that ARIMA does not support complex seasonality, and the authors
evaluated the model only on a weekly load pattern.
      </p>
      <p>
        In another paper [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the authors presented a solution that allows combining multiple time series
forecasting algorithms (Simple Exponential Smoothing, Moving Average, ARMA, Holt-Winters) using
a genetic algorithm. This work shows that there is no best algorithm for all existing time series and that
a combination of such methods can be more accurate than each of them individually.
      </p>
      <p>
        ML-based methods are able to detect nonlinear features of systems but require a lot of time and data to train
the model. In the work devoted to Google Autopilot [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], it is noted that the predictions are produced
with the help of several ML models, which, in addition to historical data of computing resources usage,
are able to include events such as OOM kills and CPU throttling in the forecasting process. The authors also
point out that one of the problems of this approach is interpretability, i.e. the difficulty of explaining
the predictions.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Predictive scaling and Kubernetes</title>
      <p>
        Firstly, we must consider the disadvantages of reactive scaling, as discussed in our previous work
[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. With reactive scaling, there is a delay between when the load increases and when additional
resources become available. During this delay, the application may experience performance degradation
or unavailability. In addition, reactive scaling does not work effectively with sudden or unpredictable
load peaks. Also, this type of scaling can be resource-intensive and inefficient if the load changes
rapidly. If the thresholds for reactive scaling are set too sensitively, the system may end up
constantly scaling up and down, resulting in suboptimal resource utilization and degraded
QoS. Predictive scaling can compensate for these shortcomings by predicting peak loads [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. However,
it should be considered that in the case of atypical loads, reactive scaling can be more effective [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. A
predictive approach is practical when a seasonal load pattern persists for a long time. On the one hand, it makes it possible
to scale up the application in time and maintain a high level of QoS; on the other hand, it frees up
resources when they are no longer needed.
      </p>
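      <p>The contrast above can be sketched with a toy replica calculator. This is illustrative only, not Kubernetes API code; the capacities, loads, and headroom value are hypothetical. A reactive rule sizes for the load it currently observes, while a predictive rule sizes for the forecast peak in advance.</p>

```python
import math

def reactive_replicas(current_load: float, capacity_per_replica: float) -> int:
    """Reactive: size for the load that is already observed."""
    return max(1, math.ceil(current_load / capacity_per_replica))

def predictive_replicas(forecast_load: float, capacity_per_replica: float,
                        headroom: float = 0.1) -> int:
    """Predictive: size for the forecast load plus a safety headroom."""
    return max(1, math.ceil(forecast_load * (1 + headroom) / capacity_per_replica))

# A reactive autoscaler sees 300 RPS now and provisions 3 replicas,
# even though a forecast peak of 900 RPS is an hour away.
replicas_now = reactive_replicas(300, 100)      # sized for the current load
replicas_ahead = predictive_replicas(900, 100)  # sized for the coming peak
```

      <p>During the delay between the peak's arrival and the reactive scaler's response, the gap between the two replica counts is exactly the capacity the application is missing.</p>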
    </sec>
    <sec id="sec-4">
      <title>3.1. Requirements to prediction methods</title>
      <p>This paper is devoted to studying the feasibility of using the selected models for prediction in
conditions close to working in a Kubernetes cluster, considering all its limitations and capabilities.</p>
      <p>In this paper, it is assumed that the load pattern of any component has a clear seasonality or trend.
Otherwise, there is no point in applying this approach. Therefore, the first requirement for the predictive
models considered in this paper is the ability to work with one or more seasonalities. The models
should also be accurate, since the more accurate the forecast, the more optimally the
amount of computing resources is calculated. Given the current trend of microservices, it makes sense to scale all
possible cluster components. Since each component of a microservice
architecture has its own unique features and functionality, the load pattern is individual for each component,
and there can be an unlimited number of such components. This leads to the conclusion that processing
historical metrics and manually adjusting model parameters for each component is a process that cannot
be scaled. The main advantage of predictive models is that they make it possible to estimate the load
at any point in the future. This means that we can adapt the target subsystem to future loads in terms of
performance and saving computing resources. However, predictions in the context of workloads are
not always accurate, for many reasons. The workload on a subsystem depends on many technical
and non-technical factors, such as network stability, data center availability, holidays, and even political
and economic situations. No model can account for all of these factors, but it can be made adaptive and
resilient to them. If a model is very accurate but takes too long to train, its effectiveness also
decreases. Taking all of the above into account, the requirements for predictive models are:
1. Versatility
2. High accuracy of predictions
3. Support for complex seasonality</p>
    </sec>
    <sec id="sec-5">
      <title>3.1.1. Architecture of Kubernetes</title>
      <p>Kubernetes, as an open-source platform, facilitates the management of workloads and applications.
It offers automation for load balancing, application deployment, scaling, data storage management, and
access control. A cluster in Kubernetes refers to a collection of virtual or physical machines connected
within a single network. This coherence is achieved through specialized software on each machine
called the kubelet agent. In the context of a Kubernetes cluster, each machine is considered a node. Each
node is allocated specific computing resources as part of the resource management subsystem.</p>
      <p>During application deployment, each instance is assigned to a node with sufficient computing
resources to ensure proper functionality. The deployment specification includes minimum resource
requirements, known as requests and limits. While a containerized application instance can utilize more
resources than the specified requests if available on the node, it is restricted from using more resources
than the limits configuration defines. This ensures efficient resource utilization and management within
the Kubernetes cluster. In this paper, we consider scaling automation in two stages. The first stage
involves obtaining historical data, processing it, obtaining resource or workload predictions, and
validating them. The second stage involves determining the point in time when applying new resource
constraints will have the least impact on quality indicators. This paper considers only a part
of the first stage, namely obtaining predictions.</p>
    </sec>
    <sec id="sec-6">
      <title>3.1.2. Kubernetes limitations</title>
      <p>
        Kubernetes clusters commonly use a ready-made solution for resource monitoring, Prometheus, which has its own
specifics. In this paper, we will use this solution as a data source. First, it has a relatively short retention
period for historical metrics, usually several weeks. Second, metrics can be lost due to system failures.
Third, the frequency of metric collection is approximately one minute due to the limitations of kubelet
[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Predicting memory values has its specifics because if the limit is exceeded, a denial of service due
to OOM may occur. This paper does not cover this feature.
      </p>
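      <p>As an illustration of collecting such historical data, the sketch below queries the Prometheus HTTP API endpoint /api/v1/query_range. The server address and the PromQL expression are hypothetical placeholders; the 60-second step reflects the roughly one-minute collection frequency noted above.</p>

```python
import time

PROM_URL = "http://prometheus.example:9090"  # hypothetical address

def build_range_params(query: str, hours: int, step_seconds: int = 60) -> dict:
    """Parameters for /api/v1/query_range covering the last `hours` hours."""
    end = int(time.time())
    return {
        "query": query,
        "start": end - hours * 3600,
        "end": end,
        "step": f"{step_seconds}s",
    }

def fetch_cpu_usage(pod: str, hours: int = 7 * 24):
    """Fetch a week of per-pod CPU usage; returns Prometheus result series."""
    import requests  # imported lazily so the helper above stays dependency-free
    query = f'rate(container_cpu_usage_seconds_total{{pod="{pod}"}}[5m])'
    resp = requests.get(f"{PROM_URL}/api/v1/query_range",
                        params=build_range_params(query, hours), timeout=30)
    resp.raise_for_status()
    return resp.json()["data"]["result"]
```

      <p>Because retention is only a few weeks, a request for a longer window than the server keeps will simply return a truncated series, which the caller has to tolerate.</p>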
    </sec>
    <sec id="sec-7">
      <title>3.2. Prediction methods</title>
      <p>There are several predictive models, but in this paper we analyze relatively new approaches that
meet the above requirements: TBATS, Prophet, and NeuralProphet.</p>
      <p>3.2.1. TBATS</p>
      <p>
        The first forecasting method considered in this article is TBATS [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], whose name is an acronym
for the main components of the model: trigonometric seasonality, Box-Cox transformation, ARIMA,
trend, and seasonality components. TBATS is designed to forecast complex time series with several
seasonalities of different lengths. In TBATS, the original time series is subjected to the Box-Cox
transformation [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], the primary purpose of which is to make the variance of the data stable, which is a
crucial assumption for many statistical models, especially for linear regression and time series models.
After that, the transformed time series is modeled as a linear combination of an exponentially smoothed
trend, seasonal components, and ARMA components. Seasonal components are modeled by
trigonometric functions using a Fourier series. TBATS is capable of adjusting some parameters
independently using AIC [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. This method was chosen for this work because of its versatility and
ability to adapt to time series of varying complexity, which means there is no need to adapt the model
to the load of each existing component in the system. The transformed series is modeled as
y_t^(λ) = l_{t-1} + φ b_{t-1} + Σ_i s_{t-m_i}^(i) + d_t, (1)
where y_t^(λ) is the transformed time series at moment t, s_{t-m_i}^(i) are the seasonal components, l_{t-1} is the local level, b_{t-1} is the trend
with damping φ, and d_t is an ARMA(p, q) process for the residuals. Each seasonal component is described by the
following dependence:
s_{j,t}^(i) = s_{j,t-1}^(i) cos(λ_j^(i)) + s*_{j,t-1}^(i) sin(λ_j^(i)) + γ_1^(i) d_t, (2)
where λ_j^(i) = 2πj/m_i, m_i is the length of the i-th seasonal period, and γ_1^(i) is the seasonal smoothing parameter of the i-th
seasonality.
      </p>
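      <p>A minimal fitting sketch for an hourly series with daily (24-point) and weekly (168-point) periods. It assumes the community `tbats` Python package; the package name and its TBATS/fit/forecast API are assumptions, not part of the method description above.</p>

```python
import numpy as np

def hourly_series(weeks: int = 3) -> np.ndarray:
    """Synthetic hourly load with daily (24 h) and weekly (168 h) seasonality."""
    t = np.arange(weeks * 168)
    return 50 + 10 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / 168)

if __name__ == "__main__":
    # Heavy optional dependency, imported only when run as a script.
    from tbats import TBATS
    fitted = TBATS(seasonal_periods=[24, 168]).fit(hourly_series())
    next_day = fitted.forecast(steps=24)  # hour-by-hour forecast for one day
```

      <p>Passing both periods in `seasonal_periods` is the whole configuration burden: the Box-Cox, trend, and ARMA components are selected automatically via AIC, which is what makes the method attractive for unattended use.</p>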
      <p>3.2.2. Prophet</p>
      <p>
        Prophet is a time series forecasting library developed at Facebook [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The main goal of the
development was to create a simple, transparent, and understandable model-generation algorithm that
would allow for quick and reliable predictions.
      </p>
      <p>
        This algorithm is based on an additive regression model [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] with several components:
y(t) = g(t) + s(t) + h(t) + e(t), (3)
where g(t) is the trend component, s(t) is the seasonal component, h(t) is the holiday and anomaly component, and e(t)
is the error term. In addition to the additive regression model, Prophet also uses a Fourier series to model seasonality.
      </p>
      <p>Among the advantages of this model are the ability to work with any time series, the ability to work
efficiently with large data sets and missing data, and flexibility in customization.</p>
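      <p>A hedged usage sketch: Prophet (the `prophet` pip package) expects a DataFrame with `ds` (timestamps) and `y` (values) columns, and the fit/predict flow below follows its documented API. The synthetic data are illustrative.</p>

```python
import numpy as np
import pandas as pd

def to_prophet_frame(values, start="2023-01-01"):
    """Wrap a value array into the ds/y DataFrame layout Prophet expects."""
    ds = pd.date_range(start=start, periods=len(values), freq="H")
    return pd.DataFrame({"ds": ds, "y": values})

if __name__ == "__main__":
    from prophet import Prophet  # heavy optional dependency
    t = np.arange(3 * 168)
    y = 50 + 10 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / 168)
    model = Prophet(daily_seasonality=True, weekly_seasonality=True)
    model.fit(to_prophet_frame(y))
    future = model.make_future_dataframe(periods=24, freq="H")
    forecast = model.predict(future)  # yhat, yhat_lower, yhat_upper columns
```

      <p>Missing timestamps can simply be absent from the frame, which matches the Prometheus data-loss limitation discussed earlier.</p>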
    </sec>
    <sec id="sec-8">
      <title>3.2.3. NeuralProphet</title>
      <p>
        NeuralProphet [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] is a time series prediction library built on top of PyTorch. It
extends Facebook's Prophet model with neural networks. The main difference of this library
is the ability to use the power of deep learning to predict time series with different trends and
seasonalities. The basis of this library is an autoregressive neural network [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], which combines classical
autoregressive time series prediction methods with modern approaches based on artificial intelligence.
      </p>
      <p>
        This library consists of several components: trend, seasonality, regression of future variables,
autoregression of historical variables, and regression of lagged variables. The following equation can
describe their dependence:
ŷ_t = T(t) + S(t) + E(t) + F(t) + A(t) + L(t), (4)
where T(t) is the trend function, S(t) the seasonal function, E(t) the event and holiday function, F(t)
regression effects for future-known exogenous variables, A(t) auto-regression effects based on past
observations, and L(t) regression effects for lagged observations of exogenous variables [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
      </p>
      <p>This library lies at the intersection of statistical and neural network methods and has the advantages of
both approaches. In addition, NeuralProphet can select component parameters
automatically, can work with time series of different seasonalities, and allows the use of
exogenous variables to improve prediction results.</p>
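      <p>A corresponding sketch for NeuralProphet (the `neuralprophet` pip package): it consumes the same `ds`/`y` layout as Prophet, and `n_lags` switches on the autoregressive (AR-Net) component described above. Names and defaults follow the library's documentation and should be treated as assumptions.</p>

```python
import numpy as np
import pandas as pd

def make_frame(hours: int = 3 * 168) -> pd.DataFrame:
    """Synthetic hourly load with daily and weekly seasonality, as ds/y."""
    t = np.arange(hours)
    y = 50 + 10 * np.sin(2 * np.pi * t / 24) + 5 * np.sin(2 * np.pi * t / 168)
    ds = pd.date_range("2023-01-01", periods=hours, freq="H")
    return pd.DataFrame({"ds": ds, "y": y})

if __name__ == "__main__":
    from neuralprophet import NeuralProphet  # heavy optional dependency
    df = make_frame()
    model = NeuralProphet(n_lags=24)  # previous day feeds the AR component
    metrics = model.fit(df, freq="H")
    future = model.make_future_dataframe(df, periods=24)
    forecast = model.predict(future)
```

      <p>Setting `n_lags=24` is the design choice that distinguishes this sketch from the Prophet one: the last day of observations becomes an input to the network rather than only a curve-fitting target.</p>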
    </sec>
    <sec id="sec-9">
      <title>3.2.4. Other methods</title>
      <p>In this paper, we do not consider SARIMA, since this method does not support multiple
seasonalities in one time series, and finding the optimal model parameters is a non-trivial task. This
paper also does not consider the LSTM method, as it has been studied in many papers. In addition,
approaches that rely entirely on neural networks are difficult to interpret and require a significant
understanding of neural network architecture.</p>
    </sec>
    <sec id="sec-10">
      <title>4. Experiments</title>
      <p>This section is devoted to conducting practical experiments to evaluate the accuracy of the models
under certain conditions. The purpose of these experiments is not only to compare the selected methods
with each other and determine the most accurate ones but also to assess the feasibility of using these
methods in general.</p>
    </sec>
    <sec id="sec-11">
      <title>4.1. Accuracy evaluation</title>
      <p>To evaluate the accuracy of time series prediction models, two accuracy metrics are suitable:
• root mean square error (RMSE);
• mean absolute percentage error (MAPE).</p>
      <p>RMSE compares deviations in the original units of the values and helps to assess the
overall accuracy of a prediction:
RMSE = sqrt( (1/n) Σ_{t=1}^{n} (x̂_t − x_t)² ), (5)
where x_t is the actual value at moment t, x̂_t is the prediction at moment t, and n is the number of datapoints in the dataset.</p>
      <p>MAPE makes it possible to compare the predictions of different models at different scales or on different data:
MAPE = (100/n) Σ_{t=1}^{n} |x̂_t − x_t| / x_t, (6)
with the same notation as in (5).</p>
      <p>To verify the accuracy and versatility of the models, we chose the metrics of an artificially generated
load that is stationary with daily and weekly seasonality. The data for training and validation were
obtained from Prometheus. This combination of seasonalities was chosen because it reflects the cyclical
nature of human behavior in society, which is driven by social norms and habits.</p>
      <p>The universality of a method is assessed by measuring accuracy
without any additional tuning of the models. It is essential to determine how well a model can adapt to various load
patterns.</p>
    </sec>
    <sec id="sec-12">
      <title>4.2. Typical workloads</title>
      <p>
        To compare the selected time series prediction methods, it is necessary first to identify typical
application load patterns [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Figure 1 shows graphs of typical loads:
• monotonically increasing;
• on/off pattern;
• bursty pattern;
• random pattern;
• mix of several patterns.
      </p>
      <p>The random pattern is not considered in this paper, as predictions of this type are impossible. All
other patterns have either a steady trend or some seasonality. This is the basis for the load patterns
selected for training.</p>
    </sec>
    <sec id="sec-13">
      <title>4.3. Dataset</title>
      <p>In a typical week, especially in a business environment, a distinct rhythm is caused by working days and days off. Daily
seasonality, in turn, reflects the repetition of people's actions throughout the day: working hours, time
for rest, sleep, and so on. In general, any other seasonality could be selected; the goal is to test the
models on complex seasonality.</p>
      <p>The frequency of the values in the generated time series is one hour, which has a minimal impact on
the accuracy of the models compared to lower data frequencies. Anomalies are
introduced by artificially distorting the data.</p>
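      <p>A sketch of how such a series can be generated. The amplitudes, noise level, and the whole-day distortion scheme are illustrative assumptions, not the exact generator used in the experiments:</p>

```python
import numpy as np

def generate_load(weeks: int = 3, anomaly_days: int = 2, seed: int = 0) -> np.ndarray:
    """Hourly load with daily and weekly seasonality plus distorted days."""
    rng = np.random.default_rng(seed)
    t = np.arange(weeks * 7 * 24)                   # one value per hour
    base = 100.0
    daily = 30 * np.sin(2 * np.pi * t / 24)         # day/night rhythm
    weekly = 15 * np.sin(2 * np.pi * t / (7 * 24))  # workday/weekend rhythm
    y = base + daily + weekly + rng.normal(0, 2, t.size)
    # Distort whole days up or down, imitating holidays or failures.
    for day in rng.choice(weeks * 7, size=anomaly_days, replace=False):
        y[day * 24:(day + 1) * 24] *= rng.choice([0.5, 1.5])
    return y

series = generate_load()
```

      <p>Fixing the seed makes each experiment reproducible, while `anomaly_days` controls how much distortion the models must tolerate.</p>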
    </sec>
    <sec id="sec-14">
      <title>Experiment: daily and weekly fluctuations without data distortion</title>
      <p>In the first experiment, we compare the selected models on the example of the above-described time
series with two periodicities of different lengths - daily and weekly. The data were not pre-processed.
The purpose of this experiment is to investigate the prediction capabilities of the selected models on
complex load patterns without any data distortion and to investigate the effect of the size of historical
data during training on prediction accuracy. The models are trained on datasets of different lengths,
including 1, 2, and 3-week periods.</p>
      <p>Based on the graphs and accuracy values, we can conclude that each model accurately predicts the
load under these conditions.</p>
      <p>However, it is worth noting that TBATS is 6% more accurate than the other two models, which showed
the same result as each other. The minimum amount of historical data needed to detect a
seasonality and, accordingly, anomalies is two periods. Therefore, it makes sense to test the behavior
of these methods on short historical data. Reducing the duration of training data had almost no effect
on accuracy in the case of TBATS and Prophet, but NeuralProphet's accuracy deteriorated by 44%,
although it is still quite high.</p>
    </sec>
    <sec id="sec-15">
      <title>Experiment: daily and weekly fluctuations with anomalies</title>
      <p>In the next experiment, distortions are included in the historical data. Some of the days have atypical
increased or decreased values. In real information systems, such distortions can be caused by holidays,
network equipment failure, or load-balancing problems.</p>
      <sec id="sec-15-1">
        <title>Results</title>
        <p>[Flattened results table: MAPE per model — TBATS, Prophet, NeuralProphet]</p>
        <p>Accuracy compared to the first experiment is halved in this experiment, with the accuracy of Prophet and
NeuralProphet decreasing by more than 200%.</p>
        <p>In such extreme conditions, the accuracy of predictions decreases significantly. In particular, the
accuracy of the NeuralProphet model in this case is critically low, but the general pattern is
nevertheless preserved.</p>
      </sec>
    </sec>
    <sec id="sec-16">
      <title>5. Conclusions</title>
      <p>The results of the experiments show that, in general, all three selected models are able to predict
complex load patterns with several seasonalities quite accurately, without preliminary data processing and in the presence of
anomalies. TBATS is more accurate than Prophet and NeuralProphet in all the
experiments, but the difference in accuracy is no more than 16%. TBATS and Prophet predicted the
load quite accurately with only one week of data, with an accuracy degradation of only 10%.
NeuralProphet requires additional parameter settings or data pre-processing in some of the experiments.
Anomalies significantly affect the accuracy, and pre-processing of historical data is necessary.</p>
      <p>Given the results and requirements, all three selected models can be used to automate resource
scaling in Kubernetes. However, it is necessary to consider the features and prerequisites for their use.</p>
      <p>[Flattened results tables: MAPE 0.174 / 0.263 / 0.297 and MAPE 0.243 / 0.209 / 0.476]</p>
    </sec>
    <sec id="sec-17">
      <title>6. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] O. I. Rolik, S. F. Telenyk, and M. V. Yasochka, Управление корпоративной инфраструктурой [Corporate Infrastructure Management], Kyiv: Наукова Думка, 2018, 576 p.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J. Bhimani, Z. Yang, M. Leeser, and N. Mi, "Accelerating Big Data Applications Using Lightweight Virtualization Framework on Enterprise Cloud," in Proceedings of the High Performance Extreme Computing Conference (HPEC), 2017, pp. 1-7.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] C. Qu, R. N. Calheiros, and R. Buyya, "Auto-Scaling Web Applications in Clouds," ACM Computing Surveys, vol. 51, no. 4, pp. 1-33, Jul. 2018. doi: 10.1145/3148149.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] R. N. Calheiros, E. Masoumi, R. Ranjan, and R. Buyya, "Workload Prediction Using ARIMA Model and Its Impact on Cloud Applications' QoS," IEEE Transactions on Cloud Computing, vol. 3, no. 4, pp. 449-458, Oct. 2015. doi: 10.1109/tcc.2014.2350475.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] V. R. Messias, J. C. Estrella, R. Ehlers, M. J. Santana, R. C. Santana, and S. Reiff-Marganiec, "Combining time series prediction models using genetic algorithm to autoscaling Web applications hosted in the cloud infrastructure," Neural Computing and Applications, vol. 27, no. 8, pp. 2383-2406, Dec. 2015. doi: 10.1007/s00521-015-2133-3.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] K. Rzadca et al., "Autopilot," in Proceedings of the Fifteenth European Conference on Computer Systems, ACM, Apr. 2020. doi: 10.1145/3342195.3387524.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] V. Omelchenko and O. Rolik, "Автоматизація управління ресурсами в інформаційних системах на основі реактивного вертикального масштабування [Automation of resource management in information systems based on reactive vertical scaling]," Адаптивні системи автоматичного управління, vol. 2, no. 41, pp. 65-78, Dec. 2022. doi: 10.20535/1560-8956.41.2022.271344.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>J.</given-names>
            <surname>Santos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Wauters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Volckaert</surname>
          </string-name>
          , and
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Turck</surname>
          </string-name>
          , “
          <article-title>gym-hpa: Efficient Auto-Scaling via Reinforcement Learning for Complex Microservice-based Applications in Kubernetes</article-title>
          ,”
          <source>NOMS 2023 – 2023 IEEE/IFIP Network Operations and Management Symposium</source>
          . IEEE, May
          <year>2023</year>
          . doi: 10.1109/noms56928.2023.10154298.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Straesser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Grohmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>von Kistowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Eismann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bauer</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S.</given-names>
            <surname>Kounev</surname>
          </string-name>
          , “
          <article-title>Why Is It Not Solved Yet?</article-title>
          ”
          <source>Proceedings of the 2022 ACM/SPEC International Conference on Performance Engineering</source>
          . ACM, Apr.
          <year>2022</year>
          . doi: 10.1145/3489525.3511680.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <article-title>Metrics For Kubernetes System Components</article-title>
          ,
          <source>Kubernetes Documentation</source>
          . URL: https://kubernetes.io/docs/concepts/cluster-administration/system-metrics/.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>De Livera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Hyndman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Snyder</surname>
          </string-name>
          , “
          <article-title>Forecasting Time Series With Complex Seasonal Patterns Using Exponential Smoothing</article-title>
          ,”
          <source>Journal of the American Statistical Association</source>
          , vol.
          <volume>106</volume>
          , no.
          <issue>496</issue>
          . Informa UK Limited, pp.
          <fpage>1513</fpage>
          -
          <lpage>1527</lpage>
          , Dec.
          <year>2011</year>
          . doi: 10.1198/jasa.2011.tm09771.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G. E. P.</given-names>
            <surname>Box</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Cox</surname>
          </string-name>
          , “
          <article-title>An Analysis of Transformations</article-title>
          ,”
          <source>Journal of the Royal Statistical Society</source>
          , Series B, vol.
          <volume>26</volume>
          , no.
          <issue>2</issue>
          , pp.
          <fpage>211</fpage>
          -
          <lpage>252</lpage>
          ,
          <year>1964</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Skorupa</surname>
          </string-name>
          , “
          <article-title>Forecasting Time Series with Multiple Seasonalities using TBATS in Python</article-title>
          ,”
          <year>2019</year>
          . URL: https://medium.com/intive-developers/forecasting-time-series-with-multiple-seasonalities-using-tbats-in-python-398a00ac0e8a.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Taylor</surname>
          </string-name>
          and
          <string-name>
            <given-names>B.</given-names>
            <surname>Letham</surname>
          </string-name>
          , “
          <article-title>Forecasting at scale</article-title>
          ,”
          <source>PeerJ Preprints</source>
          , Sep.
          <year>2017</year>
          . doi: 10.7287/peerj.preprints.3190v2.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Friedman</surname>
          </string-name>
          and
          <string-name>
            <given-names>W.</given-names>
            <surname>Stuetzle</surname>
          </string-name>
          , “
          <article-title>Projection Pursuit Regression</article-title>
          ,”
          <source>Journal of the American Statistical Association</source>
          , vol.
          <volume>76</volume>
          , pp.
          <fpage>817</fpage>
          -
          <lpage>823</lpage>
          ,
          <year>1981</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>O.</given-names>
            <surname>Triebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hewamalage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pilyugina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Laptev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bergmeir</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Rajagopal</surname>
          </string-name>
          , “
          <article-title>NeuralProphet: Explainable Forecasting at Scale</article-title>
          ,”
          <source>arXiv</source>
          ,
          <year>2021</year>
          . doi: 10.48550/ARXIV.2111.15397.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>O.</given-names>
            <surname>Triebe</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Laptev</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Rajagopal</surname>
          </string-name>
          , “
          <article-title>AR-Net: A simple Auto-Regressive Neural Network for time-series</article-title>
          ,”
          <source>arXiv</source>
          ,
          <year>2019</year>
          . doi: 10.48550/ARXIV.1911.12436.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>W.</given-names>
            <surname>Veenhof</surname>
          </string-name>
          , “
          <article-title>Workload patterns for cloud computing</article-title>
          ,”
          <year>2010</year>
          . URL: http://watdenkt.veenhof.nu/2010/07/13/workload-patterns-for-cloud-computing/.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Humphrey</surname>
          </string-name>
          , “
          <article-title>Empirical Evaluation of Workload Forecasting Techniques for Predictive Cloud Resource Scaling</article-title>
          ,”
          <source>2016 IEEE 9th International Conference on Cloud Computing (CLOUD)</source>
          . IEEE, Jun.
          <year>2016</year>
          . doi: 10.1109/cloud.2016.0011.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>S.</given-names>
            <surname>Fan</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Hyndman</surname>
          </string-name>
          , “
          <article-title>Short-Term Load Forecasting Based on a Semi-Parametric Additive Model</article-title>
          ,”
          <source>IEEE Transactions on Power Systems</source>
          , vol.
          <volume>27</volume>
          , no.
          <issue>1</issue>
          . Institute of Electrical and Electronics Engineers (IEEE), pp.
          <fpage>134</fpage>
          -
          <lpage>141</lpage>
          , Feb.
          <year>2012</year>
          . doi: 10.1109/tpwrs.2011.2162082.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>