<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Machine learning methods' comparison for land surface temperatures forecasting due to climate classification⋆</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tetiana Hovorushchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Alekseiko</string-name>
          <email>vitalii.alekseiko@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitaly Levashenko</string-name>
          <email>vitaly.levashenko@fri.uniza.sk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>AdvAIT-2024: 1st International Workshop on Advanced Applied Information Technologies</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Khmelnytskyi National University</institution>
          ,
          <addr-line>Institutska str., 11, Khmelnytskyi, 29016</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Zilina University</institution>
          ,
          <addr-line>Univerzitná 8215, 010 26 Žilina</addr-line>
          ,
          <country country="SK">Slovakia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
        <p>The application of machine learning methods for short- and medium-term forecasting of the average monthly temperature of the Earth's surface, taking into account climatic zoning, is considered. The peculiarities of predicting the temperature of a moving surface in the context of the regression problem using machine learning methods are described. A comparison of the forecasting accuracy of methods based on metrics was made. Peculiarities of calculating the metrics, according to the values of the investigated parameters, are considered. The speed of operation of the methods was analyzed and statistical indicators were calculated. To visualize the effectiveness of the methods, Taylor diagrams were constructed. The most effective methods for forecasting the temperature of the Earth's surface have been determined.</p>
      </abstract>
      <kwd-group>
        <kwd>machine learning (ML)</kwd>
        <kwd>forecasting</kwd>
        <kwd>land surface temperature</kwd>
        <kwd>climate zone</kwd>
        <kwd>models' evaluation</kwd>
        <kwd>regression</kwd>
        <kwd>climate changes</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>changes. The study of these parameters has a key influence on determining the priorities of greening,
urban planning and landscape design [10].</p>
      <p>Forecasting the temperature of the Earth’s surface is associated with some complexities and
peculiarities due to the dynamic nature of the climate system and the Earth’s surface itself. The key
factors that determine these features are:
– spatial variability;
– temporal variability;
– feedback mechanisms;
– uncertainties in models;
– anthropogenic impact;
– extreme events;
– availability and quality of data.</p>
      <p>The Earth’s surface temperature varies greatly among regions due to factors such as latitude,
proximity to oceans, elevation above sea level, and types of land cover (such as forests, deserts).
Forecasting must take into account these spatial variations, which may affect local weather
conditions.</p>
      <p>It should be noted that the temperature of the Earth’s surface fluctuates not only seasonally, but
also daily due to diurnal cycles (day-night). However, when studying the main trends in the average
monthly temperature, it is advisable to ignore daily cycles. Also, weather patterns and climate
phenomena such as El Niño and La Niña can cause interannual variability. In addition, they are able
to influence long-term temperature trends.</p>
      <p>The Earth’s climate system is driven by various feedback mechanisms, such as the albedo effect
(the reflectivity of the Earth’s surface), the concentration of greenhouse gases (e.g. CO2) and the
accumulation of heat in the ocean. These feedbacks can amplify or weaken temperature changes,
making predictions difficult.</p>
      <p>Climate models that simulate Earth’s climate include physical processes such as radiation,
convection, and ocean currents. These models contain uncertainties due to imperfect knowledge of
the parameters and the complexity of the interaction between different components of the climate
system.</p>
      <p>Another important factor is human activity, including industrial emissions, land-use change, and
urbanization [11], which contribute to warming trends [12]. Predicting how these factors will evolve
and interact with natural climate variability adds another layer of complexity to temperature
projections.</p>
      <p>Forecasting extreme temperature events, such as heat waves and cold snaps, requires
understanding not only trends in average temperature, but also the likelihood and intensity of such
events under changing climate conditions.</p>
      <p>Surface temperature forecasting is based on historical data from weather stations, satellites, and
other sources. Ensuring the accuracy and reliability of this data, especially in remote or data-poor
regions, can present challenges for forecasting models.</p>
      <p>Solving these complexities involves integrating observations, improving modeling methods, and
understanding of the Earth’s climate system. Technological advances and computing power continue
to improve our ability to more accurately predict surface temperatures on different time scales.</p>
    </sec>
    <sec id="sec-2">
      <title>3. Methodology</title>
      <sec id="sec-2-1">
        <title>3.1. Dataset</title>
        <p>The research used the GlobalLandTemperatures dataset [13], available on Kaggle under a Creative Commons license (CC0: Public Domain).</p>
        <p>This dataset includes Earth's surface temperature data from 1743 to 2013. The original tables contain the following information:
– dt (month and year when the temperature was observed);
– AverageTemperature (average monthly temperature);
– AverageTemperatureUncertainty (uncertainty values of the measurement);
– Country (country or territory where the temperature was observed).
For the needs of the research, the dataset was modified: a column 'ClimateZone' was added with the abbreviation of the climate zone according to the World Climate Data [14], and a column 'MainClimateZone' with a letter denoting one of the five main climate zones. Table 1 shows the number of countries in each main climate zone.</p>
        <p>Although the dataset cannot be called fully balanced, this is explained by the peculiarities of the location of countries on the globe and by the geopolitical situation. Several aspects are important:
– the area of countries;
– geopolitical factors;
– the selection of data sources.</p>
        <p>First of all, large countries can have a variety of climate zones. For example, some countries cover a vast territory with varying climate conditions: from arctic to temperate in Canada, or from temperate to arid in the USA. This may result in uneven representation of temperature data.</p>
        <p>Secondly, political, economic and sociocultural differences between countries can also affect the
balance of the dataset. For example, access to climate observation technologies may be uneven across
countries, which may affect the accuracy of the data.</p>
        <p>Finally, different countries may have different climate monitoring systems and different data
sources. Some countries may be active in collecting data, while others may be less active. This can
also affect the balance of the dataset.</p>
        <p>In general, the imbalance of the dataset with the temperatures of the earth’s surface by country
is a complex issue associated with many factors. For more accurate climate analysis and modeling,
it is important to consider all these aspects.</p>
        <p>The research used data with similar uncertainty values, but these values sometimes differ, so forecasting may be more or less accurate for some regions. To avoid any discrimination, data for all countries and territories with relevant information were used.</p>
      </sec>
      <sec id="sec-2-2">
        <title>3.2. Machine learning methods</title>
        <p>A study of the performance of various methods for different climate zones was conducted. To do this, several models were developed to forecast the temperature for the period from 2000 to 2013. Temperature data up to the year 2000 were used for model fitting.</p>
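The chronological split described above can be sketched in a few lines of plain Python (a minimal illustration with made-up records; the real dataset's columns are dt, AverageTemperature, Country and the added climate-zone fields):

```python
# Chronological train/test split: records before 2000 are used for model
# fitting, records from 2000 to 2013 for forecast evaluation.
records = [
    ("1998-05-01", 14.2),  # (dt, AverageTemperature), values are made up
    ("1999-11-01", 8.7),
    ("2001-03-01", 10.1),
    ("2013-07-01", 21.5),
]

def split_by_year(rows, cutoff=2000, last=2013):
    """Split (dt, temperature) rows on the year encoded in the dt string."""
    train = [r for r in rows if int(r[0][:4]) < cutoff]
    test = [r for r in rows if cutoff <= int(r[0][:4]) <= last]
    return train, test

train, test = split_by_year(records)
print(len(train), len(test))  # -> 2 2
```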
        <p>The following methods were chosen for the study:
– neural network;
– decision trees;
– random forest;
– k-nearest neighbors;
– support vector machine;
– gradient boosting;
– AdaBoost;
– XGBoost;
– LightGBM.</p>
        <p>Due to the climatic features of different regions of the Earth, it is advisable to conduct separate
studies for each of the climatic zones in order to identify the methods that are best adapted to the
corresponding temperature dependencies [15, 16].</p>
      </sec>
      <sec id="sec-2-3">
        <title>3.2.1. Neural Network</title>
        <p>A neural network (NN) is a set of algorithms modeled after the human brain designed for pattern
recognition. A neural network interprets the data using a kind of machine perception, labeling or
clustering of the raw data. Neural networks consist of layers of interconnected nodes (“neurons”)
that process input data, learn from it, and make decisions based on learned patterns. Each node is
assigned a weight that is adjusted during learning to minimize the prediction error.</p>
        <p>In the context of regression tasks for predicting numerical series, neural networks can model
complex relationships between inputs and outputs. Recurrent neural networks (RNNs),
long short-term memory (LSTM) networks, and gated recurrent units (GRUs) are particularly well suited
to time series forecasting because they can capture temporal dependencies in data. By learning from
historical data, these networks learn patterns and trends that can be used to predict future values.</p>
      </sec>
      <sec id="sec-2-4">
        <title>3.2.2. Decision Trees</title>
        <p>Decision Tree (DT) is a non-parametric supervised learning method used for classification and
regression problems. It partitions the dataset into subsets based on the most important feature at
each node, making decisions based on feature values. Each branch of the tree represents a decision
rule, and each leaf represents an outcome. The decision rule can be represented as:
if xi &lt; a then go to left subtree, else go to right subtree,
where:
xi – feature;
a – threshold.</p>
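The split rule above can be illustrated with a toy one-node regression tree (a hand-written sketch, not a library implementation; the feature values and threshold are invented):

```python
def stump_predict(x, threshold, left_value, right_value):
    """One-node regression tree: route x by the rule 'x < threshold'."""
    if x < threshold:
        return left_value   # mean of training targets in the left subset
    return right_value      # mean of training targets in the right subset

# Hypothetical split: samples with feature x (e.g. day-of-year) below 150
# averaged 5.0 degrees during training, the remaining samples 18.0 degrees.
print(stump_predict(100, 150, 5.0, 18.0))  # -> 5.0
print(stump_predict(200, 150, 5.0, 18.0))  # -> 18.0
```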
        <p>In the context of a regression problem, decision trees can be used by partitioning the data into
subsets based on input feature values and predicting a numerical value for each subset [17]. In time
series forecasting, decision trees can model the relationship between time-based features and a target
variable. Although decision trees are easy to understand and interpret, individual decision trees can
be prone to overfitting and as a result may not perform well with complex patterns, but at the same
time the method is fundamental to ensemble methods such as random forest and gradient boosting.</p>
      </sec>
      <sec id="sec-2-5">
        <title>3.2.3. Random Forest</title>
        <p>
          Random Forest (RF) is an ensemble learning method. It builds several decision trees during the learning process and derives the class membership (classification task) or the average prediction (regression task) of the individual trees [18]. A random forest combines the simplicity of decision trees with improved accuracy and robustness to overfitting, by averaging the results of multiple trees that may individually be subject to overfitting [19]:
y = mode(y_1, …, y_N) (classification), (1)
y = (1/N) Σ_{i=1}^{N} y_i (regression), (2)
where:
y_i – prediction of the i-th tree;
N – total number of trees.
        </p>
        <p>In the context of numerical series prediction, each tree is trained on a random subset of data and
features, and their predictions are averaged to produce a final prediction. Random forests are quite
robust and handle a large number of input variables well.</p>
      </sec>
      <sec id="sec-2-6">
        <title>3.2.4. K-nearest neighbors</title>
        <p>K-Nearest Neighbors (KNN) is a simple instance-based learning algorithm that classifies a data point
based on how its neighbors are classified. In KNN, the parameter "K" represents the number of nearest
neighbors to consider. The algorithm calculates the distance between the new data point and the
training points and then assigns a class based on the majority vote of the K nearest neighbors.</p>
        <p>KNN can be applied to regression tasks by averaging the numerical values of the K-nearest
neighbors. For time series forecasting, KNN can predict the future value by finding similar historical
patterns and averaging their subsequent values. KNN is simple to implement, but can be
computationally expensive and sensitive to the choice of K as well as the distance metric used.</p>
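KNN regression as described can be sketched in a few lines of plain Python (assuming a one-dimensional feature and absolute distance; the training pairs are invented):

```python
def knn_regress(x_new, data, k=3):
    """Predict by averaging the targets of the k nearest training points."""
    # data: list of (feature, target) pairs
    nearest = sorted(data, key=lambda p: abs(p[0] - x_new))[:k]
    return sum(t for _, t in nearest) / k

# Hypothetical (month index, temperature) training pairs
history = [(1, 2.0), (2, 4.0), (3, 8.0), (4, 12.0), (5, 16.0)]
print(knn_regress(2.5, history, k=2))  # averages months 2 and 3 -> 6.0
```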
      </sec>
      <sec id="sec-2-7">
        <title>3.2.5. Support Vector Regression</title>
        <p>Support Vector Machine (SVM) is a supervised learning algorithm that can be used for classification
or regression. The algorithm of the method consists in finding the hyperplane that best divides the
data into classes [18]. In cases where the data cannot be partitioned linearly, SVM uses a
transformation of the data into a higher dimensional space where a hyperplane can be used for
partitioning.</p>
        <p>In regression problems, the support vector method is known as support vector regression (SVR).
SVR tries to find a function that deviates from the actual observed values by an amount that does
not exceed a given threshold and is as smooth as possible. For time series forecasting, SVR can
capture the underlying trend and seasonality in the data, although this often requires careful
parameter tuning and kernel selection.</p>
      </sec>
      <sec id="sec-2-8">
        <title>3.2.6. Gradient Boosting</title>
        <p>Gradient Boosting (GB) is an ensemble technique that builds models sequentially, where each new
model tries to correct the mistakes made by the previous ones. This approach uses a gradient descent
algorithm to minimize the loss function. The method is powerful for both classification and
regression tasks [19]. It is very efficient and accurate in forecasting, although it may require
significant computational resources.</p>
        <p>The loss function in regression problems is often represented by the mean squared error or the mean
absolute error. Variations of the gradient boosting method, in particular XGBoost and LightGBM,
are known for their high accuracy and ability to handle complex datasets.</p>
      </sec>
      <sec id="sec-2-8-1">
        <title>3.2.7. AdaBoost.R</title>
        <p>Adaptive Boosting (AB, AdaBoost) is an ensemble learning technique that combines several weak
classifiers to create a strong classifier. This method focuses on cases that previous classifiers
misclassified and adjusts their weights accordingly, thus increasing the accuracy of the model. Each
subsequent model in the sequence is tuned to correct the errors of the previous ones, making the method
highly effective at improving forecasting performance.</p>
        <p>AdaBoost can be adapted to the regression problem (AdaBoost.R). In this context, the method
combines the predictions of several weak learners, typically decision trees, to create a strong
predictive model. Each learner focuses on correcting the mistakes of the previous ones. For
numerical series prediction, AdaBoost.R can improve prediction accuracy by emphasizing
hard-to-predict data points during fitting.</p>
      </sec>
      <sec id="sec-2-8-2">
        <title>3.2.8. XGBoost</title>
        <p>XGBoost (XGB, Extreme Gradient Boosting) is a powerful and efficient implementation of gradient
boosting. The method includes numerous optimizations such as parallel processing, tree pruning,
and missing-value handling, making it faster and more accurate than traditional gradient boosting
methods [16]. XGBoost is widely used in competitive machine learning due to its performance and
flexibility.</p>
        <p>XGBoost is well suited to regression tasks and is widely used for predicting numerical series.
It includes optimizations such as parallel processing and regularization to prevent overfitting.
XGBoost builds trees sequentially, where each tree aims to reduce the residual errors of the previous
trees. The method is known for its high performance and scalability, making it a popular choice for
forecasting tasks.</p>
      </sec>
      <sec id="sec-2-8-3">
        <title>3.2.9. LightGBM</title>
        <p>LightGBM (LGBM, Light Gradient Boosting Machine) is a gradient boosting framework that uses
tree-based learning algorithms. It is designed to be highly efficient and scalable, suitable for large
datasets. LightGBM uses histogram-based algorithms, which provide faster fitting and lower memory
usage compared to traditional gradient boosting frameworks.</p>
        <p>LightGBM is particularly effective for regression tasks, including the prediction of numerical series.
The method uses histogram-based algorithms to efficiently group and split data, and handles
large datasets and high-dimensional data efficiently. It builds trees sequentially, with each tree
correcting the errors of the previous ones, similar to other gradient boosting methods.</p>
      </sec>
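The stagewise error-correction idea shared by gradient boosting, AdaBoost.R, XGBoost and LightGBM can be illustrated with a minimal hand-rolled boosting loop that fits a constant "weak learner" to the residuals at each round (a didactic sketch under squared-error loss, not any of the actual library algorithms):

```python
def boost(targets, n_rounds=3, learning_rate=0.5):
    """Each round adds a weak learner (here: the residual mean) to the ensemble."""
    prediction = [0.0] * len(targets)
    for _ in range(n_rounds):
        residuals = [t - p for t, p in zip(targets, prediction)]
        weak = sum(residuals) / len(residuals)  # the simplest possible learner
        prediction = [p + learning_rate * weak for p in prediction]
    return prediction

y = [10.0, 12.0, 14.0]   # hypothetical monthly temperatures
print(boost(y))  # predictions move toward the target mean (12.0) each round
```

Real implementations replace the constant learner with a small regression tree fit to the residuals (or, in XGBoost and LightGBM, to gradients and Hessians of the loss), but the sequential correction structure is the same.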
      <sec id="sec-2-9">
        <title>3.3. Models’ evaluation</title>
        <p>To evaluate the effectiveness of a predictive model in a regression problem, various aspects of
performance are measured. The most common metrics include [20, 21, 22]:</p>
        <p>Mean Absolute Error (MAE) indicates the average of the absolute differences between the predicted
and actual values. It estimates the accuracy of forecasts without considering the direction of errors [22]:
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|, (3)
where:
n – number of observations;
y_i – the actual value of the i-th observation;
ŷ_i – the predicted value of the i-th observation.</p>
        <p>Mean Squared Error (MSE) indicates the mean of the squared differences between the predicted and
actual values, giving greater weight to larger errors:
MSE = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)². (4)</p>
        <p>Root Mean Square Error (RMSE) is the square root of the MSE. It has the same units as the raw data,
making it easier to interpret [22]:
RMSE = √((1/n) Σ_{i=1}^{n} (y_i − ŷ_i)²). (5)</p>
        <p>R-squared (R²) indicates the proportion of variance in the dependent variable that can be predicted
from the independent variable(s). The value ranges from 0 to 1, with higher values indicating a better
fit of the model:
R² = 1 − Σ_{i=1}^{n} (y_i − ŷ_i)² / Σ_{i=1}^{n} (y_i − ȳ)², (6)
where ȳ – the average among the actual values.</p>
        <p>Mean Absolute Percentage Error (MAPE) determines the average of the absolute percentage errors,
expressing the error as a percentage of the actual values:
MAPE = (100%/n) Σ_{i=1}^{n} |(y_i − ŷ_i)/y_i|. (7)</p>
        <p>Symmetric Mean Absolute Percentage Error (sMAPE) is a variant of MAPE that treats positive and
negative deviations symmetrically:
sMAPE = (100%/n) Σ_{i=1}^{n} |y_i − ŷ_i| / ((|y_i| + |ŷ_i|)/2). (8)</p>
        <p>Mean Bias Deviation (MBD) determines the average bias of the forecasts, indicating whether the
model is systematically over- or under-predicting:
MBD = (1/n) Σ_{i=1}^{n} (ŷ_i − y_i). (9)</p>
        <p>Median Absolute Error (MedAE) is the median of the absolute differences between the predicted and
actual values. It is less sensitive to outliers than the mean absolute error (MAE), making it a reliable
measure of model performance when there are significant outliers in the data:
MedAE = median(|y_i − ŷ_i|). (10)</p>
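The metrics above map directly to a few lines of code; a compact plain-Python reference sketch (the sample values are invented for illustration):

```python
import math

def mae(y, yhat):
    return sum(abs(a - p) for a, p in zip(y, yhat)) / len(y)

def mse(y, yhat):
    return sum((a - p) ** 2 for a, p in zip(y, yhat)) / len(y)

def rmse(y, yhat):
    return math.sqrt(mse(y, yhat))

def r2(y, yhat):
    mean_y = sum(y) / len(y)
    ss_res = sum((a - p) ** 2 for a, p in zip(y, yhat))
    ss_tot = sum((a - mean_y) ** 2 for a in y)
    return 1 - ss_res / ss_tot

def mape(y, yhat):
    # Undefined near y == 0, which is why this metric was skipped for the
    # continental and polar zones in the study.
    return 100 / len(y) * sum(abs((a - p) / a) for a, p in zip(y, yhat))

def medae(y, yhat):
    diffs = sorted(abs(a - p) for a, p in zip(y, yhat))
    n = len(diffs)
    return diffs[n // 2] if n % 2 else (diffs[n // 2 - 1] + diffs[n // 2]) / 2

actual = [10.0, 12.0, 14.0, 16.0]
predicted = [11.0, 12.0, 13.0, 18.0]
print(round(mae(actual, predicted), 3))  # -> 1.0
```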
        <p>A Taylor diagram is a graphical tool used to evaluate the performance of models by comparing
their results to observations. The chart combines three statistics into one graph: correlation
coefficient, standard deviation, and root mean square error (RMSE).</p>
        <p>The standard deviation σ represents the variability or spread of the data and is calculated for both
the observations and the model output:
σ = √((1/n) Σ_{i=1}^{n} (x_i − x̄)²), (11)
where:
n – number of observations;
x_i – each individual observation;
x̄ – the mean value of the observations.</p>
        <p>
          The correlation coefficient r between the observations and the model output indicates how well the
model results match the observed data in terms of patterns and time intervals:
r = Σ_{i=1}^{n} (x_obs,i − x̄_obs)(x_model,i − x̄_model) / (√(Σ_{i=1}^{n} (x_obs,i − x̄_obs)²) √(Σ_{i=1}^{n} (x_model,i − x̄_model)²)), (12)
where:
x_obs,i – individual values of the observations;
x_model,i – individual model values;
x̄_obs – the mean value of the observations;
x̄_model – the mean value of the model output.
        </p>
        <p>The centered root mean square error E′ reflects the total difference between the model output and
the observations, taking into account both the variance and the bias:
E′ = √((1/n) Σ_{i=1}^{n} ((x_model,i − x̄_model) − (x_obs,i − x̄_obs))²). (13)</p>
        <p>In this way, a Taylor diagram allows multiple models to be compared on a single graph, making it
easier to visualize and interpret the relative performance of different machine learning techniques.</p>
        <p>The chart layout makes it easy to see how close the model's performance is to ideal (represented
by a control point where the correlation is 1, the standard deviation matches the observed, and the
RMSE is zero).</p>
        <p>Thus, the Taylor plot is a powerful tool for evaluating and comparing the performance of machine
learning methods, providing a visual and quantitative assessment of their ability to accurately
reproduce observed data patterns.</p>
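The three statistics combined in a Taylor diagram can be computed directly, following formulas (11)–(13) (plain Python; the sample series are made up):

```python
import math

def stddev(x):
    """Population standard deviation, formula (11)."""
    m = sum(x) / len(x)
    return math.sqrt(sum((v - m) ** 2 for v in x) / len(x))

def correlation(obs, model):
    """Pearson correlation coefficient, formula (12)."""
    mo, mm = sum(obs) / len(obs), sum(model) / len(model)
    num = sum((o - mo) * (p - mm) for o, p in zip(obs, model))
    den = math.sqrt(sum((o - mo) ** 2 for o in obs)) * \
          math.sqrt(sum((p - mm) ** 2 for p in model))
    return num / den

def centered_rmse(obs, model):
    """Centered root mean square error E', formula (13)."""
    mo, mm = sum(obs) / len(obs), sum(model) / len(model)
    return math.sqrt(sum(((p - mm) - (o - mo)) ** 2
                         for o, p in zip(obs, model)) / len(obs))

obs = [10.0, 12.0, 14.0, 16.0]    # hypothetical observations
model = [10.5, 12.5, 13.5, 16.5]  # hypothetical model output
print(round(correlation(obs, model), 3))
```

A model close to the diagram's reference point has correlation near 1, a standard deviation matching the observations, and a centered RMSE near zero.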
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Results</title>
      <p>In general, most machine learning methods demonstrated high predictive accuracy on the test data.
The calculated metrics are presented in Tables 2–6, separately for each climate zone.</p>
      <p>For the tropical climate zone (Table 2), KNN (72.1%) and LGBM (73.6%) methods showed the
highest efficiency according to the R2 metric. At the same time, their MAPE was 0.68% and 0.65%,
respectively. However, the complex temperature patterns associated with the geographical location
of most countries do not allow a highly accurate forecast to be made. In particular, this is due to the
location of the studied territories in different hemispheres, as well as some countries located in both
hemispheres. The impossibility of making a high-precision forecast necessitates further studies of
the tropical climate zone, in particular by conducting separate studies for different hemispheres, as
well as climate subzones.</p>
      <p>In the arid climate zone (Table 3), temperature patterns are clearly observed that are common
to both hemispheres, except for the shift caused by opposite seasons in the two hemispheres. This
allows accurate forecasting with various methods. Thus, the NN, DT, RF, KNN, GB, LGBM and XGB
methods show a high R² value (above 96%). In addition, these methods have a MAPE below 5%, which
indicates their high efficiency.</p>
      <p>Temperature dependencies observed in the temperate climate zone are also well handled by
machine learning methods (Table 4). Three methods (RF, LGBM and XGB) show high performance,
while another three (DT, KNN, GB) are only minimally inferior.</p>
      <p>Forecasting the land surface temperature in the continental climate zone using machine learning
methods also demonstrates fairly high accuracy (Table 5). All the leading methods have an R² score above
94%. The gradient boosting method shows the best results, closely followed by the LGBM and
XGB methods. The MAPE and sMAPE metrics were not evaluated for the continental and polar climate
zones, since temperature values close to 0 are frequently observed there, and substituting such values
into formulas (7) and (8) produces very large indicators that do not reflect the real situation.</p>
      <p>For temperatures in the polar climate zone, the methods do not perform very well (Table 6). The
best results are observed for LGBM, with an R² of 92%.</p>
      <p>Since the time spent on creating a forecast plays a rather important role, it is advisable to choose
faster methods, provided the forecasting accuracy is the same. Table 7 shows the running time of each
method for data from each climate zone. Comparing the results, we can conclude that the decision
tree and k-nearest neighbors methods are the fastest. Analyzing the forecasting accuracy, it can
be concluded that the KNN, RF, GB, LGBM and XGB methods are quite effective for creating short-
and medium-term forecasts. These methods are able to predict the average monthly temperature for
the next decade fairly accurately. Tables 8 and 9 show the standard deviation, correlation
coefficients, and centered root mean square error for each of the methods in each climate zone.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusions</title>
      <p>The conducted research made it possible to identify the most effective methods for forecasting the
temperature of the Earth's surface, in terms of forecast accuracy and time spent, for each of the
climate zones. Analysis of the forecast, taking climatic zoning into account, allows the patterns of
individual territories to be determined more clearly and a more accurate forecast to be made. The
proposed approach makes it possible to monitor the main trends of climatic change, in terms of changes
in the Earth's surface temperature, in the short- and medium-term perspective. The proposed machine
learning methods are able to make an accurate and fast forecast of the main trends in the change of
the average monthly temperature of the Earth's surface for the next decade. The machine learning
methods were evaluated on the basis of metrics. The values of standard deviation, correlation
coefficient and centered root mean square error were also calculated for each of the methods. Taylor
diagrams were constructed to visualize the effectiveness of the methods. This research forms a basis
for further study of changes in climatic indicators for individual territories, and for the search for
the most appropriate machine learning methods for forecasting climatic changes, taking climatic
zoning into account.</p>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>This work was supported by the project "Earth Observation for Early Warning of Land Degradation
at European Frontier (EWALD)" under the European Union's Horizon Europe Framework Programme for
Research and Innovation (2021–2027), Grant Agreement No. 101086250.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the authors used Grammarly for grammar and spelling
checking, and DeepL Translate for translating some phrases into English. After using these
tools/services, the authors reviewed and edited the content as needed and take full responsibility for
the publication's content.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>learning paradigm. Expert Systems with Applications, Volume 222, (2023), 119811, ISSN 0957-4174. doi:10.1016/j.eswa.2023.119811.
[8] C. B. Pande, J. C. Egbueri, R. Costache, L. M. Sidek, Q. Wang, F. Alshehri, N. Md Din, V. K. Gautam, S. C. Pal. Predictive modeling of land surface temperature (LST) based on Landsat-8 satellite data and machine learning models for sustainable development. Journal of Cleaner Production, Volume 444, (2024), 141035, ISSN 0959-6526.
[9] F. Di Nunno, S. Zhu, M. Ptak, M. Sojka, F. Granata. A stacked machine learning model for multistep ahead prediction of lake surface water temperature. Science of The Total Environment, Volume 890, (2023), 164323, ISSN 0048-9697. doi:10.1016/j.scitotenv.2023.164323.
[10] O. E. Adeyeri, A. H. Folorunsho, K. I. Ayegbusi, V. Bobde, T. E. Adeliyi, C. E. Ndehedehe, A. A. Akinsanola. Land surface dynamics and meteorological forcings modulate land surface temperature characteristics. Sustainable Cities and Society, Volume 101, (2024), 105072, ISSN 2210-6707. doi:10.1016/j.scs.2023.105072.
[11] N. Gupta, B. H. Aithal. Urban land surface temperature forecasting: a data-driven approach using regression and neural network models. Geocarto International, 39(1), (2024). doi:10.1080/10106049.2023.2299145.
[12] L. Tian, Y. Tao, M. Li, C. Qian, T. Li, Y. Wu, F. Ren. Prediction of Land Surface Temperature Considering Future Land Use Change Effects under Climate Change Scenarios in Nanjing City, China. Remote Sensing, 15(11):2914, (2023). doi:10.3390/rs15112914.
[13] Kaggle. GlobalLandTemperature. (2018). https://www.kaggle.com/datasets/sambapython/globallandtemperature.
[14] List of countries by climate zone and average yearly temperatures. (2024). https://weatherandclimate.com/countries.
[15] O. Pavlova, V. Alekseiko. The concept of an information system for forecasting the temperature regime of the earth's surface based on machine learning. Computer Systems and Information Technologies, No. 2, (2024), pp. 6–13. doi:10.31891/csit-2024-2-1.
[16] S. Sharafi, M. Mohammadi Ghaleni. Revealing accuracy in climate dynamics: enhancing evapotranspiration estimation using advanced quantile regression and machine learning models. Applied Water Science, 14, 162, (2024). doi:10.1007/s13201-024-02211-5.
[17] A. Nailman. Comparing machine learning algorithms for regression. Machine Learning Models. (2024, May 31). https://machinelearningmodels.org/comparing-machine-learning-algorithms-for-regression/.
[18] B. Lefoula, A. Hebal, D. Bengora. Performance of machine learning methods for modeling reservoir management based on irregular daily data sets: a case study of Zit Emba dam. Earth Science Informatics, 17(1), (2023), pp. 145–161. doi:10.1007/s12145-023-01160-y.
[19] A. Nailman. Supervised machine learning types: Exploring the different approaches. Machine Learning Models. (2024, May 28). https://machinelearningmodels.org/supervised-machine-learning-types-exploring-the-different-approaches/.
[20] J. Chen. Analysis of Statistic Metrics in Different Types of Machine Learning. Highlights in Science, Engineering and Technology, 88, pp. 182–188, (2024). doi:10.54097/c4mz2q66.
[21] V. Plevris, G. Solorzano, N. Bakas, M. Ben Seghier. Investigation of performance metrics in regression analysis and machine learning-based prediction models. The 8th European Congress on Computational Methods in Applied Sciences and Engineering, ECCOMAS Congress 2022, 5–9 June 2022, Oslo, Norway. (2022). doi:10.23967/eccomas.2022.155.
[22] B. Wohlwend. Regression model evaluation metrics: R-Squared, Adjusted R-Squared, MSE, RMSE, and MAE. Medium. (2023, August 12). https://medium.com/@brandon93.w/regression-model-evaluation-metrics-r-squared-adjusted-r-squared-mse-rmse-and-mae-24dcc0e4cbd3.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Hryhoruk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Grygoruk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Khrushch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          .
          <article-title>Using non-metric multidimensional scaling for assessment of regions' economy in the context of their sustainable development</article-title>
          .
          <source>CEUR-WS</source>
          . (
          <year>2020</year>
          ). Vol.
          <volume>2713</volume>
          .
          <fpage>315</fpage>
          -
          <lpage>333</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Yıldırım</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.H.</given-names>
            <surname>Bostancı</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.Ç.</given-names>
            <surname>Yıldırım</surname>
          </string-name>
          .
          <article-title>Parameters for the Study of Climate Refugees</article-title>
          . In:
          <string-name>
            <given-names>P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Yadav</surname>
          </string-name>
          (eds)
          <source>Global Climate Change and Environmental Refugees</source>
          . Springer, Cham. (
          <year>2023</year>
          ). doi:10.1007/978-3-031-24833-7_11.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C. O.</given-names>
            <surname>de Burgh-Day</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Leeuwenburg</surname>
          </string-name>
          .
          <article-title>Machine learning for numerical weather and climate modelling: a review</article-title>
          .
          <source>Geosci. Model Dev.</source>
          ,
          <volume>16</volume>
          ,
          <fpage>6433</fpage>
          -
          <lpage>6477</lpage>
          , (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Yang</surname>
          </string-name>
          .
          <article-title>Machine Learning Methods in Weather and Climate Applications: A Survey</article-title>
          .
          <source>Appl. Sci.</source>
          (
          <year>2023</year>
          ),
          <volume>13</volume>
          , 12019. doi:10.3390/app132112019.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hovorushchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Alekseiko</surname>
          </string-name>
          .
          <article-title>Land surface temperature forecasting in the context of the development of sustainable cities and communities</article-title>
          .
          <source>Computer Systems and Information Technologies</source>
          ,
          <volume>3</volume>
          , (
          <year>2024</year>
          ).
          <fpage>6</fpage>
          -
          <lpage>12</lpage>
          . doi:10.31891/csit-2024-3-1.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Fister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pérez-Aracil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Peláez-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Salcedo-Sanz</surname>
          </string-name>
          .
          <article-title>Accurate long-term air temperature prediction with Machine Learning models and data reduction techniques</article-title>
          ,
          <source>Applied Soft Computing</source>
          , Volume
          <volume>136</volume>
          , (
          <year>2023</year>
          ). 110118, ISSN 1568-4946.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Jamei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Karbasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Malik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Chu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z. M.</given-names>
            <surname>Yaseen</surname>
          </string-name>
          ,
          <article-title>A novel global solar exposure forecasting model based on air temperature: Designing a new multi-processing ensemble deep</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>