<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Online Learning for Energy Consumption Forecasting in Heavy-Duty Electric Vehicles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yuantao Fan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhenkan Wang</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sławomir Nowaczyk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Center for Applied Intelligent Systems Research (CAISR), Halmstad University</institution>
          ,
          <addr-line>Kristian IV:s väg 3, 301 18 Halmstad</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Volvo Group Trucks Technology and Industrial Division</institution>
          ,
          <addr-line>Götaverksgatan 10, 417 55 Göteborg, Göteborg</addr-line>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Accurate forecasting of auxiliary energy consumption in heavy-duty battery electric vehicles (HD-BEVs) is critical for energy-efficient operation, route planning, and charging optimization. However, varying driving conditions, payloads, and environmental factors cause concept drift, reducing the reliability of static, offline-trained models. To address this challenge, this paper presents a tailor-made online learning framework capable of continuously updating forecasting models in real time as new trip data becomes available. The proposed approach adapts to shifting consumption patterns while remaining computationally efficient for on-vehicle edge deployment. Experimental results on real-world HD-BEV data show that the proposed online learning strategy significantly outperforms batch models in both accuracy and adaptability. Moreover, the proposed strategy achieves an effective trade-off between performance and computational cost, making it well-suited for real-time deployment in resource-limited environments.</p>
      </abstract>
      <kwd-group>
        <kwd>Online Learning</kwd>
        <kwd>Energy Consumption Forecasting</kwd>
        <kwd>Heavy-duty Battery Electric Trucks</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Optimizing the operation of electric commercial heavy-duty vehicles requires accurate energy
consumption prediction in order to, for example, plan the route in the most efficient way and determine the
best time and location for charging. Commercial vehicles operate across a wide range of scenarios and
perform diverse types of tasks. In transportation operations, variations in ambient conditions, driver
behavior, and route characteristics influence not only vehicle performance but also the internal dynamics
of the driveline and auxiliary systems. The number of factors impacting key subsystems such as heaters,
air compressors, and energy converters is extensive and often highly variable. Given this high degree of
variability, AI systems designed to forecast energy consumption must be capable of adapting to dynamic
and evolving conditions. This requires robust handling of concept drift, where the statistical properties
of the features, or the relationships between input features and the target variable, change over time or
across different sub-populations. To maintain forecasting accuracy in the presence of evolving data
distributions, models must either be continuously updated through adaptive mechanisms or designed
to be inherently robust to such heterogeneity. Moreover, delayed label availability often postpones the
online learning process, particularly when longer forecasting horizons are employed, thereby hindering
timely model adaptation. Last but not least, on-board computational resources are limited, which
constrains the frequency and complexity of model updates that can be executed in real time. These
challenges highlight the necessity for an adaptive, cost-efficient online learning framework capable of
maintaining forecasting accuracy under practical resource constraints.</p>
      <p>It is of interest to apply and evaluate the performance of online learning algorithms across various
forecasting scenarios, particularly under conditions involving delayed label availability, where the
ground truth is not immediately available for incremental model updates. In industrial applications
involving fleets of equipment, multi-stream learning presents a promising approach, where models are
trained on multiple data streams, from different units or trip segments, and optimized for forecasting
on a specific piece of equipment. Last but not least, establishing a benchmark framework to compare the
performance of batch versus online learners in this domain is of great interest for guiding the
selection and deployment of such online learning systems in real-world energy consumption
forecasting applications.</p>
      <p>This paper investigates online learning configurations and strategies for forecasting auxiliary energy
consumption in HD-BEVs. The contributions of this paper include: i) a tailor-made online
learning strategy for forecasting auxiliary energy consumption in HD-BEVs, which adaptively updates
the model based on the streaming observations while accounting for the computational constraints
of running on, e.g., edge devices; and ii) an evaluation and comparison of multiple online learning
configurations in terms of predictive accuracy and computational efficiency, using a
real-world HD-BEV dataset collected from in-service operations. Moreover, several promising research
directions are proposed for forecasting energy consumption under resource-constrained conditions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Online time series forecasting [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1, 2, 3, 4</xref>
        ] is challenging due to streaming data and concept drift. Deep
neural network based forecasters struggle to adapt to evolving environments while retaining prior
knowledge [
        <xref ref-type="bibr" rid="ref3 ref4">3, 4</xref>
        ]. FSNet [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] addresses this by using per-layer adapters, which map contexts to
transformation coefficients stored in associative memory for fast adaptation. Recent work emphasizes exploiting
both temporal and cross-variable dependencies under a unified framework [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. OneNet [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] builds
independent models for these dependencies and combines them using Online Convex Programming (OCP),
leveraging exponential gradient descent for long-term history and offline reinforcement learning for
short-term adjustment. Empirical evaluations show OneNet outperforms FSNet and Informer [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] in
online multivariate time series forecasting [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
        Streaming regressors are also used for data stream forecasting [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Many methods are discussed
in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. The Hoeffding Tree Regressor (HTR) adapts the Hoeffding Tree (HT) for regression using Hoeffding
bounds [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] and ADaptive sliding WINdow (ADWIN)[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] for drift detection. Fast Incremental Regression
Trees with Drift Detection (FIRT-DD) uses variance-reduction split criteria and the Page-Hinckley
test[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] for drift detection. In the above methods, upon drift detection, a background tree replaces
the foreground tree. Ensemble methods such as Adaptive Random Forest Regressor (ArfReg)[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] and
Self-Optimising K-Nearest Leaves (SOKNL)[
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] outperform single-tree models [13]. SOKNL combines
k-Nearest Neighbors and ArfReg. Only the k trees with the smallest distances between the input
instance and the centroid of the relevant leaf are used for the prediction. Both methods use FIRT-DD as
base learners. ArfReg uses ADWIN for drift detection and adaptation, while SOKNL contains explicit
adaptation via centroid updates [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. It is of interest to build a tailor-made online learning strategy for
HD-BEVs, with lightweight and generic models (e.g., neural networks of limited size) that can be
configured and deployed on edge devices mounted on-board vehicles.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Formulation</title>
      <p>In this study, we consider the task of energy consumption forecasting for EVs using online learning.
Let X ⊆ R^d denote the input feature space (e.g., vehicle speed, ambient temperature, payload, etc.), and
Y ⊆ R represent energy consumption or power output over a prediction horizon H.</p>
      <p>A neural network parameterized by θ is used as the predictive model (regressor) f_θ : X → R, mapping
the incoming streaming sample to the energy consumption in a future period of time H. Each trip is
indexed by j = 1, . . . , J, and provides a sequence of time-ordered samples D_j = {(x_{j,t}, y_{j,t})}_{t=1}^{T_j},
where x_{j,t} ∈ X and y_{j,t} ∈ Y denote the input feature vector and the observed energy consumption
at time step t. The objective is to design an online learning strategy that incrementally updates the
regression model f_θ as streaming observations arrive over time. The model parameters are updated
based on a loss function ℓ(ŷ, y) computed using the true label y_{t+H}, which becomes available after
time t + H.</p>
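      <p>The delayed-label setting described above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' implementation: the horizon value, the constant stand-in predictor, and the toy stream are all assumptions made for the example.</p>

```python
from collections import deque

# Minimal sketch (not the paper's implementation) of the delayed-label
# setting: a prediction made at time t can only be scored once the true
# consumption y_{t+H} arrives, H steps later.
H = 3  # forecasting horizon (illustrative value)

pending = deque()  # predictions awaiting their ground truth
losses = []        # squared errors, computed once labels arrive

def predict(x):
    """Stand-in for the regressor f_theta; here a trivial constant model."""
    return 0.5

stream = [(t, float(t) % 2) for t in range(10)]  # toy (x_t, y_t) stream

for t, (x, y) in enumerate(stream):
    y_hat = predict(x)
    pending.append((t, y_hat))  # label y_{t+H} is not yet available
    # the label observed now resolves the prediction made H steps ago
    if pending and pending[0][0] == t - H:
        _, old_pred = pending.popleft()
        losses.append((y - old_pred) ** 2)  # loss l(y_hat, y) drives the update

print(len(losses))  # 10 - H = 7 predictions could be scored
```

In an actual deployment, each resolved loss would trigger a gradient step on the model; here the point is only that the last H predictions of every trip remain unscored, which is what postpones the online learning process.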
    </sec>
    <sec id="sec-4">
      <title>4. Online Learning for Energy Consumption Forecasting</title>
      <p>In this study, we develop an online learning strategy specifically tailored for forecasting the energy
consumption of auxiliary systems in HD-BEVs. Empirical observations indicate that, for most trips,
the auxiliary energy consumption tends to be higher and more uncertain during the initial phase of
the journey, before stabilizing as the trip progresses. The proposed energy consumption forecasting
framework consists of the following stages: i) training an initial model f_θ in a batch setting using
historical datasets collected from in-service vehicle operations; ii) employing f_θ to forecast the auxiliary
energy consumption over a future prediction horizon H; and iii) computing the loss ℓ(ŷ, y) once the
true label becomes available and subsequently updating the regressor f_θ in an adaptive manner.</p>
      <p>The proposed adaptive online learning strategy, specifically designed for this application, is based
on the following mechanisms/principles: i) training activation: learning is triggered for E epochs
whenever the prediction error exceeds a predefined threshold τ, which is determined empirically by
selecting a value corresponding to a certain upper percentile of the distribution of historical training
losses; ii) adaptive learning rate: a weighting function w(n) (see candidate functions in Figure 1) is
applied to adjust the learning rate dynamically according to η_n = η_0 · w(n), where η_0 denotes the
initial learning rate and n represents the number of instances since the learning mechanism was last
triggered; iii) enhanced learning in the early stage: during the initial phase of each trip, when the auxiliary
systems exhibit higher and more variable energy consumption due to vehicle initialization, the number
of training epochs E is doubled to promote rapid model adaptation; and iv) stabilized learning phase:
once the vehicle enters a more stable consumption regime, the model is updated less frequently, using a
buffer containing the most recent B samples to maintain adaptation while reducing computational cost.
Pseudo-code of the proposed learning strategy is available in Algorithm 1.</p>
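      <p>The decision logic behind mechanisms i) to iii) can be sketched as follows (Algorithm 1 itself is not reproduced here). The threshold, base epoch count, decay function, and early-stage length are placeholder choices for illustration; only the initial learning rate of 0.008 is taken from the experimental setup.</p>

```python
import math

# Illustrative sketch of the training-activation and adaptive-learning-rate
# mechanisms; tau, E, EARLY, and w(n) are placeholder choices, not the
# paper's values.
tau = 0.25    # error threshold triggering training (mechanism i)
E = 5         # base number of training epochs
eta0 = 0.008  # initial learning rate (the tuned value from the experiments)
EARLY = 20    # assumed "early stage" length in instances (mechanism iii)

def w(n):
    # one candidate weighting function: exponential decay with the number
    # of instances since training was last triggered (mechanism ii)
    return math.exp(-0.1 * n)

def plan_update(t, loss, since_trigger):
    """Return (epochs, lr) for this step, or None if no training occurs."""
    if loss <= tau:
        return None                     # training activation not met
    epochs = 2 * E if t < EARLY else E  # doubled epochs early in the trip
    lr = eta0 * w(since_trigger)        # adaptive learning rate eta_n
    return epochs, lr

print(plan_update(5, 0.9, 0))    # early stage: (10, 0.008)
print(plan_update(50, 0.9, 10))  # stable phase: fewer epochs, decayed lr
print(plan_update(50, 0.1, 0))   # error below threshold: None
```

Mechanism iv) would sit around this logic: in the stable phase, updates are issued only every few samples and are computed over a small buffer of the most recent observations rather than a single instance.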
      <p>Prequential evaluation was employed to assess and validate the performance of the proposed online
learning strategies. Standard regression metrics, including the mean absolute error (MAE) and mean
squared error (MSE), were utilized for quantitative comparison. Furthermore, given that the overall
objective is to estimate the vehicle’s total auxiliary energy consumption for the remainder of each trip,
the accumulated error (AccErr) metric was introduced to evaluate deviations in aggregated consumption
across the entire trip, defined as AccErr = (1/|J|) ∑_{j∈J} | ∑_{t=1}^{T_j} y_{j,t} − ∑_{t=1}^{T_j} ŷ_{j,t} |,
where T_j denotes the number of instances within the j-th trip. Finally, the cost-efficiency of the proposed online learning
framework is analyzed by examining the trade-off between predictive accuracy and computational
resource usage, e.g., the CPU time required.</p>
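      <p>The accumulated-error metric, the mean over trips of the absolute difference between the summed true and summed predicted consumption, takes only a few lines; the two toy trips below are illustrative values.</p>

```python
def acc_err(trips):
    """Accumulated error: mean over trips of |sum(y_true) - sum(y_pred)|.

    `trips` is a list of (y_true, y_pred) pairs of sequences, one per trip.
    """
    total = 0.0
    for y_true, y_pred in trips:
        total += abs(sum(y_true) - sum(y_pred))
    return total / len(trips)

# toy example: two trips with known per-trip deviations
trips = [
    ([1.0, 2.0, 3.0], [1.5, 2.0, 2.0]),  # |6.0 - 5.5| = 0.5
    ([2.0, 2.0],      [1.0, 2.0]),       # |4.0 - 3.0| = 1.0
]
print(acc_err(trips))  # (0.5 + 1.0) / 2 = 0.75
```

Unlike MAE, pointwise over- and under-predictions cancel inside a trip here, which is exactly what makes the metric suitable for judging the remaining-trip energy budget.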
      <p>[Figure 1: candidate weighting functions for the adaptive learning rate.]</p>
    </sec>
    <sec id="sec-5">
      <title>5. Experiment Results</title>
      <p>For the experiments conducted in this exploratory study, a four-layer feedforward neural network
was employed for the forecasting task, consisting of hidden layers with 128, 64, 32, and 16 neurons,
respectively. MSE was employed as the loss function for training the network, and the Adam optimizer
was selected with a learning rate of 0.008, determined through hyper-parameter tuning. The model
was implemented in PyTorch, and all experiments were executed on a server equipped with
an NVIDIA Tesla V100.</p>
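      <p>The network described above can be sketched in PyTorch as follows. The hidden-layer sizes, MSE loss, Adam optimizer, and learning rate follow the text; the input dimension, ReLU activations, and single-step output are illustrative assumptions.</p>

```python
import torch
import torch.nn as nn

# Sketch of the four-hidden-layer forecaster described above; the input
# dimension (10) and output size (1) are illustrative assumptions.
class AuxEnergyForecaster(nn.Module):
    def __init__(self, n_features: int = 10, n_outputs: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, n_outputs),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = AuxEnergyForecaster()
optimizer = torch.optim.Adam(model.parameters(), lr=0.008)  # tuned lr from the text
loss_fn = nn.MSELoss()

# one incremental update on a single streaming sample
x = torch.randn(1, 10)
y = torch.randn(1, 1)
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(model(x).shape)  # torch.Size([1, 1])
```

In the online setting, this single-sample update step is what gets repeated per instance (or per buffer) according to the strategy of Section 4.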
      <sec id="sec-5-1">
        <title>5.1. HD-BEVs Dataset</title>
        <p>The dataset consists of signals transmitted through the controller area network within a heavy-duty
battery electric truck, including parameters such as speed, acceleration, road inclination and other
vehicle operating signals, from its delivery operations over a few weeks. The truck has a battery capacity of
540 kWh (comprising 6 modules), based on Lithium Nickel-Cobalt-Aluminum Oxide (NCA) technology.
The time-series data were segmented into trips from in-service delivery tasks, based on expert-suggested
rules. The segmented trips represent individual journeys partitioned into distinct segments resulting
from interruptions such as vehicle stops, route alterations, or discontinuities in the data logging process.
Only segments corresponding to the driving mode are retained for analysis, defined as periods during
which the vehicle’s batteries are discharging and it is not connected to an external charging source.
</p>
        <p>[Figure 2: MAE versus the number of instances in each trip.]</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Comparison Forecasting Sequences</title>
        <p>Figure 3 presents a comparison of forecasting sequences obtained under different online learning
settings from four trips, demonstrating how online training epochs and strategies influence the model’s
ability to learn and generalize in dynamic conditions. Figures 3(a), 3(c), 3(e), and 3(g) illustrate the
performance of online learning when trained with varying numbers of epochs per update step (e.g., 1,
5, 10, and 100). Across all trips, the general trend is that increasing the number of epochs resulted
in smoother and more stable predictions compared to the batch-trained model. The first three trips (i.e., A to
C) exhibited high power consumption at the beginning of each trip. In these cases, the online learning
models were updated to capture the sharp initial variations, with some delay in the adaptation, whereas
the batch-trained model failed to produce sound forecasts and performed worse than the online
learning methods. Conversely, Trip D displays relatively low consumption at the start, a unique case
among the trips; the online methods were able to adapt within a few epochs, but the batch
model failed to produce reliable predictions. As observed in Figure 2, both the online learning methods
and the batch model produce higher errors over the first 20 instances, while the errors on subsequent
samples (in which the consumption stabilizes) were lower. Figures 3(b, d, f, h) illustrate the forecasting
sequences of different online learning configurations, including variations in buffer size, learning rate
adaptation, and exponential weighting. The results indicate that models updated with adaptive learning
strategies achieve better tracking of short-term variations in the consumption.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Performance Comparison</title>
        <p>Table 1 summarizes the performance of various online learning configurations in terms of MAE, MSE,
AccErr, and CPU time. The results show the trade-offs between predictive accuracy and computational
efficiency w.r.t. training and inference time.</p>
        <p>The baseline batch-trained model exhibits the highest MAE and MSE, indicating limited capability to
handle evolving data streams. In contrast, all online learning configurations achieve notable
improvements across all accuracy metrics, with MAE reductions of up to 30% compared to the batch model.
Increasing the number of training epochs generally enhances prediction accuracy but at significantly
higher CPU time. The experiments with buffer-based and adaptive learning rate strategies demonstrate
accuracy comparable to OL models with different training epochs but with much lower CPU times,
indicating better efficiency for real-time applications. The proposed ECF strategy (i.e., OL_ECF) allocates
more computational resources for online learning during the initial phase; afterwards, training was
carried out for every four incoming samples, with a reduced number of iterations and a small buffer
of four samples.</p>
        <p>[Figure 3: power of auxiliary systems, true values (True) versus batch model predictions (Batch Model Pred.) and online learning variants (OL_ep50, OL_ep200, OL_ep5_buffer8, OL_ep5_lr_const, OL_ep10_rl_exp, OL_adapt), across trips; panels include (g) Trip D: online learning with different epochs and (h) Trip S: online learning with different settings. Online learning consistently improves the forecasting performance.]</p>
        <p>Among all tested variants, the proposed approach achieves a balanced performance,
with low MAE, MSE, and AccErr, and moderate computational cost (23.52 ± 0.95 s), significantly lower
than the OL model trained with 10 epochs per instance. Figure 4 illustrates the trade-off between accuracy
(MAE) and computational cost (CPU time) across different online learning configurations. The results
show a general trend where increasing the number of training epochs improves accuracy but at the expense of
higher computation time. The proposed approach (i.e., OL_ECF) attains good accuracy while being
computationally efficient compared to other configurations.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Discussion</title>
        <p>The online learning strategy achieves its best performance when the model is trained for at least 50
epochs and updated at every time step, resulting in a computational cost of more than 158 CPU time
units. However, reducing the number of epochs to 5 and applying a learning rate decay function
decreases the computational cost to approximately 20 CPU time units, with only a slight increase of
about 3% in MAE. Therefore, the proposed adaptive online learning strategy provides a substantial
reduction in computation time with minimal loss in accuracy. Considering the trade-off between
training time, computational resources, and accuracy improvement, our online learning strategy is to
use 5 epochs for online training after initialization of the vehicle, while employing 50 epochs
during the initial 10–20 time steps (or instances) to ensure adequate model adaptation at
the beginning of operation.</p>
      </sec>
      <sec id="sec-5-5">
        <title>5.5. Conclusion and Future Work</title>
        <p>This paper presented an exploratory study on online learning strategies for forecasting auxiliary energy
consumption in HD-BEVs. The proposed framework enables adaptive model updates in response to
streaming data, while accounting for the computational constraints of onboard edge devices. The
experimental results demonstrated that the proposed online learning models substantially outperform
the batch-trained baseline in both mean absolute error and adaptability across multiple driving trips.</p>
        <p>Future research includes the following directions: i) developing adaptive mechanisms that dynamically
adjust the training schedule (including model update frequency, buffer sampling strategies, and
weighting functions) based on learning efficiency and data variability; ii) exploring alternative time-series
forecasting models and advanced online learning techniques to further enhance predictive performance
and adaptability; and iii) investigating hybrid online–offline learning paradigms that support long-term
knowledge retention while maintaining rapid responsiveness to short-term operational dynamics.</p>
      </sec>
      <sec id="sec-5-6">
        <title>Author Declaration on GenAI</title>
        <p>During the preparation of this work, the authors used ChatGPT and Grammarly in order to: check grammar
and spelling, and paraphrase and reword. After using these tools/services, the authors reviewed and edited
the content as needed and take full responsibility for the publication’s content.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>O.</given-names>
            <surname>Anava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hazan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mannor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Shamir</surname>
          </string-name>
          ,
          <article-title>Online learning for time series prediction</article-title>
          ,
          <source>in: Conference on learning theory, PMLR</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>172</fpage>
          -
          <lpage>184</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. C.</given-names>
            <surname>Hoi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Online arima algorithms for time series prediction</article-title>
          ,
          <source>in: Proceedings of the AAAI conference on artificial intelligence</source>
          , volume
          <volume>30</volume>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Sahoo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hoi</surname>
          </string-name>
          ,
          <article-title>Learning fast and slow for online time series forecasting</article-title>
          ,
          <source>in: The Eleventh International Conference on Learning Representations</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Tan</surname>
          </string-name>
          , et al.,
          <article-title>Onenet: Enhancing time series forecasting models under concept drift by online ensembling</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>36</volume>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Informer: Beyond efficient transformer for long sequence time-series forecasting</article-title>
          , in: AAAI, volume
          <volume>35</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>11106</fpage>
          -
          <lpage>11115</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pfahringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <article-title>Real-time energy pricing in new zealand: An evolving stream analysis</article-title>
          ,
          <source>arXiv preprint arXiv:2408.16187</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Choudhary</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bharill</surname>
          </string-name>
          ,
          <article-title>A brief survey on concept drifted data stream regression</article-title>
          ,
          <source>SocProS</source>
          (
          <year>2021</year>
          )
          <fpage>733</fpage>
          -
          <lpage>744</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W.</given-names>
            <surname>Hoeffding</surname>
          </string-name>
          ,
          <article-title>Probability inequalities for sums of bounded random variables</article-title>
          ,
          <source>The collected works of Wassily Hoeffding</source>
          (
          <year>1994</year>
          )
          <fpage>409</fpage>
          -
          <lpage>426</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gavalda</surname>
          </string-name>
          ,
          <article-title>Learning from time-changing data with adaptive windowing</article-title>
          ,
          <source>in: SIAM International Conference on Data Mining (SDM), SIAM</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>443</fpage>
          -
          <lpage>448</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>E.</given-names>
            <surname>Ikonomovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Džeroski</surname>
          </string-name>
          ,
          <article-title>Learning model trees from evolving data streams</article-title>
          ,
          <source>Data Min. Knowl. Discov</source>
          .
          <volume>23</volume>
          (
          <year>2011</year>
          )
          <fpage>128</fpage>
          -
          <lpage>168</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>Barddal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. E. B.</given-names>
            <surname>Ferreira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <article-title>Adaptive random forests for data stream regression</article-title>
          ,
          <source>in: ESANN</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pfahringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <article-title>Soknl: A novel way of integrating k-nearest neighbours with adaptive random forest regression for data streams</article-title>
          ,
          <source>Data Min. Knowl. Discov</source>
          .
          <volume>36</volume>
          (
          <year>2022</year>
          )
          <fpage>2006</fpage>
          -
          <lpage>2032</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>H. M.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Montiel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Mastelini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Pfahringer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Bifet</surname>
          </string-name>
          ,
          <article-title>On ensemble techniques for data stream regression</article-title>
          ,
          <source>in: 2020 International Joint Conference on Neural Networks (IJCNN), IEEE</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>