<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evaluating Multi-task Curriculum Learning for Forecasting Energy Consumption in Electric Heavy-duty Vehicles</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yuantao Fan</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sławomir Nowaczyk</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhenkan Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sepideh Pashami</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Volvo Group</institution>
          ,
          <addr-line>Gropegårdsgatan 2, 417 15 Göteborg</addr-line>
          ,
          <country>Sweden</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Research Institutes of Sweden (RISE)</institution>
          ,
          <addr-line>Isafjordsgatan 28 A, 164 40 Kista</addr-line>
          ,
          <country>Sweden</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Center for Applied Intelligent Systems Research (CAISR), Halmstad University</institution>
          ,
          <addr-line>Kristian IV:s väg 3, 301 18 Halmstad</addr-line>
          ,
          <country>Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Accurate energy consumption prediction is crucial for optimising the operation of electric commercial heavy-duty vehicles, particularly for efficient route planning, refining charging strategies, and ensuring optimal truck configuration for specific tasks. This study investigates the application of multi-task curriculum learning to enhance machine learning models for forecasting the energy consumption of various onboard systems in electric vehicles. Multi-task learning, unlike traditional training approaches, leverages auxiliary tasks to provide additional training signals, which has been shown to enhance predictive performance in many domains. By further incorporating curriculum learning, where simpler tasks are learned before progressing to more complex ones, neural network training becomes more efficient and effective. We evaluate the suitability of these methodologies in the context of electric vehicle energy forecasting, examining whether the combination of multi-task learning and curriculum learning enhances algorithm generalisation, even with limited training data. We primarily focus on understanding the efficacy of different curriculum learning strategies, including sequential learning and progressive continual learning, using complex, real-world industrial data. Our research further explores a set of auxiliary tasks designed to facilitate the learning process by targeting key consumption characteristics projected into future time frames. The findings illustrate the potential of multi-task curriculum learning to advance energy consumption forecasting, significantly contributing to the optimisation of electric heavy-duty vehicle operations. This work offers a novel perspective on integrating advanced machine learning techniques to enhance energy efficiency in the exciting field of electromobility.</p>
      </abstract>
      <kwd-group>
        <kwd>Energy Consumption Forecasting</kwd>
        <kwd>Curriculum Learning</kwd>
        <kwd>Multi-task Learning</kwd>
        <kwd>Electric Vehicles</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Predicting energy consumption for electric vehicles (EVs), especially those used in commercial
heavy-duty contexts, is paramount for improving their operational efficiency and promoting
sustainability. Effective energy consumption forecasts are indispensable for strategic route
planning, optimising charging protocols, and ensuring that vehicle configurations align well
with specific operational demands. As electric vehicles gain traction as a viable and
eco-friendly alternative to internal combustion engine vehicles, the importance of precise energy
consumption predictions becomes increasingly pronounced. The challenges in this domain
are multifaceted, stemming from the inherent variability in driving conditions, vehicle load,
and diverse environmental factors, which collectively complicate the development of accurate
predictive models. Overcoming these obstacles is essential not only for enhancing the reliability
and performance of EVs but also for minimising operational costs and boosting the overall
efficiency of electric transport systems.</p>
      <p>The transition to electric vehicles is a significant step towards reducing greenhouse gas
emissions and achieving sustainable transportation goals. However, since limited energy storage
puts unique constraints on which operations are feasible, the benefits of EVs can only be fully
realised through the development of dedicated forecasting methods that accurately anticipate
energy needs. In this context, artificial intelligence (AI) and machine learning (ML) emerge as
transformative tools. AI-driven models can analyse vast amounts of data to uncover patterns
and relationships that are not immediately apparent, providing more accurate and reliable
energy consumption forecasts. These models can adapt to new data, continuously improving
their predictions over time.</p>
      <p>Nevertheless, energy consumption forecasting for EVs faces critical challenges, such as
dynamic driving conditions and fluctuating loads, which make even state-of-the-art methods
struggle to handle complex real-world data effectively. While the ability to learn from
historical data and identify trends that influence energy consumption is the biggest strength
of ML-based approaches, it is crucial to develop robust models that generalise well across
different scenarios and vehicle types.</p>
      <p>The complexity and variability inherent in forecasting energy consumption for electric
vehicles make it a relevant testing ground for cutting-edge modelling techniques that promise
to handle diverse and dynamic data inputs. In particular, Multi-Task Learning (MTL) presents a
compelling solution by enabling simultaneous training across multiple related tasks, thereby
leveraging shared information to improve the predictive performance of each task; in contrast,
training machine learning models in the traditional setting utilises only the target task. MTL is
particularly beneficial in scenarios with limited training data, as it enhances generalisation by
incorporating auxiliary tasks that provide additional training signals. Moreover, the efficacy of
MTL can be further amplified by integrating curriculum learning (CL), which structures the
learning process in a progressive manner. Curriculum learning organises tasks from simple to
complex, allowing the model to build a robust foundation before tackling more challenging
problems. By combining these methodologies into multi-task curriculum learning (MCL),
we can efficiently train neural networks that not only perform better on individual tasks
but also generalise more effectively across different contexts. MCL optimises the learning
trajectory, ensuring that simpler tasks enhance the model’s capability to learn more complex
ones, ultimately leading to more accurate and reliable energy consumption forecasts for electric
heavy-duty vehicles. This integrative approach has been shown to be a potent strategy for
addressing multifaceted challenges in several domains but has not previously been applied to EV auxiliary
energy forecasting. Thus, this paper aims to evaluate the suitability of MCL in this
real-world, complex scenario.</p>
      <p>Generating a set of auxiliary tasks is a critical step in the implementation of MCL – and
how to do it for forecasting energy consumption in EVs requires experimental evaluation. To
create auxiliary tasks, one must first obtain an understanding of the primary task, identifying
key factors and variables that influence energy consumption and the types of patterns that are
indicative of future behaviour. These factors often include vehicle load, driving speed, route
characteristics, weather conditions, and driver behaviour. Each of these variables can serve
as the basis for an auxiliary task. For instance, an auxiliary task might involve predicting the
impact of vehicle load on energy consumption under different traffic conditions or estimating
the effect of varying driving speeds on battery usage. Historical data from real-world vehicle
operations can be mined to extract relevant patterns and correlations, which can then be used to
define these auxiliary tasks. In this paper, we have decided to focus on the patterns within the
forecasted value itself instead of exploiting multivariate vehicle signals. In particular, we define
several types of energy consumption characteristics as targets for the auxiliary tasks, such as
questioning whether the consumption in the next time frame exceeds the global mean, whether
the consumption will be higher in the next time frame compared to the current consumption,
or predicting the consumption difference between the start and the end of the next time frame.
These tasks are general enough to be suitable for any forecasting task, while at the same
time being sufficiently closely related to the actual primary task to, hopefully, provide useful
information to boost the training process.</p>
      <p>The core contribution of this paper is the evaluation of several multi-task curriculum
learning techniques for forecasting the energy consumption of heavy-duty electric vehicles,
including the proposal to utilise key consumption characteristics as targets for generating
auxiliary tasks for MCL. A comparison of MCL variations, combining curriculum
learning strategies (sequential learning and progressive continual learning) with auxiliary tasks,
illustrates the improvements in performance on real-world data collected from the normal
operations of commercial transportation electric vehicles. The experimental results show that
progressive continual learning, with a logistic growth weighting function governing the learning
balance between the primary and the auxiliary task, achieves the best performance. The results
also show that the first auxiliary task is the most helpful task for subsystems 1 and 4, while the
third auxiliary task is the most helpful task for subsystems 2 and 3. Furthermore, it is observed
that MCL with the proposed auxiliary tasks can improve the learning efficiency of the model,
achieving faster convergence to a point beyond which the gain from further training is limited.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Curriculum learning enables the training of machine learning models in a meaningful order,
from easy samples to difficult and complex ones [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. A common approach to CL
introduces an easy-to-hard ordering of the samples in the training process, e.g., vanilla CL, self-paced
CL, balanced CL, etc. When multiple tasks are available, an easy-to-hard ordering of the tasks
to be learned can be applied as well. Multi-task learning shares information
across a set of related tasks in the training process, and performance can be further improved
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] via, e.g., GradNorm [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which balances the losses between multiple tasks. While most multi-task
learning approaches aim to learn multiple tasks simultaneously, progressive curriculum learning
allows determining the best order in which to learn multiple tasks to maximise the final result. The study [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]
presented by Pentina et al. finds the best order in which to learn a sequence of tasks, based on
a generalisation-bound criterion, to optimise the average expected classification performance
over all the tasks. The work [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] by Siahpour et al. introduced a penalty coefficient, as a function of
the epoch step, to govern the training process by suppressing the loss, and noise respectively,
from the domain discrimination task in the early stage, ensuring efficient training of neural
networks. Shi et al. proposed progressive contrastive learning [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] based on multiple prototypes
in the dataset: the training process is ordered to learn the centroid prototype first, followed
by the hard prototype, and finally the dynamic prototype. In this work, we explore sequential
learning and progressive continual learning with a set of auxiliary tasks generated from
key characteristics of the target signal.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Problem Formulation</title>
      <p>For a given primary learning task T_i, we create a set of auxiliary tasks A_{i,j}, where i corresponds
to the primary task (in our case, the forecasting of energy consumption for the i-th auxiliary
subsystem in an electric truck), and j corresponds to the j-th type of auxiliary task.</p>
      <p>The majority of multi-task learning studies aim to learn all relevant tasks together to
improve the performance for each task T_i. In our study, we are only interested in improving
the energy forecasting tasks T_i, not the generated auxiliary tasks A_{i,j}. All energy forecasting
tasks and auxiliary tasks are learned from the same dataset: multivariate time series sensor
readings collected from the normal operations of several heavy-duty electric vehicles.</p>
      <p>Let us denote the multivariate time series data of each vehicle v by X_v = {x_{t,d} | t =
1, 2, ..., T(v); d = 1, 2, ..., D}, where x_{t,d} is the value of the d-th feature for a given
vehicle/trajectory v at time t, and T(v) corresponds to the end of the recording. A subset of the features
reflects the energy consumption of subsystem i at time t. The target energy consumption
y_{t,i} over a future time frame t_h can be approximated by summing up the energy consumed over
this time frame, y_{t,i} = Σ_{τ ∈ [t, t + t_h]} P_i(τ) · Δτ, where P_i(τ) is the power consumption at time
τ, and Δτ is the time interval between two samples.</p>
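      <p>As a minimal sketch of this target computation (function and variable names are illustrative assumptions, not taken from the paper), the future-frame energy can be approximated by summing power readings multiplied by the sampling interval:</p>

```python
def frame_energy(power, dt, start, horizon):
    """Approximate energy over a future frame by summing power * interval.

    power: sequence of power readings (kW); dt: sampling interval (hours);
    start: index where the frame begins; horizon: number of samples in it.
    """
    return sum(p * dt for p in power[start:start + horizon])

# Example: a constant 5 kW draw over a 10-minute frame sampled once a minute.
ten_min_energy = frame_energy([5.0] * 60, dt=1 / 60, start=0, horizon=10)
```

      <p>With these illustrative values, the sum reduces to 5 kW · (10/60) h ≈ 0.83 kWh.</p>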
      <p>In this study, we set t_h equal to 10 minutes. For a given forecasting task T_i, a regression model
f_i(·) is trained together with one of the auxiliary tasks A_{i,j} to estimate the consumption y_{t,i}. In this
study, neural networks with a shared feature extractor and multiple heads, each corresponding to
one task, were trained under different settings and evaluated for their performance after 200
training epochs. We explore different multi-task curriculum learning settings and auxiliary
tasks for forecasting energy consumption. The MCL methods were compared to the traditional
single-task approach.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Method</title>
      <sec id="sec-4-1">
        <title>4.1. Auxiliary Tasks</title>
        <p>For a given regression task T_i (forecasting energy consumption for one of the subsystems), a set
of auxiliary tasks was generated to assist the learning progress. We explore the use of five types
of consumption characteristics as targets for creating the auxiliary tasks: i) A_{i,1}: classifying
whether the consumption in the next time frame exceeds the global mean for subsystem i;
ii) A_{i,2}: classifying whether the consumption will increase in the next time frame, compared with
the current consumption; iii) A_{i,3}: classifying whether the consumption at the end of the next
time frame exceeds that at its start; iv) A_{i,4}: predicting the consumption difference between
the start and the end of the next time frame; v) A_{i,5}: predicting the difference between the peak
consumption and the lowest consumption in the next time frame. The first three auxiliary tasks
are classification tasks; the other two are regression tasks. Learning to predict these key
consumption characteristics in the auxiliary tasks A_{i,j}, along with the primary tasks T_i, under
MCL, is evaluated for its usefulness.</p>
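        <p>A minimal sketch of how the five auxiliary targets could be derived from a per-subsystem consumption series follows; the exact frame-level aggregation and all names are assumptions made for illustration:</p>

```python
def auxiliary_targets(frame, current, global_mean):
    """Derive the five auxiliary targets from the next-frame consumption.

    frame: consumption values over the next time frame; current: consumption
    now; global_mean: global mean consumption for this subsystem.
    """
    total = sum(frame)
    return {
        "a1": int(total > global_mean),   # exceeds the subsystem's global mean
        "a2": int(total > current),       # increases vs. current consumption
        "a3": int(frame[-1] > frame[0]),  # frame end exceeds frame start
        "a4": frame[-1] - frame[0],       # start-to-end difference (regression)
        "a5": max(frame) - min(frame),    # peak-to-trough difference (regression)
    }
```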
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Network Architecture</title>
        <p>The regression model evaluated for MCL in this study builds on a multi-layer perceptron. The
model comprises a shared feature extractor and two heads: one head carries out the main
task T_i, and the other corresponds to one of the five auxiliary tasks A_{i,j}. The network architecture
is illustrated in Figure 1. For auxiliary tasks that are classification tasks, a sigmoid function was
applied to the output of the corresponding head.</p>
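        <p>The described architecture can be sketched in PyTorch as follows; the hidden-layer sizes and class name are assumptions, as the paper does not specify them:</p>

```python
import torch
import torch.nn as nn

class TwoHeadMLP(nn.Module):
    """Shared MLP feature extractor with a primary regression head and one
    auxiliary head; the auxiliary output is sigmoid-activated when the
    auxiliary task is a classification task."""

    def __init__(self, n_features, hidden=64, aux_is_classification=True):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.primary_head = nn.Linear(hidden, 1)  # energy forecast
        self.aux_head = nn.Linear(hidden, 1)      # auxiliary target
        self.aux_is_classification = aux_is_classification

    def forward(self, x):
        z = self.shared(x)
        aux = self.aux_head(z)
        if self.aux_is_classification:
            aux = torch.sigmoid(aux)
        return self.primary_head(z), aux
```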
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Curriculum Learning Strategy</title>
        <p>The two curriculum learning strategies evaluated in this work are sequential learning (SeqL)
and progressive continual learning (PCL). The overall optimisation loss can be defined as:
ℒ = α_e ℒ_p + (1 − α_e) ℒ_a (1)
where ℒ_p denotes the loss for the primary task, while ℒ_a denotes the loss for the auxiliary
task. The SeqL employed imposes a fixed ordering of the tasks, e.g. learning the auxiliary task
first, before a predetermined epoch, and the primary task afterwards:
α_e = 0 if e &lt; e_s, and α_e = 1 if e ≥ e_s (2)
where e is the current training epoch, and e_s is the epoch predetermined to
switch to the other task.</p>
        <p>The PCL employs a weighting mechanism, a function of the training epoch, to govern the
learning process and gradually increase the weight on the loss corresponding to the primary
task:
α_e = 2 / (1 + exp(−γ · 10e/E)) − 1 (3)
where γ is a coefficient governing the change rate (see Figure 2 for an illustration), e is the
current training epoch, and E is the total number of training epochs. The two curriculum
learning strategies were compared with MTL without any special curriculum learning and with
learning only on the primary task T_i.</p>
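        <p>The two weighting schemes can be sketched as follows. The step weight for SeqL follows directly from the text; the logistic form used for PCL is a reconstruction consistent with the description (a coefficient γ governing the change rate over the training epochs), not a verbatim copy of the paper's formula:</p>

```python
import math

def alpha_seql(epoch, switch_epoch):
    """SeqL: weight on the primary loss is 0 before the switch epoch, 1 after."""
    return 0.0 if epoch < switch_epoch else 1.0

def alpha_pcl(epoch, total_epochs, gamma):
    """PCL: weight grows from 0 towards 1 along a logistic curve; gamma
    governs the change rate (reconstructed form, an assumption)."""
    return 2.0 / (1.0 + math.exp(-gamma * 10.0 * epoch / total_epochs)) - 1.0

def combined_loss(loss_primary, loss_aux, alpha):
    """Overall loss: alpha * primary + (1 - alpha) * auxiliary."""
    return alpha * loss_primary + (1.0 - alpha) * loss_aux
```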
        <p>
          The two evaluation criteria in this study are (i) the test loss (mean absolute error, MAE) after
training has converged, i.e., after E epochs, and (ii) whether the proposed learning strategy achieves a
faster convergence time, i.e., the epoch at which the test loss has reached a saturation point (no
further significant decrease in the loss afterwards). In each case, different variants of MCL are
compared against a learning process without any multi-task curriculum learning. The saturation
point is detected using a knee-point detection algorithm [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ], proposed by Satopaa et al.
        </p>
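        <p>As a rough illustration of saturation-point detection (a simplified stand-in for the Kneedle algorithm of Satopaa et al., not the published method), one can pick the epoch whose normalised loss lies farthest below the straight line joining the first and last points of the loss curve:</p>

```python
def knee_epoch(losses):
    """Return the index of the approximate knee of a decreasing loss curve."""
    n = len(losses)
    lo, hi = min(losses), max(losses)
    span = (hi - lo) or 1.0
    norm = [(y - lo) / span for y in losses]        # normalise losses to [0, 1]
    best_i, best_d = 0, float("-inf")
    for i, y in enumerate(norm):
        x = i / (n - 1)                             # normalised epoch
        chord = norm[0] + x * (norm[-1] - norm[0])  # line joining the endpoints
        if chord - y > best_d:                      # farthest below the chord
            best_i, best_d = i, chord - y
    return best_i
```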
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiment Result</title>
      <p>The energy consumption dataset was collected from several electric trucks operating in
different countries over a couple of months, including sensor readings of mileage, speed, ambient
temperature, energy consumed by the auxiliary subsystems, etc., recorded during driving sessions.
The four subsystems for which we forecast energy consumption are the air compressor (subsystem 1),
the air conditioner (subsystem 2), the cabin heater (subsystem 3), and the heater of the energy
storage system (subsystem 4).</p>
      <p>
        For the experiments conducted in this paper, the neural networks were implemented with
the PyTorch library [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], using the Adam optimiser with a learning rate of 0.001. The loss function
for the regression tasks is the mean absolute error (MAE), and binary cross-entropy (BCE) was
employed as the loss function for the classification tasks. The total number of training epochs
E is set to 200. For the sequential learning strategy, e_s is set to 100, and for progressive
continual learning, γ values of 0.1 (i.e. a near-linear function) and 0.3 are tested. The experiments
were conducted using 4-fold cross-validation split session-wise, i.e. data from the same
driving session never appears in the training and the testing population together.
      </p>
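      <p>The session-wise split can be sketched with scikit-learn's GroupKFold, which guarantees that samples sharing a driving-session id never appear on both sides of a fold (the session ids below are illustrative, not from the dataset):</p>

```python
from sklearn.model_selection import GroupKFold

def session_folds(n_samples, sessions, n_splits=4):
    """Return (train_idx, test_idx) pairs with no session shared across sides."""
    X = [[i] for i in range(n_samples)]
    return list(GroupKFold(n_splits=n_splits).split(X, groups=sessions))

sessions = [0, 0, 0, 1, 1, 2, 2, 2, 3, 3]  # illustrative driving-session ids
for train_idx, test_idx in session_folds(len(sessions), sessions):
    train_s = {sessions[i] for i in train_idx}
    test_s = {sessions[i] for i in test_idx}
    assert not (train_s & test_s)          # no session leakage between splits
```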
      <p>Table 1 and Table 2 show the training and testing losses after 200 epochs of training of the
neural networks using multi-task learning without any curriculum learning (MTL), sequential
learning (SeqL), and progressive continual learning with a γ of 0.1 (PCL-lin) and a γ of 0.3
(PCL-exp). The baseline performance, single-task learning (STL), is produced by learning
only on the primary task T_i for each subsystem, shown in parentheses. Both
tables show that the lowest averaged MAE is achieved using PCL-exp. As a sanity check, Table 1
demonstrates that the training losses of most MCL methods, after 200 epochs of training,
did converge to a level comparable to STL. For the testing losses shown in Table 2, applying
PCL-exp on the task sets {T_1, A_{1,1}} and {T_4, A_{4,1}} achieved the lowest averaged MAE for forecasting the
energy consumption of subsystems 1 and 4 (i.e., the first auxiliary task appears to be the most
helpful auxiliary task for subsystems 1 and 4); similarly, applying PCL-exp on the task sets {T_2, A_{2,3}}
and {T_3, A_{3,3}} achieved the lowest averaged MAE for forecasting the energy consumption of subsystems
2 and 3.</p>
      <p>
        Figure 3 illustrates the differences between several multi-task curriculum learning strategies,
focusing on the convergence speed. Specifically, we identify a reference point (epoch) beyond
which the gain from further training is limited. This reference point is computed using a knee-point
detection method (algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] by Satopaa et al.) on the mean STL test losses (shown as
grey dots and the corresponding dashed line). The four plots in Figure 3 illustrate the testing loss
for learning the four primary tasks, each along with its fifth auxiliary task A_{i,5}. It is observed
in Figure 3 that: i) there is no significant difference between the four approaches for T_1; ii) MTL
and PCL-lin drop slightly more slowly compared to STL and PCL-exp for T_2; iii) both PCL approaches
drop more slowly compared with STL and MTL for T_3; iv) MTL, PCL-lin, and PCL-exp drop faster
compared to STL for T_4.
      </p>
      <p>Table 3 shows a comparison between MCL methods on the convergence time to the reference
point (computed based on the STL mean testing losses over the four folds). It is observed that: i)
MTL outperforms STL in all four primary tasks, and converged to the reference point faster than
the other approaches in three out of four primary tasks; ii) PCL-lin
converged fast for two of the tasks; iii) PCL-exp achieved better performance than
PCL-lin, with a shorter overall convergence time. The result corresponding to SeqL is particularly
interesting: although an e_s of 100 epochs is adopted for SeqL (i.e. the model is trained on one of the auxiliary
tasks for the first 100 epochs before learning the primary task), the testing loss converges to the
reference point within 10 epochs in the majority of the cases. From an empirical perspective,
the proposed auxiliary tasks assisted the learning of the models for the primary task, resulting
in a faster convergence time to the reference point.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>In this work-in-progress paper, several multi-task curriculum learning strategies were evaluated
for forecasting the energy consumption of auxiliary subsystems in heavy-duty electric
vehicles. The preliminary results show that progressive continual learning achieved the best
performance (lowest averaged MAE) compared to multi-task learning without any CL, sequential
CL, and the traditional approach (STL). Moreover, the proposed auxiliary tasks for CL, based on key
consumption characteristics, have been shown to be useful for solving all four primary tasks,
both in terms of regression error and convergence speed.</p>
      <p>Future work includes: (i) developing methods to rank/order the auxiliary tasks based on
relevancy and select the top-k tasks for CL; (ii) proposing adaptive methods for
governing the learning process, e.g. based on weighting the losses of auxiliary tasks using
learning dynamics; (iii) enabling CL across primary tasks, based on task relevancy.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>The work was carried out with support from the Knowledge Foundation and Vinnova (Sweden’s
innovation agency) through the Vehicle Strategic Research and Innovation Programme FFI.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>P.</given-names>
            <surname>Soviany</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Rota</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sebe</surname>
          </string-name>
          ,
          <article-title>Curriculum learning: A survey</article-title>
          ,
          <source>International Journal of Computer Vision</source>
          <volume>130</volume>
          (
          <year>2022</year>
          )
          <fpage>1526</fpage>
          -
          <lpage>1565</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>A survey on multi-task learning</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>34</volume>
          (
          <year>2021</year>
          )
          <fpage>5586</fpage>
          -
          <lpage>5609</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Badrinarayanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rabinovich</surname>
          </string-name>
          , Gradnorm:
          <article-title>Gradient normalization for adaptive loss balancing in deep multitask networks</article-title>
          ,
          <source>in: International conference on machine learning, PMLR</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>794</fpage>
          -
          <lpage>803</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A.</given-names>
            <surname>Pentina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sharmanska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. H.</given-names>
            <surname>Lampert</surname>
          </string-name>
          ,
          <article-title>Curriculum learning of multiple tasks</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2015</year>
          , pp.
          <fpage>5492</fpage>
          -
          <lpage>5500</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Siahpour</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A novel transfer learning approach in remaining useful life prediction for incomplete dataset</article-title>
          ,
          <source>IEEE Transactions on Instrumentation and Measurement</source>
          <volume>71</volume>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>11</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <article-title>Progressive contrastive learning with multi-prototype for unsupervised visible-infrared person re-identification</article-title>
          ,
          <source>arXiv preprint arXiv:2402.19026</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Satopaa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Albrecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Irwin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Raghavan</surname>
          </string-name>
          ,
          <article-title>Finding a "kneedle" in a haystack: Detecting knee points in system behavior, in: 2011 31st international conference on distributed computing systems workshops</article-title>
          , IEEE,
          <year>2011</year>
          , pp.
          <fpage>166</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Paszke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chintala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Chanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>DeVito</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Desmaison</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Antiga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lerer</surname>
          </string-name>
          ,
          <article-title>Automatic differentiation in PyTorch</article-title>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>