<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Preface to the Proceedings of the 1st Streaming Continual Learning Bridge at AAAI26</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Heitor Murilo Gomes</string-name>
          <email>heitor.gomes@vuw.ac.nz</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Cossu</string-name>
          <email>andrea.cossu@unipi.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Federico Giannini</string-name>
          <email>federico.giannini@polimi.it</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anton Lee</string-name>
          <email>anton.lee@ecs.vuw.ac.nz</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nuwan Gunasekara</string-name>
          <email>nuwan.gunasekara@hh.se</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Emanuele Della Valle</string-name>
          <email>emanuele.dellavalle@polimi.it</email>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Halmstad University</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>Continual Learning and Streaming Machine Learning address complementary aspects of learning under non-stationary data, yet differ in objectives and assumptions, limiting interaction between the two communities. Streaming Continual Learning has recently emerged as a unifying paradigm that combines rapid adaptation to evolving data streams with selective knowledge retention. The Streaming Continual Learning Bridge brought together researchers working at the intersection of these areas to discuss open challenges in drift adaptation, forgetting, plasticity, and temporal dependencies. The contributions in this volume highlight recent methodological advances, practical tools, and application-driven perspectives, outlining future directions for learning systems operating under continuous distributional change.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Continual Learning (CL) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and Streaming Machine Learning (SML) [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] both address the challenge
of adapting learning agents to environments that evolve over time. Despite their shared goals, these
research areas have had limited interaction, largely due to differences in their approaches, assumptions,
and evaluation metrics.
      </p>
      <p>CL primarily emphasizes long-term knowledge retention and the mitigation of forgetting, often
without strict real-time constraints. It typically addresses situations where the new concept simply
extends the previously observed learning task with a new input distribution, without invalidating
the past task. In this case, retaining the past knowledge is crucial since the task associated with the
previously observed input distribution is assumed to remain unchanged. Conversely, SML prioritizes
rapid, efficient adaptation without making any assumptions about the type of change. This change may
alter the mapping between the input and the desired target, invalidating a portion of previously acquired
knowledge. Thus, SML focuses only on the current concept and ignores the problem of forgetting,
prioritizing rapid adaptation to high-frequency streams.</p>
      <p>
        Moreover, in many real-world scenarios (e.g., Internet of Things, sensor data), data points are collected
over time with a specific order and timing, and the solution should model temporal dependence between
data points. This problem is often underexplored by SML and CL and is addressed by Time Series
Analysis (TSA) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. However, TSA is less commonly integrated with data streams.
      </p>
      <p>
        Recently, Streaming Continual Learning (SCL) [
        <xref ref-type="bibr" rid="ref4 ref5 ref6">4, 5, 6</xref>
        ] has recently emerged as a unified learning paradigm
for scenarios where neither Continual Learning nor Streaming Machine Learning is sufficient on its own.
Real-world environments typically involve an interplay of input drifts, where new information appears
in previously unseen feature subspaces, and contradictory drifts, where changes in the data invalidate
parts of the learned decision boundary. SCL aims to reconcile CL and SML objectives by promoting five
key properties: rapid adaptation to both types of drift; autonomous drift detection; efficient learning
from single or few samples; the development of deep hierarchical representations; and the selective
retention of relevant knowledge. From this perspective, avoiding forgetting is reframed as a dynamic
resource-management problem, in which the model must determine which information remains useful
and which becomes obsolete as the environment evolves. Consequently, learning systems must both
archive a global representation of the data stream and selectively reuse past knowledge.</p>
      <p>
        This bridge welcomed researchers at any level working on learning protocols and models for
non-stationary environments where CL and SML ideas intersect. This also includes related areas such as online learning
in non-stationary environments [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], transfer learning, domain/test-time adaptation, and TSA.
      </p>
      <p>Participants in the bridge shared their ideas with researchers from different areas through dedicated
sessions in the bridge program. In addition, participants learned from tutorials and invited talks about
the key ideas behind CL and SML research, and how they differ from each other. The ultimate goal was
to spark new collaborations in which experts in CL can contribute to long-standing challenges in SML,
and vice versa.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Open research questions</title>
      <p>
        The bridge focused on the interplay between rapid adaptation and knowledge consolidation (i.e.,
mitigating forgetting), two of the main objectives of CL and SML models. Recent studies provide
preliminary evidence about how SML models address knowledge retention and how CL models can
rapidly adapt in the presence of sudden drifts and high-frequency data streams [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ].
      </p>
      <p>
        The bridge encouraged participants to reason about key open questions, such as:
• Can we design learning models that quickly adapt to new information (in the spirit of SML)
without forgetting previous knowledge (in the spirit of CL)?
• What does avoiding forgetting mean in the case of real drifts (i.e., when the new classification
problem changes the decision boundary in a portion of the previously observed feature space)?
• Is the loss of plasticity [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] commonly encountered in CL also present in SML? If not, how can
we leverage insights from SML to mitigate this adverse phenomenon?
• Can we separate the concerns of continual knowledge representation and rapid task adaptation
by combining CL and SML techniques?
• Can we leverage the temporal dependencies [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] usually present in a data stream to improve the
learning experience?
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Bridge Activities and Program</title>
      <p>
        The bridge was held on January 21, 2026, in Singapore, co-located with the 40th AAAI Conference on
Artificial Intelligence. The program combined theoretical discussions with practical sessions to foster
shared understanding between the Continual Learning and Streaming Machine Learning communities.
In particular, the bridge featured:
• Technical tutorials on widely adopted software libraries, providing participants with a
common practical grounding for research in Streaming Continual Learning. Tutorials covered
Avalanche [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and CapyMOA [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], representative frameworks from the Continual Learning
and Streaming Machine Learning communities, respectively.
• Paper presentations and discussion sessions, organized around poster spotlights followed by a
dedicated poster session, which enabled in-depth technical exchanges and cross-community
interaction. The bridge concluded with a plenary discussion synthesizing insights across contributions
and identifying key open challenges and future research directions.
• An invited talk by Prof. Albert Bifet, co-leader of the MOA framework, who discussed drift-aware
algorithms and scalable open-source tools for adaptive learning in non-stationary environments,
with a focus on real-world applications in environmental monitoring.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Overview of the Papers</title>
      <p>The bridge welcomed contributions on learning protocols and models for non-stationary environments
at the intersection of Continual Learning, Streaming Machine Learning, and Time Series Analysis.
Submissions were accepted in two tracks:
1. Non-archival track, including previously published works, preliminary results, and position
papers. The bridge received five submissions, of which two were presented.
2. Archival track, including short and full papers presenting original contributions, published in
the post-proceedings on CEUR-WS.org. The bridge received 13 submissions and accepted seven.</p>
      <p>The papers included in this volume correspond to revised post-proceedings versions of the accepted
archival track contributions.</p>
      <sec id="sec-4-1">
        <title>4.1. Foundation Models and Agents in Streaming Environments</title>
        <p>A significant portion of the discussion focused on bridging the gap between static Foundation Models
and dynamic data streams. Contributions proposed innovative ways to adapt Large Language Models (LLMs)
and Large Tabular Models (LTMs) without incurring catastrophic forgetting. Proposals included
bio-inspired frameworks like Learn-Master-Teach Tuning (LMT2), which simulates a human-like
student-teacher lifecycle to resolve the stability-plasticity dilemma, and autonomous agents like SOLAR,
which leverage parameter-level meta-learning to self-improve and reason in evolving environments. Vision
papers further expanded this scope, exploring the use of LTMs as a bridge for SCL through in-context
learning and summarization, and outlining the challenges of applying Foundation Models to Earth
Observation data streams, where managing sensor degradation and environmental drift is critical.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Time Series Forecasting and Real-World Applications</title>
        <p>The intersection of SCL and TSA proved to be a fertile ground for research. In the non-archival track,
authors strengthened the theoretical connections between Online CL and TSA, proposing methods like
Natural Score-driven Replay (NatSR) and unified frameworks that model temporal drift as a sequential
domain shift process. On the application side, the bridge showcased works addressing concrete industrial
challenges. These included energy consumption forecasting for heavy-duty electric vehicles using
online incremental learning, and hybrid anomaly detection systems for industrial elevators. The latter
notably introduced a “dual-learner” approach, combining fast online tree-based models with slower,
pre-trained time-series foundation models to balance responsiveness and robustness.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Methodologies for Drift Adaptation and Evaluation</title>
        <p>Finally, the bridge highlighted fundamental methodological contributions regarding how we evaluate
performance in non-stationary environments. The crucial issue of evaluation was addressed by
proposing new metrics that distinguish between a model’s failure to adapt and the intrinsic difficulty of the
data, offering a more nuanced view of robustness to temporal distribution shifts.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Acknowledgments</title>
      <p>We would like to thank our invited speaker, Albert Bifet, for his valuable contributions to the bridge
program. We also express our gratitude to the Program Committee members for their time and expertise
in reviewing the submissions, and to the AAAI organization for hosting this event.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] G. I. Parisi, R. Kemker, J. L. Part, C. Kanan, S. Wermter, <article-title>Continual lifelong learning with neural networks: A review</article-title>, <source>Neural Networks</source> <volume>113</volume> (<year>2019</year>) <fpage>54</fpage>-<lpage>71</lpage>. doi:10.1016/j.neunet.2019.01.012.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] H. M. Gomes, J. Read, A. Bifet, J. P. Barddal, J. Gama, <article-title>Machine learning for streaming data: state of the art, challenges, and opportunities</article-title>, <source>SIGKDD Explor. Newsl.</source> <volume>21</volume> (<year>2019</year>) <fpage>6</fpage>-<lpage>22</lpage>. URL: https://doi.org/10.1145/3373464.3373470. doi:10.1145/3373464.3373470.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] G. E. Box, G. M. Jenkins, G. C. Reinsel, G. M. Ljung, <source>Time series analysis: forecasting and control</source>, John Wiley &amp; Sons, <year>2015</year>.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] F. Giannini, G. Ziffer, A. Cossu, V. Lomonaco, <article-title>Streaming Continual Learning for Unified Adaptive Intelligence in Dynamic Environments</article-title>, <source>IEEE Intelligent Systems</source> <volume>39</volume> (<year>2024</year>) <fpage>81</fpage>-<lpage>85</lpage>. doi:10.1109/MIS.2024.3479469.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] N. Gunasekara, B. Pfahringer, H. M. Gomes, A. Bifet, <article-title>Survey on Online Streaming Continual Learning</article-title>, in: <source>Thirty-Second International Joint Conference on Artificial Intelligence</source>, volume <volume>6</volume>, <year>2023</year>, pp. <fpage>6628</fpage>-<lpage>6637</lpage>. doi:10.24963/ijcai.2023/743.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Cossu, F. Giannini, G. Ziffer, A. Bernardo, A. Gepperth, E. Della Valle, B. Hammer, D. Bacciu, <article-title>A practical guide to streaming continual learning</article-title>, <source>Neurocomputing</source> <volume>674</volume> (<year>2026</year>) 132951. URL: https://www.sciencedirect.com/science/article/pii/S0925231226003486. doi:10.1016/j.neucom.2026.132951.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] G. Ditzler, M. Roveri, C. Alippi, R. Polikar, <article-title>Learning in Nonstationary Environments: A Survey</article-title>, <source>IEEE Computational Intelligence Magazine</source> <volume>10</volume> (<year>2015</year>) <fpage>12</fpage>-<lpage>25</lpage>. doi:10.1109/MCI.2015.2471196.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] L. Chambers, M. M. Gaber, H. Ghomeshi, <article-title>AdaDeepStream: Streaming adaptation to concept evolution in deep neural networks</article-title>, <source>Applied Intelligence</source> (<year>2023</year>). doi:10.1007/s10489-023-04812-0.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Y. Ghunaim, A. Bibi, K. Alhamoud, M. Alfarra, H. A. Al Kader Hammoud, A. Prabhu, P. H. S. Torr, B. Ghanem, <article-title>Real-Time Evaluation in Online Continual Learning: A New Hope</article-title>, in: <source>Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</source>, <year>2023</year>, pp. <fpage>11888</fpage>-<lpage>11897</lpage>.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] H. Hu, O. Sener, F. Sha, V. Koltun, <article-title>Drinking From a Firehose: Continual Learning With Web-Scale Natural Language</article-title>, <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source> <volume>45</volume> (<year>2023</year>) <fpage>5684</fpage>-<lpage>5696</lpage>. doi:10.1109/TPAMI.2022.3218265.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] S. Dohare, J. F. Hernandez-Garcia, Q. Lan, P. Rahman, A. R. Mahmood, R. S. Sutton, <article-title>Loss of plasticity in deep continual learning</article-title>, <source>Nature</source> <volume>632</volume> (<year>2024</year>) <fpage>768</fpage>-<lpage>774</lpage>. doi:10.1038/s41586-024-07711-7.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] G. Ziffer, F. Giannini, E. Della Valle, <article-title>Tenet: Benchmarking Data Stream Classifiers in Presence of Temporal Dependence</article-title>, in: <source>2024 IEEE International Conference on Big Data (BigData)</source>, <year>2024</year>, pp. <fpage>1187</fpage>-<lpage>1196</lpage>. doi:10.1109/BigData62323.2024.10825670.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Carta, L. Pellegrini, A. Cossu, H. Hemati, V. Lomonaco, <article-title>Avalanche: A PyTorch library for deep continual learning</article-title>, <source>J. Mach. Learn. Res.</source> <volume>24</volume> (<year>2023</year>) 363:1-363:6.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] Y. Sun, H. M. Gomes, A. Lee, N. Gunasekara, G. W. Cassales, J. Liu, M. Heyden, V. Cerqueira, M. Bahri, Y. S. Koh, B. Pfahringer, A. Bifet, <article-title>Machine learning for data streams with CapyMOA</article-title>, in: <source>ECML/PKDD (10)</source>, volume <volume>16022</volume> of Lecture Notes in Computer Science, Springer, <year>2025</year>, pp. <fpage>438</fpage>-<lpage>443</lpage>.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>