<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tutorial: Interactive Adaptive Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marek Herde</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Minh Tuan Pham</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alaa Tharwat</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Bernhard Sick</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Hochschule Bielefeld</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Kassel</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <fpage>5</fpage>
      <lpage>10</lpage>
      <abstract>
<p>We summarize the contents of the tutorial we present as part of the 8th Interactive Adaptive Learning workshop. This workshop is co-located with the ECML-PKDD conference, where it takes place on September 9th, 2024 in Vilnius, Lithuania. Interactive adaptive learning refers to methods that help improve the entire lifecycle of machine learning models. This includes how the models interact with human experts or other systems and how they adapt to different types of emerging data, rather than just being trained on a fixed dataset. This allows the models to improve and adapt over time, which is critical for many real-world applications. Active learning is the most prominent field of interactive adaptive learning [1, 2, 3]. Therefore, we explore different aspects of active learning in this tutorial and then discuss recent advances in the field in a workshop session. This tutorial is divided into three main parts, which are described in detail in the following sections. Their titles and presenters are as follows: 1. Introduction to Uncertainty-Based Active Learning (A. Tharwat), 2. Hands-on Pool-based Active Learning via scikit-activeml (M. Herde), 3. Towards Pool-based Active Learning with Error-prone Annotators (M. Herde).</p>
      </abstract>
      <kwd-group>
<kwd>active learning</kwd>
        <kwd>uncertainty quantification</kwd>
<kwd>exploration-exploitation trade-off</kwd>
        <kwd>scikit-activeml</kwd>
        <kwd>noisy class labels</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        true structure of the problem domain, even with a limited labeled dataset [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Some common
exploration-focused query strategies include (i) Diversity-based sampling: selecting dissimilar
samples to explore different regions, (ii) Density-based sampling: prioritizing samples in dense
areas, as they are more representative, (iii) Clustering-based sampling: identifying samples
near cluster centroids to ensure coverage of data subgroups [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
• Informativeness (or exploitation)-focused strategies: Here, active learning employs query
strategies that aim to identify the most informative
samples, i.e., the data points that, if labeled, would provide the maximum information gain to the
model [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Some common exploitation-focused approaches include (i) Margin-based sampling:
prioritizing samples near the decision boundary [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], (ii) Entropy-based sampling: selecting the
most uncertain samples [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], and (iii) Variance-based sampling: prioritizing samples with high
prediction variance [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
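<p>The margin- and entropy-based criteria above can be illustrated with a short, library-agnostic sketch that operates directly on a model's predicted class probabilities (the function names are ours, chosen for illustration):</p>

```python
import numpy as np

def margin_score(proba):
    # Difference between the top two class probabilities per sample;
    # a smaller margin means the sample lies closer to the decision boundary.
    part = np.sort(proba, axis=1)
    return part[:, -1] - part[:, -2]

def entropy_score(proba):
    # Shannon entropy of the predictive distribution per sample;
    # a higher entropy means the prediction is more uncertain.
    eps = 1e-12  # avoid log(0)
    return -np.sum(proba * np.log(proba + eps), axis=1)

# One confident and one uncertain prediction over three classes.
proba = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25]])
print(margin_score(proba))   # second sample has the smaller margin
print(entropy_score(proba))  # second sample has the higher entropy
```

Margin-based strategies would query the sample with the smallest margin, entropy-based strategies the one with the highest entropy; here both pick the second sample.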
      <p>
        However, by exploiting the model’s own uncertainty about the unlabeled data, active learning can
more effectively identify the samples that, if labeled, would provide the maximum information gain
to improve the model’s performance. The key idea behind uncertainty-based active learning is that
the model’s uncertainty serves as a proxy for the informativeness of a data point. Samples with higher
uncertainty are more likely to be informative because they represent areas of the input space where the
model’s predictions are less confident or reliable [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]. Common uncertainty-based approaches include
• Uncertainty sampling: Selecting the most uncertain samples, as they are likely to be the most
informative [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
• Expected information gain: Choosing samples with the highest expected information gain [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
• Bayesian optimization: Using a Bayesian model to quantify uncertainty and guide sample
selection [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>
Recent advances in uncertainty quantification have further extended the capabilities of active learning.
A key distinction is made between (i) epistemic uncertainty, which arises from a lack of knowledge, such that samples with high epistemic uncertainty are often the most informative for improving model understanding, and (ii) aleatory uncertainty, which captures the inherent noise or randomness in the data, such that samples with high aleatory uncertainty may be less useful for model training [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ].
      </p>
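<p>One common way to realize this distinction is an ensemble-based decomposition; we use it here purely as an illustrative assumption, not as the tutorial's specific method. The total predictive uncertainty (entropy of the averaged prediction) splits into an aleatoric part (average entropy of the individual members) and an epistemic part (the members' disagreement):</p>

```python
import numpy as np

def uncertainty_decomposition(member_probas):
    # member_probas: shape (n_members, n_samples, n_classes) holding the class
    # probabilities predicted by each ensemble member.
    eps = 1e-12  # avoid log(0)
    mean_proba = member_probas.mean(axis=0)
    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_proba * np.log(mean_proba + eps), axis=1)
    # Aleatoric part: average entropy of the individual members.
    aleatoric = -np.sum(member_probas * np.log(member_probas + eps), axis=2).mean(axis=0)
    # Epistemic part: what remains is the members' disagreement (mutual information).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Two members disagree on sample 0 (high epistemic uncertainty) and agree on a
# coin-flip prediction for sample 1 (purely aleatoric uncertainty).
probas = np.array([[[0.9, 0.1], [0.5, 0.5]],
                   [[0.1, 0.9], [0.5, 0.5]]])
total, aleatoric, epistemic = uncertainty_decomposition(probas)
```

An exploration-oriented strategy would query sample 0, where labeling can actually reduce the model's lack of knowledge, and skip sample 1, whose uncertainty is irreducible noise.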
      <p>
By distinguishing between epistemic and aleatory uncertainty, active learning can more effectively
identify the most informative samples to improve model performance with less labeled data. The
integration of advanced uncertainty quantification with active learning strategies creates a powerful
framework for efficient and effective model training, even with large amounts of unlabeled data [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>3. Part II – Hands-on Pool-based Active Learning via scikit-activeml</p>
      <p>
Active learning is a versatile approach to reducing the labeling cost. Assumptions regarding data
availability and training of the classifiers can vary greatly depending on the use case. Active learning
libraries often abstract parts of active learning experiments, such as the whole experiment (AliPy [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]
and Baal [
        <xref ref-type="bibr" rid="ref15">15</xref>
]), the classifier training (modAL [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] and small-text [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]), and data management
(libact [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]). These abstractions help simplify active learning experiments where the use case
matches the library’s scope. However, considerable work is required if the assumptions differ. The
scikit-activeml [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] library has been conceptualized with this in mind, with its modular design
inspired by and built on top of scikit-learn [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], a flexible general-purpose machine learning library.
      </p>
      <p>The goal of scikit-activeml is to bridge active learning research and its application in real-world
use cases. For researchers, the library is flexible enough to accommodate many different assumptions and promotes
reproducibility. For practitioners, it provides extensive documentation, many tutorials for different use
cases, and examples for each query strategy with animations visualizing the strategies’ behavior that
can serve as a starting point.</p>
      <p>
        Algorithm 1 A basic active learning cycle example using scikit-activeml (v0.5.1) [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]
      </p>
      <p>
        scikit-activeml provides query strategies for pool-based active learning in classification and
regression, with single or multiple annotators and varying batch sizes. Stream-based active learning [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]
for classification is also supported for single annotator scenarios. In this tutorial, we focus on pool-based
active learning. Algorithm 1 shows a small pool-based active learning script using scikit-activeml.
Labeled and unlabeled data are stored together in X (samples) and y (labels), where unlabeled data
is marked with a user-specified MISSING_LABEL constant (cf. lines 9–13). The classifier and query
strategy are initialized independently and support using random seeds to ensure reproducibility (cf.
lines 16–21). The for-loop shows the active learning cycle, where sample indices are queried (cf. line
26), and their corresponding missing label is replaced with the ground truth (cf. line 27). Figure 1 shows
the fitted classifier, labeled, and unlabeled data after 10 and 30 cycles. Additionally, areas where it is
beneficial to query more labels, according to uncertainty sampling, are highlighted in dark green.
      </p>
      <p>
        Starting from such a basic learning cycle with uncertainty sampling as the employed query strategy,
this part of the tutorial outlines other popular and state-of-the-art query strategies for pool-based active
learning, e.g., core set [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], batch active learning by diverse gradient embeddings (BADGE) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], typical
clustering (TypiClust) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], probability coverage (ProbCover) [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ], clustering uncertainty-weighted
embeddings (CLUE) [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ], and contrastive active learning (CAL) [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. Specifically, we analyze these query
strategies regarding informativeness, representativeness, and batch diversity as central concepts in
pool-based active learning (cf. Section 2). Further, we introduce the differentiation between low- and
high-budget active learning scenarios. Depending on the scenario, the importance of the aforementioned
concepts changes. Throughout this part of the tutorial, illustrations of synthetic two-dimensional datasets (cf.
Fig. 1) provide a more intuitive understanding of the query strategies’ main ideas and sample selection
behaviors. Beyond such toy examples, we also present an empirical evaluation study as a potential
application for scikit-activeml, where we compare query strategies’ performances across tabular,
image, and text data.
      </p>
      <p>[Figure 1: Results after 10 and 30 active learning cycles, plotting feature 1 against feature 2 with classes 0–3, labeled samples, the decision boundary, and utility scores from low to high.]</p>
      <p>
        In doing so, we leverage feature representations (embeddings) learned by the
pre-trained model self-distillation with no labels (DINOv2) [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ] for the image and bidirectional encoder
representations from transformers (BERT) [
        <xref ref-type="bibr" rid="ref29">29</xref>
] for the text data. This part of the tutorial concludes with a
hands-on session, where participants can access a Jupyter notebook to apply their newly acquired
knowledge by implementing their own active learning experiment via scikit-activeml.
      </p>
      <p>4. Part III – Towards Pool-based Active Learning with Error-prone Annotators</p>
      <p>
        In pool-based active learning, a common assumption is that the queried class labels originate from a
single omniscient annotator [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ]. However, many annotation campaigns involve querying class labels
from multiple humans, e.g., crowdworkers, who are prone to error for various reasons, e.g., lack of
expertise, tiredness, or lack of motivation [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ]. As a result, the queried class labels are subject to noise.
Training with such noisy class labels can strongly deteriorate the classifier’s performance. With a
focus on neural networks, numerous techniques have been proposed to improve the robustness against
noisy class labels [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ]. A common approach is the joint training of the classifier and an annotator
performance model, which corrects the noisy class labels by modeling each annotator’s individual
performance [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. Depending on the assumptions about the annotators’ noise patterns, confusion
matrices are estimated per annotator [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ] or even for each sample-annotator pair [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ], for example.
Such techniques are typically employed to train a neural network after completing an annotation
campaign. Yet, the annotators’ performance estimates could be used to guide the annotator selection
during an ongoing annotation campaign. In conjunction with intelligent sample selection, we refer
to this scenario as pool-based active learning with multiple error-prone annotators. Corresponding
query strategies [
        <xref ref-type="bibr" rid="ref36 ref37">36, 37</xref>
] must also balance the added exploration-exploitation trade-off when assigning
annotators to provide class labels for given instances. The goal of this part of the tutorial is to give a basic
understanding of such challenges and outline potential baselines, which leverage common pool-based
query strategies for the sample selection and performance estimates for the annotator selection.
      </p>
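<p>As a hedged illustration of the annotator performance models mentioned above (plain NumPy, not the API of any particular library; the function and variable names are ours), per-annotator confusion matrices can be estimated from samples for which a trusted reference label is available:</p>

```python
import numpy as np

def confusion_matrices(annotations, reference, n_classes):
    # annotations: shape (n_annotators, n_samples) class labels per annotator.
    # reference: shape (n_samples,) trusted labels used for the estimate.
    mats = np.zeros((annotations.shape[0], n_classes, n_classes))
    for a, given in enumerate(annotations):
        for true_label, noisy_label in zip(reference, given):
            mats[a, true_label, noisy_label] += 1
    # Normalize rows to conditional probabilities P(noisy label | true label, annotator).
    mats /= np.clip(mats.sum(axis=2, keepdims=True), 1, None)
    return mats

# Annotator 0 labels accurately; annotator 1 tends to confuse class 1 with class 0.
reference   = np.array([0, 0, 1, 1, 1, 0])
annotations = np.array([[0, 0, 1, 1, 1, 0],
                        [0, 0, 0, 0, 1, 0]])
mats = confusion_matrices(annotations, reference, n_classes=2)
# Mean of the diagonal as a simple per-annotator performance score.
accuracy = mats.diagonal(axis1=1, axis2=2).mean(axis=1)
```

Such performance scores could then steer annotator selection during an ongoing campaign, e.g., by preferring annotators whose estimated confusion matrix is close to the identity for the classes at hand.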
    </sec>
    <sec id="sec-2">
      <title>Acknowledgments</title>
      <p>The work of A. Tharwat was conducted within the framework of the project “SAIL: SustAInable
Lifecycle of Intelligent SocioTechnical Systems” (grant no. NW21-059B). SAIL is receiving funding
from the program “Netzwerke 2021”, an initiative of the Ministry of Culture and Science of the State
of North Rhine-Westphalia. The work of M. T. Pham was conducted within the framework of the
project “Künstliche Intelligenz zur Fremdkörperdetektion in befüllten Getränkeflaschen (KI4FKD)” (493
22_0022_2B). This project is receiving funding from the program “Distr@l”, an initiative of the Ministry
for Digitalization and Innovation of the State of Hesse. The sole responsibility for the content of this
publication lies with the authors.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Settles</surname>
          </string-name>
          ,
          <article-title>Active learning literature survey (</article-title>
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>A novel low-query-budget active learner with pseudo-labels for imbalanced data</article-title>
          ,
          <source>Mathematics</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>1068</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>A survey on active learning: State-of-the-art, practical challenges and research directions</article-title>
          ,
          <source>Mathematics</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>820</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Cohn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Atlas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ladner</surname>
          </string-name>
          ,
          <article-title>Improving generalization with active learning</article-title>
          ,
          <source>Machine Learning</source>
          <volume>15</volume>
          (
          <year>1994</year>
          )
          <fpage>201</fpage>
          -
          <lpage>221</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>Balancing exploration and exploitation: A novel active learner for imbalanced data</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>210</volume>
          (
          <year>2020</year>
          )
          <fpage>106500</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>Using methods from dimensionality reduction for active learning with low query budget</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.-L.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Shaker</surname>
          </string-name>
          , E. Hüllermeier,
          <article-title>How to measure uncertainty in uncertainty sampling for active learning</article-title>
          ,
          <source>Machine Learning</source>
          <volume>111</volume>
          (
          <year>2022</year>
          )
          <fpage>89</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Shaker</surname>
          </string-name>
          , E. Hüllermeier,
          <article-title>Aleatoric and epistemic uncertainty with random forests</article-title>
          ,
          <source>in: Advances in Intelligent Data Analysis: International Symposium on Intelligent Data Analysis</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>444</fpage>
          -
          <lpage>456</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>Active Learning for Handling Missing Data</article-title>
          ,
          <source>IEEE Transactions on Neural Networks and Learning Systems</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Khatamsaz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Vela</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. D.</given-names>
            <surname>Johnson</surname>
          </string-name>
          , D. Allaire,
          <string-name>
            <given-names>R.</given-names>
            <surname>Arróyave</surname>
          </string-name>
          ,
          <article-title>Bayesian optimization with active learning of design constraints using an entropy-based approach</article-title>
          ,
          <source>npj Computational Materials</source>
          <volume>9</volume>
          (
          <year>2023</year>
          )
          <fpage>49</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Wimmer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hofman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischl</surname>
          </string-name>
          , E. Hüllermeier,
          <article-title>Quantifying aleatoric and epistemic uncertainty in machine learning: Are conditional entropy and mutual information appropriate measures?</article-title>
          ,
          <source>in: Uncertainty in Artificial Intelligence, PMLR</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>2282</fpage>
          -
          <lpage>2292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E.</given-names>
            <surname>Hüllermeier</surname>
          </string-name>
          , W. Waegeman,
          <article-title>Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods</article-title>
          ,
          <source>Machine Learning</source>
          <volume>110</volume>
          (
          <year>2021</year>
          )
          <fpage>457</fpage>
          -
          <lpage>506</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Senge</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bösner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dembczyński</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Haasenritter</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Hirsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Donner-Banzhof</surname>
          </string-name>
          , E. Hüllermeier,
          <article-title>Reliable classification: Learning classifiers that distinguish aleatoric and epistemic uncertainty</article-title>
          ,
          <source>Information Sciences</source>
          <volume>255</volume>
          (
          <year>2014</year>
          )
          <fpage>16</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.-P.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.-X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>ALiPy: Active Learning in Python</article-title>
          ,
          <source>arXiv preprint arXiv:1901.03802</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>P.</given-names>
            <surname>Atighehchian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Branchaud-Charron</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lacoste</surname>
          </string-name>
          ,
          <article-title>Bayesian active learning for production, a systematic study and a reusable library</article-title>
          ,
          <source>arXiv preprint arXiv:2006.09916</source>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Danka</surname>
          </string-name>
          , P. Horvath,
          <article-title>modAL: A modular active learning framework for Python</article-title>
          ,
          <source>arXiv preprint arXiv:1805.00979</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>C.</given-names>
            <surname>Schröder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Niekler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <article-title>Small-text: Active learning for text classification in python</article-title>
          ,
          <source>in: Conference of the European Chapter of the Association for Computational Linguistics: System Demonstrations</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>84</fpage>
          -
          <lpage>95</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Y.-Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-C.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-A.</given-names>
            <surname>Chung</surname>
          </string-name>
          , T.-E. Wu,
          <string-name>
            <given-names>S.-A.</given-names>
            <surname>Chen</surname>
          </string-name>
          , H.-T. Lin,
          <article-title>libact: Pool-based Active Learning in Python</article-title>
          ,
          <source>arXiv preprint arXiv:1710.00379</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kottke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Herde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Minh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Benz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mergard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Roghman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sandrock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>scikitactiveml: A library and toolbox for active learning algorithms</article-title>
          ,
          <source>Preprints</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>L.</given-names>
            <surname>Buitinck</surname>
          </string-name>
          , G. Louppe,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mueller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Niculae</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Grobler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Layton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>VanderPlas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Holt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <article-title>API design for machine learning software: experiences from the scikit-learn project</article-title>
          ,
          <source>in: ECML PKDD Workshop: Languages for Data Mining and Machine Learning</source>
          ,
          <year>2013</year>
          , pp.
          <fpage>108</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>D.</given-names>
            <surname>Cacciarelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kulahci</surname>
          </string-name>
          ,
          <article-title>Active learning for data streams: a survey</article-title>
          ,
          <source>Machine Learning</source>
          <volume>113</volume>
          (
          <year>2024</year>
          )
          <fpage>185</fpage>
          -
          <lpage>239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sener</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Savarese</surname>
          </string-name>
          ,
          <article-title>Active learning for convolutional neural networks: A core-set approach</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Ash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krishnamurthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Langford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <article-title>Deep Batch Active Learning by Diverse, Uncertain Gradient Lower Bounds</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>G.</given-names>
            <surname>Hacohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dekel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weinshall</surname>
          </string-name>
          ,
          <article-title>Active Learning on a Budget: Opposite Strategies Suit High and Low Budgets</article-title>
          ,
          <source>in: International Conference on Machine Learning</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>8175</fpage>
          -
          <lpage>8195</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>O.</given-names>
            <surname>Yehuda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dekel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hacohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weinshall</surname>
          </string-name>
          ,
          <article-title>Active Learning through a Covering Lens</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>22354</fpage>
          -
          <lpage>22367</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>V.</given-names>
            <surname>Prabhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Chandrasekaran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Saenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hoffman</surname>
          </string-name>
          ,
          <article-title>Active Domain Adaptation via Clustering Uncertainty-weighted Embeddings</article-title>
          ,
          <source>in: IEEE/CVF International Conference on Computer Vision</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>8505</fpage>
          -
          <lpage>8514</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>K.</given-names>
            <surname>Margatina</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Vernikos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Barrault</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Aletras</surname>
          </string-name>
          ,
          <article-title>Active Learning by Acquiring Contrastive Examples</article-title>
          ,
          <source>in: Conference on Empirical Methods in Natural Language Processing</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>650</fpage>
          -
          <lpage>663</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>M.</given-names>
            <surname>Oquab</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Darcet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Moutakanni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. V.</given-names>
            <surname>Vo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Szafraniec</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Khalidov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fernandez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Haziza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Massa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>El-Nouby</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Assran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ballas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Galuba</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Howes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.-Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Misra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rabbat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Synnaeve</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jegou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Mairal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Labatut</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joulin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bojanowski</surname>
          </string-name>
          ,
          <article-title>DINOv2: Learning robust visual features without supervision</article-title>
          ,
          <source>Transactions on Machine Learning Research</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>J.</given-names>
            <surname>Devlin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-W.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Toutanova</surname>
          </string-name>
          ,
          <article-title>BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</article-title>
          ,
          <source>in: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>4171</fpage>
          -
          <lpage>4186</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>E.</given-names>
            <surname>Mosqueira-Rey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hernández-Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Alonso-Ríos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Bobes-Bascarán</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Á.</given-names>
            <surname>Fernández-Leal</surname>
          </string-name>
          ,
          <article-title>Human-in-the-loop machine learning: a state of the art</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          <volume>56</volume>
          (
          <year>2023</year>
          )
          <fpage>3005</fpage>
          -
          <lpage>3054</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>M.</given-names>
            <surname>Herde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Huseljic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Calma</surname>
          </string-name>
          ,
          <article-title>A Survey on Cost Types, Interaction Schemes, and Annotator Performance Models in Selection Algorithms for Active Learning in Classification</article-title>
          ,
          <source>IEEE Access</source>
          <volume>9</volume>
          (
          <year>2021</year>
          )
          <fpage>166970</fpage>
          -
          <lpage>166989</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>H.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-G.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Learning From Noisy Labels With Deep Neural Networks: A Survey</article-title>
          ,
          <source>IEEE Transactions on Neural Networks and Learning Systems</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>M.</given-names>
            <surname>Herde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Huseljic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>Multi-annotator Deep Learning: A Probabilistic Framework for Classification</article-title>
          ,
          <source>Transactions on Machine Learning Research</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ibrahim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Fu</surname>
          </string-name>
          ,
          <article-title>Deep Learning From Crowdsourced Labels: Coupled Cross-Entropy Minimization, Identifiability, and Regularization</article-title>
          ,
          <source>in: International Conference on Learning Representations</source>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>M.</given-names>
            <surname>Herde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Lührs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Huseljic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>Annot-Mix: Learning with Noisy Class Labels from Multiple Annotators via a Mixup Extension</article-title>
          ,
          <source>arXiv preprint arXiv:2405.03386</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chakraborty</surname>
          </string-name>
          ,
          <article-title>Asking the Right Questions to the Right Users: Active Learning with Imperfect Oracles</article-title>
          ,
          <source>in: AAAI Conference on Artificial Intelligence</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>3365</fpage>
          -
          <lpage>3372</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>M.</given-names>
            <surname>Herde</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kottke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Huseljic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>Multi-annotator Probabilistic Active Learning</article-title>
          ,
          <source>in: International Conference on Pattern Recognition, IEEE</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>10281</fpage>
          -
          <lpage>10288</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>