<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Tutorial: Interactive Adaptive Learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mirko Bunse</string-name>
          <email>mirko.bunse@cs.tu-dortmund.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Georg Krempl</string-name>
          <email>g.m.krempl@uu.nl</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alaa Tharwat</string-name>
          <email>alaa.othman@fh-bielefeld.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Amal Saadallah</string-name>
          <email>amal.saadallah@cs.tu-dortmund.de</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Hochschule Bielefeld</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>TU Dortmund University</institution>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Utrecht University</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>We summarize the contents of the tutorial we present as a part of the 7th Interactive Adaptive Learning workshop. This workshop is co-located with the ECML-PKDD conference, where it takes place.</p>
      </abstract>
      <kwd-group>
        <kwd>active learning</kwd>
        <kwd>active class selection</kwd>
        <kwd>active feature acquisition</kwd>
        <kwd>meta-learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Interactive adaptive learning comprises methods that improve the overall life-cycle of machine
learning models, including interactions with human supervisors, interactions with other
processing systems, and adaptations to different forms of data that become available at different
points in time. Most importantly, interactive adaptive learning is concerned with different forms
of active learning (AL) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], a research area with many facets. We cover the most important
facets of AL in this tutorial, before discussing recent progress in a workshop session.
      </p>
      <p>This tutorial is structured into five parts, which we detail in the following sections. Their
titles and presenters are as follows:
1. Foundations of Active Learning (A. Tharwat &amp; G. Krempl)
2. Beyond Pool-Based Scenarios (G. Krempl &amp; A. Tharwat)
3. Beyond Active Labeling (M. Bunse)
4. Towards Explainable Active Learning using Meta-Learning (A. Saadallah)
5. Practical Challenges and New Research Directions (A. Tharwat &amp; G. Krempl)
Although huge amounts of unlabeled data have been collected recently, such data alone is of
little use for learning algorithms that require labeled data. Collecting labeled data, however,
might require an expert annotator, might be expensive (e.g., when a series of processes must
be performed in laboratories to generate the label), might be time-consuming (e.g., when long
documents need to be annotated), or might be difficult for several other reasons. In this case,
active learning provides a solution by querying a small set of informative and
representative points from the available unlabeled points and having them annotated. This selected set of
points serves as the training data for learning a model and yields promising results.</p>
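As a minimal illustration of this selection step, consider a least-confidence heuristic in Python; the function name and the probability values are ours, not prescribed by the tutorial:

```python
def least_confidence(probabilities):
    """Rank unlabeled points for querying: a lower top-class
    probability means the model is less certain, i.e. the point
    is a more informative candidate for annotation."""
    scores = [1.0 - max(p) for p in probabilities]  # uncertainty per instance
    return sorted(range(len(scores)), key=lambda i: -scores[i])

# hypothetical predicted class probabilities for four unlabeled points
probs = [[0.9, 0.1], [0.5, 0.5], [0.6, 0.4], [0.99, 0.01]]
print(least_confidence(probs))  # the 50/50 point is ranked first: [1, 2, 0, 3]
```

Other informativeness measures (entropy, margin) fit the same interface; only the per-instance score changes.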
      <sec id="sec-1-1">
        <title>Active Labeling and Semi-Supervised Learning</title>
        <p>
          In semi-supervised learning, the aim is to leverage a combination of labeled and unlabeled data to enhance the model’s performance.
To this end, the unlabeled data is utilized to further improve the supervised models, which have
been learned from the labeled data. In contrast, AL aims to optimize the process of data labeling
by querying the most informative samples from an unlabeled pool. Both semi-supervised
learning and AL can thus be solutions when labeled data is limited, but the semi-supervised
technique searches for the most certain points while AL queries the most uncertain ones [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
Scenarios of Active Labeling There are three main scenarios of active labeling. First, the
scenario of membership query synthesis, where AL creates synthetic instances within the space
and then queries these new instances. Since there is no processing of unlabeled data, this
scenario is fast compared to the other scenarios and suitable for finite problem domains [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
The main limitation here is that the artificially created instances may not have a meaningful
label. Second, in the stream-based selective sampling scenario, the unlabeled data instances
are drawn iteratively, one at a time, and the learning model makes a decision whether to
query the unlabeled point based on its information content. The third, last, and most
well-known AL scenario is the pool-based scenario, where a query strategy is used to evaluate the
informativeness of some/all instances in the pool of unlabeled data to query the labels of one or
more instances [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
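The pool-based scenario described above can be sketched as a generic loop; `model`, `oracle`, and the entropy-based query strategy are illustrative placeholders, not a fixed API from the cited surveys:

```python
import math

def entropy(p):
    # Shannon entropy of a predicted class distribution (in nats)
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def pool_based_al(model, labeled, unlabeled, oracle, budget):
    """Generic pool-based loop: refit, score the whole pool,
    query the label of the most informative instance, repeat.
    `model` (fit/predict_proba) and `oracle` stand in for
    whatever learner and annotator a practitioner uses."""
    for _ in range(budget):
        model.fit(labeled)
        probs = model.predict_proba(unlabeled)
        best = max(range(len(unlabeled)), key=lambda i: entropy(probs[i]))
        x = unlabeled.pop(best)
        labeled.append((x, oracle(x)))  # the annotator provides the label
    return model.fit(labeled)
```

Each iteration spends one query of the labeling budget on the pool instance with the highest predictive entropy.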
      <p>Other Forms of Active Learning There are different research directions of active learning.
The best known is active labeling, where AL searches a pool of unlabeled instances to select
the most informative and representative points to be labeled and added to the training data.
Another direction is active feature acquisition, where the environment is searched for unobserved
features to improve performance at the time of evaluation. The active class selection direction
aims to actively select a class and ask the annotator to provide a sample/instance for that class,
optimising classification performance with a small number of queries.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Part 2—Beyond Pool-Based Scenarios</title>
      <p>
        In real-world applications, there are situations that go beyond the classical setup of the
pool-based scenario, leading to more challenging and interesting AL scenarios:
Stream-based AL Here, data arrives in a continuous stream (e.g., social media posts).
Therefore, the selection of instances for labeling becomes more dynamic, requiring the model
to make decisions in real time as the stream evolves [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Batch-based AL This involves selecting a batch of instances for labeling at the same time,
rather than selecting a single instance, which is relevant when labeling can be done in
groups. However, the challenge here is to select a diverse and informative batch that
efectively improves the performance of the model [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
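A common way to obtain a diverse yet informative batch is a greedy construction; the following sketch (with hypothetical scores and a 1-d toy pool, not taken verbatim from the cited surveys) weights each candidate's uncertainty by its distance to the batch built so far:

```python
def select_batch(candidates, uncertainty, distance, k):
    """Greedy batch construction: seed with the most uncertain
    candidate, then repeatedly add the candidate whose uncertainty,
    weighted by its minimum distance to the batch so far, is largest.
    `uncertainty` and `distance` are placeholders for any scoring
    function and metric a practitioner plugs in."""
    batch = [max(candidates, key=uncertainty)]
    while len(batch) < k:
        rest = [c for c in candidates if c not in batch]
        batch.append(max(rest, key=lambda c: uncertainty(c) *
                         min(distance(c, b) for b in batch)))
    return batch

# hypothetical 1-d pool with per-point uncertainty scores
points = [0.0, 0.1, 5.0, 5.1, 10.0]
unc = {0.0: 0.9, 0.1: 0.8, 5.0: 0.7, 5.1: 0.6, 10.0: 0.5}.get
print(sorted(select_batch(points, unc, lambda a, b: abs(a - b), k=3)))
# spread-out, uncertain points are chosen: [0.0, 5.0, 10.0]
```

Note how the near-duplicates 0.1 and 5.1 are skipped even though their uncertainty is high.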
      <p>
        Semi-supervised AL Here, both labeled and unlabeled data are available, and AL strategies
aim to use both data sources to improve model performance [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        Transductive and Inductive AL The goal of both strategies is to find the most
informative examples for labeling. The main difference between them lies in their goals
and focus: transductive AL aims to improve the model’s performance on the
current set of unlabeled instances, while inductive AL focuses on improving the model’s
generalization to new, unseen instances. Thus, the choice between them depends on
whether the goal is to achieve immediate accuracy or broader generalization [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Part 3—Beyond Active Labeling</title>
      <p>The term active learning is often identified with an active labeling of unlabeled instances from a
pool or a stream. This understanding is limited by the assumptions i) that labels can be assigned
in hindsight and to arbitrary instances and ii) that labels are the only relevant cost factor during
data acquisition; use cases of AL might violate these assumptions, thereby rendering an active
acquisition of labels infeasible. In the third part of the tutorial, we therefore broaden the idea of
AL to settings where other parts of the data are queried, e.g., settings where individual features
or complete instances have to be acquired instead of labels.</p>
      <p>
        Active Class Selection One such setting is active class selection [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], where some
class-conditional data generator 𝑔 ∶ 𝒴 → 𝒳 is assumed. Strategies for active class selection query
this data generator in terms of the class proportions which are to be generated during the next
acquisition round. The generator then produces a batch of new data instances according to
these proportions. Data generators of this kind appear in use cases as diverse as astrophysics [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],
brain computer interaction [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ], and gas sensor arrays [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. They are in contrast to the oracles
𝑜 ∶ 𝒳 → 𝒴 that are required for an active labeling of pre-existing instances.
      </p>
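One acquisition round against such a generator can be sketched as follows; the inverse-accuracy allocation is only one illustrative heuristic family, and all names are ours:

```python
def next_proportions(per_class_accuracy):
    """Active class selection sketch: allocate the next round's class
    proportions inversely to current per-class accuracy, so poorly
    modeled classes receive more generated instances.  (Illustrative
    only; the cited papers study several strategies.)"""
    weights = {c: 1.0 - acc for c, acc in per_class_accuracy.items()}
    total = sum(weights.values()) or 1.0
    return {c: w / total for c, w in weights.items()}

def acquire(generator, proportions, batch_size):
    # query the class-conditional generator g : Y -> X once per class
    batch = []
    for c, share in proportions.items():
        batch += [(generator(c), c) for _ in range(round(share * batch_size))]
    return batch

props = next_proportions({"a": 0.9, "b": 0.6})
print(props)  # class "b", being modeled worse, gets the larger share
```

The generator then produces the requested batch, e.g. `acquire(g, props, 100)` for 100 new instances.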
      <p>
        Most strategies for the active selection of classes [
        <xref ref-type="bibr" rid="ref5 ref9">5, 9</xref>
        ] consist of ad-hoc heuristics. A recent
line of research, however, evolves around theoretical analyses of the implications that active
class selection has on supervised learning. This line of research, put forward by the presenters
of this tutorial, begins with a study on consistency [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and evolves into a PAC analysis [
        <xref ref-type="bibr" rid="ref11 ref6">6, 11</xref>
        ].
The theoretical results therein give rise to an acquisition strategy [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] which leverages the prior
knowledge of a practitioner instead of relying on heuristics.
      </p>
      <p>
        Active Feature Acquisition Another setting of AL is active feature acquisition [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], where
instances have missing features that can be queried individually. For this purpose, a feature
value oracle is needed. Acquisition strategies have to choose missing feature values of
specific instances to acquire—a problem that might occur at training time or at testing time of a
supervised model.
      </p>
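The core decision of active feature acquisition, namely which missing value to buy next, can be sketched as a benefit-per-cost maximization; the estimators and the cell names below are hypothetical, not part of any cited strategy:

```python
def choose_acquisition(missing, benefit, cost):
    """Active feature acquisition sketch: among all (instance, feature)
    pairs with missing values, acquire the one with the best estimated
    benefit-per-cost.  `benefit` and `cost` stand in for whatever
    estimators a concrete strategy defines."""
    return max(missing, key=lambda pair: benefit(pair) / cost(pair))

# hypothetical missing cells: (instance id, feature name)
missing = [(0, "lab_test"), (0, "survey"), (3, "lab_test")]
benefit = {(0, "lab_test"): 0.5, (0, "survey"): 0.2, (3, "lab_test"): 0.4}.get
cost = {(0, "lab_test"): 5.0, (0, "survey"): 1.0, (3, "lab_test"): 5.0}.get
print(choose_acquisition(missing, benefit, cost))  # -> (0, 'survey')
```

The cheap survey wins here despite its lower benefit, which is exactly the cost-awareness that distinguishes this setting from active labeling.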
      <p>
        In our tutorial, we introduce the problem of active feature acquisition and we revisit some of
the most important strategies [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14, 15, 16</xref>
        ]. All in all, the third part of this tutorial demonstrates
that AL comprises several important problems beyond the active acquisition of labels.
      </p>
    </sec>
    <sec id="sec-4">
      <title>Part 4—Towards Explainable Active Learning using Meta-Learning</title>
      <p>Explainable AI focuses on developing machine learning models that provide transparent and
interpretable explanations for their predictions. A lack of interpretability can be a significant
drawback in safety-critical applications, like healthcare, finance, and autonomous systems.
Meta-learning aids in providing interpretable explanations in AL.</p>
      <p>
        Meta-Learning The majority of popular AL approaches rely on heuristics, none of which
clearly outperforms the others in all cases [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. The primary objective of meta-active learning
(meta-AL) is to develop a data-driven approach to AL, which is capable of selecting the optimal
set of unlabeled items for labeling. The fundamental idea is to train a regressor that predicts
the informativeness of a candidate sample in a specific learning state [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Recent examples
include bandit algorithms [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] and reinforcement learning techniques [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] which, however, are
limited to combining pre-existing hand-designed heuristics. This limitation is lifted in “Learning
Active Learning” [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], a method which predicts the reduction in generalization error that is
caused by labeling an instance. This method outperforms competing methods at a relatively
low computational cost.
      </p>
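The idea of learning such a regressor can be sketched in a deliberately tiny form; methods such as the cited “Learning Active Learning” use richer state features and stronger regressors, so the one-feature least-squares fit below, with made-up training pairs, only illustrates the principle:

```python
def train_meta_regressor(episodes):
    """Meta-AL sketch: fit a least-squares line from one learning-state
    feature (here, a candidate's uncertainty) to the observed drop in
    generalization error after labeling that candidate."""
    n = len(episodes)
    mx = sum(x for x, _ in episodes) / n
    my = sum(y for _, y in episodes) / n
    slope = (sum((x - mx) * (y - my) for x, y in episodes)
             / sum((x - mx) ** 2 for x, _ in episodes))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

# hypothetical (uncertainty, observed error reduction) training pairs
predict = train_meta_regressor([(0.1, 0.01), (0.5, 0.05), (0.9, 0.09)])
candidates = [0.3, 0.7]
print(max(candidates, key=predict))  # the largest predicted gain: 0.7
```

At query time, the learned predictor replaces a hand-designed heuristic: the candidate with the highest predicted error reduction is labeled.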
      <p>
        Explainability Explainable AL using meta-learning [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] involves leveraging the benefits
of both AL and meta-learning techniques to obtain more informative labels while ensuring
the interpretability of the AL process. This goal can be achieved through several measures:
explainable model architectures, interpretable meta-models, explainable active sample selection,
attention mechanisms, post-hoc explanation techniques, regularization towards explainable
outcomes, and human-in-the-loop feedback.
      </p>
      <p>By combining these strategies, researchers can develop a system that not only achieves
high accuracy through AL and meta-learning but also provides transparent, interpretable, and
trustworthy explanations for its predictions. The goal is to strike a balance between accuracy and
interpretability, making the AL process more trustworthy and usable in real-world applications
where human understanding is essential.</p>
    </sec>
    <sec id="sec-5">
      <title>Part 5—Practical Challenges and New Research Directions</title>
      <p>We conclude our tutorial with a discussion of common challenges that appear in real-world
AL applications. We also propose several research directions that have not received much
attention yet.
Imbalanced Data This challenge stems from the difficulty of learning from minority classes.</p>
      <p>
        The problems of unequal labelling costs and unequal misclassification costs are similar;
they arise when the costs differ between classes [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
Therefore, AL strategies should improve their exploration capability to scan the whole space
including minority class subspaces [
        <xref ref-type="bibr" rid="ref22 ref3">3, 22</xref>
        ].
      </p>
      <p>
        Diversity of Samples Non-representative selections of points (e.g., when samples are
concentrated in a small region) reduce generalization capabilities and lead to biased models.
Hence, a consideration of diversity has to compensate for the lack of exploration in
uncertainty methods [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
      </p>
      <p>
        Outliers Outliers can appear to represent an informative region; instead, however, they
divert AL techniques from exploring truly uncertain regions [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
      <p>
        High Dimensionality High-dimensional spaces challenge AL not only because they challenge
the learning method, but also because of the computational time required [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
Crowdsourcing Non-expert annotators have the potential of cheaply labeling large amounts
of data. However, their labels might be noisy, leading to negative effects that can be more
harmful than having only small training sets [
        <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
        ].
      </p>
      <p>
        Small Query Budgets With a small budget, AL may not be able to explore the entire space
perfectly. This situation can lead to sub-optimal performances [
        <xref ref-type="bibr" rid="ref28">28</xref>
        ].
      </p>
      <p>
        Stopping Criteria AL continues querying until a termination condition is met [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
Termination conditions can be based on sampling complexity or on fixed budgets [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ].
      </p>
      <sec id="sec-5-1">
        <title>New Research Directions</title>
        <p>
          Deep AL Deep learning achieves impressive results, especially with large training sets.
However, collecting these sets is often challenging [
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]. Deep AL promises reasonable
performance with small but highly informative training sets [
          <xref ref-type="bibr" rid="ref31">31</xref>
          ].
        </p>
        <p>
          AL with Evolutionary Algorithms The most expensive part of evolutionary optimization
algorithms is the fitness evaluation. Here, AL can be used to select the most informative
points to build a surrogate model that simulates the original fitness function [
          <xref ref-type="bibr" rid="ref32">32</xref>
          ].
AL with Simulation In large-scale simulation models, the calibration of large numbers of
parameters is expensive. AL can be used to reduce the number of simulations required
by finding the most informative regions within the space and the most important
parameters [
          <xref ref-type="bibr" rid="ref33">33</xref>
          ].
        </p>
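The surrogate-assisted idea behind both of these directions can be sketched with a crude proxy for model uncertainty; the distance-based criterion below is an illustrative stand-in, not a method from the cited works:

```python
def next_surrogate_sample(evaluated, candidates):
    """Surrogate-assisted sketch: spend the next expensive fitness (or
    simulation) evaluation on the candidate farthest from all points
    evaluated so far -- a crude stand-in for surrogate uncertainty."""
    return max(candidates, key=lambda c: min(abs(c - e) for e in evaluated))

# two cheap evaluations exist at 0.0 and 1.0; probe the unexplored middle
print(next_surrogate_sample([0.0, 1.0], [0.1, 0.5, 0.9]))  # -> 0.5
```

Each selected point is evaluated with the true expensive function, added to `evaluated`, and used to refine the surrogate before the next round.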
        <p>
          AL with Design of Experiments Design of experiments allows researchers to optimize
processes by identifying important factors and drawing reliable conclusions with a minimum
number of trials [
          <xref ref-type="bibr" rid="ref34">34</xref>
          ]. Here, AL can reduce the number of experiments by finding and
performing only the most informative ones.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The work of M.B. and A.S. was partly funded by the Federal Ministry of Education and Research
of Germany and the state of North Rhine-Westphalia as part of the Lamarr Institute for Machine
Learning and Artificial Intelligence. The work of A.T. was conducted within the framework of
the project “SAIL: SustAInable Lifecycle of Intelligent SocioTechnical Systems” (grant no.
NW21059B). SAIL is receiving funding from the programme “Netzwerke 2021”, an initiative of the
Ministry of Culture and Science of the State of North Rhine-Westphalia. The sole responsibility
for the content of this publication lies with the authors.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Settles</surname>
          </string-name>
          ,
          <article-title>Active learning literature survey</article-title>
          ,
          <source>Technical Report 1648</source>
          , University of Wisconsin–Madison, Department of Computer Sciences,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>A survey on active learning: State-of-the-art, practical challenges and research directions</article-title>
          ,
          <source>Mathematics</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <fpage>820</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          , W. Schenck,
          <article-title>Balancing exploration and exploitation: A novel active learner for imbalanced data</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>210</volume>
          (
          <year>2020</year>
          )
          <fpage>106500</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>G.</given-names>
            <surname>Ciano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rossi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bianchini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Scarselli</surname>
          </string-name>
          ,
          <article-title>On inductive-transductive learning with graph neural networks</article-title>
          ,
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>44</volume>
          (
          <year>2021</year>
          )
          <fpage>758</fpage>
          -
          <lpage>769</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Lomasky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. E.</given-names>
            <surname>Brodley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aernecke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Walt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Friedl</surname>
          </string-name>
          ,
          <article-title>Active class selection</article-title>
          ,
          <source>in: Europ. Conf. on Mach. Learn., volume 4701 of Lecture Notes in Comput. Sci.</source>
          , Springer,
          <year>2007</year>
          , pp.
          <fpage>640</fpage>
          -
          <lpage>647</lpage>
          . doi:10.1007/978-3-540-74958-5_63.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bunse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Morik</surname>
          </string-name>
          ,
          <article-title>Certification of model robustness in active class selection</article-title>
          ,
          <source>in: Europ. Conf. on Mach. Learn. and Knowl</source>
          . Discov. in Databases, Springer,
          <year>2021</year>
          , pp.
          <fpage>266</fpage>
          -
          <lpage>281</lpage>
          . doi:10.1007/978-3-030-86520-7_17.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. J.</given-names>
            <surname>Lance</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. D.</given-names>
            <surname>Parsons</surname>
          </string-name>
          ,
          <article-title>Collaborative filtering for brain-computer interaction using transfer learning and active class selection</article-title>
          ,
          <source>PloS one 8</source>
          (
          <year>2013</year>
          ). doi:10.1371/journal.pone.0056624.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>I.</given-names>
            <surname>Hossain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khosravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nahavandi</surname>
          </string-name>
          ,
          <article-title>Weighted informative inverse active class selection for motor imagery brain computer interface</article-title>
          ,
          <source>in: Canad. Conf. on Electr. and Comput</source>
          . Engin., IEEE,
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          . doi:10.1109/CCECE.2017.7946613.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kottke</surname>
          </string-name>
          , G. Krempl,
          <string-name>
            <given-names>M.</given-names>
            <surname>Stecklina</surname>
          </string-name>
          , C. S. von Rekowski, T. Sabsch,
          <string-name>
            <given-names>T. P.</given-names>
            <surname>Minh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Deliano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Spiliopoulou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Sick</surname>
          </string-name>
          ,
          <article-title>Probabilistic active learning for active class selection</article-title>
          ,
          <source>in: Proc. of the NeurIPS Worksh. on the Future of Interact. Learn. Mach.</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bunse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Weichert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Morik</surname>
          </string-name>
          ,
          <article-title>Optimal probabilistic classification in active class selection</article-title>
          ,
          <source>in: Int. Conf. on Data Mining</source>
          , IEEE,
          <year>2020</year>
          , pp.
          <fpage>942</fpage>
          -
          <lpage>947</lpage>
          . doi:10.1109/ICDM50108.2020.00106.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Senz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bunse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Morik</surname>
          </string-name>
          ,
          <article-title>Certifiable active class selection in multi-class classification</article-title>
          , in: Worksh. on Interact. Adapt. Learn.,
          <source>CEUR Worksh. Proc.</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>68</fpage>
          -
          <lpage>76</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>M.</given-names>
            <surname>Bunse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Morik</surname>
          </string-name>
          ,
          <article-title>Active class selection with uncertain deployment class proportions</article-title>
          , in: Worksh. on Interact. Adapt. Learn.,
          <source>CEUR Worksh. Proc.</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>70</fpage>
          -
          <lpage>79</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Saar-Tsechansky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Melville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. J.</given-names>
            <surname>Provost</surname>
          </string-name>
          ,
          <article-title>Active feature-value acquisition</article-title>
          ,
          <source>Manag. Sci</source>
          .
          <volume>55</volume>
          (
          <year>2009</year>
          )
          <fpage>664</fpage>
          -
          <lpage>684</lpage>
          . doi:10.1287/mnsc.1080.0952.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>M. desJardins</surname>
          </string-name>
          , J. MacGlashan,
          <string-name>
            <surname>K. L. Wagstaff</surname>
          </string-name>
          ,
          <article-title>Confidence-based feature acquisition to minimize training and test costs</article-title>
          ,
          <source>in: SIAM Int. Conf. on Data Mining, SIAM</source>
          ,
          <year>2010</year>
          , pp.
          <fpage>514</fpage>
          -
          <lpage>524</lpage>
          . doi:10.1137/1.9781611972801.45.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sugiyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Active feature acquisition with supervised matrix completion</article-title>
          ,
          <source>in: Int. Conf. on Knowl. Discov. &amp; Data Mining, ACM</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>1571</fpage>
          -
          <lpage>1579</lpage>
          . doi:10.1145/3219819.3220084.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Padmanabhan</surname>
          </string-name>
          ,
          <article-title>On active learning for data acquisition</article-title>
          ,
          <source>in: Int. Conf. on Data Mining</source>
          , IEEE,
          <year>2002</year>
          , pp.
          <fpage>562</fpage>
          -
          <lpage>569</lpage>
          . doi:10.1109/ICDM.2002.1184002.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>K.</given-names>
            <surname>Konyushkova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Sznitman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fua</surname>
          </string-name>
          ,
          <article-title>Learning active learning from data</article-title>
          ,
          <source>Advances in Neural Information Processing Systems</source>
          <volume>30</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Taguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Kameyama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hino</surname>
          </string-name>
          ,
          <article-title>Active learning with interpretable predictor</article-title>
          ,
          <source>in: 2019 International Joint Conference on Neural Networks (IJCNN)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Hospedales</surname>
          </string-name>
          ,
          <article-title>Dynamic ensemble active learning: A nonstationary bandit with expert advice</article-title>
          ,
          <source>in: 2018 24th International Conference on Pattern Recognition (ICPR)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>2269</fpage>
          -
          <lpage>2276</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>K.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hospedales</surname>
          </string-name>
          ,
          <article-title>Meta-learning transferable active learning policies by deep reinforcement learning</article-title>
          ,
          <source>arXiv preprint arXiv:1806.04798</source>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kottke</surname>
          </string-name>
          ,
          <article-title>A Holistic, Decision-Theoretic Framework for Pool-Based Active Learning</article-title>
          ,
          <source>PhD thesis</source>
          , University of Kassel,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A.</given-names>
            <surname>Tharwat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Schenck</surname>
          </string-name>
          ,
          <article-title>A novel low-query-budget active learner with pseudo-labels for imbalanced data</article-title>
          ,
          <source>Mathematics</source>
          <volume>10</volume>
          (
          <year>2022</year>
          )
          <fpage>1068</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>J. T.</given-names>
            <surname>Ash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Krishnamurthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Langford</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <article-title>Deep batch active learning by diverse, uncertain gradient lower bounds</article-title>
          ,
          <source>arXiv preprint arXiv:1906.03671</source>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>S.</given-names>
            <surname>Karamcheti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Krishna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fei-Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. D.</given-names>
            <surname>Manning</surname>
          </string-name>
          ,
          <article-title>Mind your outliers! Investigating the negative impact of outliers on active learning for visual question answering</article-title>
          ,
          <source>arXiv preprint arXiv:2107.02331</source>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>T.</given-names>
            <surname>Tran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.-T.</given-names>
            <surname>Do</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Reid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Carneiro</surname>
          </string-name>
          ,
          <article-title>Bayesian generative active deep learning</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6295</fpage>
          -
          <lpage>6304</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. S.</given-names>
            <surname>Sheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Learning from crowds with active learning and self-healing</article-title>
          ,
          <source>Neural Computing and Applications</source>
          <volume>30</volume>
          (
          <year>2018</year>
          )
          <fpage>2883</fpage>
          -
          <lpage>2894</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>A.</given-names>
            <surname>Calma</surname>
          </string-name>
          ,
          <article-title>Active Learning with Uncertain Annotators: Towards Dedicated Collaborative Interactive Learning</article-title>
          ,
          <source>PhD thesis</source>
          , University of Kassel,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Niu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Tan</surname>
          </string-name>
          ,
          <article-title>Online adaptive asymmetric active learning with limited budgets</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          <volume>33</volume>
          (
          <year>2019</year>
          )
          <fpage>2680</fpage>
          -
          <lpage>2692</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>V.-L.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Shaker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Hüllermeier</surname>
          </string-name>
          ,
          <article-title>How to measure uncertainty in uncertainty sampling for active learning</article-title>
          ,
          <source>Machine Learning</source>
          <volume>111</volume>
          (
          <year>2022</year>
          )
          <fpage>89</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>K. O.</given-names>
            <surname>Lye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mishra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Chandrashekar</surname>
          </string-name>
          ,
          <article-title>Iterative surrogate model optimization (ISMO): An active learning algorithm for PDE constrained optimization with deep neural networks</article-title>
          ,
          <source>Computer Methods in Applied Mechanics and Engineering</source>
          <volume>374</volume>
          (
          <year>2021</year>
          )
          <fpage>113575</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Islam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ghahramani</surname>
          </string-name>
          ,
          <article-title>Deep Bayesian active learning with image data</article-title>
          ,
          <source>in: International Conference on Machine Learning, PMLR</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1183</fpage>
          -
          <lpage>1192</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>N.</given-names>
            <surname>Zemmal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Azizi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sellami</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Cheriguene</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ziani</surname>
          </string-name>
          ,
          <article-title>A new hybrid system combining active learning and particle swarm optimisation for medical data classification</article-title>
          ,
          <source>International Journal of Bio-Inspired Computation</source>
          <volume>18</volume>
          (
          <year>2021</year>
          )
          <fpage>59</fpage>
          -
          <lpage>68</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>T.</given-names>
            <surname>Lookman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. V.</given-names>
            <surname>Balachandran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <article-title>Active learning in materials science with emphasis on adaptive sampling using uncertainties for targeted design</article-title>
          ,
          <source>npj Computational Materials</source>
          <volume>5</volume>
          (
          <year>2019</year>
          )
          <fpage>21</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>C.-T.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. X.</given-names>
            <surname>Gu</surname>
          </string-name>
          ,
          <article-title>Generative deep neural networks for inverse materials design using backpropagation and active learning</article-title>
          ,
          <source>Advanced Science</source>
          <volume>7</volume>
          (
          <year>2020</year>
          )
          <fpage>1902607</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>