          A Meta-Learning Approach to One-Step
                    Active-Learning

           Gabriella Contardo¹, Ludovic Denoyer¹, and Thierry Artières²
    ¹ Sorbonne Universités, UPMC Univ Paris 06, UMR 7606, LIP6, F-75005, Paris,
                      France. firstname.lastname@lip6.fr
 ² Ecole Centrale Marseille - Laboratoire d'Informatique Fondamentale (Aix-Marseille
             Univ.), France. thierry.artiere@centrale-marseille.fr



         Abstract. We consider the problem of learning when obtaining the
         training labels is costly, which is usually tackled in the literature using
         active-learning techniques. These approaches provide strategies to choose
         the examples to label before or during training. These strategies are
         usually based on heuristics or theoretical measures; they are applied
         directly during training and are not themselves learned. We design a
         model that aims at learning active-learning strategies in a meta-learning
         setting. More specifically, we consider a pool-based setting, where the
         system observes all the examples of the dataset of a problem and has to
         choose the subset of examples to label in a single shot. Experiments show
         encouraging results.


1       Introduction
Machine learning, and more specifically deep learning techniques, are now recog-
nized for their ability to obtain high performance on a large variety of problems,
from image recognition to natural language processing. However, most of the
tasks tackled so far are supervised and require a large amount of labeled data
to be learned properly. Depending on the final application, these labeled
examples are often expensive to obtain (e.g. manual annotation) and not always
available in large quantity. Learning from a small amount of labeled data is
thus a key issue in the machine learning domain.
    Humans are able to learn and generalize well from only a few labeled examples
(e.g. children can rapidly recognize any depiction of a car or of some animals -
drawing, photo, real life - after having been shown only a few pictures with
explicit "supervision"). This problem has been studied in the literature as one-
shot (or few-shot) learning, where the goal is to predict based on very few
supervised examples (e.g. one per category). This setting was first proposed in
[16], and it has recently seen a renewal of interest under slightly different
flavors. Several methods have been presented, relying on different techniques
such as matching networks and bi-LSTMs ([14]) or memory networks ([10]), which
are learned using a meta-learning approach: they aim at learning, from a large
set of learning problems, a strategy that enables the algorithm to use the
(small) supervision efficiently and rapidly when facing a new problem (see
Section 2 for a description of the related work). In this setting, one considers
that the model receives as input a set of already labeled data, usually k
examples chosen randomly per category in the problem.
    In parallel, the field of active learning focuses on approaches that allow a
model to ask an oracle for the labels of some training examples, to improve its
learning. It is thus based on a different assumption, where the model has the
ability to ask for labels of a set of unsupervised data. In this case, different
settings can be defined, regarding the nature of the set of unsupervised
examples (a finite, completely observable dataset, i.e. pool-based, or a stream
of inputs) and the nature of the acquisition process (single-step or
sequential). Some approaches also benefit from an initial small labeled dataset.
Since the decision process for selecting the examples to label is made during
training, state-of-the-art methods in this field do not learn this decision
process, but instead design specific heuristics or criteria.

We propose to study a problem at the crossroads of one-shot learning and active
learning. We present a method that not only learns to classify examples using
little supervision but additionally learns the label acquisition strategy used
to acquire the training set. We study the pool-based setting: the model works on
a completely observable set of examples. This is novel with regard to previous
approaches in one-shot learning, which consider a stream of examples to classify
one after the other. The choice of the subset of examples to label is made in a
single step via the acquisition strategy.

In Section 3, we define the problem and the specific training strategy inspired
by recent one-shot learning methods. We then describe in Section 4 our approach,
which is based on representation learning and the use of bi-directional
recurrent networks. Section 5 provides experimental results on artificial and
real datasets.


2    Related work

The active-learning problem has been studied under various flavors, reviewed
in the survey of [11]. Generically speaking, methods are usually composed of two
components: a selector, which decides which examples should be labeled, and a
predictor. Most approaches focus on sequential labeling strategies, where the
system can send some examples to the oracle for labeling, possibly update its
prediction model, and choose new examples to be labeled depending on the answers
of the oracle and/or the new predictor. The data examples can be presented to
the selector either as a complete set (i.e. pool-based) or in a sequential
fashion, where the selector has to decide at each step whether the example
should be labeled or not. Several methods for single-instance selectors in the
pool-based setting have been proposed, such as [18], which uses Fisher
information matrices, or [3], which relies on a multi-armed bandit approach.
Batch-mode approaches (i.e. where several labels can be requested at each step)
have been studied for instance by [7], using a definition of performance based
on high likelihood of the labeled examples and low uncertainty on the unlabeled
ones. Stream-based settings have been tackled through measures of
"informativeness" (i.e favor labeling of more informative examples [4]), by defining
region of uncertainty (e.g [2]), using "committees" for the decision (e.g [8] with an
ensemble method focusing on favoring diversity in committee members). Other
types of approaches design decisions by studying the expected model change ([12])
or the expected error reduction ([9]). Static methods (i.e where the subset of
examples to label is decided in a single shot) have been less studied as it can not
benefit from the feedback of the oracle or any estimation w.r.t. the prediction, the
quality of the current predictor or uncertainty measure. However such methods
can prove useful when asking several times in a row an oracle is not possible, or
when interactions between the learner and the "oracle" is limited, e.g. as cited by
[5] when using Amazon Mechanical Turks. In this paper, the authors define the
problem as selective labeling, in a semi-supervised context. They propose to select
a subset of examples to label by minimizing the upper-bound of a deterministic
out-of-sample error bound for Laplacian regularized Least Squares. [6] present an
approach for single batch active learning for specific graphs-based tasks, while [17]
propose a method based on transductive experimental design, however they design
a sequential optimization algorithm to overcome the combinatorial problem.

In parallel, the problem of one-shot learning (first described in [16]) has seen
a renewal of interest. Notably, recent methods have proposed to use a meta-
learning approach, relying on additional data of a similar nature (e.g. images
of different classes). The goal is to design systems that learn to predict on
novel problems based only on a few labeled examples. For example, [10] propose
to use memory-augmented neural networks to integrate and store the new examples.
Similarly, [14] propose to rely on external memories for neural networks,
bidirectional LSTMs and attention LSTMs. One key aspect of their approach is
that they represent an instance w.r.t. the current memory (i.e. the observed
labeled examples). Note that these approaches cast a "one-shot learning problem"
(e.g. training point/inference point) as a sequential problem, where instances
arrive one after the other. Additionally, the system can receive some feedback
afterwards on the observed instances.
Tackling active learning through meta-learning has been little studied so far.
The work of [15] proposes an extension of the model of [10], where the true
label of the observed instance is withheld unless the system asks for it. The
model can either classify or ask for the label. The decision is learned through
reinforcement learning, where the system gets a high reward for accurate
predictions and is penalized when acquiring a label or giving a false
prediction. They design the action-value function to be learned as an LSTM. This
suffers from a drawback similar to that of one-shot learning methods, as it does
not consider the dataset as a whole but instead follows a "myopic" process.
The recent work of [1] is the most closely related to ours, as they propose a
similar approach for this novel task of meta-learning an active labeling
strategy in a pool-based setting. However, they present a model that
sequentially selects items to label over several steps, while we propose a
"one-step" static selection that does not rely on any oracle feedback.


Fig. 1: Example of a complete dataset for a meta-active-learning strategy, with
a set of training problems S, with P categories per problem, drawn from a total
of |C_train| classes, and a set of testing problems on distinct categories. Each
problem is composed of a set of N examples that can be labeled and used for
prediction, and a set of M examples to classify.


3     Meta-active learning problem and setting
3.1   Preliminary
The generic goal of an active learning system is to provide the best possible
prediction on a task while using as few labels as possible. The system has to
choose the most relevant examples to label in order to learn accurately. It is
usually assumed that the model has access to an oracle, which provides the
labels of given examples. Active learning usually aims at tackling a single
problem, i.e. one dataset and one task. We consider in this paper a pool-based
setting with single-step acquisition, which amounts to the following generic
scheme: (i) the system receives an entire unsupervised set of examples, (ii) it
computes the subset of examples to send to the oracle for labeling, (iii)
learning is done based on this reduced supervised subset. In such a single-step
setting, the decision process for choosing the examples to label cannot be
learned.

We propose to design a meta-active-learning protocol in order to learn the
acquisition strategy, i.e. the way to choose the examples to label, in a meta-
learning fashion. We follow a principle similar to what has recently been
presented for one-shot learning problems, e.g. in [10]. It extends the basic
principle of training in machine learning, where a model is trained on data-
points drawn from a distribution similar to that of the data-points observed
during inference. For one-shot learning, this amounts to treating whole one-shot
problems as data-points, built over datasets of a similar nature (e.g. all
inputs are images). The protocol therefore replicates the final task during
training and aims at learning to learn from few examples.

Let us now describe our meta-active-learning protocol while introducing a few
notations. As illustrated in Figure 1, our training stage consists of many
elementary active classification problems built from a large dataset. Each
elementary problem, denoted S = (C, S^{Train}, S^{Eval}), is dedicated to the
classification of the classes in a set C and comes with two sets of examples:
the first one, S^{Train}, is used to infer a prediction model, and the second
one, S^{Eval}, is used to evaluate the inferred model.
    Starting from a large multiclass dataset B of labeled examples belonging to
a large number of categories U^{Train}, each elementary problem is built as
follows:
 – A subset of classes C is sampled uniformly from the set of all categories in
   U^{Train}.
 – Then, a first set of N examples from classes in C is sampled from B to build
   S^{Train} = {(x_1, y_1), ..., (x_N, y_N)}, where x_i is the i-th input
   data-point and y_i ∈ C stands for its class.
 – At last, a second set of M new data-points is sampled from B to build
   S^{Eval} = {(x_{N+1}, y_{N+1}), ..., (x_{N+M}, y_{N+M})}, where
   S^{Train} ∩ S^{Eval} = ∅.
    In the learning stage, the system is presented with a series of elementary
training problems S. For each problem, the training set S^{Train} is provided
without any labels, and the system is allowed to ask for the labels of a limited
subset D of samples in S^{Train}, according to an acquisition strategy. The
system then infers a predictive model d from D, which is evaluated over
S^{Eval}. The learning stage trains the various components of the system (the
acquisition strategy and the inference of a predictive model). Each pair
(S^{Train}, S^{Eval}) serves as a supervised example for the meta-learning
algorithm of the system.
    In the test stage, the system is evaluated on elementary testing problems to
assess the quality of our meta-learning approach. The testing problems are
entirely different from the training problems, since they are based on a new
subset of categories U^{Test} that is disjoint from the categories U^{Train}
used to build the training sets.
    An illustration of this setting is provided in Figure 1 with image
classification. All elementary classification problems are binary (i.e.
|C| = 2). The training problems contain categories such as cats, dogs, houses
and bicycles, with different classification problems, e.g. classification
between cat and dog, dog and house, etc. The elementary testing problems are
drawn from a different set of categories, here elephants, cars, cakes and
planes.


3.2    Problem Definition

The goal of a meta-active-learning system is to learn an active-learning
strategy such that, for each problem, coming with a training dataset of
unlabeled examples, it can predict the most relevant examples to label and
provide a good prediction, based on these supervised examples, on the "test"
part of the problem. We propose a system composed of two modules to tackle such
a task.
    The first component is an active-learning strategy, which controls the
selection of examples. This strategy is defined as a probability distribution
over the set of training examples of a problem, which we note P(α|S^{Train}),
where α is a binary vector of size N such that α_k = 1 if the strategy asked
for label y_k and α_k = 0 otherwise. The distribution P(α|S^{Train}) is used to
sample which examples are sent to the oracle for labeling. This yields a subset
of labeled examples D_α = {x_j ∈ S^{Train} | α_j = 1} ⊂ S^{Train}.
    The second component is a prediction component, which takes as input an
example x to classify in S^{Eval} and the supervised training dataset D_α, and
outputs a prediction for this example, denoted d(x, D_α). The prediction
component does not have access to the examples that have not been targeted by
the acquisition policy, i.e. only the examples from D_α are used.
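    As a minimal sketch (the helper names and the budget-based multinomial
sampling are our assumptions, anticipating Section 4.2), sampling α and building
D_α could look as follows:

import torch

def select_examples(probs: torch.Tensor, budget: int):
    # Sample `budget` distinct indices from the acquisition distribution
    # `probs` (shape [N], summing to 1) and return the binary vector alpha.
    idx = torch.multinomial(probs, num_samples=budget, replacement=False)
    alpha = torch.zeros_like(probs)
    alpha[idx] = 1.0          # alpha_j = 1 <=> the label of x_j is requested
    return alpha, idx

probs = torch.softmax(torch.randn(25), dim=0)   # placeholder acquisition scores
alpha, idx = select_examples(probs, budget=4)
# D_alpha = {(x_j, y_j) : alpha_j = 1}: query the oracle on the indices `idx`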
    We summarize the generic learning scheme in Algorithm 1. During training,
the process iteratively samples a random problem S from the set of training
problems. The acquisition model receives S^{Train} (without labels) and decides
which examples to select for labeling by sampling from P(α|S^{Train}). The
resulting labeled set D_α is used to output a prediction for each example in
S^{Eval} using the prediction module d. Its performance is evaluated on S^{Eval}
and used to update the model. The process is similar at testing time, to
evaluate the whole meta-learning system.
    Since we consider that acquiring labels has a price, we consider a generic
objective function that is a trade-off between the prediction quality on the
evaluation set S^{Eval} and the size of the labeled set |D_α|, i.e. the labeling
cost. The generic objective function is:

\mathcal{L} = \mathbb{E}_{S \sim P(S)}\Big[\mathbb{E}_{\alpha \sim P(\alpha|S^{Train})}\Big[\sum_{(x_j, y_j) \in S^{Eval}} \Delta(d(x_j, D_\alpha), y_j) + \lambda |D_\alpha|\Big]\Big] \qquad (1)


    where \mathbb{E}_{S \sim P(S)} is the expectation over the distribution of
problems, which we empirically approximate by an average over a large set of
training problems. \mathbb{E}_{\alpha \sim P(\alpha|S^{Train})} stands for the
expectation over the subsets of examples selected according to the acquisition
strategy, and \Delta(d(x_j, D_\alpha), y_j) measures the error \Delta between
the expected output y_j and the prediction d(x_j, D_\alpha) of the model
inferred from D_\alpha for an evaluation sample x_j.
Fig. 2: Illustration of the inference process for a given problem: the
unsupervised dataset S^{Train} is fed to a "selector", which decides which
examples should be labeled. The oracle provides the requested labels, yielding a
small supervised sub-dataset D_α. This dataset is used by the prediction model
to predict on the evaluation examples in S^{Eval}.


Algorithm 1 Learning algorithm for meta-active learning.
Require: P(S): distribution over training problems.
Require: Active-learning model (acquisition component)
Require: d: prediction model
1: repeat
2:    Sample a random problem S
3:    The active-learning model computes the probability P(α|S^{Train}).
4:    Sample from this probability to obtain D_α, the subset of examples of
      S^{Train} to label
5:    Feed d with the labeled sub-dataset D_α and evaluate the error of d on its
      predictions for all x_j ∈ S^{Eval}
6:    Update both modules accordingly.
7: until stopping criterion
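
To make Algorithm 1 concrete, here is a minimal Python sketch under our own
assumptions: sample_problem(), sample_selection(), loss() and meta_update() are
hypothetical helpers standing respectively for the problem distribution, the
sampling of α, the error ∆, and the gradient update described in Section 4.

def meta_train(selector, predictor, budget, n_iterations):
    for _ in range(n_iterations):                            # 1: repeat
        X_train, y_train, X_eval, y_eval = sample_problem()  # 2: sample S
        probs = selector(X_train)                            # 3: P(alpha|S^Train)
        alpha = sample_selection(probs, budget)              # 4: choose labels to ask
        D_alpha = [(x, y) for x, y, a in zip(X_train, y_train, alpha) if a]
        errors = [loss(predictor(x, D_alpha), y)             # 5: evaluate on S^Eval
                  for x, y in zip(X_eval, y_eval)]
        meta_update(selector, predictor, probs, alpha, errors)  # 6: update modules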




4     Description of the model

4.1   Optimization criterion

We now detail the optimization criterion based on the generic objective function
defined in Equation 1.
As explained in the previous section, the sub-dataset D_α of examples chosen for
labeling is derived from the binary vector α, such that the label of an example
x_j is requested iff α_j ≠ 0. This vector α is sampled from the distribution
P_θ(α|S^{Train}), output by the acquisition component (whose parameters are
noted θ), given the unsupervised training set S^{Train}. Thus, the number of
elements in the dataset D_α is the number of non-zero elements in α. The loss
for a given problem S can therefore be rewritten as:

\begin{aligned}
\mathcal{L}_{\theta,d}(S) &= \mathbb{E}_{\alpha \sim P_\theta(\alpha|S^{Train})}\Big[\sum_{(x,y) \in S^{Eval}} \Delta(d(x, D_\alpha), y) + \lambda |D_\alpha|\Big] \\
&= \underbrace{\mathbb{E}_{\alpha \sim P_\theta(\alpha|S^{Train})}\Big[\sum_{(x,y) \in S^{Eval}} \Delta(d(x, D_\alpha), y)\Big]}_{\text{error in prediction}} + \underbrace{\mathbb{E}_{\alpha \sim P_\theta(\alpha|S^{Train})}\Big[\lambda \sum_{k=1}^{N} \alpha_k\Big]}_{\text{cost of labelization}}
\end{aligned} \qquad (2)
    The first part corresponds to the prediction quality given the acquired and
labeled examples. Its gradient w.r.t. the parameters of both modules (noted
∇_{θ,d} for the sake of simplicity) can be computed with a policy-gradient
method (the likelihood-ratio trick) as follows, where we consider for clarity
the gradient of the prediction loss for a single example (x, y) in S^{Eval}:

\begin{aligned}
\nabla_{\theta,d}\, \mathbb{E}_{\alpha \sim P_\theta(\alpha|S^{Train})}\big[\Delta(d(x, D_\alpha), y)\big]
&= \int \nabla_{\theta,d}\big(P_\theta(\alpha|S^{Train})\big)\, \Delta(d(x, D_\alpha), y)\, d\alpha + \int P_\theta(\alpha|S^{Train})\, \nabla_{\theta,d}\, \Delta(d(x, D_\alpha), y)\, d\alpha \\
&= \int \frac{P_\theta(\alpha|S^{Train})}{P_\theta(\alpha|S^{Train})}\, \nabla_{\theta,d}\big(P_\theta(\alpha|S^{Train})\big)\, \Delta(d(x, D_\alpha), y)\, d\alpha + \int P_\theta(\alpha|S^{Train})\, \nabla_{\theta,d}\, \Delta(d(x, D_\alpha), y)\, d\alpha \\
&= \int P_\theta(\alpha|S^{Train})\, \nabla_{\theta,d}\big(\log P_\theta(\alpha|S^{Train})\big)\, \Delta(d(x, D_\alpha), y)\, d\alpha + \int P_\theta(\alpha|S^{Train})\, \nabla_{\theta,d}\, \Delta(d(x, D_\alpha), y)\, d\alpha
\end{aligned} \qquad (3)
This can be approximated through Monte-Carlo sampling, which yields, over M
sampled histories α^{(1)}, ..., α^{(M)}:

\nabla_{\theta,d}\, \mathbb{E}_{\alpha \sim P_\theta(\alpha|S^{Train})}\big[\Delta(d(x, D_\alpha), y)\big] \approx \frac{1}{M} \sum_{m=1}^{M} \Big[ \nabla_{\theta,d}\big(\log P_\theta(\alpha^{(m)}|S^{Train})\big)\, \Delta(d(x, D_{\alpha^{(m)}}), y) + \nabla_{\theta,d}\, \Delta(d(x, D_{\alpha^{(m)}}), y) \Big] \qquad (4)
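
In practice, both terms of Eq. 4 can be obtained by backpropagating through a
single surrogate loss. A hedged PyTorch-style sketch, where log_prob(alpha)
stands for log P_θ(α|S^{Train}) and loss_given(alpha) for the differentiable
prediction error ∆ (both hypothetical callables):

import torch

def surrogate_loss(log_prob, loss_given, sample_alpha, M=8):
    # Monte-Carlo estimate of Eq. (4) over M sampled selection vectors
    total = 0.0
    for _ in range(M):
        alpha = sample_alpha()        # alpha ~ P_theta(alpha|S^Train)
        delta = loss_given(alpha)     # prediction error Delta(d(x, D_alpha), y)
        # score-function term (delta detached, acting as a reward) + direct term
        total = total + log_prob(alpha) * delta.detach() + delta
    return total / M                  # calling .backward() yields Eq. (4)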

4.2   Labels acquisition component
This module takes as input the whole unlabeled training dataset of the problem
at hand and outputs, for each of these samples, a probability reflecting the
usefulness of labeling it. We propose to use recurrent neural networks, which
were initially proposed to process sequences of inputs. More specifically, we
propose in this work to use a bi-directional RNN, which ensures that the i-th
output of the network is computed with regard to all input examples and thus,
unlike a classical RNN, provides a "non-myopic" decision for each example,
benefiting from the observation of all examples for each decision. Note that it
could be relevant to use an attentional LSTM here, as presented in [13], since
it provides an order-invariant network, but this has not been tested yet in our
experiments.
The output of the recurrent network is treated as a probability distribution
that is used to sample α, the binary vector that selects the examples to label.
This output can be seen either as (i) a multinomial distribution over the N
examples, where Σ_{i=1}^{N} α_i = 1 for a single draw,³ or (ii) a set of
Bernoulli distributions, where each α_j ∈ {0, 1} is drawn independently with
probability P_θ(α_j = 1|S^{Train}). We present in this paper experiments using a
multinomial distribution sampled k times, where k is the maximum number of
examples to label.
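
As an illustration, here is a minimal PyTorch sketch of such an acquisition
component (the layer sizes and the single-layer scoring head are our
assumptions, not the paper's exact architecture):

import torch
import torch.nn as nn

class Selector(nn.Module):
    # Bi-directional LSTM over the pool: each example's score depends on all
    # other examples, giving the "non-myopic" decision described above.
    def __init__(self, input_dim, hidden_dim=64):
        super().__init__()
        self.rnn = nn.LSTM(input_dim, hidden_dim,
                           bidirectional=True, batch_first=True)
        self.score = nn.Linear(2 * hidden_dim, 1)   # one scalar score per example

    def forward(self, pool):                  # pool: [1, N, input_dim]
        h, _ = self.rnn(pool)                 # h: [1, N, 2 * hidden_dim]
        scores = self.score(h).squeeze(-1)    # [1, N]
        return torch.softmax(scores, dim=-1)  # multinomial over the N examples

selector = Selector(input_dim=16)             # e.g. letter: 16 features
probs = selector(torch.randn(1, 25, 16))      # a pool of N = 25 examples
idx = torch.multinomial(probs.squeeze(0), 4)  # k = 4 draws, without replacement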


4.3     Prediction component

This module takes as input a (new) example and a limited supervised training
dataset, and outputs a prediction (e.g. a category). It could be any prediction
algorithm, parametric or not, requiring learning or not. In our case, the
component should be able to back-propagate error gradients to drive the overall
learning. We propose to use similarity-based prediction, which does not require
learning and thus allows for fast overall meta-learning. We test two similarity
measures: a normalized cosine similarity and a Euclidean-based similarity. The
predicted label for a new input is computed as follows: (i) the similarity with
each supervised example is computed; (ii) this vector of similarities is then
converted into a probability distribution, using a softmax with temperature;
(iii) the predicted label is computed as the sum of the one-hot-vector labels of
the supervised examples, weighted by this distribution. Note that when the
temperature is high enough, this distribution approaches a one-hot vector, which
is similar to a 1-nearest-neighbor technique.
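
A minimal sketch of steps (i)-(iii) with the cosine similarity (the convention
of multiplying similarities by the temperature is our assumption):

import torch
import torch.nn.functional as F

def predict(x, labeled_x, labeled_y_onehot, temperature=10.0):
    # (i) similarity between x and every labeled example in D_alpha
    sims = F.cosine_similarity(x.unsqueeze(0), labeled_x, dim=-1)  # [|D_alpha|]
    # (ii) convert similarities into a distribution (softmax with temperature)
    weights = torch.softmax(temperature * sims, dim=0)
    # (iii) distribution-weighted sum of the one-hot labels; a high temperature
    # makes `weights` nearly one-hot, i.e. a 1-nearest-neighbor prediction
    return weights @ labeled_y_onehot

x = torch.randn(16)
labeled_x = torch.randn(4, 16)                                # D_alpha inputs
labeled_y = F.one_hot(torch.tensor([0, 1, 0, 1]), 2).float()  # D_alpha labels
print(predict(x, labeled_x, labeled_y))                       # class distribution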
     Additionally, we propose to use a representation component, common to
the acquisition and decision components. The key idea is to learn a latent
representation space that disentangles the raw inputs, so as to provide better
predictions as well as to facilitate the acquisition decision. This module,
denoted f, takes as input an example in R^K (the original space of all examples
of B) and outputs its representation in a latent space R^L. It is learned
jointly with the other functions. Integrating this representation function into
the original loss defined in Eq. 2 yields:

\mathcal{L}_{\theta,d,f}(S) = \mathbb{E}_{\alpha \sim P_\theta(\alpha|f(S^{Train}))}\Big[\sum_{(x,y) \in S^{Eval}} \Delta\big(d(f(x), f(D_\alpha)), y\big)\Big] + \mathbb{E}_{\alpha \sim P_\theta(\alpha|f(S^{Train}))}\Big[\lambda \sum_{k=1}^{N} \alpha_k\Big] \qquad (5)

where we write, for the sake of clarity, f(S^{Train}) = {f(x_1), ..., f(x_N)},
and similarly for f(D_α).
 ³ Note that this makes it possible to manually bound the number of labeled
   examples, since one has to decide the number of draws beforehand.

5      Experiments
We first describe our experimental protocol and the baselines used; we then
present the results of our experiments on two datasets, letter and aloi.

Experimental Protocol : To build our "meta-active-learning" datasets, we set P,
the number of categories of each elementary problem, N, the number of examples
in the "unsupervised" dataset, and M, the number of examples to classify. For
simplicity, we chose in our experiments to use the same values of P, M and N
for every elementary problem.
    The generation of the complete dataset, as illustrated in Figure 1, with
training/validation/testing problems, is based on a partition of the full set of
categories between train, validation and test, while keeping a common input
domain. It is done as follows (a minimal sampling sketch is given after the
list):
    – training dataset: we select a subset of the categories as "training
      classes" (e.g. 50% of all classes) and their corresponding examples. We
      then generate a large number of sub-problems: for one problem, (i) we
      randomly select P categories among the "training classes", (ii) we
      randomly select N examples from these P categories (i.e. S_i^{Train}, the
      examples whose labels can be requested), (iii) we randomly select M
      additional examples to evaluate the predictions, i.e. S_i^{Eval}.
    – validation and testing datasets are generated similarly, on distinct
      "validation classes" and "testing classes", unobserved in the complete
      training dataset.
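
A minimal sketch of this generation step, under our own assumptions about the
data layout (X holds the inputs, y the integer class labels, classes the
categories reserved for the split):

import numpy as np

def make_problem(X, y, classes, P=2, N=25, M=40, rng=None):
    rng = rng or np.random.default_rng()
    cats = rng.choice(classes, size=P, replace=False)  # (i) pick P categories
    pool = np.flatnonzero(np.isin(y, cats))            # examples of those classes
    chosen = rng.choice(pool, size=N + M, replace=False)  # assumes a large pool
    train_idx, eval_idx = chosen[:N], chosen[N:]       # (ii) S^Train, (iii) S^Eval
    return (X[train_idx], y[train_idx]), (X[eval_idx], y[eval_idx])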

Baselines : We propose two baselines for this study. These baselines follow the
same global scheme, but with a different acquisition component:
    – Random acquisition: the examples to label are chosen randomly in the
      dataset.
    – K-medoids acquisition: the examples to label are selected following a
      k-medoids clustering technique, where we label the examples that are the
      medoids of the clusters (a sketch is given below).
Note that these acquisition methods do not learn during the overall process;
only the representation component (if one is used) is learned. While simple, we
expect the k-medoids baseline to be a reasonable and efficient baseline in our
static active-learning setting, especially when using a similarity-based
prediction function.
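
As a sketch of the k-medoids baseline (we approximate it here with k-means
followed by medoid extraction, an assumption rather than necessarily the exact
clustering variant used):

import numpy as np
from sklearn.cluster import KMeans

def kmedoids_acquisition(X_pool, budget, seed=0):
    # cluster the unlabeled pool into `budget` groups
    km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(X_pool)
    # label the example closest to each centroid (the "medoid" of each cluster)
    return [int(np.argmin(np.linalg.norm(X_pool - c, axis=1)))
            for c in km.cluster_centers_]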

Dataset letter : This dataset has 26 categories and 16 features. We took 10
categories for training, 7 for validation and 9 for testing. We generated 2000
training problems and 500 problems each for validation and testing. The size of
the pool of labelable examples per problem is 25, and the number of examples to
classify per problem is 40. We study 3 types of problems, binary, 4-class and
6-class, with various budget levels. The results are plotted in Figures 3a, 3b
and 3c.
Fig. 3: Results on the uci dataset letter (top row: panels (a), (b), (c) for 2-,
4- and 6-category classification problems) and on the dataset aloi (bottom row:
panels (d), (e), (f) for 2-, 4- and 6-category classification problems). The
k-medoids acquisition strategy is depicted in blue, the random acquisition
strategy in red, and our model using policy gradient in green. The abscissa is
the number of examples selected for labeling; the ordinate is the average
accuracy obtained over all test problems. For each model, we select the best
results on validation problems for each budget and plot the corresponding
performance on test problems (square points).

We observe mixed results. Our model performs better than the k-medoids
acquisition strategy for a budget of 2 on binary classification problems, but
k-medoids leads to a better accuracy for higher budgets; it is also better for
all budgets except 6 on 4-category problems. For 6-category problems, our model
beats the two baselines for all budgets. This difference in performance can be
explained by the small number of distinct categories in the training dataset:
with 10 categories and binary problems (45 different combinations), our model
observes the same problem a large number of times, which could lead to
over-fitting. This seems to be the case, as it performs better on 6-class
problems (210 different combinations). We therefore now study a dataset with a
larger number of categories.

Dataset aloi : This dataset has 1000 categories, with around one hundred images
per class. It is a more realistic and challenging dataset for the meta-active-
learning setting we are dealing with. We created 4000 training problems on 350
training categories, and 500 validation and testing problems on respectively 300
and 350 categories. The number of examples that can be labeled is 25, and the
number of examples to classify per problem is 40. The results are shown in
Figures 3d, 3e and 3f for the 3 types of problems (2-class, 4-class and
6-class). Our method performs better than k-medoids for all budgets and all
types of problems, except on binary classification with budget 6, where
k-medoids performs slightly better (by 0.5%). On this bigger dataset, our
approach is less prone to over-fitting, and thus manages to generalize its
acquisition strategy well to novel problems on unseen categories.


6    Closing remarks
We presented in this paper a first meta-learning approach to a pool-based,
static active-learning strategy. We proposed a stochastic instantiation based on
bi-directional LSTMs that benefits from the whole unsupervised dataset before
prediction. First results are encouraging and show the ability of our approach
to learn a labeling strategy that performs as well as or better than our
k-medoids baseline.


References
 [1] Bachman, P., Sordoni, A., Trischler, A.: Learning algorithms for active learning.
     ICLR Workshop (2017)
 [2] Cohn, D., Atlas, L., Ladner, R.: Improving generalization with active learning.
     Machine learning 15(2), 201–221 (1994)
 [3] Collet, T., Pietquin, O.: Optimistic active learning for classification. ECML/PKDD
     2014 p. 11 (2014)
 [4] Dagan, I., Engelson, S.P.: Committee-based sampling for training probabilistic
     classifiers. In: Proceedings of the Twelfth International Conference on Machine
     Learning. pp. 150–157. The Morgan Kaufmann series in machine learning, San
     Francisco, CA, USA (1995)

 [5] Gu, Q., Zhang, T., Han, J., Ding, C.H.: Selective labeling via error bound min-
     imization. In: Advances in neural information processing systems. pp. 323–331
     (2012)
 [6] Guillory, A., Bilmes, J.A.: Label selection on graphs. In: Advances in Neural
     Information Processing Systems. pp. 691–699 (2009)
 [7] Guo, Y., Schuurmans, D.: Discriminative batch mode active learning. In: Advances
     in neural information processing systems. pp. 593–600 (2008)
 [8] Melville, P., Mooney, R.J.: Diverse ensembles for active learning. In: Proceedings of
     the twenty-first international conference on Machine learning. p. 74. ACM (2004)
 [9] Roy, N., McCallum, A.: Toward optimal active learning through monte carlo
     estimation of error reduction. ICML, Williamstown pp. 441–448 (2001)
[10] Santoro, A., Bartunov, S., Botvinick, M., Wierstra, D., Lillicrap, T.: Meta-learning
     with memory-augmented neural networks. In: Proceedings of The 33rd International
     Conference on Machine Learning. pp. 1842–1850 (2016)
[11] Settles, B.: Active learning literature survey. University of Wisconsin, Madison
     52(55-66), 11 (2010)
[12] Settles, B., Craven, M., Ray, S.: Multiple-instance active learning. In: Advances in
     neural information processing systems. pp. 1289–1296 (2008)
[13] Vinyals, O., Bengio, S., Kudlur, M.: Order matters: Sequence to sequence for sets.
     arXiv preprint arXiv:1511.06391 (2015)
[14] Vinyals, O., Blundell, C., Lillicrap, T., Wierstra, D., et al.: Matching networks
     for one shot learning. In: Advances in Neural Information Processing Systems. pp.
     3630–3638 (2016)
[15] Woodward, M., Finn, C.: Active one-shot learning (2017)
[16] Yip, K., Sussman, G.J.: Sparse representations for fast, one-shot learning (1997)
[17] Yu, K., Bi, J., Tresp, V.: Active learning via transductive experimental design.
     In: Proceedings of the 23rd international conference on Machine learning. pp.
     1081–1088. ACM (2006)
[18] Zhang, T., Oles, F.: The value of unlabeled data for classification problems.
     In: Proceedings of the Seventeenth International Conference on Machine
     Learning (Langley, P., ed.). pp. 1191–1198. Citeseer (2000)