Interactive Label Cleaning with Example-based
Explanations
Stefano Teso1 , Andrea Bontempelli1 , Fausto Giunchiglia1,2 and Andrea Passerini1
1 University of Trento, Italy
2 Jilin University, China


Abstract
We tackle sequential learning under label noise in applications where a human supervisor can be queried to relabel suspicious examples. Existing approaches are flawed, in that they only relabel incoming examples that look “suspicious” to the model. As a consequence, those mislabeled examples that elude (or don’t undergo) this cleaning step end up tainting the training data and the model with no further chance of being cleaned. We propose cincer, a novel approach that cleans both new and past data by identifying pairs of mutually incompatible examples. Whenever it detects a suspicious example, cincer identifies a counter-example in the training set that—according to the model—is maximally incompatible with the suspicious example, and asks the annotator to relabel either or both examples, resolving this possible inconsistency. The counter-examples are chosen to be maximally incompatible, so as to serve as explanations of the model’s suspicion, and highly influential, so as to convey as much information as possible if relabeled. cincer achieves this by leveraging an efficient and robust approximation of influence functions based on the Fisher information matrix (FIM). Our extensive empirical evaluation shows that clarifying the reasons behind the model’s suspicions by cleaning the counter-examples helps in acquiring substantially better data and models, especially when paired with our FIM approximation.

Keywords
Interactive Learning, Explainable AI, Label Noise, Influence Functions, Fisher Information




1. Introduction
Label noise is a major issue in machine learning as it can lead to compromised predictive
performance and unreliable models [1, 2]. We focus on sequential learning settings in which
a human supervisor, usually a domain expert, can be asked to double-check and relabel any
potentially mislabeled example. Applications include crowd-sourced machine learning and
citizen science, where trained researchers can be asked to clean the labels provided by crowd-
workers [3, 4], and interactive personal assistants [5], where the user self-reports the initial
annotations (e.g., about activities being performed) and unreliability is due to memory bias [6],
unwillingness to report [7], or conditioning [8].
   This problem is often tackled by monitoring for incoming examples that are likely to be mislabeled, a.k.a. suspicious examples, and asking the supervisor to provide clean (or at least better)
annotations for them. Existing approaches, however, focus solely on cleaning the incoming

XAI.it 2021 - Italian Workshop on Explainable Artificial Intelligence
" stefano.teso@unitn.it (S. Teso); andrea.bontempelli@unitn.it (A. Bontempelli); fausto.giunchiglia@unitn.it
(F. Giunchiglia); andrea.passerini@unitn.it (A. Passerini)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073
Figure 1: Suspicious example and counter-examples selected using (from left to right) cincer, 1-NN
and influence functions (IF), on noisy MNIST. Left: the suspicious example is mislabeled, the machine’s
suspicion is supported by a clean counter-example. Right: the suspicious example is not mislabeled, the
machine is wrongly suspicious because the counter-example is mislabeled. cincer’s counter-example
is contrastive and influential; 1-NN’s is not influential and IF’s is not pertinent, see desiderata D1–D3
below.


examples [9, 4, 10, 5]. This means that noisy examples that did not undergo the cleaning step
(e.g., those in the initial bootstrap data set) or that managed to elude it are left untouched. This
degrades the quality of the model and prevents it from spotting future mislabeled examples that
fall in regions affected by noise.
   We introduce cincer (Contrastive and InflueNt CounterExample stRategy), a new explainable
interactive label cleaning algorithm that lets the annotator observe and fix the reasons behind
the model’s suspicions. For every suspicious example that it finds, cincer identifies a counter-
example, i.e., a training example that maximally supports the machine’s suspicion. The idea
is that the example/counter-example pair captures a potential inconsistency in the data—as
viewed from the model’s perspective—which is resolved by invoking the supervisor. More
specifically, cincer asks the user to relabel the example, the counter-example, or both, thus
improving the quality of, and promoting consistency between, the data and the model. Two
hypothetical rounds of interaction on a noisy version of MNIST are illustrated in Figure 1.
   cincer relies on a principled definition of counter-examples derived from few explicit, in-
tuitive desiderata, using influence functions [11, 12]. The resulting counter-example selection
problem is solved using a simple and efficient approximation based on the Fisher information
matrix [13] that consistently outperforms more complex alternatives in our experiments.

Contributions: We: 1) Introduce cincer, an explanatory interactive label cleaning strategy
that leverages example-based explanations to identify inconsistencies in the data—as perceived
by the model—and enable the annotator to fix them. 2) Show how to select counter-examples
that explain why the model is suspicious and that are highly informative using (an efficient ap-
proximation of) influence functions. 3) Present an extensive empirical evaluation that showcases
the ability of cincer to build cleaner data sets and better models.


2. Background
We are concerned with sequential learning under label noise. In this setting, the machine
receives a sequence of examples 𝑧˜𝑡 := (x𝑡 , 𝑦˜𝑡 ), for 𝑡 = 1, 2, . . ., where x𝑡 ∈ R𝑑 is an instance
and 𝑦˜𝑡 ∈ [𝑐] is a corresponding label, with [𝑐] := {1, . . . , 𝑐}. The label 𝑦˜𝑡 is unreliable and
might differ from the ground-truth label 𝑦𝑡* . The key feature of our setting is that a human
supervisor can be asked to double-check and relabel any example. The goal is to acquire a clean
dataset and a high-quality predictor while asking few relabeling queries, so as to keep the cost of
interaction under control.
   The state-of-the-art for this setting is skeptical learning (SKL) [10, 5]. SKL is designed
primarily for sequential interactive learning from unreliable users: in each iteration the machine receives an example and updates the model accordingly. Before doing so, however, the machine compares (an estimate of) the quality of the annotation with that of its own prediction, and if the prediction looks more reliable than the annotation by some factor, SKL asks the user to double-check their example. The details depend on the implementation:
in [10] label quality is estimated using the empirical accuracy for the classifier and the empirical
probability of contradiction for the annotator, while in [5] the machine measures the margin
between the user’s and machine’s labels. Our approach follows the latter strategy.
   Another closely related approach is learning from weak annotators (LWA) [9, 4], which focuses
on querying domain experts rather than end-users. The most recent approach [4] jointly learns a
prediction pipeline composed of a classifier and a noisy channel, which allows it to estimate the
noise rate directly, and identifies suspicious examples that have a large impact on the model. A
theoretical foundation is given in [9]. LWA is however designed for pool-based scenarios, where
the training set is given rather than obtained sequentially. For this reason, in the remainder of
the paper we will chiefly build on and compare to SKL.

Limitations of existing approaches. A major downside of SKL is that it focuses on cleaning
the incoming examples only. This means that if a mislabeled example manages to elude the
cleaning step and gets added to the training set, it is bound to stay there forever. This situation
is actually quite common during the first stage of skeptical learning, in which the model is
highly uncertain and trusts the incoming examples—even if they are mislabeled. The same issue
occurs if the initial training set used to bootstrap the classifier contains mislabeled examples.
As shown by our experiments, the accumulation of noisy data in the training set may have a
detrimental effect on the model’s performance (cf. Figure 2). In addition, it can also affect the
model’s ability to identify suspicious examples: a noisy data point can fool the classifier into
trusting incoming mislabeled examples that fall close to it, further aggravating the situation.


3. Explainable Interactive Label Cleaning with CINCER
We consider a very general class of probabilistic classifiers 𝑓 : R𝑑 → [𝑐] of the form 𝑓 (x; 𝜃) := argmax𝑦∈[𝑐] 𝑃 (𝑦 | x; 𝜃), where the conditional distribution 𝑃 (𝑌 | X; 𝜃) has been fit on training data by minimizing the cross-entropy loss ℓ((x, 𝑦), 𝜃) = − ∑𝑖∈[𝑐] 1{𝑖 = 𝑦} log 𝑃 (𝑖 | x, 𝜃). In
our implementation, we also assume 𝑃 to be a neural network with a softmax activation at the
top layer, trained using some variant of SGD and possibly early stopping.

3.1. The cincer Algorithm
The pseudo-code of cincer is listed in Algorithm 1. At the beginning of iteration 𝑡, the machine
has acquired a training set 𝐷𝑡−1 = {𝑧1 , . . . , 𝑧𝑡−1 } and trained a model with parameters 𝜃𝑡−1
Algorithm 1 Pseudo-code of cincer. Inputs: initial (noisy) training set 𝐷0 ; threshold 𝜏 .
 1: fit 𝜃0 on 𝐷0
 2: for 𝑡 = 1, 2, . . . do
 3:      receive new example 𝑧˜𝑡 = (x𝑡 , 𝑦˜𝑡 )
 4:      if 𝜇(𝑧˜𝑡 , 𝜃𝑡−1 ) < 𝜏 then
 5:          𝐷𝑡 ← 𝐷𝑡−1 ∪ {𝑧˜𝑡 }                                                    ◁ 𝑧˜𝑡 is compatible
 6:      else
 7:          find counterexample 𝑧𝑘 using Eq. 10                                    ◁ 𝑧˜𝑡 is suspicious
 8:          present 𝑧˜𝑡 , 𝑧𝑘 to the user, receive possibly cleaned labels 𝑦𝑡′ , 𝑦𝑘′
 9:          𝐷𝑡 ← (𝐷𝑡−1 ∖ {𝑧𝑘 }) ∪ {(x𝑡 , 𝑦𝑡′ ), (x𝑘 , 𝑦𝑘′ )}
10:     fit 𝜃𝑡 on 𝐷𝑡


on it. At this point, the machine receives a new, possibly mislabeled example 𝑧˜𝑡 (line 3) and has
to decide whether to trust it.
   Following skeptical learning [5], cincer does so by computing the margin 𝜇(𝑧˜𝑡 , 𝜃𝑡−1 ), i.e., the
difference in conditional probability between the model’s prediction 𝑦^𝑡 := argmax𝑦 𝑃 (𝑦 | x𝑡 , 𝜃𝑡−1 )
and the annotation 𝑦˜𝑡 . More formally:
                         𝜇(𝑧˜𝑡 , 𝜃𝑡−1 ) := 𝑃 (𝑦^𝑡 | x𝑡 , 𝜃𝑡−1 ) − 𝑃 (𝑦˜𝑡 | x𝑡 , 𝜃𝑡−1 )              (1)
The margin estimates the incompatibility between the model and the example: the larger the
margin, the more suspicious the example. The example 𝑧˜𝑡 is deemed compatible if the margin is
below a given threshold 𝜏 and suspicious otherwise (line 4); possible choices for 𝜏 are discussed
in Section 3.5.
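
As a concrete illustration, here is a minimal NumPy sketch of the margin of Eq. 1 and of the threshold check in Algorithm 1 (line 4); the softmax output vector and the threshold value used below are illustrative assumptions, not values taken from the paper.

    import numpy as np

    def margin(probs: np.ndarray, y_tilde: int) -> float:
        # Eq. 1: P(y_hat | x; theta) - P(y_tilde | x; theta)
        y_hat = int(np.argmax(probs))
        return float(probs[y_hat] - probs[y_tilde])

    # Toy softmax output over c = 3 classes for an incoming example
    probs = np.array([0.70, 0.25, 0.05])
    tau, y_tilde = 0.2, 1                        # threshold and annotator's label
    suspicious = margin(probs, y_tilde) >= tau   # 0.45 >= 0.2 -> query the user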
   If 𝑧˜𝑡 is compatible, it is added to the data set as-is (line 5). Otherwise, cincer computes a
counter-example 𝑧𝑘 ∈ 𝐷𝑡−1 that maximally supports the machine’s suspicion. The intuition is
that the pair (𝑧˜𝑡 , 𝑧𝑘 ) captures a potential inconsistency in the data. For instance, the counter-
example might be a correctly labeled example that is close or similar to 𝑧˜𝑡 but has a different
label, or a distant noisy outlier that fools the predictor into assigning low probability to 𝑦˜𝑡 . How
to choose an effective counter-example is a major focus of this paper and discussed in detail in
Section 3.2 and following.
   Next, cincer asks the annotator to double-check the pair (𝑧˜𝑡 , 𝑧𝑘 ) and relabel the suspicious
example, the counter-example, or both, thus resolving the potential inconsistency. The data set
and model are updated accordingly (line 9).
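
To make the control flow of Algorithm 1 tangible, the following toy simulation runs the same skeleton on synthetic data. Everything in it is an assumption for illustration: a scikit-learn logistic regression stands in for the model, the annotator is simulated by the ground-truth labels, and a crude placeholder replaces the influence-based counter-example selection developed in Sections 3.2 and 3.3.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def fit(X, y):
        return LogisticRegression(max_iter=1000).fit(X, y)

    def margin(model, x, y_tilde):
        p = model.predict_proba(x.reshape(1, -1))[0]
        return p.max() - p[y_tilde]

    def pick_counterexample(model, X, y, y_hat):
        # Placeholder: among training examples labelled y_hat, pick the one the
        # model is least sure about; Sections 3.2-3.3 replace this with Eq. 10/12.
        idx = np.flatnonzero(y == y_hat)
        return idx[np.argmin(model.predict_proba(X[idx])[:, y_hat])]

    # Toy binary stream with 20% label noise; the annotator is simulated by y_true.
    X = rng.normal(size=(200, 5))
    y_true = (X @ rng.normal(size=5) > 0).astype(int)
    y_noisy = np.where(rng.random(200) < 0.2, 1 - y_true, y_true)

    D_X, D_y = X[:50].copy(), y_noisy[:50].copy()      # noisy bootstrap set D_0
    model, tau = fit(D_X, D_y), 0.2
    for t in range(50, 200):                           # incoming examples z~_t
        x_t, y_t = X[t], y_noisy[t]
        if margin(model, x_t, y_t) >= tau:             # suspicious: query the user
            y_hat = int(model.predict(x_t.reshape(1, -1))[0])
            k = pick_counterexample(model, D_X, D_y, y_hat)
            y_t, D_y[k] = y_true[t], y_true[k]         # user relabels either/both
        D_X, D_y = np.vstack([D_X, x_t]), np.append(D_y, y_t)
        model = fit(D_X, D_y)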

3.2. Counter-example Selection
Counter-examples are meant to illustrate why an example 𝑧˜𝑡 is deemed suspicious by the
machine in a way that makes it easy to elicit useful corrective feedback from the supervisor.
We posit that a good counter-example 𝑧𝑘 should be:
 D1. Contrastive: 𝑧𝑘 should explain why 𝑧˜𝑡 is considered suspicious by the model, thus high-
     lighting a potential inconsistency in the data.
 D2. Influential: if 𝑧𝑘 is mislabeled, correcting it should improve the model as much as possible, so as to maximize the information gained by interacting with the annotator.
In the following, we show how, for models learned by minimizing the cross-entropy loss, one
can identify counter-examples that satisfy both desiderata.

What is a contrastive counter-example? We start by tackling the first desideratum. Let
𝜃𝑡−1 be the parameters of the current model. Intuitively, 𝑧𝑘 ∈ 𝐷𝑡−1 is a contrastive counter-example for a suspicious example 𝑧˜𝑡 if removing it from the data set and retraining leads to a model with parameters 𝜃⁻ᵏ𝑡−1 that assigns higher probability to the suspicious label 𝑦˜𝑡 . The most contrastive counter-example is then the one that maximally affects the change in probability:

                     argmax𝑘∈[𝑡−1]  𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃⁻ᵏ𝑡−1 ) − 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )                 (2)

While intuitively appealing, optimizing Eq. 2 directly is computationally challenging as it
involves retraining the model |𝐷𝑡−1 | times. This is impractical for realistically sized models and
data sets, especially in our interactive scenario where a counter-example must be computed in
each iteration.
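
For concreteness, a naive implementation of Eq. 2 would look as follows: it retrains the model once per candidate, which is exactly the cost the influence-function machinery below avoids. The scikit-learn classifier is an illustrative assumption.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def naive_most_contrastive(D_X, D_y, x_t, y_tilde):
        # Eq. 2 by brute force: one leave-one-out retraining per candidate z_k.
        base = LogisticRegression(max_iter=1000).fit(D_X, D_y)
        p_base = base.predict_proba(x_t.reshape(1, -1))[0][y_tilde]
        gains = []
        for k in range(len(D_y)):
            keep = np.arange(len(D_y)) != k                    # drop z_k
            loo = LogisticRegression(max_iter=1000).fit(D_X[keep], D_y[keep])
            gains.append(loo.predict_proba(x_t.reshape(1, -1))[0][y_tilde] - p_base)
        return int(np.argmax(gains)), max(gains)               # O(|D|) model fits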

Influence functions. We address this issue by leveraging influence functions (IFs), a compu-
tational device that can be used to estimate the impact of specific training examples without
retraining [11, 12]. Let 𝜃𝑡 be the empirical risk minimizer on 𝐷𝑡 and 𝜃𝑡 (𝑧, 𝜖) be the minimizer
obtained after adding an example 𝑧 with weight 𝜖 to 𝐷𝑡 , namely:

    𝜃𝑡 := argmin𝜃 (1/𝑡) ∑𝑘∈[𝑡] ℓ(𝑧𝑘 , 𝜃),        𝜃𝑡 (𝑧, 𝜖) := argmin𝜃 (1/𝑡) (∑𝑘∈[𝑡] ℓ(𝑧𝑘 , 𝜃) + 𝜖 ℓ(𝑧, 𝜃))        (3)

Taking a first-order Taylor expansion, the difference between 𝜃𝑡 = 𝜃𝑡 (𝑧, 0) and 𝜃𝑡 (𝑧, 𝜖) can be written as 𝜃𝑡 (𝑧, 𝜖) − 𝜃𝑡 (𝑧, 0) ≈ 𝜖 · (d/d𝜖) 𝜃𝑡 (𝑧, 𝜖)|𝜖=0 . The derivative appearing on the right hand side is the so-called influence function, denoted ℐ𝜃𝑡 (𝑧). It follows that the effect on 𝜃𝑡 of adding an example 𝑧 to (resp. removing it from) 𝐷𝑡 can be approximated by multiplying the IF by 𝜖 = 1/𝑡 (resp. 𝜖 = −1/𝑡). Crucially, if the loss is strongly convex and twice differentiable, the IF can be written as ℐ𝜃𝑡 (𝑧) = −𝐻(𝜃𝑡 )⁻¹ ∇𝜃 ℓ(𝑧, 𝜃𝑡 ), where the curvature matrix 𝐻(𝜃𝑡 ) := (1/𝑡) ∑𝑘∈[𝑡] ∇²𝜃 ℓ(𝑧𝑘 , 𝜃𝑡 ) is
positive definite and invertible. IFs were shown to capture meaningful information even for
non-convex models [12].
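
As a worked example, for binary logistic regression the loss gradient and Hessian are available in closed form, so the influence function can be computed directly; the small damping term added for numerical stability is our own assumption, not part of the definition above.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def influence(X, theta, z_x, z_y, damping=1e-3):
        # I_theta(z) = -H(theta)^{-1} grad_theta l(z, theta) for binary logistic
        # regression: grad l = (sigma(theta^T x) - y) x,  H = (1/t) sum p(1-p) x x^T.
        p = sigmoid(X @ theta)
        H = (X * (p * (1 - p))[:, None]).T @ X / len(X) + damping * np.eye(len(theta))
        grad_z = (sigmoid(z_x @ theta) - z_y) * z_x
        return -np.linalg.solve(H, grad_z)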

Identifying contrastive counter-examples with influence functions. To see the link
between contrastive counter-examples and influence functions, notice that the second term of
Eq. 2 is independent of 𝑧𝑘 , while the first term can be conveniently approximated with IFs by
applying the chain rule:
    −(1/(𝑡−1)) (d/d𝜖) 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 (𝑧𝑘 , 𝜖))|𝜖=0 = −(1/(𝑡−1)) ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )⊤ (d/d𝜖) 𝜃𝑡−1 (𝑧𝑘 , 𝜖)|𝜖=0        (4)
                                                       = −(1/(𝑡−1)) ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )⊤ ℐ𝜃𝑡−1 (𝑧𝑘 )        (5)
The constant can be dropped during the optimization. This shows that Eq. 2 is equivalent
to: argmax𝑘∈[𝑡−1] ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )⊤ 𝐻(𝜃𝑡−1 )−1 ∇𝜃 ℓ(𝑧𝑘 , 𝜃𝑡−1 ). This problem can be solved
efficiently by combining two strategies [12]: i) Caching the inverse Hessian-vector product
(HVP) ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )⊤ 𝐻(𝜃𝑡−1 )−1 , so that evaluating the objective on each 𝑧𝑘 becomes a
simple dot product, and ii) Solving the inverse HVP with an efficient stochastic estimator like
LISSA [14]. This gives us an algorithm for computing contrastive counter-examples.

Contrastive counter-examples are highly influential. Can this algorithm be used for
identifying influential counter-examples? It turns out that, as long as the model is obtained
by optimizing the cross-entropy loss, the answer is affirmative. Indeed, note that if ℓ(𝑧, 𝜃) =
− log 𝑃 (𝑦 | x; 𝜃), then:
    ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) = 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) · ∇𝜃 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) / 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 )        (6)
                            = 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) ∇𝜃 log 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) = −𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) ∇𝜃 ℓ(𝑧˜𝑡 , 𝜃𝑡−1 )        (7)

Hence, Eq. 5 can be rewritten as:

    −𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) ∇𝜃 ℓ(𝑧˜𝑡 , 𝜃𝑡−1 )⊤ 𝐻(𝜃𝑡−1 )⁻¹ ∇𝜃 ℓ(𝑧𝑘 , 𝜃𝑡−1 )        (8)
    ∝ −∇𝜃 ℓ(𝑧˜𝑡 , 𝜃𝑡−1 )⊤ 𝐻(𝜃𝑡−1 )⁻¹ ∇𝜃 ℓ(𝑧𝑘 , 𝜃𝑡−1 )        (9)

It follows that, under the above assumptions and as long as the model satisfies 𝑃 (𝑦˜𝑡 | x𝑡 ; 𝜃𝑡−1 ) >
0, Eq. 2 is equivalent to:

                      argmax𝑘∈[𝑡−1] −∇𝜃 ℓ(𝑧˜𝑡 , 𝜃𝑡−1 )⊤ 𝐻(𝜃𝑡−1 )−1 ∇𝜃 ℓ(𝑧𝑘 , 𝜃𝑡−1 )                        (10)

This equation recovers exactly the definition of influential examples given in Eq. 2 of [12]
and shows that, for the large family of classifiers trained by cross-entropy, highly influential
counter-examples are highly contrastive and vice versa, so that no change to the selection
algorithm is necessary.
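
A possible sketch of the resulting selection step, assuming the per-example loss gradients and the curvature matrix are available as dense arrays; in the actual implementation the inverse HVP would be obtained with a stochastic estimator such as LISSA rather than a dense solve.

    import numpy as np

    def select_counterexample(grad_suspicious, grads_train, H, candidates):
        # Eq. 10: argmax_k  -grad l(z~_t)^T H^{-1} grad l(z_k).  The inverse
        # Hessian-vector product is computed once and cached, so scoring each
        # candidate z_k reduces to a single dot product.
        v = np.linalg.solve(H, grad_suspicious)
        scores = -(grads_train[candidates] @ v)
        return candidates[int(np.argmax(scores))], scores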

3.3. Counter-example Selection with the Fisher information matrix
Unfortunately, we found the computation of IFs to be unreliable in practice, cf. [15]. This leads to unstable candidate rankings, which is reflected in the quality of the counter-examples, as in
Figure 1. The issue is that, for the common use case of non-convex classifiers trained using
gradient-based methods (and possibly early stopping), 𝜃𝑡−1 is seldom close to a local minimum
of the empirical risk, rendering the Hessian non-positive definite. In our setting, the situation is
further complicated by the presence of noise, which dramatically skews the curvature of the
empirical risk. Remedies like fine-tuning the model with L-BFGS [12, 16], preconditioning and
weight decay [15] proved unsatisfactory in our experiments.
   We take a different approach. The idea is to replace the Hessian by the Fisher information
matrix (FIM). The FIM 𝐹 (𝜃) of a discriminative model 𝑃 (𝑌 | X, 𝜃) and training set 𝐷𝑡−1 is [17,
18]:
    𝐹 (𝜃) := (1/(𝑡−1)) ∑𝑘∈[𝑡−1] E𝑦∼𝑃 (𝑌 |x𝑘 ,𝜃) [∇𝜃 log 𝑃 (𝑦 | x𝑘 , 𝜃) ∇𝜃 log 𝑃 (𝑦 | x𝑘 , 𝜃)⊤ ]        (11)
It can be shown that, if the model approximates the data distribution, the FIM approximates the
Hessian, cf. [19, 20]. Even when this assumption does not hold, as is likely in our noisy setting,
the FIM still captures much of the curvature information encoded into the Hessian [17]. Under
this approximation, Eq. 10 can be rewritten as:

                   argmax𝑘∈[𝑡−1] −∇𝜃 ℓ(𝑧˜𝑡 , 𝜃𝑡−1 )⊤ 𝐹 (𝜃𝑡−1 )−1 ∇𝜃 ℓ(𝑧𝑘 , 𝜃𝑡−1 )             (12)

The advantage of this formulation is twofold. First of all, this optimization problem also admits
caching the inverse FIM-vector product (FVP), which makes it viable for interactive usage.
Second, and most importantly, the FIM is positive semi-definite by construction, making the
computation of Eq. 12 much more stable.
   The remaining step is how to compute the inverse FVP. Naïve storage and inversion of the
FIM, which is |𝜃| × |𝜃| in size, is impractical for typical models, so the FIM is usually replaced
with a simpler matrix. Three common options are the identity matrix, the diagonal of the FIM,
and a block-diagonal approximation where interactions between parameters of different layers
are set to zero [17]. Our best results were obtained by restricting the FIM to the top layer of the
network. We refer to this approximation as “Top Fisher”. While more advanced approximations
like K-FAC [17] exist, the Top Fisher proved surprisingly effective in our experiments.
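
A sketch of the Top Fisher variant restricted to the weights of the softmax top layer (biases omitted). The closed-form expectation over 𝑦 ∼ 𝑃 (𝑌 | x, 𝜃), the damping term and the gradient vectorization convention below are our own assumptions for illustration.

    import numpy as np

    def top_layer_fim(Phi, P, damping=1e-3):
        # Eq. 11 restricted to the top-layer weights W (d x c): for a softmax layer
        # the expectation over y ~ P(Y | x_k, theta) has the closed form
        # F = (1/n) sum_k (diag(p_k) - p_k p_k^T) kron (phi_k phi_k^T).
        # Phi: (n, d) penultimate-layer features; P: (n, c) softmax outputs.
        n, d = Phi.shape
        c = P.shape[1]
        F = np.zeros((d * c, d * c))
        for phi, p in zip(Phi, P):
            F += np.kron(np.diag(p) - np.outer(p, p), np.outer(phi, phi))
        return F / n + damping * np.eye(d * c)

    def top_layer_grad(phi, p, y):
        # Cross-entropy gradient w.r.t. W, vectorised to match top_layer_fim:
        # vec(grad_W l) = (p - e_y) kron phi.
        e_y = np.eye(len(p))[y]
        return np.kron(p - e_y, phi)

    def fisher_scores(F, grad_suspicious, grads_train):
        # Eq. 12: -grad l(z~_t)^T F^{-1} grad l(z_k); the inverse FVP is cached once.
        v = np.linalg.solve(F, grad_suspicious)
        return -(grads_train @ v)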

3.4. Selecting Pertinent Counter-examples
So far we have discussed how to select contrastive and influential counter-examples. Now we
discuss how to make the counter-examples easier to interpret for the annotator. To this end, we
introduce the additional desideratum that counter-examples should be:

  D3. Pertinent: it should be clear to the user why 𝑧𝑘 is a counter-example for 𝑧˜𝑡 .

We integrate D3 into cincer by restricting the choice of possible counter-examples. A simple
strategy, which we do employ in all of our examples and experiments, is to restrict the search
to counter-examples whose label in the training set is the same as the prediction for the
suspicious example, i.e., 𝑦𝑘 = 𝑦^𝑡 . This way, the annotator can interpret the counter-example as
being in support of the machine’s suspicion. In other words, if the counter-example is labeled
correctly, then the machine’s suspicion is likely right and the incoming example needs cleaning.
Otherwise, if the machine is wrong and the suspicious example is not mislabeled, it is likely the
counter-example – which backs the machine’s suspicions – that needs cleaning.
   Finally, one drawback of IF-selected counter-examples is that they may be perceptually
different from the suspicious example. For instance, outliers are often highly influential as they
fool the machine into mispredicting many examples, yet they have little in common with those
examples [20]. This can make it difficult for the user to understand their relationship with
the suspicious examples they are meant to explain. This is not necessarily an issue: first, a
motivated supervisor is likely to correct mislabeled counter-examples regardless of whether they
resemble the suspicious example; second, highly influential outliers are often identified (and
corrected if needed) in the first iterations of cincer (indeed, we did not observe a significant
amount of repetitions among suggested counter-examples in our experiments). Still, cincer
can be readily adapted to acquire more perceptually similar counter-examples. One option is
to replace IFs with relative IFs [20], which trade-off influence with locality. Alas, the resulting
optimization problem does not support efficient caching of the inverse HVP. A better alternative
is to restrict the search to counter-examples 𝑧𝑘 that are similar enough to 𝑧˜𝑡 in terms of some
given perceptual distance ‖·‖𝒫 [21] by filtering the candidates using fast nearest neighbor
techniques in perceptual space. This is analogous to FastIF [22], except that the motivation is to encourage perceptual similarity rather than purely to improve efficiency, although the latter is a nice bonus.
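
A small sketch of the two pertinence filters just described; the embedding matrix standing in for the perceptual space and the neighborhood size are illustrative assumptions.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def pertinent_candidates(y_train, y_hat, Z_train=None, z_t=None, n_neighbors=50):
        # D3 filter (always used): only training examples labelled like the
        # model's prediction y_hat can serve as counter-examples.
        idx = np.flatnonzero(y_train == y_hat)
        # Optional FastIF-style filter: keep only neighbours of the suspicious
        # example in some embedding/perceptual space Z.
        if Z_train is not None and z_t is not None and len(idx) > n_neighbors:
            nn = NearestNeighbors(n_neighbors=n_neighbors).fit(Z_train[idx])
            _, nbrs = nn.kneighbors(z_t.reshape(1, -1))
            idx = idx[nbrs[0]]
        return idx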

3.5. Advantages and Limitations
The main benefit of cincer is that, by asking a human annotator to correct potential inconsis-
tencies in the data, it acquires substantially better supervision and, in turn, better predictors.
In doing so, cincer also encourages consistency between the data and the model. Another
benefit is that cincer allows the supervisor to spot bugs and justifiably build – or, perhaps more importantly, reject [23, 24] – trust in the prediction pipeline. cincer requires setting only a single parameter, the margin threshold 𝜏 , which determines how frequently the supervisor is invoked. The optimal value depends on the ratio between the cost of a relabeling query and the cost of noise. If the annotator is willing to interact (for instance, because they are paid to do so) then the threshold can be quite generous. cincer can be readily applied in applications in
which the data set is retained over time, even partially. If retaining data is not an option, cincer
could be adapted to synthesize counter-examples ex novo. This is however highly non-trivial
and left to future work.


4. Experiments
We address empirically the following research questions: Q1: Do counter-examples contribute to
cleaning the data? Q2: Which influence-based selection strategy identifies the most mislabeled
counter-examples? Q3: What contributes to the effectiveness of the best counter-example
selection strategy?
   We implemented cincer using Python and Tensorflow [25] on top of three classifiers and
compared different counter-example selection strategies on five data sets. The IF code is adapted
from [26]. All experiments were run on a 12-core machine with 16 GiB of RAM and no GPU.
The code for all experiments is available at: https://github.com/abonte/cincer.

Data sets. We used a diverse set of classification data sets: Adult [27]: data set of 48,800 persons, each described by 15 attributes; the goal is to predict whether a person’s income is above or below $50K. Breast [27]: data set of 569 patients described by 30 real-valued features.
The goal is to discriminate between benign and malignant breast cancer cases. 20NG [27]: data
set of newsgroup posts categorized in twenty topics. The documents were embedded using a
pre-trained Sentence-BERT model [28] and compressed to 100 features using PCA. MNIST [29]:
handwritten digit recognition data set from black-and-white, 28 × 28 images with pixel values
normalized in the [0, 1] range. The data set consists of 60K training and 10K test examples.
Fashion [30]: fashion article classification dataset with the same structure as MNIST. For adult
and breast, a random 80:20 training-test split is used, while for MNIST, fashion and 20NG the split provided with the data set is used. The labels of 20% of training examples were corrupted
Figure 2: cincer using Top Fisher vs. Drop CE and No CE. Left to right: results for FC on adult, breast and 20NG, and CNN on MNIST. Top row: # of cleaned examples. Bottom row: 𝐹1 score.


at random. Performance was measured in terms of 𝐹1 score on the (uncorrupted) test set. Error
bars in the plots indicate the standard error. The experiments were repeated five times, each
time changing the seed used for corrupting the data. All competitors received exactly the same
examples in exactly the same order.

Models. We applied cincer to three models: LR, a logistic regression classifier; FC, a feed-
forward neural network with two fully connected hidden layers with ReLU activations; and
CNN, a feed-forward neural network with two convolutional layers and two fully connected
layers. For all models, the hidden layers have ReLU activations and 20% dropout while the
top layer has a softmax activation. LR was applied to MNIST, FC to both the tabular data sets
(namely: adult, breast, german, and 20NG) and image data sets (MNIST and fashion), and CNN
to the image data sets only. Upon receiving a new example, the classifier is retrained from
scratch for 100 epochs using Adam [31] with default parameters, with early stopping when
the accuracy on the training set reaches 90% for FC and CNN, and 70% for LR. This helps
substantially to stabilize the quality of the model and speeds up the evaluation. Before each run,
the models are trained on a bootstrap training set (containing 20% mislabeled examples) of 500
examples for 20NG and 100 for all the other data sets. The margin threshold is set to 𝜏 = 0.2.
Due to space constraints, we report the results on one image data set and three tabular data sets,
and we focus on FC and CNN. The other results are consistent with what is reported below;
these plots are reported in the Supplementary Material.
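
For reference, a Keras sketch of the FC architecture and the training-accuracy stopping rule described above; the hidden-layer width is an assumption (it is not specified here), and the CNN variant would prepend the two convolutional layers analogously.

    import tensorflow as tf

    def build_fc(input_dim, n_classes, hidden=128):
        # Two fully connected hidden layers with ReLU and 20% dropout, softmax top
        # layer (the hidden width is a guess, not a value from the paper).
        return tf.keras.Sequential([
            tf.keras.layers.Dense(hidden, activation="relu", input_shape=(input_dim,)),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(hidden, activation="relu"),
            tf.keras.layers.Dropout(0.2),
            tf.keras.layers.Dense(n_classes, activation="softmax"),
        ])

    class StopAtTrainAccuracy(tf.keras.callbacks.Callback):
        # Early stopping once training accuracy reaches a target (0.9 for FC/CNN, 0.7 for LR).
        def __init__(self, target=0.9):
            super().__init__()
            self.target = target

        def on_epoch_end(self, epoch, logs=None):
            if logs and logs.get("accuracy", 0.0) >= self.target:
                self.model.stop_training = True

    model = build_fc(input_dim=100, n_classes=20)        # e.g. 20NG after PCA
    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # model.fit(X_train, y_train, epochs=100, callbacks=[StopAtTrainAccuracy(0.9)])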

4.1. Q1: Counter-examples improve the quality of the data
To evaluate the impact of cleaning the counter-examples, we compare cincer combined with
the Top Fisher approximation of the FIM, which works best in practice, against two alternatives,
namely: No CE: an implementation of skeptical learning [5] that asks the user to relabel any
incoming suspicious examples identified by the margin and presents no counter-examples.
Drop CE: a variation of cincer that identifies counter-examples using Top Fisher but drops
them from the data set if the user considers the incoming example correctly labeled. The results
are reported in Figure 2. The plots show that cincer cleans by far the most examples on all data
sets, between 33% and 80% more than the alternatives (top row in Figure 2). This translates into
Figure 3: Counter-example Pr@5 and Pr@10 for IF, Practical Fisher, Top Fisher and Full Fisher; standard error information is reported. Left to right: results for FC on adult, breast and 20NG, and CNN on MNIST.


better predictive performance as measured by 𝐹1 score (bottom row). Notice also that cincer
consistently outperforms the drop CE strategy in terms of 𝐹1 score, suggesting that relabeling
the counter-examples provides important information for improving the model. These results
validate our choice of identifying and relabeling counter-examples for interactive label cleaning
compared to focusing on suspicious incoming examples only, and allow us to answer Q1 in the
affirmative.

4.2. Q2: FIM-based strategies identify the most mislabeled CEs
Next, we compare the ability of alternative approximations of IFs at discovering mislabeled
counter-examples. To this end, we trained a model on a noisy bootstrap data set, selected 100
examples from the remainder of the training set, and measured how many truly mislabeled
counter-examples are selected by alternative strategies. In particular, we computed influence
using the IF LISSA estimator of [12], the actual FIM (denoted “full Fisher” and reported for the
simpler models only for computational reasons) and its approximations using the identity matrix
(aka “practical Fisher” [32]), and Top Fisher. We computed the precision at 𝑘 for 𝑘 ∈ {5, 10}, i.e., the fraction of mislabeled counter-examples among the five or ten highest-scoring counter-examples
retrieved by the various alternatives, averaged over 100 iterations for five runs. The results in
Figure 3 show that, in general, FIM-based strategies outperform the LISSA estimator, with Full
Fisher performing best and Top Fisher a close second. Since the full FIM is highly impractical to
store and invert, this confirms our choice of Top Fisher as the best practical strategy.
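
For reference, a minimal sketch of the Pr@𝑘 metric used here, assuming higher scores mean stronger counter-example candidates.

    import numpy as np

    def precision_at_k(scores, is_mislabeled, k=5):
        # Fraction of truly mislabeled examples among the k highest-scoring candidates.
        top = np.argsort(scores)[::-1][:k]
        return float(np.mean(np.asarray(is_mislabeled, dtype=float)[top]))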

4.3. Q3: Both influence and curvature contribute to the effectiveness of Top
     Fisher
Finally, we evaluate the impact of selecting counter-examples with Top Fisher on the model’s performance along two axes: the use of influence, by comparing it to an intuitive nearest neighbor alternative (NN), and the modelling of the curvature, by comparing it to the Practical Fisher. NN
simply selects the counter-example that is closest to the suspicious example. The results can be
viewed in Figure 4. Top Fisher is clearly the best strategy, both in terms of number of cleaned
examples and 𝐹1 score. NN is always worse than Top Fisher in terms of 𝐹1 , even in the case of
adult (first column) when it cleans the same number of examples, confirming the importance
of influence in selecting impactful counter-examples. Practical Fisher clearly underperforms Top Fisher, which shows the importance of modelling the curvature. For
each data set, all methods make a similar number of queries: 58 for 20NG, 21 for breast, 31
for adult and 37 for MNIST. In general, cincer detects around 75% of the mislabeled examples
(compared to 50% for the other methods) and only about 5% of its queries do not involve a
Figure 4: Top Fisher vs. Practical Fisher vs. NN. Left to right: results for FC on adult, breast and 20NG, and CNN on MNIST. Top row: # of cleaned examples. Bottom row: 𝐹1 score.


corrupted example or counter-example. The complete values are reported in the Supplementary
Material. As a final remark, we note that cincer cleans more suspicious examples than counter-
examples (in a roughly 2 : 1 ratio), as shown by the number of cleaned suspicious examples
vs. counter-examples reported in the Supplementary Material. Comparing this to the curve for
Drop CE shows that proper data cleaning improves the ability of the model to be suspicious for the right reasons, as expected.


5. Related Work
Interactive learning under noise. Most existing approaches are designed for crowd-sourcing
applications in which items are labelled by different annotators of varying quality and the goal
is to aggregate weak annotations into a high-quality consensus label [3]. Our work is strongly
related to skeptical learning [10, 5] and learning from weak annotators [9, 4]. These approaches
completely ignore the issue of noise in the training set, which can be quite detrimental, as shown
by our experiments. Moreover, they are completely black-box, making it hard for the supervisor to establish or reject trust in the data and the model.

Influence functions and Fisher information. It is well known that mislabeled examples
tend to exert a larger influence on the model [33, 12, 34, 20] and indeed IFs may be a valid
alternative to the margin for identifying suspicious examples. Building on the seminal work of
Koh and Liang [12], we instead leverage IFs to define and compute contrastive counter-examples
that explain why the machine is suspicious. The difference is that noisy training examples
influence the model as a whole, whereas contrastive counter-examples influence a specific
suspicious example. To the best of our knowledge, this application of IFs is entirely novel. IFs have been used to present the user with training examples that negatively influence the model; augmented examples are then created from these to fine-tune the model [35]. That approach, however, does not consider mislabeled examples. Notice also that empirical evidence that IFs recover
noisy examples is restricted to offline learning [12, 34]. Our experiments extend this to a less
forgiving interactive setting in which only one counter-example is selected per iteration and
the model is trained on the whole training set. The idea of exploiting the FIM to approximate
the Hessian has ample support in the natural gradient descent literature [17, 18]. The FIM has
been used for computing example-based explanations by Khanna et al. [34]. Their approach is
however quite different from ours. cincer is equivalent to maximizing the Fisher kernel [32]
between the suspicious example and the counter-example (Eq. 12) for the purpose of explaining
the model’s margin, and this formulation is explicitly derived from two simple desiderata. In
contrast, Khanna et al. maximize the squared Fisher kernel between 𝑧𝑘 and 𝑧˜𝑡 divided by the
norm of 𝑧𝑘 in the RKHS, which is not equivalent to Eq. 12 and does not admit caching the
inverse FIM-vector product.

Other works. cincer draws inspiration from explanatory active learning, which integrates
local [24, 36, 37, 38] or global [39] explanations into interactive learning and allows the annotator
to supply corrective feedback on the model’s explanations. These approaches differ from cincer
in that they neither consider the issue of noise nor perform label cleaning, and indeed they
explain the model’s predictions rather than the model’s suspicion. Another notable difference is
that they rely on attribution-based explanations, whereas the backbone of cincer is example-based explanations, which enable users to reason about labels in terms of concrete (training)
examples [40, 41]. Saliency maps could potentially be integrated into cincer to provide more
fine-grained information.


6. Conclusion
We introduced cincer, an approach for handling label noise in sequential learning that asks
a human supervisor to relabel any incoming suspicious examples. Compared to previous
approaches, cincer identifies the reasons behind the model’s skepticism and asks the supervisor
to double-check them too. This is done by computing a training example that maximally
supports the machine’s suspicions. This enables the user to correct both incoming and old
examples, cleaning inconsistencies in the data that confuse the model. Our experiments show
that, by removing inconsistencies in the data, cincer enables acquiring better data and models
than less informed alternatives.
   Our work can be improved in several ways. cincer can be straightforwardly extended to
online active and skeptical learning, in which the label of incoming instances is acquired on
the fly [42, 10]. cincer can also be adapted to correct multiple counter-examples as well
as the reasons behind mislabeled counter-examples using “multi-round” label cleaning and
group-wise measures of influence [43, 44, 45]. This more refined strategy is especially promising
for dealing with systematic noise, in which counter-examples are likely affected by entire groups
of mislabeled examples.
   Potential negative impact. Like most interactive approaches, there is a risk that cincer
annoys the user by asking an excessive number of questions. This is currently mitigated by
querying the user only when the model is confident enough in its own predictions (through the
margin-based strategy) and by selecting influential counter-examples that have a high chance
to improve the model upon relabeling, thus reducing the future chance of pointless queries.
Moreover, the margin threshold 𝜏 makes it possible to modulate the amount of interaction based on the
user’s commitment. Another potential issue is that cincer could give malicious annotators fine-
grained control over the training data, possibly leading to poisoning attacks. This is however not
an issue for our target applications, like interactive assistants, in which the user benefits from
interacting with a high-quality predictor and is therefore motivated to provide non-adversarial
labels.


Acknowledgments
This research has received funding from the European Union’s Horizon 2020 FET Proactive
project “WeNet - The Internet of us”, grant agreement No. 823783, and from the “DELPhi -
DiscovEring Life Patterns” project funded by the MIUR Progetti di Ricerca di Rilevante Interesse
Nazionale (PRIN) 2017 – DD n. 1062 del 31.05.2019. The research of ST and AP was partially
supported by TAILOR, a project funded by EU Horizon 2020 research and innovation programme
under GA No 952215.


References
 [1] B. Frénay, M. Verleysen, Classification in the presence of label noise: a survey, IEEE
     transactions on neural networks and learning systems 25 (2014) 845–869.
 [2] H. Song, M. Kim, D. Park, J.-G. Lee, Learning from noisy labels with deep neural networks:
     A survey, arXiv preprint arXiv:2007.08199 (2020).
 [3] J. Zhang, X. Wu, V. S. Sheng, Learning from crowdsourced labeled data: a survey, Artificial
     Intelligence Review 46 (2016) 543–576.
 [4] J. Kremer, F. Sha, C. Igel, Robust active label correction, in: International conference on
     artificial intelligence and statistics, PMLR, 2018, pp. 308–316.
 [5] A. Bontempelli, S. Teso, F. Giunchiglia, A. Passerini, Learning in the Wild with Incremental
     Skeptical Gaussian Processes, in: Proceedings of the 29th International Joint Conference
     on Artificial Intelligence, 2020.
 [6] R. Tourangeau, L. J. Rips, K. Rasinski, The psychology of survey response, Cambridge
     University Press, 2000.
 [7] L. Corti, Using diaries in social research, Social research update 2 (1993).
 [8] B. T. West, J. Sinibaldi, The quality of paradata: A literature review, Improving Surveys
     with Paradata (2013) 339–359.
 [9] R. Urner, S. Ben-David, O. Shamir, Learning from weak teachers, in: Artificial intelligence
     and statistics, PMLR, 2012, pp. 1252–1260.
[10] M. Zeni, W. Zhang, E. Bignotti, A. Passerini, F. Giunchiglia, Fixing mislabeling by human
     annotators leveraging conflict resolution and prior knowledge, Proceedings of the ACM
     on Interactive, Mobile, Wearable and Ubiquitous Technologies 3 (2019) 32.
[11] R. D. Cook, S. Weisberg, Residuals and influence in regression, New York: Chapman and
     Hall, 1982.
[12] P. W. Koh, P. Liang, Understanding black-box predictions via influence functions, in:
     Proceedings of the 34th International Conference on Machine Learning, JMLR. org, 2017,
     pp. 1885–1894.
[13] E. L. Lehmann, G. Casella, Theory of point estimation, Springer Science & Business Media,
     2006.
[14] N. Agarwal, B. Bullins, E. Hazan, Second-order stochastic optimization for machine
     learning in linear time, The Journal of Machine Learning Research 18 (2017) 4148–4187.
[15] S. Basu, P. Pope, S. Feizi, Influence functions in deep learning are fragile, arXiv preprint
     arXiv:2006.14651 (2020).
[16] C.-K. Yeh, J. S. Kim, I. E. Yen, P. Ravikumar, Representer point selection for explaining
     deep neural networks, in: Proceedings of the 32nd International Conference on Neural
     Information Processing Systems, 2018, pp. 9311–9321.
[17] J. Martens, R. Grosse, Optimizing neural networks with Kronecker-factored approximate
     curvature, in: International conference on machine learning, PMLR, 2015, pp. 2408–2417.
[18] F. Kunstner, L. Balles, P. Hennig, Limitations of the empirical Fisher approximation for
     natural gradient descent, in: 33rd Conference on Neural Information Processing Systems
     (NeurIPS 2019), Curran Associates, Inc., 2020, pp. 4133–4144.
[19] D. Ting, E. Brochu, Optimal subsampling with influence functions, in: Advances in Neural
     Information Processing Systems, 2018, pp. 3650–3659.
[20] E. Barshan, M.-E. Brunet, G. K. Dziugaite, RelatIF: Identifying explanatory training
     examples via relative influence, in: The 23rd International Conference on Artificial
     Intelligence and Statistics, 2020.
[21] M. Heusel, H. Ramsauer, T. Unterthiner, B. Nessler, S. Hochreiter, GANs trained by a two
     time-scale update rule converge to a local nash equilibrium, in: Proceedings of the 31st
     International Conference on Neural Information Processing Systems, 2017, pp. 6629–6640.
[22] H. Guo, N. F. Rajani, P. Hase, M. Bansal, C. Xiong, FastIF: Scalable influence functions for
     efficient model interpretation and debugging, arXiv preprint arXiv:2012.15781 (2020).
[23] C. Rudin, Stop explaining black box machine learning models for high stakes decisions
     and use interpretable models instead, Nature Machine Intelligence 1 (2019) 206–215.
[24] S. Teso, K. Kersting, Explanatory interactive machine learning, in: Proceedings of the
     2019 AAAI/ACM Conference on AI, Ethics, and Society, 2019, pp. 239–245.
[25] M. Abadi, et al., TensorFlow: Large-scale machine learning on heterogeneous systems,
     2015. URL: https://www.tensorflow.org/, software available from tensorflow.org.
[26] T. Ye, Example based explanation in machine learning, https://github.com/Timothy-Ye/
     example-based-explanation, 2020.
[27] D. Dua, C. Graff, UCI machine learning repository, 2017. URL: http://archive.ics.uci.edu/ml.
[28] N. Reimers, I. Gurevych, Sentence-BERT: Sentence embeddings using Siamese BERT-networks,
     in: EMNLP-IJCNLP, 2019.
[29] Y. LeCun, C. Cortes, C. Burges, MNIST handwritten digit database, ATT Labs [Online].
     Available: http://yann.lecun.com/exdb/mnist 2 (2010).
[30] H. Xiao, K. Rasul, R. Vollgraf, Fashion-MNIST: a novel image dataset for benchmarking
     machine learning algorithms, 2017. arXiv:cs.LG/1708.07747.
[31] D. P. Kingma, J. Ba, Adam: A method for stochastic optimization, arXiv preprint
     arXiv:1412.6980 (2014).
[32] T. S. Jaakkola, D. Haussler, et al., Exploiting generative models in discriminative classifiers,
     Advances in neural information processing systems (1999) 487–493.
[33] M. Wojnowicz, B. Cruz, X. Zhao, B. Wallace, M. Wolff, J. Luan, C. Crable, “influence sketch-
     ing”: Finding influential samples in large-scale regressions, in: 2016 IEEE International
     Conference on Big Data (Big Data), IEEE, 2016, pp. 3601–3612.
[34] R. Khanna, B. Kim, J. Ghosh, S. Koyejo, Interpreting black box predictions using fisher
     kernels, in: The 22nd International Conference on Artificial Intelligence and Statistics,
     2019, pp. 3382–3390.
[35] H. Zylberajch, P. Lertvittayakumjorn, F. Toni, HILDIF: Interactive debugging of NLI models
     using influence functions, in: Proceedings of the First Workshop on Interactive Learning
     for Natural Language Processing, 2021, pp. 1–6.
[36] R. R. Selvaraju, S. Lee, Y. Shen, H. Jin, S. Ghosh, L. Heck, D. Batra, D. Parikh, Taking a
     hint: Leveraging explanations to make vision and language models more grounded, in:
     Proceedings of the IEEE/CVF International Conference on Computer Vision, 2019, pp.
     2591–2600.
[37] P. Lertvittayakumjorn, L. Specia, F. Toni, Human-in-the-loop debugging deep text classi-
     fiers, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language
     Processing (EMNLP), 2020, pp. 332–348.
[38] P. Schramowski, W. Stammer, S. Teso, A. Brugger, F. Herbert, X. Shao, H.-G. Luigs, A.-K.
     Mahlein, K. Kersting, Making deep neural networks right for the right scientific reasons
     by interacting with their explanations, Nature Machine Intelligence 2 (2020) 476–486.
[39] T. Popordanoska, M. Kumar, S. Teso, Machine guides, human supervises: Interactive
     learning with global explanations, arXiv preprint arXiv:2009.09723 (2020).
[40] T. Miller, Explanation in artificial intelligence: Insights from the social sciences, Artificial
     Intelligence (2018).
[41] J. V. Jeyakumar, J. Noor, Y.-H. Cheng, L. Garcia, M. Srivastava, How Can I Explain This to
     You? An Empirical Study of Deep Neural Network Explanation Methods, Advances in
     Neural Information Processing Systems 33 (2020).
[42] L. Malago, N. Cesa-Bianchi, J. Renders, Online active learning with strong and weak
     annotators, in: NIPS Workshop on Learning from the Wisdom of Crowds, 2014.
[43] P. W. W. Koh, K.-S. Ang, H. Teo, P. S. Liang, On the accuracy of influence functions for
     measuring group effects, in: Advances in Neural Information Processing Systems, 2019,
     pp. 5254–5264.
[44] S. Basu, X. You, S. Feizi, Second-order group influence functions for black-box predictions,
     arXiv preprint arXiv:1911.00418 (2019).
[45] A. Ghorbani, J. Zou, Data shapley: Equitable valuation of data for machine learning, in:
     International Conference on Machine Learning, PMLR, 2019, pp. 2242–2251.