=Paper=
{{Paper
|id=Vol-3381/paper_45
|storemode=property
|title=Less is More: Data Pruning for Faster Adversarial Training
|pdfUrl=https://ceur-ws.org/Vol-3381/45.pdf
|volume=Vol-3381
|authors=Yize Li,Pu Zhao,Xue Lin,Bhavya Kailkhura,Ryan Goldhahn
|dblpUrl=https://dblp.org/rec/conf/aaai/Li0LKG23
}}
==Less is More: Data Pruning for Faster Adversarial Training==
Yize Li¹,†, Pu Zhao¹, Xue Lin¹, Bhavya Kailkhura² and Ryan Goldhahn²
¹ Northeastern University, 360 Huntington Ave, Boston, MA 02115
² Lawrence Livermore National Laboratory, 7000 East Ave, Livermore, CA 94550
Abstract
Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the
real world. Although adversarial training (AT) is currently one of the most effective methodologies to robustify DNNs, it is
computationally very expensive (e.g., 5 ∼ 10× costlier than standard training). To address this challenge, existing approaches
focus on single-step AT, referred to as Fast AT, reducing the overhead of adversarial example generation. Unfortunately, these
approaches are known to fail against stronger adversaries. To make AT computationally efficient without compromising
robustness, this paper takes a different view of the efficient AT problem. Specifically, we propose to minimize redundancies at
the data level by leveraging data pruning. Extensive experiments demonstrate that the data pruning based AT can achieve
similar or superior robust (and clean) accuracy to its unpruned counterparts while being significantly faster. For instance, the
proposed strategies accelerate CIFAR-10 training by up to 3.44× and CIFAR-100 training by up to 2.02×. Additionally, the data
pruning methods can readily be reconciled with existing adversarial acceleration tricks to obtain the striking speed-ups of
5.66× and 5.12× on CIFAR-10, 3.67× and 3.07× on CIFAR-100 with TRADES and MART, respectively.
Keywords
Adversarial Robustness, Adversarial Data Pruning, Efficient Adversarial Training
The AAAI-23 Workshop on Artificial Intelligence Safety (SafeAI 2023), Feb 13–14, 2023, Washington, D.C., US
† Corresponding author.
li.yize@northeastern.edu (Y. Li); p.zhao@northeastern.edu (P. Zhao); xue.lin@northeastern.edu (X. Lin); kailkhura1@llnl.gov (B. Kailkhura); goldhahn1@llnl.gov (R. Goldhahn)
© 2023 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction

Deep neural networks (DNNs) achieve great success in various machine learning tasks, such as image classification [1, 2], object detection [3, 4], and language modeling [5, 6]. However, reliability and security concerns limit their wide deployment in real-world applications. For example, imperceptible perturbations added to inputs by adversaries (known as adversarial examples) [7, 8, 9] can cause incorrect predictions during inference. Therefore, many research efforts are devoted to designing robust DNNs against adversarial examples [10, 11, 12].

Adversarial Training (AT) [13] is one of the most effective defense approaches for improving adversarial robustness. AT is formulated as a min-max problem, with the inner maximization generating adversarial examples and the outer minimization training the model on them. However, to achieve a better defense with higher robustness, iterative AT must generate stronger adversarial examples with more steps in the inner problem, leading to expensive computation costs. In response to this difficulty, a number of approaches investigate efficient AT, such as Fast AT [14] and its variants [15, 16], via single-step adversarial attacks. Unfortunately, these cheaper training approaches are known to attain poor performance against stronger adversaries and to suffer from 'catastrophic overfitting' [14, 17], where Projected Gradient Descent (PGD) robustness is gained at the beginning of training but the robust accuracy later suddenly drops to 0. In this regard, there does not yet seem to exist a satisfactory solution that achieves optimal robustness at moderate computation cost.

In this paper, we propose to overcome the above limitation by exploring a new perspective: leveraging data pruning during AT. Differing from prior Fast AT-based solutions that focus on the AT algorithm, we attain efficiency by selecting a representative subset of training samples and performing AT on this smaller dataset. Although several recent works explore data pruning for efficient standard training (see [18] for a survey), data pruning for efficient AT is not well investigated. To the best of our knowledge, the most relevant work is [19], which speeds up AT by loss-based data pruning. However, random sub-sampling outperforms their data pruning scheme in terms of clean accuracy, robustness, and training efficiency, raising doubts about the feasibility of that approach. In contrast, we propose to perform data pruning in two ways: 1) by maximizing the log-likelihood of the subset on the validation dataset, and 2) by minimizing the gradient disparity between the subset and the full dataset. We implement these approaches with two AT objectives: TRADES [20] and MART [21]. Experimental results show that we can achieve training acceleration of up to 3.44× on CIFAR-10 and 2.02× on CIFAR-100. In addition, incorporating our proposed data pruning with Bullet-Train [22], which allocates dynamic computing cost to categorized training data, further improves the speed-ups to 5.66× on CIFAR-10 and 3.67× on CIFAR-100.
Our main contributions are summarized below.

• We explore efficient AT through the lens of data pruning, where acceleration is achieved by focusing only on a representative subset of the data.
• We propose two data pruning algorithms, Adv-GRAD-MATCH and Adv-GLISTER, and perform a comprehensive experimental study. We demonstrate that our data pruning methods yield consistent effectiveness across diverse robustness evaluations, e.g., PGD [13] and AutoAttack [23].
• Furthermore, combining our efficient AT framework with the existing Bullet-Train approach [22] achieves state-of-the-art performance in training cost.

2. Related Work

Adversarial attacks and defenses. Adversarial attacks [13, 24, 25, 26, 27] refer to detrimental techniques that inject imperceptible perturbations into the inputs and mislead the decision-making process of networks. In this paper, we mainly investigate ℓ_𝑝 attacks, where 𝑝 ∈ {0, 1, 2, ∞}. The Fast Gradient Sign Method (FGSM) [24] is the cheapest one-shot adversarial attack. The Basic Iterative Method (BIM) [28], Projected Gradient Descent (PGD) [13], and CW [25] are stronger attacks that are iterative in nature. Adversarial examples are used for the assessment of model robustness; AutoAttack [23] ensembles multiple attack strategies to perform a fair and reliable evaluation of adversarial robustness.

Various defense methods [29, 30, 31, 32] have been proposed to tackle the vulnerability of DNNs against adversarial examples, and most of these approaches are built on AT, where perturbed inputs are fed to DNNs to learn from adversarial examples. Projected Gradient Descent (PGD) based AT, which uses a multi-step adversary, is one of the most popular defense strategies [13]. Training only with adversarial samples can lead to a drop in clean accuracy [33]. To improve the trade-off between accuracy and robustness, TRADES [20] and MART [21] compose the training loss from both a natural error term and a robustness regularization term. Curriculum Adversarial Training (CAT) [34] robustifies DNNs by adjusting the PGD steps from weak to strong attack strength, while Friendly Adversarial Training (FAT) [35] performs early-stopped PGD for adversarial examples.

Efficient adversarial training. Although PGD-based training shows empirical robustness against adversarial examples, its learning overhead is usually dramatically larger than that of standard training, e.g., 5 ∼ 10× the computation, depending on the number of steps used to generate adversarial examples. The major work on training efficiency focuses on reducing the number of attack steps while maintaining the stability of one-step FGSM-based AT. Free AT [36] performs FGSM perturbations and updates the model weights on the same mini-batch simultaneously. Fast AT [14] generates FGSM attacks with random initialization but still suffers from 'catastrophic overfitting'. Therefore, gradient alignment regularization [17], a suitable inner interval (step size) for the adversarial direction [16], and Fast Bi-level AT (FAST-BAT) [37] have been proposed to prevent such failure.

Data pruning. Efficient learning through data subset selection economizes on training resources. Proxy functions [38, 39] take advantage of the feature representations of a tiny proxy model to select the most informative subset for training the larger one. Coreset-based algorithms [40] mine a small representative subset that approximates the entire dataset according to established criteria. CRAIG [41] selects the training data subset that approximates the full gradient, and GRAD-MATCH [42] minimizes the gradient matching error. GLISTER [43] prunes the training data by maximizing the log-likelihood on the validation set.

3. Data Pruning Based Adversarial Training

3.1. Preliminaries

AT [13] aims to solve the following min-max optimization problem:

min_𝜃 (1/|𝒟|) Σ_{(𝑥,𝑦)∈𝒟} [ max_{𝛿∈△} ℒ(𝜃; 𝑥 + 𝛿, 𝑦) ],   (1)

where 𝜃 is the model parameter; 𝑥 and 𝑦 denote a data sample and its label from the training dataset 𝒟; 𝛿 denotes the imperceptible adversarial perturbation injected into 𝑥 under a norm constraint of strength 𝜖, i.e., △ := {𝛿 : ‖𝛿‖∞ ≤ 𝜖}; and ℒ is the training loss. During the adversarial procedure, the optimization first maximizes the inner approximation to generate adversarial attacks and then minimizes the outer training error over the model parameter 𝜃. A typical adversarial example generation procedure involves multiple steps for a stronger adversary, e.g.,

𝑥_{𝑡+1} = Proj_△( 𝑥_𝑡 + 𝛼 · sign( ∇_{𝑥_𝑡} ℒ(𝜃; 𝑥_𝑡, 𝑦) ) ),   (2)

where the projection onto the 𝜖-ball at step 𝑡 uses the step size 𝛼 and the sign of the gradients.
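The multi-step update in Eq. (2) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: a toy quadratic loss with an analytic gradient stands in for the DNN training loss so that the sign-gradient step and the 𝜖-ball projection are concrete.

```python
import numpy as np

# Sketch of Eq. (2): x_{t+1} = Proj(x_t + alpha * sign(grad_x L)).
# A toy loss L(theta; x, y) = 0.5 * (theta @ x - y)^2 replaces the DNN loss
# purely for illustration, so the gradient is analytic.

def loss_grad_x(theta, x, y):
    """Gradient of 0.5 * (theta @ x - y)**2 with respect to x."""
    return (theta @ x - y) * theta

def pgd_attack(theta, x, y, eps=0.25, alpha=0.1, steps=5):
    """Iterated sign-gradient ascent on the loss, projected back onto the
    l_inf eps-ball around the clean input x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = loss_grad_x(theta, x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection Proj onto the eps-ball
    return x_adv

theta = np.array([1.0, 1.0])
x, y = np.zeros(2), -1.0
x_adv = pgd_attack(theta, x, y)
```

PGD as used in the paper additionally starts from a random point inside the 𝜖-ball and clips images to the valid pixel range; both are omitted here for brevity.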
[Figure 1 contains six panels, (a) TRADES, (b) Bullet, (c) Adv-GRAD-MATCH, (d) Adv-GLISTER, (e) Adv-GRAD-MATCH&Bullet, and (f) Adv-GLISTER&Bullet, each plotting the fraction (%) of outlier, boundary, and robust examples against the training epoch.]
Figure 1: Tracking of adversarial robustness during 200 epochs of training. Red, green, and blue denote outlier, robust, and boundary examples, respectively.
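The three sets tracked in Figure 1 are determined by two predictions per sample: one on the clean input and one on the attacked input. A minimal sketch of this split, where `predict` and `attack` are hypothetical callables standing in for the trained model and the PGD-5-1 attack used by the authors:

```python
def categorize(x, y, predict, attack):
    """Outlier: already misclassified on the clean input.
    Boundary: correct on the clean input but misclassified under attack.
    Robust: correct on both the clean and the attacked input."""
    if predict(x) != y:
        return "outlier"
    if predict(attack(x, y)) != y:
        return "boundary"
    return "robust"

# Toy demo: a 1-D threshold classifier and an "attack" that shifts the input
# toward the decision boundary (both are illustrative stand-ins).
predict = lambda x: 1 if x > 0 else 0
attack = lambda x, y: x - 0.5
```

With these stand-ins, an input far from the boundary is robust, one close to it is a boundary example, and one already misclassified is an outlier.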
3.2. General Formulation for Adversarial Data Pruning

Our adversarial data pruning consists of two steps: adversarial subset selection and AT with the subset of data. In specified epochs, adversarial subset selection first finds a representative subset of data from the entire training dataset; next, AT is performed with the selected subset. Though the size of the subset stays the same across iterations, the data in the subset is updated in each iteration based on the current status of the model weights. We formulate AT with the data subset in Eq. (3) and adversarial subset selection in Eq. (4):

min_𝜃 (1/𝑘) Σ_{(𝑥,𝑦)∈𝒮} [ max_{𝛿∈△} ℒ(𝜃; 𝑥 + 𝛿, 𝑦) ],   (3)

min_{𝒮⊆𝒟, |𝒮|=𝑘} 𝐺(𝒮),   (4)

where 𝒟 represents the complete training set and 𝛿 represents the perturbation under the ℓ∞ norm constraint △. The selected subset 𝒮 of size 𝑘 is obtained by optimizing the function 𝐺, which aims to narrow the difference between 𝒟 and 𝒮 under specific criteria given the model parameters 𝜃. Note that the data selection step is performed only periodically to achieve computational savings.

Recent data subset selection schemes, GRAD-MATCH [42] and GLISTER [43], have made significant contributions towards efficiently achieving high clean accuracy. We extend these approaches to the context of adversarial robustness. Motivated by GLISTER [43], we first consider training on a subset that obtains the optimal adversarial log-likelihood on the validation set, defined as Adv-GLISTER in Eq. (5):

𝐺(𝒮) = Σ_{(𝑥_𝑉,𝑦_𝑉)∈𝒱} 𝐿_𝑉(𝜃_𝑆; 𝑥_𝑉 + 𝛿*_𝑉, 𝑦_𝑉),   (5)

where 𝐿_𝑉 is the negative log-likelihood on the validation set 𝒱, 𝜃_𝑆 denotes the model parameters trained on the subset, and 𝛿*_𝑉 is the adversarial perturbation obtained by maximizing 𝐿_𝑉(𝜃_𝑆; 𝑥_𝑉 + 𝛿_𝑉, 𝑦_𝑉).

Another adversarial data pruning approach is inspired by GRAD-MATCH [42], which aims to find the data subset whose gradients closely match those of the full training data. Adv-GRAD-MATCH is formulated as Eq. (6):

𝐺(𝒮) = ‖ Σ_{(𝑥_𝑆,𝑦_𝑆)∈𝒮} 𝑤 ∇_𝜃 ℒ_𝒮(𝜃; 𝑥_𝑆 + 𝛿*_𝑆, 𝑦_𝑆) − Σ_{(𝑥_𝐷,𝑦_𝐷)∈𝒟} ∇_𝜃 ℒ_𝒟(𝜃; 𝑥_𝐷 + 𝛿*_𝐷, 𝑦_𝐷) ‖,   (6)

where 𝑤 is the weight vector associated with each instance 𝑥_𝑆 in the subset 𝒮; ℒ_𝒮 and ℒ_𝒟 denote the training loss over the subset and the entire dataset, respectively; and 𝛿*_𝑆 and 𝛿*_𝐷 are adversarial perturbations obtained by maximizing 𝐿_𝑆(𝜃; 𝑥_𝑆 + 𝛿_𝑆, 𝑦_𝑆) and 𝐿_𝐷(𝜃; 𝑥_𝐷 + 𝛿_𝐷, 𝑦_𝐷), respectively. During data selection, the adversarial gradient difference between the weighted subset loss and the complete dataset loss is minimized so as to produce the optimum subset and the corresponding weights.
Table 1: TRADES results, where the data pruning methods use only 30% of the data points on CIFAR-10 and 50% on CIFAR-100, for 100 epochs of training. Time/epoch is in seconds.

CIFAR-10:
| Method | Clean | PGD 4/255 | PGD 8/255 | PGD 16/255 | AutoAttack | Time/epoch (Speed-up) |
| TRADES [20] | 82.73 | 69.17 | 51.83 | 19.43 | 49.06 | 416.20 (-) |
| Bullet [22] | 84.60 | 70.24 | 50.82 | 16.05 | 47.93 | 193.06 (2.16×) |
| Adv-GLISTER (Ours) | 77.62 | 63.06 | 46.06 | 16.52 | 41.61 | 120.70 (3.45×) |
| Adv-GRAD-MATCH (Ours) | 75.67 | 61.85 | 45.96 | 17.49 | 42.19 | 138.19 (3.01×) |
| Adv-GLISTER&Bullet (Ours) | 79.21 | 63.02 | 44.52 | 13.33 | 40.77 | 72.91 (5.66×) |
| Adv-GRAD-MATCH&Bullet (Ours) | 77.57 | 62.00 | 45.13 | 14.65 | 41.94 | 87.38 (4.76×) |

CIFAR-100:
| Method | Clean | PGD 4/255 | PGD 8/255 | PGD 16/255 | AutoAttack | Time/epoch (Speed-up) |
| TRADES [20] | 55.85 | 40.31 | 27.35 | 10.71 | 23.39 | 387.72 (-) |
| Bullet [22] | 59.43 | 42.23 | 28.08 | 9.40 | 23.85 | 173.59 (2.23×) |
| Adv-GLISTER (Ours) | 51.26 | 37.16 | 24.78 | 9.49 | 20.57 | 202.70 (1.91×) |
| Adv-GRAD-MATCH (Ours) | 51.03 | 37.17 | 24.60 | 9.70 | 20.42 | 206.05 (1.88×) |
| Adv-GLISTER&Bullet (Ours) | 53.54 | 37.24 | 23.91 | 7.69 | 20.02 | 105.66 (3.67×) |
| Adv-GRAD-MATCH&Bullet (Ours) | 52.98 | 36.92 | 24.24 | 8.01 | 20.17 | 105.61 (3.67×) |
Table 2: MART results, where the data pruning methods use only 30% of the data points on CIFAR-10 and 50% on CIFAR-100, for 100 epochs of training. Time/epoch is in seconds.

CIFAR-10:
| Method | Clean | PGD 4/255 | PGD 8/255 | PGD 16/255 | AutoAttack | Time/epoch (Speed-up) |
| MART [21] | 80.96 | 68.21 | 52.59 | 19.52 | 46.94 | 329.54 (-) |
| Bullet [22] | 85.29 | 70.92 | 50.64 | 13.33 | 43.77 | 199.42 (1.65×) |
| Adv-GLISTER (Ours) | 71.97 | 60.13 | 46.25 | 16.59 | 39.86 | 95.68 (3.44×) |
| Adv-GRAD-MATCH (Ours) | 73.67 | 61.35 | 47.07 | 18.16 | 40.98 | 106.51 (3.09×) |
| Adv-GLISTER&Bullet (Ours) | 73.87 | 59.89 | 44.01 | 14.20 | 38.99 | 64.31 (5.12×) |
| Adv-GRAD-MATCH&Bullet (Ours) | 78.78 | 64.42 | 46.72 | 13.50 | 39.53 | 77.11 (4.27×) |

CIFAR-100:
| Method | Clean | PGD 4/255 | PGD 8/255 | PGD 16/255 | AutoAttack | Time/epoch (Speed-up) |
| MART [21] | 54.85 | 39.24 | 25.08 | 8.59 | 22.66 | 307.43 (-) |
| Bullet [22] | 57.44 | 39.22 | 24.14 | 6.66 | 21.55 | 187.73 (1.64×) |
| Adv-GLISTER (Ours) | 46.36 | 34.37 | 24.01 | 9.20 | 19.79 | 152.11 (2.02×) |
| Adv-GRAD-MATCH (Ours) | 48.07 | 36.19 | 26.11 | 10.79 | 21.24 | 153.86 (2.00×) |
| Adv-GLISTER&Bullet (Ours) | 52.13 | 35.07 | 20.67 | 5.64 | 18.21 | 100.22 (3.07×) |
| Adv-GRAD-MATCH&Bullet (Ours) | 52.46 | 35.81 | 22.20 | 6.48 | 18.68 | 113.03 (2.72×) |
4. Experiments

4.1. Experiment Setup

To evaluate the efficiency and generality of the proposed method, we apply the adversarial training loss functions of TRADES [20] and MART [21] on the standard datasets CIFAR-10 and CIFAR-100 [44], training ResNet-18 [45]. Our adversarial data pruning methods, Adv-GRAD-MATCH and Adv-GLISTER, are run with different data portions (subset sizes) of [30%, 50%] for 100 and 200 epochs, with a selection interval of 20 (i.e., adversarial subset selection is performed every 20 epochs of AT). For Adv-GLISTER, the original training dataset is divided into a train set (90%) and a validation set (10%). The optimizer is SGD with momentum 0.9 and weight decay 2e-4 for TRADES and 3.5e-3 for MART. For Adv-GRAD-MATCH and Adv-GLISTER, the initial learning rates are 0.01 and 0.02 on CIFAR-10, and 0.08 and 0.05 on CIFAR-100, respectively. Besides the original TRADES [20] and MART [21] methods, we also compare our approach with Bullet-Train [22]. The PGD attack [13] (PGD-50-10) is adopted for evaluating robust accuracy, ranging from low magnitude (𝜖 = 4/255) to high magnitude (𝜖 = 16/255), with 50 iterations and 10 restarts at step size 𝛼 = 2/255 under the ℓ∞ norm. Moreover, AutoAttack [23] is leveraged for reliable robustness evaluation. Additionally, our methods can be combined with Bullet-Train [22]; we term these combinations Adv-GRAD-MATCH&Bullet and Adv-GLISTER&Bullet.

4.2. Main Results

Table 1 shows the results of our Adv-GLISTER and Adv-GRAD-MATCH for TRADES, compared with the original TRADES and Bullet-Train methods.
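The PGD-50-10 robust-accuracy protocol from Sec. 4.1 can be sketched as follows. This is an illustrative sketch, not the authors' code: `attack` and `is_correct` are hypothetical callables standing in for one restart of the PGD attack and the model's prediction check; a sample counts as robust only if it survives every restart.

```python
def robust_accuracy(dataset, attack, is_correct, restarts=10):
    """Fraction of samples whose prediction stays correct under every
    one of the attack restarts (the strongest restart decides)."""
    robust = 0
    for x, y in dataset:
        if all(is_correct(attack(x, y, seed=s), y) for s in range(restarts)):
            robust += 1
    return robust / len(dataset)

# Toy demo: 1-D inputs labeled 1 iff x > 0; a deterministic stand-in "attack"
# shifts the input by -0.1 regardless of the seed.
dataset = [(1.0, 1), (0.05, 1)]
attack = lambda x, y, seed: x - 0.1
is_correct = lambda x, y: (1 if x > 0 else 0) == y
acc = robust_accuracy(dataset, attack, is_correct)
```

Taking the worst case over restarts is what makes multi-restart PGD a stricter evaluation than a single run.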
Table 3
100 v.s. 200 epoch TRADES CIFAR-10 results with ResNet-18 when using 30% data points with robustness regularization factor
to be 1.
PGD
Method Epoch Clean AutoAttack
4/255 8/255 16/255
Adv-GLISTER 100 77.62 63.06 46.06 16.52 41.61
Adv-GRAD-MATCH 100 75.61 60.81 45.76 17.49 42.19
Adv-GLISTER 200 78.76 64.15 46.11 16.92 42.43
Adv-GRAD-MATCH 200 75.75 61.24 46.49 18.55 43.63
Table 4: TRADES results on CIFAR-10 with ResNet-18 using 30% of the data samples under different selection counts for 200-epoch training.

| Method | Number of selections | Clean | PGD 4/255 | PGD 8/255 | PGD 16/255 | AutoAttack | Speed-up |
| TRADES | - | 83.32 | 68.91 | 49.64 | 17.31 | 47.53 | - |
| Adv-GLISTER | 4 | 75.80 | 60.48 | 44.62 | 16.07 | 40.44 | 3.15× |
| Adv-GRAD-MATCH | 4 | 73.80 | 60.43 | 46.06 | 18.33 | 43.03 | 2.83× |
| Adv-GLISTER | 9 | 78.76 | 64.15 | 46.11 | 16.92 | 42.43 | 2.93× |
| Adv-GRAD-MATCH | 9 | 75.75 | 61.24 | 46.49 | 18.55 | 43.63 | 2.75× |
The comparison is in terms of clean and robust accuracy (under two attack methods, the PGD attack [13] and AutoAttack [23]) along with the training speed-up. We observe that, compared to the baselines, the training efficiency of our method is improved significantly on CIFAR-10, while clean accuracy and robustness under AutoAttack and PGD attacks decrease for different values of 𝜖. Notably, for 𝜖 = 16/255 the robust accuracy improves from 16.05% (Bullet-Train [22]) to 16.52% and 17.49% with our Adv-GLISTER and Adv-GRAD-MATCH, indicating our defensive capability against powerful attacks. As displayed in Table 1, our Adv-GLISTER and Adv-GRAD-MATCH reduce the training overhead (seconds per epoch) enormously and achieve 3.45× and 3.01× training speed-ups. After combining our approaches with Bullet-Train [22], an even faster acceleration of 5.66× can be reached.

On CIFAR-100, the validity of our schemes is consistent as well. The reason why both clean and robust accuracy drop might be that our data pruning schemes struggle with the dimensionality and complexity of the dataset. Regardless, our schemes still yield conspicuous computation savings compared with the other baselines.

To understand the robustness behavior of our schemes, we track the dynamics of the outlier, robust, and boundary sets (similar to [22]) using a PGD-5-1 attack. Without any attack, outlier examples are already misclassified by the model, while boundary and robust examples are correctly identified; after adversarial attacks, boundary examples are incorrectly classified while robust examples are still correctly classified. Fig. 1 displays the dynamics of the outlier, boundary, and robust examples on CIFAR-10 for the various schemes. During model training and data selection, the number of robust samples gradually increases and eventually dominates, while the numbers of outlier and boundary data points decrease over the epochs, revealing similar trends for TRADES-based AT and the data pruning-based methods. In addition, the final portions of the three sets explain the clean accuracy and robustness degradation of our approaches: the two baselines obtain more robust samples and fewer boundary and outlier examples.

We further evaluate the performance of adversarial data pruning based on the MART loss in Table 2. The results are consistent with our findings on TRADES in Table 1.

4.3. Ablation Studies

Epoch. We first consider the number of training epochs. Table 3 shows that longer training improves both clean and robust accuracy. Due to the shrunken data size, more epochs are required to enhance data-efficient adversarial learning, in alignment with standard data pruning training. However, 100-epoch training appears to be sufficient for the small dataset.

Subset Size. We experiment with different subset sizes. Moving from an extremely small subset (10% of the full training set) to a larger one (70%) in Fig. 2, robust accuracy gradually increases toward that of the full dataset. This highlights the benefit of pruning with an optimal subset size. Taking overall efficiency into account, 30% is an appropriate choice of subset size for CIFAR-10.

Number of selection rounds. In Sec. 4.2, our experiments perform adversarial data pruning every 20 epochs (9 selections). Here we present the results of data pruning every 40 epochs (4 selections).
[Figure 2 contains two panels, (a) TRADES and (b) MART, plotting PGD robustness (%) and the corresponding speed-up for subset sizes of 10%, 30%, 50%, 70%, and 100% on CIFAR-10.]
Figure 2: PGD evaluation (𝜖 = 8/255) with the corresponding speed-up under different subset sizes for 100-epoch CIFAR-10 training. Note that when the size is 100%, the data pruning methods are not applied and the speed-up is relative to the baselines (TRADES or MART).
As shown in Table 4, 9 selections achieve better clean and robust accuracy with comparable acceleration.

5. Conclusion and Future Work

In this paper, we investigated efficient adversarial training from a data-pruning perspective. With comprehensive experiments, we demonstrated that the proposed adversarial data pruning approaches outperform the existing baselines by mitigating substantial computational overhead. These positive results pave a path for future research on accelerating AT by minimizing redundancy at the data level. Our future work will focus on designing more accurate pruning schemes for large-scale datasets.

Acknowledgment

This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 and was supported by the LLNL-LDRD Program under Project No. 20-SI-005 (LLNL-CONF-842760).

References

[1] Q. Xie, M.-T. Luong, E. Hovy, Q. V. Le, Self-training with noisy student improves ImageNet classification, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[2] P. Foret, A. Kleiner, H. Mobahi, B. Neyshabur, Sharpness-aware minimization for efficiently improving generalization, in: International Conference on Learning Representations (ICLR), 2021.
[3] Z.-Q. Zhao, P. Zheng, S.-T. Xu, X. Wu, Object detection with deep learning: A review, IEEE Transactions on Neural Networks and Learning Systems 30 (2019) 3212–3232. doi:10.1109/TNNLS.2018.2876865.
[4] S. S. A. Zaidi, M. S. Ansari, A. Aslam, N. Kanwal, M. Asghar, B. Lee, A survey of modern deep learning based object detection models, Digital Signal Processing 126 (2022) 103514.
[5] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, I. Polosukhin, Attention is all you need, in: Advances in Neural Information Processing Systems (NeurIPS), 2017.
[6] S. Borgeaud, A. Mensch, J. Hoffmann, T. Cai, E. Rutherford, K. Millican, G. B. Van Den Driessche, J.-B. Lespiau, B. Damoc, A. Clark, D. De Las Casas, A. Guy, J. Menick, R. Ring, T. Hennigan, S. Huang, L. Maggiore, C. Jones, A. Cassirer, A. Brock, M. Paganini, G. Irving, O. Vinyals, S. Osindero, K. Simonyan, J. Rae, E. Elsen, L. Sifre, Improving language models by retrieving from trillions of tokens, in: Proceedings of the 39th International Conference on Machine Learning (ICML), 2022.
[7] P.-Y. Chen, H. Zhang, Y. Sharma, J. Yi, C.-J. Hsieh, ZOO: Zeroth order optimization based black-box attacks to deep neural networks without training substitute models, in: Proceedings of the ACM Workshop on Artificial Intelligence and Security, ACM, 2017.
[8] C. Xiao, B. Li, J.-Y. Zhu, W. He, M. Liu, D. Song, Generating adversarial examples with adversarial networks, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI), 2018.
[9] F. Tramer, N. Carlini, W. Brendel, A. Madry, On adaptive attacks to adversarial example defenses, in: Advances in Neural Information Processing Systems (NeurIPS), 2020.
[10] A. Athalye, N. Carlini, D. Wagner, Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples, in: Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
[11] E. Wong, Z. Kolter, Provable defenses against adversarial examples via the convex outer adversarial polytope, in: Proceedings of the 35th International Conference on Machine Learning (ICML), 2018.
[12] H. Salman, J. Li, I. Razenshteyn, P. Zhang, H. Zhang, S. Bubeck, G. Yang, Provably robust deep learning via adversarially trained smoothed classifiers, in: Advances in Neural Information Processing Systems (NeurIPS), 2019.
[13] A. Madry, A. Makelov, L. Schmidt, D. Tsipras, A. Vladu, Towards deep learning models resistant to adversarial attacks, in: International Conference on Learning Representations (ICLR), 2018.
[14] E. Wong, L. Rice, J. Z. Kolter, Fast is better than free: Revisiting adversarial training, in: International Conference on Learning Representations (ICLR), 2020.
[15] B. S. Vivek, R. Venkatesh Babu, Single-step adversarial training with dropout scheduling, in: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020.
[16] H. Kim, W. Lee, J. Lee, Understanding catastrophic overfitting in single-step adversarial training, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 35, 2021, pp. 8119–8127.
[17] M. Andriushchenko, N. Flammarion, Understanding and improving fast adversarial training, in: Advances in Neural Information Processing Systems (NeurIPS), 2020.
[18] B. R. Bartoldson, B. Kailkhura, D. Blalock, Compute-efficient deep learning: Algorithmic trends and opportunities, arXiv preprint arXiv:2210.06640 (2022).
[19] M. Kaufmann, Y. Zhao, I. Shumailov, R. Mullins, N. Papernot, Efficient adversarial training with data pruning, arXiv preprint, 2022.
[20] H. Zhang, Y. Yu, J. Jiao, E. P. Xing, L. E. Ghaoui, M. I. Jordan, Theoretically principled trade-off between robustness and accuracy, in: International Conference on Machine Learning (ICML), 2019.
[21] Y. Wang, D. Zou, J. Yi, J. Bailey, X. Ma, Q. Gu, Improving adversarial robustness requires revisiting misclassified examples, in: International Conference on Learning Representations (ICLR), 2020.
[22] W. Hua, Y. Zhang, C. Guo, Z. Zhang, G. E. Suh, BulletTrain: Accelerating robust neural network training via boundary example mining, in: Advances in Neural Information Processing Systems (NeurIPS), 2021.
[23] F. Croce, M. Hein, Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks, in: International Conference on Machine Learning (ICML), 2020.
[24] I. J. Goodfellow, J. Shlens, C. Szegedy, Explaining and harnessing adversarial examples, arXiv preprint, 2015.
[25] N. Carlini, D. Wagner, Towards evaluating the robustness of neural networks, in: IEEE Symposium on Security and Privacy (S&P), IEEE, 2017.
[26] F. Croce, M. Hein, Sparse and imperceivable adversarial attacks, in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[27] Q. Zhang, X. Li, Y. Chen, J. Song, L. Gao, Y. He, H. Xue, Beyond ImageNet attack: Towards crafting adversarial examples for black-box domains, in: International Conference on Learning Representations (ICLR), 2022.
[28] A. Kurakin, I. Goodfellow, S. Bengio, Adversarial examples in the physical world, 2016. URL: https://arxiv.org/abs/1607.02533. doi:10.48550/ARXIV.1607.02533.
[29] D. Meng, H. Chen, MagNet: A two-pronged defense against adversarial examples, in: Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 2017.
[30] F. Liao, M. Liang, Y. Dong, T. Pang, X. Hu, J. Zhu, Defense against adversarial attacks using high-level representation guided denoiser, in: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018.
[31] A. Mustafa, S. Khan, M. Hayat, R. Goecke, J. Shen, L. Shao, Adversarial defense by restricting the hidden space of deep neural networks, in: Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019.
[32] Y. Gong, Y. Yao, Y. Li, Y. Zhang, X. Liu, X. Lin, S. Liu, Reverse engineering of imperceptible adversarial image perturbations, in: International Conference on Learning Representations (ICLR), 2022.
[33] D. Su, H. Zhang, H. Chen, J. Yi, P.-Y. Chen, Y. Gao, Is robustness the cost of accuracy? A comprehensive study on the robustness of 18 deep image classification models, in: Proceedings of the European Conference on Computer Vision (ECCV), 2018.
[34] Q.-Z. Cai, C. Liu, D. Song, Curriculum adversarial training, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18), International Joint Conferences on Artificial Intelligence Organization, 2018, pp. 3740–3747. URL: https://doi.org/10.24963/ijcai.2018/520. doi:10.24963/ijcai.2018/520.
[35] J. Zhang, X. Xu, B. Han, G. Niu, L. Cui, M. Sugiyama, M. Kankanhalli, Attacks which do not kill training make adversarial learning stronger, in: Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[36] A. Shafahi, M. Najibi, M. A. Ghiasi, Z. Xu, J. Dickerson, C. Studer, L. S. Davis, G. Taylor, T. Goldstein, Adversarial training for free!, in: Advances in Neural Information Processing Systems (NeurIPS), 2019.
[37] Y. Zhang, G. Zhang, P. Khanduri, M. Hong, S. Chang, S. Liu, Revisiting and advancing fast adversarial training through the lens of bi-level optimization, in: International Conference on Machine Learning (ICML), 2022.
[38] C. Coleman, C. Yeh, S. Mussmann, B. Mirzasoleiman, P. Bailis, P. Liang, J. Leskovec, M. Zaharia, Selection via proxy: Efficient data selection for deep learning, in: International Conference on Learning Representations (ICLR), 2020. URL: https://openreview.net/forum?id=HJg2b0VYDr.
[39] V. Kaushal, R. Iyer, S. Kothawade, R. Mahadev, K. Doctor, G. Ramakrishnan, Learning from less data: A unified data subset selection and active learning framework for computer vision, in: Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2019.
[40] D. Feldman, Core-Sets: Updated Survey, Springer International Publishing, Cham, 2020, pp. 23–44. URL: https://doi.org/10.1007/978-3-030-29349-9_2. doi:10.1007/978-3-030-29349-9_2.
[41] B. Mirzasoleiman, J. Bilmes, J. Leskovec, Coresets for data-efficient training of machine learning models, in: Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[42] K. Killamsetty, D. S, G. Ramakrishnan, A. De, R. Iyer, GRAD-MATCH: Gradient matching based data subset selection for efficient deep model training, in: Proceedings of the 38th International Conference on Machine Learning (ICML), 2021.
[43] K. Killamsetty, D. Sivasubramanian, G. Ramakrishnan, R. Iyer, GLISTER: Generalization based data subset selection for efficient and robust learning, in: Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), volume 35, 2021, pp. 8110–8118.
[44] A. Krizhevsky, G. Hinton, Learning multiple layers of features from tiny images, Master's thesis, Department of Computer Science, University of Toronto, 2009.
[45] K. He, X. Zhang, S. Ren, J. Sun, Identity mappings in deep residual networks, in: European Conference on Computer Vision (ECCV), Springer, 2016, pp. 630–645.