=Paper= {{Paper |id=Vol-2881/paper6 |storemode=property |title=Efficient Warm Restart Adversarial Attack for Object Detection |pdfUrl=https://ceur-ws.org/Vol-2881/paper6.pdf |volume=Vol-2881 |authors=Ye Liu,Xiaofei Zhu,Xianying Huang }} ==Efficient Warm Restart Adversarial Attack for Object Detection== https://ceur-ws.org/Vol-2881/paper6.pdf
Efficient Warm Restart Adversarial Attack for Object Detection

Ye Liu, Xiaofei Zhu, Xianying Huang∗
College of Computer Science and Engineering, Chongqing University of Technology, Chongqing, China
liuye_ly94@163.com, zxf@cqut.edu.cn, hxy@cqut.edu.cn

ABSTRACT
This article introduces the solution of the champion team "green hand" for the CIKM2020 AnalytiCup: Alibaba-Tsinghua Adversarial Challenge on Object Detection. In this work, we propose a new adversarial attack method called Efficient Warm Restart Adversarial Attack for Object Detection. It consists of three modules: 1) Efficient Warm Restart Adversarial Attack, which is designed to select the proper top-k pixels; 2) Connecting Top-k Pixels with Lines, which specifies the strategy for connecting two top-k pixels so as to reduce the patch number and minimize the number of changed pixels; 3) Adaptive Black Box Optimization, which achieves better black box adversarial attack performance by adjusting only the white box models. The final results show that our model, which uses only two white box models (i.e., YOLOv4 and Faster-RCNN), achieves an evaluation score of 3761 in this competition, ranking first among all 1,701 teams. Our code will be available at https://github.com/liuye6666/EWR-PGD.

CCS CONCEPTS
• Computing methodologies → Object detection;

KEYWORDS
adversarial attacks, neural networks, object detection

1     INTRODUCTION
Deep neural networks have achieved great success in object detection [6–8]. However, recent studies have shown that deep neural networks are vulnerable to attacks from adversarial examples [1, 5, 10]. In order to expose the fragility of object detection models and better evaluate their adversarial robustness, Alibaba and Tsinghua organized the CIKM2020 AnalytiCup Challenge, i.e., the Alibaba-Tsinghua Adversarial Challenge on Object Detection. The competition uses the MSCOCO dataset1 and expects participants to make the models unable to detect objects while adding as few adversarial patches as possible.
   To make the challenge more competitive, the organizers added two constraints:
   • Constraint 1: Maximum Changed Pixel Rate Constraint, which limits the changed pixel rate to less than 2% of all image pixels.
   • Constraint 2: Patch Number Constraint, which requires the number of patches to be no more than 10.
   Existing adversarial attack methods, such as FGSM [3], PGD [4], MultiTargeted-PGD [2], and ODI-PGD [9], add adversarial perturbations to the whole image. The shortcomings of these approaches are: (1) Due to Constraint 1, adding adversarial perturbations to the whole image is not allowed. (2) These adversarial attack methods are mainly designed for the image classification scenario. As there is a considerable difference between object detection and image classification, directly applying the above methods in the object detection scenario would lead to sub-optimal results. (3) These methods do not control the number of adversarial patches, and thus cannot satisfy Constraint 2.
   To address the above-mentioned problems of existing approaches, in this work we propose a novel approach named Efficient Warm Restart Adversarial Attack for Object Detection. It consists of three modules: (1) Efficient Warm Restart Adversarial Attack (EWR), which performs multiple warm restarts during the process of generating adversarial examples and, for each warm restart, selects the most important top-k pixels based on the gradient values. (2) Connecting Top-k Pixels with Lines (CTL), which connects these important pixels together with lines so that fewer pixels are modified and the patch number satisfies Constraint 2. (3) Adaptive Black Box Optimization (ABBO), which adjusts the white box models to implicitly improve the performance of the black box adversarial attack.
   The main contributions of this work are summarized as follows:
   1) We propose a novel approach which effectively handles the limitations of existing adversarial attack methods and satisfies the two constraints given by the challenge.
   2) Our method achieves the best performance among all 1,701 teams while utilizing only two white box models, i.e., YOLOv4 and Faster-RCNN.

∗ Corresponding author
1 https://cocodataset.org/

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). In: Dimitar Dimitrov, Xiaofei Zhu (eds.): Proceedings of the CIKM AnalytiCup 2020, 22 October, 2020, Gawlay (Virtual Event), Ireland, 2020, published at http://ceur-ws.org.

2     OUR APPROACH
In order to solve the problem given in this competition, we propose a novel method which contains three modules: (1) Efficient Warm Restart Adversarial Attack; (2) Connecting Top-k Pixels with Lines; and (3) Adaptive Black Box Optimization.
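As a rough illustration before the modules are detailed, the warm-restart loop at the heart of the method can be sketched in a few lines of Python. This is our own toy sketch, not the released EWR-PGD code: `loss_grad` stands in for back propagation through YOLOv4 and Faster-RCNN, and all names are hypothetical.

```python
import numpy as np

def topk_pixels(grad, k):
    """Indices (rows, cols) of the k pixels with the largest absolute gradient."""
    flat = np.argsort(np.abs(grad), axis=None)[-k:]
    return np.unravel_index(flat, grad.shape)

def ewr_attack(image, loss_grad, k=10, restarts=10, step=8.0):
    """Warm-restart sketch: each restart re-selects the top-k pixels of the
    *current* adversarial example, so later restarts can pick up objects
    whose loss only starts to move after the early ones are attacked."""
    adv = image.astype(np.float64).copy()
    changed = np.zeros(image.shape, dtype=bool)   # mask of modified pixels
    for _ in range(restarts):
        g = loss_grad(adv)               # gradient of the attack loss w.r.t. pixels
        rows, cols = topk_pixels(g, k)
        changed[rows, cols] = True
        # move only the selected pixels in the direction that raises the loss
        adv[rows, cols] += step * np.sign(g[rows, cols])
        adv = np.clip(adv, 0, 255)
    return adv, changed
```

Each restart continues from the current `adv`, so pixels chosen in early restarts keep their perturbation while later restarts add further top-k pixels.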
2.1    Efficient Warm Restart Adversarial Attack (EWR)
In an image, there are multiple objects which can be detected. Based on our preliminary analysis, we find that the losses of these objects usually do not change in parallel. In particular, in the beginning, some objects change their corresponding loss considerably, while the loss change of the remaining objects is small. After that, the objects with little loss change in the beginning change their corresponding loss greatly. If we select the top-k pixels only in the beginning stage, the selected top-k pixels will be biased towards the objects with a high early loss change. This inevitably results in selecting improper important pixels.
   Inspired by the work on PGD [4] and I-FGSM [3], we design a novel module named Efficient Warm Restart Adversarial Attack. In the first few restarts, modifying the selected top-k pixels increases the loss of some objects. As the number of restarts increases, more important pixels are selected, so that the loss of the remaining objects also increases significantly. This method effectively solves the problem of selecting improper important pixels mentioned above.
   Therefore, for a given original image, we use multiple warm restarts. Each warm restart starts from the result of the last one: we feed the previous restart's adversarial example into the YOLOv4 and Faster-RCNN models, compute the loss, and obtain the gradient with respect to the input image through back propagation. We then select pixel points according to the new gradient values and modify these pixels in the direction that raises the loss. The process stops when the number of restarts reaches a specified threshold (e.g., 10) or the evaluation score of subsequent restarts no longer increases. Finally, we keep the adversarial example with the highest score.

2.2    Connecting Top-k Pixels with Lines (CTL)
In order to satisfy Constraint 2, we need to connect the important top-k pixels together to reduce the patch number. In this work, we propose a simple yet effective method, called Connecting Top-k Pixels with Lines, to keep the number of changed pixels as small as possible.
   Specifically, we iteratively connect pairs of top-k pixels to reduce the patch number while minimizing the number of changed pixels. First, we randomly select a pixel from the top-k pixels and connect it to its nearest pixel among the remaining top-k pixels with a line. Since a straight line involves the minimum number of changed pixels, this step minimizes the number of newly changed pixels. Then we set the selected pixel aside and run the above process again on the remaining set of pixels. We repeat these two steps until all important pixels are in the same connected set.

2.3    Adaptive Black Box Optimization (ABBO)
For an adversarial attack, the black box models are much harder to attack than the white box models. Since in our work we only make use of two white box models, we adapt our model to achieve better performance on the black box adversarial attack. In particular, we adaptively adjust the strategy of connecting top-k pixels as well as the parameter k of top-k. When an image needs only a small number of changed pixels for the white box models, the black box attack tends to be difficult; thus, we first select a small k for the top-k pixels, and we restrict the number of changed pixels between two top-k pixels. Conversely, when an image has a large number of changed pixels for the white box models, we select a bigger k for the top-k pixels. In particular, we proceed as follows:
   1) When the white box score is > 3.3, which indicates a small number of changed pixels, we set k=10 for top-k, and do not connect two top-k pixels if the number of changed pixels between them is more than 100.
   2) When the white box score is between 3 and 3.3, which indicates a medium number of changed pixels, we set k=20 for top-k, and do not connect two top-k pixels if the number of changed pixels between them is more than 150.
   3) When the white box score is < 3, which indicates a larger number of changed pixels, we set k=35 for top-k, and do not connect two top-k pixels if the number of changed pixels between them is more than 500.

2.4    Loss Function
In our EWR module, the loss function directly affects the positions of the selected important top-k pixels. Since the goal of this competition is to make the model unable to identify the bounding boxes, we only need to consider the loss related to the confidence of the bounding boxes. In order to make the confidence of all bounding boxes smaller than a given threshold, we set different weights for different confidence intervals. Specifically, for bounding boxes with higher confidence, we set a larger weight in order to make their confidence drop faster.
   For the YOLOv4 model, we use 4 confidence intervals with the following weights:

\[
Loss_{YOLO} =
\begin{cases}
-0.01 \times conf & \text{if } conf \le 0.2 \\
-0.1 \times conf  & \text{if } 0.2 < conf \le 0.3 \\
-1 \times conf    & \text{if } 0.3 < conf \le 0.4 \\
-10 \times conf   & \text{if } 0.4 < conf \le 0.5
\end{cases}
\]

where conf represents the confidence of a detection bounding box.
   For the Faster-RCNN model, since the confidence threshold of the boxes is 0.3, which is smaller than that of YOLOv4 (in YOLOv4, the confidence threshold of the detection bounding boxes is 0.5), we simply modify the loss function as follows:

\[
Loss_{RCNN} =
\begin{cases}
-0.01 \times conf & \text{if } conf \le 0.1 \\
-0.1 \times conf  & \text{if } 0.1 < conf \le 0.15 \\
-1 \times conf    & \text{if } 0.15 < conf \le 0.2 \\
-10 \times conf   & \text{if } 0.2 < conf \le 0.3
\end{cases}
\]

   Finally, for the overall loss function, we combine the loss functions of YOLOv4 and Faster-RCNN by simply adding them:

\[
Loss_{all} = Loss_{YOLO} + Loss_{Faster\text{-}RCNN} \tag{1}
\]

3     EXPERIMENTS
Dataset: This competition selected about 1,000 images from the test split of the MSCOCO 2017 dataset. Each image has been resized to 500 × 500.
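The interval-weighted confidence losses defined in Section 2.4 can be sketched as a small helper. This is our own illustrative code, not the authors' implementation; following the piecewise tables, confidences above the top interval contribute no term.

```python
def interval_weighted_loss(confs, uppers, weights):
    """Sum w * conf over detections, where w is the weight of the first
    interval (conf <= upper bound) that the confidence falls into.
    Confidences above the last upper bound get no term."""
    total = 0.0
    for conf in confs:
        for upper, w in zip(uppers, weights):
            if conf <= upper:
                total += w * conf
                break
    return total

# Interval upper bounds and weights from Section 2.4.
YOLO_UPPERS, YOLO_WEIGHTS = [0.2, 0.3, 0.4, 0.5], [-0.01, -0.1, -1.0, -10.0]
RCNN_UPPERS, RCNN_WEIGHTS = [0.1, 0.15, 0.2, 0.3], [-0.01, -0.1, -1.0, -10.0]

def loss_all(yolo_confs, rcnn_confs):
    """Overall loss (Eq. 1): the sum of the two per-model losses."""
    return (interval_weighted_loss(yolo_confs, YOLO_UPPERS, YOLO_WEIGHTS)
            + interval_weighted_loss(rcnn_confs, RCNN_UPPERS, RCNN_WEIGHTS))
```

Because the weights grow by an order of magnitude per interval, high-confidence boxes dominate the gradient, which is what makes their confidence drop fastest.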
Model: We use only the two white box models, i.e., YOLOv4 and Faster-RCNN.
Evaluation Metrics: The goal of the adversarial attack is to make all bounding boxes invisible by adding adversarial patches to the images. Thus we adopt the following metric for evaluation:

\[
S(x, x^*, m_i) = \left(1 - \frac{\min(F(x; m_i),\, F(x^*; m_i))}{F(x; m_i)}\right) \times \left(2 - \frac{\sum_k R_k}{5000}\right) \tag{2}
\]

where R_k is the k-th patch's area, x is the original image, x* is the submitted adversarial image, and m_i is the i-th model (i ∈ [1, 2, 3, 4]). F(x; m_i) returns the number of bounding boxes of image x given by model m_i (a smaller number of bounding boxes for the adversarial example indicates a higher score). At last, the final score is the sum of the scores of all images over the 4 models:

\[
FinalScore = \sum_{i=1}^{4} \sum_{x} S(x, x^*, m_i) \tag{3}
\]
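To make Eqs. (2) and (3) concrete, the scoring can be sketched as follows (our own illustrative code; the function names and the `per_model_results` layout are hypothetical). The 5000 in the area term corresponds to 2% of a 500 × 500 image, i.e., Constraint 1.

```python
def image_score(boxes_clean, boxes_adv, patch_areas):
    """Per-image, per-model score (Eq. 2): the bounding-box reduction rate
    times a bonus term that rewards a small total patch area."""
    reduction = 1 - min(boxes_clean, boxes_adv) / boxes_clean
    area_bonus = 2 - sum(patch_areas) / 5000   # 5000 px = 2% of 500x500
    return reduction * area_bonus

def final_score(per_model_results):
    """Overall score (Eq. 3): sum over the 4 models and all images.
    `per_model_results` maps each model to a list of
    (boxes_clean, boxes_adv, patch_areas) triples, one per image."""
    return sum(image_score(c, a, areas)
               for results in per_model_results.values()
               for (c, a, areas) in results)
```

Note that hiding every box on a clean image (`boxes_adv = 0`) with small patches yields a per-image score close to 2, while an unsuccessful attack scores 0 regardless of patch size.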

3.1    Results
Table 1 shows the performance of our proposed approach with different combinations of modules. The combination of EWR and CTL achieves evaluation scores of 2500+ and 2600+ when attacking YOLOv4 and Faster-RCNN, respectively. When attacking both YOLOv4 and Faster-RCNN, the combination of EWR and CTL achieves an evaluation score of 3560+. When we further combine all three modules (i.e., EWR, CTL and ABBO), we obtain the highest evaluation score (3761+), which ranks first among all 1,701 teams in the CIKM2020 AnalytiCup: Alibaba-Tsinghua Adversarial Challenge on Object Detection.

Table 1: Results of Ablation Experiments

        Model             Method             Score
   YOLO    RCNN     EWR    CTL    ABBO
    ✓                ✓      ✓                2500+
            ✓        ✓      ✓                2600+
    ✓       ✓        ✓      ✓                3560+
    ✓       ✓        ✓      ✓      ✓         3761+

3.2    Case Study
Figure 1 demonstrates the adversarial attack results on an image, where (a) is the original image, (b) is the detection result of the Faster-RCNN model, (c) is the detection result of the YOLOv4 model, and (d) is the adversarial example. We can observe that our method has the following advantages:
   • It changes only a small number of pixels.
   • Most of the top-k pixels are at the key positions of the attacked objects.

[Figure 1: Result of the adversarial attack on 47.png. (a) Clean image; (b) Faster-RCNN's result; (c) YOLOv4's result; (d) Adversarial example.]

4     CONCLUSION
In this paper, we proposed an Efficient Warm Restart Adversarial Attack method for Object Detection, which modifies fewer pixels while maintaining a very high success rate of adversarial attack. Our solution achieves the best performance among all 1,701 teams in the CIKM2020 AnalytiCup: Alibaba-Tsinghua Adversarial Challenge on Object Detection.

REFERENCES
[1] Yinpeng Dong, Fangzhou Liao, Tianyu Pang, Hang Su, Jun Zhu, Xiaolin Hu, and Jianguo Li. 2018. Boosting Adversarial Attacks with Momentum. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition. 9185–9193.
[2] Sven Gowal, Jonathan Uesato, Chongli Qin, Po-Sen Huang, Timothy A. Mann, and Pushmeet Kohli. 2019. An Alternative Surrogate Loss for PGD-based Adversarial Testing. arXiv preprint arXiv:1910.09338 (2019).
[3] Alexey Kurakin, Ian J. Goodfellow, and Samy Bengio. 2017. Adversarial examples in the physical world. In ICLR (Workshop).
[4] Aleksander Madry, Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and Adrian Vladu. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In ICLR 2018: International Conference on Learning Representations.
[5] Preetum Nakkiran, Gal Kaplun, Yamini Bansal, Tristan Yang, Boaz Barak, and Ilya Sutskever. 2020. Deep Double Descent: Where Bigger Models and More Data Hurt. In ICLR 2020: Eighth International Conference on Learning Representations.
[6] Shaoqing Ren, Kaiming He, Ross Girshick, Xiangyu Zhang, and Jian Sun. 2017. Object Detection Networks on Convolutional Feature Maps. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 7 (2017), 1476–1481.
[7] Evan Shelhamer, Jonathan Long, and Trevor Darrell. 2017. Fully Convolutional Networks for Semantic Segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence 39, 4 (2017), 640–651.
[8] Hirotaka Suzuki and Masato Ito. 2019. Information processing device, information processing method, and program.
[9] Yusuke Tashiro, Yang Song, and Stefano Ermon. 2020. Diversity can be Transferred: Output Diversification for White- and Black-box Attacks. arXiv preprint (2020).
[10] Cihang Xie, Zhishuai Zhang, Yuyin Zhou, Song Bai, Jianyu Wang, Zhou Ren, and Alan L. Yuille. 2019. Improving Transferability of Adversarial Examples With Input Diversity. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). 2730–2739.