<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Zhengyu Zhao</string-name>
          <email>z.zhao@cs.ru.nl</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Radboud University</institution>
          ,
          <country country="NL">Netherlands</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <fpage>14</fpage>
      <lpage>15</lpage>
      <abstract>
<p>This paper presents the submission of our RU-DS team to the Pixel Privacy Task 2020. We propose to fool a blind image quality assessment model by transforming images based on optimizing a human-understandable color filter. In contrast to common work that relies on small, ℓp-bounded additive pixel perturbations, our approach yields large yet smooth perturbations. Experimental results demonstrate that, in the specific context of this task, our approach achieves strong adversarial effects, but has to sacrifice image appeal.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>INTRODUCTION</title>
      <p>
        High-quality images shared online can be misappropriated for
promotional goals. The Pixel Privacy Task [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] this year is focused on
developing adversarial techniques to decrease the predicted quality
scores of an automatic Blind Image Quality Assessment (BIQA)
model [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which effectively camouflages images from being
promoted. A key requirement of such adversaries is that the adversarial
image should retain its original quality or become more appealing
to the human eye. Conventional work on generating adversarial
images has focused on small additive perturbations, mostly
bounded by an ℓp distance [
        <xref ref-type="bibr" rid="ref16 ref2 ref3 ref9">2, 3, 9, 16</xref>
        ], or by other, more
visual-perception-aligned metrics [
        <xref ref-type="bibr" rid="ref18 ref19 ref21 ref4">4, 18, 19, 21</xref>
        ]. In this way, the adversarial image
is only designed to maintain its original appearance as much as
possible, instead of enhancing the image appeal.
      </p>
      <p>
        In contrast, recent studies [
        <xref ref-type="bibr" rid="ref1 ref13 ref14 ref17 ref20 ref6 ref7">1, 6, 7, 13, 14, 17, 20</xref>
        ] have started to
explore non-suspicious adversarial images that accommodate larger
perturbations without arousing suspicion because they transform
groups of pixels along dimensions consistent with human
interpretation of images. Among them, the Adversarial Color Enhancement
(ACE) [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] can simultaneously achieve adversarial effects and
image enhancement by optimizing a human-understandable
parametric color filter. Its effectiveness was originally validated in
the domains of image classification and segmentation.
      </p>
      <p>
        One may argue that it is easier to conduct the
optimization for adversarial effects and image enhancement separately. However, we
note that the joint optimization can yield larger perturbations that
enjoy two important practical properties: robustness against
common image processing operations and transferability to a black-box
target model [
        <xref ref-type="bibr" rid="ref1 ref17 ref20">1, 17, 20</xref>
        ]. In this paper, specifically, we will explore
the usefulness of ACE in this Pixel Privacy Task for decreasing the
BIQA score while enhancing the image appeal.
      </p>
    </sec>
    <sec id="sec-2">
      <title>APPROACH</title>
      <p>
        In this section, we first recall the general formulation of
Adversarial Color Enhancement (ACE) as proposed in [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], and then present
the modifications for applying it to our specific Pixel Privacy Task.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Parametric Image Enhancement</title>
      <p>
        Most advanced automatic photo enhancement algorithms
parameterize the image editing process with DNNs, which, however,
suffer from high computational cost and low
interpretability [
        <xref ref-type="bibr" rid="ref12 ref22 ref8">8, 12, 22</xref>
        ]. In contrast, recent work [
        <xref ref-type="bibr" rid="ref11 ref5">5, 11</xref>
        ] has proposed
to parameterize the process with human-understandable image filters.
Such methods have far fewer parameters to optimize, and can be
applied independently of the image resolution.
      </p>
      <p>
        Specifically, ACE adopts the approximation of the color filter
in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], which is formulated as a simple monotonic piecewise-linear
mapping function:

f(x) = ( Σ_{i=1}^{k−1} θ_i + (x · K − (k − 1)) · θ_k ) / θ_sum,  with  θ_sum = Σ_{j=1}^{K} θ_j,  (1)

where K denotes the total number of pieces. An input
image pixel x falling in the k-th piece is filtered using the
parameter θ_k, and f(x) is its corresponding output. In this way,
pixels with similar colors are filtered with the same parameter,
leading to a smooth color transformation. Specifically, the three RGB
channels are processed independently. An example of this function
with four pieces (K = 4) is illustrated in Fig. 1.

ACE generates non-suspicious adversarial images by iteratively
updating the parameters of the color filter defined in Eq. 1, in
contrast to conventional attacks that operate in the raw
pixel space.
      </p>
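<p>To make the mapping concrete, the following is a minimal NumPy sketch of the piecewise-linear filter in Eq. 1 (not the authors' implementation; the function and variable names are illustrative, and the indexing is 0-based):</p>

```python
import numpy as np

def color_filter(x, theta):
    """Monotonic piecewise-linear color filter (cf. Eq. 1).

    x     : intensities of one channel, scaled to [0, 1].
    theta : K non-negative piece parameters; normalizing by their
            sum keeps f(0) = 0 and f(1) = 1.
    """
    theta = np.asarray(theta, dtype=float)
    K = len(theta)
    # 0-based index of the piece each pixel falls into
    k = np.minimum((x * K).astype(int), K - 1)
    # cumulative contribution of the preceding pieces
    cum = np.concatenate(([0.0], np.cumsum(theta)))
    return (cum[k] + (x * K - k) * theta[k]) / theta.sum()

# Equal parameters (theta_k = 1/K) reproduce the identity mapping,
# i.e. the image is left unchanged before any optimization step.
x = np.linspace(0.0, 1.0, 5)
print(np.allclose(color_filter(x, np.full(4, 0.25)), x))  # True
```

A non-uniform theta then bends the per-channel curve, which is exactly the degree of freedom that ACE optimizes.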
      <p>
        There are two methods to constrain the color transformation
strength. The first method imposes adjustable bounds on the filter
parameters, formulated as:

min_θ L_adv(f_θ(X)),  s.t.  1 ≤ ‖θ / θ⁰‖_∞ ≤ ε,  (2)

where θ⁰ denotes the initial parameters, each equal to 1/K. The
adversarial loss, L_adv, adopts the specific logit loss from the
well-known C&amp;W method [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Note that this parameter bound need not be as
tight as in the ℓp-bounded methods, since the color filtering can
inherently guarantee the uniformity of the image transformation
even when the perturbations are large. This bounded variant of
ACE is referred to as ACE-PGD.
      </p>
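<p>A hedged sketch of one ACE-PGD update, assuming the bound of Eq. 2 is enforced element-wise by clipping each parameter between 1/K and ε/K (the step size, ε, and the sign-gradient update below are illustrative choices, not the paper's exact settings):</p>

```python
import numpy as np

def project_theta(theta, K, eps):
    """Keep each filter parameter within [1/K, eps/K] (cf. Eq. 2)."""
    theta0 = 1.0 / K
    return np.clip(theta, theta0, eps * theta0)

def ace_pgd_step(theta, grad, step, K, eps):
    """One projected sign-gradient descent step on the adversarial loss."""
    return project_theta(theta - step * np.sign(grad), K, eps)

K, eps = 4, 2.0
theta = np.full(K, 1.0 / K)              # start from the identity filter
grad = np.array([1.0, -1.0, 1.0, -1.0])  # toy gradient of L_adv
theta = ace_pgd_step(theta, grad, step=0.5, K=K, eps=eps)
print(theta)  # every entry stays inside [0.25, 0.5]
```

The looser eps is, the stronger (and more visible) the color transformation that the optimization may reach.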
      <p>The second method guides the transformation towards specific
appealing color styles, in addition to achieving the adversarial
effects. To this end, additional guidance from common enhancement
practices is incorporated into the adversarial optimization.
Specifically, the targeted appealing color styles are obtained by using
Instagram filters, and the optimization can be formulated as:

min_θ L_adv(f_θ(X)) + λ · ‖f_θ(X) − X_ins‖²₂,  (3)

where X_ins denotes the targeted Instagram-filtered image with a
specific color style. This variant of ACE is referred to as ACE-Ins.
One popular Instagram filter style, Nashville, is considered in our
submitted runs, and the implementation is automated using the
GIMP toolkit with the Instagram Effects Plugins
(https://www.marcocrippa.it/page/gimp_instagram.php).</p>
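<p>The joint objective of Eq. 3 can be sketched as follows (a toy illustration; the value of lam, the flattened array shapes, and the scalar adversarial loss are assumptions for the example):</p>

```python
import numpy as np

def ace_ins_objective(adv_loss, filtered, x_ins, lam):
    """Eq. 3: adversarial loss plus a squared-L2 pull towards the
    Instagram-filtered target image x_ins."""
    return adv_loss + lam * np.sum((filtered - x_ins) ** 2)

filtered = np.array([0.2, 0.4, 0.6])  # f_theta(X), flattened toy image
x_ins = np.array([0.1, 0.5, 0.6])     # Nashville-filtered target
print(ace_ins_objective(1.0, filtered, x_ins, lam=10.0))
```

A larger lam trades adversarial strength for fidelity to the target style; setting lam to zero recovers the purely adversarial objective.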
      <p>In the context of fooling BIQA, the adversarial loss L_adv is formulated as:

L_adv = max{BIQA(f_θ(X)) − τ, 0},  (4)

where the target score can be set by adjusting τ. Specifically, we
set τ a bit lower than the standard target, 50, to make sure the
adversarial effects remain after JPEG compression.</p>
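<p>The hinge form of Eq. 4 can be sketched as below; the default tau = 45 is only an illustrative stand-in for "a bit lower than 50", not the value used in our runs:</p>

```python
def quality_hinge_loss(biqa_score, tau=45.0):
    """Eq. 4: the loss is positive until the predicted BIQA score
    drops below the target tau, after which the gradient vanishes."""
    return max(biqa_score - tau, 0.0)

print(quality_hinge_loss(60.0))  # 15.0 -> keep optimizing
print(quality_hinge_loss(40.0))  # 0.0  -> target score reached
```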
    </sec>
    <sec id="sec-4">
      <title>RESULTS AND ANALYSIS</title>
      <p>In total, we submitted five runs. We tried different parameters of
ACE-PGD for the first four runs, and used ACE-Ins for the last run.</p>
      <p>As can be seen from Table 1, all five runs effectively
decrease the model accuracy to a level below 50%. Specifically, as
expected, a higher K (= 4) and a larger ε lead to stronger adversarial effects. In
addition, we find that the results before and after JPEG
compression remain similar, suggesting that our approach is stable against
compression.</p>
      <p>However, the human evaluation results on the 20 selected images
are not satisfying. This implies that the BIQA model is more stable
against the interference of smooth modifications, such as ACE,
than the classification models. Specifically, we notice that
ACE-Ins fails to drive the image into a target appealing style, since the
optimization has to focus on lowering the score. This may
be because the quality assessment model tends to rely on
high-frequency features, while the ImageNet classifier learns both
low-frequency (e.g., shape) and high-frequency (e.g., texture) features.
This makes the quality assessment model more robust against the
low-frequency perturbations produced by our ACE. We will explore this in
more depth in future work.</p>
      <p>Figure 2 visualizes successful adversarial examples with high
and low appeal. We observe that ACE can yield good image
examples with filtering-like styles, but the bad examples suffer from
over-colorization effects.</p>
    </sec>
    <sec id="sec-5">
      <title>ACKNOWLEDGMENTS</title>
      <p>This work was carried out on the Dutch national e-infrastructure
with the support of SURF Cooperative.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Anand</given-names>
            <surname>Bhattad</surname>
          </string-name>
          , Min Jin Chong, Kaizhao Liang, Bo Li, and David A Forsyth
          .
          <year>2020</year>
          .
          <article-title>Unrestricted Adversarial Examples via Semantic Manipulation</article-title>
          .
          <source>In ICLR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Nicholas</given-names>
            <surname>Carlini</surname>
          </string-name>
          and
          <string-name>
            <given-names>David</given-names>
            <surname>Wagner</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Towards evaluating the robustness of neural networks</article-title>
          .
          <source>In IEEE S&amp;P.</source>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Pin-Yu</given-names>
            <surname>Chen</surname>
          </string-name>
          , Yash Sharma, Huan Zhang, Jinfeng Yi, and
          <string-name>
            <given-names>Cho-Jui</given-names>
            <surname>Hsieh</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>EAD: elastic-net attacks to deep neural networks via adversarial examples</article-title>
          .
          <source>In AAAI.</source>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Francesco</given-names>
            <surname>Croce</surname>
          </string-name>
          and
          <string-name>
            <given-names>Matthias</given-names>
            <surname>Hein</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Sparse and Imperceivable Adversarial Attacks</article-title>
          . In ICCV.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>Yubin</given-names>
            <surname>Deng</surname>
          </string-name>
          , Chen Change Loy, and
          <string-name>
            <given-names>Xiaoou</given-names>
            <surname>Tang</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Aestheticdriven image enhancement by adversarial learning</article-title>
          .
          <source>In ACM MM.</source>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Logan</given-names>
            <surname>Engstrom</surname>
          </string-name>
          , Brandon Tran, Dimitris Tsipras, Ludwig Schmidt, and
          <string-name>
            <given-names>Aleksander</given-names>
            <surname>Madry</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Exploring the Landscape of Spatial Robustness</article-title>
          . In ICML.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>Kevin</given-names>
            <surname>Eykholt</surname>
          </string-name>
          , Ivan Evtimov, Earlence Fernandes,
          <string-name>
            <given-names>Bo</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Amir</given-names>
            <surname>Rahmati</surname>
          </string-name>
          , Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, and
          <string-name>
            <given-names>Dawn</given-names>
            <surname>Song</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Robust physical-world attacks on deep learning models</article-title>
          .
          <source>In CVPR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Michaël</given-names>
            <surname>Gharbi</surname>
          </string-name>
          , Jiawen Chen, Jonathan T Barron, Samuel W Hasinof, and
          <string-name>
            <given-names>Frédo</given-names>
            <surname>Durand</surname>
          </string-name>
          .
          <year>2017</year>
          .
          <article-title>Deep bilateral learning for real-time image enhancement</article-title>
          .
          <source>ACM TOG 36</source>
          ,
          <issue>4</issue>
          (
          <year>2017</year>
          ),
          <fpage>1</fpage>
          -
          <lpage>12</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Ian</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          , Jonathon Shlens, and
          <string-name>
            <given-names>Christian</given-names>
            <surname>Szegedy</surname>
          </string-name>
          .
          <year>2015</year>
          .
          <article-title>Explaining and harnessing adversarial examples</article-title>
          .
          <source>In ICLR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Vlad</given-names>
            <surname>Hosu</surname>
          </string-name>
          , Hanhe Lin, Tamas Sziranyi, and
          <string-name>
            <given-names>Dietmar</given-names>
            <surname>Saupe</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment</article-title>
          .
          <source>IEEE TIP 29</source>
          (
          <year>2020</year>
          ),
          <fpage>4041</fpage>
          -
          <lpage>4056</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Yuanming</given-names>
            <surname>Hu</surname>
          </string-name>
          , Hao He, Chenxi Xu,
          <string-name>
            <given-names>Baoyuan</given-names>
            <surname>Wang</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Stephen</given-names>
            <surname>Lin</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Exposure: A white-box photo post-processing framework</article-title>
          .
          <source>ACM Transactions on Graphics 37</source>
          ,
          <issue>2</issue>
          (
          <year>2018</year>
          ),
          <fpage>26</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Phillip</given-names>
            <surname>Isola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jun-Yan</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          Tinghui Zhou, and Alexei A Efros
          .
          <year>2017</year>
          .
          <article-title>Image-to-image translation with conditional adversarial networks</article-title>
          .
          <source>In CVPR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Ameya</given-names>
            <surname>Joshi</surname>
          </string-name>
          , Amitangshu Mukherjee, Soumik Sarkar, and
          <string-name>
            <given-names>Chinmay</given-names>
            <surname>Hegde</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers</article-title>
          . In ICCV.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Cassidy</given-names>
            <surname>Laidlaw</surname>
          </string-name>
          and
          <string-name>
            <given-names>Soheil</given-names>
            <surname>Feizi</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Functional Adversarial Attacks</article-title>
          . In NeurIPS.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Zhuoran</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Zhengyu</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Martha</given-names>
            <surname>Larson</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Laurent</given-names>
            <surname>Amsaleg</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Exploring Quality Camouflage for Social Images</article-title>
          .
          <source>In Working Notes Proceedings of the MediaEval Workshop.</source>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>Aleksander</given-names>
            <surname>Madry</surname>
          </string-name>
          , Aleksandar Makelov, Ludwig Schmidt, Dimitris Tsipras, and
          <string-name>
            <given-names>Adrian</given-names>
            <surname>Vladu</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Towards deep learning models resistant to adversarial attacks</article-title>
          .
          <source>In ICLR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>Ali Shahin</given-names>
            <surname>Shamsabadi</surname>
          </string-name>
          , Ricardo Sanchez-Matilla, and
          <string-name>
            <given-names>Andrea</given-names>
            <surname>Cavallaro</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>ColorFool: Semantic Adversarial Colorization</article-title>
          .
          <source>In CVPR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>Eric</given-names>
            <surname>Wong</surname>
          </string-name>
          , Frank Schmidt, and
          <string-name>
            <given-names>Zico</given-names>
            <surname>Kolter</surname>
          </string-name>
          .
          <year>2019</year>
          .
          <article-title>Wasserstein Adversarial Examples via Projected Sinkhorn Iterations</article-title>
          . In ICML.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Chaowei</given-names>
            <surname>Xiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Jun-Yan</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Bo</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Warren</given-names>
            <surname>He</surname>
          </string-name>
          , Mingyan Liu, and
          <string-name>
            <given-names>Dawn</given-names>
            <surname>Song</surname>
          </string-name>
          .
          <year>2018</year>
          .
          <article-title>Spatially transformed adversarial examples</article-title>
          .
          <source>In ICLR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Zhengyu</given-names>
            <surname>Zhao</surname>
          </string-name>
          , Zhuoran Liu, and
          <string-name>
            <given-names>Martha</given-names>
            <surname>Larson</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Adversarial Robustness Against Image Color Transformation within Parametric Filter Space</article-title>
          . arXiv preprint arXiv:2011.06690.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>Zhengyu</given-names>
            <surname>Zhao</surname>
          </string-name>
          , Zhuoran Liu, and
          <string-name>
            <given-names>Martha</given-names>
            <surname>Larson</surname>
          </string-name>
          .
          <year>2020</year>
          .
          <article-title>Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance</article-title>
          .
          <source>In CVPR.</source>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>Jun-Yan</given-names>
            <surname>Zhu</surname>
          </string-name>
          , Taesung Park,
          Phillip Isola, and Alexei A Efros
          .
          <year>2017</year>
          .
          <article-title>Unpaired image-to-image translation using cycle-consistent adversarial networks</article-title>
          .
          <source>In ICCV.</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>