<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Zhengyu</forename><surname>Zhao</surname></persName>
							<email>z.zhao@cs.ru.nl</email>
							<affiliation key="aff0">
								<orgName type="institution">Radboud University</orgName>
								<address>
									<country key="NL">Netherlands</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fooling Blind Image Quality Assessment by Optimizing a Human-Understandable Color Filter</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">720DC0D44F558A4F593138D3DFDD38DB</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents the submission of our RU-DS team to the Pixel Privacy Task 2020. We propose to fool a blind image quality assessment model by transforming images through the optimization of a human-understandable color filter. In contrast to common work that relies on small, 𝐿𝑝-bounded additive pixel perturbations, our approach yields large yet smooth perturbations. Experimental results demonstrate that, in the specific context of this task, our approach achieves strong adversarial effects, but at the cost of image appeal.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>High-quality images shared online can be misappropriated for promotional goals. The Pixel Privacy Task <ref type="bibr" target="#b14">[15]</ref> this year focuses on developing adversarial techniques to decrease the predicted quality scores of an automatic Blind Image Quality Assessment (BIQA) model <ref type="bibr" target="#b9">[10]</ref>, which effectively camouflages images from being promoted. A key requirement of such adversaries is that the adversarial image should retain its original quality or become more appealing to the human eye. Conventional work on generating adversarial images has focused on small additive perturbations, mostly bounded by 𝐿𝑝 distance <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b15">16]</ref> or by other, more perception-aligned metrics <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b20">21]</ref>. In this way, the adversarial image is designed only to maintain its original appearance as much as possible, rather than to enhance the image appeal.</p><p>In contrast, recent studies <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b19">20]</ref> have started to explore non-suspicious adversarial images that accommodate larger perturbations without arousing suspicion, because they transform groups of pixels along dimensions consistent with human interpretation of images. 
Among them, Adversarial Color Enhancement (ACE) <ref type="bibr" target="#b19">[20]</ref> simultaneously achieves adversarial effects and image enhancement by optimizing a human-understandable parametric color filter. Its effectiveness was originally validated in the domains of image classification and segmentation.</p><p>One may argue that it would be easier to conduct the optimization for adversarial effects and for image enhancement separately. However, we note that the joint optimization can yield larger perturbations that enjoy two important practical properties: robustness against common image processing operations and transferability to a black-box target model <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b19">20]</ref>. In this paper, we specifically explore the usefulness of ACE in this Pixel Privacy Task for decreasing the BIQA score while enhancing image appeal.</p><p>Copyright 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). MediaEval'20, December 14-15 2020, Online </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">APPROACH</head><p>In this section, we first recall the general formulation of Adversarial Color Enhancement (ACE) as proposed by <ref type="bibr" target="#b19">[20]</ref>, and then present the modifications for applying it to our specific Pixel Privacy Task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Parametric Image Enhancement</head><p>Most advanced automatic photo enhancement algorithms parameterize the image editing process with DNNs, which suffer from high computational cost and low interpretability <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b21">22]</ref>. In contrast, recent work <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b10">11]</ref> has proposed to parameterize the process as human-understandable image filters. Such methods have far fewer parameters to optimize and can be applied independently of the image resolution.</p><p>Specifically, ACE adopts the approximation of the color filter in <ref type="bibr" target="#b10">[11]</ref>, which is formulated as a simple monotonic piecewise-linear mapping function:</p><formula xml:id="formula_0">$F_{\boldsymbol{\theta}}(x_k) = \sum_{i=1}^{k-1} \frac{\theta_i}{\theta_{\text{sum}}} + \left(K \cdot x_k - (k-1)\right) \cdot \frac{\theta_k}{\theta_{\text{sum}}}, \quad \theta_{\text{sum}} = \sum_{k=1}^{K} \theta_k,$<label>(1)</label></formula><p>where 𝐾 denotes the total number of pieces. An input image pixel 𝑥𝑘 falling in the 𝑘-th piece is filtered using the parameter 𝜃𝑘, and 𝐹𝜽(𝑥𝑘) is its corresponding output. In this way, pixels with similar colors are filtered with the same parameter, leading to a smooth color transformation. The three RGB channels are processed independently. An example of this function with four pieces (𝐾 = 4) is illustrated in Fig. <ref type="figure" target="#fig_0">1</ref>.</p><p>There are two methods to constrain the color transformation strength. The first imposes adjustable bounds on the filter parameters, formulated as:</p></div>
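To make the filter of Eq. (1) concrete, the following is a minimal NumPy sketch of the per-channel mapping. This is our own illustration, not code from [11] or [20]; function and variable names are ours, and pixel values are assumed to lie in [0, 1].

```python
import numpy as np

def color_filter(x, theta):
    """Monotonic piecewise-linear color filter of Eq. (1).

    x     : array of pixel intensities in [0, 1] (one channel).
    theta : K non-negative parameters, one slope per piece.
    """
    theta = np.asarray(theta, dtype=float)
    K = theta.size
    t = theta / theta.sum()                      # theta_k / theta_sum
    k = np.clip((x * K).astype(int), 0, K - 1)   # 0-based piece index per pixel
    cum = np.concatenate(([0.0], np.cumsum(t)))  # contribution of preceding pieces
    return cum[k] + (K * x - k) * t[k]

# With the uniform initialization theta_0 = 1_K / K, the filter is the identity map.
x = np.array([0.0, 0.25, 0.6, 1.0])
assert np.allclose(color_filter(x, np.ones(4) / 4), x)
```

Because the parameters are normalized by 𝜃_sum, the mapping always spans [0, 1] and stays monotonically non-decreasing as long as all 𝜃𝑘 are non-negative, which is what keeps the transformation smooth regardless of perturbation size.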
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Adversarial Color Enhancement</head><formula xml:id="formula_1">$\min_{\boldsymbol{\theta}} L_{adv}(F_{\boldsymbol{\theta}}(\boldsymbol{x})), \quad \text{s.t.} \quad 1 \le \left\| \boldsymbol{\theta} / \boldsymbol{\theta}_0 \right\|_{\infty} \le \epsilon,$<label>(2)</label></formula><p>where 𝜽0 denotes the initial parameters, equal to $\mathbf{1}_K / K$. The adversarial loss, 𝐿𝑎𝑑𝑣, adopts the logit loss from the well-known C&amp;W method <ref type="bibr" target="#b1">[2]</ref>. Note that this parameter bound need not be as tight as in the 𝐿𝑝 methods, since the color filtering inherently guarantees the uniformity of the image transformation even when the perturbations are large. This bounded variant of ACE is referred to as ACE-PGD.</p><p>The second method guides the transformation towards specific appealing color styles, in addition to achieving the adversarial effects. To this end, additional guidance from common enhancement practices is incorporated into the adversarial optimization. Specifically, the targeted appealing color styles are obtained using Instagram filters, and the optimization can be formulated as:</p><formula xml:id="formula_2">$\min_{\boldsymbol{\theta}} L_{adv}(F_{\boldsymbol{\theta}}(\boldsymbol{x})) + \lambda \cdot \left\| F_{\boldsymbol{\theta}}(\boldsymbol{x}) - \boldsymbol{x}_{ins} \right\|_2^2,$<label>(3)</label></formula><p>where 𝒙ins denotes the target Instagram-filtered image with a specific color style. This variant of ACE is referred to as ACE-Ins. One popular Instagram filter style, Nashville, is considered in our submitted runs, and the implementation is automated using the GIMP toolkit with the Instagram Effects Plugins 1 .</p><p>In the context of fooling BIQA, the 𝐿𝑎𝑑𝑣 is formulated as:</p><formula xml:id="formula_3">$L_{adv} = \max\{\mathrm{BIQA}(F_{\boldsymbol{\theta}}(\boldsymbol{x})) - C,\ 0\},$<label>(4)</label></formula><p>where the target score can be set by adjusting 𝐶. Specifically, we set 𝐶 slightly below the standard target of 50 to ensure that the adversarial effects remain after JPEG compression.</p></div>
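The optimization of Eqs. (2) and (4) can be sketched as a simple projected loop over the filter parameters. The sketch below is our own simplified, hypothetical re-implementation, not the authors' code: the actual attack backpropagates through a differentiable BIQA network, whereas here the model is treated as a black-box `score_fn` (a name we introduce) and gradients are estimated by finite differences; the projection clips each 𝜃𝑘 into [𝜃0,𝑘/𝜖, 𝜖·𝜃0,𝑘], one simple reading of the bound in Eq. (2).

```python
import numpy as np

def apply_filter(x, theta):
    # piecewise-linear color filter of Eq. (1); pixel values in [0, 1]
    t = theta / theta.sum()
    K = t.size
    k = np.clip((x * K).astype(int), 0, K - 1)
    cum = np.concatenate(([0.0], np.cumsum(t)))
    return cum[k] + (K * x - k) * t[k]

def l_adv(x, theta, score_fn, C):
    # Eq. (4): hinge loss on the predicted quality score
    return max(score_fn(apply_filter(x, theta)) - C, 0.0)

def ace_pgd(x, score_fn, K=8, eps=2.0, iters=30, lr=0.05, C=45.0):
    theta0 = np.ones(K) / K                  # initial parameters 1_K / K
    theta = theta0.copy()
    for _ in range(iters):
        # black-box finite-difference estimate of the gradient w.r.t. theta
        grad = np.zeros(K)
        for i in range(K):
            d = np.zeros(K)
            d[i] = 1e-4
            grad[i] = (l_adv(x, theta + d, score_fn, C)
                       - l_adv(x, theta - d, score_fn, C)) / 2e-4
        theta = theta - lr * grad
        # simplified projection of the per-parameter ratio bound of Eq. (2)
        theta = np.clip(theta, theta0 / eps, theta0 * eps)
    return apply_filter(x, theta), theta
```

As a toy stand-in for the BIQA model one can pass, e.g., `score_fn = lambda im: 100.0 * im.mean()`: the loop then learns a filter that redistributes the slopes to lower the score until the hinge in Eq. (4) deactivates or the parameter bound saturates. The ACE-Ins variant of Eq. (3) would simply add `lam * ((apply_filter(x, theta) - x_ins) ** 2).sum()` to `l_adv`.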
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">RESULTS AND ANALYSIS</head><p>In total, we submitted five runs. We tried different parameters of ACE-PGD for the first four runs and used ACE-Ins for the last run.</p><p>As can be seen from Table <ref type="table" target="#tab_1">2</ref>, all five runs effectively decrease the model accuracy to a level below 50%. Specifically, as expected, a higher 𝜖 leads to stronger adversarial effects. In addition, we find that the results before and after JPEG compression remain similar, suggesting that our approach is stable against compression. However, the human evaluation results on the 20 selected images are not satisfactory, implying that the BIQA model is more stable against the interference of smooth modifications, such as ACE, than the classification models. Specifically, we notice that ACE-Ins fails to drive the image into a target appealing style, since the optimization has to focus on lowering the score. This may be because the quality assessment model tends to rely on high-frequency features, whereas the ImageNet classifier learns both low-frequency (e.g., shape) and high-frequency (e.g., texture) features. This makes the quality assessment model more robust against the low-frequency perturbations of our ACE. We will explore this in more depth in future work. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A 4-piece color filter in ACE (from [20]).</figDesc><graphic coords="1,341.98,173.19,192.20,160.91" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Run 1: ACE-PGD, 𝐾 = 64, 𝜖 = 16, iters. = 20. Run 2: ACE-PGD, 𝐾 = 64, 𝜖 = 32, iters. = 20. Run 3: ACE-PGD, 𝐾 = 256, 𝜖 = 16, iters. = 20. Run 4: ACE-PGD, 𝐾 = 256, 𝜖 = 64, iters. = 20. Run 5: ACE-Ins, 𝐾 = 64, 𝜆 = 0.01, iters. = 100. 2.2 Adversarial Color Enhancement: ACE generates non-suspicious adversarial images by iteratively updating the parameters of the color filter defined in Eq. 1, in contrast to conventional attacks that operate in the raw pixel space.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>1 https://www.marcocrippa.it/page/gimp_instagram.php.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Adversarial images achieved by our approach with the original and decreased scores. The top row shows the examples with relatively high appeal and the bottom row shows the failed examples with low appeal.</figDesc><graphic coords="2,317.96,251.58,240.25,129.24" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 2</head><label>2</label><figDesc>visualizes successful adversarial examples with high and low appeal. We observe that ACE can yield good examples with filter-like styles, but the bad examples suffer from over-colorization effects.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Detailed settings of our five runs.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Evaluation results of our five runs. The accuracy (%) is calculated over all the 550 test images, which are compressed with JPEG 90 before evaluation. The number of times selected as "Top-3" most appealing among the total 13 qualified runs is evaluated by user study with 7 people on 20 representative images that have the largest BIQA score variance. The maximum number is 140.</figDesc><table><row><cell>Runs</cell><cell>1</cell><cell>2</cell><cell>3</cell><cell>4</cell><cell>5</cell></row><row><cell cols="6">Acc before JPEG 48.00 33.27 50.00 21.82 35.09</cell></row><row><cell>Acc after JPEG</cell><cell cols="5">45.27 33.45 47.45 22.55 44.91</cell></row><row><cell>Number of Top-3</cell><cell>2</cell><cell>7</cell><cell>6</cell><cell>4</cell><cell>7</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>This work was carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Unrestricted Adversarial Examples via Semantic Manipulation</title>
		<author>
			<persName><forename type="first">Anand</forename><surname>Bhattad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Min Jin</forename><surname>Chong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kaizhao</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bo</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><forename type="middle">A</forename><surname>Forsyth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards evaluating the robustness of neural networks</title>
		<author>
			<persName><forename type="first">Nicholas</forename><surname>Carlini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Wagner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE S&amp;P</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">EAD: elastic-net attacks to deep neural networks via adversarial examples</title>
		<author>
			<persName><forename type="first">Pin-Yu</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yash</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Huan</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jinfeng</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cho-Jui</forename><surname>Hsieh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Sparse and Imperceivable Adversarial Attacks</title>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Croce</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Matthias</forename><surname>Hein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCV</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Aestheticdriven image enhancement by adversarial learning</title>
		<author>
			<persName><forename type="first">Yubin</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chen</forename><forename type="middle">Change</forename><surname>Loy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiaoou</forename><surname>Tang</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>ACM MM</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Exploring the Landscape of Spatial Robustness</title>
		<author>
			<persName><forename type="first">Logan</forename><surname>Engstrom</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Brandon</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dimitris</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ludwig</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aleksander</forename><surname>Madry</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>ICML</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Robust physical-world attacks on deep learning models</title>
		<author>
			<persName><forename type="first">Kevin</forename><surname>Eykholt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ivan</forename><surname>Evtimov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Earlence</forename><surname>Fernandes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bo</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amir</forename><surname>Rahmati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chaowei</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Atul</forename><surname>Prakash</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tadayoshi</forename><surname>Kohno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dawn</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Deep bilateral learning for real-time image enhancement</title>
		<author>
			<persName><forename type="first">Michaël</forename><surname>Gharbi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jiawen</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonathan</forename><forename type="middle">T</forename><surname>Barron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Samuel</forename><forename type="middle">W</forename><surname>Hasinoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frédo</forename><surname>Durand</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM TOG</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="1" to="12" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Explaining and harnessing adversarial examples</title>
		<author>
			<persName><forename type="first">Ian</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonathon</forename><surname>Shlens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">KonIQ-10k: An ecologically valid database for deep learning of blind image quality assessment</title>
		<author>
			<persName><forename type="first">Vlad</forename><surname>Hosu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hanhe</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tamas</forename><surname>Sziranyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dietmar</forename><surname>Saupe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE TIP</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="4041" to="4056" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Exposure: A white-box photo post-processing framework</title>
		<author>
			<persName><forename type="first">Yuanming</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hao</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chenxi</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Baoyuan</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stephen</forename><surname>Lin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Graphics</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page">26</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Image-to-image translation with conditional adversarial networks</title>
		<author>
			<persName><forename type="first">Phillip</forename><surname>Isola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun-Yan</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tinghui</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexei</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers</title>
		<author>
			<persName><forename type="first">Ameya</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amitangshu</forename><surname>Mukherjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Soumik</forename><surname>Sarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chinmay</forename><surname>Hegde</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCV</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Functional Adversarial Attacks</title>
		<author>
			<persName><forename type="first">Cassidy</forename><surname>Laidlaw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Soheil</forename><surname>Feizi</surname></persName>
		</author>
		<editor>NeurIPS</editor>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Exploring Quality Camouflage for Social Images</title>
		<author>
			<persName><forename type="first">Zhuoran</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhengyu</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martha</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Laurent</forename><surname>Amsaleg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes Proceedings of the MediaEval Workshop</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Towards deep learning models resistant to adversarial attacks</title>
		<author>
			<persName><forename type="first">Aleksander</forename><surname>Madry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aleksandar</forename><surname>Makelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ludwig</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dimitris</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adrian</forename><surname>Vladu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">ColorFool: Semantic Adversarial Colorization</title>
		<author>
			<persName><forename type="first">Ali Shahin</forename><surname>Shamsabadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ricardo</forename><surname>Sanchez-Matilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andrea</forename><surname>Cavallaro</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Wasserstein Adversarial Examples via Projected Sinkhorn Iterations</title>
		<author>
			<persName><forename type="first">Eric</forename><surname>Wong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Frank</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zico</forename><surname>Kolter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICML</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Spatially transformed adversarial examples</title>
		<author>
			<persName><forename type="first">Chaowei</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jun-Yan</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Bo</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Warren</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mingyan</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dawn</forename><surname>Song</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Adversarial Robustness Against Image Color Transformation within Parametric Filter Space</title>
		<author>
			<persName><forename type="first">Zhengyu</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhuoran</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martha</forename><surname>Larson</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2011.06690</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Towards Large yet Imperceptible Adversarial Image Perturbations with Perceptual Color Distance</title>
		<author>
			<persName><forename type="first">Zhengyu</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhuoran</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martha</forename><surname>Larson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Unpaired image-to-image translation using cycle-consistent adversarial networks</title>
		<author>
			<persName><forename type="first">Jun-Yan</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Taesung</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Phillip</forename><surname>Isola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexei</forename><forename type="middle">A</forename><surname>Efros</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCV</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
