<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fooling an Automatic Image Quality Estimator</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Benoit</forename><surname>Bonnet</surname></persName>
							<email>benoit.bonnet@inria.fr</email>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Univ. Rennes</orgName>
								<orgName type="institution" key="instit2">Inria</orgName>
								<orgName type="institution" key="instit3">CNRS</orgName>
								<orgName type="institution" key="instit4">IRISA Rennes</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Teddy</forename><surname>Furon</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution" key="instit1">Univ. Rennes</orgName>
								<orgName type="institution" key="instit2">Inria</orgName>
								<orgName type="institution" key="instit3">CNRS</orgName>
								<orgName type="institution" key="instit4">IRISA Rennes</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Bas</surname></persName>
							<affiliation key="aff1">
								<orgName type="laboratory">UMR 9189</orgName>
								<orgName type="institution" key="instit1">Univ. Lille</orgName>
								<orgName type="institution" key="instit2">CNRS</orgName>
								<orgName type="institution" key="instit3">Centrale Lille</orgName>
								<orgName type="institution" key="instit4">CRIStAL</orgName>
								<address>
									<settlement>Lille</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Fooling an Automatic Image Quality Estimator</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">64721DE2CEAE7EB5A0793791F242C108</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents our work on the 2020 MediaEval task "Pixel Privacy: Quality Camouflage for Social Images". Blind Image Quality Assessment (BIQA) is an algorithm predicting a quality score for any given image. Our task is to modify an image so as to decrease its BIQA score while maintaining a good perceived quality. Since BIQA is a deep neural network, we adopted an adversarial attack approach to the problem.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>The internet is flooded with images, especially with the growth of social networks over the last decade. All this data is used to perform analyses that bring out new trends or to train predictive models. When it comes to images, deep neural networks dominate the landscape of machine learning. These networks are known to thrive on big datasets, which suggests that more data leads to better models. While there is certainly truth to that claim, better learning mostly comes from better data. Good data both fits the task (e.g. detection of people, places, or objects) and is of good quality. Given the amount of available data, a human could not perform this cherry-picking. Automated classifiers like BIQA <ref type="bibr" target="#b3">[4]</ref> have therefore been trained to assess the quality of an image, using images labeled according to their perceived quality (e.g. resolution, compression artifacts). To protect one's data, images can be slightly modified to defeat this automatic quality assessment <ref type="bibr" target="#b5">[6]</ref>. We chose an adversarial attack approach to achieve this goal.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">APPROACH</head><head n="2.1">Adversarial Examples</head><p>Adversarial examples were first introduced by Szegedy et al. <ref type="bibr" target="#b7">[8]</ref> in early 2014. They are usually studied in the case of image classification: an attack crafts a perturbation of an image that is small yet sufficient to fool even the best classifiers.</p><p>In this setup, an original image 𝑥 0 is given as input to the trained neural network, which estimates the probabilities ( p𝑘 (𝑥 0 )) 𝑘 of belonging to class 𝑘 ∈ {1, . . . , 𝐾 }. The predicted class is given by:</p><formula xml:id="formula_0">ĉ (𝑥 0 ) = arg max 𝑘 p𝑘 (𝑥 0 ).<label>(1)</label></formula><p>The classification is correct if ĉ (𝑥 0 ) = 𝑐 (𝑥 0 ), the ground-truth class of 𝑥 0 . The goal of an attack is to craft an imperceptible perturbation 𝑝 such that the adversarial sample 𝑥 𝑎 = 𝑥 0 + 𝑝 ideally verifies:</p><formula xml:id="formula_1">𝑥 ★ 𝑎 = arg min 𝑥: ĉ (𝑥)≠𝑐 (𝑥 0 ) ∥𝑥 − 𝑥 0 ∥,<label>(2)</label></formula><p>where ∥ • ∥ is a measure of distortion, in most cases the Euclidean distance. A small distortion makes it less likely for a human to perceive that the image was manipulated.</p><p>BIQA is a deep neural network and as such is vulnerable to adversarial attacks. However, BIQA is not a classifier returning a class prediction but a regressor giving a quality score 𝐵𝐼𝑄𝐴(𝑥) ∈ [0, 100]. The notion of adversarial sample thus needs to be redefined. In our case, we set a target score 𝑠 𝑎 ∈ [0, 100]. Regardless of the original score 𝐵𝐼𝑄𝐴(𝑥 0 ), our adversarial sample now ideally verifies:</p><formula xml:id="formula_2">𝑥 ★ 𝑎 = arg min 𝑥:𝐵𝐼𝑄𝐴(𝑥) &lt;𝑠 𝑎 ∥𝑥 − 𝑥 0 ∥.<label>(3)</label></formula><p>Copyright 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). MediaEval'20, December 14-15 2020, Online</p></div>
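Since BIQA is differentiable, objective (3) can be approached with plain gradient descent on the predicted score. Below is a minimal sketch with a toy linear score function standing in for the real BIQA network; all names (attack_score, score, grad, w) are illustrative, not the paper's code:

```python
import numpy as np

def attack_score(x0, score_fn, grad_fn, s_target, step=0.01, max_iter=1000):
    # Gradient-descent sketch of Eq. (3): push the predicted score below
    # s_target with a sequence of small normalized steps.
    x = x0.copy()
    for _ in range(max_iter):
        if score_fn(x) + 1e-9 > s_target:
            g = grad_fn(x)                        # gradient of the score w.r.t. x
            x = x - step * g / np.linalg.norm(g)  # small step lowering the score
        else:
            break
    return x

# Toy stand-in for the BIQA regressor: a hypothetical linear "quality" score.
w = np.ones(4)
score = lambda x: 50.0 + 10.0 * float(w @ x)
grad = lambda x: 10.0 * w

x0 = np.zeros(4)
xa = attack_score(x0, score, grad, s_target=30.0)
```

The real attack additionally minimizes the distortion, which is what the budget search in Sect. 2.2 handles.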
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Quantization</head><p>An original image 𝑥 0 in the spatial domain (e.g. PNG format) is a 3-dimensional discrete tensor: 𝑥 0 ∈ {0, 1, . . . , 255} 𝑛 (with 𝑛 = 3 × 𝑅 × 𝐶: 3 color channels, 𝑅 rows and 𝐶 columns of pixels). The main objective of this task is to craft images 𝑥 𝑎 ∈ {0, 1, . . . , 255} 𝑛 . This additional constraint on the attack is, however, not easy to enforce. In a deep neural network, the input image is first preprocessed, i.e. mapped to a range that usually reduces the variance of the data. The purpose of this step is to ease the learning phase and thus to increase the performance of the network. The preprocessing is fixed by design before the training stage and cannot be modified at test time. In the case of BIQA, the range is [−0.5, 0.5] 𝑛 .</p><p>Most attacks in the literature are performed in this domain without consideration of the transformation it represents. This leads to an adversarial sample 𝑥 𝑎 ∈ [0, 255] 𝑛 after reverting the preprocessing. To save this adversarial sample 𝑥 𝑎 as an image, the first step is to round it, which erases most of the perturbation in the case of a low-distortion attack. Rounding is therefore likely to remove the adversarial property of the sample.</p><p>Paper <ref type="bibr" target="#b0">[1]</ref> addresses this problem by presenting a post-processing step added on top of any attack to efficiently quantize a perturbation: it keeps the adversarial property while lowering the added distortion. The method is based on a classification loss ensuring adversariality, defined as follows:</p><formula xml:id="formula_3">𝐿(𝑥) = log( p𝑐 (𝑥 0 ) (𝑥)) − log( p ĉ (𝑥) (𝑥)).<label>(4)</label></formula><p>To adapt this method to the context of BIQA, we only need to redefine it as:</p><formula xml:id="formula_5">𝐿(𝑥) = 𝐵𝐼𝑄𝐴(𝑥) − 𝑠 𝑎 .<label>(5)</label></formula><p>For a given 𝑥, 𝐿(𝑥) &lt; 0 ensures that 𝑥 scores under the target 𝑠 𝑎 .</p><p>In this task, we know the classifier (BIQA) and its parameters: we are therefore in a white-box setup. Most modern attacks are developed in this scenario, from the most basic FGSM <ref type="bibr" target="#b2">[3]</ref> and IFGSM <ref type="bibr" target="#b4">[5]</ref> to the most advanced PGD <ref type="bibr" target="#b6">[7]</ref>, C&amp;W <ref type="bibr" target="#b1">[2]</ref>, and BP <ref type="bibr" target="#b9">[10]</ref>. FGSM is a non-iterative attack providing a fast solution to the problem. We used it in the early stages of our work as a proof of concept to quickly gain a further understanding of the problem, but artifacts were visible. All the results reported here are instead crafted with the more advanced PGD attack <ref type="bibr" target="#b6">[7]</ref> in its 𝐿 2 optimization version. One input parameter is the distortion budget. We run the attack over 7 iterations with different distortion budgets (whose maximum value is set to 2000). A binary search quickly finds an adversarial sample with the lowest distortion.</p></div>
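The binary search over distortion budgets described above can be sketched as follows; attack and is_adversarial are hypothetical stand-ins for the PGD attack run at a given budget and for the test that the BIQA score falls below the target:

```python
def search_budget(attack, is_adversarial, lo=0.0, hi=2000.0, iters=7):
    # Binary search over the distortion budget: keep the smallest budget
    # whose attack output is still adversarial (a sketch, not the exact code).
    best = None
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        candidate = attack(mid)
        if is_adversarial(candidate):
            best, hi = candidate, mid   # success: try a smaller distortion
        else:
            lo = mid                    # failure: allow more distortion
    return best

# Toy usage: an "attack" that returns its budget, adversarial once big enough.
found = search_budget(attack=lambda b: b, is_adversarial=lambda x: x >= 100.0)
```

With 7 iterations the budget interval shrinks by a factor 2^7, which is why so few attack runs suffice.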
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">JPEG compression</head><p>The final images will be evaluated on their JPEG <ref type="bibr" target="#b8">[9]</ref> counterparts. This compression is done with a quality factor of 90. However, there are many image compression programs, each producing slightly different results. We used the command line tool $ convert to simulate this compression. Tables <ref type="table" target="#tab_1">1 and 2</ref> show, for different methods, 𝑃 𝑃 𝑁𝐺 and 𝑃 𝐽 𝑃𝐸𝐺 , the percentages of images successfully beating the target score in the PNG domain and in the JPEG domain respectively. Table <ref type="table" target="#tab_1">2</ref> additionally shows the results of the jury.</p></div>
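As a rough stand-in for the $ convert call, a quality-factor-90 compression can also be simulated with Pillow (an assumption on our side: the paper used an external tool, and different JPEG encoders may produce slightly different results):

```python
from PIL import Image

# Hypothetical sketch: simulate the contest's JPEG compression (QF=90)
# with Pillow instead of the command line tool used in the paper.
img = Image.new("RGB", (64, 64), color=(120, 60, 200))  # stand-in image
img.save("sample_qf90.jpg", quality=90)                 # JPEG, quality factor 90

reloaded = Image.open("sample_qf90.jpg")
```

An adversarial image should be scored after this round trip, since that is what the jury evaluates.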
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Quantization</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1">Spatial domain.</head><p>The work <ref type="bibr" target="#b0">[1]</ref> serves as a baseline for quantization; we only slightly adapt it as stated in Sect. 2.2. Table <ref type="table" target="#tab_0">1</ref> reports our results for two target scores: 𝑠 𝑎 = 30 and 𝑠 𝑎 = 50. It appears that a perturbation crafted in the pixel domain is fragile when facing JPEG compression.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2">DCT domain.</head><p>Since the final image is evaluated after a JPEG compression, we explore a method adapting the quantization <ref type="bibr" target="#b0">[1]</ref> to the DCT domain. Using the same notations as <ref type="bibr" target="#b0">[1]</ref>: let 𝑋 𝑜 denote the image in the DCT domain, 𝑋 𝑎 = 𝑋 𝑜 + 𝑃 the result of an initial attack like PGD, and 𝑋 𝑞 = 𝑋 𝑜 + 𝑃 + 𝑄 the final quantized transformed coefficients. We solve a Lagrangian formulation:</p><formula xml:id="formula_7">𝑋 𝑞 = 𝑋 𝑜 + 𝑃 + arg min 𝑄 𝐷 (𝑄) + 𝜆𝐿(𝑄),<label>(6)</label></formula><p>where 𝜆 is the Lagrangian multiplier controlling the tradeoff between the distortion 𝐷 (𝑄) and the loss 𝐿(𝑄) defined in (5). The distortion 𝐷 (𝑄) is defined as the squared 𝐿 2 norm of the added perturbation:</p><formula xml:id="formula_8">𝐷 (𝑄) = ∥Δ × (𝑃 + 𝑄)∥ 2 .</formula><p>The quantization noise 𝑄 is such that 𝑋 𝑜 + 𝑃 + 𝑄 ∈ ΔZ 𝑛 , where Δ ∈ N 𝑛 is the quantization step matrix for JPEG QF=90. Using a first-order approximation of 𝐿(𝑄), we can expand (6) into a second-degree polynomial function. For any coefficient 𝑗, this function is locally minimized by:</p><formula xml:id="formula_9">𝑄 ★ ( 𝑗) = −𝑃 ( 𝑗) − 𝜆 𝐺 ( 𝑗) 2Δ( 𝑗) ,<label>(7)</label></formula><p>where 𝐺 = ∇𝐿(𝑄)| 𝑄=0 is the gradient computed at 𝑄 = 0. This minimum however does not enforce (𝑃 + 𝑄 ★ ) ∈ Z 𝑛 ; a simple rounding of (𝑃 + 𝑄 ★ ) then finalizes the quantization. Finally, we need to control the maximum allowed distortion: if 𝜆 gets big, 𝑄 ( 𝑗) becomes very large, which is not desirable. The final value </p></div>
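Under these definitions, the closed form (7) followed by the rounding step can be sketched in a few lines; the toy values for P, G and Delta below are illustrative, not taken from the paper:

```python
import numpy as np

def quantize_perturbation(P, G, Delta, lam):
    # Sketch of Eq. (7): per-coefficient minimizer of D(Q) + lam * L(Q)
    # under a first-order model of L, followed by the rounding that puts
    # the perturbed DCT coefficients back on the integer grid.
    Q_star = -P - lam * G / (2.0 * Delta)   # local minimizer, Eq. (7)
    return np.round(P + Q_star) - P         # enforce (P + Q) integer by rounding

# Hypothetical toy values for a few DCT coefficients.
P     = np.array([0.4, -1.3, 0.7])   # initial (real-valued) perturbation
G     = np.array([1.0,  2.0, -1.0])  # gradient of L at Q = 0
Delta = np.array([1.0,  1.0,  2.0])  # stand-in quantization steps for QF=90
Q     = quantize_perturbation(P, G, Delta, lam=2.0)
```

A larger lam biases the rounding toward lowering the loss at the cost of more distortion, which is why its maximum value must be controlled.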
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">RESULTS AND ANALYSIS</head><p>Tables <ref type="table" target="#tab_1">1 and 2</ref> show the importance of considering the JPEG compression. When the image is quantized by the 𝐿 2 optimization in the spatial domain, most images are successfully adversarial. However, very few of them remain adversarial after the JPEG compression: the BIQA score of most images increases by up to 10 points. If the quantization is done in the DCT domain, most of them remain adversarial and the task is successful. It is, however, obviously more difficult to beat a lower target score 𝑠 𝑎 . An interesting property of the DCT quantization is that it creates typical JPEG artifacts, as seen in Figure <ref type="figure">1</ref>. This is especially true in low-frequency images, since it is harder to remain undetectable in such a situation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">DISCUSSION AND OUTLOOK</head><p>The MediaEval task was a good opportunity to extend our previous work <ref type="bibr" target="#b0">[1]</ref> to 1) a regressor, BIQA, and 2) the DCT domain. Saving the DCT coefficients directly into a JPEG image is more consistent, as it offers a better control on adversariality. Another difficulty of this task was the lack of knowledge about the compression algorithm; we therefore worked in a 'gray-box' setup. The results showed that JPEG compression has a strong effect on the BIQA score of, at least, adversarial images (and probably of any other quality estimator). Our JPEG compression is presumably close to the one used in the contest, which allowed the transferability of our adversarial images.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Probabilities of success with a spatial quantization</figDesc><table><row><cell/><cell>𝑃 𝑃 𝑁𝐺</cell><cell>𝑃 𝐽 𝑃𝐸𝐺</cell></row><row><cell>𝑠 𝑎 = 30</cell><cell>99.0%</cell><cell>0.7%</cell></row><row><cell>𝑠 𝑎 = 50</cell><cell>100.0%</cell><cell>11.1%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Probabilities of success with a DCT quantization</figDesc><table><row><cell/><cell>𝑃 𝑃 𝑁𝐺</cell><cell>𝑃 𝐽 𝑃𝐸𝐺</cell><cell>Accuracy after (JPEG90)</cell><cell>Number of times selected "best"</cell></row></table><note>These images were submitted to the jury.</note></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">What If Adversarial Samples Were Digital Images?</title>
		<author>
			<persName><forename type="first">Benoît</forename><surname>Bonnet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Teddy</forename><surname>Furon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>Bas</surname></persName>
		</author>
		<idno type="DOI">10.1145/3369412.3395062</idno>
		<ptr target="https://doi.org/10.1145/3369412.3395062" />
	</analytic>
	<monogr>
		<title level="m">Proc. of ACM IH&amp;MMSec &apos;</title>
				<meeting>of ACM IH&amp;MMSec &apos;</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="55" to="66" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Towards evaluating the robustness of neural networks</title>
		<author>
			<persName><forename type="first">Nicholas</forename><surname>Carlini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Wagner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Symp. on Security and Privacy</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explaining and Harnessing Adversarial Examples</title>
		<author>
			<persName><forename type="first">Ian</forename><forename type="middle">J</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jonathon</forename><surname>Shlens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR 2015</title>
				<meeting><address><addrLine>San Diego, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">KonIQ-10k: An Ecologically Valid Database for Deep Learning of Blind Image Quality Assessment</title>
		<author>
			<persName><forename type="first">V</forename><surname>Hosu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sziranyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Saupe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="4041" to="4056" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Adversarial Machine Learning at Scale</title>
		<author>
			<persName><forename type="first">Alexey</forename><surname>Kurakin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Samy</forename><surname>Bengio</surname></persName>
		</author>
		<idno>arXiv:cs.CV/1611.01236</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Exploring Quality Camouflage for Social Images</title>
		<author>
			<persName><forename type="first">Zhuoran</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zhengyu</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martha</forename><surname>Larson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Laurent</forename><surname>Amsaleg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes Proceedings of the MediaEval Workshop</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards Deep Learning Models Resistant to Adversarial Attacks</title>
		<author>
			<persName><forename type="first">Aleksander</forename><surname>Madry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aleksandar</forename><surname>Makelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ludwig</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dimitris</forename><surname>Tsipras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adrian</forename><surname>Vladu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICLR 2018</title>
				<meeting><address><addrLine>Vancouver, BC, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Intriguing properties of neural networks</title>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wojciech</forename><surname>Zaremba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ilya</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joan</forename><surname>Bruna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dumitru</forename><surname>Erhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Goodfellow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rob</forename><surname>Fergus</surname></persName>
		</author>
		<idno>arXiv:cs.CV/1312.6199</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The JPEG still picture compression standard</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">K</forename><surname>Wallace</surname></persName>
		</author>
		<idno type="DOI">10.1109/30.125072</idno>
		<ptr target="https://doi.org/10.1109/30.125072" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Consumer Electronics</title>
		<imprint>
			<biblScope unit="volume">38</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Walking on the Edge: Fast, Low-Distortion Adversarial Examples</title>
		<author>
			<persName><forename type="first">Hanwei</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yannis</forename><surname>Avrithis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Teddy</forename><surname>Furon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Laurent</forename><surname>Amsaleg</surname></persName>
		</author>
		<idno type="DOI">10.1109/TIFS.2020.3021899</idno>
		<ptr target="https://doi.org/10.1109/TIFS.2020.3021899" />
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Forensics and Security</title>
		<imprint>
			<date type="published" when="2020-09">Sept. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
