<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Identifying Training Data &quot;Fingerprints&quot; Using Border Enhancing Image Processing Methods and Their Ensemble Notebook for the inouekokiteam Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Koki</forename><surname>Inoue</surname></persName>
							<email>inoue.koki.we@tut.jp</email>
							<affiliation key="aff0">
								<orgName type="institution">Toyohashi University of Technology</orgName>
								<address>
									<addrLine>1-1 Hibarigaoka, Tempaku-cho</addrLine>
									<postCode>441-8580</postCode>
									<settlement>Toyohashi</settlement>
									<region>Aichi</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tetsuya</forename><surname>Asakawa</surname></persName>
							<email>asakawa.tetsuya.um@tut.jp</email>
							<affiliation key="aff0">
								<orgName type="institution">Toyohashi University of Technology</orgName>
								<address>
									<addrLine>1-1 Hibarigaoka, Tempaku-cho</addrLine>
									<postCode>441-8580</postCode>
									<settlement>Toyohashi</settlement>
									<region>Aichi</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kazuki</forename><surname>Shimizu</surname></persName>
							<email>shimizu@heart-center.or.jp</email>
							<affiliation key="aff1">
								<orgName type="institution">Toyohashi Heart Center</orgName>
								<address>
									<addrLine>21-1 Gobutori, Ohyamacho</addrLine>
									<postCode>441-8071</postCode>
									<settlement>Toyohashi</settlement>
									<region>Aichi</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Kei</forename><surname>Nomura</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Toyohashi Heart Center</orgName>
								<address>
									<addrLine>21-1 Gobutori, Ohyamacho</addrLine>
									<postCode>441-8071</postCode>
									<settlement>Toyohashi</settlement>
									<region>Aichi</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Masaki</forename><surname>Aono</surname></persName>
							<email>masaki.aono.ss@tut.jp</email>
							<affiliation key="aff0">
								<orgName type="institution">Toyohashi University of Technology</orgName>
								<address>
									<addrLine>1-1 Hibarigaoka, Tempaku-cho</addrLine>
									<postCode>441-8580</postCode>
									<settlement>Toyohashi</settlement>
									<region>Aichi</region>
									<country key="JP">Japan</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Identifying Training Data &quot;Fingerprints&quot; Using Border Enhancing Image Processing Methods and Their Ensemble Notebook for the inouekokiteam Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">64C0CAA76388D0161DA030F0829E7AA1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:57+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Image Processing</term>
					<term>Integrated the Predictions</term>
					<term>Histogram Equalization</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper describes our approach to the Identify training data "fingerprints" task of ImageCLEFmedical GANs 2024. In Task 1, the goal is to detect "fingerprints" within synthetic biomedical image data and determine which real images were used in training to produce the generated images. The proposed method applies image processing as a preprocessing step and trains a pre-trained ResNet-152 model. We also integrated the predictions of the individual models. As a result, the model with histogram equalization achieved a score of 66.6%, outperforming the 66.3% of the baseline model trained without preprocessing. The model integrating the predictions achieved 63.1%.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>ImageCLEF has been held as part of CLEF since 2003, and ImageCLEF 2024 <ref type="bibr" target="#b0">[1]</ref> covers several areas, including ImageCLEFmedical GANs 2024 <ref type="bibr" target="#b1">[2]</ref>. In Task 1 (identifying training data "fingerprints"), the goal is to detect "fingerprints" within synthetic biomedical image data and determine which real images were used in training to produce the generated images. We participated in this task as the inouekokiteam. This paper describes our approach to determining which images were used to train the generative models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">ImageCLEF 2024 Dataset</head><p>This section describes the dataset for the Identify training data "fingerprints" task of ImageCLEFmedical GANs 2024 <ref type="bibr" target="#b1">[2]</ref>. This task uses two generative models. The dataset contains images used to train each model, images not used for training, and images generated by the models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Development Dataset</head><p>The data for the first generative model consists of 200 images annotated as used/not used for training and 10,000 images generated by model 1. The data for the second generative model consists of 6,000 images annotated as used/not used for training and 10,000 images generated by model 2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Test Dataset</head><p>The test dataset contains two CSV files and two folders, and does not specify which set of images was used to train each generative model. The ratio of generated to real images differs between the folders: the first folder contains 7,200 generated and 4,000 real images, while the second contains 5,000 generated and 4,000 real images.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed Method</head><p>In this section, we describe our approach to the task of identifying training data "fingerprints" in ImageCLEFmedical GANs 2024 <ref type="bibr" target="#b1">[2]</ref>. We observed that the color boundaries of the generated images are often unclear. We therefore propose a method that captures boundary sharpness by applying a set of OpenCV <ref type="bibr" target="#b2">[3]</ref> image processing functions as preprocessing for both training and prediction. The image processing methods used are listed below.</p><formula xml:id="formula_0">• Binarization • Histogram Equalization • Laplacian Process • Contrast Adjustment</formula><p>We also propose a method to integrate the predictions of the individual models into a single prediction. A total of five models are used: one trained without image processing and four trained with the image processing methods above. The integration procedure is as follows.</p><p>• Take a majority vote of the five models' predictions to form an integrated prediction.</p><p>• If the predictions of all five models do not agree, a negative result is assumed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Preprocessing by Image Processing</head><p>This section describes the image processing preprocessing performed on the development and test datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Binarization</head><p>Binarization was performed as preprocessing using OpenCV <ref type="bibr" target="#b2">[3]</ref>. Each image was loaded as grayscale and binarized with Otsu's method <ref type="bibr" target="#b3">[4]</ref>, which selects the threshold that maximizes the between-class variance.</p></div>
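Otsu's threshold selection can be sketched in pure Python (the paper uses OpenCV for this step; this stand-in is ours, for illustration only):

```python
# Illustrative pure-Python sketch of Otsu binarization; not the
# authors' code. Finds the threshold maximizing between-class variance.

def otsu_threshold(pixels):
    """pixels: iterable of 8-bit grayscale values; returns threshold t."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = 0      # pixel count of the background class
    sum0 = 0    # intensity sum of the background class
    for t in range(256):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def binarize(pixels):
    """Map pixels above the Otsu threshold to 255 and the rest to 0."""
    t = otsu_threshold(pixels)
    return [255 if v > t else 0 for v in pixels]
```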
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Histogram Equalization</head><p>Histogram equalization was performed as preprocessing using OpenCV <ref type="bibr" target="#b2">[3]</ref>. Each image was loaded as grayscale and equalized: the pixel intensities are remapped so that their histogram becomes approximately uniform, which stretches the contrast.</p></div>
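The remapping can be sketched in pure Python via the cumulative distribution function of the intensity histogram (the paper uses OpenCV's implementation; this sketch is ours):

```python
# Illustrative pure-Python sketch of histogram equalization; not the
# authors' code. Maps intensities through the normalized CDF.

def equalize_hist(pixels):
    """pixels: iterable of 8-bit grayscale values; returns equalized list."""
    hist = [0] * 256
    for v in pixels:
        hist[v] += 1
    # Cumulative distribution over the 256 intensity bins.
    cdf, running = [], 0
    for h in hist:
        running += h
        cdf.append(running)
    cdf_min = next(c for c in cdf if c > 0)  # first occupied bin
    n = len(pixels)
    if n == cdf_min:          # flat image: nothing to spread out
        return list(pixels)
    lut = [round((c - cdf_min) / (n - cdf_min) * 255) for c in cdf]
    return [lut[v] for v in pixels]
```

Intensities clustered in a narrow band get spread across the full 0-255 range, which is what makes faint boundaries easier to detect.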
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Laplacian Process</head><p>Laplacian processing was performed as preprocessing using OpenCV <ref type="bibr" target="#b2">[3]</ref>. Each image was loaded as grayscale and filtered with a Laplacian filter, which detects edges where pixel values change sharply.</p></div>
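The standard 3x3 Laplacian kernel [[0,1,0],[1,-4,1],[0,1,0]] can be sketched as follows (the paper uses OpenCV's filter; this pure-Python version is ours and, for simplicity, leaves border pixels at zero):

```python
# Illustrative pure-Python sketch of a 3x3 Laplacian filter; not the
# authors' code. The response is zero on flat regions and large in
# magnitude where pixel values change sharply (edges).

def laplacian(img):
    """img: 2-D list of grayscale values; returns the filter response."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (img[y - 1][x] + img[y + 1][x]
                         + img[y][x - 1] + img[y][x + 1]
                         - 4 * img[y][x])
    return out
```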
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Contrast Adjustment</head><p>Contrast adjustment was performed as preprocessing using OpenCV <ref type="bibr" target="#b2">[3]</ref>. Each image was loaded as grayscale and its contrast adjusted with 𝛼 = 1.5 and 𝛽 = 0, where v′ is the output pixel value and v is the input pixel value.</p><formula xml:id="formula_1">v′ = 𝛼 × v + 𝛽<label>(1)</label></formula></div>
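Equation (1) with clipping to the 8-bit range can be sketched as (the paper performs this with OpenCV; this one-liner is ours):

```python
# Illustrative sketch of the linear contrast adjustment v' = a*v + b
# from Eq. (1), clipped to the valid 8-bit range [0, 255].

def adjust_contrast(pixels, alpha=1.5, beta=0):
    """pixels: iterable of 8-bit grayscale values; returns adjusted list."""
    return [min(255, max(0, round(alpha * v + beta))) for v in pixels]
```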
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Training</head><p>In this section, we describe the training of the models. A pre-trained ResNet-152 <ref type="bibr" target="#b4">[5]</ref> was used. As generated data, we used 3,100 images each from generated_1 and generated_2 in the development dataset, for a total of 6,200 images; as real data, we used all images from not_used_1, used_1, not_used_2, and used_2, also 6,200 images in total. In addition to the image-processing preprocessing, random horizontal flipping was applied to the training images.</p></div>
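The random horizontal flip augmentation can be sketched in pure Python (deep-learning libraries such as torchvision provide this as a built-in transform; this stand-in over a 2-D pixel grid is ours):

```python
import random

# Illustrative sketch of random horizontal flipping; not the authors'
# code. Each image is mirrored left-to-right with probability p.

def random_hflip(img, p=0.5, rng=random):
    """img: 2-D list of pixel values; returns flipped or unchanged image."""
    if rng.random() < p:
        return [row[::-1] for row in img]
    return img
```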
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Prediction</head><p>In this section, we describe prediction with the models described in the previous section and the integration of their results. The test dataset is preprocessed with the image processing method corresponding to the model used for prediction. A total of five models make predictions: one trained without image processing and four trained with the different image processing methods.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Model Predictions</head><p>The prediction procedure for each model is as follows; the detailed flow is shown in Figure <ref type="figure" target="#fig_0">1</ref>. For the model trained without image processing, no image processing is applied to the test dataset. For each model trained with image processing, the same image processing is applied to the test dataset before prediction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Integration of Prediction</head><p>We describe the integration of the predictions of the five models: one with no image processing applied to the test dataset and four with the different image processing methods. For integration, we used two rules, majority voting and perfect agreement. The integration flow is shown in Figure <ref type="figure" target="#fig_1">2</ref>.</p><p>For perfect agreement, a result was accepted only when the predictions of all five models agreed, and rejected otherwise.</p></div>
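The two integration rules can be sketched as follows, assuming each model outputs 1 ("used in training") or 0 per image (the function names are ours, not from the paper's code):

```python
# Illustrative sketches of the two integration rules described above.

def majority_vote(preds):
    """preds: list of five 0/1 predictions for one image.
    Returns 1 when at least three models predict 1."""
    return 1 if sum(preds) >= 3 else 0

def perfect_agreement(preds):
    """Accept the shared prediction only when all five models agree;
    otherwise assume a negative (0) result."""
    return preds[0] if len(set(preds)) == 1 else 0
```

Note that majority voting always emits a label, while perfect agreement defaults to negative on any disagreement, which can discard correct positive predictions.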
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Submission Results</head><p>In this section, we describe the results of our team's submissions, summarized in Table 1. The submissions included the predictions of each of the five models (Run IDs: 891, 892, 894, 895, 896) and the integrated predictions (Run IDs: 301, 890).</p><p>The model without preprocessing (Run ID: 896) scored 66.3%. The highest score, 66.6%, was obtained by the model with histogram equalization (Run ID: 892). No score was returned for majority voting (Run ID: 301), one of the proposed methods; we believe this is because it produced the same prediction for all test data. Perfect agreement (Run ID: 890) scored 63.1%.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Discussion</head><p>In this section, we discuss the submitted results. The models with histogram equalization and Laplacian processing matched or outperformed the baseline model with no preprocessing (Run ID: 896); the other preprocessed models underperformed it. This suggests that histogram equalization is an effective image processing method for detecting "fingerprints" in synthetic biomedical image data. Prediction integration with perfect agreement did not exceed the baseline. One possible reason is that histogram equalization was effective while the other image processing methods were not; it is also possible that rejecting every image on which the models disagreed discarded accurate predictions. No results were returned for majority voting; a possible reason is that the submission was not accepted because it assigned the same prediction to all images.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">Conclusion</head><p>This paper described our approach to the task of identifying training data "fingerprints" in ImageCLEFmedical GANs 2024 <ref type="bibr" target="#b1">[2]</ref>. We applied image processing as a preprocessing step and performed training and prediction. We also made predictions with each model and integrated the predictions using majority voting and perfect agreement. The results showed that only the models with histogram equalization and Laplacian processing matched or exceeded the 66.3% baseline of the model without image processing. Both prediction integrations failed to outperform the baseline.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Flow of model Predictions</figDesc><graphic coords="3,72.00,65.61,451.14,75.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Integration of the five models' predictions</figDesc><graphic coords="4,72.00,65.61,450.91,279.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Submission Results. Score is the overall submission score; Acc, Prec, Recall, and F1 are reported separately for generative models M1 and M2.</figDesc><table><row><cell>Run ID</cell><cell>Method name</cell><cell>Score</cell><cell>M1 Acc</cell><cell>M1 Prec</cell><cell>M1 Recall</cell><cell>M1 F1</cell><cell>M2 Acc</cell><cell>M2 Prec</cell><cell>M2 Recall</cell><cell>M2 F1</cell></row><row><cell>896</cell><cell>Non-Preprocessed</cell><cell>0.663</cell><cell>0.495</cell><cell>0.497</cell><cell>0.987</cell><cell>0.661</cell><cell>0.5</cell><cell>0.5</cell><cell>0.996</cell><cell>0.66</cell></row><row><cell>895</cell><cell>Binarization</cell><cell>0.638</cell><cell>0.484</cell><cell>0.49</cell><cell>0.838</cell><cell>0.619</cell><cell>0.503</cell><cell>0.501</cell><cell>0.951</cell><cell>0.656</cell></row><row><cell>894</cell><cell>Contrast Adjustment</cell><cell>0.660</cell><cell>0.491</cell><cell>0.495</cell><cell>0.973</cell><cell>0.656</cell><cell>0.499</cell><cell>0.499</cell><cell>0.993</cell><cell>0.664</cell></row><row><cell>892</cell><cell>Histogram Equalization</cell><cell>0.666</cell><cell>0.499</cell><cell>0.499</cell><cell>0.998</cell><cell>0.665</cell><cell>0.501</cell><cell>0.5</cell><cell>0.999</cell><cell>0.667</cell></row><row><cell>891</cell><cell>Laplacian Process</cell><cell>0.663</cell><cell>0.484</cell><cell>0.49</cell><cell>0.838</cell><cell>0.619</cell><cell>0.503</cell><cell>0.501</cell><cell>0.951</cell><cell>0.656</cell></row><row><cell>301</cell><cell>Majority Voting</cell><cell>None</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>890</cell><cell>Perfect Agreement</cell><cell>0.631</cell><cell>0.473</cell><cell>0.484</cell><cell>0.805</cell><cell>0.604</cell><cell>0.508</cell><cell>0.504</cell><cell>0.945</cell><cell>0.657</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="10.">Acknowledgments</head><p>A part of this research was carried out with the support of the Grant for Toyohashi Heart Center Smart Hospital Joint Research Course and the Grant-in-Aid for Scientific Research (C) (issue numbers 22K12149 and 22K12040).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of ImageCLEF 2024: Multimedia retrieval in medical applications</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ionescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Drăgulinescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rückert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ben Abacha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garcıa Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bloch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Brüngel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Idrissi-Yaghir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schäfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Pakull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Damm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bracke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Andrei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Prokopchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Karpenka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radzhabov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kovalev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Macaire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schwab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lecouteux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Esperança-Rodier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yetisgen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Thambawita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Storås</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Heinrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kiesel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Stein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024)</title>
		<title level="s">Springer Lecture Notes in Computer Science LNCS</title>
		<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of 2024 ImageCLEFmedical GANs Task - Investigating Generative Models&apos; Impact on Biomedical Synthetic Images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Andrei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radzhabov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Karpenka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Prokopchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kovalev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ionescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF2024 Working Notes, CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The OpenCV Library</title>
		<author>
			<persName><forename type="first">G</forename><surname>Bradski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Dr. Dobb&apos;s Journal of Software Tools</title>
		<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A threshold selection method from gray-level histograms</title>
		<author>
			<persName><forename type="first">N</forename><surname>Otsu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TSMC.1979.4310076</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="62" to="66" />
			<date type="published" when="1979">1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Deep Residual Learning for Image Recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="770" to="778" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
