<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">GAN-Amis: Evaluating Clustering of GAN-Generated Medical Images Using Custom and Pre-trained CNN Architectures to Identify GAN Fingerprints: Notebook for ImageCLEF Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Aman</forename><surname>Upganlawar</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Pune Institute of Computer Technology</orgName>
								<address>
									<settlement>Pune</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Aarti</forename><surname>Lad</surname></persName>
							<email>aarti.lad@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Pune Institute of Computer Technology</orgName>
								<address>
									<settlement>Pune</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Arnav</forename><surname>Desai</surname></persName>
							<email>arnavdesai235@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Pune Institute of Computer Technology</orgName>
								<address>
									<settlement>Pune</settlement>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">GAN-Amis: Evaluating Clustering of GAN-Generated Medical Images Using Custom and Pre-trained CNN Architectures to Identify GAN Fingerprints: Notebook for ImageCLEF Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">96D41056441E0D40878E17017D4B0993</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Clustering</term>
					<term>GAN Fingerprint detection</term>
					<term>Convolutional neural networks (CNNs)</term>
					<term>Generative models</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>ImageCLEF is an annual evaluation forum that addresses research tasks in image analysis and cross-language annotation. In ImageCLEF 2024, a challenging task named "Detect Generative Model's Fingerprints" was introduced, focusing on identifying unique fingerprints left by generative models on synthetic images. In this paper, we present our approach to this task, which involves exploring the hypothesis that generative models imprint distinct fingerprints on their synthetic outputs. We describe the task setup, dataset composition, and related works in detail. Our methodology involves employing various deep learning architectures, including a custom CNN architecture, EfficientNet, ResNet50, MobileNetV2, VGG19, and Xception, to extract features from synthetic images and perform clustering using the K-means algorithm. We conducted experiments on both development and test datasets, evaluating the effectiveness of different architectures in detecting model fingerprints. Our results reveal varying performance across architectures, with challenges encountered in accurately clustering synthetic images. Through this study, we contribute insights into the complexities of detecting generative model fingerprints and discuss potential avenues for improvement in future research endeavors.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>ImageCLEF is an evaluation forum organized annually that encompasses research tasks oriented towards image analysis and cross-language annotation. ImageCLEF 2024 <ref type="bibr" target="#b0">[1]</ref> focused on various challenges aimed at improving research contributions in visual analysis, annotation, classification, and retrieval tasks. Medical-based tasks have been included since the second edition of ImageCLEF under the tag ImageCLEFMedical <ref type="bibr" target="#b1">[2]</ref>, which has hosted several medical-domain tasks annually since 2004, yielding significant achievements. Among the tasks proposed for 2024, Detect Generative Model's Fingerprints is a particularly challenging one within the track.</p><p>In the healthcare domain, medical imaging plays a pivotal role in disease diagnosis and treatment planning. Lung cancer is one of the leading causes of cancer-related deaths worldwide. Computed tomography (CT) scans are widely used for lung cancer screening, diagnosis, and treatment response assessment. The application of GANs in lung CT imaging has shown promising results in various tasks, including image denoising, segmentation, and synthesis <ref type="bibr" target="#b2">[3]</ref>. However, the detection of GAN-generated fingerprints on lung CT scans remains an under-explored research area.</p><p>The detection of GAN-generated images is a challenging task due to the high quality and realistic nature of the generated images. Several methods have been proposed for detecting GAN-generated images, including the use of statistical features <ref type="bibr" target="#b3">[4]</ref>, deep learning-based approaches <ref type="bibr" target="#b4">[5]</ref>, and frequency-domain analysis <ref type="bibr" target="#b5">[6]</ref>. 
However, these methods have limitations, such as the requirement of a large number of images for training, the inability to generalize to unseen GAN architectures, and susceptibility to image compression and post-processing operations.</p><p>We have employed a CNN architecture and several other widely used classification architectures to detect complex patterns within each generated image. We then applied standard K-means clustering to the features extracted by these architectures to cluster the images in the test dataset.</p><p>In the following sections, we first describe the task and the dataset provided for ImageCLEF Medical 2024 for the Detect Generative Models' Fingerprints task in Section 2, followed by related works that discuss approaches to this task in Section 3. In Section 4, we describe the details of the methods employed, and Section 5 presents the experiments, results, and discussion. Section 6 elucidates the conclusion for this task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Task Description</head><p>The primary objective of this task is to explore the hypothesis that generative models imprint unique fingerprints on the synthetic images they produce. This investigation focuses on understanding whether different generative models or architectures leave discernible signatures within the synthetic images they generate.</p><p>Participants are provided with a set of synthetic images generated through various generative models. The task is to identify and detect the distinct "fingerprints" associated with each model. This involves analyzing the characteristics, patterns, or features embedded in the synthetic images to determine the specific traits that define each model's output. The ultimate goal is to distinguish between images created by different models and to uncover the unique imprints left by each generative model, facilitating model attribution recognition.</p><p>This task is fundamentally a clustering problem, where the aim is to group images based on the unique fingerprints left by different generative models. It is important to note that the number of clusters identified in the training and development datasets may differ from those in the testing dataset, adding a layer of complexity to the task.</p><p>To carry out this task, we had access to two datasets.</p><p>Development dataset: The development dataset consists of 600 images generated using three different generative models. Each model is represented by 200 images of size 256x256 pixels, organized in annotated folders.</p><p>Test dataset: The test dataset comprises 3000 computed tomography (CT) slices, each a 256x256-pixel grayscale image. These slices were generated using four distinct generative models. 
For the task, participants must refer to these models simply as models 1, 2, 3, and 4.</p><p>The subsets of real images are composed of axial slices of 3D computed tomography (CT) images taken from a dataset of approximately 8,000 lung tuberculosis patients. No real data was used in this task in either the development or the test dataset, and the images were generated solely by the generative models. Data description: The benchmarking image dataset consists of axial slices of 3D CT images from approximately 8,000 lung tuberculosis patients. These images, stored as 8-bit PNG files with dimensions of 256x256 pixels, vary in appearance; some may look relatively "normal," while others exhibit lung lesions, including severe cases.</p><p>In addition to these real CT images, participants are provided with artificial slice images of the same size (256x256 pixels) generated using different generative models, including Generative Adversarial Networks (GANs) and Diffusion Neural Networks. The challenge is to analyze these synthetic images to identify and differentiate the unique fingerprints imprinted by each generative model. Figures 1 and 2 show sample images from the datasets to give better insight into the nature of the images in this task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Related works</head><p>There have been several attempts to discern real images from fake (generated) ones in the context of GAN detection for generated face images. Matern et al. <ref type="bibr" target="#b6">[7]</ref> extracted several geometric facial features which were then fed to a Support Vector Machine (SVM) classifier to distinguish between real and </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Convolutional Neural Network</head><p>The first model devised for this task is based on a Convolutional Neural Network (CNN) architecture.</p><p>CNNs are widely used for image classification tasks due to their ability to effectively capture spatial features from images. In this study, we first constructed and preprocessed the datasets for training and validation. The training dataset was created by loading images from the specified directory, with images automatically labeled based on the directory structure. The images, in grayscale format with a resolution of 256x256 pixels, were loaded in batches of 16. Similarly, the validation dataset was prepared using images from a separate directory with identical specifications. To facilitate model training, we applied a preprocessing function that normalized the image pixel values to a range between 0 and 1 by casting the images to float32. Additionally, the labels were one-hot encoded to represent the three different classes, ensuring compatibility with our classification model. This preprocessing step was applied to both the training and validation datasets. The resulting architecture of our CNN model for detecting fingerprints in synthetic lung CT images begins with an input layer for grayscale images of size 256x256 pixels. The input is followed by a series of convolutional layers that progressively increase the number of filters, capturing increasingly complex features. The first stage consists of two convolutional layers with 64 filters each, followed by batch normalization and ReLU activation. This pattern is repeated, with the number of filters doubling in each subsequent stage: 128, 256, and 512 filters, respectively. Max pooling layers follow each pair of convolutional layers to downsample the feature maps, reducing their spatial dimensions while preserving crucial information. 
After the four stages of convolution and downsampling, the model includes a bottleneck layer with two convolutional layers having 1024 filters each, continuing the pattern of batch normalization and ReLU activation. From the bottleneck layer, the feature maps are flattened into a one-dimensional vector, which is then passed through two fully connected (dense) layers with 256 and 64 units, respectively, each employing ReLU activation to introduce non-linearity and enable the model to learn complex patterns. The architecture concludes with a dense output layer with three units, corresponding to the three classes of generative models, using a softmax activation function to generate class probabilities. We conducted the training process with a batch size of 16 and a learning rate of 10^-4 over a span of 200 epochs. The model was compiled using the Adam optimizer with categorical cross-entropy loss, and accuracy as the evaluation metric. We incorporated several callbacks: ModelCheckpoint to save the best-performing model, CSVLogger to record the training log, TensorBoard for visualization, and EarlyStopping to halt training if the validation loss did not improve for 50 consecutive epochs. The last layer of the model was removed to create a feature extractor, which outputs the penultimate layer's activations. This modified model was used to predict features for the validation dataset. These features were then clustered using K-means clustering with four clusters, corresponding to the four generative models used to create the test dataset.</p><p>To validate our approach, we also generated clusters for a smaller subset of the test dataset. The same feature extractor was employed to predict features from this dataset, and K-means clustering was applied to these features as well. Finally, we performed clustering on the full test dataset. The image files were processed similarly, and the features were extracted using the same feature extractor. 
These features were clustered into four groups using K-means, and the resulting cluster labels were analyzed to assess the performance of our approach in distinguishing between the synthetic images generated by different models.</p></div>
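The architecture and training setup described above can be sketched in Keras as follows. This is a reconstruction from the text, not the authors' code; details such as the 3x3 kernel size and padding are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def conv_stage(x, filters):
    # Two convolutional layers with batch normalization and ReLU,
    # followed by 2x2 max pooling, as described in the text.
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same")(x)
        x = layers.BatchNormalization()(x)
        x = layers.ReLU()(x)
    return layers.MaxPooling2D()(x)

inputs = layers.Input(shape=(256, 256, 1))          # grayscale 256x256 CT slices
x = inputs
for f in (64, 128, 256, 512):                       # four downsampling stages
    x = conv_stage(x, f)
for _ in range(2):                                  # bottleneck: 1024 filters, no pooling
    x = layers.Conv2D(1024, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
features = layers.Dense(64, activation="relu")(x)   # penultimate layer used as features
outputs = layers.Dense(3, activation="softmax")(features)

model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Feature extractor: drop the softmax head, keep the 64-d penultimate activations,
# which are then clustered with K-means as described in the text.
extractor = models.Model(inputs, features)
print(model.output_shape, extractor.output_shape)
```

After fitting, `extractor.predict(...)` yields the 64-dimensional vectors that are fed to K-means with four clusters.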
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Existing architectures</head><p>In addition to the custom CNN architecture, we leveraged several pre-trained deep learning models to enhance feature extraction and clustering performance, specifically EfficientNet, ResNet50, MobileNetV2, VGG19, and Xception. These architectures, each with unique strengths, were fine-tuned on the development dataset to adapt to the specific nuances of synthetic lung CT images. EfficientNet, known for balancing accuracy and computational efficiency, scales depth, width, and resolution uniformly, making it versatile for various image recognition tasks. This model's compound scaling approach enables it to extract a diverse set of features. ResNet50, with its deep residual learning framework, excels in capturing intricate patterns and mitigating the vanishing gradient problem. Its ability to maintain performance with increased depth ensures that it captures detailed and hierarchical features. MobileNetV2, optimized for mobile and embedded vision applications, offers a lightweight yet effective feature extraction capability. Its inverted residuals and linear bottlenecks allow it to efficiently process images. Despite its efficiency, MobileNetV2 maintains robust feature extraction performance, which is beneficial for our clustering task. VGG19, characterized by its deep and uniform architecture, provides a straightforward yet powerful approach to feature extraction. Its simplicity in design, with sequential convolutional layers, enables it to capture hierarchical features effectively. The depth of VGG19 allows it to learn complex representations, which can be particularly useful for distinguishing fine-grained differences in the synthetic images. Xception, an extension of the Inception architecture, utilizes depthwise separable convolutions, which decouple the learning of spatial and channel-wise features. 
This approach significantly reduces the number of parameters while maintaining high performance, making Xception both efficient and powerful. Each of these pre-trained models was custom-trained on the development dataset to fine-tune their weights for our specific task. This custom training ensured that the models were well-adapted to the characteristics of the synthetic lung CT images generated by different models. After training, the final classification layers of these models were removed to use the deep feature representations generated by the preceding layers. The extracted features from each model were then subjected to K-means clustering, grouping the images based on the unique fingerprints left by different generative models. This multi-architecture approach allowed us to comprehensively evaluate and utilize the strengths of different deep learning models, enhancing the robustness and reliability of our detection methodology. By comparing the clustering results across these architectures, we aimed to identify the most effective model for detecting generative model fingerprints in synthetic lung CT images.</p></div>
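The backbone-plus-K-means pipeline described above can be sketched as follows, using ResNet50 as the example backbone (the other backbones are swapped in the same way). For brevity this sketch uses random weights and a random stand-in batch; the authors fine-tuned the weights on the development dataset, and the grayscale-to-RGB tiling is an assumption about how single-channel slices were fed to three-channel backbones.

```python
import numpy as np
import tensorflow as tf
from sklearn.cluster import KMeans

# Backbone without its classification head; global average pooling yields
# one feature vector per image (2048-d for ResNet50).
base = tf.keras.applications.ResNet50(include_top=False, weights=None,
                                      input_shape=(256, 256, 3), pooling="avg")

images = np.random.rand(8, 256, 256, 1).astype("float32")  # stand-in CT batch
rgb = np.repeat(images, 3, axis=-1)       # tile grayscale to three channels
feats = base.predict(rgb, verbose=0)      # deep feature representations

# Cluster the features into four groups, one per generative model.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(feats)
print(feats.shape, labels)
```

The same `feats`-then-`KMeans` pattern applies to EfficientNet, MobileNetV2, VGG19, and Xception, differing only in the backbone constructor and feature dimensionality.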
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results and discussion</head><p>The performance of the clustering was evaluated using the Adjusted Rand Index (ARI), a standard measure of the similarity between two data clusterings, adjusted for the chance grouping of elements. It ranges from -1 to 1: a value of 1 indicates perfect agreement between the two clusterings, 0 indicates a chance-level result, and negative values indicate less agreement than expected by chance. The formula for the Adjusted Rand Index is given by:</p><formula xml:id="formula_0">ARI = (RI − E[RI]) / (max(RI) − E[RI])</formula><p>where:</p><p>• RI is the Rand Index, which measures the similarity between two clusterings.</p><p>• E[RI] is the expected value of the Rand Index for a random clustering.</p><p>• max(RI) is the maximum value of the Rand Index.</p><p>The ARI is particularly useful because it adjusts for the chance of random clusterings, providing a more accurate measure of clustering performance.</p><p>Table 1 shows the ARI scores for the different architectures.</p><p>The ARI scores indicate the effectiveness of each model in clustering the images generated by different generative models. An ARI score close to 0 indicates that the clustering is random and does not effectively capture the underlying structure. Negative ARI scores, as seen in several of the models, suggest that the clustering results are even less consistent than what would be expected by chance. EfficientNet and VGG19 produced slightly negative ARI scores, indicating poor clustering performance. EfficientNet's more complex scaling might not have aligned well with the synthetic image features, while VGG19's simpler architecture might have missed intricate patterns. MobileNetV2 achieved a near-zero ARI score, suggesting random clustering performance. 
Despite its efficiency and effectiveness in other settings, its lightweight design might not have captured enough discriminative features for this task. Xception had the most negative ARI score, possibly due to its complex architecture failing to generalize well to the specific synthetic features of the images. ResNet50 produced a slightly positive ARI score, indicating that it performed better than random clustering. Its residual connections likely helped in preserving more relevant features, making it somewhat more effective for this task. The custom CNN also resulted in a negative ARI score, suggesting that it might not have captured the generative model fingerprints as effectively as expected. The varying ARI scores across different architectures highlight the differences in their capabilities to capture and distinguish the synthetic image features. ResNet50's slight positive score shows some promise due to its residual learning capabilities, which help in retaining more complex patterns. In contrast, Xception's lower performance might be attributed to its more sophisticated architecture not aligning well with the specific dataset characteristics. MobileNetV2's near-zero score suggests that its efficient, lightweight structure did not capture enough details necessary for effective clustering. The relatively poor performance of EfficientNet and VGG19 could be due to their architectural designs not being optimal for the type of features present in the synthetic lung CT images. Overall, these results indicate that while pre-trained models provide powerful feature extraction capabilities, their effectiveness in this specific task of clustering generative model fingerprints varies significantly. Custom tuning and perhaps hybrid approaches combining multiple architectures might be necessary to achieve better clustering performance.</p></div>
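The ARI defined above is available directly in scikit-learn; a toy example illustrates its behavior at the extremes discussed in the text.

```python
from sklearn.metrics import adjusted_rand_score

truth   = [0, 0, 1, 1, 2, 2]   # ground-truth model assignments
perfect = [1, 1, 0, 0, 2, 2]   # same grouping under different label names
random_ = [0, 1, 0, 1, 0, 1]   # each true group split evenly across clusters

# ARI is invariant to label permutation: identical groupings score 1.0.
print(adjusted_rand_score(truth, perfect))   # 1.0
# A grouping with less agreement than chance scores below 0.
print(adjusted_rand_score(truth, random_))   # negative
```

This label-permutation invariance is what makes ARI suitable here: the cluster indices produced by K-means carry no inherent correspondence to the generative model identifiers.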
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>In this paper, we explored the challenging task of detecting generative models' fingerprints on synthetic images, particularly focusing on lung CT scans. Our proposed method, utilizing modified CNN architectures for feature extraction followed by K-means clustering, showcased limitations in effectively clustering the images based on the unique fingerprints of different generative models. Despite custom training on the development dataset, our approach yielded unsatisfactory results.</p><p>However, our study highlights several important insights and avenues for improvement in this domain. Firstly, while our method struggled to distinguish between images generated by different models, it underscores the complexity of the task and the need for more sophisticated techniques. Future research could explore ensemble approaches or hybrid models that combine features from multiple architectures to leverage their respective strengths. Additionally, incorporating domain-specific knowledge, such as lung anatomy and pathology, into the feature extraction process could enhance the model's ability to discern subtle differences in synthetic images.</p><p>Furthermore, our study sheds light on the importance of dataset diversity and size. The limited size of the development dataset may have hindered the generalization ability of our model. Therefore, expanding the dataset to include a wider range of synthetic images generated by various models could lead to more robust and generalizable results.</p><p>Moreover, exploring alternative clustering algorithms beyond K-means could offer valuable insights. 
Hierarchical clustering or density-based clustering methods may better capture the underlying structure of the data, especially in scenarios where the number of clusters is unknown or varies.</p><p>In conclusion, while our proposed method demonstrated limitations in effectively detecting generative model fingerprints on synthetic images, it provides a foundation for future research in this area. By addressing the identified shortcomings and leveraging advancements in machine learning techniques, we can pave the way towards more accurate and reliable methods for attributing synthetic images to their respective generative models, thus ensuring the integrity and authenticity of medical imaging data.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :Figure 2 :</head><label>12</label><figDesc>Figure 1: Sample images from the three classes in the development dataset</figDesc><graphic coords="3,207.38,258.35,180.51,180.51" type="bitmap" /></figure>
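The alternatives mentioned above can be sketched with scikit-learn on stand-in feature vectors; hierarchical clustering with a distance threshold and DBSCAN both infer the number of clusters from the data rather than requiring it up front. The specific threshold and `eps` values here are illustrative, not tuned recommendations.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, DBSCAN

rng = np.random.default_rng(0)
# Two well-separated blobs as stand-ins for per-model feature vectors.
feats = np.vstack([rng.normal(0.0, 0.2, (20, 64)),
                   rng.normal(3.0, 0.2, (20, 64))])

# Hierarchical (Ward) clustering: cut the dendrogram at a distance threshold
# instead of fixing the number of clusters in advance.
hier = AgglomerativeClustering(n_clusters=None, distance_threshold=10.0)
# Density-based clustering: cluster count emerges from the density structure.
dens = DBSCAN(eps=3.0, min_samples=5)

print(len(set(hier.fit_predict(feats))), len(set(dens.fit_predict(feats))))
```

Both methods recover the two blobs without being told the cluster count, which is the property that matters when the number of generative models in the test set is unknown.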
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>ARI Scores for Different Architectures along with their corresponding submission IDs</figDesc><table><row><cell>Architecture</cell><cell>ARI Score</cell></row><row><cell>EfficientNet (ID#: 520)</cell><cell>-0.0005467941</cell></row><row><cell>MobileNetV2 (ID#: 518)</cell><cell>-0.0000102128</cell></row><row><cell>Xception (ID#: 517)</cell><cell>-0.0020193309</cell></row><row><cell>ResNet50 (ID#: 516)</cell><cell>0.0000795212</cell></row><row><cell>VGG19 (ID#: 513)</cell><cell>-0.0009935105</cell></row><row><cell>Custom CNN (ID#: 277)</cell><cell>-0.0006152185</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of ImageCLEF 2024: Multimedia retrieval in medical applications</title>
		<author>
			<persName><forename type="first">B</forename><surname>Ionescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Drăgulinescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Rückert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ben Abacha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Garcıa Seco De Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bloch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Brüngel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Idrissi-Yaghir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Schäfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Pakull</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Damm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bracke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M</forename><surname>Friedrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Andrei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Prokopchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Karpenka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radzhabov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kovalev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Macaire</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schwab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lecouteux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Esperança-Rodier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yetisgen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Riegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Thambawita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Storås</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Halvorsen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Heinrich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kiesel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Stein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction, Proceedings of the 15th International Conference of the CLEF Association (CLEF 2024</title>
		<title level="s">Springer Lecture Notes in Computer Science LNCS</title>
		<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of 2024 ImageCLEFmedical GANs Task -Investigating Generative Models&apos; Impact on Biomedical Synthetic Images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Andrei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radzhabov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Karpenka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Prokopchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kovalev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ionescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF2024 Working Notes, CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Generative adversarial network in medical imaging: A review</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Walia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Babyn</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.media.2019.101552</idno>
	</analytic>
	<monogr>
		<title level="j">Medical Image Analysis</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page">101552</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lyu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1811.00656</idno>
		<title level="m">Exposing deepfake videos by detecting face warping artifacts</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">CNN detection of GAN-generated face images based on cross-band co-occurrences analysis</title>
		<author>
			<persName><forename type="first">M</forename><surname>Barni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kallas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Nowroozi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tondi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Workshop on Information Forensics and Security (WIFS), IEEE</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Leveraging frequency analysis for deep fake image recognition</title>
		<author>
			<persName><forename type="first">J</forename><surname>Frank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Eisenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Schönherr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kolossa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Holz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Machine Learning</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="3247" to="3258" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Detecting GAN-generated imagery using color cues</title>
		<author>
			<persName><forename type="first">S</forename><surname>McCloskey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Albright</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1812.08247</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Exposing deep fakes using inconsistent head poses</title>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lyu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="8261" to="8265" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Deep image fingerprint: Towards low budget synthetic image detection and model lineage analysis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sinitsa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Fried</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision</title>
				<meeting>the IEEE/CVF Winter Conference on Applications of Computer Vision</meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="4067" to="4076" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Applications of generative adversarial networks to dermatologic imaging</title>
		<author>
			<persName><forename type="first">F</forename><surname>Furger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Amruthalingam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Navarini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pouly</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Neural Networks in Pattern Recognition</title>
				<editor>
			<persName><forename type="first">F.-P</forename><surname>Schilling</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Stadelmann</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="187" to="199" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Figueroa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Johnsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sopasakis</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2403.13916</idno>
		<title level="m">Enhancing fingerprint image synthesis with GANs, diffusion models, and style transfer techniques</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Evaluating privacy on synthetic images generated using GANs: Contributions of the VCMI team to ImageCLEFmedical GANs</title>
		<author>
			<persName><forename type="first">H</forename><surname>Montenegro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Neto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Patrício</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Rio-Torto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gonçalves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Teixeira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Challenge</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Subburam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Sathyanarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Anand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Srinivasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subramaniam</surname></persName>
		</author>
		<title level="m">DMK-SSN at ImageCLEF 2023 Medical: Controlling the quality of synthetic medical images created via GANs using machine learning and image hashing techniques</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Ghazi</surname></persName>
		</author>
		<title level="m">GAN-ISI: Generative adversarial networks image source identification using texture analysis</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<author>
			<persName><forename type="first">H</forename><surname>Bharathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bhaskar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Venkataramani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Desingu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Kalinathan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Correlating biomedical image fingerprints between GAN-generated and real images using a ResNet backbone with ML-based downstream comparators and clustering</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">AIMultimediaLab at ImageCLEFmedical GANs 2023: determining &quot;fingerprints&quot; of training data in generated synthetic images</title>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Andrei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ionescu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF2023 Working Notes, CEUR Workshop Proceedings</title>
				<meeting><address><addrLine>Thessaloniki, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">CNN-based method for lung cancer detection in whole slide histopathology images</title>
		<author>
			<persName><forename type="first">M</forename><surname>Šarić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Russo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sikora</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 4th International Conference on Smart and Sustainable Technologies (SpliTech), IEEE</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning</title>
		<author>
			<persName><forename type="first">H.-C</forename><surname>Shin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">R</forename><surname>Roth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Nogues</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mollura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Summers</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="1285" to="1298" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
