<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Correlating Biomedical Image Fingerprints between GAN-generated and Real Images using a ResNet Backbone with ML-based Downstream Comparators and Clustering: ImageCLEFmed GANs, 2023</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Haricharan Bharathi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anirudh Bhaskar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vishal Venkataramani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Karthik Desingu</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lekshmi Kalinathan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering</institution>
          ,
          <addr-line>Chennai</addr-line>
          ,
          <country country="IN">India</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
<p>To address the ImageCLEF GANs 2023 challenge, this paper proposes a comprehensive approach that incorporates models sharing a common similarity constraint. First, a relation network based on few-shot learning is used to capture the underlying similarities between real and artificial images; this model learns to differentiate between the two classes by leveraging the information in the limited labeled data. Secondly, agglomerative clustering is used to group similar images together; by identifying clusters predominantly composed of real images, the authors enhance the ability to distinguish between real and artificial images effectively. Lastly, an SVM is implemented for classification, trained on the combined feature representations obtained from the real and artificial images. The performance of the SVM is evaluated on a test set containing 200 real images, predicting which of these real images were used to generate the artificial images. The experimental results demonstrate the effectiveness of the relational model in accurately identifying such real images within the test dataset.</p>
      </abstract>
      <kwd-group>
        <kwd>Generative Adversarial Network</kwd>
        <kwd>ImageCLEF</kwd>
        <kwd>Support Vector Machines</kwd>
        <kwd>Hierarchical Clustering</kwd>
        <kwd>Machine Learning</kwd>
        <kwd>Deep Learning</kwd>
        <kwd>ResNet</kwd>
        <kwd>Convolutional Neural Networks</kwd>
        <kwd>Few-shot Learning</kwd>
        <kwd>Relational model</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The ImageCLEF GANs task 2023 [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is an evaluation campaign organised by the CLEF initiative
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Being the first edition of the task, the primary objective is to detect similarities between
synthetic biomedical images generated by GANs and the real images used for training, in order
to assess the probability of specific real images being used in the training process. The main
motive behind this task is to assess the potential privacy and security implications of using
GAN-generated medical images. Understanding if GANs preserve identifiable information from
real images can help determine the risks associated with sharing and using artificial biomedical
data in real-life scenarios. The findings can inform the development of ethical guidelines and
regulations for the generation and usage of synthetic medical images while safeguarding patient
privacy.
      </p>
      <p>With advancements in machine learning techniques like GANs, it has become increasingly
difficult to visually distinguish between real and fake images. Generative models can produce
highly realistic images that mimic the visual characteristics of real images, making it challenging
for ML algorithms to identify subtle differences. Limited image data may also result in biased ML
models that struggle to generalize well, which is apparent from the amount of data provided for
the task. Moreover, identifying fake images may require understanding contextual information beyond
pixel-level analysis: detecting subtle anomalies in scene composition, object interactions, or
semantic consistency is difficult for ML models that primarily rely on low-level visual
features. Overcoming these difficulties requires continuous research and development of more
robust and adaptable ML algorithms.</p>
      <p>This paper provides a solution to the above challenges with the help of an ML technique
known as feature extraction. Feature extraction involves the use of pre-trained deep learning
models, such as convolutional neural networks (CNNs), to extract high-level features from both
real and fake images. The CNNs can capture important patterns and textures that can help in
distinguishing between the two.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        In recent years, Generative Adversarial Networks (GANs) have gained significant attention in the
medical field for various image generation and translation tasks. Several studies have explored
the use of GANs for medical image synthesis, image-to-image translation, and specifically,
the identification or detection of synthetic images. Numerous works have investigated the
generation of synthetic medical images using GANs. For instance, Choi et al. (2018) proposed a
method called "StarGAN" [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for multi-domain image synthesis, which was successfully applied
to generating diverse and realistic brain MRI images. Meanwhile, the paper by Kench et al.
presents SliceGAN [4], an architecture that utilizes generative adversarial networks (GANs) to
generate high-quality 3D datasets from a single representative 2D image.
      </p>
      <p>Synthetic images play a crucial role in the medical field as they offer several important
advantages and address significant challenges. First, the generation of synthetic images allows
for the augmentation of limited or insufficient datasets. In many medical imaging applications,
acquiring large and diverse annotated datasets can be challenging and time-consuming; by
generating synthetic images, researchers can expand the training data, thereby improving the
robustness and generalization of machine learning models. Second, synthetic images enable
the simulation of rare or difficult-to-obtain medical scenarios. Certain conditions or diseases
may have low prevalence or be challenging to capture through traditional imaging methods;
synthetic images provide a means to create representative cases, allowing researchers and
clinicians to study and understand these conditions better, develop diagnostic tools, and explore
treatment strategies. Moreover, synthetic images can address privacy concerns related to
patient data. Medical images often contain sensitive information, making it difficult to share
or publicly release datasets; by generating synthetic images that preserve the statistical and
anatomical properties of real data while removing specific patient information, privacy can
be maintained, enabling more open collaboration and facilitating research advancements. In
summary, synthetic images are indispensable in the medical field, serving as a valuable resource
for data augmentation, rare-scenario simulation, and privacy preservation. Their utilization
empowers researchers, clinicians, and technologists to address critical challenges, enhance
diagnostic accuracy, improve patient care, and advance medical imaging technologies.</p>
      <p>The paper by Nataraj et al. [5] proposed a novel approach for detecting GAN-generated
fake images by combining co-occurrence matrices and deep learning techniques. The authors
extracted the co-occurrence matrices on three color channels of the pixel domain and trained a
deep convolutional neural network (CNN) model. The experimental results on two diverse GAN
datasets, based on image-to-image translations and facial attributes/expressions, demonstrated
the promising performance of the proposed approach, achieving over 99% classification accuracy.
The approach also exhibited good generalization capabilities when trained on one dataset and
tested on the other. GANs have also been utilized for image-to-image translation tasks in the
medical domain. For example, the paper by Zhu et al. [6] introduces CycleGAN, an approach
for translating images from one domain to another without requiring paired training data.</p>
      <p>In summary, prior works have demonstrated the potential of GANs in generating synthetic
medical images and performing image-to-image translation tasks. However, the problem of
distinguishing synthetic and real medical images remains an active research area, requiring
robust methodologies to ensure the reliability and integrity of generated data.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methods</title>
      <sec id="sec-3-1">
        <title>3.1. Relational Model</title>
        <p>The following section explains in detail the systems that were utilized in our submissions for
the task. A relational model was employed that takes a pair of input images, comprising a real image
and a generated image, with the objective of determining the relationship or similarity between
them.</p>
        <p>The two distance metrics considered were the L2 (Euclidean) distance and the cosine distance
between samples and support-set prototypes.</p>
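        <p>For concreteness, the two metrics can be sketched as follows; the prototype and query vectors below are toy placeholders rather than actual image embeddings.</p>
        <preformat>
```python
import numpy as np

def l2_distance(a, b):
    """L2 (Euclidean) distance between a sample and a prototype."""
    return float(np.linalg.norm(a - b))

def cosine_distance(a, b):
    """Cosine distance = 1 - cosine similarity."""
    sim = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return 1.0 - sim

prototype = np.array([1.0, 0.0, 0.0])  # toy support-set prototype
query = np.array([0.0, 1.0, 0.0])      # toy query embedding
assert l2_distance(prototype, prototype) == 0.0
assert cosine_distance(prototype, query) == 1.0  # orthogonal vectors
```
        </preformat>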
        <sec id="sec-3-1-1">
          <title>3.1.1. Deep Learning Architectures Considered</title>
          <p>The family of ResNet [7] models was considered for the backbone: ResNet-10, ResNet-12,
ResNet-18, ResNet-34, ResNet-50, ResNet-101, and ResNet-152.
ResNet-101, a widely adopted convolutional neural network model introduced in 2015, addresses
the degradation problem encountered in deep networks. This problem refers to the phenomenon
where increasing network depth leads to saturated accuracy followed by a rapid decline. To
overcome this, ResNet-101 incorporates shortcut connections that bypass one or more layers.
These connections, inspired by the gated shortcuts of the Highway network, regulate the
flow of information past intermediate layers. By employing these mechanisms, ResNet-101 effectively mitigates the
degradation problem and improves the overall accuracy of deep neural networks.</p>
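          <p>The shortcut mechanism described above can be sketched with a toy fully-connected residual block; the layer sizes and random weights below are illustrative placeholders, not the actual ResNet-101 architecture.</p>
          <preformat>
```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Toy residual block: output = relu(F(x) + x).
    The identity shortcut lets information (and gradients) bypass the two
    weight layers, which is how ResNet counters the degradation problem.
    """
    out = relu(x @ w1)    # first weight layer with activation
    out = out @ w2        # second weight layer, no activation yet
    return relu(out + x)  # shortcut: add the input back, then activate

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))          # a small batch of toy features
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)
assert y.shape == x.shape  # the identity shortcut needs matching shapes
```
          </preformat>
          <p>Note that with both weight matrices set to zero the block reduces to the identity (followed by ReLU), which is exactly the property that keeps very deep stacks trainable.</p>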
        </sec>
        <sec id="sec-3-1-2">
          <title>3.1.2. Few-shot Learning</title>
          <p>Few-shot learning [8] is a machine learning approach that deals with the challenge of learning
new concepts or classes from limited labeled training data. In traditional machine learning,
a substantial amount of labeled data is typically required to train models effectively, and the
availability of only one or very few examples challenges the standard ‘fine-tuning’ practice in
deep learning. Few-shot learning instead aims to recognise novel visual categories from a small
number of labeled examples, often referred to as the support set or the few-shot training set. The key idea
behind few-shot learning is to leverage the knowledge gained from a larger set of base classes
or categories to learn new classes with limited labeled data. This is achieved by exploiting the
similarities and transferable knowledge between the base classes and the novel classes.
Few-shot learning can be achieved through various techniques, including metric-based approaches,
model-based methods, and generative models. These methods often utilize techniques such as
Siamese networks, meta-learning, or data augmentation strategies to enable effective learning
with limited labeled data. Sung et al. [8] describe the implementation of the relation network for few-shot
learning.</p>
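          <p>The core of the relation-network idea, comparing a pair of embeddings through a small learned module, can be sketched as below. This toy NumPy version uses a one-hidden-layer MLP with random weights in place of the convolutional relation module of [8]; all sizes and weights are illustrative.</p>
          <preformat>
```python
import numpy as np

def relation_score(feat_a, feat_b, w_hidden, w_out):
    """Toy relation module: concatenate two embeddings (the comparison
    candidate) and map the pair to a similarity score in (0, 1)."""
    pair = np.concatenate([feat_a, feat_b])    # comparison candidate
    hidden = np.maximum(pair @ w_hidden, 0.0)  # ReLU hidden layer
    logit = float(hidden @ w_out)
    return 1.0 / (1.0 + np.exp(-logit))        # sigmoid relation score

rng = np.random.default_rng(1)
real_feat = rng.normal(size=16)   # toy embedding of a real image
fake_feat = rng.normal(size=16)   # toy embedding of a GAN image
w_h = rng.normal(size=(32, 8)) * 0.1
w_o = rng.normal(size=8) * 0.1
score = relation_score(real_feat, fake_feat, w_h, w_o)
assert score > 0.0 and (1.0 - score) > 0.0  # always strictly in (0, 1)
```
          </preformat>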
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Hierarchical Clustering</title>
        <p>In order to understand the patterns underlying the data provided, other methods were
implemented. One method that proved successful was an unsupervised mode of
learning, in particular, forming clusters of the real and generated images. Of the several
clustering algorithms available, the one used is Hierarchical Agglomerative Clustering.
Hierarchical agglomerative clustering is a clustering algorithm that aims to group similar data
points into clusters based on their pairwise distances or similarities. It starts with each data
point as an individual cluster and iteratively merges the most similar clusters until a termination
criterion is met; this merging process continues until all data points are part of a single cluster.
Hierarchical clustering has the ability to capture the nested structure of data. It is a bottom-up
approach that builds clusters from the bottom by merging smaller clusters into larger ones.</p>
        <p>The choice of similarity or distance metric is crucial in hierarchical clustering. Commonly
used metrics include Euclidean distance, Manhattan distance, cosine similarity, and correlation
coefficients, depending on the nature of the data being clustered. The linkage criterion determines
how the distances or similarities between clusters are calculated. Some popular linkage criteria
are single linkage, complete linkage, average linkage, and Ward’s linkage. The linkage
used in the model is Ward’s linkage, where the distance between two clusters is determined
by the increase in the total within-cluster sum of squares that would result from merging the
clusters. Although computationally expensive, it handles the given dataset with
ease.</p>
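        <p>A minimal sketch of Ward-linkage agglomerative clustering, using SciPy on toy stand-ins for the image feature vectors (the two Gaussian blobs below are illustrative, not the actual ResNet features):</p>
        <preformat>
```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

# Two well-separated toy "feature" groups standing in for image features.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.3, size=(10, 4))
group_b = rng.normal(loc=5.0, scale=0.3, size=(10, 4))
features = np.vstack([group_a, group_b])

# Ward's linkage merges, at each step, the pair of clusters that least
# increases the total within-cluster sum of squares.
merge_tree = linkage(features, method="ward")

# Cut the resulting dendrogram into two flat clusters.
labels = fcluster(merge_tree, t=2, criterion="maxclust")
assert len(set(labels)) == 2
# Points drawn around 0 cluster together, as do those drawn around 5.
assert len(set(labels[:10])) == 1 and len(set(labels[10:])) == 1
```
        </preformat>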
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Support Vector Machine</title>
        <p>Another method used encompasses the supervised mode of learning, with the usage of Support
Vector Machines. An SVM works by finding an optimal hyperplane that maximally separates the
data points of the two classes in the feature space. The hyperplane is defined by a subset of
training examples called support vectors. SVMs aim to achieve a balance between maximizing
the margin, i.e., the distance between the hyperplane and the closest data points of each class,
and minimizing classification errors. They perform well even in high-dimensional feature spaces,
where the number of features is much larger than the number of training examples. SVMs can
handle non-linear classification problems by employing kernel functions. Kernel functions
transform the data into a higher-dimensional space where a linear hyperplane can be used for
separation; common kernel functions include linear, polynomial, radial basis function (RBF),
and sigmoid. The RBF kernel used in the model allows the SVM to effectively handle complex,
non-linear decision boundaries. It can capture intricate relationships between features and adapt
to non-linear patterns in the data. The RBF kernel implicitly maps the input data into a
higher-dimensional space, avoiding the need for explicit feature engineering or manual transformation.
This makes it suitable for cases where the underlying data structure is not well-defined or the
relationship between features is non-linear. SVMs are utilized in medical research for disease
diagnosis, prognosis, and prediction; they can classify medical data such as patient records, genetic
data, or medical images to aid in decision-making and support clinical diagnosis.</p>
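        <p>The effect of the RBF kernel can be illustrated on a toy non-linearly separable problem; the ring-shaped data below is a synthetic placeholder, not the task's image features, and the scikit-learn hyperparameters shown are defaults rather than the ones used in our submission.</p>
        <preformat>
```python
import numpy as np
from sklearn.svm import SVC

# Toy non-linearly separable data: class 1 lies outside a circle of
# radius 0.5, a boundary no single linear hyperplane in 2D can fit.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(400, 2))
radius = np.linalg.norm(X, axis=1)
y = (radius > 0.5).astype(int)

# The RBF kernel implicitly maps points to a higher-dimensional space,
# so a linear hyperplane there becomes a curved boundary here.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, y)

train_acc = clf.score(X, y)
assert train_acc > 0.9  # the circular boundary is recovered closely
```
        </preformat>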
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <sec id="sec-4-1">
        <title>4.1. Relational Model</title>
        <p>An abstract base class for a few-shot learning model was introduced, in which the backbone
model used for feature extraction [9], ResNet-101, is instantiated taking a support set as
input. Feature maps are extracted from both images using separate backbone networks. These
feature maps are then concatenated to form a comparison candidate tensor. Subsequently, the
comparison candidate tensor is passed through the relation module, which is a convolutional
neural network. This module produces relation scores indicating the relationship or similarity
between the two images.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Hierarchical Clustering</title>
        <p>The ResNet50 model pre-trained on the ImageNet dataset is loaded. The model is configured
to exclude the top dense layer and use global average pooling for feature extraction. Feature
extraction is performed by normalization of the pixel values of the images from 0 to 1 and
feeding them to the ResNet50 model. Feature agglomeration is used to reduce the dimensions
of the feature vectors. A label of 0 is assigned to the 80 not used images, a label of 1 is assigned
to the 80 used images. Finally, Hierarchical Agglomerative clustering[10] with the ward linkage
method is implemented on the feature vectors of the training images. The labels for the 200
real test images are predicted by applying the trained clustering model to their feature vectors.
A dendrogram [11] is plotted, and the cophenetic correlation coefficient [12] is computed to
evaluate the hierarchical clustering model’s quality.</p>
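        <p>The cophenetic correlation coefficient measures how faithfully the dendrogram's merge heights preserve the original pairwise distances. A minimal sketch with SciPy, on toy stand-ins for the ResNet-50 feature vectors:</p>
        <preformat>
```python
import numpy as np
from scipy.cluster.hierarchy import cophenet, linkage
from scipy.spatial.distance import pdist

# Toy feature vectors standing in for the extracted image features.
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(0.0, 0.2, size=(15, 8)),
    rng.normal(3.0, 0.2, size=(15, 8)),
])

pairwise = pdist(features)                    # condensed distance matrix
merge_tree = linkage(features, method="ward")

# Correlation between original pairwise distances and the dendrogram
# heights at which each pair of points is first merged.
coeff, _ = cophenet(merge_tree, pairwise)
assert coeff > 0.8  # well-separated groups give a high coefficient
```
        </preformat>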
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Support Vector Machines</title>
        <p>A ResNet-101 model is employed without its top dense layer to extract feature vectors from each
image. The training images are rescaled to lie between 0 and 1 and their features are extracted
using this model. Additional feature vectors are created by concatenating the feature
vectors from the 80 used and 80 not used training images with the feature vectors from the 500
generated images. These concatenated feature vectors and their corresponding labels (1 for used
images, 0 for not used images) are used to train a support vector machine [13] (SVM) classifier
with an RBF kernel [14]. The 10,000 generated images, along with the 200 real images given for testing,
are preprocessed and fed to the ResNet-101 model for feature extraction as well. The trained
classifier is then used to predict the labels (used or not used) for the test feature vectors. The
prediction is driven by a majority count of 1s or 0s. Eigenvalue analysis is then performed to
store the maximum eigenvalue for each feature vector of the used training images.</p>
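        <p>The majority-count decision step can be sketched as follows; the per-pair prediction values shown are hypothetical, and the tie-breaking rule towards "not used" is an assumption for illustration.</p>
        <preformat>
```python
import numpy as np

def majority_label(predictions):
    """Aggregate per-pair SVM predictions for one real test image into a
    single used / not-used decision by counting 1s versus 0s.
    Ties are broken towards 'not used' (0), an illustrative choice."""
    predictions = np.asarray(predictions)
    ones = int(predictions.sum())
    zeros = predictions.size - ones
    return 1 if ones > zeros else 0

# Hypothetical per-pair outputs for one test image paired with several
# generated images.
assert majority_label([1, 1, 0, 1, 0]) == 1  # three 1s beat two 0s
assert majority_label([0, 0, 1, 0]) == 0     # 0s hold the majority
```
        </preformat>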
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Results and Conclusion</title>
      <p>All results were submitted under the team name Clef-CSE-GAN and are
as follows. The relational model performed the best among the three similarity-based
approaches, achieving an F1-score of 61.4 (Submission 1). The relational model used
ResNet-101 as the backbone model for feature extraction. We notice a higher number of false
positives (78) when compared to the other approaches. This might stem from the
decision threshold being very low or the model not being complex enough. Future improvements that
can be undertaken are to use a more complex model, preferably one pre-trained on medical
image data, and to tune the hyperparameters.</p>
      <p>Submission 2 reports the scores achieved by the hierarchical clustering approach. It gives
an F1-score of 52.1, the second best amongst the three. The threshold for the number
of clusters was decided after plotting a dendrogram and taking an approximate threshold.</p>
      <p>Submission 3 portrays the Support Vector Machine’s performance, with an F1-score of 43.1.
The SVM has a higher number of false negatives (64) when compared to the other approaches. This
might be because the SVM has a very high sensitivity, or because the feature
vectors that represent the data need to be more comprehensive. A more complex model can be
deployed and the hyperparameters can be tuned for further improvement in scores.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>The authors would like to express their gratitude to the Department of Computer
Science and Engineering, Sri Sivasubramaniya Nadar College of Engineering, Chennai, India
(https://www.ssn.edu.in/) for providing the GPU resources for model training and testing.
[4] S. Kench, S. J. Cooper, Generating 3d structures from a 2d slice with gan-based
dimensionality expansion, 2021. arXiv:2102.07708.
[5] L. Nataraj, T. M. Mohammed, S. Chandrasekaran, A. Flenner, J. H. Bappy, A. K.
RoyChowdhury, B. S. Manjunath, Detecting gan generated fake images using co-occurrence
matrices, 2019. arXiv:1903.06836.
[6] J.-Y. Zhu, T. Park, P. Isola, A. A. Efros, Unpaired image-to-image translation using
cycleconsistent adversarial networks, in: Proceedings of the IEEE international conference on
computer vision, 2017, pp. 2223–2232.
[7] S. Targ, D. Almeida, K. Lyman, Resnet in resnet: Generalizing residual architectures, arXiv
preprint arXiv:1603.08029 (2016).
[8] F. Sung, Y. Yang, L. Zhang, T. Xiang, P. H. Torr, T. M. Hospedales, Learning to compare:
Relation network for few-shot learning, in: Proceedings of the IEEE conference on
computer vision and pattern recognition, 2018, pp. 1199–1208.
[9] Z. Li, C. Peng, G. Yu, X. Zhang, Y. Deng, J. Sun, Detnet: Design backbone for object
detection, in: Proceedings of the European conference on computer vision (ECCV), 2018,
pp. 334–350.
[10] F. Nielsen, F. Nielsen, Hierarchical clustering, Introduction to HPC with MPI for Data</p>
      <p>Science (2016) 195–211.
[11] T. CaliŃski, Dendrogram, Wiley StatsRef: Statistics Reference Online (2014).
[12] J. S. Farris, On the cophenetic correlation coeficient, Systematic Zoology 18 (1969)
279–285.
[13] C. Cortes, V. Vapnik, Support-vector networks, Machine learning 20 (1995) 273–297.
[14] S. Han, C. Qubo, H. Meng, Parameter selection in svm with rbf kernel function, in: World
Automation Congress 2012, IEEE, 2012, pp. 1–4.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Coman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          , Overview of ImageCLEFmedical GANs 2023 task
          <article-title>- Identifying Training Data "Fingerprints" in Synthetic Biomedical Images Generated by GANs for Medical Image Security</article-title>
          , in: CLEF2023 Working Notes, CEUR Workshop Proceedings, CEUR-WS.org, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>B.</given-names>
            <surname>Ionescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Drăgulinescu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Yim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Ben</given-names>
            <surname>Abacha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Snider</surname>
          </string-name>
          , G. Adams,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yetisgen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Rückert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garcıa Seco de Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Friedrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bloch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Brüngel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Idrissi-Yaghir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Schäfer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Hicks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Riegler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Thambawita</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Storås</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Halvorsen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Papachrysos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schöler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Jha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Andrei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Radzhabov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Coman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kovalev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Stan</surname>
          </string-name>
          , G. Ioannidis,
          <string-name>
            <given-names>H.</given-names>
            <surname>Manguinhas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Ştefan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. G.</given-names>
            <surname>Constantin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dogariu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Deshayes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Popescu</surname>
          </string-name>
          , Overview of ImageCLEF 2023:
          <article-title>Multimedia retrieval in medical, socialmedia and recommender systems applications</article-title>
          , in: Experimental IR Meets Multilinguality, Multimodality, and
          <string-name>
            <surname>Interaction</surname>
          </string-name>
          ,
          <source>Proceedings of the 14th International Conference of the CLEF Association (CLEF</source>
          <year>2023</year>
          ), Springer Lecture Notes in Computer Science LNCS, Thessaloniki, Greece,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-W.</given-names>
            <surname>Ha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choo</surname>
          </string-name>
          , Stargan:
          <article-title>Unified generative adversarial networks for multi-domain image-to-image translation</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>8789</fpage>
          -
          <lpage>8797</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>