<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Classification using Conv2D Neural Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Debasish Samal</string-name>
          <email>debasishsamal01@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Prateek Agrawal</string-name>
          <email>dr.agrawal.prateek@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vishu Madaan</string-name>
          <email>dr.vishumadaan@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>OpenForensics: Large-Scale Challenging Dataset For Multi-Face Forgery Detection and Segmentation</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lovely Professional University</institution>
          ,
          <addr-line>Punjab</addr-line>
          ,
          <country country="IN">INDIA</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Shree Guru Gobind Singh Tricentenary University</institution>
          ,
          <addr-line>Gurugram, Haryana</addr-line>
          ,
          <country country="IN">INDIA</country>
        </aff>
      </contrib-group>
      <fpage>113</fpage>
      <lpage>128</lpage>
      <abstract>
        <p>Over the past few years, generative AI and the creation of fake images have advanced rapidly, driven by deep learning. These AI-generated fake images remain incredibly challenging to detect. A generative adversarial network (GAN) can create realistic-looking fake multimedia, such as images, audio, and videos. The spread of such fake media creates panic in social communities and can damage the reputation of a person or community by manipulating public sentiments and opinions. Current studies have suggested the convolutional neural network (CNN) as an effective tool to fight deepfakes. This paper presents an improved CNN architecture, the Conv2D model, which is trained on 1,40,000 images containing 70,000 real images and 70,000 deepfake images, whereas most existing approaches use image datasets containing a small number of images and pre-trained models to demonstrate fake-detection accuracy. Sparse categorical cross-entropy loss and the Adam optimizer are applied to enhance the CNN model's learning. The proposed model produces an accuracy of 94.54%.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
Artificial intelligence (AI) has made significant developments in various fields and industries, such as computer
vision and speech analysis and generation. Similarly, deep learning
generative techniques have brought about a revolutionary change in audiovisual processing.
Recently, a relatively new phenomenon called deepfakes (DF) has appeared, enabling the
generation of artificial (fake) content based on digitally captured images &amp; videos of individuals.
Deepfake involves capturing a person’s facial expressions, lip movements, and eye movements,
and overlaying them onto a different background to create a lifelike simulation of that person in
a fabricated scenario. As the global population becomes increasingly interconnected and reliant
on social media platforms, deepfakes are being used more often to generate synthetic data of
politicians, communities, actors, and media. This, in turn, contributes to the proliferation and
dissemination of fake news on social media [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>Deepfakes are now so prevalent that new deepfake content is released and makes headlines
every day, violating individuals’ privacy and destroying their reputations. As
shown in Figure 1, news went viral when Indian actress Rashmika Mandanna appeared in a
deepfake video that reproduced her facial expressions and appearance so exactly that it was hard to
recognize it as a deepfake at all.</p>
      <p>With the widespread use of platforms like Telegram, Instagram, Reddit, WhatsApp, and
Wikipedia for sharing images, it has become increasingly difficult to distinguish between
authentic photos and those that have been manipulated. The availability of diverse photo-editing
software further complicates the process of verifying an image’s authenticity. Picture forgeries are
commonly created through splicing and copy-move techniques. In copy-move forgery, a
section of an image is copied over another section of the same image to obscure
significant details. By cutting a section from one image and pasting it onto another, image splicing
forms a novel digital image. The main objective of forgery detection is to identify the duplicated
regions in copy-move forgery and the distinct spliced areas in composite images.</p>
      <p>An effective deepfake detection system can accurately identify manipulated and synthetic
content that differs from authentic content. Current research publications emphasize the
development of a resilient deepfake detection scheme; however, many existing approaches in the
literature exhibit weaknesses in terms of resilience, effectiveness in formulating the deepfake
detection model, and the incorporation of generalizability and legibility within the model.</p>
      <p>
        The ability of a deepfake detection system to accurately find manipulation in both
high-quality and poor-quality image or video content is crucial for its robustness. It is important
that the system’s effectiveness is not compromised by the resolution of the content being
analyzed. Typically, deepfake detection systems tend to perform less effectively when analyzing
low-quality content. Generalizability is achieved when a detection system remains effective across
content produced by different deepfake generation tools, each of which employs its own unique
synthesis methods. Interpretability is a critical aspect within the
domain of deepfake detection, where a model must have the capability to find the authentic
and manipulated regions within an image (such as a person’s face) and assign fake-probability
labels to the corresponding face regions. This feature is essential as it empowers a system to
comprehend the complexities of artificially synthesized content and provides a clear rationale
for identifying differences in the images. Consequently, there exists a pressing need for robust
deepfake detection models that can strike a balance between the aforementioned criteria. Several
notable examples of deepfake tools include Faceswap, DeepFaceLab, Faceswap-GAN, DFaker,
StyleGAN, StarGAN, and Face Swapping GAN (FSGAN), among several others[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>1.1. Main Contribution</title>
      <p>
        Below are the main contributions of the paper:
• We delve into different strategies for detecting deepfake images through the utilization of
improved frameworks, emphasizing both their strengths and drawbacks.
• We give a fundamental study of deepfake images, their creation, and the subsequent
advancements in detection work.
• We provide a convolutional neural network (Conv2D) architecture to classify
deepfake images. The proposed model is trained on 1,40,002 training images, 39,428
testing images, and 10,905 validation images for image classification using the whole
OpenForensics dataset [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>The remaining sections of the document are organized as follows: In Section 2, we provide a
literature review of related works, featuring various deep learning models, and discuss existing
deepfake detection approaches. In Section 3, we present current state-of-the-art benchmark
datasets widely used for deepfake detection. In Section 4, we present our proposed
CNN model, and we discuss the test results in Section 5. Lastly, we summarize our findings in Section
6 and propose potential avenues for future studies to conclude the paper.</p>
      <sec id="sec-2-1">
        <title>2. Literature Review</title>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2.1. Deepfake</title>
      <p>Deepfakes have gained widespread recognition primarily because of the convenience and
accessibility of various mobile applications and algorithms. These applications heavily rely
on deep learning methodologies, although there are also alternative approaches used. The
implementation of deep learning for data representation is a prevalent and extensively employed
method in modern times.</p>
      <p>
        Deepfakes can jeopardize individuals’ and governments’ privacy and societal security.
Moreover, they are a grave threat to national security, with democracies increasingly at risk. Various
methods and strategies have been developed to deal with the impact of deepfakes, enabling the
detection of such content and the implementation of necessary measures[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. In contrast to the
identification of video deepfakes, which comprise a series of images, the primary objective of
deepfake image detection is to distinguish any image as fake or real. Recent research[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ],[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ],[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
examines various biological indicators to identify deepfake images, specifically focusing on eye
and gaze properties that distinguish them. Additionally, the scientists integrated these attributes
to create unique signatures, enabling a comparison between genuine and manipulated images.
This analysis encompassed geometric, visual, metric, temporal, and spectral variances.
      </p>
    </sec>
    <sec id="sec-4">
      <title>2.2. Deep learning models</title>
      <p>
        2.2.1. Autoencoder
The Autoencoder was the first technology employed in the generation of deepfakes[
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. The
purpose of the model is to reproduce the images on which it has been trained. The output is generated through
three successive stages: encoding, latent space, and decoding. The encoder compresses the
input pixels, encoding specific attributes like skin texture, color, facial expressions, open/closed
eyes, head pose and fine details, resulting in a smaller compressed image. The latent space
processes the compressed image, revealing patterns and structural similarities among the data
points. The decoder reconstructs an output by decomposing and interpreting the information
from the latent space. The decoder aims to reproduce an image as similar as possible to the
original.
      </p>
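      <p>The encode, latent space, and decode stages described above can be illustrated with a minimal Keras sketch. This is an illustrative assumption rather than the exact model of any particular deepfake tool; the layer sizes and the 150×150 input are arbitrary choices:</p>
      <preformat>
# A minimal sketch of an image autoencoder (TensorFlow/Keras assumed);
# layer sizes and the 150x150 input are illustrative assumptions.
from tensorflow.keras import layers, models

encoder = models.Sequential([
    layers.Input(shape=(150, 150, 3)),
    layers.Conv2D(32, 3, strides=2, padding="same", activation="relu"),  # compress pixels
    layers.Flatten(),
    layers.Dense(128),   # latent representation encoding facial attributes
])
decoder = models.Sequential([
    layers.Input(shape=(128,)),
    layers.Dense(75 * 75 * 32, activation="relu"),
    layers.Reshape((75, 75, 32)),
    layers.Conv2DTranspose(3, 3, strides=2, padding="same",
                           activation="sigmoid"),  # reconstruct the image
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")  # trained to reproduce its input
      </preformat>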
      <p>
        An autoencoder can be utilized to exchange two faces as shown in Figure 4. Face B is
reconstructed to look like Face A by tracing the route indicated by the red arrows. Both faces
were encoded identically. Encoding common features enables similar positioning of faces in the
latent space for the encoder. Autoencoders can swap faces in the same image. To accurately
reconstruct Face B as similar to Face A, the decoder uses Face A’s latent space as reference. This
technique is used in DeepFaceLab, DFaker, TensorFlow-based deepfakes, and other deepfake
technologies[
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
2.2.2. CNN
The convolutional neural network (CNN) is a specific type of neural network that is designed
to learn feature engineering by optimizing filters. This allows a CNN to
automatically capture relevant features from input data without the need for manual feature
engineering. As shown in Fig 5, CNNs consist of convolution layers, pooling layers, and
output layers. CNNs are commonly used in tasks such as fake photo detection and object
recognition because they excel at extracting features using principles of linear algebra, particularly
matrix multiplication, to identify patterns in images.
      </p>
      <p>
        The study in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] presents an improved dense CNN model which focuses on high generalizability and
detection accuracy over GAN-generated image datasets. Similarly, Zhu et al.[
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposed a deep
learning model for detecting deepfake images using CNNs to extract frame-level features and
detect forgeries. The method was evaluated using a large dataset of forged images from various
sources, and it yielded favorable results. Earlier research by Wang et al.[
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]
presents an approach that reveals images containing synthetic faces generated by deep neural
network models. By analyzing the entire image, the convolution network initially extracts
several low-level features through multiple layers, which subsequently combine to form more
intricate features via a succession of convolution layers. CNNs can capture more comprehensive
information from images due to the composition of their high-level features from multiple
low-level features.
2.2.3. GAN
The GAN is one of the best-known techniques for artificial image generation in computer vision. Its core
principle is based on game theory[
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In a generator-versus-discriminator competition, the
generator generates samples, and the discriminator’s task is to differentiate between real and
generated samples.
      </p>
      <p>In GANs, both the generator and discriminator learn concurrently: the generator generates
artificial images following the dataset distribution, while the discriminator distinguishes between
real and fake images. After numerous training iterations, the generator network produces images
that closely resemble real images, while the discriminator network learns to distinguish between
these produced images and real ones.</p>
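      <p>A sketch of this concurrent learning, assuming TensorFlow and a generator/discriminator pair defined elsewhere; all names here are illustrative, not taken from any specific paper:</p>
      <preformat>
# One adversarial training step: the discriminator learns to label real images 1
# and generated images 0, while the generator learns to fool it.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def train_step(real_images, generator, discriminator, g_opt, d_opt, noise_dim=100):
    noise = tf.random.normal([tf.shape(real_images)[0], noise_dim])
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_images = generator(noise, training=True)
        real_out = discriminator(real_images, training=True)
        fake_out = discriminator(fake_images, training=True)
        # Discriminator loss: real samples toward 1, generated samples toward 0.
        d_loss = (bce(tf.ones_like(real_out), real_out) +
                  bce(tf.zeros_like(fake_out), fake_out))
        # Generator loss: generated samples should be judged real.
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    g_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                              generator.trainable_variables))
    d_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                              discriminator.trainable_variables))
      </preformat>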
      <p>The discriminator model within a GAN is responsible for
classifying an input example from the problem domain as either a real instance or one that
has been generated. Its main task is to predict a binary label, distinguishing between real and
fake. GAN architecture design is the process of arranging and
planning the structure of GANs to enhance their performance.</p>
      <p>
        GAN technology was first introduced in 2014 by [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], see Fig 8. In the past few
years, significant advances have been made in both generating and detecting GAN-produced fake
pictures. In the work by [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the authors present an approach for detecting GAN-generated images
through the generalization of an unsupervised domain adaptation model. The results show
significant generalization accuracy improvement over StyleGAN[
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], StarGAN[
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], StyleGAN2[
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]
and PGGAN[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Zhang et al.[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] developed AutoGAN, a system capable of replicating the
synthetic imperfections found in GAN-generated images. This model incorporates upsampling
techniques. Also in 2023, Monkam et al.[
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] introduced the G-JOB GAN model, which achieved
95.7% accuracy using a 4096-image subset of the CelebA dataset.
      </p>
      <p>[Table: summary of related deepfake detection studies, listing publication venue (IEEE Transactions on Information Forensics and Security, IEEE Transactions on Multimedia, Mathematics MDPI, Revue d’Intelligence Artificielle, Applied Sciences MDPI, Security and Communication Networks, Computational Intelligence and Neuroscience, IEEE Access) and reported accuracy in %.]</p>
      <sec id="sec-4-1">
        <title>3. State-of-the-Art Datasets for Deepfake Image Detection</title>
        <p>
          A variety of deepfake visuals have been created over the last few years utilizing different frameworks
including AttGAN, StarGAN, GDWCT, StyleGAN, and StyleGAN2. The image datasets listed
below are primarily utilized for deepfake image detection purposes:
1. CelebA: 202,599 high-quality images of celebrities are present in the dataset, accompanied
by detailed annotations.[
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]
2. FF++: the dataset comprises 1000 authentic video sequences that were modified using
four automated face manipulation techniques: FaceSwap, Face2Face, Deepfakes, and
NeuralTextures.[
          <xref ref-type="bibr" rid="ref27">27</xref>
          ]
3. LSUN: the dataset contains around one million face images that have been labeled.[
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]
4. CELEB-DF: 590 YouTube videos representing a mix of ages, ethnicities, and genders
make up the dataset. Furthermore, it contains 5639 DeepFake videos that replicate the
original content.[
          <xref ref-type="bibr" rid="ref29">29</xref>
          ]
5. HFF: the dataset contains a significant number of synthetic facial images, consisting
of more than 155,000 face photos.[
          <xref ref-type="bibr" rid="ref30">30</xref>
          ]
6. DigiFace-1M: the dataset consists of an extensive collection of over one million
artificial facial images, covering a broad spectrum of diversity.[
          <xref ref-type="bibr" rid="ref31">31</xref>
          ]
7. AttGAN: a well-built dataset of over 30,000 images.[
          <xref ref-type="bibr" rid="ref32">32</xref>
          ]
8. StyleGAN: the image dataset contains over 7000 synthesized images.[
          <xref ref-type="bibr" rid="ref33">33</xref>
          ]
9. StyleGAN2: the dataset covers 100,000 fake face images.[
          <xref ref-type="bibr" rid="ref34">34</xref>
          ]
10. OpenForensics: Multi-Face Forgery Detection And Segmentation In-The Wild dataset [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]
consists of over 1,90,000 real and fake images.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>4. Proposed Conv2D Model</title>
        <p>This section describes the CNN architecture proposed here (Fig 7), which combines
convolutional and pooling layers. Convolutional layers extract image features, while pooling layers
reduce the dimensionality of the feature maps. After being processed by the convolutional
layers, the feature maps are flattened into a one-dimensional array for input
to the fully connected layer. Once the fully connected layer has processed these features,
the output layer determines the predicted class. The proposed Conv2D model
presented in this study is designed for binary image classification tasks.</p>
        <p>The Conv2D model carries out deepfake image detection in five phases:
1. Dataset Collection involves gathering authentic data, the initial task of any deep
learning model.
2. Data Preprocessing begins with resizing the images, followed by augmentation to increase the
diversity of the dataset (a sketch of this step is shown after this list).
3. Model Training uses the preprocessed images to train the deep learning model,
here the Conv2D model, so that it learns to distinguish between
the features present in real and fake images.
4. Model Evaluation is the fourth phase, where we evaluate the trained model using a separate
validation dataset to measure its performance in distinguishing between real and fake
images. Metrics such as accuracy can be used for evaluation.
5. Deepfake Image Detection happens once the model demonstrates satisfactory performance;
we then deploy it to classify new and unseen images as real or fake.</p>
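        <p>As a concrete illustration of the preprocessing phase, the following Keras sketch resizes and rescales images and applies augmentation; the specific augmentation operations (horizontal flips, small rotations) are illustrative assumptions, as the paper does not enumerate them:</p>
        <preformat>
# Preprocessing sketch (TensorFlow/Keras assumed): resize to the model's
# 150x150 input, rescale pixels, and augment to increase dataset diversity.
import tensorflow as tf

preprocess = tf.keras.Sequential([
    tf.keras.layers.Resizing(150, 150),      # match the model's input size
    tf.keras.layers.Rescaling(1.0 / 255),    # scale pixel values to [0, 1]
])
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),  # illustrative augmentation choices
    tf.keras.layers.RandomRotation(0.1),
])
        </preformat>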
      </sec>
      <sec id="sec-4-3">
        <title>5. Results &amp; Discussions</title>
        <p>In this section, we discuss the effectiveness of the suggested design and the outcomes obtained.</p>
        <p>Each layer of the model contributes to efficient training in the following ways (a sketch of the full model definition follows this list):
• A 3x3 convolutional layer containing 32 filters initiates the model. The activation function
for this layer is ReLU (Rectified Linear Unit). This layer receives a 150x150 RGB image as
input.
• The second layer is a max pooling operation with a pool size of 2x2. This layer compresses
the input’s spatial dimensions by selecting the maximum value within a given window
determined by the pool size.
• A 3×3 convolutional layer with 64 filters and ReLU activation is implemented as the third
layer.
• The fourth layer is another max pooling layer with a pool size of 2×2.
• The fifth layer comprises a convolutional structure with 128 filters of size 3×3, all applying
the ReLU activation function.
• The sixth layer is a max pooling layer with a 2x2 pool size.
• The flatten layer is the seventh layer; it converts the 2D feature maps into a 1D vector.
• The eighth layer consists of 1064 neurons, fully connected to the previous layer, and
employs the ReLU activation function.
• The final dense layer comprises 2 neurons representing the two classes, ’Fake’ and ’Real’,
and is activated by the softmax function to deliver probabilities.</p>
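        <p>A minimal Keras sketch of the architecture enumerated above, assuming TensorFlow 2.x; the variable names are illustrative:</p>
        <preformat>
# Sequential Conv2D model following the layer list above.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(150, 150, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(128, (3, 3), activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(1064, activation="relu"),
    layers.Dense(2, activation="softmax"),   # class probabilities: 'Fake' vs. 'Real'
])
        </preformat>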
        <p>This design takes advantage of CNNs to extract hierarchical features from images, which are
then utilized for the binary classification task. By incorporating multiple convolutional and
pooling layers, the model is able to grasp complex patterns in the data. The dense layers at the
end of the model carry out the final classification by leveraging these learned features.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5.1. Simulation Setup</title>
      <p>The model’s training, testing, and implementation were carried out using the TensorFlow and Keras
libraries in Python on an Intel Core i7 11th-generation CPU. We conducted
the trials using an NVIDIA GeForce RTX 3060 graphics processing unit (GPU), with
16 GB of random-access memory (RAM).</p>
    </sec>
    <sec id="sec-6">
      <title>5.2. Dataset Description</title>
      <p>
        We have chosen to utilize the OpenForensics dataset [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], which contains over 1,90,000 real
and fake images. OpenForensics is the first large-scale dataset posing a significant challenge for this task.
It is designed with rich, face-specific annotations explicitly for face forgery detection
and segmentation. The OpenForensics dataset has great value for research in both deepfake
prevention and general face detection because of its rich annotations. It is a balanced
dataset with a resolution of 256×256 pixels. The training, testing, and validation images are divided into
two classes, real and fake, containing a total of 140002, 39428 &amp; 10905 images respectively.
      </p>
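      <p>A sketch of loading the three splits, assuming the images are arranged in per-class subfolders (real/fake) under hypothetical directory names; tf.keras.utils.image_dataset_from_directory infers integer labels compatible with sparse categorical cross-entropy:</p>
      <preformat>
# Loading the OpenForensics splits (paths and batch size are illustrative).
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "openforensics/train", image_size=(150, 150), batch_size=50)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "openforensics/validation", image_size=(150, 150), batch_size=50)
test_ds = tf.keras.utils.image_dataset_from_directory(
    "openforensics/test", image_size=(150, 150), batch_size=50)
      </preformat>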
    </sec>
    <sec id="sec-7">
      <title>5.3. Evaluation metrics and Discussions</title>
      <p>In our experimentation, we use accuracy scores to measure the model’s performance. Accuracy
is one of the most widely used evaluation metrics in machine learning, especially for classification
problems like image classification using convolutional neural networks (CNNs).</p>
      <p>Accuracy is the proportion of correct predictions to the total number of predictions.
Mathematically, it can be expressed as in equation (1):</p>
      <p>Accuracy = Number of Correct Predictions / Total Number of Predictions</p>
      <p>In a binary categorization problem, this can also be written as:</p>
      <p>Accuracy = (TP + TN) / (TP + TN + FP + FN)    (1)</p>
      <p>where (a short worked example follows the definitions below):
• TP represents correctly identified positive instances.
• The count of correctly identified negative instances is referred to as TN.
• False positives, FP, represent instances classified as positive when they should have been
negative.</p>
      <p>• False negatives, FN, represent incorrectly identified negative instances.</p>
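      <p>A short numeric example of equation (1); the confusion-matrix counts below are illustrative, not results from this paper:</p>
      <preformat>
# Worked example of equation (1) with made-up counts.
TP, TN, FP, FN = 45, 40, 3, 2
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)   # 0.9444, i.e. 94.44%
      </preformat>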
      <p>Accuracy is a straightforward metric that provides a general measure of how well a model is
performing across all classes. In the context of image classification with CNNs, accuracy can
give us a quick understanding of how well our model is able to correctly classify images. For
our Conv2D model, we have used sparse categorical cross-entropy loss and the Adam optimizer
to improve the model’s learning. Trained over 10 epochs with a validation batch size of 50,
we achieved 99.36% training accuracy and 94.54% validation accuracy, as shown in
Figure 9. The model loss over time is shown in Figure 10.</p>
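      <p>The training configuration described above can be sketched as follows; the dataset objects train_ds and val_ds are assumed to come from the loading sketch in Section 5.2:</p>
      <preformat>
# Compile with sparse categorical cross-entropy and the Adam optimizer,
# then train for 10 epochs while tracking validation accuracy.
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
history = model.fit(train_ds, validation_data=val_ds, epochs=10)
      </preformat>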
      <p>
        Different DeepFake detection models [
        <xref ref-type="bibr" rid="ref24 ref3 ref4">3, 24, 4</xref>
        ] are trained on various datasets containing
noticeable artifact characteristics like low resolution, color discrepancies, and visible boundaries.
These learned features might not be effective when applied to a high-quality DeepFake dataset
like OpenForensics[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], leading to a decrease in performance. In addition, the experimental
results show that our model maintains an average accuracy above 90%, which is
reasonable with respect to the latest deepfake image detection models, and it achieves
this accuracy over a large-scale dataset.
      </p>
      <sec id="sec-7-1">
        <title>6. Conclusion and Future Work</title>
        <p>
          Detecting deepfake content has always been a challenging task due to its unique level of
abstraction. Traditionally, the problem is categorized as a binary classification issue, distinguishing
between pristine and deepfake labels. To address this issue, a CNN-based Conv2D architecture
has been proposed in our research to effectively identify deepfake images. The architecture has
demonstrated an impressive accuracy of 94.54% when trained on the extensive OpenForensics
dataset[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ], which consists of both real and fake image classes. Despite observing an increase
in model loss over time, the accuracy of the model remains excellent on validation data.
Furthermore, this work can be expanded to classify open image datasets and video deepfake
content. For video deepfake detection, the model can process each frame by extracting the
face, cropping it, and then applying the model to detect deepfake falsifications. A pipeline
can be created to implement this process for handling video data. The proposed CNN-based
model, which utilizes diverse data augmentation methods, demonstrates strong performance
and equilibrium across the dataset.
        </p>
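        <p>A sketch of the frame-level video pipeline suggested above, assuming OpenCV for face extraction; the Haar-cascade detector and the file path are illustrative choices, not this paper’s implementation:</p>
        <preformat>
# Extract faces frame by frame, crop and resize them, then classify each crop.
import cv2

cap = cv2.VideoCapture("input_video.mp4")
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.1, 4):
        face = cv2.resize(frame[y:y + h, x:x + w], (150, 150))
        # ...feed `face` to the trained Conv2D model for a fake/real prediction
cap.release()
        </preformat>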
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>T.-N.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yamagishi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Echizen</surname>
          </string-name>
          ,
          <article-title>OpenForensics: Large-scale challenging dataset for multi-face forgery detection and segmentation in-the-wild</article-title>
          ,
          <source>in: International Conference on Computer Vision</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Patel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tanwar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bhattacharya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Alsuwian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. E.</given-names>
            <surname>Davidson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. F.</given-names>
            <surname>Mazibuko</surname>
          </string-name>
          ,
          <article-title>An Improved Dense CNN Architecture for Deepfake Image Detection</article-title>
          ,
          <source>IEEE Access 11</source>
          (
          <year>2023</year>
          )
          <fpage>22081</fpage>
          -
          <lpage>22095</lpage>
          . URL: https://ieeexplore.ieee.org/document/10057390/. doi:
          <volume>10</volume>
          .1109/ ACCESS.
          <year>2023</year>
          .
          <volume>3251417</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G. R.</given-names>
            <surname>Panigrah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. K.</given-names>
            <surname>Sethy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. P. R.</given-names>
            <surname>Borra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Barpanda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. K.</given-names>
            <surname>Behera</surname>
          </string-name>
          ,
          <article-title>Deep Ensemble Learning for Fake Digital Image Detection: A Convolutional Neural Network-Based Approach</article-title>
          , Revue d'
          <source>Intelligence Artificielle</source>
          <volume>37</volume>
          (
          <year>2023</year>
          )
          <fpage>703</fpage>
          -
          <lpage>708</lpage>
          . URL: https://iieta.org/ journals/ria/paper/10.18280/ria.370318. doi:
          <volume>10</volume>
          .18280/ria.370318.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Shad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Rizvee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. T.</given-names>
            <surname>Roza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M. A.</given-names>
            <surname>Hoq</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Monirujjaman</given-names>
            <surname>Khan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zaguia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bourouis</surname>
          </string-name>
          ,
          <article-title>Comparative Analysis of Deepfake Image Detection Method Using Convolutional Neural Network</article-title>
          ,
          <source>Computational Intelligence and Neuroscience</source>
          <year>2021</year>
          (
          <year>2021</year>
          )
          <article-title>e3111676</article-title>
          . URL: https://www.hindawi.com/journals/cin/2021/3111676/. doi:
          <volume>10</volume>
          .1155/
          <year>2021</year>
          / 3111676.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H. F.</given-names>
            <surname>Shahzad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Rustam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. S.</given-names>
            <surname>Flores</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Luís</given-names>
            <surname>Vidal Mazón</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>De La Torre Diez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ashraf</surname>
          </string-name>
          ,
          <source>A Review of Image Processing Techniques for Deepfakes, Sensors</source>
          <volume>22</volume>
          (
          <year>2022</year>
          )
          <article-title>4556</article-title>
          . URL: https://www.mdpi.com/1424-8220/22/12/4556. doi:
          <volume>10</volume>
          .3390/s22124556.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Wan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Peng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Qin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <source>Deepfake Detection Algorithm Based on DualBranch Data Augmentation and Modified Attention Mechanism, Applied Sciences</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <article-title>8313</article-title>
          . URL: https://www.mdpi.com/2076-3417/13/14/8313. doi:
          <volume>10</volume>
          .3390/app13148313.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>T. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V. H.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Huynh-The</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nahavandi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. T.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.-V.</given-names>
            <surname>Pham</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. M.</given-names>
            <surname>Nguyen</surname>
          </string-name>
          ,
          <article-title>Deep Learning for Deepfakes Creation and Detection: A Survey</article-title>
          ,
          <source>Computer Vision and Image Understanding</source>
          <volume>223</volume>
          (
          <year>2022</year>
          )
          <article-title>103525</article-title>
          . URL: http://arxiv.org/abs/
          <year>1909</year>
          .11573. doi:
          <volume>10</volume>
          .1016/j.cviu.
          <year>2022</year>
          .
          <volume>103525</volume>
          , arXiv:
          <year>1909</year>
          .11573 [cs, eess].
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>R.</given-names>
            <surname>Katarya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lal</surname>
          </string-name>
          ,
          <article-title>A Study on Combating Emerging Threat of Deepfake Weaponization, in: 2020 Fourth International Conference on I-SMAC (IoT in Social, Mobile, Analytics and Cloud) (I-SMAC)</article-title>
          , IEEE, Palladam, India,
          <year>2020</year>
          , pp.
          <fpage>485</fpage>
          -
          <lpage>490</lpage>
          . URL: https://ieeexplore.ieee. org/document/9243588/. doi:
          <volume>10</volume>
          .1109/I- SMAC49090.
          <year>2020</year>
          .
          <volume>9243588</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Tian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>Information-Containing Adversarial Perturbation for Combating Facial Manipulation Systems</article-title>
          ,
          <source>IEEE Transactions on Information Forensics and Security</source>
          <volume>18</volume>
          (
          <year>2023</year>
          )
          <fpage>2046</fpage>
          -
          <lpage>2059</lpage>
          . URL: https://ieeexplore.ieee. org/document/10086559/. doi:
          <volume>10</volume>
          .1109/TIFS.
          <year>2023</year>
          .
          <volume>3262156</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>S.-Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Owens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Efros</surname>
          </string-name>
          ,
          <article-title>CNN-generated images are surprisingly easy to spot</article-title>
          ... for now,
          <year>2020</year>
          . URL: http://arxiv.org/abs/
          <year>1912</year>
          .11035, arXiv:
          <year>1912</year>
          .11035 [cs].
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohebbi Moghaddam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Boroomand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Jalali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zareian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Daeijavad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. H.</given-names>
            <surname>Manshaei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Krunz</surname>
          </string-name>
          ,
          <article-title>Games of GANs: game-theoretical models for generative adversarial networks</article-title>
          ,
          <source>Artificial Intelligence Review</source>
          <volume>56</volume>
          (
          <year>2023</year>
          )
          <fpage>9771</fpage>
          -
          <lpage>9807</lpage>
          . URL: https: //doi.org/10.1007/s10462-023-10395-6. doi:
          <volume>10</volume>
          .1007/s10462- 023- 10395- 6.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ozair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Courville</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          , Generative Adversarial Nets (
          <year>2014</year>
          ). URL: https://dl.acm.org/doi/10.5555/2969033. 2969125.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Malik</surname>
          </string-name>
          , H. Liu,
          <article-title>Improving GAN-Generated Image Detection Generalization Using Unsupervised Domain Adaptation</article-title>
          , in: 2022
          <source>IEEE International Conference on Multimedia and Expo (ICME)</source>
          , IEEE, Taipei, Taiwan,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . URL: https://ieeexplore.ieee.org/document/9859763/. doi:
          <volume>10</volume>
          .1109/ICME52920.
          <year>2022</year>
          .
          <volume>9859763</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <article-title>A Style-Based Generator Architecture for Generative Adversarial Networks</article-title>
          , in: 2019 IEEE/CVF Conference on
          <article-title>Computer Vision and Pattern Recognition (CVPR), IEEE</article-title>
          , Long Beach, CA, USA,
          <year>2019</year>
          , pp.
          <fpage>4396</fpage>
          -
          <lpage>4405</lpage>
          . URL: https://ieeexplore.ieee. org/document/8953766/. doi:
          <volume>10</volume>
          .1109/CVPR.
          <year>2019</year>
          .
          <volume>00453</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-W.</given-names>
            <surname>Ha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. Choo,</surname>
          </string-name>
          <article-title>StarGAN: Unified Generative Adversarial Networks for Multi-domain Image-to-Image Translation</article-title>
          , in: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, IEEE, Salt Lake City, UT,
          <year>2018</year>
          , pp.
          <fpage>8789</fpage>
          -
          <lpage>8797</lpage>
          . URL: https://ieeexplore.ieee.org/document/8579014/. doi:
          <volume>10</volume>
          .1109/CVPR.
          <year>2018</year>
          .
          <volume>00916</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Aittala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hellsten</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          , T. Aila,
          <source>Analyzing and Improving the Image Quality of StyleGAN</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>8110</fpage>
          -
          <lpage>8119</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>T.</given-names>
            <surname>Karras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Aila</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Laine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Lehtinen</surname>
          </string-name>
          ,
          <article-title>Progressive Growing of GANs for Improved Quality, Stability, and Variation</article-title>
          ,
          <year>2018</year>
          . URL: http://arxiv.org/abs/1710.10196. doi:
          <volume>10</volume>
          .48550/arXiv. 1710.10196, arXiv:
          <fpage>1710</fpage>
          .10196 [cs, stat].
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>X.</given-names>
            <surname>Zhang</surname>
          </string-name>
          , S. Karaman,
          <string-name>
            <given-names>S.-F.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <article-title>Detecting and Simulating Artifacts in GAN Fake Images</article-title>
          , in: 2019
          <source>IEEE International Workshop on Information Forensics and Security (WIFS)</source>
          , IEEE, Delft, Netherlands,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . URL: https://ieeexplore.ieee.org/document/9035107/. doi:
          <volume>10</volume>
          .1109/WIFS47025.
          <year>2019</year>
          .
          <volume>9035107</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>G.</given-names>
            <surname>Monkam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <article-title>A GAN-based Approach to Detect AI-Generated Images</article-title>
          ,
          <source>in: 2023 26th ACIS International Winter Conference on Software Engineering, Artificial Intelligence</source>
          ,
          <article-title>Networking and Parallel/Distributed Computing (SNPD-Winter)</article-title>
          , IEEE, Taiyuan, Taiwan,
          <year>2023</year>
          , pp.
          <fpage>229</fpage>
          -
          <lpage>232</lpage>
          . URL: https://ieeexplore.ieee.org/document/10223798/. doi:
          <volume>10</volume>
          .1109/ SNPD- Winter57765.
          <year>2023</year>
          .
          <volume>10223798</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ju</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Guan</surname>
          </string-name>
          , S. Lyu, GLFF:
          <article-title>Global and Local Feature Fusion for AI-synthesized Image Detection</article-title>
          , IEEE Transactions on Multimedia (
          <year>2023</year>
          ). URL: https://ieeexplore.ieee. org/abstract/document/10246417, https://github.com/littlejuyan/GLFF.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>B.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Shan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Wei</surname>
          </string-name>
          ,
          <article-title>Frequency Domain Filtered Residual Network for Deepfake Detection</article-title>
          ,
          <source>Mathematics</source>
          <volume>11</volume>
          (
          <year>2023</year>
          )
          <article-title>816</article-title>
          . URL: https://www.mdpi.com/ 2227-7390/11/4/816. doi:
          <volume>10</volume>
          .3390/math11040816, number: 4 Publisher: Multidisciplinary Digital Publishing Institute.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Khalil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. Z.</given-names>
            <surname>Ghalwash</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.-G.</given-names>
            <surname>Elsayed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. I.</given-names>
            <surname>Salama</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. A.</given-names>
            <surname>Ghalwash</surname>
          </string-name>
          ,
          <article-title>Enhancing Digital Image Forgery Detection Using Transfer Learning</article-title>
          ,
          <source>IEEE Access 11</source>
          (
          <year>2023</year>
          )
          <fpage>91583</fpage>
          -
          <lpage>91594</lpage>
          . URL: https://ieeexplore.ieee.org/document/10226188/. doi:
          <volume>10</volume>
          .1109/ACCESS.
          <year>2023</year>
          .
          <volume>3307357</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>A.</given-names>
            <surname>Raza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Munir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Almutairi</surname>
          </string-name>
          ,
<article-title>A Novel Deep Learning Approach for Deepfake Image Detection</article-title>
,
          <source>Applied Sciences</source>
          <volume>12</volume>
          (
          <year>2022</year>
          )
<fpage>9820</fpage>
. URL: https://www.mdpi.com/2076-3417/12/19/9820. doi:10.3390/app12199820.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Guarnera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Giudice</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Guarnera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ortis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Puglisi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Paratore</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M. Q.</given-names>
            <surname>Bui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fontani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Coccomini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Caldelli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Falchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gennaro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Messina</surname>
          </string-name>
,
<string-name><given-names>G.</given-names> <surname>Amato</surname></string-name>
,
<string-name><given-names>G.</given-names> <surname>Perelli</surname></string-name>
,
          <string-name>
            <given-names>S.</given-names>
            <surname>Concas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Cuccu</surname>
          </string-name>
,
<string-name><given-names>G.</given-names> <surname>Orrù</surname></string-name>
,
          <string-name>
            <given-names>G. L.</given-names>
            <surname>Marcialis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Battiato</surname>
          </string-name>
          ,
          <article-title>The Face Deepfake Detection Challenge</article-title>
          ,
          <source>Journal of Imaging</source>
          <volume>8</volume>
          (
          <year>2022</year>
          )
<fpage>263</fpage>
. URL: https://www.mdpi.com/2313-433X/8/10/263. doi:10.3390/jimaging8100263, https://iplab.dmi.unict.it/deepfakechallenge/.
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>G.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
<article-title>Detection of GAN-Synthesized Image Based on Discrete Wavelet Transform</article-title>
,
<source>Security and Communication Networks</source>
<volume>2021</volume>
(
<year>2021</year>
)
<fpage>1</fpage>
-
<lpage>10</lpage>
. URL: https://www.hindawi.com/journals/scn/2021/5511435/. doi:10.1155/2021/5511435, https://github.com/peterwang512/CNNDetection.
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
<source>CelebFaces Attributes (CelebA) Dataset</source>
,
<year>2015</year>
. URL: https://www.kaggle.com/datasets/jessicali9530/celeba-dataset.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
[27] ondyari,
<source>FaceForensics++: Learning to Detect Manipulated Facial Images</source>
,
<year>2023</year>
. URL: https://github.com/ondyari/FaceForensics, original-date: 2018-04-13T12:47:46Z.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>F.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
<source>LSUN</source>
,
<year>2023</year>
. URL: https://github.com/fyu/lsun, original-date: 2015-04-02T01:06:34Z.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
<article-title>Celeb-DF: A Large-scale Challenging Dataset for DeepFake Forensics</article-title>
,
<year>2023</year>
. URL: https://github.com/yuezunli/celeb-deepfakeforensics, original-date: 2019-10-02T00:31:06Z.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Fake face detection via adaptive manipulation traces extraction network</article-title>
          ,
          <source>Computer Vision and Image Understanding</source>
          <volume>204</volume>
          (
          <year>2021</year>
          )
<fpage>103170</fpage>
. URL: https://linkinghub.elsevier.com/retrieve/pii/S107731422100014X. doi:10.1016/j.cviu.2021.103170, https://github.com/EricGzq/AMTENnet.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
<source>DigiFace-1M Dataset</source>
,
<year>2022</year>
. URL: https://github.com/microsoft/DigiFace1M, original-date: 2022-09-15T09:35:25Z.
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
<string-name><given-names>E. Y.-J.</given-names> <surname>Lin</surname></string-name>
,
<source>AttGAN-PyTorch</source>
,
<year>2023</year>
. URL: https://github.com/elvisyjlin/AttGAN-PyTorch, original-date: 2018-11-28T08:56:52Z.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
[33] NVlabs/stylegan,
<year>2023</year>
. URL: https://github.com/NVlabs/stylegan, original-date: 2019-02-04T15:33:58Z.
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
[34] NVlabs/stylegan2,
<year>2023</year>
. URL: https://github.com/NVlabs/stylegan2, original-date: 2019-11-26T20:52:23Z.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>