<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Survey: Deepfake and Current Technologies for Solutions</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sayan</forename><surname>Banerjee</surname></persName>
							<email>banerjeesayan554@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Technology</orgName>
								<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohunpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sumit</forename><surname>Kumar</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Technology</orgName>
								<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohunpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ankit</forename><surname>Dhara</surname></persName>
							<email>ankitdhara8250@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Technology</orgName>
								<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohunpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Md</forename><surname>Ajij</surname></persName>
							<email>mdajij@nbu.ac.in</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science and Technology</orgName>
								<orgName type="institution">University of North Bengal</orgName>
								<address>
									<addrLine>Raja Rammohunpur</addrLine>
									<postCode>734013</postCode>
									<settlement>Darjeeling</settlement>
									<region>West Bengal</region>
									<country key="IN">India</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Survey: Deepfake and Current Technologies for Solutions</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E3474D5B0905BC7191254891FC64B960</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Deepfake</term>
					<term>Survey</term>
					<term>Advanced machine learning models</term>
					<term>Generative Adversarial Networks (GANs)</term>
					<term>Convolutional Neural Networks (CNN)</term>
					<term>Recurrent Neural Networks (RNN)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper offers a detailed survey of deepfake detection methods, addressing the challenges posed by the fast-paced advancements in deepfake technology. It provides an overview of various detection techniques, examining their effectiveness in identifying manipulated content. The survey covers traditional detection strategies, such as digital forensics and watermarking, as well as modern AI-driven approaches like convolutional and recurrent neural networks. The study delves into the key features of deepfake technology, which leverages advanced machine learning models, particularly Generative Adversarial Networks (GANs), to manipulate video, audio, and images. These techniques have led to the creation of highly realistic synthetic media that is increasingly difficult to detect, raising serious concerns about privacy, misinformation, and security. Recent progress in deepfake detection has focused on improving the accuracy and efficiency of real-time solutions. Approaches that integrate visual, audio, and behavioural cues have demonstrated significant potential in distinguishing authentic content from fake media. Despite these advancements, there remains an urgent need for detection systems that can generalize effectively across different types of deepfakes, as many current models struggle with previously unseen or extremely realistic synthetic content. The survey reviews a broad spectrum of detection methods, assessing their strengths, weaknesses, and performance on various datasets. It also identifies gaps in the current research landscape and suggests directions for future work, emphasizing the importance of developing more robust and scalable detection frameworks.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Deepfakes, a term combining "deep learning" and "fake", describe highly convincing synthetic media produced using advanced machine learning techniques. Emerging in 2017, deepfakes initially focused on facial manipulation in videos. Since then, the technology has expanded to encompass audio and image alteration. Using algorithms like Generative Adversarial Networks (GANs), deepfakes can realistically swap faces, modify facial expressions, and even mimic voices, making it increasingly challenging to distinguish between genuine and synthetic content. Although initially developed for entertainment purposes, deepfake technology has evolved rapidly, bringing with it significant implications for digital privacy, security, and the reliability of online information.</p><p>The swift advancement of deepfake technology is both impressive and concerning. As the algorithms become more sophisticated, so does the quality of synthetic content. This progress has sparked worries about the potential misuse of deepfakes for spreading misinformation, committing fraud, and facilitating identity theft. Deepfakes have already been used in disinformation campaigns, influencing public perception and casting doubt on media authenticity. Their potential to erode trust in individuals and institutions underscores the urgent need for effective detection and prevention measures.</p><p>This paper seeks to offer an in-depth survey of the existing methods for detecting and mitigating deepfakes. By examining various techniques, such as facial feature analysis, biometric inconsistencies, and behavioural patterns, the study assesses the effectiveness of these approaches across different datasets and scenarios. 
The goal is to highlight current solutions while identifying research gaps and suggesting future directions to address the growing sophistication of deepfake technology.</p><p>The motivation for this survey stems from the increasing need for reliable systems capable of accurately and efficiently detecting synthetic media. As deepfakes become more prevalent and easily accessible, developing robust detection methods is crucial to protect privacy, uphold the integrity of digital content, and prevent misuse. This paper aims to contribute to this effort by thoroughly analysing the current state of deepfake detection, supporting the development of more advanced and dependable solutions.</p><p>Deepfake technology, a product of advancements in artificial intelligence (AI), specifically deep learning, enables the creation of hyper-realistic synthetic media that can manipulate audio, video, and images to mimic reality convincingly. While this technology offers legitimate applications, such as in entertainment and education, its misuse poses significant societal threats. Deepfakes have been used to spread misinformation, perpetuate fraud, violate individual privacy, and destabilize public trust <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. The societal implications of deepfake proliferation are profound. For example, deepfakes can undermine democratic processes by creating fabricated political speeches or events <ref type="bibr" target="#b2">[3]</ref>. They can also inflict personal and institutional harm, such as identity theft and reputational damage <ref type="bibr" target="#b3">[4]</ref>. Moreover, the accessibility of deepfake-generating tools exacerbates the problem by enabling individuals with minimal technical expertise to create deceptive content <ref type="bibr" target="#b4">[5]</ref>. 
These issues necessitate urgent attention and robust countermeasures to combat the deepfake menace effectively.</p><p>Existing reviews on deepfake technologies primarily focus on foundational concepts and early detection mechanisms. However, the rapid evolution of AI and the growing sophistication of deepfake creation techniques have rendered many of these reviews outdated <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. This survey aims to fill the gap by providing a comprehensive overview of recent advancements in deepfake detection, prevention, and mitigation strategies. It also emphasizes the importance of addressing the societal and ethical challenges associated with deepfakes <ref type="bibr" target="#b7">[8]</ref>.</p><p>We hypothesize that advancements in machine learning, AI, and cybersecurity offer promising solutions to mitigate the threats posed by deepfakes. By leveraging innovative detection techniques, regulatory frameworks, and collaborative efforts, it is possible to reduce the negative impacts of deepfake technology effectively <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>.</p><p>This survey is guided by several objectives: to consolidate and evaluate current solutions to the challenges posed by deepfakes, to identify gaps and limitations in existing approaches to deepfake detection and mitigation, and to propose future research directions and strategies for combating deepfake-related issues. 
The scope of this survey encompasses deepfake detection techniques, including machine learning and digital watermarking methods <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b8">9]</ref>, prevention strategies such as AI-generated content authentication and multi-modal analysis <ref type="bibr" target="#b9">[10]</ref>, and mitigation efforts, including regulatory frameworks, ethical considerations, and public awareness campaigns <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>The remainder of this paper is organized as follows: Section 2 reviews deepfake technology, including its evolution, societal implications, and research gaps. Section 3 details the workflow of deepfake detection, highlighting key stages and methodologies. Section 4 outlines detection and mitigation approaches, comparing techniques and evaluation metrics. Section 5 discusses findings, trends, datasets, and mathematical foundations. Section 6 identifies challenges and research gaps, including dataset limitations and real-time detection issues. Section 7 explores recommendations and potential impacts. Section 8 concludes with a summary of findings and the importance of addressing gaps.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature Review</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Historical Background</head><p>Deepfake technology has transformed the digital landscape, leveraging advancements in artificial intelligence and deep learning. The early foundation of this field was laid with the development of Generative Adversarial Networks (GANs), which facilitated the creation of hyper-realistic visual and audio content <ref type="bibr" target="#b11">[12]</ref>. Initially, deepfakes found applications in entertainment and creative industries, such as enhancing visual effects in movies and creating virtual influencers <ref type="bibr" target="#b12">[13]</ref>. However, their malicious use for spreading misinformation, violating privacy, and manipulating political narratives has garnered significant attention <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. The dual-edged nature of this technology highlights both its innovative potential and the ethical dilemmas it poses.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Key Findings</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Categorization of Detection Methods</head><p>Research efforts in deepfake detection have yielded several methodologies, each with distinct approaches and objectives:</p><p>• AI-Based Techniques: Machine learning and deep learning algorithms, particularly Convolutional Neural Networks (CNNs), have achieved notable success in identifying deepfakes by detecting artifacts introduced during the generation process <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>. Advanced models such as recurrent neural networks (RNNs) and transformers have also been explored to analyze temporal inconsistencies in videos <ref type="bibr" target="#b17">[18]</ref>. Pre-trained models and transfer learning have further enhanced the efficiency of these techniques. • Signal Processing Approaches: Signal processing-based methods focus on identifying spatial and temporal anomalies in manipulated media. These methods often examine discrepancies in frame transitions, lighting inconsistencies, and unnatural blending between facial regions <ref type="bibr" target="#b18">[19]</ref>.</p><p>Techniques such as spectral analysis and phase correlation are employed to uncover hidden manipulations that are otherwise challenging to detect. • Blockchain Solutions: Blockchain technology is increasingly being adopted for media authentication and traceability. By leveraging immutable ledgers, these solutions can validate the origin and integrity of digital content, thereby providing a robust mechanism to counteract deepfake manipulation <ref type="bibr" target="#b16">[17]</ref>. Integration with smart contracts can further automate validation processes, enhancing reliability. • Feature Extraction-Based Methods: Feature extraction-based approaches analyze unique patterns within media to differentiate between authentic and manipulated content. 
Techniques such as frequency domain analysis, optical flow analysis, and texture-based methods have been employed to identify irregularities that are imperceptible to the human eye <ref type="bibr" target="#b19">[20]</ref>. In addition, facial landmark detection and biomechanical consistency checks provide granular insights into potential manipulations. • Hybrid Approaches: Hybrid methods combine multiple techniques, such as integrating AI-based algorithms with signal processing or blockchain frameworks, to enhance detection accuracy. These approaches aim to capitalize on the strengths of each methodology while mitigating their individual limitations <ref type="bibr" target="#b20">[21]</ref>. Examples include combining temporal analysis with CNN-based models or integrating blockchain verification with real-time anomaly detection algorithms.</p><p>The timeline of deepfake evolution, as shown in Figure <ref type="figure" target="#fig_0">1</ref>, provides a detailed overview of the technological advancements that have driven this field. It highlights critical breakthroughs, including the introduction of Generative Adversarial Networks (GANs) in 2014, which revolutionized content generation by enabling high-quality synthetic media. Subsequent developments include advanced autoencoders and transfer learning techniques, which improved model scalability and personalization. The timeline also emphasizes the rise of real-time face reenactment systems, deep neural networks for voice synthesis, and advancements in deepfake detection algorithms. These milestones underline the rapid growth and sophistication of this technology, posing significant challenges and opportunities in various domains. Datasets such as the DeepFake Detection Challenge (DFDC) and FaceForensics++ have underpinned advancements in detection algorithms, providing benchmarks for evaluation <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b18">19]</ref>.</p></div>
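The spectral-analysis idea mentioned above can be made concrete with a small, self-contained sketch (an illustrative toy, not a method from any cited paper): generative upsampling often leaves a surplus of high-frequency energy, which a simple FFT statistic can surface. The function name `high_freq_ratio` and the 0.25 cutoff are our own illustrative choices.

```python
import numpy as np

def high_freq_ratio(img, cutoff=0.25):
    """Fraction of spectral energy outside a low-frequency disc.

    A crude proxy for the upsampling artifacts that spectral-analysis
    detectors look for; real systems use far richer statistics.
    """
    spec = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spec.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # normalized radius
    total = spec.sum()
    return float(spec[r > cutoff].sum() / total) if total else 0.0

rng = np.random.default_rng(0)
smooth = np.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64))  # gradient image
noisy = smooth + 0.5 * rng.standard_normal((64, 64))             # high-freq "artifacts"
assert high_freq_ratio(noisy) > high_freq_ratio(smooth)
```

Published spectral detectors typically aggregate azimuthally averaged spectra or learned frequency features rather than a single ratio, but the intuition is the same.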
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Critical Analysis</head><p>The landscape of deepfake detection is characterized by both significant progress and persistent challenges. AI-driven methods have achieved high accuracy in controlled environments but often struggle with generalization to diverse datasets and unforeseen manipulation techniques <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. Signal processing approaches, while effective in controlled scenarios, may lack robustness against sophisticated deepfake methods. Blockchain solutions, though promising, face scalability and adoption challenges. Feature extraction techniques are often computationally intensive, limiting their applicability in real-time settings <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b17">18]</ref>.</p><p>Recurring issues include the need for standardized evaluation metrics, improved computational efficiency, and ethical considerations. Furthermore, the rapid evolution of deepfake generation methods necessitates continuous adaptation of detection strategies <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b20">21]</ref>. The absence of datasets that capture real-world variability remains a bottleneck, as most benchmarks are designed for academic purposes <ref type="bibr" target="#b21">[22]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Identification of Research Gaps</head><p>While considerable advancements have been made, several critical gaps remain unaddressed:</p><p>• Real-Time Detection: The development of lightweight and efficient algorithms capable of real-time processing remains a significant challenge <ref type="bibr" target="#b23">[24]</ref>. Advances in edge computing could provide a pathway for achieving this goal. • Robustness Across Domains: Current detection methods require improved generalization to handle diverse datasets and evolving threats <ref type="bibr" target="#b20">[21]</ref>. Domain adaptation techniques and unsupervised learning approaches could play a pivotal role.</p><p>• Ethical and Legal Frameworks: Comprehensive guidelines and regulations addressing the misuse of deepfake technology are urgently needed <ref type="bibr" target="#b24">[25]</ref>. Collaboration between technologists, policymakers, and ethicists is essential to establish a robust framework. • Advanced Benchmarks: The lack of standardized and representative datasets hinders the objective evaluation and comparison of detection methods <ref type="bibr" target="#b21">[22]</ref>. Future benchmarks should incorporate real-world variations, such as diverse lighting, occlusions, and cultural differences.</p><p>Addressing these gaps is imperative for advancing the field of deepfake detection and fostering trust in digital ecosystems. Future research must prioritize the development of scalable, robust, and ethically aligned solutions to counteract the growing threats posed by deepfake technology. Collaboration across disciplines and the integration of emerging technologies will be key to overcoming these challenges.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Workflow: Deepfake Detection</head><p>The process of deepfake detection involves several critical steps, as illustrated in Figure <ref type="figure" target="#fig_1">2</ref>. Each step plays a vital role in accurately distinguishing between original and fake content. Below is a detailed explanation of the workflow, along with examples of methodologies and techniques commonly employed at each stage:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>[Figure 2 (workflow diagram): the input video is split into frames, various feature extraction methodologies are applied, and classification techniques then label the content as original or fake.]</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Input - Video Frame Extraction</head><p>The first step involves splitting the input video into individual frames. These frames serve as the foundational data for further analysis. High-resolution frames are preferred to ensure the features used in detection are well-represented.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example Methodology:</head><p>• Frame Sampling: Extract frames at fixed intervals (e.g., every nth frame) to reduce computational load while maintaining key details.</p></div>
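The fixed-interval sampling described above amounts to a strided selection. A minimal sketch follows; in a real pipeline the frames would come from a decoder such as OpenCV's `VideoCapture` or ffmpeg, but here plain lists stand in for decoded frames.

```python
def sample_frames(frames, step):
    """Keep every `step`-th frame (0-indexed). Reduces computational load
    while preserving enough temporal coverage for feature extraction."""
    if step < 1:
        raise ValueError("step must be >= 1")
    return frames[::step]

# ten dummy "frames"; keeping every 3rd retains indices 0, 3, 6, 9
assert sample_frames(list(range(10)), 3) == [0, 3, 6, 9]
```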
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Feature Extraction</head><p>Feature extraction involves identifying and isolating the most critical aspects of the video frames that can reveal inconsistencies or unnatural patterns. These features form the basis for differentiating between real and fake media.</p><p>Example Feature Extraction Methods:</p><p>• Pixel-Level Artifacts Detection: Focus on artifacts such as inconsistent lighting, shadows, or pixel distortions often introduced during deepfake generation. </p></div>
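One crude, illustrative proxy for pixel-level artifact detection is the variance of the Laplacian response: blended or over-smoothed face regions tend to score lower than crisp, authentic ones. This numpy sketch (function name and kernel choice are ours) is not a production detector.

```python
import numpy as np

# standard 4-connected Laplacian kernel
LAPLACIAN = np.array([[0.0, 1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0, 1.0, 0.0]])

def laplacian_variance(img):
    """Variance of the Laplacian response over a grayscale image: a crude
    proxy for the over-smoothing that face-swap blending can introduce
    (lower variance = blurrier region)."""
    g = np.asarray(img, dtype=float)
    h, w = g.shape
    out = np.zeros((h - 2, w - 2))
    for dy in range(3):          # "valid" 3x3 convolution via shifted slices
        for dx in range(3):
            out += LAPLACIAN[dy, dx] * g[dy:dy + h - 2, dx:dx + w - 2]
    return float(out.var())

flat = np.zeros((8, 8))                                   # featureless patch
checker = (np.indices((8, 8)).sum(axis=0) % 2) * 255.0    # high-detail patch
assert laplacian_variance(flat) == 0.0
assert laplacian_variance(checker) > laplacian_variance(flat)
```

Real systems compare such scores between the face region and the surrounding frame, since absolute sharpness varies with camera and compression.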
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Classification</head><p>Once features are extracted, they are fed into a classification model to predict whether the content is original or fake. This step leverages machine learning and deep learning algorithms to make the final determination.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Example Classification Techniques:</head><p>• Traditional Machine Learning Models:</p><p>-SVM (Support Vector Machines): Effective for small datasets and well-defined features.</p><p>-Random Forest: Ensemble-based approach for feature importance and classification.</p><p>• Deep Learning Models:</p><p>-Convolutional Neural Networks (CNNs): Suitable for spatial features like pixel-level inconsistencies or facial biometrics. -Recurrent Neural Networks (RNNs): Ideal for temporal features such as frame continuity and motion consistency. -EfficientNet, MobileNetV2, and VGG16: Pretrained architectures fine-tuned for deepfake detection tasks.</p><p>• Hybrid Models: Combining CNNs for spatial features with RNNs for temporal consistency checks.</p></div>
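As an illustrative stand-in for the classifiers listed above (SVMs, Random Forests, and CNNs all require training infrastructure), a nearest-centroid rule over extracted feature vectors shows the shape of the final decision step. The two-dimensional features and labels here are synthetic.

```python
import math

def train_centroids(samples):
    """samples: list of (feature_vector, label). Returns label -> mean vector."""
    sums, counts = {}, {}
    for vec, label in samples:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(centroids, vec):
    """Predict the label whose centroid is closest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(centroids[lab], vec))

# synthetic 2-D features: fakes show a higher "artifact score" (first dim)
train = [([0.9, 0.2], "fake"), ([0.8, 0.3], "fake"),
         ([0.1, 0.2], "original"), ([0.2, 0.1], "original")]
model = train_centroids(train)
assert classify(model, [0.85, 0.25]) == "fake"
assert classify(model, [0.15, 0.15]) == "original"
```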
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Output - Classification Result</head><p>The final step produces a classification result that labels the input as either "Original" or "Fake". The accuracy and reliability of this output depend on the effectiveness of the previous steps and the quality of training data used to build the detection model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Evaluation Metrics:</head><p>• Accuracy, Precision, Recall: Measure overall model performance.</p><p>• F1 Score: Balance between precision and recall.</p><p>• AUC-ROC Curve: Evaluate model sensitivity to different thresholds.</p></div>
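The metrics above reduce to a few confusion-matrix counts; this minimal sketch (the function name `binary_metrics` is ours) makes the definitions concrete. AUC-ROC additionally requires per-sample scores rather than hard labels, so it is omitted here.

```python
def binary_metrics(y_true, y_pred, positive="fake"):
    """Accuracy, precision, recall, and F1 for a binary deepfake detector,
    treating `positive` as the class of interest."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": (tp + tn) / len(y_true),
            "precision": precision, "recall": recall, "f1": f1}

y_true = ["fake", "fake", "original", "original", "fake"]
y_pred = ["fake", "original", "original", "fake", "fake"]
m = binary_metrics(y_true, y_pred)
# here: accuracy = 0.6, precision = recall = f1 = 2/3
```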
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Methodologies and Approaches</head><p>This section outlines the methodologies employed in surveying the research landscape on deepfake detection and mitigation. It describes the survey methodology, provides detailed insights into various approaches analyzed, and presents a comparative analysis of these methodologies.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Survey Methodology</head><p>The reviewed papers were selected using a systematic approach to ensure comprehensive coverage of the field. A database search was conducted across platforms such as IEEE Xplore, Springer, and ACM Digital Library using keywords like "deepfake detection," "GAN-based manipulation," and "blockchain authentication." The inclusion criteria prioritized articles published in peer-reviewed journals and conferences between 2019 and 2025. Studies that lacked empirical results or focused solely on deepfake generation without discussing detection were excluded. A total of 50 papers met these criteria and were included in this review.</p><p>The evaluation framework for categorizing existing solutions focused on three key dimensions:</p><p>• Technique: Classification into AI-based, signal processing-based, blockchain-assisted, handcrafted feature extraction, and hybrid approaches. • Performance Metrics: Accuracy, scalability, and computational efficiency.</p><p>• Applicability: Suitability for real-time detection and generalization across datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Approaches Analyzed</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1.">Machine Learning/AI-Based Techniques</head><p>Machine learning and AI-based techniques are among the most widely explored methods for deepfake detection. Convolutional Neural Networks (CNNs) effectively detect spatial inconsistencies, such as unnatural textures and blending artifacts, in manipulated media <ref type="bibr" target="#b15">[16]</ref>. Recurrent Neural Networks (RNNs) and transformers analyze temporal patterns, making them well-suited for video analysis <ref type="bibr" target="#b17">[18]</ref>. Generative Adversarial Networks (GANs), while primarily used for creating deepfakes, are also utilized for adversarial training to identify and counteract synthetic content <ref type="bibr" target="#b19">[20]</ref>. Furthermore, pre-trained models and transfer learning approaches have improved detection performance by reducing training requirements and leveraging pre-existing knowledge bases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2.">Digital Forensics Techniques</head><p>Digital forensics relies on analyzing inconsistencies and artifacts in video and audio signals. Techniques such as phase correlation, frequency domain analysis, and optical flow detection identify discrepancies that are challenging for deepfake generation algorithms to mimic <ref type="bibr" target="#b18">[19]</ref>. For instance, variations in lighting, unnatural reflections, and irregularities in motion provide telltale signs of manipulation. These methods are particularly valuable in scenarios where content integrity is under scrutiny.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.3.">Blockchain for Authentication</head><p>Blockchain technology offers a robust framework for verifying the authenticity and provenance of digital content. Immutable ledgers record the history of media, ensuring traceability and preventing tampering <ref type="bibr" target="#b16">[17]</ref>. Smart contracts enable automated verification processes, enhancing the scalability of blockchain-assisted solutions. This approach is particularly effective in applications requiring real-time validation, such as social media and news dissemination platforms.</p></div>
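The immutable-ledger idea can be sketched with a toy hash chain (standard library only; the record layout and names are our own): each record fingerprints the media and links to the previous record's hash, so any later edit invalidates every subsequent link. Real deployments add distribution, consensus, and smart contracts on top of this core mechanism.

```python
import hashlib
import json

def media_record(prev_hash, media_bytes, meta):
    """Append-only record linking a media fingerprint to the previous entry."""
    body = {"prev": prev_hash,
            "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
            "meta": meta}
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def verify_chain(chain):
    """Recompute each link; any edit to a media hash or to the order fails."""
    prev = "0" * 64
    for rec in chain:
        body = {k: rec[k] for k in ("prev", "media_sha256", "meta")}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True

genesis = media_record("0" * 64, b"frame-0", {"src": "camera-A"})
chain = [genesis, media_record(genesis["hash"], b"frame-1", {"src": "camera-A"})]
assert verify_chain(chain)
chain[0]["media_sha256"] = "f" * 64   # simulate tampering with the first record
assert not verify_chain(chain)
```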
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.4.">Handcrafted Feature Extraction Techniques</head><p>Handcrafted feature extraction focuses on identifying specific features that distinguish manipulated from authentic media. These methods analyze elements such as facial landmarks, eye blinking patterns, and lip synchronization <ref type="bibr" target="#b19">[20]</ref>. Techniques like Local Binary Patterns (LBP) and Histogram of Oriented Gradients (HOG) are used to detect texture inconsistencies and unnatural movements. Although computationally less intensive than AI-based approaches, handcrafted techniques often struggle with the subtle sophistication of modern deepfakes.</p></div>
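A from-scratch sketch of the basic 8-neighbour LBP operator described above (scikit-image's `local_binary_pattern` provides optimized and rotation-invariant variants); in practice, histograms of these codes over face patches feed a downstream classifier.

```python
import numpy as np

def lbp_image(gray):
    """Basic 8-neighbour Local Binary Pattern: each interior pixel becomes
    an 8-bit code recording which neighbours are >= the centre value."""
    g = np.asarray(gray, dtype=float)
    c = g[1:-1, 1:-1]                       # interior (centre) pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= ((nb >= c).astype(np.uint8) << bit)
    return code

flat = np.full((5, 5), 7.0)
# uniform patch: every neighbour >= centre, so all 8 bits are set (code 255)
assert (lbp_image(flat) == 255).all()
```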
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.5.">Hybrid Approaches</head><p>Hybrid approaches integrate multiple methodologies to enhance robustness and accuracy. For example, combining CNNs with optical flow analysis leverages both spatial and temporal insights. Similarly, blockchain verification can be paired with AI-driven anomaly detection for comprehensive validation <ref type="bibr" target="#b20">[21]</ref>. These approaches aim to balance the strengths of individual techniques while mitigating their limitations, making them suitable for complex and diverse use cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Comparative Analysis</head><p>A comparative analysis of the methodologies is presented in Table <ref type="table" target="#tab_1">1</ref>, highlighting their efficiency, accuracy, scalability, and suitability for real-time detection. AI-based techniques excel in accuracy but are computationally demanding, making scalability and real-time application challenging. Digital forensics methods offer high scalability but may struggle with sophisticated manipulations. Blockchain solutions provide high reliability and real-time suitability but face scalability issues due to resource requirements. Handcrafted feature extraction methods are efficient and scalable but less effective against subtle manipulations. Hybrid approaches represent a balanced solution, combining accuracy, scalability, and real-time suitability.</p><p>In conclusion, while each methodology has its strengths and weaknesses, hybrid approaches demonstrate the most promise for addressing the diverse challenges posed by deepfake technology.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Findings and Trends</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Key Insights</head><p>Recent advancements in deepfake detection have introduced innovative techniques that significantly improve accuracy and robustness against increasingly sophisticated deepfake content. Maheshwari et al. (2024) explored plasmonic nanomaterials with surface plasmon resonance (SPR) for image detection, achieving over 95% accuracy even in complex scenarios <ref type="bibr" target="#b25">[26]</ref>. A hybrid deep learning model combining MesoNet4 and ResNet101 was proposed by Javed et al. (2024), attaining detection accuracies of 98.73%, 96.89%, and 97.90% on FaceForensics++, CelebV1, and CelebV2 datasets, respectively <ref type="bibr" target="#b26">[27]</ref>. Advanced biosensors integrating plasmonic resonance with convolutional neural networks reached 98.7% accuracy and demonstrated rapid response times (0.8 seconds per frame) <ref type="bibr" target="#b27">[28]</ref>.</p><p>Blockchain-based federated learning approaches, such as Heidari et al.'s (2024) method, enhanced accuracy by 6.6% compared to benchmarks while maintaining data confidentiality <ref type="bibr" target="#b28">[29]</ref>. Temporal feature prediction schemes focusing on audio-visual modalities demonstrated superior accuracy (84.33%) on the FakeAVCeleb dataset <ref type="bibr" target="#b29">[30]</ref>. Vision Transformers (ViTs) showed great promise in multiclass detection tasks, achieving an F1-score of 99.90%, outperforming traditional CNNs <ref type="bibr" target="#b30">[31]</ref>. Kingra et al.'s (2024) SFormer architecture, based on spatio-temporal transformers, achieved up to 100% accuracy on datasets such as FF++ and DeeperForensics <ref type="bibr" target="#b31">[32]</ref>. <ref type="bibr" target="#b32">Almestekawy et al. (2024)</ref> demonstrated that incorporating spatiotemporal textures improved reproducibility and achieved accuracy of up to 91.96% <ref type="bibr" target="#b32">[33]</ref>. 
<ref type="bibr" target="#b33">Guarnera et al. (2024)</ref> introduced a hierarchical multi-level approach for deepfake detection, achieving 97% accuracy across multiple GAN and diffusion model tasks <ref type="bibr" target="#b33">[34]</ref>. The temporal audio-visual prediction result above is due to <ref type="bibr" target="#b29">Gao et al. (2024)</ref> <ref type="bibr" target="#b29">[30]</ref>, and the ViT results to <ref type="bibr" target="#b30">Arshed et al. (2024)</ref> <ref type="bibr" target="#b30">[31]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Statistical Analysis</head><p>Table <ref type="table" target="#tab_2">2</ref> compares the performance metrics, including accuracy, computational cost, and dataset benchmarks, for the different methods; these approaches reflect varying trade-offs in sensitivity, speed, and dataset applicability. The advanced plasmonic biosensor <ref type="bibr" target="#b27">[28]</ref> reached 98.7% on a custom dataset with a fast response time, though real-world integration remains challenging. Blockchain-based federated learning <ref type="bibr" target="#b28">[29]</ref> improved accuracy by 6.6% over benchmarks on diverse data while maintaining data privacy, at the cost of high computational complexity. Temporal feature prediction <ref type="bibr" target="#b29">[30]</ref> achieved 84.33% on FakeAVCeleb through novel audio-visual fusion, but trails transformer-based models in accuracy. Vision Transformers (ViTs) <ref type="bibr" target="#b30">[31]</ref> attained 99.90% on a multiclass-prepared dataset and are robust to compression and resizing. SFormer <ref type="bibr" target="#b31">[32]</ref> reached up to 100% on FF++ and Deeper-Forensics with superior generalization but at high computational expense. Spatiotemporal textures <ref type="bibr" target="#b32">[33]</ref> achieved 91.96% on Celeb-DF and FF++ with enhanced stability, though cross-dataset accuracy is moderate. Hierarchical multi-level GAN analysis <ref type="bibr" target="#b33">[34]</ref> reached 97% on GAN and diffusion model data, robust to attacks such as compression but lacking real-time capability. Patch-wise deep learning <ref type="bibr" target="#b30">[31]</ref> reported impressive F1 rates of 99.90% on GAN and Stable Diffusion datasets, with notable computational overhead.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Popular Datasets for Deepfake Validation</head><p>Datasets play a crucial role in validating and improving deepfake detection solutions. Table <ref type="table" target="#tab_3">3</ref> highlights some of the most popular datasets used in this domain, emphasizing their size, types of content, and primary applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4.">Emerging Trends</head><p>Several trends in deepfake detection have emerged:</p><p>• Multimodal Solutions: Techniques like temporal feature prediction and hybrid models increasingly integrate multiple modalities (e.g., audio and video) to enhance detection accuracy  <ref type="bibr" target="#b30">[31,</ref><ref type="bibr" target="#b31">32]</ref>. • Adversarial Learning: GAN-based methods for deepfake generation have inspired adversarial learning approaches to detect increasingly realistic fakes. • Real-Time and Scalable Solutions: Biosensors and hybrid architectures focus on reducing latency, with potential for real-time applications <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b26">27]</ref>. • Privacy-Preserving Techniques: Blockchain-based federated learning represents a shift towards safeguarding data privacy while achieving robust detection <ref type="bibr" target="#b28">[29]</ref>.</p><p>These trends indicate a paradigm shift towards integrating diverse modalities, leveraging advanced architectures, and prioritizing real-time and privacy-preserving solutions for scalable and effective deepfake detection.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.5.">Mathematical Foundations for Detection</head><p>In deepfake detection, various mathematical models and techniques are employed to enhance accuracy and robustness. The key mathematical foundations for these detection models include Generative Adversarial Networks (GANs), Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Attention Mechanisms, Adversarial Training Loss, and Ensemble Prediction. As shown in Table <ref type="table" target="#tab_4">4</ref>, GANs leverage an adversarial training approach, where a generator and discriminator interact to distinguish real from fake data. CNNs, on the other hand, apply convolution operations to extract spatial features from images, crucial for analyzing image patterns in deepfakes. RNNs are employed for sequential data, such as video frames, to capture temporal dependencies. The attention mechanism, often used in Vision Transformers (ViTs), helps models focus on significant features, enhancing the detection process. Additionally, adversarial training loss is designed to improve model robustness by exposing it to adversarial examples. Finally, ensemble prediction aggregates results from multiple models to boost the overall detection accuracy.</p></div>
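The convolution operation listed in Table 4 can be made concrete with a minimal NumPy sketch. This is an illustration of the formula only, not the implementation of any surveyed detector; the example image and kernel are arbitrary.

```python
import numpy as np

def conv2d_valid(x, k):
    """y[i, j] = sum_m sum_n x[i+m, j+n] * k[m, n]  (valid mode).
    As written in Table 4 this is a cross-correlation, which is
    the operation most deep learning libraries actually compute."""
    M, N = k.shape
    H, W = x.shape
    y = np.zeros((H - M + 1, W - N + 1))
    for i in range(y.shape[0]):
        for j in range(y.shape[1]):
            y[i, j] = np.sum(x[i:i + M, j:j + N] * k)
    return y

# A 3x3 edge-like patch convolved with a 2x2 averaging kernel.
x = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [0., 0., 1.]])
k = np.full((2, 2), 0.25)
y = conv2d_valid(x, k)  # 2x2 output; the right column responds to the edge
```

A detector's CNN stacks many such filtered maps with nonlinearities to extract the spatial artifacts described above.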
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Challenges and Gaps</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Current Challenges</head><p>Despite the advancements in deepfake detection technologies, several technical challenges persist, limiting the effectiveness of current solutions:</p><p>• Detection Accuracy for Low-Quality Videos: Many deepfake detection models struggle with low-resolution or highly compressed videos, which are often encountered on social media platforms. This degradation in quality obscures telltale artifacts, reducing detection performance. • Computational Overhead: Deep learning-based detection methods, while highly accurate, often require significant computational resources. Balancing the need for high detection accuracy with computational efficiency remains a key challenge, particularly for real-time applications. • Generalization Across Techniques: As new and more sophisticated deepfake generation techniques emerge, detection models often fail to generalize, requiring constant retraining on updated datasets. • Real-Time Detection: Many existing approaches lack the speed needed for real-time detection, especially in live-streaming or high-throughput environments, where immediate detection is crucial. • Robustness to Adversarial Attacks: Deepfake detection models are vulnerable to adversarial attacks that subtly alter fake content to evade detection mechanisms.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Attention Mechanism</head><p>Attention(𝑄, 𝐾, 𝑉 ) = softmax (︁</p><formula xml:id="formula_0">𝑄𝐾 𝑇 √ 𝑑 𝑘</formula><p>)︁ 𝑉 𝑄, 𝐾, 𝑉 : Query, key, and value matrices. 𝑑 𝑘 : Dimension of the key vector.</p></div>
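The scaled dot-product attention defined above can be sketched in a few lines of NumPy. This is a minimal illustration of the formula, not tied to any particular ViT implementation; the toy matrices are arbitrary.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # each row sums to 1
    return weights @ V

# Two queries attending over three key/value pairs (toy dimensions).
Q = np.array([[1., 0.], [0., 1.]])
K = np.array([[1., 0.], [0., 1.], [1., 1.]])
V = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
out = attention(Q, K, V)  # shape (2, 3); each row is a convex mix of V's rows
```

Because the softmax weights sum to one, each output row is a weighted average of value vectors, which is how a ViT lets a patch "focus" on the most informative other patches.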
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Adversarial Training Loss</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Research Gaps</head><p>In addition to technical challenges, there are several gaps in current research that must be addressed to advance deepfake detection methodologies. Addressing these challenges and research gaps will require a concerted effort from academia, industry, and policymakers to ensure that deepfake detection technologies remain effective, equitable, and ethical in the face of evolving threats.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Future Directions</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.1.">Recommendations</head><p>To advance the field of deepfake detection and mitigate the risks associated with synthetic media, the following actionable steps are recommended: </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.2.">Potential Impact</head><p>The proposed advancements in deepfake detection can have far-reaching implications across various domains:</p><p>• Policy-Making: Improved detection methods and standardized datasets can inform regulatory frameworks, helping governments and organizations address the ethical and legal challenges posed by deepfake technology. • Societal Trust: By effectively mitigating the spread of synthetic media, advanced detection technologies can restore public trust in digital content, reducing the impact of misinformation and manipulation. • Adoption of AI Technologies: The development of robust and ethical deepfake detection systems will encourage the responsible adoption of AI technologies in industries such as media, entertainment, and cybersecurity. • Enhanced Security Measures: Real-time detection capabilities can be integrated into digital platforms, safeguarding users against malicious deepfake content and protecting sensitive information.</p><p>By addressing these recommendations and leveraging the potential impact, the research community can ensure that deepfake detection technologies remain a step ahead of evolving generative methods, fostering a safer and more trustworthy digital environment.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Conclusion</head><p>This survey has explored the current state of deepfake detection technologies, highlighting the rapid advancements in methods designed to counteract the growing sophistication of generative models. Key insights include the effectiveness of hybrid approaches, such as combining multimodal analysis with AI-based techniques, and the potential of transformer-based architectures to improve accuracy and scalability. Despite these advancements, challenges persist in detecting low-quality or adversarially manipulated deepfakes, underscoring the need for robust and adaptable solutions.</p><p>This work consolidates knowledge from diverse fields, presenting a comprehensive review of the strengths and limitations of existing deepfake detection methods. By identifying research gaps, such as the need for standardized datasets and ethical frameworks, this survey provides a roadmap for future studies. It also emphasizes the importance of integrating human expertise with automated systems to enhance the interpretability and reliability of detection outcomes.</p><p>As deepfake technology continues to evolve, the importance of proactive research and collaboration cannot be overstated. The development of lightweight, real-time detection models and the establishment of legal and ethical standards are crucial steps toward combating the misuse of synthetic media. By fostering cross-disciplinary partnerships and prioritizing innovation, the research community can address emerging threats and ensure the responsible use of AI technologies, safeguarding societal trust and digital integrity.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Timeline illustrating the evolution of deepfake technology.</figDesc><graphic coords="4,117.13,100.42,361.01,270.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Workflow illustrating the steps in deepfake detection.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>ℒ</head><label></label><figDesc>adv = E (𝑥,𝑦)∼𝒟 [max 𝛿∈𝑆 ℓ(𝑓 (𝑥 + 𝛿), 𝑦)] 𝛿: Perturbation within constraint 𝑆. ℓ(𝑓 (𝑥), 𝑦): Loss function comparing prediction 𝑓 (𝑥) with label 𝑦. 6. Ensemble Prediction 𝑃 ensemble = 1 𝑁 ∑︀ 𝑁 𝑖=1 𝑃 𝑖 𝑃 𝑖 : Prediction probability from the 𝑖-th model. 𝑁 : Number of models.</figDesc></figure>
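The ensemble prediction rule above is plain soft voting over model probabilities. A minimal sketch follows; the three per-model outputs are hypothetical values chosen only to illustrate the averaging, not results from any cited system.

```python
import numpy as np

def ensemble_predict(probs):
    """P_ensemble = (1/N) * sum_i P_i : average the per-class
    probabilities emitted by N independently trained detectors."""
    return np.mean(np.stack(probs, axis=0), axis=0)

# Hypothetical [fake, real] probabilities from three detectors on one frame.
p = [np.array([0.90, 0.10]),   # e.g. a CNN-based detector
     np.array([0.70, 0.30]),   # e.g. a temporal model
     np.array([0.80, 0.20])]   # e.g. a transformer
fused = ensemble_predict(p)    # -> [0.8, 0.2]
```

Averaging dampens the idiosyncratic errors of any single model, which is why ensembles tend to boost overall detection accuracy.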
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>•</head><label></label><figDesc>Temporal Inconsistencies: Analyze frame-to-frame transitions for unnatural movement or discontinuities. • Frequency Domain Analysis: Techniques like Discrete Fourier Transform (DFT) or Wavelet Transform to detect anomalies in high-frequency bands. • Biometric Feature Analysis: Focus on facial landmarks, eye movement, and lip-sync patterns to identify irregularities.</figDesc><table /></figure>
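The frequency-domain analysis described above can be sketched as a simple FFT energy ratio: generator upsampling often leaves anomalies in high-frequency bands, and a spike or deficit of energy there is a coarse cue. The radial cutoff here is an illustrative assumption, not a value from any cited detector.

```python
import numpy as np

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral power beyond a radial cutoff frequency.
    A smooth natural frame concentrates power near DC; broadband
    artifacts raise this ratio."""
    F = np.fft.fftshift(np.fft.fft2(image))      # move DC to the center
    power = np.abs(F) ** 2
    h, w = image.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2.0, xx - w / 2.0)      # radius from DC
    return power[r > cutoff * min(h, w)].sum() / power.sum()

rng = np.random.default_rng(0)
flat = np.full((32, 32), 0.5)             # smooth frame: all energy at DC
noisy = rng.standard_normal((32, 32))     # broadband: energy at all radii
```

In practice such a ratio would be computed per face crop and fed to a classifier alongside other features rather than thresholded directly.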
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Comparison of Deepfake Detection Approaches</figDesc><table><row><cell>Approach</cell><cell>Accuracy</cell><cell>Scalability</cell><cell>Real-Time Suitabil-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>ity</cell></row><row><cell>AI-Based Techniques</cell><cell>High</cell><cell>Moderate</cell><cell>Limited due to com-</cell></row><row><cell></cell><cell></cell><cell></cell><cell>putational intensity</cell></row><row><cell>Digital Forensics</cell><cell>Moderate</cell><cell>High</cell><cell>Moderate</cell></row><row><cell>Blockchain Solutions</cell><cell>High</cell><cell>Low</cell><cell>High</cell></row><row><cell>Handcrafted Feature Extrac-</cell><cell>Moderate</cell><cell>High</cell><cell>High</cell></row><row><cell>tion</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Hybrid Approaches</cell><cell>Very High</cell><cell>Moderate</cell><cell>High</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Performance Comparison of Deepfake Detection Techniques</figDesc><table><row><cell>Technique</cell><cell>Accuracy</cell><cell></cell><cell>Dataset(s)</cell><cell>Strengths/Weaknesses</cell></row><row><cell>Plasmonic Nanomaterials</cell><cell>95%</cell><cell></cell><cell>Custom Dataset</cell><cell>High sensitivity; robust to</cell></row><row><cell>(SPR) [26]</cell><cell></cell><cell></cell><cell></cell><cell>lighting conditions but com-</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell>putationally intensive.</cell></row><row><cell>Hybrid Model (MesoNet4 +</cell><cell cols="2">98.73% (FaceForen-</cell><cell>FaceForensics++,</cell><cell>Real-time capability; limited</cell></row><row><cell>ResNet101) [27]</cell><cell>sics++),</cell><cell>96.89%</cell><cell>CelebV1, CelebV2</cell><cell>multimodal application.</cell></row><row><cell></cell><cell>(CelebV1)</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Popular Datasets Used for Deepfake Validation Vision Transformers (ViTs) and spatio-temporal transformer models like SFormer demonstrate exceptional performance, particularly in generalizing across datasets</figDesc><table><row><cell>Dataset Name</cell><cell>Description</cell><cell></cell><cell>Size</cell><cell cols="2">Types of Content</cell><cell>Source</cell></row><row><cell>FaceForensics++</cell><cell cols="2">Large-scale dataset</cell><cell>1,000 videos</cell><cell cols="2">Deepfake, Neural</cell><cell>University</cell></row><row><cell></cell><cell cols="2">for face forgery de-</cell><cell></cell><cell>Rendered,</cell><cell>Face</cell><cell>of</cell><cell>Erlangen-</cell></row><row><cell></cell><cell>tection.</cell><cell></cell><cell></cell><cell>Swapping</cell><cell></cell><cell>Nuremberg</cell></row><row><cell cols="3">DeepFakeDetection Focused on detecting</cell><cell>3,000 videos</cell><cell>Real, Deepfake</cell><cell></cell><cell>University of Califor-</cell></row><row><cell></cell><cell cols="2">deepfake videos.</cell><cell></cell><cell></cell><cell></cell><cell>nia, Berkeley</cell></row><row><cell>Celeb-DF</cell><cell cols="2">High-resolution</cell><cell>5,639 videos</cell><cell>Celebrities,</cell><cell>TV</cell><cell>Zhejiang University</cell></row><row><cell></cell><cell>deepfake</cell><cell>videos</cell><cell></cell><cell>Shows</cell><cell></cell></row><row><cell></cell><cell cols="2">featuring celebrities.</cell><cell></cell><cell></cell><cell></cell></row><row><cell>DFDC (Deepfake</cell><cell cols="2">Comprehensive</cell><cell cols="2">100,000 videos Real, Deepfake</cell><cell></cell><cell>Facebook AI</cell></row><row><cell>Detection Chal-</cell><cell cols="2">dataset for 
deepfake</cell><cell></cell><cell></cell><cell></cell></row><row><cell>lenge)</cell><cell>challenges.</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>The Realism of</cell><cell cols="2">Evaluates realism in</cell><cell>Fully Anno-</cell><cell cols="2">Deepfake, GANs</cell><cell>Stanford University</cell></row><row><cell>Deepfakes</cell><cell cols="2">deepfake generation.</cell><cell>tated</cell><cell></cell><cell></cell></row><row><cell>[30].</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell cols="3">• Transformer Architectures:</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Mathematical Formulas for Deepfake Detection Models min 𝐺 max 𝐷 𝑉 (𝐷, 𝐺) = E 𝑥∼𝑝 data (𝑥) [log 𝐷(𝑥)] + E 𝑧∼𝑝𝑧(𝑧) [log(1 − 𝐷(𝐺(𝑧)))] 𝐷(𝑥): Discriminator's probability of 𝑥 being real. 𝐺(𝑧): Data generated by 𝐺 from noise 𝑧. 𝑝 data (𝑥): Distribution of real data. 𝑡 = 𝜎(𝑊 ℎ ℎ 𝑡−1 + 𝑊 𝑥 𝑥 𝑡 + 𝑏) ℎ 𝑡 : Hidden state at time 𝑡. ℎ 𝑡−1 : Hidden state from the previous time step. 𝑥 𝑡 : Input at time 𝑡. 𝑊 ℎ , 𝑊 𝑥 : Weight matrices. 𝑏: Bias vector. 𝜎: Activation function.</figDesc><table><row><cell>Mathematical Concept</cell><cell cols="3">Formula and Explanation</cell></row><row><cell>1. Generative Adversarial</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Networks (GANs)</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell cols="3">𝑝 𝑧 (𝑧): Distribution of noise.</cell></row><row><cell>2. Convolutional Neural</cell><cell>𝑦[𝑖, 𝑗] =</cell><cell>∑︀ 𝑀 −1 𝑚=0</cell><cell>∑︀ 𝑁 −1 𝑛=0 𝑥[𝑖 + 𝑚, 𝑗 + 𝑛] • 𝑘[𝑚, 𝑛]</cell></row><row><cell>Networks (CNNs)</cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell cols="3">𝑥[𝑖, 𝑗]: Input image pixel at position (𝑖, 𝑗).</cell></row><row><cell></cell><cell cols="3">𝑘[𝑚, 𝑛]: Filter kernel of size 𝑀 × 𝑁 .</cell></row><row><cell></cell><cell cols="3">𝑦[𝑖, 𝑗]: Convolved output.</cell></row><row><cell>3. Recurrent Neural Net-</cell><cell>ℎ</cell><cell></cell><cell></cell></row><row><cell>works (RNNs)</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
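The RNN update in Table 4 is a single matrix recurrence applied frame by frame. The sketch below is a minimal NumPy illustration with tanh standing in for the generic activation σ; the frame features and weights are arbitrary demo values, not from any cited model.

```python
import numpy as np

def rnn_steps(xs, Wh, Wx, b, h0=None):
    """h_t = tanh(Wh @ h_{t-1} + Wx @ x_t + b), applied over a sequence.
    Returns the stack of hidden states, one per input frame."""
    h = np.zeros(Wh.shape[0]) if h0 is None else h0
    states = []
    for x in xs:
        h = np.tanh(Wh @ h + Wx @ x + b)
        states.append(h)
    return np.array(states)

# Three 2-dimensional "frame features" and arbitrary demo weights.
xs = [np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])]
Wh = 0.5 * np.eye(2)   # carries information across frames
Wx = np.eye(2)
b = np.zeros(2)
H = rnn_steps(xs, Wh, Wx, b)   # shape (3, 2)
```

Because each h_t depends on h_{t-1}, the hidden state accumulates temporal context, which is what lets RNN-based detectors spot frame-to-frame inconsistencies.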
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head></head><label></label><figDesc>While several datasets exist, there is a lack of universally accepted benchmarks that cover diverse content types, resolutions, and manipulation techniques. Creating standardized, diverse datasets would enhance model comparability and reliability.• Legal and Ethical Frameworks: Deepfake detection research often overlooks the legal and ethical implications of using synthetic media. Establishing guidelines for the responsible use of detection technologies and addressing privacy concerns is critical. • Robustness Against Evolving Deepfake Techniques: As generative models continue to evolve, there is a need for detection methods that can adapt to new manipulation techniques without requiring frequent retraining. • Cross-Platform Scalability: Detection methods often perform well on specific datasets but fail when deployed across different platforms or real-world scenarios. Research into scalable and robust cross-platform solutions is necessary. • Human-AI Collaboration: Current systems primarily focus on automated detection, with little emphasis on integrating human expertise to improve accuracy and interpretability of results. • Ethical Use of Detection Tools: There is a need to address potential misuse of detection tools themselves, such as leveraging them to create more advanced deepfakes by understanding their weaknesses.</figDesc><table /><note>• Standardized Datasets:</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>• Development of Lightweight, Real-Time Models: Future</head><label></label><figDesc></figDesc><table><row><cell>research should focus on creating</cell></row><row><cell>computationally efficient deepfake detection models capable of real-time processing. This in-</cell></row><row><cell>volves exploring novel architectures, such as transformer-based models optimized for speed and</cell></row><row><cell>scalability.</cell></row><row><cell>• Building</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_7"><head>More Diverse and Representative Datasets:</head><label></label><figDesc>Establishing datasets that include a wide variety of manipulation techniques, demographics, and content types will improve the robustness and generalizability of detection models. Collaboration among research institutions and industry can facilitate the creation of comprehensive benchmarks. •</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_8"><head>Creating Legal and Ethical Frameworks: Policymakers</head><label></label><figDesc>and researchers should work together to establish guidelines for the responsible use of generative technologies. This includes defining acceptable practices, ensuring transparency, and addressing privacy concerns in dataset usage. • Enhancing Robustness Against Adversarial Attacks: Research should prioritize techniques to make detection models resilient to adversarial examples, such as adversarial training, ensemble methods, and anomaly detection frameworks. • Integration of Multimodal Approaches: Combining audio, video, and textual data can lead to more comprehensive detection systems. Future work should focus on integrating these modalities effectively to improve detection accuracy. • Fostering Human-AI Collaboration: Developing tools that allow human experts to interact with detection systems can enhance the interpretability and reliability of results, particularly in high-stakes scenarios.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_0">The proliferation of deepfake technology has prompted extensive research into its origins, advancements, and countermeasures. This section provides a structured review, covering the historical background, key findings, critical analyses, and research gaps in deepfake technology.</note>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The author(s) have not employed any Generative AI tools.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Impact of deepfake technology on social media: Detection, misinformation and societal implications</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Al-Khazraji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">H</forename><surname>Saleh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">I</forename><surname>Khalid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Mishkhal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Eurasia Proceedings of Science Technology Engineering and Mathematics</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="429" to="441" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A review of deepfake technology: an emerging ai threat</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kaur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Soft Computing for Security Applications</title>
		<imprint>
			<biblScope unit="page" from="605" to="619" />
			<date type="published" when="2021">2021. 2022</date>
		</imprint>
	</monogr>
	<note>Proceedings of ICSCS</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Deepfake news: Ai-enabled disinformation as a multi-level public policy challenge</title>
		<author>
			<persName><forename type="first">C</forename><surname>Whyte</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of cyber policy</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="199" to="217" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Exploding ai-generated deepfakes and misinformation: A threat to global concern in the 21st century</title>
		<author>
			<persName><forename type="first">P</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Dhiman</surname></persName>
		</author>
		<idno>SSRN 4651093</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">Available at</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Extending the theory of information poverty to deepfake technology</title>
		<author>
			<persName><forename type="first">W</forename><surname>Matli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Management Data Insights</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page">100286</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Deepfake: a social construction of technology perspective</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">O</forename><surname>Kwok</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Koh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Current Issues in Tourism</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="1798" to="1802" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Deepfake disasters: A comprehensive review of technology, ethical concerns, countermeasures, and societal implications</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chapagain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kshetri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2024 International Conference on Emerging Trends in Networks and Computer Communications (ETNCC), IEEE</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Sarkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">De</forename><surname>Sarkar</surname></persName>
		</author>
		<title level="m">Combatting deep-fakes in india-an analysis of the evolving legal paradigm and its challenges</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Deepfakes: Deceptions, mitigations, and opportunities</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mustak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Salminen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Mäntymäki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rahman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Dwivedi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Business Research</title>
		<imprint>
			<biblScope unit="volume">154</biblScope>
			<biblScope unit="page">113368</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Deepfakes, misinformation, and disinformation in the era of frontier ai, generative ai, and large ai models</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Shoaib</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ahvanooey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2023 International Conference on Computer and Applications (ICCA), IEEE</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Decent deepfakes? professional deepfake developers&apos; ethical considerations and their governance potential</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pawelec</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AI and Ethics</title>
		<imprint>
			<biblScope unit="page" from="1" to="26" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Introduction to deepfake technology and its early foundations</title>
		<author>
			<persName><forename type="first">R</forename><surname>Chataut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Upadhyay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Deepfakes and Their Impact on Business</title>
				<imprint>
			<publisher>IGI Global Scientific Publishing</publisher>
			<date type="published" when="2025">2025</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Deepfakes and beyond: A survey of face manipulation and fake detection</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tolosana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Vera-Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fierrez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Morales</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ortega-Garcia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">64</biblScope>
			<biblScope unit="page" from="131" to="148" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A survey on deepfake video detection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Fei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IET Biometrics</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="607" to="624" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Deepfake detection for human face images and videos: A survey</title>
		<author>
			<persName><forename type="first">A</forename><surname>Malik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kuribayashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Abdullahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Khan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="18757" to="18775" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Deepfake detection using deep learning methods: A systematic and comprehensive review</title>
		<author>
			<persName><forename type="first">A</forename><surname>Heidari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Jafari Navimipour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Dag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">e1520</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Deepfake detection: A systematic literature review</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Rana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Nobi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Murali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Sung</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="25494" to="25513" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Deepfake detection using spatiotemporal transformer</title>
		<author>
			<persName><forename type="first">B</forename><surname>Kaddar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Fezza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Akhtar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Hamidouche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hadid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Serra-Sagristà</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Multimedia Computing, Communications and Applications</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Deepfake detection based on discrepancies between faces and their context</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Nirkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wolf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Keller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hassner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="6111" to="6121" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Deepfake generation and detection, a survey</title>
		<author>
			<persName><forename type="first">T</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">81</biblScope>
			<biblScope unit="page" from="6259" to="6276" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Deepfake generation and detection: Case study and challenges</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tanwar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhattacharya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">E</forename><surname>Davidson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Nyameko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Aluvala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vimal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">The deepfake detection challenge (dfdc) dataset</title>
		<author>
			<persName><forename type="first">B</forename><surname>Dolhansky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Bitton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pflaum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Howes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Ferrer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2006.07397</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A comprehensive overview of deepfake: Generation, detection, datasets, and opportunities</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Seow</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">K</forename><surname>Lim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Phan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">513</biblScope>
			<biblScope unit="page" from="351" to="371" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Deepfakes detection methods: A literature survey</title>
		<author>
			<persName><forename type="first">M</forename><surname>Weerawardana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Fernando</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 10th International Conference on Information and Automation for Sustainability (ICIAfS)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="76" to="81" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Deepfake: an overview</title>
		<author>
			<persName><forename type="first">A</forename><surname>Chadha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kashyap</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gupta</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of second international conference on computing, communications, and cyber-security: IC4S 2020</title>
				<meeting>second international conference on computing, communications, and cyber-security: IC4S 2020</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="557" to="566" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Enhancing sensing and imaging capabilities through surface plasmon resonance for deepfake image detection</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">U</forename><surname>Maheshwari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Paulchamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">K</forename><surname>Pandey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pandey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Plasmonics</title>
		<imprint>
			<biblScope unit="page" from="1" to="20" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Real-time deepfake video detection using eye movement analysis with a hybrid deep learning approach</title>
		<author>
			<persName><forename type="first">M</forename><surname>Javed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">H</forename><surname>Dahri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Laghari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">2947</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Advanced plasmonic resonance-enhanced biosensor for comprehensive real-time detection and analysis of deepfake content</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">U</forename><surname>Maheshwari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumarganesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kvm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gopalakrishnan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Selvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Paulchamy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rishabavarthani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Sagayam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">K</forename><surname>Pandey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pandey</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Plasmonics</title>
		<imprint>
			<biblScope unit="page" from="1" to="18" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">A novel blockchain-based deepfake detection method using federated and deep learning models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Heidari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">J</forename><surname>Navimipour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Dag</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Talebi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cognitive Computation</title>
		<imprint>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Temporal feature prediction in audio-visual deepfake detection</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">3433</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Multiclass ai-generated deepfake face detection using patch-wise deep learning model</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Arshed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mumtaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ibrahim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dewi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tanveer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">31</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">SFormer: An end-to-end spatio-temporal transformer architecture for deepfake detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kingra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Aggarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kaur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Forensic Science International: Digital Investigation</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page">301817</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Deepfake detection: Enhancing performance with spatiotemporal texture and deep learning feature fusion</title>
		<author>
			<persName><forename type="first">A</forename><surname>Almestekawy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">H</forename><surname>Zayed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Taha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Egyptian Informatics Journal</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page">100535</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Level up the deepfake detection: a method to effectively discriminate images generated by gan architectures and diffusion models</title>
		<author>
			<persName><forename type="first">L</forename><surname>Guarnera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Giudice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Battiato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Intelligent Systems Conference</title>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="615" to="625" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
