Detecting Deepfake Modifications of Biometric Images using Neural Networks

Valeriy Dudykevych1, Serhii Yevseiev2, Halyna Mykytyn1, Khrystyna Ruda1, and Hennadii Hulak3
1 Lviv Polytechnic National University, Lviv, 79013, Ukraine
2 National Technical University “Kharkiv Polytechnic Institute,” Kharkiv, 61000, Ukraine
3 Borys Grinchenko Kyiv Metropolitan University, 18/2 Bulvarno-Kudriavska str., Kyiv, 04053, Ukraine

Abstract
The National Cybersecurity Cluster of Ukraine is functionally oriented towards building systems for the protection of various platforms within the information infrastructure, including the development of secure technologies for detecting deepfake modifications of biometric images based on neural networks in cyberspace. The paper introduces an instrumental platform for detecting deepfake modifications of biometric images and an analytical security structure of neural network Information Technologies (IT) based on a multi-level model of “resources—systems—processes—networks—management” according to the “object—threat—protection” concept. The instrumental platform integrates information neural network technology and decision support information technology, employing a modular architecture of the neural network detection system for deepfake modifications in the “preprocessing data—feature processing—classifier training” space. The core of the IT security structure is the integrity of the functioning of the neural network system for detecting deepfake modifications of human facial biometric images and of the data analysis systems that implement the information process of “splitting a video file into frames—detection, feature processing—classifier accuracy assessment”.
The security of the multi-level model of neural network IT is based on systemic and synergistic approaches, enabling the construction of a comprehensive IT security system that accounts for the emergent property in the presence of potential targeted threats and for the application of advanced technologies at the hardware and software levels. The proposed comprehensive security system for the information process of detecting deepfake modifications of biometric images covers hardware and software means by segments: automated classifier accuracy assessment; real-time detection of deepfake modifications; sequential image processing; accuracy evaluation of classification using cloud computing.

Keywords
Intellectualization, cybersecurity, biometric image, deepfake, information technology, neural networks, detection system, instrumental platform, analytical security structure, comprehensive security system.

CPITS-2024: Cybersecurity Providing in Information and Telecommunication Systems, February 28, 2024, Kyiv, Ukraine
EMAIL: vdudykev@gmail.com (V. Dudykevych); serhii.yevseiev@gmail.com (S. Yevseiev); cosmos-zirka@ukr.net (H. Mykytyn); khrystyna.s.ruda@lpnu.ua (K. Ruda); h.hulak@kubg.edu.ua (H. Hulak)
ORCID: 0000-0001-8827-9920 (V. Dudykevych); 0000-0003-1647-6444 (S. Yevseiev); 0000-0003-4275-8285 (H. Mykytyn); 0000-0001-8644-411X (K. Ruda); 0000-0001-9131-9233 (H. Hulak)
©️ 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org, ISSN 1613-0073).

1. Introduction

The problem statement. The security of critical state infrastructure objects in both physical space and cyberspace is currently a pressing issue within the realm of intellectualization across various societal domains. In the context of Industry 4.0 tasks, the Cybersecurity Strategy of Ukraine, and the National Cybersecurity Cluster, one of the paramount tools for addressing the challenge of safely intellectualizing critical infrastructure objects is the utilization of neural network information
technologies for detecting deepfake modifications in the biometric images of individuals’ faces [1–3]. The accuracy of classifying biometric images with neural networks hinges on the secure detection of deepfake modifications, which is governed by the comprehensive security system of a multi-level information technology framework [4, 5].

Analysis of recent achievements and publications. The development of methodological principles for establishing cybersecurity systems in the information technologies that support critical infrastructure objects remains pertinent [6, 7]. Security processes are currently being implemented in tasks related to detecting deepfake modifications in biometric facial images using neural networks. Security issues in machine learning, particularly complex threat models and the corresponding protective measures, are being actively investigated [8, 9]. The study [10] assesses the efficiency of contemporary fake-content detection algorithms, shedding light on their performance in information warfare scenarios; this comparative analysis contributes valuable insights to ongoing efforts to strengthen defenses against the dissemination of deceptive information. In [11], the security model and data privacy in deep learning, as part of machine learning, are examined under the influence of relevant attacks, including poisoning attacks and evasion attacks, both of which affect decision-making in deep learning. Countermeasures against such attacks involve recognizing and removing malicious data, training models to be insensitive to such data, and concealing the model’s structure and parameters. The confidentiality of data during deep learning is also jeopardized by specific attacks, such as model inversion. Effective tools against privacy threats include cryptographic methods, notably homomorphic encryption [12, 13]. Furthermore, the hardware security of deep neural networks within the “threat—protection” space is discussed in [14]. Modern methods detect deepfake modifications in biometric facial images with an accuracy ranging from 0.94 to 0.99 [15].

The aim of the study. The primary objective of this study is to formulate an analytical security structure for the information technology designed to detect deepfake modifications in biometric images. This structure aligns with the instrumental platform and a multi-level model of neural network IT encompassing Information Resources (IR), Information Systems (IS), Information Processes (IP), Information Networks (IN), and Information Security Management (ISM). The algorithm constructed within this structure is aimed at facilitating the secure operation of neural network IT.

2. Instrumental Platform for Detecting Deepfake Modifications of Biometric Images

The creation of an analytical security structure for detecting deepfake modifications of biometric images is based on the following prerequisites: an instrumental platform (Fig. 1) comprising information neural network technology (IT1) and decision support information technology (IT2). The development of information technologies for detecting deepfake modifications of biometric images relies on: the use of a staged approach for detecting modified biometric images with convolutional neural networks [16]; the application of a neural network system for detecting deepfake modifications based on its architecture, together with decision support systems that assess the classifier’s performance according to the evaluation methodology [17]. The information neural network technology is based on the following components: the object model, the methodology for detecting deepfake modifications, the accuracy of biometric image classification, and an evaluation methodology for assessing the classifier’s performance.

The constructive algorithm of IT1, “video segmentation—detection—feature processing—classification,” is implemented through the architecture of the neural network system using a modular approach; individual functional modules enhance the efficiency and adaptability of the deepfake modification detection algorithm, as shown in Fig. 2.

The modular architecture of the neural network system for detecting deepfake modifications implements an interconnected algorithm comprising the “preprocessing data—feature processing—classifier training” flow. This algorithm is functionally deployed with a convolutional neural network in the space of “input data—convolution—subsampling” and ensures “indication—interpretation—identification—decision-making” [18].

The data preprocessing module of the deepfake modification detection system executes an algorithm that involves:
1. Splitting the video file into individual frames using Python libraries.
2. Face detection using neural network-based tools.
3. Processing the detected biometric images (cropping, adjusting height and width, reformatting) to create new standardized samples.

The extracted feature matrices are saved in formatted arrays that serve as input data for classifier training.

The classifier training module of the deepfake modification detection system implements a functional algorithm that includes:
1. Classifier training.
2. Evaluation of the classifier based on selected metrics.
3. A decision on classifier admission: modified image or unmodified image.
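The cropping and standardization step of the data preprocessing module can be sketched as follows. This is a minimal illustration only: the function name, the 128×128 sample size, and the NumPy nearest-neighbour resize are assumptions, not the authors’ implementation (in practice, frame splitting and face detection would typically rely on dedicated libraries such as OpenCV).

```python
import numpy as np

def standardize_face(frame, box, size=(128, 128)):
    """Crop a detected face region from a video frame and reformat it
    into a fixed-size, normalized sample for the classifier."""
    x, y, w, h = box                      # face bounding box from the detector
    face = frame[y:y + h, x:x + w]        # crop the biometric image
    # Nearest-neighbour resize to the standard sample height and width
    rows = np.arange(size[0]) * face.shape[0] // size[0]
    cols = np.arange(size[1]) * face.shape[1] // size[1]
    sample = face[rows][:, cols]
    # Normalize pixel values to [0, 1] as classifier input
    return sample.astype(np.float32) / 255.0
```

For instance, a 640×480 frame with a 200×200 detected face box yields a 128×128×3 normalized sample ready for feature processing.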
Figure 1: Instrumental platform for detecting deepfake modifications of biometric images

The feature processing module of the deepfake modification detection system is characterized by an algorithmic structure that includes:
1. The utilization of normalized facial biometric images.
2. The extraction of feature matrices using neural network tools.

Figure 2: Architecture of a system for detecting deepfake modifications based on neural networks

The evaluation of the classifier in the system for detecting deepfake modifications of biometric images takes into account:
1. The sensitivity and specificity of the classifier.
2. Youden’s index, which determines the optimal threshold value for the classification of biometric images.
3. Informatively classified biometric images.

The constructive algorithm of IT2, “identification—classifier evaluation—new classifier model,” is implemented by the decision support system in the data analysis space using evaluation metrics such as:
1. Classifier accuracy.
2. The area under the curve (AUC).
3. The logarithmic loss function, which quantifies the difference between the predicted probability of an element belonging to a certain class and the actual probability of belonging, as given by the classifier [19].

3. Security Structure for Detecting Deepfake Modifications based on a Multi-Level Model of Neural Network IT

After analyzing existing approaches to the secure detection of deepfake modifications in biometric images, the following proposals are made:
1. The creation of an analytical security structure for neural network information technologies designed to detect deepfake modifications in human facial biometric images within the space of secure object intellectualization for critical infrastructure [20].
2. The development of a comprehensive security system for the information process “phase—operation—processing” based on the levels “splitting video files into frames—detection, feature processing—evaluation of image classifier accuracy”.

The analytical security structure of neural network IT for detecting deepfake modifications, which aims to ensure the confidentiality and integrity of human facial biometric images (Fig. 3), incorporates a systemic and synergistic approach. The systemic approach adheres to the principles of hierarchy, structuring, and integrity, providing grounds for creating a comprehensive IT security system within the space of optimally integrated methodological, technical (hardware), software, and normative support for secure functioning throughout the information life cycle in the system, and for the algorithm of the information process at the “phase—operation—processing” level. The synergistic approach, exhibiting the emergent property, presents one facet of the integrity of information protection in IT: it assumes properties specific to the comprehensive IT security system as a whole but not to its elements, namely the complex security systems of information resources, systems, processes, networks, and management.

Figure 3: Analytical structure of the security of neural network-based information technology

The core of the analytical structure of secure neural network information technology is the system for detecting deepfake modifications in biometric images based on neural networks, together with the data analysis system, programmatically oriented towards the comprehensive implementation of the information process “splitting the video into frames—deepfake detection—feature processing—evaluation of image classification”. On this basis, decisions are made regarding the sufficient accuracy of the deepfake modification classifier according to the chosen model, with the possibility of updating it. Table 1 presents a comprehensive security system for the information process of detecting deepfake modifications at the processing level of biometric images according to the “object—threat—protection” concept.
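The classifier evaluation metrics named above (sensitivity, specificity, Youden’s index, accuracy, and the logarithmic loss function) can be sketched in plain Python. This is an illustrative sketch only; the function name and the 0.5 admission threshold are assumptions, not the authors’ implementation.

```python
import math

def classifier_metrics(y_true, y_prob, threshold=0.5):
    """Evaluation metrics for a binary deepfake classifier.
    y_true: 1 = modified (deepfake) image, 0 = unmodified image.
    y_prob: predicted probability that the image is modified."""
    y_pred = [1 if p >= threshold else 0 for p in y_prob]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn)              # true-positive rate
    specificity = tn / (tn + fp)              # true-negative rate
    return {
        "sensitivity": sensitivity,
        "specificity": specificity,
        # Youden's index: gain of this threshold over a random classifier
        "youden_j": sensitivity + specificity - 1,
        "accuracy": (tp + tn) / len(y_true),
        # Logarithmic loss: penalizes the gap between predicted and actual class
        "log_loss": -sum(t * math.log(max(p, 1e-15)) +
                         (1 - t) * math.log(max(1 - p, 1e-15))
                         for t, p in zip(y_true, y_prob)) / len(y_true),
    }
```

Sweeping the threshold over the classifier’s score range and choosing the value that maximizes Youden’s index gives the optimal threshold mentioned in Section 2.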
Table 1
The comprehensive security system for the process of detecting deepfake modifications of biometric images at the processing level (object: informational process)

1. The automated classifier accuracy assessment
   Intentional threats: leakage and/or violation of confidentiality and integrity of data and models; unauthorized access; malicious software; Distributed Denial of Service (DDoS) attacks.
   Incidental threats: failures and/or instability of technical devices; operator errors; unpatched software vulnerabilities.
   Hardware protection: Luna SA HSM; Luna SP; Luna XML.
   Software protection: Encrypt Easy; Suricata; Webroot DNS Protection; 1Password; BitLocker; Bitdefender Antivirus.

2. The deepfake detection in real-time
   Intentional threats: data manipulation; model inversion; data poisoning; adversarial examples; denial of service.
   Incidental threats: technical malfunctions of the network and components.
   Hardware protection: nShield Connect HSM; Gryada-301; Baryer-301; Canal-301.
   Software protection: ManageEngine Log360; BitLocker.

3. The sequential image processing
   Intentional threats: data poisoning; adversarial examples; model manipulation; leakage and/or violation of confidentiality and integrity of data and models; cracking of cryptographic protection algorithms.
   Incidental threats: network failures; physical damage to equipment; poor data management practices.
   Hardware protection: Luna SA4 HSM; Luna PCM.
   Software protection: Cisco UVPN-ZAS; BitLocker.

4. The classification accuracy assessment using cloud computing
   Intentional threats: leakage and/or violation of confidentiality and integrity of data and models; malicious software; distributed denial of service (DDoS) attacks; phishing and social engineering.
   Incidental threats: data corruption; network failures; unpatched software vulnerabilities; denial of service on the side of the service provider; operator errors.
   Hardware protection: Cisco Firepower; Palo Alto Networks PA-7000 Series.
   Software protection: Webroot DNS Protection; AlienVault USM.

Regulatory support for the analytical structure
of neural network IT security is grounded in several international standards in the field of cybersecurity, including ISO/IEC 27034:2017, IEC 61508-3:2010, and ISO/IEC 13335-1:2004. The C2PA Specification 1.0, a pioneering functional standard by the Coalition for Content Provenance and Authenticity, establishes scenarios, workflows, and requirements for validating and ensuring the digital provenance of content. These methods validate information about the creation and modification of media files, empowering content editors to create tamper-evident media by documenting who created or modified digital content and the specifics of the modifications made, implementing robust security measures, and fostering transparency in the content creation process [17].

4. Conclusions

In this paper, we introduce a security methodology for the IT-based detection of deepfake modifications in biometric images using neural networks. The methodology is based on:
1. An instrumental platform.
2. An analytical security structure of neural network information technologies according to a multi-level model.
3. A comprehensive security system for the information process of detecting deepfake modifications at the processing level, following the “object—threat—protection” concept.
This serves as the foundation for developing systematic approaches to secure deepfake detection within the security profiles of critical infrastructure.

References

[1] H. Kagermann, W. Wahlster, J. Helbig, Securing the Future of German Manufacturing Industry: Recommendations for Implementing the Strategic Initiative Industrie 4.0. Final Report of the Industrie 4.0 Working Group, Acatech, National Academy of Science and Engineering (2013).
[2] National Security and Defense Council of Ukraine. URL: https://www.rnbo.gov.ua/files/2021/STRATEGIYA%20KYBERBEZPEKI/proekt%20strategii_kyberbezpeki_Ukr.pdf
[3] The National Cybersecurity Cluster. URL: https://cybersecuritycluster.org.ua/
[4] B. Bebeshko, et al., Application of Game Theory, Fuzzy Logic and Neural Networks for Assessing Risks and Forecasting Rates of Digital Currency, J. Theor. Appl. Inf. Technol. 100(24) (2022) 7390–7404.
[5] K. Khorolska, et al., Application of a Convolutional Neural Network with a Module of Elementary Graphic Primitive Classifiers in the Problems of Recognition of Drawing Documentation and Transformation of 2D to 3D Models, J. Theor. Appl. Inf. Technol. 100(24) (2022) 7426–7437.
[6] S. Yevseiev, et al., Synergy of Building Cybersecurity Systems. PC Technology Center (2021). doi: 10.15587/978-617-7319-31-2.
[7] Y. Bobalo, V. Dudykevych, H. Mykytin, Strategic Security of the “Object—Information Technology” System, Publishing House of Lviv Polytechnic National University (2020).
[8] M. Choraś, et al., Machine Learning—The Results Are Not the Only Thing that Matters! What About Security, Explainability and Fairness?, Computational Science—ICCS 2020, LNTCS 12140 (2020) 615–628. doi: 10.1007/978-3-030-50423-6_46.
[9] N. Papernot, et al., SoK: Security and Privacy in Machine Learning, IEEE European Symposium on Security and Privacy (EuroS&P) (2018) 399–414. doi: 10.1109/EuroSP.2018.00035.
[10] Y. Shtefaniuk, I. Opirskyy, Comparative Analysis of the Efficiency of Modern Fake Detection Algorithms in Scope of Information Warfare, 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (2021) 207–211. doi: 10.1109/IDAACS53288.2021.9660924.
[11] H. Bae, et al., Security and Privacy Issues in Deep Learning, ArXiv (2018). doi: 10.48550/arXiv.1807.11655.
[12] V. Grechaninov, et al., Decentralized Access Demarcation System Construction in Situational Center Network, in: Workshop on Cybersecurity Providing in Information and Telecommunication Systems II, vol. 3188, no. 2 (2022) 197–206.
[13] V. Grechaninov, et al., Formation of Dependability and Cyber Protection Model in Information Systems of Situational Center, in: Workshop on Emerging Technology Trends on the Smart Industry and the Internet of Things, vol. 3149 (2022) 107–117.
[14] Q. Xu, M. Tanvir Arafin, G. Qu, Security of Neural Networks from Hardware Perspective: A Survey and Beyond, 26th Asia and South Pacific Design Automation Conference (ASP-DAC) (2021) 449–454. doi: 10.1145/3394885.3431639.
[15] X. Cao, N. Gong, Understanding the Security of Deepfake Detection, Digital Forensics and Cyber Crime, LNICST 441 (2022) 360–378. doi: 10.1007/978-3-031-06365-7_22.
[16] V. Dudykevych, H. Mykytyn, K. Ruda, Application of Deep Learning for Detecting Deepfake Modifications in Biometric Images, Mod. Spec. Technol. 1 (2022) 13–22.
[17] L. Wieclaw, et al., Biometric Identification from Raw ECG Signal Using Deep Learning Techniques, 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) (2017) 129–133. doi: 10.1109/IDAACS.2017.8095063.
[18] V. Dudykevych, H. Mykytyn, K. Ruda, The Concept of a Deepfake Detection System of Biometric Image Modifications based on Neural Networks, IEEE 3rd KhPI Week on Advanced Technology (2022). doi: 10.1109/khpiweek57572.2022.9916378.
[19] E. Altuncu, V. Franqueira, S. Li, Deepfake: Definitions, Performance Metrics and Standards, Datasets and Benchmarks, and a Meta-Review, ArXiv (2022). doi: 10.48550/arXiv.2208.10913.
[20] X. Wang, T. Ahonen, J. Nurmi, Applying CDMA Technique to Network-on-Chip, IEEE Transactions on Very Large Scale Integration (VLSI) Systems 15(10) (2007) 1091–1100. doi: 10.1109/tvlsi.2007.903914.
[21] H. Hulak, et al., Dynamic Model of Guarantee Capacity and Cyber Security Management in the Critical Automated Systems, in: 2nd International Conference on Conflict Management in Global Information Networks, vol. 3530 (2022) 102–111.