<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Detecting Deepfake Modifications of Biometric Images using Neural Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Valeriy Dudykevych</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Yevseiev</string-name>
          <email>serhii.yevseiev@gmail.com</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Halyna Mykytyn</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Khrystyna Ruda</string-name>
          <email>khrystyna.s.ruda@lpnu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Hennadii Hulak</string-name>
          <email>h.hulak@kubg.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Borys Grinchenko Kyiv Metropolitan University</institution>
          ,
          <addr-line>18/2 Bulvarno-Kudriavska str., Kyiv, 04053</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>National Technical University “Kharkiv Polytechnic Institute”</institution>
          ,
          <addr-line>Kharkiv, 61000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>391</fpage>
      <lpage>397</lpage>
      <abstract>
        <p>The National Cybersecurity Cluster of Ukraine is functionally oriented towards building systems for the protection of various platforms within the information infrastructure, including the development of secure technologies for detecting deepfake modifications of biometric images based on neural networks in cyberspace. The paper introduces an instrumental platform for detecting deepfake modifications of biometric images and an analytical security structure of neural network Information Technologies (IT) based on a multi-level model of “resources-systems-processes-networks-management” according to the “object-threat-protection” concept. The instrumental platform integrates information neural network technology and decision support information technology, employing a modular architecture of the neural network detection system for deepfake modifications in the “preprocessing data-feature processing-classifier training” space. The core of the IT security structure is the integrity of the functioning of the neural network system for detecting deepfake modifications of human facial biometric images and data analysis systems that implement the information process of “splitting a video file into frames-detection, feature processing-classifier accuracy assessment”. The security of the multi-level model of neural network IT is based on systemic and synergistic approaches, enabling the construction of a comprehensive IT security system, considering the emergent property in the presence of potential targeted threats and the application of advanced technologies at the hardware and software levels. The proposed comprehensive security system for the information process of detecting deepfake modifications of biometric images covers hardware and software means by segments: automated classifier accuracy assessment; real-time detection of deepfake modifications; sequential image processing; accuracy evaluation of classification using cloud computing.</p>
      </abstract>
      <kwd-group>
        <kwd>Intellectualization</kwd>
        <kwd>cybersecurity</kwd>
        <kwd>biometric image</kwd>
        <kwd>deepfake</kwd>
        <kwd>information technology</kwd>
        <kwd>neural networks</kwd>
        <kwd>detection system</kwd>
        <kwd>instrumental platform</kwd>
        <kwd>analytical security structure</kwd>
        <kwd>comprehensive security system</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The problem statement. The security of critical
state infrastructure objects in both physical
and cyberspace is currently a pressing issue
within the realm of intellectualization across
various societal domains. In the context of
Industry 4.0 tasks, the Cybersecurity Strategy
of Ukraine, and the National Cybersecurity
Cluster, one of the paramount tools for
addressing the challenge of safely
intellectualizing critical infrastructure objects
is the utilization of neural network information
technologies for detecting deepfake
modifications in the biometric images of
individuals’ faces [
        <xref ref-type="bibr" rid="ref1">1–3</xref>
        ]. The accuracy with which neural
networks classify biometric images depends on
the security of the deepfake detection process,
which is governed by the comprehensive
security system of a multi-level information
technology framework [4, 5].
      </p>
      <p>Analysis of recent achievements and
publications. The ongoing development of
methodological principles for establishing
cybersecurity systems in information
technologies that support the functioning of
critical infrastructure objects remains pertinent
[6, 7]. Currently, security processes are being
implemented in tasks related to the detection of
deepfake modifications in biometric facial
images using neural networks. Investigations
into security issues within the realm of machine
learning, particularly dealing with complex
threat models and corresponding protective
measures, are actively underway [8, 9]. The
study [10] delves into the efficiency assessment
of contemporary algorithms designed to detect
fake content, shedding light on their
performance within the context of information
warfare scenarios. This comparative analysis
contributes valuable insights into the ongoing
efforts to bolster defenses against deceptive
information dissemination. In [11], the security
model and data privacy in deep learning, as part
of machine learning, are examined under the
influence of relevant attacks. This includes
poisoning attacks and evasion attacks, both of
which impact decision-making processes in
deep learning. Countermeasures against such
attacks involve the recognition and removal of
malicious data, training models to be insensitive
to such data, and concealing the model’s
structure and parameters. The confidentiality of
data during deep learning is also jeopardized by
specific attacks, such as model inversion.
Effective tools to counter such
privacy threats include cryptographic methods,
notably homomorphic encryption [12, 13].</p>
      <p>Furthermore, the study of hardware
security for deep neural networks within the
“threat—protection” space is discussed in [14].
Modern methods ensuring the detection of
deepfake modifications in biometric facial
images with an accuracy ranging from 0.94 to
0.99 are known [15].</p>
      <p>The aim of the study. The primary objective
of this study is to formulate an analytical
security structure for information technology
designed to detect deepfake modifications in
biometric images. This structure aligns with
the instrumental platform and a multi-level
model of neural network IT, encompassing
Information Resources (IR), Information
Systems (IS), Information Processes (IP),
Information Networks (IN), and Information
Security Management (ISM). The constructed
algorithm within this structure is aimed at
facilitating the secure operation of neural
network IT.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Instrumental Platform for Detecting Deepfake Modifications of Biometric Images</title>
      <p>The creation of an analytical security structure
for detecting deepfake modifications of
biometric images is based on the following
prerequisites: an instrumental platform
(Fig. 1)—information neural network
technology (IT1); decision support information
technology (IT2). The development of
information technologies for detecting
deepfake modifications of biometric images
relies on: the use of a staged approach for
detecting modified biometric images using
convolutional neural networks [16]; the
application of a neural network system for
detecting deepfake modifications based on its
architecture and decision support systems for
assessing the classifier’s performance according
to the evaluation methodology [17]. The
information neural network technology is
based on the following components: the object
model, methodology for detecting deepfake
modifications, accuracy of biometric image
classification, and an evaluation methodology
for assessing the classifier’s performance. The
constructive algorithm of IT1: “video
segmentation—detection—feature processing
—classification” is implemented through the
architecture of the neural network system using
a modular approach, incorporating individual
functional modules to enhance the efficiency
and adaptability of the deepfake modification
detection algorithm, as shown in Fig. 2.</p>
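      <p>The constructive “video segmentation—detection—feature processing—classification” chain of IT1 can be sketched as a pipeline of interchangeable modules. This is a minimal illustration of the modular idea only: the stage functions below are stand-in stubs of our own, whereas the paper’s actual modules are a neural-network face detector, a CNN feature extractor, and a trained classifier dropped in behind the same interfaces.</p>

```python
import numpy as np

class Pipeline:
    """Chains independent functional modules, mirroring the modular architecture."""
    def __init__(self, *stages):
        self.stages = stages

    def run(self, x):
        # Each module consumes the previous module's output.
        for stage in self.stages:
            x = stage(x)
        return x

def split_into_frames(video):
    """Stand-in for video segmentation: treat the input as a list of frames."""
    return list(video)

def detect_faces(frames):
    """Stand-in detector: pretend the whole frame is the detected face."""
    return [f for f in frames]

def extract_features(faces):
    """Stand-in feature processing: one feature (mean intensity) per face."""
    return np.array([[f.mean()] for f in faces], dtype=float)

def classify(features, threshold=0.5):
    """Stand-in classifier: simple threshold rule on the single feature."""
    return (features[:, 0] > threshold).astype(int)

pipeline = Pipeline(split_into_frames, detect_faces, extract_features, classify)
frames = [np.full((4, 4), 0.2), np.full((4, 4), 0.9)]
labels = pipeline.run(frames)   # one 0/1 decision per frame
```

Swapping any single stage (e.g., a better detector) leaves the rest of the chain untouched, which is the efficiency and adaptability benefit the modular approach targets.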
      <sec id="sec-5-1">
        <p>
          The modular architecture of the neural network system for detecting deepfake modifications implements an interconnected algorithm comprising the “preprocessing data—feature processing—classifier training” flow. This algorithm is functionally deployed with a convolutional neural network in the space of “input data—convolution—subsampling” and ensures “indication—interpretation—identification—decision-making” [
          <xref ref-type="bibr" rid="ref2">18</xref>
          ].
        </p>
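      <p>The “input data—convolution—subsampling” flow can be made concrete with one hand-written convolution and a 2×2 subsampling (max-pooling) step. This pure-NumPy toy is our illustration of the two CNN stages, not the network used in the paper.</p>

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Single-channel 'valid' convolution (cross-correlation, as in CNN layers)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def subsample2x2(fmap):
    """2x2 max-pooling: the 'subsampling' stage reducing each dimension by half."""
    h, w = fmap.shape[0] // 2 * 2, fmap.shape[1] // 2 * 2
    f = fmap[:h, :w]
    return f.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

image = np.arange(36, dtype=float).reshape(6, 6)   # "input data": a toy 6x6 frame
edge_kernel = np.array([[1.0, -1.0]])              # horizontal-gradient filter
fmap = conv2d_valid(image, edge_kernel)            # "convolution": shape (6, 5)
pooled = subsample2x2(fmap)                        # "subsampling": shape (3, 2)
```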
        <p>The data preprocessing module of the
deepfake modification detection system
functionally executes an algorithm that
involves:
1. Splitting the video file into individual
frames utilizing Python libraries.
2. Face detection using neural
network-based tools.
3. Processing detected biometric images
(cropping, adjusting height and width,
reformatting) to create new
standardized samples.</p>
      </sec>
      <sec id="sec-5-2">
        <p>The feature processing module of the deepfake
modification detection system is characterized
by an algorithmic structure that includes:
1. The utilization of normalized facial
biometric images.
2. The extraction of feature matrices using
neural network tools.
3. The saving of these features in formatted
arrays to be processed as input data for
classifier training.</p>
        <p>The classifier training module of the deepfake
modification detection system implements a
functional algorithm that includes:
1. Classifier training.
2. Evaluation of the classifier based on
selected metrics.
3. Decision on classifier admission:
modified image or unmodified image.</p>
        <p>The evaluation of the classifier in the system
for detecting deepfake modifications of
biometric images takes into account:
1. Sensitivity and specificity of the
classifier.
2. Youden’s index, determining the optimal
threshold value for the classification of
biometric images.
3. Informatively classified biometric
images.</p>
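      <p>Youden’s index from item 2 is J = sensitivity + specificity - 1; sweeping it over candidate thresholds yields the optimal operating point for the classifier. The sketch below, with made-up scores and thresholds, is our illustration rather than the authors’ evaluation code.</p>

```python
import numpy as np

def youden_threshold(scores, labels, thresholds):
    """Pick the threshold maximizing J = sensitivity + specificity - 1."""
    best_t, best_j = thresholds[0], -1.0
    for t in thresholds:
        pred = (scores >= t).astype(int)
        tp = np.sum((pred == 1) & (labels == 1))
        tn = np.sum((pred == 0) & (labels == 0))
        fp = np.sum((pred == 1) & (labels == 0))
        fn = np.sum((pred == 0) & (labels == 1))
        sensitivity = tp / (tp + fn)   # true-positive rate on modified images
        specificity = tn / (tn + fp)   # true-negative rate on unmodified images
        j = sensitivity + specificity - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Toy classifier scores: label 1 = modified (deepfake), 0 = unmodified.
scores = np.array([0.1, 0.2, 0.35, 0.4, 0.7, 0.8, 0.9])
labels = np.array([0, 0, 0, 1, 1, 1, 1])
t, j = youden_threshold(scores, labels, np.array([0.3, 0.38, 0.5]))
```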
        <p>
          The constructive algorithm of IT2, involving
“identification—classifier evaluation—new
classifier model,” is implemented by the
decision support system in the data analysis
space, considering evaluation metrics such as:
1. Classifier accuracy.
2. The area under the curve.
3. Logarithmic loss function, which
positions the difference between the
predicted probability of an element
belonging to a certain class and the
actual probability of belonging from the
classifier [
          <xref ref-type="bibr" rid="ref3">19</xref>
          ].
        </p>
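          <p>The logarithmic loss of item 3 penalizes the gap between the predicted probability that an image is modified and its actual label, while classifier accuracy (item 1) simply counts correct decisions. The following short NumPy illustration uses toy predictions of our own; equivalent helpers exist in scikit-learn (accuracy_score, log_loss, roc_auc_score).</p>

```python
import numpy as np

def accuracy(y_true, p, threshold=0.5):
    """Share of correctly classified images (metric 1)."""
    return float(np.mean((p >= threshold).astype(int) == y_true))

def log_loss(y_true, p, eps=1e-15):
    """Binary cross-entropy (metric 3): confident wrong probabilities cost most."""
    p = np.clip(p, eps, 1 - eps)   # guard against log(0)
    return float(-np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p)))

y_true = np.array([1, 0, 1, 0])          # 1 = deepfake-modified image
p_good = np.array([0.9, 0.1, 0.8, 0.2])  # well-calibrated classifier
p_bad  = np.array([0.6, 0.4, 0.4, 0.6])  # uncertain / partly wrong classifier

acc = accuracy(y_true, p_good)           # both classifiers can be compared:
# the well-calibrated one attains the smaller logarithmic loss.
```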
      </sec>
    </sec>
    <sec id="sec-6">
      <title>3. Security Structure for Detecting Deepfake Modifications based on a Multi-Level Model of Neural Network IT</title>
      <p>
        After analyzing existing approaches to secure
detection of deepfake modifications in
biometric images, the following proposals are
made:
1. the creation of an analytical security
structure for neural network information
technologies designed for the detection of
deepfake modifications in human facial
biometric images within the space of
secure object intelligence for critical
infrastructure [
        <xref ref-type="bibr" rid="ref4">20</xref>
        ].
2. the development of a comprehensive
security system for the information
process “phase—operation—processing”
based on levels such as “splitting video
files into frames—detection, feature
processing—evaluation of image
classifier accuracy”.
      </p>
      <p>The analytical security structure of neural
network IT for detecting deepfake
modifications, aiming to ensure the
confidentiality and integrity of human facial
biometric images (Fig. 3), incorporates a
systemic and synergistic approach. The
systemic approach adheres to principles of
hierarchy, structuring, and integrity, providing
grounds for the creation of a comprehensive IT
security system within the space of optimal
integration of methodological, technical
(hardware), software, and normative support
for secure functioning throughout the
information life cycle in the system, and the
algorithm of the information process at the
“phase—operation—processing” level. The
synergistic approach, exhibiting the emergent
property, presents one facet of the integrity of
information protection in IT, assuming the
presence of properties specific to a
comprehensive IT security system as a whole
but not specific to its elements—complex
security systems of information resources,
systems, processes, networks, and
management.
The core of the analytical structure of secure
neural network information technology is the
system for detecting deepfake modifications in
biometric images based on neural networks and
the data analysis system, programmatically
oriented towards the comprehensive
implementation of the information process
“splitting the video into frames—deepfake
detection—feature processing—evaluation of
image classification”. On this basis, decisions are
made regarding the sufficient accuracy of the
deepfake modification classifier according to
the chosen model, with the possibility of
updating it. Table 1 presents a comprehensive
security system for the information process of
detecting deepfake modifications at the
processing level of biometric images according
to the “object—threat—protection” concept.
Regulatory support for the analytical structure
of neural network IT security is grounded in
several international standards in the field of
cybersecurity, including ISO/IEC 27034:2017,
IEC 61508-3:2010, and ISO/IEC 13335-1:2004.
The C2PA Specification 1.0, a pioneering
functional standard by the Coalition for Content
Provenance and Authenticity, establishes
scenarios, workflows, and requirements for
validating and ensuring the digital provenance
of content. These methods validate
information about the creation and
modification of media files, enabling
content editors to create tamper-proof media
by documenting who created or modified
digital content and what modifications were
made, while supporting robust security
measures and transparency in the content
creation process [17].</p>
    </sec>
    <sec id="sec-9">
      <title>4. Conclusions</title>
      <p>In the paper, we introduce a security
methodology for IT detection of deepfake
modifications in biometric images using neural
networks. The methodology is based on:
1. an instrumental platform.
2. an analytical security structure of neural
network information technologies
according to a multi-level model.
3. a comprehensive security system for the
information process of detecting
deepfake modifications at the processing
level, following the concept of “object—
threat—protection”.</p>
      <p>This serves as the foundation for the
development of systematic approaches to
secure deepfake detection within the security
profiles of critical infrastructure.</p>
    </sec>
    <sec id="sec-10">
      <title>References</title>
      <p>[1] H. Kagermann, W. Wahlster, J. Helbig, Securing the Future of German Manufacturing Industry: Recommendations for Implementing the Strategic Initiative Industrie 4.0. Final Report of the Industrie 4.0 Working Group, Acatech, National Academy of Science and Engineering (2013).</p>
      <p>[2] National Security and Defense Council of Ukraine, Cybersecurity Strategy of Ukraine. URL: https://www.rnbo.gov.ua/files/2021/STRATEGIYA%20KYBERBEZPEKI/proekt%20strategii_kyberbezpeki_Ukr.pdf</p>
      <p>[3] The National Cybersecurity Cluster. URL: https://cybersecuritycluster.org.ua/</p>
      <p>[5] Convolutional Neural Network with a Module of Elementary Graphic Primitive Classifiers in the Problems of Recognition of Drawing Documentation and Transformation of 2D to 3D Models, J. Theor. Appl. Inf. Technol. 100(24) (2022) 7426–7437.</p>
      <p>[6] S. Yevseiev, et al., Synergy of Building Cybersecurity Systems, PC Technology Center (2021). doi: 10.15587/978-617-7319-31-2.</p>
      <p>[7] Y. Bobalo, V. Dudykevych, H. Mykytin, Strategic Security of the “Object—Information Technology” System, Publishing House of Lviv Polytechnic National University (2020).</p>
      <p>[8] M. Choraś, et al., Machine Learning—The Results Are Not the only Thing that Matters! What About Security, Explainability and Fairness?, Computational Science—ICCS 2020, LNTCS 12140 (2020) 615–628. doi: 10.1007/978-3-030-50423-6_46.</p>
      <p>[9] N. Papernot, et al., SoK: Security and Privacy in Machine Learning, IEEE European Symposium on Security and Privacy (EuroS&amp;P) (2018) 399–414. doi: 10.1109/EuroSP.2018.00035.</p>
      <p>[10] Y. Shtefaniuk, I. Opirskyy, Comparative Analysis of the Efficiency of Modern Fake Detection Algorithms in Scope of Information Warfare, 11th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) (2021) 207–211. doi: 10.1109/IDAACS53288.2021.9660924.</p>
      <p>[11] H. Bae, et al., Security and Privacy Issues in Deep Learning, ArXiv (2018). doi: 10.48550/arXiv.1807.11655.</p>
      <p>[12] V. Grechaninov, et al., Decentralized Access Demarcation System Construction in Situational Center, in: Workshop on Emerging Technology Trends on the Smart Industry and the Internet of Things, vol. 3149 (2022) 107–117.</p>
      <p>[13] V. Grechaninov, et al., Formation of Dependability and Cyber Protection Model in Information Systems of Situational Center, in: Cybersecurity Providing in Information and Telecommunication Systems II, vol. 3188 (2022) 197–206.</p>
      <p>[14] Q. Xu, M. Tanvir Arafin, G. Qu, Security of Neural Networks from Hardware Perspective: A Survey and Beyond, 26th Asia and South Pacific Design Automation Conference (ASP-DAC) (2021) 449–454. doi: 10.1145/3394885.3431639.</p>
      <p>[15] X. Cao, N. Gong, Understanding the Security of Deepfake Detection, Digital Forensics and Cyber Crime, LNICST 441 (2022) 360–378. doi: 10.1007/978-3-031-06365-7_22.</p>
      <p>[16] V. Dudykevych, H. Mykytyn, K. Ruda, Application of Deep Learning for Detecting Deepfake Modifications in Biometric Images, Mod. Spec. Technol. 1 (2022) 13–22.</p>
      <p>[17] L. Wieclaw, et al., Biometric Identification from Raw ECG Signal Using Deep Learning Techniques, 9th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS) (2017) 129–133. doi: 10.1109/IDAACS.2017.8095063.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[1] [2] Computing Systems: Technology and Applications (IDAACS)</source>
          (
          <year>2017</year>
          )
          <fpage>129</fpage>
          -
          <lpage>133</lpage>
          . doi:
          <volume>10</volume>
          .1109/IDAACS.
          <year>2017</year>
          .
          <volume>8095063</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>V.</given-names>
            <surname>Dudykevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Mykytyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ruda</surname>
          </string-name>
          ,
          <article-title>The Concept of a Deepfake Detection System of Biometric Image Modifications based on Neural Networks</article-title>
          ,
          <source>IEEE 3rd KhPI Week on Advanced Technology</source>
          (
          <year>2022</year>
          ). doi: 10.1109/khpiweek57572.2022.9916378.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>E.</given-names>
            <surname>Altuncu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Franqueira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>Deepfake: Definitions, Performance Metrics and Standards, Datasets and Benchmarks, and a Meta-Review</article-title>
          ,
          <source>ArXiv</source>
          (<year>2022</year>). doi: 10.48550/arXiv.2208.10913.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>X.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ahonen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Nurmi</surname>
          </string-name>
          ,
          <article-title>Applying CDMA technique to network-on-chip</article-title>
          ,
          <source>IEEE Transactions on Very Large Scale Integration (VLSI) Systems</source>
          <volume>15</volume>
          (
          <issue>10</issue>
          ) (
          <year>2007</year>
          )
          <fpage>1091</fpage>
          -
          <lpage>1100</lpage>
          . doi: 10.1109/tvlsi.2007.903914.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>H.</given-names>
            <surname>Hulak</surname>
          </string-name>
          , et al.,
          <source>Dynamic Model of Guarantee Capacity and Cyber Security Management in the Critical Automated Systems, in: 2nd International Conference on Conflict Management in Global Information Networks</source>
          , vol.
          <volume>3530</volume>
          (
          <year>2022</year>
          )
          <fpage>102</fpage>
          -
          <lpage>111</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>