<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>I. Tsmots);</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Ivan Tsmots†, Yurii Opotyak†, Yurii Lukashchuk*,† and Sofiia Tesliuk†</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Stepan Bandera 12, 79013, Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>A generalized analytical machine learning model for neuro-like data encryption and decryption has been developed. Its main components are a neural network architecture shaper, a weights matrix calculator, and a macropartial product table calculator; their implementation reduces setup time. An analysis of recent research and publications on the relevance of problems in implementing neuro-like cryptographic data encryption is carried out. The paper formulates rules for forming a neural network architecture: the structure of a neuro-like network for cryptographic data encryption is determined by the number of neuro elements. A weights matrix calculator has also been developed; for this purpose, the singular value decomposition method and the Jacobi rotation method were used to find eigenvectors and eigenvalues. A simulation model was developed to demonstrate the operation, and a macropartial product table calculator based on the table-algorithmic method was built. To implement these tasks, the C# programming language and the Visual Studio 2022 development environment were chosen, with Windows Forms as the development technology; the Accord.Math library was added to the project for operations with matrices. The practical value is that the developed tools provide fast calculation of coefficients for a given neural network architecture. As a result, applying such a generalized analytical model speeds up computational processes, including data encryption and decryption. Keywords: neuron-like network; neuro element; macropartial products; tabular-algorithmic method; singular value decomposition method; Jacobi rotation method; eigenvectors; eigenvalues.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The importance of secure data encryption is paramount in the modern digital landscape, where
sensitive information is frequently transmitted and stored online. Cryptographic techniques are
commonly employed to safeguard the confidentiality and integrity of data, but traditional methods
can still be susceptible to attacks from hackers and other malicious entities.</p>
      <p>Neural-based data encryption is a promising new approach that uses artificial neural networks to
encrypt and decrypt data. This approach is based on the principle that neural networks can learn
complex patterns and relationships in data, making them well-suited for encryption tasks.</p>
      <p>However, one of the challenges in implementing neural-like data encryption is determining the
optimal pre-settings for the neural network. This is where the generalized analytical model of
presetting comes in, as it provides a systematic approach to determine the optimal settings for a given
data set and encryption task.</p>
      <p>Mobile smart systems refer to the combination of hardware and software solutions integrated
into mobile devices, such as smartphones and tablets, which enable them to perform advanced
computing, communication, and data processing tasks.</p>
      <p>These systems are highly intelligent, self-contained platforms equipped with sensors, processors,
storage, and connectivity features. The primary goal of mobile smart systems is to provide users with
seamless interaction with digital services while maintaining security, efficiency, and user experience.</p>
      <p>One critical aspect of mobile smart systems is their ability to secure data through encryption.
Encryption ensures that data transmitted over networks or stored on mobile devices is protected
from unauthorized access. Mobile systems use various encryption protocols to safeguard user
information, financial transactions, and communications.</p>
      <p>Thus, the development of a generalized analytical model of pre-settings for neural-like
cryptographic data encryption is a relevant research topic, as it has the potential to improve the
security and efficiency of data encryption in various fields, including finance, healthcare, and public
administration.</p>
      <p>The object of research is the process of cryptographic encryption of data using neuron-like
networks, as well as how weight coefficient matrices precomputation can be used to improve the
efficiency and effectiveness of this process.</p>
      <p>The subject of the research is a generalized analytical model of weight coefficient matrices
precomputation for neuron-like cryptographic data encryption, which is aimed at improving the
security and efficiency of cryptographic systems through the usage of neural networks.</p>
      <p>The goal of this research is to develop a generalized analytical model for neuro-like cryptographic
data encryption designed to enhance both the security and efficiency of encryption when compared
to existing methods. Additionally, the study aims to assess the performance of the proposed model
through a series of simulations and experiments.</p>
      <p>To achieve this goal, the following main tasks of the study are defined:
• analysis of the latest research and publications;
• choosing a neural network architecture shaper;
• development of a program for calculating the weights matrix based on an improved method of singular value decomposition;
• development of software for calculating the table of macropartial products.</p>
      <p>The scientific novelty of the obtained results lies in the development of a generalized model for
the precomputation of weights matrices, specifically for implementing neuro-like data encryption.</p>
      <p>The core elements of the model include a neural network architecture shaper, a weights matrix calculator, and a macropartial product tables calculator. This proposed approach reduces the time required for the development of neuro-like networks.</p>
      <p>The practical significance of the results lies in the fact that the developed generalized model for
weights matrix precomputation enhances both the security and efficiency of the data encryption
process. By leveraging this model, encryption becomes more robust against potential vulnerabilities,
while also improving operational efficiency.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Analysis of the latest research and publications</title>
      <p>
        Analysis of the trends in the data processing systems development shows increased usage of neural
and neuron-like methods [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ]. Paper [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] elaborates on economic growth prediction based on the
artificial neural network algorithm and RBF neural network algorithm by combining the theory for
economic forecasting and the characteristics of the BP neural network algorithm with the RBF neural
network.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the authors considered the problem of handling sets of medical data and proposed an improved regression method based on artificial neural networks. The authors in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] propose accurate network performance evaluation using a graph convolutional network-based evaluation method for ultralarge-scale networks that is significantly less time-consuming than traditional methods.
      </p>
      <p>
        One of the trends in the development of embedded systems is the usage of hardware-based
implementation of neuron-like networks that requires developing particular components for these
cases. A forward propagation channel model was proposed in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], where adjustment of model
weights was achieved by developing supervised learning Tempotron algorithm and oriented on
STM32 chips. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], field-programmable gate arrays (FPGAs) are used to create hardware neural
networks, and a larger neuron-like network can be implemented in this case at a lower cost.
      </p>
      <p>
        An image cryptosystem based on a non-linear component neuron-like scheme is proposed in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Neuron-like learning algorithms realize a memorable diffusion algorithm, and the inputs and weights of the neuron are regulated by the information of the image.
      </p>
      <p>
        Analysis of publications [
        <xref ref-type="bibr" rid="ref7 ref8 ref9">7-9</xref>
        ] shows that neural network cryptographic data protection is mainly implemented in software. The main disadvantage of such an implementation is the difficulty of providing real-time operation. In [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], an approach to the neuron-like network implementation is presented, oriented toward embedded systems for real-time cryptographic data protection with symmetric keys for onboard communication systems in unmanned aerial vehicles, because of its suitability for hardware implementation.
      </p>
      <p>
        Neural cryptography is considered in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], which is based on neural networks and Vector-Valued
Tree Parity Machine (VVTPM), which is a generalized architecture of TPM models proposed by
authors. In [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], a neural cryptography based on the complex-valued tree parity machine network
(CVTPM) is proposed, where the input, output, and weights of CVTPM are complex values and can
be considered as an extension of TPM.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13">10-13</xref>
        ], the ways of adaptation of an auto-associative neural network with non-iterative
learning to the tasks of cryptographic data protection are considered. Authors in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] describe an
auto-associative neural network concept of soft computing in combination with an encryption
technique intended to send data securely on the communication network.
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], the AES algorithm was described to secure data bits and also designed as a two-stage
encryption and decryption algorithm that can be applied to IoT networks. Study [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] proposes a new
PQC neural network intended to map a code-based PQC method to a neural network structure. This
approach aimed to enhance the security of ciphertexts based on non-linear activation functions,
random perturbation of ciphertexts, and uniform distribution of ciphertexts.
      </p>
      <p>
        The papers [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
        ] analyze the main ways of developing on-board means of cryptographic protection of real-time data transmission, showing that a promising direction of development is the use of neural network methods for data encryption and decryption. In [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ], the implementation of neuron-like data encryption using polynomials is described, where the structure of the device was developed using the base operation of neuron-like data encryption.
      </p>
      <p>
        The hardware programming language VHDL is used to develop the base operation of data
encryption for implementation on the FPGA. A security framework based on cellular automata is
proposed in [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], which is composed of three parts: entity authentication, data encryption, and
decryption, where authentication is based on a zero-knowledge protocol. System robustness is
realized in cases when a shared secret is not only an NP-complete problem but is dynamically
transformed by two-dimensional cellular automata into a more complex secret key.
      </p>
      <p>The analysis of the papers [16-19] shows that, for the preliminary calculation of the weights coefficients, the principal component method is used, which employs the system of eigenvectors corresponding to the eigenvalues of the covariance matrix of the input data.</p>
      <p>In [20], an algorithm is proposed for calculating the scalar product of operands in floating-point format using a table-algorithmic method, together with an algorithm for forming the tables of macropartial products that are used for developing neuron-like networks.</p>
      <p>Analysis of the methods for calculating the dot product with weights coefficients, which are
known in advance [21-24], showed that one of the most effective methods is the tabular-algorithmic
one, which is reduced to the operations of reading macropartial products, addition, and shift.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Results of the study and their discussion</title>
      <p>3.1. A generalized analytical model of machine learning for neuro-like encryption and decryption of data</p>
      <p>A neuro-like structure for data protection is proposed, which is focused on the use of the encryption method with symmetric keys. In such a structure, the encryption key and the decryption key are the same. The decryption key can be obtained by mathematical transformations from the encryption key.</p>
      <p>Encryption is carried out over blocks of data using a key. When a neuron-like structure is used, the key consists of a given number of neurons N in the neural network and a matrix of calculated weights. Next, let us look at the main stages of the data encryption and decryption process.</p>
      <p>The first step is to choose a neuron-like network architecture to encrypt and decrypt data. The architecture of a neuron-like network is determined by the number of neurons N, the number of inputs k, and the number of bits of the inputs m. Incoming messages that are encrypted can have different bit widths n and be transmitted to a different number of inputs k, which is equal to the number of neurons N. The bit size of the message n and the number of inputs k determine the architecture of the neural network. The output vector Y is formed by multiplying the matrix of weighting coefficients W by the input data vector X according to the following formula:</p>
      <p>Y = W × X, i.e. y_i = Σ_j w_ij · x_j for i = 1, …, N, j = 1, …, k. (1)</p>
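      <p>As an illustration, the block-encryption step of multiplying a weights matrix by an input vector can be sketched in Python with NumPy. This is only a sketch under assumed sizes and a random key; the paper's own implementation uses C# with the Accord.Math library.</p>

```python
import numpy as np

# Illustrative sizes: N = 8 neuro elements, k = 8 inputs (assumptions, not the paper's data).
N, k = 8, 8
rng = np.random.default_rng(42)

W = rng.standard_normal((N, k))               # weights matrix acting as key material
X = rng.integers(0, 2, size=k).astype(float)  # one block of input data bits

# Encryption of a block reduces to N dot-product operations: Y = W x X.
Y = W @ X

# With a symmetric key and an invertible W, decryption is the inverse transformation.
X_recovered = np.linalg.solve(W, Y)
assert np.allclose(X_recovered, X)
```

      <p>The same round trip holds for any invertible weights matrix, which is why the precomputed W can serve as the shared key of the symmetric scheme.</p>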
      <p>As a result, multiplying the matrix of weighting coefficients W by the input vector X is reduced to performing N operations of calculating the dot product.</p>
      <p>The generalized analytical model of pre-settings can be written as M = ⟨PM, TPM, W, S(m → n)⟩, (2)</p>
      <p>where PM is a macropartial product; TPM is the calculation of the macropartial products table; W is the calculation of the matrix of weights; S(m → n) is the formation of the structure of a neuron-like network, the parameters of which are determined by the bit width m of the message X and the bit depth n of the inputs of the neuroelement.</p>
      <sec id="sec-3-1">
        <title>3.2. Neural Network Architecture Shaper</title>
        <p>The structure of a neuron-like network for cryptographic data encryption is determined by the number of neuro-like elements, which is calculated using the formula N = m / n, where N is the number of neuro elements, m is the bit width of the data, and n is the number of inputs of a neuro element. For data with a bit width of m = 16, the number of neuro-like elements N can be 16, 8, 4, and 2 with the number of inputs n equal to 1, 2, 4, and 8, respectively.</p>
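        <p>The shaper rule above can be sketched as a few lines of Python; the function name and return format are illustrative assumptions, not the paper's C# component.</p>

```python
def shape_architectures(m):
    """For a message bit width m, list the admissible (inputs n, elements N)
    pairs with N = m / n, for n a power of two smaller than m."""
    pairs = []
    n = 1
    while n < m:
        if m % n == 0:
            pairs.append((n, m // n))
        n *= 2
    return pairs

print(shape_architectures(16))  # [(1, 16), (2, 8), (4, 4), (8, 2)]
```

        <p>For m = 16 this reproduces the architectures listed above: 16, 8, 4, and 2 neuro elements with 1, 2, 4, and 8 inputs.</p>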
      </sec>
      <sec id="sec-3-2">
        <title>3.3. Simulation model for calculating matrices of weights coefficients</title>
        <p>The general formula for the Singular Value Decomposition (SVD) method is A = U · D · V^T, where A is an N×n input data matrix; U is a left singular N×N matrix whose columns contain eigenvectors of the A·A^T matrix; D is a diagonal N×n matrix containing singular (eigen) values; V is a right singular n×n matrix whose columns contain eigenvectors of the A^T·A matrix.</p>
        <p>The calculation of eigenvalues and eigenvectors is performed using the Jacobi rotation method, in which the eigenvalues and eigenvectors of a symmetric matrix are computed iteratively. This process of calculating eigenvectors is known as diagonalization. To construct this sequence, a specially selected rotation matrix Jk is used; the norm of the off-diagonal part of the matrix is calculated as follows:</p>
        <p>off(A(k)) = sqrt(Σ_{i≠j} (a_ij(k))²), and this norm decreases with each two-sided rotation of the matrix: A(k+1) = Jk^T · A(k) · Jk, where Jk is the rotation matrix chosen at step k. For an invertible input matrix, the decomposition also yields the inverse: A^(-1) = V · D^(-1) · U^T.</p>
        <p>Performing neuron-like cryptographic protection of data involves making pre-configurations. Such pre-configurations are reduced to the choice of the structure of the neuron-like network, the calculation of a matrix of weights coefficients, and a table of macropartial products.</p>
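        <p>The two-sided Jacobi rotation described above can be sketched in NumPy. This is an illustrative sketch, not the paper's C#/Accord.Math implementation; the rotation-angle formula is the classical one that zeroes the selected off-diagonal entry.</p>

```python
import numpy as np

def off_norm(A):
    """Norm of the off-diagonal part: sqrt of the sum of squared off-diagonal entries."""
    return np.sqrt(np.sum(A**2) - np.sum(np.diag(A)**2))

def jacobi_eig(A, tol=1e-12, max_sweeps=100):
    """Eigenvalues and eigenvectors of a symmetric matrix by cyclic Jacobi rotations."""
    A = A.copy()
    n = A.shape[0]
    V = np.eye(n)
    for _ in range(max_sweeps):
        if off_norm(A) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p, q]) < 1e-30:
                    continue
                # Classical angle that annihilates A[p, q]: tan(2t) = 2*a_pq / (a_qq - a_pp).
                theta = 0.5 * np.arctan2(2 * A[p, q], A[q, q] - A[p, p])
                c, s = np.cos(theta), np.sin(theta)
                J = np.eye(n)
                J[p, p] = J[q, q] = c
                J[p, q] = s
                J[q, p] = -s
                A = J.T @ A @ J        # two-sided rotation: A(k+1) = Jk^T A(k) Jk
                V = V @ J
    return np.diag(A), V

S = np.array([[4.0, 1.0, 2.0],
              [1.0, 3.0, 0.0],
              [2.0, 0.0, 1.0]])
vals, V = jacobi_eig(S)
assert np.allclose(np.sort(vals), np.linalg.eigvalsh(S))  # matches a reference solver
```

        <p>Each rotation reduces the off-diagonal norm, so the iteration converges to a diagonal matrix whose entries are the eigenvalues while V accumulates the eigenvectors.</p>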
        <p>To calculate the U matrix using the Jacobi method, the result of the A·A^T product is passed, and to find the V matrix, the result of the A^T·A product is passed. When finding the D matrix, it is enough to take the eigenvalues that were found when calculating the U matrix or the V matrix and place them on the main diagonal. After finding the matrices U, V, and D, the weights coefficients are calculated.</p>
        <p>To calculate the weights coefficients matrix, the singular value decomposition method was used: A = U · D · V^T, (9) where A is an input matrix of dimension N×n; U is a left singular N×N matrix; D is a diagonal N×n matrix containing singular (eigen) values; V is a right singular n×n matrix whose columns contain eigenvectors of the A^T·A matrix. The weights matrix W of dimension n×n is formed from this system of eigenvectors: W = V^T. (10)</p>
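        <p>The SVD relations used above can be checked numerically with NumPy. This is a sketch under assumptions: the binary matrix is randomly generated here, not the paper's Figure 2 data, and a library SVD stands in for the Jacobi-based calculator.</p>

```python
import numpy as np

rng = np.random.default_rng(7)
N, n = 8, 8
A = rng.integers(0, 2, size=(N, n)).astype(float)  # random binary input matrix

# Singular value decomposition: A = U @ D @ V^T.
U, d, Vt = np.linalg.svd(A)
D = np.zeros((N, n))
np.fill_diagonal(D, d)
assert np.allclose(U @ D @ Vt, A)

# The squared singular values are the eigenvalues of both A A^T and A^T A,
# whose eigenvectors fill the columns of U and V respectively.
assert np.allclose(np.sort(np.linalg.eigvalsh(A @ A.T)), np.sort(d**2))
assert np.allclose(np.sort(np.linalg.eigvalsh(A.T @ A)), np.sort(d**2))
```

        <p>This is why it suffices to run the Jacobi method on A·A^T (for U) and A^T·A (for V) and to reuse either set of eigenvalues for the diagonal of D.</p>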
        <p>Figure 2 presents the table of input data that was used for the simulation. The parameters in this case are: N = 8, n = 8, m = 2.</p>
        <p>For encryption of data, neuron-like networks with 16, 8, 4, and 2 neuro-like elements may be used. The dimensions of the matrices of weights coefficients for such neuron-like networks will be 16×16, 8×8, 4×4, and 2×2, respectively.</p>
        <p>A simulation model software was developed to calculate the neuron weights in the case of cryptographic encryption of incoming data. The software implementation was carried out in the C# programming language in the Visual Studio 2022 development environment using the Windows Forms development technology (Fig. 3).</p>
        <p>The developed simulation model software provides a calculation of a matrix of weights for a given
structure of a neuro-like network. The user interface of the simulation model for calculating
weighting coefficients is shown.</p>
      </sec>
      <sec id="sec-3-3">
        <title>3.4. Simulation model for calculating macropartial product tables</title>
        <p>Calculating macropartial product tables for floating-point weights (where wj is the mantissa of the weight Wj) involves the following operations:
• determining the largest common order of the weights coefficients;
• calculating the order difference for each weighting coefficient Wj;
• shifting the mantissa wj to the right by the order difference;
• determining the maximum number of overflow bits q for the macropartial products PMi;
• obtaining scaled mantissas by shifting them to the right by the q overflow bits of the calculated macropartial products PMi;
• adding the number of overflow bits to the largest common order.</p>
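        <p>The first three operations can be sketched in Python as follows. The fixed-point width and function name are illustrative assumptions, and the overflow-bit scaling steps are omitted for brevity.</p>

```python
import math

def align_to_common_order(weights, mant_bits=16):
    """Reduce floating-point weights to fixed-point mantissas at the largest
    common order (the width mant_bits and rounding are illustrative choices)."""
    parts = [math.frexp(w) for w in weights]   # w = mant * 2**order, |mant| in [0.5, 1)
    common = max(order for _, order in parts)  # largest common order
    mantissas = []
    for mant, order in parts:
        shift = common - order                 # order difference for this weight
        m_fixed = int(round(mant * (1 << mant_bits))) >> shift  # shift mantissa right
        mantissas.append(m_fixed)
    return mantissas, common

mants, order = align_to_common_order([0.75, 0.1875, -0.375])
# Each weight is recovered as mants[j] * 2**(order - 16).
```

        <p>After alignment, all mantissas share one order, so the macropartial products can be accumulated as plain integer sums.</p>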
        <p>The table of macropartial products is calculated using the following formula:</p>
        <p>PM = 0 if x1 = x2 = … = xk = 0; PM = w1 if x1 = 1, x2 = … = xk = 0; PM = w2 if x2 = 1, x1 = x3 = … = xk = 0; PM = w1 + w2 if x1 = x2 = 1, x3 = … = xk = 0; …; PM = w1 + w2 + … + wk if x1 = x2 = … = xk = 1,</p>
        <p>where x1, x2, …, xk are the table address inputs and wj is the mantissa of the weight Wj reduced to the largest common order.</p>
        <p>The number of macropartial product tables for encrypting/decrypting commands is equal to the number of neuro elements in the network. The amount of memory required to store a macropartial product table is equal to V = 2^k, where k is the number of inputs of the neural element.</p>
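        <p>The table construction and the read/add/shift dot product it enables can be sketched in Python with integer weights. Function names and the toy values are illustrative; the paper works with scaled floating-point mantissas.</p>

```python
def build_table(weights):
    """Macropartial product table: the entry at address (xk ... x1) stores the
    sum of the weights selected by the 1-bits of the address; size V = 2**k."""
    k = len(weights)
    return [sum(w for j, w in enumerate(weights) if (addr >> j) & 1)
            for addr in range(2 ** k)]

def tabular_dot(table, xs, m):
    """Dot product of k m-bit inputs with fixed weights, reduced to the
    operations of reading macropartial products, addition, and shift."""
    acc = 0
    for t in reversed(range(m)):         # process bit slices, most significant first
        addr = 0
        for j, x in enumerate(xs):
            addr |= ((x >> t) & 1) << j  # bit t of every input forms the table address
        acc = (acc << 1) + table[addr]   # shift accumulated sum, add macropartial product
    return acc

table = build_table([3, 5, 2, 7])        # k = 4 inputs -> 2**4 = 16 table entries
assert len(table) == 16
assert tabular_dot(table, [1, 2, 3, 0], m=2) == 3*1 + 5*2 + 2*3 + 7*0  # = 19
```

        <p>Because the weights are fixed in advance, the multiplications disappear entirely: each of the m steps is one table read, one addition, and one shift.</p>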
        <p>For neural networks with 16, 8, 4, and 2 neural elements, the number of tables is 16, 8, 4, and 2, and their sizes are 2, 4, 16, and 256 entries, respectively.</p>
        <p>Figure 5 presents the weights matrix used for the experiment.</p>
        <p>To demonstrate the operation of the program, an example was prepared for working with the following architecture: m = 2, k = 8, N = 8, as well as a matrix of weights coefficients for a neuro-like network with 8 neuro elements.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>A software model for the implementation of neuro-like data encryption and decryption has been developed. Its main components are the neural network architecture shaper, the weights matrix calculator, and the macropartial product table calculator.</p>
      <p>A simulation model was developed and a user interface for calculating weights matrices for a given neural architecture was presented.</p>
      <p>A simulation model and a user interface for calculating macropartial product tables for the table-algorithmic implementation of a scalar product are developed.</p>
      <p>The use of the developed models reduces the time of setting up neural networks for computational processes, including data encryption and decryption.</p>
      <p>[16] Q. Xie et al., "Research on Inversion Algorithm of Aerosol Extinction Coefficient Based on Elman Neural Network," 2021 IEEE 16th Conference on Industrial Electronics and Applications (ICIEA), Chengdu, China, 2021, pp. 62-66, doi: 10.1109/ICIEA51954.2021.9516085.
[17] Y. Kuroe, H. Iima and Y. Maeda, "Four Models of Hopfield-Type Octonion Neural Networks and Their Existing Conditions of Energy Functions," 2020 International Joint Conference on Neural Networks (IJCNN), Glasgow, UK, 2020, pp. 1-7, doi: 10.1109/IJCNN48605.2020.9206838.
[18] G. Zhang, K. Niwa and W. B. Kleijn, "Projected Weight Regularization to Improve Neural Network Generalization," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 4242-4246, doi: 10.1109/ICASSP40776.2020.9054133.
[19] B. Tao, M. Xiao, W. X. Zheng, J. Cao and J. Tang, "Dynamics Analysis and Design for a Bidirectional Super-Ring-Shaped Neural Network With n Neurons and Multiple Delays," in IEEE Transactions on Neural Networks and Learning Systems, vol. 32, no. 7, pp. 2978-2992, July 2021, doi: 10.1109/TNNLS.2020.3009166.
[20] I. Tsmots, V. Rabyk, V. Teslyuk and Y. Opotyak, "Floating-Point Number Scalar Product Hardware Implementation for Embedded Systems," 2023 17th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), Jaroslaw, Poland, 2023, pp. 6-10, doi: 10.1109/CADSM58174.2023.10076502.
[21] R. Tkachenko and I. Izonin, "Model and Principles for the Implementation of Neural-Like Structures based on Geometric Data Transformations," Advances in Computer Science for Engineering and Education. ICCSEEA2018. Advances in Intelligent Systems and Computing, vol. 754, pp. 578-587, 2019.
[22] M. Bharathi and Y. J. M. Shirur, "Efficiency Evaluation of Scalable Multiply and Accumulate Architectures in DSP: A Comparative Study of LUT Based and LUT-Less Based Approaches," 2024 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE), Bangalore, India, 2024, pp. 1-5, doi: 10.1109/IITCEE59897.2024.10467265.
[23] R. Rohit, S. Dudeja and M. Rao, "VLUT: Design and Evaluation of Variable band LUT to realize Activation Functions," 2023 30th IEEE International Conference on Electronics, Circuits and Systems (ICECS), Istanbul, Turkiye, 2023, pp. 1-4, doi: 10.1109/ICECS58634.2023.10382912.
[24] M. Ebrahimi, R. Sadeghi and Z. Navabi, "LUT Input Reordering to Reduce Aging Impact on FPGA LUTs," in IEEE Transactions on Computers, vol. 69, no. 10, pp. 1500-1506, 1 Oct. 2020, doi: 10.1109/TC.2020.2974955.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] M. Du, "Economic Forecast Model and Development Path Analysis Based on BP and RBF Neural Network," 2023 IEEE 12th International Conference on Communication Systems and Network Technologies (CSNT), Bhopal, India, 2023, pp. 619-624, doi: 10.1109/CSNT57126.2023.10134678.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] I. Izonin, R. Tkachenko, I. Dronyuk, P. Tkachenko, M. Gregus and M. Rashkevych, "Predictive modeling based on small data in clinical medicine: RBF-based additive input-doubling method," Mathematical Biosciences and Engineering, 2021, 18(3): 2599-2613, doi: 10.3934/mbe.2021132.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] C. Wang, N. Yoshikane and T. Tsuritani, "Usage of a Graph Neural Network for Large-Scale Network Performance Evaluation," 2021 International Conference on Optical Network Design and Modeling (ONDM), Gothenburg, Sweden, 2021, pp. 1-5, doi: 10.23919/ONDM51796.2021.9492331.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] J. Dai and W. Zhang, "Embedded System Implementation of Spiking Neural Network Propagation Model," 2023 5th International Academic Exchange Conference on Science and Technology Innovation (IAECST), Guangzhou, China, 2023, pp. 187-193, doi: 10.1109/IAECST60924.2023.10503419.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] M. T. Ali and B. H. Abd, "An Efficient area Neural Network Implementation using tan-sigmoid Look up Table Method Based on FPGA," in: 3rd International Conference for Emerging Technology (INCET), Belgaum, India, 2022, pp. 1-7, doi: 10.1109/INCET54531.2022.9825348.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Y. Zhang and B. Li, "The Memorable Image Encryption Algorithm Based on Neuron-Like Scheme," in IEEE Access, vol. 8, pp. 114807-114821, 2020, doi: 10.1109/ACCESS.2020.3004379.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Tsmots</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Teslyuk</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Łukaszewicz</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lukashchuk</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kazymyra</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holovatyy</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Opotyak</surname>
            <given-names>Y.</given-names>
          </string-name>
          <article-title>An Approach to the Implementation of a Neural Network for Cryptographic Protection of Data Transmission at UAV</article-title>
          .
          <source>Drones</source>
          .
          <year>2023</year>
          ;
          <volume>7</volume>
          (
          <issue>8</issue>
          ):
          <fpage>507</fpage>
          . https://doi.org/10.3390/drones7080507
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>Sooyong</given-names>
            <surname>Jeong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Cheolhee</given-names>
            <surname>Park</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Dowon</given-names>
            <surname>Hong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Changho</given-names>
            <surname>Seo</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Namsu</given-names>
            <surname>Jho</surname>
          </string-name>
          .
          <article-title>Neural Cryptography Based on Generalized Tree Parity Machine for Real-Life Systems</article-title>
          ,
          <source>Security and Communication Networks</source>
          , vol.
          <year>2021</year>
          , Article ID
          <volume>6680782</volume>
          , https://doi.org/10.1155/2021/6680782
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>T.</given-names>
            <surname>Dong</surname>
          </string-name>
          and
          <string-name>
            <given-names>T.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <article-title>“Neural cryptography based on complex-valued neural network,”</article-title>
          <source>IEEE Transactions on Neural Networks and Learning Systems</source>
          , vol.
          <volume>31</volume>
          , no.
          <issue>11</issue>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Saraswat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Garg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tripathi</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Agarwal</surname>
          </string-name>
          ,
          <article-title>"Encryption Algorithm Based on Neural Network,"</article-title>
          <source>2019 4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU)</source>
          , Ghaziabad, India,
          <year>2019</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/IoT-SIU.2019.8777637.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Sarin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Thanawala</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Verma</surname>
          </string-name>
          and
          <string-name>
            <given-names>C.</given-names>
            <surname>Prakash</surname>
          </string-name>
          ,
          <article-title>"Implementation of New Approach to Secure IoT Networks with Encryption and Decryption Techniques,"</article-title>
          <source>2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT)</source>
          , Kharagpur, India,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          , doi: 10.1109/ICCCNT49239.2020.9225279.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>A. C. H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <article-title>"Post-Quantum Cryptography Neural Network,"</article-title>
          <source>2023 International Conference on Smart Systems for applications in Electrical Sciences (ICSSES)</source>
          , Tumakuru, India,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/ICSSES58299.2023.10201083.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Tsmots</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rabyk</surname>
          </string-name>
          and
          <string-name>
            <given-names>Y.</given-names>
            <surname>Opotyak</surname>
          </string-name>
          ,
          <article-title>"Development of a Device on FPGA to Implement the Base Operation of Neural-like Data Encryption Using Polynomials,"</article-title>
          <source>2023 IEEE 18th International Conference on Computer Science and Information Technologies (CSIT)</source>
          , Lviv, Ukraine,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>5</lpage>
          , doi: 10.1109/CSIT61576.2023.10324267.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Corona-Bermúdez</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chimal-Eguía</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Téllez-Castillo</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Cryptographic Services Based on Elementary and Chaotic Cellular Automata</article-title>
          .
          <source>Electronics</source>
          ,
          <volume>11</volume>
          (
          <issue>4</issue>
          ),
          <fpage>613</fpage>
          . https://doi.org/10.3390/electronics11040613
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>I.</given-names>
            <surname>Tsmots</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rabyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tkachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Opotyak</surname>
          </string-name>
          and
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <article-title>"Implementation of Base Components of Neuro-like Cryptographic Data Protection Systems on FPGA,"</article-title>
          <source>2023 IEEE 13th International Conference on Electronics and Information Technologies (ELIT)</source>
          , Lviv, Ukraine,
          <year>2023</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/ELIT61488.2023.10310958.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>