<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main"></title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ivan</forename><surname>Tsmots</surname></persName>
							<email>ivan.h.tsmots@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Stepan Bandera 12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yurii</forename><surname>Opotyak</surname></persName>
							<email>yurii.v.opotiak@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Stepan Bandera 12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yurii</forename><surname>Lukashchuk</surname></persName>
							<email>yurii.a.lukashchuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Stepan Bandera 12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sofiia</forename><surname>Tesliuk</surname></persName>
							<email>sofiia.v.tesliuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Stepan Bandera 12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">2C1D9519F8E46AAC51BDD5D7690E98F4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>neuron-like network</term>
					<term>neuro element</term>
					<term>macropartial products</term>
					<term>tabular-algorithmic method</term>
					<term>method of singular decomposition of the matrix</term>
					<term>Jacobi rotation method</term>
					<term>eigenvectors</term>
					<term>eigenvalues</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A generalized analytical machine learning model for neuro-like data encryption and decryption has been developed. Its main components are a neural network architecture shaper, a weights matrix calculator, and a macropartial product table calculator; their implementation reduces setup time. An analysis of recent research and publications on the relevance of problems in implementing neuro-like cryptographic data encryption is carried out. The paper formulates rules for forming a neural network architecture. The structure of a neural network for cryptographic data encryption is determined by the number of neuro elements. A weights matrix calculator has also been developed; for this purpose, the singular value decomposition method and the Jacobi rotation method were used to find eigenvectors and eigenvalues. A simulation model was developed to demonstrate the operation. A macropartial product table calculator based on the table-algorithmic method was developed. To implement these tasks, the C# programming language and the Visual Studio 2022 development environment were chosen, with Windows Forms as the development technology. The Accord.Math library was added to the project for operations with matrices. The practical value is that the developed tools provide fast calculation of coefficients for a given neural network architecture. As a result, applying such a generalized analytical model will speed up computational processes, including data encryption and decryption.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The importance of secure data encryption is paramount in the modern digital landscape, where sensitive information is frequently transmitted and stored online. Cryptographic techniques are commonly employed to safeguard the confidentiality and integrity of data, but traditional methods can still be susceptible to attacks from hackers and other malicious entities.</p><p>Neural-based data encryption is a promising new approach that uses artificial neural networks to encrypt and decrypt data. This approach is based on the principle that neural networks can learn complex patterns and relationships in data, making them well-suited for encryption tasks.</p><p>However, one of the challenges in implementing neural-like data encryption is determining the optimal pre-settings for the neural network. This is where the generalized analytical model of presetting comes in, as it provides a systematic approach to determine the optimal settings for a given data set and encryption task.</p><p>Mobile smart systems refer to the combination of hardware and software solutions integrated into mobile devices, such as smartphones and tablets, which enable them to perform advanced computing, communication, and data processing tasks.</p><p>These systems are highly intelligent, self-contained platforms equipped with sensors, processors, storage, and connectivity features. The primary goal of mobile smart systems is to provide users with seamless interaction with digital services while maintaining security, efficiency, and user experience.</p><p>One critical aspect of mobile smart systems is their ability to secure data through encryption. Encryption ensures that data transmitted over networks or stored on mobile devices is protected from unauthorized access. 
Mobile systems use various encryption protocols to safeguard user information, financial transactions, and communications.</p><p>Thus, the development of a generalized analytical model of pre-settings for neural-like cryptographic data encryption is a relevant research topic, as it has the potential to improve the security and efficiency of data encryption in various fields, including finance, healthcare, and public administration.</p><p>The object of research is the process of cryptographic encryption of data using neuron-like networks, as well as how weight coefficient matrices precomputation can be used to improve the efficiency and effectiveness of this process.</p><p>The subject of the research is a generalized analytical model of weight coefficient matrices precomputation for neuron-like cryptographic data encryption, which is aimed at improving the security and efficiency of cryptographic systems through the usage of neural networks.</p><p>The goal of this research is to develop a generalized analytical model for neuro-like cryptographic data encryption designed to enhance both the security and efficiency of encryption when compared to existing methods. 
Additionally, the study aims to assess the performance of the proposed model through a series of simulations and experiments.</p><p>To achieve this goal, the following main tasks of the study are defined:</p><p>• analysis of the latest research and publications;</p><p>• choosing a Neural Network Architecture Shaper;</p><p>• development of a program for calculating the weights matrix based on an improved method of singular value decomposition;</p><p>• development of software for calculating the table of macropartial products.</p><p>The scientific novelty of the obtained results lies in the development of a generalized model for the precomputation of weights matrices, specifically for implementing neuro-like data encryption.</p><p>The core elements of the model include a neuro network architecture generator, a weights matrix calculator, and a macropartial product tables calculator. This proposed approach reduces the time required for the development of neuro-like networks.</p><p>The practical significance of the results lies in the fact that the developed generalized model for weights matrix precomputation enhances both the security and efficiency of the data encryption process. By leveraging this model, encryption becomes more robust against potential vulnerabilities, while also improving operational efficiency.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Analysis of the latest research and publications</head><p>Analysis of trends in the development of data processing systems shows increased use of neural and neuron-like methods <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b3">[3]</ref>. Paper <ref type="bibr" target="#b0">[1]</ref> elaborates on economic growth prediction based on the artificial neural network algorithm and the RBF neural network algorithm, combining the theory of economic forecasting and the characteristics of the BP neural network algorithm with the RBF neural network.</p><p>In <ref type="bibr" target="#b1">[2]</ref>, the authors considered the problem of handling sets of medical data and proposed an improved regression method based on artificial neural networks. The authors of <ref type="bibr" target="#b3">[3]</ref> propose a graph convolutional network-based performance evaluation method for ultra-large-scale networks that is significantly less time-consuming than traditional methods.</p><p>One of the trends in the development of embedded systems is the use of hardware implementations of neuron-like networks, which requires developing dedicated components for these cases. A forward propagation channel model was proposed in <ref type="bibr" target="#b4">[4]</ref>, where adjustment of the model weights was achieved by developing a supervised-learning Tempotron algorithm oriented toward STM32 chips. In <ref type="bibr" target="#b5">[5]</ref>, field-programmable gate arrays (FPGAs) are used to create hardware neural networks, and in this case a larger neuron-like network can be implemented at a lower cost.</p><p>An image cryptosystem based on a non-linear component neuron-like scheme is proposed in <ref type="bibr" target="#b6">[6]</ref>. 
Neuron-like learning algorithms realize a memorable diffusion algorithm, and the inputs and weights of the neuron are regulated by the information of the image.</p><p>Analysis of publications <ref type="bibr" target="#b7">[7]</ref><ref type="bibr" target="#b8">[8]</ref><ref type="bibr" target="#b9">[9]</ref> shows that neural network cryptographic data protection is mainly implemented in software. The main disadvantage of such an implementation is the difficulty of ensuring real-time operation. In <ref type="bibr" target="#b7">[7]</ref>, an approach to neuron-like network implementation is presented, oriented toward embedded systems for real-time cryptographic data protection with symmetric keys in onboard communication systems of unmanned aerial vehicles, owing to its suitability for hardware implementation.</p><p>Neural cryptography is considered in <ref type="bibr" target="#b8">[8]</ref>; it is based on neural networks and the Vector-Valued Tree Parity Machine (VVTPM), a generalized architecture of TPM models proposed by the authors. In <ref type="bibr" target="#b9">[9]</ref>, neural cryptography based on the complex-valued tree parity machine network (CVTPM) is proposed, where the input, output, and weights of the CVTPM are complex values; it can be considered an extension of the TPM.</p><p>In <ref type="bibr" target="#b10">[10]</ref><ref type="bibr" target="#b11">[11]</ref><ref type="bibr" target="#b12">[12]</ref><ref type="bibr" target="#b13">[13]</ref>, ways of adapting an auto-associative neural network with non-iterative learning to the tasks of cryptographic data protection are considered. 
The authors of <ref type="bibr" target="#b10">[10]</ref> describe an auto-associative neural network concept of soft computing in combination with an encryption technique intended to send data securely over a communication network.</p><p>In <ref type="bibr" target="#b11">[11]</ref>, the AES algorithm was described to secure data bits and was also designed as a two-stage encryption and decryption algorithm that can be applied to IoT networks. Study <ref type="bibr" target="#b12">[12]</ref> proposes a new PQC neural network intended to map a code-based PQC method to a neural network structure. This approach aims to enhance the security of ciphertexts based on non-linear activation functions, random perturbation of ciphertexts, and uniform distribution of ciphertexts.</p><p>The papers <ref type="bibr" target="#b13">[13]</ref><ref type="bibr" target="#b14">[14]</ref><ref type="bibr" target="#b15">[15]</ref> analyze the main directions of development of on-board means of real-time cryptographic protection of data transmission and show that a promising direction is the use of neural network methods for data encryption and decryption. In <ref type="bibr" target="#b13">[13]</ref>, the implementation of neuron-like data encryption using polynomials is described, where the structure of the device was developed using the base operation of neuron-like data encryption.</p><p>The hardware description language VHDL is used to develop the base operation of data encryption for implementation on the FPGA. A security framework based on cellular automata is proposed in <ref type="bibr" target="#b14">[14]</ref>, which is composed of three parts: entity authentication, data encryption, and decryption, where authentication is based on a zero-knowledge protocol. 
System robustness is achieved when the shared secret is not only an NP-complete problem but is also dynamically transformed by two-dimensional cellular automata into a more complex secret key.</p><p>The analysis of papers <ref type="bibr" target="#b16">[16]</ref><ref type="bibr" target="#b17">[17]</ref><ref type="bibr" target="#b18">[18]</ref><ref type="bibr" target="#b19">[19]</ref> shows that for the preliminary calculation of the weights coefficients, the principal component method is used, which relies on the system of eigenvectors corresponding to the eigenvalues of the covariance matrix of the input data.</p><p>In <ref type="bibr" target="#b20">[20]</ref>, an algorithm is proposed for calculating the scalar product of operands in floating-point format using a table-algorithmic method, together with an algorithm for forming the tables of macropartial products used in developing neuron-like networks.</p><p>Analysis of methods for calculating the dot product with weights coefficients known in advance <ref type="bibr" target="#b21">[21]</ref><ref type="bibr" target="#b22">[22]</ref><ref type="bibr" target="#b23">[23]</ref><ref type="bibr" target="#b24">[24]</ref> showed that one of the most effective is the tabular-algorithmic method, which reduces to the operations of reading macropartial products, addition, and shift.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results of the study and their discussion</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">A generalized analytical machine learning model for neuro-like data encryption and decryption</head><p>A neuro-like structure for data protection is proposed, which is focused on the use of an encryption method with symmetric keys. In such a structure, the encryption key and the decryption key are the same, and the decryption key can be obtained from the encryption key by mathematical transformations.</p><p>Encryption is carried out over blocks of data using a key. When a neuron-like structure is used, the key consists of a given number of neurons in the neural network N and a matrix of calculated weights W.</p><p>Next, let us look at the main stages of the data encryption and decryption process. The first step is to choose a neuron-like network architecture to encrypt and decrypt data. The architecture of a neuron-like network is determined by the number of neurons N, the number of inputs k, and the number of bits of the inputs m. Incoming messages that are encrypted can have different bit widths n and be transmitted to a different number of inputs k, which is equal to the number of neurons N. The bit size of the message n and the number of inputs k determine the architecture of the neural network.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> shows the general structure of the neuro-like network used to encrypt data. 
However, the main disadvantage of classical neural networks, which significantly complicates their use in mobile smart systems, is the long learning process.</p><p>The proposed neuron-like architecture makes it possible to train the neural network by directly calculating the matrix of weighting coefficients W.</p><p>Once the values of the weighting matrix W are calculated, the basic operation of data encryption using such a neuron-like network is reduced to multiplying the weighting matrix W by the input data vector x according to the following formula:</p><formula xml:id="formula_0">y = W x, \quad y_i = \sum_{j=1}^{N} W_{ij} x_j, \quad i = 1, \dots, N (1)</formula><p>As a result, multiplying the matrix of weighting coefficients W by the input vector x is reduced to performing N dot-product operations.</p><p>Performing neuron-like cryptographic protection of data involves making pre-configurations. These pre-configurations reduce to choosing the structure of the neuron-like network, calculating the matrix of weights coefficients, and calculating the table of macropartial products. A generalized analytical model of the presets is written as follows:</p><formula xml:id="formula_1">P_M = f_T\big(f_W\big(f_{S(m \to n)}\big)\big),<label>(2)</label></formula><p>where P_M is the macropartial product; f_T is the calculation of the macropartial products table; f_W is the calculation of the matrix of weights; f_{S(m \to n)} is the formation of the structure of a neuron-like network, whose parameters are determined by the bit width m of the message X and the bit depth n of the inputs of the neuro element.</p></div>
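The basic encryption operation above amounts to N dot products. A minimal stand-alone sketch (in Python rather than the paper's C#; the function name and plain-list matrix representation are illustrative assumptions):

```python
def encrypt_block(W, x):
    """Neuro-like encryption step, as in formula (1): multiply the
    precomputed N x N weights matrix W by the input data vector x --
    one dot product per neuro element."""
    return [sum(w_ij * x_j for w_ij, x_j in zip(row, x)) for row in W]

# Decryption with the symmetric key uses the inverse transformation,
# i.e. multiplication of the encrypted vector by the inverse of W.
```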
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Neural Network Architecture Shaper</head><p>The structure of a neuro-like network for cryptographic data encryption is determined by the number of neuro-like elements, which is calculated using the formula:</p><formula xml:id="formula_2">N = \frac{m}{n},<label>(3)</label></formula><p>where N is the number of neuro elements. For data with a bit width of m = 16, the number of neuro-like elements N can be 16, 8, 4, and 2 with a number of inputs n of 1, 2, 4, and 8, respectively.</p></div>
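The admissible architectures under formula (3) can be enumerated programmatically; a small illustrative sketch (the helper name is an assumption, not part of the paper's software):

```python
def architecture_options(m):
    """For an m-bit data block, each divisor n of m (number of inputs
    per neuro element, n < m) yields a neuro-like network with
    N = m // n neuro elements, per formula (3)."""
    return [(n, m // n) for n in range(1, m) if m % n == 0]
```

For m = 16 this yields the (n, N) pairs (1, 16), (2, 8), (4, 4), and (8, 2) listed in the text.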
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Simulation model for calculating matrices of weights coefficients</head><p>The general formula for the Singular Value Decomposition (SVD) method is the following:</p><formula xml:id="formula_3">A = U D V^T,<label>(4)</label></formula><p>where A is an N×n input data matrix; U is a left singular N×N matrix whose columns contain the eigenvectors of the AA^T matrix; D is a diagonal N×n matrix containing the singular (eigen) values; V is a right singular n×n matrix whose columns contain the eigenvectors of the A^T A matrix.</p><p>The calculation of eigenvalues and eigenvectors is performed using the Jacobi rotation method, in which the eigenvalues and eigenvectors of a symmetric matrix are computed iteratively. This process is known as diagonalization. To construct the sequence, a specially selected rotation matrix J_k is used; the norm of the off-diagonal part of the matrix is calculated as follows:</p><formula xml:id="formula_4">\mathrm{off}\big(A^{(k)}\big) = \sqrt{\sum_{i \ne j} \big(a_{ij}^{(k)}\big)^2},<label>(5)</label></formula><p>and decreases with each two-sided rotation of the matrix:</p><formula xml:id="formula_5">A^{(k+1)} = J_k^T \cdot A^{(k)} \cdot J_k<label>(6)</label></formula><p>To calculate the U matrix using the Jacobi method, the result of the AA^T product is passed, and to find the V matrix, the result of the A^T A product is passed. When finding the D matrix, it is enough to take the eigenvalues found when calculating the U matrix or the V matrix and place them on the main diagonal. After finding the matrices U, V, and D, the weight coefficients are calculated using the following formula:</p><formula xml:id="formula_6">A W = U D,<label>(7)</label></formula><p>where A is an input matrix of dimension N×n and W is a weighting matrix of dimension n×n. 
The weights matrix W is calculated by the following formula:</p><formula xml:id="formula_7">W = A^{-1} \cdot U D,<label>(8)</label></formula><p>where the matrix A^{-1} is equal to:</p><formula xml:id="formula_8">A^{-1} = V D^{-1} \cdot U^T<label>(9)</label></formula><p>To calculate the weights coefficients matrix, the singular value decomposition method was used:</p><formula xml:id="formula_9">W = V D^{-1} U^T U D,<label>(10)</label></formula><p>where U is a left singular N×N matrix whose columns contain the eigenvectors of the matrix AA^T; D is a diagonal N×n matrix containing the singular (eigen) values; V is a right singular n×n matrix whose columns contain the eigenvectors of the A^T A matrix; A is the N×n input matrix.</p><p>Figure <ref type="figure" target="#fig_1">2</ref> presents the binary learning table that was used for the simulation. The parameters in this case are the following: N = 8, n = 8, m = 2. The dimension of the weights tables is determined by the number of neuro-like elements on the basis of which the neuro-like network is synthesized. For example, to encrypt data, neuron-like networks with 16, 8, 4, and 2 neuro-like elements may be used. The dimensions of the matrices of weights coefficients for such neuron-like networks will be 16×16, 8×8, 4×4, and 2×2, respectively.</p><p>Simulation model software was developed to calculate the neuron weights for cryptographic encryption of incoming data. The software implementation was carried out in the C# programming language in the Visual Studio 2022 development environment using the Windows Forms development technology (Fig. 
<ref type="figure" target="#fig_2">3</ref>).</p><p>The developed simulation model software calculates the matrix of weights for a given structure of a neuro-like network. The user interface of the simulation model for calculating the weighting coefficients is shown in Figure <ref type="figure" target="#fig_2">3</ref>.</p></div>
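The iterative diagonalization of formulas (5)–(6) can be sketched in pure Python (the paper's simulation model uses C# with Accord.Math; this stand-alone version with assumed helper names is only an illustration of the Jacobi rotation method):

```python
import math

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def off_norm(A):
    # norm of the off-diagonal part, as in formula (5)
    n = len(A)
    return math.sqrt(sum(A[i][j] ** 2 for i in range(n) for j in range(n) if i != j))

def jacobi_eigen(A, tol=1e-10, max_sweeps=100):
    """Eigenvalues/eigenvectors of a symmetric matrix by Jacobi rotations."""
    n = len(A)
    A = [row[:] for row in A]
    V = [[float(i == j) for j in range(n)] for i in range(n)]
    for _ in range(max_sweeps):
        if off_norm(A) < tol:
            break
        for p in range(n - 1):
            for q in range(p + 1, n):
                if abs(A[p][q]) < tol:
                    continue
                # rotation angle chosen to zero the (p, q) element
                theta = 0.5 * math.atan2(2 * A[p][q], A[q][q] - A[p][p])
                c, s = math.cos(theta), math.sin(theta)
                J = [[float(i == j) for j in range(n)] for i in range(n)]
                J[p][p] = c; J[q][q] = c; J[p][q] = s; J[q][p] = -s
                A = matmul(transpose(J), matmul(A, J))  # two-sided rotation, formula (6)
                V = matmul(V, J)                        # accumulate eigenvectors
    return [A[i][i] for i in range(n)], V  # eigenvalues, eigenvector columns
```

Passing AA^T yields U and passing A^T A yields V; the square roots of the eigenvalues give the singular values on the diagonal of D.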
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Simulation model for calculating macropartial product tables</head><p>Calculating macropartial product tables for floating-point weights (where each weight W_j is represented by a mantissa w_j and an order) involves the following operations:</p><p>• determining the largest common order of the weights coefficients;</p><p>• calculating the order difference for each weighting coefficient W_j;</p><p>• shifting the mantissa w_j to the right by the order difference;</p><p>• determining the maximum number of overflow bits q for the macropartial products P_Mi;</p><p>• obtaining scaled mantissas by shifting them to the right by the q overflow bits of the calculated macropartial products P_Mi;</p><p>• adding the number of overflow bits to the largest common order.</p><p>The table of macropartial products is calculated using the following formula: </p><p>where x_1, x_2, x_3, …, x_k are the table address inputs and w_j is the mantissa of the weight W_j reduced to the largest common order.</p><p>The number of macropartial product tables for encrypting/decrypting is equal to the number of neuro elements in the network. The amount of memory required to store a macropartial product table is equal to:</p><formula xml:id="formula_11">Q = 2^k,<label>(12)</label></formula><p>where k is the number of inputs of the neuro element.</p><p>For neural networks with 16, 8, 4, and 2 neural elements, the number of tables and their sizes are respectively equal: 16 tables with a volume of Q = 2^16. 
The weights matrix used for the experiment is presented in Figure <ref type="figure">5</ref> (Weights matrix used for calculations).</p><p>To demonstrate the operation of the program, an example was prepared for working with the following architecture: m = 2, k = 8, N = 8, as well as a matrix of weights coefficients for a neuro-like network with 8 neuro elements.</p></div>
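The table-algorithmic scheme above (one table read, an addition, and a shift per bit slice) can be sketched as follows; integer weights and unsigned m-bit inputs are simplifying assumptions, and the function names are illustrative:

```python
def macropartial_table(weights):
    """Precompute all 2**k macropartial products: for every k-bit address,
    the sum of the weights selected by the set bits (table size matches
    formula (12), Q = 2**k)."""
    k = len(weights)
    return [sum(w for j, w in enumerate(weights) if (addr >> j) & 1)
            for addr in range(2 ** k)]

def dot_product(table, inputs, m):
    """Table-algorithmic dot product for m-bit unsigned inputs:
    per bit slice, gather one bit of every input into a table address,
    read the macropartial product, and shift-and-add."""
    acc = 0
    for bit in range(m - 1, -1, -1):        # most significant slice first
        addr = 0
        for j, x in enumerate(inputs):
            addr |= ((x >> bit) & 1) << j   # bit `bit` of every input
        acc = (acc << 1) + table[addr]      # shift, then add table value
    return acc
```

For weights (3, 5, 7) and inputs (2, 1, 3) with m = 2 this reproduces the ordinary dot product 2·3 + 1·5 + 3·7 = 32 using only table reads, additions, and shifts.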
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>A software model for the implementation of neuro-like data encryption and decryption has been developed. Its main components are the neural network architecture shaper, the weights matrix calculator, and the macropartial product table calculator.</p><p>A simulation model was developed, and a user interface for calculating weights matrices for a given neural architecture was presented.</p><p>A simulation model and a user interface for calculating macropartial product tables for the table-algorithmic implementation of the scalar product were developed.</p><p>The use of the developed models reduces the time of setting up neural networks for computational processes, including data encryption and decryption.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The general structure of the neuro-like network used to encrypt data.</figDesc><graphic coords="4,133.08,349.32,333.96,206.04" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Learning matrix for weights calculation.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: User interface of the simulation model software for calculating weight.</figDesc><graphic coords="6,78.84,423.24,443.04,330.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>P_M(x_1, x_2, …, x_k) = { 0, if x_1 = x_2 = … = x_k = 0; w_1, if x_1 = 1, x_2 = … = x_k = 0; w_2, if x_1 = 0, x_2 = 1, x_3 = … = x_k = 0; w_1 + w_2, if x_1 = 1, x_2 = 1, x_3 = … = x_k = 0; …; w_2 + … + w_k, if x_1 = 0, x_2 = … = x_k = 1; w_1 + w_2 + … + w_k, if x_1 = x_2 = … = x_k = 1 },</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: User interface of the simulation model software for calculating the macropartial products table.</figDesc><graphic coords="8,79.08,62.40,441.96,297.84" type="vector_box" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Economic Forecast Model and Development Path Analysis Based on BP and RBF Neural Network</title>
		<author>
			<persName><forename type="first">M</forename><surname>Du</surname></persName>
		</author>
		<idno type="DOI">10.1109/CSNT57126.2023.10134678</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 12th International Conference on Communication Systems and Network Technologies (CSNT)</title>
				<meeting><address><addrLine>Bhopal, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023. 2023</date>
			<biblScope unit="page" from="619" to="624" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Predictive modeling based on small data in clinical medicine: RBF-based additive input-doubling method</title>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dronyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gregus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rashkevych</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J</title>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title/>
		<idno type="DOI">10.3934/mbe.2021132</idno>
	</analytic>
	<monogr>
		<title level="j">Mathematical Biosciences and Engineering</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="2599" to="2613" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Usage of a Graph Neural Network for Large-Scale Network Performance Evaluation</title>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yoshikane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tsuritani</surname></persName>
		</author>
		<idno type="DOI">10.23919/ONDM51796.2021.9492331</idno>
	</analytic>
	<monogr>
		<title level="m">2021 International Conference on Optical Network Design and Modeling (ONDM)</title>
				<meeting><address><addrLine>Gothenburg, Sweden</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Embedded System Implementation of Spiking Neural Network Propagation Model</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1109/IAECST60924.2023.10503419</idno>
	</analytic>
	<monogr>
		<title level="m">5th International Academic Exchange Conference on Science and Technology Innovation (IAECST)</title>
				<meeting><address><addrLine>Guangzhou, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="187" to="193" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">An Efficient area Neural Network Implementation using tan-sigmoid Look up Table Method Based on FPGA</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">H</forename><surname>Abd</surname></persName>
		</author>
		<idno type="DOI">10.1109/INCET54531.2022.9825348</idno>
	</analytic>
	<monogr>
		<title level="m">3rd International Conference for Emerging Technology (INCET)</title>
				<meeting><address><addrLine>Belgaum, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">The Memorable Image Encryption Algorithm Based on Neuron-Like Scheme</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2020.3004379</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="114807" to="114821" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">An Approach to the Implementation of a Neural Network for Cryptographic Protection of Data Transmission at UAV</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tsmots</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Teslyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Łukaszewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lukashchuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kazymyra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holovatyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Opotyak</surname></persName>
		</author>
		<idno type="DOI">10.3390/drones7080507</idno>
		<ptr target="https://doi.org/10.3390/drones7080507" />
	</analytic>
	<monogr>
		<title level="j">Drones</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page">507</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Neural Cryptography Based on Generalized Tree Parity Machine for Real-Life Systems</title>
		<author>
			<persName><forename type="first">Sooyong</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cheolhee</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dowon</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Changho</forename><surname>Seo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Namsu</forename><surname>Jho</surname></persName>
		</author>
		<idno type="DOI">10.1155/2021/6680782</idno>
		<idno>Article ID 6680782</idno>
		<ptr target="https://doi.org/10.1155/2021/6680782" />
	</analytic>
	<monogr>
		<title level="j">Security and Communication Networks</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Neural cryptography based on complex-valued neural network</title>
		<author>
			<persName><forename type="first">T</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="issue">11</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Encryption Algorithm Based on Neural Network</title>
		<author>
			<persName><forename type="first">P</forename><surname>Saraswat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Garg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tripathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</author>
		<idno type="DOI">10.1109/IoT-SIU.2019.8777637</idno>
	</analytic>
	<monogr>
		<title level="m">4th International Conference on Internet of Things: Smart Innovation and Usages (IoT-SIU)</title>
				<meeting><address><addrLine>Ghaziabad, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Implementation of New Approach to Secure IoT Networks with Encryption and Decryption Techniques</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sarin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Thanawala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Prakash</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICCCNT49239.2020.9225279</idno>
	</analytic>
	<monogr>
		<title level="m">11th International Conference on Computing, Communication and Networking Technologies (ICCCNT)</title>
				<meeting><address><addrLine>Kharagpur, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Post-Quantum Cryptography Neural Network</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C H</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSSES58299.2023.10201083</idno>
	</analytic>
	<monogr>
		<title level="m">2023 International Conference on Smart Systems for applications in Electrical Sciences (ICSSES)</title>
				<meeting><address><addrLine>Tumakuru, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Development of a Device on FPGA to Implement the Base Operation of Neural-like Data Encryption Using Polynomials</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tsmots</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Teslyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Rabyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Opotyak</surname></persName>
		</author>
		<idno type="DOI">10.1109/CSIT61576.2023.10324267</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 18th International Conference on Computer Science and Information Technologies (CSIT)</title>
				<meeting><address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Cryptographic Services Based on Elementary and Chaotic Cellular Automata</title>
		<author>
			<persName><forename type="first">E</forename><surname>Corona-Bermúdez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Chimal-Eguía</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Téllez-Castillo</surname></persName>
		</author>
		<idno type="DOI">10.3390/electronics11040613</idno>
		<ptr target="https://doi.org/10.3390/electronics11040613" />
	</analytic>
	<monogr>
		<title level="j">Electronics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">613</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Implementation of Base Components of Neuro-like Cryptographic Data Protection Systems on FPGA</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tsmots</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Rabyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Opotyak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Teslyuk</surname></persName>
		</author>
		<idno type="DOI">10.1109/ELIT61488.2023.10310958</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 13th International Conference on Electronics and Information Technologies (ELIT)</title>
				<meeting><address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Research on Inversion Algorithm of Aerosol Extinction Coefficient Based on Elman Neural Network</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Xie</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICIEA51954.2021.9516085</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 16th Conference on Industrial Electronics and Applications (ICIEA)</title>
				<meeting><address><addrLine>Chengdu, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="62" to="66" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Four Models of Hopfield-Type Octonion Neural Networks and Their Existing Conditions of Energy Functions</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Kuroe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Iima</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Maeda</surname></persName>
		</author>
		<idno type="DOI">10.1109/IJCNN48605.2020.9206838</idno>
	</analytic>
	<monogr>
		<title level="m">2020 International Joint Conference on Neural Networks (IJCNN)</title>
				<meeting><address><addrLine>Glasgow, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="7" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Projected Weight Regularization to Improve Neural Network Generalization</title>
		<author>
			<persName><forename type="first">G</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Niwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">B</forename><surname>Kleijn</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICASSP40776.2020.9054133</idno>
	</analytic>
	<monogr>
		<title level="m">ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)</title>
				<meeting><address><addrLine>Barcelona, Spain</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="4242" to="4246" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Dynamics Analysis and Design for a Bidirectional Super-Ring-Shaped Neural Network With n Neurons and Multiple Delays</title>
		<author>
			<persName><forename type="first">B</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TNNLS.2020.3009166</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks and Learning Systems</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="2978" to="2992" />
			<date type="published" when="2021-07">July 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Floating-Point Number Scalar Product Hardware Implementation for Embedded Systems</title>
		<author>
			<persName><forename type="first">I</forename><surname>Tsmots</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Rabyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Teslyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Opotyak</surname></persName>
		</author>
		<idno type="DOI">10.1109/CADSM58174.2023.10076502</idno>
	</analytic>
	<monogr>
		<title level="m">17th International Conference on the Experience of Designing and Application of CAD Systems (CADSM)</title>
				<meeting><address><addrLine>Jaroslaw, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="6" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Model and Principles for the Implementation of Neural-Like Structures based on Geometric Data Transformations</title>
		<author>
			<persName><forename type="first">R</forename><surname>Tkachenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Izonin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ICCSEEA2018. Advances in Intelligent Systems and Computing</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">754</biblScope>
			<biblScope unit="page" from="578" to="587" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Efficiency Evaluation of Scalable Multiply and Accumulate Architectures in DSP: A Comparative Study of LUT Based and LUT-Less Based Approaches</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bharathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">J M</forename><surname>Shirur</surname></persName>
		</author>
		<idno type="DOI">10.1109/IITCEE59897.2024.10467265</idno>
	</analytic>
	<monogr>
		<title level="m">2024 International Conference on Intelligent and Innovative Technologies in Computing, Electrical and Electronics (IITCEE)</title>
				<meeting><address><addrLine>Bangalore, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">VLUT: Design and Evaluation of Variable band LUT to realize Activation Functions</title>
		<author>
			<persName><forename type="first">R</forename><surname>Rohit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dudeja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Rao</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICECS58634.2023.10382912</idno>
	</analytic>
	<monogr>
		<title level="m">30th IEEE International Conference on Electronics, Circuits and Systems (ICECS)</title>
				<meeting><address><addrLine>Istanbul, Turkiye</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="4" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">LUT Input Reordering to Reduce Aging Impact on FPGA LUTs</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ebrahimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sadeghi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Navabi</surname></persName>
		</author>
		<idno type="DOI">10.1109/TC.2020.2974955</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Computers</title>
		<imprint>
			<biblScope unit="volume">69</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="1500" to="1506" />
			<date type="published" when="2020-10-01">October 2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
