<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <article-id pub-id-type="doi">10.23919/EUSIPCO.2019.8902708</article-id>
      <title-group>
        <article-title>Method for Automatic Processing of Audit Content Based on Bidirectional Neural Network Mapping</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tatiana Neskorodieva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugene Fedorov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cherkasy State Technological University</institution>
          ,
          <addr-line>Shevchenko blvd., 460, Cherkasy, 18006</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vasyl' Stus Donetsk National University</institution>
          ,
          <addr-line>600-richchia str., 21, Vinnytsia, 21021</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>30</volume>
      <fpage>24</fpage>
      <lpage>26</lpage>
      <abstract>
        <p>Currently, the analytical procedures used during an audit are based on data mining techniques. The object of the research is the process of content auditing of the receipt of raw materials for production and of the manufactured products. The aim of the work is to increase the effectiveness and efficiency of the audit by mapping the content of the receipt of raw materials for production onto the manufactured products with a full (bidirectional) counterpropagating neural network, while automating the procedures for checking their compliance. Feature vectors are generated for the objects of the sequences of the receipt of raw materials for production and of the manufactured products, and are then used in the proposed method. The created method, in contrast to the traditional one, provides a batch mode, which increases the learning rate approximately by a factor equal to the product of the number of neurons in the hidden layer and the power of the training set; this is critically important in an audit system performing multivariate intelligent analysis, which involves enumerating various ways of forming the subsets under analysis. The urgent task of increasing audit efficiency was solved by automating the mapping of audit indicators by a full (bidirectional) counterpropagating neural network. A learning algorithm based on k-means was created, intended for implementation on a GPU using CUDA technology, which increases the speed of identifying the parameters of the neural network model. The neural network with the proposed training method based on the k-means rule can be used to intellectualize the audit DSS. A prospect for further research is the application of the proposed neural network mapping method to a wide class of artificial intelligence tasks, in particular, to creating a method for bidirectional mapping of the indicators of audit tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>audit</kwd>
        <kwd>mapping by neural network</kwd>
        <kwd>full (bidirectional) counterpropagating neural network</kwd>
        <kwd>content of the receipt of raw materials for production and the manufactured products</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In the development of international and national economies, and of the IT industry in particular, the following basic tendencies can be distinguished: the realization of digital transformations, the forming of the digital economy, and the globalization of socio-economic processes and of the IT accompanying them [1]. These processes give rise to global, multilevel hierarchical structures of heterogeneous, multivariable, multifunctional connections, interactions and cooperation of managing subjects (objects of audit), and large volumes of information about them have been accumulated in the information systems of accounting, management and audit.</p>
      <p>Consequently, a pressing scientific and technical issue for modern information technologies in the financial and economic sphere of Ukraine is forming a methodology for planning and creating decision support systems (DSS) for the audit of enterprises operating with IT. Such systems rest on the automated analysis of large volumes of data about the financial and economic activity and state of enterprises with a multilevel hierarchical structure of heterogeneous, multivariable, multifunctional connections, interconnections and cooperation of the objects of audit, with the purpose of expanding functional possibilities and increasing the efficiency and universality of IT audit.</p>
      <p>Currently, the analytical procedures used during an audit are based on data mining techniques [2-4]. An automated audit DSS forms recommended decisions automatically, based on the results of automated data analysis, which improves the quality of the audit process [5,6]. Unlike the traditional approach, computer technologies for data analysis in an audit system accelerate the process and improve its accuracy, which is extremely critical given the large number of associated tasks at the lower and middle levels, as well as the number of indicators and observations in every task [7,8].</p>
      <p>When developing a decision-making system for audit based on data mining technologies, three methods have been created: classifying variables, forming analysis sets, and mapping analysis sets.</p>
      <p>The peculiarity of the method for classifying indicators is that qualitatively different (by semantic content) variables are classified: numerological, linguistic, quantitative and logical. The essence of the second method is determined by the qualitative meaning of the indicators: sets are formed with the corresponding semantic content, namely document numbers, names of indicators, quantitative estimates of the values of indicators, and logical indicators.</p>
      <p>The third method maps the formed sets of the same type onto each other in order to determine their equivalence in the following senses: numerological, linguistic, quantitative, logical.</p>
      <p>The most urgent problem is the mapping of quantitative indicators. The mapping of the quantitative indicators of the audit can be implemented with an ANN with associative memory. The main ANNs with associative memory are presented in Table 1. Memory capacity is considered only for ANNs with a binary or bipolar data type that perform reconstruction or classification. HAM stands for hetero-associative memory, AAM for auto-associative memory.</p>
      <sec id="sec-1-1">
        <title>As follows from Table 1, most neural networks have one or more disadvantages:</title>
      </sec>
      <sec id="sec-1-2">
        <title>1. not used to reconstruct either the original or another sample;</title>
      </sec>
      <sec id="sec-1-3">
        <title>2. do not work with real data.</title>
      </sec>
      <sec id="sec-1-4">
        <title>3. do not have a high capacity of associative memory.</title>
      <p>The aim of the work is to increase the efficiency of automatic data analysis in the audit DSS by means of a bidirectional (forward and reverse) neural network mapping of sets of audit indicators, in order to identify systematic misstatements that lead to misstatement of reporting. In the audit system, the topical middle-level task is automating the analysis of the conformity of the content of the supply of raw materials for production with the manufactured products (by the quantization periods of the verification period).</p>
      <p>It is assumed that the audit indicators are noisy with Gaussian noise, which simulates random accounting errors (as opposed to systematic ones). It is also assumed that the residuals for the quantization periods are distributed according to the normal law, whose parameters can be estimated from the accounting data. To achieve the aim, it is necessary to solve the following tasks:
• formalize the content of the audit process of the receipt of raw materials for production and the manufactured products;
• choose a neural network model for mapping the audit indicators (which are noisy with Gaussian noise simulating random accounting errors, as opposed to systematic ones, which lead to distortion of reporting);
• choose a criterion for evaluating the effectiveness of the neural network model;
• propose a method for training the neural network model in batch mode;
• propose an algorithm for training the neural network model in batch mode for implementation on a GPU;
• perform numerical studies.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and methods</title>
      <sec id="sec-2-1">
        <title>2.1. Formalization of the content of the audit process of the receipt of raw materials for production and the manufactured products</title>
        <p>The content of the supply of raw materials for production and of the manufactured products is formalized on the basis of audit variables (Table 2). The elements of the mapped sets are the data on the receipt of raw materials for production and on the manufactured products by the quantization periods of the verification period. The feature vector of the receipt of raw materials $x = (x_1, \dots, x_{N_x})$ is formed by the indicators of the quantity of raw materials of each type $k$, $k \in K$. The feature vector of the manufactured products $y = (y_1, \dots, y_{N_y})$ is formed by the indicators of the quantity of manufactured products of each type $s$, $s \in S$.</p>
        <p>Table 2. Feature vectors for the mapping raw material received – product produced.</p>
        <p>Input vector elements (sense): type of operation (receipt of raw materials for production); type of raw material; set of types of raw materials; quantity of raw material; cost of raw material; total cost of raw material.</p>
        <p>Output vector elements (sense): type of operation (production of finished products (semi-finished products)); type of product; set of types of product; quantity of product; direct material costs for a product of type $s$ for a raw material of type $k$; direct material costs for a product of type $s$, $s \in S$.</p>
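        <p>For illustration only, a minimal Python sketch of how such feature vectors might be assembled from accounting records follows; the record fields, type names and the helper function are hypothetical and are not taken from the audit system described here.</p>
        <preformat>
# Hypothetical sketch: aggregate accounting records into a feature vector
# whose components are quantities per type, in a fixed type order.
def feature_vector(records, types):
    """Sum the 'quantity' field per type; `types` fixes the component order."""
    totals = {t: 0.0 for t in types}
    for rec in records:
        totals[rec["type"]] += rec["quantity"]
    return [totals[t] for t in types]

raw_types = ["steel", "copper"]                  # the set K of raw-material types
receipts = [{"type": "steel", "quantity": 5.0},  # receipt operations in one
            {"type": "copper", "quantity": 2.5}] # quantization period
x = feature_vector(receipts, raw_types)          # x = (x_1, ..., x_{N_x})
        </preformat>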
        <p>To assess the dimension of the feature vector, the nomenclature of purchases of raw materials (components) of large machine-building enterprises was analyzed. Based on this analysis, it can be concluded that the nomenclature contains on average from 8 to 12 sections, and the number of groups in each section is from 2 to 10.</p>
        <p>We represent the implementation of the "generalized audit" in the form of a mapping (comparison) of generalized quantitative features of the audited sets. The formation of generalized quantitative features can be performed using an ANN.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>2.2. Choosing a neural network model for mapping audit sets</title>
      <p>In this work, the Full (bidirectional) Counterpropagating Neural Network (FCPNN), which is a non-recurrent static two-layer ANN with a linear output, was chosen as the neural network. Its advantages are the following:
• unlike most ANNs, it is used to reconstruct another sample by means of auto-associative and hetero-associative memory;
• unlike bidirectional associative memory and the Boltzmann machine, it works with real data;
• unlike bidirectional associative memory and the Boltzmann machine, it has less computational complexity.</p>
      <p>The FCPNN model performing the mapping of each input sample $x = (x_1, \dots, x_{N_x})$ to an output sample $y^* = (y^*_1, \dots, y^*_{N_y})$ is represented as</p>
      <p>$i^* = \arg\min_i \sum_{k=1}^{N_x} (x_k - w^{(1)}_{ki})^2, \quad y^*_j = w^{(2)}_{i^* j}, \ j \in \overline{1, N_y},$ (1)</p>
      <p>where $w^{(1)}_{ki}$ is the connection weight from the $k$-th element of the input sample to the $i$-th neuron, $w^{(2)}_{i^* j}$ is the connection weight from the neuron-winner $i^*$ to the $j$-th element of the output sample, and $N^{(1)}$ is the number of neurons in the hidden layer.</p>
      <p>The FCPNN model performing the mapping of each output sample $y = (y_1, \dots, y_{N_y})$ to an input sample $x^* = (x^*_1, \dots, x^*_{N_x})$ is represented as</p>
      <p>$i^* = \arg\min_i \sum_{s=1}^{N_y} (y_s - v^{(1)}_{si})^2, \quad x^*_k = v^{(2)}_{i^* k}, \ k \in \overline{1, N_x},$ (2)</p>
      <p>where $v^{(1)}_{si}$ is the connection weight from the $s$-th element of the output sample to the $i$-th neuron of the hidden layer, and $v^{(2)}_{i^* k}$ is the connection weight from the neuron-winner $i^*$ of the hidden layer to the $k$-th element of the reconstructed input sample.</p>
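      <p>As an illustration, a minimal NumPy sketch of the two mappings (1) and (2) is given below; the array names w1, w2, v1, v2 follow the notation above, and the sketch is our reading of the winner-take-all lookup, not the authors' implementation.</p>
      <preformat>
import numpy as np

def fcpnn_forward(x, w1, w2):
    """Map an input sample x (length N_x) to an output sample y*.

    w1: (N_x, N1) hidden-layer weights, w2: (N1, N_y) output-layer weights.
    The winner i* minimizes the squared distance to x; the output is the
    row of w2 attached to the winner, i.e. y*_j = w2[i*, j].
    """
    d = ((x[:, None] - w1) ** 2).sum(axis=0)  # distance to every hidden neuron
    i_star = int(np.argmin(d))                # neuron-winner i*
    return w2[i_star]

def fcpnn_reverse(y, v1, v2):
    """Map an output sample y (length N_y) back to a reconstructed input x*."""
    d = ((y[:, None] - v1) ** 2).sum(axis=0)
    i_star = int(np.argmin(d))
    return v2[i_star]                         # x*_k = v2[i*, k]
      </preformat>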
      <sec id="sec-3-2">
        <title>2.3. Criterion choice for assessing the effectiveness of a neural network model for mapping audit sets</title>
        <p>In this work, for training the FCPNN model, the target function chosen indicates the selection of the parameter vector $W = (w^{(1)}_{11}, \dots, w^{(1)}_{N_x N^{(1)}}, v^{(1)}_{11}, \dots, v^{(1)}_{N_y N^{(1)}}, w^{(2)}_{11}, \dots, w^{(2)}_{N^{(1)} N_y}, v^{(2)}_{11}, \dots, v^{(2)}_{N^{(1)} N_x})$ that delivers the minimum mean square error (the difference between the model sample and the test sample):</p>
        <p>$F = \frac{1}{2P} \sum_{\mu=1}^{P} \left( \frac{1}{N_y} \sum_{j=1}^{N_y} (y^*_{\mu j} - y_{\mu j})^2 + \frac{1}{N_x} \sum_{k=1}^{N_x} (x^*_{\mu k} - x_{\mu k})^2 \right) \to \min_W,$</p>
        <p>where $y^*_\mu$ is the $\mu$-th output sample according to the model, $y_\mu$ is the $\mu$-th test output sample, $x^*_\mu$ is the $\mu$-th input sample according to the model, $x_\mu$ is the $\mu$-th test input sample, $P$ is the training set power, $N_x$ is the length of the sample $x$, and $N_y$ is the length of the sample $y$.</p>
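        <p>A short NumPy sketch of this criterion under the notation above follows; stacking the samples row-wise into (P, N_y) and (P, N_x) arrays is our assumption about the data layout.</p>
        <preformat>
import numpy as np

def mapping_error(y_model, y_test, x_model, x_test):
    """Mean square mapping error F over a training set of power P.

    y_model, y_test: (P, N_y) arrays; x_model, x_test: (P, N_x) arrays.
    Implements F as reconstructed above: per-sample squared errors of the
    forward and reverse mappings, normalized by sample lengths and by 2P.
    """
    P, Ny = y_test.shape
    Nx = x_test.shape[1]
    e_y = ((y_model - y_test) ** 2).sum() / Ny
    e_x = ((x_model - x_test) ** 2).sum() / Nx
    return (e_y + e_x) / (2.0 * P)
        </preformat>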
      </sec>
    </sec>
    <sec id="sec-4">
      <title>2.4. Training method for the neural network model in batch mode</title>
      <p>The disadvantage of FCPNN is that it does not have a batch learning mode, which reduces the learning speed. For FCPNN, concurrent training (a combination of training with and without a teacher) has traditionally been used. This work proposes training FCPNN in batch mode.</p>
      <sec id="sec-4-1">
        <title>First phase (training of the hidden layer) (steps 1-6)</title>
        <sec id="sec-4-1-1">
          <title>1. Initializing the weights of the neurons of the hidden layer</title>
          <p>The first phase allows calculating the weights $w^{(1)}$, $v^{(1)}$ of the hidden layer and consists of the following blocks (Fig. 1).</p>
          <p>Learning iteration number $n = 0$; initialization, by a uniform distribution on the interval (0, 1) or [-0.5, 0.5], of the weights $w^{(1)}_{ki}(n)$, $v^{(1)}_{si}(n)$, $k \in \overline{1, N_x}$, $s \in \overline{1, N_y}$, $i \in \overline{1, N^{(1)}}$, where $N_x$ is the length of the sample $x$, $N_y$ is the length of the sample $y$, and $N^{(1)}$ is the number of neurons in the hidden layer. The training set is $\{(x_\mu, y_\mu) \mid x_\mu \in R^{N_x}, y_\mu \in R^{N_y}\}$, $\mu \in \overline{1, P}$, where $x_\mu$ is the $\mu$-th training input vector, $y_\mu$ is the $\mu$-th training output vector, and $P$ is the training set power.</p>
          <p>Initial shortest distance $\bar{z}(0) = 0$.</p>
        </sec>
      </sec>
      <sec id="sec-4-2">
        <title>2. Calculating the distance to all hidden neurons</title>
        <p>The distance $d_{\mu i}$ from the $\mu$-th input sample and the $\mu$-th output sample to each $i$-th neuron of the hidden layer is determined by the formula:</p>
        <p>$d_{\mu i} = \sum_{k=1}^{N_x} (x_{\mu k} - w^{(1)}_{ki}(n))^2 + \sum_{s=1}^{N_y} (y_{\mu s} - v^{(1)}_{si}(n))^2,$ (3)</p>
        <p>where $w^{(1)}_{ki}(n)$ is the connection weight from the $k$-th element of the input sample to the $i$-th neuron at time $n$, and $v^{(1)}_{si}(n)$ is the connection weight from the $s$-th element of the output sample to the $i$-th neuron of the hidden layer at time $n$.</p>
        <p>Figure 1: Block diagram of the first training phase (1 – initializing the weights of the neurons of the hidden layer; 2 – calculating the distance to all hidden neurons; 3 – calculating the shortest distance and choosing the neuron; 4 – setting the weights of the hidden layer neurons associated with the winner and its neighbors based on the k-means rule; 5 – calculating the average sum of the least distances; 6 – checking the termination condition).</p>
      </sec>
      <sec id="sec-4-3">
        <title>3. Calculating the shortest distance and choosing the neuron with the shortest distance</title>
        <p>The shortest distance is calculated and the neuron-winner $i^*_\mu$, for which the distance is shortest, is chosen:</p>
        <p>$i^*_\mu = \arg\min_i d_{\mu i}, \ \mu \in \overline{1, P}, \ i \in \overline{1, N^{(1)}}.$ (4)</p>
        <p>4. Setting the weights of the hidden layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors based on the $k$-means rule:</p>
        <p>$w^{(1)}_{ki}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i^*_\mu) x_{\mu k}}{\sum_{\mu=1}^{P} h(i, i^*_\mu)}, \quad v^{(1)}_{si}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i^*_\mu) y_{\mu s}}{\sum_{\mu=1}^{P} h(i, i^*_\mu)}, \ k \in \overline{1, N_x}, \ s \in \overline{1, N_y}, \ i \in \overline{1, N^{(1)}},$ (6), (6')</p>
        <p>where $h(i, i^*)$ is the rectangular topological neighborhood function.</p>
        <p>5. Calculating the average sum of the shortest distances:</p>
        <p>$\bar{z}(n+1) = \frac{1}{P} \sum_{\mu=1}^{P} d_{\mu i^*_\mu}.$ (7)</p>
      </sec>
      <sec id="sec-4-4">
        <title>6. Checking the termination condition</title>
        <p>If $|\bar{z}(n+1) - \bar{z}(n)| \le \varepsilon$, then finish; else $n = n + 1$, go to step 2.</p>
        <p>Second phase (training of the output layer) (steps 7-13). The second phase allows calculating the weights $w^{(2)}$, $v^{(2)}$ of the output layer and consists of the following blocks (Fig. 2).</p>
        <p>7. Learning iteration number $n = 0$; initialization, by a uniform distribution on the interval (0, 1) or [-0.5, 0.5], of the weights $w^{(2)}_{ij}(n)$, $v^{(2)}_{ik}(n)$, $i \in \overline{1, N^{(1)}}$, $j \in \overline{1, N_y}$, $k \in \overline{1, N_x}$, where $N_x$ is the length of the input sample $x$, $N_y$ is the length of the output sample $y$, and $N^{(1)}$ is the number of neurons in the hidden layer. The training set is $\{(x_\mu, y_\mu) \mid x_\mu \in R^{N_x}, y_\mu \in R^{N_y}\}$, $\mu \in \overline{1, P}$, where $x_\mu$ is the $\mu$-th training input vector, $y_\mu$ is the $\mu$-th training output vector, and $P$ is the training set power.</p>
        <p>Initial shortest distance $\bar{z}(0) = 0$.</p>
        <p>Figure 2: Block diagram of the second training phase (7 – initialization of the neuron weights of the output layer; 8 – calculating the distance to all hidden neurons; 9 – calculating the shortest distance and choosing the neuron with the shortest distance; 10 – calculating the distance to all output neurons; 11 – setting the weights of the output layer neurons associated with the neuron-winner and its neighbors; 12 – calculating the average sum of the least distances $\bar{z}(n+1)$; 13 – checking the termination condition $|\bar{z}(n+1) - \bar{z}(n)| > \varepsilon$; 14 – output of the weights).</p>
      </sec>
      <sec id="sec-4-5">
        <title>8. Calculating the distance to all hidden neurons</title>
        <p>The sum of distances $d_{\mu i}$ from the $\mu$-th input sample and the $\mu$-th output sample to each $i$-th neuron of the hidden layer is determined by the formula:</p>
        <p>$d_{\mu i} = \sum_{k=1}^{N_x} (x_{\mu k} - w^{(1)}_{ki})^2 + \sum_{s=1}^{N_y} (y_{\mu s} - v^{(1)}_{si})^2,$ (8)</p>
        <p>where $w^{(1)}_{ki}$ is the pretrained connection weight from the $k$-th element of the input sample to the $i$-th neuron, and $v^{(1)}_{si}$ is the pretrained connection weight from the $s$-th element of the output sample to the $i$-th neuron of the hidden layer.</p>
      </sec>
      <sec id="sec-4-6">
        <title>9. Calculating the shortest distance and choosing the neuron with the shortest distance</title>
        <p>The shortest distance is calculated and the neuron-winner $i^*_\mu$, for which the distance $d_{\mu i}$ is shortest, is chosen:</p>
        <p>$i^*_\mu = \arg\min_i d_{\mu i}, \ \mu \in \overline{1, P}, \ i \in \overline{1, N^{(1)}}.$ (9)</p>
        <p>10. Calculating the distance to all output neurons. The distance $d_\mu$ from the neuron-winner $i^*_\mu$ to the $\mu$-th input and output sample in the output layer is</p>
        <p>$d_\mu = \sum_{j=1}^{N_y} (y_{\mu j} - w^{(2)}_{i^*_\mu j}(n))^2 + \sum_{k=1}^{N_x} (x_{\mu k} - v^{(2)}_{i^*_\mu k}(n))^2,$ (10)</p>
        <p>where $w^{(2)}_{i^*_\mu j}(n)$ is the weight of the connection from the winner neuron $i^*_\mu$ of the hidden layer to the $j$-th element of the output sample in the output layer at time $n$, and $v^{(2)}_{i^*_\mu k}(n)$ is the weight of the connection from the winner neuron to the $k$-th element of the input sample in the output layer at time $n$.</p>
        <p>11. Setting the weights of the output layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors based on the $k$-means rule:</p>
        <p>$w^{(2)}_{ij}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i^*_\mu) y_{\mu j}}{\sum_{\mu=1}^{P} h(i, i^*_\mu)}, \quad v^{(2)}_{ik}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i^*_\mu) x_{\mu k}}{\sum_{\mu=1}^{P} h(i, i^*_\mu)},$ (12), (12')</p>
        <p>where $h(i, i^*)$ is the rectangular topological neighborhood function.</p>
        <p>12. Calculating the average sum of the shortest distances:</p>
        <p>$\bar{z}(n+1) = \frac{1}{P} \sum_{\mu=1}^{P} d_\mu.$ (13)</p>
        <p>13. Checking the termination condition. If $|\bar{z}(n+1) - \bar{z}(n)| \le \varepsilon$, then finish; else $n = n + 1$, go to step 8.</p>
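        <p>A compact NumPy sketch of both training phases follows, under the simplifying assumption that the rectangular neighborhood h(i, i*) is the indicator of the winner itself, which reduces the updates (6), (6'), (12), (12') to k-means centroid updates; the function names and the eps default are ours, not the authors'.</p>
        <preformat>
import numpy as np

def pair_distances(X, Y, w1, v1):
    """Formula (3)/(8): distance from every (x_mu, y_mu) pair to every hidden neuron."""
    return ((X[:, :, None] - w1[None]) ** 2).sum(axis=1) + \
           ((Y[:, :, None] - v1[None]) ** 2).sum(axis=1)   # shape (P, N1)

def train_hidden_layer(X, Y, n1, eps=1e-4, seed=0):
    """First phase (steps 1-6): batch k-means training of the hidden layer."""
    rng = np.random.default_rng(seed)
    Nx, Ny = X.shape[1], Y.shape[1]
    w1 = rng.uniform(0.0, 1.0, (Nx, n1))       # step 1: initialization
    v1 = rng.uniform(0.0, 1.0, (Ny, n1))
    z_prev = 0.0                               # initial shortest distance
    while True:
        d = pair_distances(X, Y, w1, v1)       # step 2
        winners = d.argmin(axis=1)             # step 3: winner per sample
        for i in range(n1):                    # step 4: k-means update
            members = winners == i
            if members.any():
                w1[:, i] = X[members].mean(axis=0)
                v1[:, i] = Y[members].mean(axis=0)
        z = d.min(axis=1).mean()               # step 5: average shortest distance
        if abs(z - z_prev) > eps:              # step 6: termination condition
            z_prev = z
            continue
        return w1, v1

def train_output_layer(X, Y, w1, v1, n1, eps=1e-4, seed=1):
    """Second phase (steps 7-13): batch k-means training of the output layer."""
    rng = np.random.default_rng(seed)
    Nx, Ny = X.shape[1], Y.shape[1]
    w2 = rng.uniform(0.0, 1.0, (n1, Ny))       # step 7: initialization
    v2 = rng.uniform(0.0, 1.0, (n1, Nx))
    z_prev = 0.0
    while True:
        d = pair_distances(X, Y, w1, v1)       # step 8: pretrained hidden layer
        winners = d.argmin(axis=1)             # step 9
        d_out = ((Y - w2[winners]) ** 2).sum(axis=1) + \
                ((X - v2[winners]) ** 2).sum(axis=1)   # step 10
        for i in range(n1):                    # step 11: k-means update
            members = winners == i
            if members.any():
                w2[i] = Y[members].mean(axis=0)
                v2[i] = X[members].mean(axis=0)
        z = d_out.mean()                       # step 12
        if abs(z - z_prev) > eps:              # step 13: termination condition
            z_prev = z
            continue
        return w2, v2
        </preformat>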
      </sec>
    </sec>
    <sec id="sec-5">
      <title>2.5. Algorithm for training the neural network model in batch mode for implementation on a GPU</title>
      <p>For the proposed method of training FCPNN, exemplified on audit data, an algorithm for implementation on a GPU using the CUDA parallel processing technology is examined.</p>
      <p>The first phase (training of the hidden layer), based on formulas (1)-(7), consists of the following steps.</p>
      <p>Step 1 – The operator enters the length $N_x$ of the sample $x$, the length $N_y$ of the sample $y$, the number of neurons in the hidden layer $N^{(1)}$, the power of the training set $P$, and the training set $\{(x_\mu, y_\mu) \mid x_\mu \in R^{N_x}, y_\mu \in R^{N_y}\}$, $\mu \in \overline{1, P}$.</p>
      <p>Step 2 – Initialization, by a uniform distribution over the interval (0, 1) or [-0.5, 0.5], of the weights $w^{(1)}_{ki}(n)$, $v^{(1)}_{si}(n)$.</p>
      <p>Step 3 – Calculation of the distances to all hidden neurons of the ANN using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_{\mu i}$ from the $\mu$-th sample to the $i$-th neuron.</p>
      <p>Step 4 – Computation, based on shortest-distance reduction, determining the neurons with the shortest distance, using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. The result of the work of each block is a neuron-winner $i^*_\mu$ with the smallest distance $d_{\mu i^*_\mu}$.</p>
      <p>Step 5 – Setting the weights of the hidden layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors, based on reduction, using $N_x \cdot N^{(1)} \cdot P$ threads on the GPU, which are grouped into $N_x \cdot N^{(1)}$ blocks. The result of the work of each block is the weight $w^{(1)}_{ki}(n+1)$.</p>
      <p>Step 6 – Setting the weights of the hidden layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors, based on reduction, using $N_y \cdot N^{(1)} \cdot P$ threads on the GPU, which are grouped into $N_y \cdot N^{(1)}$ blocks. The result of the work of each block is the weight $v^{(1)}_{si}(n+1)$.</p>
      <p>Step 7 – Calculation, based on reduction, of the average sum of the shortest distances, using $P$ threads on the GPU, which are grouped into 1 block. The result of the block is the average sum of the smallest distances $\bar{z}(n+1)$.</p>
      <p>Step 8 – If the average sums of the smallest distances of neighboring iterations are close, $|\bar{z}(n+1) - \bar{z}(n)| \le \varepsilon$, then finish; else the iteration number is increased, $n = n + 1$, go to step 3.</p>
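      <p>A CPU analogue of GPU steps 3-4 can clarify the thread/block decomposition: the $P \cdot N^{(1)}$ per-thread distances form one (P, N1) array, and the per-block reduction that selects each winner becomes a row-wise argmin. The NumPy sketch below is our illustration of this correspondence, not CUDA code.</p>
      <preformat>
import numpy as np

def winners_by_reduction(X, Y, w1, v1):
    """CPU analogue of GPU steps 3-4 of the first phase.

    Each entry d[mu, i] corresponds to the work of one of the P * N1
    threads; taking the minimum over each row corresponds to the
    reduction performed by each of the P blocks.
    """
    d = ((X[:, :, None] - w1[None]) ** 2).sum(axis=1) + \
        ((Y[:, :, None] - v1[None]) ** 2).sum(axis=1)   # one entry per thread
    return d.argmin(axis=1), d.min(axis=1)              # one winner per block
      </preformat>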
      <sec id="sec-5-1">
        <title>Step 9 – Recording of weights</title>
        <p>The weights $w^{(1)}(n+1)$ and $v^{(1)}(n+1)$ are recorded in the database.</p>
        <p>The second phase (training of the output layer), based on formulas (8)-(13), consists of the following steps.</p>
        <p>Step 1 – The operator enters the length $N_x$ of the sample $x$, the length $N_y$ of the sample $y$, the number of neurons in the hidden layer $N^{(1)}$, the power of the training set $P$, and the training set $\{(x_\mu, y_\mu) \mid x_\mu \in R^{N_x}, y_\mu \in R^{N_y}\}$, $\mu \in \overline{1, P}$.</p>
        <p>Step 2 – Initialization, by a uniform distribution over the interval (0, 1) or [-0.5, 0.5], of the weights $w^{(2)}_{ij}(n)$, $v^{(2)}_{ik}(n)$.</p>
        <p>Step 3 – Calculation of the distances to all hidden neurons of the ANN using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_{\mu i}$ from the $\mu$-th sample to the $i$-th neuron.</p>
        <p>Step 4 – Computation, based on shortest-distance reduction, determining the neurons with the shortest distance, using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. The result of the work of each block is a neuron-winner $i^*_\mu$ with the smallest distance $d_{\mu i^*_\mu}$.</p>
        <p>Step 5 – Calculation of the distances from the neuron-winner $i^*_\mu$ to the $\mu$-th output sample using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_\mu$ from the neuron-winner $i^*_\mu$ to the $\mu$-th sample.</p>
        <p>Step 6 – Setting the weights of the output layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors, based on reduction, using $N^{(1)} \cdot N_y \cdot P$ threads on the GPU, which are grouped into $N^{(1)} \cdot N_y$ blocks. The result of the work of each block is the weight $w^{(2)}_{ij}(n+1)$.</p>
        <p>Step 7 – Setting the weights of the output layer neurons associated with the neuron-winner $i^*_\mu$ and its neighbors, based on reduction, using $N^{(1)} \cdot N_x \cdot P$ threads on the GPU, which are grouped into $N^{(1)} \cdot N_x$ blocks. The result of the work of each block is the weight $v^{(2)}_{ik}(n+1)$.</p>
        <p>Step 8 – Calculation, based on reduction, of the average sum of the shortest distances, using $P$ threads on the GPU, which are grouped into 1 block. The result of the block is the average sum of the smallest distances $\bar{z}(n+1)$.</p>
        <p>Step 9 – If the average sums of the smallest distances of neighboring iterations are close, $|\bar{z}(n+1) - \bar{z}(n)| \le \varepsilon$, then finish; else the iteration number is increased, $n = n + 1$, go to step 3.</p>
        <p>Step 10 – Recording of the weights $w^{(2)}(n+1)$ and $v^{(2)}(n+1)$ in the database.</p>
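        <p>An end-to-end usage sketch of the functions defined in the code fragments above (synthetic data; all names and sizes are illustrative assumptions):</p>
        <preformat>
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 6))            # P = 200 input samples, N_x = 6
Y = rng.normal(size=(200, 4))            # paired output samples, N_y = 4

w1, v1 = train_hidden_layer(X, Y, n1=16)           # first phase
w2, v2 = train_output_layer(X, Y, w1, v1, n1=16)   # second phase

y_star = fcpnn_forward(X[0], w1, w2)     # forward mapping of one sample
x_star = fcpnn_reverse(Y[0], v1, v2)     # reverse mapping of one sample
        </preformat>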
        <p>The evaluation of the computational complexity of the proposed method using the GPU and of the traditional method of training FCPNN was based on the number of distance calculations, the computation of which is the most time-consuming part of the method. Here $N_1$ is the maximum number of iterations of the first training phase, $N_2$ is the maximum number of iterations of the second training phase, $N^{(1)}$ is the number of neurons in the hidden layer, and $P$ is the power of the training set.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>2.7. Discussion</title>
    </sec>
    <sec id="sec-7">
      <title>2.8. Conclusion</title>
      <p>The traditional FCPNN learning method does not support a batch mode, which increases computational complexity (Table 3). The proposed method eliminates this flaw and increases the learning rate approximately by a factor of $P \cdot N^{(1)}$. Owing to the reduced computational complexity, it is possible to increase the accuracy of the method by decreasing the parameter $\varepsilon$ and increasing the number of neurons in the hidden and output layers.</p>
      <p>1. The urgent task of increasing the effectiveness of the audit in the context of large volumes of analyzed data and limited verification time was solved by automating the formation of generalized features of audit sets and their mapping by means of a full bidirectional counterpropagating neural network.</p>
      <p>2. To increase the learning rate of the full bidirectional counterpropagating neural network, a method based on the $k$-means rule for training in batch mode was developed. The proposed method increases the learning rate approximately by a factor of $P \cdot N^{(1)}$, where $N^{(1)}$ is the number of neurons in the hidden layer and $P$ is the power of the learning set.</p>
      <p>3. A learning algorithm based on $k$-means was created, intended for implementation on a GPU using CUDA technology.</p>
      <p>4. The proposed training method based on the $k$-means rule can be used to intellectualize the audit DSS. A prospect for further research is the study of the proposed method for a wide class of artificial intelligence tasks, as well as the creation of a method for bidirectional mapping of audit features to solve audit problems.</p>
    </sec>
    <sec id="sec-8">
      <title>3. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[1] The World Bank: World Development Report 2016: Digital Dividends</source>
          ,
          <year>2016</year>
          . URL: https://www.worldbank.org/en/publication/wdr2016.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Schultz</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Tropmann-Frick</surname>
          </string-name>
          .
          <article-title>Autoencoder Neural Networks versus External Auditors: Detecting Unusual Journal Entries in Financial Statement Audits</article-title>
          .
          <source>Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS-2020)</source>
          , Maui, Hawaii, USA.
          <year>2021</year>
          , pp.
          <fpage>5421</fpage>
          -
          <lpage>5430</lpage>
          . doi:
          <volume>10</volume>
          .24251/hicss.
          <year>2020</year>
          .
          <volume>666</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Nonnenmacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Kruse</surname>
          </string-name>
          , G. Schumann,
          <string-name>
            <given-names>G.</given-names>
            <surname>Marx</surname>
          </string-name>
          .
          <article-title>Using Autoencoders for Data-Driven Analysis in Internal Auditing</article-title>
          .
          <source>In Proceedings of the 54th Hawaii International Conference on System Sciences, Maui</source>
          , Hawaii, USA,
          <year>2021</year>
          , pp.
          <fpage>5748</fpage>
          -
          <lpage>5757</lpage>
          . doi:
          <volume>10</volume>
          .24251/hicss.
          <year>2021</year>
          .
          <volume>697</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bodyanskiy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Boiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zaychenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hamidov</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Zelikman</surname>
          </string-name>
          .
          <article-title>The Hybrid GMDHNeo-fuzzy Neural Network in Forecasting Problems in Financial Sphere</article-title>
          .
          <source>Proceedings of 2nd International Conference on System Analysis &amp; Intelligent Computing (SAIC)</source>
          , Kyiv, Ukraine, IEEE,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          , doi: 10.1109/SAIC51296.
          <year>2020</year>
          .
          <volume>9239152</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T.</given-names>
            <surname>Neskorodіeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fedorov</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Izonin.</surname>
          </string-name>
          <article-title>Forecast Method for Audit Data Analysis by Modified Liquid State Machine</article-title>
          .
          <source>Proceedings of the 1st International Workshop on Intelligent Information Technologies &amp; Systems of Information Security (IntelITSIS</source>
          <year>2020</year>
          ), Khmelnytskyi, Ukraine,
          <fpage>10</fpage>
          -
          <lpage>12</lpage>
          June,
          <year>2020</year>
          : proceedings. - CEUR-WS vol.
          <volume>2623</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>35</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T.</given-names>
            <surname>Neskorodіeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Fedorov</surname>
          </string-name>
          .
          <article-title>Method for Automatic Analysis of Compliance of Expenses Data and the Enterprise Income by Neural Network Model of Forecast</article-title>
          .
          <source>Proceedings of the 2nd International Workshop on Modern Machine Learning Technologies and Data Science (MoMLeT&amp;DS-2020)</source>
          , Lviv-Shatsk,
          <year>Ukraine</year>
          ,
          <fpage>2</fpage>
          -3 June, 2020
          <source>: proceedings. - CEUR-WS, Volume I: Main Conference</source>
          . vol.
          <volume>2631</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>145</fpage>
          -
          <lpage>158</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.V.</given-names>
            <surname>Barmak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.V.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.A.</given-names>
            <surname>Manziuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.S.</given-names>
            <surname>Kasianiuk</surname>
          </string-name>
          .
          <article-title>Information technology separating hyperplanes synthesis for linear classifiers</article-title>
          .
          <source>Journal of Automation and Information Sciences</source>
          , vol.
          <volume>51</volume>
          (
          <issue>5</issue>
          ) (
          <year>2019</year>
          )
          <fpage>54</fpage>
          -
          <lpage>64</lpage>
          . doi:
          <volume>10</volume>
          .1615/JAutomatInfScien.v51.
          <year>i5</year>
          .
          <fpage>50</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>T.V.</given-names>
            <surname>Prokopenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grigor</surname>
          </string-name>
          .
          <article-title>Development of the comprehensive method to manage risks in projects related to information technologies</article-title>
          .
          <source>Eastern-European Journal of Enterprise</source>
          Technologies vol.
          <volume>2</volume>
          ,
          <issue>2018</issue>
          , pp.
          <fpage>37</fpage>
          -
          <lpage>43</lpage>
          . doi:
          <volume>10</volume>
          .15587/
          <fpage>1729</fpage>
          -
          <lpage>4061</lpage>
          .
          <year>2018</year>
          .
          <volume>128140</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>U.P.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tiwari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.K.</given-names>
            <surname>Singh</surname>
          </string-name>
          .
          <article-title>Gradient evolution-based counter propagation network for approximation of noncanonical system</article-title>
          .
          <source>Soft Computing</source>
          ,
          <volume>23</volume>
          <fpage>13</fpage>
          , (
          <year>2019</year>
          )
          <fpage>4955</fpage>
          -
          <lpage>4967</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00500-018-3160-7.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Asokan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Gunavathi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Anitha</surname>
          </string-name>
          .
          <article-title>Classification of Melakartha ragas using neural networks</article-title>
          .
          <source>Proceedings of the International Conference on Innovations in Information, Embedded and Communication Systems (ICIIECS-2017)</source>
          ,
          <fpage>17</fpage>
          -
          <lpage>18</lpage>
          March 2017, Coimbatore, India. IEEE.
          <year>2017</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICIIECS.
          <year>2017</year>
          .
          <volume>8276040</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Sonika</surname>
            ;
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Pratap; M. Chauhan</surname>
            ;
            <given-names>A.</given-names>
          </string-name>
          <string-name>
            <surname>Dixit</surname>
          </string-name>
          .
          <article-title>New technique for detecting fraudulent transactions using hybrid network consisting of full-counter propagation network and probabilistic network</article-title>
          .
          <source>2016 International Conference on Computing, Communication and Automation (ICCCA)</source>
          ,
          <fpage>29</fpage>
          -
          <lpage>30</lpage>
          April 2016,
          <string-name>
            <given-names>Greater</given-names>
            <surname>Noida</surname>
          </string-name>
          , India, IEEE.
          <year>2016</year>
          , pp.
          <fpage>29</fpage>
          -
          <lpage>30</lpage>
          . doi:
          <volume>10</volume>
          .1109/CCAA.
          <year>2016</year>
          .
          <volume>7813713</volume>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>