<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Information Technology and Interactions</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Automatic Analysis Method of Audit Data Based on Neural Network Mapping</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tatiana Neskorodieva</string-name>
          <email>t.neskorodieva@donnu.edu.ua</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugene Fedorov</string-name>
          <email>fedorovee75@ukr.net</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cherkasy State Technological University</institution>
          ,
          <addr-line>Shevchenko blvd., 460, Cherkasy, 18006</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Vasyl' Stus Donetsk National University</institution>
          ,
          <addr-line>600-richchia str., 21, Vinnytsia, 21021</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <abstract>
        <p>The work solves the problem of increasing the efficiency and effectiveness of analytical audit procedures by automating data comparison through neural network mapping. The object of the research is the process of auditing the compliance of the payment and supply sequences for raw materials. Feature vectors for the objects of the payment and supply sequences of raw materials are generated and then used in the proposed method. The created method, in contrast to the traditional one, provides a batch mode, which increases the learning rate by a factor approximately equal to the product of the number of neurons in the hidden layer and the power of the training set; this is critically important in the audit system for enumerating the various ways of forming the analyzed subsets. The urgent task of increasing audit efficiency was solved by automating the mapping of audit indicators with a forward-only counterpropagation neural network. A learning algorithm based on k-means was created, intended for implementation on a GPU using CUDA technology, which increases the speed of identifying the parameters of the neural network model.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>sequences of payment and supply of raw materials.</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>In the development of international and national economies, and of the IT industry in particular, the following basic tendencies can be distinguished: the realization of digital transformations, the formation of the digital economy, and the globalization of socio-economic processes together with the IT accompanying them [1]. These processes give rise to global, multilevel hierarchical structures of heterogeneous, multivariable, multifunctional connections, interactions and cooperation of managing subjects (objects of audit), and large volumes of information about them have been accumulated in the information systems of accounting, management and audit.</p>
      <p>Consequently, the pressing scientific and technical issue for modern information technologies in the financial and economic sphere of Ukraine is the formation of a methodology for planning and creating decision support systems (DSS) for the audit of enterprises under conditions of IT application, based on the automated analysis of large volumes of data about the financial and economic activity and states of enterprises with a multi-level hierarchical structure of heterogeneous, multivariable, multifunctional connections, intercommunications and cooperation of objects of audit, with the purpose of expanding the functional possibilities, efficiency and universality of IT audit.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Problem Statement</title>
      <p>An automated audit DSS means the automatic formation of recommended decisions based on the results of automated data analysis, which improves the quality of the audit process. Unlike the traditional approach, computer technologies for data analysis in the audit system accelerate the process and improve its accuracy, which is extremely critical given the large number of associated tasks at the lower and middle levels, as well as the number of indexes and observations in every task.</p>
      <p>When developing a decision-making system in audit based on data mining technologies, three methods have been created: classification of variables, formation of analysis sets, and mapping of analysis sets [2,3].</p>
      <p>The peculiarity of the methodology for classifying indicators is that qualitatively different (by semantic content) variables are classified: numerological, linguistic, quantitative and logical. The essence of the second technique is determined by the qualitative meaning of the indicators: sets are formed with the corresponding semantic content, namely document numbers, names of indicators, quantitative estimates of indicator values, and logical indicators. The third technique maps the formed sets of the same type onto each other in order to determine equivalence in the following senses: numerological, linguistic, quantitative, logical. The aim of the work is to increase the efficiency of automatic data analysis in the audit DSS by means of a neural network mapping of sets of audit indicators in order to identify systematic misstatements that lead to misstatement of reporting. It is assumed that the audit indicators are noisy with Gaussian noise, which in turn simulates random accounting errors (as opposed to systematic ones).</p>
      <p>For the achievement of the aim it is necessary to solve the following tasks:
• generate vectors of indicators for the objects of the payment and raw materials supply sequences;
• choose a neural network model for mapping audit indicators (which are noisy with Gaussian noise simulating random accounting errors, as opposed to systematic errors, which lead to distortion of reporting);
• choose a criterion for evaluating the effectiveness of the neural network model;
• propose a method for training the neural network model in batch mode;
• propose an algorithm for training the neural network model in batch mode for implementation on a GPU;
• perform numerical studies.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Literature Survey</title>
      <p>The most urgent problem is the mapping of quantitative indicators. The mapping of quantitative audit indicators can be implemented through an ANN with associative memory. The main ANNs with associative memory are presented in Table 1. Memory capacity was considered only for ANNs with a binary or bipolar data type that perform reconstruction or classification. HAM stands for hetero-associative memory, AAM stands for auto-associative memory.</p>
      <p>As follows from Table 1, most neural networks have one or more disadvantages:
1. they cannot be used for reconstruction of the other sample;
2. they do not work with real data.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Materials and Methods</title>
    </sec>
    <sec id="sec-5a">
      <title>4.1. Formation of feature vectors of elements of payment sequences and raw materials supply</title>
      <p>Features of the elements of the payment and raw materials supply sequences are formed on the basis of audit variables (Table 2). The elements of the mapped sets are the payment (supply) data for each supplier with which a long-term supply agreement is in force during the year for which the audit is carried out. The vector of payment features $x = (x_1, \ldots, x_{N_x})$ is formed by indicators of the cost $x_k$ of paid raw materials by type $k$. The vector of supply features $y = (y_1, \ldots, y_{N_y})$ is formed by indicators of the quantity $y_k$ of supplied raw materials by type $k$.</p>
      <table-wrap id="table1">
        <label>Table 1</label>
        <caption><p>The main ANNs with associative memory</p></caption>
        <table>
          <thead>
            <tr><th>ANN</th><th>Memory type</th><th>Memory capacity</th><th>Data type</th><th>Operation</th></tr>
          </thead>
          <tbody>
            <tr><td>Forward-only Counterpropagation Neural Network [4, 5]</td><td>HAM</td><td>-</td><td>Real</td><td>Reconstruction of other sample</td></tr>
            <tr><td>Full (bi-directional) Counterpropagation Neural Network [7]</td><td>AAM, HAM</td><td>-</td><td>Real</td><td>Reconstruction of the original or other sample</td></tr>
            <tr><td>Deep Belief Network [8]</td><td>AAM</td><td>Medium</td><td>Binary</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Restricted Boltzmann machine [9, 10]</td><td>AAM</td><td>Medium</td><td>Binary</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Self-Organizing Map [11, 12]</td><td>AAM</td><td>-</td><td>Real</td><td>Clustering</td></tr>
            <tr><td>Learning Vector Quantization [13]</td><td>AAM</td><td>-</td><td>Real</td><td>Clustering</td></tr>
            <tr><td>Principal Component Analysis NN [14]</td><td>HAM</td><td>-</td><td>Real</td><td>Dimension reduction</td></tr>
            <tr><td>Independent Component Analysis NN [15, 16]</td><td>HAM</td><td>-</td><td>Real</td><td>Dimension reduction</td></tr>
            <tr><td>Cerebellar Model Articulation Controller [17]</td><td>HAM</td><td>-</td><td>Real</td><td>Coding</td></tr>
            <tr><td>Recurrent correlative autoassociative memory [18, 19]</td><td>AAM</td><td>High</td><td>Bipolar</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Hopfield Neural Network [20, 21]</td><td>AAM</td><td>Low</td><td>Bipolar</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Gauss machine [22, 23]</td><td>AAM</td><td>Low</td><td>Bipolar</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Bidirectional associative memory [24]</td><td>AAM, HAM</td><td>Low</td><td>Bipolar</td><td>Reconstruction of the original or other sample</td></tr>
            <tr><td>Brain State Model [25, 26]</td><td>AAM</td><td>-</td><td>Real</td><td>Clustering</td></tr>
            <tr><td>Hamming neural network [27]</td><td>AAM</td><td>High</td><td>Bipolar</td><td>Reconstruction of the original sample</td></tr>
            <tr><td>Boltzmann machine [28, 29, 30, 31]</td><td>AAM, HAM</td><td>Medium</td><td>Binary</td><td>Reconstruction of the original or other sample</td></tr>
            <tr><td>ART-2 [32, 33]</td><td>AAM</td><td>-</td><td>Real</td><td>Clustering</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>To assess the dimension of the feature vector, the nomenclature of purchases of raw materials (components) of large machine-building enterprises was analyzed. Based on this analysis, we can conclude that a nomenclature contains on average from 8 to 12 sections, and the number of groups in each section is from 2 to 10.</p>
      <p>Analyzing the homogeneity of the procurement nomenclature, we can conclude that, for continuous operation, a plant can have long-term contracts with from 50 to 100 suppliers.</p>
      <table-wrap id="table2">
        <label>Table 2</label>
        <caption><p>Senses of the audit variables of the payment and supply sequences (the symbol designations of the source table were lost in extraction)</p></caption>
        <table>
          <thead>
            <tr><th>Sense (payment sequence)</th><th>Sense (supply sequence)</th></tr>
          </thead>
          <tbody>
            <tr><td>type of operation (payment of supplier invoice)</td><td>type of operation (receipt of raw materials from a supplier)</td></tr>
            <tr><td>type of supplier to whom payment is transferred</td><td>type of supplier who supplied raw materials</td></tr>
            <tr><td>type of paid raw materials</td><td>type of raw material received</td></tr>
            <tr><td>set of types of paid raw materials</td><td>set of types of raw materials obtained</td></tr>
            <tr><td>number of types of paid raw materials</td><td>number of types of raw materials obtained</td></tr>
            <tr><td>number of paid raw materials</td><td>number of received raw material</td></tr>
            <tr><td>price of paid raw material</td><td>price of the received raw material</td></tr>
            <tr><td>cost of paid raw material</td><td>cost of the received raw material</td></tr>
            <tr><td>maximum delivery time according to the contract</td><td>delivery lag after the fact</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>Based on the analysis performed, the following quantitative features were selected for the elements of the payment sequence for each supplier and for all types of raw materials (the order of the quantity of the assortment of raw materials is described above): $x_k$, the cost of paid raw material of type $k$ ($k = 1, \ldots, N_x$).</p>
      <p>For the elements of the supply sequence for each supplier and for all types of raw materials, the following features have been selected: $y_k$, the amount of received raw material of type $k$ ($k = 1, \ldots, N_y$).</p>
      <p>We represent the implementation of the "generalized audit" in the form of a mapping (comparison) of generalized quantitative features of the audited sets. The formation of generalized quantitative features can be performed using an ANN.</p>
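      <p>To make the formation of the feature vectors concrete, the following minimal Python sketch aggregates payment and supply records into the vectors x and y for one supplier; the record layout and the field names are illustrative assumptions, not taken from the audited accounting systems described above.</p>
      <preformat>
from collections import defaultdict

def payment_vector(payments, raw_types):
    """x_k = cost of paid raw material of type k for one supplier."""
    cost = defaultdict(float)
    for raw_type, qty, price in payments:      # hypothetical record: (type, quantity, price)
        cost[raw_type] += qty * price          # cost of paid raw material
    return [cost[k] for k in raw_types]        # fixed ordering over the set of types

def supply_vector(supplies, raw_types):
    """y_k = quantity of supplied raw material of type k for one supplier."""
    qty = defaultdict(float)
    for raw_type, quantity in supplies:        # hypothetical record: (type, quantity)
        qty[raw_type] += quantity
    return [qty[k] for k in raw_types]

# One (x, y) pair per supplier with a long-term contract:
raw_types = ["steel", "copper", "plastic"]     # illustrative nomenclature
x = payment_vector([("steel", 10, 2.5), ("copper", 4, 7.0)], raw_types)
y = supply_vector([("steel", 9.5), ("copper", 4.0)], raw_types)
print(x, y)                                    # [25.0, 28.0, 0.0] [9.5, 4.0, 0.0]
      </preformat>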
    </sec>
    <sec id="sec-6">
      <title>4.2. Choosing a neural network model for mapping audit sets</title>
      <p>In this work, the Forward-only Counterpropagation Neural Network (FOCPNN), a non-recurrent static two-layer ANN with a linear output, was chosen as the neural network model.</p>
      <p>FOCPNN advantages:
1. unlike most ANNs, it can be used to reconstruct another sample using hetero-associative memory;
2. unlike bidirectional associative memory and the Boltzmann machine, it works with real data;
3. unlike a full counterpropagation neural network, it has lower computational complexity (it does not perform an additional reconstruction of the original sample).</p>
      <p>The FOCPNN model, performing the mapping of each input sample $x = (x_1, \ldots, x_{N_x})$ to an output sample $y^* = (y_1^*, \ldots, y_{N_y}^*)$, is represented as
$$i^* = \arg\min_i d_i, \quad d_i = \sqrt{\sum_{k=1}^{N_x} (x_k - w_{ki}^{(1)})^2}, \quad i \in \overline{1, N^{(1)}},$$
$$y_j^* = w_{i^* j}^{(2)}, \quad j \in \overline{1, N_y},$$
where $w_{ki}^{(1)}$ is the connection weight from the $k$-th element of the input sample to the $i$-th neuron, $w_{i^* j}^{(2)}$ is the connection weight from the neuron-winner $i^*$ to the $j$-th element of the output sample, and $N^{(1)}$ is the number of neurons in the hidden layer.</p>
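      <p>The mapping defined by these formulas can be stated as a minimal sketch (Python with NumPy; the weight matrices are assumed to be already trained):</p>
      <preformat>
import numpy as np

def focpnn_map(x, W1, W2):
    """Map input sample x to output sample y*.
    W1: (N_x, N1) hidden-layer weights w(1); W2: (N1, N_y) output-layer weights w(2)."""
    d = np.sqrt(((x[:, None] - W1) ** 2).sum(axis=0))  # distance to each hidden neuron
    i_star = int(d.argmin())                           # neuron-winner i*
    return W2[i_star]                                  # y*_j = w(2)_{i* j}
      </preformat>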
      <p>The disadvantage of FOCPNN is that it does not have a batch learning mode, which reduces the learning speed. FOCPNN has traditionally used concurrent training (a combination of training with and without a teacher). This work proposes training FOCPNN in batch mode.</p>
    </sec>
    <sec id="sec-6a">
      <title>4.3. Criterion choice for assessing the effectiveness of a neural network model for mapping audit sets</title>
      <p>In this work, the target function chosen for training the FOCPNN model is the mean square error (the difference between the model output sample and the test output sample). Training searches for the vector of parameter values $W = (w_{11}^{(1)}, \ldots, w_{N_x N^{(1)}}^{(1)}, w_{11}^{(2)}, \ldots, w_{N^{(1)} N_y}^{(2)})$ that delivers the minimum of
$$F = \frac{1}{P} \sum_{\mu=1}^{P} \| y_\mu - d_\mu \|^2 \to \min,$$
where $y_\mu$ is the $\mu$-th output sample according to the model and $d_\mu$ is the $\mu$-th test output sample.</p>
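      <p>A direct reading of this criterion as code (Python with NumPy; Y_model and Y_test are assumed to hold the P model output samples and the P test output samples as rows):</p>
      <preformat>
import numpy as np

def criterion(Y_model, Y_test):
    """F = (1/P) * sum over mu of ||y_mu - d_mu||^2."""
    diff = Y_model - Y_test
    return float((diff ** 2).sum(axis=1).mean())
      </preformat>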
    </sec>
    <sec id="sec-7">
      <title>4.4. Training method for the neural network model in batch mode</title>
      <p>First phase (training of the hidden layer) (steps 1-6). The first phase computes the weights of the hidden layer $w^{(1)}$ and consists of the following blocks (Figure 1).</p>
      <p>1. Initialization. The learning iteration number is $n = 0$, and the weights are initialized by a uniform distribution on the interval $(0,1)$ or $[-0.5, 0.5]$:
$$w_{ki}^{(1)}(n) \sim U(0,1) \text{ or } U[-0.5, 0.5], \quad k \in \overline{1, N_x}, \quad i \in \overline{1, N^{(1)}}, \quad (1)$$
where $N_x$ is the length of the sample $x_\mu$ and $N^{(1)}$ is the number of neurons in the hidden layer. The training set is $\{x_\mu \mid \mu \in \overline{1, P}\}$, where $x_\mu$ is the $\mu$-th training input vector and $P$ is the training set power. The initial shortest distance is $\bar{d}(0) = 0$.</p>
      <p>2. Calculating the distance to all hidden neurons. The distance $d_{\mu i}$ from the $\mu$-th input sample to each $i$-th neuron is determined by the formula
$$d_{\mu i} = \sqrt{\sum_{k=1}^{N_x} (x_{\mu k} - w_{ki}^{(1)}(n))^2}, \quad \mu \in \overline{1, P}, \quad i \in \overline{1, N^{(1)}}, \quad (2)$$
where $w_{ki}^{(1)}(n)$ is the connection weight from the $k$-th element of the input sample to the $i$-th neuron at time $n$.</p>
      <p>3. Calculating the shortest distance and choosing the neuron-winner $i_\mu^*$ for which the distance is shortest:
$$i_\mu^* = \arg\min_i d_{\mu i}, \quad \mu \in \overline{1, P}. \quad (3)$$</p>
      <p>4. Setting the weights of the hidden layer neurons associated with the neuron-winner and its neighbors based on the k-means rule:
$$w_{ki}^{(1)}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i_\mu^*) \, x_{\mu k}}{\sum_{\mu=1}^{P} h(i, i_\mu^*)}, \quad k \in \overline{1, N_x}, \quad i \in \overline{1, N^{(1)}}, \quad (4)$$
where $h(i, i^*)$ is the rectangular topological neighborhood function: $h(i, i^*) = 1$ if $i = i^*$ and $h(i, i^*) = 0$ if $i \neq i^*$. (5)</p>
      <p>5. Calculating the average sum of the shortest distances:
$$\bar{d}(n+1) = \frac{1}{P} \sum_{\mu=1}^{P} d_{\mu i_\mu^*}. \quad (6)$$</p>
      <p>6. Checking the termination condition: if $|\bar{d}(n+1) - \bar{d}(n)| \le \varepsilon$, then finish; else $n = n + 1$ and go to step 2. (7)</p>
      <p>(Figure 1 shows the flowchart of the first phase: initializing the weights of the hidden layer neurons; calculating the distance to all hidden neurons; calculating the shortest distance and choosing the neuron-winner; setting the weights of the hidden layer neurons associated with the winner and its neighbors; calculating the average sum of the least distances; checking the termination condition.)</p>
      <p>Second phase (training of the output layer) (steps 7-13). The second phase computes the weights of the output layer $w^{(2)}$ and consists of the following blocks (Figure 2).</p>
      <p>7. Initialization. The learning iteration number is $n = 0$, and the weights are initialized by a uniform distribution on the interval $(0,1)$ or $[-0.5, 0.5]$:
$$w_{ij}^{(2)}(n) \sim U(0,1) \text{ or } U[-0.5, 0.5], \quad i \in \overline{1, N^{(1)}}, \quad j \in \overline{1, N_y}, \quad (8)$$
where $N^{(1)}$ is the number of neurons in the hidden layer and $N_y$ is the length of the sample $d_\mu$. The training set is $\{(x_\mu, d_\mu) \mid \mu \in \overline{1, P}\}$, where $x_\mu$ is the $\mu$-th training input vector, $d_\mu$ is the $\mu$-th training output vector, and $P$ is the training set power. The initial shortest distance is $\bar{d}(0) = 0$.</p>
      <p>8. Calculating the distance to all hidden neurons. The distance $d_{\mu i}$ from the $\mu$-th input sample to each $i$-th neuron is determined by the formula
$$d_{\mu i} = \sqrt{\sum_{k=1}^{N_x} (x_{\mu k} - w_{ki}^{(1)})^2}, \quad \mu \in \overline{1, P}, \quad i \in \overline{1, N^{(1)}}, \quad (9)$$
where $w_{ki}^{(1)}$ is the pretrained connection weight from the $k$-th element of the input sample to the $i$-th neuron.</p>
      <p>9. Calculating the shortest distance and choosing the neuron-winner $i_\mu^*$ for which the distance is shortest:
$$i_\mu^* = \arg\min_i d_{\mu i}, \quad \mu \in \overline{1, P}. \quad (10)$$</p>
      <p>10. Calculating the distance to all output neurons. The distance $d_\mu$ from the neuron-winner $i_\mu^*$ to the $\mu$-th output sample is determined by the formula
$$d_\mu = \sqrt{\sum_{j=1}^{N_y} (d_{\mu j} - w_{i_\mu^* j}^{(2)}(n))^2}, \quad \mu \in \overline{1, P}, \quad (11)$$
where $w_{i_\mu^* j}^{(2)}(n)$ is the weight of the connection from the winner neuron $i_\mu^*$ to the $j$-th element of the output sample at time $n$.</p>
      <p>11. Setting the weights of the output layer neurons associated with the neuron-winner $i_\mu^*$ and its neighbors based on the k-means rule:
$$w_{ij}^{(2)}(n+1) = \frac{\sum_{\mu=1}^{P} h(i, i_\mu^*) \, d_{\mu j}}{\sum_{\mu=1}^{P} h(i, i_\mu^*)}, \quad i \in \overline{1, N^{(1)}}, \quad j \in \overline{1, N_y}, \quad (12)$$
where $h(i, i^*)$ is the rectangular topological neighborhood function.</p>
      <p>12. Calculating the average sum of the shortest distances $\bar{d}(n+1)$.</p>
      <p>13. Checking the termination condition: if $|\bar{d}(n+1) - \bar{d}(n)| \le \varepsilon$, then finish; else $n = n + 1$ and go to step 8. (13)</p>
      <p>(Figure 2 shows the flowchart of the second phase: initialization of the neuron weights of the output layer; calculating the distance to all hidden neurons; calculating the shortest distance and choosing the neuron-winner; calculating the distance to all output neurons; setting the weights of the output layer neurons associated with the neuron-winner and its neighbors; calculating the average sum of the least distances; checking the termination condition; output of the weights.)</p>
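      <p>A compact sketch of both training phases under the rectangular neighborhood $h(i, i^*)$ (Python with NumPy). This is a simplified reading of steps 1-13, not the authors' reference implementation; the second phase is run as a single pass because, with the hidden layer frozen, the winners do not change between iterations.</p>
      <preformat>
import numpy as np

def train_focpnn(X, D, n1, eps=1e-6, max_iter=100, seed=0):
    """X: (P, N_x) input samples; D: (P, N_y) output samples; n1 = N(1) hidden neurons."""
    rng = np.random.default_rng(seed)
    P, Nx = X.shape
    W1 = rng.uniform(0.0, 1.0, (Nx, n1))               # step 1: init hidden weights
    d_prev = 0.0
    for _ in range(max_iter):                          # first phase, steps 2-6
        dist = np.linalg.norm(X[:, :, None] - W1[None, :, :], axis=1)   # (P, n1)
        winners = dist.argmin(axis=1)                  # step 3: neuron-winner per sample
        for i in range(n1):                            # step 4: k-means update of W1
            members = X[winners == i]
            if len(members):
                W1[:, i] = members.mean(axis=0)
        d_mean = dist[np.arange(P), winners].mean()    # step 5: average shortest distance
        if eps >= abs(d_mean - d_prev):                # step 6: termination condition
            break
        d_prev = d_mean
    W2 = rng.uniform(0.0, 1.0, (n1, D.shape[1]))       # step 7: init output weights
    dist = np.linalg.norm(X[:, :, None] - W1[None, :, :], axis=1)
    winners = dist.argmin(axis=1)                      # steps 8-9 with frozen W1
    for i in range(n1):                                # step 11: k-means update of W2
        members = D[winners == i]
        if len(members):
            W2[i] = members.mean(axis=0)
    return W1, W2
      </preformat>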
    </sec>
    <sec id="sec-8">
      <title>4.5. Algorithm for training the neural network model in batch mode for implementation on a GPU</title>
      <p>For the proposed method of training FOCPNN on the audit data example, this section examines an algorithm for implementation on a GPU using CUDA parallel processing technology.</p>
      <p>The first phase (training the hidden layer). The first phase, based on formulas (1)-(7), is shown in Figure 3. The flowchart operates as follows.</p>
      <p>Step 1 – The operator enters the length $N_x$ of the sample $x_\mu$, the length $N_y$ of the sample $d_\mu$, the number of neurons in the hidden layer $N^{(1)}$, the power of the training set $P$, and the training set $\{x_\mu \mid \mu \in \overline{1, P}\}$.</p>
      <p>Step 2 – Initialization by a uniform distribution over the interval $(0,1)$ or $[-0.5, 0.5]$ of the weights $w_{ki}^{(1)}(n)$, $k \in \overline{1, N_x}$, $i \in \overline{1, N^{(1)}}$.</p>
      <p>Step 3 – Calculation of the distances to all hidden neurons of the ANN using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_{\mu i}$ from the $\mu$-th input sample to the $i$-th neuron.</p>
      <p>Step 4 – Computation, based on shortest-distance reduction, of the neurons with the shortest distance, using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. The result of the work of each block is a neuron-winner $i_\mu^*$ with the smallest distance $d_{\mu i}$.</p>
      <p>Step 5 – Setting the weights of the hidden layer neurons associated with the neuron-winner $i_\mu^*$ and its neighbors, based on reduction, using $N_x \cdot N^{(1)} \cdot P$ threads on the GPU, which are grouped into $N_x \cdot N^{(1)}$ blocks. The result of the work of each block is the weight $w_{ki}^{(1)}(n+1)$.</p>
      <p>Step 6 – Calculation, based on reduction, of the average sum of the shortest distances using $P$ threads on the GPU, which are grouped into one block. The result of the block is the average sum of the smallest distances $\bar{d}(n+1)$.</p>
      <p>Step 7 – If the average sums of the smallest distances of neighboring iterations are close, $|\bar{d}(n+1) - \bar{d}(n)| \le \varepsilon$, then finish; else increase the iteration number, $n = n + 1$, and go to step 3.</p>
      <p>Step 8 – Recording of the weights $w_{ki}^{(1)}(n+1)$ in the database.</p>
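      <p>As a CPU analogue of Steps 3 and 4 of the first phase, the following NumPy sketch computes the whole $P \times N^{(1)}$ distance matrix at once; each matrix element corresponds to one of the $P \cdot N^{(1)}$ CUDA threads, and the row-wise argmin plays the role of the per-block shortest-distance reduction (illustrative only, not the CUDA kernel itself):</p>
      <preformat>
import numpy as np

def distances_and_winners(X, W1):
    """X: (P, N_x) input samples; W1: (N_x, N1) hidden weights."""
    d = np.sqrt(((X[:, :, None] - W1[None, :, :]) ** 2).sum(axis=1))  # (P, N1) matrix
    return d, d.argmin(axis=1)        # neuron-winner i* for every sample mu
      </preformat>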
      <p>The second phase (training of the output layer). The second phase, based on formulas (8)-(13), is shown in Figure 4. The flowchart operates as follows.</p>
      <p>Step 1 – The operator enters the length $N_x$ of the sample $x_\mu$, the length $N_y$ of the sample $d_\mu$, the number of neurons in the hidden layer $N^{(1)}$, the power of the training set $P$, and the training set $\{(x_\mu, d_\mu) \mid \mu \in \overline{1, P}\}$.</p>
      <p>Step 2 – Initialization by a uniform distribution over the interval $(0,1)$ or $[-0.5, 0.5]$ of the weights $w_{ij}^{(2)}(n)$, $i \in \overline{1, N^{(1)}}$, $j \in \overline{1, N_y}$.</p>
      <p>Step 3 – Calculation of the distances to all hidden neurons of the ANN using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_{\mu i}$ from the $\mu$-th input sample to the $i$-th neuron.</p>
      <p>Step 4 – Computation, based on shortest-distance reduction, of the neurons with the shortest distance, using $P \cdot N^{(1)}$ threads on the GPU, which are grouped into $P$ blocks. The result of the work of each block is a neuron-winner $i_\mu^*$ with the smallest distance $d_{\mu i}$.</p>
      <p>Step 5 – Calculating the distances from the neuron-winner $i_\mu^*$ to the $\mu$-th output sample using $P \cdot N_y$ threads on the GPU, which are grouped into $P$ blocks. Each thread calculates the distance $d_\mu$ from the neuron-winner $i_\mu^*$ to the $\mu$-th output sample.</p>
      <p>Step 6 – Setting the weights of the output layer neurons associated with the neuron-winner $i_\mu^*$ and its neighbors, based on reduction, using $N^{(1)} \cdot N_y \cdot P$ threads on the GPU, which are grouped into $N^{(1)} \cdot N_y$ blocks. The result of the work of each block is the weight $w_{ij}^{(2)}(n+1)$.</p>
      <p>Step 7 – Calculation, based on reduction, of the average sum of the shortest distances using $P$ threads on the GPU, which are grouped into one block. The result of the block is the average sum of the smallest distances $\bar{d}(n+1)$.</p>
      <p>Step 8 – If the average sums of the smallest distances of neighboring iterations are close, $|\bar{d}(n+1) - \bar{d}(n)| \le \varepsilon$, then finish; else increase the iteration number, $n = n + 1$, and go to step 3.</p>
      <p>Step 9 – Recording of the weights $w_{ij}^{(2)}(n+1)$ in the database.</p>
    </sec>
    <sec id="sec-9">
      <title>4.6. Numerical research</title>
      <p>The results of the comparison of the proposed method using the GPU and of the traditional FOCPNN training method are presented in Table 3.</p>
      <p>The evaluation of the computational complexity of the proposed method using the GPU and of the traditional FOCPNN training method was based on the number of distance calculations, the computation of which is the most time-consuming part of the method. Here $n_1^{max}$ is the maximum number of iterations of the first training phase, $n_2^{max}$ is the maximum number of iterations of the second training phase, $N^{(1)}$ is the number of neurons in the hidden layer, and $P$ is the power of the training set.</p>
      <p>A comparison of the computational complexity for $P = 1000$, $n_1^{max} = 100$, $n_2^{max} = 100$, $N^{(1)} = 100$ showed a reduction of the computation time by a factor of 100 000 in comparison with the traditional method.</p>
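      <p>The quoted factor follows directly from the product $P \cdot N^{(1)}$; a one-line check under the stated parameters (Python):</p>
      <preformat>
P, N1 = 1000, 100
print(P * N1)   # 100000, i.e. the reported reduction factor of 100 000
      </preformat>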
      <p>The sampling rate is inversely proportional to the quantization step of the audit data and depends on the audit period and level. Increasing the sampling rate of data acquisition leads to a directly proportional increase in the power of the training set, which increases the computational complexity by the same proportionality coefficient in the sequential learning mode, whereas in the parallel mode the speed does not decrease significantly.</p>
      <p>(Table 3 compares the two methods by feature and computational complexity; its cell values were lost in extraction.)</p>
    </sec>
    <sec id="sec-10">
      <title>4.7. Discussion</title>
      <p>The traditional FOCPNN learning method does not support batch mode, which increases computational complexity (Table 3). The proposed method eliminates this flaw and allows an approximate increase of the learning rate by a factor of $P \cdot N^{(1)}$.</p>
    </sec>
    <sec id="sec-11">
      <title>4.8. Conclusion</title>
      <p>1. The urgent task of increasing the effectiveness of audit in the context of large volumes of analyzed data and limited verification time was solved by automating the formation of generalized features of audit sets and their mapping by means of a forward-only counterpropagation neural network.</p>
      <p>2. To increase the learning rate of the forward-only counterpropagation neural network, a method based on the k-means rule for training in batch mode was developed. The proposed method increases the learning rate approximately by a factor of $P \cdot N^{(1)}$, where $N^{(1)}$ is the number of neurons in the hidden layer and $P$ is the power of the learning set.</p>
      <p>3. A learning algorithm based on k-means was created, intended for implementation on a GPU using CUDA technology.</p>
      <p>4. The proposed training method based on the k-means rule can be used to intellectualize the audit DSS.</p>
      <p>Prospects for further research include the study of the proposed method on a wide class of artificial intelligence tasks, as well as the creation of a method for mapping audit features to solve audit problems.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>5. References</title>
      <ref id="ref1"><mixed-citation>[1] The World Bank. World Development Report 2016: Digital Dividends, 2016. URL: https://www.worldbank.org/en/publication/wdr2016</mixed-citation></ref>
      <ref id="ref2"><mixed-citation>[2] T.V. Neskorodieva. Postanovka elementarnykh zadach audytu peredumovy polozhen bukhhalterskoho obliku v informatsiinii tekhnolohii systemy pidtrymky rishen (Formulation of elementary tasks of audit subsystems of accounting provisions precondition IT DSS). Modern Information Systems, Vol. 3, Issue 1, (2019): 48-54. DOI: 10.20998/2522-9052.2019.1.08 (in Russian).</mixed-citation></ref>
      <ref id="ref3"><mixed-citation>[3] T.V. Neskorodieva. Formalization method of the first level variables in the audit systems IT. Matematychni mashyny i systemy (Mathematical Machines and Systems), No. 4, (2019): 79-86. ISSN 1028-9763. DOI: 10.34121/1028-9763-2019-4-79-86</mixed-citation></ref>
      <ref id="ref4"><mixed-citation>[4] R. Heht-Nielsen. Counterpropagating networks. Proc. Int. Conf. on Neural Networks, New York, NY, Vol. 2, (1987): 19-32.</mixed-citation></ref>
      <ref id="ref5"><mixed-citation>[5] R. Heht-Nielsen. Application counterpropagating networks. Neural Networks, Vol. 1, (1988): 19-32. DOI: 10.1016/0893-6080(88)90015-9</mixed-citation></ref>
      <ref id="ref6"><mixed-citation>[6] S.N. Sivanandam, S. Sumathi, S.N. Deepa. Introduction to Neural Networks using Matlab 6.0. New Delhi: The McGraw-Hill Comp., Inc., 2006. DOI: 10.1007/978-3-540-35781-0</mixed-citation></ref>
      <ref id="ref7"><mixed-citation>[7] R.M. Neal. Connectionist learning of belief networks. Artificial Intelligence, Vol. 56, (1992): 71-113. DOI: 10.1016/0004-3702(92)90065-6</mixed-citation></ref>
      <ref id="ref8"><mixed-citation>[8] P. Dayan, B.J. Frey, R.M. Neal. The Helmholtz machine. Neural Computation, Vol. 7, (1995): 889-904. DOI: 10.1162/neco.1995.7.5.889</mixed-citation></ref>
      <ref id="ref9"><mixed-citation>[9] G.E. Hinton. The 'wake-sleep' algorithm for unsupervised neural networks. Science, Vol. 268, (1995): 1158-1161. DOI: 10.1126/science.7761831</mixed-citation></ref>
      <ref id="ref10"><mixed-citation>[10] T. Kohonen. Self-Organizing Maps. Berlin: Springer-Verlag, 1995. DOI: 10.1007/978-3-642-97610-0</mixed-citation></ref>
      <ref id="ref11"><mixed-citation>[11] M. Martinez, S. Berkovich, K. Schulten. «Neural-gas» network for vector quantization and its application to time series prediction. IEEE Trans. on Neural Networks, Vol. 4, (1993): 558-569. DOI: 10.1109/72.238311</mixed-citation></ref>
      <ref id="ref12"><mixed-citation>[12] S. Haykin. Neural Networks and Learning Machines. Upper Saddle River, NJ: Pearson Education, Inc., 2009.</mixed-citation></ref>
      <ref id="ref13"><mixed-citation>[13] T.D. Sanger. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks, Vol. 2, (1989): 459-473. DOI: 10.1016/0893-6080(89)90044-0</mixed-citation></ref>
      <ref id="ref14"><mixed-citation>[14] M.S. Bartlett, J.R. Movellan, T.J. Sejnowski. Face Recognition by Independent Component Analysis. IEEE Trans. on Neural Networks, Vol. 13, Issue 6, (2002): 1450-1464. DOI: 10.1109/tnn.2002.804287</mixed-citation></ref>
      <ref id="ref15"><mixed-citation>[15] B. Draper, K. Baek, M.S. Bartlett, J.R. Beveridge. Recognizing Faces with PCA and ICA. Computer Vision and Image Understanding (Special Issue on Face Recognition), Vol. 91, Issues 1-2, (2003): 115-137. DOI: 10.1016/s1077-3142(03)00077-8</mixed-citation></ref>
      <ref id="ref16"><mixed-citation>[16] J.S. Albus. A new approach to manipulator control: the cerebellar model articulation controller. Journal of Dynamic Systems, Measurement, and Control, Trans. ASME, Vol. 97, Issue 6, (1975): 228-233. DOI: 10.1115/1.3426922</mixed-citation></ref>
      <ref id="ref17"><mixed-citation>[17] T.D. Chiueh, R.M. Goodman. Recurrent correlation associative memories. IEEE Transactions on Neural Networks, Vol. 2, Issue 2, (1991): 275-284. DOI: 10.1109/72.80338</mixed-citation></ref>
      <ref id="ref18"><mixed-citation>[18] T.D. Chiueh, R.M. Goodman. High capacity exponential associative memories. Int. Conf. on Neural Networks, IEEE, San Diego, CA, Vol. 1, (1988): 153-160. DOI: 10.1109/icnn.1988.23843</mixed-citation></ref>
      <ref id="ref19"><mixed-citation>[19] J.J. Hopfield. Neural networks and physical systems with emergent collective computation abilities. Proc. Nat. Academy of Sciences USA, Vol. 79, (1982): 2554-2558. DOI: 10.1073/pnas.79.8.2554</mixed-citation></ref>
      <ref id="ref20"><mixed-citation>[20] J.J. Hopfield, D.W. Tank. Neural computation of decisions in optimization problems. Biological Cybernetics, Vol. 52, (1985): 141-152.</mixed-citation></ref>
      <ref id="ref21"><mixed-citation>[21] Y. Akiyama, A. Yamashita, M. Kajiura, H. Aiso. Combinational optimization with Gaussian machines. International 1989 Joint Conference on Neural Networks, IEEE, Washington, DC, USA, Vol. 1, (1989): 533-540. DOI: 10.1109/ijcnn.1989.118630</mixed-citation></ref>
      <ref id="ref22"><mixed-citation>[22] P.S. Neelakanta, D. DeGroff. Neural Network Modelling: Statistical Mechanics and Cybernetic Perspectives. Boca Raton, Florida: CRC Press, 1994.</mixed-citation></ref>
      <ref id="ref23"><mixed-citation>[23] B. Kosko. Bidirectional associative memories. IEEE Trans. on Systems, Man and Cybernetics, Vol. 18, (1988): 49-60. DOI: 10.1109/21.87054</mixed-citation></ref>
      <ref id="ref24"><mixed-citation>[24] J.A. Anderson, J.W. Silverstein, S.A. Ritz, R.S. Jones. Distinctive features, categorical perception and probability learning: Some applications of a neural model. Psychological Review, Vol. 84, (1977): 413-451. DOI: 10.1037/0033-295x.84.5.413</mixed-citation></ref>
      <ref id="ref25"><mixed-citation>[25] J.A. Anderson. Cognitive and psychological computation with neural models. IEEE Trans. on Systems, Man and Cybernetics, Vol. 13, (1983): 799-815. DOI: 10.1109/tsmc.1983.6313074</mixed-citation></ref>
      <ref id="ref26"><mixed-citation>[26] R.P. Lippmann. An introduction to computing with neural nets. IEEE Acoustics, Speech and Signal Processing Magazine, April (1987): 4-22. DOI: 10.1109/massp.1987.1165576</mixed-citation></ref>
      <ref id="ref27"><mixed-citation>[27] A. Fischer, C. Igel. Training Restricted Boltzmann Machines: An Introduction. Pattern Recognition, Vol. 47, (2014): 25-39. DOI: 10.1016/j.patcog.2013.05.025</mixed-citation></ref>
      <ref id="ref28"><mixed-citation>[28] N. Srivastava, R.R. Salakhutdinov. Multimodal Learning with Deep Boltzmann Machines. Journal of Machine Learning Research, Vol. 15, (2014): 2949-2980.</mixed-citation></ref>
      <ref id="ref29"><mixed-citation>[29] R.R. Salakhutdinov, G.E. Hinton. Deep Boltzmann machines. Journal of Machine Learning Research, Vol. 5, (2009): 448-455.</mixed-citation></ref>
      <ref id="ref30"><mixed-citation>[30] R.R. Salakhutdinov, H. Larochelle. Efficient Learning of Deep Boltzmann Machines. Journal of Machine Learning Research, Vol. 9, (2010): 693-700.</mixed-citation></ref>
      <ref id="ref31"><mixed-citation>[31] G.A. Carpenter, S. Grossberg. ART-2: self-organization of stable category recognition codes for analog input patterns. Applied Optics, Vol. 26, (1987): 4919-4930. DOI: 10.1364/ao.26.004919</mixed-citation></ref>
      <ref id="ref32"><mixed-citation>[32] G.A. Carpenter, S. Grossberg. ART-3: self-organization of stable category recognition codes for analog input patterns. Neural Networks, Vol. 3, (1990): 129-152. DOI: 10.1016/0893-6080(90)90085-y</mixed-citation></ref>
    </ref-list>
  </back>
</article>