<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Neural Network Modeling Method of Transformations Data of Audit Production with Returnable Waste</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Tatiana Neskorodieva</string-name>
          <email>t.neskorodieva@donnu.edu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Eugene Fedorov</string-name>
          <email>fedorovee75@ukr.net</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksii Smirnov</string-name>
          <email>Dr.smirnovoa@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Pavlo Rymar</string-name>
          <email>p.rymar@donnu.edu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Central Ukrainian National Technical University</institution>
          ,
          <addr-line>avenue University, 8, Kropivnitskiy, 25006</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Cherkasy State Technological University</institution>
          ,
          <addr-line>Shevchenko blvd., 460, Cherkasy, 18006</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Vasyl' Stus Donetsk National University</institution>
          ,
          <addr-line>600-richchia str., 21, Vinnytsia, 21021</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Currently, the analytical procedures used during an audit are based on data mining techniques. The object of the research is the process of content auditing of production with returnable waste and intermediate products. The aim of the work is to reduce the risk of incorrect mapping of datasets in an audit DSS by means of a method of neural network modeling of audit data transformations for production with recyclable waste and intermediates. This will reduce the risk of misclassifying the validated data. Audit dataset transformations of the "Completeness" prerequisite are presented as sequences of data set mappings of consecutive operations. A method of parametric identification of the MRMLP model was further developed; it takes into account the number of training iterations and combines the Gaussian and Cauchy distributions, which increases forecast accuracy: in the initial iterations the entire search space is explored, while in the final iterations the search becomes directed. Software implementing the proposed methods was developed in the MATLAB package and investigated on data on the release of raw materials into production and the posting of finished products, with a two-year sampling depth and daily time intervals. The experiments confirmed the operability of the developed software and allow recommending it for practical use in the automated analysis subsystem of an audit DSS for checking mappings of datasets of the release of raw materials into production and the output of products.</p>
      </abstract>
      <kwd-group>
<kwd>production audit</kwd>
        <kwd>returnable waste</kwd>
        <kwd>intermediate products</kwd>
        <kwd>mapping by neural network</kwd>
        <kwd>modified recurrent multilayered perceptron</kwd>
        <kwd>metaheuristics</kwd>
        <kwd>DSS</kwd>
        <kwd>risk of wrong mapping of data sets</kwd>
        <kwd>risk of the validated data misclassification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>In the development of international and national economies and the IT industry, the following basic tendencies can be distinguished: the realization of digital transformations, the formation of the digital economy, the globalization of socio-economic processes, and the IT accompanying them [1]. These processes result in the emergence of global, multilevel hierarchical structures of heterogeneous, multivariable, multifunctional connections, interactions and cooperation of managed objects (objects of audit). Large volumes of information about them have been accumulated in accounting, management and audit information systems.</p>
<p>Consequently, a key scientific and technical issue for modern information technologies in the financial and economic sphere of Ukraine is the formation of a methodology for planning and creating decision support systems (DSS) for the audit of enterprises that use IT. Such systems rely on the automated analysis of large volumes of data about the financial and economic activity and states of enterprises with a multilevel hierarchical structure of heterogeneous, multivariable, multifunctional connections, interrelations and cooperation of audit objects, with the purpose of expanding functional possibilities and increasing the efficiency and universality of IT audit [2, 3].</p>
<p>Currently, the analytical procedures used during an audit are based on data mining techniques [4-6]. An automated audit DSS automatically forms recommended decisions based on the results of automated data analysis, which improves the quality of the audit process and reduces the risk of incorrect mapping of datasets [7, 8]. Unlike the traditional approach, computer technologies for data analysis in the audit system accelerate the audit process and improve its accuracy, which is extremely critical given the large number of associated tasks at the lower and middle levels and the number of indexes and observations in every task [9].</p>
<p>When developing a decision-making system in audit based on data mining technologies, three methods have been created: classifying variables, forming analysis sets, and mapping analysis sets.</p>
<p>The peculiarity of the methodology for classifying indicators is that qualitatively different (by semantic content) variables are classified: numerological, linguistic, quantitative, and logical. The essence of the second technique is determined by the qualitative meaning of the indicators: sets are formed with the corresponding semantic content: document numbers, names of indicators, quantitative estimates of the values of indicators, and logical indicators.</p>
<p>The third technique maps the formed sets of the same type onto each other to determine equivalence in the following senses: numerological, linguistic, quantitative, and logical.</p>
<p>Neural networks are used for modeling audit data transformations of production.</p>
<p>The following neural networks are most often used for mapping audit indicators:
– the Elman neural network (ENN), or simple recurrent network (SRN) [10, 11], a recurrent two-layer network constructed on the basis of the MLP. Its advantage is a simpler architecture and a higher training speed than gated, reservoir and bidirectional networks; its disadvantage is lower forecast accuracy than gated, reservoir and bidirectional networks;</p>
      <p>– the bidirectional recurrent neural network (BRNN) [12, 13], a recurrent two-layer network constructed from two Elman networks. Its advantage is higher forecast accuracy than an ordinary Elman network; its disadvantages are a more complex determination of the architecture and a lower training speed than an ordinary Elman network;</p>
      <p>– long short-term memory (LSTM) [14, 15], a recurrent network constructed from memory units (containing one or more cells) with input, output and forget gates (FIR filters). Its advantage is higher forecast accuracy than an ordinary Elman network; its disadvantages are a more complex determination of the architecture and a lower training speed than an ordinary Elman network;</p>
      <p>– bidirectional long short-term memory (BLSTM) [16, 17], a recurrent network constructed from two LSTM networks. Its advantage is higher forecast accuracy than ordinary LSTM; its disadvantages are a more complex determination of the architecture and a lower training speed than ordinary LSTM;</p>
      <p>– the gated recurrent unit (GRU) [18, 19], a recurrent two-layer network constructed from hidden units with reset and update gates (FIR filters). Its advantage is higher forecast accuracy than an ordinary Elman network; its disadvantages are a more complex determination of the architecture and a lower training speed than an ordinary Elman network;</p>
      <p>– the echo state network (ESN) [20], a recurrent two-layer network constructed on the basis of a reservoir (a layer of interconnected, not fully connected neurons). Its advantage is higher forecast accuracy than an ordinary Elman network; its disadvantages are a more complex determination of the architecture and a lower training speed than an ordinary Elman network;</p>
      <p>– the liquid state machine (LSM) [21], a recurrent two-layer network constructed on the basis of a reservoir (a layer of interconnected, not fully connected spiking neurons) and an MLP. Its advantage is higher forecast accuracy than an ordinary Elman network; its disadvantages are a more complex determination of the architecture and a lower training speed than an ordinary Elman network.</p>
<p>Thus, none of these networks meets all the criteria.</p>
<p>For accelerating training and increasing the accuracy of the production audit data transformation model, metaheuristics (modern heuristics) are now used [22]. A metaheuristic expands the opportunities of heuristics by combining heuristic methods on the basis of a high-level strategy [23].</p>
<p>Existing metaheuristics possess one or more of the following disadvantages:
– there is only an abstract description of the method, or the description is focused on solving only one specific task [24];
– the influence of the iteration number on the solution search process is not considered [25];
– the convergence of the method is not guaranteed [26];
– there is no possibility to use non-binary potential solutions [27];
– the procedure for determining parameter values is not automated [28];
– there is no possibility to solve problems of constrained optimization [29];
– insufficient accuracy of the method [30].</p>
<p>In this regard, there is a problem of creating effective metaheuristic optimization methods.</p>
<p>Thus, it is topical to create a neural network that considers the functional structure of production with returnable and non-returnable waste and intermediate products, and that learns on the basis of effective metaheuristics.</p>
<p>The aim of the work is to reduce the risk of incorrect mapping of datasets in the audit DSS by means of a method of neural network modeling of audit data transformations of production with recyclable waste and intermediates.</p>
<p>To achieve this objective, it is necessary to solve the following tasks:
– propose a structural model of audit data transformations of production;
– propose a neural network model of audit data transformations of production based on a recurrent multilayer perceptron;</p>
      <p>– select a criterion for evaluating the efficiency of the neural network model of production audit data transformation;</p>
      <p>– propose a method of parametric identification of the neural network model of production audit data transformation based on back propagation through time;</p>
      <p>– propose a method of parametric identification of the neural network model of production audit data transformation based on cross entropy and stochastic extremum search with training on vectors of normal distribution;
– perform numerical research.</p>
<p>The problem formulation. Let the training set S = {(x_µ, d_µ, d_µ^(1), ..., d_µ^(H))}, µ ∈ 1, P, be given for the model of production audit data transformations, where x_µ is the µ-th training input vector, d_µ is the µ-th training output vector of finished goods, and d_µ^(k) is the µ-th training output vector of unreturnable waste received after the k-th layer of semi-product production.</p>
<p>Then the problem of increasing the accuracy of production audit data transformations by the model of the modified recurrent multilayer perceptron (MRMLP) g(x, w), where x is an input signal and w is the parameter vector, is represented as the problem of finding a model parameter vector w* that satisfies the criterion</p>
      <p>F = (1/P) ∑_{µ=1..P} (g(x_µ, w*) − (d_µ, d_µ^(1), ..., d_µ^(H)))² → min. (1)</p>
    </sec>
    <sec id="sec-2">
<title>2. Materials and methods</title>
    </sec>
    <sec id="sec-3">
<title>2.1. Formalization of audit subject area data subelements transformations of the prerequisite "Completeness"</title>
<p>Audit dataset transformations of the "Completeness" prerequisite will be presented as sequences of data set mappings of consecutive operations</p>
      <p>Z_i1 → Z_i2 → ... → Z_im → ... → Z_iM, i1 ≺ ... ≺ im ≺ ... ≺ iM, (i1, ..., im, ..., iM) ∈ A(I), I = 1, I, (2)</p>
      <p>where Z is a reporting data set, (i1, ..., im, ..., iM) is a combination of consecutive operation types of the set I = 1, I, and A(I) is the set of possible combinations on the set I = 1, I.</p>
<p>Therefore, the "Completeness" prerequisite audit will be presented as checking the transformations of subelements of the data domain in the form of sequences of mappings of the splittings of the data elements of the sequences</p>
      <p>ℜ(Z_i1) → ℜ(Z_i2) → ... → ℜ(Z_im) → ... → ℜ(Z_iM), i1 ≺ ... ≺ im ≺ ... ≺ iM, (i1, ..., im, ..., iM) ∈ A(I), (3)</p>
      <p>where ℜ(Z) is a splitting of the set Z.</p>
      <p>The set of possible combinations of consecutive operation types (i1, ..., im, ..., iM) defined in (3) includes checking in the direct and in the opposite direction.</p>
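<p>As an illustration, the check of a sequence of set mappings in the direct and in the opposite direction can be sketched in Python (a minimal sketch; the record identifiers and the mapping contents are hypothetical):</p>

```python
def check_completeness(sets, mappings):
    """Check a chain Z_i1 -> Z_i2 -> ... of set mappings for "Completeness".

    sets: list of sets [Z_i1, ..., Z_iM]; mappings: list of dicts, each
    mapping elements of Z_im to elements of Z_i(m+1).
    """
    for z_cur, z_next, m in zip(sets, sets[1:], mappings):
        # direct direction: every element of Z_im must be mapped into Z_i(m+1)
        if not all(x in m and m[x] in z_next for x in z_cur):
            return False
        # opposite direction: every element of Z_i(m+1) must have a preimage
        if set(m.values()) != z_next:
            return False
    return True

# Hypothetical chain: raw materials receipt -> release into production
# -> receipt of finished goods
z1 = {"rm-001", "rm-002"}
z2 = {"rel-101", "rel-102"}
z3 = {"fg-201", "fg-202"}
maps = [{"rm-001": "rel-101", "rm-002": "rel-102"},
        {"rel-101": "fg-201", "rel-102": "fg-202"}]
print(check_completeness([z1, z2, z3], maps))  # complete chain -> True
```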
<p>The model of the transformation of subelements of the "Completeness" prerequisite audit subject domain will be formed using the example of the direct material costs audit. Models of their conversions can be presented in the form of graphs in which every vertex corresponds to a subelement and every edge corresponds to a mapping that describes the interrelation between the corresponding subelements.</p>
      <p>For this purpose, we use a formalization of the set of direct material costs in the form of the graph G^(1) = (Z^(1), R^(1)) (fig. 1), where the vertices are the accounts on which these current assets are recorded and the edges are the operations as a result of which they are converted. Then the model of subelement conversion of the audit data domain of the "Completeness" prerequisite under direct full check ((i1, ..., im, ..., iM) = (1, 2, 3, 4)) represents mappings of subsets of raw materials receipt data Z_ℜ^(r1)(i1) ∈ ℜ(Z_i1) into subsets of data on the release of raw materials into production Z_ℜ^(r2)(i2) ∈ ℜ(Z_i2), then into subsets of production data Z_ℜ^(r3)(i3) ∈ ℜ(Z_i3) and of finished goods receipt Z_ℜ^(r4)(i4) ∈ ℜ(Z_i4), T ∈ [t_jm, T_m], j = 1, J_m, m = 1, M.</p>
<p>In the specific case where the splitting of sets was carried out on the basis of logical conditions characterizing belonging to one of the accounting item subspecies, the model of subelement conversion of the audit data domain of the "Completeness" prerequisite under direct full check is the set of sequences of mappings of the sets of supplier settlement operations by supplier type into subsets of these operations by raw material type, then into subsets of these operations by product type and finished goods type.</p>
    </sec>
    <sec id="sec-4">
<title>2.2. Choosing a neural network model for mapping audit sets</title>
<p>The unit diagram of the modified recurrent multilayer perceptron (MRMLP) model (fig. 2) consists of full-connected recurrent layers of semi-product production (their neurons are shown in solid white), not full-connected non-recurrent layers of unreturnable waste (their neurons are shown in solid black), and a not full-connected non-recurrent layer of finished goods (its neurons are shown in solid gray).</p>
      <p>(Fig. 2 labels: raw materials receipt; release of raw materials into production; receipt of finished goods; write-off of the production prime cost of finished goods; subsets Z_ℜ^(1)(i1), ..., Z_ℜ^(R1)(i1); Z_ℜ^(1)(i2), ..., Z_ℜ^(R2)(i2); Z_ℜ^(1)(i3), ..., Z_ℜ^(R3)(i3); Z_ℜ^(1)(i4), ..., Z_ℜ^(R4)(i4).)</p>
<p>The MRMLP model, executing the mapping of each input sample of raw materials x = (x_1, ..., x_{N^(0)}) into output samples of finished goods y = (y_1, ..., y_{N^(H)}) and unreturnable waste ȳ^(1) = (ȳ_1^(1), ..., ȳ_{N^(1)}^(1)), ..., ȳ^(H) = (ȳ_1^(H), ..., ȳ_{N^(H)}^(H)), is presented in the form</p>
      <p>y_i^(0)(n) = x_i, i ∈ 1, N^(0),</p>
      <p>s_j^(k)(n) = b_j^(k) + ∑_{i=1..N^(k−1)} w_ij^(k) y_i^(k−1)(n) + ∑_{i=1..N^(k)} w̃_ij^(k) y_i^(k)(n − 1), y_j^(k)(n) = f^(k)(s_j^(k)(n)), j ∈ 1, N^(k), k ∈ 1, H,</p>
      <p>s_j(n) = b_j + w_jj y_j^(H)(n), y_j(n) = f(s_j(n)), j ∈ 1, N^(H),</p>
      <p>s̄_j^(k)(n) = b̄_j^(k) + w̄_jj^(k) y_j^(k)(n), ȳ_j^(k)(n) = f̄^(k)(s̄_j^(k)(n)), j ∈ 1, N^(k), k ∈ 1, H,</p>
      <p>where N^(k) is the number of neurons in the k-th layer of semi-product production and unreturnable waste;
H is the number of layers of semi-product production and unreturnable waste;
N^(0) is the number of neurons of the input layer (raw materials layer);
b_j^(k) is the bias of the j-th neuron of the k-th layer of semi-product production;
b_j is the bias of the j-th neuron of the finished goods layer;
b̄_j^(k) is the bias of the j-th neuron of the k-th unreturnable waste layer;
w_ij^(k) is the connection weight from the i-th neuron of the (k − 1)-th layer of semi-product production to the j-th neuron of the k-th layer of semi-product production;
w̃_ij^(k) is the recurrent connection weight from the i-th neuron to the j-th neuron of the k-th layer of semi-product production (with a one-step delay);
w_jj is the connection weight from the j-th neuron of the H-th layer of semi-product production to the j-th neuron of the finished goods layer;
w̄_jj^(k) is the connection weight from the j-th neuron of the k-th layer of semi-product production to the j-th neuron of the k-th layer of unreturnable waste;
y_j^(k)(n) is the output of the j-th neuron of the k-th layer of semi-product production at time point n;
y_j(n) is the output of the j-th neuron of the finished goods layer at time point n;
ȳ_j^(k)(n) is the output of the j-th neuron of the k-th unreturnable waste layer at time point n;
f^(k) is the activation function of the neurons of the k-th layer of semi-product production;
f is the activation function of the neurons of the finished goods layer;
f̄^(k) is the activation function of the neurons of the k-th unreturnable waste layer.</p>
    </sec>
    <sec id="sec-5">
<title>2.3. Criterion choice for evaluation of neural network model efficiency of data transformations of production audit</title>
<p>In this work, for training the MRMLP model, the objective function is chosen as the selection of those values of the parameter vector w = (w_11^(1), ..., w_{N^(H−1)N^(H)}^(H), w̃_11^(1), ..., w̃_{N^(H)N^(H)}^(H), w̄_11^(1), ..., w̄_{N^(H)N^(H)}^(H), w_11, ..., w_{N^(H)N^(H)}) that deliver a minimum of the mean squared error (the difference between the model output sample and the test sample)</p>
      <p>F = (1/(P N^(H))) ∑_{µ=1..P} ||y_µ − d_µ||² + (1/(H P)) ∑_{k=1..H} (1/N^(k)) ∑_{µ=1..P} ||ȳ_µ^(k) − d_µ^(k)||² → min_w,</p>
      <p>where y_µ, ȳ_µ^(1), ..., ȳ_µ^(H) are the µ-th output samples of the model;
d_µ, d_µ^(1), ..., d_µ^(H) are the µ-th test output samples;
H is the number of hidden layers;
P is the power of the test set.</p>
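<p>The criterion F can be computed directly; a minimal Python sketch (the array shapes and the toy values are hypothetical):</p>

```python
import numpy as np

def criterion(y, d, y_waste, d_waste):
    """F = 1/(P*N^(H)) * sum_mu ||y_mu - d_mu||^2
         + 1/(H*P) * sum_k (1/N^(k)) * sum_mu ||y_mu^(k) - d_mu^(k)||^2.

    y, d: (P, N^(H)) model and test finished-goods samples;
    y_waste, d_waste: lists of H arrays, each of shape (P, N^(k)).
    """
    P, NH = y.shape
    H = len(y_waste)
    f = np.sum((y - d) ** 2) / (P * NH)
    f += sum(np.sum((yk - dk) ** 2) / yk.shape[1]
             for yk, dk in zip(y_waste, d_waste)) / (H * P)
    return f

# Toy check: exact finished goods, waste off by 1 everywhere (P=2, N=3, H=1)
y = d = np.zeros((2, 3))
yw, dw = [np.ones((2, 3))], [np.zeros((2, 3))]
print(criterion(y, d, yw, dw))  # 0 + (6/3)/(1*2) = 1.0
```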
<p>2.4. Method of parametric identification of the data transformation model of production audit based on back propagation through time in sequential mode.</p>
      <p>1. Set the number of training iterations n = 1; initialize, by means of a uniform distribution on the interval (0, 1) or [−0.5, 0.5], the biases b_j^(k)(n), b̄_j^(k)(n), j ∈ 1, N^(k), k ∈ 1, H, and b_j(n), j ∈ 1, N^(H), and the weights w_ij^(k)(n), i ∈ 1, N^(k−1), j ∈ 1, N^(k), k ∈ 1, H; w̃_ij^(k)(n), i ∈ 1, N^(k), j ∈ 1, N^(k), k ∈ 1, H; w_jj(n), j ∈ 1, N^(H); w̄_jj^(k)(n), j ∈ 1, N^(k), k ∈ 1, H.</p>
      <p>2. The training set {(x_µ, d_µ, d_µ^(1), ..., d_µ^(H)) | x_µ ∈ R^{N^(0)}, d_µ ∈ R^{N^(H)}, d_µ^(k) ∈ R^{N^(k)}}, µ ∈ 1, P, is given, where x_µ is the µ-th training input vector of raw materials, d_µ is the µ-th training output vector of finished goods, d_µ^(k) is the µ-th training output vector of unreturnable waste received after the k-th semi-product production layer, and P is the power of the training set. Set the number of the current sample from the training set µ = 1.</p>
      <p>3. Initial calculation of the output signal for each full-connected recurrent hidden layer: y_i^(k)(n − 1) = 0, i ∈ 1, N^(k), k ∈ 1, H.</p>
      <p>4. Calculation of the output signal for each full-connected recurrent layer of semi-product production, considering returnable waste (forward propagation):
y_i^(0)(n) = x_µi,
s_j^(k)(n) = b_j^(k)(n) + ∑_{i=1..N^(k−1)} w_ij^(k)(n) y_i^(k−1)(n) + ∑_{i=1..N^(k)} w̃_ij^(k)(n) y_i^(k)(n − 1),
y_j^(k)(n) = f^(k)(s_j^(k)(n)), j ∈ 1, N^(k), k ∈ 1, H.</p>
      <p>5. Calculation of the output signal for the not full-connected non-recurrent layer of finished goods (forward propagation):
s_j(n) = b_j(n) + w_jj(n) y_j^(H)(n), y_j(n) = f(s_j(n)), j ∈ 1, N^(H).</p>
      <p>6. Calculation of the output signal for each not full-connected non-recurrent layer of unreturnable waste (forward propagation):
s̄_j^(k)(n) = b̄_j^(k)(n) + w̄_jj^(k)(n) y_j^(k)(n), ȳ_j^(k)(n) = f̄^(k)(s̄_j^(k)(n)), j ∈ 1, N^(k), k ∈ 1, H.</p>
      <p>7. Calculation of the error energy of the ANN:
E(n) = (1/2) ∑_{j=1..N^(H)} e_j²(n) + (1/2) ∑_{k=1..H} ∑_{j=1..N^(k)} (ē_j^(k)(n))²,
e_j(n) = y_j(n) − d_µj, ē_j^(k)(n) = ȳ_j^(k)(n) − d_µj^(k).</p>
      <p>8. Adjustment of the synaptic weights based on the generalized delta rule (backward propagation):
b_j^(k)(n + 1) = b_j^(k)(n) − η ∂E(n)/∂b_j^(k)(n), j ∈ 1, N^(k), k ∈ 1, H,
w_ij^(k)(n + 1) = w_ij^(k)(n) − η ∂E(n)/∂w_ij^(k)(n), i ∈ 1, N^(k−1), j ∈ 1, N^(k), k ∈ 1, H,
w̃_ij^(k)(n + 1) = w̃_ij^(k)(n) − η ∂E(n)/∂w̃_ij^(k)(n), i ∈ 1, N^(k), j ∈ 1, N^(k), k ∈ 1, H,
b_j(n + 1) = b_j(n) − η ∂E(n)/∂b_j(n), j ∈ 1, N^(H),
w_jj(n + 1) = w_jj(n) − η ∂E(n)/∂w_jj(n), j ∈ 1, N^(H),
b̄_j^(k)(n + 1) = b̄_j^(k)(n) − η ∂E(n)/∂b̄_j^(k)(n), j ∈ 1, N^(k), k ∈ 1, H,
w̄_jj^(k)(n + 1) = w̄_jj^(k)(n) − η ∂E(n)/∂w̄_jj^(k)(n), j ∈ 1, N^(k), k ∈ 1, H,
where η, 0 &lt; η &lt; 1, is the parameter determining the training speed (with a large η training is faster, but the danger of obtaining an incorrect solution increases). The gradients are
∂E(n)/∂b_j^(k)(n) = g_j^(k)(n),
∂E(n)/∂w_ij^(k)(n) = y_i^(k−1)(n) g_j^(k)(n),
∂E(n)/∂w̃_ij^(k)(n) = y_i^(k)(n − 1) g_j^(k)(n),
∂E(n)/∂b_j(n) = g_j(n),
∂E(n)/∂w_jj(n) = y_j^(H)(n) g_j(n),
∂E(n)/∂b̄_j^(k)(n) = ḡ_j^(k)(n),
∂E(n)/∂w̄_jj^(k)(n) = y_j^(k)(n) ḡ_j^(k)(n),
g_j^(k)(n) = f′^(H)(s_j^(H)(n)) (w_jj(n) g_j(n) + w̄_jj^(H)(n) ḡ_j^(H)(n)) for k = H,
g_j^(k)(n) = f′^(k)(s_j^(k)(n)) (∑_{l=1..N^(k+1)} w_jl^(k+1)(n) g_l^(k+1)(n) + w̄_jj^(k)(n) ḡ_j^(k)(n)) for k &lt; H,
g_j(n) = f′(s_j(n)) e_j(n),
ḡ_j^(k)(n) = f̄′^(k)(s̄_j^(k)(n)) ē_j^(k)(n).</p>
      <p>9. Check of the termination condition.
If n mod P &gt; 0, then µ = µ + 1, n = n + 1, go to step 4.
If n mod P = 0 and (1/P) ∑_{s=1..P} E(n − P + s) &gt; ε, then n = n + 1, go to step 2.
If n mod P = 0 and (1/P) ∑_{s=1..P} E(n − P + s) ≤ ε, training is completed.</p>
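<p>For the finished goods layer, one step of the generalized delta rule (step 8) can be sketched in Python (a minimal sketch; the tanh activation and the toy numbers are assumptions):</p>

```python
import numpy as np

def output_layer_update(y_H, d, b, w, eta=0.1):
    """One delta-rule step for the not full-connected finished goods layer."""
    s = b + w * y_H                 # s_j(n) = b_j(n) + w_jj(n) y_j^(H)(n)
    y = np.tanh(s)                  # y_j(n) = f(s_j(n)), f = tanh assumed
    e = y - d                       # e_j(n) = y_j(n) - d_mu_j
    g = (1.0 - y ** 2) * e          # g_j(n) = f'(s_j(n)) e_j(n)
    # b_j(n+1) = b_j(n) - eta*g_j(n); w_jj(n+1) = w_jj(n) - eta*y_j^(H)(n)*g_j(n)
    return b - eta * g, w - eta * y_H * g, 0.5 * np.sum(e ** 2)

# Repeated sequential updates on one sample should reduce the error energy
b, w = np.zeros(2), np.ones(2)
y_H, d = np.array([0.5, -0.2]), np.array([0.3, 0.1])
energies = []
for _ in range(50):
    b, w, E = output_layer_update(y_H, d, b, w)
    energies.append(E)
print(energies[-1] < energies[0])  # True: E(n) decreases
```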
<p>Disadvantages of this method are the high probability of falling into a local extremum, which reduces training accuracy, and the impossibility of training in batch mode, which reduces training speed. Therefore, an alternative training method based on a metaheuristic is proposed in this work.</p>
    </sec>
    <sec id="sec-6">
<title>2.5. Method of parametric identification of the data transformation model of production audit on the basis of a metaheuristic</title>
<p>The proposed method of parametric identification of the production audit data transformation model is based on the cross-entropy method and stochastic extremum search with training on vectors of normal distribution [30].</p>
<p>The feature of the proposed method is that the number of iterations is taken into account when generating potential solutions, in order to control the convergence speed of the method and the rate of change of the distribution parameters, and to ensure that in the initial iterations the entire search space is explored while in the final iterations the search becomes directed. Besides, not only the Gaussian distribution but also the Cauchy distribution is used, and their contributions depend on the iteration number.</p>
<p>The proposed method consists of the following stages:</p>
      <p>1. Initialization.
1.1. Set the maximum number of iterations N, the population size K, the solution length M (corresponding to the length of the MRMLP model parameter vector), the maximum number of selected best solutions B, and the parameter for generating the vector of scale parameters β, 0 &lt; β &lt; 1.
1.2. Initialization of the vector of location parameters
γ^loc = (γ_1^loc, ..., γ_M^loc), γ_j^loc = x_j^min + (1/2)(x_j^max − x_j^min).
1.3. Initialization of the vector of scale parameters
γ^scale = (γ_1^scale, ..., γ_M^scale), γ_j^scale = β(x_j^max − x_j^min).
1.4. Define the best solution (the best vector of MRMLP model parameters) x*.
2. Set the iteration number n = 1.
3. Creation of the current population of potential solutions P.
3.1. Solution number k = 1, P = ∅.
3.2. Generation of a new potential solution x_k (a vector of MRMLP model parameters)
x_kj = γ_j^loc + γ_j^scale (((N − n)/N) C(0,1) + (n/N) N(0,1)), j ∈ 1, M,
where N(0,1) is the standard normal distribution and C(0,1) is the standard Cauchy distribution.</p>
<p>3.3. If k &lt; K then P = P ∪ {x_k}, k = k + 1, go to step 3.2.</p>
      <p>4. Sort P by the objective function, i.e. F(x_k) ≤ F(x_{k+1}).</p>
      <p>5. Define the best solution (the best vector of MRMLP model parameters) in the current population: k* = arg min_k F(x_k), k ∈ 1, K.</p>
      <p>6. Define the best solution (the best vector of MRMLP model parameters) over all iterations: if F(x_{k*}) &lt; F(x*) then x* = x_{k*}.</p>
      <p>7. Modification of the distribution parameters (on the basis of the B first, i.e. best, new potential solutions from the population P).
7.1. Modification of the vector of location parameters
γ_j^loc = (n/N) γ_j^loc + ((N − n)/N) γ̄_j^loc, γ̄_j^loc = (1/B) ∑_{k=1..B} x_kj, j ∈ 1, M.
7.2. Modification of the vector of scale parameters
γ_j^scale = ((N − n)/N) β(x_j^max − x_j^min), j ∈ 1, M.</p>
      <p>8. If n &lt; N then n = n + 1, go to step 3.</p>
      <p>The result is x*.</p>
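<p>The stages above can be sketched end-to-end in Python (a minimal sketch: NumPy generators replace the GPU implementation, and the sphere objective with 3 parameters is a hypothetical stand-in for the MRMLP criterion F):</p>

```python
import numpy as np

def metaheuristic_min(F, xmin, xmax, N=100, K=30, B=3, beta=0.1, seed=0):
    """Cross-entropy-style stochastic extremum search; early iterations
    sample mostly from the Cauchy distribution (global exploration), late
    iterations mostly from the Gaussian (directed search)."""
    rng = np.random.default_rng(seed)
    M = len(xmin)
    loc = xmin + 0.5 * (xmax - xmin)             # 1.2 location parameters
    scale = beta * (xmax - xmin)                 # 1.3 scale parameters
    best_x, best_f = loc.copy(), F(loc)          # 1.4 initial best solution
    for n in range(1, N + 1):                    # 2. iteration counter
        # 3. population: mix of Cauchy and Gaussian, weighted by iteration
        pop = loc + scale * (((N - n) / N) * rng.standard_cauchy((K, M))
                             + (n / N) * rng.standard_normal((K, M)))
        vals = np.array([F(x) for x in pop])
        order = np.argsort(vals)                 # 4. sort by objective
        if vals[order[0]] < best_f:              # 5-6. best over all iterations
            best_f, best_x = vals[order[0]], pop[order[0]].copy()
        mean_best = pop[order[:B]].mean(axis=0)  # 7.1 mean of B best solutions
        loc = (n / N) * loc + ((N - n) / N) * mean_best
        scale = ((N - n) / N) * beta * (xmax - xmin)   # 7.2 shrink scales
    return best_x, best_f                        # 8. result x*

# Hypothetical objective: sphere function over [-2, 6]^3, minimum at 0
x_best, f_best = metaheuristic_min(lambda v: float(np.sum(v ** 2)),
                                   np.full(3, -2.0), np.full(3, 6.0))
print(f_best < 12.0)  # improved over the initial midpoint (F = 12) -> True
```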
    </sec>
    <sec id="sec-7">
<title>2.6. Algorithm for parametric identification of the production audit data transformation model based on metaheuristics</title>
<p>For the proposed parametric identification method of the production audit data transformation model based on metaheuristics, an algorithm has been developed that is designed to be implemented on a GPU using the CUDA parallel information processing technology; it is shown in Fig. 3. This block diagram functions as follows.</p>
      <p>Step 1. Operator input of the maximum number of iterations N , population size K , solution
length M , maximum number of selected best solutions B , parameter to generate a vector of scale
parameters β , 0 &lt; β &lt; 1, minimum and maximum values for the solution xmin , xmax , j ∈1, M .
j j
Step 2. Initialization of location parameters vector
2. Initialization of location p arameters
3. Initialization of the vector of scale p arameters
4. Determining the best solution
5. Setting the iteration number n=1
6. Building a current p opulation of p otential solutions
7. Sorting p otential solutions
8. Determining the best solution for the current p opulation
9. Determining the best solution across all iterations
10. Calculation of the p arameters of the average location
11. Modification of location p arameters
12. Modification of scale p arameters
14. Writing the obtained best solution to the database
13. n&lt;N</p>
      <p>+
Database
if F (xk* ) &lt; F (x* ) , then x* = xk* .</p>
      <p>Step 10. Calculation of each j -th parameter of the average location γ ljoc based on parallel
reduction using MB GPU threads, which are grouped into M blocks.</p>
      <p>Step 11. Modification of each j-th location parameter using M GPU threads, which are
grouped into 1 block. Each thread computes
γ_j^loc = (n/N)·γ_j^loc + ((N − n)/N)·γ̄_j^loc.</p>
      <p>Step 12. Modification of each j-th scale parameter using M GPU threads, which are
grouped into 1 block. Each thread computes
γ_j^scale = ((N − n)/N)·β·(x_j^max − x_j^min).</p>
      <p>Step 13. Termination condition.</p>
      <p>If n &lt; N, then n = n + 1 and go to step 6.</p>
      <p>Step 14. Recording the best solutions for all iterations in the database.</p>
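      <p>The fourteen steps above can be modeled sequentially as follows. This Python/NumPy sketch is a CPU stand-in for the CUDA implementation; in particular, the Cauchy draw in step 6 is our assumption based on the distributions the method combines, and the function name is ours.</p>

```python
import numpy as np

def metaheuristic_identify(F, x_min, x_max, M, N=100, beta=0.1, seed=0):
    """Sequential stand-in for the GPU algorithm of Fig. 3 (steps 2-14).
    F is the objective; x_min, x_max bound every component of a solution."""
    rng = np.random.default_rng(seed)
    K = 3 * M                                  # population size (Section 3)
    B = max(1, int(0.1 * K))                   # number of best solutions kept
    gamma_loc = rng.uniform(x_min, x_max, M)   # step 2: location parameters
    gamma_scale = beta * (x_max - x_min) * np.ones(M)  # step 3: scale parameters
    x_star = gamma_loc.copy()                  # step 4: initial best solution
    for n in range(1, N + 1):                  # step 5: iteration counter
        # step 6: potential solutions spread around the location vector
        pop = gamma_loc + gamma_scale * rng.standard_cauchy((K, M))
        pop = np.clip(pop, x_min, x_max)
        order = np.argsort([F(x) for x in pop])         # step 7: sort population
        x_best = pop[order[0]]                          # step 8: best of population
        if F(x_star) > F(x_best):                       # step 9: best over all iterations
            x_star = x_best.copy()
        gamma_bar = pop[order[:B]].mean(axis=0)         # step 10: average location
        gamma_loc = (n / N) * gamma_loc + ((N - n) / N) * gamma_bar  # step 11
        gamma_scale = ((N - n) / N) * beta * (x_max - x_min)         # step 12
    return x_star                              # step 14: the stored best solution
```

      <p>For example, minimizing the sphere function F(x) = ∑_j x_j² over [−5, 5] pulls x* toward the origin as the scale vector shrinks linearly to zero.</p>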
      <sec id="sec-7-1">
        <title>GPU threads, which are</title>
      </sec>
      <sec id="sec-7-2">
        <title>GPU threads, which are</title>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>3. Numerical research</title>
      <p>The numerical research of the proposed method of parametric identification was conducted
using the CUDA parallel information processing technology in the MATLAB package. The number of
threads per block corresponded to the population size, the population was sorted with an odd-even
transposition sort, and both the search for the best solution of the current population x_k^* and
the computation of the average location vector γ̄^loc were performed with a parallel reduction
algorithm.</p>
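      <p>The odd-even (pair/unpair) transposition sort used here is GPU-friendly because all compare-exchange operations within one phase touch disjoint pairs. A minimal sequential Python model of the sort (an illustration only; the actual implementation runs in MATLAB/CUDA):</p>

```python
def odd_even_sort(values):
    """Odd-even transposition sort. Within one phase all compared pairs are
    disjoint, so on a GPU each pair can be handled by a separate thread."""
    a = list(values)
    n = len(a)
    for phase in range(n):
        start = phase % 2  # even phases: (0,1),(2,3),...; odd phases: (1,2),(3,4),...
        for i in range(start, n - 1, 2):   # independent pairs -> parallelizable
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```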
      <p>In this work the population size is K = 3M, the maximum number of iterations is N = 100, the
parameter for generating the vector of scale parameters is β = 0.1, and the maximum number of
selected best solutions is B = 0.1K.</p>
      <p>The qualitative characteristics of the parametric identification methods for the proposed
MRMLP neural network model are compared in Table 1, where Q is the number of parameters of the
MRMLP neural network and P is the cardinality of the training set.</p>
    </sec>
    <sec id="sec-9">
      <title>4. Discussion</title>
      <p>The backpropagation method in sequential mode has the following disadvantages:
- it cannot be used in batch training mode, i.e. it is impossible to parallelize computations on a
GPU, which reduces the speed of finding a solution (Table 1);</p>
      <p>- high probability of hitting a local extremum, which reduces the accuracy of finding a
solution (Table 1).</p>
      <p>The proposed metaheuristic method eliminates the indicated disadvantages.</p>
    </sec>
    <sec id="sec-10">
      <title>5. Conclusion</title>
      <p>The article addresses the problem of reducing the risk of incorrect display of data sets in the
audit DSS by means of neural network modeling of the transformations of production audit data
with returnable waste and intermediate products, based on a modified recurrent multilayer
perceptron (MRMLP). A method of parametric identification of the MRMLP model was further
developed; it takes the number of training iterations into account and combines the Gaussian and
Cauchy distributions, which increases the forecast accuracy: in the initial iterations the whole
search space is explored, while in the final iterations the search becomes directed. Software
implementing the proposed methods was developed in the MATLAB package and tested on data on the
release of raw materials into production and the posting of finished products of a manufacturing
enterprise, with a two-year sampling depth and daily time intervals. The experiments confirmed the
operability of the developed software and allow us to recommend it for practical use in the
automated analysis subsystem of an audit DSS for checking the mappings of data sets of the raw
materials release into production and the products output. Prospects for further research lie in
testing the proposed methods on a broader set of test databases.
</p>
      <p>[13] M. Berglund, T. Raiko, M. Honkala, L. Kärkkäinen, A. Vetek, J. T. Karhunen, Bidirectional Recurrent Neural Networks as Generative Models, in: Advances in Neural Information Processing Systems 28, Curran Associates, Inc., 2015, pp. 856–864.
[14] M. Sundermeyer, R. Schlüter, H. Ney, LSTM neural networks for language modeling, in: Thirteenth Annual Conference of the International Speech Communication Association, 2012.
[15] P. Potash, A. Romanov, A. Rumshisky, Ghostwriter: using an LSTM for automatic rap lyric generation, in: Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, 2015, pp. 1919–1924.
[16] E. Kiperwasser, Y. Goldberg, Simple and Accurate Dependency Parsing Using Bidirectional LSTM Feature Representations, Transactions of the Association for Computational Linguistics, vol. 4, 2016, pp. 313–327.
[17] A. Graves, J. Schmidhuber, Framewise phoneme classification with bidirectional LSTM and other neural network architectures, Neural Networks, vol. 18, no. 5, 2005, pp. 602–610.
[18] J. Chung, C. Gulcehre, K. Cho, Y. Bengio, Empirical evaluation of gated recurrent neural networks on sequence modeling, arXiv preprint arXiv:1412.3555, 2014.
[19] R. Dey, F. M. Salem, Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks, arXiv preprint arXiv:1701.05923, 2017.
[20] H. Jaeger, M. Lukoševičius, D. Popovici, U. Siewert, Optimization and applications of echo state networks with leaky integrator neurons, Neural Networks, vol. 20, no. 3, 2007, pp. 335–352.
[21] W. Maass, Liquid state machines: motivation, theory, and applications, in: Computability in Context: Computation and Logic in the Real World, 2010, pp. 275–296.
[22] A. Nakib, E.-G. Talbi, Metaheuristics for Medicine and Biology, Berlin: Springer-Verlag, 2017.
[23] X.-S. Yang, Nature-Inspired Algorithms and Applied Optimization, Cham: Springer, 2018.
[24] S. Subbotin, A. Oliinyk, V. Levashenko, E. Zaitseva, Diagnostic Rule Mining Based on Artificial Immune System for a Case of Uneven Distribution of Classes in Sample, Communications, vol. 3, 2016, pp. 3–11.
[25] C. Blum, G. R. Raidl, Hybrid Metaheuristics: Powerful Tools for Optimization, Cham: Springer, 2016.
[26] X.-S. Yang, Optimization Techniques and Applications with Examples, Hoboken, New Jersey: Wiley &amp; Sons, 2018. doi:10.1002/9781119490616.
[27] R. Martí, P. M. Pardalos, M. G. C. Resende, Handbook of Heuristics, Cham: Springer, 2018. doi:10.1007/978-3-319-07124-4.
[28] O. Bozorg-Haddad, M. Solgi, H. Loáiciga, Meta-heuristic and Evolutionary Algorithms for Engineering Optimization, Hoboken, New Jersey: Wiley &amp; Sons, 2017. doi:10.1002/9781119387053.
[29] B. Chopard, M. Tomassini, An Introduction to Metaheuristics for Optimization, New York: Springer, 2018. doi:10.1007/978-3-319-93073-2.
[30] J. Radosavljević, Metaheuristic Optimization in Power Engineering, The Institution of Engineering and Technology, 2018. doi:10.1049/pbpo131e.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] The World Bank, World Development Report 2016: Digital Dividends, 2016. URL: https://www.worldbank.org/en/publication/wdr2016</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. V. Barmak, Y. V. Krak, E. A. Manziuk, V. S. Kasianiuk, Information technology separating hyperplanes synthesis for linear classifiers, Journal of Automation and Information Sciences, vol. 51, no. 5, 2019, pp. 54–64. doi:10.1615/JAutomatInfScien.v51.i5.50</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] T. V. Prokopenko, O. Grigor, Development of the comprehensive method to manage risks in projects related to information technologies, Eastern-European Journal of Enterprise Technologies, vol. 2, 2018, pp. 37–43. doi:10.15587/1729-4061.2018.128140</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] M. Schultz, M. Tropmann-Frick, Autoencoder Neural Networks versus External Auditors: Detecting Unusual Journal Entries in Financial Statement Audits, in: Proceedings of the 53rd Hawaii International Conference on System Sciences (HICSS-2020), Maui, Hawaii, USA, 2020, pp. 5421–5430. doi:10.24251/hicss.2020.666</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] J. Nonnenmacher, F. Kruse, G. Schumann, G. Marx, Using Autoencoders for Data-Driven Analysis in Internal Auditing, in: Proceedings of the 54th Hawaii International Conference on System Sciences, Maui, Hawaii, USA, 2021, pp. 5748–5757. doi:10.24251/hicss.2021.697</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Y. Bodyanskiy, O. Boiko, Y. Zaychenko, G. Hamidov, A. Zelikman, The Hybrid GMDH-Neo-fuzzy Neural Network in Forecasting Problems in Financial Sphere, in: Proceedings of the 2nd International Conference on System Analysis &amp; Intelligent Computing (SAIC), Kyiv, Ukraine, IEEE, 2020, pp. 1–6. doi:10.1109/SAIC51296.2020.9239152</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] A. Sonika, M. Pratap, A. Chauhan, New technique for detecting fraudulent transactions using hybrid network consisting of full-counter propagation network and probabilistic network, in: 2016 International Conference on Computing, Communication and Automation (ICCCA), 29–30 April 2016, Greater Noida, India, IEEE, 2016, pp. 29–30. doi:10.1109/CCAA.2016.7813713</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] T. Neskorodіeva, E. Fedorov, I. Izonin, Forecast Method for Audit Data Analysis by Modified Liquid State Machine, in: Proceedings of the 1st International Workshop on Intelligent Information Technologies &amp; Systems of Information Security (IntelITSIS 2020), Khmelnytskyi, Ukraine, 10–12 June 2020, CEUR-WS, vol. 2623, 2020, pp. 25–35. URL: http://ceur-ws.org/Vol-2623/paper3.pdf</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] T. Neskorodіeva, E. Fedorov, Method for Automatic Analysis of Compliance of Expenses Data and the Enterprise Income by Neural Network Model of Forecast, in: Proceedings of the 2nd International Workshop on Modern Machine Learning Technologies and Data Science (MoMLeT&amp;DS-2020), Lviv-Shatsk, Ukraine, 2–3 June 2020, CEUR-WS, Volume I: Main Conference, vol. 2631, 2020, pp. 145–158. URL: http://ceur-ws.org/Vol-2631/paper11.pdf</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] I. Sutskever, J. Martens, G. E. Hinton, Generating text with recurrent neural networks, in: Proceedings of the 28th International Conference on Machine Learning (ICML-11), 2011, pp. 1017–1024.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] T. Mikolov, M. Karafiát, L. Burget, J. Černocký, S. Khudanpur, Recurrent neural network based language model, in: Interspeech, vol. 2, 2010, p. 3.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] M. Sundermeyer, et al., Translation modeling with bidirectional recurrent neural networks, in: Proceedings of the Conference on Empirical Methods in Natural Language Processing, October 2014.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>