=Paper=
{{Paper
|id=Vol-3101/Paper13
|storemode=property
|title=Neural network modeling method of transformations data of audit production with returnable waste
|pdfUrl=https://ceur-ws.org/Vol-3101/Paper13.pdf
|volume=Vol-3101
|authors=Tatiana Neskorodieva,Eugene Fedorov,Pavlo Rymar,Oleksii Smirnov
|dblpUrl=https://dblp.org/rec/conf/citrisk/NeskorodievaFRS21
}}
==Neural network modeling method of transformations data of audit production with returnable waste==
Tatiana Neskorodieva1, Eugene Fedorov1,2, Oleksii Smirnov3 and Pavlo Rymar1
1 Vasyl' Stus Donetsk National University, 600-richchia str., 21, Vinnytsia, 21021, Ukraine
2 Cherkasy State Technological University, Shevchenko blvd., 460, Cherkasy, 18006, Ukraine
3 Central Ukrainian National Technical University, University avenue, 8, Kropyvnytskyi, 25006, Ukraine
Abstract
Currently, the analytical procedures used during an audit are based on data mining techniques. The object of the research is the process of content auditing of production with returnable waste and intermediate products. The aim of the work is to reduce the risk of incorrect mapping of data sets in the audit DSS by means of a method of neural network modeling of transformations of audit data of production with returnable waste and intermediate products. This will reduce the risk of misclassification of the validated data. Transformations of audit data sets for the "Completeness" prerequisite are presented as sequences of mappings between data sets of consecutive operations. A method of parametrical identification of the MRMLP model is further developed; it takes the number of training iterations into account and combines the Gaussian and Cauchy distributions, which increases forecast accuracy: on initial iterations the whole search space is explored, and on final iterations the search becomes directed. Software implementing the proposed methods was developed in the MATLAB package and investigated on data on the release of raw materials into production and the posting of finished products with a two-year sampling depth and daily time intervals. The experiments confirmed the operability of the developed software and allow recommending it for practical use in the automated analysis subsystem of an audit DSS for checking mappings between data sets of the raw materials release into production and the products output.
Keywords
production audit, returnable waste, intermediate products, mapping by neural network, modified recurrent multilayered perceptron, metaheuristics, DSS, risk of wrong mapping of data sets, risk of misclassification of the validated data
1. Introduction
In the development of international and national economies and of the IT industry, the following basic tendencies can be distinguished: realization of digital transformations, formation of the digital economy, globalization of socio-economic processes and the IT accompanying them [1].
CITRisk'2021: 2nd International Workshop on Computational & Information Technologies for Risk-Informed Systems, September 16–17, 2021, Kherson, Ukraine
EMAIL: t.neskorodieva@donnu.edu.ua (T. Neskorodieva); fedorovee75@ukr.net (E. Fedorov); Dr.smirnovoa@gmail.com (O. Smirnov); p.rymar@donnu.edu.ua (P. Rymar)
ORCID: 0000-0003-2474-7697 (T. Neskorodieva); 0000-0003-3841-7373 (E. Fedorov); 0000-0001-9543-874X (O. Smirnov); 0000-0002-0647-2020 (P. Rymar)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
These processes result in the emergence of global, multilevel hierarchical structures of heterogeneous, multivariable, multifunctional connections, interactions and cooperation of managed objects (objects of audit). Large volumes of information about them have been accumulated in the information systems of accounting, management and audit.
Consequently, a current scientific and technical issue of modern information technologies in the financial and economic sphere of Ukraine is forming a methodology for planning and creating decision support systems (DSS) for the audit of enterprises operating with IT, on the basis of automated analysis of large volumes of data about the financial and economic activity and states of enterprises with a multilevel hierarchical structure of heterogeneous, multivariable, multifunctional connections, interrelations and cooperation of objects of audit, with the purpose of expanding the functional possibilities and increasing the efficiency and universality of IT audit [2, 3].
Currently, analytical procedures used during the audit are based on data mining techniques [4-6]. An automated audit DSS means the automatic forming of recommended decisions based on the results of automated data analysis, which improves the quality of the audit process and reduces the risk of incorrect mapping of data sets [7, 8]. Unlike the traditional approach, computer technologies of data analysis in the audit system accelerate the audit process and improve its accuracy, which is extremely critical given the large number of associated tasks on the lower and middle levels and the number of indexes and observations in every task [9].
When developing a decision-making system in audit based on data mining technologies, three methods have been created: classifying variables, forming analysis sets, and mapping analysis sets. The peculiarity of the methodology for classifying indicators is that qualitatively different (by semantic content) variables are classified: numerological, linguistic, quantitative, and logical. The essence of the second technique is determined by the qualitative meaning of the indicators: sets are formed with the corresponding semantic content, namely document numbers, names of indicators, quantitative estimates of the values of indicators, and logical indicators. The third technique maps formed sets of the same type onto each other to determine equivalence in the following senses: numerological, linguistic, quantitative, logical.
Neural networks are used for modeling production audit data transformations.
The following neural networks are most often used for mapping audit indicators:
– the Elman neural network (ENN), or simple recurrent network (SRN) [10, 11], which is a recurrent two-layer network constructed on the basis of the MLP. The advantage of this network is a simpler architecture and a higher training speed than gated, reservoir and bidirectional networks; a disadvantage is insufficient forecast accuracy in comparison with gated, reservoir and bidirectional networks;
– the bidirectional recurrent neural network (BRNN) [12, 13], which is a recurrent two-layer network constructed on the basis of two Elman networks. The advantage of this network is a higher forecast accuracy than an ordinary Elman network; disadvantages are a higher complexity of determining the architecture and a lower training speed than an ordinary Elman network;
– long short-term memory (LSTM) [14, 15], which is a recurrent network constructed on the basis of memory units (containing one or more cells) with input, output and forget gates (FIR filters). The advantage of this network is a higher forecast accuracy than an ordinary Elman network; disadvantages are a higher complexity of determining the architecture and a lower training speed than an ordinary Elman network;
– the bidirectional long short-term memory (BLSTM) [16, 17], which is a recurrent network constructed on the basis of two LSTM networks. The advantage of this network is a higher forecast accuracy than ordinary LSTM; disadvantages are a higher complexity of determining the architecture and a lower training speed than ordinary LSTM;
– the gated recurrent unit (GRU) [18, 19], which is a recurrent two-layer network constructed on the basis of hidden units with reset and update gates (FIR filters). The advantage of this network is a higher forecast accuracy than an ordinary Elman network; disadvantages are a higher complexity of determining the architecture and a lower training speed than an ordinary Elman network;
– the echo state network (ESN) [20], which is a recurrent two-layer network constructed on the basis of a reservoir (a layer of interconnected, not fully connected neurons). The advantage of this network is a higher forecast accuracy than an ordinary Elman network; disadvantages are a higher complexity of determining the architecture and a lower training speed than an ordinary Elman network;
– the liquid state machine (LSM) [21], which is a recurrent two-layer network constructed on the basis of a reservoir (a layer of interconnected, not fully connected spiking neurons) and an MLP. The advantage of this network is a higher forecast accuracy than an ordinary Elman network; disadvantages are a higher complexity of determining the architecture and a lower training speed than an ordinary Elman network.
Thus, none of these networks meets all the criteria.
To accelerate training and increase the accuracy of the production audit data transformations model, metaheuristics (or modern heuristics) are now used [22]. Metaheuristics expand the opportunities of heuristics by combining heuristic methods on the basis of a high-level strategy [23]. Existing metaheuristics possess one or more of the following disadvantages:
– there is only an abstract description of a method, or the description is focused on the solution of only a certain task [24];
– the influence of the iteration number on the solution search process is not considered [25];
– the convergence of a method is not guaranteed [26];
– there is no opportunity to use non-binary potential solutions [27];
– the procedure of determining parameter values is not automated [28];
– there is no opportunity to solve problems of conditional optimization [29];
– insufficient accuracy of a method [30].
In this regard, there is a problem of creating effective metaheuristic optimization methods.
Accordingly, it is relevant to create a neural network that considers the functional structure of production with returnable and non-returnable waste and intermediate products and learns on the basis of effective metaheuristics.
The aim of the work is to reduce the risk of incorrect mapping of data sets in the audit DSS by a method of neural network modeling of audit data transformations of production with returnable waste and intermediate products.
To achieve this objective, it is necessary to solve the following tasks:
– to propose a structural model of audit data transformations of production;
– to propose a neural network model of audit data transformations of production based on a recurrent multilayered perceptron;
– to select a criterion for evaluating the efficiency of the neural network model of production audit data transformation;
– to propose a method of parametrical identification of the neural network model of production audit data transformation based on back propagation through time;
– to propose a method of parametrical identification of the neural network model of production audit data transformation based on cross entropy and stochastic search for an extremum with training on vectors of the normal distribution;
– to execute numerical research.
The problem formulation. Let the training set for the model of production audit data transformations be $S = \{(x^\mu, d^\mu, d^{(1)\mu}, \ldots, d^{(H)\mu})\}$, $\mu \in \overline{1, P}$, where $x^\mu$ is the $\mu$-th training input vector, $d^\mu$ is the $\mu$-th training output vector of finished goods, and $d^{(k)\mu}$ is the $\mu$-th training output vector of unreturnable waste received after the $k$-th layer of semi-products production.
Then the problem of increasing the accuracy of production audit data transformations on the model of the modified recurrent multilayered perceptron (MRMLP) $g(x, w)$, where $x$ is an input signal and $w$ is the parameters vector, is represented as the problem of finding such a model parameter vector $w^*$ that satisfies the criterion
$$F = \frac{1}{P} \sum_{\mu=1}^{P} \left( g(x^\mu, w^*) - (d^\mu, d^{(1)\mu}, \ldots, d^{(H)\mu}) \right)^2 \rightarrow \min. \quad (1)$$
2. Materials and methods
2.1. Formalization of transformations of audit subject area data subelements for the "Completeness" prerequisite
Transformations of audit data sets for the "Completeness" prerequisite are presented as sequences of mappings between data sets of consecutive operations
$$Z_{i_1} \rightarrow Z_{i_2} \rightarrow \ldots \rightarrow Z_{i_m} \rightarrow \ldots \rightarrow Z_{i_M}, \quad (i_1, \ldots, i_m, \ldots, i_M) \in A(I), \quad I = \overline{1, I}, \quad (2)$$
where $Z$ is a reporting data set, $(i_1, \ldots, i_m, \ldots, i_M)$ is a combination of consecutive operation types from the set $I = \overline{1, I}$, and $A(I)$ is the set of possible combinations on the set $I$.
Therefore, the "Completeness" prerequisite audit is presented as checking the transformations of subelements of the data domain in the form of sequences of mappings of the splittings of data elements
$$\Re(Z_{i_1}) \rightarrow \Re(Z_{i_2}) \rightarrow \ldots \rightarrow \Re(Z_{i_m}) \rightarrow \ldots \rightarrow \Re(Z_{i_M}), \quad (i_1, \ldots, i_m, \ldots, i_M) \in A(I), \quad I = \overline{1, I}, \quad (3)$$
where $\Re(Z)$ is a splitting of the set $Z$.
The set of possible combinations of consecutive operation types $(i_1, \ldots, i_m, \ldots, i_M)$ defined in (3) includes checks in the direct and in the opposite direction.
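As an illustration (with hypothetical record identifiers, not the paper's data), a completeness check over such a chain of mappings can be sketched in Python: each mapping must cover its source records in the direct direction, and every target record must be produced by the preceding step in the opposite direction.

```python
def check_chain(chain):
    """chain: list of dicts, each mapping record ids of operation m
    to record ids of operation m+1 (hypothetical identifiers)."""
    for prev, nxt in zip(chain, chain[1:]):
        targets, sources = set(prev.values()), set(nxt.keys())
        if not targets <= sources:   # direct check: everything produced is consumed
            return False
        if not sources <= targets:   # opposite check: everything consumed was produced
            return False
    return True

# raw-materials receipt -> release into production -> receipt of finished goods
complete = [
    {"rm-001": "rel-101", "rm-002": "rel-102"},   # Z_i1 -> Z_i2
    {"rel-101": "fg-201", "rel-102": "fg-201"},   # Z_i2 -> Z_i3
]
broken = [{"rm-001": "rel-101"}, {"rel-999": "fg-201"}]
print(check_chain(complete), check_chain(broken))  # True False
```

The two subset tests correspond to the direct and opposite checking directions named above.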
The model of the transformation of subelements of the "Completeness" prerequisite audit subject domain is formed on the example of the direct material costs audit. Models of their transformations can be presented in the form of graphs in which every vertex corresponds to a subelement and every edge corresponds to a mapping describing the interrelation between the corresponding subelements.
For this purpose, we use the formalization of a set of direct material costs in the form of the graph $G^{(1)} = (Z^{(1)}, R^{(1)})$ (fig. 1), where the vertices are the accounts on which these current assets are kept and the edges are the operations as a result of which they are transformed. Then the model of subelements transformation of the audit data domain of the "Completeness" prerequisite at direct full check ($(i_1, i_2, i_3, i_4) = (1, 2, 3, 4)$) represents mappings of subsets of raw materials receipt data $Z_{\Re}^{(r_1)}(i_1) \in \Re(Z_{i_1})$ into subsets of data on the release of raw materials into production $Z_{\Re}^{(r_2)}(i_2) \in \Re(Z_{i_2})$, then into subsets of production data $Z_{\Re}^{(r_3)}(i_3) \in \Re(Z_{i_3})$ and of receipt of finished goods $Z_{\Re}^{(r_4)}(i_4) \in \Re(Z_{i_4})$, over the time intervals $t_j \in T_m$, $j \in \overline{1, J_m}$, $m \in \overline{1, M}$.
In the specific case when the splitting of the sets is carried out on the basis of logical conditions characterizing belonging to one of the accounting item subspecies, the model of subelements transformation of the audit data domain of the "Completeness" prerequisite at direct full check is the set of sequences of mappings of supplier settlement operations data by supplier types into subsets of operations data by raw materials types, then into subsets of operations data by product types and finished goods types.
Figure 1: Model of transformations of audit data domain subelements of the "Completeness" prerequisite (graph vertices: raw materials receipt, release of raw materials into production, write-off of the prime cost of production, receipt of finished goods)
2.2. Choosing a neural network model for mapping audit sets
The unit diagram of the modified recurrent multilayered perceptron (MRMLP) model is presented in fig. 2. It consists of fully connected recurrent layers of semi-products production (the neurons forming them are shown in white), not fully connected non-recurrent layers of unreturnable waste (the neurons forming them are shown in black), and a not fully connected non-recurrent layer of finished goods (the neurons forming it are shown in gray).
Figure 2: The unit diagram of the modified recurrent multilayered perceptron model
The MRMLP model, executing the mapping of each input sample of raw materials $x = (x_1, \ldots, x_{N^{(0)}})$ into output samples of finished goods $y = (y_1, \ldots, y_{N^{(H)}})$ and of unreturnable waste $\tilde{y}^{(1)} = (\tilde{y}_1^{(1)}, \ldots, \tilde{y}_{N^{(1)}}^{(1)}), \ldots, \tilde{y}^{(H)} = (\tilde{y}_1^{(H)}, \ldots, \tilde{y}_{N^{(H)}}^{(H)})$, is presented in the form
$$y_i^{(0)}(n) = x_i, \quad i \in \overline{1, N^{(0)}},$$
$$y_j^{(k)}(n) = f^{(k)}(s_j^{(k)}(n)), \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$s_j^{(k)}(n) = b_j^{(k)} + \sum_{i=1}^{N^{(k-1)}} w_{ij}^{(k)} y_i^{(k-1)}(n) + \sum_{i=1}^{N^{(k)}} \tilde{w}_{ij}^{(k)} y_i^{(k)}(n-1),$$
$$y_j(n) = f(s_j(n)), \quad j \in \overline{1, N^{(H)}},$$
$$s_j(n) = b_j + \bar{w}_{jj} y_j^{(H)}(n),$$
$$\tilde{y}_j^{(k)}(n) = \tilde{f}^{(k)}(\tilde{s}_j^{(k)}(n)), \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$\tilde{s}_j^{(k)}(n) = \tilde{b}_j^{(k)} + \hat{w}_{jj}^{(k)} y_j^{(k)}(n),$$
where $N^{(k)}$ is the number of neurons in the $k$-th layer of semi-products production and unreturnable waste,
$H$ is the quantity of layers of semi-products production and unreturnable waste,
$N^{(0)}$ is the number of neurons of the input layer (raw materials layer),
$b_j^{(k)}$ is the bias of the $j$-th neuron of the $k$-th layer of semi-products production,
$b_j$ is the bias of the $j$-th neuron of the finished goods layer,
$\tilde{b}_j^{(k)}$ is the bias of the $j$-th neuron of the $k$-th unreturnable waste layer,
$w_{ij}^{(k)}$ is the connection weight from the $i$-th neuron of the $(k-1)$-th layer of semi-products production to the $j$-th neuron of the $k$-th layer of semi-products production,
$\tilde{w}_{ij}^{(k)}$ is the recurrent connection weight from the $i$-th neuron of the $k$-th semi-products production layer to the $j$-th neuron of the same layer,
$\bar{w}_{jj}$ is the connection weight from the $j$-th neuron of the $H$-th layer of semi-products production to the $j$-th neuron of the finished goods layer,
$\hat{w}_{jj}^{(k)}$ is the connection weight from the $j$-th neuron of the $k$-th layer of semi-products production to the $j$-th neuron of the $k$-th layer of unreturnable waste,
$y_j^{(k)}(n)$ is the output of the $j$-th neuron of the $k$-th layer of semi-products production at timepoint $n$,
$y_j(n)$ is the output of the $j$-th neuron of the finished goods layer at timepoint $n$,
$\tilde{y}_j^{(k)}(n)$ is the output of the $j$-th neuron of the $k$-th unreturnable waste layer at timepoint $n$,
$f^{(k)}$ is the activation function of the neurons of the $k$-th layer of semi-products production,
$f$ is the activation function of the neurons of the finished goods layer,
$\tilde{f}^{(k)}$ is the activation function of the neurons of the $k$-th unreturnable waste layer.
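As a minimal sketch of this forward pass (not the authors' MATLAB implementation), the following Python/NumPy code assumes tanh activations for all layers, $H = 2$ hidden layers, and random weights; the "not fully connected" maps to finished goods and waste are modeled as element-wise (diagonal) products.

```python
import numpy as np

rng = np.random.default_rng(0)
H, sizes = 2, [4, 3, 3]                 # N^(0)=4 raw-material inputs, N^(1)=N^(2)=3

W  = [rng.normal(size=(sizes[k], sizes[k + 1])) for k in range(H)]      # w_ij^(k)
Wr = [rng.normal(size=(sizes[k + 1], sizes[k + 1])) for k in range(H)]  # recurrent weights
b  = [rng.normal(size=sizes[k + 1]) for k in range(H)]                  # b_j^(k)
w_fg, b_fg = rng.normal(size=sizes[H]), rng.normal(size=sizes[H])       # finished goods (diagonal)
w_wst = [rng.normal(size=sizes[k + 1]) for k in range(H)]               # waste (diagonal)
b_wst = [rng.normal(size=sizes[k + 1]) for k in range(H)]

def forward(x, prev):
    """One time step n; prev holds y^(k)(n-1) for each hidden layer k."""
    y, waste, state = x, [], []
    for k in range(H):
        y = np.tanh(b[k] + y @ W[k] + prev[k] @ Wr[k])      # recurrent production layer
        state.append(y)
        waste.append(np.tanh(b_wst[k] + w_wst[k] * y))      # element-wise: not fully connected
    out = np.tanh(b_fg + w_fg * y)                          # diagonal map to finished goods
    return out, waste, state

prev = [np.zeros(s) for s in sizes[1:]]
out, waste, prev = forward(np.ones(4), prev)
print(out.shape, len(waste))  # (3,) 2
```

The returned `state` plays the role of $y^{(k)}(n-1)$ on the next call, which is how the recurrence over timepoints is carried.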
2.3. Criterion choice for evaluation of neural network model
efficiency of data transformations of production audit
In this work, for training the MRMLP model, the goal function is selected that means choosing such values of the parameters vector
$$w = (w_{11}^{(1)}, \ldots, w_{N^{(H-1)}N^{(H)}}^{(H)}, \tilde{w}_{11}^{(1)}, \ldots, \tilde{w}_{N^{(H)}N^{(H)}}^{(H)}, \bar{w}_{11}, \ldots, \bar{w}_{N^{(H)}N^{(H)}}, \hat{w}_{11}^{(1)}, \ldots, \hat{w}_{N^{(H)}N^{(H)}}^{(H)})$$
that deliver a minimum of the root mean square error (the difference between a sample on the model and a test sample)
$$F = \frac{1}{P N^{(H)}} \sum_{\mu=1}^{P} \left\| y^\mu - d^\mu \right\|^2 + \frac{1}{H P} \sum_{k=1}^{H} \frac{1}{N^{(k)}} \sum_{\mu=1}^{P} \left\| \tilde{y}^{(k)\mu} - d^{(k)\mu} \right\|^2 \rightarrow \min_w,$$
where $y^\mu, \tilde{y}^{(1)\mu}, \ldots, \tilde{y}^{(H)\mu}$ are the $\mu$-th output samples on the model,
$d^\mu, d^{(1)\mu}, \ldots, d^{(H)\mu}$ are the $\mu$-th test output samples,
$H$ is the quantity of hidden layers,
$P$ is the power of the test set.
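A sketch of this criterion in Python/NumPy (array shapes are assumptions; the waste targets $d^{(k)\mu}$ are labeled `d_waste` here):

```python
import numpy as np

def criterion(y, d, y_waste, d_waste):
    """y, d: (P, N_H) finished-goods model/test samples;
    y_waste, d_waste: lists of H arrays of shape (P, N_k)."""
    P, N_H = y.shape
    H = len(y_waste)
    F = np.sum((y - d) ** 2) / (P * N_H)                    # finished-goods term
    F += sum(np.sum((yk - dk) ** 2) / yk.shape[1]           # per-layer waste terms
             for yk, dk in zip(y_waste, d_waste)) / (H * P)
    return float(F)

# toy check: every prediction off by exactly 1 gives F = 1.0 + 1.0 = 2.0
y, d = np.zeros((5, 3)), np.ones((5, 3))
yw, dw = [np.zeros((5, 2))], [np.ones((5, 2))]
print(criterion(y, d, yw, dw))  # 2.0
```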
2.4. Method of parametrical identification of the production audit data transformations model based on back propagation through time in a sequential mode
1. Set the number of the training iteration $n = 1$; initialize, by means of the uniform distribution on the interval $(0, 1)$ or $[-0.5, 0.5]$, the biases $b_j^{(k)}(n)$, $\tilde{b}_j^{(k)}(n)$, $j \in \overline{1, N^{(k)}}$, $k \in \overline{1, H}$, $b_j(n)$, $j \in \overline{1, N^{(H)}}$, and the weights $w_{ij}^{(k)}(n)$, $i \in \overline{1, N^{(k-1)}}$, $j \in \overline{1, N^{(k)}}$, $k \in \overline{1, H}$, $\tilde{w}_{ij}^{(k)}(n)$, $i, j \in \overline{1, N^{(k)}}$, $k \in \overline{1, H}$, $\bar{w}_{jj}(n)$, $j \in \overline{1, N^{(H)}}$, $\hat{w}_{jj}^{(k)}(n)$, $j \in \overline{1, N^{(k)}}$, $k \in \overline{1, H}$.
2. The training set is set as $\{(x^\mu, d^\mu, d^{(1)\mu}, \ldots, d^{(H)\mu}) \mid x^\mu \in \mathbb{R}^{N^{(0)}}, d^\mu \in \mathbb{R}^{N^{(H)}}, d^{(k)\mu} \in \mathbb{R}^{N^{(k)}}\}$, $\mu \in \overline{1, P}$, where $x^\mu$ is the $\mu$-th training input vector of raw materials, $d^\mu$ is the $\mu$-th training output vector of finished goods, $d^{(k)\mu}$ is the $\mu$-th training output vector of unreturnable waste received after each $k$-th semi-products production layer, and $P$ is the power of the training set. The number of the current sample from the training set is $\mu = 1$.
3. Initial setting of the output signal of each fully connected recurrent hidden layer: $y_i^{(k)}(n-1) = 0$, $i \in \overline{1, N^{(k)}}$, $k \in \overline{1, H}$.
4. Calculation of the output signal of each fully connected recurrent layer of semi-products production considering returnable waste (forward propagation):
$$y_i^{(0)}(n) = x_{\mu i},$$
$$y_j^{(k)}(n) = f^{(k)}(s_j^{(k)}(n)), \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$s_j^{(k)}(n) = b_j^{(k)}(n) + \sum_{i=1}^{N^{(k-1)}} w_{ij}^{(k)}(n) y_i^{(k-1)}(n) + \sum_{i=1}^{N^{(k)}} \tilde{w}_{ij}^{(k)}(n) y_i^{(k)}(n-1).$$
5. Calculation of the output signal of the not fully connected non-recurrent layer of finished goods (forward propagation):
$$y_j(n) = f(s_j(n)), \quad j \in \overline{1, N^{(H)}},$$
$$s_j(n) = b_j(n) + \bar{w}_{jj}(n) y_j^{(H)}(n).$$
6. Calculation of the output signal of each not fully connected non-recurrent layer of unreturnable waste (forward propagation):
$$\tilde{y}_j^{(k)}(n) = \tilde{f}^{(k)}(\tilde{s}_j^{(k)}(n)), \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$\tilde{s}_j^{(k)}(n) = \tilde{b}_j^{(k)}(n) + \hat{w}_{jj}^{(k)}(n) y_j^{(k)}(n).$$
7. Calculation of the ANN error energy:
$$E(n) = \frac{1}{2} \sum_{j=1}^{N^{(H)}} e_j^2(n) + \frac{1}{2} \sum_{k=1}^{H} \sum_{j=1}^{N^{(k)}} (\tilde{e}_j^{(k)}(n))^2,$$
$$e_j(n) = y_j(n) - d_{\mu j}, \quad \tilde{e}_j^{(k)}(n) = \tilde{y}_j^{(k)}(n) - d_{\mu j}^{(k)}.$$
8. Adjustment of synaptic weights based on the generalized delta rule (backward propagation):
$$b_j^{(k)}(n+1) = b_j^{(k)}(n) - \eta \frac{\partial E(n)}{\partial b_j^{(k)}(n)}, \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$w_{ij}^{(k)}(n+1) = w_{ij}^{(k)}(n) - \eta \frac{\partial E(n)}{\partial w_{ij}^{(k)}(n)}, \quad i \in \overline{1, N^{(k-1)}}, \; j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$\tilde{w}_{ij}^{(k)}(n+1) = \tilde{w}_{ij}^{(k)}(n) - \eta \frac{\partial E(n)}{\partial \tilde{w}_{ij}^{(k)}(n)}, \quad i, j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$b_j(n+1) = b_j(n) - \eta \frac{\partial E(n)}{\partial b_j(n)}, \quad j \in \overline{1, N^{(H)}},$$
$$\bar{w}_{jj}(n+1) = \bar{w}_{jj}(n) - \eta \frac{\partial E(n)}{\partial \bar{w}_{jj}(n)}, \quad j \in \overline{1, N^{(H)}},$$
$$\tilde{b}_j^{(k)}(n+1) = \tilde{b}_j^{(k)}(n) - \eta \frac{\partial E(n)}{\partial \tilde{b}_j^{(k)}(n)}, \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
$$\hat{w}_{jj}^{(k)}(n+1) = \hat{w}_{jj}^{(k)}(n) - \eta \frac{\partial E(n)}{\partial \hat{w}_{jj}^{(k)}(n)}, \quad j \in \overline{1, N^{(k)}}, \; k \in \overline{1, H},$$
where $\eta$ is the parameter determining the training speed ($0 < \eta < 1$; with a big $\eta$ training happens quicker, but the danger of receiving an incorrect solution increases), and the gradients are
$$\frac{\partial E(n)}{\partial b_j^{(k)}(n)} = g_j^{(k)}(n), \quad \frac{\partial E(n)}{\partial w_{ij}^{(k)}(n)} = y_i^{(k-1)}(n) g_j^{(k)}(n), \quad \frac{\partial E(n)}{\partial \tilde{w}_{ij}^{(k)}(n)} = y_i^{(k)}(n-1) g_j^{(k)}(n),$$
$$\frac{\partial E(n)}{\partial b_j(n)} = g_j(n), \quad \frac{\partial E(n)}{\partial \bar{w}_{jj}(n)} = y_j^{(H)}(n) g_j(n),$$
$$\frac{\partial E(n)}{\partial \tilde{b}_j^{(k)}(n)} = \tilde{g}_j^{(k)}(n), \quad \frac{\partial E(n)}{\partial \hat{w}_{jj}^{(k)}(n)} = y_j^{(k)}(n) \tilde{g}_j^{(k)}(n),$$
$$g_j^{(k)}(n) = \begin{cases} f'^{(H)}(s_j^{(H)}(n)) \left( \bar{w}_{jj}(n) g_j(n) + \hat{w}_{jj}^{(H)}(n) \tilde{g}_j^{(H)}(n) \right), & k = H, \\ f'^{(k)}(s_j^{(k)}(n)) \left( \sum_{l=1}^{N^{(k+1)}} w_{jl}^{(k+1)}(n) g_l^{(k+1)}(n) + \hat{w}_{jj}^{(k)}(n) \tilde{g}_j^{(k)}(n) \right), & k < H, \end{cases}$$
$$g_j(n) = f'(s_j(n)) e_j(n),$$
$$\tilde{g}_j^{(k)}(n) = \tilde{f}'^{(k)}(\tilde{s}_j^{(k)}(n)) \tilde{e}_j^{(k)}(n).$$
9. Check of the termination condition:
If $n \bmod P > 0$, then $\mu = \mu + 1$, $n = n + 1$, go to step 4.
If $n \bmod P = 0$ and $\frac{1}{P} \sum_{s=1}^{P} E(n - P + s) > \varepsilon$, then $n = n + 1$, go to step 2.
If $n \bmod P = 0$ and $\frac{1}{P} \sum_{s=1}^{P} E(n - P + s) \le \varepsilon$, then training is completed.
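For a single finished-goods neuron with an assumed activation $f = \tanh$ (the paper does not fix $f$), the delta rule of step 8 reduces to the following scalar sketch (hypothetical data, not the paper's experiment):

```python
import numpy as np

eta = 0.1                               # training-speed parameter, 0 < eta < 1
w, bias = 0.5, 0.0                      # weight w_jj and bias b_j
y_in, d = 1.0, 0.8                      # input y_j^(H)(n) and target d_mu_j
for n in range(200):                    # sequential mode, one training pair
    s = bias + w * y_in                 # s_j(n)
    y = np.tanh(s)                      # y_j(n) = f(s_j(n))
    e = y - d                           # e_j(n)
    g = (1.0 - y * y) * e               # g_j(n) = f'(s_j(n)) e_j(n), f = tanh
    bias -= eta * g                     # dE/db_j = g_j(n)
    w -= eta * y_in * g                 # dE/dw_jj = y_j^(H)(n) g_j(n)
print(np.tanh(bias + w * y_in))         # approaches the target 0.8
```

The same local-gradient pattern, extended by the recursion over layers, gives the full backward propagation of step 8.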
The disadvantages of this method are a high probability of hitting a local extremum, which reduces training accuracy, and the impossibility of training in batch mode, which reduces training speed. In this regard, an alternative training method based on a metaheuristic is offered in this work.
2.5. Method of parametrical identification of the production audit data transformations model based on a metaheuristic
The offered method of parametrical identification of the production audit data transformations model is based on the method of cross entropy and stochastic search for an extremum with training on vectors of the normal distribution [30].
The feature of the offered method is that the number of iterations is considered at the generation of potential solutions: it controls the speed of convergence of the method and the speed of change of the distribution parameters, and it provides that on initial iterations the whole search space is investigated while on final iterations the search becomes directed. Besides, not only the Gaussian distribution but also the Cauchy distribution is used, and their weights depend on the iteration number.
The offered method consists of the following stages:
1. Initialization.
1.1. Set the maximum number of iterations $N$, the population size $K$, the solution length $M$ (corresponding to the length of the vector of the MRMLP model parameters), the maximum quantity of selected best solutions $B$, and the parameter for generation of the scale parameters vector $\beta$, $0 < \beta < 1$.
1.2. Initialization of the location parameters vector:
$$\gamma^{loc} = (\gamma_1^{loc}, \ldots, \gamma_M^{loc}), \quad \gamma_j^{loc} = x_j^{\min} + \frac{1}{2}(x_j^{\max} - x_j^{\min}).$$
1.3. Initialization of the scale parameters vector:
$$\gamma^{scale} = (\gamma_1^{scale}, \ldots, \gamma_M^{scale}), \quad \gamma_j^{scale} = \beta (x_j^{\max} - x_j^{\min}).$$
1.4. Define the best solution (best vector of the MRMLP model parameters): $x^* = \gamma^{loc}$.
2. Set the iteration number $n = 1$.
3. Creation of the current population of potential solutions $P$.
3.1. Solution number $k = 1$, $P = \emptyset$.
3.2. Generation of a new potential solution $x_k$ (vector of the MRMLP model parameters):
$$x_{kj} = \gamma_j^{loc} + \gamma_j^{scale} \left( \frac{N-n}{N} C(0,1) + \frac{n}{N} N(0,1) \right), \quad j \in \overline{1, M},$$
where $N(0,1)$ is the standard normal distribution and $C(0,1)$ is the standard Cauchy distribution.
3.3. If $k < K$, then $P = P \cup \{x_k\}$, $k = k + 1$, go to step 3.2.
4. Sort $P$ by the goal function, i.e. $F(x_k) \le F(x_{k+1})$.
5. Define the best solution (best vector of the MRMLP model parameters) in the current population: $k^* = \arg\min_k F(x_k)$, $k \in \overline{1, K}$.
6. Define the best solution (best vector of the MRMLP model parameters) over all iterations: if $F(x_{k^*}) < F(x^*)$, then $x^* = x_{k^*}$.
7. Modification of the distribution parameters (on the basis of the $B$ first, i.e. best, new potential solutions from the population $P$).
7.1. Modification of the location parameters vector:
$$\gamma_j^{loc} = \frac{n}{N} \gamma_j^{loc} + \frac{N-n}{N} \bar{\gamma}_j^{loc}, \quad \bar{\gamma}_j^{loc} = \frac{1}{B} \sum_{k=1}^{B} x_{kj}, \quad j \in \overline{1, M}.$$
7.2. Modification of the scale parameters vector:
$$\gamma_j^{scale} = \frac{N-n}{N} \beta (x_j^{\max} - x_j^{\min}), \quad j \in \overline{1, M}.$$
8. If $n < N$, then $n = n + 1$, go to step 3.
The result is $x^*$.
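The stages above can be sketched in Python as follows. This is an assumption-laden illustration: the sphere function stands in for the MRMLP error $F$, and the values of $N$, $K$, $B$, $\beta$ and the bounds are arbitrary. Early iterations weight the heavy-tailed Cauchy term (exploration of the whole search space); late iterations weight the Gaussian term (directed search), as described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def optimize(F, x_min, x_max, N=200, K=40, B=10, beta=0.3):
    M = len(x_min)
    loc = x_min + 0.5 * (x_max - x_min)                   # step 1.2
    best = loc.copy()                                     # step 1.4
    for n in range(1, N + 1):
        scale = (N - n) / N * beta * (x_max - x_min)      # step 7.2 schedule
        mix = ((N - n) / N * rng.standard_cauchy((K, M))
               + n / N * rng.standard_normal((K, M)))     # step 3.2 mixture
        pop = loc + scale * mix
        pop = pop[np.argsort([F(x) for x in pop])]        # step 4: sort by goal function
        if F(pop[0]) < F(best):                           # steps 5-6: track the best
            best = pop[0].copy()
        loc = n / N * loc + (N - n) / N * pop[:B].mean(axis=0)  # step 7.1
    return best

sphere = lambda x: float(np.sum(x ** 2))                  # stand-in for the MRMLP error
x_star = optimize(sphere, np.full(3, -2.0), np.full(3, 8.0))
print(sphere(x_star) < 1.0)
```

Note how the linearly shrinking scale and the $n/N$ weighting of the location update make the search contract around the mean of the $B$ best solutions, the cross-entropy idea the method builds on.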
2.6. Algorithm for parametric identification of the production
audit data transformation model based on metaheuristics
For the proposed method of parametric identification of the production audit data transformations model based on metaheuristics, an algorithm has been developed that is designed to be implemented on a GPU using the CUDA parallel data processing technology; it is shown in Fig. 3. This block diagram functions as follows.
Step 1. Operator input of the maximum number of iterations $N$, the population size $K$, the solution length $M$, the maximum number of selected best solutions $B$, the parameter to generate a vector of scale parameters $\beta$, $0 < \beta < 1$, and the minimum and maximum values for the solution $x_j^{\min}$, $x_j^{\max}$, $j \in \overline{1, M}$.
Step 2. Initialization of the location parameters vector:
$$\gamma^{loc} = (\gamma_1^{loc}, \ldots, \gamma_M^{loc}), \quad \gamma_j^{loc} = x_j^{\min} + \frac{1}{2}(x_j^{\max} - x_j^{\min}).$$
Figure 3 (block diagram steps): 1. Input parameters; 2. Initialization of location parameters; 3. Initialization of the vector of scale parameters; 4. Determining the best solution; 5. Setting the iteration number n = 1; 6. Building a current population of potential solutions; 7. Sorting potential solutions; 8. Determining the best solution for the current population; 9. Determining the best solution across all iterations; 10. Calculation of the parameters of the average location; 11. Modification of location parameters; 12. Modification of scale parameters; 13. n