<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Method of Dynamic Stock Buffer Management Based on a Connectionist Expert System</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Eugene Fedorov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Olga Nechyporenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Cherkasy State Technological University</institution>
          ,
          <addr-line>Shevchenko blvd., 460, Cherkasy, 18006</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>30</fpage>
      <lpage>41</lpage>
      <abstract>
        <p>The paper proposes a method for dynamic stock buffer management based on a connectionist expert system. The novelty of the study is a method built on a combined connectionist and logical approach, together with a neural network model with sigmoid functions for dynamic stock buffer management. Three criteria for evaluating the effectiveness of the proposed model were selected, and the parameters of the model were identified using the backpropagation method in batch mode, which is oriented toward parallel information processing, and the matrix pseudo-inversion method. The proposed model and the methods for its parametric identification make it possible to increase the speed, accuracy and reliability of decision making. The proposed method can be used in various intelligent systems.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review</title>
      <p>Currently, dynamic stock buffer management is based on production rules [3-7]. There are no computer systems for dynamic stock buffer management based on artificial intelligence methods. At present, artificial intelligence methods are used to control dynamic objects, the most popular of them being artificial neural networks [8-10].</p>
      <p>The advantages of neural networks are [11-13]:
- the possibility of training and adaptation;
- the ability to identify patterns in data and generalize them, i.e. to extract knowledge from data, so that prior knowledge about the object (for example, its mathematical model) is not required;
- parallel processing of information, which increases computing power.</p>
      <p>The disadvantages of neural networks are [14-16]:
- difficulty in determining the structure of the network, since there are no methods for calculating the number of layers and the number of neurons per layer for specific applications;
- difficulty in forming a representative sample;
- a high probability that the training and adaptation method will get stuck in a local extremum;
- inaccessibility of the knowledge accumulated by the network to human understanding (it is impossible to represent the input-output relationship in the form of rules), since this knowledge is distributed among all elements of the neural network and is encoded in its weight coefficients.</p>
      <p>Recently, neural networks have been combined with expert systems. The advantage of expert systems is [17-19] the representation of knowledge in the form of association rules that are easily accessible to human understanding. The disadvantages of expert systems are [20-21]:
- the impossibility of training and adaptation (the weights of association rules cannot be adjusted automatically);
- the impossibility of parallel information processing, which would increase computing power.</p>
      <p>Thus, it is relevant to create a method for dynamic stock buffer management that eliminates these shortcomings.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Formation of knowledge about the stock buffer management</title>
      <p>It is assumed in this work that the stock buffer is divided into three zones (red, yellow and green) of equal size. The proposed artificial neural network is based on knowledge about stock buffer management, represented in the form of the association rules given below.</p>
      <p>1. If the depth of penetration into the red zone of the stock buffer is at least half of this zone (i.e. the amount of stock is not more than half of this zone), the stay in the red zone of the stock buffer is at least four days, and the stay in the green zone of the stock buffer is at least four days, then check the data, because an anomaly has occurred in them. The conclusion of the rule is encoded as (0,0,0,1):
$\left(x_1 \le x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 \ge 4) \wedge (x_3 \ge 4) \xrightarrow{v_1} y = (0,0,0,1)$.</p>
      <p>2. If the depth of penetration into the red zone of the stock buffer is at least half of this zone (i.e. the amount of stock is not more than half of this zone), the stay in the red zone of the stock buffer is at least four days, and the stay in the green zone of the stock buffer is less than four days, then increase the stock buffer size by 1/3. The conclusion of the rule is encoded as (1,0,0,0):
$\left(x_1 \le x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 \ge 4) \wedge (x_3 &lt; 4) \xrightarrow{v_2} y = (1,0,0,0)$.</p>
      <p>3. If the depth of penetration into the red zone of the stock buffer is at least half of this zone (i.e. the amount of stock is not more than half of this zone), the stay in the red zone of the stock buffer is less than four days, and the stay in the green zone of the stock buffer is at least four days, then check the data, because an anomaly has occurred in them. The conclusion of the rule is encoded as (0,0,0,1):
$\left(x_1 \le x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 &lt; 4) \wedge (x_3 \ge 4) \xrightarrow{v_3} y = (0,0,0,1)$.</p>
      <p>4. If the depth of penetration into the red zone of the stock buffer is at least half of this zone (i.e. the amount of stock is not more than half of this zone), the stay in the red zone of the stock buffer is less than four days, and the stay in the green zone of the stock buffer is less than four days, then increase the stock buffer size by 1/3. The conclusion of the rule is encoded as (1,0,0,0):
$\left(x_1 \le x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 &lt; 4) \wedge (x_3 &lt; 4) \xrightarrow{v_4} y = (1,0,0,0)$.</p>
      <p>5. If the depth of penetration into the red zone of the stock buffer is less than half of this zone (i.e. the amount of stock is more than half of this zone), the stay in the red zone of the stock buffer is at least four days, and the stay in the green zone of the stock buffer is at least four days, then check the data, because an anomaly has occurred in them. The conclusion of the rule is encoded as (0,0,0,1):
$\left(x_1 > x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 \ge 4) \wedge (x_3 \ge 4) \xrightarrow{v_5} y = (0,0,0,1)$.</p>
      <p>6. If the depth of penetration into the red zone of the stock buffer is less than half of this zone (i.e. the amount of stock is more than half of this zone), the stay in the red zone of the stock buffer is at least four days, and the stay in the green zone of the stock buffer is less than four days, then increase the stock buffer size by 1/3. The conclusion of the rule is encoded as (1,0,0,0):
$\left(x_1 > x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 \ge 4) \wedge (x_3 &lt; 4) \xrightarrow{v_6} y = (1,0,0,0)$.</p>
      <p>7. If the depth of penetration into the red zone of the stock buffer is less than half of this zone (i.e. the amount of stock is more than half of this zone), the stay in the red zone of the stock buffer is less than four days, and the stay in the green zone of the stock buffer is at least four days, then decrease the stock buffer size by 1/3. The conclusion of the rule is encoded as (0,1,0,0):
$\left(x_1 > x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 &lt; 4) \wedge (x_3 \ge 4) \xrightarrow{v_7} y = (0,1,0,0)$.</p>
      <p>8. If the depth of penetration into the red zone of the stock buffer is less than half of this zone (i.e. the amount of stock is more than half of this zone), the stay in the red zone of the stock buffer is less than four days, and the stay in the green zone of the stock buffer is less than four days, then do not change the stock buffer size. The conclusion of the rule is encoded as (0,0,1,0):
$\left(x_1 > x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})\right) \wedge (x_2 &lt; 4) \wedge (x_3 &lt; 4) \xrightarrow{v_8} y = (0,0,1,0)$,
where $x_1$ – current stock size in pieces;
$x_2$ – time spent in the red zone of the stock buffer in days;
$x_3$ – time spent in the green zone of the stock buffer in days;
$y$ – action code;
$x_1^{\min}$ – the minimum number of stocks of goods in pieces (the border between the black and red zones of the stock buffer);
$x_1^{\max}$ – the maximum number of stocks of goods in pieces (the border between the green and blue zones of the stock buffer);
$v_j$ – weight of the $j$-th association rule.</p>
      <p>4. Creation of a mathematical model of the neural network with sigmoid functions for dynamic stock buffer management</p>
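As an illustration, the eight rules can be collapsed into a crisp lookup. The function below is a sketch: the name `buffer_action` and the crisp thresholding are ours, while the conditions and the four action codes follow the rules above.

```python
def buffer_action(x1, x2, x3, x1_min, x1_max):
    """Crisp version of the eight association rules.

    Returns one of the four action codes:
    (1,0,0,0) increase buffer by 1/3, (0,1,0,0) decrease by 1/3,
    (0,0,1,0) keep the size, (0,0,0,1) check the data for an anomaly.
    """
    deep_red = x1 <= x1_min + (x1_max - x1_min) / 6    # stock at most half of the red zone
    long_red = x2 >= 4                                 # at least 4 days in the red zone
    long_green = x3 >= 4                               # at least 4 days in the green zone

    if long_red and long_green:                        # rules 1 and 5: anomaly
        return (0, 0, 0, 1)
    if deep_red and not long_red and long_green:       # rule 3: anomaly
        return (0, 0, 0, 1)
    if long_red and not long_green:                    # rules 2 and 6: increase by 1/3
        return (1, 0, 0, 0)
    if deep_red and not long_red and not long_green:   # rule 4: increase by 1/3
        return (1, 0, 0, 0)
    if not deep_red and not long_red and long_green:   # rule 7: decrease by 1/3
        return (0, 1, 0, 0)
    return (0, 0, 1, 0)                                # rule 8: keep the buffer size
```

The neural network of the next section replaces these crisp comparisons with smooth sigmoid memberships, which makes the rule weights trainable.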
      <p>To manage the stock buffer dynamically, this work further improves the mathematical model of an artificial neural network through the use of association rules and multidimensional logistic functions, which reduces the number of hidden layers and thereby simplifies the identification of the network parameters. The neural network maps inputs to outputs according to the knowledge of stock buffer management.</p>
      <p>The structure of the neural network model with sigmoid functions is shown in Figure 1 as a graph with three layers (layer 0, layer 1, layer 2).</p>
      <p>The input (zero) layer contains three neurons (the number of neurons corresponds to the number of
input variables). The hidden layer contains eight neurons (the number of neurons corresponds to the
number of association rules). The output layer contains four neurons (the number of neurons
corresponds to the number of actions). The functioning of the neural network with sigmoid functions
is presented as follows.</p>
      <p>In the hidden layer, multidimensional logistic functions are calculated (this corresponds to the aggregation of the subconditions of the association rules, connected by conjunction):</p>
      <p>$y_j^h = f_j(\mathbf{x}) = \prod_{z=1}^{Z} \mathrm{sigm}_{zj}(x_z)$, $\mathrm{sigm}_{zj}(x_z) = \left(1 + \exp\left(-\frac{x_z - m_{zj}}{\sigma_{zj}}\right)\right)^{-1}$, $j = \overline{1, J}$.</p>
      <p>Based on the knowledge about stock buffer management, the parameters of the activation functions were chosen as follows:</p>
      <p>$m_{1j} = x_1^{\min} + \tfrac{1}{6}(x_1^{\max} - x_1^{\min})$, $j = \overline{1, 8}$; $\sigma_{11} = \sigma_{12} = \sigma_{13} = \sigma_{14} = -0.05$, $\sigma_{15} = \sigma_{16} = \sigma_{17} = \sigma_{18} = 0.05$;
$m_{2j} = 4$, $j = \overline{1, 8}$; $\sigma_{21} = \sigma_{22} = \sigma_{25} = \sigma_{26} = 0.05$, $\sigma_{23} = \sigma_{24} = \sigma_{27} = \sigma_{28} = -0.05$;
$m_{3j} = 4$, $j = \overline{1, 8}$; $\sigma_{31} = \sigma_{33} = \sigma_{35} = \sigma_{37} = 0.05$, $\sigma_{32} = \sigma_{34} = \sigma_{36} = \sigma_{38} = -0.05$.</p>
      <p>In the output layer, the sums of weighted multidimensional logistic functions are calculated (this corresponds to the aggregation of activated association rules with the same conclusions, i.e. actions):</p>
      <p>$y_k^{out} = \sum_{j=1}^{J} w_{jk} y_j^h$, $k = \overline{1, K}$, where $w_{jk} = v_j$ if $(j, k) \in \{(2,1), (4,1), (6,1), (7,2), (8,3), (1,4), (3,4), (5,4)\}$, and $w_{jk} = 0$ otherwise.</p>
      <p>Thus, the mathematical model of the neural network with sigmoid functions is presented in the form</p>
      <p>$y_k^{out} = \sum_{j=1}^{J} w_{jk} \prod_{z=1}^{Z} \left(1 + \exp\left(-\frac{x_z - m_{zj}}{\sigma_{zj}}\right)\right)^{-1}$, $k = \overline{1, K}$. (1)</p>
      <p>To decide on the choice of action for model (1), the following rule is used:</p>
      <p>$k^* = \arg\max_{k} y_k^{out}$, $k = \overline{1, K}$.</p>
      <p>5. Choice of criteria for evaluating the effectiveness of a mathematical model of a neural network with sigmoid functions for dynamic stock buffer management</p>
      <p>In this work, to evaluate the parametric identification of the mathematical model of the neural network with sigmoid functions (1), the following criteria were chosen:
- the accuracy criterion, which means the choice of such parameter values $v = (v_1, ..., v_J)$ that deliver a minimum of the mean squared error (the difference between the model output and the desired output):</p>
      <p>$F = \frac{1}{2I} \sum_{i=1}^{I} \sum_{k=1}^{K} (y_{ik}^{out} - d_{ik})^2 \to \min_v$, (2)</p>
      <p>where $d_i = (d_{i1}, ..., d_{iK})$ – $i$-th test output vector, $d_{ik} \in \{0, 1\}$;
$y_i^{out} = (y_{i1}^{out}, ..., y_{iK}^{out})$ – output vector obtained from the model;
$I$ – number of test implementations;
- the reliability criterion, which means the choice of such parameter values $v = (v_1, ..., v_J)$ that provide a minimum probability of making an incorrect decision (a mismatch between the model decision and the desired decision):</p>
      <p>$F = \frac{1}{I} \sum_{i=1}^{I} \delta_i \to \min_v$, $\quad \delta_i = \begin{cases} 1, &amp; \arg\max_{k} y_{ik}^{out} \ne \arg\max_{k} d_{ik} \\ 0, &amp; \arg\max_{k} y_{ik}^{out} = \arg\max_{k} d_{ik} \end{cases}$; (3)</p>
      <p>- the performance criterion, which means the choice of such parameter values $v = (v_1, ..., v_J)$ that deliver a minimum of computational complexity:</p>
      <p>$F = T \to \min_v$. (4)</p>
      <p>6. Identification of the parameters of the mathematical model of the neural network with sigmoid functions for dynamic stock buffer management based on the backpropagation method in batch mode</p>
      <p>To identify the parameters of the mathematical model of the neural network with sigmoid functions for dynamic stock buffer management (1), the procedure for determining these parameters based on the backpropagation method has been further improved in this work by calculating only the vector of parameters $v = (v_1, ..., v_J)$ and by batch learning to speed up training, which involves the following steps:</p>
      <p>1. Initialization of the weights $v_j$, $j = \overline{1, J}$, by the uniform distribution on the interval (0, 1).</p>
      <p>2. Specifying the training set $\{(x_i, d_i) \mid x_i \in R^Z, d_i \in \{0, 1\}^K\}$, $i = \overline{1, I}$, where $x_i$ – $i$-th normalized training input vector, $d_i$ – $i$-th training output vector, $Z$ – number of input variables, $I$ – training set cardinality. Specifying the parameters of the activation functions $m_{zj}$, $\sigma_{zj}$, $z = \overline{1, Z}$, $j = \overline{1, J}$. Iteration number $n = 1$.</p>
      <p>3. Output signal calculation (forward pass):</p>
      <p>$y_{ik}^{out} = \sum_{j=1}^{J} w_{jk} \prod_{z=1}^{Z} \left(1 + \exp\left(-\frac{x_{iz} - m_{zj}}{\sigma_{zj}}\right)\right)^{-1}$, $i = \overline{1, I}$, $k = \overline{1, K}$,
where $w_{jk} = v_j$ if $(j, k) \in \{(2,1), (4,1), (6,1), (7,2), (8,3), (1,4), (3,4), (5,4)\}$, and $w_{jk} = 0$ otherwise.</p>
      <p>4. Calculation of the error energy based on criterion (2):</p>
      <p>$E = \frac{1}{2I} \sum_{i=1}^{I} \sum_{k=1}^{K} (y_{ik}^{out} - d_{ik})^2$.</p>
      <p>5. Setting the weights of the output layer (backward pass):</p>
      <p>$w_{jk} = w_{jk} - \eta \frac{\partial E}{\partial w_{jk}}$ for $(j, k) \in \{(2,1), (4,1), (6,1), (7,2), (8,3), (1,4), (3,4), (5,4)\}$, and $w_{jk} = 0$ otherwise, $j = \overline{1, J}$, $k = \overline{1, K}$,
where $\eta$ – parameter that determines the learning rate (with a large $\eta$, learning is faster, but the risk of arriving at a wrong solution increases), $0 &lt; \eta \le 1$,</p>
      <p>$\frac{\partial E}{\partial w_{jk}} = \frac{1}{I} \sum_{i=1}^{I} f_j(x_i)(y_{ik}^{out} - d_{ik})$.</p>
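A minimal NumPy sketch of the batch procedure above, training only the rule weights v through a fixed (j, k) mask; the function name and defaults are illustrative:

```python
import numpy as np

def train_v(x, d, m, sigma, mask, eta=0.1, n_iter=100):
    """Batch backpropagation for the output weights only.

    x: (I, Z) inputs; d: (I, K) desired outputs; m, sigma: (Z, J);
    mask: (J, K) boolean, True where w_jk = v_j is trainable.
    """
    I, Z = x.shape
    J, K = mask.shape
    v = np.random.uniform(0.0, 1.0, J)       # step 1: init weights on (0, 1)
    for _ in range(n_iter):                  # step 6: fixed number of iterations
        # hidden layer: product of sigmoids over the Z inputs -> (I, J)
        f = np.prod(1.0 / (1.0 + np.exp(-(x[:, :, None] - m) / sigma)), axis=1)
        w = np.where(mask, v[:, None], 0.0)  # (J, K), zeros off the rule pattern
        y = f @ w                            # step 3: forward pass, (I, K)
        grad_w = (f.T @ (y - d)) / I         # step 5: dE/dw_jk over the batch
        grad_v = (grad_w * mask).sum(axis=1) # accumulate over k into v_j
        v -= eta * grad_v
    return v
```

Because each v_j appears in exactly one output sum, the mask both enforces the rule-to-action pattern and routes the gradient back to the single trainable weight per rule.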
      <sec id="sec-3-1">
        <title>6. Check completion condition.</title>
        <p>If n  N , then increase iteration number n, go to 3.
7. Algorithm for identifying the parameters of the mathematical model of
the neural network with sigmoid functions for dynamic stock buffer
management based on the method of backpropagation in batch mode
The algorithm for identifying the parameters of the mathematical model of the neural network with
sigmoid functions for dynamic stock buffer management based on the backpropagation method in
batch mode, designed for implementation on the GPU using CUDA technology, is shown in Figure 2.
This block diagram functions as follows.</p>
        <p>1. Iteration number $n = 1$; initialization of the weights $v_j$, $j = \overline{1, J}$, by the uniform distribution on the interval (0, 1).</p>
        <p>2. Specifying the training set $\{(x_i, d_i) \mid x_i \in R^Z, d_i \in \{0, 1\}^K\}$, $i = \overline{1, I}$, where $x_i$ – $i$-th normalized training input vector, $d_i$ – $i$-th training output vector, $Z$ – number of input variables, $I$ – training set cardinality. Specifying the parameters of the activation functions $m_{zj}$, $\sigma_{zj}$, $z = \overline{1, Z}$, $j = \overline{1, J}$.</p>
        <p>3. Calculation of the output signal according to model (1), using $KI$ threads grouped into $K$ blocks. Each thread computes $y_{ik}^{out}$.</p>
        <p>4. Calculation of the error energy based on criterion (2), using $KI$ threads grouped into $K$ blocks. In each block, a partial sum of $I$ elements of the form $(y_{ik}^{out} - d_{ik})^2 / (2I)$ is calculated based on parallel reduction. The partial sums obtained in the blocks are then added.</p>
        <p>5. Adjusting the weights of the output layer, using $JI$ threads grouped into $J$ blocks. In each block, the sum of $I$ elements of the form $f_j(x_i)(y_{ik}^{out} - d_{ik}) / I$ is calculated based on parallel reduction.</p>
        <p>6. Check of the completion condition: if $n \le N$, then $n = n + 1$, go to step 3.</p>
        <p>8. Identification of the parameters of the mathematical model of the neural network with sigmoid functions for dynamic stock buffer management based on the method of matrix pseudo-inversion</p>
        <p>Modification of the output layer weights based on the matrix pseudo-inversion method involves the following steps:</p>
        <p>1. Create the matrix $Y^h = [y_{ij}^h]$, $i = \overline{1, I}$, $j = \overline{1, J}$.</p>
        <p>2. Create the matrix $D = [d_{ik}]$, $i = \overline{1, I}$, $k = \overline{1, K}$.</p>
        <p>3. Calculate the matrix $W = [w_{jk}]$, $j = \overline{1, J}$, $k = \overline{1, K}$, as $W = (Y^h)^{+} D$, where $(Y^h)^{+}$ – pseudo-inverse matrix.</p>
        <p>The matrix pseudo-inversion method is based on the singular value decomposition (SVD), which allows computing the pseudo-inverse matrix $(Y^h)^{+}$ in the form $(Y^h)^{+} = V \Sigma^{+} U^T$, where $\Sigma^{+}$ is obtained from the matrix $\Sigma = \mathrm{diag}(\sigma_1, ..., \sigma_q)$ by replacing each nonzero $\sigma_i$ with $1 / \sigma_i$ followed by transposition of the modified matrix, $q = \min\{J, I\}$.</p>
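In NumPy, the whole weight modification of this section reduces to one call; `np.linalg.pinv` is itself computed via the SVD (array names are illustrative):

```python
import numpy as np

def fit_output_weights(Y_h, D):
    """W = pinv(Y_h) @ D: least-squares output weights from the hidden
    activations Y_h (I x J) and the desired outputs D (I x K)."""
    return np.linalg.pinv(Y_h) @ D
```

This one-shot solve is what gives the pseudo-inversion variant its accuracy advantage in the experiments: it finds the global least-squares minimum of criterion (2) directly, with no risk of a local extremum.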
        <p>The singular value decomposition (SVD) method consists of the following steps:</p>
        <p>1. The input data matrix is set in the form $Y^h = [y_1^h, ..., y_I^h]$, where $y_i^h$ – vector of dimension $J$, $I \ge J$.</p>
        <p>2. Bidiagonalization is performed, which uses the Householder transformation (reflection) and allows us to represent the matrix $Y^h$ as a product of matrices $Y^h = U_1 B_1 V_1^T$, or as a procedure call $(U_1, B_1, V_1) = \mathrm{bidiagonal}(Y^h)$, where the matrices $U_1$, $V_1$ are orthogonal and have dimensions $J \times J$ and $I \times I$ respectively, and the matrix $B_1$ is upper bidiagonal and has dimension $J \times I$.</p>
        <p>3. $z = 1$.</p>
        <p>4. The QR step is performed, which uses the Givens rotation and allows us to represent the matrix $B_z$ as a product of matrices $B_z = U_{z+1} B_{z+1} V_{z+1}^T$, or as a procedure call $(U_{z+1}, B_{z+1}, V_{z+1}, e) = \mathrm{qr}(B_z)$, where the matrices $U_{z+1}$, $V_{z+1}$ are orthogonal and have dimensions $J \times J$ and $I \times I$ respectively, the matrix $B_{z+1}$ is upper bidiagonal and has dimension $J \times I$, and $e$ – error.</p>
        <p>5. If $e \ge \varepsilon$, then $z = z + 1$, go to step 4.</p>
        <p>6. Calculate the matrix $U$, the matrix $V$ and the matrix $\Sigma$:</p>
      </sec>
      <sec id="sec-3-2">
        <title>Calculate matrix Σ</title>
        <p>$U = \prod_{m=1}^{z} U_m$, $\quad V = \prod_{m=1}^{z} V_m$, $\quad \Sigma = B_z$.</p>
        <p>Thus, the matrix of left singular vectors is represented as $U = [u_1, ..., u_J]$, the matrix of right singular vectors as $V = [v_1, ..., v_I]$, and the matrix of singular values of dimension $J \times I$ as $\Sigma = \mathrm{diag}(\sigma_1, ..., \sigma_q)$, $q = \min\{J, I\}$.</p>
        <sec id="sec-3-2-1">
          <title>Bidiagonalization as procedure bidiagonal(X)</title>
          <p>1. Initialization: $B = Y^h$, $B = [b_{ji}]$, $j = \overline{1, J}$, $i = \overline{1, I}$; $U = I$, $U = [u_{sj}]$, $s = \overline{1, J}$, $j = \overline{1, J}$; $V = I$, $V = [v_{iz}]$, $i = \overline{1, I}$, $z = \overline{1, I}$.</p>
          <p>2. $z = 1$.</p>
          <p>3. Calculation of the Householder matrix in the form $Q_z = \mathrm{householder}(B, z, J)$.</p>
          <p>4. Calculation of the upper bidiagonal matrix $B$ and the left orthogonal matrix $U$: $B = Q_z B$, $U = U Q_z$.</p>
          <p>5. If $z = \min\{J, I\} - 1$, then go to step 8.</p>
          <p>6. Calculation of the Householder matrix in the form $P_{z+1} = \mathrm{householder}(B^T, z + 1, I)$.</p>
          <p>7. Calculation of the upper bidiagonal matrix $B$ and the right orthogonal matrix $V$: $B = B P_{z+1}$, $V = P_{z+1} V$.</p>
          <p>8. If $z \le \min\{J, I\} - 1$, then $z = z + 1$, go to step 3.</p>
          <p>The result is the matrices $U$, $B$, $V$.</p>
          <p>Householder transformation as a procedure $\mathrm{householder}(B, l, C)$:</p>
          <p>1. Vector formation $x = (x_1, ..., x_C)$ in the form $x_c = b_{cl}$, $c = \overline{1, C}$.</p>
          <p>2. Vector formation $u = (u_1, ..., u_C)$ in the form $u_c = 0$ for $1 \le c &lt; l$; $u_c = x_c + \mathrm{sgn}(x_c) \sqrt{\sum_{s=c}^{C} (x_s)^2}$ for $c = l$; $u_c = x_c$ for $l + 1 \le c \le C$.</p>
          <p>3. Vector calculation $v = u / \lVert u \rVert$ and calculation of the Householder matrix $Q = I - 2 v v^T$.</p>
          <p>The result is the matrix $Q$.</p>
          <p>The QR step as a procedure $\mathrm{qr}(B_z)$:</p>
          <p>1. Initialization: $\Sigma = B_z$, $U = I$, $V = I$.</p>
          <p>2. $l = 1$.</p>
          <p>3. Givens rotation $(\cos\theta, \sin\theta, r) = \mathrm{rot}(\sigma_{ll}, \sigma_{l,l+1})$.</p>
          <p>4. $Q = I$, $Q = [q_{iz}]$, $i = \overline{1, I}$, $z = \overline{1, I}$; $q_{ll} = \cos\theta$, $q_{l,l+1} = \sin\theta$, $q_{l+1,l} = -\sin\theta$, $q_{l+1,l+1} = \cos\theta$.</p>
          <p>5. Calculation of the upper bidiagonal matrix $\Sigma$ and the right orthogonal matrix $V$: $\Sigma = \Sigma Q$, $V = V Q^T$.</p>
          <p>6. Zeroing all $\sigma_{ji}$ that have $|\sigma_{ji}| \le \varepsilon$.</p>
          <p>7. Givens rotation $(\cos\theta, \sin\theta, r) = \mathrm{rot}(\sigma_{ll}, \sigma_{l+1,l})$.</p>
          <p>8. $Q = I$, $Q = [q_{sj}]$, $s = \overline{1, J}$, $j = \overline{1, J}$; $q_{ll} = \cos\theta$, $q_{l,l+1} = \sin\theta$, $q_{l+1,l} = -\sin\theta$, $q_{l+1,l+1} = \cos\theta$.</p>
          <p>9. Calculation of the upper bidiagonal matrix $\Sigma$ and the left orthogonal matrix $U$: $\Sigma = Q \Sigma$, $U = U Q^T$.</p>
          <p>10. Zeroing all $\sigma_{ji}$ that have $|\sigma_{ji}| \le \varepsilon$.</p>
          <p>11. If $l \le \min\{J, I\} - 1$, then $l = l + 1$, go to step 3.</p>
          <p>12. Calculation of the error as the sum of the moduli of the elements lying above the main diagonal: $e = \sum_{j=2}^{q} \sum_{i=1}^{j-1} |\sigma_{ij}|$, $q = \min\{J, I\}$.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>The result is the matrices U, Σ, V and the error e.</title>
        <sec id="sec-3-3-1">
          <title>Givens rotation as a procedure rot(f, g)</title>
          <p>The procedure finds the elements $\cos\theta$, $\sin\theta$, $r$ such that the rotation transforms the vector $(f, g)^T$ into the vector $(r, 0)^T$, i.e.</p>
          <p>$\begin{pmatrix} \cos\theta &amp; \sin\theta \\ -\sin\theta &amp; \cos\theta \end{pmatrix} \begin{pmatrix} f \\ g \end{pmatrix} = \begin{pmatrix} r \\ 0 \end{pmatrix}$.</p>
          <p>If $f = 0$, then $\cos\theta = 0$, $\sin\theta = 1$, $r = g$.</p>
          <p>If $|f| > |g|$ and $f \ne 0$, then $\tan\theta = g / f$,</p>
          <p>$\cos\theta = \frac{1}{\sqrt{1 + \tan^2\theta}} = \frac{1}{\sqrt{1 + (g/f)^2}}$, $\quad \sin\theta = \frac{\tan\theta}{\sqrt{1 + \tan^2\theta}} = \frac{g/f}{\sqrt{1 + (g/f)^2}}$, $\quad r = f \sqrt{1 + (g/f)^2}$.</p>
          <p>If $|f| \le |g|$ and $f \ne 0$, then $\cot\theta = f / g$,</p>
          <p>$\sin\theta = \frac{1}{\sqrt{1 + \cot^2\theta}} = \frac{1}{\sqrt{1 + (f/g)^2}}$, $\quad \cos\theta = \frac{\cot\theta}{\sqrt{1 + \cot^2\theta}} = \frac{f/g}{\sqrt{1 + (f/g)^2}}$, $\quad r = g \sqrt{1 + (f/g)^2}$.</p>
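The three cases translate directly to code; this is a plain transcription, without the overflow guards a production routine (e.g. LAPACK's rotation generator) would add:

```python
import math

def rot(f, g):
    """Givens rotation: returns (cos_t, sin_t, r) such that
    [[cos_t, sin_t], [-sin_t, cos_t]] @ (f, g) == (r, 0)."""
    if f == 0.0:
        return 0.0, 1.0, g
    if abs(f) > abs(g):
        t = g / f                          # tan(theta)
        c = 1.0 / math.sqrt(1.0 + t * t)
        return c, t * c, f * math.sqrt(1.0 + t * t)
    t = f / g                              # cotan(theta)
    s = 1.0 / math.sqrt(1.0 + t * t)
    return t * s, s, g * math.sqrt(1.0 + t * t)
```

Branching on the larger of |f| and |g| keeps the intermediate ratio at most 1 in magnitude, which is why the procedure distinguishes the two cases.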
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>The result is the elements cos θ, sin θ, r.</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>9. Experiments and results</title>
      <p>A numerical study of the proposed mathematical model of a neural network with sigmoid functions and of a conventional multilayer perceptron was carried out in the Matlab package using the Deep Learning Toolbox (to identify the parameters of the multilayer perceptron model and of the neural network with radial basis functions based on backpropagation, as well as to identify the parameters of the proposed neural network model with sigmoid functions (1) based on backpropagation and matrix pseudo-inversion).</p>
      <p>Table 1 presents the computational complexity, the root mean square errors (RMSE), and the probabilities of making wrong decisions on dynamic stock buffer management, obtained on the basis of the data set of the logistics companies "Ekol Ukraine" and "Vitronic" (the dataset contains 2000 precedents; during the simulation, 80% of the precedents were randomly selected for training and 20% for testing) using artificial neural networks of the multilayer perceptron (MLP) and radial basis function neural network (RBFNN) types with backpropagation (BP), as well as the proposed model (1) with backpropagation (BP) and with matrix pseudo-inversion. The MLP had 2 hidden layers (each consisting of 6 neurons, like the input layer); the RBFNN had one hidden layer of 12 neurons. I – the training set cardinality, N – the number of iterations performed.</p>
      <p>According to Table 1, the best results in terms of computational complexity are given by model (1) with parameter identification based on BP in batch mode using CUDA (which reduces the computational complexity by a factor of I), and the best results in terms of RMSE and the probability of making an incorrect decision are given by model (1) with parameter identification based on matrix pseudo-inversion without using CUDA.</p>
      <p>10. Conclusions</p>
      <p>1. To solve the problem of improving the efficiency of dynamic stock buffer management, relevant artificial intelligence methods have been investigated. These studies have shown that the most effective is the use of artificial neural networks in combination with expert systems.</p>
      <p>2. The novelty of the study lies in the fact that the proposed method of dynamic stock buffer
management is based on logic and artificial neural networks. It provides representation of knowledge
about stock buffer management in the form of association rules, reduces computational complexity,
mean squared error, and the probability of making the wrong decision by automatically choosing the
model structure, reducing the probability of hitting a local extremum, and using CUDA parallel
information processing technology.</p>
      <p>3. As a result of the numerical study, it was found that the proposed method of dynamic stock buffer management based on the connectionist expert system provides, in the case of identifying the parameters of the neural network model by the matrix pseudo-inversion method, a probability of making the wrong decision of 0.02 and a root-mean-square error of 0.05, and, in the case of identifying the parameters of the neural network model by the backpropagation method in batch mode using CUDA, it reduces the computational complexity by a factor of I, where I is the training set cardinality.</p>
      <p>4. Further research prospects include the use of the proposed method of dynamic stock buffer management based on the connectionist expert system in various intelligent systems for managing dynamic objects in natural language.</p>
      <p>11. References</p>
      <p>[1] G. G. Shvachych, O. V. Ivaschenko, V. V. Busygin, Ye. Ye. Fedorov, Parallel computational algorithms in thermal processes in metallurgy and mining, Naukovyi Visnyk Natsionalnoho Hirnychoho Universytetu, 4 (2018) 129–137. doi: 10.29202/nvngu/2018-4/19.</p>
      <p>[2] G. Shlomchak, G. Shvachych, B. Moroz, E. Fedorov, D. Kozenkov, Automated control of temperature regimes of alloyed steel products based on multiprocessors computing systems, Metalurgija, 58 (2019) 299–302.</p>
      <p>[3] U. P. Nagarkatte, N. Oley, Theory of constraints and thinking processes for creative thinkers: creative problem solving, Boca Raton, FL: CRC Press, 2018.</p>
      <p>[4] B. Sproull, Theory of Constraints, Lean, and Six Sigma Improvement Methodology: Making the Case for Integration, London: CRC Press, 2019.</p>
      <p>[5] J. F. Cox, J. G. Schleher, Theory of Constraints Handbook, New York, NY: McGraw-Hill, 2010.</p>
      <p>[6] E. M. Goldratt, My saga to improve production, Selected Readings in Constraints Management, Falls Church, VA: APICS (1996) 43–48.</p>
      <p>[7] E. M. Goldratt, Production: The TOC Way (Revised Edition) including CD-ROM Simulator and Workbook, Revised edition, Great Barrington, MA: North River Press, 2003.</p>
      <p>[8] S. N. Sivanandam, S. Sumathi, S. N. Deepa, Introduction to Neural Networks using Matlab 6.0, The McGraw-Hill Comp., Inc., New Delhi, 2006.</p>
      <p>[9] S. Haykin, Neural networks and Learning Machines, Upper Saddle River, NJ: Pearson Education, Inc., 2009.</p>
      <p>[10] K.-L. Du, K. M. S. Swamy, Neural Networks and Statistical Learning, Springer-Verlag, London, 2014.</p>
      <p>[11] E. Fedorov, V. Lukashenko, V. Patrushev, A. Lukashenko, K. Rudakov, S. Mitsenko, The method of intelligent image processing based on a three-channel purely convolutional neural network, in: CEUR Workshop Proceedings, vol. 2255, 2018, pp. 336–351. doi: 10.1109/EWDTS.2013.6673185.</p>
      <p>[12] . Zhang, Z. Tang, C. Vairappan, A novel learning method for Elman neural network using local search, in: Neural Information Processing – Letters and Reviews, vol. 11, 2007, pp. 181–188.</p>
      <p>[13] R. Dey, F. M. Salem, Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks, arXiv:1701.05923, 2017. URL: https://arxiv.org/ftp/arxiv/papers/1701/1701.05923.pdf.</p>
      <p>[14] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase representations using RNN encoder-decoder for statistical machine translation, in: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 2014, pp. 1724–1734. doi: 10.3115/v1/D14-1179.</p>
      <p>[15] H. Jaeger, W. Maass, J. Principe, Special issue on echo state networks and liquid state machines, Neural Networks 20 (2007) 287–289. doi: 10.1016/j.neunet.2007.04.001.</p>
      <p>[16] A. H. S. Hamdany, R. R. O. Al-Nima, L. H. Albak, Translating cuneiform symbols using artificial neural network, in: TELKOMNIKA Telecommunication, Computing, Electronics and Control, vol. 19, no. 2, 2021, pp. 438–443. doi: 10.12928/telkomnika.v19i2.16134.</p>
      <p>[17] P. B. Shekhawat, S. S. Dhande, Building an iris plant data classifier using neural network associative classification, International Journal of Advancements in Technology, 2 (2011) 491–506.</p>
      <p>[18] P. B. Shekhawat, S. S. Dhande, A classification technique using associative classification, International Journal of Computer Application, 20 (2011) 20–28.</p>
      <p>[19] M. A. Kadhim, A. Alam, H. Kaur, Design and implementation of intelligent agent and diagnosis domain tool for rule-based expert system, in: International Conference on Machine Intelligence and Research Advancement, 2013, pp. 619–622. doi: 10.5815/ijisa.2016.09.08.</p>
      <p>[20] S. Ticketek, O. Abdoun, J. Abouchabaka, An expert system for a constrained mobility management of human resources, in: 10th International Colloquium on Logistics and Supply Chain Management (LOGISTIQUA), Rabat, Morocco, 2017, pp. 53–58.</p>
      <p>[21] A. Asemi, A. Ko, M. Nowkarizi, Intelligent libraries: a review on expert systems, artificial intelligence, and robot, Library Hi Tech, 39 (2021) 412–434. doi: 10.1108/lht-02-2020-0038.</p>
    </sec>
  </body>
  <back>
    <ref-list />
  </back>
</article>