=Paper=
{{Paper
|id=Vol-2870/paper126
|storemode=property
|title=Dynamic Stock Buffer Management Method Based on Linguistic Constructions
|pdfUrl=https://ceur-ws.org/Vol-2870/paper126.pdf
|volume=Vol-2870
|authors=Eugene Fedorov,Olga Nechyporenko
|dblpUrl=https://dblp.org/rec/conf/colins/FedorovN21
}}
==Dynamic Stock Buffer Management Method Based on Linguistic Constructions==
Eugene Fedorov and Olga Nechyporenko
Cherkasy State Technological University, Shevchenko blvd., 460, Cherkasy, 18006, Ukraine
Abstract
The paper proposes a dynamic stock buffer management method based on linguistic
constructions. The novelty of the research lies in the fact that a control system based on fuzzy
logic and linguistic constructions was created for the dynamic stock buffer management and
two artificial neuro-fuzzy network models were created. Three criteria for evaluating the
effectiveness were selected and the parameters of the proposed models were identified based
on the backpropagation algorithm in batch mode and the genetic algorithm, which are
oriented on the parallel information processing technology. The proposed models and
procedures for their parametric identification make it possible to increase the speed, accuracy
and reliability of decision making. The proposed dynamic stock buffer management method
based on linguistic constructions can be used in various intelligent systems that exercise
control in natural language.
Keywords
dynamic stock buffer management, theory of constraints, artificial neural network, fuzzy
inference systems, genetic algorithm, linguistic constructions
1. Introduction
Currently, one of the most pressing problems in the field of processing natural language structures is the insufficient speed, adequacy, and probability of correct recognition [1, 2]. As a result, the management of dynamic objects in natural language can be ineffective. Therefore, the development of methods that increase the efficiency of using linguistic structures for managing objects is an urgent task.
In this work, as a field of application of natural language constructions, we have chosen dynamic
stock buffer management, which is used for supply chain management and is based on the theory of
constraints [3-5].
To date, there are no computer systems for dynamic stock buffer management based on soft computing and linguistic constructions.
At present, artificial intelligence methods are used to control dynamic objects, with artificial neural
networks as the most popular [6-8].
The advantages of neural networks are [9-11]:
the possibility of their training and adaptation;
the ability to identify patterns in data and generalize them, i.e. to extract knowledge from data, so no prior knowledge of the object (for example, its mathematical model) is required;
parallel processing of information, which increases computing power.
The disadvantages of neural networks are [12-14]:
the difficulty in determining the structure of the network, since there are no algorithms for
calculating the number of layers and neurons in each layer for specific applications;
COLINS-2021: 5th International Conference on Computational Linguistics and Intelligent Systems, April 22–23, 2021, Kharkiv, Ukraine
EMAIL: fedorovee@ukr.net (E. Fedorov); olne@ukr.net (O. Nechyporenko)
ORCID: 0000-0003-3841-7373 (E. Fedorov); 0000-0002-3954-3796 (O. Nechyporenko)
© 2021 Copyright for this paper by its authors.
Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
the difficulty in forming a representative sample;
the high probability of the training and adaptation method hitting a local extremum;
the inaccessibility for human understanding the knowledge accumulated by the network, since
they are distributed among all elements of the neural network and are presented in the form of its
weight coefficients.
Recently, neural networks have been combined with fuzzy inference systems.
The advantages of fuzzy inference systems are [15, 16]:
representation of knowledge in the form of rules that are easily understandable by a person;
no need for an accurate model of the estimated object (they tolerate incomplete and inaccurate data).
The disadvantages of fuzzy inference systems are [17-19]:
the impossibility of their training and adaptation (the parameters of membership functions cannot be adjusted automatically);
the impossibility of parallel information processing, which limits computing power.
Since metaheuristics [20-22] and, in particular, genetic algorithms can be used to train the
parameters of membership functions instead of neural network learning algorithms, let’s note their
advantages and disadvantages.
The advantage of genetic algorithms for training neural networks is [23, 24] a decrease in the
probability of hitting a local extremum.
The disadvantages of genetic algorithms for training neural networks are [25-27]:
the solution search is slower than that of neural network training methods;
in the case of binary genes, an increase in the search space reduces the accuracy of the
solution with a constant chromosome length;
in the case of binary genes, there are encoding / decoding operations that reduce the algorithm
speed.
Due to this, creating a dynamic stock buffer management method, which will eliminate the
indicated disadvantages, is an urgent task.
The aim of the work is to increase the efficiency of dynamic stock buffer management with an
artificial neuro-fuzzy network, which is trained based on a genetic algorithm.
To achieve this goal, it is necessary to solve the following tasks:
1. Creation of a fuzzy dynamic stock buffer management system.
2. Creation of mathematical models of an artificial neuro-fuzzy network for dynamic stock buffer
management.
3. The choice of criteria for evaluating the effectiveness of mathematical models of an artificial
neuro-fuzzy network for dynamic stock buffer management.
4. Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network
for dynamic stock buffer management based on the backpropagation algorithm in batch mode.
5. Identification of the parameters of the mathematical model of an artificial neuro-fuzzy network
for the dynamic stock buffer management based on a genetic algorithm.
2. Creation of a fuzzy dynamic stock buffer management system
For dynamic stock buffer management, this work further improves a fuzzy inference system that represents knowledge about stock buffer management in the form of rules built from linguistic constructions easily accessible to human understanding, and that involves the following steps:
formation of linguistic variables;
formation of a base of fuzzy rules;
fuzzification;
aggregation of subconditions;
activation of conclusions;
aggregation of conclusions;
defuzzification.
The following crisp input variables were chosen:
current stock size x1 in pieces;
the time spent in the red zone of the stock buffer x2 in units of time;
the time spent in the green zone of the stock buffer x3 in units of time.
The following linguistic input variables were chosen:
the depth of the stay in the red zone of the stock buffer $\tilde{x}_1$, depending on the current stock size, with values $\tilde{\alpha}_{11}$ = big and $\tilde{\alpha}_{12}$ = small, whose ranges of values are the fuzzy sets $\tilde{A}_{11} = \{x_1 \mid \mu_{\tilde{A}_{11}}(x_1)\}$, $\tilde{A}_{12} = \{x_1 \mid 1 - \mu_{\tilde{A}_{11}}(x_1)\}$;
the duration of the stay in the red zone of the stock buffer $\tilde{x}_2$, with values $\tilde{\alpha}_{21}$ = long and $\tilde{\alpha}_{22}$ = short, whose ranges of values are the fuzzy sets $\tilde{A}_{21} = \{x_2 \mid \mu_{\tilde{A}_{21}}(x_2)\}$, $\tilde{A}_{22} = \{x_2 \mid 1 - \mu_{\tilde{A}_{21}}(x_2)\}$;
the duration of the stay in the green zone of the stock buffer $\tilde{x}_3$, with values $\tilde{\alpha}_{31}$ = long and $\tilde{\alpha}_{32}$ = short, whose ranges of values are the fuzzy sets $\tilde{A}_{31} = \{x_3 \mid \mu_{\tilde{A}_{31}}(x_3)\}$, $\tilde{A}_{32} = \{x_3 \mid 1 - \mu_{\tilde{A}_{31}}(x_3)\}$.
As the crisp output variable, the number $y$ of the action type that changes the stock buffer size was chosen.
As the linguistic output variable, the action $\tilde{y}$ that changes the stock buffer size was chosen, with values $\tilde{\beta}_1$ = increase, $\tilde{\beta}_2$ = decrease, $\tilde{\beta}_3$ = leave unchanged, whose ranges of values are the fuzzy sets $\tilde{B}_1 = \{y \mid \mu_{\tilde{B}_1}(y)\}$, $\tilde{B}_2 = \{y \mid \mu_{\tilde{B}_2}(y)\}$, $\tilde{B}_3 = \{y \mid \mu_{\tilde{B}_3}(y)\}$.
The proposed fuzzy rules take into account all possible states of the stock buffer (all possible
combinations of the values of the input linguistic variables) and the corresponding actions:
$R^1$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{11}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{21}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{31}$ THEN $\tilde{y}$ is $\tilde{\beta}_3$ ($F^1$),
$R^2$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{11}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{21}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{32}$ THEN $\tilde{y}$ is $\tilde{\beta}_1$ ($F^2$),
$R^3$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{11}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{22}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{31}$ THEN $\tilde{y}$ is $\tilde{\beta}_2$ ($F^3$),
$R^4$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{11}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{22}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{32}$ THEN $\tilde{y}$ is $\tilde{\beta}_1$ ($F^4$),
$R^5$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{12}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{21}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{31}$ THEN $\tilde{y}$ is $\tilde{\beta}_2$ ($F^5$),
$R^6$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{12}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{21}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{32}$ THEN $\tilde{y}$ is $\tilde{\beta}_1$ ($F^6$),
$R^7$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{12}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{22}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{31}$ THEN $\tilde{y}$ is $\tilde{\beta}_2$ ($F^7$),
$R^8$: IF $\tilde{x}_1$ is $\tilde{\alpha}_{12}$ AND $\tilde{x}_2$ is $\tilde{\alpha}_{22}$ AND $\tilde{x}_3$ is $\tilde{\alpha}_{32}$ THEN $\tilde{y}$ is $\tilde{\beta}_3$ ($F^8$),
where $F^r$ are the coefficients of the fuzzy rules $R^r$.
For example, fuzzy rule $R^2$ corresponds to the following knowledge: if the depth of the stock buffer's stay in the red zone is big, the stay in the red zone is long, and the stay in the green zone is short, then increase the size of the stock buffer.
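For illustration only, the eight rules can be encoded as a small table: each tuple holds the value indices chosen for the three linguistic inputs and the index of the output value. This sketch is not part of the original method, just a compact restatement of the rule base:

```python
# Rule base R^1..R^8 as tuples (j1, j2, j3, k): j_i selects the value of the
# i-th linguistic input (1 = big/long, 2 = small/short), k selects the output
# value (1 = increase, 2 = decrease, 3 = leave unchanged).
RULES = [
    (1, 1, 1, 3),  # R1: big,   long,  long  -> leave unchanged
    (1, 1, 2, 1),  # R2: big,   long,  short -> increase
    (1, 2, 1, 2),  # R3: big,   short, long  -> decrease
    (1, 2, 2, 1),  # R4: big,   short, short -> increase
    (2, 1, 1, 2),  # R5: small, long,  long  -> decrease
    (2, 1, 2, 1),  # R6: small, long,  short -> increase
    (2, 2, 1, 2),  # R7: small, short, long  -> decrease
    (2, 2, 2, 3),  # R8: small, short, short -> leave unchanged
]

# The antecedents enumerate all 2^3 combinations of input values exactly once.
print(len({r[:3] for r in RULES}))  # -> 8
```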
Let us determine the degree of truth of each subcondition of each rule using the membership function $\mu_{\tilde{A}_{ij}}(x_i)$.
The Gaussian function was chosen as the membership function of the subconditions, i.e.
$\mu_{\tilde{A}_{i1}}(x_i) = \exp\left(-\frac{(x_i - m_{i1})^2}{2\sigma_{i1}^2}\right)$, $i = 1,\dots,3$,
$\mu_{\tilde{A}_{i2}}(x_i) = 1 - \mu_{\tilde{A}_{i1}}(x_i)$, $i = 1,\dots,3$,
where $m_{ij}$ is the expected value and $\sigma_{ij}$ is the standard deviation.
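A minimal Python sketch of these two membership functions; the parameter values m and s below are hypothetical placeholders, not values from the paper:

```python
import math

def mu_A(i, j, x, m, s):
    """Membership of input x_i in fuzzy set A_ij: Gaussian for j = 1,
    its complement 1 - mu for j = 2 (indices i are 0-based here)."""
    g = math.exp(-((x - m[i]) ** 2) / (2 * s[i] ** 2))
    return g if j == 1 else 1.0 - g

# Hypothetical expected values m_i1 and standard deviations sigma_i1.
m = [50.0, 10.0, 10.0]
s = [15.0, 4.0, 4.0]

# At x_i = m_i1 the first value has full membership and its complement is zero.
print(mu_A(0, 1, 50.0, m, s), mu_A(0, 2, 50.0, m, s))  # -> 1.0 0.0
```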
The membership functions of the conditions take into account all possible states of the stock buffer (all possible combinations of the values of the linguistic variables) and are defined as
$\mu_{\tilde{A}^1}(\bar{x}) = \mu_{\tilde{A}_{11}}(x_1)\,\mu_{\tilde{A}_{21}}(x_2)\,\mu_{\tilde{A}_{31}}(x_3)$,
$\mu_{\tilde{A}^2}(\bar{x}) = \mu_{\tilde{A}_{11}}(x_1)\,\mu_{\tilde{A}_{21}}(x_2)\,\mu_{\tilde{A}_{32}}(x_3)$,
$\mu_{\tilde{A}^3}(\bar{x}) = \mu_{\tilde{A}_{11}}(x_1)\,\mu_{\tilde{A}_{22}}(x_2)\,\mu_{\tilde{A}_{31}}(x_3)$,
$\mu_{\tilde{A}^4}(\bar{x}) = \mu_{\tilde{A}_{11}}(x_1)\,\mu_{\tilde{A}_{22}}(x_2)\,\mu_{\tilde{A}_{32}}(x_3)$,
$\mu_{\tilde{A}^5}(\bar{x}) = \mu_{\tilde{A}_{12}}(x_1)\,\mu_{\tilde{A}_{21}}(x_2)\,\mu_{\tilde{A}_{31}}(x_3)$,
$\mu_{\tilde{A}^6}(\bar{x}) = \mu_{\tilde{A}_{12}}(x_1)\,\mu_{\tilde{A}_{21}}(x_2)\,\mu_{\tilde{A}_{32}}(x_3)$,
$\mu_{\tilde{A}^7}(\bar{x}) = \mu_{\tilde{A}_{12}}(x_1)\,\mu_{\tilde{A}_{22}}(x_2)\,\mu_{\tilde{A}_{31}}(x_3)$,
$\mu_{\tilde{A}^8}(\bar{x}) = \mu_{\tilde{A}_{12}}(x_1)\,\mu_{\tilde{A}_{22}}(x_2)\,\mu_{\tilde{A}_{32}}(x_3)$
or, replacing the product with the min operation, as
$\mu_{\tilde{A}^r}(\bar{x}) = \min\{\mu_{\tilde{A}_{1 j_1}}(x_1),\ \mu_{\tilde{A}_{2 j_2}}(x_2),\ \mu_{\tilde{A}_{3 j_3}}(x_3)\}$ with the same index combinations $(j_1, j_2, j_3)$, $r = 1,\dots,8$.
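Both aggregation variants (product and min) can be sketched as follows; the input vector and Gaussian parameters are hypothetical:

```python
import math

def fuzzify(x, m, s):
    """Gaussian membership of each input in the first value of its linguistic variable."""
    return [math.exp(-((xi - mi) ** 2) / (2 * si ** 2)) for xi, mi, si in zip(x, m, s)]

def condition(js, mu1, t_norm="prod"):
    """Aggregate the three subconditions of one rule; js picks value 1 or 2 per input."""
    degs = [mu1[i] if j == 1 else 1.0 - mu1[i] for i, j in enumerate(js)]
    if t_norm == "prod":
        out = 1.0
        for d in degs:
            out *= d
        return out
    return min(degs)

mu1 = fuzzify([55.0, 12.0, 3.0], m=[50.0, 10.0, 10.0], s=[15.0, 4.0, 4.0])
print(condition((1, 1, 2), mu1, "prod"), condition((1, 1, 2), mu1, "min"))
```

The product variant is always at most the min variant, since every factor lies in [0, 1].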
The membership functions of the conclusions connect all possible states of the stock buffer (all possible combinations of values of the linguistic variables) with the corresponding actions and are defined in the form
$\mu_{\tilde{C}^1}(\bar{x}, z) = \mu_{\tilde{A}^1}(\bar{x})\,\mu_{\tilde{B}_3}(z)\,F^1$,
$\mu_{\tilde{C}^2}(\bar{x}, z) = \mu_{\tilde{A}^2}(\bar{x})\,\mu_{\tilde{B}_1}(z)\,F^2$,
$\mu_{\tilde{C}^3}(\bar{x}, z) = \mu_{\tilde{A}^3}(\bar{x})\,\mu_{\tilde{B}_2}(z)\,F^3$,
$\mu_{\tilde{C}^4}(\bar{x}, z) = \mu_{\tilde{A}^4}(\bar{x})\,\mu_{\tilde{B}_1}(z)\,F^4$,
$\mu_{\tilde{C}^5}(\bar{x}, z) = \mu_{\tilde{A}^5}(\bar{x})\,\mu_{\tilde{B}_2}(z)\,F^5$,
$\mu_{\tilde{C}^6}(\bar{x}, z) = \mu_{\tilde{A}^6}(\bar{x})\,\mu_{\tilde{B}_1}(z)\,F^6$,
$\mu_{\tilde{C}^7}(\bar{x}, z) = \mu_{\tilde{A}^7}(\bar{x})\,\mu_{\tilde{B}_2}(z)\,F^7$,
$\mu_{\tilde{C}^8}(\bar{x}, z) = \mu_{\tilde{A}^8}(\bar{x})\,\mu_{\tilde{B}_3}(z)\,F^8$
or, replacing the product with the min operation, as
$\mu_{\tilde{C}^r}(\bar{x}, z) = \min\{\mu_{\tilde{A}^r}(\bar{x}),\ \mu_{\tilde{B}_{k_r}}(z)\}\,F^r$ with the same conclusion indices $k_r$, $r = 1,\dots,8$.
In this work, the membership functions $\mu_{\tilde{B}_k}(z)$ and the weighting coefficients $F^r$ of the fuzzy rules are defined as
$\mu_{\tilde{B}_k}(z) = [z = k] = \begin{cases}1, & z = k,\\ 0, & z \neq k,\end{cases}$ $k = 1,\dots,3$,
$F^1 = F^2 = F^3 = F^4 = F^5 = F^6 = F^7 = F^8 = 1$.
The membership function of the final conclusion is defined as
$\mu_{\tilde{C}}(\bar{x}, z) = 1 - (1 - \mu_{\tilde{C}^1}(\bar{x}, z)) \cdots (1 - \mu_{\tilde{C}^8}(\bar{x}, z))$, $z = 1,\dots,3$,
or as
$\mu_{\tilde{C}}(\bar{x}, z) = \max\{\mu_{\tilde{C}^1}(\bar{x}, z), \dots, \mu_{\tilde{C}^8}(\bar{x}, z)\}$, $z = 1,\dots,3$.
To obtain the number of the action type that changes the stock buffer size, the maximum membership function method is used:
$z^* = \arg\max_z \mu_{\tilde{C}}(\bar{x}, z)$, $z = 1,\dots,3$.
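Putting the stages together, the whole inference chain (min aggregation of subconditions, max aggregation of conclusions, arg-max defuzzification, with all $F^r = 1$) can be sketched as follows; the Gaussian parameters are hypothetical placeholders:

```python
import math

# Rule base: (j1, j2, j3, k) per rule R^1..R^8 (see the rule list in the text).
RULES = [(1, 1, 1, 3), (1, 1, 2, 1), (1, 2, 1, 2), (1, 2, 2, 1),
         (2, 1, 1, 2), (2, 1, 2, 1), (2, 2, 1, 2), (2, 2, 2, 3)]

def infer(x, m, s, rules=RULES):
    """Fuzzification -> min aggregation of subconditions -> activation (F^r = 1)
    -> max aggregation of conclusions -> defuzzification by maximum membership."""
    mu1 = [math.exp(-((xi - mi) ** 2) / (2 * si ** 2)) for xi, mi, si in zip(x, m, s)]
    y = [0.0, 0.0, 0.0]                          # mu_C(x, z) for z = 1, 2, 3
    for j1, j2, j3, k in rules:
        degs = [mu1[i] if j == 1 else 1.0 - mu1[i] for i, j in enumerate((j1, j2, j3))]
        y[k - 1] = max(y[k - 1], min(degs))
    return y.index(max(y)) + 1                   # z* = arg max_z mu_C(x, z)

# Deep in the red zone for a long time, short stay in the green zone -> "increase".
z = infer([50.0, 10.0, 0.0], m=[50.0, 10.0, 10.0], s=[15.0, 4.0, 4.0])
print(z)  # -> 1
```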
3. Creation of mathematical models of an artificial neuro-fuzzy network for
dynamic stock buffer management
For dynamic stock buffer management, the mathematical models of the artificial neural network have been further improved in this work through the use of pi-sigma, inverted-pi and min-max neurons, which makes it possible to simulate the stages of fuzzy inference that determine the structure of the models.
The structure of the model of an artificial neuro-fuzzy network in the form of a graph is shown in
Figure 1.
[Figure: layered graph with inputs x1, x2, x3 (layer 0), three hidden layers (layers 1-3), outputs y1, y2, y3 and the decision z (layer 4)]
Figure 1: The structure of the model of an artificial neuro-fuzzy network in the form of a graph
The input (zero) layer contains three neurons (one per input variable). The first hidden layer implements fuzzification and contains six neurons (one per value of the linguistic input variables). The second hidden layer implements the aggregation of subconditions and contains eight neurons (one per fuzzy rule). The third hidden layer implements the activation of conclusions and contains eight neurons (one per fuzzy rule). The output (fourth) layer implements the aggregation of conclusions and contains three neurons (one per value of the linguistic output variable).
The functioning of an artificial neuro-fuzzy network is presented as follows.
In the first layer, the membership functions of the subconditions are calculated:
$\mu_{\tilde{A}_{i1}}(x_i) = \exp\left(-\frac{(x_i - m_{i1})^2}{2\sigma_{i1}^2}\right)$, $i = 1,\dots,3$,
$\mu_{\tilde{A}_{i2}}(x_i) = 1 - \mu_{\tilde{A}_{i1}}(x_i)$, $i = 1,\dots,3$.
In the second layer, the membership functions of the conditions are calculated based on:
a pi-sigma neuron
$\mu_{\tilde{A}^r}(\bar{x}) = \prod_{i=1}^{3} \sum_{j=1}^{2} w_{ij}^{r}\, \mu_{\tilde{A}_{ij}}(x_i)$, $r = 1,\dots,8$;
a min-max neuron
$\mu_{\tilde{A}^r}(\bar{x}) = \min_{i} \max_{j}\, w_{ij}^{r}\, \mu_{\tilde{A}_{ij}}(x_i)$, $i = 1,\dots,3$, $j = 1, 2$, $r = 1,\dots,8$,
where the binary weights $w_{ij}^{r}$ select one value of each linguistic input variable per rule (each row lists the pairs $(w_{i1}^{r}, w_{i2}^{r})$ for $i = 1, 2, 3$):
$w^{1}: (1,0),\ (1,0),\ (1,0)$;
$w^{2}: (1,0),\ (1,0),\ (0,1)$;
$w^{3}: (1,0),\ (0,1),\ (1,0)$;
$w^{4}: (1,0),\ (0,1),\ (0,1)$;
$w^{5}: (0,1),\ (1,0),\ (1,0)$;
$w^{6}: (0,1),\ (1,0),\ (0,1)$;
$w^{7}: (0,1),\ (0,1),\ (1,0)$;
$w^{8}: (0,1),\ (0,1),\ (0,1)$.
In the third layer, the membership functions of the conclusions are calculated based on:
a pi neuron
$\mu_{\tilde{C}^r}(\bar{x}, z) = w^r\,\mu_{\tilde{A}^r}(\bar{x})\,\mu_{\tilde{B}^r}(z)$, $z = 1,\dots,3$, $r = 1,\dots,8$;
a min neuron
$\mu_{\tilde{C}^r}(\bar{x}, z) = w^r \min\{\mu_{\tilde{A}^r}(\bar{x}),\ \mu_{\tilde{B}^r}(z)\}$, $z = 1,\dots,3$, $r = 1,\dots,8$,
where $w^r = F^r$.
In the fourth layer, the membership functions of the final conclusion are calculated based on:
an inverted pi neuron
$y_z = \mu_{\tilde{C}}(\bar{x}, z) = 1 - \prod_{r=1}^{8} \left(1 - w_r^z\,\mu_{\tilde{C}^r}(\bar{x}, z)\right)$, $z = 1,\dots,3$;
a max neuron
$y_z = \mu_{\tilde{C}}(\bar{x}, z) = \max_r\, w_r^z\,\mu_{\tilde{C}^r}(\bar{x}, z)$, $z = 1,\dots,3$, $r = 1,\dots,8$,
with the routing weights
$(w_1^1, \dots, w_8^1) = (0, 1, 0, 1, 0, 1, 0, 0)$,
$(w_1^2, \dots, w_8^2) = (0, 0, 1, 0, 1, 0, 1, 0)$,
$(w_1^3, \dots, w_8^3) = (1, 0, 0, 0, 0, 0, 0, 1)$.
Thus, the mathematical model of an artificial neuro-fuzzy network based on pi-sigma and inverted pi neurons is presented as
$y_z = \mu_{\tilde{C}}(\bar{x}, z) = 1 - \prod_{r=1}^{8}\left(1 - w_r^z\, w^r \prod_{i=1}^{3} \sum_{j=1}^{2} w_{ij}^{r}\, \mu_{\tilde{A}_{ij}}(x_i)\; \mu_{\tilde{B}^r}(z)\right)$, $z = 1,\dots,3$. (1)
The mathematical model of an artificial neuro-fuzzy network based on min-max neurons is presented as
$y_z = \mu_{\tilde{C}}(\bar{x}, z) = \max_{r=1,\dots,8}\left\{ w_r^z\, w^r \min\left\{\min_{i=1,\dots,3} \max_{j=1,2} w_{ij}^{r}\, \mu_{\tilde{A}_{ij}}(x_i),\ \mu_{\tilde{B}^r}(z)\right\}\right\}$, $z = 1,\dots,3$. (2)
To make a decision on choosing an action that changes the size of the stock buffer, models (1)-(2) use the maximum membership function method:
$z^* = \arg\max_z y_z = \arg\max_z \mu_{\tilde{C}}(\bar{x}, z)$, $z = 1,\dots,3$.
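Model (2) can be sketched as an explicit layered pass with binary weight matrices ($w^r = F^r = 1$; the role of the indicator $\mu_{\tilde{B}^r}(z)$ is absorbed by the fourth-layer routing weights). The input values and Gaussian parameters below are hypothetical:

```python
import math

# Second-layer selection weights w^r_ij: W2[r][i][j] = 1 picks value j of input i for rule r.
W2 = [
    [[1, 0], [1, 0], [1, 0]],  # r=1
    [[1, 0], [1, 0], [0, 1]],  # r=2
    [[1, 0], [0, 1], [1, 0]],  # r=3
    [[1, 0], [0, 1], [0, 1]],  # r=4
    [[0, 1], [1, 0], [1, 0]],  # r=5
    [[0, 1], [1, 0], [0, 1]],  # r=6
    [[0, 1], [0, 1], [1, 0]],  # r=7
    [[0, 1], [0, 1], [0, 1]],  # r=8
]
# Fourth-layer routing weights w^z_r: rule r contributes to output z iff W4[z][r] = 1.
W4 = [
    [0, 1, 0, 1, 0, 1, 0, 0],  # z=1: increase  (rules R2, R4, R6)
    [0, 0, 1, 0, 1, 0, 1, 0],  # z=2: decrease  (rules R3, R5, R7)
    [1, 0, 0, 0, 0, 0, 0, 1],  # z=3: unchanged (rules R1, R8)
]

def forward(x, m, s):
    """Model (2) sketch: fuzzification, min-max neurons, max aggregation."""
    mu = []
    for i in range(3):                           # layer 1: fuzzification
        g = math.exp(-((x[i] - m[i]) ** 2) / (2 * s[i] ** 2))
        mu.append([g, 1.0 - g])
    a = [min(max(W2[r][i][j] * mu[i][j] for j in range(2)) for i in range(3))
         for r in range(8)]                      # layer 2: min-max neurons
    return [max(W4[z][r] * a[r] for r in range(8)) for z in range(3)]  # layers 3-4

y = forward([50.0, 10.0, 0.0], m=[50.0, 10.0, 10.0], s=[15.0, 4.0, 4.0])
print(y.index(max(y)) + 1)  # -> 1 (increase the buffer)
```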
4. The choice of criteria for evaluating the effectiveness of mathematical
models of an artificial neuro-fuzzy network for dynamic stock buffer
management
In this work, to assess the parametric identification of the mathematical models of an artificial neuro-fuzzy network (1)-(2), the following criteria are selected:
the accuracy criterion, which means choosing the parameter values $(m_{11}, m_{21}, m_{31}, \sigma_{11}, \sigma_{21}, \sigma_{31})$ that minimize the mean square error (the difference between the model output and the desired output)
$F = \frac{1}{3P} \sum_{p=1}^{P} \sum_{z=1}^{3} (y_{pz} - d_{pz})^2 \to \min$, (3)
where $d_{pz}$ is the response received from the control object, $d_{pz} \in \{0, 1\}$, $y_{pz}$ is the model response, and $P$ is the number of test implementations;
the reliability criterion, which means choosing the parameter values $(m_{11}, m_{21}, m_{31}, \sigma_{11}, \sigma_{21}, \sigma_{31})$ that minimize the probability of making a wrong decision
$F = \frac{1}{P} \sum_{p=1}^{P} \left[\arg\max_{z=1,\dots,3} y_{pz} \neq \arg\max_{z=1,\dots,3} d_{pz}\right] \to \min$, (4)
where the bracket $[\cdot]$ equals 1 if the arg max of the model response differs from the arg max of the desired response, and 0 otherwise;
the performance criterion, which means choosing the parameter values that minimize the computational complexity
$F = T \to \min$. (5)
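Criteria (3) and (4) can be computed directly from the model responses and desired responses; a small sketch with made-up values:

```python
def criteria(Y, D):
    """Accuracy (3): mean square error over P samples and 3 outputs.
    Reliability (4): fraction of samples where the arg max of the model
    response and of the desired response differ."""
    P = len(Y)
    mse = sum((y - d) ** 2 for yp, dp in zip(Y, D) for y, d in zip(yp, dp)) / (3 * P)
    err = sum(yp.index(max(yp)) != dp.index(max(dp)) for yp, dp in zip(Y, D)) / P
    return mse, err

Y = [[0.9, 0.1, 0.0], [0.2, 0.7, 0.1]]   # model responses y_pz (hypothetical)
D = [[1, 0, 0], [0, 0, 1]]               # desired responses d_pz
mse, err = criteria(Y, D)
print(round(mse, 4), err)  # -> 0.2267 0.5
```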
5. Identification of the parameters of the mathematical model of an artificial
neuro-fuzzy network for dynamic stock buffer management based on the
backpropagation algorithm in batch mode
For the identification of the parameters of the mathematical model of an artificial neuro-fuzzy network for dynamic stock buffer management (1), the procedure based on gradient descent has been further improved in this work: only the parameter vector $(m_{11}, m_{21}, m_{31}, \sigma_{11}, \sigma_{21}, \sigma_{31})$ is computed, with no need to compute the weights, and a batch learning mode is used to accelerate learning. The procedure involves the following steps:
1. Initialization of the expected values $m_{ij}$ and standard deviations $\sigma_{ij}$, $i = 1,\dots,3$, $j = 1, 2$, by means of a uniform distribution on the interval (0, 1).
2. Setting the training set $\{(\bar{x}_p, \bar{d}_p) \mid \bar{x}_p \in R^3,\ \bar{d}_p \in \{0,1\}^3\}$, $p = 1,\dots,P$, where $\bar{x}_p$ is the $p$-th training input vector, $\bar{d}_p$ is the $p$-th training output vector, and $P$ is the size of the training set. The iteration number is set to n = 1.
3. Calculation of the output signal according to model (1) (forward pass)
$y_{pz} = \mu_{\tilde{C}}(\bar{x}_p, z) = 1 - \prod_{r=1}^{8}\left(1 - w_r^z\, w^r \prod_{i=1}^{3} \sum_{j=1}^{2} w_{ij}^{r}\, \mu_{\tilde{A}_{ij}}(x_{pi})\; \mu_{\tilde{B}^r}(z)\right)$, $z = 1,\dots,3$.
4. Calculation of the ANN error energy based on criterion (3)
$E = \frac{1}{3P} \sum_{p=1}^{P} \sum_{z=1}^{3} (y_{pz} - d_{pz})^2$.
5. Adjustment of the parameters of the membership functions of the subconditions of model (1) (backward pass)
$m_{i1} = m_{i1} - \eta \frac{\partial E}{\partial m_{i1}}$, $i = 1,\dots,3$,
$\sigma_{i1} = \sigma_{i1} - \eta \frac{\partial E}{\partial \sigma_{i1}}$, $i = 1,\dots,3$,
where $\eta$ is the parameter that determines the learning rate (for large $\eta$, learning is faster, but the risk of obtaining an incorrect solution increases), $0 < \eta < 1$.
6. Checking the termination condition.
If $E > \varepsilon$, then set n = n + 1 and go to step 3.
The value $\varepsilon$ is determined experimentally.
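The loop of steps 3-6 can be sketched as follows. For brevity, this illustration approximates the partial derivatives with central finite differences and replaces the error energy E of criterion (3) with a toy quadratic function; both are assumptions of the sketch, not the paper's analytic gradients:

```python
def train_batch(error, params, eta=0.05, eps=1e-3, max_iter=1000, h=1e-6):
    """Batch gradient descent on (m11, m21, m31, s11, s21, s31): repeat the
    forward/backward passes while the error energy E exceeds epsilon."""
    n = 0
    while error(params) > eps and n < max_iter:
        grad = []
        for k in range(len(params)):
            p_hi, p_lo = params[:], params[:]
            p_hi[k] += h
            p_lo[k] -= h
            grad.append((error(p_hi) - error(p_lo)) / (2 * h))  # central difference
        params = [p - eta * g for p, g in zip(params, grad)]    # step 5 update
        n += 1
    return params

# Toy quadratic standing in for the batch error energy E (hypothetical).
E = lambda p: sum((pi - 1.0) ** 2 for pi in p)
print([round(p, 1) for p in train_batch(E, [0.0] * 6, eta=0.1)])  # -> [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```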
6. Identification of the parameters of the mathematical model of an artificial
neuro-fuzzy network for dynamic stock buffer management based on a
genetic algorithm
For identification of the parameters of the mathematical model of an artificial neuro-fuzzy network
for dynamic stock buffer management (2), the procedure for determining these parameters has been
further improved in the work by using a combination of a genetic algorithm and simulated annealing
to accelerate and improve the accuracy of parameter identification, which involves the following
steps:
defining individuals of the initial population;
defining a fitness function;
selecting a reproduction (selection) operator;
selecting a crossing-over operator;
selecting the mutation operator;
selecting the reduction operator;
defining a stop condition.
Real genes were selected for the following reasons:
the ability to search in large spaces, which is difficult to do in the case of binary genes, when
an increase in the search space reduces the accuracy of the solution with a constant chromosome
length;
the ability to customize solutions locally;
the absence of encoding / decoding operations, which are necessary for binary genes,
increases the speed of the algorithm;
the proximity to the formulation of most applied problems (each real gene is responsible for
one variable or parameter, which is impossible in the case of binary genes).
The chromosome representing the $i$-th individual of the population $H = \{h_i\}$, $i = 1,\dots,I$, is the vector of expected values and standard deviations of the input features $h_i = (m_{11}, m_{21}, m_{31}, \sigma_{11}, \sigma_{21}, \sigma_{31})$.
Criterion (4) was chosen as a fitness function.
To select parameter vectors for crossover and mutation, the following effective combination is used as the reproduction operator:
$P(h_i) = \exp(-1/g(n))\,\frac{1}{|H|} + (1 - \exp(-1/g(n)))\,\frac{1}{|H|}\left(a - (2a - 2)\frac{i - 1}{|H| - 1}\right)$, $i = 1,\dots,I$.
Thus, in the early stages of the genetic algorithm, uniform selection is used to explore the entire
search space (random selection of chromosomes), and in the final stages, linearly ordered selection is
used to make the search directed (the current best chromosomes are preserved). This combination
does not require scaling and can be used while minimizing the fitness function.
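One plausible reading of this reproduction operator blends uniform selection with linearly ordered (ranking) selection under the annealing schedule g(n); the sketch below assumes the ranking parameter a lies in [1, 2] and that individuals are sorted from best to worst:

```python
import math

def selection_probs(num, g, a=1.5):
    """P(h_i): exp(-1/g)-weighted mix of uniform selection (1/num) and linear
    ranking selection; individuals assumed sorted best (i = 1) to worst (i = num)."""
    lam = math.exp(-1.0 / g)
    return [lam / num + (1 - lam) * (a - (2 * a - 2) * (i - 1) / (num - 1)) / num
            for i in range(1, num + 1)]

p_early = selection_probs(10, g=100.0)   # early stage: nearly uniform
p_late = selection_probs(10, g=0.1)      # final stage: favours the best individuals
print(round(sum(p_early), 6), round(sum(p_late), 6))  # -> 1.0 1.0
```

Both mixtures sum to 1, so no scaling of the fitness function is needed, consistent with the text.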
To combine two variants of the vector of parameters selected by the reproduction operator,
uniform crossover is used as a crossover operator.
The choice of parents is carried out through the following effective combination - in the early
stages of the genetic algorithm, outbreeding is used, which provides an exploration of the entire
search space, and in the final stages, inbreeding is used, which makes the search directed. This
combination does not require scaling and can be used while minimizing the fitness function.
Once the parents are selected, crossover is carried out and two offspring are produced.
To provide a variety of parameter vector variants after crossover, needed for a global search for the optimal vector, a heterogeneous (non-uniform) mutation is used.
The mutation step is defined as
$\Delta = \begin{cases} (Max_j - h_{ij})\left(1 - r^{(1 - n/N)^b}\right), & r < 0.5,\\ (h_{ij} - Min_j)\left(1 - r^{(1 - n/N)^b}\right), & r \geq 0.5,\end{cases}$
where $Max_j$, $Min_j$ are the maximum and minimum values of the $j$-th gene,
$n$ is the iteration number,
$N$ is the maximum number of iterations,
$r$ is a random number obtained from a uniform distribution, $r \sim U(0, 1)$,
$b$ is the parameter that controls the rate at which the mutation step decreases, $b > 0$.
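A sketch of this heterogeneous (non-uniform) mutation; the gene bounds, iteration counts, and value of b below are illustrative assumptions:

```python
import random

def mutate_gene(h, lo, hi, n, N, b=2.0, rng=random):
    """Non-uniform mutation: the step magnitude 1 - r^((1 - n/N)^b) shrinks to
    zero as the iteration n approaches the maximum N; r also picks the direction."""
    r = rng.random()
    step = 1.0 - r ** ((1.0 - n / N) ** b)
    if r < 0.5:
        return h + (hi - h) * step   # move towards the upper bound Max_j
    return h - (h - lo) * step       # move towards the lower bound Min_j

rng = random.Random(1)
early = [mutate_gene(0.5, 0.0, 1.0, n=1, N=100, rng=rng) for _ in range(3)]
late = [mutate_gene(0.5, 0.0, 1.0, n=99, N=100, rng=rng) for _ in range(3)]
print(early, late)  # late mutations stay very close to the original gene value
```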
To simulate annealing, the mutation probability is defined as
$P_m = P_0 \exp(-1/g(n))$, $g(n) = \beta\, g(n - 1)$, $g(0) = T_0$,
where $P_0$ is the initial mutation probability,
$T_0$ is the initial annealing temperature, $T_0 > 0$,
$\beta$ is the parameter that determines the rate of decrease of the annealing temperature, $0 < \beta < 1$.
Thus, in the early stages of the genetic algorithm, a mutation with a large step occurs with a high
probability, which ensures the exploration of the entire search space, and at the final stages, the
probability of a mutation and its step tend to zero, which makes the search directed.
The reduction operator forms a new population from the previous population and the parameter vectors obtained by crossover and mutation. The $(\mu + \lambda)$ scheme is used as the reduction operator; it does not require scaling and can be used when minimizing the fitness function.
The paper proposes the following stop condition:
$1 - \max_i F(h_i) \geq \varepsilon$, $i = 1,\dots,I$.
The value $\varepsilon$ is determined experimentally.
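A compact, self-contained sketch of the whole scheme: selection from the better half, uniform crossover, non-uniform mutation whose probability is annealed as $P_m = P_0 \exp(-1/g(n))$, and $(\mu + \lambda)$ reduction. The sphere test function and all numeric settings are illustrative assumptions, not the paper's experimental setup:

```python
import math, random

def ga_minimize(fitness, bounds, pop=30, N=200, P0=0.5, T0=10.0, beta=0.9, b=2.0, seed=1):
    """GA sketch with annealed mutation and elitist (mu + lambda) reduction."""
    rng = random.Random(seed)
    H = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    g = T0
    for n in range(1, N + 1):
        g *= beta                                   # g(n) = beta * g(n-1), g(0) = T0
        Pm = P0 * math.exp(-1.0 / g)                # annealed mutation probability
        H.sort(key=fitness)
        children = []
        for _ in range(pop):
            p1, p2 = rng.sample(H[: pop // 2], 2)   # parents from the better half
            child = [a if rng.random() < 0.5 else c for a, c in zip(p1, p2)]  # uniform crossover
            for k, (lo, hi) in enumerate(bounds):
                if rng.random() < Pm:               # heterogeneous mutation
                    r = rng.random()
                    step = 1.0 - r ** ((1.0 - n / N) ** b)
                    child[k] += (hi - child[k]) * step if r < 0.5 else -(child[k] - lo) * step
            children.append(child)
        H = sorted(H + children, key=fitness)[:pop] # (mu + lambda) reduction
    return H[0]

# Minimize a 6-dimensional sphere function as a stand-in for the fitness criterion.
best = ga_minimize(lambda h: sum(x * x for x in h), [(-5.0, 5.0)] * 6)
print([round(x, 2) for x in best])
```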
7. Numerical study
A numerical study of the proposed mathematical models of an artificial neuro-fuzzy network and a
conventional multilayer perceptron was carried out in the Matlab package using:
Deep Learning Toolbox (to identify the parameters of the model of a conventional multilayer
perceptron based on backpropagation),
Global Optimization Toolbox (to identify the parameters of the model of a conventional
multilayer perceptron and the proposed model of an artificial neuro-fuzzy network (2) based on a
genetic algorithm),
Fuzzy Logic Toolbox (to identify the parameters of the proposed model of an artificial neuro-
fuzzy network (1) based on backpropagation).
Table 1 shows the computational complexity, root mean square error (RMS), and probability of making an incorrect decision for dynamic stock buffer management, obtained on the data set of the logistics company Ekol Ukraine using an artificial neural network of the multilayer perceptron (MLP) type with backpropagation (BP) and with a genetic algorithm (GA), and the proposed models (1) and (2) with BP and GA, respectively. The MLP had 2 hidden layers (each consisting of 6 neurons, like the input layer). The parameter value of 0.05 was established experimentally.
Table 1
Computational complexity, root mean square error (RMS), and probability of making an incorrect decision for dynamic stock buffer management

Parameter identification model and method | RMS | Probability of an incorrect decision | Computational complexity
Regular MLP with BP in sequential mode | 0.5 | 0.2 | T = PN
Regular MLP with GA without parallelism | 0.4 | 0.15 | T = PNI
Author's model (1) with BP in batch mode, Gaussian membership function | 0.1 | 0.04 | T = N
Author's model (2) with GA with parallelism, Gaussian membership function | 0.05 | 0.02 | T = N
Author's model (1) with BP in batch mode, bell-shaped membership function | 0.12 | 0.05 | T = N
Author's model (2) with GA with parallelism, bell-shaped membership function | 0.07 | 0.03 | T = N
According to Table 1, the best results are obtained by model (2) with parameter identification based on the GA and with the Gaussian membership function.
Based on the experiments carried out, it can be argued that the parameter identification procedure based on the genetic algorithm is more effective than training based on backpropagation, owing to the reduced probability of hitting a local extremum, the automatic selection of the model structure, and the use of parallel information processing technology.
8. Conclusions
1. To solve the problem of increasing the efficiency of controlling dynamic objects in natural language, the corresponding artificial intelligence methods were investigated using the example of dynamic stock buffer management. These studies have shown that, as of today, the most effective approach is the use of artificial neural networks in combination with a fuzzy inference system and a genetic algorithm.
2. The novelty of the research lies in the fact that the proposed method of dynamic stock buffer
management is based on fuzzy logic and linguistic constructions; provides a representation of
knowledge about stock buffer management in the form of rules with linguistic constructions that
are easily understandable by a person; reduces computational complexity, root mean square error
and the probability of making an incorrect decision by automatically choosing the structure of the
model, reducing the likelihood of hitting a local extremum and using the technology of parallel
information processing for the genetic algorithm and backpropagation in batch mode.
3. As a result of the numerical study, it was found that the proposed method of neuro-fuzzy
dynamic stock buffer management based on the linguistic constructions provides the probability of
incorrect decisions on the dynamic stock buffer management of 0.02, and the root mean square
error of 0.05.
4. Further research prospects are the use of the proposed method of neuro-fuzzy dynamic
management of the stock buffer based on linguistic constructions for various intelligent control
systems for dynamic objects in natural language.
9. References
[1] P. F. Dominey, M. Hoen, T. Inui, A neurolinguistic model of grammatical construction
processing, in: Journal of Cognitive Neuroscience, vol. 18, issue 12, 2006, pp. 2088–2107. doi:
10.1162/jocn.2006.18.12.2088.
[2] N. Khairova, N. Sharonova, Modeling a logical network of relations of semantic items in super
phrasal unities, in: Proceedings of the 2011 9th East-West Design & Test Symposium (EWDTS),
2011, pp. 360-365. doi: 10.1109/EWDTS.2011.6116585.
[3] J. F. Cox, J.G. Schleher, Theory of Constraints Handbook, New York, NY, McGraw-Hill, 2010.
[4] E. M. Goldratt, My saga to improve production, Selected Readings in Constraints Management,
Falls Church, VA: APICS (1996) 43-48.
[5] E. M. Goldratt, Production: The TOC Way (Revised Edition) including CD-ROM Simulator and
Workbook, Revised edition, Great Barrington, MA: North River Press, 2003.
[6] S. N. Sivanandam, S. Sumathi, S. N. Deepa, Introduction to Neural Networks using Matlab 6.0,
The McGraw-Hill Comp., Inc., New Delhi, 2006.
[7] S. Haykin, Neural networks and Learning Machines, Upper Saddle River, New Jersey: Pearson
Education, Inc., 2009.
[8] K.-L. Du, K. M. S. Swamy, Neural Networks and Statistical Learning, Springer-Verlag, London,
2014.
[9] E. Fedorov, T. Utkina, О. Nechyporenko, Forecast method for natural language constructions
based on a modified gated recursive block, in: CEUR Workshop Proceedings, vol. 2604, 2020,
pp. 199-214.
[10] Z. Zhang, Z. Tang, C. Vairappan, A novel learning method for Elman neural network using local
search, in: Neural Information Processing – Letters and Reviews, vol. 11, 2007, pp. 181–188.
[11] R. Dey, F. M. Salem, Gate-Variants of Gated Recurrent Unit (GRU) Neural Networks, arXiv:1701.05923, 2017. URL: https://arxiv.org/ftp/arxiv/papers/1701/1701.05923.pdf.
[12] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, Y. Bengio, Learning phrase
representations using RNN encoder-decoder for statistical machine translation, in: Proceedings of
the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha,
Qatar, 2014, pp. 1724–1734. doi: 10.3115/v1/D14-1179.
[13] H. Jaeger, W. Maass, J. Príncipe, Special issue on echo state networks and liquid state machines,
Neural Networks 20 (2007) 287–289. doi: 10.1016/j.neunet.2007.04.001.
[14] A. H. S. Hamdany, R. R. O. Al-Nima, L. H. Albak, Translating cuneiform symbols using
artificial neural network, in: TELKOMNIKA Telecommunication, Computing, Electronics and
Control, volume 19, No. 2, 2021, pp. 438-443. doi: 10.12928/telkomnika.v19i2.16134
[15] A. Rotshtein, S. Shtovba, I. Mostav, Fuzzy rule based innovation projects estimation, in:
Proceedings Joint 9th IFSA World Congress and 20th NAFIPS International Conference, 2001,
pp. 122-126.
[16] G. P. Reddya, Y. Deepika, K. S. Prasad, G. K. Kumar, Fuzzy logics associated with neural
networks in the real time for better world, in: Proceedings of the International Conference on
Advancements in Aeromechanical Materials for Manufacturing (ICAAMM-2016), MLR Institute
of Technology, Hyderabad, Telangana, India, volume 4, 2017, pp. 8507-8516. doi:
10.1016/j.matpr.2017.07.197
[17] V. T. Yen, Y. N. Wang, P. V. Cuong, Recurrent fuzzy wavelet neural networks based on robust
adaptive sliding mode control for industrial robot manipulators, in: Neural Computing and
Applications, volume 31, 2019, pp. 6945–6958. doi: 10.1007/s00521-018-3520-3
[18] H. Das, B. Naik, H. S. Behera, Medical disease analysis using neuro-fuzzy with feature
extraction model for classification, in: Informatics in Medicine Unlocked, volume 18, 2020
pp. 100288. doi: 10.1016/j.imu.2019.100288
[19] S. Balochian, E. Ebrahimi, Parameter optimization via cuckoo optimization algorithm of fuzzy
controller for liquid level control, Journal of Engineering (2013). doi: 10.1155/2013/982354.
[20] S. Subbotin, A. Oliinyk, V. Levashenko, E. Zaitseva, Diagnostic rule mining based on artificial
immune system for a case of uneven distribution of classes in sample, Communications, volume
3 (2016) 3-11.
[21] O. O. Grygor, E. E. Fedorov, T. Yu. Utkina, A. G. Lukashenko, K. S. Rudakov, D. A. Harder,
V. M. Lukashenko, Optimization method based on the synthesis of clonal selection and
annealing simulation algorithms, Radio Electronics, Computer Science, Control (2019) 90-99.
doi: 10.15588/1607-3274-2019-2-10.
[22] E. Fedorov, V. Lukashenko, T. Utkina, A. Lukashenko, K. Rudakov, Method for parametric
identification of Gaussian mixture model based on clonal selection algorithm, in: CEUR
Workshop Proceedings, vol. 2353, 2019. pp. 41-55.
[23] A. P. Engelbrecht, Computational Intelligence: an introduction, Chichester, West Sussex, Wiley
& Sons, 2007.
[24] X.-S. Yang, Nature-inspired Algorithms and Applied Optimization, Charm: Springer, 2018.
[25] A. Nakib, El-G. Talbi, Metaheuristics for Medicine and Biology, Berlin: Springer-Verlag, 2017.
[26] I. Loshchilov, CMA-ES with restarts for solving CEC 2013 benchmark problems, in:
Proceedings of IEEE congress on evolutionary computation (CEC 2013), 2013, pp. 369–376.
doi: 10.1109/CEC.2013.6557593.
[27] J. Byrne, E. Hemberg, M. O’Neill, A. Brabazon, A Local Search Interface for Interactive
Evolutionary Architectural Design, in: Proceedings of European Conference on Evolutionary and
Biologically Inspired Music, Sound, Art and Design (Evo-MUSART 2012), Lecture Notes in
Computer Science 7247, 2012, pp. 23–34.