=Paper= {{Paper |id=Vol-2899/paper018 |storemode=property |title=Fuzzy artificial neural network for prediction and management tasks |pdfUrl=https://ceur-ws.org/Vol-2899/paper018.pdf |volume=Vol-2899 |authors=Kibriyo Mukhamadieva }} ==Fuzzy artificial neural network for prediction and management tasks== https://ceur-ws.org/Vol-2899/paper018.pdf
Fuzzy artificial neural network for prediction and management
tasks
Kibriyo Mukhamadieva 1
1
    Bukhara Engineering Technological Institute, Q.Murtazaev 15, Bukhara, 200100, Uzbekistan


                Abstract
                In the work the task of prediction of parameters at construction of systems of forecasting and
                management is considered. Existing solutions in the use of fuzzy neural networks are studied
                and analyzed. The structure of a fully coupled fuzzy artificial neural network without a layer
                of fuzzy rules, corresponding to the "classical" multilayer perceptron, is proposed. Tested for
                learning time and RMS error of different structures of the proposed fuzzy artificial neural
                network in predicting the performance consumed by a coal company. The results allow you to
                select the number of neurons in the hidden layers depending on the desired accuracy of the
                prediction of the output parameter.

                Keywords
                fuzzy logic, fuzzy artificial neural network, perceptron, identity function (IF)

1. Introduction

    Prediction issues are relevant for any human activity: weather forecasts, exchange rate forecasts,
socio-economic forecasts, etc. In order to make these forecasts, there is usually a sufficient amount of
data collected over many years, as well as data obtained from current observations or from experts.
However, there are a number of specific tasks: forecasting the demand for resources of an enterprise,
forecasting natural phenomena, etc., when information about an object, its parameters and states is
incomplete, uncertain and/or poorly formalized. Such problems arise when building control systems
for objects whose parameters are difficult to measure, or whose interrelations are not uniquely
established. An example is the water treatment system of an industrial enterprise, where the number
of indicators (concentrations of dissolved salts and gases in the water) reaches several dozen, all of
them are interrelated, and there is no unambiguous function for selecting the amount of reagents for
treatment from the initial chemical composition of the water. Another example is the prediction of
electricity demand for a coal mining enterprise, the energy consumption of which is a complex non-
stationary process, which is influenced by a significant number of mining-geological, technological,
industrial, climatic and other factors [1-2].
    Among the currently known models and forecasting methods we can distinguish [2]: multiplicative
models, dynamic linear and nonlinear models, threshold autoregressive models, Kalman filters, time
series, ARMAX models, nonparametric regression models, artificial neural networks (ANN), statistical
models, and hybrid models, such as fuzzy artificial neural networks (FANN).
    Various regressions and models derived from them, as well as time series, can be used
effectively when the dependence of the predicted indicator on time is continuous and smooth,
without jumps or discontinuities. When forecasting from non-periodic data series, a significant
number of series terms or regression coefficients must be taken into account to obtain acceptable
accuracy (at least within units of percent). Moreover, when processing non-periodic signals, both
regressions and time series give adequate results only within the interpolation interval. Artificial
neural networks are more flexible than the aforementioned models,
_________________________________________________________
III International Workshop on Modeling, Information Processing and Computing (MIP: Computing-2021), May 28, 2021, Krasnoyarsk,
Russia
EMAIL: mkb78@mail.ru (K. Mukhamadieva)
ORCID: 0000-0002-4436-6333 (K. Mukhamadieva)
             © 2021 Copyright for this paper by its authors.
             Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
             CEUR Workshop Proceedings (CEUR-WS.org)

which is explained by their full connectivity between the input and intermediate variables, as well
as by the possibility of introducing nonlinearity into the activation functions [3]. This explains
their expanding application to computational, statistical, prognostic and other problems. A feature
of "classical" ANNs is that training them requires a sufficiently large amount of initial data, which
is not always available. To overcome this limitation of "classical" ANNs, fuzzy artificial neural
networks were developed; they use the theory of fuzzy sets, which makes it possible to build
predictive models under uncertainty or with a shortage of input data. Such fuzzification can be
applied to the input and output data, to the neuron weights, and to the intermediate
transformations [4].

2. Main part

    To date, more than a dozen varieties of fuzzy artificial neural networks are known: Takagi-Sugeno-
Kang (TSK), Wang-Mendel (WM), adaptive ANFIS, FALCON, GARIC, NEFCON and FUN FANN, the
fuzzy multilayer perceptron, hybrid neural networks, as well as FANNs that are modifications of
"classical" ANNs (the Kohonen fuzzy self-organizing network, the fuzzy radial basis function network,
and others) [4-6].
    While "classical" ANNs are presented in the literature as a universal tool for processing data not
tied to a particular subject domain, FANNs are, as a rule, focused on a narrower range of tasks. Thus,
Takagi-Sugeno-Kang and Wang-Mendel FANNs are used for data classification and prediction [3],
ANFIS and GARIC in control systems [5], FALCON for parametric identification in adaptive
automatic control systems (ACS) with a tunable model [5], NEFCON in synthesis tasks when dealing
with a black-box object [5], FUN in control tasks in mobile robotics [6], and the fuzzy multilayer
perceptron for the identification of data sets [4]. In [4] a class of hybrid artificial neural networks is
identified separately; these use fuzzy neurons with fuzzy inputs and outputs and/or fuzzy weights,
but with a crisp activation function. A peculiarity of all the FANNs considered is the presence of a
special layer - a layer of rules - in which intermediate data transformations are performed using
fuzzy logic operations [4-5].
    This feature complicates the development of FANN training algorithms, because in addition to
calculating the value of synaptic weights of the links it is necessary to adjust the fuzzy inference rules
for the layer of rules, and, possibly, the parameters of the fuzzy layer membership functions.
    Thus, the number of fuzzy-layer rules depends on the number of input variables and the number of
their terms, and can be determined by the formula [4]:
                                               N_R = N_in^(N_T),                                        (1)
where N_R is the number of rules, N_in is the number of input variables, and N_T is the number of terms.
    Consequently, for a network with five input variables, each of which is represented by three terms,
the number of such rules is 125. As the number of rules grows, the training time of the FANN and the
complexity of its training algorithm grow accordingly.
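The combinatorial growth described by formula (1) is easy to check in code (the function name is ours; the formula matches the paper's 125-rule example):

```python
# Rule count for a FANN with a rule layer, per formula (1): N_R = N_in ** N_T.
def rule_count(n_inputs: int, n_terms: int) -> int:
    return n_inputs ** n_terms

print(rule_count(5, 3))  # 125, as in the text
print(rule_count(2, 3))  # 8
```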
    Some publications propose involving experts to tune the membership functions of the fuzzy layer [7],
which creates certain difficulties: first, as many experts as possible must be involved for an adequate
representation of knowledge, and second, each expert contributes their own subjective error. The
author of this article believes that the FANN should be tuned without involving experts, on the basis
of the available real data.
    We propose the structure of a fuzzy artificial neural network corresponding to the "classical"
multilayer perceptron [3] with two hidden layers. Figure 1 shows an example of such a FANN for the
case of two input variables.




Figure 1: Structure of the proposed FANN (layers 1-5)

   The proposed FANN contains five layers:
   1. the input data layer;
   2. the first hidden fuzzy layer (the fuzzification layer);
   3. the second hidden layer, containing neurons with a linear activation function, which sum the
   data obtained from the second layer multiplied by the synaptic weight vector w1;
   4. the layer summing the data obtained from the third layer multiplied by the synaptic weight
   vector w2;
   5. the output data layer.
   The number of neurons in the first hidden (fuzzy) layer is equal to the sum of the numbers of terms
of all input variables [8].
   The number of neurons in the second hidden layer is determined in the process of tuning the FANN.
At the initial stage it can be chosen equal to the number of neurons in the first hidden layer.
   The fourth layer is the resulting layer, so it consists of a single neuron with a linear activation function.
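The layer scheme above can be sketched as a forward pass in NumPy. This is an illustrative reconstruction, not the paper's code: the triangular membership shapes, breakpoints, and random weights are our assumptions; the structure 2-5-4-1 of Figure 1 is assumed (variable 1 with three terms, variable 2 with two).

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return max(0.0, min((x - a) / (b - a), (c - x) / (c - b)))

# Layer 2: fuzzification. 3 terms for x1 plus 2 terms for x2 = 5 fuzzy neurons,
# i.e. the sum of the term counts of all input variables.
def fuzzify(x1, x2):
    return np.array([
        tri(x1, 0.0, 0.25, 0.5), tri(x1, 0.25, 0.5, 0.75), tri(x1, 0.5, 0.75, 1.0),
        tri(x2, 0.0, 0.33, 0.66), tri(x2, 0.33, 0.66, 1.0),
    ])

rng = np.random.default_rng(0)
w1 = rng.normal(size=(4, 5))  # layer 3: 4 linear neurons over the 5 fuzzy outputs
w2 = rng.normal(size=(1, 4))  # layer 4: a single summing output neuron

def forward(x1, x2):
    mu = fuzzify(x1, x2)      # layer 2: fuzzification
    h = w1 @ mu               # layer 3: weighted sums, linear activation
    return (w2 @ h).item()    # layers 4-5: output value
```

Training would then adjust w1 and w2 by ordinary backpropagation, as for a perceptron, since no rule layer is present.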
   Using the notation accepted in the literature, the structure of the FANN shown in Figure 1 can be
written as 2-5-4-1. To explain the working principle of the first hidden layer of the proposed FANN,
consider the membership functions - the terms of the first hidden layer - for one input variable.
Suppose there are three such terms, as shown in Figure 2.




Figure 2: Terms of the first hidden FANN layer for a single input variable

   Then on the interval x1-x2 only the term T1 takes effect, on the interval x2-x3 the terms T1 and
T2, on the interval x3-x4 the terms T2 and T3, and on the interval x4-x5 only the term T3. This
means that as the input variable x changes over the interval x1-x5, the coefficients by which the
elements of the vector w1 are multiplied, and which then arrive at the neurons of the second hidden
layer, change as well. The membership functions for the first hidden layer of the FANN can be
selected in two ways. The first is to distribute the supports of the membership functions uniformly
over the whole range of possible values of the input variables, as shown in Figure 2; the second is
based on processing the initial statistical information [8].
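The interval behaviour described above can be illustrated with a small sketch (the breakpoints x1...x5 and the piecewise-linear term shapes are chosen by us for illustration):

```python
# Three overlapping terms as in Figure 2: on [x1, x2] only T1 fires,
# on [x2, x3] T1 and T2 overlap, on [x3, x4] T2 and T3, on [x4, x5] only T3.
x1, x2, x3, x4, x5 = 0.0, 0.2, 0.45, 0.7, 1.0

def ramp_up(x, lo, hi):    # rises from 0 at lo to 1 at hi
    return max(0.0, min(1.0, (x - lo) / (hi - lo)))

def ramp_down(x, lo, hi):  # falls from 1 at lo to 0 at hi
    return max(0.0, min(1.0, (hi - x) / (hi - lo)))

def T1(x): return 1.0 if x <= x2 else ramp_down(x, x2, x3)
def T2(x): return ramp_up(x, x2, x3) if x <= x3 else ramp_down(x, x3, x4)
def T3(x): return 1.0 if x >= x4 else ramp_up(x, x3, x4)

x = 0.3                      # inside [x2, x3]
print(T1(x), T2(x), T3(x))   # T1 and T2 are active, T3 is zero
```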
   To verify the proposed FANN structure, let us take the data of the half-hourly power consumption
profile of the coal mining enterprise for one day. Figure 3 shows the original data together with the
predicted values obtained with the proposed FANN for the structures 2-5-3-1 and 2-14-5-1.




Figure 3: Graphs of initial and forecast data

  I.    the original time series (dotted line);
 II.    time series synthesized by FANN (2-5-3-1);
III.    time series synthesized by FANN (2-14-5-1)

   To estimate the training time and RMS errors, FANNs with different structures were constructed.
Since the number of input variables is the same in all cases (two variables), the tested FANNs are
characterized by the number of neurons in the first hidden layer N_HL1 and in the second hidden
layer N_HL2. The results are shown in Table 1.

Table 1
Training time and RMS error of the tested FANN structures

              №        Structure of FANN        Training time, s       RMS error ε, %
                       N_HL1       N_HL2
              1          4           2                5.625                 6.15
              2          4           3                8.156                 5.37
              3          4           4                9.516                 4.4
              4          5           3                9.953                 5.86
              5          5           4               12.344                 5.86
              6          5           5               15.344                 5.86
              7          6           3               10.093                 8.67
              8          6           4               13.062                 5.92
              9          6           5               16.062                 2.40
              10         8           3               11.234                 0.01
              11         8           5               17.562                 0.011
              12         8           7               23.875                 0.008
              13        10           3               13.797                 2.25
              14        10           5               22.031                 2.26
              15        10           8               35.860                 2.26
              16        14           3               17.453                 0.0045
              17        14           4               23.328                 0.0055
              18        14           5               28.985                 0.0031

   Figures 4 and 5 show the dependences of the training time and the root-mean-square error, plotted
from Table 1. Since the minimum and maximum error values in Table 1 differ by several orders of
magnitude, the error values are plotted on a logarithmic ordinate axis.




Figure 4: Root-mean-square error of the FANN as a function of the number of neurons in the second
hidden layer:
   I.      a network with 4 IF;
  II.      a network with 5 IF;
 III.      a network with 6 IF;
 IV.       a network with 8 IF;
  V.       a network with 10 IF;
 VI.       a network with 14 IF




Figure 5: Training time of the FANN as a function of the number of neurons in the second hidden
layer:
   I.      a network with 4 IF;
  II.      a network with 5 IF;
 III.      a network with 6 IF;
 IV.       a network with 8 IF;
  V.       a network with 10 IF;
 VI.       a network with 14 IF.
    The dependences in Figures 4 and 5 show that the number of neurons in the first hidden layer has
the greatest influence on the training time and the error value.
    The prediction accuracy of the proposed FANN can be improved by adjusting the membership
functions of the hidden layers. For example, a slight modification of the parameters of the
membership functions of the first hidden layer, shown in Figures 6 and 7 for the FANN with
structure 2-5-5-1, reduces the error from 5.86% to 0.11% (version 1) or 0.19% (version 2).




Figure 6: Identity functions for the three terms of the first variable: solid line‐version 1; dotted line‐
version 2




Figure 7: Identity functions for two terms of the second variable: solid line‐version 1; dotted line‐
version 2




3. Conclusions

    Excluding the fuzzy-rules layer from the structure of a fuzzy artificial neural network built on a
perceptron with two hidden layers simplifies its training procedure and does not limit the researcher
in the number of fuzzy neurons. Excluding the rules layer also makes it possible to avoid the
subjective component introduced by experts. Fuzzification in the first hidden layer makes it possible
to train the neural network on a small amount of input data.

4. References

[1] S. Sulzberger, N. Tschichold-Gürman, S. Vestli, FUN: Optimization of Fuzzy Rule Based Systems
    Using Neural Networks, in: IEEE International Conference on Neural Networks (ICNN-93),
    San Francisco, California, volume 1, 1993, pp. 312–316.
[2] Ye. I. Kucherenko, V. A. Filatov, I. S. Tvoroshenko, R. N. Baidan, Intellectual technologies in
    decision-making technological complexes based on fuzzy interval logic, East European Journal of
    Advanced Technologies 2 (2005) 92–96.
[3] Coskun-Setirek, Z. Tanrikulu, Intelligent interactive voice response systems and customer
    satisfaction, International Journal of Advanced Trends in Computer Science and Engineering 8(1)
    (2019) 4–11. https://doi.org/10.30534/ijatcse/2019/02812019.
[4] D. Fontes, P. A. Pereira, F. Fontes, A Decision Support System for TV self-promotion Scheduling,
    International Journal of Advanced Trends in Computer Science and Engineering 8(2) (2019) 134-
    140. https://doi.org/10.30534/ijatcse/2019/06822019.
[5] S. Tvoroshenko, Structure and functions of intelligent decision-making tools in complex systems,
    Artificial Intelligence 4 (2004) 462–470.
[6] Ye. I. Kucherenko, I. S. Tvoroshenko, Operative evaluation of the space of states of complex
    distributed objects using fuzzy interval logic, Artificial Intelligence 3 (2011) 382–387.
[7] S. Tvoroshenko, Analysis of Decision-Making Processes, Intelligent Systems, Information
    Processing Systems 2 (2010) 248–253.
[8] S. Egorov, A. N. Shaykin, Logical modeling under uncertainty based on fuzzy interval Petri nets,
    News of the Russian Academy of Sciences, Theory and Control Systems 2 (2002) 134–139.



