    Forecast Method for Audit Data Analysis by Modified
                  Liquid State Machine

      Tatiana Neskorodieva1[0000-0003-2474-7697], Eugene Fedorov2[0000-0003-3841-7373],
                           Ivan Izonin3[0000-0002-9761-0096]
             1 Vasyl Stus Donetsk National University, Vinnytsia, Ukraine
2 Cherkasy State Technological University, Cherkasy, Shevchenko blvd., 460, 18006, Ukraine
                   3 Lviv Polytechnic National University, Lviv, Ukraine


                        t.neskorodieva@donnu.edu.ua
                            fedorovee75@ukr.net
                           ivanizonin@gmail.com



       Abstract. The forecast problem is considered for the automation of audit data
       analysis. A forecast neural network model based on a modified liquid state
       machine is proposed, a criterion for estimating the efficiency of the forecast
       neural network model is chosen, and a method for its parametric identification
       is offered. This increases forecast efficiency by reducing computational
       complexity and improving forecast accuracy. Software implementing the
       proposed method has been developed and studied on the problem of forecasting
       indicators for data checking of the "paid-received" display.

       Keywords: Audit Data, Automatic Analysis, "Paid-Received" Display, Fore-
       cast Method, Neural Network, Modified Liquid State Machine.


1      Introduction

In the development of international and national economies, and of the IT industry in
particular, the following basic tendencies can be distinguished: the realization of
digital transformations, the formation of the digital economy, and the globalization of
socio-economic processes and of the IT accompanying them [1]. These processes give
rise to global, multilevel hierarchical structures of heterogeneous, multivariable,
multifunctional connections, interactions and cooperation of managing subjects
(objects of audit), about which large volumes of information have been accumulated
in the information systems of accounting, management and audit. The
interconnections mentioned above have a network structure at every level.
   With regard to the enterprises of Ukraine [2], it is noted that under the rapid
development of infrastructure and the deepening informatization of economic
processes, the efficiency of enterprises, establishments and organizations
increasingly depends on the information technologies (IT) used in their management
systems. Today the
information technology environment (IT environment), as a structural constituent of
an organization, is a complex system that unites various informational, software,
technical, human and other kinds of resources to achieve the aims of the enterprise.
   Currently, a key scientific and technical problem of modern information
technologies in the financial and economic sphere is the formation of a design
methodology [3] and the creation of decision support systems (DSS) for enterprise
audit based on the automated analysis of large volumes of data on the financial and
economic activity and state of enterprises, in order to expand the functional
capabilities and increase the efficiency and versatility of IT audit [4-6].


2      Related Works

One element of analysis among the audit tasks is the forecast of economic indicators.
The available approaches include methods of short-term forecasting, such as
regression [7], structural [8, 9] and logical [10-12] methods, and methods of
long-term forecasting, such as autoregression [13] and exponential smoothing [14]. A
compromise between the above-mentioned groups is the connectionist approach
[15-17], which can use neural networks for both short-term and long-term forecasts.
At the same time, connectionist methods usually use parametric identification based
on local search, which reduces forecast accuracy [18].
   Regression methods [7] produce a forecast based on a linear or nonlinear
regression model. Their advantages are simplicity of model construction and design
transparency; their disadvantage is the high computational complexity of parametric
identification. Structural methods [8, 9] produce a forecast based on a Markov chain;
their advantages are likewise simplicity of model construction and design
transparency. Logical methods [10-12] produce a forecast based on a regression tree;
their advantages are design transparency and the speed of tree construction, while
their disadvantage is the problem of constructing an optimal tree. The common
disadvantage of these groups of methods is the impossibility of long-term
forecasting, which results in insufficient forecast accuracy.
   Autoregression methods [13] produce a forecast based on a linear autoregressive
model. Their advantages are simplicity of model construction and design
transparency; their disadvantages are the high computational complexity of
parametric identification and the inability to model nonlinear processes, which
results in insufficient forecast accuracy. Exponential smoothing methods [14]
produce a forecast based on a linear model with one (single smoothing), two (double
smoothing) or three (triple smoothing) parameters. Their advantages are simplicity of
model construction, design transparency, and the fact that a result can be obtained
faster than with other models. Their disadvantages are low adaptivity and the
inability to model nonlinear processes, which results in insufficient forecast
accuracy.
   In recent years, a class of analytical procedures for detecting financial fraud based
on data mining methods has been formed [15]. In practice, various data mining
methods are used to improve the accuracy of fraud detection, namely: K-nearest
neighbors, decision trees [16], fuzzy logic [17], logistic models, Bayesian belief
networks, the naive Bayes algorithm, the Beneish M-Score, Benford's law, and the
Altman Z-Score [18]. Connectionist methods [19, 20] produce a forecast based on a
nonlinear neural network model. Their advantages are scalability and high adaptivity.
Their disadvantages are the lack of design transparency, the complexity of choosing
the model structure, strict requirements on the training sample, and the problem of
choosing a parametric identification method, which results in insufficient forecast
accuracy and high computational complexity of parametric identification.
   The common feature of all the methods mentioned above is that they have high
computational complexity and/or do not provide high forecast accuracy.
   Therefore, increasing forecast efficiency by reducing computational complexity
and improving forecast accuracy is a topical task. The aim of this study is to increase
the efficiency of the forecast method by modifying the artificial neural network
model and the method of its parametric identification.
   To achieve this aim, it is necessary to solve the following tasks:

• to propose a forecast neural network model;
• to choose a criterion for estimating the efficiency of the forecast neural network model;
• to propose a method of parametric identification of the forecast neural network model;
• to conduct numerical experiments on audit data for checking the "paid-received"
  display.


3      Modified Liquid State Machine

As in the traditional liquid state machine (LSM) [21, 22], in the proposed modified
liquid state machine (MLSM) the hidden layer corresponds to the reservoir, or liquid,
and the output layer corresponds to the output layer of a multilayer perceptron
(MLP). Unlike the traditional LSM, in the proposed MLSM: to increase forecast
accuracy, the pseudoinverse matrix method is used instead of backpropagation, as in
the echo state network (ESN) [23, 24]; to reduce computational complexity, the
hidden layer is made 1D rather than 3D.
   Spiking (pulse) neural networks are the third generation of artificial neural
networks and, from the physiological point of view, the most realistic ANN model.
Since the traditional LSM is based on spiking neurons, the LIF (Leaky Integrate and
Fire) neuron model, which has the lowest computational complexity compared to
other models, is selected as the hidden layer neuron model. The LIF neuron model
has the form

$$\tau \frac{du}{dt} = u_{rest} - u(t) + I(t)R, \qquad (1)$$

where:
   $\tau$ – time constant, $\tau = C \cdot R$,
   $C$ – capacity,
   $R$ – resistance,
   $u(t)$ – potential (voltage),
   $I(t)$ – input current,
   $u_{rest}$ – resting potential.

   The firing time of a LIF neuron is defined by the condition $t^f : u(t^f) \ge \theta$, where $\theta$
is the threshold value. After the firing time, the voltage of the LIF neuron is reset to a
constant $u_r$, with $u_r < \theta$, and during the refractory period $t^r$ it keeps the value $u_r$.
After the refractory period ends, the LIF neuron resumes functioning until the next
firing.
   Taking (1) into account, the model of the modified liquid state machine is defined
by (2)-(5):

$$u_j(n\Delta t) = \frac{\Delta t}{\tau}\,u_{rest} + \left(1 - \frac{\Delta t}{\tau}\right) u_j(n\Delta t - \Delta t) + \Delta t\,\frac{R}{\tau}\left(\sum_{i=0}^{M} w_{ij}^{in-h}\, y^{in}(n-i) + \sum_{i=1}^{N^h} w_{ij}^{h-h}\, u_i(n\Delta t - \Delta t)\right),\ j \in \overline{1, N^h}, \qquad (2)$$

$$u_j(n\Delta t) = \begin{cases} u_r, & (t_j^f + t_I^r > n\Delta t \wedge j \in I) \vee (t_j^f + t_E^r > n\Delta t \wedge j \in E) \\ u_j(n\Delta t), & \text{else} \end{cases},\ j \in \overline{1, N^h},$$

$$u_j(n\Delta t) = \begin{cases} u_r, & u_j(n\Delta t) \ge \theta \\ u_j(n\Delta t), & \text{else} \end{cases}, \qquad t_j^f = \begin{cases} n\Delta t, & u_j(n\Delta t) \ge \theta \\ id(t_j^f), & \text{else} \end{cases},\ j \in \overline{1, N^h}, \qquad (3)$$

$$id(v) = v, \qquad inc(v) = v + 1, \qquad (4)$$

$$y^{out}(n) = f^{out}(s^{out}(n)), \qquad s^{out}(n) = \sum_{i=0}^{N^h} w_i^{h-out}\, u_i(n\Delta t), \qquad (5)$$

where:
   $I$ is the index set of inhibitory neurons of the hidden layer,
   $E$ is the index set of excitatory neurons of the hidden layer,
   $\Delta t$ is the time sampling step,
   $M$ is the number of unit delays of the input layer,
   $N^h$ is the number of hidden layer neurons,
   $w_{ij}^{in-h}$ are the weights between the input and hidden layers,
   $w_{ij}^{h-h}$ are the weights between the hidden layer neurons,
   $w_i^{h-out}$ are the weights between the hidden and output layers.
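
   To make the discrete-time dynamics (2)-(4) easier to follow, below is a minimal
Python/NumPy sketch of one hidden-layer update step. It is an illustrative reading of
the formulas, not the authors' implementation; the array names (`W_in`, `W_res`,
`t_fire`) and the per-neuron refractory vector `t_ref` are our assumptions.

```python
import numpy as np

def mlsm_step(u_prev, y_in_hist, W_in, W_res, t_fire, n, p):
    """One discrete-time update of the MLSM hidden (LIF) layer, eqs. (2)-(4).

    u_prev    : (Nh,)  membrane potentials u_j(n*dt - dt)
    y_in_hist : (M+1,) input-layer outputs y_in(n), y_in(n-1), ..., y_in(n-M)
    W_in      : (M+1, Nh) weights w^{in-h} between input and hidden layers
    W_res     : (Nh, Nh)  weights w^{h-h} between hidden-layer neurons
    t_fire    : (Nh,)  last firing times t_j^f
    n         : current time step index
    p         : dict with dt, tau, R, u_rest, u_r, theta and t_ref (per-neuron
                refractory periods: t_I^r for inhibitory, t_E^r for excitatory)
    """
    dt, tau, R = p["dt"], p["tau"], p["R"]

    # Leaky integration, eq. (2): input drive plus recurrent reservoir drive
    drive = W_in.T @ y_in_hist + W_res.T @ u_prev
    u = (dt / tau) * p["u_rest"] + (1.0 - dt / tau) * u_prev + dt * (R / tau) * drive

    # Refractory clamp: neurons whose refractory period still runs keep u_r
    u = np.where(t_fire + p["t_ref"] > n * dt, p["u_r"], u)

    # Threshold crossing, eq. (3): store the firing time and reset the potential
    fired = u >= p["theta"]
    t_fire = np.where(fired, n * dt, t_fire)  # id(.) keeps the old time otherwise
    u = np.where(fired, p["u_r"], u)
    return u, t_fire
```

Iterating `mlsm_step` over n and feeding the resulting states $u_i(n\Delta t)$ into (5)
reproduces the reservoir-plus-linear-readout structure of the machine.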


4        Choice of Criterion for Estimation of Efficiency of Modified
         Liquid State Machine Model

In this work, for training the MLSM model (2)-(5), the goal function was chosen
according to (6). This means choosing the values of the parameter vector
$W = (w_1^{h-out}, \ldots, w_{N^h}^{h-out})$ that deliver a minimum of the mean-square error (the
difference between the model output and the test output):

$$F = \frac{1}{P} \sum_{\mu=1}^{P} \left(y_\mu^{out} - d_\mu\right)^2 \to \min_W, \qquad (6)$$


where:
   $y_\mu^{out}$ is the $\mu$-th model output signal,
   $d_\mu$ is the $\mu$-th test output signal.
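
   For illustration only, criterion (6) reduces to a one-line computation once the
model outputs and test outputs are collected in arrays; a sketch in Python/NumPy
(the names are assumptions):

```python
import numpy as np

def mse_criterion(y_out, d):
    """Goal function (6): mean-square error over the P training pairs."""
    y_out, d = np.asarray(y_out, dtype=float), np.asarray(d, dtype=float)
    return np.mean((y_out - d) ** 2)
```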


5        Method of Parametric Identification of Modified Liquid State
         Machine Model

The method includes the following steps:
   1. Initialization of the weights between the input and hidden layers: if
$U(0,1) < P_{in-out}$, then $w_{ij}^{in-h} = U(-1,1)$, $i \in \overline{1, M}$, $j \in \overline{1, N^h}$. Initialization of the
weights between the hidden and output layers: $w_i^{h-out}(n) = U(-1,1)$, $i \in \overline{1, N^h}$,
where $U(a,b)$ is a uniform distribution on the segment $[a,b]$ and $P_{in-out}$ is the
probability of a connection between the input and hidden layers.
   2. Formation of the index sets of hidden-layer inhibitory (I) and excitatory (E)
neurons as follows:
   2.1. $I = \emptyset$, $E = \emptyset$.
   2.2. If $U(0,1) < P_I$, then $I = I \cup \{j\}$, otherwise $E = E \cup \{j\}$, $j \in \overline{1, N^h}$,
where $P_I$ is the probability that a neuron is inhibitory.
   3. Calculation of the weights between hidden-layer neurons $w_{ij}^{h-h}$, $i, j \in \overline{1, N^h}$,
as follows:
   3.1. $w_{ij}^{h-h} = 0$;
   3.2. if $i \in E$, $j \in E$, $i \ne j$, $P_{EE} = C_{EE} \exp\left(-(d_{ij}/\sigma)^2\right)$ and $U(0,1) < P_{EE}$, then $w_{ij}^{h-h} = w_{EE}$;
   3.3. if $i \in E$, $j \in I$, $P_{EI} = C_{EI} \exp\left(-(d_{ij}/\sigma)^2\right)$ and $U(0,1) < P_{EI}$, then $w_{ij}^{h-h} = w_{EI}$;
   3.4. if $i \in I$, $j \in E$, $P_{IE} = C_{IE} \exp\left(-(d_{ij}/\sigma)^2\right)$ and $U(0,1) < P_{IE}$, then $w_{ij}^{h-h} = w_{IE}$;
   3.5. if $i \in I$, $j \in I$, $i \ne j$, $P_{II} = C_{II} \exp\left(-(d_{ij}/\sigma)^2\right)$ and $U(0,1) < P_{II}$, then $w_{ij}^{h-h} = w_{II}$,
where:
   $C_{EE}, C_{EI}, C_{IE}, C_{II}$ are the coefficients for connections EE, EI, IE, II respectively,
   $d_{ij}$ is the Euclidean distance between two neurons,
   $\sigma$ is a constant,
   $P_{EE}, P_{EI}, P_{IE}, P_{II}$ are the probabilities that a connection is EE, EI, IE, II respectively,
   $w_{EE}, w_{EI}, w_{IE}, w_{II}$ are the weights of connections EE, EI, IE, II respectively.

   4. A learning set $\{(x_\mu, d_\mu) \mid x_\mu \in \mathbb{R},\, d_\mu \in \mathbb{R}\}$, $\mu \in \overline{1, P}$, is given, where $x_\mu$ is
the $\mu$-th learning-set input value, $d_\mu$ is the $\mu$-th learning-set output value, and $P$
is the size of the learning set. The number of the current learning-set value is set to
$\mu = 1$, and the learning iteration number to $n = 1$.
   5. Initial calculation of the output signal of the input layer: $y^{in}(n - i) = 0$, $i \in \overline{1, M}$.
   6. Initial calculation of the output signal of the hidden layer: $u_i(0) = 0$, $i \in \overline{1, N^h}$.
   7. Initial calculation of the firing times of hidden-layer neurons: if $j \in I$, then
$t_j^f = -t_I^r$; if $j \in E$, then $t_j^f = -t_E^r$, $j \in \overline{1, N^h}$, where $t_I^r$, $t_E^r$ are the refractory
periods of inhibitory (I) and excitatory (E) neurons respectively.
   8. Calculation of the output signal of the input layer:

$$y^{in}(n) = x_\mu, \qquad y^{in}(n) = \exp\left(-\frac{(x_\mu - m)^2}{2\sigma^2}\right),$$

where:
   $m$ is the expected value,
   $\sigma$ is the standard deviation.
   9. Calculation of the output signal of the hidden layer:

$$u_j(n\Delta t) = \frac{\Delta t}{\tau}\,u_{rest} + \left(1 - \frac{\Delta t}{\tau}\right) u_j(n\Delta t - \Delta t) + \Delta t\,\frac{R}{\tau}\left(\sum_{i=0}^{M} w_{ij}^{in-h}\, y^{in}(n-i) + \sum_{i=1}^{N^h} w_{ij}^{h-h}\, u_i(n\Delta t - \Delta t)\right),\ j \in \overline{1, N^h}.$$

   10. If the refractory period is in progress, the output signal of the hidden layer is
clamped to a constant:

$$u_j(n\Delta t) = \begin{cases} u_r, & (t_j^f + t_I^r > n\Delta t \wedge j \in I) \vee (t_j^f + t_E^r > n\Delta t \wedge j \in E) \\ u_j(n\Delta t), & \text{else} \end{cases},\ j \in \overline{1, N^h}.$$

   11. If a neuron fires, the output signal of the hidden layer is reset to a constant
and the firing time is stored:

$$u_j(n\Delta t) = \begin{cases} u_r, & u_j(n\Delta t) \ge \theta \\ u_j(n\Delta t), & \text{else} \end{cases}, \qquad t_j^f = \begin{cases} n\Delta t, & u_j(n\Delta t) \ge \theta \\ id(t_j^f), & \text{else} \end{cases},\ j \in \overline{1, N^h}.$$

   12. Calculation of the output signal of the output layer:

$$y^{out}(n) = f^{out}(s^{out}(n)), \qquad s^{out}(n) = \sum_{i=0}^{N^h} w_i^{h-out}(n)\, u_i(n\Delta t),$$

where:
   $f^{out}$ is a sigmoid activation function of the output-layer neurons.
   It is assumed that $w_0^{h-out}(n) = b^{h-out}(n)$ and $u_0(n\Delta t) = 1$.
   13. Modification of the output-layer weights by the pseudoinverse matrix method:
   13.1. Create the matrix $U = [u_i(n\Delta t)]$, $n \in \overline{1, P}$, $i \in \overline{0, N^h}$.
   13.2. Create the vector $T = \left((f^{out})^{-1}(d_1), \ldots, (f^{out})^{-1}(d_P)\right)$.
   13.3. Calculate the vector $W = (b^{h-out}(n), w_1^{h-out}(n), \ldots, w_{N^h}^{h-out}(n))$ as $W = U^{+}T$,
where $U^{+}$ is the pseudoinverse of the matrix $U$.
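
   The sketch below summarizes steps 1-3 (random construction of the reservoir)
and step 13 (pseudoinverse training of the readout) in Python/NumPy. Two
assumptions not fixed by the text are made explicit here: the 1D hidden layer is taken
to give $d_{ij} = |i - j|$, and $f^{out}$ is taken to be the logistic sigmoid, whose inverse is
the logit; all names are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_reservoir(M, Nh, P_in, P_I, C, w, sigma):
    """Steps 1-3: input weights, inhibitory/excitatory split, reservoir weights.
    C and w are dicts keyed by 'EE', 'EI', 'IE', 'II' (coefficients and weights)."""
    # Step 1: input-to-hidden weights, each present with probability P_in
    W_in = np.where(rng.random((M + 1, Nh)) < P_in,
                    rng.uniform(-1.0, 1.0, (M + 1, Nh)), 0.0)
    # Step 2: each neuron is inhibitory with probability P_I
    inhib = rng.random(Nh) < P_I
    # Step 3: connection i -> j drawn with probability C_xy * exp(-(d_ij/sigma)^2);
    # assuming a 1D layout, the Euclidean distance is d_ij = |i - j|
    idx = np.arange(Nh)
    d = np.abs(idx[:, None] - idx[None, :])
    W_res = np.zeros((Nh, Nh))
    types = {"EE": (False, False), "EI": (False, True),
             "IE": (True, False), "II": (True, True)}
    for key, (src_inh, dst_inh) in types.items():
        mask = ((inhib[:, None] == src_inh) & (inhib[None, :] == dst_inh)
                & (idx[:, None] != idx[None, :]))       # no self-connections
        prob = C[key] * np.exp(-(d / sigma) ** 2)
        W_res[mask & (rng.random((Nh, Nh)) < prob)] = w[key]
    return W_in, W_res, inhib

def train_readout(U_states, d):
    """Step 13: W = U^+ T with T the inverse-activated targets.
    U_states : (P, Nh+1) reservoir states with a leading bias column of ones.
    d        : (P,) desired outputs, assumed in (0, 1) for the logistic f_out."""
    T = np.log(d / (1.0 - d))            # logit = inverse of the logistic sigmoid
    return np.linalg.pinv(U_states) @ T  # readout weights (bias first)
```

Because the readout is linear in the reservoir states, this single pseudoinverse
solve replaces the iterative local search that backpropagation would perform.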


6        Experiments and Results

The numerical experiments were conducted using MATLAB. The probability of a
connection between the input and hidden layers was $P_{in-out} = 0.3$, and the
probability that a neuron is inhibitory was $P_I = 0.2$. The connection coefficients
were $C_{EE} = 0.3$, $C_{EI} = 0.2$, $C_{IE} = 0.4$, $C_{II} = 0.1$, and the connection weights were
$w_{EE} = w_{EI} = w_{IE} = w_{II} = 0.4$. The constant $\sigma = 2$, the resistance $R = 1$ Ohm, the
time constant $\tau = 30$ ms, the capacity $C = 30$ nF, and the resting potential
$u_{rest} = 0$ mV. The refractory period was $t_I^r = 2$ ms for inhibitory neurons and
$t_E^r = 3$ ms for excitatory neurons, and the threshold $\theta = 1.5$ mV. The number of
unit delays was $M = 10$ and the number of hidden-layer neurons $N^h = 2M$. To
determine the structure of the forecast model based on the modified liquid state
machine (MLSM), a series of experiments was carried out; the results are presented
in Fig. 1.

    Fig. 1. Dependence of the mean-square forecast error on the number of hidden neurons

As input data for determining the parameter values of the forecast neural network
model, the delivery and payment indexes of supplies of a machine-building enterprise
were used, covering a two-year sampling period with daily time intervals; the index
values are a commercial secret, so they have been scaled.
   The criterion for choosing the neural network model structure was the minimum
mean-square forecast error. As is evident from Fig. 1, the error decreases as the
number of hidden neurons grows. For the forecast it is enough to use 10 time delays
in the input layer and 20 hidden neurons, since with a further increase in the number
of delays and hidden neurons the change in the error is insignificant.
   In this paper, neural networks were compared for forecasting by the criterion of
the minimum mean-square error (MSE) of the forecast. The following neural
networks were used in the experiments (Table 1): JNN (Jordan neural network), ENN
(Elman neural network), also called SRN (simple recurrent network), NARMA
(nonlinear autoregressive moving average), BRNN (bidirectional recurrent neural
network), LSTM (long short-term memory), GRU (gated recurrent unit), ESN (echo
state network), LSM (liquid state machine), and the MLSM (modified liquid state
machine) proposed by the authors.

               Table 1. Comparative characteristics of neural networks for forecasting

 Network       JNN    ENN    NARMA   BRNN   LSTM   GRU    ESN    LSM    MLSM
                      (SRN)
 Minimum MSE   0.17   0.16   0.13    0.14   0.8    0.10   0.05   0.07   0.05
 of forecast


According to Table 1, MLSM and ESN have the highest forecast accuracy; however,
ESN requires greater computational complexity due to the connections between the
input and output layers and between the neurons of the output layer. Both MLSM
and ESN avoid local search, which reduces the probability of getting stuck in a local
extremum.
   The components of the technical solution are the liquid state machine and the LIF
neuron model (2)-(5). The process of parametric identification of the forecast model
is described in Section 5. The parametric identification method of the MLSM model
has limitations: for the matrix pseudoinversion method, in the case of a very large
amount of data, difficulties arise in calculating the inverse matrix, along with the
complexity of the matrix multiplications. In addition, in the presence of noise, the
matrix pseudoinversion method should be modified. In the future, it is planned to use
CUDA parallel processing technology for block matrix multiplication to accelerate
the matrix calculations.
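
   As a sketch of one common direction for the noise issue mentioned above (our
suggestion, not part of the authors' method), the plain pseudoinverse can be replaced
by a Tikhonov (ridge) regularized solution; `lam` below is a hypothetical tuning
parameter:

```python
import numpy as np

def train_readout_ridge(U_states, T, lam=1e-3):
    """Ridge readout: W = (U^T U + lam*I)^(-1) U^T T.
    A small lam > 0 damps the noise amplification of the plain pseudoinverse;
    lam is a hypothetical tuning parameter, not taken from the paper."""
    G = U_states.T @ U_states
    return np.linalg.solve(G + lam * np.eye(G.shape[0]), U_states.T @ T)
```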


7      Conclusions

In this article, the problem of increasing the efficiency of the forecast method for
audit data is examined through the proposal of a modified liquid state machine
(MLSM). The scientific contribution is the improvement of the model and of the
method of parametric identification of a liquid state machine. Compared with the
traditional model, the improved liquid state machine model, which is based on the
LIF (Leaky Integrate and Fire) neuron model, is limited to a 1D hidden layer, which
reduces computational complexity. Compared to the traditional method, the further
developed method of parametric identification of the MLSM is based on the
pseudoinverse matrix method, which improves forecast accuracy, since it does not
use a local search and thus reduces the probability of getting stuck in a local
extremum. Software realizing the proposed method was developed and studied on
the delivery and payment indexes of supplies of a machine-building enterprise over a
two-year sampling period with daily time intervals. The conducted experiments
confirmed the operability of the developed software and allow recommending it for
practical use in the automated analysis subsystem of an audit DSS. The proposed
method was used by Ekol Ukraine, a logistics company, to forecast the amount of
freight traffic, and for forecasting indicators in the audit task of the "paid-received"
display of calculations with suppliers of the machine-building enterprise. The
prospects of further research lie in checking the proposed methods on a wide set of
test databases.


References
 1. The World Bank: World Development Report 2016: Digital Dividends.
    https://www.worldbank.org/en/publication/wdr2016 (2016). Accessed 12 February 2020.
 2. The State Audit Service of Ukraine: Praktychna metodolohiia IT-audytu (Practical
    methodology of IT audit). http://dkrs.kmu.gov.ua/kru/doccatalog/document (2015).
    Accessed 12 February 2020.
 3. Neskorodieva, T.V.: Postanovka elementarnykh zadach audytu peredumovy polozhen
    bukhhalterskoho obliku v informatsiinii tekhnolohii systemy pidtrymky rishen
    (Formulation of elementary tasks of audit of accounting provisions preconditions in IT
    DSS). Modern Information Systems 3(1), 48-54 (2019). doi:10.20998/2522-9052.2019.1.08.
 4. Zhu, B. Research on the Application of Big Data in Audit Analysis Program. International
    Seminar on Automation, Intelligence, Computing, and Networking. Francis Academic
    Press, UK (2019).
 5. Bidiuk, P.I., Prosiankina-Zharova, T.I., Terentieev, O.M., Lakhno, V.A., Zhmud, O.V.: In-
    tellectual technologies and decision support systems for the control of the economic and
    financial processes. Journal of Theoretical and Applied Information Technology. 96(1),
    71-87 (2019).
 6. Ahmad, S., Mehdi, G., Mohammed, F.: Data mining techniques for anti money laundering.
    International Journal of Applied Engineering Research. 12(20), 10084-10094 (2017).
 7. Khrystyianivsky, V.V., Neskorodieva, T.V., Polshkov, Yu.N.: Ekonomiko-
    matematicheskie metodyi i modeli: praktika primeneniya v kursovyih i diplomnyih rabotah
    (Economic and mathematical methods and models: application practice in term papers and
    dissertations). DonNU, Donetsk (2013).
 8. Alevizos, E., Artikis, A., Paliouras, G.: Event forecasting with pattern Markov chains. In:
    Proceedings of the ACM International Conference on Distributed Event-Based Systems,
    Barcelona, Spain, 12 p. (2017). doi:10.1145/3093742.3093920.
 9. Anastasios, P., Vasilis, S., Dionysios, M., Aristotelis, K. A combined statistical framework
    for forecasting default rates of Greek financial institutions’ credit portfolios. Athens, Bank
    of Greece (2018).
10. Lytvynenko, T.I.: Problem of data analysis and forecasting using decision trees method.
    Programming problems. 2, 220-226 (2016).
11. Cornel, L., Mirela, L. Using the Method of Decision Trees in the Forecasting Activity.
    Economic Insights. Trends and Challenges. 4 (67), No. 1, pp. 41–48 (2015).
12. Hovorushchenko, T.: Information technology for assurance of veracity of quality
    information in the software requirements specification. In: Shakhovska, N., Stepashko, V.
    (eds) Advances in Intelligent Systems and Computing II. CSIT 2017. Advances in
    Intelligent Systems and Computing, vol. 689, pp. 166-185. Springer, Cham (2018).
    doi:10.1007/978-3-319-70581-1.
13. Baillie, R. T., Kapetanios, G., Papailias, F.: Modified information criteria and selection of
    long memory time series models. Computational Statistics and Data Analysis. 76. 116–131
    (2014).
14. Bidyuk, P., Prosyankina-Zharova, T., Terentiev, O.: Modelling nonlinear nonstationary
    processes in macroeconomy and finances. In: Hu, Z., Petoukhov, S., Dychka, I., He, M.
    (eds) Advances in Computer Science for Engineering and Education. Advances in
    Intelligent Systems and Computing, vol. 754, pp. 735-745. Springer, Cham (2019).
    doi:10.1007/978-3-319-91008-6_72.
15. Zgurovsky, M.Z., Zaychenko, Y.P.: The fundamentals of computational intelligence: sys-
    tem approach. Springer International Publishing. Switzerland (2017).
16. de Sá, A.G.C., Pereira, A.C.M., Pappa, G.L.: A customized classification algorithm for
    credit card fraud detection. Engineering Applications of Artificial Intelligence 72, 21-29
    (2018).
17. Vlasenko, A., Vlasenko, N., Vynokurova, O., Bodyanskiy, Y., Peleshko, D.: A novel en-
    semble neuro-fuzzy model for financial time series forecasting. Data 4(3), 126. (2019)
    10.3390/data4030126.
18. Tkachenko, R., Izonin, I., Greguš ml., M., Tkachenko, P., Dronyuk, I.: Committee of the
    SGTM neural-like structures with extended inputs for predictive analytics in insurance. In:
    Younas, M., Awan, I., Benbernou, S. (eds) Big Data Innovations and Applications.
    Innovate-Data 2019. Communications in Computer and Information Science, vol. 1054,
    pp. 121-132. Springer, Cham (2019). doi:10.1007/978-3-030-27355-2.
19. Tkachenko, R., Tkachenko, P., Izonin, I., Vitynskyi, P., Kryvinska, N., Tsymbal, Y.
    Committee of the Combined RBF-SGTM Neural-Like Structures for Prediction Tasks. In:
    Awan, I., Younas, M., Ünal, P., Aleksy, M. (eds) Mobile Web and Intelligent Information
    Systems. Lecture Notes in Computer Science, vol 11673, pp. 267-277. Springer, Cham
    (2019) doi:10.1007/978-3-030-27192-3_21.
20. Leoshchenko, S., Oliinyk, A., Subbotin, S., Zaiko, T.: Using recurrent neural networks for
    data-centric business. In: Ageyev, D., Radivilova, T., Kryvinska, N. (eds) Data-Centric
    Business and Applications. Lecture Notes on Data Engineering and Communications
    Technologies, vol. 42, pp. 73-91. Springer, Cham (2020). doi:10.1007/978-3-030-35649-1_4.
21. Wang, Q., Li, P.: D-LSM: Deep liquid state machine with unsupervised recurrent reservoir
    tuning. In: 23rd International Conference on Pattern Recognition (ICPR), Cancun, pp.
    2652-2657. IEEE (2016).
22. Haykin, S.S.: Neural networks and learning machines. Pearson, Delhi (2016).
23. Bengio, Y. Goodfellow, I.J., Courville, A.: Deep learning. MIT Press (2016)
    http://www.deeplearningbook.org.
24. Zhiteckii, L.S., Solovchuk, K.Yu.: Pseudoinversion in the problems of robust stabilizing
    multivariable discrete time control systems of linear and nonlinear static objects under
    bounded disturbances. Automation and Information Sciences. 3, 57-70 (2017).