=Paper= {{Paper |id=Vol-2064/paper25 |storemode=property |title= Multilayered neural-like network of direct propagation with the adjustment according to similarity measures of vectors of the learning sample |pdfUrl=https://ceur-ws.org/Vol-2064/paper25.pdf |volume=Vol-2064 |authors=Andrey Krasnov,Evgeniy Nadezhdin,Dmitry Nikol’skii,Elena Shmakova }} == Multilayered neural-like network of direct propagation with the adjustment according to similarity measures of vectors of the learning sample == https://ceur-ws.org/Vol-2064/paper25.pdf
UDC 004.75
               Krasnov A.E.1, Nadezhdin E.N.1, Nikol’skii D.N.1, Shmakova E.G.2
 1 State Institute of Information Technologies and Telecommunications (SIIT&T «Informika»), Moscow, Russia
                                2 Russian State Social University, Moscow, Russia



   MULTILAYERED NEURAL-LIKE NETWORK OF DIRECT PROPAGATION WITH THE
 ADJUSTMENT ACCORDING TO SIMILARITY MEASURES OF VECTORS OF THE LEARNING
                                SAMPLE*
   Abstract
        The architecture of a multilayer network consisting of several layers of active elements is considered.
        The input layer forms the signals propagated to the connectors (synapses) of the first layer. All odd
        layers of the network consist of connectors (synapses), and even ones consist of switches (neurons).
        The number of connectors and switches in each layer corresponds to the number of reference signals –
        the vectors of the training sample. The process of recurrent adjustment of the synaptic connections
        and neuronal responses of the network is explained both by similarity measures of the training sample
        vectors and by similarity measures over these similarity measures. An experimental study on the
        example of a six-layer network showed that a multilayered neural-like network of direct propagation
        is much easier to train than a recursive network trained by the error backpropagation method.
        At the same time, the proposed network is resistant to significant interference when distinguishing
        signals, which is due to the consideration of additional connections between the components of the
        reference signals. When analyzing signals against a noise background, under the condition that the
        ratio "interference amplitude / signal amplitude" is less than the average spread of the reference
        signals, this advantage can become decisive, since it makes almost error-free signal discrimination
        possible.
   Keywords
        Multilayered network; direct propagation; vectors of the training sample; similarity measures of
        vectors; similarity measures over similarity measures; discrimination; signal; noise.
   Proceedings of the II International scientific conference "Convergent cognitive information
technologies" (Convergent’2017), Moscow, Russia, November 24-26, 2017


Introduction
    The structure of a heterogeneous multiply connected network intended for modeling neurodynamic problems,
recognition of signals and images, and data processing of phased antenna arrays was proposed in [1]. The
implementation of the architecture of such a network on clusters of universal and/or graphics processors is briefly
described, and the approbation of the developed network model on examples of solving a number of known high-
dimensional neurodynamic problems is performed, in [2]. A wide class of neural-like networks was formally
described as a structural model of the cybernetic network, where the functional-structural topology of the
cybernetic network, taking into account the exchange of information and control data streams, was also presented
for the first time [3, 4].
    In [5] the structure of a multilayered network is proposed, which is a subset of cybernetic networks
functioning on the "winner-takes-all" principle [6]. In this network, direct propagation of signals is used both
during training and in application. Experimentally, high efficiency of the network was shown when
distinguishing noisy signals. In particular, it was revealed that the discrimination error level (the number of
incorrect decisions per 100 realizations of the interference) for the reference signals (with an average variation
Var = 13%) on which the network was trained depends nonlinearly on the value of the ratio N0 = "amplitude of
noise / amplitude of signal". At the same time, the error is practically zero at N0 < Var = 13% and reaches 10%
at N0 = 18%.
    The parallel software architecture, based on the object-oriented approach and the well-known GoF design
patterns, was described in [7, 8]. The "Factory Method" pattern is used to extend the class of network simulators,
and the "Bridge" pattern allows building different implementations for CPU and GPU platforms. The architecture
is intended for simulation of multidimensional problems (network neurodynamics, compression of
multidimensional data, pattern recognition) and, in particular, of the multi-layer networks from [5].
    In this paper, architectural solutions for training a multi-layer network are discussed in more detail.
Formulation of the problem
    The aim of this work is to consider the principle of training a multilayered network – the adjustment of its
synaptic connections and neural responses according to similarity measures of the vectors of the training sample
and similarity measures over these similarity measures. This consideration also explains the high efficiency of
the discrimination of noisy signals.
Structure of the multilayered neural-like network of direct propagation
   An example of the structure of the multilayered network of direct propagation is shown in Fig. 1 [5, 8].
   As seen in Figure 1, the network consists of several layers of active elements. The input layer forms the input
signals propagated to the connectors (synapses) of the first layer. All odd layers consist of connectors (synapses),
and even ones consist of switches (neurons). The number of synapses and neurons in each layer corresponds to
the number of reference signals – the vectors of the training sample. Figure 1 shows an example of an already
trained network, with the memory register of each connector of the first layer containing the corresponding
reference vector of the training sample. The first reference signal, without interference, is fed to the input layer.
Therefore, every even layer of the network unmistakably identifies the input signal as Signal 1, with a similarity
measure of 1.
Principle of learning of the neural-like network
   The principle of learning of the neural-like network is that the memory register of each synapse-connector of
an odd layer, starting with the third one, records the responses of all neuron-switches of the preceding even layer
to the reference signals sequentially fed to the network input during learning.




                            Figure 1. Structure of the neural network of direct propagation
   The response of each neuron of any layer to the input signal – the vector $\mathbf{Z}$ at the input of the
preceding synaptic connector – is formed as an odd power of the cosine of the angle between the input vector
$\mathbf{Z}$ and the vector $\mathbf{X}$ stored in the connector register: $\mu(\mathbf{Z}, \mathbf{X}) =
\cos^{2n+1}(\mathbf{Z}, \mathbf{X})$. In the example above, n = 20, which provides a strong nonlinearity – the
resonance response of the neuron-commutator to the input signal.
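   As an illustration, a minimal sketch of this response function in Python (assuming plain NumPy vectors; the function name neuron_response is ours, not from the paper):

```python
import numpy as np

def neuron_response(z, x, n=20):
    """Response of a neuron-switch: an odd power of the cosine of the
    angle between the input vector z and the stored reference vector x."""
    cos = np.dot(z, x) / (np.linalg.norm(z) * np.linalg.norm(x))
    # The odd exponent 2n + 1 preserves the sign of the cosine while
    # sharply peaking the response around cos = 1 (the "resonance").
    return cos ** (2 * n + 1)
```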
   Let us consider the iterative process of training the network in more detail. To simplify the drawings, the
controllers that manage the configuration and operation of the connectors are not shown in them.
Recursive training of the neural-like network
    At the first step of training the network, each reference signal of the training sample is written to the memory
register of the corresponding synapse-connector of the first network layer, as shown in Figure 2.
    At the second step of network training (Figure 3), all reference signals of the training sample are sequentially
fed to the input layer of the network, and for each reference signal (in Figure 3 – signal 1) the responses of all
switch-neurons of the second layer are written to the connector registers of the third layer. As a result, the
similarity measures $\mu(\mathbf{S}_k, \mathbf{S}_n) = \cos^{2n+1}(\mathbf{S}_k, \mathbf{S}_n)$ of all vectors
of the reference training sample are written in the registers of the connectors of the third layer (Figure 4).
    At the third step of network training (Figure 5), all reference signals of the training sample are sequentially
fed to the input layer of the network, and for each reference signal (in Figure 5 – signal 3) the responses of all
switch-neurons of the fourth layer are written to the connector registers of the fifth layer. As a result, the
similarity measures over similarity measures $\mu(\mu(\mathbf{S}_k, \mathbf{S}_n), \mu(\mathbf{S}_l,
\mathbf{S}_m)) = \cos^{2n+1}(\mu(\mathbf{S}_k, \mathbf{S}_n), \mu(\mathbf{S}_l, \mathbf{S}_m))$ of all vectors
of the reference training sample are recorded in the registers of the connectors of the fifth layer (Figure 5).




         Figure 2. The first step of training the network




Figure 3. The first cycle of the second step of training the network




                                Figure 4. End of the second step of training the network




                            Figure 5. The third cycle of the third step of training the network
Experiments
   As a first experiment, Figure 6 presents an illustrative example of the network distinguishing the third
reference signal against a noise background at N0 = "amplitude of noise / amplitude of signal" = 0.25, which
corresponds to a noise-to-signal power ratio of 0.25² = 6.25%.




                 Figure 6. Distinguishing of the third reference signal against a noise background (N0 = 0.25)
     Figure 6 shows that the first layer of the network, where the input signal is compared with the reference
signals, gives an incorrect recognition of the input noisy signal.
     However, in the subsequent odd layers, where similarity measures are compared with the reference similarity
measures, responses are formed that correctly identify the input noisy signal.
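     The corresponding forward pass can be sketched under the same assumptions (each even layer makes a winner-takes-all decision over the registers of the preceding odd layer; classify() is our name, and similarity() and train() are the helpers sketched above):

```python
import numpy as np

def classify(x, registers):
    """Winner-takes-all decision of every even layer of the network."""
    v = np.asarray(x, dtype=float)
    decisions = []
    for regs in registers:
        # responses of the neuron-switches to the current layer input
        v = np.array([similarity(v, r) for r in regs])
        decisions.append(int(np.argmax(v)))
        # the response vector itself feeds the next odd layer
    return decisions
```

     For a noiseless reference signal S_k every decision in the list is k, with similarity measure 1; for a noisy input, as in Figure 6, the deeper layers can correct a wrong decision of the first comparison stage.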
     As a test example for the experiment, six reference signals $\mathbf{S}_k = (s_{k1}, \dots, s_{k6})^T$
($k = 1, 2, \dots, 6$), each with six components, were chosen; they are given in Table 1.
     As seen from Table 1, the average variation (Var) of the signals is 13%. Here:
$$\mathrm{Var} = \frac{1}{6}\sum_{k=1}^{6}\mathrm{Var}_k = \frac{1}{6}\sum_{k=1}^{6}\frac{\left[\sum_{m=1}^{6}\left(s_{km}-\langle s_m\rangle\right)^{2}\right]^{1/2}}{\left[\sum_{m=1}^{6}\langle s_m\rangle^{2}\right]^{1/2}},\qquad\langle s_m\rangle=\frac{1}{6}\sum_{k=1}^{6}s_{km}.\tag{1}$$

   The task consists in studying the dependence of the error of discrimination of the reference signals from
Table 1 on the amplitude of the additive interference N.
                                                  Table 1. Reference signals




   In this case the observed signal $\mathbf{X}$ is expressed as:
$$\mathbf{X} = \mathbf{S}_k + \mathbf{N}.\tag{2}$$
   In the simulation model experiment, the interference was generated as a vector $\mathbf{N} = (n_1, n_2,
\dots, n_6)^T$ of random uniformly distributed values:
$$n_m = N_0 \cdot s_m \cdot (1 - 2\,\mathrm{random}()),\tag{3}$$
where random() is a value uniformly distributed in the range (0, 1), so that the factor $(1 - 2\,\mathrm{random}())$
is uniformly distributed in (−1, 1), and $N_0$ is the maximum ratio of the interference amplitude to the amplitude
of the useful signal.
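    The noise model of eqs. (2)-(3) can be sketched as follows (we read s_m in eq. (3) as the components of the observed reference signal; the name observe and the use of a NumPy generator are our assumptions):

```python
import numpy as np

def observe(reference, n0, rng=np.random.default_rng()):
    """Return X = S_k + N with componentwise interference per eq. (3)."""
    s = np.asarray(reference, dtype=float)
    # rng.random() is uniform on [0, 1), so 1 - 2*rng.random() is
    # uniform on (-1, 1], bounding |n_m| by n0 * s_m as required
    return s + n0 * s * (1 - 2 * rng.random(s.shape))
```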
    For illustration, the reference signals from Table 1 are shown in Figure 7, and one of the signals (S3), which
has the maximum amplitude variation of 18%, is shown in Figure 8 in the presence of interference with N0 = 13%.




                                                  Figure 7. Reference signals




                        Figure 8. Reference signal S3 under one realization of the interference with N0 = 13%
    Projections of all reference signals onto the plane (X, Y), obtained with the technology of [9], are shown in
Figure 9 for one realization of the interference with N0 = 13%.




        Figure 9. Projections of the reference signals in the absence and presence of interference (N0 = 13%) on the plane (X, Y)
   Figure 9 makes it especially clear that in the presence of interference the observed signal Z1 may be wrongly
identified with the reference signal S3, and Z2 with S4.

   Figure 10 shows the dependence of the error of discrimination of the reference signals on the value of N0.




   Figure 10. Dependence of the error of discrimination of the reference signals on N0 = "amplitude of noise / amplitude of
                                                              signal"
   The error statistics of the charts in Figure 10 show that, compared with the 2nd layer, the 6th layer of the
network gives a gain in recognition reliability of 1% to 4%.
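   Such per-layer error statistics could be gathered with a simple Monte Carlo loop over noise realizations (a sketch reusing the train(), classify(), and observe() helpers above; 100 realizations, matching the error definition in the introduction):

```python
import numpy as np

def error_rates(references, n0, trials=100, rng=np.random.default_rng(0)):
    """Fraction of wrong decisions of each even layer over noisy trials."""
    registers = train(references)
    errors = np.zeros(len(registers))
    for _ in range(trials):
        k = rng.integers(len(references))
        decisions = classify(observe(references[k], n0, rng), registers)
        errors += np.array(decisions) != k
    return errors / trials   # index 0 = 2nd layer, ..., index 2 = 6th layer
```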
Conclusion
    The research carried out showed that the multilayered neural-like network of direct propagation gives a
certain advantage in the discrimination of noisy reference signals. This advantage is due to the accounting of
additional connections between the components of the reference signals. When analyzing signals against a
background of interference with N0 = "amplitude of noise / amplitude of signal" less than the average variation
of the reference signals, this advantage can become decisive, since it allows an almost error-free discrimination
of signals.
    An interesting result is obtained when an arbitrary signal, considerably different from all the reference
signals, is fed to the input of the trained network.
    Thus, Figure 11 shows an example of the network responses to the input signal X^T = (11.07, 17.92, 3.14,
23.51, 12.78, 14.90).




    Figure 11. Coordinated responses of the network to an input signal considerably different from all reference signals
    In this case, the network indicates the similarity of the input signal with the 3rd reference signal S3^T =
(16.00, 14.00, 3.00, 21.00, 6.00, 16.00), although the degree of this similarity is extremely small (0.25). The 4th
and 6th layers likewise assign the input signal to the 3rd reference signal.
    Figure 12 shows an example of the network responses to the input signal X^T = (13.47, 14.91, 0.02, 29.01,
12.05, 5.51).
    In this example the 2nd layer of the network indicates the similarity of the input signal with the 4th reference
signal S4^T = (20.00, 9.00, 7.00, 22.00, 8.00, 11.00), while the 4th and 6th layers assign the input signal to the
3rd reference signal.
    Thus, the considered multilayered network of direct propagation of signals, with its adjustment according to
similarity measures of the vectors of the training sample, is very simple to train and has an increased reliability
of recognition of the reference signals in the presence of interference, compared with a single layer.




     Figure 12. Uncoordinated responses of the network to an input signal considerably different from all reference signals
Acknowledgement
   The work was supported by the Ministry of Education and Science of Russia under lot code 2017-14-579-0002
on the topic: "Development of effective algorithms for detecting network attacks based on identifying deviations
in the traffic of extremely large volumes arriving at the border routers of the data network and creating a sample
software complex for detection and prevention of information security threats aimed at denial of service".
Agreement No. 14.578.21.0261 on granting a subsidy of September 26, 2017; unique identifier of the work
(project) RFMEFI57817X0219.
                                                             References
    1.   Kalachev A.A., Krasnov A.E., Nadezhdin E.N., Nikolskiy D.N., Repin D.S. Geterogennaya mnogosvyaznaya set aktivnyih elementov //
         Innovatsionnyie, informatsionnyie i kommunikatsionnyie tehnologii: sbornik trudov XIII Mezhdunarodnoy nauchno-prakticheskoy
         konferentsii / Pod red. S.U. Uvaysova. – Moskva: Assotsiatsiya vyipusknikov i sotrudnikov voenno-vozdushnoy inzhenernoy
         akademii im. prof. Zhukovskogo. 2016, #1, S. 277-280. — URL: https://elibrary.ru/item.asp?id=27332412
    2.   Kalachev A.A., Krasnov A.E., Nadezhdin E.N., Nikolskiy D.N., Repin D.S. Model geterogennoy seti dlya simulyatsii neyrodinamicheskih
         zadach // Sovremennyie informatsionnyie tehnologii i IT-obrazovanie. – Moskva: Fond sodeystviya razvitiyu internet-media, IT-
         obrazovaniya, chelovecheskogo potentsiala "Liga internet-media" (Moskva). 2016, Tom 12, #1, S. 80-90. —
         URL: https://elibrary.ru/item.asp?id=27539221
    3.   Krasnov A.E., Nadezhdin E.N., Nikolskiy D.N., Repin D.S., Kalachev A.A. Kiberneticheskaya set kak strukturnaya model
         neyropodobnyih sistem // Informatizatsiya obrazovaniya i nauki. 2017, #3(35), S. 109-122. — URL:
         https://elibrary.ru/item.asp?id=29426094
    4.   Krasnov A.E., Nadezhdin E.N., Nikolskii D.N., Repin D.S., Kalachev A.A. Nejropodobnaya kiberneticheskaya set' // Informacionnye
         innovacionnye tekhnologii. – Izdatelstvo: Assotsiatsiya vyipusknikov i sotrudnikov VVIA imeni professora N.E. Zhukovskogo
         sodeystviya sohraneniyu istoricheskogo i nauchnogo naslediya VVIA imeni professora N.E. Zhukovskogo (Moskva). 2017, #1, S. 278-
         281. — URL: https://elibrary.ru/item.asp?id=29386197
    5.   Kazakov K.V., Kalachev A.A., Krasnov A.E., Nikolskiy D.N., Shevelev S.A. Sravnenie effektivnostey razlicheniya signalov na fone silnyih
         pomeh na osnove mnogokriterialnoy i neyrosetevoy tehnologiy // Innovatsionnyie, informatsionnyie i kommunikatsionnyie
         tehnologii: sbornik trudov XIII Mezhdunarodnoy nauchno-prakticheskoy konferentsii / Pod red. S.U. Uvaysova. – Moskva:
         Assotsiatsiya vyipusknikov i sotrudnikov voenno-vozdushnoy inzhenernoy akademii im. prof. Zhukovskogo. 2016, #1, S. 257-259.
         — URL: https://elibrary.ru/item.asp?id=27332404
    6.   Petrunin Yu.Yu., Ryazanov M.A., Savelev A.V. Ot iskusstvennogo intellekta k modelirovaniyu mozga. MGU im. M.V. Lomonosova, MAKS
         Press, Moskva. 2014. – 84 s. — URL: https://istina.msu.ru/publications/book/7869957/
    7.   Krasnov A.E., Nikol'skii D.N., Kalachev A.A. Arhitektura parallel'nogo programmnogo obespecheniya dlya modelirovaniya
         nejropodobnoj kiberneticheskoj seti // Informacionnye innovacionnye tekhnologii. – Izdatelstvo: Assotsiatsiya vyipusknikov i
         sotrudnikov VVIA imeni professora N.E. Zhukovskogo sodeystviya sohraneniyu istoricheskogo i nauchnogo naslediya VVIA imeni
         professora N.E. Zhukovskogo (Moskva). 2017, #1, S. 275-287. — URL: https://elibrary.ru/item.asp?id=29386196
    8.   Krasnov A.E., Nadezhdin E.N., Nikol'skii D.N. Arhitektura parallel'nogo programmnogo obespecheniya dlya simulyacii mnogomernyh
         zadach // Sovremennyie informatsionnyie tehnologii i IT-obrazovanie. 2017, Tom 13, #1, S. 49-57. — URL:
         http://sitito.cs.msu.ru/index.php/SITITO/article/view/202/172
    9.   Krasnov A.E., Nikol'skiy D.N., Kalachev A.A. Snizhenie razmernosti spektral'nykh dannykh neyropodobnym algoritmom //
         Svidetel'stvo o gosudarstvennoy registratsii programmy dlya EVM, Rossiyskaya federatsiya, № 2017612195, 2017. — URL:
         http://www1.fips.ru/wps/portal/ofic_pub_ru/#page=document&type=doc&tab=PrEVM&id=1852C2F0-10AD-461C-9711-
         44C9C9CBDC33

Note on the authors:
Krasnov Andrey E., Doctor of Physics and Mathematics, Professor, Chief Researcher, State Institute of Information
          Technologies and Telecommunications (SIIT&T «Informika»), a.krasnov@informika.ru
Nadezhdin Evgeniy N., Doctor of Technical Sciences, Professor, Chief Researcher, State Institute of Information
          Technologies and Telecommunications (SIIT&T «Informika»), e.nadezhdin@informika.ru
Nikol’skii Dmitry N., Candidate of Physical and Mathematical Sciences, Associate Professor, Leading Researcher,
          State Institute of Information Technologies and Telecommunications (SIIT&T «Informika»),
          d.nikolsky@informika.ru
Shmakova Elena G., Candidate of Technical Sciences, Associate Professor, Head of the Department of Information
          Systems, Networks and Security, Russian State Social University, rusja_lena@mail.ru




