=Paper=
{{Paper
|id=Vol-2805/paper15
|storemode=property
|title=New Methods of Network Modelling Using Parallel-Hierarchical Networks for Processing Data and Reducing Erroneous Calculation Risk
|pdfUrl=https://ceur-ws.org/Vol-2805/paper15.pdf
|volume=Vol-2805
|authors=Leonid Timchenko,Waldemar Wojcik,Natalia Kokriatskaia,Volodymyr Tverdomed,Oleksandr Poplavskyi,Olga Levchenko,Natalia Kryvinska
|dblpUrl=https://dblp.org/rec/conf/citrisk/TimchenkoWKTPLK20
}}
==New Methods of Network Modelling Using Parallel-Hierarchical Networks for Processing Data and Reducing Erroneous Calculation Risk==
Leonid Timchenko¹ [0000-0003-0090-3886], Waldemar Wojcik² [0000-0002-6473-9627], Natalia Kokriatskaia¹ [0000-0001-9813-1399], Volodymyr Tverdomed¹ [0000-0002-0695-1304], Oleksandr A. Poplavskyi³ [0000-0003-0465-6843], Olga Levchenko¹ [0000-0001-7659-347X], Natalia Kryvinska⁴ [0000-0003-3678-9229]

¹ State University of Infrastructure and Technology, Kyiv, Ukraine, timchenko_li@gsuite.duit.edu.ua, kokryatska_ni@gsuite.duit.edu.ua, tverdomed@gsuite.duit.edu.ua, olevchenko76@gmail.com
² Politechnika Lubelska, Lublin, Poland, waldemar.wojcik@pollub.pl
³ National University of Construction and Architecture, Ukraine, apoplavskyi@gmail.com
⁴ Comenius University in Bratislava, Slovakia, Natalia.Kryvinska@fm.uniba.sk

Abstract. This paper proposes a new type of parallel-hierarchical network: a machine learning technology based on the completion of G-transformations. The network contains horizontal and vertical branches, which create a hierarchical structure. Each vertical and horizontal branch undergoes a G-transformation, which functions by calculating, at every step, the differences of its elements with respect to selected elements. Each selected element is multiplied by the quantity of received non-zero differences. Elements calculated in this way present input data for further network transformations. When the horizontal and vertical branches are formed, their elements shift in time, which determines the formation of tail and intermediate network elements. The risk of erroneous calculations is reduced in a parallel-hierarchical network because, when information is processed in the presented network, the sum of the resulting elements, i.e. the tail elements, is equal to the sum of the input network elements.
This makes it possible to lower the risk of erroneous calculations by controlling the equality of the sum of the tail elements and the sum of the input elements. The obtained results can be used to solve a wide range of problems in various systems that require complex operations and risk assessment, such as comparison between, or partial searches of, digital images.

Keywords: parallel-hierarchical network, functional series, basic network, tail element, G-transform, risk reduction of erroneous calculations.

Copyright © 2020 for this paper by its authors. This volume and its papers are published under the Creative Commons License Attribution 4.0 International (CC BY 4.0).

1 Introduction

The possibilities of computational facilities have reached the critical moment where theoretical and applied research has revealed constraints on their application to the solution of a number of serial arithmetic problems using computers of the first five generations. For the parallel processing of information, the concept of evolutionary improvements in computing and microprocessing facilities has turned out to be inefficient.

Constantly changing requirements regarding real-time signal processing and the operating rate of equipment have shown the necessity of creating computational structures with new architectures, enabling the processing of enormous data arrays at a high processing rate. We can state that we are now approaching a new and important stage in the development of engineering facilities intended for processing both one-dimensional signals and images.

The progress of computational facilities comprises the evolutionary transition from conventional von Neumann computational structures to "expert systems" and intelligent neural engineering systems simulating the brain activity of human beings, and to the intelligent computational facilities of the sixth generation.
These latest achievements require the reconsideration of Charles Babbage's idea regarding the logical structure of computers and the transition to other physical-technological fundamentals of information presentation, approaching natural parallel transformation and hierarchical processing.

Since electronic devices have closely approached the physical limit of operations, the solution of the problem of parallel information processing, namely real-time image processing, completely depends on the development of fast-acting and parallel intelligent computational processes, operational algorithms and architectures which are oriented towards neural-like principles of information processing and transformation. The existing methods of designing conventional algorithms and computer architectures do not meet the requirements of those algorithmic and architectural solutions that were achieved while designing computational structures with high levels of parallelism.

2 The main ideas of organizing parallel-hierarchical transformation

The fundamentals of PHCS theory are based on the study and creation of mathematical models of PH transforms for the writing, transmission, processing and presentation of machine information [1-3]. The initial axiom is the following: the set of analog operands, as the measure of information in the most compressed form, can be presented as the totality of coefficients of a parallel-hierarchical decomposition, whose digitization is strictly determined by the structure of the PH network. Proceeding from the conditions for reaching the maximum possible speed of computational structures [4], and to provide the highest level of compression in combination with natural parallelism, such a flexible (easily reconfigured) networking algorithmic structure [5-7] is organized, the "skeleton" of which is strictly defined in advance.
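One common reading of the G-transformation described in the abstract (differences computed at each step, and the selected element multiplied by the count of non-zero differences) is a min-based decomposition. The selection rule below is an assumption for illustration, not necessarily the authors' exact operator; the risk-control property claimed in the abstract, that the sum of the tail elements equals the sum of the input elements, then holds by construction:

```python
def g_transform(elements):
    """Toy single-branch G-transform sketch (assumes non-negative integers):
    repeatedly select the minimum element, emit it multiplied by the count of
    non-zero elements (a 'tail' element), and continue on the non-zero
    differences."""
    xs = [x for x in elements if x != 0]
    tails = []
    while xs:
        m = min(xs)                          # selected element (assumed rule)
        tails.append(m * len(xs))            # times the number of non-zero elements
        xs = [x - m for x in xs if x != m]   # non-zero differences feed the next step
    return tails

branch = [3, 1, 4, 1, 5]
tails = g_transform(branch)
# Risk control: the sum of the tail elements must equal the sum of the inputs.
assert sum(tails) == sum(branch)
```

Checking this equality after each transformation is the error-detection mechanism the abstract describes.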
The requirements on such a networking structure, which can find wide application in the theory and practice of various branches of science and technology as a universal tool for the investigation of information fields, include conventional ones (regarding the software part) and non-conventional ones: on the one hand, requirements concerning flexible reconfiguration of the basic algorithmic structure for performing intellectual operations (preprocessing, analysis and synthesis of information fields); on the other hand, engineering requirements for realizing PH transforms. To find mathematically substantiated connections between the quality level of concrete algorithms and the architecture of a PHCS with the maximum possible efficiency of the PH transform, we formulate the theorem of limiting compression of information.

Theorem 1.1. For a PH transform under conditions of admissible choice of numerical information at each level of its processing, there exists a minimum time of transformation at which the number of output coefficients of the transformation meets, with the greatest probability, the requirements of the ideal model.

Let $\tau_k$, where $k = 1, 2, \ldots$, be the time of selection of a random element from the information array, and denote by $T_n = \tau_1 + \tau_2 + \cdots + \tau_n$ the time of selection of all the elements of the input information array. If the elements of the input array are independent random values distributed according to a definite rule then, due to the identity of the operations performed in each cycle, $\tau_1, \tau_2, \ldots$ are identically distributed independent values. Let $F_n(t)$ be the distribution function of the random value $T_n$, i.e. $F_n(t) = P(T_n \le t)$. We denote by $m(t)$ the number of cycles of input-array element selection, and by $k_n$ the number of identical elements in the input array. We can find the probability distribution of this random value using the function $F_n(t)$.
Indeed, the event $\{m(t) = n - k_n\} = \{T_{n-k_n} \le t \le T_n\} = \{T_{n-k_n} \le t\} \cap \{T_n \ge t\}$, so that

$$P\{m(t) = n - k_n\} = P\{T_{n-k_n} \le t\} - P\{T_n \le t\} = F_{n-k_n}(t) - F_n(t).$$

We investigate the selection time $t = t_{n-k_n}$ at which the probability $P\{m(t) = n - k_n\}$ is greatest. If the value $T_n$ has a density $f_n$, it is not difficult to find $t_{n-k_n}$: it is sufficient to solve the equation

$$f_n(t) - f_{n-k_n}(t) = 0. \qquad (1.1)$$

Hence, to investigate the process of information transformation it is important to find the function $P_{n-k_n}(t)$, the probability that during time $t$ the selection of $n - k_n$ various elements from the input data array occurs in $n - k_n$ steps, and the value $t_{n-k_n}$, the time at which the probability $P\{m(t) = n - k_n\}$ is greatest.

To find $P_{n-k_n}(t)$ it is necessary to make an assumption regarding the distribution of the random values $\tau_k$, $k = 1, 2, \ldots$. It is natural to assume that $\tau_k$ obeys the normal distribution. Let the random values $\tau_1, \tau_2, \ldots$ have the normal distribution law with parameters $\bar\tau$ and $\Delta\tau$; then $T_n$ is also distributed normally, but with parameters $n\bar\tau$ and $\Delta\tau\sqrt{n}$.

For large $n$ ($n > 10$) the Laplace integral approximation holds: $P_n(t) = \Phi(l_2) - \Phi(l_1)$, where $t_1 \le t \le t_2$,

$$l_1 = \frac{t_1 - n p_\tau}{\sqrt{n p_\tau (1 - p_\tau)}}, \qquad l_2 = \frac{t_2 - (n - k_n) p_\tau}{\sqrt{n p_\tau (1 - p_\tau)}},$$

and $\Phi(l)$ is the Gaussian integral $\Phi(l) = \frac{1}{\sqrt{2\pi}} \int_0^l e^{-x^2/2}\,dx$. Having substituted the calculated parameters $\bar\tau$, $\Delta\tau$ and $n\bar\tau$, $\Delta\tau\sqrt{n}$, equation (1.1) takes the form

$$\frac{1}{\Delta\tau\sqrt{2\pi n}}\, e^{-\frac{(t - n\bar\tau)^2}{2 n \Delta\tau^2}} - \frac{1}{\Delta\tau\sqrt{2\pi (n - k_n)}}\, e^{-\frac{(t - (n - k_n)\bar\tau)^2}{2 (n - k_n) \Delta\tau^2}} = 0. \qquad (1.2)$$

Taking the logarithm of expression (1.2), we obtain

$$-\ln\!\big(\Delta\tau\sqrt{2\pi n}\big) - \frac{(t - n\bar\tau)^2}{2 n \Delta\tau^2} = -\ln\!\big(\Delta\tau\sqrt{2\pi (n - k_n)}\big) - \frac{(t - (n - k_n)\bar\tau)^2}{2 (n - k_n) \Delta\tau^2};$$

then

$$\ln\Delta\tau + \tfrac12\ln 2\pi + \tfrac12\ln n + \frac{(t - n\bar\tau)^2}{2 n \Delta\tau^2} = \ln\Delta\tau + \tfrac12\ln 2\pi + \tfrac12\ln(n - k_n) + \frac{(t - (n - k_n)\bar\tau)^2}{2 (n - k_n) \Delta\tau^2};$$

$$\tfrac12\ln n + \frac{(t - n\bar\tau)^2}{2 n \Delta\tau^2} = \tfrac12\ln n + \tfrac12\ln\!\Big(1 - \frac{k_n}{n}\Big) + \frac{(t - (n - k_n)\bar\tau)^2}{2 (n - k_n) \Delta\tau^2};$$

$$\frac{(t - n\bar\tau)^2}{n \Delta\tau^2} = \ln\!\Big(1 - \frac{k_n}{n}\Big) + \frac{(t - (n - k_n)\bar\tau)^2}{(n - k_n) \Delta\tau^2};$$

$$(n - k_n)(t - n\bar\tau)^2 - n\,\big(t - (n - k_n)\bar\tau\big)^2 = n (n - k_n)\, \Delta\tau^2 \ln\!\Big(1 - \frac{k_n}{n}\Big).$$

Expanding the left-hand side, $n t^2 - 2 n^2 \bar\tau t + n^3 \bar\tau^2 - k_n t^2 + 2 n k_n \bar\tau t - n^2 k_n \bar\tau^2 - n t^2 + 2 n^2 \bar\tau t - 2 n k_n \bar\tau t - n^3 \bar\tau^2 + 2 n^2 k_n \bar\tau^2 - n k_n^2 \bar\tau^2$, we obtain

$$-\,k_n t^2 + n^2 k_n \bar\tau^2 - n k_n^2 \bar\tau^2 = n (n - k_n)\, \Delta\tau^2 \ln\!\Big(1 - \frac{k_n}{n}\Big),$$

$$k_n t^2 = n^2 k_n \bar\tau^2 - n k_n^2 \bar\tau^2 - n (n - k_n)\, \Delta\tau^2 \ln\!\Big(1 - \frac{k_n}{n}\Big),$$

$$t^2 = n (n - k_n)\Big(\bar\tau^2 - \frac{\Delta\tau^2}{k_n}\ln\!\Big(1 - \frac{k_n}{n}\Big)\Big),$$
$$t_{n-k_n} = \sqrt{n (n - k_n)\Big(\bar\tau^2 - \frac{\Delta\tau^2}{k_n}\ln\!\Big(1 - \frac{k_n}{n}\Big)\Big)}. \qquad (1.3)$$

If we assume in expression (1.3) that $k_n = 0$ (in the limit), then $t_{n-k_n} = n\bar\tau$: in this case the time of transformation is maximal and is defined by the number of input elements. If $k_n = n$, then $t_{n-k_n} = \bar\tau$, i.e. the same parameter is determined by the selection time of one element and does not depend on the dimensionality of the input array, which was to be proved.

─ Corollary 1. The maximum speed of PH read/write of information is achieved by quantization of the optimum criteria time by a number of serially formed coefficients (tail elements) of the PH transformation.
─ Corollary 2. To achieve real-time operation at minimum complexity of parallel-hierarchical algorithmic and engineering facilities, the operands of the numerical field must be processed on the basis of the method of PH transformation, while writing, storing and reading of information is performed by means of PH codes. Known serial logic-time codes are codes oriented towards achieving the maximum possible speed at the minimum possible power consumption for their preservation; PH codes for parallel writing-reading of information are codes oriented towards obtaining the maximum possible compression and algorithmic speed at minimum complexity of algorithmic facilities.
─ Corollary 3. The PH transformation allows one to realize the principle of distributed networking processing, which is very important for the realization of uniform neural-like computational structures.

3 Block diagram of multistage neural networks organization and an example of a semantic parallel-hierarchical network

If network $N$ were a more or less uniform environment, as is the case in classical acoustics and optics, then we would deal with a wave generated by a point source. In the case of network $N$ the situation is different.
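The limiting behaviour of the transform-time expression (1.3) above can be checked numerically. The sketch below evaluates the formula for small and large $k_n$; the concrete parameter values are illustrative assumptions:

```python
import math

def transform_time(n, k, tau, dtau):
    """Evaluate t_{n-k_n} from expression (1.3):
    t = sqrt(n*(n-k)*(tau^2 - (dtau^2/k)*ln(1 - k/n)))."""
    return math.sqrt(n * (n - k) * (tau**2 - (dtau**2 / k) * math.log(1 - k / n)))

n, tau, dtau = 1000, 1.0, 0.1

# Few identical elements (k_n small): the transform time approaches n*tau,
# i.e. it is governed by the number of input elements.
t_small_k = transform_time(n, 1, tau, dtau)
assert abs(t_small_k - n * tau) / (n * tau) < 0.01

# Many identical elements: the transform time drops sharply below n*tau.
t_large_k = transform_time(n, n - 1, tau, dtau)
assert t_large_k < 0.1 * n * tau
```

This matches the theorem's claim that the achievable transform time shrinks with the number of identical elements $k_n$ in the input array.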
If the propagation of non-zero elements in the network differs greatly from the metrics of physical space, then the result will be passages from one area into another, and the behaviour will show far less regularity than the wave-type phenomena used in classical physics. That is why, when modelling such processes, new approaches that take into account the non-uniformity of the network space are required. In this case we come to the conclusion that natural neural networks are non-uniform and have a characteristic 3D architecture. At the same time, it is known that N networks do not take into account the non-uniformity and 3D dimensionality of natural neural networks. Further, these very ideas regarding non-uniformity, 3D dimensionality and the presence of signal delay in the network laid the foundation for the construction of the PH network. As we will see in the following sections, the topology of the PH network, unlike that of the known artificial neural networks, is not accidental. The topology of natural neural networks, which defines the method of network cell connection, is probably determined genetically, at the global level, which is why the connections are not entirely accidental.

The presentation of this dynamic structural complex on a semantic level is one of this chapter's tasks. The basis of notions about such a complex is formed by the following provisions. First of all, this refers to the addition of excitations at the moment of combining various stimulations: the cortex of the cerebral hemispheres contains a great number of nervous cells where afferent impulses converge (they carry excitations to the central nervous system); these impulses arrive from various receptors: visual, auditory, thermal, muscle, etc. This proves the availability of a complex mechanism of interaction between various cortex zones.
The availability of such a mechanism of interaction assumes the following characteristic features of computation organization in the cortex: the topographic character of the video image, simultaneity (parallelism) of signal action, the mosaic structure of the cortex [4], the rough hierarchy of the cortex [3], a perception mechanism that is space-correlated in time, and training [4]. However, the main problem that remains unsolved so far is how the interaction of nervous cells, emerging at the moment of stimuli combination, is structured in the cortex of the cerebral hemispheres.

On the structural level, the organization of cortex zones in the form of interacting neural networks can be presented as suggested in Fig. 1. Here each layer of the cortex zones is presented in the form of a neural network, as a neurobiological process of hierarchically interdependent interaction of convergent-divergent structures. The outputs of the same-name neural networks of each cortex zone form the corresponding inputs for the next cortex zone. The term "same-name outputs" means the availability of a multiple correlative process of coincidence of these input signals in time; neural networks are probably the ideal tool that can operate in a steady-state mode under conditions of uncertainty. Neural networks functioning on the principle of dynamic multifunctionality include interaction of convergent-divergent structures in horizontal and vertical directions and form a 3D architecture. The structure of such interaction differs from similar ones in that the paths of horizontal routes vary (change genetically at a defined level) due to the complexity of the different schemes of convergent-divergent processes. The paths of these routes can vary slightly in the process of training.

Fig. 1. Block diagram of multistage neural networks organization

One of the central ideas of this article is the realization of the following statement.
How, in real time, can the redundancy of a multistage structure (for instance, a neural network) be optimized on the level of its inter-element bonds? The answer to this question can be the suggested concept of a multistage network. The formation of a multistage network assumes a process of serial transformation of correlated space areas and the creation of time-decorrelated elements of the physical environment during its transition from one stable state into another.

Such a process of image analysis is performed in many stages, each of which includes the realization of the above-mentioned procedure. The condition for the transition of a complex image to a higher level is the dynamics of processing in time in the parallel channels of the lower level. The result is image components decorrelated in the space-time area.

For a better understanding of the suggested neural network, we draw certain semantic analogies. Imagine that a group of researchers jointly solves a certain scientific problem. Each of the researchers has his own knowledge regarding this problem: all of them propose their ideas and reach a common conclusion, creating a matrix of opinions $M_1$ of the first level of discussion. This judgment can be revised in the process of discussion. Each revision is, in essence, a new general judgment. The mathematical description of this process forms the 2nd level of discussion (network) by means of the formation of the elements of a matrix $M_2$. In this case the rows of matrix $M_2$, in the terminology of our example, are the time sequence of general judgment formation. The first intermediate result of this discussion will be decorrelated in time with all other following judgments and presents the first impression (initial solution) of the given problem. At each following level of problem solving there is a further revision of the first intermediate results of the discussion and the formation of a matrix of judgments $M_j$.
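The discussion analogy can be sketched as an iterative consensus procedure. Everything concrete below (the numeric "opinions", the averaging revision rule, the convergence threshold) is an illustrative assumption; the paper does not specify the revision operator:

```python
def discussion_levels(opinions, pull=0.5, eps=1e-3, max_levels=50):
    """Each level M_j revises every judgment towards the current consensus.
    Returns the per-level consensus values (the intermediate judgments)."""
    levels = []
    current = list(opinions)           # first-level judgments (matrix M_1)
    for _ in range(max_levels):
        consensus = sum(current) / len(current)
        levels.append(consensus)
        if max(abs(x - consensus) for x in current) < eps:
            break                      # judgments have converged at this level
        # Revision: each participant moves partway towards the consensus,
        # forming the next matrix of judgments M_j.
        current = [x + pull * (consensus - x) for x in current]
    return levels

levels = discussion_levels([0.2, 0.9, 0.5, 0.4])
# This revision rule preserves the mean, so every level's consensus
# equals the initial mean of 0.5.
assert all(abs(c - 0.5) < 1e-9 for c in levels)
```

The recorded per-level consensus values play the role of the intermediate judgments that later levels revise.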
Such a revision is carried out each time all the judgments at the given moment of time converge to within a certain approximation. This occurs when a certain general judgment that satisfies all the participants is formed from numerous non-converged judgments. Intermediate results of the discussion are revised results of the previous level of the discussion. The general result of the discussion is a serial process of multistage revision of the problem being solved and consists of separate intermediate judgments. That is why a parallel-hierarchical process can be defined as the simultaneous analysis of a certain phenomenon (object) by means of the hierarchical allocation of the most efficient notions about it.

Let us consider in more detail the process of G-transformation [9] simulated at every level of the PH network. An example of the semantic organization of this process [8] is shown in Fig. 2, where 1H, 2H, 3H are the first, the second and the third observers, who identify a certain visual scene, and the network nodes are the results of the visual scene identification.
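The abstract describes horizontal and vertical branches each undergoing the G-transformation, with equality of the tail-element and input sums as the risk check. Below is a branch-wise sketch under a toy min-difference reading of the transformation; the selection rule and the 2D layout are assumptions for illustration, not the authors' exact network:

```python
def g_branch(xs):
    """Toy G-step on one branch: select the minimum, multiply it by the
    number of non-zero elements, continue on the non-zero differences."""
    xs = [x for x in xs if x != 0]
    tails = []
    while xs:
        m = min(xs)
        tails.append(m * len(xs))
        xs = [x - m for x in xs if x != m]
    return tails

grid = [[3, 1, 2],
        [4, 0, 5]]

# Horizontal branches: rows. Vertical branches: columns.
row_tails = [g_branch(row) for row in grid]
col_tails = [g_branch(col) for col in zip(*grid)]

total = sum(sum(row) for row in grid)
# The risk-control identity holds for each branch set separately.
assert sum(sum(t) for t in row_tails) == total
assert sum(sum(t) for t in col_tails) == total
```

Comparing the branch-wise tail sums against the input sum is the erroneous-calculation check the paper attributes to the tail elements.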
The concrete semantic content of the nodes of the formed network can be, for instance:

─ 1₁ – a small object moves; 1₂ – it speeds up quickly; 1₃ – the object has an extended form; 1₄ – it is mainly of grey colour; 1₅ – considerable black colour on the boundaries of the object; 1₆ – white colour is noticed on the object; 1₇ – it moves down at great speed; 1₈ – it slightly changes the direction of motion; 1₉ – one edge of the object has a curved form;
─ 2₁ – the object is of grey colour and moves at the speed of a bird; 2₂ – if it is a bird, then the speed is high; 2₃ – it moves swiftly; 2₄ – I have never seen a bird of such coloration; 2₅ – unusual coloration; 2₆ – it probably starts diving; 2₇ – it quickly comes out of the dive; 2₈ – the speed does not drop; 2₉ – by speed it looks like a wild bird; 2₁₀ – there are different colours; 2₁₁ – the colour resembles the colour of a wild bird; 2₁₂ – it enters into a nose dive; 2₁₃ – the same route of motion as a wild bird; 2₁₄ – by form it resembles a wild bird; 2₁₅ – by colour it does not resemble an ordinary bird; 2₁₆ – the curved edge looks like the beak of a bird;
─ 3₁ – if it is a bird, then it is very manoeuvrable, with unusual colouring; 3₂ – the speed increases; 3₃ – many-hued colouration; 3₄ – it continues diving; 3₅ – by speed and form it resembles a bird of prey; 3₆ – the colour is the same as the colour of a bird of prey; 3₇ – by the route of motion and by the form of the beak it is probably a bird of prey;
─ 4₁ – by the route of motion, speed of motion, form and colour it resembles a bird of prey.

From the considered example it is obvious that the suggested semantic PH network (Fig. 2) is a semantic organization of a dynamic data structure [9,10] that includes blocks, which correspond to objects or notions changeable in real time, and bonds, which indicate the temporal interconnection between blocks. Unlike the known structures of semantic networks [9], here it becomes clear how to represent in the network such a situation as an exception to the rules.
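The conclusions assemble these nodes into hierarchically organized frames, chains whose tail block stores the final information about the object. A minimal sketch of that bookkeeping (the string encoding of the node labels, e.g. "1_4" for node 1₄, is an assumption):

```python
# Frames from the paper's example (Fig. 2), written as chains of node labels.
frames = {
    1: ["1_1", "1_4", "1_7", "2_1"],
    2: ["1_2", "1_5", "1_8", "2_2", "2_4", "2_6", "2_8", "2_10", "2_12", "3_1"],
    3: ["1_3", "1_6", "1_9", "2_3", "2_5", "2_7", "2_9", "2_11", "2_13",
        "2_14", "2_15", "2_16", "3_1", "3_2", "3_3", "3_4", "3_5", "3_6",
        "3_7", "4_1"],
}

def tail_block(chain):
    """Final information about the object is stored in the frame's tail block."""
    return chain[-1]

tails = [tail_block(frames[i]) for i in (1, 2, 3)]
# Per the paper, the tail blocks of the three frames are 2_1, 3_1 and 4_1.
assert tails == ["2_1", "3_1", "4_1"]
```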
For instance, if in the considered example of the semantic PH network (Fig. 2) the observers mistakenly recognize the identified object, taking, instead of the knowledge "the object being observed is a wild bird", the wrong knowledge "the object being observed is an ordinary bird", then in the next analysis the wrong knowledge about the object can be corrected.

Conclusions

By analogy with the known apparatus for the formation and storage of information about an object in the form of complex packages called frames, in the suggested structure of the semantic network information about the object is stored in hierarchically organized frames. Each frame is described by its functional row. For instance, for the considered example (Fig. 2), the frames are formed from the following blocks of the network: 1st frame – 1₁ → 1₄ → 1₇ → 2₁; 2nd frame – 1₂ → 1₅ → 1₈ → 2₂ → 2₄ → 2₆ → 2₈ → 2₁₀ → 2₁₂ → 3₁; 3rd frame – 1₃ → 1₆ → 1₉ → 2₃ → 2₅ → 2₇ → 2₉ → 2₁₁ → 2₁₃ → 2₁₄ → 2₁₅ → 2₁₆ → 3₁ → 3₂ → 3₃ → 3₄ → 3₅ → 3₆ → 3₇ → 4₁. Final information about the object is stored in the tail blocks of the frames, i.e., for our example, in 2₁, 3₁ and 4₁.

As stated above, a central idea of this article is that the redundancy of a multistage structure [11] (for instance, a neural network) can be optimized in real time on the level of its inter-element bonds: the formation of a multistage network assumes a process of serial transformation of correlated space areas and the creation of time-decorrelated elements of the physical environment during its transition from one stable state into another [12].

Fig. 2. Example of semantic parallel-hierarchical network [9]

References

1. Ruei-Yu Wu, Gen-Huey Chen, Jung-Sheng Fu, Gerard J. Chang (2008). Finding cycles in hierarchical hypercube networks. Information Processing Letters, Volume 109, Issue 2, pp. 112-115.
2. Sven Behnke (2003). Hierarchical Neural Networks for Image Interpretation. Springer-Verlag, Berlin, Heidelberg, New York.
3. L. Srivastava, S.N. Singh, J. Sharma (1998). Parallel self-organising hierarchical neural network-based fast voltage estimation. IEE Proceedings - Generation, Transmission and Distribution, Volume 145, Issue 1, pp. 98-104.
4. Wang B., Rahal I., Dong A. (2011). Parallel hierarchical clustering using weighted confidence affinity. International Journal of Data Mining, Modelling and Management 3(2): 110-129.
5. Thompson Richard H., Swanson Larry W. (2010). Hypothesis-driven structural connectivity analysis supports network over hierarchical model of brain architecture. Proceedings of the National Academy of Sciences USA 107(34): 15235-15239. doi: 10.1073/pnas.1009112107.
6. Lafer-Sousa Rosa, Conway Bevil R. (2013). Parallel, multi-stage processing of colors, faces and shapes in macaque inferior temporal cortex. Nature Neuroscience 16, 1870-1878.
7. Kaiser M. (1994). Time-Delay Neural Networks for Control. In: 4th International Symposium on Robot Control (SYROCO '94), Capri, Italy.
8. L.I. Timchenko, N.I. Kokryatskaya, A.A. Yarovyy, V.V. Melnikov, G.L. Kosenko (2013). Method of predicting the position of the energy center of the image of a laser beam using a parallel-hierarchical network. Cybern. Syst. Anal. 49(5), 785-795.
9. L.I. Timchenko (2000). A multistage parallel-hierarchic network as a model of a neurolike computation scheme. Cybern. Syst. Anal. 36(2), 251-267.
10. Leonid I. Timchenko, Natalia I. Kokryatskaya, Viktor V. Melnikov, Galina L. Kosenko (2013). Method of forecasting energy center positions of laser beam spot images using a parallel hierarchical network for optical communication systems. Opt. Eng. 52(5), 055003. doi: 10.1117/1.OE.52.5.055003.
11. Leonid I. Timchenko, Nikolay S. Petrovskiy, Natalia I. Kokriatskaia (2014). Laser beam image classification methods with the use of parallel-hierarchical networks running on a programmable logic device. Opt. Eng. 53(10), 103106. doi: 10.1117/1.OE.53.10.103106.
12. L. Tymchenko, V. Tverdomed, N. Petrovsky, N. Kokryatska, Y. Maistrenko (2019). Development of a method of processing images of laser beam bands with the use of parallel hierarchic networks. Eastern European Journal of Enterprise Technologies, No. 6/9(102), pp. 21-27.
13. Oleg G. Avrunin, Maksym Y. Tymkovych, Sergii V. Pavlov, Sergii V. Timchik, Piotr Kisała, et al. (2015). Classification of CT-brain slices based on local histograms. Proc. SPIE 9816, Optical Fibers and Their Applications 2015, 98161J.
14. Oksana Chepurna, Irina Shton, Vladimir Kholin, Valerii Voytsehovich, Viacheslav Popov, et al. (2015). Photodynamic therapy with laser scanning mode of tumor irradiation. Proc. SPIE 9816, Optical Fibers and Their Applications 2015, 98161F.
15. Olexander N. Romanyuk, Sergii V. Pavlov, Olexander V. Melnyk, Sergii O. Romanyuk, Andrzej Smolarz, et al. (2015). Method of anti-aliasing with the use of the new pixel model. Proc. SPIE 9816, Optical Fibers and Their Applications 2015, 981617.
16. S.O. Romanyuk, S.V. Pavlov, O.V. Melnyk (2015). New method to control color intensity for antialiasing. Control and Communications (SIBCON), 2015 International Siberian Conference, 21-23 May 2015. doi: 10.1109/SIBCON.2015.7147194.
17. Waldemar Wójcik, Andrzej Smolarz (2017). Information Technology in Medical Diagnostics. London: Taylor & Francis Group, CRC Press, 210 pages.
18. Vassilenko, S. Valtchev, J.P. Teixeira, S. Pavlov (2016). Energy harvesting: an interesting topic for education programs in engineering specialities. "Internet, Education, Science" (IES-2016), pp. 149-156.
19. Roman Kvyetnyy, Yuriy Bunyak, Olga Sofina, et al. (2015). Blur recognition using second fundamental form of image surface. Proc. SPIE 9816, Optical Fibers and Their Applications 2015, 98161A.
20. Roman N. Kvyetnyy, Olexander N. Romanyuk, Evgenii O. Titarchuk, et al. (2016). Usage of the hybrid encryption in a cloud instant messages exchange system. Proc. SPIE 10031, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2016, 100314S.
21. Roman Kvyetnyy, Olga Sofina, Pavel Orlyk, Andres J. Utreras, Waldemar Wójcik, et al. (2016). Improving the quality perception of digital images using modified method of the eye aberration correction. Proc. SPIE 10031, Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics Experiments 2016, 1003113.
22. Fefelov A.O., Lytvynenko V.I., Taif M.A., Savina N.B., Voronenko M.A., Lurie I.A., Boskin O.O. (2019). Hybrid immune algorithms in the gene regulatory networks reconstruction. Proceedings of the Second International Workshop on Computer Modeling and Intelligent Systems (CMIS-2019), Zaporizhzhia, Ukraine, April 15-19, 2019, pp. 185-210.