<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Khabarovsk, Russia</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Intelligent High-Performance Computing for Big Data Processing in Fiber Optical Measuring Networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Elena V. Zakasovskaya</string-name>
          <email>elena.zakasovskaya@vvsu.ru</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Valentin S. Tarasov</string-name>
          <email>valentin.tarasov@vvsu.ru</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nadezhda I. Denisova</string-name>
          <email>denisovanadezda0@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Far Eastern Federal University</institution>
          ,
          <addr-line>Vladivostok</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Saint Petersburg University</institution>
          ,
          <addr-line>St Petersburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Vladivostok State University of Economics and Service</institution>
          ,
          <addr-line>Vladivostok</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <volume>1</volume>
      <fpage>6</fpage>
      <lpage>19</lpage>
      <abstract>
        <p>The paper deals with the problem of reconstructing the parameters of physical fields using distributed information and measurement systems in cases of incomplete laying of measurement lines. High-performance computing is typically used to solve advanced problems and to support research activities through computer modeling. The rise of Big Data has changed the entire perspective on data and data handling, and the ever-growing analytical needs of Big Data can be satisfied only with extremely high-performance computing models. A new combined algorithm is presented, which consists in the optimization of the geometry of the measuring network with a view to the further application of a complex of neural networks. The possibility of choosing and using an appropriate neural network from a complex of several pre-trained networks is demonstrated.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The computerization of almost all areas of modern life strongly affects people and the majority of their activities, and it drives the development of information technology. In line with the modern tendency in the development of measuring instruments, when large amounts of information are collected and processed it is preferable to use not many separate measuring instruments but rather complex devices such as information and measuring systems [1].</p>
      <p>Information measuring systems (IMS) are used to solve a wide range of applied problems; however, their main purpose is to provide continuous monitoring of large-scale, spatially inhomogeneous, multidimensional physical fields [2].</p>
      <p>An important aspect of the operation of an IMS is the process of collecting information, which determines the type of measuring network. The topology of the communication system depends on the choice of network technology and, as a consequence, on the scope of application, the types of input signals, the types of measurements, and the functional properties of the components. Examples of IMS with fundamentally different network topologies are:
• Fiber-optic measuring networks (FOMN) based on fiber-optic information and measuring networks;
• Information-measuring systems based on wireless sensor networks (WSN).</p>
      <p>Present-day science-intensive production cannot do without constant monitoring and control over the dynamics of a range of parameters of distributed physical fields (PFs). Distributed information-measuring systems are called upon to solve this problem.</p>
      <p>Information and measuring systems based on wireless sensor networks have great potential. Such low-power communication devices can be deployed over the entire area of almost any physical space, ensuring continuous monitoring of physical phenomena in real time, processing and transferring the collected information, and coordinating actions with other nodes of the network. However, it is impossible to fully deploy intelligent distributed IMS using wireless technologies in critical infrastructures, primarily due to the lack of network technologies that meet information security requirements [3].</p>
      <p>FOMN are of greatest interest. These systems have various topologies and organizations and are built, for example, on a fiber-optic element base [1]. One of the fundamental parts of a distributed fiber optical measuring system (DFOMS) is the distributed fiber optical measuring network responsible for collecting measuring information regarding the PF parameters under study [2].</p>
      <p>An FOMN represents a set of fiber-optical measuring lines (MLs) [1, 2] laid out in accordance with a certain setup on the surface under study. Reconstructing the distributed PF parameters from the characteristics of the optical radiation passing through the FOMN thus makes this a mathematical problem of tomography [4]. To restore the distributed PF function by means of an FOMN, the MLs were laid along 2-4 directions.</p>
      <p>In the case of full data, in other words, with sufficient-quality projections over the entire 180-degree angular range, high-quality reconstructions are known to be obtained [4-6]. For comparison, to obtain quality images in industrial tomography, the necessary number of directions is p = 10² to 10³.</p>
      <p>Standard analytical methods are unsuitable for fiber-optical tomography, since direct application of the inverse operator does not provide a unique stable solution; this is characteristic of any few-angle problem in tomography. There is therefore good reason to consider other algorithms and, perhaps, their synthesis as well. Restoring PF functions by means of an FOMN can be broken into several steps: sampling, receiving and processing projection data, and back projection.</p>
      <p>The existing success and great prospects for the development of information and measuring systems are largely due to the fact that sensor networks built on the same principles can be used in completely different areas of human activity. However, wireless sensors have a number of limitations that negatively affect the provision of information security when data are transmitted within and outside the network.</p>
    </sec>
    <sec id="sec-2">
      <title>Notations and Standard Definitions</title>
      <p>Let f(x1, x2) be the function of a distributed PF parameter on a planar surface. Throughout this paper we assume that f is infinitely differentiable and has compact support. The 2D Radon transform ℜ maps a density function f to its line integrals. The objective of tomography is to produce an accurate image of the object interior based on a finite number of scanned views.</p>
      <p>Mathematically, the problem is to reconstruct f, given the measurements of g(θ, s). The short-term objective will be focused on a comprehensive description of the projection function g = ℜf.</p>
      <p>
        Let index i determine the i-th direction of scanning θi, and index j determine the samples sij in the selected i-th direction. A pair of indexes (i, j) then corresponds to a straight line Lij along which the area is scanned, and the projection value along the straight line Lij can be written as

        gij = ℜij f = ∫Lij f(x1, x2) dl,   (1)

where ℜ is the Radon transform of the function f, and dl is an element of length along the straight line Lij. The pairs of numbers (θi, sij) determine the parallel setup of scanning on the plane.
      </p>
      <p>Let us break the area of research S ⊂ R² into smaller sites Sk, so that S = S1 ∪ S2 ∪ … ∪ SN.</p>
      <p>We consider the function f constant in each cell Sk and equal to fk; the symbol f also denotes the matrix corresponding to this partition, and F the vector obtained by stacking its rows:

        f = ( f1 f2 … fm ; fm+1 fm+2 … f2m ; … ; fn(m−1)+1 fn(m−1)+2 … fnm ),   F = ( f1 f2 … fm fm+1 … f2m … fnm )ᵀ.   (2)
      </p>
      <p>
        Elementary cells Sk are referred to as image elements. With this decomposition, the integral equations (1) are transformed into a system of linear algebraic equations whose matrix form looks as follows:

        AF = G.   (3)
      </p>
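As a hedged illustration of the system AF = G (not the paper's actual FOMN layout): assuming a simplified two-direction setup whose measuring lines run only along the rows and columns of the grid, the matrix A can be assembled explicitly, and its rank shows why the system is underdetermined:

```python
import numpy as np

def build_system(n, m):
    """Assemble the SLAE matrix A for a hypothetical FOMN whose measuring
    lines run along the n rows and m columns of an n x m grid.
    Row k of A marks the cells crossed by measuring line k."""
    A = np.zeros((n + m, n * m))
    for i in range(n):                  # horizontal lines: sum of grid row i
        A[i, i * m:(i + 1) * m] = 1.0
    for j in range(m):                  # vertical lines: sum of grid column j
        A[n + j, j::m] = 1.0
    return A

n, m = 5, 5
A = build_system(n, m)
f = np.zeros((n, m))
f[1, 2] = 1.0                           # a single localized impact
F = f.ravel()                           # row-stacked image vector F
G = A @ F                               # projection data G

rank = np.linalg.matrix_rank(A)
print(rank, n * m)                      # 9 25: far fewer independent
                                        # equations than unknowns
```

With only n + m measuring lines for n·m cells, the system cannot be inverted directly, which motivates the geometry optimization and neural network processing of the following sections.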
    </sec>
    <sec id="sec-3">
      <title>Optimization of FOMN Geometry</title>
      <p>
        The specificity of fiber-optic tomography tasks is that the FOMN provides an ultra-small survey data acquisition scheme. As a rule, in such FOMN the number of measuring lines is less than the number of monitored areas, so the SLAE (3) is underdetermined.
      </p>
      <p>
        Because the input data have a large dimension, preprocessing is needed that selects the most significant parameters by reducing the number of free variables in the SLAE (3).
      </p>
      <p>
        In this context, FOMN optimization consists in deleting those rows and columns along the edges of the matrix f (in “trimming” the matrix f) whose elements sum to zero. Knowing the size of the matrix f and the values of the column of projection data, one can always check whether the matrix f has such a row or column. The rows and columns with this feature are then removed from the matrix f, after which the matrix of system (3) and the column of projection data are modified accordingly.
      </p>
      <p>As a result of the Trimming Algorithm, a new matrix f of size n′ × m′ is formed, with n′ ≤ n, m′ ≤ m. Executing the algorithm described above thus selects the candidate areas in which the required “objects” are located.</p>
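A minimal sketch of the Trimming Algorithm, under the assumption that the field values are non-negative, so a zero row or column sum in the projection data means an empty row or column of f (the field and impact below are illustrative, not the paper's data):

```python
import numpy as np

def trim(f):
    """Trimming Algorithm sketch: crop away edge rows and columns of the
    matrix f whose elements sum to zero (in practice these zero sums are
    read off the projection data column), keeping the candidate area."""
    rows = np.flatnonzero(f.sum(axis=1))    # rows with non-zero projection
    cols = np.flatnonzero(f.sum(axis=0))    # columns with non-zero projection
    if rows.size == 0:                      # empty field: nothing to keep
        return f[:0, :0], (0, 0)
    r0, r1 = int(rows[0]), int(rows[-1]) + 1
    c0, c1 = int(cols[0]), int(cols[-1]) + 1
    return f[r0:r1, c0:c1], (r0, c0)        # cropped matrix and its offset

f = np.zeros((30, 30))                      # illustrative 30 x 30 field
f[5:8, 10:13] = 1.0                         # localized impact
f_small, offset = trim(f)
print(f_small.shape, offset)                # (3, 3) (5, 10)
```

The returned offset allows the original dimensions of the area to be restored after the trimmed region has been processed.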
      <p>Let us give an example of applying the above method to a specific distribution of a physical field parameter, given in analytical form by the function
z(x, y) = exp(−0.5((x − 6)² + (y − 11)²)) + exp(−0.5((x − 16)² + (y − 7)²)).</p>
      <p>Let the FOMN, after applying the trimming procedure described above, have dimensions n′ × m′, n′ ≤ n, m′ ≤ m. In general, the sizes should decrease, and this happens in most cases, because the spectrum width b imposes restrictions on the size of the objects under study. In the extreme case, the neural network for the entire area has to be used.</p>
      <p>It is not known in advance what the sizes n′ and m′ will be. Therefore the question naturally arises: a neural network of exactly what size should be used?</p>
      <p>The answer to this question is contained in the approach proposed in this paper, which consists in the following:
1. We train in parallel (independently of each other) several neural networks of different sizes.</p>
      <p>Denote by NN(ni, mi) a neural network of size ni × mi, i.e., a neural network intended for processing an FOMN of the appropriate size.</p>
      <p>Denote by SN the set of all K pre-trained neural networks of the form NN(ni, mi):</p>
      <p>SN = { NN(n1, m1), …, NN(ni, mi), …, NN(nK, mK) },   (4)
n1 ≤ … ≤ ni ≤ … ≤ nK,  m1 ≤ … ≤ mi ≤ … ≤ mK,
(ni, mi) ≠ (nj, mj) for i ≠ j, 1 ≤ i, j ≤ K.</p>
      <p>
        2. To process projection data of size n′ × m′, n′ ≤ n, m′ ≤ m, coming from the FOMN, we choose in the set SN of the form (4) a neural network of a suitable size, i.e., NN(ni, mi), for which

        ni−1 &lt; n′ ≤ ni,   (5)
        mi−1 &lt; m′ ≤ mi,   (6)

with 1 ≤ i ≤ K.
      </p>
      <p>
        From conditions (5) and (6) it obviously follows that the neural network NN(ni, mi) is the network of the smallest dimension with which it is possible to process the projection data of a measuring network of size n′ × m′.
      </p>
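The selection rule based on conditions (5) and (6) can be sketched as follows; the list of sizes mirrors the 3×3 through 30×30 networks discussed later and is taken here as an assumption:

```python
# Sizes of the pre-trained networks NN(ni, mi) in the set SN,
# ordered so that n1 <= ... <= nK and m1 <= ... <= mK (assumed sizes).
SN_SIZES = [(3, 3), (5, 5), (7, 7), (10, 10), (15, 15), (20, 20), (30, 30)]

def select_network(n_prime, m_prime, sizes=SN_SIZES):
    """Pick the smallest NN(ni, mi) able to process an n' x m' measuring
    network, i.e. the first size with n' <= ni and m' <= mi."""
    for ni, mi in sizes:
        if n_prime <= ni and m_prime <= mi:
            return (ni, mi)
    return sizes[-1]              # fall back to the largest network

print(select_network(3, 3))       # (3, 3)
print(select_network(4, 6))       # (7, 7)
print(select_network(12, 9))      # (15, 15)
```

Because the sizes are ordered, scanning the list from the smallest entry returns exactly the minimal network satisfying both conditions.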
    </sec>
    <sec id="sec-4">
      <title>Using RBF Networks</title>
      <p>In this work, neural networks of the radial basis function type are used. Earlier, in [10], the authors already investigated the possibility of using radial basis function neural networks (RBFNN).</p>
      <p>The information generated by the network, represented by the vector G, is a set of tomographic data from which the neural network must reconstruct the vector F. Thus, the neural network must perform the transformation F = A⁻¹(G), having previously been trained on a set of training pairs {(G, F)}.</p>
      <p>To create the training set, the authors used the reinforcement method of selecting training pairs, in which pairs of the form (Gi, Fi) with AFi = Gi were considered.</p>
      <p>When creating the RBFNN training pairs in [10], Gaussian-type functions were used, with parameters selected as lattice points of the corresponding scanning scheme; Gaussian pairs were used as well.</p>
      <p>For example, for a 5×5 field a training set of 3325 training pairs was created, on which the RBFNN was trained. It was shown experimentally that the constructed network makes it possible to restore the functions of the spatial distribution of the studied physical quantity with an error at a single point of no more than 2%, and that it has good predictive capabilities. However, it was noted that with this method of recovery in high-dimensional tasks using FOMN, serious difficulties arise in training the network due to the very large size of the training set. It therefore became necessary to search for optimal paths when using neural networks.</p>
      <p>One of the ways to optimize the processing of information is to use a set of pre-trained neural networks of various dimensions.</p>
      <p>The choice of reference functions should depend on the width of the spectrum b of the function f(x, y) under study. Functions of Gaussian type</p>
      <p>z(x, y) = exp(−ai((x − ci)² + (y − bi)²))   (7)
can be used here, since they take non-zero values only in a zone around a certain center.</p>
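A reference function of the form (7) can be sampled at the grid nodes as follows (the parameter values are illustrative, not taken from the paper):

```python
import math

def gaussian_field(n, m, a, c, b):
    """Sample a Gaussian reference function of the form (7),
    z(x, y) = exp(-a * ((x - c)**2 + (y - b)**2)),
    at the integer nodes of an n x m measuring grid."""
    return [[math.exp(-a * ((x - c) ** 2 + (y - b) ** 2))
             for x in range(m)]
            for y in range(n)]

z = gaussian_field(5, 5, a=0.5, c=2, b=2)   # center at the node (2, 2)
print(round(z[2][2], 3))                    # 1.0 at the center
print(round(z[2][3], 3))                    # 0.607, one cell away
```

The rapid decay away from the center (c, b) is what makes these functions convenient as localized reference impacts.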
      <p>To analyze the neural network method for solving the problem using RBFNN, this work considered the tomographic task of restoring the PF functions from the information coming from an information-measuring system of size 30×30.</p>
      <p>It was assumed that the reference effect on the field has the form of a smooth function with a limited effective spectrum width b equal to the conditional spectral unit p. All values of the function are considered non-negative and normalized.</p>
      <p>In this work, the same three types of reference distributions of a physical quantity are used as in [10]. The first and second types refer to the regular method, and the third to the random method. We describe them in more detail.</p>
      <p>
        Type I. The reference field distributions in this case are single Gaussians of the form (7), whose centers are located at the nodes of the measuring network. It was found that the optimal learning parameters are the values of ai.
      </p>
      <p>Type II. Analytically, these functions can be represented as</p>
      <p>
        z(x, y) = exp(−a1((x − c1)² + (y − b1)²)) + exp(−a2((x − c2)² + (y − b2)²)),   (8)

provided that the carriers are separated by at least 2π/b. These are Gaussian pairs with non-intersecting carriers.
      </p>
      <p>
        Type III. Reference distributions of this type were obtained using a randomization process with normalization. Each integer random set a1, a2, b1, c1, b2, c2 = 1, …, N was assigned a function of the form (8), and each vector was normalized before inclusion in the training set.
      </p>
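Under these assumptions (N taken as the grid size, all six integer parameters drawn uniformly from 1..N), one Type III training vector could be generated as follows; this is a sketch, not the authors' exact procedure:

```python
import random
import numpy as np

def type3_distribution(N, rng):
    """Generate one Type III reference distribution: a Gaussian pair of the
    form (8) with random integer parameters a1, a2, b1, c1, b2, c2 in 1..N,
    sampled at the grid nodes and normalized before inclusion in the set."""
    a1, a2, b1, c1, b2, c2 = (rng.randint(1, N) for _ in range(6))
    y, x = np.mgrid[0:N, 0:N]
    z = (np.exp(-a1 * ((x - c1) ** 2 + (y - b1) ** 2))
         + np.exp(-a2 * ((x - c2) ** 2 + (y - b2) ** 2)))
    return z / np.linalg.norm(z)        # normalization step

rng = random.Random(0)                  # seeded for reproducibility
F = type3_distribution(30, rng)
print(F.shape)                          # (30, 30)
```

Normalizing each vector keeps the training pairs on a common scale, consistent with the requirement that all field values be non-negative and normalized.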
    </sec>
    <sec id="sec-5">
      <title>Numerical Modeling</title>
      <p>To analyze the neural network method of solving the problem using the RBFNN complex, an information-measuring network of size 30×30 was considered.</p>
      <p>In Table 1, the following characteristics are presented for each neural network of radial basis type belonging to the set SN:</p>
      <p>- the dimensions (ni, mi), corresponding to the geometry of the measuring network processed by the radial basis network NN(ni, mi);
- the total volume of the training set (TS), which includes types I-III;
- the average training time over a series of several (10 to 15) computational experiments;
- the value of the normalized mean square error (MSE) over the entire training set;
- the number of impacts recognized by the neural network NN(ni, mi);
- the predictive capability, i.e., recognition by the network of types of impacts that do not belong to the training set;
- the quality of training, an averaged characteristic associated, among other things, with the presence of artifacts caused by an insufficient volume of the training set.</p>
      <p>From the results in Table 1 it can be seen that as the size of the network grows, the quality of learning decreases. It also follows that processing an area with the Trimming Algorithm, with subsequent collective processing by neural networks, yields a large gain in accuracy. This is explained by the localization of the site of impact on the network and by processing with a neural network of, as a rule, lower dimension, which is trained more efficiently and quickly. At the same time, the standard deviation error for elements from the training set drops by a factor of 15 to 20.</p>
      <p>Figure 2 shows the results of processing an impact of the form z(x, y) = exp(−0.5((x − 6)² + (y − 19)²)).</p>
      <p>After localization of the impact, a similar region of size 3 × 3 was obtained, which is processed by the well-trained neural network NN(3, 3). Finally, the original dimensions of the area were restored (Fig. 3b). Fig. 3c shows the result of restoring the function under study using the neural network of maximum size NN(30, 30). The quality of training of NN(30, 30) is low, which explains the appearance of artifacts even when a single impact on the measuring network is restored.</p>
      <p>[Table 1: characteristics of the networks NN(ni, mi) in SN for the sizes 3×3, 5×5, 7×7, 10×10, 15×15, 20×20, and 30×30, including the training set volume.]</p>
      <p>
        Tables 2 and 3 present the results of the proposed method for single reference impacts of the form (7) and double impacts of the form (8) on the measuring network, respectively.
      </p>
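The overall pipeline of the combined algorithm can be sketched end-to-end; the trimming and network-selection steps follow the text above, the RBFNN reconstruction itself is omitted (the trained weights are not given in the paper), and the impact footprint is illustrative:

```python
import numpy as np

# Sizes of the pre-trained networks NN(ni, mi), as in Table 1 (assumption).
SN_SIZES = [(3, 3), (5, 5), (7, 7), (10, 10), (15, 15), (20, 20), (30, 30)]

def process(f):
    """Combined algorithm sketch: (1) trim zero edge rows/columns to
    localize the impact; (2) select the smallest suitable network NN(ni, mi).
    The RBFNN inverse mapping F = A^-1(G) is intentionally left out."""
    rows = np.flatnonzero(f.sum(axis=1))
    cols = np.flatnonzero(f.sum(axis=0))
    sub = f[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    n_p, m_p = sub.shape                   # trimmed size n' x m'
    net = next(((ni, mi) for ni, mi in SN_SIZES
                if n_p <= ni and m_p <= mi), SN_SIZES[-1])
    return (n_p, m_p), net

f = np.zeros((30, 30))
f[5:8, 18:21] = 1.0        # illustrative 3 x 3 impact footprint near (6, 19)
print(process(f))          # ((3, 3), (3, 3)): a 3 x 3 area handled by NN(3, 3)
```

A localized impact on a 30×30 network is thus routed to the small, well-trained NN(3, 3) instead of the poorly trained NN(30, 30), which is the source of the accuracy gain reported above.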
    </sec>
    <sec id="sec-6">
      <title>Conclusion</title>
      <p>The information-driven economy relies on actionable insights extracted from data analytics. The era of the data revolution has paved the way for the convergence of paradigms such as High Performance Computing and Big Data processing. The amalgamation of these paradigms is a herculean task involving various aspects such as data management and computing efficiency, and it has driven the evolution of data storage technologies and computing models.</p>
      <p>The article presents a new combined projection data processing algorithm for reconstructing information received from the fiber-optic measuring lines of a distributed FOMN.</p>
      <p>This algorithm consists in the sequential execution of two processes:
1. Pre-processing of the measurement information by localizing the sites of impact on the FOMN;
2. Application of a complex of neural networks for processing measuring networks of various geometries.
From the above results it follows that:
1. Processing with the area-trimming procedure, with subsequent collective processing by neural networks, gives a gain in accuracy, largely due to the localization of the site of impact on the network and to processing with a neural network of, as a rule, lower dimensionality, which is trained more thoroughly and quickly. At the same time, the MSE error for elements from the training set drops by a factor of 15 to 20.</p>
      <p>2. The reduction of the MSE error and the shortening of the processing time depend mainly on how radically the computational process has been optimized as a result of the preprocessing, and on the complexity of the function being restored.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Kulchin</surname>
            ,
            <given-names>Yu.N.</given-names>
          </string-name>
          :
          <article-title>Distributive Fiber Optical Measuring System</article-title>
          . Fizmatlit, Moscow, 272 p. (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Kulchin</surname>
            ,
            <given-names>Yu.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vitrik</surname>
            ,
            <given-names>O.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kirichenko</surname>
            ,
            <given-names>O.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petrov</surname>
          </string-name>
          , Yu.S.:
          <article-title>Multidimensional signal processing by using fiber optic distributed measuring network</article-title>
          .
          <source>Quantum Electronics</source>
          , Vol.
          <volume>20</volume>
          , No. 5,
          <fpage>711</fpage>
          -
          <lpage>714</lpage>
          (
          <year>1995</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Zakasovskaya</surname>
            <given-names>E.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasov</surname>
            <given-names>V.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glushchenko</surname>
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Information security issues in the distributed information measurement system</article-title>
          . In: ICIEAM, St. Petersburg, Russia, May 16-19 (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Natterer</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <source>Mathematics of Computerized Tomography</source>
          , John Wiley &amp; Sons Ltd., N. Y.,
          <volume>288</volume>
          p. (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Herman</surname>
          </string-name>
          , G. T.:
          <article-title>Projections-Based Image Reconstruction</article-title>
          . In: “Basics of Reconstructive Tomography”, Moscow, Mir,
          <volume>352</volume>
          p. (
          <year>1983</year>
          )
          <article-title>(in Russian)</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Mel'nikov</surname>
            ,
            <given-names>V.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meshkov</surname>
            ,
            <given-names>S.V.</given-names>
          </string-name>
          :
          <article-title>Theory of activated rate processes: Exact solution of the Kramers problem</article-title>
          ,
          <source>J. Chem. Phys</source>
          .
          <volume>85</volume>
          :
          <fpage>1018</fpage>
          -
          <lpage>1027</lpage>
          (
          <year>1986</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Zakasovskaya</surname>
            <given-names>E.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fadeev</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          :
          <article-title>Restoration of point influences by the fiber-optical network in view of a priori information</article-title>
          .
          <source>SPIE Proc. APCOM</source>
          , V.
          <volume>6675</volume>
          (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Zakasovskaya</surname>
            ,
            <given-names>E.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tarasov</surname>
            ,
            <given-names>V.S.:</given-names>
          </string-name>
          <article-title>Optical fiber imaging based tomography reconstruction from limited data</article-title>
          .
          <source>Computer Methods in Applied Mechanics and Engineering</source>
          , Vol.
          <volume>328</volume>
          , pp.
          <fpage>542</fpage>
          -
          <lpage>543</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Kulchin</surname>
            ,
            <given-names>Yu.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakasovskaya</surname>
            ,
            <given-names>E.V.</given-names>
          </string-name>
          :
          <article-title>Artifacts suppression in limited data problem for parallel fiber optical measuring systems</article-title>
          .
          <source>Optical Memory &amp; Neural Networks</source>
          , Vol.
          <volume>18</volume>
          , No. 3, pp.
          <fpage>171</fpage>
          -
          <lpage>180</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Kulchin</surname>
            ,
            <given-names>Yu.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zakasovskaya</surname>
            ,
            <given-names>E.V.</given-names>
          </string-name>
          :
          <source>Application of Radial Basis Function Neural Network for Information Processing in Fiber Optical Distributed Measuring Systems, Optical Memory &amp; Neural Networks (Information Optics)</source>
          , Vol.
          <volume>17</volume>
          , № 4, pp.
          <fpage>317</fpage>
          -
          <lpage>327</lpage>
          . (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>