<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Using Models of Parallel Specialized Processors to Solve the Problem of Signal Separation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>V A Zasov</string-name>
          <email>vzasov@mail.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Samara State Transport University</institution>
          ,
          <addr-line>Svobody street, 2B, Samara, Russia, 443066</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>290</fpage>
      <lpage>299</lpage>
      <abstract>
        <p>This paper considers models of highly efficient specialized processors used for parallel data processing as part of solving the problem of extracting individual signals from an additive mixture of several signals. The proposed models of recursive, nonrecursive, and regularization-based parallel specialized processors provide versatility in solving the problem of signal separation with various algorithms. An advantage of regularization-based processors is that they make the solution stable under conditions where the parameters of objects exhibit expected uncertainty when the inverse problem of signal separation is ill-posed. This paper presents the results we obtained from an asymptotic analysis of the computational complexity involved. The results identify the time it takes to solve problems by using specialized processors. The paper also identifies the conditions for the efficient use of specialized processors.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The problem of signal separation consists in determining source signals that are unavailable for direct
measurement from signals measured at accessible points, where each measured signal is an additive
mixture of source signals distorted during transmission.</p>
      <p>
        The computational complexity of the algorithms involved in solving that problem is high and, for many
applications, is of O(N³) order, where N is the number of signal sources [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This makes it difficult to
use these algorithms.
      </p>
      <p>
        Computation parallelization is the conventional approach to reducing the time it takes to solve the problem of
signal separation [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ]. Parallel algorithms for signal separation have been developed for multicore processors,
multiprocessor systems with shared and distributed memory, and multicomputer systems [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. That solution is
needed in many practical fields such as monitoring and diagnosis of technical facilities [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ],
communications, medical diagnosis, speech [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and image [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] processing.
      </p>
      <p>This is because in complicated facilities, measured signals present an additive mixture of signals
received from many components, and in most practical applications the extraction of parameters that
describe the state of specific components is impossible without signal separation.</p>
      <p>
        The next significant performance improvement is possible through the use of specialized processors
whose architecture and computational processes correspond most to the structure of the algorithm for
the class of problems in question [
        <xref ref-type="bibr" rid="ref1 ref8">1,8</xref>
        ].
      </p>
      <p>
        Signal separation methods can be classified into two groups—deterministic and statistical [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>The deterministic group is based on a priori information about signal transmission channels
(statistical, frequency, amplitude, and other channel characteristics); that is, transmission channels and
signals are known.</p>
      <p>
        The statistical group is based on a priori information about signal sources, such as the absence of source
correlation and knowledge of the signal distribution laws. In this case, explicit information about the
transmission channels is unavailable, and only the observed signals are known. For that reason, the methods
within this group are often called “blind” [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Thus, the solution to the problem of separating signal sources reduces to using a deterministic or
statistical method to calculate the separating matrix, which is equal or close, in terms of specific criteria, to the
inverse of the mixing matrix.</p>
      <p>The functionality of commercially available specialized digital signal processors and
field-programmable gate arrays is insufficient for solving the complex problem of signal separation.</p>
      <p>
        Reference [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] only proposes basic signal-separation functions and objectives for specialized
processors used for signal separation and restoration; and for the processor models described in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ],
the analysis of the computational complexity involved in parallel processing is inadequate to identify
the conditions for the efficient use of the processors.
      </p>
      <p>
        It is advisable that the structure of specialized processors should be regular and have a neural network
architecture [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Besides, those processors do not provide stable solutions under conditions where the
properties of objects exhibit expected uncertainty when the inverse problem of signal separation is
ill-posed.
      </p>
      <p>
        The purpose of this paper is to develop parallel specialized processors for signal separation under
conditions where the parameters of objects exhibit expected uncertainty, and to analyze asymptotically
the computational complexity of parallel processing to identify the conditions for the efficient use of the
processors.
      </p>
    </sec>
    <sec id="sec-1-2">
      <title>2. Research Area</title>
      <p>
        Since there are many algorithms for solving the problem of signal separation [
        <xref ref-type="bibr" rid="ref1 ref10 ref11 ref12 ref13 ref9">1,9-13</xref>
        ], parallel
specialized processors should provide versatility in this class of problems. We will assume that the
processor model consists of two units: the generic unit, which carries out the algorithm’s procedure
steps; and the specialized unit, which provides structural simulation for the algorithm.
      </p>
      <p>
        Let us assume that the model of signal formation is a linear multidimensional system with N inputs
and M outputs [
        <xref ref-type="bibr" rid="ref1 ref14">1,14</xref>
        ]. The model’s input signals are s_n(k), n = 1, 2, …, N; its output signals are x_m(k),
m = 1, 2, …, M. The input signals come from a variety of sources unavailable for direct measurement, and
the output signals come from various receivers such as detectors and antennas. We will assume that each
of the M outputs is linked with all N inputs through linear signal-transmission channels.
      </p>
      <p>
        The mathematical model of signal formation is described by discrete-convolution equations (1),
where the mth observable signal is the additive mixture of channel-distorted source signals and noise
[
        <xref ref-type="bibr" rid="ref1 ref14">1,14</xref>
        ] - that is,
      </p>
      <p>N G1
xm  k     hmn  g ,I  sn k  g   ym k  ,
n1 g0
(1)
where hmn  g ,I  is the element N  M of the mixing matrix h  g ,I  for the channels’ pulse responses;
y  k  is the noise vector; g  0,...,G -1 and k  0,...,K -1 are the samples of the pulse responses for
channels and signals, respectively.</p>
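      <p>As an illustrative sketch (not taken from the paper), the mixing model (1) can be simulated in a few lines of numpy; the sizes N, M, G, K and the random channel taps below are arbitrary toy values:</p>

```python
import numpy as np

# Hypothetical sizes: N sources, M receivers, G-tap channels, K signal samples.
rng = np.random.default_rng(0)
N, M, G, K = 2, 2, 4, 64

s = rng.standard_normal((N, K))          # source signals s_n(k)
h = rng.standard_normal((M, N, G))       # mixing matrix h_mn(g) of pulse responses
y = 0.01 * rng.standard_normal((M, K))   # additive noise y_m(k)

# Equation (1): each observed signal is a sum of channel-filtered sources plus noise.
x = np.empty((M, K))
for m in range(M):
    acc = np.zeros(K)
    for n in range(N):
        # discrete convolution, truncated to the first K samples
        acc += np.convolve(s[n], h[m, n])[:K]
    x[m] = acc + y[m]
```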
      <p>
        Let us assume that the channels’ pulse responses h_mn(g, l) are finite and that they depend on a certain
parameter vector, l (time, locations of sources and receivers in relation to one another, etc.) [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>Generally, the solution to the problem of separating source signals follows from (1), and it can be written as</p>
      <p>M G1
sn  k     wnm  g ,I  xm k  g  , (2)</p>
      <p>m1 g0
where wnm  g ,I  are the pulse responses of separating filters, and they form the separating matrix
w  g ,I  , which is equal or close to a matrix inverse to the matrix h  g ,I  .</p>
      <p>In separating signals by source, we will split the computation into two steps.</p>
      <p>In step 1, the elements w_nm(g, l) of the separating matrix w(g, l) are determined from the measured
dynamical properties of the channels or from signal parameters, and the signal-separation algorithm is
adjusted. The algorithm for computing w_nm(g, l) (the adjustment algorithm) takes into account the
nuances of the given signal-separation algorithm.</p>
      <p>In step 2, the signals are separated with the digital separating filters adjusted in step 1. This
computational process is the same for different signal-separation algorithms.</p>
      <p>With this in mind, we will look at two computational units in the model of a parallel specialized
processor: the adjusting processor (AP) (the versatile unit) and the functional processor (FP) (the
specialized unit).</p>
      <p>The nonrecursive, recursive and regularization-based processor models treated below differ in the
methods used to solve (1) but feature the same basic elements that the models are based on.</p>
    </sec>
    <sec id="sec-1-3">
      <title>3. Model for a Nonrecursive Parallel Specialized Processor</title>
      <p>The model for the nonrecursive parallel specialized processor implements the method used to solve the
system of equations (1) by inverting the mixing matrix h(g, l).</p>
      <p>
        The nonrecursive processor model [
        <xref ref-type="bibr" rid="ref10 ref11">10,11</xref>
        ] is best written in the form convenient for parallel stream
processing in the time domain:
      </p>
      <p>M G1
s1  k     w1m  g ,I xm k  g </p>
      <p>m1 g0
................................................... ,</p>
      <p>M G1
sN  k     wNm  g ,I xm k  g </p>
      <p>m1 g0
where s1  k  ,...,sN  k  are the calculated signals that are approximations (samples) of the true signals
s1  k  ,...,sN  k  in the points they are formed in; and wnm  g ,I  are the elements of separating matrix
w  g ,I  obtained from
and the mth column of the spectral matrix Wg ,I , which is inverse of the Hg ,I matrix—that is,
Wg ,I  H1 g ,I at M=N (or of the pseudo-inverse matrix Wg ,I  H g ,I at M≠N).</p>
      <p>The functional processor separates the measured signals x_m(k) according to the sources
s_n(k) they belong to. This processor implements a model that is inverse to the signal-formation model, and the
processor has a regular homogeneous structure composed of N × M adjustable filters (AF) and N
adders (A).</p>
      <p>The filters and the adders compute linear convolutions simultaneously and independently of one
another. The computational complexity L_FP^nonrec(K) of the functional processor’s operation algorithm,
which determines the processing time, is characterized by the height of its parallel structure, is of
L_FP^nonrec(K) = O(K) order, and does not depend on the number of signal sources.</p>
      <p>
        The adjusting processor calculates the coefficients w_nm(g, l) for the AFs from the measurements
of the channels’ transient responses (deterministic methods) or from the characteristics of the signal sources
(statistical separation methods) [
        <xref ref-type="bibr" rid="ref13 ref9">9,13</xref>
        ].
      </p>
      <p>For deterministic methods, the algorithm used to compute the coefficients w_nm(g, l) consists of the
following steps: applying a fast Fourier transform (FFT) to the channels’ transient responses to obtain the mixing
spectral matrix H(ω, l); inverting the spectral matrix H(ω, l) to obtain the separating spectral matrix
W(ω, l); and applying an inverse fast Fourier transform (IFFT) to the elements of the separating matrix
W(ω, l) to obtain the weight coefficients for the AFs, specified by the matrix w(g, l).</p>
      <p>The parallel form of the adjustment algorithm’s first and third steps has a width of N × M and is
implemented by the N × M units of the fast Fourier transform and the inverse fast Fourier transform.
The computational complexity L_AP^nonrec,1,3(G), which determines the time it takes to complete these steps
depending on the heights of their parallel structures, is of L_AP^nonrec,1,3(G) = O(G log₂ G) order.</p>
      <p>
        The parallel form of the algorithm used to compute the separating matrix has a width of G and is
implemented with G units for inverting matrices of order N (assuming that N = M). Each of these units, in
turn, implements the parallel form of the algorithm used to compute the inverse matrix (e.g., [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]) with
a width of O(N⁴).
      </p>
      <p>The height L_AP^nonrec,2(N) = O(log₂² N) of the algorithm’s parallel form determines the time it takes to
invert the spectral matrix H(ω, l).</p>
      <p>The computational complexity L_AP^nonrec,1,2,3(G, N) of the adjusting processor’s operation algorithm is
significantly higher than the computational complexity L_FP^nonrec = O(K) of the functional processor’s
operation algorithm—that is,</p>
      <p>O(G log₂ G) + O(log₂² N) = O(G log₂ G + log₂² N) ≫ O(K).</p>
      <p>For instance, at K = G the relation L_AP^nonrec,1,2,3(G, N) / L_FP^nonrec(K) ≈ log₂ G. Thus, separating signals with the
proposed nonrecursive processor is acceptable if, within the signal interval determined by K ≥ G log₂ G,
the parameters of the mixing matrix are assumed invariable—that is, if the signal-formation model is
quasistationary.</p>
      <p>Furthermore, given the polynomial relationship between the width O(N⁴) of the parallel form of the
matrix-inverting algorithm and the number of signal sources, we can conclude that the model of the
nonrecursive processor we discussed can be used in practice to separate the signals s_n(k) when the
number of signal sources is low.</p>
    </sec>
    <sec id="sec-2">
      <title>4. Model for a Recursive Parallel Specialized Processor</title>
      <p>Figure 1 shows a recursive parallel specialized processor model that implements the iteration method
for solving the system of equations (1).</p>
      <p>
        The model’s functional processor (FP) has a regular homogeneous structure composed of M identical
processing units (PU) [
        <xref ref-type="bibr" rid="ref10 ref11">10,11</xref>
        ]. All PUs operate in parallel in time, and each implements the recursive
algorithm for extracting one signal s_n(k) from an additive mixture of several signals.
      </p>
      <p>The computational complexity L_FP^rec(K) of the FP operation algorithm, which determines its
operating time, is characterized by the height of its parallel form, is of L_FP^rec(K) = O(K · ν) order (where
ν is the number of iterations), and does not depend on the number of signal sources.</p>
      <p>The adjusting processor (AP) consists of a clock unit (CU) and M groups of devices, each comprising an
adjusting unit (AU) and a memory unit (MU). The pulse responses of the filters AF_mn, n = 1, …, N, and
m = 1, …, M (excluding m = n), need not be calculated with the AP, since the frequency
characteristics of the AF_mn filters are equal to the frequency characteristics of the channels with related indexes
in the mixing matrix H(ω, l) of the signal-formation model. These characteristics need only be stored
in MU_m, m = 1, …, M.</p>
      <p>The transient responses h⁻¹_11(g), h⁻¹_22(g), …, g = 0, …, G − 1, of the adjustable inverse filters
AIF_mn (only for m = n) are computed in AU_m, m = 1, …, M.</p>
      <p>The algorithm for computing h⁻¹_mn(g, l) (m = n) consists of the following steps: applying a fast Fourier
transform to the transient responses of the channels h_mn(g, l) (m = n) to obtain the characteristics
H_mn(ω, l); computing the inverse characteristics H⁻¹_mn(ω, l) = 1 / H_mn(ω, l); and applying an inverse fast
Fourier transform to the characteristics H⁻¹_mn(ω, l) to obtain the weight coefficients h⁻¹_mn(g, l) for the AIFs.</p>
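      <p>For a single diagonal channel, these three steps can be sketched as follows; the toy minimum-phase channel response and the FFT length F = 64 are assumptions made for illustration (a channel with a dominant first tap keeps the inverse filter short and stable):</p>

```python
import numpy as np

# Sketch of one AIF adjustment: IFFT(1 / FFT(h_mm)) gives the inverse-filter
# weights. The toy channel below is minimum-phase with a dominant first tap,
# so F = 64 coefficients approximate the (generally infinite) inverse well.
G, F = 4, 64
h_mm = np.array([1.0, 0.4, 0.2, 0.1])     # channel pulse response h_mm(g)
H_mm = np.fft.fft(h_mm, F)                # frequency characteristic H_mm(ω)
h_inv = np.real(np.fft.ifft(1.0 / H_mm))  # AIF weight coefficients

# Cascading the channel and its inverse filter reproduces a unit pulse.
pulse = np.convolve(h_mm, h_inv)[:F]
```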
      <p>The parallel form of the adjustment algorithm’s first and third steps has a width of N and is
implemented by N FFT and IFFT units.</p>
      <p>The computational complexity L_AP^rec,1,3(G), which determines the time it takes to complete these steps
depending on the height of their parallel form, is of L_AP^rec,1,3(G) = O(G log₂ G) order.</p>
      <p>The parallel form of the algorithm used to compute the channels’ inverse characteristics H⁻¹_mn(ω, l)
has a width of O(N · G) and is implemented with N · G division units, while the height
L_AP^rec,2(N) = O(1) of the algorithm’s parallel form is a constant.</p>
      <p>The CU synchronizes the transmission of parameters from the AP to the FP, sets the initial conditions, and
controls the output registers while completing processing iterations. This unit’s operation algorithm has
a constant complexity of L_CU^rec(N) = O(1).</p>
      <p>The assessment of the computational complexity of the AP and FP algorithms (e.g., at K = G),
L_AP^rec,1,2,3(G) / L_FP^rec(K) = [O(G log₂ G) + O(1) + O(1)] / O(K · ν) ≈ log₂ G,
for the recursive processor leads to the conclusion that signal separation with the proposed recursive
processor is acceptable for the quasistationary model of signal formation.</p>
      <p>But the width of the AP algorithm’s parallel form for the recursive processor is significantly lower
than that of the nonrecursive one: O(N · G) ≪ O(N⁴ · G).</p>
      <p>This advantage of the recursive processor makes it possible to apply the solution to the problem of
separating signals sn  k  for many more sources under conditions where computational resources are
limited.</p>
      <p>
        For a recursive processor to separate signals steadily, the object must allow the receivers of signals
to be installed such that in the linear superposition of signals at the outputs of each of the receivers, the
signal from a specific source is predominant [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
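      <p>Assuming memoryless (single-tap) channels for simplicity, the recursive extraction and this predominance (diagonal-dominance) condition can be sketched as a Jacobi-style iteration; the matrix values and iteration count below are illustrative, not from the paper:</p>

```python
import numpy as np

# Memoryless-channel sketch of the recursive iteration: each PU refines its
# estimate from the other PUs' previous estimates,
#   s^(i+1) = D⁻¹ (x − (A − D) s^(i)),  D = diag(A).
# It converges when each receiver is dominated by "its" source
# (diagonal dominance), mirroring the installation condition cited above.
A = np.array([[1.0, 0.3],
              [0.2, 1.0]])       # toy mixing matrix, diagonally dominant
s_true = np.array([2.0, -1.0])   # sources
x = A @ s_true                   # observed mixture

D = np.diag(np.diag(A))
s = np.zeros(2)                  # initial conditions set by the CU
for _ in range(25):              # ν iterations
    s = np.linalg.solve(D, x - (A - D) @ s)
```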
    </sec>
    <sec id="sec-2a">
      <title>5. Model for a Regularization-Based Parallel Specialized Processor</title>
      <p>If the parameters of the mixing matrix H(ω, l) or of the source signals s(k) make the problem of signal
separation ill-posed, or if those parameters show expected uncertainty, then one should at once find a
regularized, stable solution to (1) or its equivalent in the frequency domain.</p>
      <p>
        The proposed model for a regularization-based specialized processor is based on the Tikhonov
regularization [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ].
      </p>
      <p>
        Two conditions of the Tikhonov regularization are set in the processor model: the minimization of the disparity,
‖Hs − x‖ → min over s, as in the least-squares technique (LST); and the minimization of the
solution norm, ‖s‖ → min over s, as in the Moore–Penrose pseudo-inverse of a matrix [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
The solution s_α contained in the processor provides the absolute minimum of the smoothing functional
      </p>
      <p>
        F_α(s) = ‖H̃s − x̃‖² + α Ω(s),
where α > 0 is the regularization parameter; Ω(s) is the stabilizing functional; and H̃ and x̃ are
approximate values of H and x for which
‖H − H̃‖ ≤ δ_H and ‖x − x̃‖ ≤ δ,
where δ and δ_H are the upper estimates of absolute measurement errors for the signals x and the
mixing-matrix elements H.
      </p>
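      <p>A minimal numeric sketch of the minimizer of F_α(s) with the stabilizer Ω(s) = ‖s‖²; the 3 × 3 operator, the error levels, and α below are toy assumptions, not values from the paper:</p>

```python
import numpy as np

# Toy, well-conditioned operator, perturbed by errors bounded by δ_H and δ.
H_true = np.array([[2.0, 0.5, 0.0],
                   [0.1, 1.5, 0.3],
                   [0.0, 0.2, 1.8]])
s_true = np.array([1.0, -2.0, 0.5])
x_true = H_true @ s_true

rng = np.random.default_rng(2)
H_t = H_true + 1e-3 * rng.standard_normal((3, 3))   # H̃, with ‖H − H̃‖ ≤ δ_H
x_t = x_true + 1e-3 * rng.standard_normal(3)        # x̃, with ‖x − x̃‖ ≤ δ

def tikhonov(H, x, alpha):
    """Minimizer of ‖H s − x‖² + α ‖s‖² (stabilizer Ω(s) = ‖s‖²)."""
    n = H.shape[1]
    return np.linalg.solve(H.T @ H + alpha * np.eye(n), H.T @ x)

s_alpha = tikhonov(H_t, x_t, alpha=1e-4)
```

<p>Increasing α trades disparity for a smaller solution norm, which is exactly the stabilizing effect the processor relies on.</p>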
      <p>The proposed model uses Ω(s) = ‖s‖² as a stabilizing functional. For control purposes, it is more
natural and convenient to present signals in time form, so we will write the smoothing functional for M
= N and K = G as</p>
      <p>
        F_α(s) = Σ_{m=1}^{M} Σ_{k=0}^{K−1} [x̃_m(k, l) − x_m(k, l)]² + α Σ_{n=1}^{N} Σ_{k=0}^{K−1} s_{n,α}(k)².   (3)
      </p>
      <p>The signal x_m(k, l) in (3) derives from the separation results re-distorted by the signal-formation
model; that is,</p>
      <p>
        x_m(k, l) = Σ_{n=1}^{N} s_{n,α}(k) * h_mn(k, l),
where s_{n,α}(k, l) are the regularized results of signal separation for the object’s nth node and * denotes
convolution.</p>
      <p>Under the conditions described above, the smoothing functional can be written as</p>
      <p>M K 1 2 N M G1
F w     xm  k, I   xm k, I     wnm,  g ,I  ,2</p>
      <p>m1 k 0 nm1 g0
an expression that is more suitable for the regularized computation of the elements wnm,  g, I  for the
separating matrix w  g,I  , which sets the weights of the functional processor’s AFs.</p>
      <p>The elements wnm,  g ,I  for the selected regularization parameter  are determined from the
minimum condition of the smoothing functional F w , keeping in mind that this functional’s
quadratic form is positively definite.</p>
      <p>The elements wnm,  g ,I  are calculable, for instance, by solving the system of M  N  G equations
written as</p>
      <p>F (w)
wnm  g, l</p>
      <p>
         0 relative to wnm,  g, I  , using the parallel algorithms for solving linear
algebraic equations [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ].
      </p>
      <p>For the known (specified) errors δ_H and δ, we propose calculating the regularization parameter α as
the root of the equation</p>
      <p>Hs  x
2</p>
      <p>   H s 2 / r,M ,N  ,
in which  is the parameter s , where   r,M ,N   1 is a scalable multiplier determined by the
problem’s dimension ( M  N ) and by the measurement error of signal and channel parameters (the error
depends, in particular, on the resolution r of analog-to-digital conversion).</p>
      <p>With the regularization parameter α so obtained, the smoothness and disparity of the solution for
s are acceptable for practical purposes.</p>
      <p>Figure 2 shows the model of a regularization-based specialized processor.</p>
      <p>The FP is a model that is inverse of the signal-formation model and that separates measured signals.
This processor has a homogeneous structure and consists of AFs and AUs.</p>
      <p>The computation of the regularization parameter α and the adjustment of the AFs, whose number is equal
to M², are run by the AP. The AP’s processing unit (PU) computes the AF parameters with the least-squares
technique through minimizing the smoothing functional F_α(w).</p>
      <p>The elements of the mixing matrix h(g, l), which set the samples for the pulse responses of the
signal-formation model’s channels, enter the inputs of the AP, and the AP generates a direct
signal-formation model.</p>
      <p>Disparity evaluation units (DEUs) compute the disparity ε_m² for each of the processor’s M channels.
The units receive signals delayed by delay units (DUs), which serve to delay signals from the processor
inputs and signals from the outputs of the direct signal-formation model.</p>
      <p>
        It is advisable to use deterministic parallel optimization algorithms [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] designed for multicore
processors to minimize the smoothing functional F_α(w).
      </p>
      <p>All the AFs operate in parallel independently of one another. This makes the proposed processor fast
and reliable.</p>
      <p>
        The processor model shown in figure 2 is generalized to obtain stable solutions for system (1). For
that reason, the model is highly complicated. For practical applications, it can be significantly simplified
by using prior information about the signal-formation model (such as the presence of reference inputs
[
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]) or by using simpler and parallel regularization algorithms [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>[Figure 2 depicts the adjusting processor (the AP’s processing unit with delay units DU1–DUM and
disparity evaluation units DEU1–DEUM), the functional processor (the inverse object model built from
adjustable filters AF11–AFNM and adders A1–AN), and the direct object model with channels h11–hMN.]</p>
    </sec>
    <sec id="sec-3">
      <title>6. Modeling Results</title>
      <p>Figure 3 shows the modeling results for test signals separated by the nonrecursive parallel specialized
processor with a multicore, GPU-based architecture.</p>
      <p>The signal-formation model had three signal sources: the first two were triangular pulses with
different frequencies and shapes, while the third was a speech signal. The signal receivers used 8-bit
ADCs with a sample rate of 12 kHz.</p>
      <p>Figure 3 shows the initial signals (top), the additive mixtures of signals in each of the receivers
(middle), and the extraction results for each signal (bottom).</p>
      <p>The error of signal separation does not exceed 10%, a value acceptable for many engineering
applications.</p>
      <p>
        The results of the computational experiments shown in figure 4 for the example above demonstrate
notably shorter task times (the Independent Component Analysis (ICA) [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] algorithm was used).
      </p>
    </sec>
    <sec id="sec-4">
      <title>7. Basic Conclusions</title>
      <p>This paper proposed using models of highly efficient nonrecursive, recursive and regularization-based
parallel specialized processors to solve the problem of signal separation.</p>
      <p>Once adjusted, the processors can solve the problem within a period that does not depend on the
number of signal sources—that is, the processors have a task time of T_FP(N) = O(1) order.</p>
      <p>These models are applicable where the parameters of the signal-formation model are variable
(quasistationary).</p>
      <p>Regularization-based processors make the solution stable under conditions where the parameters of
objects exhibit expected uncertainty when the inverse problem of signal separation becomes ill-posed.</p>
      <p>The regular homogeneous structure of the functional processor can be conveniently implemented as
an integrated circuit or a multicore-architecture computational system.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Zasov</surname>
            <given-names>V A</given-names>
          </string-name>
          <year>2013</year>
          <article-title>Algorithms and Computational Devices for Separating and Restoring Signals in Multivariable Dynamic Systems</article-title>
          (Samara: Samara State Transport University Press) p
          <fpage>233</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Gergel</surname>
            <given-names>V P</given-names>
          </string-name>
          <year>2007</year>
          <article-title>Theories and Applications of Parallel Computations (Moscow:</article-title>
          IT Internet University: BINOM Knowledge Laboratory) p
          <fpage>423</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><surname>Demiyanovich</surname> <given-names>Y K</given-names></string-name>,
          <string-name><surname>Burova</surname> <given-names>I G</given-names></string-name>,
          <string-name><surname>Yevdokimova</surname> <given-names>T O</given-names></string-name>,
          <string-name><surname>Ivantsova</surname> <given-names>O N</given-names></string-name> and
          <string-name><surname>Miroshnichenko</surname> <given-names>I D</given-names></string-name>
          <year>2012</year>
          <article-title>Parallel Algorithms: Development and Implementation</article-title>
          (Moscow: IT Internet University: BINOM Knowledge Laboratory) p
          <fpage>344</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><surname>Patterson</surname> <given-names>D A</given-names></string-name> and
          <string-name><surname>Hennessy</surname> <given-names>J L</given-names></string-name>
          <year>2012</year>
          <article-title>Computer Organization and Design</article-title>
          (Saint Petersburg: Peter) p
          <fpage>784</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><surname>Vasin</surname> <given-names>N N</given-names></string-name> and
          <string-name><surname>Diyazitdinov</surname> <given-names>R R</given-names></string-name>
          <year>2016</year>
          <article-title>A machine vision system for inspection of railway track</article-title>
          <source>Computer Optics</source>
          <volume>40</volume>(<issue>3</issue>)
          <fpage>410</fpage>-<lpage>415</lpage>
          DOI: 10.18287/2412-6179-2016-40-3-410-415
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><surname>Ifeachor</surname> <given-names>E C</given-names></string-name> and
          <string-name><surname>Jervis</surname> <given-names>B W</given-names></string-name>
          <year>2004</year>
          <article-title>Digital Signal Processing: A Practical Approach</article-title>
          (Moscow: Williams Publishing House) p
          <fpage>992</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Denisova</surname>
            <given-names>A Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Juravel Y N and Myasnikov</surname>
            <given-names>V V</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Estimation of parameters of a linear spectral mixture for hyperspectral images with atmospheric</article-title>
          distortions
          <source>Computer Optics</source>
          <volume>40</volume>
          (
          <issue>3</issue>
          )
          <fpage>380</fpage>
          -
          <lpage>387</lpage>
          DOI: 10.18287/2412-6179-2016-40-3-380-387
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Mitropolskiy</surname>
            <given-names>Y I</given-names>
          </string-name>
          <year>1985</year>
          <article-title>Problems of Versatile to Custom Tools Ratio in Computational Systems</article-title>
          <source>Kibernetika i vychislitelnaya tekhnika</source>
          <volume>1</volume>
          <fpage>35</fpage>
          -
          <lpage>48</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Cichocki</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Amari</surname>
            <given-names>Sh</given-names>
          </string-name>
          <year>2002</year>
          <source>Adaptive Blind Signal and Image Processing: Learning Algorithms and Applications</source>
          (New York: John Wiley &amp; Sons, Ltd) p
          <fpage>555</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Zasov</surname>
            <given-names>V A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Romkin</surname>
            <given-names>M V</given-names>
          </string-name>
          <year>2012</year>
          <article-title>Parallel Computations for the Signal Separation Problem in Multidimensional Dynamic Systems</article-title>
          <source>Parallel Computations and Management Objectives: Proc. of the 6th Int. Conf.</source>
          (Moscow: Russian Academy of Sciences, Trapeznikov Institute of Control Science Press)
          <fpage>96</fpage>
          -
          <lpage>102</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Zasov</surname>
            <given-names>V A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Romkin</surname>
            <given-names>M V</given-names>
          </string-name>
          <year>2013</year>
          <article-title>Parallel Computational Models for Solving the Problem of Signal Separation</article-title>
          <source>Vestnik transporta Povolzhiya</source>
          <volume>6</volume>
          (
          <issue>42</issue>
          )
          <fpage>77</fpage>
          -
          <lpage>86</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Haykin</surname>
            <given-names>S</given-names>
          </string-name>
          <year>2006</year>
          <source>Neural Networks: A Comprehensive Foundation</source>
          (Moscow: Williams Publishing House) p
          <fpage>1104</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Kravchenko</surname>
            <given-names>V F</given-names>
          </string-name>
          <year>2007</year>
          <source>Digital Signal and Image Processing in Radiophysical Applications</source>
          (Moscow: Nauka, Fizmatlit) p
          <fpage>544</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Zasov</surname>
            <given-names>V A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Nikonorov</surname>
            <given-names>Ye N</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Modeling and Investigating the Stability of a Solution to the Inverse Problem of Signal Separation</article-title>
          <source>CEUR Workshop Proceedings</source>
          <volume>1904</volume>
          <fpage>78</fpage>
          -
          <lpage>84</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Tikhonov</surname>
            <given-names>A N</given-names>
          </string-name>
          and
          <string-name>
            <surname>Arsenin</surname>
            <given-names>V Y</given-names>
          </string-name>
          <year>1986</year>
          <source>Methods for Solving Ill-posed Problems: a Textbook for Universities</source>
          (Moscow: Nauka, Fizmatlit) p
          <fpage>288</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Tyrtyshnikov</surname>
            <given-names>E E</given-names>
          </string-name>
          <year>2007</year>
          <source>Matrix Analysis and Linear Algebra</source>
          (Moscow: Nauka, Fizmatlit) p
          <fpage>480</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Strongin</surname>
            <given-names>R G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gergel</surname>
            <given-names>V P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Grishagin</surname>
            <given-names>V A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Barkalov</surname>
            <given-names>K A</given-names>
          </string-name>
          <year>2013</year>
          <source>Parallel Computation in Global Optimization Problems</source>
          (Moscow: Moscow State University Press) p
          <fpage>285</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Dzhigan</surname>
            <given-names>V I</given-names>
          </string-name>
          <year>2013</year>
          <source>Adaptive Signal Filtering: Theory and Algorithms</source>
          (Moscow: Tekhnosfera) p
          <fpage>528</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Zhdanov</surname>
            <given-names>A I</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sidorov</surname>
            <given-names>Y V</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Parallel implementation of a randomized regularized Kaczmarz's algorithm</article-title>
          <source>Computer Optics</source>
          <volume>39</volume>
          (
          <issue>4</issue>
          )
          <fpage>536</fpage>
          -
          <lpage>541</lpage>
          DOI: 10.18287/0134-2452-2015-39-4-536-541
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>