<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Parallel Newton Methods for Numerical Analysis of Infeasible Linear Programming Problems with Block-Angular Structure of Constraints</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Leonid D. Popov</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Krasovskii Institute of Mathematics and Mechanics UB RAS</institution>
          ,
          <addr-line>Ekaterinburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ural Federal University</institution>
          ,
          <addr-line>Ekaterinburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>253</fpage>
      <lpage>260</lpage>
      <abstract>
<p>For linear programming (LP) problems, possibly infeasible, with block-angular constraint matrices, two parallel second-order optimization methods are proposed. Applied to the initial LP problem, they give its solution if this problem is solvable, and otherwise automatically deliver a solution of a certain relaxed LP problem. The methods use penalty and barrier functions as well as Tikhonov regularization of the Lagrangian of the initial problem. The methods contain a small parameter which tends to zero and minimize the residual of the constraints. Parallel calculations of matrix operations follow the MPI-type paradigm.</p>
      </abstract>
      <kwd-group>
        <kwd>linear programming</kwd>
        <kwd>infeasibility</kwd>
        <kwd>ill-posed problems</kwd>
        <kwd>generalized solution</kwd>
        <kwd>regularization</kwd>
        <kwd>penalty function</kwd>
        <kwd>parallel calculations</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>Infeasible linear programs are a widely studied class of improper problems in mathematical programming theory (see [1,2,3,4,5,6] and others). They arise naturally in many applications of linear programming (LP), such as network flow analysis, portfolio selection and scheduling problems which reflect complex production processes, e.g. in manufacturing and power systems. Their inherent infeasibility may be caused by multiple factors, such as errors in the input data, imbalance between resource constraints and objectives, modelling bias, and structural distortions.</p>
<p>The basic approach to this issue is to introduce a more general notion of solution for an unsolvable problem, which is in fact an optimum of some repaired or relaxed LP problem [1,2,7,8,9]. Following this approach, two numerical methods were proposed in [10,11] which automatically adjust the right-hand-side vector of the initial LP problem if it is infeasible and then find its generalized solution using penalty and barrier functions as well as Tikhonov regularization of the standard Lagrangian [12,13,14]. The methods contain a small parameter which tends to zero and minimize the residual of the constraints.</p>
<p>In this paper we discuss how these methods may be fitted to parallel calculations in the MPI-type paradigm under the assumption that the constraints matrix of the initial LP problem has a block-angular structure.</p>
<p>Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: A. Kononov et al. (eds.): DOOR 2016, Vladivostok, Russia, published at http://ceur-ws.org</p>
      <sec id="sec-1-1">
<title>The Problem Statement and Preliminaries</title>
        <p>Consider the linear program
min {(c, x) : Ax = b, x ≥ 0}   (1)
and its dual
max {(b, y) : A^T y ≤ c},   (2)
where A ∈ R^{m0×n0}, c ∈ R^{n0} and b ∈ R^{m0} are given, x ∈ R^{n0} and y ∈ R^{m0} are the vectors of primal and dual variables respectively, rank A = m0 ≤ n0, and (·, ·) denotes the scalar product.</p>
        <p>
          Assume that the dual problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) is feasible, but it is not known a priori whether the primal problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) is feasible or not. If not, it may be transformed (or repaired) into a solvable one merely by adjusting its right-hand-side vector. Note that such problems are known as improper problems of the first kind [1,2].
        </p>
        <p>
          If problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) is infeasible, we define its generalized solution as a solution x̂ of the relaxed problem
min {(c, x) : Ax = b + ∆b̂, x ≥ 0} (=: β̂),   (3)
where ∆b̂ = argmin {∥∆b∥ : Ax = b + ∆b, x ≥ 0} and ∥·∥ is the Euclidean norm. Obviously, the set of feasible solutions for (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) coincides with
M = Arg min {∥Ax − b∥ : x ≥ 0}.
If (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) is solvable, then ∆b̂ = 0, and the generalized solution introduced above coincides with the usual one.
        </p>
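        <p>For intuition, the correction ∆b̂ can be approximated numerically on small instances. The following numpy sketch (our illustration, not one of the methods proposed below) finds a nonnegative least-squares point by projected gradient and reports the corresponding correction of the right-hand side; the function name, step rule and iteration count are arbitrary choices:</p>

```python
import numpy as np

def residual_correction(A, b, iters=5000):
    # Approximately find x >= 0 minimizing ||Ax - b||, then report
    # Delta_b = A x - b, a correction of the right-hand side that
    # makes {Ax = b + Delta_b, x >= 0} feasible.
    L = np.linalg.norm(A, 2) ** 2   # Lipschitz constant of the gradient of 0.5*||Ax-b||^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)
        x = np.maximum(0.0, x - grad / L)   # gradient step, then projection onto x >= 0
    return x, A @ x - b
```

        <p>For example, the system x = 0, x = 2 in a single variable x ≥ 0 is infeasible; the sketch returns x ≈ 1 and the minimum-norm correction ∆b̂ ≈ (1, −1)^T.</p>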
        <p>
          As mentioned above, in [10,11] two numerical methods were proposed which being
applied to LP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) give its solution if such solution exists, or automatically
deliver its generalized solution x^, if it is not so. Below we discuss how they may be t
to parallel calculations in MPI-type paradigm under the assumption that constraints
matrix in (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) has a block-angular structure.
        </p>
        <p>Two types of block-angular constraint structure are investigated in this paper:
a) block-angular matrices with linking columns and
b) block-angular matrices with linking rows.</p>
        <p>Both variants exploit parallel algebraic operations with matrices.</p>
      </sec>
      <sec id="sec-1-1b">
        <title>Block-Angular Matrices with Linking Columns</title>
        <p>
          Consider the case when the constraints matrix in infeasible problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) has the following structure with linking columns:
A = ( Q1              P1 )
    (     Q2          P2 )
    (         ...     ...)
    (             Qn  Pn ),   (4)
where the last column of blocks P1, …, Pn is the linking one. To take advantage of this structure for parallelization of the calculation process, make use of the mixed barrier-penalty-type function for problem (1):
Φ(ϵ; x) = (c, x) + (2ϵ)^{-1} ∥Ax − b∥^2 − ϵ ∑_{i=1}^{n0} ln x_i,   ϵ &gt; 0,   (5)
and consider the unconstrained smooth optimization problem: find x̂ϵ &gt; 0 such that
Φ(ϵ; x̂ϵ) = min {Φ(ϵ; x) : x &gt; 0}.   (6)
        </p>
        <p>
          As shown in [10], if the optimal set of (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) is bounded, then a unique minimizer x̂ϵ exists for every ϵ &gt; 0, even in the improper case. This minimizer satisfies the nonlinear vector equation
∇_x Φ(ϵ; x) = c + ϵ^{-1} A^T (Ax − b) − ϵ X^{-1} e = 0,   (7)
where X = diag(x) is the diagonal matrix with the coordinates x_i on its diagonal and e = [1, …, 1]^T.
        </p>
        <p>
          Theorem 1 ([10]). Suppose that the feasible set of the dual problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) has an interior point. Then
(c, x̂ϵ) → β̂,   A x̂ϵ − b = ∆b̂ϵ → ∆b̂
as ϵ → +0.
        </p>
        <p>
          Corollary 1. Let the optimal vector x̂ of the relaxed problem (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) be unique. Then x̂ϵ → x̂ as ϵ → +0.
        </p>
        <p>
          To solve equation (
          <xref ref-type="bibr" rid="ref7">7</xref>
          ), one can apply the Newton method as follows:
x(t + 1) = x(t) − γ_t w(t),   w(t) = H^{-1}(ϵ; x(t)) ∇_x Φ(ϵ; x(t)),   t = 0, 1, … .
Here x(0) is an appropriate starting point, γ_t is a step parameter (usually γ_t ≤ 1), H(ϵ; ·) is the Hessian of the function Φ(ϵ; ·), and ∇_x Φ(ϵ; ·) is its gradient.
        </p>
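        <p>A minimal dense-algebra sketch of these iterations (our illustration; the step rule below is a simple fraction-to-the-boundary heuristic of our choosing, and the block-parallel version is the subject of the following sections):</p>

```python
import numpy as np

def newton_barrier_penalty(A, b, c, eps=1e-3, iters=50):
    # Newton minimization of F(eps; x) = (c,x) + ||Ax-b||^2/(2*eps) - eps*sum(ln x_i)
    # over x > 0; returns the approximate minimizer x_eps.
    x = np.ones(A.shape[1])                     # strictly positive start point x(0)
    for _ in range(iters):
        r = A @ x - b
        grad = c + (A.T @ r) / eps - eps / x    # gradient, cf. equation (7)
        H = (A.T @ A) / eps + np.diag(eps / x**2)
        w = np.linalg.solve(H, grad)
        step = 1.0                              # damping that keeps x strictly positive
        shrink = w > 0
        if np.any(shrink):
            step = min(1.0, 0.99 * float(np.min(x[shrink] / w[shrink])))
        x = x - step * w
    return x
```

        <p>On the solvable program min {x1 + 2 x2 : x1 + x2 = 1, x ≥ 0}, for small ϵ the iterates settle near the optimal vertex (1, 0).</p>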
        <p>
          Without going deep into all issues of implementation of this algorithm (see e.g. [15] for details), we only consider how to calculate the vector w(t) in parallel. In our case
H(ϵ; x) = ϵ^{-1} A^T A + ϵ X^{-2}.
Hence, to find w(t) one has to solve the sparse linear system H(ϵ; x(t)) w = ∇_x Φ(ϵ; x(t)), whose matrix has the block-arrow structure
H(ϵ; x) = ( H1              M1^T )
          (     H2          M2^T )
          (         ...     ...  )
          (             Hn  Mn^T )
          ( M1  M2  ... Mn  Mn+1 ),   (
          <xref ref-type="bibr" rid="ref8">8</xref>
          )
where
H_i = ϵ^{-1} Q_i^T Q_i + ϵ X_i^{-2},   M_i = ϵ^{-1} P_i^T Q_i   (i = 1, …, n),
Mn+1 = ϵ^{-1} ∑_{i=1}^{n} P_i^T P_i + ϵ X_{n+1}^{-2}.
        </p>
        <p>
          The diagonal matrices X_i = diag(x_i) above correspond to the partition of the primal vector x into sub-vectors x_i according to the matrix partition (
          <xref ref-type="bibr" rid="ref4">4</xref>
          ).
        </p>
        <p>
          The sparse structure of the Hessian gives us the opportunity to solve linear system (
          <xref ref-type="bibr" rid="ref8">8</xref>
          ) in parallel mode (see Section 4 for details).
        </p>
      </sec>
      <sec id="sec-1-2">
        <title>Block-Angular Matrices with Linking Rows</title>
        <p>
          Now consider the case when the constraints matrix in (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) has a block-angular structure with linking rows:
A = ( Q1              )
    (     Q2          )
    (         ...     )
    (             Qn  )
    ( P1  P2  ... Pn  ),   (9)
where the last row of blocks P1, …, Pn is the linking one. To exploit this structure for parallelization of the calculation process, make use of the regularized barrier-type function for problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ):
Ψ(ϵ; y) = (b, y) − (ϵ/2) ∥y∥^2 + ϵ ∑_{i=1}^{n0} ln (c_i − (A_i, y)),   ϵ &gt; 0,   (
          <xref ref-type="bibr" rid="ref10">10</xref>
          )
where A_i denotes the i-th column of the matrix A.
        </p>
        <p>
          Function (
          <xref ref-type="bibr" rid="ref10">10</xref>
          ) is finite and strongly concave on Ω = {y : A^T y &lt; c}. Therefore, for every ϵ &gt; 0 there exists a unique vector ŷϵ such that
Ψ(ϵ; ŷϵ) = max {Ψ(ϵ; y) : y ∈ Ω}.   (11)
Note that ŷϵ satisfies the nonlinear vector equation
∇_y Ψ(ϵ; ŷϵ) = b − ϵ ŷϵ − ϵ A D(ŷϵ)^{-1} e = 0,   (12)
where the diagonal matrix D(y) = diag(c − A^T y) has the elements d_i = c_i − (A_i, y) on its diagonal and e = [1, …, 1]^T.
        </p>
        <p>
          Theorem 2 ([11]). Let ŷϵ be the maximizer of Ψ(ϵ; ·) over y. Then the vector x̂ϵ with the coordinates
x̂ϵ_i = ϵ (c_i − (A_i, ŷϵ))^{-1},   i = 1, …, n0,
is the minimizer of the barrier-penalty function Φ(ϵ; ·) over x &gt; 0.
        </p>
        <p>
          Corollary 2. Suppose that the feasible set of the dual problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) has an interior point. Then −ϵ ŷϵ → ∆b̂ as ϵ → +0.
        </p>
        <p>
          To solve equation (
          <xref ref-type="bibr" rid="ref12">12</xref>
          ) one can apply the Newton method again. Now its iterations are as follows:
y(t + 1) = y(t) + γ_t w(t),   w(t) = H^{-1}(ϵ; y(t)) ∇_y Ψ(ϵ; y(t)),   t = 0, 1, … .
Here y(0) is an appropriate starting point, γ_t is a step parameter (usually γ_t ≤ 1), H(ϵ; ·) is the Hessian of the function Ψ(ϵ; ·) taken with the opposite sign, and ∇_y Ψ(ϵ; ·) is its gradient.
        </p>
        <p>In the case under consideration
H(ϵ; y) = ϵ I + ϵ A D(y)^{-2} A^T,
where I is the identity matrix of the appropriate order. Therefore, to find w(t) one has to solve the sparse linear system
H(ϵ; y(t)) w = ∇_y Ψ(ϵ; y(t)),   (
          <xref ref-type="bibr" rid="ref13">13</xref>
          )
whose matrix has the same block-arrow structure as (8), with
H_i = ϵ I_i + ϵ Q_i D_i(y)^{-2} Q_i^T,   M_i = ϵ P_i D_i(y)^{-2} Q_i^T   (i = 1, …, n),
Mn+1 = ϵ I_{n+1} + ϵ ∑_{i=1}^{n} P_i D_i(y)^{-2} P_i^T.
        </p>
        <p>
          Here, the diagonal matrices D_i(y) = diag(c_i − Q_i^T y_i − P_i^T y_{n+1}) correspond to the partition of the dual vector y into sub-vectors y_i according to the matrix partition (
          <xref ref-type="bibr" rid="ref9">9</xref>
          ), c_i being the corresponding sub-vector of c.
        </p>
        <p>
          Again the sparse structure of the Hessian gives us the opportunity to solve linear system (
          <xref ref-type="bibr" rid="ref13">13</xref>
          ) in parallel mode.
        </p>
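        <p>A dense sketch of the dual iterations above (our illustration; the start point y(0) = 0 assumes c &gt; 0, and the damping rule, keeping the iterates strictly inside Ω, is our choice):</p>

```python
import numpy as np

def newton_dual_barrier(A, b, c, eps=1e-3, iters=100):
    # Newton ascent for Psi(eps; y) = (b,y) - (eps/2)*||y||^2 + eps*sum(ln(c - A^T y)),
    # keeping the iterates strictly inside Omega (all slacks strictly positive).
    m, n = A.shape
    y = np.zeros(m)                       # feasible start, assuming c > 0
    for _ in range(iters):
        d = c - A.T @ y                   # slacks d_i = c_i - (A_i, y)
        grad = b - eps * y - eps * (A @ (1.0 / d))
        H = eps * np.eye(m) + eps * (A * (1.0 / d**2)) @ A.T   # minus the Hessian of Psi
        w = np.linalg.solve(H, grad)
        u = A.T @ w
        step = 1.0                        # fraction-to-the-boundary damping
        shrink = u > 0
        if np.any(shrink):
            step = min(1.0, 0.99 * float(np.min(d[shrink] / u[shrink])))
        y = y + step * w
    return y
```

        <p>The primal approximation is then recovered coordinatewise as x̂ϵ_i = ϵ (c_i − (A_i, ŷϵ))^{-1}, as in Theorem 2.</p>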
      </sec>
      <sec id="sec-1-2b">
        <title>Parallel Scheme of Implementation</title>
        <p>
          It is easy to see that in both cases above the structure of the Hessians is the same. Let us discuss how to solve a sparse linear system of the type
( H1              M1^T ) ( w1   )   ( f1   )
(     H2          M2^T ) ( w2   )   ( f2   )
(         ...     ...  ) ( ...  ) = ( ...  )
(             Hn  Mn^T ) ( wn   )   ( fn   )
( M1  M2  ... Mn  Mn+1 ) ( wn+1 )   ( fn+1 ),   (
          <xref ref-type="bibr" rid="ref14">14</xref>
          )
on a massively parallel processor (MPP) or a cluster of workstations (COW) with MPI instructions.
        </p>
        <p>
          For simplicity, assume that the number of processors in our MIMD computer is equal to n + 1. Let the processors with indexes from 1 to n be called the slaves and the processor with index n + 1 be called the master. Suppose that the input data of LP problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) is distributed among the slaves in such a manner that the slave with index i stores the matrix pair (Q_i, P_i) in its local memory. Therefore, each slave with index i can calculate its own part (H_i, M_i) of the Hessian and some auxiliary matrices with index i concurrently. The master controls the calculation process and, besides, calculates the block Mn+1 and the other matrices with the additional index n + 1. All processors exchange messages using MPI instructions such as send, gather and broadcast to coordinate their work.
        </p>
        <p>
          Parallel resolution of system (
          <xref ref-type="bibr" rid="ref14">14</xref>
          ) may proceed as follows. At first, all the slaves calculate their own Cholesky decompositions H_i = L_i L_i^T concurrently, then form the auxiliary matrices S_i = L_i^{-1} M_i^T and R_i = S_i^T S_i and send them to the master. The master gathers these matrices into the global sum Mn+1 − ∑_{i=1}^{n} R_i and then calculates its own Cholesky decomposition Mn+1 − ∑_{i=1}^{n} R_i = L_{n+1} L_{n+1}^T. As a result, the global Cholesky decomposition of the system matrix is obtained, with the lower triangular factor
L = ( L1                        )
    (      L2                   )
    (           ...             )
    (                Ln         )
    ( S1^T S2^T ... Sn^T  Ln+1  ).
Further, all the slaves solve their own triangular systems L_i z_i = f_i concurrently, then form the products h_i = S_i^T z_i and send them to the master. The master gathers these vectors, forms the global sum fn+1 − ∑_{i=1}^{n} h_i, and solves its own triangular system L_{n+1} z_{n+1} = fn+1 − ∑_{i=1}^{n} h_i. Thereby, the solution z of the lower triangular system L z = f is obtained.
        </p>
        <p>
          Finally, the master solves the triangular system L_{n+1}^T w_{n+1} = z_{n+1} and broadcasts w_{n+1} to all the slaves. The slaves solve their triangular systems L_i^T w_i = z_i − S_i w_{n+1} concurrently. Thereby the upper triangular system L^T w = z is solved, and all the components of w = [w_1, …, w_n, w_{n+1}] are found.
        </p>
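        <p>The whole scheme can be rehearsed sequentially in a few lines of numpy; in this sketch (ours, with illustrative names) each loop body corresponds to the work of slave i, and the code outside the loop to the master:</p>

```python
import numpy as np

def solve_block_arrow(Hs, Ms, Mlast, fs, flast):
    # Solve the block-arrow system (14): diagonal blocks Hs[i], border
    # blocks Ms[i], corner block Mlast, right-hand sides fs[i] and flast.
    Ls, Ss, zs = [], [], []
    schur = Mlast.copy()                     # becomes Mn+1 - sum_i R_i
    g = flast.copy()                         # becomes fn+1 - sum_i h_i
    for Hi, Mi, fi in zip(Hs, Ms, fs):       # slave i, concurrent in the MPI version
        Li = np.linalg.cholesky(Hi)          # H_i = L_i L_i^T
        Si = np.linalg.solve(Li, Mi.T)       # S_i = L_i^{-1} M_i^T
        zi = np.linalg.solve(Li, fi)         # L_i z_i = f_i
        schur -= Si.T @ Si                   # R_i = S_i^T S_i, gathered by the master
        g -= Si.T @ zi                       # h_i = S_i^T z_i, gathered by the master
        Ls.append(Li); Ss.append(Si); zs.append(zi)
    Llast = np.linalg.cholesky(schur)        # master's Cholesky factor L_{n+1}
    zlast = np.linalg.solve(Llast, g)
    wlast = np.linalg.solve(Llast.T, zlast)  # back substitution on the master
    ws = [np.linalg.solve(Li.T, zi - Si @ wlast)   # slaves, after broadcast of wlast
          for Li, Si, zi in zip(Ls, Ss, zs)]
    return ws, wlast
```

        <p>Replacing the loop by per-process work and the two accumulations by MPI reduce operations yields the distributed version described above.</p>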
        <p>
          Note that a similar parallel scheme of calculations was applied by the author to sparse linear systems in [16], where one can find the estimates of the possible speedup
S_{n+1} ≈ n / (1 + C m^{-1}) → n   (m → ∞),   (
          <xref ref-type="bibr" rid="ref15">15</xref>
          )
as well as an example of an implementation scheme for Parallel Matlab and the encouraging results of numerical experiments with large LP problems. In (
          <xref ref-type="bibr" rid="ref15">15</xref>
          ), C &gt; 0 is some technical constant depending upon the concrete computer equipment, and m characterizes the average dimension of the blocks of the matrix A in problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ). If m is comparatively large, then the speedup tends to n.
        </p>
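        <p>For a feel of estimate (15): with the illustrative values C = 5 and average block dimension m = 1000 (both made up for this example), n = 16 slaves give a predicted speedup of about 15.9:</p>

```python
def predicted_speedup(n, C, m):
    # Evaluate estimate (15): S_{n+1} ~ n / (1 + C/m);
    # C is a hardware-dependent constant, m the average block dimension.
    return n / (1.0 + C / m)
```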
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Conclusion</title>
      <p>For a linear program with a block-angular constraints matrix, two parallel second-order optimization methods are proposed. The methods give the usual solution if the initial problem is solvable, and otherwise automatically deliver a solution of a certain relaxed LP problem. The methods use penalty and barrier functions as well as Tikhonov regularization of the Lagrangian of the initial LP problem. They contain a small parameter which tends to zero and minimize the residual of the constraints. Parallel calculations of matrix operations follow the MPI-type paradigm.</p>
      <p>Acknowledgements. This work was supported by the Russian Foundation for Basic Research (project no. 16-07-00266).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Eremin</surname>
            ,
            <given-names>I. I.</given-names>
          </string-name>
          , Mazurov, Vl. D., and Astaf'ev,
          <string-name>
            <surname>N. N.</surname>
          </string-name>
          :
          <article-title>Improper Problems of Linear and Convex Programming</article-title>
          . Nauka, Moscow. In Russian (
          <year>1983</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Eremin</surname>
            ,
            <given-names>I. I.</given-names>
          </string-name>
          <article-title>Theory of Linear Optimization. Inverse and Ill-Posed Problems Series</article-title>
          . VSP. Utrecht, Boston, Koln, Tokyo (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Chinneck</surname>
            ,
            <given-names>J.W.</given-names>
          </string-name>
          :
          <article-title>Feasibility and viability</article-title>
          . in: T. Gal, H.J.
          <string-name>
            <surname>Greenberg</surname>
          </string-name>
          (Eds.),
          <source>Advances in Sensitivity Analysis and Parametric Programming</source>
          , Kluwer Academic Publishers, Boston (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Ben-Tal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Nemirovski</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Robust Solutions of Linear Programming Problems Contaminated with Uncertain Data</article-title>
          .
          <source>Math. Progr. 88: 3</source>
          ,
          <issue>411</issue>
          –
          <fpage>424</fpage>
          (
          <year>2000</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Amaral</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ; Barahona,
          <string-name>
            <surname>P.</surname>
          </string-name>
          :
          <article-title>Connections between the total least squares and the correction of an infeasible system of linear inequalities</article-title>
          .
          <source>Linear Algebra and its Appl</source>
          .
          <volume>395</volume>
          ,
          <issue>191</issue>
          –
          <fpage>210</fpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Leon</surname>
            ,
            <given-names>T</given-names>
          </string-name>
          ; Liern,
          <string-name>
            <surname>V.</surname>
          </string-name>
          :
          <article-title>A Fuzzy Method to Repair Infeasibility in Linear Constrained Problems</article-title>
          .
          <source>Fuzzy Set and Systems</source>
          .
          <volume>122</volume>
          ,
          <issue>237</issue>
          –
          <fpage>243</fpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Dax</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>The smallest correction of an inconsistent system of linear inequalities</article-title>
          .
          <source>Optimization and Engineering</source>
          .
          <volume>2</volume>
          ,
          <issue>349</issue>
          –
          <fpage>359</fpage>
          (
          <year>2001</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Eremin</surname>
            ,
            <given-names>I. I.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Popov</surname>
            ,
            <given-names>L.D.</given-names>
          </string-name>
          :
          <article-title>Numerical analysis of improper linear programs using the DELTA-PLAN-ES interactive system</article-title>
          .
          <source>Optimization Methods and Software</source>
          . Vol.
          <volume>2</volume>
          . 69–
          <issue>78</issue>
          (
          <year>1993</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Skarin</surname>
          </string-name>
          , V. D.:
          <article-title>Regularized Lagrange function and correction methods for improper convex programming problems</article-title>
          .
          <source>Proc. Steklov Inst. Math. Mathematical Programming. Regularization and Approximation, suppl. 1</source>
          ,
          <issue>S116</issue>
          –
          <fpage>S144</fpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Popov</surname>
          </string-name>
          , L. D.:
          <article-title>Use of barrier functions for optimal correction of improper problems of linear programming of the 1st kind</article-title>
          .
          <source>Automation and Remote Control</source>
          .
          <volume>73</volume>
          :
          <issue>3</issue>
          ,
          <issue>417</issue>
          –
          <fpage>424</fpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Popov</surname>
          </string-name>
          , L. D.:
          <article-title>Dual approach to the application of barrier functions for the optimal correction of improper linear programming problems of the rst kind</article-title>
          .
          <source>Proceedings of the Steklov Institute of Mathematics. 288:1</source>
          ,
          <issue>173</issue>
          –
          <fpage>179</fpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Fiacco</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>McCormick</surname>
            ,
            <given-names>G.P.</given-names>
          </string-name>
          :
          <article-title>Nonlinear Programming: Sequential Unconstrained Minimization Techniques</article-title>
          . John Wiley and Sons, Inc: New York-London-Sidney (
          <year>1968</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Tikhonov</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ;
          <string-name>
            <surname>Vasil'ev</surname>
            ,
            <given-names>F.P.</given-names>
          </string-name>
          :
          <article-title>Methods for solving ill-posed extremal problems</article-title>
          , in: Mathematical Models and Numerical Methods. Banach Center Publ.,
          <source>Warszawa</source>
          .
          <volume>3</volume>
          ,
          <issue>291</issue>
          –
          <fpage>348</fpage>
          (
          <year>1978</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Rockafellar</surname>
          </string-name>
          , R.T.:
          <article-title>Convex Analysis</article-title>
          . Princeton University Press: Princeton,
          <string-name>
            <surname>N.J.</surname>
          </string-name>
          (
          <year>1970</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Gill</surname>
          </string-name>
          , Philip E.: Practical optimization / Philip E. Gill, Walter Murray,
          <string-name>
            <surname>Margaret</surname>
            <given-names>H.</given-names>
          </string-name>
          <string-name>
            <surname>Wright</surname>
          </string-name>
          . London ; New York : Academic Press (
          <year>1981</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Popov</surname>
          </string-name>
          , L. D.:
          <article-title>Experience in organizing hybrid parallel calculations in the Evtushenko-Golikov method for problems with block-angular structure</article-title>
          .
          <source>Automation and Remote Control</source>
          .
          <volume>75</volume>
          :
          <issue>4</issue>
          ,
          <issue>622</issue>
          –
          <fpage>631</fpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>