<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Theoretical Bound of the Complexity of Some Extragradient-Type Algorithms for Variational Inequalities in Banach Spaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Denysov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir Semenov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13 Volodymyrska Street, Kyiv, 01161</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>144</fpage>
      <lpage>155</lpage>
      <abstract>
<p>The work presents a study of three new extragradient-type algorithms for solving variational inequalities in a Banach space. Two of the algorithms are natural modifications of Tseng’s method and of the “Extrapolation from the Past” method for problems in Banach spaces, based on the generalized Alber projection. The third algorithm, called the operator extrapolation method, is a variant of the forward-reflected-backward algorithm, in which the generalized Alber projection is also used instead of the metric projection onto the admissible set. An advantage of the latter algorithm is that it requires only one computation of the operator value and of the generalized projection onto the feasible set per iteration. For variational inequalities with monotone Lipschitz operators acting in a 2-uniformly convex and uniformly smooth Banach space, O(1/ε) estimates of the complexity in terms of the gap function are proved.</p>
      </abstract>
      <kwd-group>
<kwd>variational inequality</kwd>
        <kwd>monotone operator</kwd>
        <kwd>extragradient algorithm</kwd>
        <kwd>Extrapolation from the Past</kwd>
        <kwd>gap function</kwd>
        <kwd>complexity</kwd>
        <kwd>2-uniformly convex Banach space</kwd>
        <kwd>uniformly smooth Banach space</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>where ‖p‖₁ = Σᵢ₌₁ⁿ pᵢ. The last saddle point problem can be rewritten as a bilinear min-max problem on
the product of standard simplexes
min_{x∈Δₙ} max_{v∈Δ₂ₙ} ⟨Jv, Px − x⟩,
where J is an n × 2n matrix of the form (I, −I), where I is the identity matrix. We obtain the following
variational inequality problem:</p>
      <p>find x̄ ∈ Δₙ, v̄ ∈ Δ₂ₙ: ⟨(P* − I)Jv̄, x − x̄⟩ + ⟨J*(I − P)x̄, v − v̄⟩ ≥ 0 ∀x ∈ Δₙ, ∀v ∈ Δ₂ₙ.</p>
      <p>
Note that nonsmooth optimization problems can often be solved effectively with algorithms for
variational inequalities if the former are reformulated as saddle point problems [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. With the growing
popularity of generative adversarial networks (GANs) and other adversarial learning models, a steady
interest in algorithms for solving variational inequalities has arisen among specialists in the field of
machine learning [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>
The most widely known method for solving variational inequalities is the so-called Korpelevich
extragradient algorithm [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Many publications are devoted to the study of this algorithm and its
modifications [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref6">6, 10–16</xref>
        ].
      </p>
      <p>
An effective modern version of the extragradient method is the Nemirovski mirror-prox method [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
This method can be interpreted as a variant of the extragradient method with projection understood in
the sense of Bregman divergence. One more interesting method of dual extrapolation for solving
variational inequalities was proposed by Nesterov [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. Adaptive variants of the Nemirovski proximal
mirror method were studied in [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11–13</xref>
        ].
      </p>
      <p>
In the early 1980s, Popov proposed a modification of the classical Arrow–Hurwicz algorithm for
finding saddle points of convex-concave functions [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. Recently Popov's algorithm for variational
inequalities has become well known among machine learning specialists under the name “Extrapolation
from the Past” [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>
        A modification of Popov's method for solving variational inequalities with monotone operators was
studied in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. And in the article [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], a two-stage proximal algorithm for solving the equilibrium
programming problem is proposed, which is an adaptation of the method [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] to the general Ky Fan
inequalities. The algorithm from [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] uses Bregman divergence instead of Euclidean distance. Further
development of this circle of ideas led to the emergence of the so-called forward-reflected-backward
algorithm [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]:
xₙ₊₁ = P_C(xₙ − λₙAxₙ − λₙ₋₁(Axₙ − Axₙ₋₁)),
and the similar method [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ]:
xₙ₊₁ = P_C(xₙ − λₙAxₙ) − λₙ₋₁(Axₙ − Axₙ₋₁).
      </p>
      <p>
        Recently, using theory of Banach spaces and constructions of their geometry [
        <xref ref-type="bibr" rid="ref26 ref27 ref28 ref29 ref30">26–30</xref>
        ], progress has
been achieved in the research of algorithms for problems above in Banach spaces [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref3">3, 23–25</xref>
        ]. Extensive
material on this topic is contained in the book [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The next algorithm for solving variational inequalities
in a 2-uniformly convex and uniformly smooth Banach space was proposed in [
        <xref ref-type="bibr" rid="ref22">22</xref>
]:
xₙ₊₁ = Π_C J⁻¹(Jxₙ − λAxₙ),
where Π_C is the Alber generalized projection operator [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], λ > 0, and J is the normalized duality mapping from
E to E*. This method converges weakly for inversely strongly monotone (cocoercive) operators
A : E → E*. Shehu [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] has recently extended Tseng’s result to 2-uniformly convex and uniformly
smooth Banach spaces. He proposed the following weakly convergent process:
yₙ = Π_C J⁻¹(Jxₙ − λₙAxₙ),
xₙ₊₁ = J⁻¹(Jyₙ − λₙ(Ayₙ − Axₙ)),
where λₙ > 0 is either chosen based on the value of the Lipschitz constant of the operator A or computed with a kind
of line search procedure.
      </p>
      <p>
        It should be noted that the early research on algorithms for solving variational inequalities was
usually concentrated on the study of convergence of algorithms and related questions of an asymptotic
nature [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref17 ref19 ref20 ref21 ref22 ref23 ref24 ref25">13–15, 17, 19–25</xref>
        ].
      </p>
      <p>
        More recent studies are focused on estimating the number of iterations of algorithms required to
obtain an approximate solution of a given quality [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref16 ref18 ref6 ref8">6, 8, 10–12, 16, 18</xref>
        ]. This direction of research was
initiated by the work of Nemirovski [
        <xref ref-type="bibr" rid="ref6">6</xref>
], where the mirror-prox algorithm mentioned earlier was proposed
and an O(1/ε) complexity estimate in terms of the gap function was obtained for the class of problems with
a monotone Lipschitz continuous operator.
      </p>
      <p>
A fundamental question arose about the construction of an algorithm with an O(1/ε) complexity
estimate and a single computation of the operator value and of the projection onto the feasible set at each
iteration step. The algorithm of [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] answers this question in the case of a Hilbert space.
      </p>
      <p>
        This work is devoted to the study of three new extragradient type algorithms for solving monotone
variational inequalities in a Banach space. The first two algorithms are natural modifications of Tseng's
method [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and “Extrapolation from the Past” method [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ] for problems in Banach spaces using the
generalized Alber projection. An iteration of each of these algorithms is cheaper than an iteration of
the extragradient method: the first one uses a single projection per iteration, and the second one
needs only one evaluation of the operator. The third algorithm, called the operator extrapolation method, is a
variant of the forward-reflected-backward algorithm, proposed by Malitsky and Tam [
        <xref ref-type="bibr" rid="ref20">20</xref>
]. The operator
extrapolation method also uses the generalized Alber projection onto the feasible set. An attractive feature
of the algorithm is that each iterative step requires only one computation of the operator value and of the
generalized projection. The O(1/ε) complexity estimates are proved in terms of the gap function for variational
inequalities with monotone Lipschitz operators acting in a 2-uniformly convex and uniformly smooth
Banach space.
      </p>
      <p>The article has the following structure. Section 2 contains the necessary information on the geometry
of Banach spaces. Section 3 is devoted to variational inequalities. The algorithms are described in
Section 4. The formulations and proofs of complexity estimations are presented in Section 5.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>
Let us recall some basic terms and results from the geometry of Banach spaces, which are needed
to formulate and prove our results [
        <xref ref-type="bibr" rid="ref23 ref25 ref26 ref27 ref28 ref29 ref30">23, 25–30</xref>
        ].
      </p>
<p>Everywhere below, E is a real Banach space with norm ‖·‖, E* is the dual space of E, and ⟨x*, x⟩ is the
value of the functional x* ∈ E* on the element x ∈ E. We denote by ‖·‖* the norm in E*.</p>
      <p>
        Let SE  x  E : x 1 . Banach space E is called strictly convex, if for all x, y  SE and x  y
Banach space E is called uniformly convex, if  E    0  0, 2 [
        <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
        ]. Banach space E is
called 2-uniformly convex, if exists c  0 such that  E    c 2 for all  0, 2 [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. Obviously
2uniformly convex space is uniformly convex. It’s known that uniformly convex Banach space is
reflexive [
        <xref ref-type="bibr" rid="ref26 ref27 ref28">26-28</xref>
        ].
      </p>
<p>A Banach space E is called smooth if the limit
lim_{t→0} t⁻¹(‖x + ty‖ − ‖x‖) (1)
exists for all x, y ∈ S_E [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. A Banach space E is called uniformly smooth if the limit (1) exists uniformly
for x, y ∈ S_E [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. There is a duality between the convexity and smoothness of E and of its dual space E*
[
        <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
        ]: if E* is smooth, then E is strictly convex; if E* is strictly convex, then E is smooth; E is uniformly
convex if and only if E* is uniformly smooth; E is uniformly smooth if and only if E* is uniformly convex. The first two
implications can be reversed for a reflexive space E.
      </p>
      <p>
The modulus of smoothness of the space E is defined as
ρ_E(t) = sup{(‖x + ty‖ + ‖x − ty‖)/2 − 1 : x, y ∈ S_E}.
Uniform smoothness of the Banach space E is equivalent to the relation lim_{t→+0} t⁻¹ρ_E(t) = 0 [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ].
A Banach space E is called 2-uniformly smooth if there exists c > 0 such that ρ_E(t) ≤ ct² for all t > 0 [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ]. A Banach space E is 2-uniformly convex if and only if E* is 2-uniformly smooth [
        <xref ref-type="bibr" rid="ref27 ref28 ref29">27–29</xref>
        ].
      </p>
      <p>
It is known that Hilbert spaces and the spaces Lp (p ∈ (1, 2]) are 2-uniformly convex and uniformly
smooth (the spaces Lp are uniformly smooth for p ∈ (1, ∞) and 2-uniformly smooth for p ∈ [2, ∞)) [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ].
      </p>
<p>The multivalued mapping J : E → 2^{E*}, which has the form</p>
      <p>
        Jx = {x* ∈ E* : ⟨x*, x⟩ = ‖x‖² = ‖x*‖²},
is called the normalized duality mapping [
        <xref ref-type="bibr" rid="ref27 ref28">27, 28</xref>
        ].
      </p>
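<p>For finite-dimensional ℓ_p the normalized duality mapping admits the explicit form Jx = ‖x‖_p^{2−p} |x|^{p−1} sgn(x); the sketch below (an illustration, with this formula taken as an assumption) implements it and verifies the defining relations ⟨Jx, x⟩ = ‖x‖² = ‖Jx‖*² numerically.</p>

```python
import numpy as np

def norm_p(x, p):
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

def duality_map(x, p):
    """Normalized duality mapping on l_p (p > 1):
    (Jx)_i = ||x||_p**(2 - p) * |x_i|**(p - 1) * sign(x_i)."""
    nx = norm_p(x, p)
    if nx == 0.0:
        return np.zeros_like(x)
    return nx ** (2.0 - p) * np.abs(x) ** (p - 1.0) * np.sign(x)

p = 1.5
q = p / (p - 1.0)                 # dual exponent, 1/p + 1/q = 1
rng = np.random.default_rng(1)
x = rng.standard_normal(6)
jx = duality_map(x, p)

# defining relations of the normalized duality mapping
assert 1e-9 > abs(jx @ x - norm_p(x, p) ** 2)
assert 1e-9 > abs(norm_p(jx, q) - norm_p(x, p))
```

<p>For p = 2 the formula reduces to Jx = x, which matches the identity mapping of a Hilbert space mentioned below.</p>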
      <p>
        For Hilbert space J  I (identity). It is known [
        <xref ref-type="bibr" rid="ref23 ref27 ref28">23, 27, 28</xref>
        ] that: if space E is smooth, then mapping
J is single-valued; if E is strictly convex, then J is injective and strictly monotone; if space E is
reflexive, then mapping J is surjective; if E is uniformly smooth, then J is uniformly continuous on
bounded subsets of E. The explicit form of the mapping J for the spaces ℓ_p, L_p and W_p^m (p ∈ (1, ∞)) is
given in [
        <xref ref-type="bibr" rid="ref23 ref27 ref28 ref3">3, 23, 27, 28</xref>
        ].
      </p>
      <p>
Let E be a smooth Banach space. Let us consider the functional introduced by Y. Alber in [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]:
d(x, y) = ‖x‖² − 2⟨Jy, x⟩ + ‖y‖²  ∀x, y ∈ E.
      </p>
<p>The following useful three-point identity follows from the definition above:
d(x, y) = d(x, z) + d(z, y) + 2⟨Jz − Jy, x − z⟩  ∀x, y, z ∈ E.</p>
<p>If the space E is strictly convex, then for x, y ∈ E we have d(x, y) = 0 ⟺ x = y.</p>
      <sec id="sec-2-1">
        <title>Generalized Alber projection</title>
        <p>
          Lemma 1 ([
          <xref ref-type="bibr" rid="ref25">25</xref>
          ]). Let E be a 2-uniformly convex smooth Banach space. Then for some μ ≥ 1
d(x, y) ≥ (1/μ)‖x − y‖²  ∀x, y ∈ E.
        </p>
        <p>
          For the Banach spaces ℓ_p, L_p and W_p^m (p ∈ (1, 2]) we have μ = 1/(p − 1)
[
          <xref ref-type="bibr" rid="ref29 ref30">29, 30</xref>
          ]. For a Hilbert space, the inequality from Lemma 1 becomes an identity with μ = 1.
        </p>
        <p>
          Lemma 2 ([
          <xref ref-type="bibr" rid="ref29">29</xref>
          ]). Let E be a 2-uniformly smooth Banach space. Then for some β > 0
‖x + y‖² ≤ ‖x‖² + 2⟨Jx, y⟩ + β‖y‖²  ∀x, y ∈ E.
        </p>
        <p>
          For the Banach spaces ℓ_p, L_p and W_p^m (p ∈ [2, ∞)) we have β = p − 1 [
          <xref ref-type="bibr" rid="ref29 ref30">29, 30</xref>
          ]. For a Hilbert space,
the inequality from Lemma 2 becomes an identity with β = 1.
        </p>
        <p>
          Let K be a non-empty closed and convex subset of a reflexive, strictly convex and smooth space
E. It is known [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ] that for each x ∈ E there exists a unique point z ∈ K such that
d(z, x) = inf_{y∈K} d(y, x).
        </p>
        <p>
          This point z is denoted by Π_K x, and the corresponding operator Π_K : E → K is called the generalized
projection of E onto K (the generalized Alber projection) [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]. Note that for a Hilbert space Π_K
coincides with the metric projection onto the set K.
        </p>
        <p>
          Lemma 3 ([
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]). Let K be a closed and convex subset of a reflexive, strictly convex and smooth space
E, x ∈ E, z ∈ K. Then
z = Π_K x  ⟺  ⟨Jz − Jx, y − z⟩ ≥ 0  ∀y ∈ K.
        </p>
        <p>
          The inequality from Lemma 3 is equivalent to the following one [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]:
d(y, Π_K x) + d(Π_K x, x) ≤ d(y, x)  ∀y ∈ K.
        </p>
        <p>Remark 1. The main element of the algorithms studied below is the calculation of a new point
x⁺ = Π_K J⁻¹(Jx − x*)
from known x ∈ E and x* ∈ E*. From Lemma 3 and the mentioned three-point identity follows the inequality
fundamental for the analysis of the algorithms:
d(y, x⁺) ≤ d(y, x) − d(x⁺, x) + 2⟨x*, y − x⁺⟩  ∀y ∈ K.</p>
        <p>
          Basic information about monotone operators and variational inequalities in Banach spaces can be
found in [
          <xref ref-type="bibr" rid="ref1 ref23 ref28 ref3">1, 3, 23, 28</xref>
          ].
        </p>
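<p>The variational characterization of Lemma 3 can be illustrated numerically. The sketch below approximates the generalized projection onto a box in ℓ_p by projected gradient descent on z ↦ d(z, x) (whose gradient in z is 2(Jz − Jx)); the explicit ℓ_p duality mapping, the box set, the step size and the iteration count are all ad-hoc assumptions of this illustration, not constructions from the text.</p>

```python
import numpy as np

def norm_p(x, p):
    return float(np.sum(np.abs(x) ** p) ** (1.0 / p))

def duality_map(x, p):
    nx = norm_p(x, p)
    return nx ** (2.0 - p) * np.abs(x) ** (p - 1.0) * np.sign(x) if nx else np.zeros_like(x)

def alber_d(x, y, p):
    return norm_p(x, p) ** 2 - 2.0 * duality_map(y, p) @ x + norm_p(y, p) ** 2

def gen_proj_box(x, p, step=0.01, iters=20000):
    """Approximate generalized projection of x onto the box [-1, 1]^n in l_p:
    projected gradient descent on z -> d(z, x)."""
    z = np.clip(x, -1.0, 1.0)
    jx = duality_map(x, p)
    for _ in range(iters):
        z = np.clip(z - step * 2.0 * (duality_map(z, p) - jx), -1.0, 1.0)
    return z

p = 1.5
rng = np.random.default_rng(3)
x = 3.0 * rng.standard_normal(4)
z = gen_proj_box(x, p)

samples = [rng.uniform(-1.0, 1.0, 4) for _ in range(300)]
# Lemma 3: the pairing (Jz - Jx, y - z) should be non-negative for feasible y
jdiff = duality_map(z, p) - duality_map(x, p)
worst_angle = min(jdiff @ (y - z) for y in samples)
# near-minimality of d(., x) at z among sampled feasible points
worst_gap = min(alber_d(y, x, p) - alber_d(z, x, p) for y in samples)
```

<p>Both quantities should be (numerically) non-negative, matching Lemma 3 and the definition of Π_K as the d-minimizer.</p>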
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Variational inequalities</title>
<p>Let E be a 2-uniformly convex and uniformly smooth Banach space, C a non-empty subset of the
space E, and A an operator acting from E to E*. Let us consider the variational inequality:
find x̄ ∈ C such that</p>
      <p>⟨Ax̄, y − x̄⟩ ≥ 0  ∀y ∈ C. (2)</p>
      <sec id="sec-3-1">
<title>Let us denote the set of solutions of (2) by S.</title>
<p>We need the following assumptions:
 the set C ⊆ E is convex and closed;
 the operator A : E → E* is monotone and Lipschitz continuous with constant L > 0 on C;
 the set S is non-empty.</p>
<p>Let us consider the dual variational inequality:
find x̄ ∈ C such that</p>
        <p>⟨Ay, x̄ − y⟩ ≤ 0  ∀y ∈ C. (3)</p>
        <p>
          We denote the set of solutions of (3) by S^d. Note that the set S^d is convex and closed. Inequality (3)
is sometimes called the weak or dual formulation of (2) (or the Minty inequality), and solutions of (3) are called
weak solutions of (2). For a monotone operator A we always have S ⊆ S^d. In our setting we have
S^d = S [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>
          Variational inequality (2) can be formulated as a fixed-point problem [
          <xref ref-type="bibr" rid="ref23">23</xref>
          ]:
x̄ = Π_C J⁻¹(Jx̄ − λAx̄), (4)
with λ > 0. Formulation (4) is useful, as it leads to an obvious algorithmic idea. The procedure
xₙ₊₁ = Π_C J⁻¹(Jxₙ − λAxₙ) (5)
was studied in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] for inversely strongly monotone operators A : E → E*. However, for Lipschitz
continuous monotone operators, algorithm (5) generally does not converge. Numerous modifications
of the extragradient algorithm [
          <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref24 ref6 ref9">6, 9–16, 24</xref>
          ] can be used under such conditions.
        </p>
        <p>
          In this paper, we focus on three algorithms: the Tseng method [
          <xref ref-type="bibr" rid="ref14">14</xref>
          ], the “Extrapolation from the Past”
method [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ], and the more recent forward-reflected-backward algorithm [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. We consider their natural
modifications for problems in Banach spaces using the Alber generalized projection instead of the
metric projection.
        </p>
        <p>
          The goal is to estimate the number of iterations of the algorithms necessary to obtain an approximate
solution of a given quality. The quality of an approximate solution x̄ ∈ C of variational inequality (2) will
be measured using the non-negative gap function [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]
Gap(x̄) = sup_{y∈C} ⟨Ay, x̄ − y⟩.
If x̄ ∈ C is a solution of (2), then Gap(x̄) = 0, and vice
versa, if for some x̄ ∈ C we have Gap(x̄) = 0, then x̄ is a solution of (2).
        </p>
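<p>Since C is bounded, Gap(x̄) = sup_{y∈C} ⟨Ay, x̄ − y⟩ can be bounded from below by sampling feasible points y. The sketch below does this for the matching-pennies matrix game, an illustrative example not taken from the text, for which the equilibrium is known, so the estimate can be sanity-checked.</p>

```python
import numpy as np

B = np.array([[1.0, -1.0], [-1.0, 1.0]])      # matching pennies payoff

def A(z):
    """VI operator of the saddle problem min_x max_v x^T B v on two simplexes."""
    x, v = z[:2], z[2:]
    return np.concatenate([B @ v, -B.T @ x])

def gap_lower_bound(z, samples):
    """max over sampled feasible y of the pairing (A(y), z - y)."""
    return max(A(y) @ (z - y) for y in samples)

# feasible sample points: products of simplex vertices and the barycenter
verts = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
pts = [np.concatenate([a, b]) for a in verts for b in verts]

star = np.array([0.5, 0.5, 0.5, 0.5])         # known equilibrium of this game
z_bad = np.array([1.0, 0.0, 0.5, 0.5])        # a non-solution
g_star = gap_lower_bound(star, pts)
g_bad = gap_lower_bound(z_bad, pts)
```

<p>At the equilibrium the estimate is zero, while at the non-solution it is strictly positive, in line with the characterization of solutions via the gap function.</p>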
<p>Everywhere below we assume that the set C ⊆ E is bounded.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Algorithms</title>
<p>Let us study the following iterative extragradient-type algorithms for finding solutions of the variational
inequality problem (2).</p>
      <p>
        Algorithm 1. Modified Tseng method ([
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]).
      </p>
      <p>Select x1  E , n  0 . Set n  1 .
1. Calculate</p>
      <p>If yn  xn , then STOP. Else calculate
yn  C J 1  Jxn  n Axn  .</p>
      <p>xn1  J 1  Jyn n  Ayn  Axn  .
3. Set n : n  1 and go to step 1.</p>
      <p>
Algorithm 1 is a modification of the forward-backward-forward method by P. Tseng [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] for problems
in Banach spaces, where generalized Alber projection is used instead of the metric one.
      </p>
      <p>
The weak convergence of Algorithm 1 in a 2-uniformly convex and uniformly smooth Banach
space is proved in [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ].
      </p>
<p>Note that in the case of a Hilbert space and without constraints, Algorithm 1 coincides with the
Korpelevich extragradient method.</p>
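<p>In the Hilbert-space special case (J = I, Π_C the metric projection) Algorithm 1 takes a particularly simple form. The following minimal NumPy sketch runs it on a toy strongly monotone affine operator over a Euclidean ball; the operator, the feasible set and the step size are illustrative assumptions.</p>

```python
import numpy as np

def proj_ball(x, r=1.0):
    nx = float(np.linalg.norm(x))
    return x if r > nx else r * x / nx

def tseng(A, x, lam, iters):
    """Algorithm 1 with J = I: y = P_C(x - lam*A(x)); x+ = y - lam*(A(y) - A(x))."""
    for _ in range(iters):
        ax = A(x)
        y = proj_ball(x - lam * ax)
        x = y - lam * (A(y) - ax)
    return x

rng = np.random.default_rng(4)
S = rng.standard_normal((5, 5))
M = (S - S.T) + 0.5 * np.eye(5)        # skew part + 0.5*I: strongly monotone, Lipschitz
q = rng.standard_normal(5)
A = lambda z: M @ z + q
L = float(np.linalg.norm(M, 2))        # Lipschitz constant (spectral norm)

x = tseng(A, np.zeros(5), 0.5 / L, 5000)
res = float(np.linalg.norm(x - proj_ball(x - A(x))))   # natural residual, 0 at solutions
```

<p>The natural residual ‖x − P_C(x − Ax)‖ vanishes exactly at solutions of (2), so its decay is a convenient convergence check.</p>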
      <p>Algorithm 2. Extrapolation from the Past.</p>
      <p>Select x1  y0  E , n  0 . Set n  1 .
1. Calculate</p>
      <sec id="sec-4-1">
        <title>2. Calculate</title>
        <p>yn  C J 1  Jxn  n Ayn1  .</p>
        <p>xn1  C J 1  Jxn  n Ayn ,
if xn1  yn  xn , then STOP. Else set n : n  1 and go to step 1.</p>
        <p>
Algorithm 2 is a modification of the algorithm of L.D. Popov [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ] for problems in Banach spaces using
generalized Alber projection operator instead of the metric one.
        </p>
        <p>
          The convergence of Algorithm 2 in a Hilbert space and in Euclidean space with Bregman divergence
instead of Euclidean distance is proved in [
          <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
          ].
        </p>
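<p>Again in the Hilbert-space special case Algorithm 2 is short to write down. The sketch below reuses the toy strongly monotone affine operator over a Euclidean ball; all problem data and the step size are illustrative assumptions.</p>

```python
import numpy as np

def proj_ball(x, r=1.0):
    nx = float(np.linalg.norm(x))
    return x if r > nx else r * x / nx

def popov(A, x, lam, iters):
    """Algorithm 2 with J = I: y_n = P_C(x_n - lam*A(y_{n-1}));
    x_{n+1} = P_C(x_n - lam*A(y_n))."""
    y = x.copy()
    for _ in range(iters):
        y_new = proj_ball(x - lam * A(y))
        x = proj_ball(x - lam * A(y_new))
        y = y_new
    return x

rng = np.random.default_rng(5)
S = rng.standard_normal((5, 5))
M = (S - S.T) + 0.5 * np.eye(5)
q = rng.standard_normal(5)
A = lambda z: M @ z + q
L = float(np.linalg.norm(M, 2))

x = popov(A, np.zeros(5), 0.3 / L, 5000)       # step inside the usual 1/(3L) range
res = float(np.linalg.norm(x - proj_ball(x - A(x))))
```

<p>Only one operator evaluation per iteration is needed, which is the economy of the “Extrapolation from the Past” scheme compared to the extragradient method.</p>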
        <p>Algorithm 3. Operator extrapolation.</p>
        <p>Select x0  x1  E , n  0 . Set n  1 .
1. Calculate
xn1   J 1  Jxn n Axn n1  Axn  Axn1  .</p>
        <p>C
2. If xn1  xn  xn1 , then STOP. Else set n : n  1 and go to step 1.</p>
        <p>
          Algorithm 3 is a modification of modern "forward-reflected-backward algorithm" [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ] for
variational inequalities in Banach spaces.
        </p>
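<p>In the Hilbert-space special case the one-line update of Algorithm 3 reads xₙ₊₁ = P_C(xₙ − λₙ(2Axₙ − Axₙ₋₁)) for constant steps. A minimal sketch on the same illustrative strongly monotone toy problem:</p>

```python
import numpy as np

def proj_ball(x, r=1.0):
    nx = float(np.linalg.norm(x))
    return x if r > nx else r * x / nx

def forward_reflected_backward(A, x, lam, iters):
    """Algorithm 3 with J = I and constant step:
    x+ = P_C(x - lam*A(x) - lam*(A(x) - A(x_prev)))."""
    x_prev = x.copy()                       # x_0 = x_1
    for _ in range(iters):
        ax, axp = A(x), A(x_prev)
        x_prev, x = x, proj_ball(x - lam * (2.0 * ax - axp))
    return x

rng = np.random.default_rng(6)
S = rng.standard_normal((5, 5))
M = (S - S.T) + 0.5 * np.eye(5)
q = rng.standard_normal(5)
A = lambda z: M @ z + q
L = float(np.linalg.norm(M, 2))

x = forward_reflected_backward(A, np.zeros(5), 0.4 / L, 5000)   # step inside (0, 1/(2L))
res = float(np.linalg.norm(x - proj_ball(x - A(x))))
```

<p>Each iteration uses a single operator value and a single projection, which is exactly the attractive feature of the operator extrapolation method discussed above.</p>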
<p>Remark 2. Algorithm 3 can be represented in a form similar to Algorithm 2:</p>
        <p> yn  C J 1  Jxn  n Axn ,

xn1  C J 1  Jxn  n Ayn .</p>
<p>This formulation indicates the conceptual similarity of Algorithms 1 and 3. More precisely, the operator
extrapolation algorithm is obtained from Algorithm 1 in the same way as Algorithm 2 can be obtained
from the analogue of the extragradient algorithm.</p>
        <p>Now let us turn to the analysis of the algorithms, namely, to the estimation of the number of iterations
required to obtain an approximate solution of the variational inequality (2) with a given value of the
gap function.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Analysis</title>
<p>We will prove that Algorithms 1–3 above require O(LD/ε) iterations to obtain a feasible
point x̄ ∈ C for which Gap(x̄) ≤ ε, where ε > 0 and D = sup_{a,b∈C} d(a, b) is finite.</p>
<p>Theorem 1. Let (xₙ), (yₙ) be the sequences generated by Algorithm 1, and let λₙ ∈ (0, (√(βμ)L)⁻¹). Then
for all y ∈ C
d(y, xₙ₊₁) ≤ d(y, xₙ) − (1 − βμλₙ²L²) d(yₙ, xₙ) − 2λₙ⟨Ayₙ, yₙ − y⟩.</p>
      <p>Proof. For arbitrary y ∈ C we have
d(y, xₙ₊₁) = d(y, J⁻¹(Jyₙ − λₙ(Ayₙ − Axₙ))) =
= ‖y‖² − 2⟨Jyₙ − λₙ(Ayₙ − Axₙ), y⟩ + ‖Jyₙ − λₙ(Ayₙ − Axₙ)‖*² =
= ‖y‖² − 2⟨Jyₙ, y⟩ + 2λₙ⟨Ayₙ − Axₙ, y⟩ + ‖Jyₙ − λₙ(Ayₙ − Axₙ)‖*² =
= d(y, yₙ) − ‖yₙ‖² + 2λₙ⟨Ayₙ − Axₙ, y⟩ + ‖Jyₙ − λₙ(Ayₙ − Axₙ)‖*².
Applying Lemma 2 in the dual space E*, we get the estimate
d(y, xₙ₊₁) ≤ d(y, yₙ) − 2λₙ⟨Ayₙ − Axₙ, yₙ − y⟩ + βλₙ²‖Ayₙ − Axₙ‖*². (7)
The three-point identity gives
d(y, yₙ) = d(y, xₙ) + d(xₙ, yₙ) + 2⟨Jxₙ − Jyₙ, y − xₙ⟩.</p>
      <sec id="sec-5-1">
        <title>Using Lemma 2, we get an estimation</title>
<p>Let us use the three-point identity to transform d(y, yₙ) and substitute the result into (7):</p>
        <p>d(y, xₙ₊₁) ≤ d(y, xₙ) + d(xₙ, yₙ) + 2⟨Jxₙ − Jyₙ, y − xₙ⟩ −
− 2λₙ⟨Ayₙ − Axₙ, yₙ − y⟩ + βλₙ²‖Ayₙ − Axₙ‖*².</p>
      </sec>
      <sec id="sec-5-2">
        <title>Identity is equivalent to inequality</title>
        <p>2 Jxn  Jyn , y  xn  2n Ayn  Axn , yn  y n2 Ayn  Axn *2 .</p>
        <p>From Lipschitz continuity of A , equality d  xn , yn   2 Jyn  Jxn , yn  xn  d  yn , xn  and Lemma 1
d  y, xn1   d  y, xn   1n2L2  d  yn , xn  </p>
        <p>2 Jxn  Jyn  n Axn  n Ayn , yn  y .</p>
        <p>yn  C J 1  Jxn  n Axn 
Jxn  n Axn  Jyn , yn  y  0 .
which was required to prove. ■</p>
        <p>Corollary 1. Let xn,  yn  are the sequences, generated by Algorithm 1 with n  2 1L . Then
for the sequence of means zN  N1 nN1yn the next inequality holds:</p>
        <p>GapzN   L  supyC dy,x1 .</p>
        <p>N</p>
      </sec>
      <sec id="sec-5-3">
        <title>For algorithm 2 we have the next result.</title>
        <p>Theorem 2. Let xn,  yn  are the sequences, generated by Algorithm 2. Let n 0, 2L1 . Then</p>
        <p>dy,xn1 dy,xn1n2L2dyn,xn 2n Ayn, yn  y .</p>
      </sec>
      <sec id="sec-5-4">
        <title>From monotonicity of A operator it follows</title>
        <p>dy,xn1 dy,xn1n2L2dyn,xn 2n Ay, yn  y .</p>
      </sec>
      <sec id="sec-5-5">
        <title>Let’s transform (8) as</title>
        <p>2n Ay, yn  y  dy,xndy,xn11n2L2dyn,xn.</p>
      </sec>
      <sec id="sec-5-6">
        <title>Summing (9) over n from 1 to N , we get</title>
<p>2 Σₙ₌₁ᴺ λₙ⟨Ay, yₙ − y⟩ ≤ d(y, x₁), (10)
which leads to</p>
        <p>⟨Ay, z_N − y⟩ ≤ d(y, x₁) / (2 Σₙ₌₁ᴺ λₙ),</p>
        <p>where z_N = (Σₙ₌₁ᴺ λₙ)⁻¹ Σₙ₌₁ᴺ λₙ yₙ. Passing in (10) to the supremum over y ∈ C, we obtain
Gap(z_N) ≤ sup_{y∈C} d(y, x₁) / (2 Σₙ₌₁ᴺ λₙ).</p>
        <p>For the Cesaro means sequence z_N = (Σₙ₌₁ᴺ λₙ)⁻¹ Σₙ₌₁ᴺ λₙ yₙ the following inequality holds:
Gap(z_N) ≤ (sup_{y∈C} d(y, x₁) + 2μλ₁L d(x₁, y₀)) / (2 Σₙ₌₁ᴺ λₙ).</p>
      </sec>
      <sec id="sec-5-7">
        <title>Proof. For arbitrary yC we have</title>
        <p>d y,xn1  d y,xnd xn1,xn 2n Ayn, y  xn1 .</p>
        <p>From monotonicity of A follows</p>
        <p>Ayn, y  xn1  Ayn, y  yn  Ayn, yn  xn1  Ay, y  yn  Ayn, yn  xn1 .
So, we have
dy,xn1 dy,xndxn1,xn 2n Ayn, yn  xn1  2n Ay, y  yn 
 dy,xndxn1,xn 2n Ayn1,yn  xn1 
Let’s write 2n Ayn1, yn  xn1 as
From the inclusion xn1C and Lemma 3 follows inequality</p>
      </sec>
      <sec id="sec-5-8">
        <title>As a result, we have</title>
        <p>Jxn nAyn1  Jyn,xn1  yn  0.
2n Ayn1, yn  xn1  2 Jxn nAyn1  Jyn,xn1  yn  2 Jyn  Jxn,xn1  yn .</p>
        <p>2n Ayn  Ayn1, yn  xn1 2n Ay, y  yn . (11)
Estimating the right side of (11) with (12), we come to the next inequality:
dy,xn1 dy,xndxn1,yndyn,xn
Now let’s estimate term 2n Ayn  Ayn1,yn  xn1 . We get
2n Ayn  Ayn1,yn  xn1 2n Ay,y  yn .
(13)
2n Ayn  Ayn1,yn  xn1  2n Ayn1  Ayn * xn1  yn  2nL yn1  yn xn1  yn 
 2nL2 2 yn1  yn 2  12 xn1  yn 2
 1
Estimating norms in (14) using inequality from Lemma 1, we get</p>
        <p>2n Ayn  Ayn1,yn  xn1 nL dxn,yn1
Using (15) in (13), we get
 y,xn1 dy,xn1nL 2dxn1,yn1nL 1 2dyn,xn
nL1 2dyn,xnnL 2dxn1,yn. (15)
nL dxn,yn1 2n Ay,y  yn . (16)</p>
      </sec>
      <sec id="sec-5-9">
        <title>Let’s rewrite (16) as</title>
        <p>2n Ay,yn  y  dy,xnnL dxn,yn1 dy,xn1n1L dxn1,yn
1Ln1  2ndxn1, yn1nL 1 2dyn,xn. (17)</p>
      </sec>
      <sec id="sec-5-10">
        <title>Summing (17) over n from 1 to N , we get</title>
<p>2 Σₙ₌₁ᴺ λₙ⟨Ay, yₙ − y⟩ ≤ d(y, x₁) + 2μλ₁L d(x₁, y₀),
and so</p>
        <p>⟨Ay, z_N − y⟩ ≤ (d(y, x₁) + 2μλ₁L d(x₁, y₀)) / (2 Σₙ₌₁ᴺ λₙ), (18)
where z_N = (Σₙ₌₁ᴺ λₙ)⁻¹ Σₙ₌₁ᴺ λₙ yₙ. Passing in (18) to the supremum over y ∈ C, we obtain</p>
        <p>Gap(z_N) ≤ (sup_{y∈C} d(y, x₁) + 2μλ₁L d(x₁, y₀)) / (2 Σₙ₌₁ᴺ λₙ),
which was required to prove. ■</p>
        <p>Corollary 2. Let (xₙ), (yₙ) be the sequences generated by Algorithm 2 with λₙ = (3μL)⁻¹. Then for the
means sequence z_N = N⁻¹ Σₙ₌₁ᴺ yₙ the following inequality holds:
Gap(z_N) ≤ μL (3 sup_{y∈C} d(y, x₁) + 2 d(x₁, y₀)) / (2N).</p>
      </sec>
      <sec id="sec-5-11">
<title>Let us study Algorithm 3.</title>
        <p>Theorem 3. Let (xₙ) be the sequence generated by Algorithm 3, and let λₙ ∈ (0, (2μL)⁻¹]. Then for the
sequence of Cesaro means z_{N+1} = (Σₙ₌₁ᴺ λₙ)⁻¹ Σₙ₌₁ᴺ λₙ xₙ₊₁ the following inequality holds:
Gap(z_{N+1}) ≤ sup_{y∈C} d(y, x₁) / (2 Σₙ₌₁ᴺ λₙ).</p>
        <p>Proof. For the sequence (xₙ) generated by Algorithm 3 the following inequality holds:
2⟨λₙAxₙ + λₙ₋₁(Axₙ − Axₙ₋₁), xₙ₊₁ − y⟩ ≤ d(y, xₙ) − d(xₙ₊₁, xₙ) − d(y, xₙ₊₁)  ∀y ∈ C. (19)</p>
      </sec>
      <sec id="sec-5-12">
        <title>Let’s rewrite (19) the next way:</title>
      </sec>
      <sec id="sec-5-13">
        <title>Summing (20) over n from 1 to N , we get</title>
        <p>dy,x1dy,xN1
dy,xndy,xn1 2n Axn1,xn1  y 2n Axn1  Axn,xn1  y 
2n1 Axn  Axn1,xn  y  2n1 Axn  Axn1,xn1  xn dxn1,xn . (20)</p>
        <p>N
 2n Axn1,xn1  y 2N AxN1  AxN,xN1  y 
n1</p>
        <p>nN1n</p>
        <p>N
 2n Axn1,xn1  y NL xN1  y 2.</p>
        <p>n1
So, we can come to inequality</p>
        <p>N
2n Axn1,xn1  y NL xN1  y 2 dy,xN1dy,x1 yС . (22)
n1
Using monotonicity of A, we get</p>
        <p>N N  N 
n Axn1,xn1  y  n Ay,xn1  y n  Ay,zN1  y , (23)
n1 n1  n1 
where zN1  nN1nxn1 . Using estimation (22) in (23), we come to inequality
 N  
2n  Ay,zN1  y  1 NL xN1  y 2  d y,x1 yС ,
 n1  
from which follows
 21 xN1  xN 2.
 N 
Gap  zN 1   sup Ay, zN 1  y   2n  sup d  y, x1  ,
yC  n1  yC
1
which was required to prove. ■
sequence zN 1  N1 nN1 xn1 the next inequality holds</p>
        <p>Corollary 3. Let  xn  be a sequence, generated by Algorithm 3 with n 
Gap  zN 1  </p>
        <p>L</p>
        <p>sup d  y, x1  .</p>
        <p>N yC</p>
        <p>1
2 L
. Then for the means</p>
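<p>The O(1/N) behaviour of the gap of the averaged iterate can be illustrated numerically in the Hilbert-space special case: for the matching-pennies matrix game on the product of two simplexes the gap of (x̄, v̄) is computable in closed form as max_j (Bᵀx̄)_j − min_i (Bv̄)_i. The problem, step size and iteration counts below are illustrative assumptions.</p>

```python
import numpy as np

B = np.array([[1.0, -1.0], [-1.0, 1.0]])       # matching pennies
L = float(np.linalg.norm(B, 2))                # Lipschitz constant of the VI operator

def proj_simplex(y):
    u = np.sort(y)[::-1]
    css = np.cumsum(u)
    idx = np.nonzero(u * np.arange(1, y.size + 1) > css - 1.0)[0][-1]
    return np.maximum(y - (css[idx] - 1.0) / (idx + 1.0), 0.0)

def A(z):
    x, v = z[:2], z[2:]
    return np.concatenate([B @ v, -B.T @ x])

def operator_extrapolation(N, lam):
    """Algorithm 3 with J = I: x+ = P_C(x - lam*(2*A(x) - A(x_prev)))."""
    z_prev = z = np.full(4, 0.5)               # x_0 = x_1, barycenters
    acc = np.zeros(4)
    for _ in range(N):
        w = z - lam * (2.0 * A(z) - A(z_prev))
        z_prev, z = z, np.concatenate([proj_simplex(w[:2]), proj_simplex(w[2:])])
        acc += z
    return acc / N

def exact_gap(z):
    x, v = z[:2], z[2:]
    return float(np.max(B.T @ x) - np.min(B @ v))

lam = 1.0 / (4.0 * L)                          # inside the (0, 1/(2L)) range
g2000 = exact_gap(operator_extrapolation(2000, lam))
```

<p>By the bound above the averaged gap should be of order 1/(2λN), i.e. well below 0.01 here.</p>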
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>Three new extragradient-type algorithms for solving monotone variational inequalities in a Banach
space are studied in the paper.</p>
      <p>
        The first two algorithms are natural modifications of Tseng's method [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and “Extrapolation from
the Past” method [
        <xref ref-type="bibr" rid="ref18">18</xref>
] for problems in Banach spaces using the generalized Alber projection. An iteration of
each of these algorithms is cheaper than an iteration of the extragradient method: the first
algorithm uses fewer projections, the second fewer operator evaluations.
      </p>
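<p>The difference in per-iteration cost can be seen in a small Euclidean sketch (an illustration added here under the simplifying assumption of a Hilbert setting, where the generalized projection is the metric projection): Tseng's method performs two operator evaluations and one projection per iteration, while "Extrapolation from the Past" performs one operator evaluation and two projections.</p>

```python
# Counting per-iteration costs of the two methods on a toy monotone operator
# A(x) = Mx (M skew-symmetric, L = 1) with C the unit ball.
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])
calls = {"A": 0, "P": 0}

def A(x):
    calls["A"] += 1
    return M @ x

def P(x):
    # metric projection onto the unit ball
    calls["P"] += 1
    n = np.linalg.norm(x)
    return x if n <= 1.0 else x / n

def tseng_step(x, lam=0.3):
    # y_n = P_C(x_n - lam*A(x_n));  x_{n+1} = y_n - lam*(A(y_n) - A(x_n))
    Ax = A(x)
    y = P(x - lam * Ax)
    return y - lam * (A(y) - Ax)       # 2 operator evaluations, 1 projection

def eftp_step(x, y, lam=0.3):
    # x_{n+1} = P_C(x_n - lam*A(y_n));  y_{n+1} = P_C(x_{n+1} - lam*A(y_n))
    Ay = A(y)                          # 1 operator evaluation, reused twice
    x_new = P(x - lam * Ay)            # 2 projections
    y_new = P(x_new - lam * Ay)
    return x_new, y_new

x = np.array([1.0, 0.0])
for _ in range(10):
    x = tseng_step(x)
tseng_calls = dict(calls)

calls["A"] = calls["P"] = 0
x = y = np.array([1.0, 0.0])
for _ in range(10):
    x, y = eftp_step(x, y)
print(tseng_calls, calls)              # -> {'A': 20, 'P': 10} {'A': 10, 'P': 20}
```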
      <p>
        The third algorithm, called the method of operator extrapolation, is a variant of the Malitsky–Tam
forward-reflected-backward algorithm [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. The generalized Alber projection is used instead of the metric
projection onto the feasible set. An attractive feature of the algorithm is that only one computation of the
operator value and of the generalized projection onto the feasible set is required at each iteration. For variational
inequalities with monotone Lipschitz operators acting in a 2-uniformly convex and uniformly smooth
Banach space, O(1/N) complexity bounds are proved in terms of the gap function.
      </p>
      <p>Let us point out two topical questions related to the present study. First, fast and stable algorithms
for computing the generalized Alber projection for a wide range of sets are needed in order to apply the
algorithms efficiently to nonlinear problems in Banach spaces. Second, all results were obtained for the class of
2-uniformly convex and uniformly smooth Banach spaces, which does not contain the spaces \(L_p\) and \(W_p^m\)
(\(2 &lt; p &lt; \infty\)), which are important for applications. It is highly desirable to remove this limitation.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>This work was supported by the Ministry of Education and Science of Ukraine (project
“Mathematical modeling and optimization of dynamic systems for defense, medicine and ecology”,
state registration number 0119U100337) and the National Academy of Sciences of Ukraine (project
“New methods of research of correctness and solution search for discrete optimization problems,
variational inequalities and their applications”, state registration number 0119U101608).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their Applications, Society for Industrial and Applied Mathematics, Philadelphia, 2000.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] A. Nagurney, Network Economics: A Variational Inequality Approach, Kluwer Academic Publishers, Dordrecht, 1999.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Y. Alber, I. Ryazantseva, Nonlinear Ill Posed Problems of Monotone Type, Springer, Dordrecht, 2006.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] F. Facchinei, J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research, vol. I, Springer, New York, 2003.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] S. I. Lyashko, D. A. Klyushin, D. A. Nomirovsky, V. V. Semenov, Identification of age-structured contamination sources in ground water, in: R. Boucekkine, N. Hritonenko, Y. Yatsenko (Eds.), Optimal Control of Age-Structured Populations in Economy, Demography, and the Environment, Routledge, London-New York, 2013, pp. 277-292.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Nemirovski, Prox-method with rate of convergence O(1/T) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems, SIAM J. Optim. 15 (2004) 229-251. doi:10.1137/S1052623403425629.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] G. Gidel, H. Berard, P. Vincent, S. Lacoste-Julien, A Variational Inequality Perspective on Generative Adversarial Networks, arXiv preprint arXiv:1802.10551 (2018).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] M. Liu, W. Zhang, Y. Mroueh, X. Cui, J. Ross, T. Yang, P. Das, A decentralized parallel algorithm for training generative adversarial nets, arXiv preprint arXiv:1910.12999 (2020).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] G. M. Korpelevich, An extragradient method for finding saddle points and for other problems, Matecon 12 (1976) 747-756.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Yu. Nesterov, Dual extrapolation and its applications to solving variational inequalities and related problems, Mathematical Programming 109 (2007) 319-344.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] F. S. Stonyakin, E. A. Vorontsova, M. S. Alkousa, New Version of Mirror Prox for Variational Inequalities with Adaptation to Inexactness, in: M. Jacimovic, M. Khachay, V. Malkova, M. Posypkin (Eds.), Optimization and Applications. OPTIMA 2019, volume 1145 of Communications in Computer and Information Science, Springer, Cham, 2020, pp. 427-442. doi:10.1007/978-3-030-38603-0_31.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] F. Bach, K. Y. Levy, A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise, arXiv preprint arXiv:1902.01637 (2019).</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] S. V. Denisov, D. A. Nomirovskii, B. V. Rublyov, V. V. Semenov, Convergence of Extragradient Algorithm with Monotone Step Size Strategy for Variational Inequalities and Operator Equations, Journal of Automation and Information Sciences 51 (2019) 12-24. doi:10.1615/JAutomatInfScien.v51.i6.20.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM Journal on Control and Optimization 38 (2000) 431-446.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] V. V. Semenov, A Strongly Convergent Splitting Method for Systems of Operator Inclusions with Monotone Operators, Journal of Automation and Information Sciences 46 (5) (2014) 45-56. doi:10.1615/JAutomatInfScien.v46.i5.40.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] V. V. Semenov, Modified Extragradient Method with Bregman Divergence for Variational Inequalities, Journal of Automation and Information Sciences 50 (8) (2018) 26-37. doi:10.1615/JAutomatInfScien.v50.i8.30.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] L. D. Popov, A modification of the Arrow-Hurwicz method for search of saddle points, Mathematical Notes of the Academy of Sciences of the USSR 28 (1980) 845-848.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] D. A. Nomirovskii, B. V. Rublyov, V. V. Semenov, Convergence of Two-Stage Method with Bregman Divergence for Solving Variational Inequalities, Cybernetics and Systems Analysis 55 (2019) 359-368. doi:10.1007/s10559-019-00142-7.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] L. Chabak, V. Semenov, Y. Vedel, A New Non-Euclidean Proximal Method for Equilibrium Problems, in: O. Chertov et al. (Eds.), Recent Developments in Data Science and Intelligent Analysis of Information, volume 836 of Advances in Intelligent Systems and Computing, Springer, Cham, 2019, pp. 50-58. doi:10.1007/978-3-319-97885-7_6.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] Y. Malitsky, M. K. Tam, A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity, SIAM Journal on Optimization 30 (2020) 1451-1472. doi:10.1137/18M1207260.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] E. R. Csetnek, Y. Malitsky, M. K. Tam, Shadow Douglas-Rachford Splitting for Monotone Inclusions, Appl. Math. Optim. 80 (2019) 665-678. doi:10.1007/s00245-019-09597-8.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] H. Iiduka, W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a Banach space, Journal of Mathematical Analysis and Applications 339 (1) (2008) 668-679. doi:10.1016/j.jmaa.2007.07.019.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] Y. I. Alber, Metric and generalized projection operators in Banach spaces: properties and applications, in: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, vol. 178, Dekker, New York, 1996, pp. 15-50.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] Y. Shehu, Single projection algorithm for variational inequalities in Banach spaces with application to contact problem, Acta Math. Sci. 40 (2020) 1045-1063. doi:10.1007/s10473-020-0412-2.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] K. Aoyama, F. Kohsaka, Strongly relatively nonexpansive sequences generated by firmly nonexpansive-like mappings, Fixed Point Theory Appl. 95 (2014). doi:10.1186/1687-1812-2014-95.</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] J. Diestel, Geometry of Banach Spaces, Springer-Verlag, Berlin-Heidelberg, 1975.</mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>[27] B. Beauzamy, Introduction to Banach Spaces and Their Geometry, North-Holland, Amsterdam, 1985.</mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>[28] I. Cioranescu, Geometry of Banach Spaces, Duality Mappings and Nonlinear Problems, Kluwer Academic, Dordrecht, 1990.</mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>[29] H. K. Xu, Inequalities in Banach spaces with applications, Nonlinear Anal. 16 (12) (1991) 1127-1138.</mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>[30] Z. B. Xu, G. F. Roach, Characteristic inequalities of uniformly convex and uniformly smooth Banach spaces, Journal of Mathematical Analysis and Applications 157 (1) (1991) 189-210.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>