<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>SIAM Journal on Optimization 30 (2020) 1451-1472. doi:10.1137/18M1207260.
[15] E. R. Csetnek</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="hindawi-id">192679</article-id>
      <title-group>
        <article-title>Convergence of adaptive operator extrapolation method for operator inclusions in Banach spaces</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Denysov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vladimir Semenov</string-name>
          <email>semenov.volodya@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13 Volodymyrska Street, Kyiv, 01161</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>3106</volume>
      <abstract>
        <p>A novel iterative splitting algorithm for solving an operator inclusion problem is considered in a real Banach space setting. The operator is the sum of a multivalued maximal monotone operator and a monotone Lipschitz continuous operator. The proposed algorithm is an adaptive modification of the "forward-reflected-backward algorithm" [14]. The step size update rule does not require knowledge of the Lipschitz constant of the operator. An advantage of the proposed algorithm is that each iteration requires a single calculation of the resolvent of the maximal monotone operator and a single evaluation of the monotone Lipschitz continuous operator. Weak convergence of the method is proved for operator inclusions in 2-uniformly convex and uniformly smooth real Banach spaces.</p>
      </abstract>
      <kwd-group>
        <kwd>maximal monotone operator</kwd>
        <kwd>operator inclusion</kwd>
        <kwd>variational inequality</kwd>
        <kwd>splitting algorithm</kwd>
        <kwd>convergence</kwd>
        <kwd>adaptability</kwd>
        <kwd>2-uniformly convex Banach space</kwd>
        <kwd>uniformly smooth Banach space</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction </title>
      <p>
        Research and development of algorithms for operator inclusions (1) and related problems is a rapidly
growing field of modern applied nonlinear analysis [1-4, 7-25].
      </p>
      <p>
        The most well-known and popular iterative method for solving monotone operator inclusions (1) in
      </p>
      <sec id="sec-1-1">
        <title>Hilbert space is the “forward-backward algorithm” (FBA) [1, 11, 12]</title>
        <p>xn1  JA  xn   Bxn  ,
where JA   I   A1 is the operator resolvent, A : H  2H ,   0 .</p>
        <p>Note that the FBA scheme includes the well-known gradient method and the proximal method as special
cases [1]. For inverse strongly monotone (cocoercive) operators $B: H \to H$ the FBA converges
weakly [1]. However, the FBA may diverge for monotone operators $B$ that are merely Lipschitz continuous.</p>
        <p>The condition of the inverse strong monotonicity of the operator B is a rather strong assumption.</p>
      </sec>
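<p>To make the FBA scheme concrete, the following minimal Python sketch (Python is also the language of Section 7) runs the iteration in the Hilbert space $H = \mathbb{R}^n$. The concrete choices here are ours, not from the paper: $A$ is the subdifferential of $\alpha\|\cdot\|_1$, so the resolvent $J_{\lambda A}$ is soft-thresholding, and $Bx = x - b$ is the cocoercive gradient of $\tfrac{1}{2}\|x - b\|^2$.</p>

```python
import numpy as np

def soft_threshold(x, t):
    # Resolvent J_{tA} for A = subdifferential of ||.||_1 (the prox of t*||.||_1)
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def forward_backward(b, alpha, lam=0.5, iters=200):
    # FBA: x_{n+1} = J_{lam A}(x_n - lam * B x_n), with B x = x - b
    # (1-Lipschitz and cocoercive) and A the subdifferential of alpha*||.||_1.
    x = np.zeros_like(b)
    for _ in range(iters):
        x = soft_threshold(x - lam * (x - b), lam * alpha)
    return x

b = np.array([3.0, -0.5, 1.2])
x_star = forward_backward(b, alpha=1.0)
```

<p>For this illustrative choice the limit is the closed-form minimizer of $\tfrac{1}{2}\|x - b\|^2 + \|x\|_1$, namely the soft-thresholded vector $b$.</p>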
      <sec id="sec-1-2">
        <title>To weaken it, Tseng [13] proposed the following modification of the FBA:</title>
        <p> yn  JA  xn   Bxn ,

xn1  yn    Byn  Bxn ,
where B : H  H – monotone and Lipschitz continuous operator with constant L  0 and  0, L1 .
A main limitation of Tseng method is their two calls of B per iteration. Evolution of this idea resulted
in the so-called “forward-reflected-backward algorithm” [14]:
and related method [15]:

xn1  JA  xn   Bxn    Bxn  Bxn1  ,    0,

1 </p>
        <p> ,
2L 

xn1  JA  xn   Bxn     Bxn  Bxn1  ,    0,

1 </p>
        <p> .
3L </p>
        <p>A special case of this algorithm is the "optimistic gradient descent ascent" algorithm, which is
popular among specialists in the field of machine learning [14, 15].</p>
        <p>These schemes are naturally called operator extrapolation schemes. Note that all three of the above
algorithms require knowledge of the operator's Lipschitz constant, which is often unknown or
difficult to estimate. To overcome this difficulty, adaptive rules for updating the parameter $\lambda > 0$ at
each step have been proposed [16].</p>
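<p>A minimal numerical sketch of the forward-reflected-backward iteration, under assumptions of ours (not from the paper): the Hilbert space $\mathbb{R}^2$, $A = 0$ (so the resolvent is the identity), and the skew, 1-Lipschitz monotone operator $Bx = (x_2, -x_1)$, for which the plain FBA diverges:</p>

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])  # skew matrix: monotone, L = 1, not cocoercive
B = lambda x: R @ x

def frb(x0, lam=0.4, iters=300):
    # Forward-reflected-backward with A = 0 and fixed step lam (below 1/(2L)):
    # x_{n+1} = x_n - lam*B x_n - lam*(B x_n - B x_{n-1})
    x_prev, x = x0.copy(), x0.copy()
    for _ in range(iters):
        x_next = x - lam * B(x) - lam * (B(x) - B(x_prev))
        x_prev, x = x, x_next
    return x

x = frb(np.array([1.0, 1.0]))  # the unique zero of B is the origin
```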
        <p>Some progress has been achieved recently in the study of splitting algorithms for operator inclusions
in Banach spaces [2, 17-25]. This progress is mostly related to careful use of theoretical results and
concepts from the geometry of Banach spaces [2, 26-29]. The book [2] contains extensive material on this
topic.</p>
        <p>
          In the article [17], for the solution of inclusions (1) in a 2-uniformly convex and uniformly smooth
Banach space, the following algorithm is proposed:
$x_{n+1} = J_{\lambda A}(J^{-1}(J x_n - \lambda B x_n))$, (2)
where $J_{\lambda A} = (J + \lambda A)^{-1} J$ is the resolvent of the operator $A: E \to 2^{E^*}$, $\lambda > 0$, and $J$ is the normalized duality
mapping from $E$ to $E^*$. For inverse strongly monotone (cocoercive) operators $B: E \to E^*$ method (2)
converges weakly [17]. Recently, Shehu [18] extended Tseng's result to 2-uniformly convex and
uniformly smooth Banach spaces. He proposed the following weakly convergent process for
approximating a solution of inclusion (1):
$y_n = J_{\lambda_n A}(J^{-1}(J x_n - \lambda_n B x_n))$,
$x_{n+1} = J^{-1}(J y_n - \lambda_n (B y_n - B x_n))$, (3)
where $\lambda_n > 0$ can be set using knowledge of the Lipschitz constant of the operator $B$ or computed by a
line search. Note that two values of the operator $B$ must be computed at each iteration step.
        </p>
        <p>
          In this article we study a new splitting algorithm for solving operator inclusion (1) in a Banach space.
The algorithm is an adaptive modification of the well-known Malitsky-Tam
"forward-reflected-backward algorithm" [14], in which the step size update rule does not require knowledge
of the Lipschitz constant of the operator $B$. Its advantage is a single computation of the resolvent of the
maximal monotone operator $A$ and of the value of the operator $B$ at each iteration step. A weak convergence
theorem for the method is proved for operator inclusions in 2-uniformly convex and uniformly smooth Banach spaces.
        </p>
        <p>The structure of the article is the following. Section 2 provides the necessary information from the
geometry of Banach spaces and the theory of monotone operators. The proposed adaptive operator
extrapolation algorithm is described in Section 3. The proof of weak convergence is presented in Section 4.
Theoretical applications to nonlinear operator equations, convex minimization problems and variational
inequalities are given in Sections 5 and 6. Some results of numerical experiments are presented in
Section 7.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries </title>
      <p>To formulate and prove our convergence results we need some concepts and
important facts about the geometry of real Banach spaces [26-33].</p>
      <p>Consider a real Banach space $E$ with norm $\|\cdot\|$. The space $E^*$ is the dual space of $E$. Let $\langle x^*, x \rangle$ denote the
value of a bounded linear functional $x^* \in E^*$ at $x \in E$. Let $\|\cdot\|_*$ be the dual norm on the
space $E^*$.</p>
      <p>Let $S_E = \{x \in E : \|x\| = 1\}$. The space $E$ is strictly convex if for all $x, y \in S_E$, $x \ne y$, we have
$\left\|\tfrac{x+y}{2}\right\| &lt; 1$ [26]. The modulus of convexity of a Banach space $E$ is defined as [27]
$\delta_E(\varepsilon) = \inf\left\{1 - \left\|\tfrac{x+y}{2}\right\| : x, y \in S_E, \|x - y\| \ge \varepsilon\right\}$, $\varepsilon \in [0, 2]$.</p>
      <p>The space $E$ is uniformly convex if $\delta_E(\varepsilon) > 0$ for all $\varepsilon \in (0, 2]$ [27]. The space $E$ is 2-uniformly convex if
there exists $c > 0$ such that $\delta_E(\varepsilon) \ge c \varepsilon^2$ for all $\varepsilon \in [0, 2]$ [27]. It is known that 2-uniform convexity implies
uniform convexity, and the uniform convexity of a real Banach space implies its reflexivity [26].</p>
      <p>A space $E$ is smooth if the limit
$\lim_{t \to 0} \frac{\|x + ty\| - \|x\|}{t}$ (4)
exists for all $x, y \in S_E$ [26]. A space $E$ is uniformly smooth if the limit (4) exists uniformly over
$x, y \in S_E$ [26].</p>
      <p>Convexity-type and smoothness-type properties of the spaces $E$ and $E^*$ are in a duality relationship [26, 27]: the dual
space $E^*$ is strictly convex $\Rightarrow$ the space $E$ is smooth [26]; the dual space $E^*$ is smooth $\Rightarrow$ the space $E$ is strictly
convex [26]; the space $E$ is uniformly convex $\Rightarrow$ the dual space $E^*$ is uniformly smooth [26]; the space $E$ is
uniformly smooth $\Rightarrow$ the dual space $E^*$ is uniformly convex [27]. We can reverse the first two implications
if the space $E$ is reflexive [26].</p>
      <p>Widely used function spaces $L_p$ ($1 &lt; p \le 2$) and real Hilbert spaces are uniformly smooth (the spaces</p>
      <sec id="sec-2-1">
        <title>$L_p$ are uniformly smooth for $p \in (1, \infty)$) and 2-uniformly convex [26-29].</title>
        <p>
Also recall [1, 28, 31, 32] that a multivalued operator $A: E \to 2^{E^*}$ is called monotone if
$\langle u - v, x - y \rangle \ge 0$ for all $x, y \in E$, $u \in Ax$, $v \in Ay$.</p>
      </sec>
      <sec id="sec-2-2">
        <title>A monotone operator $A: E \to 2^{E^*}$</title>
        <p>B : E  2E
we have that   A    B implies   A    B , where</p>
        <p>is called maximal monotone if for any monotone operator
 A   x,u  E  E : u  Ax
is a graph of A [1, 28, 31].</p>
        <p>
Lemma 1 ([1, 28]). Let $A: E \to 2^{E^*}$ be a maximal monotone operator, $x \in E$, $u \in E^*$. If
$\langle u - v, x - y \rangle \ge 0$ for all $(y, v) \in \Gamma(A)$, then $u \in Ax$.</p>
        <p>The operator $J: E \to 2^{E^*}$ defined by
$Jx = \{x^* \in E^* : \langle x^*, x \rangle = \|x\|^2 = \|x^*\|_*^2\}$
is called the normalized duality mapping [30]. We also use the following facts [28, 30, 32, 33]: if a Banach space
$E$ is smooth, then the operator $J$ is single-valued [30]; if a real Banach space $E$ is strictly convex, then the
operator $J$ is one-to-one and strictly monotone [28]; if a Banach space $E$ is reflexive, then the operator $J$
is onto [28]; if a Banach space $E$ is uniformly smooth, then the operator $J$ is
uniformly continuous on all bounded subsets of $E$ [30].</p>
        <p>Remark 1. For a Hilbert space, $J = I$. The explicit form of the operator $J$ for the Banach spaces $\ell_p$, $L_p$, and
$W_p^m$ ($p \in (1, \infty)$) is provided in [28, 32].</p>
        <p>Consider a reflexive, strictly convex and smooth space $E$ [26]. The maximal monotonicity of an operator
$A: E \to 2^{E^*}$ is equivalent to the equality
$R(J + \lambda A) = E^*$
for all $\lambda > 0$ [31]. For a maximal monotone operator $A: E \to 2^{E^*}$ and $\lambda > 0$ the resolvent $J_\lambda^A : E \to E$ is
defined as follows [31]:
$J_\lambda^A x = (J + \lambda A)^{-1} J x$, $x \in E$,
where $J$ is the normalized duality mapping from $E$ to $E^*$. It is known that
$A^{-1} 0 = F(J_\lambda^A) = \{x \in E : J_\lambda^A x = x\}$ for all $\lambda > 0$,
and that the set $A^{-1} 0$ is closed and convex [1, 31].</p>
        <p>Consider a smooth Banach space $E$ [26]. Yakov Alber introduced the convenient real-valued
functional [32]
$D(x, y) = \|x\|^2 - 2 \langle Jy, x \rangle + \|y\|^2$, $x, y \in E$.</p>
        <sec id="sec-2-2-1">
          <title>Definition of D implies a useful 3-point identity:</title>
          <p>D  x, y  D  x, z   D  z, y  2 Jz  Jy, x  z
For strictly convex Banach space E and x, y  E [32]:
x, y, z  E .</p>
          <p>D  x, y  0  x  y .</p>
          <p>Lemma 2 ([31]). Let $E$ be a uniformly convex and uniformly smooth Banach space, and let $(x_n)$, $(y_n)$ be
bounded sequences of elements of $E$. Then
$\|x_n - y_n\| \to 0 \Leftrightarrow \|J x_n - J y_n\|_* \to 0 \Leftrightarrow D(x_n, y_n) \to 0$.</p>
          <p>Lemma 3 ([24, 25]). Let $E$ be a smooth and 2-uniformly convex Banach space. Then for some $\mu \ge 1$
we have
$D(x, y) \ge \frac{1}{\mu} \|x - y\|^2$, $x, y \in E$.</p>
          <p>Remark 2. For the Banach spaces $\ell_p$, $L_p$ and $W_p^m$ ($1 &lt; p \le 2$) the constant $\mu$ is computed explicitly
in [29]. For a Hilbert space the inequality of Lemma 3 becomes an identity.</p>
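<p>A quick numerical sanity check of Alber's functional (our sketch, in the Hilbert case $J = I$, where $D(x, y) = \|x - y\|^2$ and the three-point identity can be verified directly):</p>

```python
import numpy as np

def D(x, y):
    # Alber's functional in a Hilbert space (J = I):
    # D(x, y) = ||x||^2 - 2*(y @ x) + ||y||^2 = ||x - y||^2
    return x @ x - 2.0 * (y @ x) + y @ y

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 4))
lhs = D(x, y)
# three-point identity: D(x, y) = D(x, z) + D(z, y) + 2*(Jz - Jy) @ (x - z)
rhs = D(x, z) + D(z, y) + 2.0 * ((z - y) @ (x - z))
```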
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Algorithm </title>
      <p>Let a real Banach space $E$ be uniformly smooth and 2-uniformly convex [27]. Let $A$ be a
multivalued operator acting from $E$ into $2^{E^*}$, and let $B$ be an operator acting from $E$ into $E^*$.</p>
      <sec id="sec-3-1">
        <title>Consider the next operator inclusion problem:</title>
        <p>
          find $x \in E$ such that $0 \in (A + B)x$. (5)
        </p>
        <sec id="sec-3-1-1">
          <title>Let us denote by $(A + B)^{-1} 0$ the set of solutions of this operator inclusion.</title>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Suppose that the next assumptions hold [20]:</title>
        <p>
$A: E \to 2^{E^*}$ is a maximal monotone operator;
$B: E \to E^*$ is a monotone and Lipschitz continuous operator with Lipschitz constant $L > 0$;
the set $(A + B)^{-1} 0$ is nonempty.</p>
        <p>
          Operator inclusion (5) can be formulated as the problem of finding a fixed point:
        </p>
        <p>
          find $x \in E$ such that $x = J_\lambda^A (J^{-1}(Jx - \lambda Bx))$, (6)
where $\lambda > 0$. Formulation (6) is useful, as it leads to a well-known simple algorithmic idea. The calculation
scheme
        </p>
        <p>
          $x_{n+1} = J_\lambda^A (J^{-1}(J x_n - \lambda B x_n))$
was studied in [17] for inverse strongly monotone operators $B: E \to E^*$. However, this scheme
generally does not converge for Lipschitz continuous monotone operators. Let us use the idea of the work
[14] and consider the modified scheme
$x_{n+1} = J_\lambda^A (J^{-1}(J x_n - \lambda B x_n - \lambda (B x_n - B x_{n-1})))$
with the extrapolation term $\lambda (B x_n - B x_{n-1})$.
        </p>
        <p>We use an update rule for $\lambda > 0$ similar to the one from [16] to avoid explicit use of the Lipschitz
constant of the operator $B$.</p>
        <p>We assume that the constant $\mu \ge 1$ from Lemma 3 is known.</p>
      </sec>
      <sec id="sec-3-3">
        <title>Algorithm 1.</title>
        <p>Choose some $x_0 \in E$, $x_1 \in E$, $\tau \in (0, (2\mu)^{-1})$ and $\lambda_0, \lambda_1 > 0$. Set $n = 1$.</p>
        <p>1. Compute
$x_{n+1} = J_{\lambda_n}^A (J^{-1}(J x_n - \lambda_n B x_n - \lambda_{n-1} (B x_n - B x_{n-1})))$.</p>
        <p>2. If $x_{n+1} = x_n = x_{n-1}$, then $x_n \in (A + B)^{-1} 0$ and stop; else go to 3.</p>
        <p>3. Compute
$\lambda_{n+1} = \min\left\{\lambda_n, \tau \frac{\|x_{n+1} - x_n\|}{\|B x_{n+1} - B x_n\|_*}\right\}$ if $B x_{n+1} \ne B x_n$, and $\lambda_{n+1} = \lambda_n$ otherwise.</p>
        <p>Set $n := n + 1$ and return to 1.</p>
        <p>The step size sequence $(\lambda_n)$ generated by the rule in step 3 is non-increasing and bounded from
below by $\min\{\lambda_1, \tau L^{-1}\}$. Hence the limit $\lim_{n \to \infty} \lambda_n = \lambda > 0$ exists.</p>
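<p>A minimal sketch of Algorithm 1 in the Hilbert case ($J = I$, $\mu = 1$, so $\tau \in (0, 0.5)$), with assumptions of ours rather than the paper's: the skew operator $Bx = (x_2, -x_1)$ and $A = N_C$ for the box $C = [-1, 1]^2$, whose resolvent is the componentwise clip. No Lipschitz constant of $B$ is used anywhere.</p>

```python
import numpy as np

R = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = lambda x: R @ x                          # monotone, Lipschitz, skew
resolvent = lambda y: np.clip(y, -1.0, 1.0)  # J_{lam A} = P_C for A = N_C, any lam

def algorithm1(x0, x1, tau=0.45, lam0=1.0, lam1=1.0, iters=500):
    # Adaptive operator extrapolation (Algorithm 1, Hilbert case):
    # x_{n+1} = P_C(x_n - lam_n*B x_n - lam_{n-1}*(B x_n - B x_{n-1}))
    # lam_{n+1} = min(lam_n, tau*||x_{n+1}-x_n|| / ||B x_{n+1}-B x_n||)
    x_prev, x = x0, x1
    lam_prev, lam = lam0, lam1
    for _ in range(iters):
        x_next = resolvent(x - lam * B(x) - lam_prev * (B(x) - B(x_prev)))
        dB = np.linalg.norm(B(x_next) - B(x))
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / dB) if dB > 0 else lam
        x_prev, x = x, x_next
        lam_prev, lam = lam, lam_next
    return x

x = algorithm1(np.array([0.9, 0.7]), np.array([0.8, 0.6]))
# the unique solution of 0 in Bx + N_C(x) is the origin
```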
      </sec>
      <sec id="sec-3-4">
        <title>Let us now prove a convergence result for the proposed Algorithm 1.</title>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Proof of convergence </title>
      <p>The following lemma contains the inequality which is crucial for the proof of weak convergence of the adaptive
operator extrapolation method (Algorithm 1).</p>
      <p>
        Lemma 4. The following inequality holds for the sequence $(x_n)$ generated by the adaptive operator
extrapolation method (Algorithm 1), for any $z \in (A + B)^{-1} 0$:
$D(z, x_{n+1}) + 2\lambda_{n+1} \langle B x_n - B x_{n+1}, x_{n+1} - z \rangle + \frac{\tau \lambda_{n+1}}{\lambda_n} D(x_{n+1}, x_n)$
$\le D(z, x_n) + 2\lambda_n \langle B x_{n-1} - B x_n, x_n - z \rangle + \frac{\tau \lambda_n}{\lambda_{n-1}} D(x_n, x_{n-1}) - \left(1 - \frac{\tau \lambda_n}{\lambda_{n-1}} - \frac{\tau \lambda_{n+1}}{\lambda_n}\right) D(x_{n+1}, x_n)$.
      </p>
      <p>
        Using the rule for recalculating $\lambda_n$, we can estimate the term $2\lambda_{n+1} \langle B x_n - B x_{n-1}, x_n - x_{n+1} \rangle$ from
above:
$2\lambda_{n+1} \langle B x_n - B x_{n-1}, x_n - x_{n+1} \rangle \le 2\lambda_{n+1} \|B x_n - B x_{n-1}\|_* \|x_n - x_{n+1}\| \le \frac{2 \tau \lambda_{n+1}}{\lambda_n} \|x_n - x_{n-1}\| \|x_{n+1} - x_n\|$,
which was required to prove.
      </p>
      <sec id="sec-4-1">
        <title>The next theorem states our main result.</title>
        <p></p>
        <p>Theorem 1. Let a Banach space $E$ be uniformly smooth and 2-uniformly convex, let $A: E \to 2^{E^*}$ be a
multivalued maximal monotone operator, and let $B: E \to E^*$ be a monotone and Lipschitz continuous
operator. Suppose that $J$ (the normalized duality mapping) is sequentially weakly continuous and
$(A + B)^{-1} 0 \ne \emptyset$. Then the sequence $(x_n)$ generated by the adaptive operator
extrapolation method (Algorithm 1) converges weakly to a point $z \in (A + B)^{-1} 0$.</p>
        <p>Proof. Let $z \in (A + B)^{-1} 0$. Denote
$a_n = D(z, x_n) + 2\lambda_n \langle B x_{n-1} - B x_n, x_n - z \rangle + \frac{\tau \lambda_n}{\lambda_{n-1}} D(x_n, x_{n-1})$,
$b_n = \left(1 - \frac{\tau \lambda_n}{\lambda_{n-1}} - \frac{\tau \lambda_{n+1}}{\lambda_n}\right) D(x_{n+1}, x_n)$.</p>
      </sec>
      <sec id="sec-4-2">
        <title>Inequality from Lemma 4 becomes</title>
        <p>an1  an  bn .</p>
        <p>As lnimn  0 exists, we have
1  n1  n 1  2 0,1 when n   .</p>
        <p>
n n1
Let’s show, that an  0 for all big enough n  N . We have
an  D  z, xn   2n1 Bxn1  Bxn , xn  z  n1 D  xn , xn1  </p>
        <p>n
 1

 1

xn  z 2  2n1 Bxn1  Bxn * xn  z  n1 xn1  xn 2 </p>
        <p>n
xn  z 2  2 n1 xn  xn1 xn  z  n1 xn1  xn 2  1
   n1  xn  z 2 .</p>
        <p>n n   n 
We can find such n0  N that
1  n1  0 for all n  n0 ,
 n
so an  0 for all n  n0 .</p>
        <p>Now we can conclude that the limit
$\lim_{n \to \infty} \left( D(z, x_n) + 2\lambda_n \langle B x_{n-1} - B x_n, x_n - z \rangle + \frac{\tau \lambda_n}{\lambda_{n-1}} D(x_n, x_{n-1}) \right)$
exists, the sequence $(x_n)$ is bounded, and $\lim_{n \to \infty} \|x_{n+1} - x_n\| = 0$.</p>
        <p>Let us show that all weak partial limits of the sequence $(x_n)$ belong to the set $(A + B)^{-1} 0$. Consider a
subsequence $(x_{n_k})$ that converges weakly to some point $\bar{z} \in E$, and take an arbitrary $(y, v) \in \Gamma(A + B)$.
From the definition of $x_{n_k + 1}$ and the monotonicity of $A$ it follows that
$\langle \lambda_{n_k}(v - By) - (J x_{n_k} - J x_{n_k+1} - \lambda_{n_k} B x_{n_k} - \lambda_{n_k-1}(B x_{n_k} - B x_{n_k-1})), y - x_{n_k+1} \rangle \ge 0$.</p>
        <p>Then, using the monotonicity of the operator $B$, we obtain the estimate
$\langle v, y - x_{n_k} \rangle \ge \langle v, x_{n_k+1} - x_{n_k} \rangle + \frac{1}{\lambda_{n_k}} \langle J x_{n_k} - J x_{n_k+1}, y - x_{n_k+1} \rangle$
$\quad + \frac{\lambda_{n_k-1}}{\lambda_{n_k}} \langle B x_{n_k} - B x_{n_k-1}, y - x_{n_k+1} \rangle + \langle B x_{n_k+1} - B x_{n_k}, y - x_{n_k+1} \rangle$.</p>
      </sec>
      <sec id="sec-4-3">
        <title>From</title>
        <p>lnim xn  xn1  0 ,
and the Lipschitz continuity of B we have
lim Bxn  Bxn1 *  0 .
n</p>
      </sec>
      <sec id="sec-4-4">
        <title>Due to the uniform continuity on bounded sets of the J [30], we obtain</title>
        <p>lim Jxn  Jxn1 *  0 .
n</p>
      </sec>
      <sec id="sec-4-5">
        <title>Thus</title>
        <p>v, y  z  lim v, y  xnk  0 .</p>
        <p>k
The maximal monotonicity of operator A  B and arbitrariness of  y,v  A  B imply
z  A  B1 0 (Lemma 1).</p>
        <p>We need to prove that the sequence $(x_n)$ converges weakly to the point $\bar{z}$. Let us argue by contradiction.
Let there exist a subsequence $(x_{m_k})$ that converges weakly to $\bar{\bar{z}}$ with $\bar{\bar{z}} \ne \bar{z}$. Obviously
$\bar{\bar{z}} \in (A + B)^{-1} 0$.</p>
      </sec>
      <sec id="sec-4-6">
        <title>So the next limit exists</title>
        <p>lim Jxn , z  z .</p>
        <p>n
Due to the sequential weak continuity of the normalized duality mapping J , we obtain
Jz, z  z  lim Jxnk , z  z  lim Jxmk , z  z  Jz, z  z ,</p>
        <p>k k
so</p>
        <p>Jz  Jz, z  z  0 .</p>
        <p>As a result we get the contradiction – z  z .</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Algorithm variants </title>
      <p>For completeness, let us formulate a modification of the proposed algorithm for the operator
equation $Bx = 0$ (the case $A = 0$).</p>
      <sec id="sec-5-1">
        <title>Algorithm 2.</title>
        <p>Select some points $x_0 \in E$, $x_1 \in E$, $\tau \in (0, 0.5 \mu^{-1})$ and $\lambda_0, \lambda_1 > 0$. Set $n = 1$.</p>
        <p>1. Compute
$x_{n+1} = J^{-1}(J x_n - \lambda_n B x_n - \lambda_{n-1}(B x_n - B x_{n-1}))$.</p>
        <p>2. If $x_{n+1} = x_n = x_{n-1}$, then $x_n \in B^{-1} 0$ and stop; else go to 3.</p>
      </sec>
      <sec id="sec-5-2">
        <title>3. Compute</title>
        <p>$\lambda_{n+1} = \min\left\{\lambda_n, \tau \frac{\|x_{n+1} - x_n\|}{\|B x_{n+1} - B x_n\|_*}\right\}$ if $B x_{n+1} \ne B x_n$,</p>
      </sec>
      <sec id="sec-5-3">
        <title>Set n  n  1 and return to 1.</title>
        <p>xn1  xn , if Bxn1  Bxn ,
Bxn1  Bxn * 
otherwise.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Weak convergence of Algorithm 2 follows from Theorem 1.</title>
        <p>Theorem 2. Let a Banach space $E$ be uniformly smooth and 2-uniformly convex, let $B: E \to E^*$ be a
monotone and Lipschitz continuous operator, and let $B^{-1} 0 \ne \emptyset$. If $J$ (the normalized duality mapping) is
sequentially weakly continuous, then the sequence $(x_n)$ converges weakly to some point $z \in B^{-1} 0$.</p>
        <p>Remark 3. In [17], the following algorithm was proposed to find a zero of an inverse strongly
monotone operator $B: E \to E^*$:</p>
        <p>$x_{n+1} = J^{-1}(J x_n - \lambda_n B x_n)$, $x_1 \in E$,
where $\lambda_n \in [a, b] \subset (0, 2\alpha)$ and $\alpha > 0$ is the constant of inverse strong monotonicity of the operator $B$. This
algorithm generally does not converge for Lipschitz continuous monotone operators, but it converges
weakly in the ergodic sense.</p>
        <p>
          Using Theorem 2, let us consider the problem of minimizing a convex, continuously Frechet
differentiable functional:
$f(x) \to \min$, $x \in E$. (12)
        </p>
        <p>
We can formulate a variant of Algorithm 2 for (12), which is a smooth convex minimization problem.
Choose some $x_0 \in E$, $x_1 \in E$, $\tau \in (0, (2\mu)^{-1})$ and $\lambda_0, \lambda_1 > 0$. Set $n = 1$.
        </p>
        <p>xn1  J 1  Jxn  nf  xn   n1 f  xn   f  xn1 .
2. If xn1  xn  xn1 , then xn arg min f . Else return to 3.
3. Compute
 
min n ,
n1   

n ,</p>
        <p>xn1  xn , if f  xn1   f  xn ,
f  xn1   f  xn  * </p>
        <p>otherwise.</p>
      </sec>
      <sec id="sec-5-5">
        <title>Set n  n  1 and return to 1.</title>
      </sec>
      <sec id="sec-5-6">
        <title>For this variant of the algorithm, from Theorem 2 we get</title>
        <p>Theorem 3. Let a Banach space $E$ be uniformly smooth and 2-uniformly convex, let $f: E \to \mathbb{R}$ be a convex,
continuously Frechet differentiable functional with a Lipschitz continuous derivative, and let $\arg\min f$ be
non-empty. Assume that $J$ is sequentially weakly continuous. Then the sequence $(x_n)$ generated by the
algorithm above converges weakly to a point $z \in \arg\min f$.</p>
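<p>A sketch of the minimization variant in the Hilbert case ($J = I$), on a hypothetical test problem of ours: $f(x) = \tfrac{1}{2} x^{\top} Q x - b^{\top} x$ with positive definite $Q$, whose minimizer is $x^* = Q^{-1} b$.</p>

```python
import numpy as np

Q = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 1.0])
grad = lambda x: Q @ x - b   # monotone, Lipschitz continuous gradient

def adaptive_extrapolated_gradient(x0, x1, tau=0.45, lam=0.2, iters=1000):
    # x_{n+1} = x_n - lam_n*grad f(x_n) - lam_{n-1}*(grad f(x_n) - grad f(x_{n-1}))
    # with the same adaptive step rule as in Algorithm 1 (no Lipschitz constant used)
    x_prev, x = x0, x1
    lam_prev = lam
    for _ in range(iters):
        x_next = x - lam * grad(x) - lam_prev * (grad(x) - grad(x_prev))
        dg = np.linalg.norm(grad(x_next) - grad(x))
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / dg) if dg > 0 else lam
        x_prev, x = x, x_next
        lam_prev, lam = lam, lam_next
    return x

x = adaptive_extrapolated_gradient(np.array([0.0, 0.0]), np.array([0.1, 0.0]))
# x approaches the minimizer x* = (1, 1)
```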
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Application to variational inequalities </title>
      <p>
        Consider a real Hilbert space $H$. Let $C$ be a non-empty, convex and closed subset of the space $H$, and let
$B: H \to H$ be a monotone and Lipschitz continuous operator. Consider the following variational inequality
problem:
find $x \in C$ such that $\langle Bx, y - x \rangle \ge 0$ for all $y \in C$. (13)
      </p>
      <p>
        Let $VI(B, C)$ be the solution set of problem (13). Variational inequality (13) is equivalent to the
operator inclusion [1]
      </p>
      <p>find x  H such that 0 A  B x ,
where A  NC is a normal cone for convex and closed set C , i.e.</p>
      <sec id="sec-6-1">
        <title>It is known that</title>
        <p>w  H :  w, y  x  0 y  C, x  C,
NC x  
,</p>
        <p>JA   I   A1   I   NC 1  PC ,
where PC is projection operator onto the set C [1].</p>
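<p>A small illustration (our example): for a box the resolvent of the normal cone operator is the componentwise clip, and the projection satisfies the variational characterization $\langle y - P_C y, z - P_C y \rangle \le 0$ for all $z \in C$:</p>

```python
import numpy as np

# For A = N_C, the resolvent J_{lam A} = (I + lam*N_C)^{-1} equals the metric
# projection P_C for every lam. Sketch for the box C = [lo, hi]^3.
lo, hi = -1.0, 2.0
P_C = lambda y: np.clip(y, lo, hi)

y = np.array([3.0, -4.0, 0.5])
p = P_C(y)
# variational characterization: (y - p) @ (z - p) is nonpositive for every z in C
rng = np.random.default_rng(1)
zs = rng.uniform(lo, hi, size=(100, 3))
max_inner = max(float((y - p) @ (z - p)) for z in zs)
```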
        <p>
          For the variational inequality problem (13), the adaptive operator extrapolation method (Algorithm 1) takes
the following form:
        </p>
      </sec>
      <sec id="sec-6-2">
        <title>Algorithm 5.</title>
        <p>Choose some x1  H , x2  H ,   0, 0.5 and 1, 2  0 . Set n  2 .
generated by Algorithm 5 converges weakly to a point z VI  B,C  .</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Numerical experiments </title>
      <p>The numerical experiments are performed in Python 3.8.5 with NumPy 1.19 on a 64-bit PC with an
Intel Core i7-1065G7 (1.3-3.9 GHz) and 16 GB RAM. As the test example we consider a toy variational
inequality with a pseudo-monotone operator.</p>
      <sec id="sec-7-1">
        <title>Example. Let</title>
        <p>$C = \{x \in [-5, 5]^3 : x_1 + x_2 + x_3 = 0\} \subset \mathbb{R}^3$,
and let the operator $B: \mathbb{R}^3 \to \mathbb{R}^3$ be defined as
$Bx = \left(e^{-\|x\|^2} + 0.2\right) \begin{pmatrix} 2 &amp; 0 &amp; 2 \\ 0 &amp; 3 &amp; 0 \\ 2 &amp; 0 &amp; 4 \end{pmatrix} x$.</p>
        <p>The variational inequality problem of finding $x \in C$ such that $\langle Bx, y - x \rangle \ge 0$ for all $y \in C$ has the unique solution
$x^* = (0, 0, 0)$. Also, the Lipschitz constant for the operator $B$ is known: $L \approx 10.136$. We compare adaptive
and non-adaptive (marked as "lip" on the figures) variants of the "Extrapolation from the Past" algorithm [9]
and of Algorithm 5. For the stopping criterion and error estimation we use the Euclidean distance to the known solution,
$D_n = \|x_n - x^*\|$. For this example, we use $x_0 = (4, 3, 5)$ as the starting point, $\lambda = 0.9(\sqrt{2} - 1)/L$ for the
non-adaptive version of "Extrapolation from the Past" and $\lambda = 0.9/(2L)$ for the non-adaptive version of
Algorithm 5. Also, we use $\tau = 0.3$ and $\tau = 0.45$ (nearly maximal feasible values) for the adaptive versions
of "Extrapolation from the Past" and Algorithm 5, respectively.</p>
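<p>One implementation detail not spelled out above is the projection onto the feasible set $C$, an intersection of a box and a hyperplane. A sketch using Dykstra's alternating projection algorithm (our assumed approach, not necessarily the one used for the reported experiments):</p>

```python
import numpy as np

P_box = lambda y: np.clip(y, -5.0, 5.0)   # projection onto [-5, 5]^3
P_hyp = lambda y: y - np.mean(y)          # projection onto {x : x1+x2+x3 = 0}

def project_C(y, iters=1000):
    # Dykstra's algorithm for the projection onto the intersection of two
    # closed convex sets (box and hyperplane); p, q are the correction terms.
    x = np.asarray(y, dtype=float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        u = P_box(x + p)
        p = x + p - u
        x = P_hyp(u + q)
        q = u + q - x
    return x

x = project_C(np.array([9.0, 1.0, -1.0]))
```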
      </sec>
      <sec id="sec-7-2">
        <title>Time measurements below are got by averaging 100 runs for each algorithm.</title>
        <p>Table 1.
Time to reach desired error $D_n$, in seconds, for error thresholds $1 \cdot 10^{-10}$, $1 \cdot 10^{-13}$ and $1 \cdot 10^{-16}$
(Extra Past (lip): 0.0308, ...).</p>
      </sec>
      <sec id="sec-7-3">
        <title>As we see from Table 1, for this problem Algorithm 5 outperforms other variants.</title>
      </sec>
      <sec id="sec-7-4">
        <title>On the figure below we can see convergence behavior.</title>
        <p>As can be seen, both adaptive algorithms behave very similarly. But it should be noted that
Algorithm 5 performs only one projection per iteration instead of two for the "Extrapolation from the Past"
algorithm [9]:
$y_n = P_C(x_n - \lambda_n B y_{n-1})$,
$x_{n+1} = P_C(x_n - \lambda_n B y_n)$.</p>
      </sec>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusions </title>
      <p>In this paper a new iterative splitting algorithm for solving an operator inclusion with the sum of a
maximal monotone operator and a monotone Lipschitz continuous operator in a real Banach space is
investigated.</p>
      <p>Algorithm 1 is an improvement of the well-known “forward-reflected-backward algorithm” of
Malitsky–Tam [14] with adaptive step size, where the step update rule does not require a priori
knowledge of the Lipschitz constant of operator B [16].</p>
      <p>The algorithm advantage is a single computation of the resolvent of the maximal monotone operator
A and the value of the monotone Lipschitz continuous operator B at the iteration step.</p>
      <p>A weak convergence theorem for the method is proved for operator inclusions in Banach spaces with
2-uniform convexity and uniform smoothness [27]. Theoretical applications to operator equations, convex
minimization problems, and variational inequalities are presented.</p>
      <p>An interesting challenge for the future is the development of a strongly convergent modification for</p>
      <sec id="sec-8-1">
        <title>Algorithm 1.</title>
        <p>In connection with this study we point out two topical issues. First, all results are obtained for
the class of 2-uniformly convex and uniformly smooth real Banach spaces [27], which does not contain
the spaces $L_p$ and $W_p^m$ ($2 &lt; p &lt; \infty$) that are important for applications. It is highly desirable to get rid of this
limitation. Second, fast and robust algorithms for computing the resolvent for a wide range of maximal
monotone operators are needed to effectively apply such algorithms to nonlinear problems in Banach
spaces.</p>
        <p>An interesting question is the study of the behavior of Algorithm 1 in the situation $A \equiv 0$, namely,
the question of the asymptotic behavior of $\|B x_n\|_*$. Note that an estimate $\|B x_n\|_* = O(1/n)$ is theoretically
possible.</p>
      </sec>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgements </title>
      <p>This work was supported by the MES of Ukraine (project "Computational algorithms and
optimization for artificial intelligence, medicine and defense", 0122U002026).</p>
    </sec>
    <sec id="sec-10">
      <title>References </title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Bauschke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Combettes</surname>
          </string-name>
          ,
          <article-title>Convex Analysis and Monotone Operator Theory in Hilbert Spaces</article-title>
          . Berlin; Heidelberg; New York, Springer,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Alber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Ryazantseva</surname>
          </string-name>
          , Nonlinear Ill Posed Problems of Monotone Type, Springer, Dordrecht,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>H.</given-names>
            <surname>Raguet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Fadili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Peyre</surname>
          </string-name>
          ,
          <article-title>A generalized forward-backward splitting</article-title>
          ,
          <source>SIAM J. Imaging Sci. 6</source>
          (
          <year>2013</year>
          )
          <fpage>1199</fpage>
          -
          <lpage>1226</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Luke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Charitha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Shefi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Malitsky</surname>
          </string-name>
          ,
          <article-title>Efficient, Quantitative Numerical Methods for Statistical Image Deconvolution and Denoising</article-title>
          . In:
          <string-name>
            <given-names>T.</given-names>
            <surname>Salditt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Egner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. R.</given-names>
            <surname>Luke</surname>
          </string-name>
          (eds.) Nanoscale Photonic Imaging.
          <source>Topics in Applied Physics</source>
          , vol
          <volume>134</volume>
          . Springer, Cham,
          <year>2020</year>
          , pp.
          <fpage>313</fpage>
          -
          <lpage>338</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Tibshirani</surname>
          </string-name>
          ,
          <article-title>Regression shrinkage and selection via the lasso</article-title>
          ,
          <source>J. Royal Statist. Soc. Ser. B</source>
          <volume>58</volume>
          (
          <year>1996</year>
          )
          <fpage>267</fpage>
          -
          <lpage>288</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S. I.</given-names>
            <surname>Lyashko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Klyushin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Nomirovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Identification of age-structured contamination sources in ground water</article-title>
          , in:
          <string-name>
            <given-names>R.</given-names>
            <surname>Boucekkine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hritonenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yatsenko</surname>
          </string-name>
          (Eds.),
          <source>Optimal Control of Age-Structured Populations in Economy, Demography, and the Environment</source>
          , Routledge, London-New York,
          <year>2013</year>
          , pp.
          <fpage>277</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>A Strongly Convergent Splitting Method for Systems of Operator Inclusions with Monotone Operators</article-title>
          ,
          <source>Journal of Automation and Information Sciences</source>
          <volume>46</volume>
          (
          <issue>5</issue>
          ) (
          <year>2014</year>
          )
          <fpage>45</fpage>
          -
          <lpage>56</lpage>
          . https://doi.org/10.1615/JAutomatInfScien.v46.i5.40
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Nomirovskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. V.</given-names>
            <surname>Rublyov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Convergence of Two-Stage Method with Bregman Divergence for Solving Variational Inequalities</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>55</volume>
          (
          <year>2019</year>
          )
          <fpage>359</fpage>
          -
          <lpage>368</lpage>
          . doi:10.1007/s10559-019-00142-7.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chabak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Vedel</surname>
          </string-name>
          ,
          <article-title>A New Non-Euclidean Proximal Method for Equilibrium Problems</article-title>
          , In:
          <string-name>
            <given-names>O.</given-names>
            <surname>Chertov</surname>
          </string-name>
          et al. (Eds.),
          <source>Recent Developments in Data Science and Intelligent Analysis of Information</source>
          , volume
          <volume>836</volume>
          of
          <source>Advances in Intelligent Systems and Computing</source>
          , Springer, Cham,
          <year>2019</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          . doi:10.1007/978-3-319-97885-7_6.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Modified Extragradient Method with Bregman Divergence for Variational Inequalities</article-title>
          ,
          <source>Journal of Automation and Information Sciences</source>
          <volume>50</volume>
          (
          <issue>8</issue>
          ) (
          <year>2018</year>
          )
          <fpage>26</fpage>
          -
          <lpage>37</lpage>
          . doi:10.1615/JAutomatInfScien.v50.i8.30.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Lions</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mercier</surname>
          </string-name>
          ,
          <article-title>Splitting algorithms for the sum of two nonlinear operators</article-title>
          ,
          <source>SIAM J. Numer. Anal</source>
          .
          <volume>16</volume>
          (
          <year>1979</year>
          )
          <fpage>964</fpage>
          -
          <lpage>979</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>G. B.</given-names>
            <surname>Passty</surname>
          </string-name>
          ,
          <article-title>Ergodic Convergence to a Zero of the Sum of Monotone Operators in Hilbert Spaces</article-title>
          ,
          <source>Journal of Mathematical Analysis and Applications</source>
          <volume>72</volume>
          (
          <year>1979</year>
          )
          <fpage>383</fpage>
          -
          <lpage>390</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tseng</surname>
          </string-name>
          ,
          <article-title>A modified forward-backward splitting method for maximal monotone mappings</article-title>
          ,
          <source>SIAM Journal on Control and Optimization</source>
          <volume>38</volume>
          (
          <year>2000</year>
          )
          <fpage>431</fpage>
          -
          <lpage>446</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>