<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>The Regularized Operator Extrapolation Algorithm for Variational Inequalities</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vladimir Semenov</string-name>
          <email>semenov.volodya@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleh Kharkov</string-name>
          <email>olehharek@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13 Volodymyrska Street, Kyiv, 01161</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>9</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>In this article, a new algorithm for solving monotone variational inequalities in Hilbert spaces is proposed and investigated. Variational inequalities provide a universal instrument for formulating many problems of mathematical physics, machine learning, data analysis, optimal control, and operations research. The proposed iterative algorithm is a regularized (by applying the Halpern scheme) variant of the operator extrapolation method. In terms of the number of calculations required to perform an iterative step, this algorithm has an advantage over the extragradient method and the method of extrapolation from the past. For variational inequalities with monotone Lipschitz continuous operators acting in a Hilbert space, a strong convergence theorem for the method is proved.</p>
      </abstract>
      <kwd-group>
        <kwd>variational inequality</kwd>
        <kwd>monotone operator</kwd>
        <kwd>saddle point problem</kwd>
        <kwd>operator extrapolation method</kwd>
        <kwd>regularization</kwd>
        <kwd>Halpern method</kwd>
        <kwd>strong convergence</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        This article continues the series of articles [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1–3</xref>
        ] devoted to the development of computationally
efficient and adaptive algorithms for solving variational inequalities and equilibrium problems.
      </p>
      <p>
        Variational inequalities provide a universal instrument for formulating many topical problems of
mathematical physics, machine learning, data analysis, optimal control, and operations research [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ].
The development of algorithms for solving variational inequalities and related problems (equilibrium
problems, game problems) is an extremely popular field of research in computational mathematics [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18 ref19 ref20 ref21 ref22 ref23 ref24 ref25 ref26 ref27 ref28 ref29 ref30 ref31 ref32 ref33 ref34 ref35 ref6 ref7 ref8 ref9">6–
35</xref>
        ]. Some problems of non-smooth optimization can be solved effectively if they are formulated as
saddle point problems. This approach allows one to apply algorithms for solving variational inequalities
in order to obtain a solution of the optimization problem [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Recently, the following way of building fast algorithms
for convex programming problems was developed: using duality theory, a transition is made to a
convex-concave saddle point problem (the Fenchel game), and then extragradient algorithms are applied for
solving variational inequalities [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. Note that the increased use of generative adversarial neural
networks (GANs) and other adversarial or robust learning models has led to interest among machine
learning specialists in algorithms for solving saddle problems and variational inequalities [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
      </p>
      <p>
        The simplest method for solving variational inequalities is an analogue of the gradient descent
method, which in the case of the saddle point problem is known as the gradient descent-ascent method
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. But this method may not converge for variational inequalities with a monotone operator.
      </p>
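      <p>As a minimal numerical illustration (the toy problem is ours, not from the paper): for the bilinear saddle function $L(u, v) = uv$, the descent-ascent operator is $A x = M x$ with a skew-symmetric matrix $M$, which is monotone and Lipschitz continuous, yet the fixed-step iteration strictly increases the distance to the unique solution $x^* = 0$:</p>
      <preformat><![CDATA[
import numpy as np

# Gradient descent-ascent on L(u, v) = u*v: A(x) = M x with skew-symmetric M.
# Since <x, Mx> = 0, we get ||x_{n+1}||^2 = (1 + lam^2) * ||x_n||^2,
# so the iterates spiral away from the solution x* = 0 for any lam > 0.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x = np.array([0.1, 0.1])
lam = 0.1
for n in range(100):
    x = x - lam * (M @ x)
print(np.linalg.norm(x))  # ~0.23 > ||x_0|| ~ 0.14: no convergence
]]></preformat>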
      <p>
        A well-known
modification of the gradient descent method
with projection for variational
inequalities is the Korpelevich extragradient method [
        <xref ref-type="bibr" rid="ref14 ref15 ref16 ref17">14–17</xref>
        ], the iteration of which requires two
calculations of the value of the operator of the problem and two metric projections onto the admissible
set. Computationally cheap variants of the extragradient algorithm with one metric projection onto the
admissible set were proposed in the articles [
        <xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>
        ]. Variants of the Korpelevich extragradient method,
including adaptive ones, are proposed in the articles [
        <xref ref-type="bibr" rid="ref20 ref21 ref22">20–22</xref>
        ]. In the Popov article [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ], a modification of the gradient descent-ascent method,
different from the extragradient algorithm, was proposed for finding saddle points of convex-concave
functions. The iteration of this algorithm is cheaper than that of
the extragradient algorithm in terms of the number of operator value calculations: one
instead of two. Popov's algorithm for variational inequalities became known among Machine Learning
specialists as Extrapolation from the Past [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Important results related to this algorithm are obtained
in papers [
        <xref ref-type="bibr" rid="ref1 ref13 ref2 ref23 ref24 ref25">1, 2, 13, 23–25</xref>
        ]. In particular, its adaptive modifications are proposed in the papers [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>
        Further development of these ideas and attempts to reduce the complexity of an iteration while
preserving the nature of convergence led to the invention of the Forward-Reflected-Backward
Algorithm for solving operator inclusions [
        <xref ref-type="bibr" rid="ref26 ref27">26, 27</xref>
        ]. The algorithm has an advantage over the
Korpelevich extragradient method and the method of Extrapolation from the Past in terms of the number
of calculations required for the iterative step. This scheme is known as Optimistic Gradient Descent
Ascent [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] and Operator Extrapolation Algorithm [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. At present, the task of developing a
strongly convergent variant of the operator extrapolation algorithm for variational inequalities in Hilbert
space remains relevant. Strongly convergent modifications of the extragradient algorithm are proposed in [
        <xref ref-type="bibr" rid="ref2 ref7">2,
7</xref>
        ]. Recently, many results have been obtained for algorithms for solving variational problems in Banach
spaces [
        <xref ref-type="bibr" rid="ref28 ref29 ref3 ref30 ref9">3, 9, 28–30</xref>
        ]. In particular, analogs of the Korpelevich, Tseng, and Popov algorithms for
problems in uniformly convex Banach spaces are constructed and theoretically studied. In [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], an adaptive version of the Forward-Reflected-Backward Algorithm for monotone variational
inequalities in a 2-uniformly convex and uniformly smooth Banach space was proposed.
      </p>
      <p>
        In this article, a new algorithm for solving variational inequalities in Hilbert spaces is proposed. This
algorithm is a variant of the Operator Extrapolation Method (the
Forward-Reflected-Backward Algorithm from [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]), regularized by using the Halpern scheme [
        <xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>
        ]. For variational
inequalities with monotone Lipschitz continuous operators acting in a Hilbert space, a strong
convergence theorem for the method is proved.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries and problem statement</title>
      <sec id="sec-2-1">
        <title>Let’s consider the variational inequality:</title>
        <p>find x C :</p>
        <p>Ax, y  x  0 y  C ,
where C is a nonempty subset of a Hilbert space H , A is an operator, which is acting from H in H
.</p>
        <sec id="sec-2-1-1">
          <title>We denote the set of solutions (1) as S .</title>
          <p>Assume that the following conditions are met:
 C  H is a convex and closed set;
 operator A: H  H is a monotone on C , which means
and Lipshitz operator on C (with constant L  0 ), which means</p>
          <p>Ax  Ay, x  y  0</p>
          <p>x, y  C ,
Ax  Ay  L x  y
x, y  C ;
 S is a nonempty set.</p>
          <p>Let’s consider the dual variational inequality:
find x C :</p>
          <p>Ay, x  y  0 y  C .</p>
          <p>
            We denote the set of solutions (2) as S d . It is common known, that S d is a convex and closed set
[
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. Inequality (2) is called a weak or dual formulation of the variational inequality (1) (or Minty type
inequality), and the solutions of the inequality (2) – weak solutions of the variational inequality (1). For
the monotone operators A we always have S  S d . In our particular conditions (when the operator is
also continuous), we have S d  S [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ].
          </p>
          <p>Let K is a nonempty closed and convex subset of a Hilbert space H . We know that for each x  H
there exists unique element z  K such that
z  x  inf y  x .</p>
          <p>yK
(1)
(2)</p>
        </sec>
        <sec id="sec-2-1-2">
          <title>This element z  K</title>
          <p>denote as P x , and the corresponding operator PK : H  K is called</p>
          <p>
            K
projection operator from H to K (metric projection) [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]. For this operator the following statements
are equivalent:
          </p>
          <p>
            z  PK x  z  K,
The last inequality is equivalent to the next one [
            <xref ref-type="bibr" rid="ref4">4</xref>
            ]:
z  x, y  z  0
          </p>
          <p>y  K .
y  PK x 2  y  x 2  PK x  x 2
y  K .</p>
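      <p>As an illustration, here is a small Python sketch (the helper names are ours) of the metric projection for two simple sets, with a numerical check of the characterization $\langle z - x, y - z \rangle \ge 0$ for all $y \in K$:</p>
      <preformat><![CDATA[
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_K x onto the closed ball K = {y : ||y - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x.copy() if nd <= radius else center + radius * d / nd

def project_box(x, lo, hi):
    """Metric projection onto the box K = {y : lo <= y <= hi} (componentwise clipping)."""
    return np.clip(x, lo, hi)

# Check <z - x, y - z> >= 0 on sampled points y of K (here K is the unit ball).
rng = np.random.default_rng(0)
x = 5.0 * rng.normal(size=3)
z = project_ball(x, np.zeros(3), 1.0)
for _ in range(1000):
    y = project_ball(rng.normal(size=3), np.zeros(3), 1.0)
    assert np.dot(z - x, y - z) >= -1e-12
print("characterization holds on sampled points; P_K x =", z)
]]></preformat>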
      <p>The variational inequality (1) can be formulated as the problem of finding a fixed point [<xref ref-type="bibr" rid="ref4">4</xref>]:
$$x = P_C (x - \lambda A x), \qquad (3)$$
where $\lambda > 0$. Formulation (3) is useful because it leads to the iterative scheme
$$x_{n+1} = P_C (x_n - \lambda A x_n), \qquad (4)$$
which is weakly convergent for inverse strongly monotone (also known as cocoercive) operators $A : H \to H$ [<xref ref-type="bibr" rid="ref10">10</xref>]. However, in general, scheme (4) does not converge for Lipschitz continuous monotone operators. The most famous modification of scheme (4) is the Korpelevich extragradient method [<xref ref-type="bibr" rid="ref14">14</xref>]:
$$x_{n+1} = P_C \left( x_n - \lambda A P_C (x_n - \lambda A x_n) \right),$$
the iteration of which requires two calculations of the value of the operator of the problem and two metric projections onto the admissible set. Computationally cheap variants of the extragradient algorithm with one metric projection onto the admissible set were proposed in the articles [<xref ref-type="bibr" rid="ref18 ref19">18, 19</xref>]. Further development of these ideas and attempts to reduce the complexity of an iteration while preserving the nature of convergence led to the invention of the Forward-Reflected-Backward Algorithm [<xref ref-type="bibr" rid="ref26">26</xref>]
$$x_{n+1} = P_C \left( x_n - 2 \lambda A x_n + \lambda A x_{n-1} \right), \qquad (5)$$
where $\lambda \in \left( 0, \frac{1}{2L} \right)$. This scheme is known as Optimistic Gradient Descent Ascent [<xref ref-type="bibr" rid="ref13">13</xref>] and the Operator Extrapolation Algorithm [<xref ref-type="bibr" rid="ref3">3</xref>]. The weak convergence of algorithm (5) is proved in [<xref ref-type="bibr" rid="ref26">26</xref>].</p>
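      <p>A minimal sketch (our toy example: $A x = M x$ with skew-symmetric $M$, so $L = 1$ and the unique solution is $x^* = 0$) showing that scheme (5) with $\lambda \in \left( 0, \frac{1}{2L} \right)$ converges where scheme (4) fails:</p>
      <preformat><![CDATA[
import numpy as np

# Operator extrapolation scheme (5) with C = H (so P_C is the identity):
# x_{n+1} = x_n - 2*lam*A(x_n) + lam*A(x_{n-1}), lam < 1/(2L).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda v: M @ v
lam = 0.4                       # L = 1 here, so lam < 1/(2L) = 0.5

x_prev = np.array([1.0, 0.0])
x = np.array([0.8, 0.3])
for n in range(2000):
    x, x_prev = x - 2 * lam * A(x) + lam * A(x_prev), x
print(np.linalg.norm(x))        # -> 0 geometrically on this example
]]></preformat>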
      <p>The task of this article is to obtain a strongly convergent variant of the Operator Extrapolation Algorithm. In order to do this, we regularize algorithm (5) using the well-known Halpern scheme [<xref ref-type="bibr" rid="ref31">31</xref>]
$$y_{n+1} = \alpha_n y + (1 - \alpha_n) T y_n, \qquad (6)$$
where $T : H \to H$ is a nonexpansive operator and $y \in H$.</p>
      <p>If the set of fixed points $F(T) = \{ x \in H : x = Tx \}$ is nonempty and $\alpha_n \in (0, 1)$, $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = +\infty$, then scheme (6) is strongly convergent: $\lim_{n \to \infty} \| y_n - P_{F(T)} y \| = 0$.</p>
      <p>Remark 1. Halpern's iterative scheme (6) coincides with Bakushinskii's iterative regularization scheme [<xref ref-type="bibr" rid="ref7">7</xref>] for the method of successive approximations $x_{n+1} = T x_n$ for approximating fixed points of the operator $T : H \to H$.</p>
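      <p>A minimal sketch of scheme (6) under an assumed toy nonexpansive map (a plane rotation, so $F(T) = \{ 0 \}$ and the iterates converge strongly to $P_{F(T)} y = 0$, while the plain iteration $x_{n+1} = T x_n$ would only rotate):</p>
      <preformat><![CDATA[
import numpy as np

theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda v: R @ v            # an isometry, hence nonexpansive; F(T) = {0}

y = np.array([1.0, 2.0])       # anchor point of the Halpern scheme
x = y.copy()
for n in range(1, 200001):
    alpha = 1.0 / (n + 1)      # alpha_n -> 0 and sum alpha_n = +infinity
    x = alpha * y + (1 - alpha) * T(x)
print(np.linalg.norm(x))       # slowly decreases toward 0 = P_{F(T)} y
]]></preformat>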
      <p>Now let us recall well-known lemmas about recurrent numerical inequalities.</p>
      <p>Lemma 1. Let $(a_n)$ be a sequence of nonnegative numbers which satisfies the recurrence inequality
$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \beta_n \quad \text{for all } n \ge 1,$$
where the sequences $(\alpha_n)$ and $(\beta_n)$ have the following properties: $\alpha_n \in (0, 1)$ and $\beta_n \le \beta$, where $\beta \ge 0$. Then $a_n \le \max \{ a_1, \beta \}$; in particular, the sequence $(a_n)$ is bounded.</p>
      <p>Lemma 2 ([<xref ref-type="bibr" rid="ref7">7</xref>]). Let $(a_n)$ be a sequence of nonnegative numbers which satisfies the recurrence inequality
$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \beta_n \quad \text{for all } n \ge 1,$$
where the sequences $(\alpha_n)$ and $(\beta_n)$ have the following properties: $\alpha_n \in (0, 1)$, $\sum_{n=1}^{\infty} \alpha_n = +\infty$, and $\limsup_{n \to \infty} \beta_n \le 0$. Then $\lim_{n \to \infty} a_n = 0$.</p>
      <p>Lemma 3 ([<xref ref-type="bibr" rid="ref33">33</xref>]). Let $(a_n)$ be a numerical sequence which has a subsequence $(a_{n_k})$ with the property $a_{n_k + 1} > a_{n_k}$ for all $k \ge 1$. Then there exists such a nondecreasing sequence $(m_k)$ of natural numbers that $m_k \to \infty$ and $a_{m_k} \le a_{m_k + 1}$, $a_k \le a_{m_k + 1}$ for all sufficiently large $k$.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Regularized Operator Extrapolation Algorithm</title>
      <p>In article [<xref ref-type="bibr" rid="ref26">26</xref>], the following Operator Extrapolation Algorithm (Forward-Reflected-Backward Algorithm) was proposed to solve the variational inequality (1):
$$x_{n+1} = P_C \left( x_n - \lambda_n A x_n - \lambda_{n-1} (A x_n - A x_{n-1}) \right) = P_C \left( x_n - (\lambda_n + \lambda_{n-1}) A x_n + \lambda_{n-1} A x_{n-1} \right), \qquad (7)$$
where the parameters $\lambda_n$ satisfy the condition $\frac{1}{2L} > \sup_n \lambda_n \ge \inf_n \lambda_n > 0$.</p>
      <p>Remark 2. Modifications with the Bregman projection and the generalized Alber projection are proposed in [<xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>]. In terms of the number of calculations required to perform an iterative step, this algorithm has an advantage over the Korpelevich extragradient method
$$\begin{cases} y_n = P_C (x_n - \lambda_n A x_n), \\ x_{n+1} = P_C (x_n - \lambda_n A y_n), \end{cases}$$
and the method of extrapolation from the past (Popov's method)
$$\begin{cases} y_n = P_C (x_n - \lambda_n A y_{n-1}), \\ x_{n+1} = P_C (x_n - \lambda_n A y_n), \end{cases}$$
both of which require two metric projections onto $C$ per iteration; a compact comparison of the update rules is sketched below.</p>
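      <p>To make the operator-evaluation counts explicit, here is a compact Python sketch of the three update rules (function names and the caching convention are ours): the extragradient step evaluates $A$ twice per iteration, while the Popov step and the operator extrapolation step evaluate $A$ once, reusing a stored value.</p>
      <preformat><![CDATA[
import numpy as np

def extragradient_step(x, A, proj, lam):
    """Korpelevich: two evaluations of A and two projections per iteration."""
    y = proj(x - lam * A(x))
    return proj(x - lam * A(y))

def popov_step(x, Ay_prev, A, proj, lam):
    """Extrapolation from the past: one new evaluation of A, two projections.
    Ay_prev = A(y_{n-1}) is carried over from the previous iteration."""
    y = proj(x - lam * Ay_prev)
    Ay = A(y)                  # the only new operator evaluation
    return proj(x - lam * Ay), Ay

def frb_step(x, Ax_prev, A, proj, lam):
    """Operator extrapolation (7): one evaluation of A and one projection.
    Ax_prev = A(x_{n-1}) is carried over from the previous iteration."""
    Ax = A(x)
    return proj(x - 2 * lam * Ax + lam * Ax_prev), Ax
]]></preformat>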
      <p>
        It is known that for variational inequalities (1) with monotone and Lipschitz continuous operators acting in
a Hilbert space, algorithm (7) is weakly convergent with an $O\left( \frac{1}{n} \right)$ estimate of efficiency in terms of the
gap function [<xref ref-type="bibr" rid="ref3">3</xref>]. Based on the well-known Halpern method for approximating fixed points of
nonexpansive operators [<xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>], we build the following regularized version of algorithm (7).
      </p>
      <p>Algorithm 1. Regularized Operator Extrapolation Algorithm.</p>
      <p>Initialization. We set the elements $y \in H$, $x_0, x_1 \in C$, a sequence of positive numbers $(\lambda_n)$, and a sequence $(\alpha_n)$ such that
$$\alpha_n \in (0, 1), \quad \lim_{n \to \infty} \alpha_n = 0, \quad \sum_{n=1}^{\infty} \alpha_n = +\infty.$$</p>
      <p>Iterations. We generate a sequence $(x_n)$ using the iterative scheme
$$x_{n+1} = P_C \left( \alpha_n y + (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}) \right).$$</p>
      <p>For the positive parameters $(\lambda_n)$ we assume that the following condition is fulfilled:
$$\frac{1}{2L} > \sup_n \lambda_n \ge \inf_n \lambda_n > 0. \qquad (8)$$</p>
      <p>In the next sections, we will prove that the sequence $(x_n)$ generated by Algorithm 1 strongly converges to the projection of the point $y$ onto the set $S$. Therefore, to find the normal solution (the solution with the smallest norm) of the variational inequality (1), we can use the scheme
$$x_{n+1} = P_C \left( (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}) \right).$$</p>
      <p>Remark 3. For a smooth saddle point problem
$$\min_{x \in C} \max_{y \in D} L(x, y)$$
Algorithm 1 takes the form
$$\begin{cases} x_{n+1} = P_C \left( \alpha_n \bar{x} + (1 - \alpha_n) x_n - \lambda_n \nabla_1 L(x_n, y_n) - (1 - \alpha_n) \lambda_{n-1} \left( \nabla_1 L(x_n, y_n) - \nabla_1 L(x_{n-1}, y_{n-1}) \right) \right), \\ y_{n+1} = P_D \left( \alpha_n \bar{y} + (1 - \alpha_n) y_n + \lambda_n \nabla_2 L(x_n, y_n) + (1 - \alpha_n) \lambda_{n-1} \left( \nabla_2 L(x_n, y_n) - \nabla_2 L(x_{n-1}, y_{n-1}) \right) \right), \end{cases}$$
where $(\bar{x}, \bar{y}) \in C \times D$ is the anchor point and $\nabla_1 L$, $\nabla_2 L$ denote the partial gradients of $L$.</p>
      <p>Now let us prove the strong convergence of Algorithm 1.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Main inequalities</title>
      <p>First, we will prove two auxiliary inequalities that will allow us to use Lemmas 1 and 2 to prove the convergence of Algorithm 1.</p>
      <p>Lemma 4. For the sequence $(x_n)$ generated by Algorithm 1, the following inequality holds:
$$\| x_{n+1} - z \|^2 + 2 \lambda_n \langle A x_n - A x_{n+1}, x_{n+1} - z \rangle + \tfrac{1}{2} \| x_{n+1} - x_n \|^2 \le$$
$$\le (1 - \alpha_n) \left( \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \tfrac{1}{2} \| x_n - x_{n-1} \|^2 \right) +$$
$$+ \alpha_n \| y - z \|^2 - \alpha_n \| y - x_{n+1} \|^2 - \left( \tfrac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \| x_{n+1} - x_n \|^2 -$$
$$- (1 - \alpha_n) \left( \tfrac{1}{2} - \lambda_{n-1} L \right) \| x_n - x_{n-1} \|^2, \qquad (9)$$
where $z \in S$.</p>
      <p>Proof. Let $z \in S$. The characterization of the metric projection gives
$$\langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n + \lambda_n A x_n + (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}), z - x_{n+1} \rangle \ge 0. \qquad (10)$$
The monotonicity of the operator $A$ and the inclusion $z \in S$ give us
$$\langle \lambda_n A x_n + (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}), z - x_{n+1} \rangle =$$
$$= \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_{n+1} \rangle + \lambda_n \langle A x_{n+1}, z - x_{n+1} \rangle \le$$
$$\le \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle. \qquad (11)$$
By using (11) in (10), we obtain
$$0 \le 2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle + 2 \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle +$$
$$+ 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle + 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle. \qquad (12)$$
Now let us estimate from above the term $2 \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle$ in (12). We obtain
$$2 \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle \le 2 \lambda_{n-1} \| A x_n - A x_{n-1} \| \| x_n - x_{n+1} \| \le$$
$$\le 2 \lambda_{n-1} L \| x_n - x_{n-1} \| \| x_{n+1} - x_n \| \le \lambda_{n-1} L \| x_n - x_{n-1} \|^2 + \lambda_{n-1} L \| x_{n+1} - x_n \|^2. \qquad (13)$$
Then we transform the term $2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle$ in (12). We obtain
$$2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle =$$
$$= \| \alpha_n y + (1 - \alpha_n) x_n - z \|^2 - \| x_{n+1} - z \|^2 - \| x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n \|^2.$$
In order to transform the difference $\| \alpha_n y + (1 - \alpha_n) x_n - z \|^2 - \| \alpha_n y + (1 - \alpha_n) x_n - x_{n+1} \|^2$, let us use the following identity:
$$\| \alpha u + (1 - \alpha) v - w \|^2 = \alpha \| u - w \|^2 + (1 - \alpha) \| v - w \|^2 - \alpha (1 - \alpha) \| u - v \|^2,$$
where $u, v, w \in H$ and $\alpha \in [0, 1]$. Then
$$\| \alpha_n y + (1 - \alpha_n) x_n - z \|^2 - \| \alpha_n y + (1 - \alpha_n) x_n - x_{n+1} \|^2 =$$
$$= (1 - \alpha_n) \| x_n - z \|^2 - (1 - \alpha_n) \| x_n - x_{n+1} \|^2 + \alpha_n \| y - z \|^2 - \alpha_n \| y - x_{n+1} \|^2.$$
Now we have the inequality
$$0 \le (1 - \alpha_n) \| x_n - z \|^2 - (1 - \alpha_n) \| x_n - x_{n+1} \|^2 + \alpha_n \| y - z \|^2 - \alpha_n \| y - x_{n+1} \|^2 - \| x_{n+1} - z \|^2 +$$
$$+ 2 \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle +$$
$$+ (1 - \alpha_n) \lambda_{n-1} L \| x_n - x_{n-1} \|^2 + (1 - \alpha_n) \lambda_{n-1} L \| x_{n+1} - x_n \|^2. \qquad (14)$$
We rearrange the terms in (14) and finally get (9), which had to be proved. ■</p>
      <p>Lemma 5. For the sequence $(x_n)$ generated by Algorithm 1, the following inequality holds:
$$\| x_{n+1} - z \|^2 + 2 \lambda_n \langle A x_n - A x_{n+1}, x_{n+1} - z \rangle + \tfrac{1}{2} \| x_{n+1} - x_n \|^2 \le$$
$$\le (1 - \alpha_n) \left( \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \tfrac{1}{2} \| x_n - x_{n-1} \|^2 \right) + 2 \alpha_n \langle y - z, x_{n+1} - z \rangle -$$
$$- \left( \tfrac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \| x_{n+1} - x_n \|^2 - (1 - \alpha_n) \left( \tfrac{1}{2} - \lambda_{n-1} L \right) \| x_n - x_{n-1} \|^2, \qquad (15)$$
where $z \in S$.</p>
      <p>Proof. Let us apply the elementary inequality $\| a + b \|^2 \le \| a \|^2 + 2 \langle b, a + b \rangle$. We obtain
$$\| y - z \|^2 = \| (y - x_{n+1}) + (x_{n+1} - z) \|^2 \le \| y - x_{n+1} \|^2 + 2 \langle x_{n+1} - z, y - z \rangle. \qquad (16)$$
By using (16) in (9), we get (15), which had to be proved. ■</p>
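      <p>A small numerical sanity check (same toy setup as in the sketch of Algorithm 1; all names are ours): we track the quantity appearing in Lemmas 4 and 5 along the trajectory and assert the relaxed recursion obtained from (15) by dropping its nonpositive terms.</p>
      <preformat><![CDATA[
import numpy as np

# Toy VI: A x = M x, C the unit ball, z = P_S y = 0 (L = 1, lam = 0.2).
# Phi_n = ||x_n - z||^2 + 2*lam*<A x_{n-1} - A x_n, x_n - z> + 0.5*||x_n - x_{n-1}||^2.
# With alpha_n = 1/(n+4) the dropped terms in (15) are nonpositive, so
# Phi_{n+1} <= (1 - alpha_n)*Phi_n + 2*alpha_n*<y - z, x_{n+1} - z> must hold.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda v: M @ v
proj_C = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)

z = np.zeros(2)
y = np.array([0.7, -0.4])
x_prev, x = np.array([0.9, 0.1]), np.array([0.5, -0.8])
lam = 0.2

def phi(u, u_prev):
    return (np.dot(u - z, u - z)
            + 2 * lam * np.dot(A(u_prev) - A(u), u - z)
            + 0.5 * np.dot(u - u_prev, u - u_prev))

for n in range(1, 2001):
    a = 1.0 / (n + 4)
    x_next = proj_C(a * y + (1 - a) * x - lam * A(x)
                    - (1 - a) * lam * (A(x) - A(x_prev)))
    lhs = phi(x_next, x)
    rhs = (1 - a) * phi(x, x_prev) + 2 * a * np.dot(y - z, x_next - z)
    assert lhs <= rhs + 1e-10, (n, lhs, rhs)
    x_prev, x = x, x_next
print("the relaxed form of (15) holds along the trajectory")
]]></preformat>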
    </sec>
    <sec id="sec-5">
      <title>5. Strong convergence</title>
      <p>1n  12 n1L xn  xn1 2 ,
(14)
(15)
(16)
Now let`s prove that the sequence is bounded  xn  .</p>
      <p>Lemma 6. Let condition (8) be fulfilled. Then the sequence $(x_n)$ generated by Algorithm 1 is bounded.</p>
      <p>Proof. Since $\frac{1}{2L} > \sup_n \lambda_n \ge \inf_n \lambda_n > 0$ and $\lim_{n \to \infty} \alpha_n = 0$, there exists a number $n_0 \ge 1$ such that for all $n \ge n_0$
$$\frac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L > 0 \quad \text{and} \quad (1 - \alpha_n) \left( \frac{1}{2} - \lambda_{n-1} L \right) > 0. \qquad (17)$$
From (9) and (17) we obtain that for $n \ge n_0$ the following inequality holds:
$$\Phi_{n+1} \le (1 - \alpha_n) \Phi_n + \alpha_n \| y - z \|^2, \qquad (18)$$
where $\Phi_n = \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \| x_n - x_{n-1} \|^2$ and $z \in S$.</p>
      <p>Let us obtain a lower bound for $\Phi_n$. We have
$$\Phi_n = \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \tfrac{1}{2} \| x_n - x_{n-1} \|^2 \ge$$
$$\ge \| x_n - z \|^2 - 2 \lambda_{n-1} \| A x_{n-1} - A x_n \| \| x_n - z \| + \tfrac{1}{2} \| x_{n-1} - x_n \|^2 \ge$$
$$\ge \| x_n - z \|^2 - 2 \lambda_{n-1} L \| x_{n-1} - x_n \| \| x_n - z \| + \tfrac{1}{2} \| x_{n-1} - x_n \|^2 \ge$$
$$\ge (1 - \lambda_{n-1} L) \| x_n - z \|^2 + \left( \tfrac{1}{2} - \lambda_{n-1} L \right) \| x_n - x_{n-1} \|^2 \ge 0. \qquad (19)$$
From inequalities (18), (19) and Lemma 1, the boundedness of the sequences $(\Phi_n)$ and $(x_n)$ follows, which had to be proved. ■</p>
      <p>Let us formulate the main result.</p>
      <p>Theorem 1. Let $C$ be a nonempty convex closed subset of a Hilbert space $H$, let $A : H \to H$ be a monotone and Lipschitz continuous operator on the set $C$, let $S \ne \emptyset$, $y \in H$, and let condition (8) be fulfilled. Then the sequence $(x_n)$ generated by Algorithm 1 strongly converges to $z = P_S y$.</p>
      <p>Proof. Let $z = P_S y$. Lemma 6 implies the existence of a number $M > 0$ such that
$$\left| \langle y - z, x_{n+1} - z \rangle \right| \le M \quad \text{for all } n \ge 1.$$
Then from Lemma 5 the following inequality follows:
$$\Phi_{n+1} \le (1 - \alpha_n) \Phi_n - \left( \tfrac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \| x_{n+1} - x_n \|^2 - (1 - \alpha_n) \left( \tfrac{1}{2} - \lambda_{n-1} L \right) \| x_n - x_{n-1} \|^2 + 2 \alpha_n M, \qquad (20)$$
where $\Phi_n = \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \| x_n - x_{n-1} \|^2$.</p>
      <p>Consider the numerical sequence $(\Phi_n)$. Two options are possible:
1. there exists a number $\bar{n} \ge 1$ such that $\Phi_{n+1} \le \Phi_n$ for all $n \ge \bar{n}$;
2. there exists an increasing sequence of numbers $(n_k)$ such that $\Phi_{n_k + 1} > \Phi_{n_k}$ for all $k \ge 1$.</p>
      <p>First let us consider option 1. In this case there exists $\lim_{n \to \infty} \Phi_n$. Since $\Phi_n - \Phi_{n+1} \to 0$ and $\alpha_n \to 0$, passing to the limit in (20) we obtain
$$\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0.$$</p>
      <p>Let us show that all weak partial limits of the sequence $(x_n)$ belong to $S$. Consider a subsequence $(x_{n_k})$ which converges weakly to some point $w \in H$. It is obvious that $w \in C$. Let us show that $w \in S$. We have
$$\langle x_{n_k+1} - \alpha_{n_k} y - (1 - \alpha_{n_k}) x_{n_k} + \lambda_{n_k} A x_{n_k} + (1 - \alpha_{n_k}) \lambda_{n_k-1} (A x_{n_k} - A x_{n_k-1}), v - x_{n_k+1} \rangle \ge 0 \quad \forall v \in C.$$
By using the monotonicity of the operator $A$, we derive the estimate
$$\langle A v, v - x_{n_k} \rangle \ge \langle A x_{n_k}, v - x_{n_k} \rangle = \langle A x_{n_k}, x_{n_k+1} - x_{n_k} \rangle + \langle A x_{n_k}, v - x_{n_k+1} \rangle \ge$$
$$\ge \langle A x_{n_k}, x_{n_k+1} - x_{n_k} \rangle + \tfrac{1}{\lambda_{n_k}} \langle \alpha_{n_k} (y - x_{n_k}) + x_{n_k} - x_{n_k+1}, v - x_{n_k+1} \rangle -$$
$$- (1 - \alpha_{n_k}) \tfrac{\lambda_{n_k-1}}{\lambda_{n_k}} \langle A x_{n_k} - A x_{n_k-1}, v - x_{n_k+1} \rangle \quad \forall v \in C. \qquad (21)$$
From $\lim_{n \to \infty} \alpha_n = 0$, the boundedness of the sequence $(x_n)$, $\lim_{n \to \infty} \| x_{n+1} - x_n \| = 0$, (21) and the Lipschitz property of the operator $A$, we obtain
$$\lim_{k \to \infty} \langle A v, v - x_{n_k} \rangle \ge 0 \quad \forall v \in C.$$
On the other hand,
$$\langle A v, v - w \rangle = \lim_{k \to \infty} \langle A v, v - x_{n_k} \rangle \ge 0 \quad \forall v \in C.$$
Thus, $w \in S_d = S$.</p>
      <p>Let us prove that
$$\limsup_{n \to \infty} \langle y - z, x_{n+1} - z \rangle \le 0. \qquad (22)$$
Consider a subsequence $(x_{n_k})$ such that
$$\lim_{k \to \infty} \langle y - z, x_{n_k} - z \rangle = \limsup_{n \to \infty} \langle y - z, x_{n+1} - z \rangle.$$
We may assume that $x_{n_k} \rightharpoonup w \in S$ weakly. Then we obtain
$$\limsup_{n \to \infty} \langle y - z, x_{n+1} - z \rangle = \lim_{k \to \infty} \langle y - z, x_{n_k} - z \rangle = \langle y - z, w - z \rangle = \langle y - P_S y, w - P_S y \rangle \le 0,$$
which is a proof for (22), where the characterization of the metric projection $P_S$ was used.</p>
      <p>Now from (22), the inequality
$$\Phi_{n+1} \le (1 - \alpha_n) \Phi_n + 2 \alpha_n \langle y - z, x_{n+1} - z \rangle,$$
which holds for sufficiently large $n$, and Lemma 2, we conclude that
$$\Phi_n = \| x_n - z \|^2 + 2 \lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \tfrac{1}{2} \| x_n - x_{n-1} \|^2 \to 0.$$
From (19) we obtain $\lim_{n \to \infty} \| x_n - z \| = 0$.</p>
      <p>Now let us consider option 2. In this case there exists a nondecreasing sequence of numbers $(m_k)$ with the following properties (Lemma 3):
1. $m_k \to \infty$;
2. $\Phi_{m_k} \le \Phi_{m_k+1}$ for all sufficiently large $k$;
3. $\Phi_k \le \Phi_{m_k+1}$ for all sufficiently large $k$.</p>
      <p>From inequality (20), (19) and the second property we get
$$\left( \tfrac{1}{2} - \alpha_{m_k} - (1 - \alpha_{m_k}) \lambda_{m_k-1} L \right) \| x_{m_k+1} - x_{m_k} \|^2 + (1 - \alpha_{m_k}) \left( \tfrac{1}{2} - \lambda_{m_k-1} L \right) \| x_{m_k} - x_{m_k-1} \|^2 \le 2 \alpha_{m_k} M.$$
This leads us to
$$\lim_{k \to \infty} \| x_{m_k+1} - x_{m_k} \| = 0.$$
By similar reasoning, we prove that the weak partial limits of the sequence $(x_{m_k})$ belong to $S$. From the identity
$$\langle y - z, x_{m_k+1} - z \rangle = \langle y - z, x_{m_k} - z \rangle + \langle y - z, x_{m_k+1} - x_{m_k} \rangle$$
we obtain
$$\limsup_{k \to \infty} \langle y - z, x_{m_k+1} - z \rangle = \limsup_{k \to \infty} \langle y - z, x_{m_k} - z \rangle \le 0.$$
As in the previous part, we obtain the inequality
$$\Phi_{m_k+1} \le (1 - \alpha_{m_k}) \Phi_{m_k} + 2 \alpha_{m_k} \langle y - z, x_{m_k+1} - z \rangle \le (1 - \alpha_{m_k}) \Phi_{m_k+1} + 2 \alpha_{m_k} \langle y - z, x_{m_k+1} - z \rangle,$$
where the second property was used. Then we get
$$\Phi_{m_k+1} \le 2 \langle y - z, x_{m_k+1} - z \rangle.$$
With respect to the third property, we obtain
$$\Phi_k \le \Phi_{m_k+1} \le 2 \langle y - z, x_{m_k+1} - z \rangle.$$
As a result, we get
$$\limsup_{k \to \infty} \Phi_k \le 2 \limsup_{k \to \infty} \langle y - z, x_{m_k+1} - z \rangle \le 0.$$
Thus, $\lim_{n \to \infty} \Phi_n = 0$ and, consequently, $\lim_{n \to \infty} \| x_n - z \| = 0$, which had to be proved. ■</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>
        In this article, a new algorithm for solving variational inequalities in Hilbert spaces was proposed and investigated. The proposed iterative algorithm is a regularized (by applying the Halpern scheme [<xref ref-type="bibr" rid="ref31 ref32">31, 32</xref>]) variant of the Operator Extrapolation Method (the Forward-Reflected-Backward Algorithm from [<xref ref-type="bibr" rid="ref26">26</xref>]). For variational inequalities with monotone Lipschitz continuous operators acting in a Hilbert space, a strong convergence theorem for the method was proved.
      </p>
      <p>An important issue is the study of the asymptotic behavior of Algorithm 1 in the situation $C = H$:
$$x_{n+1} = \alpha_n y + (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}).$$</p>
      <p>To be more precise, this issue is about the behavior of the norm $\| A x_n \|$. In our opinion, the estimate should be $\| A x_n \| = O\left( \frac{1}{n} \right)$. Note that in [<xref ref-type="bibr" rid="ref34">34</xref>] an estimate $\| A x_n \| = O\left( \frac{1}{\sqrt{n}} \right)$ was obtained for the extragradient method, and in [<xref ref-type="bibr" rid="ref35">35</xref>] an estimate $\| A x_n \| = O\left( \frac{1}{n} \right)$ was obtained for the extragradient method with Halpern regularization:
$$\begin{cases} y_n = x_n + \frac{1}{n+2} (x_0 - x_n) - \frac{1}{8L} A x_n, \\ x_{n+1} = x_n + \frac{1}{n+2} (x_0 - x_n) - \frac{1}{8L} A y_n. \end{cases}$$</p>
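      <p>For illustration, a short sketch (the toy operator is ours) of the anchored scheme displayed above; the norm $\| A x_n \|$ is observed to decay:</p>
      <preformat><![CDATA[
import numpy as np

# Extragradient with Halpern regularization, as displayed above (cf. [35]),
# run on the toy operator A x = M x with L = 1.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda v: M @ v
L = 1.0

x0 = np.array([1.0, 0.5])
x = x0.copy()
for n in range(10000):
    beta = 1.0 / (n + 2)
    yn = x + beta * (x0 - x) - A(x) / (8 * L)
    x = x + beta * (x0 - x) - A(yn) / (8 * L)
print(np.linalg.norm(A(x)))   # small; decays roughly like 1/n on this example
]]></preformat>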
      <p>
        The parameters $\lambda_n$ of Algorithm 1 satisfy condition (8), i.e., $\frac{1}{2L} > \sup_n \lambda_n \ge \inf_n \lambda_n > 0$. This means
that information about the Lipschitz constant of the operator $A$ is used a priori. Algorithm 1
and the schemes from articles [<xref ref-type="bibr" rid="ref1 ref2 ref3">1–3</xref>] make it possible to construct an algorithm with adaptive selection of the values
$\lambda_n$ that requires neither knowledge of the Lipschitz constants of the operator nor line search
procedures.
      </p>
      <p>Algorithm 2. Adaptive regularized operator extrapolation algorithm.</p>
      <p>Initialization. Set $y \in H$, elements $x_0, x_1 \in C$, numbers $\tau \in \left( 0, \frac{1}{2} \right)$, $\lambda_0, \lambda_1 > 0$, and a sequence $(\alpha_n)$ with the properties $\alpha_n \in (0, 1)$, $\lim_{n \to \infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = +\infty$.</p>
      <p>Iterations. We generate a sequence $(x_n)$ by using the iterative scheme
$$x_{n+1} = P_C \left( \alpha_n y + (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}) \right),$$
$$\lambda_{n+1} = \begin{cases} \min \left\{ \lambda_n, \ \tau \dfrac{\| x_{n+1} - x_n \|}{\| A x_{n+1} - A x_n \|} \right\}, & \text{if } A x_{n+1} \ne A x_n, \\ \lambda_n, & \text{otherwise.} \end{cases}$$</p>
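      <p>A minimal sketch of the adaptive step rule of Algorithm 2 (the toy operator and names are ours; $\tau \in \left( 0, \frac{1}{2} \right)$ as in the initialization above). No Lipschitz constant enters: $\lambda_n$ is shrunk based on the locally observed ratio.</p>
      <preformat><![CDATA[
import numpy as np

M = np.array([[0.0, 1.0], [-1.0, 0.0]])
A = lambda v: M @ v
proj_C = lambda v: v if np.linalg.norm(v) <= 1 else v / np.linalg.norm(v)

tau = 0.4
y = np.array([0.7, -0.4])
x_prev, x = np.array([0.9, 0.1]), np.array([0.5, -0.8])
lam_prev = lam = 1.0                      # deliberately too large at the start

for n in range(1, 20001):
    a = 1.0 / (n + 1)
    Ax, Ax_prev = A(x), A(x_prev)
    x_next = proj_C(a * y + (1 - a) * x - lam * Ax
                    - (1 - a) * lam_prev * (Ax - Ax_prev))
    # lambda_{n+1} = min(lambda_n, tau*||x_{n+1}-x_n|| / ||A x_{n+1}-A x_n||)
    dA = np.linalg.norm(A(x_next) - Ax)
    lam_next = min(lam, tau * np.linalg.norm(x_next - x) / dA) if dA > 0 else lam
    x_prev, x = x, x_next
    lam_prev, lam = lam, lam_next
print(lam, np.linalg.norm(x))             # lam settles near tau/L; x_n -> 0
]]></preformat>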
      <p>
        In addition, based on the results of work [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], it is possible to obtain an analogue of Algorithm 1 with
a generalized Alber projection for solving variational inequalities in uniformly convex and uniformly
smooth Banach spaces.
      </p>
    </sec>
    <sec id="sec-7">
      <title>7. Acknowledgements</title>
      <p>This work was supported by the Ministry of Education and Science of Ukraine (project
"Computational algorithms and optimization for artificial intelligence, medicine and defense",
0122U002026).</p>
    </sec>
    <sec id="sec-8">
      <title>8. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Y. I.</given-names>
            <surname>Vedel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>An Adaptive Algorithm for the Variational Inequality Over the Set of Solutions of the Equilibrium Problem</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>57</volume>
          (
          <year>2021</year>
          )
          <fpage>91</fpage>
          -
          <lpage>100</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10559-021-00332-2.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Kravets</surname>
          </string-name>
          ,
          <article-title>Adaptive Two-Stage Bregman Method for Variational Inequalities</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>57</volume>
          (
          <year>2021</year>
          )
          <fpage>959</fpage>
          -
          <lpage>967</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10559-021-00421-2.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Vedel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <article-title>A Novel Algorithm with Self-adaptive Technique for Solving Variational Inequalities in Banach Spaces</article-title>
          , in: N. N.
          <string-name>
            <surname>Olenev</surname>
            ,
            <given-names>Y. G.</given-names>
          </string-name>
          <string-name>
            <surname>Evtushenko</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Jaćimović</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Khachay</surname>
          </string-name>
          , V. Malkova (Eds.),
          <source>Advances in Optimization and Applications</source>
          , volume
          <volume>1514</volume>
          of Communications in Computer and Information Science, Springer, Cham,
          <year>2021</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>64</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -92711-
          <issue>0</issue>
          _
          <fpage>4</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kinderlehrer</surname>
          </string-name>
          ,
          <string-name>
            <surname>G. Stampacchia,</surname>
          </string-name>
          <article-title>An Introduction to Variational Inequalities and Their Applications</article-title>
          ,
          <source>Society for Industrial and Applied Mathematics</source>
          , Philadelphia,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <article-title>Network economics: A variational inequality approach</article-title>
          , Kluwer Academic Publishers, Dordrecht,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S. P.</given-names>
            <surname>Uryas</surname>
          </string-name>
          <article-title>'ev, Adaptive algorithms of stochastic optimization and game theory</article-title>
          ,
          <source>Nauka</source>
          , Moscow,
          <year>1990</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A. B.</given-names>
            <surname>Bakushinskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. V.</given-names>
            <surname>Goncharskii</surname>
          </string-name>
          ,
          <article-title>Iterative Methods for Solving Ill-Posed Problems</article-title>
          , Nauka, Moscow,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>F.</given-names>
            <surname>Facchinei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Pang</surname>
          </string-name>
          ,
          <string-name>
            <surname>Finite-Dimensional Variational</surname>
          </string-name>
          Inequalities and Complementarity Problems, Springer Series in Operations Research, vol. II, Springer, New York,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Alber</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Ryazantseva</surname>
          </string-name>
          , Nonlinear Ill Posed Problems of Monotone Type, Springer, Dordrecht,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Bauschke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. L.</given-names>
            <surname>Combettes</surname>
          </string-name>
          ,
          <source>Convex Analysis and Monotone Operator Theory in Hilbert Spaces</source>
          , Springer, Berlin-Heidelberg-New York,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nemirovski</surname>
          </string-name>
          ,
          <article-title>Prox-method with rate of convergence O(1/T) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems</article-title>
          , SIAM J.
          <year>Optim</year>
          .
          <volume>15</volume>
          (
          <year>2004</year>
          )
          <fpage>229</fpage>
          -
          <lpage>251</lpage>
          . doi:
          <volume>10</volume>
          .1137/S1052623403425629.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>J.-K. Wang</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Abernethy</surname>
            ,
            <given-names>K. Y. Levy</given-names>
          </string-name>
          ,
          <article-title>No-Regret Dynamics in the Fenchel Game: A Unified Framework for Algorithmic Convex Optimization</article-title>
          , arXiv preprint arXiv:
          <volume>2111</volume>
          .
          <fpage>11309</fpage>
          . (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>G.</given-names>
            <surname>Gidel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Berard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Vincent</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lacoste-Julien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A Variational</given-names>
            <surname>Inequality</surname>
          </string-name>
          <article-title>Perspective on Generative Adversarial Networks</article-title>
          , arXiv preprint arXiv:
          <year>1802</year>
          .
          <fpage>10551</fpage>
          . (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Korpelevich</surname>
          </string-name>
          ,
          <article-title>An extragradient method for finding saddle points and for other problems</article-title>
          , Matecon.
          <volume>12</volume>
          (
          <year>1976</year>
          )
          <fpage>747</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>E. N.</given-names>
            <surname>Khobotov</surname>
          </string-name>
          ,
          <article-title>Modification of the extra-gradient method for solving variational inequalities and certain optimization problems</article-title>
          ,
          <source>USSR Comput. Math. Phys. 27</source>
          (
          <year>1987</year>
          )
          <fpage>120</fpage>
          -
          <lpage>127</lpage>
          . doi:
          <volume>10</volume>
          .1016/
          <fpage>0041</fpage>
          -
          <lpage>5553</lpage>
          (
          <issue>87</issue>
          )
          <fpage>90058</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>N.</given-names>
            <surname>Nadezhkina</surname>
          </string-name>
          , W. Takahashi,
          <article-title>Weak convergence theorem by an extragradient method for nonexpansive mappings and monotone mappings</article-title>
          ,
          <source>Journal of Optimization Theory and Applications</source>
          <volume>128</volume>
          (
          <year>2006</year>
          )
          <fpage>191</fpage>
          -
          <lpage>201</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Nomirovskii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. V.</given-names>
            <surname>Rublyov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Convergence of Extragradient Algorithm with Monotone Step Size Strategy for Variational Inequalities and Operator Equations</article-title>
          ,
          <source>Journal of Automation and Information Sciences</source>
          <volume>51</volume>
          (
          <year>2019</year>
          )
          <fpage>12</fpage>
          -
          <lpage>24</lpage>
          . doi:
          <volume>10</volume>
          .1615/JAutomatInfScien.v51.
          <year>i6</year>
          .
          <fpage>20</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tseng</surname>
          </string-name>
          ,
          <article-title>A modified forward-backward splitting method for maximal monotone mappings</article-title>
          ,
          <source>SIAM Journal on Control and Optimization</source>
          <volume>38</volume>
          (
          <year>2000</year>
          )
          <fpage>431</fpage>
          -
          <lpage>446</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Censor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gibali</surname>
          </string-name>
          , S. Reich,
          <article-title>The subgradient extragradient method for solving variational inequalities in Hilbert space</article-title>
          ,
          <source>Journal of Optimization Theory and Applications</source>
          <volume>148</volume>
          (
          <year>2011</year>
          )
          <fpage>318</fpage>
          -
          <lpage>335</lpage>
          (
          <year>2011</year>
          ).
          <source>doi:10.1007/s10957-010-9757-3.</source>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>F.</given-names>
            <surname>Bach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. Y.</given-names>
            <surname>Levy</surname>
          </string-name>
          ,
          <article-title>A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and Noise</article-title>
          , arXiv preprint arXiv:
          <year>1902</year>
          .
          <fpage>01637</fpage>
          . (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>K.</given-names>
            <surname>Antonakopoulos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Belmega</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mertikopoulos</surname>
          </string-name>
          ,
          <article-title>An adaptive mirror-prox method for variational inequalities with singular operators</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          <volume>32</volume>
          (
          <issue>NeurIPS</issue>
          ), Curran Associates, Inc.,
          <year>2019</year>
          ,
          <fpage>8455</fpage>
          -
          <lpage>8465</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <surname>P. I. Stetsyuk</surname>
          </string-name>
          ,
          <article-title>Bregman Extragradient Method with Monotone Rule of Step Adjustment, Cybernetics</article-title>
          and
          <string-name>
            <given-names>Systems</given-names>
            <surname>Analysis</surname>
          </string-name>
          .
          <volume>55</volume>
          (
          <year>2019</year>
          )
          <fpage>377</fpage>
          -
          <lpage>383</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10559-019- 00144-5.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Popov</surname>
          </string-name>
          ,
          <article-title>A modification of the Arrow-Hurwicz method for search of saddle points</article-title>
          ,
          <source>Mathematical notes of the Academy of Sciences of the USSR</source>
          .
          <volume>28</volume>
          (
          <year>1980</year>
          )
          <fpage>845</fpage>
          -
          <lpage>848</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chabak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Vedel</surname>
          </string-name>
          ,
          <article-title>A New Non-Euclidean Proximal Method for Equilibrium Problems</article-title>
          , in: O.
          <string-name>
            <surname>Chertov</surname>
          </string-name>
          et al. (Eds.),
          <source>Recent Developments in Data Science and Intelligent Analysis of Information</source>
          , volume
          <volume>836</volume>
          <source>of Advances in Intelligent Systems and Computing</source>
          , Springer, Cham,
          <year>2019</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>319</fpage>
          -97885-
          <issue>7</issue>
          _
          <fpage>6</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gorbunov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Taylor</surname>
          </string-name>
          , G. Gidel,
          <article-title>Last-Iterate Convergence of Optimistic Gradient Method for Monotone Variational Inequalities</article-title>
          , arXiv preprint arXiv:
          <volume>2205</volume>
          .
          <fpage>08446</fpage>
          . (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Malitsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Tam</surname>
          </string-name>
          ,
          <article-title>A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity</article-title>
          ,
          <source>SIAM Journal on Optimization</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>1451</fpage>
          -
          <lpage>1472</lpage>
          . doi:
          <volume>10</volume>
          .1137/18M1207260.
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>E. R.</given-names>
            <surname>Csetnek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Malitsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Tam</surname>
          </string-name>
          ,
          <article-title>Shadow Douglas-Rachford Splitting for Monotone Inclusions</article-title>
          , Appl Math Optim.
          <volume>80</volume>
          (
          <year>2019</year>
          )
          <fpage>665</fpage>
          -
          <lpage>678</lpage>
          . doi:
          <volume>10</volume>
          .1007/s00245-019-09597-8.
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>H.</given-names>
            <surname>Iiduka</surname>
          </string-name>
          , W. Takahashi,
          <article-title>Weak convergence of a projection algorithm for variational inequalities in a Banach space</article-title>
          ,
          <source>Journal of Mathematical Analysis and Applications</source>
          <volume>339</volume>
          (
          <year>2008</year>
          )
          <fpage>668</fpage>
          -
          <lpage>679</lpage>
          . https://doi.org/10.1016/j.jmaa.
          <year>2007</year>
          .
          <volume>07</volume>
          .019.
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shehu</surname>
          </string-name>
          ,
          <article-title>Single projection algorithm for variational inequalities in Banach spaces with application to contact problem</article-title>
          ,
          <source>Acta Math. Sci. 40</source>
          (
          <year>2020</year>
          )
          <fpage>1045</fpage>
          -
          <lpage>1063</lpage>
          . doi:
          <volume>10</volume>
          .1007/s10473-020-0412-2.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Cholamjiak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Sunthrayuth</surname>
          </string-name>
          ,
          <article-title>Modified Tseng's splitting algorithms for the sum of two monotone operators in Banach spaces</article-title>
          ,
          <source>AIMS Mathematics 6</source>
          (
          <issue>5</issue>
          ) (
          <year>2021</year>
          )
          <fpage>4873</fpage>
          -
          <lpage>4900</lpage>
          . doi:
          <volume>10</volume>
          .3934/math.2021286.
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>B.</given-names>
            <surname>Halpern</surname>
          </string-name>
          ,
          <article-title>Fixed points of nonexpanding maps</article-title>
          ,
          <source>Bull. Amer. Math. Soc</source>
          .
          <volume>73</volume>
          (
          <year>1967</year>
          )
          <fpage>957</fpage>
          -
          <lpage>961</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>H. K.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>Viscosity approximation methods for nonexpansive mappings</article-title>
          ,
          <source>Journal of Mathematical Analysis and Applications</source>
          <volume>298</volume>
          (
          <year>2004</year>
          )
          <fpage>279</fpage>
          -
          <lpage>291</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.jmaa.
          <year>2004</year>
          .
          <volume>04</volume>
          .059.
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          [33]
          <string-name>
            <surname>P.-E. Mainge</surname>
          </string-name>
          ,
          <article-title>Strong convergence of projected subgradient methods for nonsmooth and nonstrictly convex minimization</article-title>
          ,
          <source>Set-Valued Analysis</source>
          <volume>16</volume>
          (
          <year>2008</year>
          )
          <fpage>899</fpage>
          -
          <lpage>912</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>E.</given-names>
            <surname>Gorbunov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Loizou</surname>
          </string-name>
          , G. Gidel, Extragradient Method:
          <article-title>O(1/K) Last-Iterate Convergence for Monotone Variational Inequalities and Connections With Cocoercivity</article-title>
          ,
          <source>arXiv preprint arXiv: 2110</source>
          .
          <fpage>04261</fpage>
          . (
          <year>2021</year>
          ). doi:
          <volume>10</volume>
          .48550/arXiv.2110.04261.
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>T.</given-names>
            <surname>Yoon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. K.</given-names>
            <surname>Ryu</surname>
          </string-name>
          ,
          <article-title>Accelerated algorithms for smooth convex-concave minimax problems with O(1/k2) rate on squared gradient norm</article-title>
          ,
          <source>Proceedings of the 38th International Conference on Machine Learning, Proceedings of Machine Learning Research</source>
          <volume>139</volume>
          (
          <year>2021</year>
          )
          <fpage>12098</fpage>
          -
          <lpage>12109</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>