The Regularized Operator Extrapolation Algorithm for Variational Inequalities

Vladimir Semenov and Oleh Kharkov

Taras Shevchenko National University of Kyiv, 64/13 Volodymyrska Street, Kyiv, 01161, Ukraine

Abstract
In this article, a new algorithm for solving monotone variational inequalities in Hilbert spaces is proposed and investigated. Variational inequalities provide a universal instrument for formulating many problems of mathematical physics, machine learning, data analysis, optimal control, and operations research. The proposed iterative algorithm is a regularized (by means of the Halpern scheme) variant of the operator extrapolation method. In terms of the number of calculations required to perform an iterative step, this algorithm has an advantage over the extragradient method and the method of extrapolation from the past. For variational inequalities with monotone Lipschitz continuous operators acting in Hilbert space, a strong convergence theorem for the method is proved.

Keywords
variational inequality, monotone operator, saddle point problem, operator extrapolation method, regularization, Halpern method, strong convergence

                         1. Introduction
                             This article continues the series of articles [1–3] devoted to the development of computationally
                         efficient and adaptive algorithms for solving variational inequalities and equilibrium problems.
Variational inequalities provide a universal instrument for formulating many topical problems of mathematical physics, machine learning, data analysis, optimal control, and operations research [4, 5]. The development of algorithms for solving variational inequalities and related problems (equilibrium problems, game problems) is an extremely popular field of research in computational mathematics [6–35]. Some problems of non-smooth optimization can be solved effectively if they are formulated as saddle point problems. This approach makes it possible to apply algorithms for solving variational inequalities in order to obtain a solution of the optimization problem [11]. Recently, the following approach to building fast algorithms for convex programming problems was developed: using duality theory, the problem is transformed into a convex-concave saddle point problem (the Fenchel game), and extragradient algorithms for solving variational inequalities are then applied [12]. Note that the increased use of generative adversarial networks (GANs) and other adversarial or robust learning models has led to interest among machine learning specialists in algorithms for solving saddle point problems and variational inequalities [13].
The simplest method for solving variational inequalities is an analogue of the gradient descent method, which in the case of the saddle point problem is known as the gradient descent-ascent method [6]. However, this method may not converge for variational inequalities with a monotone operator.
A well-known modification of the gradient descent method with projection for variational inequalities is the Korpelevich extragradient method [14–17], whose iteration requires two evaluations of the operator of the problem and two metric projections onto the admissible set. Computationally cheap variants of the extragradient algorithm with one metric projection onto the admissible set were proposed in the articles [18, 19]. Variants of the Korpelevich extragradient method, including adaptive ones, are proposed in the articles [20–22]. In Popov's article [23], a modification of the gradient descent-ascent method, different from the extragradient algorithm, was proposed for finding saddle points of convex-concave functions. The iteration of this algorithm is cheaper than the

                         Dynamical System Modeling and Stability Investigation (DSMSI-2023), December 19-21, 2023, Kyiv, Ukraine
                         EMAIL: semenov.volodya@gmail.com (V. Semenov); olehharek@gmail.com (O. Kharkov)
                         ORCID: 0000-0002-3280-8245 (V. Semenov); 0000-0002-9049-4846 (O. Kharkov)
                                      ©️ 2023 Copyright for this paper by its authors.
                                      Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
                                      CEUR Workshop Proceedings (CEUR-WS.org)


iteration of the extragradient algorithm in terms of the number of operator value calculations: one instead of two. Popov's algorithm for variational inequalities became known among machine learning specialists as Extrapolation from the Past [13]. Important results related to this algorithm are obtained in the papers [1, 2, 13, 23–25]. In particular, its adaptive modifications are proposed in the papers [1, 2].
Further development of these ideas and attempts to reduce the complexity of an iteration while preserving the nature of convergence led to the invention of the new Forward-Reflected-Backward Algorithm for solving operator inclusions [26, 27]. This algorithm has an advantage over the Korpelevich extragradient method and the method of Extrapolation from the Past in terms of the number of calculations required for an iterative step. The scheme is known as Optimistic Gradient Descent Ascent [13] and as the Operator Extrapolation Algorithm [3]. At present, the task of developing a strongly convergent variant of the operator extrapolation algorithm for variational inequalities in Hilbert space remains relevant. Strongly convergent modifications of the extragradient algorithm are proposed in [2, 7]. Recently, many results have been obtained on algorithms for solving variational problems in Banach spaces [3, 9, 28–30]. In particular, analogues of the Korpelevich, Tseng, and Popov algorithms for problems in uniformly convex Banach spaces have been constructed and theoretically studied. In [3], an adaptive version of the Forward-Reflected-Backward Algorithm for monotone variational inequalities in a 2-uniformly convex and uniformly smooth Banach space was proposed.
In this article, a new algorithm for solving variational inequalities in Hilbert spaces is proposed. The algorithm is a variant of the Operator Extrapolation Method (the Forward-Reflected-Backward Algorithm from [26]) regularized by means of the Halpern scheme [31, 32]. For variational inequalities with monotone Lipschitz continuous operators acting in Hilbert space, a strong convergence theorem for the method is proved.

2. Preliminaries and problem statement
Let us consider the variational inequality:

$$\text{find } x \in C: \ \langle Ax, y - x \rangle \ge 0 \quad \forall y \in C, \tag{1}$$

where $C$ is a nonempty subset of a Hilbert space $H$ and $A$ is an operator acting from $H$ into $H$. We denote the set of solutions of (1) by $S$.
Assume that the following conditions are met:
• $C \subseteq H$ is a convex and closed set;
• the operator $A: H \to H$ is monotone on $C$, which means
$$\langle Ax - Ay, x - y \rangle \ge 0 \quad \forall x, y \in C,$$
and Lipschitz continuous on $C$ (with constant $L > 0$), which means
$$\|Ax - Ay\| \le L \|x - y\| \quad \forall x, y \in C;$$
• the set $S$ is nonempty.
Let us consider the dual variational inequality:

$$\text{find } x \in C: \ \langle Ay, x - y \rangle \le 0 \quad \forall y \in C. \tag{2}$$

We denote the set of solutions of (2) by $S_d$. It is well known that $S_d$ is a convex and closed set [4]. Inequality (2) is called the weak or dual formulation of the variational inequality (1) (a Minty type inequality), and the solutions of (2) are called weak solutions of the variational inequality (1). For a monotone operator $A$ we always have $S \subseteq S_d$: indeed, if $x \in S$, then monotonicity gives $\langle Ay, y - x \rangle \ge \langle Ax, y - x \rangle \ge 0$ for all $y \in C$. Under our conditions (the operator is also continuous), we have $S_d \subseteq S$ [4].
Let $K$ be a nonempty closed and convex subset of a Hilbert space $H$. It is known that for each $x \in H$ there exists a unique element $z \in K$ such that

$$\|z - x\| = \inf_{y \in K} \|y - x\|.$$

This element $z \in K$ is denoted by $P_K x$, and the corresponding operator $P_K: H \to K$ is called the projection operator from $H$ onto $K$ (the metric projection) [4]. For this operator the following statements are equivalent:

$$z = P_K x \quad \Leftrightarrow \quad z \in K, \ \langle z - x, y - z \rangle \ge 0 \quad \forall y \in K.$$

The last inequality is equivalent to the following one [4]:

$$\|y - P_K x\|^2 \le \|y - x\|^2 - \|P_K x - x\|^2 \quad \forall y \in K.$$
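To make the projection characterization concrete, here is a minimal Python sketch (an illustrative choice: $K$ is the Euclidean unit ball, for which $P_K$ has a closed form) that numerically verifies the inequality $\langle z - x, y - z \rangle \ge 0$ for $z = P_K x$:

```python
import numpy as np

def project_ball(x, center, radius):
    """Metric projection P_K x onto the closed ball K = {y : ||y - center|| <= radius}."""
    d = x - center
    nd = np.linalg.norm(d)
    return x if nd <= radius else center + radius * d / nd

rng = np.random.default_rng(0)
x = 10.0 * rng.normal(size=5)                       # a point typically outside K
z = project_ball(x, center=np.zeros(5), radius=1.0)

# Check the characterization <z - x, y - z> >= 0 on random points y of K.
for _ in range(1000):
    y = project_ball(rng.normal(size=5), np.zeros(5), 1.0)
    assert np.dot(z - x, y - z) >= -1e-9
```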


The variational inequality (1) can be formulated as the problem of finding a fixed point [4]:

$$x = P_C(x - \lambda Ax), \tag{3}$$

where $\lambda > 0$. Formulation (3) is useful because it leads to the iterative scheme

$$x_{n+1} = P_C(x_n - \lambda Ax_n), \tag{4}$$

which is weakly convergent for inverse strongly monotone (also known as cocoercive) operators $A: H \to H$ [10]. However, in general, scheme (4) does not converge for Lipschitz continuous monotone operators. The most famous modification of scheme (4) is the Korpelevich extragradient method [14]:

$$x_{n+1} = P_C\big(x_n - \lambda A P_C(x_n - \lambda Ax_n)\big),$$

whose iteration requires two evaluations of the operator of the problem and two metric projections onto the admissible set. Computationally cheap variants of the extragradient algorithm with one metric projection onto the admissible set were proposed in the articles [18, 19]. Further development of these ideas and attempts to reduce the complexity of an iteration while preserving the nature of convergence led to the invention of the new Forward-Reflected-Backward Algorithm [26]:

$$x_{n+1} = P_C(x_n - 2\lambda Ax_n + \lambda Ax_{n-1}). \tag{5}$$

This scheme is known as Optimistic Gradient Descent Ascent [13] and as the Operator Extrapolation Algorithm [3]. The weak convergence of algorithm (5) is proved in [26].
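The difference in behavior is easy to observe numerically. The following Python sketch uses an illustrative setup ($C = H = \mathbb{R}^2$ and the classical monotone, non-cocoercive rotation operator $A(u, v) = (v, -u)$ with $L = 1$, whose unique solution of (1) is the origin): scheme (4) spirals outward, while scheme (5) converges.

```python
import numpy as np

A = lambda p: np.array([p[1], -p[0]])   # monotone, Lipschitz with L = 1, not cocoercive
lam = 0.2                               # satisfies lam < 1/(2L), as required for (5)

# Scheme (4): x_{n+1} = x_n - lam * A x_n  (C = H, so P_C is the identity)
x = np.array([1.0, 0.0])
for _ in range(200):
    x = x - lam * A(x)
print(np.linalg.norm(x))                # grows: scheme (4) diverges on this problem

# Scheme (5): x_{n+1} = x_n - 2*lam*A x_n + lam*A x_{n-1}
x_prev = x_cur = np.array([1.0, 0.0])
for _ in range(200):
    x_next = x_cur - 2 * lam * A(x_cur) + lam * A(x_prev)
    x_prev, x_cur = x_cur, x_next
print(np.linalg.norm(x_cur))            # decays toward the solution 0
```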
The goal of this article is to obtain a strongly convergent variant of the Operator Extrapolation Algorithm. To do this, we regularize algorithm (5) using the well-known Halpern scheme [31]:

$$y_{n+1} = \alpha_n y + (1 - \alpha_n) T y_n, \tag{6}$$

where $T: H \to H$ is a nonexpansive operator and $y \in H$. If the set of fixed points $F(T) = \{x \in H : x = Tx\}$ is nonempty and $\alpha_n \in (0,1)$, $\lim_{n\to\infty} \alpha_n = 0$, $\sum_{n=1}^{\infty} \alpha_n = +\infty$, then scheme (6) is strongly convergent: $\lim_{n\to\infty} \|y_n - P_{F(T)} y\| = 0$.

Remark 1. Halpern's iterative scheme (6) coincides with Bakushinskii's iterative regularization scheme [7] for the method of successive approximations $x_{n+1} = T x_n$ for approximating fixed points of the operator $T: H \to H$.
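For intuition about scheme (6), here is a minimal Python sketch with $T$ taken as a plane rotation (an illustrative nonexpansive operator with $F(T) = \{0\}$, so the limit $P_{F(T)} y$ is the origin) and $\alpha_n = 1/(n+1)$:

```python
import numpy as np

theta = 0.5
T = lambda p: np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]]) @ p  # nonexpansive, F(T) = {0}

y = np.array([2.0, 1.0])          # anchor point
yn = np.array([5.0, -3.0])        # arbitrary starting point
for n in range(1, 10001):
    alpha = 1.0 / (n + 1)         # alpha_n -> 0 and the series of alpha_n diverges
    yn = alpha * y + (1 - alpha) * T(yn)
print(np.linalg.norm(yn))         # small and shrinking: y_n -> P_{F(T)} y = 0
```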
Now let us recall well-known lemmas about recurrent numerical inequalities.

Lemma 1. Let $(a_n)$ be a sequence of nonnegative numbers that satisfies the recurrence inequality

$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \beta_n \quad \text{for all } n \ge 1,$$

where the sequences $(\alpha_n)$ and $(\beta_n)$ have the properties $\alpha_n \in (0,1)$ and $\beta_n \le \beta$, where $\beta \ge 0$. Then

$$a_n \le e^{-\sum_{k=1}^{n-1} \alpha_k} a_1 + \beta.$$

Lemma 2 ([7]). Let $(a_n)$ be a sequence of nonnegative numbers that satisfies the recurrence inequality

$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \beta_n \quad \text{for all } n \ge 1,$$

where the sequences $(\alpha_n)$ and $(\beta_n)$ have the properties $\alpha_n \in (0,1)$, $\sum_{n=1}^{\infty} \alpha_n = +\infty$, and $\limsup_{n\to\infty} \beta_n \le 0$. Then $\lim_{n\to\infty} a_n = 0$.

Lemma 3 ([33]). Let $(a_n)$ be a numerical sequence that has a subsequence $(a_{n_k})$ with the property $a_{n_k} < a_{n_k+1}$ for all $k \ge 1$. Then there exists a nondecreasing sequence $(m_k)$ of natural numbers such that $m_k \to \infty$ and, for all sufficiently large $k$ (say, $k \ge n_1$), $a_{m_k} \le a_{m_k+1}$ and $a_k \le a_{m_k+1}$.


3. Regularized Operator Extrapolation Algorithm

In the article [26], the following Operator Extrapolation Algorithm (Forward-Reflected-Backward Algorithm) was proposed for solving the variational inequality (1):

$$x_{n+1} = P_C\big(x_n - \lambda_n Ax_n - \lambda_{n-1}(Ax_n - Ax_{n-1})\big) = P_C\big(x_n - (\lambda_n + \lambda_{n-1}) Ax_n + \lambda_{n-1} Ax_{n-1}\big), \tag{7}$$

where the parameters $\lambda_n$ satisfy the condition $0 < \inf_n \lambda_n \le \sup_n \lambda_n < \frac{1}{2L}$.
Remark 2. Modifications with the Bregman projection and the generalized Alber projection are proposed in [2, 3]. In terms of the number of calculations required to perform an iterative step, this algorithm has an advantage over the Korpelevich extragradient method

$$y_n = P_C(x_n - \lambda_n Ax_n), \quad x_{n+1} = P_C(x_n - \lambda_n Ay_n),$$

and the method of extrapolation from the past (Popov's method)

$$y_n = P_C(x_n - \lambda_n Ay_{n-1}), \quad x_{n+1} = P_C(x_n - \lambda_n Ay_n).$$

It is known that for variational inequalities (1) with monotone and Lipschitz continuous operators acting in Hilbert space, algorithm (7) converges weakly with an $O(1/n)$ estimate of the efficiency in terms of the gap function [3]. Based on the well-known Halpern method for approximating fixed points of nonexpansive operators [31, 32], we construct the following regularized version of algorithm (7).
Algorithm 1. Regularized Operator Extrapolation Algorithm.

Initialization. Choose elements $y \in H$, $x_0, x_1 \in C$, a sequence of positive numbers $(\lambda_n)$, and a sequence $(\alpha_n)$ such that

$$\alpha_n \in (0,1), \quad \lim_{n\to\infty} \alpha_n = 0, \quad \sum_{n=1}^{\infty} \alpha_n = +\infty.$$

Iterations. Generate a sequence $(x_n)$ using the iterative scheme

$$x_{n+1} = P_C\big(\alpha_n y + (1 - \alpha_n) x_n - \lambda_n Ax_n - (1 - \alpha_n) \lambda_{n-1} (Ax_n - Ax_{n-1})\big).$$

For the positive parameters $\lambda_n$ we assume that the following condition is fulfilled:

$$0 < \inf_n \lambda_n \le \sup_n \lambda_n < \frac{1}{2L}. \tag{8}$$
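A minimal Python sketch of Algorithm 1, under stated assumptions: the operator A, the projector proj_C, and the Lipschitz constant L are supplied by the user, and the choices $\lambda_n = 1/(4L)$ and $\alpha_n = 1/(n+1)$ are illustrative instances of the required conditions. Note that each iteration needs only one evaluation of $A$ and one projection.

```python
import numpy as np

def regularized_operator_extrapolation(A, proj_C, y, x0, x1, L, n_iters=2000):
    """Sketch of Algorithm 1 with constant steps lam_n = 1/(4L), satisfying (8)."""
    lam = 1.0 / (4.0 * L)
    x = x1
    Ax_prev, Ax = A(x0), A(x1)
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1)             # alpha_n in (0,1), -> 0, sum = +infinity
        x = proj_C(alpha * y + (1 - alpha) * x
                   - lam * Ax - (1 - alpha) * lam * (Ax - Ax_prev))
        Ax_prev, Ax = Ax, A(x)            # one operator evaluation per iteration
    return x

# Illustrative usage: rotation operator on the unit ball; here S = {0}, so P_S y = 0.
A = lambda p: np.array([p[1], -p[0]])
proj = lambda p: p if np.linalg.norm(p) <= 1.0 else p / np.linalg.norm(p)
x = regularized_operator_extrapolation(A, proj, y=np.zeros(2),
                                       x0=np.array([1.0, 0.0]),
                                       x1=np.array([0.9, 0.1]), L=1.0)
print(np.linalg.norm(x))                  # close to 0
```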




In the next sections we will prove that the sequence $(x_n)$ generated by Algorithm 1 strongly converges to the projection of the point $y$ onto the set $S$. Therefore, to find the normal solution (the solution with the smallest norm) of the variational inequality (1), we can use the scheme

$$x_{n+1} = P_C\big((1 - \alpha_n) x_n - \lambda_n Ax_n - (1 - \alpha_n) \lambda_{n-1} (Ax_n - Ax_{n-1})\big).$$

Remark 3. For a smooth saddle point problem

$$\min_{x \in C} \max_{y \in D} L(x, y)$$

Algorithm 1 takes the form

$$x_{n+1} = P_C\big(\alpha_n \bar{x} + (1 - \alpha_n) x_n - \lambda_n \nabla_1 L(x_n, y_n) - (1 - \alpha_n) \lambda_{n-1} (\nabla_1 L(x_n, y_n) - \nabla_1 L(x_{n-1}, y_{n-1}))\big),$$
$$y_{n+1} = P_D\big(\alpha_n \bar{y} + (1 - \alpha_n) y_n + \lambda_n \nabla_2 L(x_n, y_n) + (1 - \alpha_n) \lambda_{n-1} (\nabla_2 L(x_n, y_n) - \nabla_2 L(x_{n-1}, y_{n-1}))\big),$$

where $(\bar{x}, \bar{y}) \in C \times D$ is the anchor point.
   Now let's prove the strong convergence of Algorithm 1.

    4. Main inequalities
First, we will prove two auxiliary inequalities that will allow us to use Lemmas 1 and 2 in the proof of convergence of Algorithm 1.
Lemma 4. For the sequence $(x_n)$ generated by Algorithm 1, the following inequality holds:

$$\|x_{n+1} - z\|^2 + 2\lambda_n \langle A x_n - A x_{n+1}, x_{n+1} - z \rangle + \frac{1}{2} \|x_{n+1} - x_n\|^2$$
$$\le (1 - \alpha_n) \left( \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2 \right)$$
$$+ \alpha_n \|y - z\|^2 - \alpha_n \|y - x_{n+1}\|^2 - \left( \frac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \|x_{n+1} - x_n\|^2$$
$$- (1 - \alpha_n) \left( \frac{1}{2} - \lambda_{n-1} L \right) \|x_n - x_{n-1}\|^2, \tag{9}$$

where $z \in S$.
Proof. Let $z \in S$. From the characterization of the metric projection we have

$$\langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n + \lambda_n A x_n + (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}), z - x_{n+1} \rangle \ge 0. \tag{10}$$

The monotonicity of the operator $A$ and the inclusion $z \in S$ give

$$\langle \lambda_n A x_n + (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}), z - x_{n+1} \rangle$$
$$= \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_{n+1} \rangle + \lambda_n \underbrace{\langle A x_{n+1}, z - x_{n+1} \rangle}_{\le 0}$$
$$\le \lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle + (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle. \tag{11}$$

By using (11) in (10), we obtain

$$0 \le 2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle + 2\lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle$$
$$+ 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle + 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle. \tag{12}$$

Now let us estimate from above the term $2\lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle$ in (12). We obtain

$$2\lambda_{n-1} \langle A x_n - A x_{n-1}, x_n - x_{n+1} \rangle \le 2\lambda_{n-1} \|A x_n - A x_{n-1}\| \|x_n - x_{n+1}\|$$
$$\le 2\lambda_{n-1} L \|x_n - x_{n-1}\| \|x_{n+1} - x_n\| \le \lambda_{n-1} L \|x_n - x_{n-1}\|^2 + \lambda_{n-1} L \|x_{n+1} - x_n\|^2.$$

Next we transform the term $2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle$ in (12). We obtain

$$2 \langle x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n, z - x_{n+1} \rangle$$
$$= \|\alpha_n y + (1 - \alpha_n) x_n - z\|^2 - \|x_{n+1} - z\|^2 - \|x_{n+1} - \alpha_n y - (1 - \alpha_n) x_n\|^2. \tag{13}$$

In order to transform the difference $\|\alpha_n y + (1 - \alpha_n) x_n - z\|^2 - \|\alpha_n y + (1 - \alpha_n) x_n - x_{n+1}\|^2$ in (13), let us use the following identity:

$$\|\alpha u + (1 - \alpha) v - w\|^2 = \|(v - w) - \alpha (v - u)\|^2 = \|v - w\|^2 - 2\alpha \langle v - w, v - u \rangle + \alpha^2 \|v - u\|^2$$
$$= \|v - w\|^2 - \alpha \left( \|v - w\|^2 + \|v - u\|^2 - \|u - w\|^2 \right) + \alpha^2 \|v - u\|^2$$
$$= (1 - \alpha) \|v - w\|^2 + \alpha \|u - w\|^2 - \alpha (1 - \alpha) \|u - v\|^2,$$

where $u, v, w \in H$ and $\alpha \in \mathbb{R}$. Then

$$\|\alpha_n y + (1 - \alpha_n) x_n - z\|^2 - \|\alpha_n y + (1 - \alpha_n) x_n - x_{n+1}\|^2$$
$$= (1 - \alpha_n) \|x_n - z\|^2 - (1 - \alpha_n) \|x_n - x_{n+1}\|^2 + \alpha_n \|y - z\|^2 - \alpha_n \|y - x_{n+1}\|^2.$$

Now we have the inequality

$$0 \le (1 - \alpha_n) \|x_n - z\|^2 - (1 - \alpha_n) \|x_n - x_{n+1}\|^2 + \alpha_n \|y - z\|^2 - \alpha_n \|y - x_{n+1}\|^2 - \|x_{n+1} - z\|^2$$
$$+ 2\lambda_n \langle A x_n - A x_{n+1}, z - x_{n+1} \rangle + 2 (1 - \alpha_n) \lambda_{n-1} \langle A x_n - A x_{n-1}, z - x_n \rangle$$
$$+ (1 - \alpha_n) \lambda_{n-1} L \|x_n - x_{n-1}\|^2 + (1 - \alpha_n) \lambda_{n-1} L \|x_{n+1} - x_n\|^2. \tag{14}$$

Rearranging the terms in (14), we finally get (9), which is what had to be proved. ■
Lemma 5. For the sequence $(x_n)$ generated by Algorithm 1, the following inequality holds:

$$\|x_{n+1} - z\|^2 + 2\lambda_n \langle A x_n - A x_{n+1}, x_{n+1} - z \rangle + \frac{1}{2} \|x_{n+1} - x_n\|^2$$
$$\le (1 - \alpha_n) \left( \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2 \right) + 2\alpha_n \langle y - z, x_{n+1} - z \rangle$$
$$- \left( \frac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \|x_{n+1} - x_n\|^2 - (1 - \alpha_n) \left( \frac{1}{2} - \lambda_{n-1} L \right) \|x_n - x_{n-1}\|^2, \tag{15}$$

where $z \in S$.

Proof. Let us apply the elementary inequality $\|a + b\|^2 \le \|a\|^2 + 2 \langle b, a + b \rangle$. We obtain

$$\|y - z\|^2 = \|(y - x_{n+1}) + (x_{n+1} - z)\|^2 \le \|y - x_{n+1}\|^2 + 2 \langle x_{n+1} - z, y - z \rangle. \tag{16}$$

By using (16) in (9), we get (15), which is what had to be proved. ■

5. Strong convergence

Let us now prove that the sequence $(x_n)$ is bounded.

Lemma 6. Let condition (8) be fulfilled. Then the sequence $(x_n)$ generated by Algorithm 1 is bounded.

Proof. Since $0 < \inf_n \lambda_n \le \sup_n \lambda_n < \frac{1}{2L}$ and $\lim_{n\to\infty} \alpha_n = 0$, there exists a number $n_0 \ge 1$ such that for all $n \ge n_0$

$$\frac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L = \left( \frac{1}{2} - \lambda_{n-1} L \right) - \alpha_n (1 - \lambda_{n-1} L) \ge 0 \quad \text{and} \quad (1 - \alpha_n) \left( \frac{1}{2} - \lambda_{n-1} L \right) \ge 0. \tag{17}$$
From (9) and (17) we obtain that for $n \ge n_0$ the following inequality holds:

$$a_{n+1} \le (1 - \alpha_n) a_n + \alpha_n \|y - z\|^2, \tag{18}$$

where $a_n = \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2$ and $z \in S$.

Let us obtain a lower bound for $a_n$. We have

$$a_n = \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2$$
$$\ge \|x_n - z\|^2 - 2\lambda_{n-1} \|A x_{n-1} - A x_n\| \|x_n - z\| + \frac{1}{2} \|x_{n-1} - x_n\|^2$$
$$\ge \|x_n - z\|^2 - 2\lambda_{n-1} L \|x_{n-1} - x_n\| \|x_n - z\| + \frac{1}{2} \|x_{n-1} - x_n\|^2$$
$$\ge (1 - \lambda_{n-1} L) \|x_n - z\|^2 + \left( \frac{1}{2} - \lambda_{n-1} L \right) \|x_n - x_{n-1}\|^2 \ge 0. \tag{19}$$
From inequalities (18), (19) and Lemma 1, the boundedness of the sequences $(a_n)$ and $(x_n)$ follows, which is what had to be proved. ■

Let us formulate the main result.

Theorem 1. Let $C$ be a nonempty convex closed subset of a Hilbert space $H$, let $A : H \to H$ be a monotone and Lipschitz continuous operator on the set $C$, let $S \ne \emptyset$ and $y \in H$, and let condition (8) be fulfilled. Then the sequence $(x_n)$ generated by Algorithm 1 strongly converges to $z = P_S y$.

Proof. Let $z = P_S y$. Lemma 6 implies the existence of a number $M > 0$ such that

$$\left| \langle y - z, x_{n+1} - z \rangle \right| \le M \quad \text{for all } n \ge 1.$$
Then from Lemma 5 the following inequality follows:

$$a_{n+1} \le a_n - \alpha_n a_n - \left( \frac{1}{2} - \alpha_n - (1 - \alpha_n) \lambda_{n-1} L \right) \|x_{n+1} - x_n\|^2$$
$$- (1 - \alpha_n) \left( \frac{1}{2} - \lambda_{n-1} L \right) \|x_n - x_{n-1}\|^2 + 2\alpha_n M, \tag{20}$$

where $a_n = \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2$.
Consider the numerical sequence $(a_n)$. Two options are possible:

1. there exists a number $\bar{n} \ge 1$ such that $a_{n+1} \le a_n$ for all $n \ge \bar{n}$;
2. there exists an increasing sequence of numbers $(n_k)$ such that $a_{n_k+1} > a_{n_k}$ for all $k \ge 1$.

First, let us consider option 1. In this case $\lim_{n\to\infty} a_n$ exists. Since $a_{n+1} - a_n \to 0$, $\alpha_n \to 0$, and inequalities (20) are fulfilled, letting $n \to \infty$ we obtain

$$\lim_{n\to\infty} \|x_{n+1} - x_n\| = 0. \tag{21}$$

Let us show that all weak partial limits of the sequence $(x_n)$ belong to $S$. Consider a subsequence $(x_{n_k})$ that converges weakly to some point $w \in H$. Obviously, $w \in C$. Let us show that $w \in S$. We have

$$\langle x_{n_k+1} - \alpha_{n_k} y - (1 - \alpha_{n_k}) x_{n_k} + \lambda_{n_k} A x_{n_k} + (1 - \alpha_{n_k}) \lambda_{n_k-1} (A x_{n_k} - A x_{n_k-1}), u - x_{n_k+1} \rangle \ge 0 \quad \forall u \in C.$$

Using the monotonicity of the operator $A$, we derive the estimate

$$\langle Au, u - x_{n_k} \rangle \ge \langle A x_{n_k}, u - x_{n_k} \rangle = \langle A x_{n_k}, x_{n_k+1} - x_{n_k} \rangle + \langle A x_{n_k}, u - x_{n_k+1} \rangle$$
$$\ge \langle A x_{n_k}, x_{n_k+1} - x_{n_k} \rangle + \frac{1}{\lambda_{n_k}} \langle \alpha_{n_k} (y - x_{n_k}) + x_{n_k} - x_{n_k+1}, u - x_{n_k+1} \rangle$$
$$- (1 - \alpha_{n_k}) \frac{\lambda_{n_k-1}}{\lambda_{n_k}} \langle A x_{n_k} - A x_{n_k-1}, u - x_{n_k+1} \rangle \quad \forall u \in C.$$

From $\lim_{n\to\infty} \alpha_n = 0$, the boundedness of the sequence $(x_n)$, (21), and the Lipschitz continuity of the operator $A$, we obtain

$$\lim_{k\to\infty} \langle Au, u - x_{n_k} \rangle \ge 0 \quad \forall u \in C.$$

On the other hand,

$$\langle Au, u - w \rangle = \lim_{k\to\infty} \langle Au, u - x_{n_k} \rangle \ge 0 \quad \forall u \in C.$$

Thus, $w \in S_d = S$.
Let us prove that

$$\limsup_{n\to\infty} \langle y - z, x_{n+1} - z \rangle \le 0. \tag{22}$$

Choose a subsequence $(x_{n_k})$ such that

$$\lim_{k\to\infty} \langle y - z, x_{n_k} - z \rangle = \limsup_{n\to\infty} \langle y - z, x_{n+1} - z \rangle.$$

We may assume that $x_{n_k} \rightharpoonup w \in S$ weakly. Then we obtain

$$\limsup_{n\to\infty} \langle y - z, x_{n+1} - z \rangle = \lim_{k\to\infty} \langle y - z, x_{n_k} - z \rangle = \langle y - z, w - z \rangle = \langle y - P_S y, w - P_S y \rangle \le 0,$$

which proves (22). Now from (22), the inequality

$$a_{n+1} \le (1 - \alpha_n) a_n + 2\alpha_n \langle y - z, x_{n+1} - z \rangle,$$

which holds for sufficiently large $n$, and Lemma 2, we conclude that

$$a_n = \|x_n - z\|^2 + 2\lambda_{n-1} \langle A x_{n-1} - A x_n, x_n - z \rangle + \frac{1}{2} \|x_n - x_{n-1}\|^2 \to 0.$$

From (19) we obtain $\lim_{n\to\infty} \|x_n - z\| = 0$.

Now let us consider option 2. In this case there exists a nondecreasing sequence of numbers $(m_k)$ with the following properties (Lemma 3):

1. $m_k \to \infty$;
2. $a_{m_k} \le a_{m_k+1}$ for all $k \ge n_1$;
3. $a_k \le a_{m_k+1}$ for all $k \ge n_1$.

From the inequality of Lemma 5, (19), and the second property we get

$$\left( \frac{1}{2} - \alpha_{m_k} - (1 - \alpha_{m_k}) \lambda_{m_k-1} L \right) \|x_{m_k+1} - x_{m_k}\|^2 + (1 - \alpha_{m_k}) \left( \frac{1}{2} - \lambda_{m_k-1} L \right) \|x_{m_k} - x_{m_k-1}\|^2 \le 2\alpha_{m_k} M.$$
This leads us to

$$\lim_{k\to\infty} \|x_{m_k+1} - x_{m_k}\| = \lim_{k\to\infty} \|x_{m_k} - x_{m_k-1}\| = 0.$$

By similar reasoning we prove that the weak partial limits of the sequence $(x_{m_k})$ belong to $S$. From the identity

$$\langle y - z, x_{m_k+1} - z \rangle = \langle y - z, x_{m_k} - z \rangle + \langle y - z, x_{m_k+1} - x_{m_k} \rangle$$

we obtain

$$\limsup_{k\to\infty} \langle y - z, x_{m_k+1} - z \rangle = \limsup_{k\to\infty} \langle y - z, x_{m_k} - z \rangle.$$

As in the previous part, we obtain the inequality

$$\limsup_{k\to\infty} \langle y - z, x_{m_k+1} - z \rangle \le 0.$$

Then we get

$$a_{m_k+1} \le (1 - \alpha_{m_k}) a_{m_k} + 2\alpha_{m_k} \langle y - z, x_{m_k+1} - z \rangle \le (1 - \alpha_{m_k}) a_{m_k+1} + 2\alpha_{m_k} \langle y - z, x_{m_k+1} - z \rangle.$$

Taking into account the third property, we obtain

$$a_k \le a_{m_k+1} \le 2 \langle y - z, x_{m_k+1} - z \rangle.$$

As a result, we get

$$\limsup_{k\to\infty} a_k \le 2 \limsup_{k\to\infty} \langle y - z, x_{m_k+1} - z \rangle \le 0.$$

Thus, $\lim_{n\to\infty} a_n = 0$ and, consequently, $\lim_{n\to\infty} \|x_n - z\| = 0$, which is what had to be proved. ■



6. Conclusions

In this article, a new algorithm for solving variational inequalities in Hilbert spaces has been proposed and investigated. The proposed iterative algorithm is a regularized (by means of the Halpern scheme [31, 32]) variant of the Operator Extrapolation Method (the Forward-Reflected-Backward Algorithm from [26]). For variational inequalities with monotone Lipschitz continuous operators acting in Hilbert space, a strong convergence theorem for the method has been proved.
An important issue is the study of the asymptotic behavior of Algorithm 1 in the situation $C = H$:

$$x_{n+1} = \alpha_n y + (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1}).$$

More precisely, the issue concerns the behavior of the norm $\|A x_n\|$. In our opinion, the estimate should be $\|A x_n\| = O(1/n)$. Note that in [34] an estimate of the form $\|A x_n\| = O(1/\sqrt{n})$ was obtained for the extragradient method, and in [35] the estimate $\|A x_n\| = O(1/n)$ was obtained for the extragradient method with Halpern regularization:

$$y_n = x_n + \frac{1}{n+2}(x_0 - x_n) - \frac{1}{8L} A x_n, \quad x_{n+1} = x_n + \frac{1}{n+2}(x_0 - x_n) - \frac{1}{8L} A y_n.$$
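A minimal Python sketch of this regularized extragradient scheme, with an illustrative monotone operator (the rotation operator, $L = 1$), under which the $O(1/n)$ decay of $\|A x_n\|$ can be observed numerically:

```python
import numpy as np

A = lambda p: np.array([p[1], -p[0]])    # monotone, Lipschitz with L = 1
L = 1.0
x0 = np.array([1.0, 1.0])                # anchor and starting point
x = x0.copy()
for n in range(10000):
    anchor = (x0 - x) / (n + 2)
    y = x + anchor - A(x) / (8.0 * L)    # extrapolation step
    x = x + anchor - A(y) / (8.0 * L)    # update with the same anchor term
    if (n + 1) % 2000 == 0:
        # n * ||A x_n|| should stay roughly bounded if ||A x_n|| = O(1/n)
        print(n + 1, (n + 1) * np.linalg.norm(A(x)))
```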



The parameters $\lambda_n$ of Algorithm 1 satisfy the condition $0 < \inf_n \lambda_n \le \sup_n \lambda_n < \frac{1}{2L}$. This means that information about the Lipschitz constant of the operator $A$ is used a priori. Algorithm 1 and the schemes from the articles [1–3] allow one to construct an algorithm with an adaptive choice of $\lambda_n$ that requires neither knowledge of the Lipschitz constants of the operators nor line-search-type procedures.
Algorithm 2. Adaptive regularized operator extrapolation algorithm.

Initialization. Set $y \in H$, elements $x_0, x_1 \in C$, numbers $\tau \in \left(0, \frac{1}{2}\right)$, $\lambda_0, \lambda_1 > 0$, and a sequence $(\alpha_n)$ with the properties

$$\alpha_n \in (0,1), \quad \lim_{n\to\infty} \alpha_n = 0, \quad \sum_{n=1}^{\infty} \alpha_n = +\infty.$$

Iterations. Generate a sequence $(x_n)$ using the iterative scheme

$$x_{n+1} = P_C\big(\alpha_n y + (1 - \alpha_n) x_n - \lambda_n A x_n - (1 - \alpha_n) \lambda_{n-1} (A x_n - A x_{n-1})\big),$$
$$\lambda_{n+1} = \begin{cases} \min\left\{\lambda_n, \dfrac{\tau \|x_{n+1} - x_n\|}{\|A x_{n+1} - A x_n\|}\right\}, & \text{if } A x_{n+1} \ne A x_n, \\ \lambda_n, & \text{otherwise.} \end{cases}$$
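A Python sketch of Algorithm 2 under stated assumptions: in the Hilbert space setting the dual norm $\|\cdot\|_*$ is the usual norm, and the defaults for $\tau$, $\lambda_0$, $\lambda_1$, and $\alpha_n$ are illustrative.

```python
import numpy as np

def adaptive_reg_operator_extrapolation(A, proj_C, y, x0, x1,
                                        tau=0.4, lam0=1.0, lam1=1.0, n_iters=2000):
    """Sketch of Algorithm 2: step sizes lam_n are updated from observed local
    curvature of A, so no Lipschitz constant has to be known in advance."""
    x = x1
    lam_prev, lam = lam0, lam1
    Ax_prev, Ax = A(x0), A(x1)
    for n in range(1, n_iters + 1):
        alpha = 1.0 / (n + 1)
        x_next = proj_C(alpha * y + (1 - alpha) * x
                        - lam * Ax - (1 - alpha) * lam_prev * (Ax - Ax_prev))
        Ax_next = A(x_next)                    # reused below: one evaluation per step
        dA = np.linalg.norm(Ax_next - Ax)
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / dA) if dA > 0 else lam
        x, Ax_prev, Ax = x_next, Ax, Ax_next
        lam_prev, lam = lam, lam_next
    return x
```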
In addition, based on the results of [3], it is possible to obtain an analogue of Algorithm 1 with the generalized Alber projection for solving variational inequalities in uniformly convex and uniformly smooth Banach spaces.

7. Acknowledgements
   This work was supported by the Ministry of Education and Science of Ukraine (project
"Computational algorithms and optimization for artificial intelligence, medicine and defense",
0122U002026).

8. References
[1] Y. I. Vedel, S. V. Denisov, V. V. Semenov, An Adaptive Algorithm for the Variational Inequality Over
     the Set of Solutions of the Equilibrium Problem, Cybernetics and Systems Analysis 57 (2021) 91–100.
     doi:10.1007/s10559-021-00332-2.
[2] V. V. Semenov, S. V. Denisov, A. V. Kravets, Adaptive Two-Stage Bregman Method for Variational
     Inequalities, Cybernetics and Systems Analysis 57 (2021) 959–967. doi:10.1007/s10559-021-00421-2.
[3] Y. Vedel, V. Semenov, S. Denisov, A Novel Algorithm with Self-adaptive Technique for Solving
     Variational Inequalities in Banach Spaces, in: N. N. Olenev, Y. G. Evtushenko, M. Jaćimović, M.
     Khachay, V. Malkova (Eds.), Advances in Optimization and Applications, volume 1514 of
     Communications in Computer and Information Science, Springer, Cham, 2021, pp. 50–64.
     doi:10.1007/978-3-030-92711-0_4.
[4] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their Applications,
     Society for Industrial and Applied Mathematics, Philadelphia, 2000.
[5] A. Nagurney, Network economics: A variational inequality approach, Kluwer Academic Publishers,
     Dordrecht, 1999.
[6] S. P. Uryas’ev, Adaptive algorithms of stochastic optimization and game theory, Nauka, Moscow, 1990.
[7] A. B. Bakushinskii, A. V. Goncharskii, Iterative Methods for Solving Ill-Posed Problems, Nauka,
     Moscow, 1989.
[8] F. Facchinei, J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity Problems,
     Springer Series in Operations Research, vol. II, Springer, New York, 2003.
[9] Y. Alber, I. Ryazantseva, Nonlinear Ill Posed Problems of Monotone Type, Springer, Dordrecht, 2006.
[10] H. H. Bauschke, P. L. Combettes, Convex Analysis and Monotone Operator Theory in Hilbert Spaces,
     Springer, Berlin–Heidelberg–New York, 2017.
[11] A. Nemirovski, Prox-method with rate of convergence O(1/T) for variational inequalities with Lipschitz
     continuous monotone operators and smooth convex-concave saddle point problems, SIAM J. Optim. 15
     (2004) 229–251. doi:10.1137/S1052623403425629.
[12] J.-K. Wang, J. Abernethy, K. Y. Levy, No-Regret Dynamics in the Fenchel Game: A Unified Framework
     for Algorithmic Convex Optimization, arXiv preprint arXiv:2111.11309. (2021).
[13] G. Gidel, H. Berard, P. Vincent, S. Lacoste-Julien, A Variational Inequality Perspective on Generative
     Adversarial Networks, arXiv preprint arXiv:1802.10551. (2018).


[14] G. M. Korpelevich, An extragradient method for finding saddle points and for other problems, Matecon.
     12 (1976) 747–756.
[15] E. N. Khobotov, Modification of the extra-gradient method for solving variational inequalities and certain
     optimization problems, USSR Comput. Math. Phys. 27 (1987) 120–127. doi:10.1016/0041-
     5553(87)90058-9.
[16] N. Nadezhkina, W. Takahashi, Weak convergence theorem by an extragradient method for nonexpansive
     mappings and monotone mappings, Journal of Optimization Theory and Applications 128 (2006) 191–
     201.
[17] S. V. Denisov, D. A. Nomirovskii, B. V. Rublyov, V. V. Semenov, Convergence of Extragradient
     Algorithm with Monotone Step Size Strategy for Variational Inequalities and Operator Equations, Journal
     of Automation and Information Sciences 51 (2019) 12–24. doi:10.1615/JAutomatInfScien.v51.i6.20.
[18] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings, SIAM
     Journal on Control and Optimization 38 (2000) 431–446.
[19] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities
     in Hilbert space, Journal of Optimization Theory and Applications 148 (2011) 318–335.
     doi:10.1007/s10957-010-9757-3.
[20] F. Bach, K. Y. Levy, A Universal Algorithm for Variational Inequalities Adaptive to Smoothness and
     Noise, arXiv preprint arXiv:1902.01637. (2019).
[21] K. Antonakopoulos, V. Belmega, P. Mertikopoulos, An adaptive mirror-prox method for variational
     inequalities with singular operators, in: Advances in Neural Information Processing Systems 32
     (NeurIPS), Curran Associates, Inc., 2019, 8455–8465.
[22] S. V. Denisov, V. V. Semenov, P. I. Stetsyuk, Bregman Extragradient Method with Monotone Rule of
     Step Adjustment, Cybernetics and Systems Analysis. 55 (2019) 377–383. doi:10.1007/s10559-019-
     00144-5.
[23] L. D. Popov, A modification of the Arrow-Hurwicz method for search of saddle points, Mathematical
     notes of the Academy of Sciences of the USSR. 28 (1980) 845–848.
[24] L. Chabak, V. Semenov, Y. Vedel, A New Non-Euclidean Proximal Method for Equilibrium Problems,
     in: O. Chertov et al. (Eds.), Recent Developments in Data Science and Intelligent Analysis of Information,
     volume 836 of Advances in Intelligent Systems and Computing, Springer, Cham, 2019, pp. 50–58.
     doi:10.1007/978-3-319-97885-7_6.
[25] E. Gorbunov, A. Taylor, G. Gidel, Last-Iterate Convergence of Optimistic Gradient Method for
     Monotone Variational Inequalities, arXiv preprint arXiv:2205.08446. (2022).
[26] Y. Malitsky, M. K. Tam, A Forward-Backward Splitting Method for Monotone Inclusions Without
     Cocoercivity, SIAM Journal on Optimization 30 (2020) 1451–1472. doi:10.1137/18M1207260.
[27] E. R. Csetnek, Y. Malitsky, M. K. Tam, Shadow Douglas-Rachford Splitting for Monotone Inclusions,
     Appl Math Optim. 80 (2019) 665–678. doi:10.1007/s00245-019-09597-8.
[28] H. Iiduka, W. Takahashi, Weak convergence of a projection algorithm for variational inequalities in a
     Banach space, Journal of Mathematical Analysis and Applications 339 (2008) 668–679.
     https://doi.org/10.1016/j.jmaa.2007.07.019.
[29] Y. Shehu, Single projection algorithm for variational inequalities in Banach spaces with application to
     contact problem, Acta Math. Sci. 40 (2020) 1045–1063. doi:10.1007/s10473-020-0412-2.
[30] J. Yang, P. Cholamjiak, P. Sunthrayuth, Modified Tseng's splitting algorithms for the sum of two
     monotone operators in Banach spaces, AIMS Mathematics 6(5) (2021) 4873–4900.
     doi:10.3934/math.2021286.
[31] B. Halpern, Fixed points of nonexpanding maps, Bull. Amer. Math. Soc. 73 (1967) 957–961.
[32] H. K. Xu, Viscosity approximation methods for nonexpansive mappings, Journal of Mathematical
     Analysis and Applications 298 (2004) 279–291. doi:10.1016/j.jmaa.2004.04.059.
[33] P.-E. Mainge, Strong convergence of projected subgradient methods for nonsmooth and nonstrictly
     convex minimization, Set-Valued Analysis 16 (2008) 899–912.
[34] E. Gorbunov, N. Loizou, G. Gidel, Extragradient Method: O(1/K) Last-Iterate Convergence for
     Monotone Variational Inequalities and Connections With Cocoercivity, arXiv preprint arXiv:
     2110.04261. (2021). doi:10.48550/arXiv.2110.04261.
[35] T. Yoon, E. K. Ryu, Accelerated algorithms for smooth convex-concave minimax problems with
     O(1/k²) rate on squared gradient norm, in: Proceedings of the 38th International Conference on Machine
     Learning, Proceedings of Machine Learning Research 139 (2021) 12098–12109.



