=Paper=
{{Paper
|id=Vol-3106/Paper_11.pdf
|storemode=property
|title=Convergence of Adaptive Forward-Reflected-Backward Algorithm for Solving Variational Inequalities
|pdfUrl=https://ceur-ws.org/Vol-3106/Paper_11.pdf
|volume=Vol-3106
|authors=Sergey Denisov,Vladimir Semenov
|dblpUrl=https://dblp.org/rec/conf/intsol/DenisovS21
}}
==Convergence of Adaptive Forward-Reflected-Backward Algorithm for Solving Variational Inequalities==
Sergey Denisov and Vladimir Semenov
Taras Shevchenko National University of Kyiv, 64/13 Volodymyrska Street, Kyiv, 01161, Ukraine
Abstract
The article deals with the problem of the approximate solution of variational inequalities. A novel iterative algorithm for solving variational inequalities in a real Banach space is proposed and studied. The proposed algorithm is an adaptive variant of the forward-reflected-backward algorithm (Malitsky, Tam, 2020), in which the rule used for updating the step size does not require knowledge of the Lipschitz constant of the operator. In addition, the Alber generalized projection is used instead of the metric projection onto the feasible set. For variational inequalities with monotone and Lipschitz continuous operators acting in a 2-uniformly convex and uniformly smooth Banach space, a theorem on the weak convergence of the method is proved.
Keywords
Variational inequality, monotone operator, Lipschitz continuous operator, forward-reflected-
backward algorithm, 2-uniformly convex Banach space, uniformly smooth Banach space,
convergence
1. Introduction
Many problems of operations research and mathematical physics can be written in the form of variational inequalities [1–5]. The development and study of methods for solving variational inequalities is an active area of applied nonlinear analysis [4, 6–23, 25–32]. Note that non-smooth optimization problems can often be solved effectively by reformulating them as saddle point problems and applying algorithms for solving variational inequalities [7]. With the advent of generative adversarial networks (GANs), a steady interest in algorithms for solving variational inequalities arose among specialists in the field of machine learning [8–10].
The classical variational inequality problem (in a real Hilbert space $H$) has the form

$$\text{find } x \in C: \ \langle Ax, y - x \rangle \geq 0 \quad \forall y \in C,$$

where $C \subseteq H$ is convex and closed, and the operator $A: C \to H$ is monotone and Lipschitz continuous. The most famous method for solving variational inequalities is the Korpelevich extra-gradient algorithm [11]:

$$x_1 \in C, \quad \lambda > 0,$$
$$y_n = P_C \left( x_n - \lambda A x_n \right),$$
$$x_{n+1} = P_C \left( x_n - \lambda A y_n \right),$$

where $P_C$ is the metric projection onto $C$. Many publications are devoted to the study of the extra-gradient algorithm and its modifications [6, 7, 12–23]. An efficient modern version of the extra-gradient method is the proximal mirror method of Nemirovski [7]. This method can be interpreted as
a variant of the extra-gradient method with projection understood in the sense of Bregman divergence
[24]. Also, an interesting method of dual extrapolation for solving variational inequalities was
proposed by Yu. Nesterov [25]. Adaptive variants of the Nemirovski mirror-prox method were
studied in [19–23].
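To make the scheme concrete, here is a minimal numerical sketch of the Korpelevich extra-gradient iteration in the Euclidean setting; the operator, feasible set, and step size below are illustrative assumptions rather than part of the original method's statement.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Metric projection onto a Euclidean ball (an illustrative choice of C).
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else x * (radius / nrm)

def extragradient(A, x, lam, n_iters=1000, proj=project_ball):
    # Korpelevich extra-gradient method:
    #   y_n     = P_C(x_n - lam * A(x_n))
    #   x_{n+1} = P_C(x_n - lam * A(y_n))
    for _ in range(n_iters):
        y = proj(x - lam * A(x))
        x = proj(x - lam * A(y))
    return x

# Example: a monotone (rotation-type) operator with Lipschitz constant L = 1;
# the method requires lam in (0, 1/L).
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
x_star = extragradient(lambda v: M @ v, x=np.array([1.0, 1.0]), lam=0.3)
```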
In the early 1980s, L. D. Popov proposed an interesting modification of the classical Arrow-Hurwicz algorithm for finding saddle points of convex-concave functions [26]. Let $X$ and $Y$ be closed convex subsets of the spaces $\mathbb{R}^d$ and $\mathbb{R}^p$, respectively, and let $L: X \times Y \to \mathbb{R}$ be a differentiable convex-concave function. Then the algorithm from [26] for approximating saddle points of $L$ on $X \times Y$ can be written as

$$\bar{x}_0, x_1 \in X, \quad \bar{y}_0, y_1 \in Y, \quad \lambda > 0,$$
$$\bar{x}_n = P_X \left( x_n - \lambda \partial_1 L \left( \bar{x}_{n-1}, \bar{y}_{n-1} \right) \right),$$
$$\bar{y}_n = P_Y \left( y_n + \lambda \partial_2 L \left( \bar{x}_{n-1}, \bar{y}_{n-1} \right) \right),$$
$$x_{n+1} = P_X \left( x_n - \lambda \partial_1 L \left( \bar{x}_n, \bar{y}_n \right) \right),$$
$$y_{n+1} = P_Y \left( y_n + \lambda \partial_2 L \left( \bar{x}_n, \bar{y}_n \right) \right),$$

where $P_X$ and $P_Y$ are the metric projections onto $X$ and $Y$, respectively, and $\partial_1 L$ and $\partial_2 L$ are the partial derivatives of $L$ with respect to the first and second arguments. Under suitable assumptions, L. D. Popov proved the convergence of this algorithm.
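A minimal sketch of Popov's iteration in the Euclidean case may clarify the bookkeeping; note that only one new evaluation of the partial derivatives is needed per iteration (at the leading points). The bilinear function and the projections below are illustrative assumptions.

```python
import numpy as np

def popov(grad1, grad2, proj_X, proj_Y, x, y, x_bar, y_bar, lam, n_iters=1000):
    # Popov's method for saddle points of L(x, y): the partial derivatives
    # are evaluated only at the leading points (x_bar, y_bar).
    g1, g2 = grad1(x_bar, y_bar), grad2(x_bar, y_bar)
    for _ in range(n_iters):
        x_bar = proj_X(x - lam * g1)   # leading point updates
        y_bar = proj_Y(y + lam * g2)
        g1, g2 = grad1(x_bar, y_bar), grad2(x_bar, y_bar)
        x = proj_X(x - lam * g1)       # main iterate updates
        y = proj_Y(y + lam * g2)
    return x, y

# Example: L(x, y) = x * y on [-1, 1] x [-1, 1]; the saddle point is (0, 0).
clip = lambda v: np.clip(v, -1.0, 1.0)
x, y = popov(lambda x, y: y, lambda x, y: x, clip, clip,
             1.0, 0.5, 1.0, 0.5, lam=0.2)
```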
A modification of Popov's method for solving variational inequalities with monotone operators was studied in [27]. Also in [27], a two-stage proximal algorithm for solving the equilibrium programming problem was proposed, which is an adaptation of the method [26] to general Ky Fan inequalities. The mentioned equilibrium problem (Ky Fan inequality) has the form

$$\text{find } x \in C: \ F(x, y) \geq 0 \quad \forall y \in C,$$

where $C$ is a nonempty subset of a vector space $H$ (usually a Hilbert space), and $F: C \times C \to \mathbb{R}$ is a function such that $F(x, x) = 0$ for all $x \in C$ (called a bifunction). The two-stage proximal algorithm is written as

$$y_n = \operatorname{prox}_{\lambda_n F(y_{n-1}, \cdot)} x_n,$$
$$x_{n+1} = \operatorname{prox}_{\lambda_n F(y_n, \cdot)} x_n,$$

where $\lambda_n > 0$, and the proximal operator $\operatorname{prox}_{\psi}$ of a function $\psi: C \to \mathbb{R}$ is defined by

$$\operatorname{prox}_{\psi} x = \operatorname{argmin}_{y \in C} \left\{ \psi(y) + \frac{1}{2} \| y - x \|^2 \right\}.$$
In [28, 29], the two-stage proximal mirror method was studied; it is a modification of the two-stage proximal algorithm [27] that uses the Bregman divergence instead of the Euclidean distance. Note that recently Popov's algorithm for variational inequalities has become well known among machine learning specialists under the name "Extrapolation from the Past" [9]. Further development of this circle of ideas led to the emergence of the so-called forward-reflected-backward algorithm [30] and related methods [31, 32]. The forward-reflected-backward algorithm generates a sequence $(x_n)$ iteratively defined by

$$x_{n+1} = P_C \left( x_n - \lambda_n A x_n - \lambda_{n-1} \left( A x_n - A x_{n-1} \right) \right),$$

where the step sizes satisfy $0 < \inf_n \lambda_n \leq \sup_n \lambda_n < \frac{1}{2L}$ and $L$ is the Lipschitz constant of $A$.
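A minimal Euclidean sketch of this forward-reflected-backward iteration with a constant step size lam in (0, 1/(2L)); the operator and projection are assumptions chosen for illustration.

```python
import numpy as np

def forward_reflected_backward(A, proj, x, lam, n_iters=1000):
    # x_{n+1} = P_C(x_n - lam * A(x_n) - lam * (A(x_n) - A(x_{n-1}))),
    # i.e. one evaluation of A and one projection per iteration.
    Ax_prev = A(x)           # treat x_0 = x_1 = x
    for _ in range(n_iters):
        Ax = A(x)
        x = proj(x - lam * Ax - lam * (Ax - Ax_prev))
        Ax_prev = Ax
    return x

# Example: rotation operator (L = 1), projection onto the unit ball, lam < 1/2.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
proj = lambda v: v / max(1.0, np.linalg.norm(v))
x_star = forward_reflected_backward(lambda v: M @ v, proj,
                                    np.array([1.0, 1.0]), lam=0.2)
```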
In this paper, we propose a novel algorithm for solving variational inequalities in a Banach space.
Variational inequalities in Banach spaces arise and are intensively studied in mathematical physics
and the theory of inverse problems [1, 2, 4]. Recently, there has been progress in the study of
algorithms for problems in Banach spaces [4, 15–18]. This is due to the wide involvement of the
results and constructions of the geometry of Banach spaces [33–35]. The proposed algorithm is an
adaptive variant of the forward-reflected-backward algorithm [30], in which the rule for updating the step size does not require knowledge of the Lipschitz constant of the operator. Moreover, instead of the metric projection onto the feasible set, the Alber generalized projection is used [35]. An attractive feature of the algorithm is that each iteration requires only one computation of the projection onto the feasible set. For variational inequalities with monotone Lipschitz operators acting in a 2-uniformly convex and uniformly smooth Banach space, a theorem on the weak convergence of the method is proved.
2. Preliminaries
We recall several concepts and facts of the geometry of Banach spaces that are necessary for the
formulation and proof of the results [33–37].
Everywhere $E$ denotes a real Banach space with norm $\| \cdot \|$, $E^*$ is the space dual to $E$, and $\langle x^*, x \rangle$ is the value of a functional $x^* \in E^*$ at an element $x \in E$. We denote the norm in $E^*$ by $\| \cdot \|_*$.

Let $S_E = \{ x \in E: \| x \| = 1 \}$. A Banach space $E$ is strictly convex if for all $x, y \in S_E$ with $x \neq y$ we have

$$\left\| \frac{x + y}{2} \right\| < 1.$$

The modulus of convexity of the space $E$ is defined as follows:

$$\delta_E(\varepsilon) = \inf \left\{ 1 - \left\| \frac{x + y}{2} \right\| : \ x, y \in B_E, \ \| x - y \| \geq \varepsilon \right\}, \quad \varepsilon \in [0, 2],$$

where $B_E$ is the closed unit ball of $E$. A Banach space $E$ is uniformly convex if $\delta_E(\varepsilon) > 0$ for all $\varepsilon \in (0, 2]$. A Banach space $E$ is called 2-uniformly convex if there exists $c > 0$ such that

$$\delta_E(\varepsilon) \geq c \varepsilon^2$$

for all $\varepsilon \in (0, 2]$. Obviously, a 2-uniformly convex space is uniformly convex. It is known that a uniformly convex Banach space is reflexive.
A Banach space $E$ is called smooth if the limit

$$\lim_{t \to 0} \frac{\| x + t y \| - \| x \|}{t} \tag{1}$$

exists for all $x, y \in S_E$. A Banach space $E$ is called uniformly smooth if the limit (1) exists uniformly in $x, y \in S_E$. There is a duality between the convexity and smoothness of a Banach space $E$ and its dual $E^*$ [33, 34]:

$E^*$ is strictly convex $\Rightarrow$ $E$ is smooth;
$E^*$ is smooth $\Rightarrow$ $E$ is strictly convex;
$E$ is uniformly convex $\Leftrightarrow$ $E^*$ is uniformly smooth;
$E$ is uniformly smooth $\Leftrightarrow$ $E^*$ is uniformly convex.

Note that if the space $E$ is reflexive, the first two implications can be reversed. It is known that Hilbert spaces and the spaces $L_p$ ($1 < p \leq 2$) are 2-uniformly convex and uniformly smooth (the spaces $L_p$ are uniformly smooth for $p \in (1, +\infty)$) [33, 34].
The multivalued operator $J: E \to 2^{E^*}$, acting as follows:

$$Jx = \left\{ x^* \in E^*: \ \langle x^*, x \rangle = \| x \|^2 = \| x^* \|_*^2 \right\},$$

is called the normalized duality mapping. It is known that [36]:

if the space $E$ is smooth, then the mapping $J$ is single-valued;
if the space $E$ is strictly convex, then the mapping $J$ is injective and strictly monotone;
if the space $E$ is reflexive, then the mapping $J$ is surjective;
if the space $E$ is uniformly smooth, then the mapping $J$ is uniformly continuous on bounded subsets of $E$.
Let $E$ be a smooth Banach space. Consider the functional introduced by Yakov Alber [35]:

$$\varphi(x, y) = \| x \|^2 - 2 \langle Jy, x \rangle + \| y \|^2, \quad x, y \in E.$$

A useful identity follows from the definition of $\varphi$:

$$\varphi(x, y) = \varphi(x, z) + \varphi(z, y) + 2 \langle Jz - Jy, x - z \rangle \quad \forall x, y, z \in E.$$

If the space $E$ is strictly convex, then for $x, y \in E$ we have $\varphi(x, y) = 0 \Leftrightarrow x = y$.
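As a sanity check, in a Hilbert space the duality mapping $J$ is the identity and $\varphi(x, y) = \| x - y \|^2$, so the identity above can be verified numerically; a small sketch under this Hilbert-space assumption:

```python
import numpy as np

def phi(x, y):
    # Alber's functional with J = identity (Hilbert-space case):
    # phi(x, y) = ||x||^2 - 2<y, x> + ||y||^2 = ||x - y||^2.
    return np.dot(x, x) - 2.0 * np.dot(y, x) + np.dot(y, y)

rng = np.random.default_rng(0)
x, y, z = rng.normal(size=(3, 5))
lhs = phi(x, y)
rhs = phi(x, z) + phi(z, y) + 2.0 * np.dot(z - y, x - z)  # three-point identity
assert np.isclose(lhs, rhs)
```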
Lemma 1 ([35]). Let $E$ be a uniformly convex and uniformly smooth Banach space, and let $(x_n)$ and $(y_n)$ be bounded sequences of elements of $E$. Then

$$\| x_n - y_n \| \to 0 \ \Leftrightarrow \ \| J x_n - J y_n \|_* \to 0 \ \Leftrightarrow \ \varphi(x_n, y_n) \to 0.$$

Lemma 2 ([37]). Let $E$ be a 2-uniformly convex and smooth Banach space. Then, for some number $\mu \geq 1$, the inequality

$$\varphi(x, y) \geq \frac{1}{\mu} \| x - y \|^2 \quad \forall x, y \in E$$

holds.
Let $K$ be a non-empty closed and convex subset of a reflexive, strictly convex and smooth space $E$. It is known [35] that for each $x \in E$ there is a unique point $z \in K$ such that

$$\varphi(z, x) = \inf_{y \in K} \varphi(y, x).$$

This point $z$ is denoted by $\Pi_K x$, and the corresponding operator $\Pi_K: E \to K$ is called the generalized projection of $E$ onto $K$ (the Alber generalized projection) [35]. Note that if $E$ is a Hilbert space, then $\Pi_K$ coincides with the metric projection onto the set $K$.

Lemma 3 ([35]). Let $K$ be a closed and convex subset of a reflexive, strictly convex and smooth space $E$, $x \in E$, $z \in K$. Then

$$z = \Pi_K x \ \Leftrightarrow \ \langle Jz - Jx, y - z \rangle \geq 0 \quad \forall y \in K. \tag{2}$$

Remark 1. Inequality (2) is equivalent to the following [35]:

$$\varphi(y, \Pi_K x) + \varphi(\Pi_K x, x) \leq \varphi(y, x) \quad \forall y \in K.$$
Basic information about monotone operators and variational inequalities in Banach spaces can be
found in [1, 2, 4, 35, 36]. We mention only two interesting examples of monotone operators acting in
a Banach space [4].
For $p \geq 2$, define the operator $A$ by

$$(Au)(x) = \int_{\mathbb{R}^3} \frac{\left| u(x) - u(y) \right|^{p-2} \left( u(x) - u(y) \right)}{\left| x - y \right|^2} \, dy.$$

The operator $A$ is potential and monotone, and acts from $L_p(\mathbb{R}^3)$ to $L_q(\mathbb{R}^3)$, where $\frac{1}{p} + \frac{1}{q} = 1$. Note that $A$ is the gradient of the functional

$$F(u) = \frac{1}{2p} \int_{\mathbb{R}^3} \int_{\mathbb{R}^3} \frac{\left| u(x) - u(y) \right|^p}{\left| x - y \right|^2} \, dx \, dy.$$
Let $G \subset \mathbb{R}^n$ be a bounded domain. The differential expression

$$Au = -\sum_{i=1}^{n} \frac{\partial}{\partial x_i} \left( a_i \left( x, \left| \frac{\partial u}{\partial x_i} \right|^{p-1} \right) \left| \frac{\partial u}{\partial x_i} \right|^{p-2} \frac{\partial u}{\partial x_i} \right) + a_0 \left( x, |u|^{p-1} \right) |u|^{p-2} u, \quad p > 1,$$

where each function $a_i(x, s)$, $i = 0, 1, \ldots, n$, is measurable in $x$ for every $s \in [0, +\infty)$, continuous in $s$ for almost all $x \in G$, and satisfies $a_i(x, s) \leq M$ for all $s \in [0, +\infty)$ and almost all $x \in G$, specifies a monotone operator acting from the Sobolev space $\mathring{W}_p^1(G)$ to $\left( \mathring{W}_p^1(G) \right)^*$.
3. Algorithm
Let $E$ be a 2-uniformly convex and uniformly smooth Banach space, $C$ a non-empty subset of the space $E$, and $A$ an operator from $E$ to $E^*$. Consider the variational inequality:

$$\text{find } x \in C: \ \langle Ax, y - x \rangle \geq 0 \quad \forall y \in C. \tag{3}$$

We denote the set of solutions of (3) by $S$.

Assume that the following conditions are satisfied:

the set $C \subseteq E$ is convex and closed;
the operator $A: E \to E^*$ is monotone and Lipschitz continuous with constant $L > 0$ on $C$;
the set $S$ is non-empty.

Remark 2. We can formulate (3) as a fixed-point problem [35]:

$$x = \Pi_C J^{-1} \left( Jx - \lambda Ax \right), \tag{4}$$

where $\lambda > 0$. Formulation (4) is useful because it contains an obvious algorithmic idea.
Consider the dual variational inequality:

$$\text{find } x \in C: \ \langle Ay, x - y \rangle \leq 0 \quad \forall y \in C. \tag{5}$$

We denote the set of solutions of (5) by $S_d$. Note that the set $S_d$ is closed and convex [2]. Inequality (5) is sometimes called the weak or dual formulation of (3) (or the Minty inequality), and solutions of (5) are called weak solutions of (3). For monotone operators $A$ we always have $S \subseteq S_d$. Under our conditions, $S_d = S$ [2].

We also assume that the following condition is satisfied:

the normalized duality mapping $J: E \to E^*$ is sequentially weakly continuous, i.e., $x_n \to x$ weakly in $E$ implies $J x_n \to J x$ weakly* in $E^*$.

Remark 3. In our situation, when the space $E$ (and of course $E^*$) is reflexive, weak* and weak convergence in $E^*$ coincide.
Consider now a novel algorithm for solving the variational inequality (3). We will use a simple rule for updating the parameters $\lambda_n$ that requires no information about the Lipschitz constant of the operator $A$. The proposed algorithm is a modification of the forward-reflected-backward algorithm recently proposed in [30] for solving operator inclusions with the sum of a maximal monotone operator and a Lipschitz continuous monotone operator acting in a Hilbert space.

Let $\mu \geq 1$ be the constant from Lemma 2.
__________________________________________________________________________________
Algorithm 1.
Initialization. Choose $x_0 \in E$, $x_1 \in E$, $\tau \in \left( 0, \frac{1}{2\mu} \right)$ and $\lambda_0, \lambda_1 > 0$. Set $n = 1$.
1. Calculate
$$x_{n+1} = \Pi_C J^{-1} \left( J x_n - \lambda_n A x_n - \lambda_{n-1} \left( A x_n - A x_{n-1} \right) \right).$$
2. If $x_{n+1} = x_n = x_{n-1}$, then STOP and $x_n \in S$; else go to step 3.
3. Calculate
$$\lambda_{n+1} = \begin{cases} \min \left\{ \lambda_n, \ \dfrac{\tau \| x_{n+1} - x_n \|}{\| A x_{n+1} - A x_n \|_*} \right\}, & \text{if } A x_{n+1} \neq A x_n, \\ \lambda_n, & \text{otherwise}. \end{cases}$$
Set $n := n + 1$ and go to step 1.
__________________________________________________________________________________
The sequence $(\lambda_n)$ generated by the rule of step 3 is non-increasing and bounded below by $\min \left\{ \lambda_1, \frac{\tau}{L} \right\}$. Hence there exists $\lim_{n \to \infty} \lambda_n = \lambda > 0$.

The sequence $(x_n)$ generated by Algorithm 1 satisfies the inequality

$$2 \left\langle \lambda_n A x_n + \lambda_{n-1} \left( A x_n - A x_{n-1} \right), y - x_{n+1} \right\rangle + \varphi(y, x_n) \geq \varphi(x_{n+1}, x_n) + \varphi(y, x_{n+1}) \quad \forall y \in C. \tag{6}$$

Inequality (6) justifies the stopping rule of the algorithm. Indeed, if $x_{n+1} = x_n = x_{n-1}$, then from (6) it follows that

$$\langle A x_n, y - x_n \rangle \geq 0$$

for all $y \in C$, i.e., $x_n \in S$.
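For illustration, the following sketch specializes Algorithm 1 to a Hilbert space, where $J$ is the identity, $\Pi_C$ coincides with the metric projection $P_C$, and one may take $\mu = 1$ (so $\tau \in (0, 1/2)$); the operator and feasible set are assumptions chosen for the example.

```python
import numpy as np

def adaptive_frb(A, proj, x0, x1, tau=0.4, lam0=1.0, lam1=1.0,
                 n_iters=1000, tol=1e-12):
    # Algorithm 1 in the Hilbert-space case (J = I, Pi_C = P_C, mu = 1).
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    lam_prev, lam = lam0, lam1
    A_prev, Ax = A(x_prev), A(x)
    for _ in range(n_iters):
        # Step 1: forward-reflected-backward step.
        x_next = proj(x - lam * Ax - lam_prev * (Ax - A_prev))
        A_next = A(x_next)
        # Step 3: adaptive step size, no Lipschitz constant required.
        d = np.linalg.norm(A_next - Ax)
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / d) if d > 0 else lam
        # Step 2: stopping rule x_{n+1} = x_n = x_{n-1}, up to a tolerance.
        if np.linalg.norm(x_next - x) < tol and np.linalg.norm(x - x_prev) < tol:
            return x
        x_prev, x, A_prev, Ax = x, x_next, Ax, A_next
        lam_prev, lam = lam, lam_next
    return x

# Example: rotation operator on the unit ball.
M = np.array([[0.0, 1.0], [-1.0, 0.0]])
proj = lambda v: v / max(1.0, np.linalg.norm(v))
x_star = adaptive_frb(lambda v: M @ v, proj, [1.0, 1.0], [0.9, 1.0])
```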
We now proceed to the proof of convergence of Algorithm 1.
4. Main inequality
In this section, we state and prove the inequality on which the proof of the weak convergence of Algorithm 1 is based.
Lemma 4. For the sequence $(x_n)$ generated by Algorithm 1, the following inequality holds:

$$\varphi(z, x_{n+1}) + 2 \lambda_n \left\langle A x_n - A x_{n+1}, x_{n+1} - z \right\rangle + \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \varphi(x_{n+1}, x_n)$$
$$\leq \varphi(z, x_n) + 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}) - \left( 1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} - \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \right) \varphi(x_{n+1}, x_n),$$

where $z \in S$.
Proof. Let $z \in S$. Setting $y = z$ in (6), we obtain

$$\varphi(z, x_{n+1}) \leq \varphi(z, x_n) - \varphi(x_{n+1}, x_n) + 2 \left\langle \lambda_n A x_n + \lambda_{n-1} \left( A x_n - A x_{n-1} \right), z - x_{n+1} \right\rangle. \tag{7}$$

From the monotonicity of the operator $A$ we have

$$\left\langle \lambda_n A x_n + \lambda_{n-1} \left( A x_n - A x_{n-1} \right), z - x_{n+1} \right\rangle = \lambda_n \left\langle A x_n - A x_{n+1}, z - x_{n+1} \right\rangle + \lambda_{n-1} \left\langle A x_n - A x_{n-1}, z - x_{n+1} \right\rangle + \underbrace{\lambda_n \left\langle A x_{n+1}, z - x_{n+1} \right\rangle}_{\leq 0}$$
$$\leq \lambda_n \left\langle A x_n - A x_{n+1}, z - x_{n+1} \right\rangle + \lambda_{n-1} \left\langle A x_n - A x_{n-1}, z - x_n \right\rangle + \lambda_{n-1} \left\langle A x_n - A x_{n-1}, x_n - x_{n+1} \right\rangle \tag{8}$$

(here $\left\langle A x_{n+1}, z - x_{n+1} \right\rangle \leq 0$ because $z \in S$ and $A$ is monotone). Applying (8) to (7), we obtain

$$\varphi(z, x_{n+1}) \leq \varphi(z, x_n) - \varphi(x_{n+1}, x_n) + 2 \lambda_n \left\langle A x_n - A x_{n+1}, z - x_{n+1} \right\rangle + 2 \lambda_{n-1} \left\langle A x_n - A x_{n-1}, z - x_n \right\rangle + 2 \lambda_{n-1} \left\langle A x_n - A x_{n-1}, x_n - x_{n+1} \right\rangle. \tag{9}$$

The rule for calculating $\lambda_n$ yields an upper estimate for the term $2 \lambda_{n-1} \left\langle A x_n - A x_{n-1}, x_n - x_{n+1} \right\rangle$ in (9). We have

$$2 \lambda_{n-1} \left\langle A x_n - A x_{n-1}, x_n - x_{n+1} \right\rangle \leq 2 \lambda_{n-1} \left\| A x_n - A x_{n-1} \right\|_* \left\| x_n - x_{n+1} \right\| \leq 2 \tau \frac{\lambda_{n-1}}{\lambda_n} \left\| x_n - x_{n-1} \right\| \left\| x_{n+1} - x_n \right\|$$
$$\leq \tau \frac{\lambda_{n-1}}{\lambda_n} \left\| x_n - x_{n-1} \right\|^2 + \tau \frac{\lambda_{n-1}}{\lambda_n} \left\| x_{n+1} - x_n \right\|^2 \leq \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}) + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_{n+1}, x_n),$$

where the last step uses Lemma 2. We obtain

$$\varphi(z, x_{n+1}) + 2 \lambda_n \left\langle A x_n - A x_{n+1}, x_{n+1} - z \right\rangle + \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \varphi(x_{n+1}, x_n)$$
$$\leq \varphi(z, x_n) + 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}) - \left( 1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} - \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \right) \varphi(x_{n+1}, x_n).$$

The proof is complete. ■
Remark 4. We can replace the rule for updating the step size in step 3 of Algorithm 1 with the following:

$$\lambda_{n+1} = \begin{cases} \min \left\{ \lambda_n, \ \dfrac{\tau \sqrt{\varphi(x_{n+1}, x_n)}}{\| A x_{n+1} - A x_n \|_*} \right\}, & \text{if } A x_{n+1} \neq A x_n, \\ \lambda_n, & \text{otherwise}. \end{cases} \tag{10}$$

Lemma 4 also holds for the variant of Algorithm 1 with the rule (10).
5. Convergence
Let us formulate the main result.
Theorem 1. Let $C$ be a non-empty convex and closed subset of a 2-uniformly convex and uniformly smooth Banach space $E$, let $A: E \to E^*$ be a monotone Lipschitz continuous operator, and let $S \neq \emptyset$. Assume that the normalized duality mapping $J$ is sequentially weakly continuous. Then the sequence $(x_n)$ generated by Algorithm 1 converges weakly to some $z \in S$.
Proof. Let $z \in S$. Set

$$a_n = \varphi(z, x_n) + 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}),$$
$$b_n = \left( 1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} - \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \right) \varphi(x_{n+1}, x_n).$$

The inequality from Lemma 4 takes the form

$$a_{n+1} \leq a_n - b_n.$$

Since there exists $\lim_{n \to \infty} \lambda_n = \lambda > 0$, we have

$$1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} - \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \to 1 - 2 \mu \tau \in (0, 1), \quad n \to \infty.$$
Let us show that $a_n \geq 0$ for all sufficiently large $n \in \mathbb{N}$. Using Lemma 2, the rule for calculating $\lambda_n$, and the elementary inequality $2ab \leq a^2 + b^2$, we have

$$a_n = \varphi(z, x_n) + 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1})$$
$$\geq \frac{1}{\mu} \| x_n - z \|^2 - 2 \lambda_{n-1} \left\| A x_{n-1} - A x_n \right\|_* \| x_n - z \| + \tau \frac{\lambda_{n-1}}{\lambda_n} \| x_n - x_{n-1} \|^2$$
$$\geq \frac{1}{\mu} \| x_n - z \|^2 - 2 \tau \frac{\lambda_{n-1}}{\lambda_n} \| x_n - x_{n-1} \| \| x_n - z \| + \tau \frac{\lambda_{n-1}}{\lambda_n} \| x_n - x_{n-1} \|^2 \geq \frac{1}{\mu} \left( 1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \right) \| x_n - z \|^2.$$

Since there exists $n_0 \in \mathbb{N}$ such that

$$1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} > 0 \quad \text{for all } n \geq n_0,$$

we have $a_n \geq 0$ starting from $n_0$.
So we conclude that there exists the limit

$$\lim_{n \to \infty} \left( \varphi(z, x_n) + 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}) \right)$$

and

$$\sum_{n=1}^{\infty} \left( 1 - \mu \tau \frac{\lambda_{n-1}}{\lambda_n} - \mu \tau \frac{\lambda_n}{\lambda_{n+1}} \right) \varphi(x_{n+1}, x_n) < +\infty.$$

Hence we obtain that the sequence $(x_n)$ is bounded and

$$\lim_{n \to \infty} \varphi(x_{n+1}, x_n) = \lim_{n \to \infty} \| x_{n+1} - x_n \| = 0.$$

Since

$$\lim_{n \to \infty} \left( 2 \lambda_{n-1} \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \tau \frac{\lambda_{n-1}}{\lambda_n} \varphi(x_n, x_{n-1}) \right) = 0,$$

the sequence $(\varphi(z, x_n))$ converges for all $z \in S$.
Let us show that all weak cluster points of the sequence $(x_n)$ belong to the set $S$. Consider a subsequence $(x_{n_k})$ which converges weakly to $\bar{z} \in E$. It is easy to see that $\bar{z} \in C$. Let us show that $\bar{z} \in S$. We have

$$\left\langle J x_{n+1} - J x_n + \lambda_n A x_n + \lambda_{n-1} \left( A x_n - A x_{n-1} \right), y - x_{n+1} \right\rangle \geq 0 \quad \forall y \in C.$$

Hence, using the monotonicity of the operator $A$, we obtain the inequality

$$\langle Ay, y - x_n \rangle \geq \langle A x_n, x_{n+1} - x_n \rangle + \frac{1}{\lambda_n} \left\langle J x_n - J x_{n+1}, y - x_{n+1} \right\rangle - \frac{\lambda_{n-1}}{\lambda_n} \left\langle A x_n - A x_{n-1}, y - x_{n+1} \right\rangle \quad \forall y \in C.$$

From $\lim_{n \to \infty} \| x_n - x_{n+1} \| = 0$ and the Lipschitz continuity of the operator $A$ it follows that

$$\lim_{n \to \infty} \left\| A x_n - A x_{n+1} \right\|_* = 0.$$

From the uniform continuity of the normalized duality mapping $J$ on bounded sets we get

$$\lim_{n \to \infty} \left\| J x_n - J x_{n+1} \right\|_* = 0.$$

Hence,

$$\liminf_{n \to \infty} \langle Ay, y - x_n \rangle \geq 0 \quad \forall y \in C.$$

On the other hand,

$$\langle Ay, y - \bar{z} \rangle = \lim_{k \to \infty} \left\langle Ay, y - x_{n_k} \right\rangle \geq \liminf_{n \to \infty} \langle Ay, y - x_n \rangle \geq 0 \quad \forall y \in C.$$

It follows that $\bar{z} \in S_d = S$.
Let us show that the sequence $(x_n)$ converges weakly to $\bar{z}$. We argue by contradiction. Suppose there exists a subsequence $(x_{m_k})$ such that $x_{m_k} \to \tilde{z}$ weakly and $\tilde{z} \neq \bar{z}$. It is easy to see that $\tilde{z} \in S$. We have

$$2 \left\langle J x_n, \bar{z} - \tilde{z} \right\rangle = \varphi(\tilde{z}, x_n) - \varphi(\bar{z}, x_n) + \| \bar{z} \|^2 - \| \tilde{z} \|^2.$$

From this we see the existence of the limit $\lim_{n \to \infty} \left\langle J x_n, \bar{z} - \tilde{z} \right\rangle$. From the sequential weak continuity of the normalized duality mapping $J$ we get

$$\left\langle J \bar{z}, \bar{z} - \tilde{z} \right\rangle = \lim_{k \to \infty} \left\langle J x_{n_k}, \bar{z} - \tilde{z} \right\rangle = \lim_{k \to \infty} \left\langle J x_{m_k}, \bar{z} - \tilde{z} \right\rangle = \left\langle J \tilde{z}, \bar{z} - \tilde{z} \right\rangle,$$

i.e., $\left\langle J \bar{z} - J \tilde{z}, \bar{z} - \tilde{z} \right\rangle = 0$. By the strict monotonicity of $J$ it follows that $\bar{z} = \tilde{z}$, a contradiction. The proof is complete. ■
The weak convergence of the variant of the algorithm with a constant parameter $\lambda > 0$ is proved similarly.
__________________________________________________________________________________
Algorithm 2.
Initialization. Choose $x_0 \in E$, $x_1 \in E$, and $\lambda \in \left( 0, \frac{1}{2 \mu L} \right)$. Set $n = 1$.
1. Calculate
$$x_{n+1} = \Pi_C J^{-1} \left( J x_n - \lambda \left( 2 A x_n - A x_{n-1} \right) \right).$$
2. If $x_{n+1} = x_n = x_{n-1}$, then STOP and $x_n \in S$; else set $n := n + 1$ and go to step 1.
__________________________________________________________________________________
Remark 5. A special case of Algorithm 2 is the optimistic gradient descent ascent (OGDA)
algorithm, popular among machine learning specialists [8, 9].
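In the unconstrained Euclidean case ($C = E = \mathbb{R}^d$, $J = I$), Algorithm 2 reduces to the update $x_{n+1} = x_n - \lambda \left( 2 A x_n - A x_{n-1} \right)$, which for a saddle-point operator is exactly the OGDA-style update. A minimal sketch, with an illustrative bilinear saddle-point operator:

```python
import numpy as np

def ogda(A, x, lam, n_iters=2000):
    # Unconstrained Algorithm 2: x_{n+1} = x_n - lam * (2*A(x_n) - A(x_{n-1})).
    Ax_prev = A(x)
    for _ in range(n_iters):
        Ax = A(x)
        x = x - lam * (2.0 * Ax - Ax_prev)
        Ax_prev = Ax
    return x

# Saddle point of L(u, v) = u * v via the monotone operator A(u, v) = (v, -u);
# here the Lipschitz constant is 1, so lam must lie in (0, 1/2).
A = lambda z: np.array([z[1], -z[0]])
z_star = ogda(A, np.array([1.0, 1.0]), lam=0.2)  # tends to (0, 0)
```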
Lemma 5. For the sequence $(x_n)$ generated by Algorithm 2, the following inequality holds:

$$\varphi(z, x_{n+1}) + 2 \lambda \left\langle A x_n - A x_{n+1}, x_{n+1} - z \right\rangle + \mu \lambda L \, \varphi(x_{n+1}, x_n)$$
$$\leq \varphi(z, x_n) + 2 \lambda \left\langle A x_{n-1} - A x_n, x_n - z \right\rangle + \mu \lambda L \, \varphi(x_n, x_{n-1}) - \left( 1 - 2 \mu \lambda L \right) \varphi(x_{n+1}, x_n),$$

where $z \in S$.
Theorem 2. Let $C$ be a nonempty convex and closed subset of a 2-uniformly convex and uniformly smooth Banach space $E$, and let the operator $A: E \to E^*$ be monotone and Lipschitz continuous with constant $L > 0$. Let $S \neq \emptyset$ and let the normalized duality mapping $J$ be sequentially weakly continuous. Then the sequence $(x_n)$ generated by Algorithm 2 converges weakly to some $z \in S$.
6. Conclusions
In this paper, we have proposed and studied a new algorithm for solving variational inequalities in a Banach space. The proposed algorithm is an adaptive variant of the forward-reflected-backward algorithm [30], in which the rule for updating the step size does not require knowledge of the Lipschitz constant of the operator. Moreover, instead of the metric projection onto the feasible set, the Alber generalized projection is used [35]. An attractive feature of the algorithm is that each iteration requires only one computation of the generalized projection onto the feasible set. For variational inequalities with monotone Lipschitz continuous operators acting in a 2-uniformly convex and uniformly smooth Banach space, a theorem on the weak convergence of the method is proved.
Based on the technique [38], similar results can most likely be obtained for problems with pseudo-
monotone, Lipschitz continuous, and sequentially weakly continuous operators acting in a uniformly
convex and uniformly smooth Banach space. Also, in a future article we will present a proof of the
convergence of a modification of the algorithm using the Bregman projection.
Note that a problem of significant interest in applications of nonlinear analysis is to find $x \in \left( A_1 + A_2 \right)^{-1} 0$, where $A_1: E \to 2^{E^*}$ is a maximal monotone operator and $A_2: E \to E^*$ is a monotone and Lipschitz continuous operator. Based on the results of this work and [30], for solving this problem we can construct the following adaptive splitting method:

$$x_{n+1} = \left( J + \lambda_n A_1 \right)^{-1} \left( J x_n - \lambda_n A_2 x_n - \lambda_{n-1} \left( A_2 x_n - A_2 x_{n-1} \right) \right),$$

$$\lambda_{n+1} = \begin{cases} \min \left\{ \lambda_n, \ \dfrac{\tau \| x_{n+1} - x_n \|}{\| A_2 x_{n+1} - A_2 x_n \|_*} \right\}, & \text{if } A_2 x_{n+1} \neq A_2 x_n, \\ \lambda_n, & \text{otherwise}, \end{cases}$$

where $\tau \in \left( 0, \frac{1}{2\mu} \right)$. The proof of its convergence will be presented in another work soon.
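In the Hilbert-space case ($J = I$) the resolvent $\left( J + \lambda_n A_1 \right)^{-1}$ is the classical proximal mapping of $g$ when $A_1 = \partial g$; the sketch below illustrates the splitting method under this assumption, with soft-thresholding (the prox of the $\ell_1$-norm) as an illustrative resolvent.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1, i.e. the resolvent (I + t * subdiff(||.||_1))^{-1}.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def adaptive_splitting(A2, resolvent, x0, x1, tau=0.4, lam0=1.0, lam1=1.0,
                       n_iters=500):
    # x_{n+1} = (I + lam_n A1)^{-1}(x_n - lam_n A2(x_n)
    #                                   - lam_{n-1} (A2(x_n) - A2(x_{n-1})))
    x_prev, x = np.asarray(x0, dtype=float), np.asarray(x1, dtype=float)
    lam_prev, lam = lam0, lam1
    A_prev, Ax = A2(x_prev), A2(x)
    for _ in range(n_iters):
        x_next = resolvent(x - lam * Ax - lam_prev * (Ax - A_prev), lam)
        A_next = A2(x_next)
        d = np.linalg.norm(A_next - Ax)
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / d) if d > 0 else lam
        x_prev, x, A_prev, Ax = x, x_next, Ax, A_next
        lam_prev, lam = lam, lam_next
    return x

# Example: A2 a linear monotone operator, A1 the subdifferential of ||.||_1.
M = np.array([[1.0, 1.0], [-1.0, 1.0]])
x_star = adaptive_splitting(lambda v: M @ v, soft_threshold,
                            [1.0, -1.0], [0.9, -0.9])
```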
Acknowledgements
This work was supported by the Ministry of Education and Science of Ukraine (project "Mathematical modeling and optimization of dynamical systems for defense, medicine and ecology", 0119U100337).
References
[1] J.-L. Lions, Some Methods of Solving Non-Linear Boundary Value Problems, Dunod-Gauthier-
Villars, Paris, 1969.
[2] D. Kinderlehrer, G. Stampacchia, An Introduction to Variational Inequalities and Their
Applications, Society for Industrial and Applied Mathematics, Philadelphia, 2000.
[3] A. Nagurney, Network economics: A variational inequality approach, Kluwer Academic
Publishers, Dordrecht, 1999.
[4] Y. Alber, I. Ryazantseva, Nonlinear Ill Posed Problems of Monotone Type, Springer, Dordrecht,
2006.
[5] S. I. Lyashko, D. A. Klyushin, D. A. Nomirovsky, V. V. Semenov, Identification of age-structured contamination sources in ground water, in: R. Boucekkline, N. Hritonenko, Y. Yatsenko (Eds.), Optimal Control of Age-Structured Populations in Economy, Demography, and the Environment, Routledge, London–New York, 2013, pp. 277–292.
[6] F. Facchinei, J. S. Pang, Finite-Dimensional Variational Inequalities and Complementarity
Problems, Springer Series in Operations Research, vol. I, Springer, New York, 2003.
[7] A. Nemirovski, Prox-method with rate of convergence O(1/T) for variational inequalities with
Lipschitz continuous monotone operators and smooth convex-concave saddle point
problems, SIAM J. Optim. 15 (2004) 229–251. doi:10.1137/S1052623403425629.
[8] C. Daskalakis, A. Ilyas, V. Syrgkanis, H. Zeng, Training GANs with optimism, arXiv preprint
arXiv:1711.00141. (2018).
[9] G. Gidel, H. Berard, P. Vincent, S. Lacoste-Julien, A Variational Inequality Perspective on
Generative Adversarial Networks, arXiv preprint arXiv:1802.10551. (2018).
[10] M. Liu, W. Zhang, Y. Mroueh, X. Cui, J. Ross, T. Yang, P. Das, A decentralized parallel
algorithm for training generative adversarial nets, arXiv preprint arXiv:1910.12999. (2020).
[11] G. M. Korpelevich, An extragradient method for finding saddle points and for other problems,
Matecon. 12 (1976) 747–756.
[12] Y. Censor, A. Gibali, S. Reich, The subgradient extragradient method for solving variational inequalities in Hilbert space, Journal of Optimization Theory and Applications 148 (2011) 318–335. doi:10.1007/s10957-010-9757-3.
[13] V. V. Semenov, Modified Extragradient Method with Bregman Divergence for Variational
Inequalities, Journal of Automation and Information Sciences 50(8) (2018) 26–37.
doi:10.1615/JAutomatInfScien.v50.i8.30.
[14] P. Tseng, A modified forward-backward splitting method for maximal monotone mappings,
SIAM Journal on Control and Optimization 38 (2000) 431–446.
[15] Y. Shehu, Convergence Results of Forward-Backward Algorithms for Sum of Monotone
Operators in Banach Spaces, Results Math. 74 (2019). doi:10.1007/s00025-019-1061-4.
[16] Y. Shehu, Single projection algorithm for variational inequalities in Banach spaces with
application to contact problem, Acta Math. Sci. 40 (2020) 1045–1063. doi:10.1007/s10473-020-
0412-2.
[17] J. Yang, P. Cholamjiak, P. Sunthrayuth, Modified Tseng's splitting algorithms for the sum of two
monotone operators in Banach spaces, AIMS Mathematics 6(5) (2021) 4873–4900.
doi:10.3934/math.2021286.
[18] P. Cholamjiak, Y. Shehu, Inertial forward-backward splitting method in Banach spaces with
application to compressed sensing, Appl. Math. 64 (2019) 409–435.
[19] F. Bach, K. Y. Levy, A Universal Algorithm for Variational Inequalities Adaptive to Smoothness
and Noise, arXiv preprint arXiv:1902.01637. (2019).
[20] K. Antonakopoulos, V. Belmega, P. Mertikopoulos, An adaptive mirror-prox method for
variational inequalities with singular operators, in: Advances in Neural Information Processing
Systems 32 (NeurIPS), Curran Associates, Inc., 2019, 8455–8465.
[21] F. S. Stonyakin, On the adaptive proximal method for a class of variational inequalities and
related problems, Trudy Inst. Mat. i Mekh. UrO RAN. 25 (2019) 185–197. doi:10.21538/0134-
4889-2019-25-2-185-197.
[22] F. S. Stonyakin, E. A. Vorontsova, M. S. Alkousa, New Version of Mirror Prox for Variational
Inequalities with Adaptation to Inexactness, in: Jacimovic M., Khachay M., Malkova V.,
Posypkin M. (Eds.), Optimization and Applications, OPTIMA 2019, volume 1145 of
Communications in Computer and Information Science, Springer, Cham, 2020, 427–442.
doi:10.1007/978-3-030-38603-0_31.
[23] S. V. Denisov, V. V. Semenov, P. I. Stetsyuk, Bregman Extragradient Method with Monotone
Rule of Step Adjustment, Cybernetics and Systems Analysis. 55 (2019) 377–383.
doi:10.1007/s10559-019-00144-5.
[24] L. M. Bregman, The relaxation method of finding the common point of convex sets and its
application to the solution of problems in convex programming, USSR Computational
Mathematics and Mathematical Physics 7(3) (1967) 200–217.
[25] Yu. Nesterov, Dual extrapolation and its applications to solving variational inequalities and
related problems, Mathematical Programming 109 (2007), 319–344.
[26] L. D. Popov, A modification of the Arrow-Hurwicz method for search of saddle points,
Mathematical notes of the Academy of Sciences of the USSR. 28 (1980) 845–848.
[27] L. Chabak, V. Semenov, Y. Vedel, A New Non-Euclidean Proximal Method for Equilibrium
Problems, In: O. Chertov et al. (Eds.), Recent Developments in Data Science and Intelligent
Analysis of Information, volume 836 of Advances in Intelligent Systems and Computing,
Springer, Cham, 2019, pp. 50–58. doi:10.1007/978-3-319-97885-7_6.
[28] D. A. Nomirovskii, B. V. Rublyov, V. V. Semenov, Convergence of Two-Stage Method with
Bregman Divergence for Solving Variational Inequalities, Cybernetics and Systems Analysis. 55
(2019) 359–368. doi:10.1007/s10559-019-00142-7.
[29] A. Gibali, D. V. Thong, A new low-cost double projection method for solving variational
inequalities, Optim. Eng. 21 (2020) 1613–1634. doi:10.1007/s11081-020-09490-2.
[30] Y. Malitsky, M. K. Tam, A Forward-Backward Splitting Method for Monotone Inclusions
Without Cocoercivity, SIAM Journal on Optimization 30 (2020) 1451–1472.
doi:10.1137/18M1207260.
[31] E. R. Csetnek, Y. Malitsky, M. K. Tam, Shadow Douglas-Rachford Splitting for Monotone
Inclusions, Appl Math Optim. 80 (2019) 665–678. doi:10.1007/s00245-019-09597-8.
[32] V. Cevher, B. C. Vu, A Reflected Forward-Backward Splitting Method for Monotone Inclusions
Involving Lipschitzian Operators, Set-Valued and Variational Analysis 29 (2021) 163–174.
doi:10.1007/s11228-020-00542-4.
[33] J. Diestel, Geometry of Banach Spaces, Springer-Verlag, Berlin–Heidelberg, 1975.
[34] B. Beauzamy, Introduction to Banach Spaces and Their Geometry, North-Holland, Amsterdam,
1985.
[35] Y. I. Alber, Metric and generalized projection operators in Banach spaces: properties and applications, in: Theory and Applications of Nonlinear Operators of Accretive and Monotone Type, vol. 178, Dekker, New York, 1996, pp. 15–50.
[36] M. M. Vainberg, Variational Method and Method of Monotone Operators in the Theory of
Nonlinear Equations, Wiley, New York, 1974.
[37] K. Aoyama, F. Kohsaka, Strongly relatively nonexpansive sequences generated by firmly
nonexpansive-like mappings, Fixed Point Theory Appl. 95 (2014). doi:10.1186/1687-1812-2014-
95.
[38] R.I. Bot, E.R. Csetnek, P.T. Vuong, The forward-backward-forward method from continuous and
discrete perspective for pseudo-monotone variational inequalities in Hilbert spaces, European
Journal of Operational Research 287(1) (2020) 49–60.