<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CEUR Workshop Proceedings</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>with Adaptive Algorithms for Variational Inequalities</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vladimir Semenov</string-name>
          <email>semenov.volodya@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii Denysov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Harbour.Space University</institution>
          ,
          <addr-line>Carrer de Rosa Sensat 9-11, 08005, Barcelona</addr-line>
          ,
          <country country="ES">Spain</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>64/13 Volodymyrska Street, Kyiv, 01161</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Variational Inequality</institution>
          ,
          <addr-line>Network Economics, Blood Supply Chain, adaptive algorithms</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>9</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The effectiveness of adaptive extragradient algorithms for network economics problems is demonstrated on a modified model of a blood supply chain network. An informational system for comparing the behavior of algorithms for solving variational inequalities is described. The described algorithms include adaptive modifications of extrapolation from the past, forward-backward-forward, and the Tseng method. The blood supply chain model is a prominent example of the more general perishable products delivery chain optimization problem. The provided software system enables users to edit model parameters, visualize networks, and solve path-based cost minimization problems using a selected subset of these algorithms.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>Variational inequality (VI) is a powerful approach to modelling a diverse set of problems, including, but not limited to, optimization and equilibrium problems [1-4]. Among other applications, in recent years VIs have shown promising results in some machine learning areas – especially for GANs and other adversarial techniques [5,6]. And, finally, a family of network economics problems can be naturally formulated as VIs. Such formulations were obtained for different kinds of network economics problems in a series of papers by Anna Nagurney [7,8].</p>
      <p>A problem of special interest is modelling a delivery chain of perishable products [9], where it is infeasible to store the product for a long time, and, as a result, a product surplus at a demand point incurs serious discard charges. Moreover, every step of operating on and transporting such a product usually leads to some loss, also with corresponding discard charges. A similar model arises if we consider the blood delivery supply chain, organized as a sequential interaction of blood collection, processing, and consuming facilities – from collection points to hospitals. Modelling of this problem for the structure of the American Red Cross (ARC) was investigated in [10], with a proposal of the VI formulation. Another VI model is used as part of a bi-objective blood supply chain optimization model in [11].</p>
      <p>Intensive development of numerical algorithms for solving VIs started from the extragradient algorithm [12] and an alternative algorithm [13] (now widely known as "extrapolation from the past", or EFP, in the machine learning area). As operator calculation and projection onto the feasible set can be really computationally expensive operations for medium to large sized problems, efforts have been made to build algorithms that require the minimum necessary amount of such operations.</p>
      <p>Also, VI solving algorithms often require a priori knowledge of the Lipschitz constant of the operator to determine the algorithm step size – and that becomes a serious problem for applying them to real-world problems, as calculating it can be complicated or almost impossible. To tackle this difficulty, adaptive versions of VI algorithms were proposed, which allow modifying the step size dynamically during algorithm execution while preserving theoretical guarantees of convergence.</p>
      <p>© 2023 Copyright for this paper by its authors. CEUR Workshop Proceedings (ceur-ws.org).</p>
    </sec>
    <sec id="sec-3">
      <title>2. Mathematical model for optimization of blood supply chain</title>
      <p>Let us formulate the mathematical model of blood supply chain optimization, following [10].</p>
    </sec>
    <sec id="sec-4">
      <title>2.1. Blood supply process and facilities</title>
      <p>The structure of the facilities and the corresponding logistics can be schematically represented as the following graph (Figure 1):</p>
      <p>In the graph above, edges represent different operations in the blood supply process – and
corresponding model parameters, such as collection risks or operational and waste discard costs, are
bound to the edges. Nodes represent facilities taking part in the supply chain. The first node is added
for the model to be concise, and has no specific meaning – but can be considered as a top-level
organization structure unit, e.g., Red Cross regional division, as in [10].</p>
      <p>The process starts with blood collection. A specific feature of the collection layer is that its edges have a risk parameter and corresponding costs associated with it, as the process itself is risky – donors can miss appointments for many reasons – for example, heavy rain greatly decreases the number of visits. The next stages are testing for infection and contamination, separation of components (plasma and red cells), and storing the components in special facilities. In the graph above, blood centers, component labs, and storage facilities are shown as a single layer to make the drawing and model description clearer. Actually, this is often the real case – and even if they are really separated, it does not incur significant model changes, as all edges have the same type of associated parameters.</p>
      <p>Mathematical properties of the shipment stage edges also coincide with parameters from the testing,
processing, and storage stage – but physical facilities are different, so we put them in a separate layer
in the graph.</p>
      <p>And the last layer, distribution, corresponds to the last-mile delivery of blood to demand points, which usually are hospitals. These nodes have a specific role in the model – a stochastic demand and the corresponding surplus and shortage penalties are associated with them.</p>
    </sec>
    <sec id="sec-5">
      <title>2.2. Mathematical model notation</title>
      <p>Let us denote by \(n_c\), \(n_t\), \(n_s\), \(n_h\) the number of nodes of each type: collection, testing/processing/storage, shipment, and demand (hospitals). And let us denote the set of all edges as \(\{e_i\}\), \(i = 1..n_E\), where \(n_E\) is the total number of edges (links) in our supply chain network. And let us denote the in-flow on edge \(e_i\) as \(f_i\) (we will describe loss and out-flow later). The edge flows together form a vector \(f = (f_1, \dots, f_{n_E})\).</p>
      <p>The operations with the blood itself have costs, which can also be associated with the edges. Let us denote by \(c_i(f_i)\) the unit operational cost on edge \(e_i\) (it depends on the flow). Then the total operational cost on the edge is \(f_i \cdot c_i(f_i)\).</p>
      <p>Let us denote by \(P = \{p_j\}\), \(j = 1..n_P\), the set of all simple paths from the dummy source node to every demand point, where \(n_P\) is the total number of such paths. The initial flow on a path \(p_j = \{e_{j_1}, e_{j_2}, \dots, e_{j_{m_j}}\}\) is denoted as \(x_j \ge 0\), and it is the initial flow of the first edge of the path, which is always the edge between the source node and a blood collection node. Our goal is to find the optimal path-flow vector \(x = (x_1, \dots, x_{n_P})\). The first obvious property of the feasible set \(C\) is \(x_j \ge 0\), \(j = 1..n_P\).</p>
      <p>Each stage of blood processing may incur some losses – for example, a small part of the blood is taken to a test facility and discarded afterwards. Let us associate a loss multiplier \(\alpha_i \in (0,1]\) with every edge \(e_i\), \(i = 1..n_E\). We interpret it the next way: if an edge's incoming flow (in-flow) is \(f_i\), then its outcome (out-flow) is \(\alpha_i f_i\). In this paper we assume that the multipliers are independent of the link flow (though it would be interesting to consider the flow-dependent case in the future).</p>
      <p>Losses during blood processing incur waste disposal costs. These can be significant for such kinds of products as blood – so they need to be included in the model. The waste amount for edge \(e_i\) depends only on the edge flow and equals \((1 - \alpha_i) f_i\), so the waste discard function is also flow dependent – let us denote the unit discard cost as \(z_i(f_i)\); the total one will be \(f_i \cdot z_i(f_i)\).</p>
      <p>In terms of path flows, the total loss multiplier of a path \(p_j\) can be calculated as \(\mu_j = \prod_{i: e_i \in p_j} \alpha_i\), \(j = 1..n_P\). As a result, if the initial flow on path \(p_j\) is \(x_j\), the corresponding final path flow is \(\mu_j x_j\), \(\forall j = 1..n_P\). And for each destination node \(d_k\), \(k = 1..n_h\), let us denote the supply amount as \(v_k\). Obviously, we have \(v_k = \sum_{j: p_j \in P_k} \mu_j x_j\), where \(P_k\) is the set of all paths from the source node to demand node \(d_k\).</p>
      <p>At the same time, the real demand \(d_k\) is stochastic – we cannot know the exact blood demand in a hospital for every moment in the future. But we assume we know its probability density function \(\varphi_k(t)\) and probability distribution function \(\Phi_k(t) = P(d_k &lt; t) = \int_0^t \varphi_k(s)\,ds\). We are interested in minimizing the expected difference between \(v_k\) and \(d_k\) – so it is tempting to use something like \(E(\sum_{k=1}^{n_h} (v_k - d_k)^2)\) as the loss function. But the consequences of blood shortage and surplus are essentially different, and it makes sense to consider them separately in the model.</p>
      <p>Let us denote the blood shortage (undersupply) at demand node \(d_k\) as
\(\Delta_k^- = \max\{0,\, d_k - v_k\}\)  (1)
and the blood surplus (oversupply) at the same node as
\(\Delta_k^+ = \max\{0,\, v_k - d_k\}\).  (2)</p>
      <p>Now the corresponding part of the cost function for the model can be formulated the next way:
\(\sum_{k=1}^{n_h} (\lambda_k^- E(\Delta_k^-) + \lambda_k^+ E(\Delta_k^+)),\)
where \(\lambda_k^-\), \(\lambda_k^+\) are the penalties (costs), and \(E(\Delta_k^-)\), \(E(\Delta_k^+)\) are the mathematical expectations of blood shortage and surplus correspondingly.</p>
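      <p>The expectation terms above can be approximated numerically. Below is a minimal sketch (not from the paper) that computes the expected shortage and surplus for one demand node by midpoint integration, assuming a hypothetical uniform demand density on \([a, b]\); the function name and parameters are illustrative:</p>
      <p>
```python
import numpy as np

# Expected shortage E(max{0, d - v}) and surplus E(max{0, v - d}) for one
# demand node with supply v, assuming (for illustration only) a uniform
# demand density on [a, b]; midpoint-rule integration over n grid points.
def expected_shortage_surplus(v, a, b, n=200000):
    dt = (b - a) / n
    t = a + (np.arange(n) + 0.5) * dt   # midpoints of the integration grid
    pdf = 1.0 / (b - a)                 # uniform density value on [a, b]
    shortage = np.sum(np.maximum(t - v, 0.0)) * pdf * dt
    surplus = np.sum(np.maximum(v - t, 0.0)) * pdf * dt
    return shortage, surplus
```
      </p>
      <p>For example, with supply \(v = 5\) and demand uniform on \([0, 10]\), both expectations equal \(1.25\).</p>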
      <p>And, finally, it is preferable for the model to account for the blood collection risks. These risks are associated with the first-layer edges only, and let us assume they are also functions of the flow. Let us denote the risks as \(r_i(f_i)\), \(i = 1..n_c\). Risk minimization and cost minimization are different criteria, so in the cost function they can be weighted differently – let \(\theta\) be the weight for the risk part, and let the cost part have a weight of one.</p>
      <p>It is worth making remarks about edge indices.</p>
      <p>Remark 1: As the blood collection process has sequential stages, it is always possible to enumerate the edges in such a way that any edge from a later stage has an index greater than the index of any edge from an earlier stage. E.g., the index of a distribution edge is always greater than the index of a collection edge. We assume such a numeration in all following reasoning to simplify the model formulation. As a result, the edge indices in any path \(p_j \in P\) form an increasing sequence.</p>
      <p>Taking into account the remark above, each edge flow can be calculated from the path flows:
\(f_i = \sum_{j: e_i \in p_j} \mu_{ij} x_j,\)  (3)
where \(\mu_{ij} = \prod_{l: e_l \in p_j,\, l &lt; i} \alpha_l\) is the product of the loss multipliers of all edges preceding edge \(e_i\) in the path \(p_j\).</p>
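      <p>The edge-flow relation above can be sketched in code: walking each path in increasing edge order accumulates the product of preceding loss multipliers on the fly. This is a minimal illustrative sketch with assumed data structures, not code from the described system:</p>
      <p>
```python
import numpy as np

# Edge flows from path flows: each edge e_i on a path receives
# (product of loss multipliers of the edges preceding e_i) * x_j.
# paths: list of paths, each a list of edge indices in increasing order;
# x: path flows; alpha: per-edge loss multipliers in (0, 1].
def edge_flows(paths, x, alpha):
    f = np.zeros(len(alpha))
    for j, path in enumerate(paths):
        mu = 1.0                  # loss product of edges before the current one
        for i in path:
            f[i] += mu * x[j]     # contribution mu_ij * x_j
            mu *= alpha[i]        # include this edge's loss for the next edge
    return f
```
      </p>
      <p>For a single two-edge path with loss multipliers \((0.5, 1.0)\) and path flow \(2\), the first edge carries flow \(2\) and the second carries \(1\).</p>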
      <p>Remark 2: We have exactly \(n_c\) collection edges on the first layer, so all edges \(e_i\), \(i = 1..n_c\), are collection edges (this gives us a simple way to write the risk component of the cost function below).</p>
    </sec>
    <sec id="sec-6">
      <title>2.3. Optimization problem and variational inequality</title>
      <p>Now we can formulate the supply chain network optimization problem as minimization of the combined cost function
\(\Phi(f) = \sum_{i=1}^{n_E} f_i (c_i(f_i) + z_i(f_i)) + \sum_{k=1}^{n_h} (\lambda_k^- E(\Delta_k^-) + \lambda_k^+ E(\Delta_k^+)) + \theta \sum_{i=1}^{n_c} f_i r_i(f_i)\)  (4)
with regards to (1) – (3). And using (3), we can reformulate (4) in terms of path flows:
\(\Phi(x) = \sum_{j=1}^{n_P} x_j (C_j(x) + Z_j(x)) + \sum_{k=1}^{n_h} (\lambda_k^- E(\Delta_k^-) + \lambda_k^+ E(\Delta_k^+)) + \theta \sum_{j=1}^{n_P} x_j R_j(x),\)  (5)
where \(C_j\) and \(Z_j\) are the unit operation and unit waste discard cost functions for path \(p_j\), and \(R_j\) is the risk cost function for the same path. These functions have the next form:
\(C_j(x) = \sum_{i: e_i \in p_j} \mu_{ij} c_i(f_i)\); \(Z_j(x) = \sum_{i: e_i \in p_j} \mu_{ij} z_i(f_i)\); \(R_j(x) = \sum_{i: e_i \in p_j,\, i = 1..n_c} r_i(f_i)\),
where \(f_i\) is expressed in terms of path flows with (3). The notation is a bit different from [10] to make the algorithm implementation more straightforward, as a path's edge sequence will be used in calculations.</p>
      <p>The problem can be formulated as a classic variational inequality. We need to find \(x^* \in C\) such that
\((x - x^*, \nabla\Phi(x^*)) \ge 0\) \(\forall x \in C,\)  (6)
where \(C = R_+^{n_P}\) and the components of the gradient are
\((\nabla\Phi(x))_j = \sum_{i: e_i \in p_j} \mu_{ij} [c_i(f_i) + z_i(f_i) + f_i (c_i'(f_i) + z_i'(f_i))] + \lambda_k^- \mu_j (\Phi_k(v_k) - 1) + \lambda_k^+ \mu_j \Phi_k(v_k) + \theta (r_{i_1}(f_{i_1}) + r_{i_1}'(f_{i_1}) \cdot f_{i_1}),\)
where \(i_1\) and \(i_m\) are the indices of the first and the last edge in the path \(p_j\), and \(d_k\) is the demand node that \(p_j\) leads to. Again, here \(f_i\) is expressed in terms of path flows, using equality (3).</p>
    </sec>
    <sec id="sec-7">
      <title>3. Extragradient algorithms for variational inequalities</title>
      <p>Variational inequality (6) is a good example of a real-world problem on which the effectiveness of algorithms can be tested and their behavior compared. Let us provide the necessary information about the algorithms used for the numerical experiments.</p>
    </sec>
    <sec id="sec-8">
      <title>3.1. Preliminaries</title>
      <p>At first, we need to introduce some notation. Let \(H\) be a real Hilbert space with inner product \((\cdot, \cdot)\) and induced norm \(\|\cdot\|\), let \(C\) be a non-empty convex closed subset of \(H\), and let \(A: H \to H\) be a mapping.</p>
      <p>Definition 1: A mapping \(A: H \to H\) is called monotone if \((Ax - Ay, x - y) \ge 0\) \(\forall x, y \in H\).</p>
      <p>Definition 2: The following problem is called a variational inequality (VI): find \(x \in C\) such that \((Ax, y - x) \ge 0\) \(\forall y \in C\).  (7)</p>
      <p>Further algorithm formulations will be done for problem (7) with the assumption that the operator \(A\) is monotone and uniformly continuous on any bounded set, and that the solution set of the VI (7) is not empty. Actually, for a finite-dimensional space \(H\) it is enough for the operator to be monotone and continuous. Also, the formulations will use the projection mapping with the next notation:</p>
      <p>Definition 3: The mapping \(P_C: H \to C\) is called the metric projection onto a closed convex subset \(C \subset H\) if \(P_C x\) is the only element of \(C\) for which \(\|P_C x - x\| = \min_{z \in C} \|z - x\|\).  (8)</p>
      <p>The main idea behind the big family of projection methods for solving (7) is the result that \(x \in C\) is a solution of (7) if and only if \(x\) is a fixed point of \(P_C(x - \lambda Ax)\). So, the gradient projection method with the next computational procedure could be used for solving a VI:
\(x_{n+1} = P_C(x_n - \lambda A x_n)\), where the step size \(\lambda &gt; 0\).</p>
      <p>But monotonicity of \(A\) is not enough for the procedure above to converge to a solution of the VI: it requires strong monotonicity or inverse strong monotonicity (co-coercivity). So more advanced algorithms were proposed that do not need extra assumptions on \(A\) – especially the extragradient kind of algorithms. Historically the first extragradient algorithm, proposed in [12], has the next computational procedure:
\(y_n = P_C(x_n - \lambda A x_n)\), \(x_{n+1} = P_C(x_n - \lambda A y_n)\).
It was proved that for monotone and Lipschitz continuous \(A: R^n \to R^n\) the sequence \(\{x_n\}\) generated by the procedure above converges to a solution of (7) if the step size \(\lambda \in (0, \frac{1}{L})\), where \(L\) is the Lipschitz constant of the mapping \(A\). This algorithm requires two mapping calculations and two metric projections on every step, and also initially convergence was proven only for finite-dimensional Euclidean space – but it provides a strong baseline to compare with.</p>
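      <p>To make the procedure concrete, here is a minimal NumPy sketch of the extragradient iteration on a small illustrative problem; the affine operator, its data, and the nonnegative-orthant feasible set are assumptions for the example, not data from the paper:</p>
      <p>
```python
import numpy as np

# Extragradient (Korpelevich) iteration: two projections and two operator
# evaluations per step. Illustrative monotone affine operator A(x) = Mx + q
# on C = R^n_+ (the symmetric part of M is positive definite).
M = np.array([[2.0, 1.0], [-1.0, 2.0]])
q = np.array([-2.0, 1.0])
A = lambda x: M @ x + q
proj = lambda x: np.maximum(x, 0.0)    # projection onto the nonnegative orthant

lam = 0.2                              # step size below 1/L, L = ||M|| = sqrt(5)
x = np.zeros(2)
for _ in range(500):
    y = proj(x - lam * A(x))           # extrapolation point
    x = proj(x - lam * A(y))           # main update, operator taken at y
```
      </p>
      <p>For these data the iterates approach the VI solution \(x^* = (1, 0)\), at which \(A(x^*) = 0\).</p>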
      <p>Later, many improvements and modifications were made by different authors – to avoid extra calculations, prove convergence under weaker assumptions, drop the requirement to know the Lipschitz constant in advance, or use Bregman divergence instead of Euclidean distance to speed up the projection – see [14-20]. Part of these results were obtained by the research group from Taras Shevchenko National University of Kyiv, to which the current paper's authors belong. One more important direction is the development of parallelized versions of VI algorithms, as in [21].</p>
    </sec>
    <sec id="sec-9">
      <title>3.2. Adaptive modification of extragradient algorithms</title>
      <p>Let us describe the other selected algorithms, which are implemented as part of the system.</p>
      <p>The first method was proposed in [13] in 1980, and gained popularity nowadays under the name "extrapolation from the past". It has the next step calculation procedure:
\(y_n = P_C(x_n - \lambda A y_{n-1})\), \(x_{n+1} = P_C(x_n - \lambda A y_n)\),  (9)
where convergence is proved with \(\lambda \in (0, \frac{1}{3L})\) for finite-dimensional space. This procedure uses only one mapping calculation on the step, but with two projections. Weak convergence in infinite-dimensional space was proved in [20], where a modification with a single projection and an auxiliary hyperplane projection was also proposed.</p>
      <p>Another interesting method was proposed in [14] by P. Tseng in the year 2000. The step formula has the next form:
\(y_n = P_C(x_n - \lambda A x_n)\), \(x_{n+1} = y_n - \lambda (A y_n - A x_n)\),  (10)
where weak convergence is proved for \(\lambda \in (0, \frac{1}{L})\). Here we have one projection and two mapping calculations on every step.</p>
      <p>And, finally, an algorithm with an elegant computational procedure was proposed by Malitsky and Tam in [15]. The step calculation is
\(x_{n+1} = P_C(x_n - \lambda (2 A x_n - A x_{n-1}))\)
in the case of a linear mapping \(A\), and
\(x_{n+1} = P_C(x_n - \lambda A x_n - \lambda (A x_n - A x_{n-1}))\)  (11)
in the generic case. The algorithm converges with \(\lambda \in (0, \frac{1}{2L})\). Here we need only one mapping calculation and one projection per step.</p>
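      <p>For comparison with the classic extragradient, here is a minimal sketch of the extrapolation-from-the-past step, which reuses the single operator value computed per iteration; the test operator and all data are illustrative assumptions:</p>
      <p>
```python
import numpy as np

# Extrapolation from the past (Popov): one operator evaluation and two
# projections per step, reusing A(y_{n-1}) from the previous iteration.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # illustrative monotone affine operator
q = np.array([-2.0, 1.0])
A = lambda x: M @ x + q
proj = lambda x: np.maximum(x, 0.0)

lam = 0.1                  # below 1/(3L) for this operator, L = sqrt(5)
x = np.zeros(2)
Ay_prev = A(np.zeros(2))   # A(y_0) for the first iteration
for _ in range(2000):
    y = proj(x - lam * Ay_prev)
    Ay_prev = A(y)         # the only operator evaluation of this step
    x = proj(x - lam * Ay_prev)
```
      </p>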
      <p>All methods above in their original form require knowledge of the Lipschitz constant of the mapping \(A\). Let us describe the adaptive modifications, where the step size \(\lambda\) is modified during the algorithm run to achieve convergence without using an a priori known \(L\), which can be hard to calculate.</p>
      <p>Here is the adaptive version of the extrapolation from the past algorithm [17]. At first, we select \(x_0, y_0 \in C\), \(\tau \in (0, \frac{1}{3})\), \(\lambda_0 &gt; 0\). And starting from \(n = 0\) we use the next iteration procedure:
\(x_{n+1} = P_C(x_n - \lambda_n A y_n)\),
\(y_{n+1} = P_C(x_{n+1} - \lambda_{n+1} A y_n)\),
\(\lambda_{n+1} = \min\{\lambda_n,\ \tau \|y_n - y_{n-1}\| / \|A y_n - A y_{n-1}\|\}\) if \(A y_n \ne A y_{n-1}\), otherwise \(\lambda_{n+1} = \lambda_n\),
with the stop condition \(x_{n+1} = y_n = x_n\).</p>
      <p>The Tseng algorithm can also be modified for adaptive step size calculation [18]. Let \(x_1 \in C\), \(\tau \in (0, 1)\), \(\lambda_1 &gt; 0\). The step is the next (again, starting from \(n = 1\)):
\(y_n = P_C(x_n - \lambda_n A x_n)\),
\(x_{n+1} = y_n - \lambda_n (A y_n - A x_n)\),
\(\lambda_{n+1} = \min\{\lambda_n,\ \tau \|x_n - y_n\| / \|A x_n - A y_n\|\}\) if \(A x_n \ne A y_n\), otherwise \(\lambda_{n+1} = \lambda_n\),
with the stop condition \(x_n = y_n\).</p>
      <p>And here is an adaptive modification of the Malitsky and Tam algorithm [17]. Let \(x_0, x_1 \in H\), \(\lambda_0, \lambda_1 &gt; 0\), \(\tau \in (0, \frac{1}{2})\). Then
\(x_{n+1} = P_C(x_n - \lambda_n A x_n - \lambda_{n-1} (A x_n - A x_{n-1}))\),
\(\lambda_{n+1} = \min\{\lambda_n,\ \tau \|x_{n+1} - x_n\| / \|A x_{n+1} - A x_n\|\}\) if \(A x_{n+1} \ne A x_n\), otherwise \(\lambda_{n+1} = \lambda_n\).</p>
      <p>Remark 3: For all adaptive versions above, the \(\lambda\) update procedure does not incur extra mapping calculations – all values are cached and reused in the algorithm implementations, so the number of mapping calculations and projections is the same for both the adaptive and stationary variants.</p>
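      <p>As an illustration of Remark 3, here is a sketch of an adaptive Malitsky-Tam style iteration: the step size update reuses operator values that were already computed, so no extra mapping calls are needed. The affine test operator and all constants are assumptions for the example:</p>
      <p>
```python
import numpy as np

# Adaptive Malitsky-Tam style iteration: the step size is updated from
# local curvature estimates instead of a known Lipschitz constant.
M = np.array([[2.0, 1.0], [-1.0, 2.0]])   # illustrative monotone operator
q = np.array([-2.0, 1.0])
A = lambda x: M @ x + q
proj = lambda x: np.maximum(x, 0.0)

tau = 0.4                        # tau in (0, 1/2)
lam_prev, lam = 1.0, 1.0         # lambda_0, lambda_1
x_prev, x = np.zeros(2), np.array([0.5, 0.5])
Ax_prev, Ax = A(x_prev), A(x)
for _ in range(1000):
    x_next = proj(x - lam * Ax - lam_prev * (Ax - Ax_prev))
    Ax_next = A(x_next)                       # cached for the next iteration
    diff = np.linalg.norm(Ax_next - Ax)
    if diff > 0.0:                            # adaptive step, no extra A calls
        lam_next = min(lam, tau * np.linalg.norm(x_next - x) / diff)
    else:
        lam_next = lam
    x_prev, x, Ax_prev, Ax = x, x_next, Ax, Ax_next
    lam_prev, lam = lam, lam_next
```
      </p>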
    </sec>
    <sec id="sec-10">
      <title>4. Software for solving blood supply chain optimization problem</title>
      <p>To allow solving the blood supply chain optimization problem (5), formulated as VI (6), it was added to our numerical experiments software suite for VI algorithms. The system allows plugging in a subset of the implemented algorithms to solve one of the implemented problems and analyzing the algorithms' behavior.</p>
    </sec>
    <sec id="sec-11">
      <title>4.1. Brief software description</title>
      <p>Basically, all VI-based problems have common parts, which are used by the algorithms – and the same is true for the extragradient algorithms family. A problem should have constraints with a projection operation, and provide the mapping calculation routine. On the other side, a big part of the algorithms' parameters is common for all algorithms – e.g., the starting point or the initial step size. And some parameters are more specific – like the second starting point or the adaptive step size calculation multiplier. That is reflected inside the system with two class hierarchies – for algorithms and problems correspondingly.</p>
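      <p>The two hierarchies can be sketched as follows. This is an illustrative sketch only – the class and method names are assumptions, not the actual repository API:</p>
      <p>
```python
import numpy as np

# A problem exposes the two things every extragradient-type algorithm
# needs: projection onto the feasible set and the mapping calculation.
class Problem:
    def project(self, x):
        raise NotImplementedError
    def operator(self, x):
        raise NotImplementedError

class NonNegativeAffineProblem(Problem):
    """Toy problem: A(x) = Mx + q on C = R^n_+."""
    def __init__(self, M, q):
        self.M, self.q = M, q
    def project(self, x):
        return np.maximum(x, 0.0)
    def operator(self, x):
        return self.M @ x + self.q

# Algorithms share common parameters (starting point, step size) and
# differ only in how a single step is performed.
class Algorithm:
    def __init__(self, problem, x0, lam):
        self.problem, self.x, self.lam = problem, x0, lam
    def step(self):
        raise NotImplementedError
    def run(self, iterations):
        for _ in range(iterations):
            self.step()
        return self.x

class Extragradient(Algorithm):
    def step(self):
        p, lam = self.problem, self.lam
        y = p.project(self.x - lam * p.operator(self.x))
        self.x = p.project(self.x - lam * p.operator(y))
```
      </p>
      <p>A test suite can then run any subset of Algorithm subclasses against any Problem instance and record the iteration history.</p>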
      <p>The simplified schema is shown in Figure 2. The test suite is responsible for running the selected set of algorithms on the problem and saving the run history (time and all problem and algorithm parameters on every iteration). It also interacts with the graphing component (not shown in the figure) to draw convergence rate graphs according to different metrics.</p>
      <p>The system2 contains a Python 3 implementation of the algorithms mentioned above, both adaptive and static modifications. Also, more than ten test problems are implemented within the architecture above, from simple test problems to more realistic ones, such as PageRank calculation or traffic network equilibrium search.</p>
      <p>The functionality can be used from code and the command line (selecting the problem, the algorithm set, and run parameters), which is convenient enough for researchers with Python programming skills. For the blood supply chain problem, we added a visual interface for problem editing, running the algorithms, and obtaining the results. The problem selector and editor UI (Figure 3 below) is implemented as a web application using the Plotly Dash framework [22].
2 Source code repository: https://github.com/compmath-sdeni/vi-alg-suite</p>
      <p>Figure 4 shows how calculation results are presented in the web version of the UI. For now, the output includes a convergence rate graph (iteration number vs. distance between successive approximate solutions) and textual information with final results, timing, and run parameters. For problems with a known solution, the graph can be switched to iteration vs. distance to the real solution, and the horizontal axis can be switched to calculation time instead of the iteration number. Within the textual information we also have specific characteristics of the problem, which are defined within the problem class – for example, here we have the supply amounts for each hospital. Under the hood, the system also saves a detailed run history for every algorithm (in tabular format), so a researcher can monitor the state and behavior on every iteration.</p>
      <p>It can be seen that, in terms of precision after 1000 iterations, the adaptive algorithm of Tseng slightly outperforms EFP and Malitsky-Tam (MT) on this test problem. At the same time, the numbers show that the time for 1000 iterations of Tseng is two times bigger (0.64 sec vs. 0.32 sec) compared to both EFP and MT – which is expected, as in this problem the projection is very cheap (\(C = R_+^{n_P}\)), but the mapping computation is very expensive (because of the complicated derivative in (6)). It is also worth noting that, despite a much smaller distance between step iterations (0.001 for Tseng vs. 0.002 and 0.003 for MT and EFP), the goal function value is not so different (80493 vs. 80499).</p>
      <p>It should be noted that timing from a single run is not a stable measure of performance – but similar relative timings were obtained when running many experiments repeatedly and averaging the results. Calculations were done on an Ubuntu Linux machine with an Intel Core i7-1065G7 1.3-3.9 GHz and 16 GB RAM, with Python 3.10 and NumPy 1.24.3.</p>
    </sec>
    <sec id="sec-12">
      <title>5. Conclusion</title>
      <p>The considered adaptive versions of extragradient algorithms allow effectively solving the blood supply chain optimization problem without the need for complicated handcrafting of the step size and starting parameters. The implementation-friendly model formulation allows achieving reasonable performance and can be used for other problems of the network economics family. And the developed software system for applying VI algorithms to different kinds of problems considerably decreases the time needed both to get a solution for a suitable problem and to conduct numerical experiments comparing the behavior of algorithms from the extragradient family. As further development directions, it is worth adding other types of algorithms, which are suitable for some subsets of problems, and adding a possibility for researchers to more easily plug their own algorithms and problems into the system.</p>
    </sec>
    <sec id="sec-13">
      <title>6. Acknowledgements</title>
      <p>This work was supported by the Ministry of Education and Science of Ukraine (project
"Computational algorithms and optimization for artificial intelligence, medicine and defense",
0122U002026) and Vodafone Business machine learning and data science internship program.</p>
    </sec>
    <sec id="sec-14">
      <title>7. References</title>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>F.</given-names>
            <surname>Facchinei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Pang</surname>
          </string-name>
          , Finite-Dimensional Variational Inequalities and Complementarity Problems, Springer Series in Operations Research, vol. II, Springer, New York,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Kinderlehrer</surname>
          </string-name>
          ,
          G. Stampacchia,
          <article-title>An Introduction to Variational Inequalities and Their Applications</article-title>
          ,
          <source>Society for Industrial and Applied Mathematics</source>
          , Philadelphia,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>Kassay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Radulescu</surname>
          </string-name>
          , Equilibrium Problems and Applications. London: Academic Press,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Polyak</surname>
          </string-name>
          , Finding Nonlinear Production-Consumption Equilibrium,
          <source>arXiv preprint arXiv:2204.04496</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Goodfellow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Pouget-Abadie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mirza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Warde-Farley</surname>
          </string-name>
          , Sh. Ozair, A. Courville, Y. Bengio
          , Generative Adversarial Networks,
          <source>Advances in Neural Information Processing Systems</source>
          (
          <year>2014</year>
          )
          <fpage>2672</fpage>
          -
          <lpage>2680</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Madry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Makelov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Schmidt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tsipras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vladu</surname>
          </string-name>
          ,
          <article-title>Towards deep learning models resistant to adversarial attacks</article-title>
          ,
          <source>arXiv preprint arXiv:1706.06083</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Anna</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <source>Network economics: A variational inequality approach</source>
          , Kluwer Academic Publishers, Dordrecht,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <source>Supply Chain Network Economics: Dynamics of Prices, Flows, and Profits</source>
          , Edward Elgar Publishing,
          <year>2008</year>
          . doi:10.1111/j.1467-9787.2008.00567_4.x.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Masoumi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <source>Networks Against Time: Supply Chain Analytics for Perishable Products</source>
          ,
          <year>2013</year>
          . doi:10.1007/978-1-4614-6277-4.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Nagurney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. H.</given-names>
            <surname>Masoumi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <article-title>Supply Chain Network Operations Management of a Blood Banking System with Cost and Risk Minimization</article-title>
          ,
          <source>Comput Manag Sci</source>
          <volume>9</volume>
          (
          <year>2012</year>
          )
          <fpage>205</fpage>
          -
          <lpage>231</lpage>
          . doi:10.1007/s10287-011-0133-z.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Attari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pasandideh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Niaki</surname>
          </string-name>
          ,
          <article-title>A hybrid robust stochastic programming for a bi-objective blood collection facilities problem (Case study: Iranian blood transfusion network)</article-title>
          ,
          <source>Journal of Industrial and Production Engineering</source>
          <volume>36</volume>
          (
          <year>2019</year>
          )
          <fpage>154</fpage>
          -
          <lpage>167</lpage>
          . doi:10.1080/21681015.2019.1645747.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Korpelevich</surname>
          </string-name>
          ,
          <article-title>An extragradient method for finding saddle points and for other problems</article-title>
          ,
          <source>Matecon</source>
          <volume>12</volume>
          (
          <year>1976</year>
          )
          <fpage>747</fpage>
          -
          <lpage>756</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>L. D.</given-names>
            <surname>Popov</surname>
          </string-name>
          ,
          <article-title>A modification of the Arrow-Hurwicz method for search of saddle points</article-title>
          ,
          <source>Mathematical notes of the Academy of Sciences of the USSR</source>
          <volume>28</volume>
          (
          <year>1980</year>
          )
          <fpage>845</fpage>
          -
          <lpage>848</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>P.</given-names>
            <surname>Tseng</surname>
          </string-name>
          ,
          <article-title>A modified forward-backward splitting method for maximal monotone mappings</article-title>
          ,
          <source>SIAM Journal on Control and Optimization</source>
          <volume>38</volume>
          (
          <year>2000</year>
          )
          <fpage>431</fpage>
          -
          <lpage>446</lpage>
          . doi:10.1137/S0363012998338806.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Malitsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. K.</given-names>
            <surname>Tam</surname>
          </string-name>
          ,
          <article-title>A Forward-Backward Splitting Method for Monotone Inclusions Without Cocoercivity</article-title>
          ,
          <source>SIAM Journal on Optimization</source>
          <volume>30</volume>
          (
          <year>2020</year>
          )
          <fpage>1451</fpage>
          -
          <lpage>1472</lpage>
          . doi:10.1137/18M1207260.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Beck</surname>
          </string-name>
          ,
          <source>First-Order Methods in Optimization</source>
          , Society for Industrial and Applied Mathematics, Philadelphia,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>S.</given-names>
            <surname>Denysov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>Adaptive Variants of the Extrapolation from the Past Method and the Operator Extrapolation Method</article-title>
          , in:
          <string-name>
            <given-names>S.</given-names>
            <surname>Shkarlet</surname>
          </string-name>
          et al. (eds.),
          <source>Mathematical Modeling and Simulation of Systems, MODS-2022, Lecture Notes in Networks and Systems 667</source>
          , Springer, Cham,
          <year>2023</year>
          , pp.
          <fpage>49</fpage>
          -
          <lpage>60</lpage>
          . doi:10.1007/978-3-031-30251-0_4.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>S. V.</given-names>
            <surname>Denisov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. I.</given-names>
            <surname>Stetsyuk</surname>
          </string-name>
          ,
          <article-title>Bregman Extragradient Method with Monotone Rule of Step Adjustment</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>55</volume>
          (
          <year>2019</year>
          )
          <fpage>377</fpage>
          -
          <lpage>383</lpage>
          . doi:10.1007/s10559-019-00144-5.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chabak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Vedel</surname>
          </string-name>
          ,
          <article-title>A New Non-Euclidean Proximal Method for Equilibrium Problems</article-title>
          , in:
          <string-name>
            <given-names>O.</given-names>
            <surname>Chertov</surname>
          </string-name>
          et al. (eds.),
          <source>Recent Developments in Data Science and Intelligent Analysis of Information</source>
          ,
          <source>Advances in Intelligent Systems and Computing 836</source>
          , Springer, Cham,
          <year>2019</year>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>58</lpage>
          . doi:10.1007/978-3-319-97885-7_6.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>Y. V.</given-names>
            <surname>Malitsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V. V.</given-names>
            <surname>Semenov</surname>
          </string-name>
          ,
          <article-title>An extragradient algorithm for monotone variational inequalities</article-title>
          ,
          <source>Cybernetics and Systems Analysis</source>
          <volume>50</volume>
          (
          <year>2014</year>
          )
          <fpage>271</fpage>
          -
          <lpage>277</lpage>
          . doi:10.1007/s10559-014-9614-8.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>M.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Mroueh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ross</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Das</surname>
          </string-name>
          ,
          <article-title>A decentralized parallel algorithm for training generative adversarial nets</article-title>
          ,
          <source>arXiv preprint arXiv:1910.12999</source>
          ,
          <year>2019</year>
          . doi:10.48550/arXiv.1910.12999.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [21]
          Plotly Technologies Inc.,
          <source>Collaborative data science</source>
          , Montréal, QC,
          <year>2015</year>
          . https://plot.ly.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>