          On a Local Search for Hexamatrix Games

                          Andrei Orlov and Alexander Strekalovsky

                      Matrosov Institute for System Dynamics and Control
                       Theory of Siberian Branch of RAS, Irkutsk, Russia
                                    {anor,strekal}@icc.ru
                                http://nonconvex.isc.irk.ru



       Abstract. The problem of finding a Nash equilibrium in a polymatrix game of
       three players (a hexamatrix game) is considered. The issue of local search in the
       equivalent nonconvex optimization problem is investigated. First, we study the
       special local search method (SLSM), based on a linearization of the original
       problem with respect to its basic nonconvexity. Then a new local search procedure
       is elaborated. This procedure exploits the bilinear structure of the objective
       function and is based on the consecutive solution of auxiliary linear programs.
       At the end of the paper the results of a comparative computational simulation of
       the two local search methods are presented and analyzed.

       Keywords: Polymatrix games, hexamatrix games, global search theory, local
       search, computational simulation


1    Introduction

As is well known, efficient numerical methods for solving problems of Game Theory are
an important issue in contemporary mathematical optimization [9]. One of the
conventional solution concepts in Game Theory is the Nash equilibrium: at such a
point it is not profitable for any player to change their strategy unilaterally. Unfor-
tunately, there are no efficient algorithms for finding Nash equilibria in games of a
general form with a large number of the players' strategies. From the optimization
viewpoint, even the classical bimatrix game has a nonconvex structure [7, 16], because the
problem of finding a Nash equilibrium in such a game is equivalent to a noncon-
vex bilinear programming problem. Therefore, classical optimization methods are not
directly applicable to solving bimatrix games.
    A new nonconvex optimization approach to the numerical finding of Nash equilibria
in bimatrix games, based on Mills' theorem [5] and the Global Search Theory (GST) [10,
12], has demonstrated its efficiency on large-scale problems [7, 16, 19]. Recently, the
theoretical foundation for finding Nash equilibria in polymatrix games [20] (a
generalization of Mills' theorem) was developed [14]. In particular, a polymatrix game
of three players (a hexamatrix game) can be reduced to a nonconvex mathematical
optimization problem with a triple bilinear structure in the objective function [14].

Copyright c by the paper’s authors. Copying permitted for private and academic purposes.
In: A. Kononov et al. (eds.): DOOR 2016, Vladivostok, Russia, published at http://ceur-ws.org

    The reduced problem belongs to the class of nonconvex optimization problems with
functions of A.D. Alexandrov, since it is easy to show that a bilinear function can be
represented as the difference of two convex functions [7, 16]. Such functions are called
d.c. functions [1, 10, 12]. Therefore, the Global Search Theory for d.c. minimization
(maximization) problems [10, 12] can be applied to solve it. The first results of its
application to hexamatrix games can be found in [8].
    According to the GST, one of the key elements of global search algorithms for the
reduced problems is a specialized local search method that takes into account the
structure of the problem under investigation [11, 13, 15, 17]. Our experience shows that
the efficiency of the whole Global Search Algorithm depends heavily on the efficiency
of the local search.
    In this paper two local search methods for hexamatrix games are studied. The
first method employs the d.c. structure of the problem in question and applies a
linearization of the problem with respect to its basic nonconvexity [10, 13, 18]. The
second method is based on the consecutive solution of auxiliary linear programs and
employs the bilinearity of the objective function [7, 16, 17]. The results of a comparative
computational simulation of these methods are presented and analyzed.


2     Problem Formulation
Consider the following hexamatrix game with mixed strategies:
$$
\left.
\begin{array}{l}
F_1(x, y, z) \triangleq \langle x, A_1 y + A_2 z \rangle \uparrow \max\limits_{x}, \quad x \in S_m, \\
F_2(x, y, z) \triangleq \langle y, B_1 x + B_2 z \rangle \uparrow \max\limits_{y}, \quad y \in S_n, \\
F_3(x, y, z) \triangleq \langle z, C_1 x + C_2 y \rangle \uparrow \max\limits_{z}, \quad z \in S_l,
\end{array}
\right\}
$$
where $S_p = \{ u = (u_1, \ldots, u_p)^T \in \mathbb{R}^p \mid u_i \geq 0, \ \sum_{i=1}^{p} u_i = 1 \}$, $p = m, n, l$.
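To fix notation, the payoffs and the simplex constraint can be sketched as follows. This is a minimal illustration of ours, not from the paper; the matrix dimensions (A1: m×n, A2: m×l, B1: n×m, B2: n×l, C1: l×m, C2: l×n) are those implied by the products above.

```python
import numpy as np

def payoffs(x, y, z, A1, A2, B1, B2, C1, C2):
    """Payoffs F1, F2, F3 of the hexamatrix game at mixed strategies (x, y, z)."""
    F1 = x @ (A1 @ y + A2 @ z)   # player 1: <x, A1 y + A2 z>
    F2 = y @ (B1 @ x + B2 @ z)   # player 2: <y, B1 x + B2 z>
    F3 = z @ (C1 @ x + C2 @ y)   # player 3: <z, C1 x + C2 y>
    return F1, F2, F3

def in_simplex(u, tol=1e-9):
    """Membership test for the standard simplex S_p."""
    return bool(np.all(u >= -tol) and abs(u.sum() - 1.0) <= tol)
```

For instance, with pure strategies x = (1, 0) and y = (0, 1), player 1's payoff reduces to the single matrix entry (A1)_{1,2}.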

Definition 1. a) The triple $(x^*, y^*, z^*) \in S_m \times S_n \times S_l$ satisfying the inequalities
$$
\left.
\begin{array}{l}
v_1^* = v_1(x^*, y^*, z^*) \triangleq F_1(x^*, y^*, z^*) \geq F_1(x, y^*, z^*) \ \forall x \in S_m, \\
v_2^* = v_2(x^*, y^*, z^*) \triangleq F_2(x^*, y^*, z^*) \geq F_2(x^*, y, z^*) \ \forall y \in S_n, \\
v_3^* = v_3(x^*, y^*, z^*) \triangleq F_3(x^*, y^*, z^*) \geq F_3(x^*, y^*, z) \ \forall z \in S_l
\end{array}
\right\}
$$
will henceforth be called a Nash equilibrium point of the game
$\Gamma_3 = \Gamma(A, B, C)$ ($A = (A_1, A_2)$, $B = (B_1, B_2)$, $C = (C_1, C_2)$). Herewith, the strategies
$x^*$, $y^*$, and $z^*$ will be called the equilibrium strategies.
    b) The numbers $v_1^*$, $v_2^*$, and $v_3^*$ will be called the payoffs of players 1, 2, and 3,
respectively, at the equilibrium point $(x^*, y^*, z^*)$.
    c) Denote the set of all Nash equilibrium points of the game
$\Gamma_3 = \Gamma(A, B, C)$ by $NE = NE(\Gamma_3) = NE(\Gamma(A, B, C))$.
    It is well known that in the game $\Gamma_3 = \Gamma(A, B, C)$, due to Nash's
theorem [14], there exists a Nash equilibrium point in mixed strategies.
    From the viewpoint of a practical numerical search, the following definition of an
approximate Nash equilibrium point is appropriate.

Definition 2. The triple $(x^0, y^0, z^0) \in S_m \times S_n \times S_l =: S$ satisfying the inequalities
$$
\left.
\begin{array}{l}
F_1(x^0, y^0, z^0) + \varepsilon \geq F_1(x, y^0, z^0) \ \forall x \in S_m, \\
F_2(x^0, y^0, z^0) + \varepsilon \geq F_2(x^0, y, z^0) \ \forall y \in S_n, \\
F_3(x^0, y^0, z^0) + \varepsilon \geq F_3(x^0, y^0, z) \ \forall z \in S_l
\end{array}
\right\}
$$
will be called an $\varepsilon$-Nash equilibrium point of the game $\Gamma_3$ ($(x^0, y^0, z^0) \in NE(\Gamma_3, \varepsilon)$).
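Since each $F_i$ is linear in the deviating player's own strategy, the supremum over the simplex in Definition 2 is attained at a vertex, so the conditions can be checked against the largest coordinate of each player's "reply vector". A hedged sketch of such a check (the function name is ours):

```python
import numpy as np

def is_eps_nash(x, y, z, A1, A2, B1, B2, C1, C2, eps=0.0):
    """Check Definition 2: a linear function attains its maximum over the
    simplex at a vertex, i.e. at the maximal coordinate of the reply vector."""
    r1 = A1 @ y + A2 @ z          # player 1's reply vector
    r2 = B1 @ x + B2 @ z          # player 2's reply vector
    r3 = C1 @ x + C2 @ y          # player 3's reply vector
    return bool(x @ r1 + eps >= r1.max()
                and y @ r2 + eps >= r2.max()
                and z @ r3 + eps >= r3.max())
```

With all six matrices zero, every triple of mixed strategies trivially passes; a single nonzero payoff entry already makes the uniform strategies fail for small ε.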

    Further, consider the following optimization problem ($\sigma \triangleq (x, y, z, \alpha, \beta, \gamma)$):
$$
\left.
\begin{array}{c}
\Phi(\sigma) \triangleq \langle x, A_1 y + A_2 z \rangle + \langle y, B_1 x + B_2 z \rangle + \langle z, C_1 x + C_2 y \rangle - \alpha - \beta - \gamma \uparrow \max\limits_{\sigma}, \\
\sigma \in D \triangleq \{ (x, y, z, \alpha, \beta, \gamma) \in \mathbb{R}^{m+n+l+3} \mid x \in S_m, \ y \in S_n, \ z \in S_l, \\
A_1 y + A_2 z \leq \alpha e_m, \ B_1 x + B_2 z \leq \beta e_n, \ C_1 x + C_2 y \leq \gamma e_l \},
\end{array}
\right\} \eqno{(\mathcal{P})}
$$
where $e_p = (1, 1, \ldots, 1) \in \mathbb{R}^p$, $p = m, n, l$.

Lemma 1. [14] The objective function of Problem (P) is nonpositive at each feasible
point:
$$\Phi(\sigma) = \Phi(x, y, z, \alpha, \beta, \gamma) \leq 0 \quad \forall \sigma \in D.$$
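The lemma follows because $x \in S_m$ and $A_1 y + A_2 z \leq \alpha e_m$ give $\langle x, A_1 y + A_2 z \rangle \leq \alpha$, and similarly for the other two terms. A small numerical illustration of ours, choosing α, β, γ as the smallest values feasible for random mixed strategies:

```python
import numpy as np

def phi(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2):
    """Objective function of Problem (P)."""
    return (x @ (A1 @ y + A2 @ z) + y @ (B1 @ x + B2 @ z)
            + z @ (C1 @ x + C2 @ y) - a - b - g)

rng = np.random.default_rng(42)
m, n, l = 4, 5, 6
A1, A2 = rng.uniform(-10, 10, (m, n)), rng.uniform(-10, 10, (m, l))
B1, B2 = rng.uniform(-10, 10, (n, m)), rng.uniform(-10, 10, (n, l))
C1, C2 = rng.uniform(-10, 10, (l, m)), rng.uniform(-10, 10, (l, n))

for _ in range(100):
    # random mixed strategies and the smallest feasible alpha, beta, gamma
    x, y, z = rng.dirichlet(np.ones(m)), rng.dirichlet(np.ones(n)), rng.dirichlet(np.ones(l))
    a = (A1 @ y + A2 @ z).max()
    b = (B1 @ x + B2 @ z).max()
    g = (C1 @ x + C2 @ y).max()
    assert phi(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2) <= 1e-9  # Lemma 1
```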

   The following theorem establishes the equivalence of Problem (P) and the problem
of finding a Nash equilibrium in the game $\Gamma_3$.

Theorem 1. [14] The point $(x^*, y^*, z^*)$ is a Nash equilibrium point in the hexama-
trix game $\Gamma(A, B, C) = \Gamma_3$ if and only if it is a part of a global solution
$\sigma_* \triangleq (x^*, y^*, z^*, \alpha_*, \beta_*, \gamma_*) \in \mathbb{R}^{m+n+l+3}$ to Problem (P). At the same time, the numbers $\alpha_*$,
$\beta_*$, and $\gamma_*$ are the payoffs of the first, the second, and the third players, respectively, in
the game $\Gamma_3$: $\alpha_* = v_1(x^*, y^*, z^*)$, $\beta_* = v_2(x^*, y^*, z^*)$,
$\gamma_* = v_3(x^*, y^*, z^*)$. In addition, the optimal value $\mathcal{V}(\mathcal{P})$ of Problem (P) is equal to
zero:
$$\mathcal{V}(\mathcal{P}) = \Phi(x^*, y^*, z^*, \alpha_*, \beta_*, \gamma_*) = 0.$$

  Furthermore, it is easy to prove the following result on the relationship between an
approximate solution of Problem (P) and an approximate Nash equilibrium.

Proposition 1. [14] Suppose the point $(x^0, y^0, z^0, \alpha_0, \beta_0, \gamma_0)$ is an $\varepsilon$-solution to Prob-
lem (P). Then the triple $(x^0, y^0, z^0)$ is an $\varepsilon$-Nash equilibrium point for the game
$\Gamma_3 = \Gamma(A, B, C)$: $(x^0, y^0, z^0) \in NE(\Gamma_3, \varepsilon)$.

    Thus, the search for a Nash equilibrium can be carried out by approximately solving
Problem (P). To this end, we propose to use the Global Search Theory mentioned
above [10, 12]. According to this theory, the global search consists of two principal
stages: 1) a local search, which takes into account the structure of the problem under
scrutiny; 2) procedures based on the Global Optimality Conditions (GOC) [10, 12],
which allow improving the point provided by the local search method, in other words,
escaping a local pit.
    The remainder of the paper is devoted to the first stage of the approach.

3     Local Search Method for D.C. Formulation of Problem (P)
It can be readily seen that the objective function of Problem (P) (which is a special
kind of quadratic function with a bilinear structure) is a d.c. function, since it can be
represented as a difference of two convex functions, for example, in the following way:
$$\Phi(x, y, z, \alpha, \beta, \gamma) = h_0(x, y, z) - g_0(x, y, z, \alpha, \beta, \gamma), \eqno{(1)}$$
where
$$
\left.
\begin{array}{l}
h_0(x, y, z) = \dfrac{1}{4} \bigl[ \|x + A_1 y\|^2 + \|x + A_2 z\|^2 + \|B_1 x + y\|^2 \\
\qquad\qquad\qquad + \|y + B_2 z\|^2 + \|C_1 x + z\|^2 + \|C_2 y + z\|^2 \bigr], \\[1mm]
g_0(\sigma) = \dfrac{1}{4} \bigl[ \|x - A_1 y\|^2 + \|x - A_2 z\|^2 + \|y - B_1 x\|^2 \\
\qquad\qquad\qquad + \|y - B_2 z\|^2 + \|C_1 x - z\|^2 + \|C_2 y - z\|^2 \bigr] + \alpha + \beta + \gamma.
\end{array}
\right\} \eqno{(2)}
$$

    Besides, one can show that these functions are convex in $(x, y, z)$ and $\sigma$, respectively.
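The representation (1)–(2) rests on the identity $\|a + b\|^2 - \|a - b\|^2 = 4\langle a, b \rangle$ applied to each bilinear term. A quick numerical check of (1)–(2), as an illustrative sketch of ours (the identity holds for arbitrary, not necessarily feasible, σ):

```python
import numpy as np

def h0(x, y, z, A1, A2, B1, B2, C1, C2):
    sq = lambda v: v @ v
    return 0.25 * (sq(x + A1 @ y) + sq(x + A2 @ z) + sq(B1 @ x + y)
                   + sq(y + B2 @ z) + sq(C1 @ x + z) + sq(C2 @ y + z))

def g0(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2):
    sq = lambda v: v @ v
    return 0.25 * (sq(x - A1 @ y) + sq(x - A2 @ z) + sq(y - B1 @ x)
                   + sq(y - B2 @ z) + sq(C1 @ x - z) + sq(C2 @ y - z)) + a + b + g

def phi(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2):
    return (x @ (A1 @ y + A2 @ z) + y @ (B1 @ x + B2 @ z)
            + z @ (C1 @ x + C2 @ y) - a - b - g)

rng = np.random.default_rng(1)
m, n, l = 3, 4, 5
A1, A2 = rng.normal(size=(m, n)), rng.normal(size=(m, l))
B1, B2 = rng.normal(size=(n, m)), rng.normal(size=(n, l))
C1, C2 = rng.normal(size=(l, m)), rng.normal(size=(l, n))
x, y, z = rng.normal(size=m), rng.normal(size=n), rng.normal(size=l)
a, b, g = rng.normal(size=3)
lhs = (h0(x, y, z, A1, A2, B1, B2, C1, C2)
       - g0(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2))
assert abs(lhs - phi(x, y, z, a, b, g, A1, A2, B1, B2, C1, C2)) < 1e-9
```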
    Thus, the first local search method that can be suggested here is the Special
Local Search Method (SLSM) for d.c. programming [10, 12]. This method, well known
as the DCA [18], is based on solving a sequence of problems linearized with respect
to the basic nonconvexity and exploits the d.c. structure of Problem (P).
    In terms of Problem (P) the scheme of the SLSM can be written in the following
way. Let $\sigma^0 = (x^0, y^0, z^0, \alpha_0, \beta_0, \gamma_0) \in D$ be a starting point. Further, if the
point $\sigma^s \in D$ has been obtained (at the current iteration), then the next point $\sigma^{s+1} \in D$ is an approximate
solution of the problem linearized at the point $\sigma^s \in D$:
$$\Psi_s(\sigma) \triangleq g_0(\sigma) + \langle a^s, x \rangle + \langle b^s, y \rangle + \langle c^s, z \rangle \downarrow \min\limits_{\sigma}, \quad \sigma \in D, \eqno{(\mathcal{PL}(\sigma^s))}$$
where
$$a^s \triangleq -\nabla_x h_0(x^s, y^s, z^s) = -\frac{1}{2} \bigl[ 2x^s + A_1 y^s + B_1^T (B_1 x^s + y^s) + A_2 z^s + C_1^T (C_1 x^s + z^s) \bigr];$$
$$b^s \triangleq -\nabla_y h_0(x^s, y^s, z^s) = -\frac{1}{2} \bigl[ 2y^s + B_1 x^s + A_1^T (x^s + A_1 y^s) + B_2 z^s + C_2^T (C_2 y^s + z^s) \bigr];$$
$$c^s \triangleq -\nabla_z h_0(x^s, y^s, z^s) = -\frac{1}{2} \bigl[ 2z^s + C_1 x^s + A_2^T (x^s + A_2 z^s) + C_2 y^s + B_2^T (y^s + B_2 z^s) \bigr].$$
   Further, let a numerical sequence $\{\rho_s\}$ be given such that
$$\rho_s > 0, \ s = 0, 1, 2, \ldots, \qquad \sum_{s=0}^{\infty} \rho_s < +\infty.$$
Then the next iterate $\sigma^{s+1} \in D$ is constructed to satisfy the following
inequality:
$$\Psi_s(\sigma^{s+1}) \leq \inf\limits_{\sigma} \{ \Psi_s(\sigma) \mid \sigma \in D \} + \rho_s. \eqno{(3)}$$
   The convergence theorem will be presented here in terms of Problem (P). Note that
the objective function of Problem (P) is bounded below due to Theorem 1 and Nash’s
Theorem.

Theorem 2. [10, 12, 16] The sequence $\{\sigma^s\}$ generated by the rule (3) satisfies the fol-
lowing conditions:
    a) $\lim\limits_{s \to \infty} [\mathcal{V}(\mathcal{PL}(\sigma^s)) - \Psi_s(\sigma^{s+1})] = 0$, where $\mathcal{V}(\mathcal{PL}(\sigma^s))$ is the optimal value of Prob-
lem $(\mathcal{PL}(\sigma^s))$.
    b) If the function $h_0(\cdot)$ is strongly convex, then $\lim\limits_{s \to \infty} \|\sigma^s - \sigma^{s+1}\| = 0$.
    c) Any limit point $\hat\sigma$ of the sequence $\{\sigma^s\}$ generated by the SLSM is a solution of
the linearized problem $(\mathcal{PL}(\hat\sigma))$.
    The point $\hat\sigma$ will be called a critical (with respect to the SLSM) point of Problem (P).
The resulting critical point also satisfies the classical first-order stationarity condition
[10, 12, 16].
    In the implementation of the SLSM, the following inequality can be used as a stop-
ping criterion:
$$\Psi_s(\sigma^s) - \Psi_s(\sigma^{s+1}) \leq \tau/2,$$
where $\tau > 0$ is a given accuracy. If $\rho_s \leq \tau/2$, then the point $\sigma^s$ is a $\tau$-critical point of
Problem (P) with respect to the SLSM [10, 12, 16].


4    "Mountain Climbing" Procedure

The second idea of a local search in Problem (P) finds its roots in the "mountain
climbing" procedure proposed by H. Konno [3] for bilinear programming
problems. To implement this idea for Problem (P), we first split the variables into
several groups and then solve a sequence of specially constructed linear programming
(LP) problems with respect to these groups of variables. Herein, in contrast to the results
of H. Konno and analogous early publications, the auxiliary linear programs can be
solved approximately.
    A similar local search idea has previously demonstrated its efficiency in bimatrix
games [7, 16, 19], bilinear programming problems [6, 16], and bilevel problems [17].
    In order to describe the method, consider the following LP problems:
$$
\left.
\begin{array}{l}
f_1^{(v,w)}(x, \beta) \triangleq \langle x, (A_1 + B_1^T) v + (A_2 + C_1^T) w \rangle - \beta \uparrow \max\limits_{(x,\beta)}, \\
(x, \beta) \in X(v, w, \bar\gamma) \triangleq \{ (x, \beta) \mid x \in S_m, \
B_1 x - \beta e_n \leq -B_2 w, \ C_1 x \leq \bar\gamma e_l - C_2 v \};
\end{array}
\right\} \eqno{(\mathcal{LP}_x(v, w, \bar\gamma))}
$$
$$
\left.
\begin{array}{l}
f_2^{(u,w)}(y, \gamma) \triangleq \langle y, (B_1 + A_1^T) u + (B_2 + C_2^T) w \rangle - \gamma \uparrow \max\limits_{(y,\gamma)}, \\
(y, \gamma) \in Y(u, w, \bar\alpha) \triangleq \{ (y, \gamma) \mid y \in S_n, \
A_1 y \leq \bar\alpha e_m - A_2 w, \ C_2 y - \gamma e_l \leq -C_1 u \};
\end{array}
\right\} \eqno{(\mathcal{LP}_y(u, w, \bar\alpha))}
$$
$$
\left.
\begin{array}{l}
f_3^{(u,v)}(z, \alpha) \triangleq \langle z, (C_1 + A_2^T) u + (C_2 + B_2^T) v \rangle - \alpha \uparrow \max\limits_{(z,\alpha)}, \\
(z, \alpha) \in Z(u, v, \bar\beta) \triangleq \{ (z, \alpha) \mid z \in S_l, \
A_2 z - \alpha e_m \leq -A_1 v, \ B_2 z \leq \bar\beta e_n - B_1 u \}.
\end{array}
\right\} \eqno{(\mathcal{LP}_z(u, v, \bar\beta))}
$$
Here $(u, v, w, \bar\alpha, \bar\beta, \bar\gamma) \in D$ is a feasible point of Problem (P).

   It is easy to see that these problems are feasible and bounded, due to the bound-
edness from above of the function $\Phi(\cdot)$ on the set $D$ by Lemma 1.
   Let $(x^0, y^0, z^0, \alpha_0, \beta_0, \gamma_0) \in D$ be a starting point. For example, one can use the
barycenters of the standard simplices as follows:
$$
\left.
\begin{array}{c}
x_i^0 = \dfrac{1}{m}, \ i = 1, \ldots, m; \quad y_j^0 = \dfrac{1}{n}, \ j = 1, \ldots, n; \quad z_t^0 = \dfrac{1}{l}, \ t = 1, \ldots, l; \\[1mm]
\alpha_0 = \max\limits_{i} (A_1 y^0 + A_2 z^0)_i; \quad \beta_0 = \max\limits_{j} (B_1 x^0 + B_2 z^0)_j; \quad \gamma_0 = \max\limits_{t} (C_1 x^0 + C_2 y^0)_t.
\end{array}
\right\} \eqno{(4)}
$$

                                           $YZ\gamma$-procedure

   Step 0. Set $s := 1$, $y^s := y^0$, $z^s := z^0$, $\gamma_s := \gamma_0$.
   Step 1. Using an LP technique, find a $\rho_s/3$-solution $(x^{s+1}, \beta_{s+1})$ to Problem
$(\mathcal{LP}_x(y^s, z^s, \gamma_s))$, so that the following inequality holds:
$$f_1^s(x^{s+1}, \beta_{s+1}) + \rho_s/3 := f_1^{(y^s, z^s)}(x^{s+1}, \beta_{s+1}) + \rho_s/3
= \langle x^{s+1}, (A_1 + B_1^T) y^s + (A_2 + C_1^T) z^s \rangle - \beta_{s+1} + \rho_s/3
\geq \sup\limits_{(x,\beta)} \{ f_1^s(x, \beta) \mid (x, \beta) \in X(y^s, z^s, \gamma_s) \}.$$
   Step 2. Find a $\rho_s/3$-solution $(y^{s+1}, \gamma_{s+1})$ to Problem $(\mathcal{LP}_y(x^{s+1}, z^s, \alpha_s))$, so that
the following inequality takes place:
$$f_2^s(y^{s+1}, \gamma_{s+1}) + \rho_s/3 := f_2^{(x^{s+1}, z^s)}(y^{s+1}, \gamma_{s+1}) + \rho_s/3
= \langle y^{s+1}, (B_1 + A_1^T) x^{s+1} + (B_2 + C_2^T) z^s \rangle - \gamma_{s+1} + \rho_s/3
\geq \sup\limits_{(y,\gamma)} \{ f_2^s(y, \gamma) \mid (y, \gamma) \in Y(x^{s+1}, z^s, \alpha_s) \}.$$
   Step 3. Find a $\rho_s/3$-solution $(z^{s+1}, \alpha_{s+1})$ to Problem
$(\mathcal{LP}_z(x^{s+1}, y^{s+1}, \beta_{s+1}))$, so that the following inequality holds:
$$f_3^s(z^{s+1}, \alpha_{s+1}) + \rho_s/3 := f_3^{(x^{s+1}, y^{s+1})}(z^{s+1}, \alpha_{s+1}) + \rho_s/3
= \langle z^{s+1}, (C_1 + A_2^T) x^{s+1} + (C_2 + B_2^T) y^{s+1} \rangle - \alpha_{s+1} + \rho_s/3
\geq \sup\limits_{(z,\alpha)} \{ f_3^s(z, \alpha) \mid (z, \alpha) \in Z(x^{s+1}, y^{s+1}, \beta_{s+1}) \}.$$
   Step 4. Set $s := s + 1$, and loop to Step 1.
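The steps above can be sketched as follows. This is a simplified illustration of ours, not the authors' implementation (they used MATLAB with CPLEX's "cplexlp"): scipy.optimize.linprog stands in as the LP solver, each auxiliary LP is solved exactly rather than to ρs/3-accuracy, and the value of αs at the current iterate is taken as the smallest feasible one, which is our assumption since only (y0, z0, γ0) is prescribed by the procedure.

```python
import numpy as np
from scipy.optimize import linprog

def solve_block(c, M1, d1, M2, d2):
    """Maximize <s, c> - t over s in the standard simplex, subject to
    M1 s - t*e <= d1 and M2 s <= d2 (the common shape of LP_x, LP_y, LP_z)."""
    p = c.size
    obj = np.concatenate([-c, [1.0]])                     # linprog minimizes
    A_ub = np.block([[M1, -np.ones((M1.shape[0], 1))],
                     [M2, np.zeros((M2.shape[0], 1))]])
    res = linprog(obj, A_ub=A_ub, b_ub=np.concatenate([d1, d2]),
                  A_eq=np.concatenate([np.ones(p), [0.0]])[None, :],
                  b_eq=[1.0], bounds=[(0, None)] * p + [(None, None)],
                  method="highs")
    assert res.success
    return res.x[:p], res.x[p]

def yzg_procedure(A1, A2, B1, B2, C1, C2, sweeps=6):
    m, n, l = A1.shape[0], B1.shape[0], C1.shape[0]
    x, y, z = np.full(m, 1/m), np.full(n, 1/n), np.full(l, 1/l)  # formula (4)
    g = (C1 @ x + C2 @ y).max()
    hist = []
    for _ in range(sweeps):
        a = (A1 @ y + A2 @ z).max()   # smallest feasible alpha (our assumption)
        # Step 1: update (x, beta) via LP_x(y, z, g)
        x, b = solve_block((A1 + B1.T) @ y + (A2 + C1.T) @ z,
                           B1, -B2 @ z, C1, g * np.ones(l) - C2 @ y)
        # Step 2: update (y, gamma) via LP_y(x, z, a)
        y, g = solve_block((B1 + A1.T) @ x + (B2 + C2.T) @ z,
                           C2, -C1 @ x, A1, a * np.ones(m) - A2 @ z)
        # Step 3: update (z, alpha) via LP_z(x, y, b)
        z, a = solve_block((C1 + A2.T) @ x + (C2 + B2.T) @ y,
                           A2, -A1 @ y, B2, b * np.ones(n) - B1 @ x)
        hist.append(x @ (A1 @ y + A2 @ z) + y @ (B1 @ x + B2 @ z)
                    + z @ (C1 @ x + C2 @ y) - a - b - g)
    return (x, y, z, a, b, g), hist

rng = np.random.default_rng(3)
m, n, l = 4, 4, 4
mats = [rng.uniform(-10, 10, s) for s in
        [(m, n), (m, l), (n, m), (n, l), (l, m), (l, n)]]
_, hist = yzg_procedure(*mats)
assert all(h2 >= h1 - 1e-7 for h1, h2 in zip(hist, hist[1:]))  # Phi nondecreasing
assert hist[-1] <= 1e-7                                        # Lemma 1 on D
```

Along the sweeps the value of Φ is nondecreasing, since each auxiliary LP maximizes Φ in its own pair of variables with the other variables fixed, and the previous pair remains feasible for the next LP.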
   Note that the whole point $(x^0, y^0, z^0, \alpha_0, \beta_0, \gamma_0)$ is not required to start the $YZ\gamma$-
procedure; the part $(y^0, z^0, \gamma_0)$ alone is sufficient.
   The following convergence result holds for the local search method described
above.
Theorem 3. Suppose $\rho_s > 0$, $s = 0, 1, 2, \ldots$, $\sum_{s=0}^{\infty} \rho_s < +\infty$. Then the sequence of
vectors $\sigma^s \triangleq (x^s, y^s, z^s, \alpha_s, \beta_s, \gamma_s)$, produced by the $YZ\gamma$-procedure, converges to the
point $\hat\sigma \triangleq (\hat x, \hat y, \hat z, \hat\alpha, \hat\beta, \hat\gamma)$, which satisfies the following inequalities:
$$\Phi(\hat\sigma) \geq \Phi(x, \hat y, \hat z, \hat\alpha, \beta, \hat\gamma) \quad \forall (x, \beta) \in X(\hat y, \hat z, \hat\gamma), \eqno{(5)}$$

$$\Phi(\hat\sigma) \geq \Phi(\hat x, y, \hat z, \hat\alpha, \hat\beta, \gamma) \quad \forall (y, \gamma) \in Y(\hat x, \hat z, \hat\alpha), \eqno{(6)}$$
$$\Phi(\hat\sigma) \geq \Phi(\hat x, \hat y, z, \alpha, \hat\beta, \hat\gamma) \quad \forall (z, \alpha) \in Z(\hat x, \hat y, \hat\beta). \eqno{(7)}$$
   Note that here we present a convergence result for the sequence $\{\sigma^s\}$, in contrast to the
theorem from [8], where only the convergence of the numerical sequence $\{\Phi_s\}$ was studied.
Definition 3. Any 6-tuple $\hat\sigma$ satisfying the inequalities (5), (6), and (7) will hence-
forth be called a critical point of Problem (P). If the inequalities (5), (6), and (7) are
satisfied with a certain accuracy at some point, then this point is called an approximate
critical point.
    It can be readily seen that this concept of a critical point of Problem (P) is due
to the structure of the proposed local search method. Moreover, the structure of the
$YZ\gamma$-procedure suggests using the following inequalities as the stopping criterion:
$$f_1^s(x^{s+1}, \beta_{s+1}) - f_1^s(x^s, \beta_s) \leq \frac{\tau}{3}, \eqno{(8)}$$
$$f_2^s(y^{s+1}, \gamma_{s+1}) - f_2^s(y^s, \gamma_s) \leq \frac{\tau}{3}, \eqno{(9)}$$
$$f_3^s(z^{s+1}, \alpha_{s+1}) - f_3^s(z^s, \alpha_s) \leq \frac{\tau}{3}, \eqno{(10)}$$
where $\tau > 0$ is a given accuracy.
    Note that summing the three inequalities (8)–(10) yields the criterion $\Phi_{s+1} - \Phi_s \leq \tau$,
conventional for classical numerical methods. However, using the system (8)–(10)
instead of this single inequality agrees with the goal of the local search (finding a
critical point, which is a partial global solution to Problem (P) with respect to the three
pairs of variables).
    It can be shown that when the system (8)–(10) is fulfilled at iteration $s$ of the $YZ\gamma$-
procedure, an approximate critical point of Problem (P) in the sense of Definition 3
is obtained.
    Now let us move on to a description of the numerical experiments.


5   Computational Simulations
Both local search methods have been implemented in MATLAB 7.11 R2010b [4].
The auxiliary LP problems and convex quadratic problems have been solved by the
IBM ILOG CPLEX 12.6.2 subroutines "cplexlp" and "cplexqp" [2], respectively, with
default settings. This package shows considerable advantages over the standard
MATLAB subroutines "linprog" and "quadprog". A computer with an Intel Core
i5-2400 CPU (3.1 GHz) and 4 Gb of RAM has been used.
    Test hexamatrix games were randomly generated by the relevant MATLAB sub-
routines according to the following rules. The pseudorandom elements of the matrices
$A_1$, $A_2$, $B_1$, $B_2$, $C_1$, and $C_2$ have a uniform distribution and were chosen with the
accuracy $10^{-3}$ from the interval $]-10, 10[$ when $\min\{m, n, l\} \leq 10$, and from the interval
$]-\min\{m, n, l\}, \min\{m, n, l\}[$ when $\min\{m, n, l\} > 10$.

    The starting point for both methods was chosen according to formula (4). The
accuracy of the stopping criteria was $\tau = 10^{-3}$.
    First, let us present the results of testing the SLSM on series of problems of
dimension $(m + n + l)$ from $(5 + 5 + 5)$ up to $(50 + 50 + 50)$, as specified in Table 1.
There were 1000, 100, or 10 problems in each series, depending on the dimension of
the problems (see Table 1).
    In addition, the following notations have been used in Table 1: $QP$ is the total
number of convex quadratic problems solved in the course of the SLSM; $T$ stands for the
total CPU time for all problems of a series (in seconds); $\Phi^0_{avg}$ is the average value of the
objective function of Problem (P) at the starting point; $\hat\Phi_{avg}$ is the average value of the
function $\Phi$ at the obtained approximate critical points; $\hat\Phi_{worst}$ is the worst value of the
function $\Phi$ at the critical points; $Failed$ is the number of problems where an approximate
critical point has not been obtained.


               Table 1. The SLSM testing with d.c. representation (1)–(2)

            m + n + l Series QP       T      Φ0avg   Φ̂avg Φ̂worst Failed
             5+5+5     1000 166907 547.85 -12.1064 -1.8737 -10.4290   8
            10+10+10 1000 317630 1593.99 -11.3223 -2.1403 -6.9423     1
            20+20+20 100 110261 1431.38 -20.1273 -4.1621 -8.0592      1
            30+30+30 100 212920 6863.76 -26.7482 -5.5145 -10.6234     3
            40+40+40 10      43440 3545.94 -33.9384 -6.9155 -14.4621  0
            50+50+50 10      33877 2802.37 -37.8681 -8.2039 -10.7426  0



    Note that in several problems an approximate critical point has not been obtained
because the maximum number of iterations, $100(m + n + l)$, was attained. On the whole,
this variant of the SLSM proves to be very slow and inefficient. This can be explained by
the bad properties of the Hessian of Problem $(\mathcal{PL}(\sigma^s))$ at each iteration of the method;
in particular, the function $h_0(\cdot)$ is not strongly convex (see Theorem 2). In order to improve
these properties, let us consider another, regularized d.c. representation of the function
$\Phi$:
$$\Phi(\sigma) = h_1(\sigma) - g_1(\sigma), \eqno{(11)}$$
where
$$h_1(\sigma) = h_0(x, y, z) + \mu \|\sigma\|^2, \qquad g_1(\sigma) = g_0(\sigma) + \mu \|\sigma\|^2 \eqno{(12)}$$
with a regularizing parameter $\mu > 0$ and the Euclidean norm.
    Thus, the functions $h_1(\sigma)$ and $g_1(\sigma)$ are strongly convex with respect to the variable
$\sigma$, and, in accordance with convex optimization theory, the convergence of numerical
methods for solving Problem $(\mathcal{PL}(\sigma^s))$ is expected to be faster.
    The results of testing the regularized SLSM with $\mu = 10(m + n + l)$ are presented
in Table 2, with the same notations.
    It can be readily seen that the second variant of the SLSM finds a critical point
much faster than the SLSM with the d.c. representation (1)–(2). This is especially evident for

                        Table 2. The regularized SLSM testing

              m + n + l Series QP    T     Φ0avg    Φ̂avg   Φ̂worst Failed
               5+5+5    1000 7327 25.63 -12.1064 -10.8737 -27.0894    0
             10+10+10 1000 8056 42.51 -11.3223 -8.4771 -19.0162       0
             20+20+20 100 1554 20.89 -20.1273 -13.5700 -20.6861       0
             30+30+30 100 3696 124.73 -26.7482 -18.1116 -25.5931      1
             40+40+40 10      417 23.37 -33.9384 -19.5362 -25.9790    0
             50+50+50 10      583 49.53 -37.8681 -24.0294 -28.7654    0



problems of high dimension. For example, for problems of dimension $(40 + 40 + 40)$
the running time is more than 150 times smaller. Moreover, only for one problem an
approximate critical point has not been obtained because the threshold number of
iterations, $10(m + n + l)$, was exceeded.
    On the other hand, the values $\hat\Phi_{avg}$ and $\hat\Phi_{worst}$ obtained by the regularized SLSM are
farther from the global value 0 than those of the first variant of the method. At the same
time, the main goal of the local search stage according to the Global Search Theory
is obtaining a critical point (a point with "good" properties, see (5)–(7)) as fast as
possible, because in the course of a global search it is necessary to carry out a local search
many times [10, 12]. Therefore, the regularized SLSM is more suitable for application
within a global search. We hope to overcome the remaining gap to the global value by
a global search procedure.
    Now let us pass to the testing of the $YZ\gamma$-procedure and its comparison with the
SLSM. The $YZ\gamma$-procedure showed promising efficiency during the preliminary com-
putational simulation; therefore, the series of test problems were enlarged for each
dimension. Table 3 presents the results of applying the $YZ\gamma$-procedure to series of
problems of dimension from $(5 + 5 + 5)$ up to $(200 + 200 + 200)$. There were 10000,
1000, 100, or 10 problems in each series. In Table 3 the notation $LP$ is used instead of
$QP$, because the auxiliary problems within the $YZ\gamma$-procedure are linear programs.
All the other notations are the same as before.
    Further, Table 4 presents the results of testing the regularized SLSM with $\mu = \max\{m, n, l\}(m +
n + l)$. Such a value of the regularizing parameter $\mu$ was chosen to ensure
diagonal dominance in the matrix of the quadratic problem $(\mathcal{PL}(\sigma^s))$ built from
the random matrices $A_1$, $A_2$, $B_1$, $B_2$, $C_1$, and $C_2$.
    First of all, it is worth emphasizing the advantage of the $YZ\gamma$-procedure con-
cerning the speed of finding critical points in randomly generated hexamatrix games.
This advantage grows from 1.8 times for the problems of dimension (5+5+5) up
to approximately 20 times for the problems of dimensions (125+125+125) and (200+200+200).
Therefore, the $YZ\gamma$-procedure is much faster than the regularized
SLSM, especially for problems of high dimension.
    As for the $\hat\Phi_{avg}$ and $\hat\Phi_{worst}$ values, for the $YZ\gamma$-procedure they are considerably closer to
0 than for the regularized SLSM, and they are even comparable with the corresponding
values for the SLSM with the d.c. representation (1)–(2) (see Table 1). In particular,
note that the average values of the objective function at the critical points provided by the
$YZ\gamma$-procedure are not far from its global value.




               Table 3. The Y Zγ -procedure testing on the large series

            m + n + l Series LP     T      Φ0avg   Φ̂avg Φ̂worst Failed
            5+5+5    10000 99705 180.75 -12.1048 -3.0153 -13.9799  0
           10+10+10 10000 140637 262.82 -11.3247 -1.9221 -7.0783   0
           20+20+20 10000 215847 454.73 -19.9186 -2.5504 -7.6788   1
           30+30+30   1000 27723 69.76 -26.9311 -3.0342 -8.3432    0
           40+40+40   1000 36099 111.82 -32.8441 -3.3164 -7.0914   0
           50+50+50   1000 46692 181.52 -38.4491 -3.6867 -8.6850   0
           75+75+75    100  5352 39.02 -50.4902 -4.4565 -7.2436    0
         100+100+100 100    7779 114.68 -60.8572 -4.8479 -7.3159   0
         125+125+125 100    7389 172.97 -70.5397 -5.5732 -7.4939   0
         150+150+150 10      870 29.62 -79.3345 -5.0982 -7.6286    0
         175+175+175 10     1773 92.55 -85.8656 -6.1977 -7.9071    0
         200+200+200 10     1479 109.68 -92.4432 -6.0895 -7.4830   0




              Table 4. The regularized SLSM testing on the large series

           m + n + l    Series    QP       T       Φ0avg      Φ̂avg     Φ̂worst  Failed
           5+5+5         10000   95156   330.58  -12.1048  -10.0327  -35.4659    0
           10+10+10      10000   82471   423.34  -11.3247   -8.4828  -21.6185    0
           20+20+20      10000  106272  1440.18  -19.9186  -13.8200  -30.3405    0
           30+30+30       1000   13119   465.79  -26.9311  -18.0002  -29.1439    0
           40+40+40       1000   15961   920.04  -32.8441  -21.5384  -35.6769    2
           50+50+50       1000   16835  1370.53  -38.4491  -24.8071  -37.1146    1
           75+75+75        100    1695   332.06  -50.4902  -31.6184  -40.9689    0
           100+100+100     100    2168   966.36  -60.8572  -37.4916  -49.7228    0
           125+125+125     100    4764  3909.66  -70.5397  -41.8590  -51.6223    1
           150+150+150      10     136   181.03  -79.3345  -47.6004  -54.2623    0
           175+175+175      10     210   424.61  -85.8656  -52.5941  -61.2330    0
           200+200+200      10     762  2186.90  -92.4432  -53.7768  -59.0646    0

    So one can see that the reliability of the Y Zγ-procedure is superior to that of
the regularized SLSM (only 1 failed problem over all series versus 4 problems, re-
spectively).
    To sum up, based on the results of the computational simulation, we can conclude
that the Y Zγ-procedure is preferable for the purpose of a local search within global
search algorithms.


6    Conclusion

In this paper the problem of finding a Nash equilibrium in hexamatrix games was
considered. For the equivalent nonconvex optimization problem the issue of a local
search was investigated. We studied two local search methods based on different
ideas. First, we described the special local search method (SLSM), based on the
linearization of the original problem in d.c. form with respect to the basic nonconvexity.
Then the new local search procedure (the so-called Y Zγ-procedure) was elaborated.
This procedure is based on the consecutive solving of auxiliary linear programs and
uses the bilinear structure of the objective function.
    The results of the comparative computational simulation of the two local search
methods confirmed the advantage of the Y Zγ-procedure over the SLSM from the per-
spective of using them within global search procedures intended for seeking Nash
equilibria in hexamatrix games.


Acknowledgments. This work was carried out with the financial support of the Russian
Science Foundation (project no. 15-11-20015).


References

1. Horst, R., Tuy, H.: Global optimization. Deterministic approaches. Springer-Verlag, Berlin
   (1993)
2. IBM ILOG CPLEX optimization studio, http://www-03.ibm.com/software/products/
   ru/ibmilogcpleoptistud
3. Konno, H.: A cutting plane algorithm for solving bilinear programs. Math. Prog. 11, 14–27
   (1976)
4. MATLAB — The language of technical computing, http://www.mathworks.com/
   products/matlab/
5. Mills, H.: Equilibrium points in finite games. J. Soc. Indust. Appl. Math. 8(2), 397–402
   (1960)
6. Orlov, A.V.: Numerical solution of bilinear programming problems. Comput. Math. Math.
   Phys. 48(2), 225–241 (2008)
7. Orlov, A.V., Strekalovsky, A.S.: Numerical search for equilibria in bimatrix games. Comput.
   Math. Math. Phys. 45(6), 947–960 (2005)
8. Orlov, A.V., Strekalovsky, A.S., Batbileg, S.: On computational search for Nash equilibrium
   in hexamatrix games. Optim. Lett. 10(2), 369–381 (2016)
9. Pang, J.-S.: Three modeling paradigms in mathematical programming. Math. Prog. Ser.B.
   125(2), 297–323 (2010)

10. Strekalovsky, A.S.: Elements of nonconvex optimization [in Russian]. Nauka, Novosibirsk
   (2003)
11. Strekalovsky, A.S.: On a local search for reverse convex problems. In: Liberti, L., Maculan,
   N. (eds.) Global optimization: from theory to implementation, 33–44. Springer, New York
   (2006)
12. Strekalovsky, A.S.: On solving optimization problems with hidden nonconvex structures.
   In: Rassias, T.M., Floudas, C.A., Butenko, S. (eds.) Optimization in science and engineer-
   ing, 465–502. Springer, New York (2014)
13. Strekalovsky, A.S.: On local search in d.c. optimization problems. Applied Math. and
   Comput. 255, 73–83 (2015)
14. Strekalovsky, A.S., Enkhbat, R.: Polymatrix games and optimization problems. Autom.
   Remote Control. 75(4), 632–645 (2014)
15. Strekalovsky, A.S., Gruzdeva, T.V.: Local search in problems with nonconvex constraints.
   Comput. Math. Math. Phys. 47(3), 381–396 (2007)
16. Strekalovsky, A.S., Orlov, A.V.: Bimatrix games and bilinear programming [in Russian].
   FizMatLit, Moscow (2007)
17. Strekalovsky, A.S., Orlov, A.V., Malyshev, A.V.: Local search in a quadratic-linear bilevel
   programming problem. Numer. Anal. Appl. 3(1), 59–70 (2010)
18. Tao, P.D., Souad, L.B.: Algorithms for solving a class of nonconvex optimization prob-
   lems. Methods of subgradients. In: Hiriart-Urruty, J.-B. (ed.) Fermat Days 85, 249–271.
   Elsevier Science Publishers B.V., North Holland (1986)
19. Vasilyev, I.L., Klimentova, K.B., Orlov, A.V.: A parallel search of equilibrium points in
   bimatrix games [in Russian]. Numer. Methods Prog. 8, 233–243 (2007) http://num-meth.
   srcc.msu.ru/english/index.html
20. Yanovskaya, E.B.: Equilibrium points in polymatrix games [in Russian]. Latv. Math.
   Collect. 8, 381–384 (1968)