       On Limit of Value Functions for Various Densities

                                             Dmitry V. Khlopin
                            Krasovskii Institute of Mathematics and Mechanics
                                          16, S.Kovalevskaja St.,
                                      620990 Yekaterinburg, Russia.
                                          Ural Federal University
                                              4, Turgeneva St.,
                                      620083, Yekaterinburg, Russia
                                           khlopin@imm.uran.ru




                                                           Abstract

                       The paper is concerned with zero-sum differential games and the
                       asymptotic properties of their value functions. The games with a com-
                       mon dynamics, a running cost, and capabilities of players are consid-
                       ered. Each payoff represents an average of the running cost with respect
                       to the given discount functions (densities); these games differ in densi-
                       ties only. We prove a Tauberian-type theorem: the existence of a
                       uniform limit of the value functions for the uniform density or for
                       the exponential density implies that the value functions converge
                       uniformly to the same limit for every piecewise continuous density
                       as the time scale parameter tends to zero.




1    Introduction
In dynamic optimization, it is not uncommon to normalize the payoff by taking the average over time with respect
to a certain probability distribution—for example, when the terminal time is large yet not exactly specified. In
this case, for a realization of the process (a function t 7→ z(t)), in addition to the running cost (a function
t 7→ g(z(t))), one also considers the payoff in the form of a certain average of the running cost,
                                                    ∫_0^∞ ϱ(t) g(z(t)) dt,

with respect to a certain discount function, a probability density function ϱ. Most often, when the problem
is considered on infinite horizon, the potential infinity of the interval is emulated by considering the problems
where the payoff is taken over increasingly large intervals [0, T ] or in view of increasingly small discounts λ; then,
the limits of these problems are studied if they exist. Thus, effectively, for the payoffs
                                              ∫_0^∞ λ ϱ(λt) g(z(t)) dt,                                              (1)


Copyright © by the paper’s authors. Copying permitted for private and academic purposes.
In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V. U. Malkova, M. A. Posypkin (eds.): Proceedings of
the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at http://ceur-ws.org




one considers the asymptotic behavior of the corresponding value functions as the scale parameter λ tends to
zero. Usually, the densities of the uniform ϱ(t) = 1_[0,1](t) (Cesàro mean) and exponential ϱ(t) = e^{−t} (Abel mean)
distributions are applied.
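To make the two standard averagings concrete, here is a small numeric sketch (our own illustration, not from the paper; the running cost g below is a made-up example that settles to a limit): for small λ, the Cesàro mean and the Abel mean of the scaled payoff λ∫ϱ(λt)g(t)dt land near the same value.

```python
import numpy as np

# Illustration only: a made-up running cost with g(t) -> 1 as t -> infinity.
def g(t):
    return t / (1.0 + t)

t = np.linspace(0.0, 2000.0, 400_001)
step = t[1] - t[0]

def payoff(rho, lam):
    """Scaled payoff on a grid: integral of lam * rho(lam * t) * g(t) dt."""
    return float(np.sum(lam * rho(lam * t) * g(t)) * step)

uniform = lambda s: (s <= 1.0).astype(float)   # Cesaro density 1_[0,1]
exponential = lambda s: np.exp(-s)             # Abel density e^{-t}

lam = 0.01
cesaro = payoff(uniform, lam)       # (1/T) * integral_0^T g, with T = 1/lam
abel = payoff(exponential, lam)     # lam * integral_0^infty e^{-lam t} g
```

Both numbers lie close to the limit 1 of g and get closer to each other as λ shrinks; the Tauberian theorems discussed below are about when this agreement is guaranteed uniformly in the initial state.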
   The existence of such a limit of value functions in view of some density means that the value function’s response
to a change in the scale parameter λ is very weak when this parameter is sufficiently small. In particular, in
the stochastic statement, this value (the asymptotic value) is customarily considered the game value when the
planning horizon is infinite [Bewley et al., 1976]. In these statements, one can often also obtain an
asymptotically optimal strategy whose payoff is close to the optimal one (the uniform value) for sufficiently
small values of the scale parameter [Mertens et al., 1981]; however, in this paper, we only consider the value
function asymptotics. The existence of uniform limits of value functions for payoffs (1) when averaged with respect
to the uniform and/or exponential densities was proved for a broad class of stochastic games [Ziliotto, 2016b], for
optimal control problems [Gaitsgori, 1985, Lions et al., 1986, Grüne, 1998, Artstein et al., 2000, Li et al., 2016,
Gaitsgori et al., 2013], and for certain classes of differential games [Buckdahn et al., 2011, Cannarsa et al., 2015]
in the so-called nonexpansive-like case.
   It turns out that, for stochastic games with a finite number of states and actions [Mertens et al., 1981], for
discrete-time control problems [Lehrer et al., 1992], for general control problems [Oliu-Barton et al., 2013], for
differential games [Khlopin, 2016], and a broad class of stochastic games [Ziliotto, 2016a, Ziliotto, 2016b], there
holds the following Tauberian theorem: the uniform convergence of value functions under payoffs (1) with one
of these densities (uniform or exponential) guarantees that the value functions in view of the other payoff also
converge uniformly—to the same limit. The general approach, which deduces these Tauberian theorems from
the Dynamic Programming Principle, is considered in [Khlopin, in print].
   Such Tauberian theorems guarantee that if there is a uniform asymptotics for one of these densities (uniform
or exponential) then, in addition to the value function’s insensitivity to the choice of the discount parameter λ
for payoff (1), this asymptotics is also insensitive to the choice between these two densities. Often, it is also
possible to prove insensitivity to the choice of the density ϱ from quite a broad class.
   Thus, we can find a sufficient asymptotic condition on the densities ϱλ , λ > 0, under which the uniform
convergence of value functions as λ → 0 for the payoffs with uniform or/and exponential densities (for Cesaro
and/or Abel means) implies the uniform convergence to the same limit (as λ → 0) of the value functions for the
payoffs
                                              ∫_0^∞ ϱλ(t) g(z(t)) dt.                                           (2)
  For example, to this end, for discrete-time control processes, in paper [Monderer et al., 1993], the following
sufficient asymptotic condition on a family of ϱλ , λ > 0 was proposed: all densities ϱλ are nonincreasing and
                                        lim_{λ→0} ∫_0^T ϱλ(t) dt = 0   ∀T > 0.                                  (3)

For Markov decision processes, the sufficiency of the asymptotic condition

                                                 lim_{λ→0} V_0^∞[ϱλ] = 0                                        (4)

was refined in [Ziliotto, 2016c]; here, V_0^∞[µ] is the total variation of a real-valued function µ on R+ := [0, ∞).
   For zero-sum differential games with the Isaacs condition, from the uniform convergence of the value functions
for Cesaro means, it follows (see [Khlopin, 2015]) that the value functions with payoffs (2) converge to the same
limit as λ → 0 for a family of densities ϱλ , λ > 0 if this family enjoys (3) and
                                 lim sup_{λ→0} V_0^{q[ϱλ](r)}[ln ϱλ] < +∞           ∀r ∈ (0, 1);                (5)

here, for each r ∈ (0, 1), the quantile q[ϱλ](r) is the minimal solution of the equation ∫_0^{q[ϱλ](r)} ϱλ(t) dt = r.
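As a sanity check, the quantile just defined can be computed numerically; the sketch below is our own code (the function and grid names are assumptions), finding q[ϱ](r) for the exponential density, where the closed form is −ln(1 − r).

```python
import numpy as np

def quantile(rho, r, grid):
    """Smallest grid point at which the cumulative mass of rho reaches r."""
    step = grid[1] - grid[0]
    mass = np.cumsum(rho(grid)) * step           # crude CDF via Riemann sums
    return float(grid[np.searchsorted(mass, r)]) # first point with mass >= r

grid = np.linspace(0.0, 50.0, 500_001)
q_half = quantile(lambda t: np.exp(-t), 0.5, grid)   # exact value: ln 2
```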
   The main aim of this paper is to prove that, in zero-sum differential games with the Isaacs condition, the existence
of a uniform limit of the value functions for uniform density or for exponential density implies the uniform
convergence (to the same limit) for the values in view of payoffs (1) for every piecewise continuous density ϱ.
   To this end, first, we will improve condition (5) (see (12)), and, then, apply this improved sufficient asymptotic
condition for payoffs (2). The cornerstone of this proof is the sufficiency of (3)&(5), proved in [Khlopin, 2015]
for zero-sum differential games with the Isaacs condition.




2   Differential Game
Consider a system in Rm controlled by two players,

                                ẋ = f (x, p, q), x(0) ∈ Rm , t ≥ 0, p(t) ∈ P, q(t) ∈ Q;                         (6)

here, P and Q are non-empty compact subsets of finite-dimensional Euclidean spaces.
   Assume that the functions f : Rm × P × Q → Rm and g : Rm × P × Q → [0, 1] are continuous, and let these
functions be Lipschitz continuous in the state variable; namely, there exists a constant L > 0 such that, for all
x, y ∈ Rm , p ∈ P, and q ∈ Q,

                            ∥f(x, p, q) − f(y, p, q)∥ + |g(x, p, q) − g(y, p, q)| ≤ L∥x − y∥.

   Denote by P and Q the sets of all Borel measurable functions R+ ∋ t 7→ p(t) ∈ P and R+ ∋ t 7→ q(t) ∈ Q,
respectively. So, for each pair (p, q) ∈ P × Q, for every initial condition x(0) = x∗ ∈ Rm , system (6) generates
the unique solution x(·) = y(·; x∗ , p, q) defined for the whole R+ .
   We will rely essentially on the results proved in [Khlopin, 2015]; consequently, we adopt all assumptions on
differential games made there. In particular, we also impose the Isaacs condition
(“the saddle point condition in a small game”) [Krasovskii et al., 1988]
      max_{p∈P} min_{q∈Q} [⟨s, f(x, p, q)⟩ + g(x, p, q)] = min_{q∈Q} max_{p∈P} [⟨s, f(x, p, q)⟩ + g(x, p, q)]   ∀x, s ∈ Rm.

It is easy to see that, for each nonnegative function ϱ : R+ → R+ , it implies that, for all t ∈ R+ , x, s ∈ Rm ,
      max_{p∈P} min_{q∈Q} [⟨s, f(x, p, q)⟩ + ϱ(t)g(x, p, q)] = min_{q∈Q} max_{p∈P} [⟨s, f(x, p, q)⟩ + ϱ(t)g(x, p, q)].      (7)

  Let D be the set of all probability density functions having their support in R+ . For a density ϱ ∈ D and a
number r ∈ (0, 1), let the quantile q[ϱ](r) be the minimum number such that
                                                   ∫_0^{q[ϱ](r)} ϱ(t) dt = r.

  For a given density ϱ ∈ D and an initial position x∗ ∈ Rm , let the goal of the first player be to maximize the
payoff function
                            c[ϱ](x∗, p, q) := ∫_0^∞ ϱ(t) g(y(t; x∗, p, q), p(t), q(t)) dt,                      (8)

and let the task of the second one be to minimize it.
   There are many ways to define a game and the sets of strategies for each player; for a thorough review
covering a large number of formalizations, see [Subbotin, 1995, Subsect. 14, 15]. We will consider
nonanticipating strategies (see [Eliott et al., 1972]).
Definition 1 A map α : Q → P is called a nonanticipating strategy of the first player if, for all t > 0 and
q, q ′ ∈ Q, from q|[0,t] = q ′ |[0,t] it follows that α[q]|[0,t] = α[q ′ ]|[0,t] .
    A map β : P → Q is called a nonanticipating strategy of the second player if, for all t > 0 and p, p′ ∈ P, from
p|[0,t] = p′ |[0,t] it follows that β[p]|[0,t] = β[p′ ]|[0,t] .
   We denote by A and B the sets of all nonanticipating strategies of the first player and of the second player,
respectively.
   For each density ϱ ∈ D, define the corresponding value function by the following rule:
            V[ϱ](x∗) := sup_{α∈A} inf_{q∈Q} ∫_0^∞ ϱ(t) g(y(t; x∗, α[q], q), α[q](t), q(t)) dt   ∀x∗ ∈ Rm;       (9)

also, define
            V+[ϱ](x∗) := inf_{β∈B} sup_{p∈P} ∫_0^∞ ϱ(t) g(y(t; x∗, p, β[p]), p(t), β[p](t)) dt   ∀x∗ ∈ Rm.




For each density ϱ ∈ D with bounded supp ϱ, condition (7) guarantees ([Krasovskii et al., 1988], [Subbotin, 1995],
[Cardaliaguet et al., 2000]) the equality
                                                      V+ [ϱ] ≡ V[ϱ].                                              (10)
In the general case, for each density ϱ ∈ D, define the sequence of densities ϱn := (n+1)/n · ϱ · 1_{[0, q[ϱ](n/(n+1))]} ∈ D. Since
supp ϱn is compact, passing to the limit as n → ∞ in (8) and (10), we see that the payoffs c[ϱn] converge to c[ϱ]
and the value functions V[ϱn] = V+[ϱn] converge to V[ϱ] = V+[ϱ] as n → ∞. Thus, (10) is proved for all ϱ ∈ D.

3    The Main Result
For each density ϱ ∈ D and an arbitrary λ > 0, it is also possible to introduce the density ϱλscale by the rule
                                             ϱλscale (t) = λϱ(λt)       ∀t ≥ 0.
    Set
                             ϖλ(t) := λ · 1_{[0,1/λ]}(t),     πλ(t) := λ · e^{−λt}       ∀λ > 0, t ≥ 0;
thus, we define the uniform and exponential density families.
   For an interval [a, b) ⊂ R and a function y : [a, b) → R ∪ {∞}, denote by V_a^b[y] the total variation of the
function y on [a, b).
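Numerically, the total variation of a grid-sampled function is the sum of absolute increments; below is a toy check (our own example, not from the paper) for the monotone function e^{−t} on [0, 1), whose variation equals 1 − e^{−1}.

```python
import numpy as np

def total_variation(values):
    """Total variation of a function sampled on a grid: sum of |increments|."""
    return float(np.abs(np.diff(values)).sum())

grid = np.linspace(0.0, 1.0, 100_001)
# e^{-t} decreases on [0, 1], so its variation is exactly y(0) - y(1).
tv = total_variation(np.exp(-grid))
```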
Theorem 1 Let a non-empty subset Ω ⊂ Rm be strongly invariant with respect to system (6).
  For a given map U∗ : Ω → [0, 1], the following conditions are equivalent:
1) for each density µ ∈ D that is piecewise continuous on (0, ∞), there holds

                                       lim_{λ→0} sup_{x∗∈Ω} |V[µλscale](x∗) − U∗(x∗)| = 0;


2) the value functions V[ϖλ ] converge to U∗ uniformly in Ω as λ → 0, i.e.,

                                       lim_{λ→0} sup_{x∗∈Ω} |V[ϖλ](x∗) − U∗(x∗)| = 0;                           (11)


3) the value functions V[πλ ] converge to U∗ uniformly in Ω as λ → 0, i.e.,

                                       lim_{λ→0} sup_{x∗∈Ω} |V[πλ](x∗) − U∗(x∗)| = 0;


4) for every family of densities µλ ∈ D, λ > 0, it follows from (3) and
                                lim sup_{λ→0} V_0^{q[µλ](r)}[µλ] · q[µλ](r) < +∞        ∀r ∈ (0, 1)             (12)

     that the value functions V[µλ ] converge to U∗ uniformly in Ω as λ → 0, i.e.,

                                       lim_{λ→0} sup_{x∗∈Ω} |V[µλ](x∗) − U∗(x∗)| = 0.                           (13)
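As a consistency check of condition 4), note that the exponential family πλ itself satisfies both hypotheses: its mass on any fixed [0, T] vanishes as λ → 0 (condition (3)), and since πλ decreases from λ to λ(1 − r) on [0, q[πλ](r)) with q[πλ](r) = −ln(1 − r)/λ, the product in (12) equals −r ln(1 − r), independent of λ. The closed forms in this sketch are our own elementary computations.

```python
import numpy as np

def mass_on_0T(lam, T):
    """Condition (3) for pi_lam: integral over [0, T] is 1 - e^{-lam*T}."""
    return 1.0 - np.exp(-lam * T)

def product_12(lam, r):
    """Condition (12) for pi_lam: variation on [0, q) times the quantile."""
    q = -np.log(1.0 - r) / lam        # quantile q[pi_lam](r)
    variation = lam * r               # pi_lam falls from lam to lam*(1 - r)
    return variation * q              # equals -r * ln(1 - r), lam-free

lams = (1e-1, 1e-3, 1e-5)
masses = [mass_on_0T(lam, T=10.0) for lam in lams]
products = [product_12(lam, r=0.5) for lam in lams]
```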


4    The Proof of Theorem 1
The implications 2) ⇒ 3), 3) ⇒ 2) were proved in [Khlopin, 2016].
  From (ϖ1 )λscale = ϖλ for all positive λ, it follows that 1) ⇒ 2).
  It remains to verify 2) ⇒ 4) ⇒ 1). To do this, we need the following proposition proved in [Khlopin, 2015]:
Proposition 1 Assume that the value functions V[ϖλ ], λ > 0 converge to a function U∗ uniformly in Ω as
λ → 0, i.e., (11) holds.
  Let a family of µλ ∈ D, λ > 0, satisfy (3) and (5).
  Then, for all positive δ < 1, there exists a positive λδ such that, for all positive λ < λδ ,
                                    V[µλ](x∗) > U∗(x∗) − 8δ ln(1/δ)            ∀x∗ ∈ Ω.




4.1   The Proof of 2) ⇒ 4).
The proof is by contradiction. Assume the contrary. Then, for some positive ε < 1/20, there exists a family
of densities µ̂λ ∈ D, λ > 0, such that (3), (12), and

                                       lim sup_{λ→0} sup_{x∗∈Ω} |V[µ̂λ](x∗) − U∗(x∗)| ≥ 3ε                       (14)

hold. Choose positive δ < 1 and M such that

                        8δ ln(1/δ) < ε,        lim sup_{λ→0} V_0^{q[µ̂λ](1−ε)}[µ̂λ] · q[µ̂λ](1−ε) < M.

   Now, for all positive λ, define the mapping µλ : R+ → R+ by the following rule:

                              µλ(t) := µ̂λ(t) + ε/q[µ̂λ](1−ε)                ∀t ∈ [0, q[µ̂λ](1−ε)]

and µλ(t) := 0 otherwise. Then,

              ∫_0^∞ µλ(t) dt = ∫_0^{q[µ̂λ](1−ε)} µλ(t) dt = ∫_0^{q[µ̂λ](1−ε)} µ̂λ(t) dt + ε = 1,                  (15)

that is, µλ ∈ D.
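Identity (15) can be checked numerically for a concrete instance; below µ̂λ is replaced by a single made-up density (e^{−t}), and ε, the grid, and all names are our assumptions for illustration only.

```python
import numpy as np

eps = 0.05
grid = np.linspace(0.0, 50.0, 500_001)
step = grid[1] - grid[0]
mu_hat = np.exp(-grid)                      # stand-in for one density mu_hat

mass = np.cumsum(mu_hat) * step
q = grid[np.searchsorted(mass, 1.0 - eps)]  # quantile q[mu_hat](1 - eps)

# Cut mu_hat off at q and spread the missing mass eps uniformly over [0, q]:
mu = np.where(grid <= q, mu_hat + eps / q, 0.0)
total = float(mu.sum() * step)              # should be close to 1, as in (15)
```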
   Now, since µ̂λ enjoys (3), we see that q[µ̂λ ](1 − ε) tends to ∞ as λ → 0, and
        lim sup_{λ→0} ∫_0^T µλ(t) dt ≤ lim_{λ→0} ∫_0^T µ̂λ(t) dt + lim_{λ→0} ∫_0^{min{T, q[µ̂λ](1−ε)}} ε/q[µ̂λ](1−ε) dt = 0

holds for all positive T; thus, the family of µλ enjoys (3).
   Next, note that, for all x, y > 0, there holds
              |ln x − ln y| = ln( max{x, y}/min{x, y} ) ≤ max{x, y}/min{x, y} − 1 = |x − y|/min{x, y};
moreover, by (15), q[µλ ](r) < q[µ̂λ ](1 − ε) for all λ > 0 and r ∈ (0, 1). Then, we obtain
   V_0^{q[µλ](r)}[ln µλ] ≤ V_0^{q[µ̂λ](1−ε)}[ln µλ] ≤ V_0^{q[µ̂λ](1−ε)}[µλ] / inf_{t∈[0, q[µ̂λ](1−ε))} µλ(t) ≤ V_0^{q[µ̂λ](1−ε)}[µ̂λ] · q[µ̂λ](1−ε) / ε < M/ε
for all r ∈ (0, 1), λ > 0. Thus, the family of µλ enjoys (5).
   Since the family of µλ enjoys all assumptions of Proposition 1, we can find a positive λδ such that
                            V[µλ](x∗) > U∗(x∗) − 8δ ln(1/δ)              ∀x∗ ∈ Ω, λ ∈ (0, λδ).                  (16)
   Consider a new differential game. Define the sets P− := Q, Q− := P and the maps f−(x, p−, q−) := f(x, q−, p−),
g−(x, p−, q−) := 1 − g(x, q−, p−) for all x ∈ Rm, p− ∈ P−, q− ∈ Q−. By (7) with s = −s−, we have

   max_{p−∈P−} min_{q−∈Q−} [⟨s−, f−(x, p−, q−)⟩ + ϱ(t)g−(x, p−, q−)] = min_{q−∈Q−} max_{p−∈P−} [⟨s−, f−(x, p−, q−)⟩ + ϱ(t)g−(x, p−, q−)]

for all s− , x ∈ Rm , t ≥ 0, ϱ ∈ D. Thus, the Isaacs condition also holds.
   In addition, since the sets of control functions satisfy P− = Q and Q− = P, we obtain y−(x∗, p−, q−) := y(x∗, q−, p−)
for all x∗ ∈ Rm, p− ∈ P−, q− ∈ Q−. Then, Ω is a strongly invariant set for this dynamics.
   Moreover, thanks to Q− = P and A− = B, for each density ϱ ∈ D, the map

       Ω ∋ x∗ ↦ 1 − V[ϱ](x∗) = 1 − V+[ϱ](x∗)
                             = sup_{β∈B} inf_{p∈P} ∫_0^∞ ϱ(t)( 1 − g(y(t; x∗, p, β[p]), p(t), β[p](t)) ) dt
                             = sup_{α−∈A−} inf_{q−∈Q−} ∫_0^∞ ϱ(t)( 1 − g(y−(t; x∗, α−[q−], q−), q−(t), α−[q−](t)) ) dt
                             = sup_{α−∈A−} inf_{q−∈Q−} ∫_0^∞ ϱ(t) g−(y−(t; x∗, α−[q−], q−), α−[q−](t), q−(t)) dt




is the value function (9) of the new game. In particular, (11) holds for this game with asymptotics U∗− ≡ 1 − U∗ .
    Applying Proposition 1 to this game, we can choose a positive λδ− such that

                                     1 − V[µλ](x∗) > 1 − U∗(x∗) − 8δ ln(1/δ)                   ∀x∗ ∈ Ω

holds for all positive λ < λδ−. Together with (16), it implies

                                   lim sup_{λ→0} sup_{x∗∈Ω} |V[µλ](x∗) − U∗(x∗)| ≤ 8δ ln(1/δ) < ε.

Then, thanks to (14), we obtain

                                   lim sup_{λ→0} sup_{x∗∈Ω} |V[µ̂λ](x∗) − V[µλ](x∗)| > 2ε.                      (17)

   However,
                              ∫_0^∞ |µλ(t) − µ̂λ(t)| dt = ε + ∫_{q[µ̂λ](1−ε)}^∞ µ̂λ(t) dt = 2ε,

therefore, by 0 ≤ g ≤ 1, for all x∗ ∈ Ω, α ∈ A, q ∈ Q, λ > 0, we have

                            ∫_0^∞ |µλ(t) − µ̂λ(t)| g(y(t; x∗, α[q], q), α[q](t), q(t)) dt ≤ 2ε,

     | inf_{q∈Q} ∫_0^∞ µλ(t) g(y(t; x∗, α[q], q), α[q](t), q(t)) dt − inf_{q∈Q} ∫_0^∞ µ̂λ(t) g(y(t; x∗, α[q], q), α[q](t), q(t)) dt | ≤ 2ε,

                                              |V[µλ](x∗) − V[µ̂λ](x∗)| ≤ 2ε.

We obtain a contradiction with (17). This contradiction proves the implication 2) ⇒ 4).

4.2     The Proof of 4) ⇒ 1)
Consider a density ϱ ∈ D that is piecewise continuous on (0, ∞). Fix a number ε > 0 and choose a natural
number n > 3 such that 5/n < ε. Set

                                       rn := q[ϱ](1/n),          sn := q[ϱ](1 − 1/n).

Since the piecewise continuous function ϱ is Riemann integrable on [rn, sn], there exists a staircase function
µn : R+ → R with supp µn ⊂ [rn, sn] such that ∫_{rn}^{sn} µn(t) dt = ∫_{rn}^{sn} ϱ(t) dt = (n−2)/n and
∫_{rn}^{sn} |µn(t) − ϱ(t)| dt < 1/n hold. In particular, its total variation on [0, ∞) is finite. Since
∫_0^∞ µn(t) dt = (n−2)/n, put

                       µ̄n := n/(n−2) · µn ∈ D,               M := sn V_0^∞[µ̄n] = n·sn/(n−2) · V_0^∞[µn] ∈ R.
   Now, for all λ > 0, we have
      V_0^∞[(µ̄n)λscale] · q[(µ̄n)λscale](1−ε) = λ V_0^∞[µ̄n] · q[µ̄n](1−ε)/λ = V_0^∞[µ̄n] · q[µ̄n](1−ε) ≤ sn V_0^∞[µ̄n] = M,   (18)

      ∫_0^∞ |ϱλscale(t) − (µ̄n)λscale(t)| dt = ∫_0^∞ |ϱ(t) − µ̄n(t)| dt < 3/n + ∫_{rn}^{sn} (n/(n−2) − 1) ϱ(t) dt = 5/n < ε.

Thus, we have

                        sup_{x∗∈Ω} |V[ϱλscale](x∗) − V[(µ̄n)λscale](x∗)| ≤ ε              ∀λ > 0.               (19)




    Consider some positive T. For all positive λ < rn/T, we have µ̄n|[0,λT] ≡ 0 and

                                       ∫_0^T (µ̄n)λscale(t) dt = λ ∫_0^T µ̄n(λt) dt = 0.

Thus, (3) holds for densities (µ̄n )λscale , λ > 0.
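A staircase function µn of the kind used above can be produced by cell averaging. The sketch below is our own construction (the example density, interval endpoints, and cell counts are made up): the L1 distance shrinks with the cell width, while the discrete mass of the sample is preserved exactly.

```python
import numpy as np

rho = lambda t: np.exp(-t)                   # example density on [r_n, s_n]
r, s = 0.1, 3.0                              # stand-ins for the quantiles r_n, s_n
n_cells, per_cell = 300, 100

fine = np.linspace(r, s, n_cells * per_cell + 1)[:-1]
step = (s - r) / (n_cells * per_cell)

values = rho(fine)
cell_means = values.reshape(n_cells, per_cell).mean(axis=1)
staircase = np.repeat(cell_means, per_cell)  # constant on each cell

l1_error = float(np.abs(staircase - values).sum() * step)   # L1 distance to rho
mass_gap = float(abs(staircase.sum() - values.sum()) * step)
```

Refining the partition (larger `n_cells`) drives `l1_error` below any prescribed tolerance such as 1/n, after which rescaling by n/(n−2) yields a density, as in the proof.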
   Thanks to (18), the densities (µ̄n)λscale, λ > 0, also satisfy (12). Applying condition 4) to this family, we obtain

                                       lim sup |V[(µ̄n )λscale ](x∗ ) − U∗ (x∗ )| = 0.
                                      λ→0 x∗ ∈Ω


Accounting for (19), we obtain
                                       lim sup sup |V[ϱλscale ](x∗ ) − U∗ (x∗ )| ≤ ε.
                                        λ→0      x∗ ∈Ω

Since the choice of a positive number ε was arbitrary, the implication 4) ⇒ 1) is proved.                           □

5    Conclusion
Based on the results from [Khlopin, 2015], for differential games with the Isaacs condition, we managed to prove
that their value functions’ insensitivity to the choice of the scale parameter for the exponential or uniform
distribution family implies the same result with respect to all densities of a relatively general form. Apparently,
this can be proved for all dynamic games by the game value map method (similarly to [Khlopin, in print]); however,
this should be investigated further.
   In addition, we propose a new condition (3)&(12), which is sufficient for (13). For control systems on a
compact invariant set under the non-expansive dynamics assumptions, a sufficient asymptotic condition weaker
than (3)&(5) or (4) was proposed in [Li et al., 2016]. The tightness of these conditions under the non-expansive
dynamics assumption, for control problems and for differential games, remains to be tested. Whether asymptotic
condition (4) is sufficient in the deterministic framework is likewise unknown.

Acknowledgements
This work was supported by the Russian Science Foundation (project no. 17-11-01093).

References
[Artstein et al., 2000] Artstein, Z., & Gaitsgory, V. (2000) The value function of singularly perturbed control
          systems. Appl Math Optim, 41(3), 425-445
[Bardi et al., 1997] Bardi, M., & Capuzzo-Dolcetta, I. (1997) Optimal control and viscosity solutions of Hamilton-
          Jacobi-Bellman equations. Boston:Birkhauser
[Bewley et al., 1976] Bewley, T., & Kohlberg, E. (1976) The asymptotic theory of stochastic games. Mathematics
         of Operations Research, 1, 197-208
[Buckdahn et al., 2011] Buckdahn, R., Cardaliaguet, P., & Quincampoix, M. (2011) Some Recent Aspects of
        Differential Game Theory. Dyn Games Appl 1(1), 74-114
[Cannarsa et al., 2015] Cannarsa, P., & Quincampoix, M. (2015) Vanishing Discount Limit and Nonexpansive
        Optimal Control and Differential Games. SIAM Journal on Control and Optimization, 53(4), 1789-
        1814.
[Cardaliaguet et al., 2000] Cardaliaguet, P., & Plaskacz, S. (2000). Invariant Solutions of Differential Games and
         Hamilton–Jacobi–Isaacs Equations for Time-Measurable Hamiltonians. SIAM Journal on Control and
         Optimization, 38(5), 1501-1520.
[Eliott et al., 1972] Elliott, R.J., & Kalton, N. (1972) The existence of value for differential games. Provi-
          dence:AMS.
[Gaitsgori, 1985] Gaitsgory, V. (1985) Application of the averaging method for constructing suboptimal solutions
         of singularly perturbed problems of optimal control. Automation and Remote Control, 46, 1081-1088




[Gaitsgori et al., 2013] Gaitsgory, V., & Quincampoix, M. (2013) On sets of occupational measures generated
         by a deterministic control system on an infinite time horizon, Nonlinear Anal. 88, 27-41.
[Grüne, 1998] Grüne, L. (1998) On the Relation between Discounted and Average Optimal Value Functions. J
          Diff Eq, 148, 65-99.
[Khlopin, 2015] Khlopin, D.V. (2015) On asymptotic value function for dynamic games with long- time-average
         payoff. In: Papers of the International Conference Systems Dynamics and Control Processes dedicated
         to the 90th Anniversary of Academician N.N.Krasovskii. pp.341-348, 2015 (In Russian)

[Khlopin, 2016] Khlopin, D.V. (2016) Uniform Tauberian theorem for differential games. Automat Rem Contr+,
         77(4), 734–750.

[Khlopin, in print] Khlopin, D.V. Tauberian Theorem for Value Functions. Dynamic Games and Applications,
         doi:10.1007/s13235-017-0227-5

[Krasovskii et al., 1988] Krasovskii, N.N., & Subbotin AI (1988) Game-Theoretical Control Problems. New
         York:Springer

[Lehrer et al., 1992] Lehrer, E., & Sorin, S. (1992) A uniform Tauberian theorem in dynamic programming.
          Mathematics of Operations Research, 17(2), 303-307

[Li et al., 2016] Li, X., Quincampoix, M., & Renault J. (2016) Limit value for optimal control with general
           means. Discrete and Continuous Dynamical Systems. Series A, 36, 2113-2132.

[Lions et al., 1986] Lions, P.-L., Papanicolaou, G., & Varadhan S.R.S. (1986) Homogenization of Hamilton-
          Jacobi Equations, unpublished work.

[Mertens et al., 1981] Mertens, J.F., & Neyman, A. (1981) Stochastic Games. Int J of Game Theory 10(2),
         53-66.

[Monderer et al., 1993] Monderer, D., & Sorin S. (1993) Asymptotic properties in Dynamic Programming. Int.
        J. of Game Theory, 22. 1-11.

[Oliu-Barton et al., 2013] Oliu-Barton M., & Vigeral G. (2013) A uniform Tauberian theorem in optimal control.
         In: P. Cardaliaguet, & R. Cressman (Eds.) Annals of the International Society of Dynamic Games,
         (pp.199-215) Boston:Birkhäuser.

[Subbotin, 1995] Subbotin, A.I. (1995) Generalized solutions of first order PDEs. Boston:Birkhauser
[Ziliotto, 2016a] Ziliotto, B. (2016) A Tauberian theorem for nonexpansive operators and applications to zero-
           sum stochastic games. Mathematics of Operations Research, 41(4), 1522-1534.
[Ziliotto, 2016b] Ziliotto, B. (2016) General limit value in zero-sum stochastic games. Intern J Game Theory,
           45(1-2), 353-374.
[Ziliotto, 2016c] Ziliotto, B. (2016) Tauberian theorems for general iterations of operators: applications to zero-
           sum stochastic games. arXiv preprint arXiv:1609.02175.



