A Local Variational Problem of Second Order for a Class
of Optimal Control Problems with Nonsmooth Objective
                       Function

                                    Alexander P. Afanasiev
                      Institute for Information Transmission Problems,
       Russian Academy of Sciences, Nakhimovsky Prospekt 36-1, Moscow 117218, Russia
                  National Research University Higher School of Economics,
                        Myasnitskaya str., 20, Moscow 101000, Russia
                    National University of Science and Technology MISiS,
                        Leninskiy Prospekt 4, Moscow 119049, Russia
                            Lomonosov Moscow State University,
                       GSP-1, Vorobievy Gory, Moscow 119991, Russia
                                          apa@iitp.ru
                                       Elena V. Putilina
                      Institute for Information Transmission Problems,
                                Russian Academy of Sciences,
                    Nakhimovsky Prospekt 36-1, Moscow 117218, Russia
                                         elena@iitp.ru




                                                        Abstract
                       An optimal control problem with a nonsmooth objective function de-
                       fined on a polytope is studied. A scheme reducing the original problem
                       to the problem of maximizing the sum of matrix elements lying above
                       the main diagonal is constructed. Necessary conditions of optimal-
                       ity for this finite-dimensional optimization problem are established. A
                       method of searching for the global maximum is suggested.

1    Introduction
Optimal control problems with nonsmooth objective functions of the form
                          J[u] = min_i {G_i(x(τ)), i = 1, …, N} → max_u,                              (1)

under consideration in this work arise in the study of switching situations that occur when the optimal
trajectory comes into contact with phase and functional constraints. Such situations are typical for optimal
control problems with mixed constraints (see [Milyutin et al., 2004]). Moreover, the need to describe switching
situations arises in the continuation of solutions with respect to a parameter (see [Dikusar et al., 2001]) and
in the formation of local variational problems (see [Afanasiev et al., 1990]). First-order degeneracy in the
objective function (1) is studied in [Afanasiev, 1998]. In this paper, we consider second-order degeneracy for a
local variational problem with an objective function of the form (1).

Copyright © by the paper’s authors. Copying permitted for private and academic purposes.
In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V.U. Malkova, M.A. Posypkin (eds.): Proceedings of
the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at http://ceur-ws.org




2   Formulation and Discussion of the Problem
We will study the problem

                J[u] = min_i ∫_0^1 ⟨P_i x(t), u(t)⟩ dt → max_u,   i = 1, …, R,                        (2)

                                          ẋ(t) = u(t),                                               (3)
                                          u(t) ∈ M,                                                   (4)
                          x(·) : [0, 1] → R^n,   u(·) : [0, 1] → R^n,                                 (5)

where M is a polyhedron, x(0) = 0, x(1) ∈ X, X ∩ M ≠ ∅.
   This problem arises in the study of the following important class of problems:

                      J[u] = min_i {G_i(x(T)), i = 1, …, p} → max_u,                                  (6)

                                          ẋ(t) = u(t),                                               (7)
                                        u(t) ∈ U(x(t)),
                U(x(t)) = {u : K(x(t))u(t) = L(x(t)),  M(x(t))u(t) ≥ N(x(t))},                        (8)

where M(·) is an m × n matrix, L(·) is an m × 1 matrix, K(·) is a k × n matrix, and N(·) is a k × 1 matrix.
   When implementing the procedure of continuation of the optimal trajectory with respect to the parameter T
(see [Afanasiev et al., 1990]), there arises a need to lift the degeneracy

                      G_i(x(τ)) = G_j(x(τ)),   i ≠ j,   i, j ∈ p_0,   p_0 ⊂ p.

Without loss of generality, we may assume that p_0 = p. This case is studied in detail in
[Afanasiev et al., 1990]. There the functions G_i(x(τ)) were represented in the form

                            ∫_0^T ⟨grad G_i(x(t)), ẋ(t)⟩ dt,

and the consideration was limited to the study of a linear programming problem of the form

                            u_{n+1} → max,
                            u_{n+1} ≤ ⟨a_i, u⟩,   i ∈ p,   where a_i = grad G_i(x_0),
                            u ∈ M,   M a polyhedron,   u ∈ R^n.
The second-order degeneracy occurs when grad G_i(x_0) = 0, i ∈ P.
To remove the degeneracy, we introduce

                (∂/∂x)(grad G_i(x))|_{x=x_0} ẋ = (∂²G_i(x)/∂x²)|_{x=x_0} u,

and, under the assumption (∂²G_i(x)/∂x²)|_{x=x_0} ≠ {0}_{n×n}, i ∈ P, we introduce the notation
(∂²G_i(x)/∂x²)|_{x=x_0} = P_i. The objective function of problem (2)–(5) then becomes

                J[u] = min_i ∫_0^1 ⟨P_i x(t), u(t)⟩ dt → max_u,   i ∈ P.

Further we assume that each P_i is skew-symmetric, because the symmetric part of the matrix may be taken
out from under the integral sign: for a symmetric S, the integral ∫_0^1 ⟨S x(t), ẋ(t)⟩ dt = ½⟨S x(1), x(1)⟩
depends only on the endpoint x(1).
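As a quick numerical illustration of why the symmetric part can be discarded (a minimal sketch with randomly generated data, not part of the paper’s construction), one can check that the line integral of ⟨S x, ẋ⟩ for symmetric S depends only on the endpoints of the trajectory, whereas the skew-symmetric part is genuinely path-dependent:

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.standard_normal((3, 3))
S = (P + P.T) / 2          # symmetric part of P
K = (P - P.T) / 2          # skew-symmetric part of P

def line_integral(M, path):
    # midpoint rule for the integral of <M x(t), x'(t)> dt along a
    # piecewise-linear path; exact on each linear segment
    return sum((M @ ((a + b) / 2)) @ (b - a) for a, b in zip(path, path[1:]))

end = rng.standard_normal(3)
path_a = [np.zeros(3), rng.standard_normal(3), end]   # 0 -> random point -> end
path_b = [np.zeros(3), rng.standard_normal(3), end]   # a different route, same endpoints

# the symmetric part contributes (1/2)<S x(1), x(1)>, independent of the route:
print(np.isclose(line_integral(S, path_a), line_integral(S, path_b)))   # True
# the skew-symmetric part depends on the route taken:
print(np.isclose(line_integral(K, path_a), line_integral(K, path_b)))   # False
```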




3    Reduction of the Problem
Denote by {R_i = (r_1^i, r_2^i, …, r_n^i), i = 1, …, N} the set of vertices of the polyhedron M. Then, by the
Carathéodory theorem,

                u(t) = Σ_{i=1}^{N} R_i v_i(t),   v_i(t) ≥ 0,   Σ_{i=1}^{N} v_i(t) = 1.

Let v_i(t) = ẏ_i(t). The vectors R_i can be considered as columns of the matrix R = {R_i}_{n×N}, and it is easy
to see that x(t) = R ∫_0^t v(τ) dτ = R y(t), y(0) = 0. The integrands of the objective function take the form

                ⟨C_i y(t), v(t)⟩,   where C_i = R^T P_i R.
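This change of variables can be sanity-checked numerically (a hedged sketch with random data; the particular R, P, y, v below are arbitrary stand-ins, not the paper’s specific objects): with x = Ry and u = Rv, the integrands ⟨P_i x, u⟩ and ⟨C_i y, v⟩, C_i = RᵀP_iR, coincide pointwise:

```python
import numpy as np

rng = np.random.default_rng(1)
n, N = 3, 5
R = rng.standard_normal((n, N))   # columns play the role of the vertices R_i of M
P = rng.standard_normal((n, n))
P = (P - P.T) / 2                 # a skew-symmetric P_i
C = R.T @ P @ R                   # C_i = R^T P_i R

y = rng.standard_normal(N)        # stand-in for y(t)
v = rng.standard_normal(N)        # stand-in for v(t)
x, u = R @ y, R @ v               # x = R y, u = R v

# the two integrands agree pointwise:
print(np.isclose((P @ x) @ u, (C @ y) @ v))   # True
```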

   Finally, problem (2)–(5) takes the form:

                J[v] = min_i ∫_0^1 ⟨C_i y(t), v(t)⟩ dt → max_v,                                       (9)

                                          ẏ(t) = v(t),                                              (10)
                v(t) ∈ S, where S is a simplex,   y(0) = 0,   R y(1) = x(1) ∈ X.                     (11)

4    Finite-dimensional Analogue of Problem (9) - (11)
Let v(t) take on a finite number L of values on the interval [0, 1]:

                v(t) = v_α   for t ∈ [t_α, t_{α+1}],   α = 0, 1, …, L − 1,

where t_0 = 0, t_L = 1.
   It is clear that, for t ∈ [t_α, t_{α+1}],

                y(t) = ∫_0^t v(τ) dτ = Σ_{β=0}^{α−1} v_β (t_{β+1} − t_β) + v_α (t − t_α).

Let us introduce the notation y_β = v_β (t_{β+1} − t_β).
Substituting the obtained y(t) and y_β, β = 0, …, L − 1, into the objective function (9) and the constraints
(10) and (11), we get

                J[v] = min_i Σ_{α=1}^{L−1} Σ_{β=0}^{α−1} ⟨C_i y_α, y_β⟩ → max_{y_α},                 (12)

                Σ_{α=0}^{L−1} y_α = y(1),   R y(1) ∈ X,                                              (13)

                ⟨y(1), 1_N⟩ = 1,   y(1) ≥ 0,   1_N = (1, 1, …, 1)^T,                                 (14)

where C_i = R^T P_i R.
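The passage from the integral in (9) to the double sum over step pairs can be illustrated numerically (a sketch under assumed random data; the code below accumulates ⟨C y_β, y_α⟩ for β < α, which for skew-symmetric C matches the integral exactly, the two orderings of the arguments inside the inner product differing only in sign):

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 4, 6
C = rng.standard_normal((N, N))
C = (C - C.T) / 2                          # a skew-symmetric C_i
t = np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0, 1, L - 1))))  # grid t_0..t_L
v = rng.standard_normal((L, N))            # v_alpha on [t_alpha, t_{alpha+1}]
y_steps = v * np.diff(t)[:, None]          # y_alpha = v_alpha (t_{alpha+1} - t_alpha)

# dense trapezoidal quadrature of the integral of <C y(t), v(t)> dt
ts = np.linspace(0.0, 1.0, 200001)
idx = np.clip(np.searchsorted(t, ts, side="right") - 1, 0, L - 1)
y_cum = np.vstack([np.zeros(N), np.cumsum(y_steps, axis=0)])
y_t = y_cum[idx] + (ts - t[idx])[:, None] * v[idx]        # piecewise-linear y(t)
f = np.einsum("ij,tj,ti->t", C, y_t, v[idx])              # <C y(t), v(t)> at each sample
integral = np.sum((f[1:] + f[:-1]) / 2 * np.diff(ts))

# double sum over ordered pairs of steps
double_sum = sum((C @ y_steps[b]) @ y_steps[a] for a in range(L) for b in range(a))
print(np.isclose(integral, double_sum, atol=1e-3))        # True
```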
   This problem is easily rewritten as a mathematical programming problem with quadratic (bilinear)
constraints. Using the skew-symmetry of the matrices C_i, we can derive the following equality:

                Σ_{α=1}^{L−1} Σ_{β=0}^{α−1} ⟨C_i y_α, y_β⟩ = Σ_{α=1}^{L−1} Σ_{β=α+1}^{L} ⟨C_i y_α, y_β⟩,




and finally have

                Z → max,                                                                             (15)

                Z ≤ Σ_{α=1}^{L−1} Σ_{β=α+1}^{L} ⟨C_i y_α, y_β⟩,                                      (16)

                Σ_{α=0}^{L} y_α = y(1),                                                              (17)

                ⟨y(1), 1_N⟩ = 1,   y(1) ≥ 0,   R y(1) ∈ X.                                           (18)
   To study properties of the solution of problem (15)–(18), we may relax it by dropping constraint (18).
Then the following theorem is valid.

T h e o r e m 1. Among the solutions of problem (15)–(17) (y(1) is a fixed parameter) there always exists one
such that ⟨y_α, y_β⟩ = 0 for any α ≠ β.

 Proof. Let us replace problem (15)–(18) by the set of bilinear programming problems with linear constraints
of the form

                Σ_{α=1}^{L−1} Σ_{β=α+1}^{L} ⟨C_i y_α, y_β⟩ → max,   i = 1, …, p,                     (19)

                Σ_{α=0}^{L} y_α = y(1),                                                              (20)

solve each of these problems separately, and among the solutions of problem (15)–(17) choose the one with the
maximum value of the objective function. (Obviously, this technique makes sense only in the proof; reasonable
computational procedures will be discussed later.) Thus, the task comes down to a problem of the form
(19)–(20). This problem is investigated in detail in [Afanasyev, 1993], where it is shown that among the
solutions of a problem of the form (19)–(20) there is always one such that ⟨y_α, y_β⟩ = 0 for any α ≠ β. □

   This theorem has an important corollary.

C o r o l l a r y 1. Let the vector y(1) ∈ R^N have N_0 ≤ N non-zero components. Then, whatever L (the
number of subdivisions in constructing the difference scheme), the solution of problem (15)–(17) (and hence of
(15)–(18)) can be realized with no more than N_0 mutually orthogonal vectors y_α.

5    The Theorem on the Traversal of the Vertices of the Polyhedron
Let us return to problem (2) - (5).

T h e o r e m 2. The solution of problem (2)–(5) exists and can always be realized at the vertices of the
polyhedron M, and the number of switchings does not exceed N, where N is the number of vertices of the
polyhedron M.

 Proof. Consider a sequence v^L(t) of simple (step) functions converging to a measurable function v*(t) that
gives the optimal solution of problem (9)–(11):

                lim_{L→∞} v^L(t) = v*(t),                                                            (21)

                lim_{L→∞} y^L(t) = lim_{L→∞} ∫_0^t v^L(τ) dτ → y*(t) = ∫_0^t v*(τ) dτ.               (22)

To each L there corresponds a problem (12)–(14), whose solution, beginning with some number N_0 (due to the
corollary of Theorem 1), does not depend on L. Therefore y*(t) is determined by {y_α}, α = 1, …, N_0. That is,
the solution of problem (9)–(11) is completely determined by the solution of problem (12)–(14) (its
finite-dimensional analogue). The transition from problem (9)–(11) back to problem (2)–(5) is obvious. The
theorem is proved. □




6    Properties of the Solution of the Difference Problem (12) - (14)
By Theorem 1, each vector y_α has only one nonzero component, and all nonzero components of the vectors y_α
are components of the vector y(1). The essence of problem (12)–(14) is thus to find a suitable traversal of the
vertices of the polyhedron M.

   Let us go back to the maximization of the expression

                Σ_{α=1}^{L−1} Σ_{β=α+1}^{L} ⟨C_i y_α, y_β⟩,   where Σ_{α=1}^{N} y_α = y(1),

in view of the above. It is clear how the original problem is reduced to the following one: consider the matrix
{y_α C_i^{αβ} y_β}_{N×N} and pose the problem of finding a permutation of rows and columns with the same
indices that maximizes the sum of the above-diagonal elements of the resulting matrix. Thus we can formulate
the following optimization problem.
   Suppose that there is a set of skew-symmetric matrices A_i = {a_{αβ}^i}_{n×n}, i = 1, …, k. Let S(A_i)
denote the sum of the above-diagonal elements. It is necessary to find a simultaneous permutation of rows and
columns with the same indices such that

                min_i S(A_i) → max.                                                                  (23)

In [Afanasiev et al., 1990] it is shown that, in the case of a single matrix (k = 1), nonnegativity of all
partial sums of the above-diagonal elements along each row is a necessary and sufficient condition for the
maximum, i.e.

                Σ_{β=α+1}^{γ} a_{αβ}^i ≥ 0,   α = 1, …, n,   γ = α + 1, …, n.                        (24)
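Condition (24) is straightforward to check mechanically. The sketch below (an illustration, not part of the paper) scans the partial above-diagonal row sums of a matrix, using the 4 × 4 matrix from the example below:

```python
import numpy as np

def partial_sums_nonneg(A):
    # condition (24): every partial sum of above-diagonal elements
    # along each row must be nonnegative
    n = len(A)
    return all(sum(A[a][b] for b in range(a + 1, g + 1)) >= 0
               for a in range(n) for g in range(a + 1, n))

A = np.array([[ 0,   6,   2,  -7],
              [-6,   0,   3,  -2],
              [-2,  -3,   0,  10],
              [ 7,   2, -10,   0]])

print(partial_sums_nonneg(A))   # True: row sums 6, 8, 1; 3, 1; 10 are all >= 0
```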

Generally speaking, these are only local maximum conditions. That problem (23) is a global optimization
problem is shown by the following example. The matrix

                                 0    6    2   −7
                                −6    0    3   −2
                                −2   −3    0   10
                                 7    2  −10    0

with the sum of the above-diagonal elements 6 + 2 − 7 + 3 − 2 + 10 = 12, satisfying condition (24), can be
reduced, by using the permutation (3, 4, 1, 2), to the matrix
                                                                
                                 0   10   −2   −3
                               −10    0    7    2
                                 2   −7    0    6
                                 3   −2   −6    0

with the sum of the above-diagonal elements 10 − 2 − 3 + 7 + 2 + 6 = 20, which also satisfies condition (24).
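The worked example can be reproduced, and the global problem attacked by brute force for small n, with a short script (a sketch; the permutations are 0-based, so the paper’s (3, 4, 1, 2) becomes (2, 3, 0, 1)):

```python
import numpy as np
from itertools import permutations

def above_diag_sum(A, perm):
    # sum of the above-diagonal elements after simultaneously
    # permuting the rows and columns of A by perm
    B = A[np.ix_(perm, perm)]
    n = len(B)
    return sum(B[a, b] for a in range(n) for b in range(a + 1, n))

A = np.array([[ 0,   6,   2,  -7],
              [-6,   0,   3,  -2],
              [-2,  -3,   0,  10],
              [ 7,   2, -10,   0]])

print(above_diag_sum(A, [0, 1, 2, 3]))   # 12, as in the paper
print(above_diag_sum(A, [2, 3, 0, 1]))   # 20, after the permutation (3, 4, 1, 2)

# exhaustive search over all simultaneous row/column permutations
best = max(permutations(range(4)), key=lambda p: above_diag_sum(A, list(p)))
print(above_diag_sum(A, list(best)))
```

For k > 1 matrices this brute-force inner evaluation would sit inside a branch-and-bound loop, as suggested in the text.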
   Thus, even in the case k = 1, methods of search for a global extremum must be involved to solve problem
(23). However, the use of conditions (24) in problem (23) when k > 1 allows one to apply the branch-and-bound
method.

Acknowledgements
This work was supported by the Russian Science Foundation, Project 16-11-10352.

References
[Milyutin et al., 2004] Milyutin A. A., Dmitruk A. V., Osmolovskii N. P. (2004). Maximum Principle in Optimal
         Control. Moscow: Moscow State University, Faculty of Mechanics and Mathematics, 168 p.
[Dikusar et al., 2001] Dikusar V. V., Koshka M., Figura A. (2001). A method of continuation on a parameter in
         solving boundary value problems in optimal control. Differential Equations, 37:4, 453–45.




[Afanasiev et al., 1990] Afanasiev A. P., Dikusar V. V., Milyutin A. A., Chukanov S. V. (1990). A Necessary
         Condition in Optimal Control. Moscow: Nauka.
[Afanasiev, 1998] Afanasiev A. P. (1998). On local variational problems in the presence of functional
         constraints. Differential Equations, 34:11, 1459–1463.
[Afanasyev, 1993] Afanasyev A. P. (1993). Generalized isoperimetric problem on a polyhedron. Differential
         Equations, 29:11, 1856–1867.



