<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Numerical Algorithm for Optimal Control of Continuity Equations</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nikolay Pogodaev</string-name>
          <xref ref-type="aff" rid="aff1" />
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Krasovskii Institute of Mathematics and Mechanics</institution>
          ,
          <addr-line>Kovalevskaya str. 16, 620990 Yekaterinburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Matrosov Institute for System Dynamics and Control Theory</institution>
          ,
          <addr-line>Lermontov str. 134, 664033 Irkutsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>467</fpage>
      <lpage>474</lpage>
      <abstract>
        <p>An optimal control problem for the continuity equation is considered. The aim of a controller is to maximize the total mass within a target set at a given time moment. An iterative numerical algorithm for solving this problem is presented.</p>
        <p>Copyright © by the paper's authors. Copying permitted for private and academic purposes. In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V. U. Malkova, M. A. Posypkin (eds.): Proceedings of the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at http://ceur-ws.org</p>
      </abstract>
      <kwd-group>
        <kwd>ρt + divx (v(t, x, u(t)) ρ) = 0</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Consider a mass distributed on Rn that drifts along a controlled vector field v = v(t, x, u). The aim of the
controller is to bring as much mass as possible to a target set A by a time moment T .</p>
      <p>Let us give the precise mathematical statement of the problem. Suppose that ρ = ρ(t, x) is the density of the distribution and u = u(t) is a strategy of the controller. Then, ρ evolves in time according to the continuity equation</p>
      <p>ρt + divx (v(t, x, u(t)) ρ) = 0, (1)</p>
      <p>ρ(0, ·) = ρ0, (2)</p>
      <p>where ρ0 denotes the initial density. Our aim is to find a control u that maximizes the integral</p>
      <p>J[u] = ∫_A ρ(T, x) dx. (3)</p>
      <p>Typically, u belongs to a set 𝒰 of admissible controls. Here we take the following one:</p>
      <p>𝒰 = { u(·) : u(·) is measurable, u(t) ∈ U a.e. t ∈ [0, T] },</p>
      <p>where U is a compact subset of Rm.</p>
      <p>In this paper we propose an iterative method for solving problem (1)–(3), which is based on the needle linearization algorithm for classical optimal control problems [Srochko, 2000]. Given an initial guess u0, the algorithm produces a sequence of controls uk with the property J[uk+1] ≥ J[uk], for all k ∈ N.</p>
      <p>A different approach to the numerical solution of (1)–(3) was proposed by S. Roy and A. Borzì in [Roy, 2017]. The authors used a specific discretization of (1) to produce a finite-dimensional optimization problem. It seems difficult to compare the efficiency of the two algorithms, because one was tested on 2D and the other on 1D problems.</p>
      <p>Finally, let us remark that problem (1)–(3) is equivalent to the following optimal control problem for an ensemble of dynamical systems:</p>
      <p>Maximize the integral of ρ0 over the set {x : the solution y(·) of y˙ = v(t, y, u(t)) with y(0) = x satisfies y(T) ∈ A}.</p>
      <p>In what follows, Φs;t denotes the flow of a time-dependent vector field w = w(t, x), i.e., Φs;t(x) = y(t), where y(·) is a solution to the Cauchy problem</p>
      <p>y˙(t) = w(t, y(t)), y(s) = x.</p>
      <p>Given a set A ⊂ Rn and a time interval [0, T], we use the symbol At for the image of A under the map ΦT;t, i.e.,
At = ΦT;t(A). The Lebesgue measure on R is denoted by L1.</p>
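        <p>Numerically, Φs;t(x) is just the solution of this Cauchy problem evaluated at time t, so any ODE integrator applies; a minimal sketch (the field w below is a placeholder, not one from the paper):

```python
# Sketch: evaluate the flow map Phi_{s;t}(x) of a time-dependent field w(t, x).
import numpy as np
from scipy.integrate import solve_ivp

def w(t, x):
    # placeholder time-dependent vector field on R^2
    return np.array([x[1], -np.sin(x[0])])

def flow(s, t, x):
    """Phi_{s;t}(x): solve y' = w(t, y), y(s) = x, and return y(t)."""
    sol = solve_ivp(w, (s, t), np.asarray(x, dtype=float),
                    rtol=1e-9, atol=1e-12)
    return sol.y[:, -1]

# The flow is a two-parameter family: Phi_{t;s} inverts Phi_{s;t}.
x0 = np.array([1.0, 0.0])
x1 = flow(0.0, 2.0, x0)
x0_back = flow(2.0, 0.0, x1)   # should return (approximately) x0
```

Note that solve_ivp accepts a decreasing time span, which is exactly what the backward maps Φt;0 used below require.</p>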
      <sec id="sec-1-1">
        <title>Assumptions</title>
        <p>The map v : [0, T] × Rn × U → Rn is continuous.</p>
        <p>The map x ↦ v(t, x, u) is twice continuously differentiable, for all t ∈ [0, T] and u ∈ U.</p>
        <p>There exist positive constants L, C such that |v(t, x, u) − v(t, x′, u)| ≤ L |x − x′| and |v(t, x, u)| ≤ C (1 + |x|), for all t ∈ [0, T], u ∈ U, and x, x′ ∈ Rn.</p>
        <sec id="sec-1-1-1">
          <title>The initial density ρ0 is continuously differentiable.</title>
          <p>The target set A ⊂ Rn is a compact tubular neighborhood, i.e., A is a compact set that can be expressed as a union of closed n-dimensional balls of a certain positive radius r.</p>
          <p>
            In addition, to guarantee the existence of an optimal control
            <xref ref-type="bibr" rid="ref1">(see [Pogodaev, 2016] for details)</xref>
            , we must assume that the vector field v takes the form v(t, x, u) = ∑i φi(u) vi(t, x), for some real-valued functions φi, and the set {(φ1(u), . . . , φk(u)) : u ∈ U} is convex.
          </p>
          <p>Description:</p>
          <p>2. For each t, find w(t) ∈ argmin { ∫_{∂At} ρ(t, x) v(t, x, ω) · nAt(x) dσ(x) : ω ∈ U }. (7)</p>
          <p>3. Let g(t) = ∫_{∂At} ρ(t, x) [v(t, x, uk(t)) − v(t, x, w(t))] · nAt(x) dσ(x).</p>
          <p>4. For each ε ∈ (0, T], find I(ε) ∈ argmax { ∫_ι g(t) dt : ι ⊂ [0, T] is measurable and L1(ι) = ε }. (8)</p>
          <p>5. Construct uw;I(ε) by (5).</p>
          <p>6. Find ε∗ ∈ argmax { J[uw;I(ε)] : ε ∈ (0, T] }. (9)</p>
          <p>7. Let uk+1 = uw;I(ε∗).</p>
          <p>
            Theorem 2.1
            <xref ref-type="bibr" rid="ref1">([Pogodaev, 2016])</xref>
            Let u be an optimal control for (1)–(3) and ρ be the corresponding trajectory with ρ0 ∈ C1(Rn). Then, for a.e. t ∈ [0, T], we have
          </p>
          <p>∫_{∂At} ρ(t, x) v(t, x, u(t)) · nAt(x) dσ(x) ≤ ∫_{∂At} ρ(t, x) v(t, x, ω) · nAt(x) dσ(x) for all ω ∈ U. (4)</p>
          <p>Here At = ΦT;t(A), where Φ is the phase flow of the vector field (t, x) ↦ v(t, x, u(t)), nAt(x) is the measure-theoretic outer unit normal to At at x, and σ is the (n − 1)-dimensional Hausdorff measure.</p>
          <p>The precise definitions of the measure-theoretic unit normal and the Hausdorff measure can be found, e.g., in [Evans, 1992]. We remark that whenever ∂A is an (n − 1)-dimensional surface, nAt coincides with the usual outer unit normal to At and σ coincides with the usual (n − 1)-dimensional volume form.</p>
          <p>Let I(ε) ⊂ [0, T] be a measurable set of Lebesgue measure ε. Given two controls u and w, we consider their mixture</p>
          <p>uw;I(ε)(t) = w(t) for t ∈ I(ε), and uw;I(ε)(t) = u(t) otherwise. (5)</p>
          <p>The proof of Theorem 2.1 gives, as a byproduct, the following increment formula</p>
          <p>J[uw;I(ε)] − J[u] = ∫_{I(ε)} ∫_{∂At} ρ(t, x) [v(t, x, u(t)) − v(t, x, w(t))] · nAt(x) dσ(x) dt + o(ε), (6)</p>
          <p>which will be used in the next section.</p>
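          <p>On a uniform time grid the choice of I(ε) in step 4 reduces to sorting: the integral of g over a set of measure ε is maximized by the grid cells where g is largest. A minimal sketch of steps 4–5 (the grid, the controls, and g below are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def mix_controls(u, w_ctrl, g, eps, dt):
    """Steps 4-5 on a grid: pick I(eps) as the eps/dt cells with largest g,
    then replace u by w there (the 'needle' mixture u_{w;I(eps)})."""
    n_cells = max(1, int(round(eps / dt)))
    idx = np.argsort(g)[::-1][:n_cells]     # cells maximizing the integral of g
    mask = np.zeros(g.size, dtype=bool)
    mask[idx] = True
    u_new = np.where(mask, w_ctrl, u)
    gain = g[mask].sum() * dt               # first-order predicted increment
    return u_new, gain

# toy data on [0, 1] with 100 cells
dt = 0.01
t = np.arange(100) * dt
u = np.zeros(100)
w_ctrl = np.ones(100)
g = np.sin(np.pi * t)                       # hypothetical improvement density
u_eps, gain = mix_controls(u, w_ctrl, g, eps=0.1, dt=dt)
```

By the increment formula, `gain` approximates J[uw;I(ε)] − J[u] up to o(ε).</p>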
        </sec>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>Numerical Algorithm</title>
      <p>In this section we describe the algorithm, prove the improvement property J[uk+1] ≥ J[uk], and discuss a possible implementation.</p>
      <p>1. Let uk be a current guess. For each t, compute the set ∂At and ρ(t, ·) on ∂At.</p>
      <sec id="sec-2-1">
        <title>Necessary Optimality Condition</title>
        <sec id="sec-2-1-1">
          <title>The necessary optimality condition for problem (1)–(3) looks as follows:</title>
          <p>The algorithm produces an infinite sequence of admissible controls. Of course, any implementation of it should contain obvious modifications that cause the algorithm to stop after a finite number of iterations. Note that it may happen that problems (8) and (9) admit no solution. In this case I(ε) and ε∗ must be chosen so that the values of the corresponding cost functions lie near the suprema.</p>
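          <p>Such an implementation can be organized as an outer loop with a tolerance-based stopping rule; a schematic sketch (the callables evaluate_J and improve stand in for steps 1–7 and are placeholders, not the authors' code):

```python
def optimize(u0, evaluate_J, improve, tol=1e-6, max_iter=50):
    """Outer loop: 'improve' performs one pass of steps 1-7 and returns
    a new control; stop when the gain in J becomes negligible."""
    u, J = u0, evaluate_J(u0)
    for _ in range(max_iter):
        u_next = improve(u)
        J_next = evaluate_J(u_next)
        if J_next - J > tol:          # monotone improvement J[u_{k+1}] >= J[u_k]
            u, J = u_next, J_next
        else:
            break
    return u, J

# toy check: "controls" are numbers, J is concave with maximum at u = 3
u_best, J_best = optimize(0.0, lambda u: -(u - 3.0) ** 2,
                          lambda u: u + 0.5)
```
</p>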
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>Step 2</title>
      </sec>
      <sec id="sec-2-3">
        <title>Justification</title>
        <p>Since the integral on the right-hand side is positive for all small ε, we conclude that J[uk+1] = J[uw;I(ε∗)] &gt; J[uk], as desired.</p>
        <p>In this step we must compute ρ(t, x) for all t and x satisfying x ∈ ∂At. Recall that</p>
        <sec id="sec-2-3-1">
          <title>Using Jacobi’s formula, we may write</title>
        </sec>
        <sec id="sec-2-3-2">
          <title>Meanwhile, by the definition of Φ, we have the following identities, and combining them gives</title>
          <p>ρ(t, x) = ρ0(y) / det DΦ0;t(y), where y = Φt;0(x).</p>
          <p>(d/dt)(det DΦ0;t(y)) = (det DΦ0;t(y)) tr[ DΦ0;t(y)−1 (d/dt) DΦ0;t(y) ].</p>
          <p>(d/dt) DΦ0;t(y) = Dxv(t, Φ0;t(y), u(t)) DΦ0;t(y).</p>
          <p>(d/dt)(det DΦ0;t(y)) = (det DΦ0;t(y)) div v(t, Φ0;t(y), u(t)).</p>
          <p>Thus, computing ρ(t, x) requires solving two Cauchy problems, one for finding Φ0;t(y) and one for finding det DΦ0;t(y).</p>
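          <p>The two Cauchy problems can be solved together as one augmented system: transport the point backward to y = Φt;0(x) while accumulating the divergence along the characteristic, which yields log det DΦ0;t(y). A sketch (the field, the pre-substituted control, and ρ0 are illustrative assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def v(s, x):
    # placeholder controlled field with the control already substituted
    return np.array([-0.5 * x[0], 0.2 * x[1]])

def div_v(s, x):
    return -0.5 + 0.2              # divergence of the placeholder field

def rho0(y):
    return np.exp(-0.5 * np.dot(y, y)) / (2.0 * np.pi)

def density(t, x):
    """rho(t, x) = rho0(y) / det DPhi_{0;t}(y) with y = Phi_{t;0}(x);
    det DPhi_{0;t} solves d/ds log(det) = div v along the characteristic."""
    def rhs(s, z):
        pos = z[:2]
        return np.append(v(s, pos), div_v(s, pos))
    # integrate backward from time t to 0; the last slot accumulates -log det
    z = solve_ivp(rhs, (t, 0.0), np.append(np.asarray(x, dtype=float), 0.0),
                  rtol=1e-9, atol=1e-12).y[:, -1]
    y, neg_log_det = z[:2], z[2]
    return rho0(y) * np.exp(neg_log_det)

rho_t0 = density(1.0, [0.0, 0.0])   # evaluate at the equilibrium x = 0
```
</p>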
          <p>In general, the optimization problem (7) is nonlinear, which makes it difficult. On the other hand, in many cases</p>
        </sec>
        <sec id="sec-2-3-3">
          <title>U and v enjoy the following extra properties: the set U is convex and the vector field v is affine with respect to the control:</title>
        </sec>
      </sec>
      <sec id="sec-2-4">
        <title>Step 7</title>
        <sec id="sec-2-4-1">
          <title>In this step the cost</title>
          <p>∫_A ρ(T, x) dx = ∫_{A0} ρ0(x) dx</p>
          <p>must be computed. To that end, we must know the whole set A0, while on the other steps of the algorithm we
deal only with the boundaries of At. It is interesting to note that, under the additional assumption that
the target set A ⊂ Rn is contractible and its boundary ∂A is an (n −</p>
        </sec>
        <sec id="sec-2-4-2">
          <title>1)-dimensional smooth surface,</title>
          <p>the knowledge of ∂A0 is enough for computing the cost.</p>
          <p>Indeed, since the target A = AT is contractible, the set A0 is contractible as well. Any closed differential form on a contractible set is exact [Tu, 2011]. Hence ρ0 dx1 ∧ ⋯ ∧ dxn = dα, for some (n − 1)-dimensional differential form α. Now the Stokes theorem gives:</p>
          <p>∫_{A0} ρ0 dx1 ∧ ⋯ ∧ dxn = ∫_{∂A0} α.</p>
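          <p>For n = 2 one convenient primitive is α = F(x1, x2) dx2 with ∂F/∂x1 = ρ0, so the mass becomes a line integral over a polygonal approximation of ∂A0. A sketch (the density, the midpoint quadrature, and this particular choice of α are our assumptions, not taken from the paper):

```python
import numpy as np
from scipy.integrate import quad

def rho0(x1, x2):
    return 1.0                      # uniform toy density

def F(x1, x2):
    # F(x1, x2) = integral of rho0(s, x2) for s from 0 to x1, so dF/dx1 = rho0
    val, _ = quad(lambda s: rho0(s, x2), 0.0, x1)
    return val

def mass_from_boundary(pts):
    """Stokes: the integral of rho0 over the region equals the line integral
    of F dx2 over the positively oriented polygonal boundary pts."""
    total = 0.0
    m = len(pts)
    for i in range(m):
        (a1, a2), (b1, b2) = pts[i], pts[(i + 1) % m]
        # midpoint rule for the integral of F dx2 on the segment
        total += F(0.5 * (a1 + b1), 0.5 * (a2 + b2)) * (b2 - a2)
    return total

# unit square, counterclockwise: the mass of the uniform density is 1
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```
</p>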
        </sec>
        <sec id="sec-2-4-3">
          <title>Hence, to get the desired α, we may take</title>
          <p>α = ( ∫_0^{x1} ρ0(s, x2, . . . , xn) ds ) dx2 ∧ ⋯ ∧ dxn.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Examples</title>
      <sec id="sec-3-1">
        <title>Boat</title>
        <p>This section describes several toy problems, which we used for testing the algorithm.</p>
        <p>Consider a boat floating in the middle of a river at night. Since it is dark, the boatmen cannot see any landmarks,
and therefore are unsure about the boat’s position. They want to reach a river island at a certain time with
highest probability. How should they act?</p>
        <sec id="sec-3-1-1">
          <title>Assume that the speed of the river water is given by</title>
          <p>the island is a unit circle centered at x0, the initial position of the boat is described by the density function</p>
        </sec>
        <sec id="sec-3-1-2">
          <title>Thus, the boat’s position x(t) evolves according to the differential equation</title>
          <p>x˙ = v0(x) + u,</p>
          <p>where u ∈ R2 is the component of the boat's velocity due to rowing. Here |u| ≤ umax.</p>
          <p>Parameters for the computation: σ = 1, α = β = 0.5, umax = 0.75, x0 = (−3, 0), T = 12.</p>
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>Pendulum</title>
        <p>Here we want to stop a moving pendulum whose initial position is uncertain. In this case the initial density is again given by (11), and</p>
        <p>v1(x) = (0, 1).</p>
        <p>x˙ = v0(x) + u v1(x),</p>
        <sec id="sec-3-2-1">
          <title>Hence the control system takes the form</title>
          <p>where u ∈ [−umax, umax] is an external force. The initial position of the pendulum is given by (11). The target is a unit circle centered at (π/2, 0).</p>
          <p>Parameters for the computation: σ = 1, umax = 0.5, x0 = (π/2, 0), T = 6.</p>
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>Sheep</title>
        <p>Consider a herd of sheep located near the origin. The sheep are affected by a vector field v0(x) pushing them away from the origin. To prevent this we can turn on repellers, which are located at the following positions:</p>
        <p>xk = ( R cos(2π(k − 1)/m), R sin(2π(k − 1)/m) ).</p>
        <sec id="sec-3-3-1">
          <title>Each repeller produces a vector field vk(x). So we have v(x, u) = v0(x) +</title>
          <p>∑_{k=1}^{m} uk vk(x),</p>
          <p>where uk is the intensity of the k-th repeller. The control u = (u1, . . . , um) belongs to the simplex</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>In what follows we set</title>
          <p>U = { (u1, . . . , um) : ∑_{k=1}^{m} uk = 1, uk ∈ [0, 1], k = 1, . . . , m }.</p>
          <p>v0(x) = α (x − x0) / √(1 + |x − x0|²),</p>
          <p>where x0 is a certain point not far from the origin, and</p>
          <p>vk(x) = β e^{−|x − xk|⁴} (x − xk), k = 1, . . . , m.</p>
          <p>Suppose that the initial distribution is given by (11), the target is an ellipse centered at x0 whose major and
minor semi-axes are a and b.</p>
          <p>Parameters for the computation: σ = 1, x0 = (0, 0), T = 3, m = 6, a = 2, b = 1.2.</p>
          <p>Remark 4.1 The answer to the minimization problem arising in the second step of the algorithm is very simple, since the cost is linear in ω ∈ U and U is a simplex. Let j be such that the flux coefficient ∫_{∂At} ρ(t, x) vj(x) · nAt(x) dσ(x) is minimal over j = 1, . . . , m; then an optimal solution is given by ω̄ = (0, . . . , 0, 1, 0, . . . , 0), where 1 is located at the j-th position. In particular, this means that at every time moment t only one repeller is turned on. Hence, instead of repellers, we may think of a dog that jumps from one place to another.</p>
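          <p>This vertex rule is a one-liner: minimizing a linear functional over the simplex amounts to picking the smallest flux coefficient. A sketch with toy coefficients (the numbers are illustrative):

```python
import numpy as np

def best_repeller(flux_coeffs):
    """Minimize a linear functional sum_k omega_k * c_k over the simplex:
    the optimum is the vertex e_j with j = argmin_k c_k."""
    j = int(np.argmin(flux_coeffs))
    omega = np.zeros(len(flux_coeffs))
    omega[j] = 1.0                  # only one repeller (the 'dog') is on
    return j, omega

# toy flux coefficients c_k = integral of rho * v_k . n over the boundary
c = np.array([0.8, -0.4, 0.1, 0.5, -0.2, 0.3])
j, omega = best_repeller(c)
```
</p>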
        </sec>
      </sec>
      <sec id="sec-3-4">
        <title>Acknowledgements</title>
        <p>The work was supported by the Russian Science Foundation, grant No 17-11-01093.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name><surname>Pogodaev</surname>, N.</string-name>
          (<year>2016</year>).
          <article-title>Optimal control of continuity equations</article-title>.
          <source>NoDEA, Nonlinear Differ. Equ. Appl.</source>
          <volume>23</volume>(<issue>2</issue>), <fpage>24</fpage>.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name><surname>Roy</surname>, S.</string-name>
          &amp;
          <string-name><surname>Borzì</surname>, A.</string-name>
          (<year>2017</year>).
          <article-title>Numerical Investigation of a Class of Liouville Control Problems</article-title>.
          <source>Journal of Scientific Computing</source>.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>[Srochko</source>
          , 2000] Srochko,
          <string-name>
            <surname>V. A.</surname>
          </string-name>
          (
          <year>2000</year>
          ).
          <article-title>Iterative methods for solving optimal control problems</article-title>
          . Moscow: Fizmatlit.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <string-name><surname>Evans</surname>, L. C.</string-name>
          &amp;
          <string-name><surname>Gariepy</surname>, R. F.</string-name>
          (<year>1992</year>).
          <article-title>Measure theory and fine properties of functions</article-title>.
          Studies in Advanced Mathematics. CRC Press, Boca Raton, FL.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <string-name><surname>Tu</surname>, L. W.</string-name>
          (<year>2011</year>).
          <article-title>An introduction to manifolds</article-title>.
          2nd revised ed. New York, NY: Springer.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>