     Maximum entropy and principle of least action for
    electrotechnical systems in deterministic chaos mode

                 Vladimir Fedorov                             Igor Fedorov
          Omsk State Technical University            Omsk State Technical University
       11 Mira avenue, 644050, Omsk, Russia       11 Mira avenue, 644050, Omsk, Russia
                   dm.90@bk.ru                              omsk2010@bk.ru
                                        Sergey Fedorov
                               Omsk State Technical University
                            11 Mira avenue, 644050, Omsk, Russia
                                      temper99@mail.ru




                                                       Abstract
                      The entropy and its maximization are determined for the different
                      possible trajectories of a chaotic system moving in its phase space
                      between two cells. The chaotic system, within the framework of this
                      article, is understood as an electrotechnical system with deterministic
                      chaos modes. The paths of the chaotic system in phase space are
                      supposed to be differentiated by their actions, in the sense of the
                      principle of least action. It is shown that maximization of the entropy
                      leads to a trajectory selection probability distribution as a function
                      of the action, from which one can easily obtain the probability of
                      the electrotechnical system's transition from one state to another. Of
                      interest is the fact that the most probable trajectories are the paths
                      of least action. This suggests that the principle of least action in a
                      probabilistic situation is equivalent to the principle of maximum
                      entropy, or of uncertainty, associated with a particular probability
                      distribution.




1    Introduction
The objective of this paper is to investigate distributions of probabilities attributed to different trajectories of
a chaotic system moving between two points in phase space. The phase space of the system is defined so that
a point in it represents a state of the system. If the system consists of N bodies moving in ordinary
three-dimensional configuration space, the phase space has dimension 6N (3N coordinates and 3N momenta) [1].
    Let us now consider a non-equilibrium electrotechnical system moving in the phase space between two points
a and b, which are located in two elementary cells of this phase space partition. If the movement of the electrical
system is regular, or if the n-dimensional state space has positive or zero Riemannian curvature, there is
essentially only one possible trajectory between the two points: a single narrow bundle of trajectories that
track each other between the initial and final cells. These trajectories should be the ways of minimizing

Copyright © by the paper's authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
In: Sergei S. Goncharov, Yuri G. Evtushenko (eds.): Proceedings of the Workshop on Applied Mathematics and Fundamental
Computer Science 2020, Omsk, Russia, 30-04-2020, published at http://ceur-ws.org




the action, in accordance with the principle of least action [2], and should have some probability of occurrence.
Any other set of trajectories must have zero probability.
   For an electrical system in chaotic motion, or when the Riemannian curvature of the phase space is negative,
everything is different. Two points indistinguishable in the starting cell can be exponentially separated. Usually,
these two points never meet in the final cell in the phase space after they leave the starting cell. However, they
can pass through the same cell at two different moments in time. Therefore, between two given points there can
be multiple possible trajectories k (k = 1, 2, ..., w), with varying travel time tab(k) of the electrotechnical
system and varying probability pab(k) of its choosing path k. This is called a trajectory probability distribution.
   In this paper, the trajectory probability distribution due to dynamic instability is studied in terms of entropic
instability theory and the principle of least action. First of all, we suppose that different trajectories of non-
equilibrium electrotechnical systems moving between phase cells a and b are unambiguously differentiated by
their action, defined [3] as follows:

                                             Aab(k) = ∫_{tab(k)} Lk(t) dt,                                      (1)

where Lk(t) is the Lagrangian of the system at time moment t on path k, determined as Lk(t) = Uk(t) − Vk(t),
where Uk(t) is the total kinetic energy and Vk(t) is the total potential energy of the electrotechnical system.
   The integral Aab(k) is taken along path k over the time tab(k); tab(k) is the travel time of the system along path
k. If paths k can be identified only by the value of their actions, then it is possible to study their probability
distributions due to Jaynes entropic concept and maximum entropy method [4] taking into account the value of
action Aab (k). This approach leads us to a probabilistic interpretation of Maupertuis’s mechanical principle and
probability distribution depending on the action.
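As a minimal numerical sketch of the action integral (1), the snippet below approximates Aab(k) for one sampled path with the trapezoidal rule. The oscillator trajectory and all names in it are assumed toy values for illustration, not taken from the paper.

```python
import numpy as np

def path_action(t, kinetic, potential):
    """Approximate Aab(k) of formula (1): the time integral of the
    Lagrangian Lk(t) = Uk(t) - Vk(t) along one sampled path,
    using the trapezoidal rule."""
    lagrangian = np.asarray(kinetic) - np.asarray(potential)
    dt = np.diff(t)
    return 0.5 * np.sum((lagrangian[1:] + lagrangian[:-1]) * dt)

# Toy trajectory: a unit-mass harmonic oscillator over one full period,
# for which kinetic and potential energy average out and A is near zero.
t = np.linspace(0.0, 1.0, 2001)
w = 2.0 * np.pi
q = np.sin(w * t)            # coordinate
v = w * np.cos(w * t)        # velocity
U = 0.5 * v**2               # total kinetic energy Uk(t)
V = 0.5 * w**2 * q**2        # total potential energy Vk(t)
A = path_action(t, U, V)     # close to 0 over a full period
```

In a chaotic regime one would evaluate this integral separately for each of the w sampled paths to obtain the set of actions Aab(k).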

2   Trajectory entropy
The entropy referred to here measures our lack of knowledge of the system in question: the more we know about
the system, the less the entropy. According to Shannon [3], this entropy can be measured by the formula
S = −Σi pi ln pi, where pi is the probability attributed to situation i. As usual, we normalize Σi pi = 1,
with the summation running over all possible situations.
   Now for an ensemble of w possible paths, Shannon's entropy can be defined as follows:

                                          H(a, b) = −Σ_{k=1}^{w} pab(k) ln pab(k).                              (2)

Function H(a, b) is the path entropy and must be interpreted as the missing information needed to predict
which path from a to b the system chooses from the ensemble. According to our initial assumption, the value
that differentiates paths and their occurrence probabilities is the Lagrangian action.
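The path entropy (2) can be sketched in a few lines of code; the example distributions below are assumed for illustration and show the two extreme cases of missing information.

```python
import numpy as np

def path_entropy(p):
    """H(a,b) = -sum_k pab(k) ln pab(k), with 0*ln(0) taken as 0."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# A uniform choice over w paths maximizes the missing information: H = ln w.
w = 8
H_uniform = path_entropy(np.full(w, 1.0 / w))

# A certain choice of one path carries no missing information: H = 0.
H_certain = path_entropy([1.0, 0.0, 0.0])
```

The more paths the chaotic system can realistically take, the closer the distribution is to uniform and the larger H(a, b) becomes.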

3   Probability distribution of maximum entropy
An ensemble containing a large number of systems moving from a to b is considered. These systems are distributed
among w paths according to pab (k) in view of action Aab (k). The mathematical expectation of action on all
possible paths can be calculated by means of
                                           M(Aab) = Σ_{k=1}^{w} pab(k)Aab(k).                                   (3)

   On the other hand, the path entropy H(a, b) in formula (2) is a concave function of the normalized
probabilities pab(k). Due to Jaynes's principle [4], to obtain the optimal distribution, H(a, b) is to be
maximized under the constraints imposed by our limited knowledge of the system and the corresponding
variables, i.e. with the normalization of pab(k) and the mathematical expectation of Aab:

                     δ[−H(a, b) + α Σ_{k=1}^{w} pab(k) + η Σ_{k=1}^{w} pab(k)Aab(k)] = 0.                       (4)

This results in the following probability distribution:

                                           pab(k) = (1/Q) exp[−ηAab(k)].                                        (5)




Putting this probability distribution (5) into H(a, b) of ratio (2), we get

                                   H(a, b) = ln Q + ηAab = ln Q − η ∂(ln Q)/∂η,                                 (6)

where Q is the partition function determined as Q = Σ_{k=1}^{w} exp[−ηAab(k)], the mean action Aab is
determined by the expression

                                              Aab = −∂(ln Q)/∂η,                                                (7)

and η is a Lagrange multiplier.
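The chain (3), (5)-(7) can be verified numerically. In the sketch below, the action values and the multiplier η are assumed toy inputs; the identity Aab = −∂(ln Q)/∂η is checked with a central finite difference, and the entropy computed from (6) is compared with the Shannon form (2).

```python
import numpy as np

# Assumed toy actions Aab(k) for w = 4 paths (not from the paper).
A = np.array([1.0, 1.5, 2.0, 3.0])
eta = 0.7                              # Lagrange multiplier

def ln_Q(eta):
    """ln of the partition function Q = sum_k exp(-eta * Aab(k))."""
    return np.log(np.sum(np.exp(-eta * A)))

p = np.exp(-eta * A) / np.sum(np.exp(-eta * A))   # distribution (5)
mean_action = np.sum(p * A)                       # expectation (3)

# Formula (7): Aab = -d(ln Q)/d(eta), checked by a central difference.
h = 1e-6
dlnQ_deta = (ln_Q(eta + h) - ln_Q(eta - h)) / (2 * h)

# Entropy via formula (6); it agrees with the Shannon form (2).
H = ln_Q(eta) + eta * mean_action
```

Here `mean_action` coincides with `-dlnQ_deta` up to the finite-difference error, and `H` equals −Σ pab(k) ln pab(k).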

4      Stability of trajectory probability distribution
Now let us show that the specified probability distribution is stable with respect to fluctuations of the action.
Suppose that each path is cut into two parts: part 1 (the segments on the side of cell a) and part 2 (the segments
on the side of cell b). All segments of part 1 are collected in group 1 and all segments of part 2 in group 2. Each
group has trajectory entropy H1 = H2 = H and mean action A1 = A2 = A. The total entropy is H(a, b) =
H1 + H2 = 2H and the total mean action is A(a, b) = A1 + A2 = 2A. Now consider a small variation in the
division of the trajectories, with virtual changes in the two groups such that δA1 = δA = −δA2. As a result, the
total entropy changes and can be written as
                                       H′(a, b) = H(A + δA) + H(A − δA).                                        (8)
Because distribution (5) and ratio (6) result from the procedure of entropy maximization, the stability condition
requires that entropy does not increase with virtual changes of these two groups:

                                           δH = H ′ (a, b) − H(a, b) ≤ 0,                                         (9)

i.e.
                                      H(A + δA) + H(A − δA) − 2H(A) ≤ 0,                                         (10)
which means
                                                  ∂²H/∂A² ≤ 0.                                                  (11)
   Let us consider whether this condition of entropy stability is always fulfilled. As follows from equation (6),
∂²H/∂A² = ∂η/∂A. Then, given the definition of mean action (3), we calculate

                                                  ∂A/∂η = −δ²,                                                  (12)

which implies

                                              ∂²H/∂A² = −1/δ² ≤ 0,                                              (13)

where the dispersion δ² = ⟨A²⟩ − ⟨A⟩² ≥ 0 characterizes the fluctuation of action A.
  This proves the stability of the maximum entropy distribution and ratio (5) relative to the action fluctuations
on different trajectories.
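The key step of this stability argument, formula (12), can be checked numerically. The action values below are assumed toy inputs; the sketch confirms that for distribution (5) the derivative ∂A/∂η equals minus the variance of the action, so ∂²H/∂A² = −1/δ² is indeed non-positive.

```python
import numpy as np

# Assumed toy actions Aab(k) (not from the paper).
A = np.array([0.5, 1.0, 2.0, 2.5])

def mean_action(eta):
    """Mean action under the maximum-entropy distribution (5)."""
    p = np.exp(-eta * A)
    p /= np.sum(p)
    return np.sum(p * A)

eta = 1.2
p = np.exp(-eta * A)
p /= np.sum(p)
variance = np.sum(p * A**2) - np.sum(p * A)**2   # delta^2 >= 0

# Central finite difference reproduces formula (12): dA/d(eta) = -delta^2.
h = 1e-6
dA_deta = (mean_action(eta + h) - mean_action(eta - h)) / (2 * h)
```

Since `variance` is non-negative for any distribution, ∂²H/∂A² = −1/δ² ≤ 0 holds automatically, which is exactly the stability condition (11).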

5      Principle of maximum entropy and principle of least action
Now let us examine the connection between maximum trajectory entropy and least action. It can be shown
that the paths of least action are the most likely when η = ∂H(a, b)/∂Aab > 0. Indeed, due to expression (5), a
positive η means that the trajectories of least action are statistically more likely than the trajectories of greatest
action. Thus, the most likely trajectories should minimize the action.
   This property of the probability distribution of expression (5) can be analyzed mathematically in the same
way as the stability of the probability distribution proved in section 4. The two groups 1 and 2 considered before
for the segments of the path have H1 = H2 = H and A1 = A2 = A. The total entropy is H(a, b) = 2H, and the
total mean action is A(a, b) = 2A. Now suppose that the two groups are deformed so that δH1 = δH = −δH2.
The total mean action after the group deformation can be written as [6]

                     A′(a, b) = A1(H1 + δH1) + A2(H2 + δH2) = A(H + δH) + A(H − δH).                            (14)




   If the probability distribution of expression (5) and ratio (6) correspond to the least action, the total mean
action after group deformation cannot decrease, δA = A′ (a, b) − A(a, b) ≥ 0, i.e.

                                     A(H + δH) + A(H − δH) − 2A(H) ≥ 0,                                        (15)

which means
                                                  ∂²A/∂H² ≥ 0.                                                  (16)
On the other hand, by means of ratio (6) we can prove that

                               ∂²A/∂H² = −(1/η²) ∂η/∂H = −(1/η³) ∂η/∂A.                                         (17)

Now, in terms of ∂η/∂A = −1/δ², we get

                                             ∂²A/∂H² = 1/(δ²η³).                                                (18)
   It is seen that, since δ² ≥ 0, inequality (16) combined with equation (18) can hold only if

                                                      η ≥ 0.                                                    (19)
   In other words, the positive value of η implies that the principle of entropy maximization is closely related
to the principle of least action: the most probable trajectories determined by the maximum entropy probability
distribution are simply the paths of least action.
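The conclusion of this section can be illustrated directly: for any positive η, distribution (5) decreases monotonically in the action, so the least-action path receives the largest probability. The action values below are assumed toy inputs.

```python
import numpy as np

# Assumed toy actions Aab(k); path k = 1 has the least action.
A = np.array([2.0, 1.0, 3.5, 1.5])
eta = 0.9                            # any positive Lagrange multiplier
p = np.exp(-eta * A)
p /= np.sum(p)                       # maximum-entropy distribution (5)

most_probable = int(np.argmax(p))    # index of the most probable path
least_action = int(np.argmin(A))     # index of the least-action path
# The two indices coincide for every eta > 0.
```

For η < 0 the ordering would reverse, which is why the sign result (19) is what ties the maximum entropy principle to the principle of least action.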

6   Closing remarks
This work can contribute to studying the behaviour of chaotic systems. The more chaotic the system under
consideration is, the more possible paths with different actions exist, and the greater the entropy is. Thus, it is
assumed that the path entropy H(a, b) can be used as a measure of chaos, like the Kolmogorov–Sinai entropy [7].
This is an encouraging result for methods of calculating entropy and action integrals in discontinuous and
non-differentiable spaces (e.g. strange attractors). The result of this work can be used to derive a method of
maximum entropy change for dynamic systems moving in fractal phase space [8].
   To sum up, it can be stated that the entropy of trajectories is determined for many possible paths of chaotic
systems moving between two cells in phase space. It is shown that different paths are physically identified by
their actions, and the maximization of path entropy leads to the distribution of trajectory selection probability as
a function of the action. In this case, we show that the most probable paths obtained from the maximum entropy
probability distribution minimize the action. This indicates that the principle of least action in a probabilistic
situation is equivalent to the principle of entropy or uncertainty maximization, associated with the probability
distribution. This result can be considered as an argument to support this method of analysis for non-equilibrium
systems.

References
 [1] Wang Q.A. Maximum path information and the principle of least action for chaotic system. Chaos, Solitons
     & Fractals, 23(4):1253-1258, 2005.
 [2] Beck C., Friedrich S. Thermodynamics of Chaotic Systems: An Introduction. Cambridge, UK: Cambridge
     University Press, 2013.
 [3] Vasiliev V.A., Romanovsky Yu.M., Yakhno V.G. Autowave Processes. Moscow: Nauka, 1987.
 [4] Fedorov V.K. et al. Synchronization of chaotic self-oscillations in the state space of electric, electrical,
     and electronic systems as a factor of self-organization. Omsk Scientific Bulletin, 3(113):196-205, 2012.
 [5] Romanovsky Yu.M., Stepanova N.V., Chernavsky D.S. Mathematical Modeling in Biophysics. Moscow:
     Nauka, 1975.
 [6] Fedorov V.K. The concept of entropy in the theoretical analysis of spatio-temporal self-organization of
     distributed active media and stable dissipative structure systems. Omsk Scientific Bulletin, 1(127):161-166,
     2014.




 [7] Sbitnev V.I. Stochasticity in a system of coupled vibrators. In: Nonlinear Waves, Stochasticity and
     Turbulence, pages 46-56. Gorky: IAP Academy of Sciences of the USSR, 1980.
 [8] Khaitun S.D. The interpretation of entropy as a measure of disorder and its negative impact on the modern
     scientific picture of the world. Problems of Philosophy, 2:62-74, 2017.



