<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Maximum entropy and principle of least action for electrotechnical systems in deterministic chaos mode</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vladimir Fedorov</string-name>
          <email>dm.90@bk.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Fedorov</string-name>
          <email>omsk2010@bk.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergey Fedorov</string-name>
          <email>temper99@mail.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Omsk State Technical University</institution>
          ,
          <addr-line>11 Mira avenue, 644050, Omsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Entropy and its maximization are determined by the different possible trajectories of a chaotic system moving in its phase space between two cells. Within the framework of this article, the chaotic system is understood as an electrotechnical system with deterministic chaos modes. The paths of the chaotic system in phase space are supposed to be differentiated by their actions, via the so-called principle of least action. It is shown that the maximization of entropy leads to a trajectory selection probability distribution as a function of the action, from which one can easily obtain the probability of the electrotechnical system's transition from one state to another. Of interest is the fact that the most probable trajectories are the paths of least action. This suggests that the principle of least action in a probabilistic situation is equivalent to the principle of maximum entropy, or uncertainty, associated with a particular probability distribution.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>The objective of this paper is to investigate the probability distributions attributed to different trajectories of a chaotic system moving between two points in phase space. The phase space of a system is usually defined so that a point in it represents a state of the system. If the system consists of N bodies moving in ordinary three-dimensional configuration space, the phase space has dimension 6N (3N coordinates and 3N momenta) [1].</p>
<p>Let us now consider a non-equilibrium electrotechnical system moving in phase space between two points a and b, located in two elementary cells of a partition of this phase space. If the motion of the electrotechnical system is regular, or if the n-dimensional state space has positive or zero Riemannian curvature, there is only one possible trajectory between the two points or, at most, a single narrow bundle of trajectories that track each other between the initial and final cells. These trajectories must minimize the action, by the principle of least action [2], and have some probability of occurrence. Any other set of trajectories must have zero probability.</p>
      <p>Copyright © by the paper's authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
<p>For an electrotechnical system in chaotic motion, or when the Riemannian curvature of the phase space is negative, the situation is different. Two points indistinguishable in the starting cell can separate exponentially. Usually these two points never meet again in the final cell of the phase space after they leave the starting cell, although they can pass through the same cell at two different moments of time. Therefore, between two given cells there can be multiple possible trajectories k (k = 1, 2, …, w), with varying travel time tab(k) of the electrotechnical system and varying probability pab(k) of its choosing path k. This is called a trajectory probability distribution.</p>
<p>
        In this paper, the trajectory probability distribution arising from dynamic instability is studied in terms of entropy theory and the principle of least action. First of all, we suppose that the different trajectories of a non-equilibrium electrotechnical system moving between phase cells a and b are unambiguously differentiated by their action, defined [
        <xref ref-type="bibr" rid="ref1">3</xref>
        ] as follows.
      </p>
<p>Aab(k) = ∫ Lk(t) dt, (1)
where the integral is taken over the travel time tab(k), and Lk(t) is the Lagrangian of the system at time moment t on path k, determined as Lk(t) = Uk(t) − Vk(t), where Uk(t) is the total kinetic energy and Vk(t) is the total potential energy of the electrotechnical system.</p>
<p>Integral Aab(k) is determined by path k and by the time tab(k) of the system travelling along path k. If the paths k can be identified only by the value of their actions, then their probability distributions can be studied by means of the Jaynes entropic concept and the maximum entropy method [4], taking into account the value of action Aab(k). This approach leads us to a probabilistic interpretation of Maupertuis's mechanical principle and to a probability distribution depending on the action.</p>
<p>These are the mathematical expectation of the action, M(Aab) = ∑_{k=1}^{w} pab(k)Aab(k), and an exponential trajectory distribution of the form pab(k) ∝ exp[−ηAab(k)], both derived below.</p>
    </sec>
    <sec id="sec-2">
      <title>Trajectory entropy</title>
<p>
        The entropy referred to here measures our lack of knowledge of the system in question: the more we know about the system, the less the entropy. According to Shannon [
        <xref ref-type="bibr" rid="ref1">3</xref>
        ], this entropy can be measured by the formula S = −∑i pi ln pi,
where pi is the probability attributed to situation i. As usual, we normalize ∑i pi = 1, with summation over all possible situations.
      </p>
      <p>Now for an ensemble of w possible paths, Shannon’s entropy can be defined as follows:</p>
<p>H(a, b) = −∑_{k=1}^{w} pab(k) ln pab(k). (2)
Function H(a, b) is the entropy of the path and must be interpreted as the missing information needed to predict which path from a to b the system chooses from the ensemble. According to our initial assumption, the value that differentiates the paths and their occurrence probabilities is the Lagrangian action.</p>
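<p>The path entropy defined above can be illustrated numerically. The following Python sketch (an illustration, not from the original text) computes H for two hypothetical trajectory distributions:</p>

```python
import math

def path_entropy(p):
    # Path entropy H(a, b) = -sum over k of pab(k) ln pab(k)
    assert math.isclose(sum(p), 1.0)  # the distribution must be normalized
    return -sum(pk * math.log(pk) for pk in p if pk > 0.0)

# A system with w = 4 possible paths between cells a and b
uniform = [0.25, 0.25, 0.25, 0.25]   # total ignorance of the chosen path
peaked = [0.7, 0.1, 0.1, 0.1]        # one path strongly preferred
print(path_entropy(uniform))  # ln 4, the maximum for w = 4
print(path_entropy(peaked))   # smaller: more knowledge, less entropy
```

The maximum H = ln w is attained on the uniform distribution, consistent with the interpretation of entropy as missing information.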
    </sec>
    <sec id="sec-3">
      <title>Probability distribution of maximum entropy</title>
<p>An ensemble containing a large number of systems moving from a to b is considered. These systems are distributed among the w paths according to pab(k) in view of action Aab(k). The mathematical expectation of the action over all possible paths can be calculated by means of
M(Aab) = ∑_{k=1}^{w} pab(k)Aab(k). (3)</p>
<p>On the other hand, the path entropy H(a, b) in formula (2) is a convex function of the normalized probabilities pab(k). Due to the Jaynes principle [4], to obtain the optimal distribution, H(a, b) is to be maximized under the constraints imposed by our limited knowledge of the system and the corresponding variables, i.e. under the normalization of pab(k) and the fixed mathematical expectation M(Aab):
δ[H(a, b) − α ∑_{k=1}^{w} pab(k) − η ∑_{k=1}^{w} pab(k)Aab(k)] = 0, (4)
where α and η are Lagrange multipliers.</p>
<p>This results in the following probability distribution:
pab(k) = (1/Q) exp[−ηAab(k)], (5)
where Q is determined as Q = ∑_{k=1}^{w} exp[−ηAab(k)].</p>
      <p>Putting this probability distribution (5) into H(a, b) of ratio (2), we get
H(a, b) = ln Q + ηM(Aab), (6)
where M(Aab) is determined by the expression M(Aab) = −∂ ln Q/∂η and η is a Lagrange multiplier.</p>
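<p>The chain of relations above can be checked numerically. The sketch below is illustrative only: the action values and the multiplier η are assumed. It builds the distribution and the partition function, then verifies that the path entropy equals ln Q + ηM(Aab):</p>

```python
import math

# Hypothetical actions Aab(k) of w = 5 paths and an assumed multiplier eta
A = [1.0, 1.5, 2.0, 3.0, 4.5]
eta = 1.2

Q = sum(math.exp(-eta * a) for a in A)        # partition function
p = [math.exp(-eta * a) / Q for a in A]       # maximum entropy distribution

H = -sum(pk * math.log(pk) for pk in p)       # path entropy
M = sum(pk * a for pk, a in zip(p, A))        # mean action
print(H, math.log(Q) + eta * M)               # the two values coincide
```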
    </sec>
    <sec id="sec-4">
      <title>Stability of trajectory probability distribution</title>
<p>Now let us show that the specified probability distribution is stable with respect to action fluctuations. Suppose that each path is cut into two parts: part 1 (the segments on the side of cell a) and part 2 (the segments on the side of cell b). All segments of part 1 are collected in group 1, and all segments of part 2 in group 2. Each group has trajectory entropy H1 = H2 = H and mean action A1 = A2 = A, so the total entropy is H(a, b) = H1 + H2 = 2H and the total mean action is A(a, b) = A1 + A2 = 2A. Now consider a small variation in the division of the trajectories, with virtual changes in the two groups such that A1 = A + δA and A2 = A − δA. As a result, the total entropy changes and can be written as
H′(a, b) = H(A + δA) + H(A − δA).</p>
      <p>Because distribution (5) and ratio (6) result from the procedure of entropy maximization, the stability condition requires that the entropy does not increase under these virtual changes of the two groups:
δH = H′(a, b) − H(a, b) ≤ 0.
Expanding H(A ± δA) to second order in δA, the first-order terms cancel, so the condition becomes
δH = (∂²H/∂A²)(δA)² ≤ 0, i.e. ∂²H/∂A² ≤ 0.</p>
      <p>Let us consider whether this condition of entropy stability is always fulfilled. As follows from equation (6), ∂H/∂A = η, hence ∂²H/∂A² = ∂η/∂A. Then, given the definition of mean action (3) and the relation M(Aab) = −∂ ln Q/∂η, we calculate ∂A/∂η = −σ², so that
∂²H/∂A² = ∂η/∂A = −1/σ² ≤ 0,
where the dispersion σ² = ⟨Aab²⟩ − ⟨Aab⟩² ≥ 0 characterizes the fluctuation of the action.</p>
      <p>This proves the stability of the maximum entropy distribution (5) with respect to action fluctuations on different trajectories.</p>
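<p>The key identity of this section, that the derivative of the mean action with respect to η equals minus the action dispersion, can be verified with a finite-difference sketch (hypothetical action values, not from the original paper):</p>

```python
import math

A = [1.0, 1.5, 2.0, 3.0, 4.5]   # hypothetical path actions Aab(k)

def distribution(eta):
    Q = sum(math.exp(-eta * a) for a in A)
    return [math.exp(-eta * a) / Q for a in A]

def mean_action(eta):
    return sum(pk * a for pk, a in zip(distribution(eta), A))

def dispersion(eta):
    m = mean_action(eta)
    return sum(pk * (a - m) ** 2 for pk, a in zip(distribution(eta), A))

# Central finite difference of the mean action with respect to eta
eta, h = 1.2, 1e-6
dM = (mean_action(eta + h) - mean_action(eta - h)) / (2 * h)
print(dM, -dispersion(eta))   # the two values agree: dM/deta = -sigma^2
```

Since the dispersion is never negative, the derivative is never positive, which is exactly the stability condition derived above.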
    </sec>
    <sec id="sec-5">
      <title>Principle of maximum entropy and principle of least action</title>
<p>Now let us observe the connection between maximum trajectory entropy and least action. It can be shown that the paths of least action are the most likely ones when η = ∂H(a, b)/∂Aab &gt; 0. In fact, due to expression (5), a positive η means that the trajectories of least action are statistically more likely than the trajectories of greater action. Thus, the most likely trajectories should minimize the action.</p>
      <p>This property of the probability distribution (5) can be analyzed mathematically in the same way as the stability of the distribution was proved in section 4. Groups 1 and 2 of path segments are considered as before, with H1 = H2 = H and A1 = A2 = A, so that the total entropy is H(a, b) = 2H and the total mean action is A(a, b) = 2A. Now suppose that the two groups are deformed so that H1 = H + δH and H2 = H − δH. The total mean action after the group deformation can be written as [6]
A′(a, b) = A(H + δH) + A(H − δH).</p>
      <p>If the probability distribution (5) and ratio (6) correspond to least action, the total mean action after the group deformation cannot decrease: δA = A′(a, b) − A(a, b) ≥ 0, i.e.
A(H + δH) + A(H − δH) − 2A(H) = (∂²A/∂H²)(δH)² ≥ 0.</p>
      <p>On the other hand, by means of ratio (6) we can prove that ∂A/∂H = 1/η and ∂η/∂A = −1/σ², so that
∂²A/∂H² = −(1/η²)(∂η/∂H) = −(1/η²)(∂η/∂A)(∂A/∂H) = 1/(η³σ²).
It is seen that if this quantity is to be non-negative, we get η ≥ 0.</p>
      <p>In other words, a positive value of η implies that the principle of entropy maximization is closely related to the principle of least action: the most probable trajectories determined by the maximum entropy probability distribution are simply the paths of least action.</p>
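<p>The statement that for positive η the most probable path is the path of least action can be illustrated directly (hypothetical action values, as in the earlier sketches):</p>

```python
import math

A = [1.0, 1.5, 2.0, 3.0, 4.5]   # hypothetical actions of w = 5 paths
eta = 1.2                        # assumed positive Lagrange multiplier

Q = sum(math.exp(-eta * a) for a in A)
p = [math.exp(-eta * a) / Q for a in A]

# With eta positive, probabilities decrease as the action grows,
# so the least-action path carries the largest probability
print(p.index(max(p)), A.index(min(A)))   # both indices are 0
```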
    </sec>
    <sec id="sec-6">
      <title>Closing remarks</title>
<p>
        This work can contribute to studying the behaviour of chaotic systems. The more chaotic the system under consideration is, the more possible paths with different actions exist, and the greater the entropy is. Thus, it is assumed that the path entropy H(a, b) can be used as a measure of chaos, like the Kolmogorov–Sinai entropy [
        <xref ref-type="bibr" rid="ref2">7</xref>
        ].
This is a very encouraging result for the calculation of entropy and action integrals in discontinuous and non-differentiable spaces (e.g. strange attractors). The result of this work can be used to derive the method of maximum entropy change for dynamic systems moving in fractal phase space [
        <xref ref-type="bibr" rid="ref3">8</xref>
        ].
      </p>
      <p>To sum up, it can be stated that the entropy of trajectories is determined for many possible paths of chaotic
systems moving between two cells in phase space. It is shown that different paths are physically identified by
their actions, and the maximization of path entropy leads to the distribution of trajectory selection probability as
a function of the action. In this case, we show that the most probable paths obtained from the maximum entropy
probability distribution minimize the action. This indicates that the principle of least action in a probabilistic
situation is equivalent to the principle of entropy or uncertainty maximization, associated with the probability
distribution. This result can be considered as an argument to support this method of analysis for non-equilibrium
systems.
</p>
      <p>[1] Wang Q.A. Maximum path information and the principle of least action for chaotic system. Chaos, Solitons &amp; Fractals, 23(4):1253-1258, 2005.</p>
      <p>[2] Beck C., Friedrich S. Thermodynamics of Chaotic Systems: An Introduction. Cambridge, UK: Cambridge University Press, 2013.</p>
      <p>[4] Fedorov V.K. et al. Synchronization of chaotic self-oscillations in the state space of electric, electrical and electronic systems as a factor of self-organization. Omsk Scientific Bulletin, 3(113):196-205, 2012.</p>
      <p>[5] Romanovsky Yu.M., Stepanova N.V., Chernavsky D.S. Mathematical Modeling in Biophysics. M.: Nauka, 1975.</p>
      <p>[6] Fedorov V.K. The concept of entropy in the theoretical analysis of spatio-temporal self-organization of distributed active media and stable dissipative structure systems. Omsk Scientific Bulletin, 1(127):161-166, 2014.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
<mixed-citation>
          [3]
          <string-name>
            <surname>Vasiliev</surname>
            ,
            <given-names>V. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Romanovsky</surname>
            ,
            <given-names>Yu. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yakhno</surname>
            ,
            <given-names>V. G.</given-names>
          </string-name>
          . Autowave Processes. M.: Nauka,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Sbitnev</surname>
            ,
            <given-names>V.I.</given-names>
          </string-name>
<article-title>Stochasticity in a system of coupled vibrators</article-title>
          . In: Nonlinear waves, stochasticity and turbulence. -
          <source>Gorky: IAP Academy of Sciences of the USSR</source>
          ,
          <year>1980</year>
          . - P.
          <fpage>46</fpage>
          -
          <lpage>56</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Khaitun</surname>
            <given-names>S. D.</given-names>
          </string-name>
          <article-title>The interpretation of entropy as a measure of disorder and its negative impact on the modern scientific picture of the world</article-title>
          .
          <source>Problems of Philosophy</source>
          ,
          <volume>2</volume>
          :
          <fpage>62</fpage>
          -
          <lpage>74</lpage>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>