<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Improvement of the mathematical models reduction principle by group elements deletion</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yaroslav Matviychuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliya Melnykova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Systems of Artificial Intelligence, Lviv Polytechnic National University</institution>
          ,
          <addr-line>S. Bandera, str. 12, Lviv, 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <fpage>2</fpage>
      <lpage>7</lpage>
      <abstract>
        <p>The classic principle of reduction suggests removing unnecessary elements of a model one at a time [2]. The reduction process can be significantly accelerated by group removal of elements. This article substantiates such removal and demonstrates its effectiveness on the example of identifying the Lorenz ODE.</p>
      </abstract>
      <kwd-group>
        <kwd>Mathematical model</kwd>
        <kwd>reduction</kwd>
        <kwd>identification</kwd>
        <kwd>incorrectness</kwd>
        <kwd>ODE</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. The principle of reduction</title>
      <p>
        But this property cannot detect unnecessary parameters, because some of the needed
parameters may be close to zero. Next, we apply slight random variations, referred to as
disturbances, in a manner that preserves the continuous dependence of the solution on the
parameters. The identification algorithm then determines a parameter vector p′ for the
disturbed system, which differs from the parameter vector p of the undisturbed system. For
each parameter, we calculate the absolute value of the relative deviation (RD):
δi = abs((p′i − pi) / p′i); i = 1, …, m. (2)
For the relevant parameters, the differences p′i − pi approach zero as the magnitude of the
perturbations diminishes, owing to the continuous dependence of the parameters on the
disturbances. The relative deviations (2) behave in the same way.
      </p>
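      <p>As an illustration (not part of the original paper), the relative deviations (2) can be computed directly. The parameter values below are hypothetical, and NumPy is assumed.</p>
      <preformat>
```python
import numpy as np

def relative_deviations(p, p_disturbed):
    """Absolute relative deviations delta_i = abs((p'_i - p_i) / p'_i), eq. (2)."""
    p = np.asarray(p, dtype=float)
    p_disturbed = np.asarray(p_disturbed, dtype=float)
    return np.abs((p_disturbed - p) / p_disturbed)

# Hypothetical example: the first two parameters are necessary (they barely
# move under the disturbance), the third is spurious and its RD jumps to ~1.
p       = np.array([2.000, -0.5000, 1e-9])
p_prime = np.array([2.001, -0.5002, 1e-4])
delta = relative_deviations(p, p_prime)
```
      </preformat>
      <p>For the necessary parameters the computed δi stay close to zero, while the spurious one yields a δ close to one, in line with criteria (3) and (4).</p>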
    </sec>
    <sec id="sec-2">
      <title>2. Test recovery of the Lorenz attractor</title>
      <p>
        The classical equations of the Lorenz attractor (5) are convenient for testing the principle of
reduction, since they allow an analytical transformation into an equivalent form (6) that suits
our identification procedure.
      </p>
      <p>For the necessary parameters, δi → 0; i = 1, …, n; as disturb → 0. (3)</p>
      <p>
        By contrast, for the unnecessary parameters the values of the RD (2) are close to one, due to (1):
δi → 1; i = n + 1, …, m; if disturb ≠ 0. (4)
Criteria (3) and (4) were derived specifically for the precision-focused model but can be
generalized to broader classes of mathematical models.
Typically, redundant parameters exhibit significantly larger relative deviations (2) than
essential parameters. Gradual removal of these superfluous components enhances both the
robustness and the accuracy of the identification process. This observation is supported by
multiple case studies [2], including those presented in this work.
By applying the principle of model reduction, it also becomes feasible to expand the model
structure systematically, assessing each new component for relevance and discarding
non-essential ones.
      </p>
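      <p>The split implied by criteria (3) and (4) can be sketched as a simple threshold rule. The threshold at the middle of the RD range and the example values are assumptions for illustration only:</p>
      <preformat>
```python
import numpy as np

def split_by_rd(delta):
    """Split parameter indices by their relative deviations (eq. 2):
    indices whose RD lies below the middle of the RD range are treated
    as necessary (criterion (3)), the rest as candidates for deletion
    (criterion (4)).  A hypothetical rule, not the paper's exact one."""
    delta = np.asarray(delta, dtype=float)
    middle = 0.5 * (delta.max() + delta.min())
    necessary = np.flatnonzero(middle > delta)
    spurious = np.flatnonzero(delta >= middle)
    return necessary, spurious

# Invented RD values: small for needed parameters, near 1 otherwise.
delta = np.array([0.002, 0.001, 0.98, 1.03, 0.0005])
needed, unneeded = split_by_rd(delta)
```
      </preformat>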
      <p>Thus, the task of recovering the exact model (6) can be formulated as follows: given a discrete
signal y1 = x1, compute its first, second, and third derivatives, y1′ = y2, y2′ = y3, y3′,
and solve the identification problem (7).</p>
      <p>
        The polynomial representation in problem (7) includes 50 coefficients. However, the exact
model (6) requires only 7 of them; the rest are redundant and do not contribute to the
solution. The discrete signal y1 = x1 was obtained by numerically integrating equations (5)
with the Runge-Kutta method, using a time step of 0.02 s over the interval from 0 to 34 s.
From the resulting dataset, a fifth-degree interpolation spline was constructed, and its
first three derivatives were obtained analytically. These values formed the datasets y1m,
y2m, y3m, y′3m (m = 1, …, 1701), which were then used to solve the identification problem (7).
Subsequently, a step-by-step reduction of the coefficient arrays aijk, bijk was applied
according to the principle of reduction. Perturbations with a relative magnitude of 10⁻⁵
were added to the values y′3m. The RD δi (2) were calculated, and the element with the
largest δi was deleted. The stopping criterion for the reduction process was a compact
distribution of the remaining δi values: the number of deviations above and below their
average had to be roughly equal, indicating a balanced residual set.
      </p>
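      <p>The data-generation step described above can be sketched with SciPy. Two hedges: solve_ivp uses an adaptive RK45 scheme rather than the paper's fixed-step Runge-Kutta, and the initial conditions and Lorenz parameters are assumed, not taken from the paper.</p>
      <preformat>
```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import make_interp_spline

# Classical Lorenz system (assumed sigma=10, rho=28, beta=8/3), as in (5).
def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

# Integrate over [0, 34] s sampled with step 0.02 s (1701 points).
t = np.linspace(0.0, 34.0, 1701)
sol = solve_ivp(lorenz, (0.0, 34.0), [1.0, 1.0, 1.0],
                t_eval=t, rtol=1e-9, atol=1e-9)
y1 = sol.y[0]                  # the discrete signal y1 = x1

# Fifth-degree interpolation spline and its first three derivatives.
spline = make_interp_spline(t, y1, k=5)
y2 = spline.derivative(1)(t)   # y1' = y2
y3 = spline.derivative(2)(t)   # y2' = y3
y3p = spline.derivative(3)(t)  # y3'
```
      </preformat>
      <p>The arrays y1, y2, y3, y3p then play the roles of the datasets y1m, y2m, y3m, y′3m in the identification problem (7).</p>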
      <p>During the reduction, the size of the RD region, max(δi) − min(δi), and the mean relative error
of the model coefficients were calculated. After 43 reduction steps, the 7 coefficients of the exact
model (6) remain, with a mean relative error of 0.0016. Fig. 1 shows the change in the relative
error with an increasing number of reduction steps.
The dependence of the size of the region of relative deviations on the reduction step is shown in Fig. 2.</p>
      <p>Fig. 3 shows the same dependence on an enlarged scale. The formation of a compact
region is clearly visible.</p>
      <p>
        The model expansion approach (induction) was applied to the Lorenz system to evaluate its
effectiveness. Relative deviations (2) were computed for all 50 coefficients.
Beginning with the three coefficients exhibiting the lowest relative deviations, additional
coefficients were incrementally incorporated into the model, each time selecting the one with
the next-smallest RD from the remaining set.
The induction process was halted once a compact cluster of RD values was established, which
occurred after four iterations.
      </p>
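      <p>The induction procedure can be sketched as follows. The stopping test below (halt when the next-smallest RD jumps far above the current cluster mean) is a hypothetical stand-in for the compact-cluster criterion, and the RD values are invented:</p>
      <preformat>
```python
import numpy as np

def induct(delta, start=3, jump=10.0):
    """Model expansion by induction: begin with the `start` coefficients of
    smallest relative deviation (eq. 2) and keep adding the coefficient with
    the next-smallest RD; stop when the next candidate lies far outside the
    current cluster (hypothetical compactness proxy)."""
    delta = np.asarray(delta, dtype=float)
    order = np.argsort(delta)          # indices, smallest RD first
    k = start
    while len(delta) > k:
        nxt = delta[order[k]]
        if nxt > jump * delta[order[:k]].mean():
            break                      # next RD jumps outside the cluster
        k += 1
    return order[:k]

# Invented RDs: four necessary coefficients with small, similar deviations.
delta = [0.002, 0.9, 0.0021, 1.1, 0.0019, 0.0022, 0.95]
selected = induct(delta)
```
      </preformat>
      <p>Here the selection grows from three coefficients to four and then stops, since the fifth-smallest RD lies far above the cluster; only the small-RD coefficients are retained.</p>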
      <p>The benefits of the induction approach over reduction are evident. Firstly, there is no need to
recalculate relative deviations at every stage—initial computation suffices for the entire process.
Secondly, the total number of iterations may be reduced.</p>
      <p>Thus, using the Lorenz attractor as a test case, the fundamental principles underpinning the
reduction method were effectively validated.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Group deletion of model elements</title>
      <p>Removing model elements one at a time at each reduction step takes too long for complex
models with a large number of elements. The reduction process can be accelerated by group
removal of elements. One must only ensure that the deleted group contains no necessary
model elements.</p>
      <p>Fig. 2 shows that the size of the relative-deviation region (RDR) of the elements can change
significantly during the reduction process. In the test case of the Lorenz attractor the necessary
elements are known, and it is possible to study the change of their RD during reduction.
It turned out that the RD of the necessary elements is always less than the middle of the RDR,
whatever its size. The same conclusion was obtained for other test models [5].</p>
      <p>This means that a group of elements whose RD exceeds the middle of the RDR can be deleted at
any reduction step except the last ones, where this criterion no longer works. It is only
necessary to choose the minimum size of the deleted group so that the reduction process
completes successfully.</p>
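      <p>One group-deletion step can be sketched as below. The threshold at the middle of the RDR and the minimum group size follow the text; the fallback to classic single-element deletion when the group is too small is an assumption of this sketch:</p>
      <preformat>
```python
import numpy as np

def group_to_delete(delta, min_group=4):
    """One group-deletion step: mark every element whose RD lies above the
    middle of the relative-deviation region (RDR).  If the group is smaller
    than `min_group`, fall back (assumed here) to deleting only the single
    element with the largest RD, as in one-at-a-time reduction."""
    delta = np.asarray(delta, dtype=float)
    middle = 0.5 * (delta.max() + delta.min())
    group = np.flatnonzero(delta > middle)
    if min_group > len(group):
        group = np.array([int(np.argmax(delta))])
    return group

# Invented RDs: five clearly spurious elements above the middle of the RDR.
delta = [0.01, 0.95, 0.02, 1.1, 0.9, 0.015, 1.0, 0.03, 0.98]
doomed = group_to_delete(delta)
```
      </preformat>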
      <p>Fig. 4 shows the change in the error of the model coefficients when deleting elements with a
minimum group size of 4. The gaps correspond to three group deletions with group sizes of 13,
4, and 4 elements. The reduction process was accelerated by 49%. If the minimum size of the
deleted group is reduced to 3, the reduction process fails.</p>
      <p>In publication [6], group removal of elements based on the principle of reduction is
proposed without any justification.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusions</title>
      <p>The improvement of mathematical models using the principle of reduction is proposed. The
improvement consists in the group removal of unnecessary elements of models.</p>
      <p>Using the example of a test model, it is shown how to form a group of deleted elements without
the necessary model elements.</p>
      <p>Declaration on Generative AI
During the preparation of this work, the authors utilised ChatGPT and LanguageTool to identify
and rectify grammatical, typographical, and spelling errors. Following the use of these tools, the
authors conducted a thorough review and made necessary revisions, and accept full responsibility
for the final content of this publication.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] A. N. Tikhonov and V. Y. Arsenin, Solutions of Ill-Posed Problems. New York, USA: Wiley, 1977.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Y. LeCun, J. S. Denker, S. Solla, R. E. Howard, and L. D. Jackel, Optimal Brain Damage, in Advances in Neural Information Processing Systems 2 (NIPS 1989), Morgan Kaufmann, Denver, CO, 1990.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Y. Matviychuk, Dynamical Systems Mathematical Macromodelling: Theory and Practice. Lviv, Ukraine: Ivan Franko Lviv National University Publishing House, 2000 (in Ukrainian). http://ena.lp.edu.ua:8080/handle/ntb/22710</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Y. Matviychuk, N. Shahovska, N. Kryvinska, and A. Poniszewska-Maranda, New principles of finding and removing elements of a mathematical model for reducing computational and time complexity, International Journal of Grid and Utility Computing, vol. 14, iss. 4, pp. 400-410, 2023. DOI: 10.1504/IJGUC.2023.132625</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Y. Matviychuk and O. Karchevska, Increasing the Correctness of Mathematical Models by Novel Reduction Principle, in Proceedings of the 16th International Conference on Computational Problems of Electrical Engineering, Lviv, Ukraine, Sept. 2-5, 2015. http://ieeexplore.ieee.org/document/7333351/</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] O. Gurbych and M. Prymachenko, Method for reductive pruning of neural networks and its applications, Computer Systems and Information Technologies 3 (2022), pp. 40-48. DOI: 10.31891/csit-2022-3-5.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>