<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Method of Support Neural Networks for Modeling Nonlinear Dynamics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Fomin</string-name>
          <email>fomin@op.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergii Polozhaenko</string-name>
          <email>polozhaenko@op.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii Prokofiev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleksiy Tataryn</string-name>
          <email>otataryn@stud.op.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Polytechnic National University</institution>
          ,
          <addr-line>1, Shevchenko ave., Odesa, 65044</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The work is devoted to resolving the contradiction between the speed of modeling nonlinear dynamic objects and the accuracy of constructing models in the form of neural networks. The purpose of the work is to reduce the time required to build nonlinear dynamics continuous-time models in the form of neural networks while ensuring the specified modeling accuracy. This purpose is achieved by developing a modeling method based on the superposition of a set of pre-trained neural networks (support models) that reflect the main properties of the subject area. The scientific novelty of the developed method lies in using pre-trained neural networks with time delays as support models for modeling nonlinear dynamic objects. Unlike existing pre-training methods, the developed method allows building simpler models with reduced training time while ensuring the specified accuracy. The practical benefit of the work lies in the development of the support models method algorithm, which allows significantly reducing the training time of neural networks with time delays without losing model accuracy.</p>
      </abstract>
      <kwd-group kwd-group-type="author">
        <kwd>identification</kwd>
        <kwd>nonlinear dynamics</kwd>
        <kwd>pre-training</kwd>
        <kwd>neural networks training speed</kwd>
      </kwd-group>
      <conference>
        <conf-name>ICST-2025: Information Control Systems &amp; Technologies</conf-name>
        <conf-date>September 24-26, 2025</conf-date>
        <conf-loc>Odesa, Ukraine</conf-loc>
      </conference>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The current stage of modeling development, which is mainly based on using intelligent technologies,
is marked by a number of requirements from practice both for high accuracy of models and for the
speed of their construction [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        Achieving high accuracy of nonlinear dynamics continuous-time modeling today is carried out
through using machine learning methods, in particular, neural networks (NN) [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2-4</xref>
        ]. However,
applying such methods is often associated with high computational complexity, which leads to
significant time spent on building models [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3-5</xref>
        ].
      </p>
      <p>
        The problem of increasing the speed of modeling remains one of the most urgent, especially in
industries related to the personalization of models, which must adapt to changes in user behavior or
the environment (e.g., in authentication tasks, biomedical applications, human-machine systems),
while operating in real time [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ].
      </p>
      <p>
        It should be noted that various approaches are being actively researched as part of efforts to
accelerate NN learning. The most common of these are based on regularization and normalization
methods, which promote faster convergence and reduce the likelihood of overfitting [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Popular
research directions include activation functions aimed at reducing computational complexity [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] and model
construction time [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], as well as experiments on optimizing NN architecture (LSTM, RC networks, etc.),
which demonstrate a significant reduction in training time while maintaining high performance for
dynamic systems. These studies emphasize the ongoing search for methods to improve the efficiency
and speed of building intelligent models. One of the common approaches to accelerating the process
of building an NN is pre-training [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Pre-trained models can be quickly adapted to new tasks,
making them an effective tool in reducing simulation time.
      </p>
      <p>
        This direction seems promising in modeling objects with a high degree of internal complexity
and interaction, first of all, nonlinear dynamic objects [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. At the same time, there is a significant
lack of studies in the field of NN pre-training that simulate the nonlinear dynamics of continuous
objects.
      </p>
      <p>The purpose of the work is to reduce the time for building nonlinear dynamics
continuous-time models in the form of NN, while ensuring the specified modeling accuracy, by developing a
method based on pre-training of NN models.</p>
      <p>To reach this goal, the following tasks were formulated.</p>
      <p>1. Development of a modeling method based on pre-training by superposition of a set of
pre-trained NN (support models) reflecting the basic characteristics of the subject area.</p>
      <p>2. Building of support models in the form of NN reflecting the basic nonlinear and dynamic
characteristics of the subject area.</p>
      <p>3. Study of the speed of modeling complex nonlinear dynamics using the developed method of
support models.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement</title>
      <p>
        The approach to pre-training NN in practice faces a number of significant limitations [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ]. A
characteristic feature of the pre-training process is a significant time spent on building a general
model [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14-16</xref>
        ]. To achieve the purpose of the study, it is necessary to invent a way to accelerate the
construction of general models of nonlinear dynamics. The formal formulation of the problem of
accelerating the construction of general models of nonlinear dynamics based on pre-training of NN
is as follows.
      </p>
      <p>Let S be the domain for which there is enough labeled data of size NS (dataset DS):</p>
      <p>DS = {(xiS, yiS)}, i = 1, ..., NS, (1)</p>
      <p>where xiS is a vector of independent variables and yiS is the corresponding target variable (label).</p>
      <p>Let f S be the general NN model with parameters θS, which is trained on the DS dataset.</p>
      <p>Let T be the target task in the subject area S, for which there is labeled data of limited size NT (dataset DT):</p>
      <p>DT = {(xjT, yjT)}, j = 1, ..., NT, (2)</p>
      <p>where xjT is the vector of independent variables and yjT is the corresponding target variable (label).</p>
      <p>Let f T be the target NN model with parameters θT, trained on the DT dataset, which ensures the
accuracy ET within the training duration tT.</p>
      <p>The problem is to find such parameters θS of the general model that, when used as starting values θT0
during training of the target model f T, the specified accuracy level ET is achieved in the minimum time tT:</p>
      <p>θT0 = θS : arg min tT, LT(fθT(xjT), yjT) = EθT, (3)</p>
      <p>where LT is the loss function adopted for the target model.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Method of support models</title>
      <sec id="sec-3-1">
        <title>3.1. The method of support models</title>
        <p>To accelerate the construction of general models of nonlinear dynamics, a new approach is proposed
in the work. It consists in using a set of separate pre-trained NN (support models), each of which
reflects separate basic characteristics of the domain.</p>
        <p>To construct a set of support models, a set of support datasets DRk is used (k ≤ g, where g is the number
of basic characteristics of the domain). Each of the support datasets DRk describes a separate basic
characteristic of the domain. On the basis of these datasets, support models f Rk with the parameters θRk are built.</p>
        <p>By combining support models that reflect the characteristics of the target problem, a
general model f S is built. It has a set of specified properties (nonlinear and dynamic). The target
model f T(θS, DT) is built by pre-training the general model f S, obtained on the basis of the set of
support models f Rk, on the DT dataset.</p>
        <p>This approach preserves the advantages of pre-training: support models, once obtained,
can be repeatedly used for different domains and target tasks, significantly reducing the total
time and resources for training models without collecting additional data [17-19].</p>
        <p>The structural scheme of the training process based on support models is presented in Fig. 1.</p>
        <p>The algorithm of the suggested method consists of the following steps.</p>
        <p>Step 1. Selection of basic domain properties and formation of a set of datasets DRk reflecting the
selected properties.</p>
        <p>Step 2. Construction of support models set f Rk in the form of separate NN corresponding to the
established properties of the domain. Training of built models based on the generated datasets DRk.</p>
        <p>Step 3. Determination of the list of p properties of the target problem from the set of g basic
properties of the domain and construction of the general model f S based on the superposition of the
corresponding support models f Rh (h = 1, ..., p; p ≤ g) obtained in Step 2.</p>
        <p>Step 4. Training of the target model f T(θS, DT) based on the general model f S obtained in Step 3.</p>
        <p>Step 5. Determination of the accuracy indicators E T and training time t T of the target model f T.
In case of unsatisfactory quality indicators of the target model f T, control transfers to Step 2 to
correct the parameters θRk of the support models f Rk, and, if necessary, to Step 1 to correct the set of
basic properties of the domain and the set of datasets DRk reflecting the selected properties.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Selecting the basic properties of the domain for forming datasets</title>
        <p>The basic properties of the domain in the work are understood as the characteristics of objects that
reflect the essential features of their behavior. These properties can include real or abstract
parameters that are important for solving the modeling problem [20].</p>
        <p>The procedure for selecting the basic properties of the domain and generating datasets DRk
reflecting the selected properties is as follows.</p>
        <p>1. Defining the range of tasks to be solved in a given domain; analyzing properties that are
important for objects in a given domain, significantly affect the results of modeling and should be
taken into account when forming the dataset DS.</p>
        <p>2. Determination of signal types (for example, periodic, random, pulsed) that best reflect the
properties of the objects under study.</p>
        <p>3. Formation of the dataset DS based on the list of basic properties of the domain established in
paragraph 1, the set of input signals and reactions of the object formed in paragraph 2.</p>
        <p>4. Segmentation of the dataset DS into separate datasets DRk according to the defined list of basic
properties of the domain.</p>
        <p>In the above sequence of steps, the task of determining the type of signals that best reflect the
properties of the object under study remains the least formalized. Automating the selection of
support models is crucial for scaling the method to complex, multidimensional dynamical systems
and bridges the gap between human intuition and automated processing, allowing the application of
machine learning methods for objective identification and segmentation.</p>
        <p>To automate the selection of support models, improve the reproducibility and reduce the
subjective factor of the support model method, this paper proposes two approaches: input data
clustering [21] and feature contribution analysis [22].</p>
        <p>1. Clustering-based approach: to automatically segment the overall DS dataset into DRk subsets
that clearly demonstrate elementary properties of the object under study.</p>
        <p>2. Feature contribution analysis approach: to identify and quantify the underlying properties of
the subject area Pk.</p>
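        <p>As a concrete sketch of the clustering-based approach, the snippet below segments a mixed set of responses into two subsets using a tiny 1-D 2-means on a single summary feature; the feature choice and the two synthetic behaviours (saturation-like constants and first-order-lag responses) are illustrative assumptions, not the procedure from [21].</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 50)

# Two elementary behaviours mixed in one dataset D_S:
# constant (saturation-like) responses and first-order-lag responses.
sat_like = [np.clip(a * np.ones_like(t), 0.0, 0.5) for a in rng.uniform(0.2, 1.0, 20)]
lag_like = [a * (1.0 - np.exp(-5.0 * t)) for a in rng.uniform(0.8, 1.0, 20)]
signals = sat_like + lag_like

def two_means_1d(x, iters=20):
    """Minimal 1-D 2-means (Lloyd's algorithm) for splitting D_S into D_R1, D_R2."""
    c = np.array([x.min(), x.max()], dtype=float)
    for _ in range(iters):
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        for j in (0, 1):
            if np.any(labels == j):
                c[j] = x[labels == j].mean()
    return labels

# Cluster on a simple per-signal feature: the standard deviation of the response
labels = two_means_1d(np.array([s.std() for s in signals]))
```
        </preformat>
        <p>Constant responses have zero standard deviation while lag responses do not, so the two elementary behaviours fall into separate clusters.</p>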
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Determination of the structure of support models and their pre-training</title>
        <p>For modeling nonlinear dynamics, the work uses time-delay NN (TDNN) [23]. Due to their
simplicity and versatility, TDNNs are most widely used in problems of modeling nonlinear dynamic
objects. In practice, a three-layer TDNN structure consisting of an input, a hidden, and an output layer
is most often used [24]. The sizes of the layers in this TDNN structure are determined as follows:</p>
        <list list-type="bullet">
          <list-item><p>the input layer consists of M neurons and is responsible for the memory (dynamic
characteristics) of the model;</p></list-item>
          <list-item><p>the hidden layer consists of K neurons and is responsible for the nonlinear characteristics of
the model;</p></list-item>
          <list-item><p>the output layer contains Y neurons, equal to the number of outputs of the model.</p></list-item>
        </list>
        <p>For each support model, a labeled dataset DSi = {(xj(t), yj(t))} is formed based on the signals
x(t) at the input of the object and the responses y(t) = fVi[x(t)] at its output. Typical signals are often
used as input signals: impulse x(t) = aδ(t), step x(t) = a·1(t), linear x(t) = at, and harmonic x(t) = a sin(t)
signals of various amplitudes a ∈ (0, 1]. Time delays at the input are implemented through shifts in
time series and the inclusion of previous values in the input vector.</p>
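        <p>The typical input signals listed above can be generated on a uniform time grid as follows; the one-sample discretization of the impulse and the specific grid are assumptions of this sketch.</p>
        <preformat>
```python
import numpy as np

t = np.linspace(0.0, 10.0, 101)   # uniform time grid
a = 0.5                           # amplitude from (0, 1]

impulse = np.zeros_like(t)
impulse[0] = a                    # x(t) = a*delta(t), one-sample approximation
step = a * np.ones_like(t)        # x(t) = a*1(t)
linear = a * t                    # x(t) = a*t
harmonic = a * np.sin(t)          # x(t) = a*sin(t)
```
        </preformat>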
        <p>The main steps for implementing time delays and forming a training sample are as follows.</p>
        <p>Step 1. Selecting the number of delays. Determine the number of delays M that will be used.</p>
        <p>Step 2. Forming the input vector. For each time moment tk, the input vector is formed as a
sequence of the current and previous values.</p>
        <p>Step 3. Transferring delays to the network. Each formed input vector is passed to the
input layer of the neural network, where it is processed as a regular input.</p>
        <p>The algorithm for forming a training sample with time delays in pseudocode is shown below.
Algorithm 1: time_delay
1: Input: DSi, M, x(t), y(t)
2: Output: x, y
3: foreach DSi as x(t), y(t)
4: for k = 0, ..., T-M+1 do
5: xk ← [x(tk), x(tk−1), x(tk−2), ..., x(tk−M+1)]
6: yk ← y(tk)
7: end for
8: end foreach</p>
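        <p>Algorithm 1 translates directly into NumPy; the function below is a sketch in which each training input stacks the current sample and the previous M-1 samples, so T samples yield T-M+1 examples.</p>
        <preformat>
```python
import numpy as np

def time_delay(x, y, M):
    """Form a TDNN training sample with M time delays.

    Each row of X is [x(tk), x(tk-1), ..., x(tk-M+1)]; Y holds the
    matching targets y(tk). Returns T-M+1 examples for T samples."""
    x = np.asarray(x)
    T = len(x)
    X = np.array([x[k - M + 1:k + 1][::-1] for k in range(M - 1, T)])
    Y = np.asarray(y)[M - 1:]
    return X, Y
```
        </preformat>
        <p>For example, with T = 10 samples and M = 3 delays, the function yields 8 input vectors of length 3.</p>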
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Construction of a general model based on the superposition of the corresponding support models</title>
        <p>After completing the process of pre-training the set of support models f Rk, a general model f R is built
on their basis. This model is composed of a set of p support models f Rh that correspond to the
available basic properties of the object. The selection of support models that reflect the basic
properties of an object is generally subjective. To reduce subjectivity and increase the reproducibility
of the selection of support models, it is advisable to use methods of clustering input data or feature
contribution analysis.</p>
        <p>After completing the pre-training process for the family of support models that
correspond to the existing basic characteristics of the object, an approximate model f S(DS) is constructed
based on them. Assuming that the general f S and support f Rk models are constructed in the form of
NN with the same structure (dimension of the parameter vectors dim(θS) = dim(θRk)), the definition of
the general model is reduced to arithmetic operations on the corresponding components of the
parameter vectors θRk.</p>
        <p>At the same time, several approaches to the superposition of support models are considered:</p>
        <list list-type="bullet">
          <list-item><p>additive superposition is used when each support model is responsible for independent
aspects of the system (e.g., dynamics and environmental impact). The components of the parameter
vector of the general model are determined as the arithmetic mean of the corresponding components
of the parameter vectors of the support models:</p>
            <p>θiS = (1/h) Σ (k = 1, ..., h) θiRk, (4)</p>
            <p>where i are the indices of the corresponding elements of the parameter vectors of the
f S and f Rk models;</p></list-item>
          <list-item><p>multiplicative superposition is used in the case of interacting processes (e.g., where some
processes modify others). In this case, the corresponding components of the parameter vectors of the
support models are multiplied:</p>
            <p>θiS = Π (v = 1, ..., b) θiSv, (5)</p></list-item>
          <list-item><p>combined superposition methods use more complex combinations of support models, such as
weighted sums of the outputs of several models, the application of nonlinear functions to the outputs
of individual models, etc. Such methods include determining the parameter vector θS of the
approximate model as the maximum value among the corresponding components of the parameter
vectors of the support models:</p>
            <p>θiS = max(θiSv), v = 1, ..., b. (6)</p></list-item>
        </list>
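        <p>Because expressions (4)-(6) act component-wise on parameter vectors of equal dimension, each reduces to a single NumPy reduction; the numbers below are arbitrary illustrative parameter vectors.</p>
        <preformat>
```python
import numpy as np

# Parameter vectors of two support models, stacked as rows
thetas = np.array([
    [0.2, 1.0, -0.5],   # theta_R1
    [0.4, 0.5,  0.5],   # theta_R2
])

theta_additive = thetas.mean(axis=0)   # eq. (4): arithmetic mean
theta_product  = thetas.prod(axis=0)   # eq. (5): component-wise product
theta_max      = thetas.max(axis=0)    # eq. (6): component-wise maximum
```
        </preformat>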
        <p>Thus, the advantage of forming an approximate model by the support models method via
expressions (4)-(6) is the absence of a training procedure, which reduces computational complexity and
significantly speeds up the process of building the approximate model. At the same time, the dimension
of the approximate model f S (the dimension of the parameter vector θS) remains the same as in the
support models, i.e., the complexity of the approximate model does not increase.</p>
        <p>The algorithm for synthesizing a general model based on the superposition of support models is
given below.</p>
        <p>Algorithm 2: general_model_sp
1: Input: V, H, P, f(DSi), DSi, M, K
2: Output: f(DS)
3: foreach V as Vi : f(DSi)
4: DSi ← dataset_formation(Vi, H, P, DSi)
5: f(DSi) ← init(M, K)
6: f(DSi) ← train[f(DSi), time_delay(DSi)]
7: end foreach
8: for i = 1, ..., v do
9: f(DS) ← sp[f(DSi)]
10: end for</p>
        <p>Here, the init function initializes the model structure, the train function trains the model, the
time_delay function forms a dataset with time delays, and the sp function performs a superposition of
the support models according to one of expressions (4)-(6).</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiment setup</title>
      <p>The study of the support models method is carried out on the example of a nonlinear dynamics test
object. A simulation model of a test object in the form of a sequence of a nonlinear link with
saturation and a dynamic link of the first order [23, 24] is shown in Fig. 2.</p>
      <p>As typical characteristics from the set of properties of the domain of nonlinear dynamics, a
nonlinear characteristic in the form of saturation and a dynamic link of the first order were chosen
to describe the behavior of the test object. For the test object, a labeled DS dataset is generated on the
basis of signals x(t) at the input of the object and the responses y(t) at its output. The inputs are
impulse x(t) = aδ(t), step x(t) = a·1(t), linear x(t) = at, and harmonic x(t) = a sin(t) signals of different
amplitudes a ∈ (0, 1].</p>
      <p>Based on the dataset DS, two support datasets are formed:</p>
      <list list-type="bullet">
        <list-item><p>input step signals x(t) = a·1(t) and responses y(t) of the object with nonlinearity in the
form of saturation (dataset DR1);</p></list-item>
        <list-item><p>input step signals x(t) = a·1(t) and responses y(t) of the object in the form of a dynamic
link of the first order (dataset DR2).</p></list-item>
      </list>
      <p>The experiment consists in studying the training speed of the target model of the test object, built by
various methods:</p>
      <list list-type="bullet">
        <list-item><p>based on the general model f S(DS), pre-trained on the general DS dataset;</p></list-item>
        <list-item><p>based on individual support models f R1(DR1), f R2(DR2), pre-trained on datasets DR1 and DR2;</p></list-item>
        <list-item><p>based on the general model f R(f R1, f R2) in the form of a superposition of the support models
f R1(DR1) and f R2(DR2).</p></list-item>
      </list>
    </sec>
    <sec id="sec-5">
      <title>5. Simulation and results</title>
      <p>The structure of the target model f T is chosen to be identical to the models based on the pre-trained
general model f S(DS), the support models f R1(DR1), f R2(DR2), and the superposition of the support
models f R(f R1, f R2), and is a three-layer TDNN.</p>
      <p>According to the results of additional research, the number of neurons M=30 in the input and
K=30 in the hidden layers was adopted.</p>
      <p>The model is trained by error backpropagation with the network parameters updated by the
Levenberg-Marquardt method. Pre-training is limited to 50 epochs to prevent overfitting and preserve
the ability to adapt.</p>
      <p>Fig. 3 shows the dependence of the MSE loss function on the number of training epochs for target
models built on the basis of the general model f S(DS), the support models f R1(DR1), f R2(DR2), and the
superposition of support models f R(f R1, f R2).</p>
      <p>Results of an experiment to study the learning speed of a target model of a test object built on the
basis of a general model, on the basis of separate support models, and on the basis of a superposition
of support models, are presented in Table 1.</p>
      <p>The experiment shows the advantage of using support NN in modeling nonlinear dynamics,
namely, a significant reduction in the training time of the TDNN model (by 4.6 times) compared to
the pre-trained general model, with comparable accuracy of both models.</p>
      <p>Applying separate support models as general models can also reduce the training time of the
target model (by 1.8 times).</p>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>The paper successfully solves the problem of reducing the time of building nonlinear dynamics
continuous-time models in the form of neural networks while ensuring the specified accuracy of
modeling. To resolve the conflict between the accuracy of modeling nonlinear dynamic objects and
the speed of model construction, a modeling method was developed based on pre-training through
the superposition of support models that reflect the basic properties of the subject area.</p>
      <p>The effectiveness of the developed method for modeling nonlinear dynamics was proven when
solving the problem of modelling a test nonlinear dynamic object. The experiment demonstrates a
4.6-fold reduction in the time of building a target model using support models compared to the
traditional modeling method based on pre-training. The advantages of the proposed approach are
the ability to quickly adapt to changing operating conditions, high speed of building the target model
while ensuring the specified modeling accuracy. In addition, the developed method improves
the efficiency of model training when labeled data for the target task is scarce. The disadvantages of
the proposed approach, inherited from methods based on pre-training, are the dependence of the
modeling results on the quantity and quality of the target dataset.</p>
      <p>A practical limitation of the proposed method is the a priori need for
support models built on a sufficient amount of high-quality data. Insufficient data or poor data quality
can significantly reduce the accuracy of the support models and, as a result, significantly increase the
time needed to train an accurate model.</p>
      <p>Thus, the area of effective application of the proposed method is delineated: a lack of labeled data
for the target task in the presence of a general dataset of sufficient size, and no significant discrepancies
between the characteristics of the general and target datasets.</p>
      <p>To improve and expand the scope of application of the support models method, it is necessary to
take into account the real conditions of the external environment by expanding the experimental
part at different levels of noise distortion and under conditions of time drift of the observed
parameters. In order to fully assess the potential of the proposed method and determine its place
among advanced solutions in further research, it is planned to expand the range of test objects,
including systems with different dynamics, multidimensional systems, objects with delays, etc.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this paper, the authors used Google Gemini to check the coherence of the
text of the article and to identify technical inaccuracies. The authors are solely responsible for the
content of the publication.</p>
      <p>[16] J. Siebert, L. Joeckel, J. Heidrich, et al., Construction of a quality model for machine learning
systems, Software Quality Journal 30 (2022) 307-335. doi: 10.1007/s11219-021-09557-y.</p>
      <p>[17] P. Karampiperis, N. Manouselis, T. B. Trafalis, Architecture selection for neural networks, in:
Proceedings of the 2002 International Joint Conference on Neural Networks (IJCNN'02), Honolulu, HI,
USA, vol. 2, 2002, pp. 1115-1119. doi: 10.1109/IJCNN.2002.1007650.</p>
      <p>[18] K. Pal, B. V. Patel, Data classification with K-fold cross validation and holdout accuracy
estimation methods with 5 different machine learning techniques, in: Fourth International Conference
on Computing Methodologies and Communication (ICCMC), Erode, India, 2020, pp. 83-87.
doi: 10.1109/ICCMC48092.2020.ICCMC-00016.</p>
      <p>[19] K. Nakamichi, et al., Requirements-driven method to determine quality characteristics and
…, in: IEEE 28th International Requirements Engineering Conference (RE), Zurich, Switzerland, 2020,
pp. 260-270. doi: 10.1109/RE48521.2020.00036.</p>
      <p>[20] T. Warren Liao, Clustering of time series data: a survey, Pattern Recognition 38(11) (2005)
1857-1874. doi: 10.1016/j.patcog.2005.01.025.</p>
      <p>[21] G. Leylaz, S. Wang, J.-Q. Sun, Identification of nonlinear dynamical systems with time delay,
Int. J. Dynam. Control 10 (2022) 13-24. doi: 10.1007/s40435-021-00783-7.</p>
      <p>[22] L. Wenyuan, L. Zhu, F. Feng, et al., A time delay neural network based technique for nonlinear
microwave device modelling, Micromachines 11(9) (2020) 831. doi: 10.3390/mi11090831.</p>
      <p>[23] O. Fomin, et al., Interpretation of dynamic models based on neural networks in the form of
integral-power series, in: O. Arsenyeva, T. Romanova, M. Sukhonos, Y. Tsegelnyk (Eds.), Smart
Technologies in Urban Engineering, Lecture Notes in Networks and Systems, vol. 536, Springer, Cham,
2022, pp. 258-265. doi: 10.1007/978-3-031-20141-7_24.</p>
      <p>[24] O. O. Fomin, A. A. Orlov, Modeling nonlinear dynamic objects using pre-trained time delay
neural networks, Applied Aspects of Information Technology 7(1) (2024) 24-33.
doi: 10.15276/aait.07.2024.2.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N. M. A.</given-names>
            <surname>Chisty</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. P.</given-names>
            <surname>Adusumalli</surname>
          </string-name>
          ,
          <article-title>Applications of artificial intelligence in quality assurance and assurance of productivity</article-title>
          ,
          <source>ABC Journal of Advanced Research. 11 1</source>
          (
          <year>2022</year>
          ):
          <fpage>23</fpage>
          <lpage>32</lpage>
          . doi: 10.18034/abcjar.v11i1.625.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Sen</surname>
          </string-name>
          ,
          <article-title>Machine learning algorithms, models and applications</article-title>
          ,
          <source>IntechOpen. London: United Kingdom</source>
          .
          <year>2021</year>
          . doi: 10.5772/intechopen.94615.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K. T.</given-names>
            <surname>Chitty-Venkata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Emani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vishwanath</surname>
          </string-name>
          and
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Somani</surname>
          </string-name>
          ,
          <article-title>Neural Architecture Search Benchmarks: Insights and Survey</article-title>
          , in: IEEE Access,
          <year>2023</year>
          , vol.
          <volume>11</volume>
          , pp.
          <fpage>25217</fpage>
          -
          <lpage>25236</lpage>
          , doi: 10.1109/ACCESS.2023.3253818.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>The Past Decade and Future of the Cross Application of Artificial Intelligence and Decision Support System</article-title>
          ,
          <source>in: IEEE 6th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT)</source>
          , Shenzhen, China,
          <year>2025</year>
          , pp.
          <fpage>1666</fpage>
          -
          <lpage>1669</lpage>
          , doi: 10.1109/AINIT65432.2025.11036058.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>I.</given-names>
            <surname>Adikari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lakmali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhananjaya</surname>
          </string-name>
          and
          <string-name>
            <given-names>D.</given-names>
            <surname>Herath</surname>
          </string-name>
          ,
          <article-title>Accuracy Comparison between Recurrent Neural Networks and Statistical Methods for Temperature Forecasting</article-title>
          ,
          <source>in: 2023 3rd International Conference on Advanced Research in Computing (ICARC)</source>
          , Belihuloya, Sri Lanka,
          <year>2023</year>
          , pp.
          <fpage>72</fpage>
          -
          <lpage>77</lpage>
          , doi: 10.1109/ICARC57651.2023.10145620.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Kariri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Louati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Louati</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Masmoudi</surname>
          </string-name>
          ,
          <article-title>Exploring the Advancements and Future Research Directions of Artificial Neural Networks: A Text Mining Approach</article-title>
          ,
          <source>Appl. Sci.</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>3186</fpage>
          . doi: 10.3390/app13053186.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N. K.</given-names>
            <surname>Sinha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Gupta</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. H.</given-names>
            <surname>Rao</surname>
          </string-name>
          ,
          <article-title>Dynamic neural networks: an overview</article-title>
          ,
          <source>in: Proceedings of IEEE International Conference on Industrial Technology 2000 (IEEE Cat. No.00TH8482)</source>
          , Goa, India,
          <year>2000</year>
          , pp.
          <fpage>491</fpage>
          -
          <lpage>496</lpage>
          vol.
          <volume>2</volume>
          , doi: 10.1109/ICIT.2000.854201.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Kutyniok</surname>
          </string-name>
          ,
          <article-title>The mathematics of artificial intelligence</article-title>
          ,
          <source>in: Proc. Int. Cong. Math.</source>
          <year>2022</year>
          ; Vol.
          <volume>7</volume>
          :
          <fpage>5118</fpage>
          -
          <lpage>5139</lpage>
          . doi: 10.4171/ICM2022/141.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>C.</given-names>
            <surname>Kästner</surname>
          </string-name>
          and
          <string-name>
            <given-names>E.</given-names>
            <surname>Kang</surname>
          </string-name>
          ,
          <article-title>Teaching software engineering for AI-enabled systems</article-title>
          ,
          <source>The 42nd International Conference on Software Engineering (ICSE)</source>
          .
          <source>Software Engineering Education and Training</source>
          .
          <year>2020</year>
          . URL: https://arxiv.org/abs/2001.06691.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>A.</given-names>
            <surname>Hosna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Merry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gyalmo</surname>
          </string-name>
          et al.,
          <article-title>Transfer learning: a friendly introduction</article-title>
          ,
          <source>J Big Data</source>
          .
          <year>2022</year>
          ;
          <volume>9</volume>
          , (
          <issue>1</issue>
          ): 102, doi: 10.1186/s40537-022-00652-w.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Gholizade</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Soltanizadeh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Rahmanimanesh</surname>
          </string-name>
          et al.,
          <article-title>A review of recent advances and strategies in transfer learning</article-title>
          ,
          <source>Int J Syst Assur Eng Manag</source>
          <volume>16</volume>
          ,
          <year>2025</year>
          , pp.
          <fpage>1123</fpage>
          -
          <lpage>1162</lpage>
          . doi: 10.1007/s13198-024-02684-2.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>F.</given-names>
            <surname>Zhuang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Duan</surname>
          </string-name>
          et al.,
          <article-title>A Comprehensive Survey on Transfer Learning</article-title>
          ,
          <source>in: Proceedings of the IEEE</source>
          ,
          <volume>99</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>34</lpage>
          . doi: 10.1109/JPROC.2020.3004555.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Swan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Giansiracusa</surname>
          </string-name>
          ,
          <article-title>Algebraic Dynamical Systems in Machine Learning</article-title>
          ,
          <source>Appl Categor Struct</source>
          <volume>32</volume>
          ,
          <issue>4</issue>
          (
          <year>2024</year>
          ). doi: 10.1007/s10485-023-09762-9.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Shu</surname>
          </string-name>
          ,
          <article-title>Transfer Learning and Deep Domain Adaptation</article-title>
          ,
          <source>Advances and Applications in Deep Learning</source>
          , IntechOpen, Dec. 09 (
          <year>2020</year>
          ). doi: 10.5772/intechopen.94072.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Ch.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <article-title>Learning to Pre-train Graph Neural Networks</article-title>
          .
          <source>In: Proceedings of the AAAI Conference on Artificial Intelligence</source>
          .
          <volume>35</volume>
          ,
          <issue>5</issue>
          ,
          <year>2021</year>
          , 10 p. doi: 10.1609/aaai.v35i5.16552.
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>