<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Eastern-European Journal of Enterprise Technologies</journal-title>
      </journal-title-group>
      <issn pub-type="ppub">1729-3774</issn>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.15587/1729</article-id>
      <title-group>
        <article-title>Development of a model selection tool based on numerical set analysis for rehabilitation using robotics</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Alexander Trunov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Petro Mohyla Black Sea National University</institution>
          ,
          <addr-line>68 Desantnikov, 10, Mykolaiv 54000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <volume>89</volume>
      <fpage>72</fpage>
      <lpage>79</lpage>
      <abstract>
        <p>Decision-making systems are considered in which the task of selecting a model arises as a tool for significant information compression. The problem of substantiating and creating quantitative artificial intelligence (AI) tools for automatic model selection preceding approximation is analyzed. A sequence of actions is formed for evaluating mutually related data sets, which are brought to co-measured conditions determined by the accuracy class of the devices, the measurement method, and the method of reducing models to a single form. The generalized model reduces the alternative models of the selection set to a single form. Analytically determined characteristic constants allow us to study and compare them between groups, and to classify the dynamics of assessing reliability and adequacy as integral indicators before the approximation begins. This tool for analyzing the suitability of a model for description, which is based only on the data of related numerical sets, is an AI tool. An analytical quantitative assessment of the interval of existence of the permissible values of the characteristic constants is presented. The relationship between the boundaries of these constants, within the permissible values of the relative error, for models suitable and unsuitable for description has been established on the basis of the properties of quadratic norms, which determine the maximum possible error. The resulting toolkit for quantitatively proving which model type best describes the experimental data was analyzed. The examples show that, over the set of definitions, the deviations of the characteristic constants are synchronous with the relative error and adequacy of the model. The high resolution of the adequacy indicator (10^-3) and its range, with a max/min ratio of more than 40-60, will be useful as an indicator in program algorithms.</p>
      </abstract>
      <kwd-group>
        <kwd>robotic rehabilitation</kwd>
        <kwd>approximation</kwd>
        <kwd>selection tool</kwd>
        <kwd>best model type</kwd>
        <kwd>characteristic constants</kwd>
        <kwd>permissible interval</kwd>
        <kwd>reliability</kwd>
        <kwd>adequacy</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The problem of creating effective decision support systems and automated control systems is
associated with the need to develop large databases, bases of models, and algorithms. The latter
significantly exacerbates the need for automatic model construction and expert assessment using a
single integral indicator [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The urgent need to process large amounts of data has also initiated the
search for new means of using AI, in particular in long-term medical monitoring [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The application and review of the naive Bayesian method and the cluster analysis methods
DBSCAN, PCA, and k-means identified the main advantages of using an ensemble of methods for
generalizing large amounts of data in the space of states [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Such works may not have ended the discussion about whether a scientifically
based analysis algorithm that is implemented automatically and synthesizes a conclusion
without operator participation belongs to the category of AI tools.
      </p>
      <p>
        However, they stimulated the search for indicators that, like the concepts of adequacy and distance
between clusters, define boundaries in a jump-like manner, and therefore are analytical tools for
recording such transitions [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Their further development is to increase the efficiency of real-time
data compression and transmission based on the automatic creation of suitable models [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Their
further development, to increase the efficiency of model selection based only on the data of
related numerical sets, becomes relevant in problems of representing large volumes of data.
Of particular importance is the problem of automatic approximation for the automatic
compression of the large amounts of data produced by monitoring systems or by long-term
remote recovery, which does not take place in the direct presence of a doctor
[
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Analysis of literature data and problem statement</title>
      <p>
        In works published in the last decade, machine learning methods have increasingly been used to
find key clusters [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Their results are one of the examples that demonstrate how to effectively
collect and study large volumes of medical data on individual patient characteristics [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. However,
from a general theoretical point of view, their application allows us to determine the distances
between cluster instances. The latter forms a quantitative basis for effective qualitative clustering
[
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In addition, they can also serve as an example of finding tools for isolating the boundary of
dividing sets into clusters [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        The results of the study of the applicability of AI methods to the identification of patient
medical conditions are presented in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Of particular importance is the study of the transition
period between conditions using Data Mining methods [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The experience of classifying patient
medical conditions based on the results of laboratory and other diagnostic studies demonstrates
the advantages of the naive Bayesian and cluster analysis methods, in particular DBSCAN, PCA, and
k-means [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The work is an example of the isolation of common features and operations as a set
that forms a common part of different methods. With a suitable generalization of its proposals, it
will in the future be clearly classified as a tool for AI analysis. The marginal difference, as a
jump-like variability of a feature, is presented as one of the possible and simpler features for
choosing the boundaries of a cluster [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. This work confirms transitions from one cluster to another by
jumps of known complex features. It demonstrates a new AI approach to the analysis and selection of
pre-formed simple features [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The list of features, conditions, operations and procedures for
automatically forming unambiguous conclusions is obviously a tool for the fields of application of
AI. Their positive experience becomes the basis for the further creation of special AI tools. In
addition, there is an increasing need for a well-founded preparation of a decision on the marginal
difference, which is proposed to be implemented on the basis of sequential measurement at four
points [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The idea of sequential calculation of the first and second derivatives at three points
forms the basis for registering the dynamics of changes in a medical indicator and derivatives of
the second and third orders in the presence of four points. However, to verify the conclusion about
the dynamics of changes, a comparative analysis is carried out at the fifth point. The
correspondence of the predicted value to the measured one is established in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], which serves as
the basis for data compression. It is especially relevant in the tasks of classification of states and
data compression in restorative medicine [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Such an intellectual analysis procedure expands the
functional capabilities of the system through the use of three-level comparative analysis [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The
need for such analysis and its applicability to data compression due to the structure, including a
recurrent analytical model, was demonstrated in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, its applicability to variable and
discontinuous processes is complex and requires correction.
      </p>
      <p>
        In this regard, an analysis of the works was conducted, most of which are devoted to the
search for general approaches to choosing the type of model and to uniform approximation
under basis-forming conditions [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Their authors focus on substantiating and studying the
regularities of approximation of one of the types of models [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Thus, to ensure uniform
approximation, the possibilities of a piecewise-continuous approach to approximation by splines
are investigated [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In recent works, a review and assessment of many years of experience has been
carried out using different concepts of measure and different methods of quantitative
measurement, i.e. metrics. Thus, in work [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], types of metrics and distances in the hierarchy of categorical
semantics and functions, which are key in mathematics, including approximation theory, are
considered. The work presents applications to practical approximations of functions and to the
theory of graphs in general and trees in particular [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The problem of effectively approximating
functions that are difficult to calculate under the limited resources of single-board computers
arises with the development of monitoring and recovery systems. For them, it is necessary to
choose simple and analytical expressions that will not be inferior in accuracy to spline
approximation [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. This work also considers and proves the uniqueness of the best approximation
by generalized polynomials. No less important are the proposals using approximation to simplify
operator and nonlinear differential equations [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Its author demonstrates the applicability of
approximation to the solution of differential and integral equations by using approximation for the
iterative-approximate solution of equations with analytical conditions [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. Another example from
a series of works that investigate the possibilities of approximating the combination of the sum of
a polynomial and an exponent is work [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The author proves that a sufficient condition for the
existence of a uniform approximation by the sum of a polynomial and an exponent is the
continuity of the function and the boundedness of its derivative at the beginning and end of the
definition interval [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. The order of the polynomial and the unknown constants in it and two
additional constants (the multiplier at the exponent and the constant in the exponent) provide the
best uniform approximation of the function. Such a model with an accurate reproduction of its
value at the extreme points of the segment is suitable for constructing continuous minimax spline
approximations [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. However, as the analysis of works [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10–12</xref>
        ] shows, the issue of choosing and
proving the best suitability of the model for approximation when considering several types of
models is not even raised in them.
      </p>
      <p>
        The application of approximation for the construction of models in dynamic programming
problems or for problems in which the models are given by nonlinear differential equations was
made in work [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. The fundamentality of the idea of piecewise linear approximation, regardless of
the time of its emergence, which was highlighted in it, gives its positive results for the formation
of models in the form of a convergent sequence. However, the application of these results to the
development of numerical methods has slowed down the development and implementation of
analytical models, including empirical ones.
      </p>
      <p>
        Examples of further development are piecewise quadratic and piecewise cubic recurrence
approximations, which have expanded the applicability of approximation to the structure of
nonlinear models in the form of analytical solutions of nonlinear algebraic equations and systems
and nonlinear differential equations and systems and recurrent networks [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. The obtained
recurrence models have analytical forms and allow fast calculation, which makes them especially
attractive for solving applied problems in complex nonlinear systems [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. However, despite these
results, the question of the evidential choice of the model type was not raised and therefore the
choice among the types of approximation was not made at the first stage. To analyze the quality of
models using multi-criteria assessments, for example, such as adequacy, it is necessary to carry out
an approximation and determine its constants. The latter, according to the results of using several
types of approximation for the analysis of alternatives, significantly increases the complexity of
solving model selection problems [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
      <p>
        A review of the works, the results of which are presented, allows us to state that
uncertainty, as a feature of the structure of model types, is increasingly dominant [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. One
example of the implementation of systems whose models are described by fuzzy sets is a system
for supporting and making decisions in automated process control in marine technologies. The
experience of such a system will serve as an alternative prototype for further developments in
which AI tools and means of simplified representation of complex models are used [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. However,
the lack of analyticity of such models hinders their further spread.
      </p>
      <p>The implementation of the concept of creating a modular cyber-physical system will be
accompanied by a rapid growth in the volume of data transmission and storage [16]. As the
authors argue, early diagnostics will become a priority in the operation of industrial equipment
based on Industry 4.0 standards and should be included in the algorithm of its operation as an
integral structural element. In particular, it is expected that Internet of Things methodologies will
also play their role in the formation of new requirements for compression and will play a
positive role in the process of information protection [16]. As a stable trend, the need for the use of
neural-fuzzy browsers [17] is observed. Their development will require the formation of new forms
of fuzzy model structures and the creation of computer libraries for operations with asymmetric
membership functions [17].</p>
      <p>Theoretically important are the results of work [18], which proved theorems on the
estimation of the error of polynomial approximations for segment-integrable functions.
The result of the work is analytical expressions for the estimates of deviations for approximations
of a function and its derivatives [18].</p>
      <p>Analysis of recent works on approximation shows that its application has been extended to
solutions of nonlinear differential equations by mixed finite-element approximation of solutions in
Hilbert space, in particular for the completely nonlinear Hamilton-Jacobi-Bellman equation [19].
The paper proposes to use indirect means of informing about the error values exceeding the limits
during the solution, which is a sign for grid correction [19]. However, the indicator used has
practical significance for grid selection, but does not assess the adequacy of the model creation
result. A priori and a posteriori bounds on the approximation error are proven. Contributions from
the a posteriori error estimator can be used as refinement indicators in the adaptive grid correction
algorithm. The convergence of this procedure is proven and empirically investigated in numerical
experiments [19].</p>
      <p>Assessment of adequacy through statistical analysis of the stability of achieving a specific
goal as a mandatory step at the stage of its development is presented in [20]. The process of
forecasting using a model and recognizing errors is presented as a set of tests in the course of
developing more reliable and accurate models. The paper argues that assessment of the adequacy
of models is possible only by combining several statistical tests and a proper study of the achieved
goals [20]. One of the new modern views on the development of the method and software and tool
is presented by the work that artificially generates biomedical image databases [21]. Working with
such a database also requires evaluation and compression, for which, as shown in [22], it is
provided by methods of comparative analysis and fuzzy sets together with adequacy assessment.
The latest CNN Stacking Model for medical image classification and medical visualization of
anatomical organs on medical images is presented in [23]. It is expected that the latest results [24]
will be implemented together with parallel algorithms and interpolation tools based on
Bezier curves and B-splines for the restoration of medical data, including fragmentarily lost data.
All these works taken together [21-24] emphasize the relevance and importance of using
approximation for the representation and compression of biomedical information, including for
healthcare communication technologies. A special role will obviously be played by methods and
tools of hybrid classification based on the idea of Two ML classifiers and prediction [25].</p>
      <p>Thus, the latest results in the above works investigate and substantiate the properties and
possibilities of forming individual types of models. However, as can be seen from the analysis, the
existing approximation methodology does not contain tools for evidential selection of the best
model based on the data sets formed during the experiment.</p>
      <p>In this regard, the main unsolved problem is the justification and creation of quantitative AI
tools that are used to select a model before the approximation stage begins.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Purpose and objectives of the study</title>
      <p>The purpose of the work is to substantiate and build quantitative AI tools for the analysis
of connected numerical sets, which will provide an evidential choice of the type of model for the
available experimental data before the approximation stages. This will make it possible to reduce
the complexity and time required to create a model by carrying out approximation and quality
assessment only for one model that best describes the numerical data of connected sets.</p>
      <p>To achieve this goal, the following tasks were formulated:
– to form a sequence of actions for estimating the interval of permissible values of a vector
function for determining integral indicators;</p>
      <p>– to reduce alternative models to linear forms with characteristic constants as functions of
the index - the number of the element in the array (hereinafter, the characteristic constant will be
called the constant whose deviation most affects the model error);</p>
      <p>– to form intervals of existence of permissible values, which are determined by the
methodology of direct and indirect measurement;</p>
      <p>– to form tools for quantitatively proving the best fit of the data description model before
starting the approximation.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Materials and methods of research</title>
      <p>
        The general task of the research is to increase the efficiency of automatic decision-making
in management tasks while ensuring the given reliability and adequacy. The objects of the research
are decision-making support systems. The subject of the research is the decision-making process in
model selection problems using the method of determining their adequacy. The hypotheses of the
research are: the existence of a limiting interval of permissible values of integral indicators; the
existence of invariant properties, which is established by the combined application of qualitative
provisions of the theory of similarity and quantitative analysis tools. First of all, models according
to the theory of similarity are reduced to co-measured conditions, using the coordinate
rectification method, and statistical methods determine the parameters of the samples, which
determine the permissible intervals using discrete quadratic norms. For an arbitrary identifier and
several neighboring points of the definition set, characteristic constants are analytically found, i.e.
those that affect the increment of the function when the arguments change. The application of
functional analysis methods, operator expansion, quadratic norms and Bunyakovsky-Cauchy
inequalities sets boundaries for characteristic constants and indicator estimates. The latter forms a
closed system of actions for automatically proving the best fit of the data description model before
the approximation begins, which makes it possible to increase the efficiency of decision-making.
For modeling, the Python 3.12.0 programming language was used to build statistical data
processing programs and determine the characteristic constants, together with the Microsoft
Excel 2007 environment and Mathcad 14 (USA). The task solved during the modeling was to
evaluate and compare example models based on the results of approximation. Also, for the
examples, the quality indicators of the model (relative error and adequacy) were investigated
using the developed sequence of actions, prior to the approximation as such, to study the
synchronicity of their changes. For technical support of the study, a desktop computer based
on the Intel(R) Core(TM) i7-4600U CPU was used. The study is based on analytical expressions for
calculating the general integral indicator of the reliability and adequacy of the model according to
certain criteria [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] for assessing the quality of the model. In addition, it is based on conclusions
about the need to develop reliable and unambiguous criteria for separating classes [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ],
assessments of the success of works that developed approximation approaches taking into account
the analysis of literary sources [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref7 ref8 ref9">7-13</xref>
        ]. The results of the search for automatic decision-making
algorithms, with their examples, stimulate the search for new AI tools [
        <xref ref-type="bibr" rid="ref15 ref6">6, 15-24</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Results of the formation of a sequence of actions that will provide an evidential choice of the type and assessment of the model for approximation</title>
      <p>5.1. Formation of a sequence of actions for estimating the interval of admissible
values of a vector function for determining integral indicators</p>
      <p>The process of forming an empirical model was considered. Thus, at the stage when the
experiment had been conducted and its statistical processing carried out, the task was set of
selecting and proving the best suitability of a model among the alternatives for the description.
Let us assume that for each value of the identifier I, m values of the components of the vector
are given as a known set in the form of an (m + 1)-dimensional array. Further, specifying N groups
of measurement results, we denote the vector and present it in the form of an array:
(1)
We also introduce and denote the l-component vector function, as a set of its values, also in the
form of an (l + 1)-dimensional array over the index-identifier I. Let us assume that the second
index-identifier j of these two arrays and their boundaries will differ depending on the number of
component vectors
and</p>
      <p>. Since, as a rule, measurements of physical quantities are made
directly and indirectly according to the laws Ф(x[I,j]) for the components of the vectors, the
measured value for a component of the vector function is given as an array:</p>
      <p>The results of statistical processing were then assumed to be known and uniquely specified.
The results of the experiment, as the mathematical expectations and the mean square errors of the
components of the vectors for each value of the identifier I, were represented by arrays. The
suitability of the normal law for describing the probability distribution density was also
established. According to the data of the technical passports of the devices and the method of
indirect measurement (2), the maximum possible errors were found for each pair of values of the
function and argument. The maximum possible error is estimated by the differentiation method,
using the data of the accuracy classes of the devices and the statistical estimates of direct
measurements, through the quadratic norm:
(2)
(3)
(4)</p>
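<p>The differentiation method described above can be sketched in Python as follows; the indirect measurement law Φ (here a hypothetical power measurement P = U·I) and the instrument errors are illustrative assumptions, not data from the article:</p>

```python
import numpy as np

def max_possible_error(phi, x, dx, h=1e-6):
    """Maximum possible error of an indirect measurement y = phi(x):
    numerical partial derivatives of phi are combined with the instrument
    errors dx through the discrete quadratic norm."""
    x = np.asarray(x, dtype=float)
    dx = np.asarray(dx, dtype=float)
    grads = np.empty_like(x)
    for j in range(x.size):
        e = np.zeros_like(x)
        e[j] = h
        grads[j] = (phi(x + e) - phi(x - e)) / (2.0 * h)  # central difference
    return float(np.sqrt(np.sum((grads * dx) ** 2)))

# Hypothetical indirect measurement: power P = U * I, with instrument
# errors taken from assumed accuracy classes of the devices.
phi = lambda v: v[0] * v[1]
err = max_possible_error(phi, x=[220.0, 5.0], dx=[1.0, 0.05])
# analytically: sqrt((I*dU)^2 + (U*dI)^2) = sqrt(5**2 + 11**2)
```

<p>The quadratic norm gives the maximum possible error of the indirect measurement from the accuracy-class errors of the direct measurements, as in expressions (2)-(4).</p>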
      <p>Thus, the sequence of actions brings interconnected data sets to the conditions of
co-measurableness, by generalizing the definition space as a vector set (1) and, by the sets of
values, generalizing the vector of functions as the sets (4). The designated interval (4) defines
the space and its boundaries in which the possible values of the vector function change, and is
the starting point for assessing the reliability and adequacy as an integral indicator.</p>
      <sec id="sec-5-1">
        <title>5.2. Reduction of alternative models to linear forms with characteristic constants as functions of the identifier</title>
        <p>Suppose that the set of models considered as alternatives is selected and
denoted by a set of numbers K.
(Here and further, we denote the value of the partial derivative at the point x[I,j],
and Δxj is taken as the maximum possible instrumental error, i.e. the one determined by the
accuracy class of the device.) Based on such an estimate, for each element of the set
, it is possible to estimate the interval of values, which is determined by the
estimate of the maximum possible error in the form:
or:
(5)</p>
        <p>To bring each of the models of the set (5) to the conditions of codimensional analysis, we
will use the coordinate straightening method. The justification and further presentation of the
main idea of the article will be carried out for a linear form, which we will represent by a vector
function, which is represented by a matrix-vector product:</p>
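<p>A minimal sketch of the coordinate straightening idea, with synthetic, assumed data: an exponential model y = a·exp(b·x) is reduced to the linear form ln y = ln a + b·x, after which its constants are found by a straight-line fit in the new coordinates:</p>

```python
import numpy as np

# Coordinate straightening: the exponential model y = a * exp(b * x)
# becomes the linear form ln(y) = ln(a) + b * x, so that alternative
# models can be compared under the same co-measured (linear) conditions.
x = np.linspace(0.0, 2.0, 9)
y = 3.0 * np.exp(0.7 * x)                # synthetic, noise-free data

b, ln_a = np.polyfit(x, np.log(y), 1)    # straight-line fit in new coordinates
a = np.exp(ln_a)                         # recovered model constants
```

<p>The same device works for other nonlinear forms (power, logarithmic), each with its own straightening substitution.</p>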
        <sec id="sec-5-1-1">
          <title>The component of this vector function contains unknown constants of type and</title>
          <p>and will take the following form for the next point I+1:</p>
          <p>If we consider sequentially, for each value of the array identifier-index, two neighboring
points I and I+1, then in their difference the number of unknown constants will decrease:</p>
          <p>However, the m unknown constants Aij for each l will need to be determined, for example by
the elimination method. To ensure the necessary condition for their determination, it is necessary
to supplement equation (5) with m-1 more equations. The latter is possible under the condition
that N is greater than or equal to m, the number of components of the vector:
(6)
(7)
(8)</p>
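<p>The role of the neighboring-point differences can be illustrated with a toy numerical sketch (the linear form and its constants are made up for the example): subtracting the equations at points I and I+1 eliminates the free constant, and the remaining slope constants Aij follow from the resulting system:</p>

```python
import numpy as np

# Toy linear form y = A0 + A1*x1 + A2*x2 with m = 2 slope constants.
# Differences of neighboring points I and I+1 eliminate the free
# constant A0, leaving m equations solved by elimination.
X = np.array([[1.0, 2.0],
              [2.0, 3.0],
              [4.0, 1.0]])               # x[I, j], N = m + 1 points
y = 5.0 + 2.0 * X[:, 0] - 3.0 * X[:, 1]  # exact model values

dX = np.diff(X, axis=0)                  # x[I+1] - x[I]
dy = np.diff(y)                          # y[I+1] - y[I]
A = np.linalg.solve(dX, dy)              # slope constants A1, A2
A0 = y[0] - X[0] @ A                     # recovered free constant
```

<p>With N greater than m, each group of neighboring points yields its own set of constants, which is what allows them to be studied as functions of the identifier I.</p>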
        </sec>
        <sec id="sec-5-1-2">
          <title>Let us introduce the notation of the coefficient values:</title>
          <p>Then system (9), by iterating over the identifier I, will allow us to compose a system of m equations of type (5):
(9)
(10)
(11)</p>
          <p>System (11) has a unique solution if there is at least one inhomogeneous equation among its
equations. Its simplification by the elimination method step by step reduces the number of
unknowns. In the case of three components of the vector, after the first step, system (11) will
take the form:
if we apply the notation of the constants (10):
(12)
(13)
Its solution will be represented by an expression for calculating the characteristic constants:
Thus, we consider alternative models for their suitability for describing the connected sets.
Suppose that they are reduced to the linear form (3), and the constants Aij are found as functions
of the identifier I for a group of neighboring points. Then the properties of the models can be
studied, compared and classified between groups, under comparable conditions and by the nature of
their changes, before the approximation. This tool for analyzing the suitability of a model for
description, based only on the results of the analysis of connected numerical sets, will obviously
be a tool of artificial intelligence.</p>
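<p>The classification idea can be sketched numerically (the candidate models and data are assumed for illustration): for each pair of neighboring points a characteristic slope constant is computed for two candidate forms, and the form whose constants stay invariant along the identifier I is selected before any approximation is carried out:</p>

```python
import numpy as np

x = np.linspace(0.1, 2.0, 20)
y = 3.0 * np.exp(0.7 * x)                    # data generated by candidate B

slope_lin = np.diff(y) / np.diff(x)          # candidate A: y = a + b*x
slope_exp = np.diff(np.log(y)) / np.diff(x)  # candidate B: ln y = ln a + b*x

# Relative spread of the per-window constants along the identifier I.
spread = lambda c: (c.max() - c.min()) / abs(c.mean())
# spread(slope_exp) is near zero while spread(slope_lin) is large,
# so candidate B is classified as the suitable model type.
```

<p>Only the data of the connected sets are used here; no approximation constants are fitted before the choice is made.</p>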
        </sec>
      </sec>
      <sec id="sec-5-2">
        <title>5.3. Formation of the interval of existence of permissible values, determined by the methodology of direct and indirect measurement</title>
        <p>To demonstrate that the artificial intelligence tool possesses the properties of visual
analysis, condition (5) was analyzed. Thus, an arbitrary model q from a limited set of model forms
, which is represented by the operator
, was considered.</p>
        <p>Let us assume that the operator is continuous and differentiable and acts in the definition set
of the vector</p>
        <p>and the values of the constants aqk are defined. They are calculated according to
the data from the set (1) for a set of four values of the identifier I, that is, for an arbitrary value of I
and the three following it, according to algorithm (13). For this set they will be a single, invariable
set: if the model is ideal, they do not vary. If the operator does not give an ideal description, then
the constants thus defined for two neighboring values of an arbitrary identifier I have deviations
Δaqk. Thus, expanding the operator into a series taking into account the deviations Δxj of the
components of the vector
we write:</p>
        <p>However, the relative error is more informative:
Under these conditions, we write a system of equations for l x N constants for the elements of a set
of type (2), for N values of the vector . Suppose that its solution is found and represented as a
set of characteristic constants aqk for each identifier I. Further analysis of the variational properties
of such a set will contain information about the suitability of the chosen form q for describing the
values of the connected sets and under comparable conditions (6). At the same time, such
variability can be estimated by the dynamics of the dependence of the constants on the identifier I
and by the requirements imposed on the permissible value of the maximum possible relative
error.</p>
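The variability of the constants as a function of the identifier I can be estimated, for example, by their relative deviations between neighbouring identifiers. A hedged sketch; the estimator and the name are assumptions, not the paper's formula:

```python
import numpy as np

def constant_variability(a_of_I):
    """Relative deviations of a characteristic constant between neighbouring
    identifiers I.  Small, stable values suggest the chosen form q suits the
    connected sets; the simple finite-difference estimator is an assumption."""
    a = np.asarray(a_of_I, dtype=float)
    return np.abs(np.diff(a)) / np.abs(a[:-1])
```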
      </sec>
      <sec id="sec-5-3">
        <title>5.4. Formation of a tool for quantitatively proving the best fit of the data description model before the approximation begins</title>
        <p>Direct application of the discrete quadratic norm (3) to condition (13), taking into account
the properties of the norm of a sum of functions, gives:
We also present (14) using the local absolute
and relative errors Ɛ. Such an expansion
is the basis for calculating the absolute error:</p>
        <p>The latter proves that if the permissible value of the relative maximum possible error is
determined, then:
- the boundary between the two sets of model forms selected for consideration, suitable and
unsuitable for description, is:
(15)
(16)
(17)
- the value of the permissible deviation of the characteristic constants for the description
form q is limited by their permissible error and the properties of the model:
(19)</p>
        <p>At the same time, these equations (18), (19) divide the sets of deviations of the constants into
two sets of levels: in one the norm of the relative error is less than, and in the other greater than,
the given Ɛ. This quantitative, unambiguous division for the error norm ‖Ɛ‖ (17) and for the norm
of the constants ‖Δaqk‖ (19) testifies to a distinctive new possibility of artificial
quantitative-qualitative analysis. In this regard, it is an AI tool and will be called a tool for
analyzing the capabilities and suitability of a model of related data sets.</p>
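The division into two sets of levels can be expressed as a simple threshold test. A sketch assuming the norms ‖Δaqk‖ and the permissible bound from (17)-(19) have already been computed; all names are illustrative:

```python
def split_models(delta_norms, bound):
    """Divide candidate model forms into suitable / unsuitable by comparing
    the norm of their constant deviations with the bound implied by the
    permissible relative error.  The bound itself (the paper's (17)-(19)) is
    taken as an input here, not reconstructed."""
    suitable, unsuitable = [], []
    for name, norm in sorted(delta_norms.items()):
        (suitable if norm <= bound else unsuitable).append(name)
    return suitable, unsuitable
```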
        <p>
          To quantitatively confirm that changes in characteristic constants from point to point of the
definition set as well as relative error and adequacy are indicators of model quality, we will use the
data of numerical experiments [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]. During the experiment, a sample of five values was obtained experimentally for each of the
eight rotation angles of the rotor Y. As a
result, for the mathematical expectation of each angle, which was found after statistical processing
of the sample with a volume of five elements each, we have eight values as a function of the
discrete system time factor X. Table 1 shows the experimental data of the mathematical
expectation of the shaft rotation angle Y and quantitative characteristics of the quality of
approximation by the exponential function, which is calculated using the found approximation
constant. Columns 1 and 2 show the discrete time coefficient X and the value of the mathematical
expectation Y, respectively, as the output set according to (1). The function, as a result of the
exponential approximation Y1, the absolute Y1–Y and the relative Ɛ of its error, as quality
characteristics are presented in columns 3-5, respectively. Column 6 shows the value of the
constant a in the space of straightened coordinates X, Y, according to (13). Column 7 shows Ɛa,
the relative deviation from the value determined by the approximation in the aligned coordinates:
a=0.272099; b=0.699144. Column 8 shows the local values of the adequacy of the approximation E1
according to the model quality indicator, which is calculated using three of the seven criteria [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
        </p>
        <p>Analysis of the data in Tables 1 and 2 shows that the relative error of the deviation of the
approximation constant in the aligned coordinates (column 7) changes synchronously with the
relative error of the model (column 5) and the local adequacy (column 8): the error modulus
increases while the adequacy decreases. Thus, for the approximation by the exponential function
the value of the constant a changed almost twofold (Table 1), while for the power approximation it
practically did not change (4-5 %, Table 2). The latter clearly demonstrates that the limits obtained
from expressions (17), (18) are indicators of the quality of the model, like the relative error and
adequacy, but ones calculated without approximation, using only the data of the
connected sets.</p>
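The column-6 quantity of Table 1 can be reproduced in outline: for an exponential model Y = exp(aX + b), the straightened coordinates are (X, ln Y), and a local value of the constant a follows from each pair of neighbouring points. A sketch under that assumption; algorithm (13) itself is not reproduced, and the function name is illustrative:

```python
import numpy as np

def local_exponential_constant(x, y):
    """Local values of the constant a of the model Y = exp(a*X + b), computed
    from neighbouring points in the straightened coordinates (X, ln Y).
    Assumes the straightening used is the logarithm."""
    lx = np.asarray(x, dtype=float)
    ly = np.log(np.asarray(y, dtype=float))
    return np.diff(ly) / np.diff(lx)   # slope a between neighbouring points
```

For data generated exactly by an exponential law the local values coincide; their spread from point to point is the stability indicator discussed above.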
        <p>Thus, the stability of the characteristic constant corresponds to a small relative error
or high adequacy and in itself reflects how well the type of function suits the approximation. A
high resolution of 10^-3 and a range of variation whose max/min ratio exceeds 40-60 make the
conclusion based on it applicable in practice, and the property itself useful as an indicator-tool for
artificial inference.</p>
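The max/min range check used as an indicator above can be computed directly; the helper name is an illustrative assumption:

```python
def max_min_ratio(values):
    """Range indicator max|v| / min|v| over nonzero values, used to gauge how
    strongly the dynamics of a constant (or of the adequacy) separate good and
    poor model forms."""
    mags = [abs(v) for v in values if v != 0]
    return max(mags) / min(mags)
```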
        <p>6. Discussion of the results of the development of an AI tool for evidential selection of the model type before approximation</p>
        <p>The problem of choosing the best model, which arises from the need to reproduce
curves and body surfaces, is an urgent task in describing functions of one and many variables
when building empirical models. The areas it covers are not limited to cutting machines
or 3D printers; it also arises when operating computerized systems or monitoring systems
integrated into IoT and cloud services [21-25].</p>
        <p>In this regard, an AI tool has been built that opens up the possibility of determining the
boundaries of the model quality indicator using expressions (17), (18) without approximation, but
only using the data of connected sets.</p>
        <p>Such results are explained by the existence of a fundamental connection between the
sequence of actions for estimating the interval of admissible values of a vector function and the
determination of integral indicators of the theory of cognition. The latter is proposed to be
evaluated together with the methods of direct and indirect measurement, the accuracy class of the
instruments and the randomness of the measurement process.</p>
        <p>Another explanation of the result is the comparison under co-dimensional conditions, which is
ensured by reduction to linear forms, and by the fact that the correction of coordinates according
to similarity theory reduces all alternative forms of models to a single vector function, with the
characteristic constants represented analytically for an arbitrary identifier I and the three
surrounding points. A further factor is the intervals of existence of permissible values, which are
determined by the methodology of direct and indirect measurement using the Taylor series
expansion, quadratic norms and the Cauchy-Bunyakovsky inequality.</p>
        <p>The last reason that explains the obtained result is that the created tool (19) uniquely
determines the boundaries dividing the set of deviations of the characteristic constants into
two sets of levels. This quantitative, unambiguous division for the error norm ‖Ɛ‖ (17) and for the
norm of the constants ‖Δaqk‖ (19) testifies to a distinctive new possibility of artificial
quantitative-qualitative analysis. In this regard, it is an AI tool and will be called a tool for
analyzing the capabilities and suitability of a model of related data sets. All of the above are
causal and explain the results achieved as solutions to the four tasks set, given in the
order of their formulation.</p>
        <p>
          A distinctive feature of the research and the obtained results is the analytical algorithm for
calculating the characteristic constants (13) and the interval of their deviations as a function of the
identifier I (19), which takes into account the instrumental, methodological and given relative
errors. Going beyond the upper limit of the deviation interval of one of the characteristic constants
is a criterion-indicator of poor suitability of the function for describing the sets (1) and (2). The
stability of the values of the characteristic constants defined on the entire set is the opposite
criterion-indicator, which determines the correspondence of the approximation form. Membership
in the best type of the set considered is determined by the smallest value of the estimate of the
maximum possible error (expression (18)). Such simplicity, analyticity and suitability for simple
automatic inference make the expressions (13), (18) and (19) obtained in this work a tool for
artificial unambiguous analysis and inference, that is, an AI tool. Unlike existing well-known
criteria, such as the relative error and adequacy, they are calculated before approximation [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. As shown in Tables 1 and 2, the
dynamics of changes in the known indicators, relative error and adequacy, are synchronously
reflected by the dynamics of changes in the characteristic constants, which, however, are
calculated before approximation.
        </p>
        <p>In addition, unlike existing ones, such analysis and selection are carried out before the start
of approximation, which reduces the total complexity of the work to the complexity of
approximating only the best model for the data of related sets.</p>
        <p>However, despite the axiomatic obviousness of such a statement, there are no examples
described in the literature when the task of quantitative comparison and selection of the type of
approximation form, which is based only on the analysis of numerical data sets, was posed. There
are also no known sources where the selection of the best of the models before the start of the
approximation process is given. The formulation and solution of such a problem significantly
reduces the total complexity of the approximation process as a whole, since it is performed only
for one form, and the algorithm of such analysis is an AI tool. However, if earlier, after finding the
approximation constants, it was necessary to once again verify the correctness of the conclusions
for each model, now there is no need for this.</p>
        <p>Thus, the methodology for constructing and evaluating a model based on connected sets is
supplemented by a selection stage, the basis of which is a quantitative comparison of the stability
of constants calculated using connected sets (13), (18) and (19). The complexity of this stage is
estimated by the sum of the time t1 of calculating the characteristic constants using the analytical
solution (13) and the time t2 of evaluating the analytical expression (19), multiplied by the number
of models K. Each of the traditional stages is now performed only for the one pre-selected
approximation form.</p>
        <p>
          Such addition and unification into a single methodology allows, based on the assessment of a
given relative error, to prove why one of the proposed forms is better using the automatic
inference algorithm (19). Thus, after finding the approximation constants and calculating the
adequacy of the model, it is necessary to compare the quality of only one, and not all K models, by
assessing the adequacy by several of the seven criteria [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Let us denote this time t3. It consists of
the time of determining the approximation constants and the time tam of assessing the
adequacy by m criteria. The total number of approximation constants is twice the
number of characteristic constants. Thus, if we assume that the time of determining one constant
is the same, then the time saving Δt will be:
        </p>
        <p>Thus, the time savings become obvious starting with two models, K=2. The reason is that
the duration of operation t1 is less than ta but greater than t2, which is a direct evaluation of
the analytical expression (19). However, even if they are equalized, and the number
of adequacy criteria is reduced to three: reliability, accuracy and depth (m=3), then starting with
K=2 models the savings amount to 3ta and only increase thereafter. The same is observed when
counting floating-point operations for running all models versus the proposed method, since time
enters the savings calculation as a positive multiplier.</p>
        <p>
          The analysis of the studies performed in the article revealed the main limitation, which
is the need to bring the types of models being analyzed to co-dimensional conditions. It is obvious
that its elimination lies in the search for and selection of groups of functions inherent in separate
classes of problems, or in research on the applicability of vector indicators, calibration algorithms
and recurrent networks [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] for applying formal developments to a set
of models in a single form. In this regard, it should be emphasized that appeal to the fundamental
principle of the theory of cognition and similarity, commensurability of conditions during
comparison, eliminated this methodological error and will remove restrictions on its application.
The task of express approximation and quantitative assessment of its results [
          <xref ref-type="bibr" rid="ref1 ref5">1, 5</xref>
          ] is and will be relevant.
Approximation as a complex process of building a neural network with the use of a vector
indicator is presented in the work [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The coefficients of synaptic weights are analytically
determined as a convergent sequence of solutions of a system of nonlinear algebraic equations,
which is represented through the initial data for training [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The presented practical successes of
simultaneous approximation are confirmed by the estimation of the maximum value of the rejected
higher-order derivative and the solution of the problem of quantitative assessment is proposed as
an application of calibration and database [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. Other examples of the development of practical
quantitative assessment of model quality are given in [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. In it, the process of building a model is
presented as a technology for its creation, and the assessment of effectiveness is presented
quantitatively as a generalized assessment through a set of well-known factors. In essence, these
factors determine the criteria as components of the adequacy of the model in relation to the object
that needs to be described based on data about the function and its derivatives [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Simultaneous
multifunctional application of approximation as a description of sets of points, functions and their
derivatives in the formation of a network for its calibration and training yields positive results.
As a result of such application, the possibilities for creating and simplifying models and
quantitatively determining the level and degree of adequacy are expanded [
          <xref ref-type="bibr" rid="ref1 ref6">1, 6</xref>
          ]. Also in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] it was
demonstrated that the use of the indicator vector expands its functional capabilities.
        </p>
        <p>A disadvantage that can limit further application is the need to simultaneously satisfy
exact and approximate approximation conditions. Under these conditions, the choice of
characteristic constants will require an analytical solution of the optimization problem with
equality and inequality constraints, which will require its own research. It is obvious that further
search for ways to improve the form of models and the tool will require changes in the paradigms
of least squares approximation and group accounting of arguments.</p>
        <p>Of course, the choice of content and algorithms for calculating each of them is subject to
standardization, which is also a direction for further research into the structure of empirical
models and the algorithmization of automated systems using AI tools, including
intellectualized express model builders. An equally important direction of further research
concerns cases when the forms of the initial models must satisfy exact requirements on surfaces
and the number of such constraints is an integral part of the problem, which significantly increases
the number of constants. Such forecasts, taking into account the results of this article, will
obviously create new needs for reviewing both new types of AI tools and new areas of their
application.</p>
      </sec>
      <sec id="sec-5-4">
        <title>Conclusions</title>
        <p>1. The formed sequence of actions brings the process of evaluating interconnected data sets
to co-measurable conditions, which are determined by the accuracy class of the device and the
measurement method. The generalized m-component vector model of the definition set and
l-component vector-function model of the value set determines the permissible interval. This
allows the features of the dynamics of assessing reliability and adequacy to be studied as integral
indicators. The range of changes of reliability is limited to the interval from 0 to 1 inclusive, while
adequacy varies from 0 to magnitudes of the third to fifth order, as dimensionless and unbounded
numbers.</p>
        <p>2. The reduction of the alternative models of the choice set to a single form, including the
method of straightening coordinates, made it possible to establish analytical expressions for the
characteristic constants as functions of only the identifier and three neighboring points. The
dynamics of changes in the values of the characteristic constants reflect the properties of the
models and allow them to be compared between groups and classified before the start of
approximation under comparable conditions. This tool for analyzing the suitability of a model for
description, based only on the analysis of the results of processing related numerical sets, is an
artificial intelligence tool.</p>
        <p>3. A quantitative assessment of the interval of existence of the permissible value of the
characteristic constants, which is determined by the accuracy class of the device and the
measurement method, has been formed. The values of the limits of the measured value have been
established for a model suitable for description within the permissible values of the relative error
and for one not suitable. The existence and analytical relationship of the limits of deviations of the
model constants, based on the properties of quadratic norms, which will provide automatic
selection for the established permissible relative error, have been established.</p>
        <p>4. It has been demonstrated on numerical examples that the formed toolkit for quantitative
proof of the best fit of the model type for describing experimental data on a definition set behaves
synchronously with the relative error and adequacy. The high resolution of 10^-3 and a range of
values whose max/min ratio exceeds 40-60 will be useful to programs as an indicator. The
deviation of the model constants at different points is an indicator that can be considered a tool of
artificial intelligence, since it automatically selects the best form of the model and has properties
similar to the relative error and adequacy of the model.</p>
      </sec>
      <sec id="sec-5-5">
        <title>Acknowledgements</title>
        <p>The author expresses gratitude to Dr. Tech., Professor Vladimir Davydovich Levenberg, who
first set me the task of approximating a function of two variables with an accuracy of one
hundredth of a percent in 1971.</p>
        <p>The author expresses his deep gratitude to Dr. Tech., Professor Leonid Mikhailovich
Dykhta, who reviewed my work with the integral approximation criterion in 1974. Leonid
Mikhailovich's critical remarks and discussions have always stimulated my research and searches.</p>
      </sec>
      <sec id="sec-5-6">
        <title>Declaration on Generative AI</title>
        <p>During the preparation of this work, the author utilised ChatGPT and LanguageTool to
identify and rectify grammatical, typographical, and spelling errors. Following the use of
these tools, the author conducted a thorough review and made necessary revisions, and
accepted full responsibility for the final content of this publication.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Trunov A. An adequacy criterion in evaluating the effectiveness of a model design process. Mathematics and Cybernetics - applied aspects. 2015. Vol. 1, No. 4 (73). DOI: 10.15587/1729-4061.2015.37204.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Melnykova N., Shakhovska N., Melnykov V., Zakharchuk M., Logoyda M., Mahlovanyi V. The Applying Processing Intelligence Methods for Classify Persons in Identify Personalized Medication Decisions. 2020 10th International Conference on Advanced Computer Information Technologies (ACIT), Deggendorf, Germany, 2020. pp. 422-425. DOI: 10.1109/ACIT49673.2020.9208822.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Melnykova N., Melnykov V., Shahovska N., Lysa N. The Investigation of Artificial Intelligence Methods for Identifying States and Analyzing System Transitions Between States. 2020 IEEE 15th International Conference on Computer Sciences and Information Technologies (CSIT), Zbarazh, Ukraine, 2020. pp. 70-75. DOI: 10.1109/CSIT49958.2020.9321841.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Trunov A. Formation of Indicators for Evaluating the Model Based on a Set of Interconnected Data Sets in the Tasks of Communication Technologies in Healthcare. IDDM'2023: 6th International Conference on Informatics &amp; Data-Driven Medicine, November 17-19, 2023, Bratislava, Slovakia. CEUR Workshop Proceedings (CEUR-WS.org).</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Trunov A., Beglytsia V., Gryshchenko G., Ziuzin V., Koshovyi V. Methods and tools of formation of general indexes for automation of devices in rehabilitative medicine for poststroke patients. Eastern-European Journal of Enterprise Technologies. 2021. Vol. 4, No. 2 (112). pp. 35-46. DOI: 10.15587/1729-4061.2021.239288. ISSN 1729-3774.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] Trunov A. Recurrent Approximation as the Tool for Expansion of Functions and Modes of Operation of Neural Network. Mathematics and Cybernetics - applied aspects. 2016. Vol. 5, No. 4 (83). pp. 41-48. DOI: 10.15587/1729-4061.2016.81298.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Vorobel R. A., Popov B. A. Uniform approximation by exponential and power expressions with condition. Algorithms and programs for calculating functions on a computer. K.: Institute of Cybernetics of the Ukrainian SSR Academy of Sciences, 1981. Issue 5, part 1. pp. 158-170.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] Popov B. A. Uniform approximation by splines. K.: Naukova Dumka, 1989. 272 p.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] Tesler G. S. Metrics and norms in the hierarchy of categorical semantics and functions. Mathematical Machines and Systems. 2005. No. 2. pp. 63-75.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] Dzyadyk V. K., Shevchuk I. A. Theory of Uniform Approximation of Functions by Polynomials. Walter de Gruyter, 2008. 480 p. DOI: 10.1515/9783110208245.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] Dzyadyk V. K. Approximation methods for solving differential and integral equations. Kyiv: Naukova Dumka, 1988. 304 p.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] Malachivsky P. Equally close functions by the sum of the polynomial and exponential with interpolations. Physico-mathematical modeling and information technologies. 2007. Issue 6. pp. 77-90. ISSN 1816-1545.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Bellman</surname>
          </string-name>
          , “
          <article-title>On the Approximation of Curves by Line Segments Using Dynamic Programming</article-title>
          ,”
          <source>Communications of the ACM</source>
          , Vol.
          <volume>4</volume>
          , No.
          <issue>6</issue>
          ,
          <year>1961</year>
          , p.
          <fpage>284</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Trunov</surname>
            <given-names>Alexander</given-names>
          </string-name>
          .
          <article-title>Recurrent Transformation of the Dynamics Model for Autonomous Underwater Vehicle in the Inertial Coordinate System</article-title>
          .
          <source>Eastern-European Journal of Enterprise Technologies</source>
          , vol.
          <volume>2</volume>
          , no.
          <issue>4</issue>
          ,
          <year>2017</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>47</lpage>
          , doi:10.15587/1729-4061.2017.95783
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Solesvik</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kondratenko</surname>
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kondratenko</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sidenko</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kharchenko</surname>
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boyarchuk</surname>
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Fuzzy decision support systems in marine practice</article-title>
          .
          <source>In: 2017 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE)</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>