<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Best Chebyshev Approximation for Compression of Big Information Arrays</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A.A. Kalenchuk-Porkhanova</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>V.M. Glushkov Institute of Cybernetics</institution>
          ,
          <addr-line>40 Acad. Glushkov Ave., Kiev, 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The development of the theory of best uniform approximations was started by the work of Pafnuty L. Chebyshev (1821-1894), who for the first time established the existence of the best polynomial approximation and identified its characteristic property [1]. This theory later received his name. The rigorous proof of the existence and uniqueness of such a polynomial, as well as the basis for the development of the classical analytical methods for its computation, were given in the papers of P. Kirchberger, D. Jackson, C.-J. de la Vallée Poussin, E. Borel, S.N. Bernstein, N.I. Akhiezer and others. In 1933-1934 E.Y. Remez proposed the method of successive Chebyshev interpolations and two algorithms based on it (I and II). They became the basis for numerical solution of the Best Chebyshev Approximation (BCA) problem. However, because of the computational complexity of the method, implementation of the best approximants in practice became possible only in the late 1950s with the use of the computers then being developed in the Soviet Union. It should be noted that the first computer algorithms and programs for polynomial BCA were developed at the V.M. Glushkov Institute of Cybernetics (GIC) of the National Academy of Sciences of Ukraine (NASU) in the late 1950s with the direct assistance of E.Y. Remez. The results of solving specific problems were first reported by the authors in 1961 at the IV All-Union Mathematical Congress in Leningrad. BCA techniques continue to be developed at GIC in the directions of expanding the approximant classes and using BCA to increase the accuracy of solutions of applied problems. Besides, BCA underpins such important areas of the scientific foundations of computer technology (CT) as the processing and compression of big numerical arrays, and the optimization of computational algorithms for accuracy and speed by minimizing the total errors of their implementation. The related GIC results and some prospects for further development are described.</p>
      </abstract>
      <kwd-group>
        <kwd>compression of data arrays</kwd>
        <kwd>existence</kwd>
        <kwd>uniqueness</kwd>
        <kwd>Best Chebyshev Approximation (BCA)</kwd>
        <kwd>information processing</kwd>
        <kwd>complexity</kwd>
        <kwd>accuracy</kwd>
        <kwd>performance</kwd>
        <kwd>total error</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>Improving the level and quality of information support for society is increasingly
important as a determining condition of its effective development. Information on the
states of studied objects is usually represented in discrete form by functional
dependencies which characterize processes of various nature as information arrays
of numerical data.</p>
      <p>Working with such big arrays is associated with serious difficulties in their use in
mathematical modeling and forecasting problems, in solving the problems of
economically storing large amounts of data, in their high-speed transmission through
communication channels, etc.</p>
      <p>To overcome the above difficulties, mathematical processing of the arrays is
applied using the interpolation, mean-square or uniform approximation methods to
compress them by replacing the discrete representation with their analytical
expression.</p>
      <p>The compression ratio is characterized by a compression coefficient C, which is
determined by the formula</p>
      <p>C = b(f) / b(F),
where b(f) is the size of the array representing the given discrete function f and b(F)
is the number of parameters of the approximant F.</p>
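As a trivial illustration of the formula, the coefficient can be computed directly; the figures below reproduce the water-salinity example mentioned later in the text (1897 values approximated by polynomials of degrees 9-14, i.e. 10-15 coefficients):

```python
def compression_coefficient(b_f: int, b_F: int) -> float:
    """C = b(f) / b(F): size of the discrete array over the
    number of parameters of the approximant."""
    return b_f / b_F

# 1897 salinity measurements approximated by polynomials of degrees
# 9..14 (10..15 coefficients) give compression ratios of about 126..190:
print(round(compression_coefficient(1897, 15)))  # 126
print(round(compression_coefficient(1897, 10)))  # 190
```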
      <p>The most efficient and general way to solve such problem is provided by the best
uniform approximation method based on the P.L. Chebyshev’s results [1].</p>
      <p>A fundamental feature of Best Chebyshev Approximation (BCA) is that the
approximation accuracy limit obtained for a discrete representation of a function remains
guaranteed at any point of the function's continuous interval. This feature makes BCA
preferable over both the interpolation and RMS approximation methods. The specified
BCA advantage allows one to solve with high accuracy both the problem of
approximating numerical data arrays with a high compression ratio (the direct approximation problem)
and the problem of recovering missed or absent values of the original data (the
inverse approximation problem). The second problem usually arises when repeating the
experiment is either difficult or impossible. Typical examples are the analysis and
synthesis of complex static or dynamic systems. The BCA feature is preserved when
solving both direct and inverse approximation problems.</p>
      <p>Known methods for solving the Chebyshev problem can be subdivided into
extensions of E.Y. Remez' methods [2, 3], the linear and convex programming methods
[4, 5], the spline approximation methods [6], and the methods of sequential differential
linearization by parameter coefficients (mainly for fractional rational approximation)
[7]. Among them, special attention is paid to the second E.Y. Remez method, which
provides a relatively fast convergence rate (quadratic in some cases) together with the
ability to unify the computations. The latter is important for the efficiency of computer
implementations.</p>
      <p>The first GIC results in the area of BCA algorithm and program development
addressed the polynomial approximation of univariate functions and the approximate
solution of systems of incompatible equations. Both were based on the second method of
successive Chebyshev interpolations (SCI) proposed by E.Y. Remez. They have
started a series of successive developments [8 - 11].</p>
      <p>Further development of the BCA topic at GIC proceeded in the following main directions.
1. Development of new BCA methods and algorithms for both analytically and
discretely defined univariate functions with approximants of various nature
(polynomial, fractional rational, etc.), as well as for multivariate functions with generalized
polynomial approximants. They apply Remez' SCI with reduction of the problems to
linear programming problems. A branch of this direction is the development of
best piecewise approximation algorithms and programs. They are applied to big
arrays (of the order of 10 million values and more), breaking them into parts (segment
approximation).</p>
      <p>2. Optimization of algorithms and programs for accuracy and speed by obtaining
estimates of their total errors; development of techniques for modifying algorithms in
order to further increase their computational efficiency; development of techniques for
solving classes of applied problems, including those related to compressing large
arrays of numerical information.</p>
      <p>The growth of information flows in society, together with the increasingly acute
problem of their compression while ensuring high accuracy and speed, determines the
growing demand for, and special relevance of, BCA.</p>
      <p>The ongoing BCA development at GIC addresses mainly its wider and systemic
use for various classes of applied problems as well as the computer algorithm
optimization.</p>
      <p>A brief introduction to the BCA problems, algorithms and optimization methods is
presented below. See [12-15] for more detailed and specific information.</p>
    </sec>
    <sec id="sec-2">
      <title>Problem Statement and Algorithms</title>
      <p>The problem of the best (uniform) Chebyshev approximation of a function f(x) on
an interval [a, b] in its general statement is based on the Chebyshev principle of
minimizing the measure of uniform approximation</p>
      <p>L(H_n) = max_{x∈[a,b]} |f(x) − H_n(x; A)|.</p>
      <p>It consists of finding such an approximant H_n(x; A) of degree n
with the coefficients A = (a_0, a_1, ..., a_n) from the whole set of approximants
H_n of degree ≤ n which satisfies the minimax condition</p>
      <p>L(H_n) → min over H_n,
where f(x) is a function continuous on [a, b] and min_{H_n} L(H_n) is the smallest
measure of uniform approximation.</p>
      <p>As H_n(x; A) we consider the classes P_n of all polynomials of degree no higher
than n, and the classes r_n of all fractional rational expressions of degree n = l + m of
the form
r_n(x) = P_l(x) / Q_m(x) = r_n(x; A; B),
where P_l(x) and Q_m(x) are polynomials of degrees l and m respectively with the
coefficients A = (a_i, i = 0, ..., l) and B = (b_j, j = 0, ..., m).</p>
      <p>
        Then the statements of the problems of finding the best Chebyshev approximants
are:
max_{x∈[a,b]} |f(x) − P_n(x; A)| = L(P_n) → min over P_n, min_{P_n} L(P_n) = L(Π_n(x)) = ρ,
(
        <xref ref-type="bibr" rid="ref1">1</xref>
        )
max_{x∈[a,b]} |f(x) − r_n(x; A; B)| = L(r_n) → min over r_n, min_{r_n} L(r_n) = L(R_n(x)) = μ,
(
        <xref ref-type="bibr" rid="ref2">2</xref>
        )
where Π_n(x) and R_n(x) are the target polynomial and fractional rational best Chebyshev
approximants, and ρ and μ are the values of their best approximations.
      </p>
      <p>
        The existence, uniqueness and properties of these approximants follow respectively
from the classical theorems of E. Borel and P.L. Chebyshev for the polynomial
statement (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), and of N.I. Akhiezer and P.L. Chebyshev for the fractional rational statement
[16, 17]. Based on these theorems, the unique solutions to problems (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) and (
        <xref ref-type="bibr" rid="ref2">2</xref>
        )
coincide, respectively, with the solutions of the following “elementary” problems:
max_{x∈X_1} |f(x) − P_n(x; A)| = ρ,
max_{x∈X_2} |f(x) − r_n(x; A; B)| = μ,
on (n+2)-point subsets X_1, X_2 ⊂ [a, b] where the deviations reach their maximum
possible values ρ and μ.
      </p>
      <p>
        Each of the (n+2) point problems is a Chebyshev interpolation problem of the
function f (x) on the set of (n+2) points, which are called Chebyshev alternance for
the polynomial problem (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) and extremal basis for the fractional rational problem (
        <xref ref-type="bibr" rid="ref2">2</xref>
        )
respectively. This remarkable property of Chebyshev alternance is the basis for
finding all the best Chebyshev approximations.
      </p>
      <p>
        The E.Y. Remez method of solving the problems (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) and (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) is based on the idea of
successive Chebyshev interpolations (SCI), whose r-th step reduces to finding an
(n+2)-point S-set S_r = {x_v^r, v = 0, ..., n+1}. The sequence of these sets converges to
the desired Chebyshev alternance or extremal basis if it is obtained by solving at each
j-th step the following systems of algebraic equations for (
        <xref ref-type="bibr" rid="ref1">1</xref>
        ) and (
        <xref ref-type="bibr" rid="ref2">2</xref>
        ) respectively:
f(x_v^j) − P_{n,j}(x_v^j) = (−1)^v ρ_j,
(
        <xref ref-type="bibr" rid="ref3">3</xref>
        )
w(x_v^j) [f(x_v^j) − P_{l,j}(x_v^j) / Q_{m,j}(x_v^j)] = (−1)^v μ_j.
(
        <xref ref-type="bibr" rid="ref4">4</xref>
        )
      </p>
      <p>
The equations (
        <xref ref-type="bibr" rid="ref3">3</xref>
        ) are linear with respect to the coefficients a_k, k = 0, ..., n, of the
polynomial P_{n,j}(x) and the value ρ_j, while the equations (
        <xref ref-type="bibr" rid="ref4">4</xref>
        ) are non-linear with respect
to the coefficients a_i, i = 0, ..., l, the coefficients b_i, i = 0, ..., m, and the value μ_j [2, 3, 8].
      </p>
      <p>The main difficulty of all numerical implementations of the Remez method is
the choice of the (n+2)-point set at each step. These subsets determine both the
existence and the rate of convergence of the SCI. Three options are known for
selecting the (n+2)-point sets: the optimal, the semi-optimal and the
acceptable one. The optimal option provides the quadratic convergence rate, which in
practice leads to only a few iterations being needed (even 1-2 iterations: [3], p. 79).</p>
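A minimal numerical sketch may clarify the SCI iteration for the polynomial problem (1). The "full exchange" below is a simplified stand-in for the optimal replacement rule discussed in the text, and all names, grid sizes and iteration counts are illustrative assumptions:

```python
import numpy as np

def levelled_system(f, nodes):
    # System (3): f(x_v) - P_n(x_v) = (-1)^v * rho, solved for the n+1
    # polynomial coefficients a_0..a_n and the levelled deviation rho.
    n = len(nodes) - 2
    A = np.zeros((n + 2, n + 2))
    A[:, : n + 1] = nodes[:, None] ** np.arange(n + 1)
    A[:, n + 1] = (-1.0) ** np.arange(n + 2)
    sol = np.linalg.solve(A, f(nodes))
    return sol[: n + 1], sol[n + 1]

def remez_poly(f, a, b, n, iters=8, grid_size=4000):
    # Start from Chebyshev-distributed nodes mapped onto [a, b].
    k = np.arange(n + 2)
    nodes = np.sort((a + b) / 2 + (b - a) / 2 * np.cos(np.pi * k / (n + 1)))
    grid = np.linspace(a, b, grid_size)
    powers = grid[:, None] ** np.arange(n + 1)
    for _ in range(iters):
        coeffs, rho = levelled_system(f, nodes)
        err = f(grid) - powers @ coeffs
        # Full exchange: between consecutive sign changes of the error,
        # take the point of largest |error| as a new node (assumes the
        # well-behaved case of exactly n+2 such intervals).
        cuts = np.where(np.diff(np.sign(err)) != 0)[0]
        segs = np.split(np.arange(grid_size), cuts + 1)
        if len(segs) == n + 2:
            nodes = np.array([grid[s[np.argmax(np.abs(err[s]))]] for s in segs])
    return coeffs, rho
```

For instance, `remez_poly(np.exp, -1.0, 1.0, 3)` converges in a few iterations to the best cubic approximant of e^x, with the levelled deviation ρ approaching the maximal error over the whole interval (the equioscillation property).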
      <p>The BCA algorithms for both the polynomial/fractional rational approximation of
univariate functions and the generalized polynomial approximation of multivariate
functions developed at GIC are based on the second method of E.Y. Remez’ SCI. The
main advantage of the GIC algorithms in comparison with similar implementations
known from other publications is their optimal method of the (n+2) point set
replacement when moving to the next iteration [9 - 11].
</p>
      <sec id="sec-2-1">
        <title>Polynomial and Fractional Rational Approximation</title>
        <p>
          The initial function f(x) can be defined both analytically and discretely in the
algorithms for solving the problems (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) and (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ). However, discretely calculated values are
always used when replacing the (n+2)-point sets (at each iteration) [14, 15].
        </p>
        <p>
          The replacement of the current (n+2)-point set S_j by the next one S_{j+1} is implemented
according to the following rule. SCI selects the (n+2) largest deviations of the approximant
from the initial function, taking into account the alternation (serial inversion) of the
deviation value signs. Thus, the Chebyshev alternance for the problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) or the
extremal basis for the problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) is approached, and the best approximants are
computed together with their best approximation value. The solution of problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) at
each j-th step is reduced to solving the system of linear algebraic equations (
          <xref ref-type="bibr" rid="ref3">3</xref>
          ) by the
Crout method, optimized for accuracy and speed [14, 15].
        </p>
        <p>
          Two algorithms (“A” and “B”) have been developed for the problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ). Their
difference corresponds to the difference of the following BCA polynomial forms:
Σ_{i=0}^{n} a_i x^i or Σ_{i=0}^{n} b_i T_i(x) respectively, where T_i(x) are the
Chebyshev polynomials.
        </p>
        <p>According to N.S. Bakhvalov, the algorithm “B” improves on the algorithm “A” in
the case of large variation of the polynomial coefficients, which can cause a large
rounding error when calculating values by the Horner scheme. N.S. Bakhvalov's algorithm
for writing a polynomial as a linear combination of Chebyshev
polynomials significantly reduces the indicated numerical error [14, 18].</p>
        <p>The total error analysis of algorithms “A” and “B” has shown that the advantage of
the algorithm “B” becomes noticeable at the polynomial degrees n&gt;10. For n≤10,
both the algorithms are approximately equivalent in precision.</p>
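The two polynomial forms used by algorithms “A” and “B” represent the same approximant; for instance, NumPy can convert between the power basis and the Chebyshev basis. This is only an illustration of the two representations, not of the GIC programs themselves, and the coefficient values are made up:

```python
import numpy as np

# Power-basis coefficients a_i of some approximant (lowest degree first),
# as stored by algorithm "A"; the values are illustrative.
a = np.array([1.0, -3.0, 0.5, 2.0, -1.0])

# Equivalent Chebyshev-basis coefficients b_i (algorithm "B" form).
b = np.polynomial.chebyshev.poly2cheb(a)

x = 0.7
p_power = np.polynomial.polynomial.polyval(x, a)  # sum a_i x^i (Horner-like)
p_cheb = np.polynomial.chebyshev.chebval(x, b)    # sum b_i T_i(x) (Clenshaw)

# Both forms give the same value; for high degrees and coefficients of
# widely varying magnitude, the Chebyshev form accumulates less rounding error.
assert abs(p_power - p_cheb) < 1e-12
```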
        <p>
          According to A. Ralston, convergence of the problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) is possible only for initial
approximations in the near neighborhood of the best polynomials when convergence
of the problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) is provided for any initial (n+2)-point sets. However, A. Ralston
did not give any practical recommendations regarding the initial choice [19].
        </p>
        <p>This drawback was eliminated by Werner's method, which is able to converge from an
arbitrary initial approximation. However, the complexity of the algorithm and its low
(linear) convergence rate (according to the author himself) do not allow this method to
be used efficiently in practice [20].</p>
        <p>
          The combined algorithm (CA) developed at GIC has proposed a method of
choosing the initial (n+2) point sets “close” enough to the target one. It ensures the SCI
convergence for the problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ). The CA uses Werner’s method at the first step to
produce good initial (n+2) point set for SCI. It eliminates the theoretically possible
cases of “degeneration” or “almost degeneration”. Then the SCI process continues
until the desired accuracy is reached. In contrast to the polynomial case (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ), in the problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ),
the CA start is controlled by given accuracy, and the systems of algebraic equations
(
          <xref ref-type="bibr" rid="ref4">4</xref>
          ) are nonlinear. The system (
          <xref ref-type="bibr" rid="ref4">4</xref>
          ) is linearised by eliminating unknowns using the
iterative secant method [14].
        </p>
        <p>
          Numerous examples of solving problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) by the CA algorithm never exhibited
“degeneracy” or “almost degeneracy”, confirming the stable convergence of the
Remez’ SCI for the problem (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ) as well as for the problem (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ).
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>Approximation of Functions of Many Variables</title>
        <p>The problem of multivariate function approximation f(X) = f(x_1, x_2, ..., x_m) is
solved by the generalized polynomials F_n(X) = Σ_{i=1}^{n} a_i φ_i(X) on a system of
linearly independent basis functions φ_1(X), ..., φ_n(X):
max_{X∈E} |f(X) − Σ_{j=1}^{n} z_j φ_j(X)| = L(z_1, z_2, ..., z_n) → min, (5)
where Z = (z_1, ..., z_n) is found as a solution to a special case of a uniformly best
approximation problem reduced to solving the following system of incompatible linear
equations:</p>
        <p>Δ_i(Z) = Σ_{j=1}^{n} z_j φ_j(X_i) − f(X_i), i = 1, ..., N. (6)</p>
        <p>The problems (5) and (6) are equivalent to the following linear programming
problem. By attaching to each function Δ_i(Z) = Σ_{j=1}^{n} a_{ij} z_j − b_i its “symmetric
copy” Δ_{N+i}(Z) = −Δ_i(Z) = −Σ_{j=1}^{n} a_{ij} z_j + b_i, the problem is reduced to the
following problem of algebraic minimax:</p>
        <p>Δ_i(Z) = Σ_{j=1}^{n} a_{ij} z_j − b_i, i = 1, ..., 2N; a_{N+i,j} = −a_{ij}; b_{N+i} = −b_i, (7)</p>
        <p>max_{1≤i≤2N} Δ_i(Z) = L(Z) → min = ρ, (8)</p>
        <p>that is, to finding such parameter values Z = (z_1, ..., z_n) that the value of the
quantity max_{1≤i≤2N} Δ_i(Z) is the smallest possible. In the equivalent linear
programming form:
ρ → min, Δ_i(ρ) = ρ − Δ_i(Z) = ρ − (Σ_{j=1}^{n} a_{ij} z_j − b_i) ≥ 0.</p>
        <p>The algorithm implements the direct and dual linear programming problems. The
leading one is the dual, which is solved by the modified simplex method (MSM), given
that in practice the number of equations N is much greater than the number of
unknowns n, and the “extended basis” table of size (n + 2) × (n + 4) for MSM is
significantly smaller than the reference table of the direct simplex method, of size
(n + 2) × N. Some of the techniques used allow reducing the simplex table by more
than half and transforming only the modified (compressed) simplex table, with the
reference table remaining unchanged during the dual MSM problem solution.
(See [3, 10, 14] for more details.)</p>
        <p>The described MSM algorithm computes the desired parameters z_1, z_2, ..., z_n of
the problem (7)-(8). The found parameter values are then used to find the values
Δ_{i_1}(Z), Δ_{i_2}(Z), ..., Δ_{i_{n+1}}(Z). Their smallest and largest modulo values A
and L are the lower and upper bounds of the best approximation value ρ, i.e.
A ≤ ρ ≤ L.</p>
        <p>The criterion for stopping the algorithm is fulfillment of the condition that the
difference L − A does not exceed the given accuracy.</p>
        <p>The techniques used in the algorithm can significantly reduce the number of
calculations and increase the accuracy [14].
</p>
      </sec>
      <sec id="sec-2-3">
        <title>Algorithm Optimization</title>
        <p>At GIC, the optimization of computational algorithms for accuracy and speed is
traditionally based on the developed general theory of total error estimation.</p>
        <p>The early results in this research direction have been obtained in Soviet Union in
the 1950s-60s for estimating the approximation method errors and the irremovable input data
errors by S.M. Nikolsky, V.K. Dzyadyk, N.P. Korneychuk, S. B. Stechkin et al. The
research was continued by A.N. Tikhonov, V.K. Ivanov, M.M. Lavrentiev. Since
1960s, interesting results on the rounding error estimates for numerical
implementations of algorithms were obtained by N.S. Bakhvalov, J.H. Wilkinson, V.V.
Voyevodin, V.A. Morozov, I. Babushka, S.L. Sobolev and others. With the
development of approximate methods of computational mathematics, the need for a
significant increase in the accuracy of their computer implementations has grown. An
important progress for the problem solution was obtained in the 1970s within the general
scientific theory of analysis and estimates of total errors (TE), including all kinds of
computational errors of algorithms, together with the first works on optimization of
approximate methods and algorithms for various areas of numerical analysis.</p>
        <p>GIC researchers were the first in the Soviet Union who, on the basis of this theory,
estimated TE for the methods and algorithms of operator and singular integral equation
solving, Fourier transforms, BCA. The research was continued in [12, 13, 21-30].</p>
        <p>These efforts led to the founding of the All-Union School for the Optimization of
Computing and to organizing the international scientific forum “Problems of Optimization
of Computing” held by GIC since 1969.</p>
        <p>
          Such TE estimates were developed at GIC for BCA of the problems (
          <xref ref-type="bibr" rid="ref1">1</xref>
          ) and (
          <xref ref-type="bibr" rid="ref2">2</xref>
          ).
They include both a priori and a posteriori deterministic majorant TE estimates. For
some function classes of specific structural properties the obtained TE estimates are
unimprovable by order with respect to some parameters. Inclusion of TE estimates
into the computational schemes of algorithms can significantly increase their
accuracy. In some cases, an order of magnitude improvement is achieved. Numerous
procedures and techniques are used for additional optimization of the algorithm accuracy
and speed. Among them, the input data preliminary processing is applied for
eliminating random inaccuracies [12-14, 27].
        </p>
        <p>Along with the basic BCA algorithms, using additional classes of approximants
such as exponential, logarithmic and the root of a polynomial extends the classes of
approximable functions.</p>
        <p>In order to increase the compression efficiency of big data arrays, an analysis of
the comparative BCA accuracy characteristics for different classes of approximants
has been carried out, and a class of generalized polynomials
Q_n(x) = z_1 φ_1(x) + ... + z_n φ_n(x) for systems of basis functions φ_1(x), ..., φ_n(x),
using a special case of reducing the problem (5) to the univariate version, has been
defined. A complex of BCA algorithms and programs for such polynomials has been
developed.</p>
        <p>A comparative analysis of the accuracy of approximations with different systems
of basis functions is executed to determine the most suitable system in order to further
improve the accuracy of solving the problem (5) in each specific case.</p>
        <p>In order to further increase the efficiency of BCA use for compressing big and
extra-big data arrays, a method of segment approximation has been developed. It is
based on selection of the optimal number of nodes for dividing the array into
subintervals with further applying BCA for each of them separately. Besides, the classes
of approximants on the subintervals are selected taking into account the features of
the interval dataset. This approach increases both the BCA accuracy at each
subinterval and the total compression ratio. It should be noted that the segment
approximation method for large arrays representing the functions with singularities
commonly provides much better accuracy compared to the approximation on the whole
interval [31, 32].</p>
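A minimal sketch of the segment-approximation idea follows, with an ordinary least-squares Chebyshev fit standing in for true BCA on each subinterval; the segment count, degree and test signal are illustrative assumptions, not parameters from the GIC software:

```python
import numpy as np

def segment_compress(x, y, n_segments, degree):
    # Split the array into equal parts and fit a low-degree
    # Chebyshev-basis approximant on each part separately.
    xs = np.array_split(x, n_segments)
    ys = np.array_split(y, n_segments)
    pieces = [np.polynomial.Chebyshev.fit(xi, yi, degree)
              for xi, yi in zip(xs, ys)]
    # Compression coefficient: array size over total parameter count.
    C = len(y) / (n_segments * (degree + 1))
    return pieces, C

def segment_restore(pieces, x, n_segments):
    # Recover the array from the piecewise analytical expression.
    xs = np.array_split(x, n_segments)
    return np.concatenate([p(xi) for p, xi in zip(pieces, xs)])

x = np.linspace(0.0, 10.0, 10_000)
y = np.sin(x) + 0.1 * x                  # a smooth "measured" dependency
pieces, C = segment_compress(x, y, n_segments=10, degree=6)
y_restored = segment_restore(pieces, x, 10)
```

Here 10,000 values are replaced by 70 coefficients (C ≈ 143), and the array is restored from the analytical expression with a small uniform error.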
        <p>The developed techniques for comprehensive BCA optimization provide benefits
of the developed tools in comparison with similar implementations such as [33, 34].
</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Obtained Results</title>
      <p>The developed BCA algorithms and programs were for many years used as a
component of the built-in software of Soviet computers, including the Macro Conveyor of the
MIR computer series, where they implemented the system-level functions in the basic
translator. In practice, their efficiency has been verified by many implementations in
Ukraine, in other republics of Soviet Union, and abroad. The BCA toolkit was used
for data array compression when solving various application problems in the fields of
science and technology (including defense), when computing the complex systems
characteristics in both static and dynamic mode of operations e.g. within the aircraft
structural strength modeling for Soviet Tupolev Design Bureau, and Antonov Design
Bureau (later Ukrainian Antonov Aeronautical Scientific-Technical Complex). The
BCA toolkit was also used to calculate trajectories of planets and artificial satellites;
transects and curves of transcontinental air pollution transport; profiles of highways
and railways (including the famous Baikal–Amur Mainline); state analysis and
forecasting for the Soviet Union energy system; the structural strength characteristics of
beams and ceilings in earthquake-prone zones.</p>
      <p>In the framework of the Chernobyl disaster consequences liquidation, the BCA
toolkit was used as a component of the current state calculation software used for
simulation of the Kiev water reservoir, the whole water system of the Dnieper
reservoir cascade, of Black Sea and all the lemans of North-Western Black Sea region and
of several other water objects [35].</p>
      <p>The BCA algorithms have been implemented in FORTRAN, Algol, Pascal and
then in C++. The last implementation is used by Ukrainian cluster supercomputers of
SCIT family as a component of its Basic Parallel Application Software (BPAS). The
SCIT BPAS includes the following BCA libraries: libpoly_apx.a (for the polynomial
approximation of univariate functions), libratfraction.a (for the fractional
rational approximation of univariate functions), libmany_var.a (for
approximation of multivariate functions by the systems of basis functions), icybmath.a (for
high precision computing the elementary and special mathematical functions),
many_var_interp.a (for interpolation of multivariate functions).</p>
      <p>It should be noted that the accuracy of the elementary and special functions provided
by the library is not less than 10^-21 with the number of coefficients not more than 10.
This is much better than the accuracy of conventional approximations (not more than 8
exact decimal digits).</p>
      <p>All the libraries have copyright certificates.</p>
      <p>The developed libraries can be used, inter alia, as reusable components for
distributed problem solving in the EGI computational grid for high-throughput computing.
Despite the fact that the BCA algorithms and software libraries are not internally
parallel, their use as an invariant BPAS component is supposed in a multitask mode
when solving many sub-problems concurrently. The mode is common for parallel
processing of big data arrays and this explains the libraries relevance for the High
Performance Computing (HPC) domain.</p>
      <p>The BPAS libraries implementation was funded by several projects including the
project of approximation subsystem development for compressing big arrays of
numeric data as a part of the Ukrainian Budget Committee Information and Analytical
System, and the NASU scientific and technical project for development of
software/hardware complexes [36].</p>
      <p>BCA by generalized polynomials together with algebraic ones is widely used in
solving many application problems for compressing big and extra-big
one-dimensional arrays (vectors) with up to 10 million elements, with
compression coefficients ranging from 100 to 500. For example, 1897 measurements of
water salinity at different depths of the Black Sea, compressed by polynomials of degrees
from 9 to 14 with compression ratios from 126 to 190, were reconstructed with an
error from 1.2% to 2.1%.</p>
      <p>Fig. 1 shows another example of BCA compression of a matrix by generalized
polynomials. A matrix of size 17,339 KB (2,250,000 numbers) has been
compressed down to 52 KB, providing the compression ratio C = 333.</p>
      <p>BCA of multivariate functions by generalized polynomials and approximants of
other classes was successfully applied for finding approximate solutions to systems of
incompatible linear equations, for finding analytical approximate solutions of linear
differential boundary value problems and initial boundary value problems of
mathematical physics, for solving linear Fredholm integral equations by minimizing the
maximum integral residual, and in other problems [37].</p>
      <p>It was also used in solving problems of calculating the structural strength
characteristics of reinforced concrete floors taking into account the conditions of precast
slabs. The results of these works in 2017 were presented at two International scientific
and practical conferences in Kharkov and Philadelphia (USA) and received positive
assessments and certificates [38, 39].</p>
      <p>Current work focuses on extending the BCA toolkit use for solving new classes of
application problems.
</p>
    </sec>
    <sec id="sec-4">
      <title>Conclusion</title>
      <p>Obtaining, for a number of applications, solutions more accurate than those of
other known methods confirms the relevance and efficiency of the BCA tools. Thus,
BCA can reasonably be recommended for solving other classes of application
problems.</p>
      <p>BCA efficiency in compression of big and extra big arrays of numerical data with
high accuracy and compression ratio provides several benefits such as storage
resource saving, noise tolerance, high data delivery speed in communication networks,
including the grid infrastructures of distributed computing, and guaranteed accuracy
of data recovery.</p>
      <p>Multiple applications of the BCA toolkit both confirmed its high efficiency and
helped in its further improvement and development.</p>
      <p>
        GIC work on solving complex optimization problems and on optimizing computation
accuracy and performance for various classes of application problems, including the
algorithms and software of Best Chebyshev Approximation, has made an important
contribution to the progress of computer technology.
5. Stiefel, E.: Note on Jordan elimination, linear programming and Tchebycheff
approximation. Numerische Mathematik 2(
        <xref ref-type="bibr" rid="ref1">1</xref>
        ), 1–17 (1960).
6. Stechkin S.B., Subbotin Y.N.: Splines in Computational Mathematics [in Russian]. Nauka,
      </p>
      <p>Moscow (1976).
7. Cody, W.J., Stoer, J.: Rational Chebyshev approximation using interpolation. Numerische</p>
      <p>
        Mathematik 9(
        <xref ref-type="bibr" rid="ref3">3</xref>
        ), 177–188 (1966).
8. Remez, E.Y.: Special topics of numerical construction of solutions to problems of the
Chebyshev approximation [in Russian]. In Proc. IV All-Union Mathematical Congress in
Leningrad 07/10/1961, vol. 2 (1964).
9. Alexandrenko, V.L., Kalenchuk-Porkhanova, A.A.: The program of polynomial
approximation of functions is one of the variants of the E.Ya. Remez’ method [in Russian]. In
Collection of standard routines for the Ural machine, pp. 9-41. KVIRTU, Kiev (1962).
10. Alexandrenko, V.L.: An algorithm for constructing an approximate uniformly-best
solution to a system of incompatible linear equations [in Russian]. In Algorithms and
Algorithmic Languages, 3, 57–74. CC AS USSR, Moscow (1968).
11. Porkhanova, A.A.: Chebyshev approximation by rational expressions [in Russian]. In
Comput. mathematics in modern times of scientific-techn. progress, pp. 300–308. Znannia,
Kiev (1974).
12. Kalenchuk-Porkhanova, A.A.: Algorithms and error analysis of the best Chebyshev
approximation of a function of one variable [in Russian]. In Proc. Theory of approximation
of functions, Kaluga, June 24–28, 1975, pp. 213–218. Moscow (1977).
13. Ivanov, V.V., Kalenchuk, A.A.: On the efficiency of algorithms for polynomial and
fractional rational Chebyshev approximations [in Russian]. In Proc. Constructive theory of
functions, pp. 72–77. Sofia (1983).
14. Kalenchuk-Porkhanova, A.A.: Approximation of functions of one and many variables [in
Russian]. In Numerical methods for multiprocessor computing complex of the ES, pp.
366–395. Zhukovsky AFEA, Moscow (1987).
15. Kalenchuk-Porkhanova, A.A.: Approximation apparatus for the analysis and synthesis of
complex systems [in Russian]. In Proc. 50 years of V.M. Glushkov Institute of Cybernetics
of the National Academy of Sciences of Ukraine, pp. 354–361. GIC NASU, Kiev, 2008.
16. Nathanson, I.P.: Constructive theory of functions [in Russian], pp. 49, 50–56.
      </p>
      <p>Gostekhizdat, Moscow-Leningrad (1949).
17. Akhiezer, N.M.: Lectures on the theory of approximation [in Russian], pp. 64–66. Nauka,</p>
      <p>Moscow (1965).
18. Bakhvalov, N.S.: On stable computation of polynomials [in Russian]. Computational</p>
      <p>Mathematics and Mathematical Physics, 2(6), 1568–1574 (1971).
19. Ralston, A.: Rational Chebyshev approximation by Remez algorithm. Numerische</p>
      <p>
        Mathematik, 7(
        <xref ref-type="bibr" rid="ref4">4</xref>
        ), 322–330 (1965).
20. Werner, H., Stoer, J., Bommas, W.: Rational Chebyshev approximation. Numerische
      </p>
      <p>
        Mathematik, 10(
        <xref ref-type="bibr" rid="ref4">4</xref>
        ), 289–306 (1967).
21. Babich, M.D., Ivanov, B.B.: Estimation of the total error in solving linear operator
equations by simple iteration [in Russian]. Computational Mathematics and Mathematical
Physics, 7(5), 988–1000 (1967).
22. Porkhanova, A.A.: On the study of the total error of the polynomial Chebyshev
approximation [in Russian]. In Problems of accuracy and efficiency of computational algorithms,
5, 127–137, IC AS UkrSSR, Kiev (1969).
23. Kalenchuk-Porkhanova, A.A.: Analysis of error estimates for algorithms of the Chebyshev
approximation. Problems of optimization and organization of calculations [in Russian].
      </p>
      <p>RDNTP, Kiev (1974).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Chebyshev</surname>
            ,
            <given-names>P.L.</given-names>
          </string-name>
          :
          <article-title>Full composition of writings [in Russian]</article-title>
          .
          <source>AS USSR</source>
          , Moscow, vol.
          <volume>2</volume>
          , pp.
          <fpage>23</fpage>
          -
          <lpage>51</lpage>
          , vol.
          <volume>2</volume>
          , pp.
          <fpage>151</fpage>
          -
          <lpage>235</lpage>
          , vol.
          <volume>3</volume>
          , pp.
          <fpage>240</fpage>
          -
          <lpage>255</lpage>
          (
          <year>1947</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Remez</surname>
            ,
            <given-names>E.Y.</given-names>
          </string-name>
          :
          <article-title>On sequential interpolation method [in Russian]</article-title>
          .
          <source>Ukrains'kyi Matematychnyi Zhurnal</source>
          ,
          <volume>12</volume>
          (
          <issue>2</issue>
          ),
          <fpage>170</fpage>
          -
          <lpage>179</lpage>
          (
          <year>1960</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Remez</surname>
            ,
            <given-names>E.Y.</given-names>
          </string-name>
          :
          <article-title>Fundamentals of numerical methods of the Chebyshev approximation</article-title>
          [in Russian].
          <source>Naukova Dumka</source>
          , Kiev (
          <year>1969</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Zukhovitsky</surname>
            ,
            <given-names>S.I.</given-names>
          </string-name>
          :
          <article-title>Algorithm for solving the Chebyshev approximation problem in the case of a finite system of incompatible linear equations [in Russian]</article-title>
          .
          <source>Proceedings of the USSR Academy of Sciences</source>
          ,
          <volume>79</volume>
          (
          <issue>4</issue>
          ),
          <fpage>561</fpage>
          -
          <lpage>564</lpage>
          (
          <year>1951</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          24.
          <string-name>
            <surname>Zadiraka</surname>
            ,
            <given-names>V.K.</given-names>
          </string-name>
          :
          <article-title>Theory of computing Fourier transforms</article-title>
          [in Russian].
          <source>Naukova Dumka</source>
          , Kiev (
          <year>1983</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          25.
          <string-name>
            <surname>Mikhalevich</surname>
            ,
            <given-names>V.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sergienko</surname>
            ,
            <given-names>I.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zadiraka</surname>
            ,
            <given-names>V.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Babich</surname>
            ,
            <given-names>M.D.</given-names>
          </string-name>
          :
          <article-title>On the issue of computing optimization</article-title>
          .
          <source>Cybernetics and System Analysis</source>
          ,
          <volume>30</volume>
          (
          <issue>2</issue>
          ),
          <fpage>65</fpage>
          -
          <lpage>94</lpage>
          (
          <year>1994</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          26.
          <string-name>
            <surname>Khimich</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nikolaevskaya</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          :
          <article-title>Reliability analysis of computer solutions of systems of linear algebraic equations with approximate initial data</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>44</volume>
          (
          <issue>6</issue>
          ),
          <fpage>863</fpage>
          -
          <lpage>874</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          27.
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Best Chebyshev approximation of functions of one and many variables</article-title>
          .
          <source>Cybernetics and systems analysis</source>
          ,
          <volume>45</volume>
          (
          <issue>6</issue>
          ),
          <fpage>155</fpage>
          -
          <lpage>164</lpage>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          28.
          <string-name>
            <surname>Sergienko</surname>
            ,
            <given-names>I.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khimich</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yakovlev</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          :
          <article-title>Methods for obtaining reliable solutions to systems of linear algebraic equations</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>47</volume>
          (
          <issue>1</issue>
          ),
          <fpage>62</fpage>
          -
          <lpage>73</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          29.
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Ways to increase the efficiency of calculations using the apparatus of the best Chebyshev approximation [in Russian]</article-title>
          .
          <source>In Proc. High performance computing (HPC-UA)</source>
          , Kiev, pp.
          <fpage>195</fpage>
          -
          <lpage>202</lpage>
          (
          <year>2012</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          30.
          <string-name>
            <surname>Sergienko</surname>
            ,
            <given-names>I.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shilo</surname>
            ,
            <given-names>V.P.</given-names>
          </string-name>
          :
          <article-title>Modern approaches to solving complex discrete optimization problems</article-title>
          .
          <source>J. of Automation and Information Sciences</source>
          ,
          <volume>48</volume>
          (
          <issue>1</issue>
          ),
          <fpage>15</fpage>
          -
          <lpage>24</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          31.
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.P.</given-names>
          </string-name>
          :
          <article-title>Seeking Optimal Knots for Segment Approximation</article-title>
          .
          <source>J. of Automation and Information Sciences</source>
          ,
          <volume>48</volume>
          (
          <issue>11</issue>
          ),
          <fpage>68</fpage>
          -
          <lpage>75</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          32.
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>E.S.</given-names>
          </string-name>
          :
          <article-title>Increasing the efficiency of Chebyshev segment fractional rational approximation</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>53</volume>
          (
          <issue>5</issue>
          ),
          <fpage>759</fpage>
          -
          <lpage>765</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          33.
          <string-name>
            <surname>Demyanov</surname>
            ,
            <given-names>V.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Malozemov</surname>
            ,
            <given-names>V.N.</given-names>
          </string-name>
          :
          <article-title>Introduction to minimax [in Russian]</article-title>
          .
          <source>Nauka</source>
          , Moscow (
          <year>1972</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          34.
          <string-name>
            <surname>Berdyshev</surname>
            ,
            <given-names>V.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Petrak</surname>
            ,
            <given-names>L.V.</given-names>
          </string-name>
          :
          <article-title>Function approximation, numerical information compression, applications</article-title>
          [in Russian].
          <source>UrD RAS</source>
          , Yekaterinburg
          (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          35.
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          :
          <article-title>Modeling Flow States in Water Bodies</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>55</volume>
          (
          <issue>4</issue>
          ),
          <fpage>683</fpage>
          -
          <lpage>691</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          36.
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.P.</given-names>
          </string-name>
          :
          <article-title>The approximation apparatus as part of supercomputer software with cluster architecture [in Russian]</article-title>
          .
          <source>Iskustvennyi Intellekt</source>
          ,
          <volume>1</volume>
          ,
          <fpage>158</fpage>
          -
          <lpage>165</lpage>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          37.
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>E.C.</given-names>
          </string-name>
          :
          <article-title>The use of Chebyshev approximations in solving mixed problems for partial differential equations [in Russian]</article-title>
          .
          <source>Bulletin of KNTU</source>
          ,
          <volume>42</volume>
          (
          <issue>3</issue>
          ),
          <fpage>119</fpage>
          -
          <lpage>123</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          38.
          <string-name>
            <surname>Azizov</surname>
          </string-name>
          , T.,
          <string-name>
            <surname>Melnyk</surname>
          </string-name>
          , O.,
          <string-name>
            <surname>Orlova</surname>
          </string-name>
          , O.,
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>On the calculation of reinforced concrete ceilings taking into account the change in torsional stiffness of prefabricated plates after the formation of normal cracks</article-title>
          .
          <source>Theoretical &amp; Applied Science</source>
          ,
          <volume>49</volume>
          (
          <issue>5</issue>
          ),
          <fpage>180</fpage>
          -
          <lpage>189</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          39.
          <string-name>
            <surname>Azizov</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melnyk</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Orlova</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kalenchuk-Porkhanova</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vakal</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>Calculation of reinforced concrete ceilings with normal cracks accounting of Chebyshev approximation</article-title>
          .
          <source>In Proc. 6th Int. Sc. Conf. Transbud</source>
          , Kharkiv, Ukraine, 15-21 April
          <year>2017</year>
          .
          <source>MATEC Web of Conferences</source>
          , vol.
          <volume>116</volume>
          , 02002
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>