           Explicit Univariate Global Optimization
           with Piecewise Linear Support Functions

                                          Oleg Khamisov⋆

                               Melentiev Energy Systems Institute,
                               Lermontov st. 130, Irkutsk, Russia
                                     khamisov@isem.irk.ru
                                   http://www.sei.irk.ru/



        Abstract. Piecewise linear convex and concave support functions, combined
        with Piyavskii's method, are proposed for solving global optimization
        problems. Rules for constructing the support functions are introduced.


Keywords: global minimum, support functions, piecewise concave and convex func-
tions.


1     Introduction
We study the effectiveness of a global optimization technique based on nonlinear support
functions. It is assumed that the objective and constraint functions are explicit functions,
and we take this term to mean the following. Given a set of elementary functions, an ex-
plicit function can be constructed from the elementary ones by means of the operations
of addition, subtraction, multiplication, and division of two functions, multiplication by
a constant, as well as the operations of composition and taking the maximum or minimum
of two functions. It is obvious that the problem of finding the global minimum of an
explicit function may be quite complicated because, generally, an explicit function is
not convex.
    The possibilities of optimizing explicit functions have already been studied. In [1] ex-
plicit functions are called factorable. By following step by step the process of constructing
an explicit function, we can also track the global minimum, or at least the range, of
the current function. The question is what is the best way to do this.
    One of the most commonly used approaches to global optimization is based on the
properties of the Lipschitz constant of the function to be minimized. The Lipschitz
constant is a uniform measure of the function's rate of change over the whole feasible set.
This is both its advantage and its drawback. The Lipschitz constant of a differentiable
function over a segment can be found as the maximum of the absolute value of its
derivative. After that, it is easy to define, step by step, an estimate of the true Lipschitz
constant for any particular explicit function [2].
⋆
    This work is supported by RFBR grant number 15-07-08986.
Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: A. Kononov et al. (eds.): DOOR 2016, Vladivostok, Russia, published at http://ceur-ws.org
    Methods of Lipschitz optimization ([3]–[9]) are reasonably easy to code quickly.
This is their indubitable advantage. The drawback is that every operation worsens the
estimate of the true Lipschitz constant, which immediately affects the efficiency. In addi-
tion, the Lipschitz constant estimates the rate of change of the objective uniformly over
the whole segment, so it is determined by the subsegments of fastest variation,
which generally may contain no points of global minimum. On the contrary, the func-
tion may change more slowly near points of global minimum. One way out is
to estimate the Lipschitz constant according to the search region [10]. Some methods also
estimate the Lipschitz constant on the basis of previously calculated values of the objec-
tive [11]. In this case, underestimation of the Lipschitz constant, and hence loss of the
true global optimum, may be a problem.
    Global search is much more efficient if the objective is known to be differentiable and
its derivative is known to satisfy the Lipschitz condition ([12]–[17]). Effective algorithms
for global minimization of one-dimensional functions over a segment are presented in
[18], [19], [14]. Obviously, for this class of functions the operations of taking the maximum
and minimum of two functions have to be excluded, because they do not preserve the
differentiability of the objective.
    Another successfully developing field of global optimization is so-called d.c. pro-
gramming [20]–[22]. A d.c. function is a function representable as a difference of two
convex continuous functions. A d.c. program is a problem of minimizing a d.c.
function subject to equality and inequality constraints also defined by d.c. functions.
Without considering in detail the wide variety of methods for solving d.c. problems, we only
note that finding a good representation of a given function as a difference of two convex
functions (if one exists) may turn out to be quite a complicated problem.
    For global minimization of a univariate explicit function, we present a method of
piecewise linear support functions [23], [24], which is a generalization of the well-known
Piyavskii method [25]. It should also be noted that the proposed method belongs to
the class of characteristical algorithms [26] and can be easily parallelized.
    The optimization technique suggested in the paper differs from the interval analysis
approach used in global optimization [27] and is closely related to automatic differenti-
ation [28], although it is not yet as well developed. In general, the approach of this paper can be
considered as an initial extension of automatic differentiation to global optimization.


2     The problem and piecewise linear support functions
We consider the global optimization problem

                                 f (x) → min, x ∈ [α, β],                             (1.1)

where α, β ∈ R, α ≤ β, and the function f(x) satisfies the following definition.
   Definition 1.1. A function f (x) is said to have a convex support function-majorant
and concave support function-minorant if there exist functions φ+ (x, y) and φ− (x, y)
such that
1. φ+ (x, y) is convex and continuous with respect to x at any fixed y ∈ [α, β];
2. φ− (x, y) is concave and continuous with respect to x at any fixed y ∈ [α, β];
3. φ− (x, y) ≤ f (x) ≤ φ+ (x, y) for any x, y ∈ [α, β];
4. φ− (y, y) = f (y) = φ+ (y, y) for any y ∈ [α, β].
The functions φ+(x, y) and φ−(x, y) are called the support function-majorant and the support
function-minorant, respectively.
    Any function satisfying Definition 1.1 is lower semicontinuous [24]. It is easy to see
that any locally Lipschitz function satisfies this definition. In this paper we propose to
construct φ+ (x, y) and φ− (x, y) as piecewise linear functions.
    Definition 2.1. A convex piecewise linear support function-majorant of a function
f(x) and a concave piecewise linear support function-minorant of the function f(x) over
a segment [α, β] at a support point y are functions φ+(x, y) and φ−(x, y)
that satisfy Definition 1.1 and have the form

              φ+(x, y) = max{ k1+(y)x + b1+(y), k2+(y)x + b2+(y) },               (2.1)

              φ−(x, y) = min{ k1−(y)x + b1−(y), k2−(y)x + b2−(y) },               (2.2)

where k1+(y), b1+(y), k2+(y), b2+(y), k1−(y), b1−(y), k2−(y), and b2−(y) are certain numbers.
     Constructing piecewise linear support functions for linear and convex functions
is obvious, so we consider here one of the most troublesome functions in global optimization,
f(x) = sin(x), and analyze the following four special cases.
     I. Point y ∈ [−π/2, 0]. The function sin(x) is convex over the segment [−π/2, y], hence
sin(x) ≥ sin(y) + cos(y)(x − y) for x ∈ [−π/2, y] and, consequently, k1− = cos(y), b1− = sin(y) −
k1− y. Over the segment [0, 3π/2], the graph of the function sin(x) lies above the tangent
line through the origin. The value v1 = 4.493409458 is found from the transcendental
equation sin(v) − cos(v)v = 0. Next, from the linear equation sin(v1) + cos(v1)(z − v1) =
−1 with respect to z, the solution z1 = 4.603338848 is obtained. Then, over the segment
[y, 3π/2], the graph of the function sin(x) lies above the straight line through the points
(y, sin(y)) and (z1, −1): sin(x) ≥ k2− x + b2−, x ∈ [y, 3π/2], k2− = (sin(y) + 1)/(y − z1),
b2− = sin(y) − k2− y.

     II. Point y ∈ [0, π/2]. Over the segment [−π/2, y], the graph of the function sin(x) lies
above the straight line through the points (−1, −1) and (y, sin(y)): sin(x) ≥ k1− x + b1−,
x ∈ [−π/2, y], k1− = (sin(y) + 1)/(y + 1), b1− = sin(y) − k1− y. Over the segment [π/2, 3π/2],
the graph of the function sin(x) passes above the tangent line through the point (π/2, 1).
The value v2 = 3.901918697 is a solution of the transcendental equation
sin(v) + cos(v)(π/2 − v) = 1 belonging to the segment [π/2, 3π/2]. Find the value
z2 = 4.330896607 as a solution of the linear equation sin(v2) + cos(v2)(z − v2) = −1 with
respect to z. It is easy to see that, over the segment [y, 3π/2], the graph of the function
sin(x) is located above the straight line through the points (y, sin(y)) and (z2, −1):
sin(x) ≥ k2− x + b2−, x ∈ [y, 3π/2], k2− = (sin(y) + 1)/(y − z2), b2− = sin(y) − k2− y.
                                     −

     III. Point y ∈ [π/2, π]. Over the segment [−π/2, π/2], the graph of the function sin(x)
lies above the tangent line through the point (π/2, 1). The value v3 = −0.7603260437 is
a solution of the transcendental equation sin(v) + cos(v)(π/2 − v) = 1 belonging to the
segment [−π/2, π/2]. The value z3 = −1.189303953 is obtained as a solution of the linear
equation sin(v3) + cos(v3)(z − v3) = −1 with respect to z. It is easy to see that, over
the segment [π/2, y], the graph of the function sin(x) is located above the straight line
through (z3, −1) and (y, sin(y)), and sin(x) ≥ k1− x + b1− for all x ∈ [−π/2, y], where
k1− = (sin(y) + 1)/(y − z3), b1− = sin(y) − k1− y. Next, over the segment [y, 3π/2], the
graph of the function sin(x) lies above the straight line through the points (y, sin(y)) and
(π + 1, −1): sin(x) ≥ k2− x + b2−, x ∈ [y, 3π/2], k2− = (sin(y) + 1)/(y − π − 1),
b2− = sin(y) − k2− y.

    IV. A support point y ∈ [π, 3π/2]. Over the segment [−π/2, π], the graph of the function
lies above the tangent line through the point (π, 0). The value v4 = −1.351816804 is
a solution of the transcendental equation sin(v) + cos(v)(π − v) = 0 belonging to
the segment [−π/2, 0]. Again, the value z4 = −1.461746193 is obtained as a solution
of the linear equation sin(v4) + cos(v4)(z − v4) = −1 with respect to z. Over the
segment [−π/2, y], the graph of the function sin(x) is located above the straight line
through the points (z4, −1) and (y, sin(y)): sin(x) ≥ k1− x + b1−, x ∈ [−π/2, y],
k1− = (sin(y) + 1)/(y − z4), b1− = sin(y) − k1− y. Over the segment [y, 3π/2], the function
sin(x) is convex, therefore k2− = cos(y), b2− = sin(y) − k2− y.
    To summarize, the rule for constructing a concave piecewise linear minorant of the
function sin(x) over the segment [−π/2, 3π/2] is the following.
Input: the function f(x) = sin(x) and a point y.
Step 1.
If −π/2 ≤ y ≤ 0, then k1− = cos(y), k2− = (sin(y) + 1)/(y − z1).
If 0 ≤ y ≤ π/2, then k1− = (sin(y) + 1)/(y + 1), k2− = (sin(y) + 1)/(y − z2).
If π/2 ≤ y ≤ π, then k1− = (sin(y) + 1)/(y − z3), k2− = (sin(y) + 1)/(y − π − 1).
If π ≤ y ≤ 3π/2, then k1− = (sin(y) + 1)/(y − z4), k2− = cos(y).
Step 2. b1− = sin(y) − k1− y, b2− = sin(y) − k2− y.
Step 3. Stop.
    In general, rules for constructing support functions are based on similar geometrical
ideas and can be found, for example, in [29].
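    As an illustration, the following minimal Python sketch implements Steps 1–2 of the above rule and evaluates the resulting minorant φ−(x, y) = min{k1− x + b1−, k2− x + b2−}; the constants Z1–Z4 are the numerical values z1–z4 given in cases I–IV, and all names are chosen here purely for illustration.

import math

# Numerical constants z1-z4 from cases I-IV (roots of the tangent-line equations).
Z1, Z2, Z3, Z4 = 4.603338848, 4.330896607, -1.189303953, -1.461746193

def sin_minorant_coeffs(y):
    """Coefficients (k1, b1, k2, b2) of the concave piecewise linear minorant
    of sin(x) over [-pi/2, 3*pi/2] at the support point y (Steps 1-2 above)."""
    if y < -math.pi / 2 or y > 3 * math.pi / 2:
        raise ValueError("y must lie in [-pi/2, 3*pi/2]")
    s = math.sin(y)
    if y <= 0:
        k1, k2 = math.cos(y), (s + 1) / (y - Z1)
    elif y <= math.pi / 2:
        k1, k2 = (s + 1) / (y + 1), (s + 1) / (y - Z2)
    elif y <= math.pi:
        k1, k2 = (s + 1) / (y - Z3), (s + 1) / (y - math.pi - 1)
    else:
        k1, k2 = (s + 1) / (y - Z4), math.cos(y)
    return k1, s - k1 * y, k2, s - k2 * y

def sin_minorant(x, y):
    """phi^-(x, y): the minimum of the two supporting lines."""
    k1, b1, k2, b2 = sin_minorant_coeffs(y)
    return min(k1 * x + b1, k2 * x + b2)

By construction, sin_minorant(x, y) ≤ sin(x) on the whole segment and sin_minorant(y, y) = sin(y).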


3    General rules for constructing piecewise linear support
     functions
Let a function f(x) and its support function-majorant φf+(x, y) and support function-
minorant φf−(x, y) be given:

              φf+(x, y) = max{ kf1+(y)x + bf1+(y), kf2+(y)x + bf2+(y) },               (3.1)

              φf−(x, y) = min{ kf1−(y)x + bf1−(y), kf2−(y)x + bf2−(y) }.               (3.2)

    Denote

              lf1+(x, y) = kf1+(y)x + bf1+(y),    lf2+(x, y) = kf2+(y)x + bf2+(y),

              lf1−(x, y) = kf1−(y)x + bf1−(y),    lf2−(x, y) = kf2−(y)x + bf2−(y).

    Then

              φf+(x, y) = max{ lf1+(x, y), lf2+(x, y) },    φf−(x, y) = min{ lf1−(x, y), lf2−(x, y) }.
Assume that for another function h(x) its support functions φh+(x, y) and φh−(x, y)
are also known. The goal of this section is to present rules for constructing support
function-minorants and function-majorants for the functions

       F1(x) = c·f(x),   F2(x) = f(x) + h(x),   F3(x) = f(x) − h(x),   F4(x) = f²(x),

       F5(x) = f(x)·h(x),   F6(x) = 1/f(x),   F7(x) = max{f(x), h(x)},

       F8(x) = min{f(x), h(x)},   F9(x) = f(h(x)),

where c is a constant.
3.1. The function F1(x) = c·f(x). It is obvious that

              φF1+(x, y) = max{ c·lf1+(x, y), c·lf2+(x, y) },   c ≥ 0,
              φF1+(x, y) = max{ c·lf1−(x, y), c·lf2−(x, y) },   c < 0;

              φF1−(x, y) = min{ c·lf1−(x, y), c·lf2−(x, y) },   c ≥ 0,
              φF1−(x, y) = min{ c·lf1+(x, y), c·lf2+(x, y) },   c < 0.
3.2. The function F2(x) = f(x) + h(x). In this case

              φF2+(x, y) = max{ lf1+(x, y) + lh1+(x, y), lf2+(x, y) + lh2+(x, y) },

              φF2−(x, y) = min{ lf1−(x, y) + lh1−(x, y), lf2−(x, y) + lh2−(x, y) }.
3.3. The function F3 (x) = f (x) − h(x). Obviously follows from 3.1 and 3.2.
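    For illustration, the rules 3.1–3.3 admit the following minimal Python sketch; it assumes (only for this sketch) that a support pair is stored as two lists of (slope, intercept) coefficients, one for the lines of the majorant and one for the lines of the minorant.

from typing import List, Tuple

Line = Tuple[float, float]                 # a line k*x + b stored as (k, b)
Support = Tuple[List[Line], List[Line]]    # (majorant lines, minorant lines)

def scale(c: float, s: Support) -> Support:
    """Rule 3.1: support functions of c*f(x) from those of f(x)."""
    major = [(c * k, c * b) for k, b in s[0]]
    minor = [(c * k, c * b) for k, b in s[1]]
    # multiplication by a negative constant swaps the majorant and the minorant
    return (major, minor) if c >= 0 else (minor, major)

def add(sf: Support, sh: Support) -> Support:
    """Rule 3.2: support functions of f(x) + h(x) (index-wise sums of the lines)."""
    major = [(kf + kh, bf + bh) for (kf, bf), (kh, bh) in zip(sf[0], sh[0])]
    minor = [(kf + kh, bf + bh) for (kf, bf), (kh, bh) in zip(sf[1], sh[1])]
    return major, minor

def subtract(sf: Support, sh: Support) -> Support:
    """Rule 3.3: f(x) - h(x) = f(x) + (-1)*h(x)."""
    return add(sf, scale(-1.0, sh))

def evaluate(s: Support, x: float) -> Tuple[float, float]:
    """Values of the majorant and the minorant at the point x."""
    return max(k * x + b for k, b in s[0]), min(k * x + b for k, b in s[1])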
3.4. The function F4(x) = f²(x). From (f(x) − f(y))² ≥ 0 we have f²(x) ≥
2f(x)f(y) − f²(y). To obtain φF4−(x, y), the rule from 3.1 is applied with c = 2f(y).
Find constants f̲ and f̄ such that f̲ ≤ f(x) ≤ f̄ for all x ∈ [α, β]. Calculate ẑ = f(y) and suppose
that f̲ < ẑ < f̄. Then

            z² ≤ (f̲ + ẑ)z − f̲ ẑ,  z ∈ [f̲, ẑ],        z² ≤ (f̄ + ẑ)z − f̄ ẑ,  z ∈ [ẑ, f̄].

Consequently, z² ≤ max{ (f̲ + ẑ)z − f̲ ẑ, (f̄ + ẑ)z − f̄ ẑ } for z ∈ [f̲, f̄]. After the substi-
tution z = f(x), ẑ = f(y), we obtain

            f²(x) ≤ max{ (f̲ + f(y))f(x) − f̲ f(y), (f̄ + f(y))f(x) − f̄ f(y) }.

Then use convex support majorants for the functions (f̲ + f(y))f(x) − f̲ f(y) and
(f̄ + f(y))f(x) − f̄ f(y).
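    The tangent and secant lines used in 3.4 can be computed as in the following small sketch, where z_hat, f_lo and f_hi are hypothetical names for ẑ, f̲ and f̄; composing the returned lines with the support functions of f(x) via rule 3.1 is then done as described above.

def square_outer_lines(z_hat, f_lo, f_hi):
    """Lines in the variable z = f(x) bounding z^2 on [f_lo, f_hi]:
    one tangent line from below at z_hat and two secant lines from above."""
    tangent = (2.0 * z_hat, -z_hat ** 2)            # z^2 >= 2*z_hat*z - z_hat^2
    secants = [(f_lo + z_hat, -f_lo * z_hat),       # lies above z^2 on [f_lo, z_hat]
               (f_hi + z_hat, -f_hi * z_hat)]       # lies above z^2 on [z_hat, f_hi]
    # z^2 <= max of the two secant lines on the whole [f_lo, f_hi]
    return tangent, secants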
3.5. The function F5(x) = f(x)·h(x). Since f(x)·h(x) = (f(x) + h(x))²/2 − f²(x)/2 − h²(x)/2,
support functions for F5(x) can be constructed by successive application of the rules
from 3.1–3.4.
3.6. The function F6(x) = 1/f(x). It is assumed that f(x) > 0 for all x ∈ [α, β]. The func-
tion F6(x) is a composition of the functions φ(z) = 1/z and f(x). Let 0 < f̲ < f(x) <
f̄ for all x ∈ [α, β] and denote ẑ = f(y). The function φ(z) is convex over the segment
[f̲, f̄]. Therefore F6(x) = φ(f(x)) ≥ 1/f(y) − (f(x) − f(y))/f²(y). A concave minorant of the function
−f(x)/f²(y) is constructed by the rule presented in 3.1 with c = −1/f²(y). A convex majorant
is constructed in the same way as in 3.4, with the auxiliary function z² replaced by
the function 1/z.
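    Analogously to 3.4, the one-sided linear estimates of 1/z on [f̲, f̄] used here admit a small sketch (again with the hypothetical names z_hat, f_lo, f_hi):

def reciprocal_outer_lines(z_hat, f_lo, f_hi):
    """Lines in z = f(x) bounding 1/z on [f_lo, f_hi], 0 < f_lo < f_hi:
    one tangent line from below at z_hat and two secant lines from above."""
    tangent = (-1.0 / z_hat ** 2, 2.0 / z_hat)      # 1/z >= 2/z_hat - z/z_hat^2
    secants = [(-1.0 / (f_lo * z_hat), 1.0 / f_lo + 1.0 / z_hat),
               (-1.0 / (f_hi * z_hat), 1.0 / f_hi + 1.0 / z_hat)]
    # 1/z <= max of the two secant lines on the whole [f_lo, f_hi]
    return tangent, secants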
     It is now easy to see that the construction of support majorants and minorants of a
concave function f(x) reduces to constructing support majorants and minorants of
the convex function −f(x), multiplying them by −1 and applying the rule from 3.1. In
this way support majorants and minorants for the elementary functions f6(x) = √x
and f7(x) = ln(x) are constructed.
3.7. The function F7(x) = max{f(x), h(x)}. Without loss of generality, let us as-
sume that max{f(y), h(y)} = f(y). Then F7(x) ≥ f(x) ≥ φf−(x, y), which means that
φf−(x, y) is a concave minorant of the function F7(x). On the other hand, F7(x) ≤
max{φf+(x, y), φh+(x, y)}. Since the functions φf+(x, y) and φh+(x, y) are convex in x,
the function ψ(x) = max{φf+(x, y), φh+(x, y)} is also convex in x. By constructing a sup-
port majorant of the function ψ(x) we simultaneously obtain a piecewise linear support
majorant for the function F7(x).
3.8. The function F8 (x) = min{f (x), h(x)}.
Since min{f (x), h(x)} = − max{−f (x), −h(x)}, support functions can be found by
application of the rules from 3.1 and 3.7.
3.9. The function F9(x) = f(h(x)). From (3.1) and (3.2) we have

   F9(x) ≤ φf+(h(x), h(y)) = max{ kf1+(h(y))h(x) + bf1+(h(y)), kf2+(h(y))h(x) + bf2+(h(y)) },   (3.7)

   F9(x) ≥ φf−(h(x), h(y)) = min{ kf1−(h(y))h(x) + bf1−(h(y)), kf2−(h(y))h(x) + bf2−(h(y)) }.   (3.8)

The right-hand sides of inequalities (3.7) and (3.8) are piecewise linear with
respect to the function h(x). Applying the rules presented in 3.7, 3.8, 3.1, and 3.2
suffices to construct support majorants and minorants.
    The rules introduced above for constructing piecewise linear support functions are
applicable to any explicit function. The process of constructing a support function
is in some ways analogous to the process of differentiating a function: both can be
automated. Hence, the rather scrupulous procedure of finding support functions can
be entrusted to a computer. Analytical representations of the required functions are
not always necessary; it is sufficient to have algorithms for computing their values at
any feasible point. For this reason the user needs only to set an objective explicitly;
everything else can be automated according to the above-described
rules.
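    As a hypothetical illustration of such automation, the following sketch chains rule 3.1 (multiplication by a constant) and rule 3.2 (addition) with the sin(x) minorant constructed in Section 2, producing the concave piecewise linear minorant of F(x) = a·sin(x) + b·x + d on [−π/2, 3π/2] at any support point y; it assumes a ≥ 0, since only the minorant of sin(x) was constructed above, and reuses sin_minorant_coeffs from the earlier sketch.

def affine_sin_minorant_lines(y, a, b, d):
    """Concave piecewise linear minorant lines of F(x) = a*sin(x) + b*x + d at the
    support point y, for x in [-pi/2, 3*pi/2]; assumes a >= 0 (otherwise the
    majorant of sin(x) would be required)."""
    if a < 0:
        raise ValueError("a >= 0 is assumed in this sketch")
    k1, b1, k2, b2 = sin_minorant_coeffs(y)
    # rule 3.1: scale both minorant lines by a >= 0;
    # rule 3.2: add the linear term b*x + d, whose own support lines coincide with it
    return [(a * k1 + b, a * b1 + d), (a * k2 + b, a * b2 + d)]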


4    A Modification of Piyavskii’s Method
The method for solving problem (1.1) proposed in this paper is a modification of the
well-known Piyavskii method [25]. Given points x1, ..., xk, the next point xk+1 is determined
as a global solution of the problem

                       ψk(x) = max{ φ−(x, xi) : 1 ≤ i ≤ k } → min, x ∈ [α, β].              (4.1)

After that, calculate the record fk+1 = min{f(xi) : i = 1, ..., k + 1}; if fk+1 −
ψk(xk+1) ≤ ε, where ε > 0 is the required precision, then a point xl, l ∈ {1, ..., k + 1}, such
that f(xl) = fk+1 is an ε-optimal solution. A starting point x1 ∈ [α, β] is arbitrary. If
the functions ψk(x) are uniformly bounded, every limit point of the sequence {xk} is a global
solution of problem (1.1) ([25]).
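    A simplified Python sketch of this basic scheme is given below; for clarity the auxiliary problem (4.1) is minimized on a dense grid rather than analytically, and the support minorant is assumed to be valid over the whole segment rather than built subsegment-wise as discussed next. The routine minorant_lines(y) must return the (slope, intercept) pairs of a concave piecewise linear minorant of f at the support point y.

import math
import numpy as np

def piyavskii_pl(f, minorant_lines, alpha, beta, eps=1e-4, max_iter=200, grid=2001):
    """Sketch of scheme (4.1): minorant_lines(y) returns the line coefficients of a
    concave piecewise linear minorant phi^-(x, y) of f valid on [alpha, beta]."""
    xs = np.linspace(alpha, beta, grid)
    points = [0.5 * (alpha + beta)]              # arbitrary starting point x1
    record = f(points[0])
    for _ in range(max_iter):
        # psi_k(x) = max over the support points of phi^-(x, x_i)
        psi = np.full_like(xs, -np.inf)
        for y in points:
            phi_minus = np.minimum.reduce([k * xs + b for k, b in minorant_lines(y)])
            psi = np.maximum(psi, phi_minus)
        j = int(np.argmin(psi))                  # x_{k+1}: global minimizer of psi_k
        x_next, lower_bound = float(xs[j]), float(psi[j])
        points.append(x_next)
        record = min(record, f(x_next))
        if record - lower_bound <= eps:          # stop when f_{k+1} - psi_k(x_{k+1}) <= eps
            break
    return record, points

For instance, combined with the sketches above, piyavskii_pl(lambda x: 2*math.sin(x) + 0.1*x, lambda y: affine_sin_minorant_lines(y, 2.0, 0.1, 0.0), -math.pi/2, 3*math.pi/2) globally minimizes 2·sin(x) + 0.1·x over the segment under the stated assumptions.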
    At the k-th iteration of the method, the points x1, ..., xk divide the segment [α, β]
into p subsegments [αj, βj], j = 1, ..., p, such that α1 = α, βp = β, βj = αj+1. The next
point xk+1 belongs to one of these subsegments, hence it is sufficient to build support
functions only over the subsegment containing xk+1. Such a strengthening of Piyavskii's
method has already been used in Lipschitz optimization [10]. Another strengthening
lies in the fact that a support function depends on the support point, providing a more
accurate piecewise linear approximation of the objective over a segment, in contrast
to [10], where the approximation accuracy is determined by the Lipschitz constant (though
over a smaller segment). The rules for constructing support functions presented in the
previous section are aimed at improving the accuracy of the piecewise linear approxima-
tion by shortening the current subsegment.
    The computational experiments reported in [29] show an acceleration of convergence by
a factor of 3 to 10 in comparison with the results presented in [4].


References

 1. McCormick, G.P.: Computability of global solutions to factorable nonconvex programs.
    Part I - Convex underestimating problems. Mathematical Programming 10, 147-175
    (1976).
 2. Sukharev, A.G.: Minimax Models in the Theory of Numerical Methods. Springer Science
    & Business Media, 258 p. (2012)
 3. Evtushenko, Yu.: Numerical methods for finding global extrema (case of a non-uniform
    mesh). USSR Comput. Maths. Math. Phys. 11(6), 38-54 (1971)
 4. Hansen, P., Jaumard, B.: Lipschitz optimization. In: Handbook of global optimization,
    pp.407-494. Kluwer Acad. Publ., Dordrecht (1995)
 5. Pinter J.: Global Optimization in Action. Kluwer Acad. Publ., Dordrecht (1996)
 6. Evtushenko Yu. G., Posypkin M. A.: Nonuniform covering method as applied to multicri-
    teria optimization problems with guaranteed accuracy. Computational Mathematics and
    Mathematical Physics Volume 53, Issue 2, pp 144-157 (2013)
 7. Evtushenko Yu. G., Malkova V.U., Stanevichyus A.A.: Parallel global optimization of
    functions of several variables. Computational Mathematics and Mathematical Physics
    Volume 49, Issue 2, pp 246-260 (2009)
 8. Strongin R. G., Sergeyev Ya. D.: Global optimization with non-convex constraints: Se-
    quential and parallel algorithms, Kluwer Academic Publishers, Dordrecht, 728 pp. (2000)
 9. Sergeyev Ya. D., Strongin R. G., Lera D.: Introduction to Global Optimization Exploiting
    Space-Filling Curves, Springer, New York, 135 pp. (2013)
10. Sergeyev Ya.D.: A one-dimensional deterministic global optimization algorithm. Comput.
    Maths. Math. Phys. 35, 705-717 (1995)
11. Wood G.R., Zhang B.P.: Estimation of the Lipschitz constant of a function. Journal of
    Global Optimization 8, 91-103 (1996)
12. Baritompa, W.: Accelerations for a variety of global optimization methods. Journal of
    Global Optimization 4, 37-45 (1994)
13. Breiman, L., Cutler, A.: A deterministic algorithm for global optimization. Mathematical
    Programming 58, 179-199 (1993)
14. Sergeyev Ya. D.: Global one-dimensional optimization using smooth auxiliary functions.
    Mathematical Programming 81(1), 127-146 (1998).
15. Kvasov D.E., Sergeyev Ya. D.: A univariate global search working with a set of Lipschitz
    constants for the first derivative. Optimization Letters 3(2), 303-318 (2009).
16. Kvasov D.E., Sergeyev Ya. D.: Univariate geometric Lipschitz global optimization algo-
    rithms. Numerical Algebra, Control and Optimization 2(1), 69-90 (2012).
17. Sergeyev Ya. D., Mukhametzhanov M.S., Kvasov D.E., Lera D.: Derivative-Free Local
    Tuning and Local Improvement Techniques Embedded in the Univariate Global Opti-
    mization. DOI: 10.1007/s10957-016-0947-5 (2016)
18. Gergel, V.P.: A global optimization algorithm with derivatives. In: Dinamika sistem i op-
    timizatsiya (System Dynamics and Optimization), pp.161-178. State University of Nizhni
    Novgorod, Nizhni Novgorod (1992) (in Russian)
19. Gergel, V.P.: A global optimization algorithm for multivariate functions with Lipschitzian
    first derivatives. Journal of Global Optimization 10, 257-281 (1997)
20. Tuy H.: D.C. optimization: theory, methods and algorithms. In: Handbook of global op-
    timization, pp.149-216. Kluwer Acad. Publ., Dordrecht (1995)
21. Horst, R., Tuy, H.: Global optimization. Deterministic approaches. Springer-Verlag, Berlin
    (1996)
22. Strekalovsky A.S.: On the Minimization of the Difference of Convex Functions on a Fea-
    sible Set. Comp. Math. And Math. Physics, 43(3) pp. 380-390 (2003)
23. Khamisov, O.V.: On optimization properties of functions with a concave minorant. Journal
    of Global Optimization 14, 79-101 (1999)
24. Khamisov, O.V.: Global optimization of functions with a concave support minorant. Com-
    put. Maths. Math. Phys. 44(9), 1473-1483 (2004)
25. Piyavskii, S.A.: An algorithm for finding the absolute extremum of a function. USSR
    Comput. Math. Math. Phys. 12, 57-67 (1972).
26. Grishagin, V.A., Sergeyev, Y.D., Strongin, R.G.: Parallel Characteristical Algorithms for
    Solving Problems of Global Optimization. Journal of Global Optimization, 10(2), 185-206
    (1997)
27. Hansen, E., Walster, G.: Global optimization using interval analysis. Second edition, re-
    vised and expanded. Marcel Dekker, New York, 489 pp. (2004)
28. Griewank, A., Walther, A.: Evaluating Derivatives: Principles and Techniques of Algo-
    rithmic Differentiation. Other Titles in Applied Mathematics 105, 426 pp. (2008)
29. Khamisov O.V., Yershov A.R.: Automatic global optimization. Discrete Analysis and
    Operations Research, Ser. 2, 11(2), 45-68 (in Russian) (2004)