=Paper= {{Paper |id=Vol-1987/paper36 |storemode=property |title=On a Solution of Fractional Programs via D.C. Optimization Theory |pdfUrl=https://ceur-ws.org/Vol-1987/paper36.pdf |volume=Vol-1987 |authors=Tatiana V. Gruzdeva,Alexander S. Strekalovsky }} ==On a Solution of Fractional Programs via D.C. Optimization Theory== https://ceur-ws.org/Vol-1987/paper36.pdf
                     On a Solution of Fractional Programs
                        via D.C. Optimization Theory

                                        Tatiana V. Gruzdeva
                                     Alexander S. Strekalovsky
              Matrosov Institute for System Dynamics and Control Theory of SB RAS
                                         Lermontov str., 134
                                       664033, Irkutsk, Russia
                                   gruzdeva@icc.ru, strekal@icc.ru




                                                        Abstract

                       This paper addresses fractional programming problems whose data
                       are given by d.c. functions. Such problems are, in general,
                       nonconvex (with numerous local extrema). We develop an efficient
                       global search method for fractional programs based on two different
                       approaches. The first approach develops Dinkelbach's idea and uses
                       the solution of an equation with the optimal value of an auxiliary
                       d.c. optimization problem with a vector parameter. The second one
                       deals with another auxiliary problem with nonconvex inequality
                       constraints. Both auxiliary problems are d.c. optimization problems,
                       which allows us to apply the Global Optimization Theory and develop
                       two corresponding global search algorithms. The algorithms have been
                       substantiated and tested on an extended set of sum-of-ratios problems
                       with up to 200 variables and 200 terms in the sum. We also propose an
                       algorithm for solving fractional programming problems that combines
                       the two approaches.




1    Introduction
In recent decades, problems with functions representable as a difference of two convex
functions (i.e. d.c. functions) have become one of the most attractive subjects in nonconvex optimization
[Hiriart-Urruty, 1985, Horst&Pardalos, 1995, Horst&Tuy, 1996, Evtushenko& Posypkin, 2013, Khamisov, 1999,
Strekalovsky, 2003, Strongin&Sergeyev, 2000, Tuy, 1998]. Actually, any continuous optimization problem can
be approximated by a d.c. problem with any prescribed accuracy [Hiriart-Urruty, 1985, Horst&Tuy, 1996,
Tuy, 1998]. In addition, the space of d.c. functions is a linear space; any polynomial and, moreover, any
twice differentiable function belongs to the space of d.c. functions [Hiriart-Urruty, 1985, Horst&Pardalos, 1995,
Horst&Tuy, 1996, Strekalovsky, 2003, Strongin&Sergeyev, 2000, Tuy, 1998]. Besides, this space is closed with
respect to all standard operations used in Analysis and Optimization. On account of this situation, the general

Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V.U. Malkova, M.A. Posypkin (eds.): Proceedings of
the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at http://ceur-ws.org




problem of mathematical optimization

                    (Pdc)       f0(x) := g0(x) − h0(x) ↓ min_x,  x ∈ S,
                                fj(x) := gj(x) − hj(x) ≤ 0,  j = 1, . . . , m,

where the functions gj(·), hj(·), j = 1, . . . , m, are convex on IRn and S ⊂ IRn is a convex set, can be considered
as a rather attractive object from theoretical and practical points of view.
   Recently, we have succeeded in developing the Global Optimality Conditions for Problem (Pdc ), which perfectly
fit optimization theory and proved to be rather efficient computationally [Strekalovsky, 2003,
Strekalovsky, 2014]. We now apply this theory to solving the following fractional optimization problem
[Bugarin et al., 2016, Schaible&Shi, 2003]

                    (Pf )       f(x) := ∑_{i=1}^{m} ψi(x)/φi(x) ↓ min_x,  x ∈ S,

where S ⊂ IRn is a convex set and ψi : IRn → IR, φi : IRn → IR,

                           (H0 )       ψi (x) > 0, φi (x) > 0 ∀x ∈ S, i = 1, . . . , m.

   It is well-known that the fractional programming problem is NP-complete [Freund&Jarre, 2001]. Surveys of
methods for solving this problem can be found in [Bugarin et al., 2016, Schaible&Shi, 2003], but the development
of new efficient methods for fractional programs still remains an important field of research in mathematical
optimization, because sum-of-ratios programs arise in various economic applications and real-life problems
[Schaible&Shi, 2003].
   Here we develop an efficient global search method for fractional programs, based on the two following ap-
proaches. Generalizing Dinkelbach's idea, we propose to reduce a fractional program with d.c. functions
to solving an equation with a vector parameter that satisfies the nonnegativity assumption. In this case, we
need to solve auxiliary d.c. minimization problems, i.e. Problem (Pdc ) with fi (x) ≡ 0, i ∈ I. We also
propose a reduction of the sum-of-ratios problem to the minimization of a linear function on a nonconvex feasible
set given by d.c. inequality constraints, i.e. Problem (Pdc ) where f0 (x) is a linear function. Thus, based on
the solution of these two particular cases of the general d.c. optimization problem (Pdc ), we develop a two-method
technology for solving a general fractional program, based on the Global Search Theory for d.c. optimization
[Strekalovsky, 2003, Strekalovsky, 2014].
   This paper is an extension of our research [Gruzdeva&Strekalovsky, 2017], where the theoretical framework
and some computational simulations were presented. The outline of the paper is as follows. In Sect. 2, we
describe two approaches to solving Problem (Pf ) using auxiliary d.c. optimization problems. In Sect. 3, we
present comparative computational testing of the two approaches on randomly generated fractional program
instances with up to 200 variables and 200 terms in the sum, and elaborate an algorithm for solving the
sum-of-ratios problem that combines the two proposed methods.

2     D.C. Programming Approach to Sum-of-Ratios Problems
Instead of considering a fractional program directly, we develop efficient global search methods, which are based
on two approaches. The first method adopts a reduction of the fractional minimization problem to the solution
of an equation with an optimal value of the d.c. optimization problem with a vector parameter. The second
method is based on the reduction of the sum-of-ratios problem to the optimization problem with nonconvex
constraints.

2.1   Reduction to the D.C. Minimization Problem
To solve a fractional program via d.c. minimization, we consider the following auxiliary optimization problem

                    (Pα )       Φ(x, α) := ∑_{i=1}^{m} [ψi(x) − αi φi(x)] ↓ min_x,  x ∈ S,

where α = (α1 , . . . , αm )⊤ ∈ IRm is a vector parameter.




   In this subsection we recall some results [Gruzdeva&Strekalovskiy, 2016] concerning the relations between
Problems (Pf ) and (Pα ). Note that the data of Problem (Pf ) must satisfy "the nonnegativity condition", i.e.
the following inequalities hold

                    (H(α))       ψi(x) − αi φi(x) ≥ 0  ∀x ∈ S,  i = 1, . . . , m.

   Let us introduce the optimal value function V(α) of Problem (Pα ):

                    V(α) := inf_x {Φ(x, α) | x ∈ S} = inf_x { ∑_{i=1}^{m} [ψi(x) − αi φi(x)] : x ∈ S }.

   In addition, suppose that the following assumptions are fulfilled:

                    (H1)    (a) V(α) > −∞  ∀α ∈ K, where K is a convex set from IRm ;
                            (b) ∀α ∈ K ⊂ IRm there exists a solution z = z(α) to Problem (Pα ).

   Further, suppose that in Problem (Pα0 ) the following equality takes place:

                    V(α0 ) := inf_x { ∑_{i=1}^{m} [ψi(x) − α0i φi(x)] : x ∈ S } = 0.                    (1)

      In [Gruzdeva&Strekalovskiy, 2016] we proved that any solution z = z(α0 ) to Problem (Pα0 ) is a solution to
   Problem (Pf ), so that z ∈ Sol(Pα0 ) ⊂ Sol(Pf ).
   Therefore, in order to verify the equality (1), we should be able to find a global solution to Problem (Pα )
for every α ∈ IRm+ . Since ψi (·), φi (·), i = 1, . . . , m, are simply convex or, more generally, d.c. functions, it can be
readily seen that Problem (Pα ) belongs to the class of d.c. minimization. As a consequence, in order to solve
Problem (Pα ), we can apply the Global Search Theory [Strekalovsky, 2003, Strekalovsky, 2014].
   Due to the theoretical foundation developed in [Gruzdeva&Strekalovskiy, 2016], we are able to avoid the direct
consideration of Problem (Pf ) and address the parametrized problem (Pα ) with α ∈ IRm+ . Hence, instead
of solving Problem (Pf ), we propose to combine the solving of Problem (Pα ) with a search with respect to the
parameter α ∈ IRm+ in order to find a vector α0 ∈ IRm+ such that V(α0 ) = 0.
   Denote Φi (x) := ψi (x) − αi φi (x), i = 1, . . . , m. Let [vk , uk ] be the k-th segment for varying α, let a
solution z(αk ) to Problem (Pαk ) be given, and let Vk := V(αk ) be computed.

                                                αk -bisection algorithm

Step 1. If Vk > 0, then set vk+1 = αk , αk+1 = (uk + αk )/2.

Step 2. If Vk < 0, then set uk+1 = αk , αk+1 = (vk + αk )/2.

Step 3. If Vk = 0 and min_i Φi (z(αk )) < 0, then set αik+1 = ψi (z(αk ))/φi (z(αk )) for all i such that
        Φi (z(αk )) < 0, and αik+1 = αik for all i such that Φi (z(αk )) ≥ 0.
        In addition, set vk+1 = 0, uk+1 = tk+1 αk+1 , where tk+1 = α+ / max_i αi .

Stop:    the values αk+1 , vk+1 and uk+1 have been computed.
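The bisection rule above can be sketched in code. The following is only an illustration of Steps 1–3 under our own naming, with α+ read as a scalar upper bound (our interpretation); in the actual method z(αk) comes from the global search for (Pαk), not from this function.

```python
import numpy as np

def alpha_bisection_step(alpha, v, u, Vk, Phi, ratios, alpha_plus):
    """One pass of the alpha^k-bisection rule (Steps 1-3); a sketch.

    Phi[i]    = Phi_i(z(alpha^k)) = psi_i(z) - alpha_i * phi_i(z),
    ratios[i] = psi_i(z(alpha^k)) / phi_i(z(alpha^k)),
    both evaluated at a solution z = z(alpha^k) of Problem (P_alpha^k);
    Vk = V(alpha^k).  Returns the new (alpha, v, u).
    """
    alpha, v, u = map(np.asarray, (alpha, v, u))
    if Vk > 0:                                    # Step 1
        return 0.5 * (u + alpha), alpha, u
    if Vk < 0:                                    # Step 2
        return 0.5 * (v + alpha), v, alpha
    # Step 3: Vk = 0 but min_i Phi_i(z(alpha^k)) < 0
    new_alpha = np.where(Phi < 0, ratios, alpha)  # redistribute alpha_i
    t = alpha_plus / new_alpha.max()
    return new_alpha, np.zeros_like(new_alpha), t * new_alpha
```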

   Further, let [0, α+ ] be an initial segment for varying α. To choose α+ we should take into account that, due
to (H(α)) and (H0 ), we have

            ∀i = 1, . . . , m :   αi ≤ fi (x) := ψi (x)/φi (x) ≤ ∑_{i=1}^{m} ψi (x)/φi (x) = f (x)   ∀x ∈ S,

so, for example, α+ can be chosen as αi+ = fi (x0 ), i = 1, . . . , m.




                                       Algorithm for solving the fractional problem

Step 0. (Initialization) k = 0, vk = 0, uk = α+ , αk = α+ /2 ∈ [vk , uk ].

Step 1. Find a solution z(αk ) to Problem (Pαk ) using the global search strategy for d.c. minimization
        [Strekalovsky, 2003, Strekalovsky, 2014].

Step 2. (Stopping criterion) If Vk := V(αk ) = 0 and min_i Φi (z(αk )) ≥ 0, then STOP: z(αk ) ∈ Sol(Pf ).

Step 3. Implement the αk -bisection algorithm to find new parameters αk+1 , vk+1 and uk+1 ; set k = k + 1 and go to Step 1.

   Let us emphasize that the algorithm for solving the fractional optimization Problem (Pf ) consists of three
basic stages: (a) the local and (b) the global searches in Problem (Pα ) with a fixed vector parameter α (Step 1),
and (c) the method for finding the vector parameter α at which the optimal value of Problem (Pα ) is zero,
i.e. V(α) = 0 (Step 3).
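To make the scheme concrete, here is a runnable sketch of Steps 0–3 on a toy two-ratio problem. The dense-grid solver standing in for the global search, the scalar reading of α+, and all tolerances are our own assumptions, not the authors' implementation.

```python
import numpy as np

# Toy instance: minimize (x^2+1)/x + (x+2)/1 over S = [1, 5] (discretized).
S = np.linspace(1.0, 5.0, 401)                           # feasible set
psis = [lambda x: x**2 + 1.0, lambda x: x + 2.0]         # psi_i > 0 on S
phis = [lambda x: x,          lambda x: np.ones_like(x)] # phi_i > 0 on S

def solve_P_alpha(alpha):
    """Toy global solver for (P_alpha): returns z(alpha) and V(alpha)."""
    vals = sum(p(S) - a * q(S) for p, q, a in zip(psis, phis, alpha))
    j = int(np.argmin(vals))
    return S[j], float(vals[j])

m, tol = 2, 1e-6
alpha_plus = max(float(np.max(p(S) / q(S))) for p, q in zip(psis, phis))
v, u = np.zeros(m), np.full(m, alpha_plus)
alpha = 0.5 * (v + u)                                    # Step 0
for _ in range(2000):
    z, Vk = solve_P_alpha(alpha)                         # Step 1
    Phi = np.array([p(z) - a * q(z) for p, q, a in zip(psis, phis, alpha)])
    if abs(Vk) <= tol and Phi.min() >= -tol:             # Step 2: stop
        break
    if Vk > tol:                                         # bisection, Step 1
        v, alpha = alpha, 0.5 * (u + alpha)
    elif Vk < -tol:                                      # bisection, Step 2
        u, alpha = alpha, 0.5 * (v + alpha)
    else:                                                # bisection, Step 3
        ratios = np.array([p(z) / q(z) for p, q in zip(psis, phis)])
        alpha = np.where(Phi < -tol, ratios, alpha)
        v, u = np.zeros(m), (alpha_plus / alpha.max()) * alpha

f = lambda x: sum(p(x) / q(x) for p, q in zip(psis, phis))
# the toy problem's minimum is attained at x = 1, where f = 5
assert abs(float(f(z)) - 5.0) < 1e-3
```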

   2.2    Reduction to the Problem with D.C. Constraints
In this subsection we reduce the fractional program to the following optimization problem with a nonconvex
feasible set

                    (P)         f0 := ∑_{i=1}^{m} αi ↓ min_{(x,α)} ,  x ∈ S,
                                fi := ψi (x) − αi φi (x) ≤ 0,  i = 1, . . . , m.
      In [Gruzdeva&Strekalovsky, 2016] it was proved that for any solution (x∗ , α∗ ) ∈ IRn × IRm to the problem
   (P), the point x∗ will be a solution to Problem (Pf ).
   Further, we solve problem (P) using the exact penalization approach for d.c. optimization developed in
[Strekalovsky, 2016, Strekalovsky, 2017]. To this end, we introduce the penalized problem as follows

                    (Pσ )       θσ (x) = f0 (x) + σ max{0, fi (x), i ∈ I} ↓ min,  x ∈ S.

It can be readily seen that the penalty function θσ (·) is a d.c. function, because the functions fi (·) = gi (·) −
hi (·), i ∈ I ∪ {0}, are d.c., and we can apply the global search theory to solving this class of nonconvex
problems [Strekalovsky, 2003, Strekalovsky, 2014].
   Actually, since σ > 0, θσ (x) = Gσ (x) − Hσ (x), where

                    Hσ (x) := h0 (x) + σ ∑_{i∈I} hi (x),

                    Gσ (x) := θσ (x) + Hσ (x) = g0 (x) + σ max{ ∑_{i∈I} hi (x); max_{i∈I} [gi (x) + ∑_{j∈I, j≠i} hj (x)] },

and it is clear that Gσ (·) and Hσ (·) are convex functions.
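As a quick sanity check of this decomposition, one can verify numerically that Gσ − Hσ reproduces θσ. The convex pieces gi, hi below are our own toy choices, purely for illustration (here I = {1, 2}).

```python
import numpy as np

# Toy convex pieces: index 0 is the objective pair (g0, h0),
# indices 1, 2 are the constraint pairs (I = {1, 2}).
g = [lambda x: x**2, lambda x: (x - 1.0)**2, lambda x: np.exp(0.5 * x)]
h = [lambda x: 0.5 * x**2, lambda x: np.abs(x), lambda x: x**2 / 4.0]
sigma = 3.0

def theta(x):
    """Penalized objective theta_sigma = f0 + sigma * max{0, f1, f2}."""
    f = [g[j](x) - h[j](x) for j in range(3)]
    return f[0] + sigma * max(0.0, f[1], f[2])

def H(x):
    """H_sigma = h0 + sigma * (h1 + h2)."""
    return h[0](x) + sigma * (h[1](x) + h[2](x))

def G(x):
    """G_sigma = g0 + sigma * max{h1 + h2; max_i [g_i + sum_{j != i} h_j]}."""
    inner = max(g[1](x) + h[2](x), g[2](x) + h[1](x))
    return g[0](x) + sigma * max(h[1](x) + h[2](x), inner)

# the identity theta_sigma = G_sigma - H_sigma holds pointwise
for x in np.random.default_rng(0).uniform(-3.0, 3.0, size=100):
    assert abs((G(x) - H(x)) - theta(x)) < 1e-9
```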
       Let the Lagrange multipliers, associated with the constraints and corresponding to the point z k , k ∈ {1, 2, ...},
   be denoted by λ := (λ1 , . . . , λm ) ∈ IRm .
                                                 Global search scheme

Step 1. Using the local search method from [Strekalovsky&Gruzdeva, 2007, Strekalovsky, 2015], find a critical point
        z k in (P).
Step 2. Set σk := ∑_{i=1}^{m} λi . Choose a number β : inf(Gσ , S) ≤ β ≤ sup(Gσ , S).
         Choose an initial β0 = Gσ (z k ), ζk = θσ (z k ).
Step 3. Construct a finite approximation Rk (β) = {v 1 , . . . , v Nk | Hσ (v i ) = β + ζk , i = 1, . . . , Nk , Nk = Nk (β)} of
        the level surface {Hσ (x) = β + ζk } of the function Hσ (·).
Step 4. Find a δk -solution ūi of the following linearized problem:

                                    (Pσ Li )      Gσ (x) − ⟨∇Hσ (v i ), x⟩ ↓ min_x ,  x ∈ S,

         so that Gσ (ūi ) − ⟨∇Hσ (v i ), ūi ⟩ − δk ≤ inf_x {Gσ (x) − ⟨∇Hσ (v i ), x⟩}.




Step 5. Starting from the point ūi , find a KKT-point                     ui     by    the   local   search   method   from
        [Strekalovsky&Gruzdeva, 2007, Strekalovsky, 2015].
Step 6. Choose the point uj : f0 (uj ) ≤ min{f0 (ui ), i = 1, . . . , Nk }.
Step 7. If f0 (uj ) < f0 (z k ), then set z k+1 = uj , k = k + 1 and go to Step 2.
Step 8. Otherwise, choose a new value of β and go to Step 3.
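The core of Step 4, minimizing the convex function Gσ(x) − ⟨∇Hσ(v), x⟩, can be illustrated on a one-dimensional toy d.c. function. A dense grid stands in for a convex solver, and repeating the step from the previous iterate gives a DCA-style local descent; this is our own illustration, not the authors' code.

```python
import numpy as np

# Toy d.c. objective theta = G - H with G, H convex;
# theta(x) = x^2 - 2|x| has its global minima at x = +-1.
S  = np.linspace(-3.0, 3.0, 6001)       # feasible set (discretized)
G  = lambda x: x**2                     # convex part
H  = lambda x: 2.0 * np.abs(x)          # concave part enters as -H
dH = lambda x: 2.0 * np.sign(x)         # a subgradient of H

theta = lambda x: G(x) - H(x)

x = 0.3                                 # starting point
for _ in range(20):
    # linearized problem: minimize G(x) - <dH(x_prev), x> over S
    x = S[np.argmin(G(S) - dH(x) * S)]

# the linearized steps descend to a global minimizer here
assert abs(abs(x) - 1.0) < 1e-3 and theta(x) <= theta(0.3)
```

Each linearized subproblem is convex, so in higher dimensions this step is exactly where a convex solver (CPLEX in the paper's experiments) is invoked.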

   The point z ∗ resulting from the global search strategy will be a solution to the original fractional program
[Gruzdeva&Strekalovsky, 2016]. It should be noted that, in contrast to the approach from Subsect. 2.1, the αi
will be found simultaneously with the solution vector x, because they are variables, although auxiliary ones, of
the optimization problem.

   3    Computational Simulations
The two approaches described above for solving fractional programs (Pf ) via d.c. optimization problems were
verified computationally. The algorithm based on the method for solving the fractional problem from Subsect. 2.1
(F1-algorithm) and the global search scheme from Subsect. 2.2 (F2-algorithm) were coded in C++ and applied to
an extended set of randomly generated test examples with up to 200 variables and 200 terms in the sum. All
computational experiments were performed on an Intel Core i7-4790K CPU, 4.0 GHz. All convex auxiliary
(linearized) problems arising at the steps of the F1- and F2-algorithms were solved by the software package
IBM ILOG CPLEX 12.6.2.
   Some of the computational experiments on fractional problems with quadratic functions in the numerators
of the ratios are presented here.
   We generated problems of the type proposed by [Jong, 2013] as follows:

            f (x) := ∑_{i=1}^{m} ( ⟨x, Ai x⟩ + ⟨bi , x⟩ ) / ⟨ci , x⟩ ↓ min_x ,  Qx ≤ q,  x ∈ [1, 5]n ,            (2)

where Ai = Ui D0i Ui⊤ , Ui = V1 V2 V3 , i = 1, . . . , m; Vj = I − 2 wj wj⊤ /∥wj ∥2 , j = 1, 2, 3; w1 = −i + rand(n, 1),
w2 = −2i + rand(n, 1), w3 = −3i + rand(n, 1); ci = i − i · rand(n, 1), bi = i + i · rand(n, 1), Q = −1 + 2 · rand(5, n),
q = 2 + 3 · rand(5, 1) [Jong, 2013]. (We denote by rand(k1 , k2 ) a random matrix with k1 rows and k2 columns
whose elements are generated randomly on [0, 1].)
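The generation recipe above can be sketched as follows. Since D0i is not fully specified in the text, we take it to be a random nonnegative diagonal matrix (our assumption), which makes each Ai symmetric positive semidefinite, so the numerators of (2) are convex; the entries of each ci are positive, consistent with (H0) on x ∈ [1, 5]^n.

```python
import numpy as np

def householder(w):
    """Reflection V = I - 2 w w^T / ||w||^2 (orthogonal and symmetric)."""
    return np.eye(len(w)) - 2.0 * np.outer(w, w) / np.dot(w, w)

def generate_instance(n, m, rng):
    """Random instance of (2); D0i is an assumed nonnegative diagonal."""
    A, b, c = [], [], []
    for i in range(1, m + 1):
        w1 = -i + rng.random(n)
        w2 = -2 * i + rng.random(n)
        w3 = -3 * i + rng.random(n)
        U = householder(w1) @ householder(w2) @ householder(w3)
        D = np.diag(rng.random(n))          # assumption: D0i = diag >= 0
        A.append(U @ D @ U.T)               # A_i = U_i D0i U_i^T
        b.append(i + i * rng.random(n))
        c.append(i - i * rng.random(n))     # entries in (0, i]
    Q = -1.0 + 2.0 * rng.random((5, n))
    q = 2.0 + 3.0 * rng.random(5)
    return A, b, c, Q, q

A, b, c, Q, q = generate_instance(n=10, m=4, rng=np.random.default_rng(1))
# each A_i is symmetric PSD, so every numerator <x, A_i x> + <b_i, x> is convex
assert all(np.allclose(Ai, Ai.T) and np.linalg.eigvalsh(Ai).min() > -1e-10
           for Ai in A)
```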

                          Table 1: Randomly generated problems (2) with quadratic functions

                                 n     m       f (x0 )       f (z)    Time F1      Time F2
                                10    10    36.738565    36.429457       1.73         2.22
                                10    50   161.613719   157.926620      10.98        18.49
                                10   100   303.448935   298.051190      20.90       122.03
                                10   200   609.211089   606.136790      38.68       501.31
                                50    10    33.151416    31.676804       4.05       124.25
                                50    50   156.152869   152.686192     292.17       918.91
                                50   100   304.067383   298.776803     179.12      2634.33
                                50   200   602.000116   588.454982     340.89     18132.53
                               100    10    31.809077    30.521716     104.81      1241.75
                               100    50   153.990301   149.812955     410.82      4986.00
                               100   100   302.938895   296.580627     492.11     13206.50
                               100   200   603.905018   591.954144    3678.78     55462.61
                               200    10    31.878970    30.021172     262.79     12878.94
                               200    50   152.632702   146.398920    1546.64     81156.93
                               200   100   306.462441   295.557105    2034.73    108858.27
                               200   200   603.988798   587.862890    9552.40    318381.50
   Note that the Problems (Pα ) corresponding to problems (2) turn out to be convex, because the constructed
functions ψi = ⟨x, Ai x⟩ + ⟨bi , x⟩, i = 1, . . . , m, are convex. That is why the F1-algorithm's run-time turned out
to be much smaller than the run-time of the F2-algorithm, in which it is necessary to solve nonconvex problems
with m d.c. constraints.




   The shortcoming of the F2-algorithm is the run-time spent on solving problems with d.c. constraints,
while the F1-algorithm spends a lot of run-time on finding the vector parameter α at which the optimal value of
Problem (Pα ) is zero.
   The results of the computational experiments suggest that solving fractional programs calls for a combination
of the two approaches. For example, we can use the solution to Problem (P) to search for the parameter
α that reduces the optimal value function of Problem (Pα ) to zero. This idea can be implemented by the
following algorithm.

                                              Two-component algorithm

Step 0. (Initialization) k = 0, v k = 0, uk = α+ .

Step 1. Starting from a feasible point xk , implement the local search method from [Strekalovsky&Gruzdeva, 2007,
        Strekalovsky, 2015] to find a critical point z k = (ẑ(αk ), αk ) in problem (P) with d.c. constraints.

Step 2. (Stopping criterion) If Vk := V(αk ) = 0 and min_i Φi (ẑ(αk )) ≥ 0, then STOP: ẑ(αk ) ∈ Sol(Pf ).

Step 3. Find a solution z(αk ) to Problem (Pαk ) using the global search strategy for d.c. minimization
        [Strekalovsky, 2003, Strekalovsky, 2014].

Step 4. (Stopping criterion) If Vk := V(αk ) = 0 and min_i Φi (z(αk )) ≥ 0, then STOP: z(αk ) ∈ Sol(Pf ).

Step 5. Implement the αk -bisection algorithm to find new parameters αk+1 , v k+1 and uk+1 .
        Set xk+1 = (z(αk ), αk+1 ), k = k + 1 and go to Step 1.
   The two-component algorithm was tested on a large set of test examples, and its run-time turned out to be
less than the run-time of the F1-algorithm.

   4    Conclusions
In this paper, we showed how fractional programs can be solved by applying the Global Search Theory of
d.c. optimization. The methods developed were substantiated and tested on an extended set of randomly
generated test examples. In addition, we proposed a two-component technology based on both the global search
algorithm for the d.c. minimization problem and the algorithm for solving the problem with d.c. constraints.

   Acknowledgements
This work is carried out under the financial support of the Russian Science Foundation (project no. 15-11-20015).

   References
   [Bugarin et al., 2016] Bugarin, F., Henrion, D. & Lasserre J.-B. Minimizing the sum of many rational functions.
            Mathematical Programming Computation, 8, 83-111. doi: 10.1007/s12532-015-0089-z
   [Evtushenko& Posypkin, 2013] Evtushenko, Y. & Posypkin, M. A deterministic approach to global box-
            constrained optimization. Optimization Letters, 7(4), 819-829. doi: 10.1007/s11590-012-0452-1
   [Freund&Jarre, 2001] Freund, R.W. & Jarre, F. Solving the sum-of-ratios problem by an interior-point method.
           Journal of Global Optimization, 19(1), 83-102.
   [Gruzdeva&Strekalovskiy, 2016] Gruzdeva, T.V. & Strekalovskiy, A.S. An approach to fractional programming
           via D.C. optimization. AIP Conference Proceedings, 1776, 090010. doi: 10.1063/1.4965374
   [Gruzdeva&Strekalovsky, 2016] Gruzdeva, T.V. & Strekalovsky, A.S. An Approach to Fractional Programming
           via D.C. Constraints Problem: Local Search. In: Yu. Kochetov, et al. (eds.) DOOR-2016. Lecture Notes
        in Computer Science, 9869 (pp.404-417). Heidelberg: Springer. doi: 10.1007/978-3-319-44914-2_32
   [Gruzdeva&Strekalovsky, 2017] Gruzdeva, T.V. & Strekalovsky, A.S. A D.C. Programming Approach to Frac-
           tional Problems. In: R. Battiti, D. Kvasov, Y. Sergeyev (eds.) ”Learning and Intelligent Optimization
           Conference LION11, Nizhny Novgorod, Russia”. Lecture Notes in Computer Science, 10556 Springer.
           To appear.




[Hiriart-Urruty, 1985] Hiriart-Urruty, J.B. Generalized Differentiability, Duality and Optimization for Problems
          dealing with Difference of Convex functions. In: Ponstein J. (ed.) Convexity and Duality in Optimization,
          Lecture Notes in Economics and Mathematical Systems, 256 (pp.37-69). Berlin: Springer–Verlag.

[Horst&Tuy, 1996] Horst, R. & Tuy, H. Global Optimization: Deterministic Approaches. Berlin: Springer–Verlag.
[Horst&Pardalos, 1995] Horst, R. & Pardalos P.M. (Eds.) Handbook of Global Optimization, Nonconvex Opti-
        mization and its Applications. Dordrecht: Kluwer Academic Publishers.
[Jong, 2013] Jong, Y.-Ch. An efficient global optimization algorithm for nonlinear sum-of-ratios problems. Op-
         timization Online http://www.optimization-online.org/DB_HTML/2012/08/3586.html
[Khamisov, 1999] Khamisov, O. On optimization properties of functions, with a concave minorant. Journal of
        Global Optimization, 14 (1), 79-101.
[Schaible&Shi, 2003] Schaible, S. & Shi, J. Fractional programming: the sum-of-ratios case. Optimization Meth-
         ods and Software, 18, 219-229.
[Strekalovsky, 2003] Strekalovsky, A.S. Elements of nonconvex optimization [in Russian]. Novosibirsk: Nauka.

[Strekalovsky, 2014] Strekalovsky, A.S. On solving optimization problems with hidden nonconvex structures. In
          T.M. Rassias, C.A. Floudas, & S. Butenko (Eds.), Optimization in science and engineering (pp.465-
          502). New York: Springer.
[Strekalovsky, 2015] Strekalovsky, A.S. On local search in d.c. optimization problems. Applied Mathematics and
          Computation, 255, 73-83. doi: http://dx.doi.org/10.1016/j.amc.2014.08.092
[Strekalovsky, 2016] Strekalovsky, A.S. On the merit and penalty functions for the d.c. optimization. In: Yu. Ko-
          chetov, et al. (eds.) DOOR-2016. Lecture Notes in Computer Science, 9869, (pp. 452–465). Heidelberg:
          Springer. doi: 10.1007/978-3-319-44914-2_36

[Strekalovsky, 2017] Strekalovsky, A.S. Global Optimality Conditions in Nonconvex Optimization. Journal of
          Optimization Theory and Applications, 173, 770-792. doi: 10.1007/s10957-016-0998-7

[Strekalovsky&Gruzdeva, 2007] Strekalovsky, A.S. & Gruzdeva, T.V. Local Search in Problems with
          Nonconvex Constraints. Computational Mathematics and Mathematical Physics, 47, 381-396.
          doi:10.1134/S0965542507030049

[Strongin&Sergeyev, 2000] Strongin, R.G. & Sergeyev, Ya.D. Global Optimization with Non-convex Constraints:
         Sequential and Parallel Algorithms. Dordrecht: Kluwer Academic Publishers.

[Tuy, 1998] Tuy, H. Convex Analysis and Global Optimization. Dordrecht: Kluwer Academic Publishers.



