Approximation of parametrically given polyhedral sets

Oleg V. Khamisov
Melentiev Energy Systems Institute
Irkutsk, Russia
khamisov@isem.irk.ru

Copyright © by the paper's authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). In: Sergei S. Goncharov, Yuri G. Evtushenko (eds.): Proceedings of the Workshop on Applied Mathematics and Fundamental Computer Science 2020, Omsk, Russia, 30-04-2020, published at http://ceur-ws.org

Abstract

In our paper we consider systems of linear inequalities which depend linearly on multidimensional parameters. A technique for approximating the set of parameters for which the considered system is consistent is described. The approximations are constructed by means of convex and concave piecewise linear functions. An illustrative example is given.

1 Introduction

A parametric polyhedral set D(p) is defined in the following way:

    D(p) = {x ∈ X : g_i(x, p) ≤ 0, i = 1, …, m},    (1)

where X ⊂ R^n is a polytope (X ≠ ∅) and g_i : R^n × R^q → R, i = 1, …, m, are bilinear functions, i.e. each function is linear in the variable x when the variable p is fixed and vice versa. The vector p is called a parameter and may vary within a given polytope P ⊂ R^q, p ∈ P. Define the set P^* = {p ∈ P : D(p) ≠ ∅}. In general, P^* is a nonconvex, disconnected and implicitly given set. We consider the following problem: find explicit outer and inner approximations P^*_out and P^*_in of P^*:

    P^*_in ⊂ P^* ⊂ P^*_out.

Similar problems were considered in [BorrelliEtAl03] and [JonesEtAl08] under some additional conditions which allow one to effectively apply the well elaborated parametric linear programming technique. Interval linear optimization [FiedlerEtAl06] is also tightly connected to the topic of our paper. Here, however, we suggest to use approximations generated by support function-majorants and support function-minorants [Khamisov99].

2 Approximation technique

Consider the following function:

    w(p) = min_{x∈X} max_{1≤i≤m} g_i(x, p).

Then the set P^* has the equivalent description

    P^* = {p ∈ P : w(p) ≤ 0}.

Let p̃ ∈ P be given, find

    x̃ ∈ Argmin_{x∈X} max_{1≤i≤m} g_i(x, p̃)    (2)

and define the function

    φ(p, p̃) = max_{1≤i≤m} g_i(x̃, p).    (3)

Since the functions g_i are linear in p, the function φ(·, p̃) is a convex piecewise linear function which satisfies the two conditions

    w(p̃) = φ(p̃, p̃),    w(p) ≤ φ(p, p̃)  ∀p ∈ P.    (4)

Due to these conditions the function φ(·, p̃) is called a support function-majorant of the function w.

Rewrite now the function w in the following way:

    w(p) = min_{x∈X} max_{1≤i≤m} g_i(x, p) = min_{x∈X} max_{u∈S_u} ∑_{i=1}^m u_i g_i(x, p) = max_{u∈S_u} min_{x∈X} ∑_{i=1}^m u_i g_i(x, p),    (5)

where S_u is the standard simplex, S_u = {u ∈ R^m : ∑_{i=1}^m u_i = 1, u_i ≥ 0, i = 1, …, m} (the interchange of min and max in the last equality is justified below via linear programming duality). Let again p̃ ∈ P be given. Solve the corresponding max-min problem in (5) and find x̃ and ũ such that

    w(p̃) = ∑_{i=1}^m ũ_i g_i(x̃, p̃).    (6)

Define the function

    ψ(p, p̃) = min_{x∈X} ∑_{i=1}^m ũ_i g_i(x, p).    (7)

Then, from (5) and (6), we have

    w(p̃) = ψ(p̃, p̃),    w(p) ≥ ψ(p, p̃)  ∀p ∈ P.    (8)

By construction the function ψ(·, p̃) is concave and, due to the properties (8), it is called a support function-minorant of the function w.
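Both φ(·, p̃) and ψ(·, p̃) are inexpensive to evaluate once x̃ and ũ are known. The following minimal sketch (Python, illustrative only and not part of the original exposition) shows one possible way to do this; it assumes that every bilinear function is stored as g_i(x, p) = xᵀA_i p + b_iᵀx + c_iᵀp + d_i and that the vertices of X are available, and all names are hypothetical.

```python
import numpy as np

def g(x, p, A, b, c, d):
    """One bilinear function g_i(x, p) = x'A p + b'x + c'p + d (assumed storage format)."""
    return x @ A @ p + b @ x + c @ p + d

def phi(p, x_tilde, G):
    """Support function-majorant (3): phi(p, p~) = max_i g_i(x~, p), with x~ from (2)."""
    return max(g(x_tilde, p, *Gi) for Gi in G)

def psi(p, u_tilde, G, vertices):
    """Support function-minorant (7): psi(p, p~) = min_{x in X} sum_i u~_i g_i(x, p).
    The objective is linear in x, so the minimum over the polytope X is attained
    at one of its vertices (cf. (12) below)."""
    return min(sum(ui * g(v, p, *Gi) for ui, Gi in zip(u_tilde, G))
               for v in vertices)
```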
Assume that w(p̃) ≤ 0 and define the set

    P_in(p̃) = {p ∈ P : φ(p, p̃) ≤ 0}.    (9)

From (4) we have w(p) ≤ 0 ∀p ∈ P_in(p̃). By construction P_in(p̃) is a convex polyhedral set. Therefore, P_in(p̃) is a convex polyhedral inner approximation of P^*. In the case w(p̃) > 0 we define the set

    P_∅(p̃) = {p ∈ P : ψ(p, p̃) > 0}.    (10)

From (8) we have w(p) > 0 ∀p ∈ P_∅(p̃). It follows from the concavity of ψ(·, p̃) that P_∅(p̃) is a convex set. Then the set P_out(p̃) = P \ P_∅(p̃) is an outer approximation of P^*.

It follows from (3) and (9) that the set P_in(p̃) has an explicit description as the polyhedron

    P_in(p̃) = {p ∈ P : g_i(x̃, p) ≤ 0, i = 1, …, m}.    (11)

The set P_out(p̃) is the complement within P of the convex set P_∅(p̃). Therefore, P_out(p̃) is nonconvex, can be disconnected and has a disjunctive structure [Balas18]. Assume that the vertices v^1, …, v^N of X are known, X = conv{v^1, …, v^N}. Then, from (7), we have

    ψ(p, p̃) = min_{1≤j≤N} ∑_{i=1}^m ũ_i g_i(v^j, p)    (12)

and

    P_∅(p̃) = {p ∈ P : ∑_{i=1}^m ũ_i g_i(v^j, p) > 0, j = 1, …, N}.    (13)

Define the sets

    P_j(p̃) = {p ∈ P : ∑_{i=1}^m ũ_i g_i(v^j, p) ≤ 0},  j = 1, …, N.

Then

    P_out(p̃) = ∪_{j=1}^N P_j(p̃),    (14)

i.e. P_out(p̃) is a union of polyhedra. Find scalars γ̃_j such that

    ∑_{i=1}^m ũ_i g_i(v^j, p) ≤ γ̃_j  ∀p ∈ P,  j = 1, …, N.

It is well known that by introducing 0-1 variables z_j, j = 1, …, N, the disjunctive structure of P_out(p̃) in (14) can be described as follows:

    P_out(p̃) = {p ∈ P : ∃z : ∑_{i=1}^m ũ_i g_i(v^j, p) ≤ γ̃_j z_j, z_j ∈ {0, 1}, j = 1, …, N, ∑_{j=1}^N z_j = N − 1}.

We can always assume that X is a simplex. Since X is bounded, it has an outer approximation by a simplex, and since X is defined by a system of linear inequalities, we can move all of these inequalities into the list of functions g_i in (1). In this case the new functions do not depend on the parameter p; however, the suggested approach remains correct.

Let us consider how x̃ and ũ corresponding to a given p̃ can be obtained. Rewrite the problem in (2) in the following way:

    min_{x,ξ} {ξ : g_i(x, p̃) ≤ ξ, i = 1, …, m, x ∈ X},    (15)

where ξ is an auxiliary unbounded scalar variable. Problem (15) is a linear programming problem which always has a finite solution. Let (x̃, ξ̃) be a solution. Then, obviously, x̃ satisfies inclusion (2) and w(p̃) = ξ̃. Write down the Lagrange function

    L(ξ, x, u) = ξ + ∑_{i=1}^m u_i (g_i(x, p̃) − ξ).

The Lagrange dual problem is

    max_{u≥0} min_{x∈X, ξ∈R} L(ξ, x, u) = max_{u≥0} min_{x∈X, ξ∈R} [ ξ (1 − ∑_{i=1}^m u_i) + ∑_{i=1}^m u_i g_i(x, p̃) ] = max_{u∈S_u} min_{x∈X} ∑_{i=1}^m u_i g_i(x, p̃),

i.e. ũ introduced in (6) is a dual solution of (15) (the inner minimum over ξ is finite only when ∑_{i=1}^m u_i = 1, which restricts u to the simplex S_u). Note also that (15) is a linear programming problem and hence can be easily solved for any given p̃ ∈ P.

Let us make some intermediate conclusions. For an arbitrary given p̃ ∈ P we solve the linear programming problem (15), obtaining a primal solution (x̃, ξ̃) and a dual solution ũ. If ξ̃ ≤ 0, then we know that system (1) is consistent not only for p = p̃ but also for all p ∈ P_in(p̃), where P_in(p̃) is given in (11). If ξ̃ > 0, then system (1) is inconsistent not only for p = p̃ but also for all p ∈ P_∅(p̃) given in (10). The main result here consists in the following: by checking a given parameter for feasibility we obtain a whole set of parameters with the same property.
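For a fixed p̃, problem (15) can be handed to any off-the-shelf LP solver. The sketch below (illustrative only, not the authors' implementation) uses scipy.optimize.linprog with the same hypothetical storage format g_i(x, p) = xᵀA_i p + b_iᵀx + c_iᵀp + d_i as above and assumes that X is a box, as in the example below; the way the dual vector ũ is read off, including its sign, depends on the solver and is an assumption here.

```python
import numpy as np
from scipy.optimize import linprog

def solve_lp15(p_tilde, G, x_bounds):
    """Solve problem (15) at p = p~ and return (x~, xi~, u~).
    G is a list of tuples (A_i, b_i, c_i, d_i); x_bounds is a list of (lo, hi)
    pairs describing the box X (an assumption of this sketch)."""
    n = len(x_bounds)
    # With p fixed, constraint i of (15) reads (A_i p~ + b_i)' x - xi <= -(c_i' p~ + d_i).
    A_ub = np.array([np.append(A @ p_tilde + b, -1.0) for A, b, c, d in G])
    b_ub = np.array([-(c @ p_tilde + d) for A, b, c, d in G])
    cost = np.zeros(n + 1)
    cost[-1] = 1.0                                   # minimize the auxiliary variable xi
    bounds = list(x_bounds) + [(None, None)]         # xi is unbounded
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x_tilde, xi_tilde = res.x[:n], res.x[-1]
    # Multipliers of the inequality constraints; the HiGHS backend reports them
    # as non-positive marginals, hence the sign flip.  By stationarity in xi
    # they sum to one, i.e. u~ lies on the simplex S_u.
    u_tilde = -res.ineqlin.marginals
    return x_tilde, xi_tilde, u_tilde
```

With the data of the example below and p̃ = 0.6, such a sketch should reproduce, up to rounding, the values x̃ = (5, −0.737) and ξ̃ ≈ −1.173 reported there.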
Example. Let P = [−0.2, 1.3] and X = {(x_1, x_2) : −5 ≤ x_j ≤ 5, j = 1, 2}. System (1) is defined by the following inequalities:

    g_1(x_1, x_2, p) = 5p x_1 + 10 x_2 + 2p − 10 ≤ 0,
    g_2(x_1, x_2, p) = −2x_1 − 3p x_2 − 5p + 10.5 ≤ 0.

The set P^* is the union of three intervals, P^* = [−0.2, −0.05] ∪ [0.075, 0.94] ∪ [1.235, 1.3]. The function w and the set P^* are shown in Fig. 1. In this example we check three values of the parameter for feasibility and construct the corresponding sets.

The first parameter is p^1 = 0.01. Solving problem (15) with p̃ = p^1 we obtain the corresponding primal solution x^1 = (5, 1.015), ξ^1 = 0.4196, and dual solution u^1 = (0.003, 0.997). Since w(p^1) = ξ^1 > 0, we have D(p^1) = ∅. The set X has four vertices v^1 = (−5, −5), v^2 = (−5, 5), v^3 = (5, −5), v^4 = (5, 5). The support function-minorant of w has the following form (see (12)):

    ψ(p, p^1) = min{9.901p + 20.2585, −20.009p + 20.5585, 10.051p + 0.3185, −19.859p + 0.6185}
              = min{10.051p + 0.3185, −19.859p + 0.6185}  ∀p ∈ [−0.2, 1.3].

The set P_∅(p^1) = {p : ψ(p, p^1) > 0} is the open interval (−0.0317, 0.0311). Therefore, D(p) = ∅ ∀p ∈ (−0.0317, 0.0311). Hence P_out(p^1) = P \ P_∅(p^1) = [−0.2, −0.0317] ∪ [0.0311, 1.3]. See Fig. 1 for a geometrical interpretation of the function ψ(·, p^1) and the set P_∅(p^1).

Take now the second value of the parameter, p^2 = 0.6. The corresponding problem (15) (p̃ = p^2) has the solutions x^2 = (5, −0.737), ξ^2 = −1.1729, u^2 = (0.153, 0.847). Since ξ^2 < 0, the set D(p^2) ≠ ∅. From (3) we obtain

    φ(p, p^2) = max{27p − 17.37, −2.789p + 0.5},

P_in(p^2) = {p : φ(p, p^2) ≤ 0} = [0.179, 0.643] and D(p) ≠ ∅ ∀p ∈ [0.179, 0.643].

The third parameter is p^3 = 1.1. The corresponding primal and dual solutions are x^3 = (5, −1.857), ξ^3 = 1.1286, u^3 = (0.248, 0.752). In this case ξ^3 > 0 and D(p^3) = ∅. The support function-minorant is

    ψ(p, p^3) = min{14.216p − 14.504, −8.344p + 10.296, −20.744p + 25.336}.

The set P_∅(p^3) = {p : ψ(p, p^3) > 0} is the interval (1.02, 1.22).

Figure 1: Geometrical interpretation of the Example.

3 An approximation procedure

A parameter p ∈ P such that D(p) ≠ ∅ is called a feasible parameter; otherwise it is called an infeasible parameter. Below we describe a procedure which is based on generating a finite number of random parameters in P and checking their feasibility. Since every checked parameter generates a set containing either only feasible parameters or only infeasible parameters, such a procedure is a covering type procedure. In general, we finish with a collection of sets of feasible parameters and a collection of sets of infeasible parameters. It is assumed that the vertices v^1, …, v^N of the set X are known. We also fix the maximum number of iterations k̄ > 1 in advance. The procedure has the following description.

Step 0. Set 𝒫_in = ∅, 𝒫_∅ = ∅, k = 1, k_in = 0, k_∅ = 0.
Step 1. Choose randomly a parameter p^k ∈ P.
Step 2. If p^k belongs to one of the sets already stored in 𝒫_in, then go to Step 7.
Step 3. If p^k belongs to one of the sets already stored in 𝒫_∅, then go to Step 7.
Step 4. Solve problem (15) with p̃ = p^k. Let (x^k, ξ^k) and u^k be the primal and dual solutions.
Step 5. If ξ^k > 0, then define P_∅^k = P_∅(p̃) in (13) for p̃ = p^k and ũ = u^k, and set 𝒫_∅ = 𝒫_∅ ∪ {P_∅^k}, k_∅ = k_∅ + 1.
Step 6. If ξ^k ≤ 0, then define P_in^k = P_in(p̃) in (11) for p̃ = p^k and x̃ = x^k, and set 𝒫_in = 𝒫_in ∪ {P_in^k}, k_in = k_in + 1.
Step 7. Set k = k + 1.
Step 8. If k > k̄, then stop; otherwise go to Step 1.

When the procedure stops we have a collection 𝒫_in = {P_in^1, …, P_in^{k_in}} of feasible parameter sets and a collection 𝒫_∅ = {P_∅^1, …, P_∅^{k_∅}} of infeasible parameter sets. Therefore, we can define P^*_in and P^*_out in the following way:

    P^*_in = ∪_{i=1}^{k_in} P_in^i,    P^*_out = P \ ( ∪_{i=1}^{k_∅} P_∅^i ).

In the example considered above we have k̄ = 3, k_in = 1, k_∅ = 2 and 𝒫_in = {[0.179, 0.643]}, 𝒫_∅ = {(−0.0317, 0.0311), (1.02, 1.22)}. Therefore,

    P^*_in = [0.179, 0.643],    P^*_out = [−0.2, −0.0317] ∪ [0.0311, 1.02] ∪ [1.22, 1.3].

Theoretically, the suggested procedure is finite, since P is a compact set and hence can be covered by a finite number of convex sets. However, in order to increase efficiency, other covering techniques, e.g. those borrowed from global optimization [EvtushenkoEtAl17], can be used.
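The procedure of Steps 0–8 is straightforward to implement on top of the earlier sketches. The version below is illustrative only and reuses the hypothetical helpers g, phi, psi and solve_lp15 introduced above; P is assumed to be a box in R^q, each stored set P_in(p^k) is represented by its certificate x^k (see (11)), and each P_∅(p^k) by the dual certificate u^k (see (13)).

```python
import numpy as np

def covering_procedure(G, vertices, x_bounds, p_bounds, k_max, rng=None):
    """Covering procedure of Section 3: sample k_max parameters in the box P
    and collect certificates of feasibility (x~) and infeasibility (u~)."""
    if rng is None:
        rng = np.random.default_rng()
    P_in_certs, P_empty_certs = [], []                        # Step 0
    lo = np.array([b[0] for b in p_bounds])
    hi = np.array([b[1] for b in p_bounds])
    for _ in range(k_max):                                    # Steps 7-8: iteration counter
        p = lo + (hi - lo) * rng.random(len(p_bounds))        # Step 1: random p in P
        if any(phi(p, x_t, G) <= 0 for x_t in P_in_certs):    # Step 2: p already known feasible
            continue
        if any(psi(p, u_t, G, vertices) > 0 for u_t in P_empty_certs):  # Step 3: known infeasible
            continue
        x_t, xi_t, u_t = solve_lp15(p, G, x_bounds)           # Step 4
        if xi_t > 0:
            P_empty_certs.append(u_t)                         # Step 5: store P_empty(p) via (13)
        else:
            P_in_certs.append(x_t)                            # Step 6: store P_in(p) via (11)
    return P_in_certs, P_empty_certs
```

Note that the membership tests in Steps 2 and 3 reduce to evaluating φ and ψ at the sampled point, so no additional linear program is solved for a parameter that is already covered by one of the stored sets.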
Acknowledgements

The paper was supported by RFBR grant No. 18-07-01432.

References

[Balas18] E. Balas. Disjunctive Programming. Springer Nature Switzerland AG, 2018.

[BorrelliEtAl03] F. Borrelli, A. Bemporad, and M. Morari. Geometric algorithm for multiparametric linear programming. Journal of Optimization Theory and Applications, 108(3):515–540, 2003.

[EvtushenkoEtAl17] Yu. G. Evtushenko, M. A. Posypkin, L. A. Rybak, and A. V. Turkin. Finding sets of solutions to systems of nonlinear inequalities. Computational Mathematics and Mathematical Physics, 57(8):1241–1247, 2017.

[FiedlerEtAl06] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear Optimization Problems with Inexact Data. Springer, 2006.

[JonesEtAl08] C. N. Jones, E. C. Kerrigan, and J. M. Maciejowski. On polyhedral projection and parametric programming. Journal of Optimization Theory and Applications, 138(2):207–220, 2008.

[Khamisov99] O. Khamisov. On optimization properties of functions with a concave minorant. Journal of Global Optimization, 14:79–101, 1999.