<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Approximation of parametrically given polyhedral sets</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleg V. Khamisov</string-name>
          <email>khamisov@isem.irk.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Melentiev Energy Systems Institute Irkutsk</institution>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2020</year>
      </pub-date>
      <volume>108</volume>
      <issue>3</issue>
      <abstract>
        <p>In this paper we consider systems of linear inequalities which depend linearly on multidimensional parameters. A technique for approximating the set of parameters for which the considered system is consistent is described. The approximations are constructed by means of convex and concave piece-wise linear functions. An illustrative example is given.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>A parametric polyhedral set D(p) is defined in the following way:
D(p) = {x ∈ X : g_i(x, p) ≤ 0, i = 1, . . . , m},  (1)
where X ⊂ R^n is a polytope (X ≠ ∅) and g_i : R^n × R^q → R, i = 1, . . . , m, are bilinear functions, i.e. each function is linear in the variable x when the variable p is fixed, and vice versa. The vector p is called a parameter and may vary within a given polytope P ⊂ R^q, p ∈ P. Define the set
P* = {p ∈ P : D(p) ≠ ∅}.
In general, P* is a nonconvex, disconnected and implicitly given set. We consider the following problem: find an explicit outer approximation P*_out and an explicit inner approximation P*_in of P*, so that P*_in ⊆ P* ⊆ P*_out.</p>
      <p>Similar problems were considered in [BorrelliEtAl03] and [JonesEtAl08] under additional conditions which allow one to apply the well-elaborated parametric linear programming technique effectively. Interval linear optimization [FiedlerEtAl06] is also tightly connected to the topic of our paper. Here, however, we suggest using approximations generated by support function-majorants and support function-minorants [Khamisov99].</p>
    </sec>
    <sec id="sec-3">
      <title>Support function-majorant and support function-minorant</title>
      <p>Consider the following function:
w(p) = min_{x ∈ X} max_{1 ≤ i ≤ m} g_i(x, p).
Then the set P* has the equivalent description
P* = {p ∈ P : w(p) ≤ 0}.
Let p˜ ∈ P be given, take
x˜ ∈ Argmin_{x ∈ X} max_{1 ≤ i ≤ m} g_i(x, p˜)  (2)
and define the function
φ(p, p˜) = max_{1 ≤ i ≤ m} g_i(x˜, p).  (3)
Since the functions g_i are linear in p, the function φ(·, p˜) is a convex piece-wise linear function which satisfies the two conditions
w(p˜) = φ(p˜, p˜),  w(p) ≤ φ(p, p˜) ∀p ∈ P.  (4)
Due to these conditions the function φ is called a support function-majorant of the function w.</p>
      <p>Rewrite now the function w in the following way:
w(p) = min_{x ∈ X} max_{1 ≤ i ≤ m} g_i(x, p) = min_{x ∈ X} max_{u ∈ S_u} ∑_{i=1}^{m} u_i g_i(x, p) = max_{u ∈ S_u} min_{x ∈ X} ∑_{i=1}^{m} u_i g_i(x, p),  (5)
where S_u is the standard simplex, S_u = {u ∈ R^m : ∑_{i=1}^{m} u_i = 1, u_i ≥ 0, i = 1, . . . , m}. Let again p˜ ∈ P be given. Solve the corresponding max-min problem in (5) and find x˜ and u˜ such that
w(p˜) = ∑_{i=1}^{m} u˜_i g_i(x˜, p˜).  (6)
Define the function
ψ(p, p˜) = min_{x ∈ X} ∑_{i=1}^{m} u˜_i g_i(x, p).  (7)
Then, from (5) and (6), we have
w(p˜) = ψ(p˜, p˜),  w(p) ≥ ψ(p, p˜) ∀p ∈ P.  (8)
By construction the function ψ(·, p˜) is concave and, due to the properties (8), is called a support function-minorant of the function w.</p>
      <p>Assume that w(p˜) ≤ 0 and define the set
Pin(p˜) = {p ∈ P : φ(p, p˜) ≤ 0}.  (9)
From (4) we have w(p) ≤ 0 ∀p ∈ Pin(p˜). By construction Pin(p˜) is a convex polyhedral set. Therefore, Pin(p˜) is a convex polyhedral inner approximation of P*. In the case w(p˜) &gt; 0 we define the set
P∅(p˜) = {p ∈ P : ψ(p, p˜) &gt; 0}.  (10)
From (8) we have w(p) &gt; 0 ∀p ∈ P∅(p˜). It follows from the concavity of ψ(·, p˜) that P∅(p˜) is a convex set. Then the set Pout(p˜) = P \ P∅(p˜) is an outer approximation of P*.</p>
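      <p>The sandwich ψ(p, p˜) ≤ w(p) ≤ φ(p, p˜) implied by (4) and (8) can be checked numerically. The following sketch is not part of the original text: it hard-codes the example data given below (p˜ = 0.6, x˜ = (5, −0.737), u˜ = (0.153, 0.847)), evaluates φ via (3) and ψ via the vertices of X as in (12), and bounds w(p) from above by minimizing over a finite candidate set that contains x˜, so every asserted inequality holds rigorously despite the rounding of x˜ and u˜.</p>
      <preformat><![CDATA[
```python
# Sketch (not the authors' code): verify psi(p, p~) <= w(p) <= phi(p, p~)
# on the paper's example with p~ = 0.6, x~ = (5, -0.737), u~ = (0.153, 0.847).

def g1(x1, x2, p):  # first bilinear constraint function of the example
    return 5 * p * x1 + 10 * x2 + 2 * p - 10

def g2(x1, x2, p):  # second bilinear constraint function of the example
    return -2 * x1 - 3 * p * x2 - 5 * p + 10.5

VERTICES = [(-5, -5), (-5, 5), (5, -5), (5, 5)]  # vertices of the box X
x_t, u_t = (5.0, -0.737), (0.153, 0.847)         # x~, u~ for p~ = 0.6

def phi(p):  # support function-majorant, eq. (3)
    return max(g1(*x_t, p), g2(*x_t, p))

def psi(p):  # support function-minorant via the vertices of X, eq. (12)
    return min(u_t[0] * g1(*v, p) + u_t[1] * g2(*v, p) for v in VERTICES)

def w_upper(p, n=40):
    # Upper bound on w(p) = min_x max_i g_i(x, p): minimize over a finite
    # candidate set containing x~, hence w_upper(p) <= phi(p) by construction,
    # while psi(p) <= w(p) <= w_upper(p) for any u~ in the simplex.
    pts = [x_t] + [(-5 + 10 * i / n, -5 + 10 * j / n)
                   for i in range(n + 1) for j in range(n + 1)]
    return min(max(g1(x1, x2, p), g2(x1, x2, p)) for x1, x2 in pts)

for p in [-0.2, 0.0, 0.3, 0.6, 0.9, 1.3]:
    assert psi(p) <= w_upper(p) + 1e-9 <= phi(p) + 1e-9
```
]]></preformat>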
    </sec>
    <sec id="sec-6">
      <p>It follows from (3) and (9) that the set Pin(p˜) has the explicit description as the polyhedron
Pin(p˜) = {p ∈ P : g_i(x˜, p) ≤ 0, i = 1, . . . , m}.  (11)
The set Pout(p˜) is the complement of the convex set P∅(p˜). Therefore, Pout(p˜) is nonconvex, can be disconnected and has a disjunctive structure [Balas18]. Assume that the vertices v_1, . . . , v_N of X are known, X = conv{v_1, . . . , v_N}. Then, from (7), we have
ψ(p, p˜) = min_{1 ≤ j ≤ N} ∑_{i=1}^{m} u˜_i g_i(v_j, p)  (12)
and
P∅(p˜) = {p ∈ P : ∑_{i=1}^{m} u˜_i g_i(v_j, p) &gt; 0, j = 1, . . . , N}.  (13)</p>
    </sec>
    <sec id="sec-7">
      <p>Define the sets
P_j(p˜) = {p ∈ P : ∑_{i=1}^{m} u˜_i g_i(v_j, p) ≤ 0}, j = 1, . . . , N.
Then
Pout(p˜) = ∪_{j=1}^{N} P_j(p˜),  (14)
i.e. Pout(p˜) is a union of polyhedrons. Find scalars γ˜_j such that
∑_{i=1}^{m} u˜_i g_i(v_j, p) ≤ γ˜_j ∀p ∈ P, j = 1, . . . , N.
It is well known that, by introducing 0-1 variables z_j, j = 1, . . . , N, the disjunctive structure of Pout(p˜) in (14) can be described as follows:
Pout(p˜) = {p ∈ P : ∃z : ∑_{i=1}^{m} u˜_i g_i(v_j, p) ≤ γ˜_j z_j, z_j ∈ {0, 1}, j = 1, . . . , N, ∑_{j=1}^{N} z_j = N − 1}.</p>
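      <p>As a small sketch of this 0-1 description (not the authors' code; it hard-codes the example data given below, with p˜ = p_1 = 0.01 and u˜ = (0.003, 0.997)), membership in Pout(p˜) can be tested by enumerating the binary vectors z, with each γ˜_j obtained by maximizing the corresponding affine function of p over the interval P at its endpoints.</p>
      <preformat><![CDATA[
```python
# Sketch: membership test for Pout(p~) via the disjunctive 0-1 description,
# using the paper's example (p~ = 0.01, u~ = (0.003, 0.997), P = [-0.2, 1.3]).
from itertools import product

P_LO, P_HI = -0.2, 1.3
VERTICES = [(-5, -5), (-5, 5), (5, -5), (5, 5)]  # vertices of the box X
u_t = (0.003, 0.997)                             # u~ for p~ = 0.01

def g1(x1, x2, p): return 5*p*x1 + 10*x2 + 2*p - 10
def g2(x1, x2, p): return -2*x1 - 3*p*x2 - 5*p + 10.5

def h(j, p):  # affine function p -> sum_i u~_i g_i(v_j, p)
    x1, x2 = VERTICES[j]
    return u_t[0]*g1(x1, x2, p) + u_t[1]*g2(x1, x2, p)

# gamma~_j = max over p in P of h(j, p); h(j, .) is affine, so the
# maximum is attained at an endpoint of the interval P.
gamma = [max(h(j, P_LO), h(j, P_HI)) for j in range(4)]

def in_pout(p):
    # p is in Pout(p~) iff there is z in {0,1}^N with sum z = N - 1
    # and h(j, p) <= gamma~_j * z_j for all j.
    for z in product((0, 1), repeat=4):
        if sum(z) == 3 and all(h(j, p) <= gamma[j] * z[j] for j in range(4)):
            return True
    return False
```
]]></preformat>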
      <p>We can always assume that X is a simplex. Since X is bounded, it has an outer approximation by a simplex, and since X is defined by a system of linear inequalities we can move all of these inequalities into the list of functions g_i in (1). The new functions do not depend on the parameter p; however, the suggested approach remains correct.</p>
      <p>Let us consider how the x˜ and u˜ corresponding to a given p˜ can be obtained. Rewrite the problem in (2) in the following way:
min_{x, ξ} {ξ : g_i(x, p˜) ≤ ξ, i = 1, . . . , m, x ∈ X},  (15)
where ξ is an auxiliary unbounded scalar variable. Problem (15) is a linear programming problem which always has a finite solution. Let (x˜, ξ˜) be a solution. Then, obviously, x˜ satisfies inclusion (2) and w(p˜) = ξ˜. Write down the Lagrange function
L(ξ, x, u) = ξ + ∑_{i=1}^{m} u_i (g_i(x, p˜) − ξ).</p>
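      <p>Problem (15) can be passed to any LP solver. For the two-constraint example given below it can even be solved exactly without one: min_x max{g_1, g_2} over a box is attained either at a box vertex or where the line g_1 = g_2 crosses a box edge. The following sketch exploits that observation (example data hard-coded; not the authors' code).</p>
      <preformat><![CDATA[
```python
# Sketch: solve problem (15) for the example's data without an LP solver.
# For m = 2 constraints over the box X = [-5, 5]^2, min_x max(g1, g2) is
# attained at a vertex of X or where g1 = g2 crosses an edge of X.

def g1(x1, x2, p): return 5*p*x1 + 10*x2 + 2*p - 10
def g2(x1, x2, p): return -2*x1 - 3*p*x2 - 5*p + 10.5

def solve_15(p, lo=-5.0, hi=5.0):
    cands = [(a, b) for a in (lo, hi) for b in (lo, hi)]  # vertices of X
    # intersections of g1 = g2 with the vertical edges x1 = a:
    for a in (lo, hi):
        den = 10 + 3*p                    # coefficient of x2 in g1 - g2
        if abs(den) > 1e-12:
            x2 = (-2*a - 5*p + 10.5 - (5*p*a + 2*p - 10)) / den
            if lo <= x2 <= hi:
                cands.append((a, x2))
    # intersections of g1 = g2 with the horizontal edges x2 = b:
    for b in (lo, hi):
        den = 5*p + 2                     # coefficient of x1 in g1 - g2
        if abs(den) > 1e-12:
            x1 = (-3*p*b - 5*p + 10.5 - (10*b + 2*p - 10)) / den
            if lo <= x1 <= hi:
                cands.append((x1, b))
    x = min(cands, key=lambda v: max(g1(*v, p), g2(*v, p)))
    return x, max(g1(*x, p), g2(*x, p))   # (x~, xi~), where w(p) = xi~
```
]]></preformat>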
    </sec>
    <sec id="sec-9">
      <title>The Lagrange dual problem is max min</title>
      <p>u≥0 x∈X; ∈R</p>
      <p>L(ξ, x, u) = max min
u≥0 x∈X; ∈R
{ (
ξ 1 −
m
∑ ui
i=1
)</p>
      <p>m
+ ∑ uigi(x, p˜)
i=1
}</p>
      <p>m
= max min ∑ uigi(x, p˜),
u∈Su x∈X i=1
i.e. u˜ introduced in (6) is a dual solution of (15). Note also, that (15) is a linear programming problem and
hence can be easily solved for any given p˜ ∈ P .</p>
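      <p>For m = 2 the dual solution u˜ can also be recovered directly from the max-min form in (5): writing u = (t, 1 − t), the inner minimum over X is attained at a vertex of X, so the dual reduces to maximizing, over t ∈ [0, 1], a minimum of finitely many affine functions of t. A sketch on the example data given below (not the authors' code):</p>
      <preformat><![CDATA[
```python
# Sketch: recover u~ for m = 2 by solving the max-min problem in (5) directly.
# For fixed u = (t, 1 - t), the minimum of t*g1 + (1-t)*g2 over the box X is
# attained at a vertex, so the dual is a max over t of a min of affine lines.

def g1(x1, x2, p): return 5*p*x1 + 10*x2 + 2*p - 10
def g2(x1, x2, p): return -2*x1 - 3*p*x2 - 5*p + 10.5

VERTICES = [(-5, -5), (-5, 5), (5, -5), (5, 5)]  # vertices of the box X

def dual_solution(p):
    # line for vertex v: value(t) = t*g1(v) + (1-t)*g2(v) = t*(g1-g2)(v) + g2(v)
    lines = [(g1(*v, p) - g2(*v, p), g2(*v, p)) for v in VERTICES]
    lower = lambda t: min(a*t + b for a, b in lines)
    # candidates for the maximizer: interval ends and pairwise line crossings
    cands = [0.0, 1.0]
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            (a1, b1), (a2, b2) = lines[i], lines[j]
            if abs(a1 - a2) > 1e-12:
                t = (b2 - b1) / (a1 - a2)
                if 0.0 <= t <= 1.0:
                    cands.append(t)
    t_best = max(cands, key=lower)
    return (t_best, 1.0 - t_best), lower(t_best)  # (u~, w(p~))

u_t, w_val = dual_solution(0.01)
```
]]></preformat>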
      <p>Let us draw some intermediate conclusions. For an arbitrary given p˜ ∈ P we solve the linear programming problem (15), obtaining a primal solution (x˜, ξ˜) and a dual solution u˜. If ξ˜ ≤ 0 then we know that system (1) is consistent not only for p = p˜ but also ∀p ∈ Pin(p˜), where Pin(p˜) is given in (11). If ξ˜ &gt; 0 then system (1) is inconsistent not only for p = p˜ but also ∀p ∈ P∅(p˜) in (10). The main result here consists in the following: by checking a single parameter for feasibility we obtain a whole set of parameters with the same property.</p>
      <p>Example. Let P = [−0.2, 1.3] and X = {(x_1, x_2) : −5 ≤ x_j ≤ 5, j = 1, 2}. System (1) is defined by the following inequalities:
g_1(x_1, x_2, p) = 5p x_1 + 10 x_2 + 2p − 10 ≤ 0,
g_2(x_1, x_2, p) = −2 x_1 − 3p x_2 − 5p + 10.5 ≤ 0.
The set P* is a union of three intervals, P* = [−0.2, −0.05] ∪ [0.075, 0.94] ∪ [1.235, 1.3]. The function w and the set P* are shown in Fig. 1. In this example we check three values of the parameter for feasibility and construct the corresponding sets.</p>
      <p>First parameter: p_1 = 0.01. Solving problem (15) with p˜ = p_1 we obtain the primal solution x^1 = (5, 1.015), ξ^1 = 0.4196, and the dual solution u^1 = (0.003, 0.997). Since w(p_1) = ξ^1 &gt; 0, we have D(p_1) = ∅. The set X has four vertices, v_1 = (−5, −5), v_2 = (−5, 5), v_3 = (5, −5), v_4 = (5, 5). The support function-minorant of w has the following form (see (12)):
ψ(p, p_1) = min{9.901p + 20.2585, −20.009p + 20.5585, 10.051p + 0.3185, −19.859p + 0.6185} = min{10.051p + 0.3185, −19.859p + 0.6185} ∀p ∈ [−0.2, 1.3].
The set P∅(p_1) = {p : ψ(p, p_1) &gt; 0} is the open interval (−0.0317, 0.0311). Therefore, D(p) = ∅ ∀p ∈ (−0.0317, 0.0311). Hence Pout(p_1) = P \ P∅(p_1) = [−0.2, −0.0317] ∪ [0.0311, 1.3]. See Fig. 1 for a geometrical interpretation of the function ψ(·, p_1) and the set P∅(p_1).</p>
      <p>Take now the second value of the parameter, p_2 = 0.6. The corresponding problem (15) (p˜ = p_2) has the solutions x^2 = (5, −0.737), ξ^2 = −1.1729, u^2 = (0.153, 0.847). Since ξ^2 &lt; 0, the set D(p_2) ≠ ∅. From (3) we obtain
φ(p, p_2) = max{27p − 17.37, −2.789p + 0.5},
Pin(p_2) = {p : φ(p, p_2) ≤ 0} = [0.179, 0.643], and D(p) ≠ ∅ ∀p ∈ [0.179, 0.643].</p>
      <p>Third parameter: p_3 = 1.1. The corresponding primal and dual solutions are x^3 = (5, −1.857), ξ^3 = 1.1286, u^3 = (0.248, 0.752). In this case ξ^3 &gt; 0 and D(p_3) = ∅. The support function-minorant is
ψ(p, p_3) = min{14.216p − 14.504, −8.344p + 10.296, −20.744p + 25.336}.</p>
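      <p>The interval endpoints reported in this example follow directly from the affine pieces of φ and ψ. A small sketch (the helper names are ours, not the paper's; the pieces are the ones listed above):</p>
      <preformat><![CDATA[
```python
# Sketch: recover the example's intervals from the affine pieces (a, b),
# each piece being the function p -> a*p + b, restricted to P = [-0.2, 1.3].

P_LO, P_HI = -0.2, 1.3

def where_max_le_zero(pieces):
    # {p in P : a*p + b <= 0 for every piece}: Pin-type sets, eq. (9)
    lo, hi = P_LO, P_HI
    for a, b in pieces:
        if a > 0:
            hi = min(hi, -b / a)
        elif a < 0:
            lo = max(lo, -b / a)
        elif b > 0:
            return None                   # constant positive piece: empty set
    return (lo, hi) if lo <= hi else None

def where_min_gt_zero(pieces):
    # {p in P : a*p + b > 0 for every piece}: P_empty-type sets, eq. (10)
    lo, hi = P_LO, P_HI
    for a, b in pieces:
        if a > 0:
            lo = max(lo, -b / a)
        elif a < 0:
            hi = min(hi, -b / a)
        elif b <= 0:
            return None
    return (lo, hi) if lo < hi else None  # open interval

pin_p2 = where_max_le_zero([(27.0, -17.37), (-2.789, 0.5)])
pempty_p1 = where_min_gt_zero([(10.051, 0.3185), (-19.859, 0.6185)])
pempty_p3 = where_min_gt_zero([(14.216, -14.504), (-8.344, 10.296),
                               (-20.744, 25.336)])
```
]]></preformat>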
      <p>The set P∅(p_3) = {p : ψ(p, p_3) &gt; 0} is the interval (1.02, 1.22).</p>
      <p>A parameter p ∈ P such that D(p) ≠ ∅ is called a feasible parameter, and an infeasible parameter otherwise. Below we describe a procedure based on generating a finite number of random parameters in P and checking the feasibility property for each of them. Since every checked parameter generates a set containing either only feasible parameters or only infeasible parameters, this is a covering-type procedure. In general, we finish with a collection of sets of feasible parameters and a collection of sets of infeasible parameters.</p>
      <p>It is assumed that the vertices v_1, . . . , v_N of the set X are known. We also fix a maximum number of iterations k̄ &gt; 1 in advance, set k = 1, k_in = 0, k_∅ = 0, and start with empty collections P_in and P_∅. The procedure has the following description.
Step 1. Choose a random parameter p^k ∈ P.
Step 2. If p^k belongs to one of the sets already placed in P_∅ then go to Step 7.
Step 3. If p^k belongs to one of the sets already placed in P_in then go to Step 7.
Step 4. Solve problem (15) with p˜ = p^k. Let (x^k, ξ^k) and u^k be the primal and dual solutions.
Step 5. If ξ^k &gt; 0 then define P_∅^k = P∅(p˜) in (13) for p˜ = p^k and u˜ = u^k, and set P_∅ = P_∅ ∪ {P_∅^k}, k_∅ = k_∅ + 1.
Step 6. If ξ^k ≤ 0 then define P_in^k = Pin(p˜) in (11) for p˜ = p^k and x˜ = x^k, and set P_in = P_in ∪ {P_in^k}, k_in = k_in + 1.
Step 7. Set k = k + 1.
Step 8. If k &gt; k̄ then stop, otherwise go to Step 1.</p>
      <p>When the procedure stops we have a collection P_in = {P_in^1, . . . , P_in^{k_in}} of feasible parameter sets and a collection P_∅ = {P_∅^1, . . . , P_∅^{k_∅}} of infeasible parameter sets. Therefore, we can define P*_in and P*_out in the following way:
P*_in = ∪_{i=1}^{k_in} P_in^i,  P*_out = P \ (∪_{i=1}^{k_∅} P_∅^i).
In the example considered above we have k = 3, k_in = 1, k_∅ = 2 and
P_in = {[0.179, 0.643]},  P_∅ = {(−0.0317, 0.0311), (1.02, 1.22)}.</p>
    </sec>
    <sec id="sec-10">
      <p>Therefore,
P*_in = [0.179, 0.643],  P*_out = [−0.2, −0.0317] ∪ [0.0311, 1.02] ∪ [1.22, 1.3].</p>
      <p>Theoretically the suggested procedure is finite, since P is a compact set and hence can be covered by a finite number of convex sets. However, in order to increase efficiency, other covering techniques, e.g. ones borrowed from global optimization technology [EvtushenkoEtAl17], can be used.</p>
      <p>Acknowledgements. The paper was supported by RFBR grant No. 18-07-01432.</p>
      <p>[Balas18] E. Balas. Disjunctive programming. Springer Nature Switzerland AG, 2018.</p>
      <p>[EvtushenkoEtAl17] Yu.G. Evtushenko, M.A. Posypkin, L.A. Rybak, and A.V. Turkin. Finding sets of solutions to systems of nonlinear inequalities. Computational Mathematics and Mathematical Physics, 57(8):1241–1247, 2017.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref0">
        <mixed-citation>[BorrelliEtAl03] F. Borrelli, A. Bemporad, and M. Morari. Geometric algorithm for multiparametric linear programming. Journal of Optimization Theory and Applications, 2003.</mixed-citation>
      </ref>
      <ref id="ref1">
        <mixed-citation>[FiedlerEtAl06] M. Fiedler, J. Nedoma, J. Ramík, J. Rohn, and K. Zimmermann. Linear optimization problems with inexact data. Springer, 2006.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[JonesEtAl08] C.N. Jones, E.C. Kerrigan, and J.M. Maciejowski. On polyhedral projection and parametric programming. Journal of Optimization Theory and Applications, 138(2):207–220, 2008.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[Khamisov99] O. Khamisov. On optimization properties of functions with a concave minorant. Journal of Global Optimization, 14:79–101, 1999.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>