=Paper= {{Paper |id=Vol-1623/paperco5 |storemode=property |title=Probabilistic Analysis of an Algorithm for the Uncapacitated Facility Location Problem on Unbounded Above Random Input Data |pdfUrl=https://ceur-ws.org/Vol-1623/paperco5.pdf |volume=Vol-1623 |authors=Edward Kh. Gimadi,Anna A. Kurochkina,Elena A. Nagornaya |dblpUrl=https://dblp.org/rec/conf/door/GimadiKN16 }} ==Probabilistic Analysis of an Algorithm for the Uncapacitated Facility Location Problem on Unbounded Above Random Input Data== https://ceur-ws.org/Vol-1623/paperco5.pdf
           Probabilistic Analysis of an Algorithm
      for the Uncapacitated Facility Location Problem
         on Unbounded Above Random Input Data

                                   ⋆
        Edward Kh. Gimadi1,2, , Anna A. Kurochkina3 , and Elena A. Nagornaya2
 1
      Sobolev Institute of Mathematics, 4 Acad. Koptyug avenue, 630090 Novosibirsk, Russia
                                      gimadi@math.nsc.ru
          2
             Novosibirsk State University, 2 Pirogova Str. 630090, Novosibirsk, Russia
                                 nagornaya.elene@gmail.com
    3
      Siberian State University Of Telecommunications And Information Sciences, 86 Kirova
                                Str.,630102, Novosibirsk, Russia
                                       a.potapova@ngs.ru



         Abstract. We consider the Facility Location Problem in the case where the
         instance inputs are independent random variables, unbounded from above,
         with exponential or truncated normal distributions. A probabilistic analysis
         of an approximation algorithm with polynomial time complexity is presented.
         Explicit estimates of the performance guarantees are obtained, and sufficient
         conditions for the algorithm to be asymptotically optimal are given.

         Keywords: facility location problem, probabilistic analysis, polynomial ap-
         proximation algorithm, asymptotically optimal algorithm, relative error, failure
         probability


Introduction

The class of Facility Location Problems (FLP) forms a separate branch of Discrete
Optimization and is widespread in Operations Research and Informatics [1, 3]. The
best-known problem in this class is the so-called Simple Plant Location Problem, one
of the NP-hard problems of Discrete Optimization. A large body of literature is
devoted to polynomial approximation algorithms for the FLP [1, 3, 7, 14, 15]. The
problem consists in selecting locations for facilities and assigning customers to them
so as to minimize the total cost of opening the facilities and serving the demand.
    For NP-hard problems of Discrete Optimization with random input data, a natural
approach is the probabilistic analysis of simple polynomial-time approximation
algorithms [13, 12, 9]. One of the first examples of probabilistic analysis
⋆
     This research was supported by the Russian Foundation for Basic Research (grant 15-01-
     00976).
Copyright © by the paper's authors. Copying permitted for private and academic purposes.
In: A. Kononov et al. (eds.): DOOR 2016, Vladivostok, Russia, published at http://ceur-ws.org

was given by Borovkov, A. [2] for the Traveling Salesman Problem. The quality of the
solution produced by an algorithm on random inputs is characterized by the relative
error εn and the failure probability δn [4].
   In this paper we present a probabilistic analysis of an algorithm for solving the
Uncapacitated FLP on random input data unbounded from above, with exponential and
truncated normal distributions. We obtain estimates of the relative error and the
failure probability of the algorithm, together with sufficient conditions for its
asymptotic optimality.


1    Formulation of the Problem

Consider the mathematical formulation of the Uncapacitated Facility Location Problem
(UFLP) [3]:
                 L(X) = Σ_{i∈I} c0i xi + Σ_{i∈I} Σ_{j∈J} cij xij −→ min_X ;                 (1)

                            Σ_{i∈I} xij = 1,  j ∈ J ;                                       (2)

                            0 ≤ xij ≤ xi ,  i ∈ I, j ∈ J ;                                  (3)

                            xi ∈ {0, 1},  i ∈ I,                                            (4)
    I = {1, 2, ..., m} is the set of possible facility locations;
    J = {1, 2, ..., n} is the set of demand points;
    c0i is the initial cost of opening a facility at location i;
    cij is the cost of serving demand point j from an open facility at location i;
    X = ((xi), (xij)) is a solution of the UFLP, where xi is a Boolean variable
indicating that the facility at location i ∈ I is open, and xij indicates that the open
facility i ∈ I serves demand point j ∈ J .
    The goal is to find a set of facilities to open, Ĩ ⊆ I, Ĩ ≠ ∅, that satisfies all
demand points at minimum total cost.
    Problem (1)–(4) is NP-hard, since the Set Covering Problem reduces to it [6].
    In [8], a polynomial approximation algorithm is presented for solving prob-
lem (1)–(4) in the class K̃mn (an , bn ; a0n , b0n ), where the elements of the matrix (cij ),
i ∈ I, j ∈ J , and of the vector (c0i ), i ∈ I, lie in the intervals [an , bn ] and [a0n , b0n ],
with an and b0n non-negative, respectively. Performance guarantees of the algorithm
and conditions of asymptotic optimality are obtained when the elements of the matrix
(cij ) are independent identically distributed random variables defined on a bounded
interval, with a distribution function F majorizing the uniform one: F(ξ) ≥ ξ,
0 ≤ ξ ≤ 1, for the values ξij = (cij − an )/(bn − an ).
    The work [9] is devoted to the probabilistic analysis of several greedy approximation
algorithms for the UFLP with input data generated by choosing n random points in the
unit square, where each point serves both as a demand point and as a possible facility
location, i.e. the sets I and J coincide. The authors obtained conditions of asymptotic
optimality for the problem with equal facility opening costs: c0i = βn , i ∈ I.

    In this paper we perform a probabilistic analysis of a polynomial approximation
algorithm for solving the UFLP in the class K̃nn (1, ∞; b0n ) under the following
assumptions:
    1) The demand points and the possible facility locations coincide: I = J and
m = n.
    2) The initial costs of opening facilities are equal: c0i = b0n , i ∈ I (in what
follows we also denote this common cost by βn ).
    3) The off-diagonal elements of the matrix (cij ) are independent identically
distributed random variables from an interval unbounded from above (bn = ∞), with a
lower bound which, without loss of generality, equals 1.
    4) The probabilistic analysis is carried out for two distribution functions:
exponential with parameter αn and truncated normal with parameter σn .
    As a result of the probabilistic analysis we obtain the corresponding estimates of
the relative error and the failure probability of the algorithm, and present sufficient
conditions for its asymptotic optimality.
    In what follows it is convenient to deal with the matrix (c̃ij ) whose off-diagonal
elements are defined as c̃ij = cij − 1, i, j ∈ J .
    Clearly, the objective function L(X) of the original problem is related to the
objective function F(X) of the problem with the shifted matrix (c̃ij ) by

                                      L(X) = n + F(X).

     Denote
                     {
                         αn , in the case of exponential distribution;
              γn =
                         2σn , in the case of truncated normal distribution.


2     Algorithm à in the Case of Exponential Distribution
We describe the approximation algorithm Ã, a modification of the algorithm from [8]:
 1. Calculate the parameters m0 and m̃ by the formulas

                            m0 = √(nαn / b0n),    m̃ = ⌈m0 ⌉.

    In what follows we assume that the number of points with open facilities does not
    exceed the number of points without facilities, i.e. m̃ < n/2.
 2. Choose a subset Ĩ ⊂ J consisting of the last m̃ elements of the vector (c0i ), i ∈ J .
    Set J̃ = J \ Ĩ and ñ = |J̃ |.
 3. Compute the vector of assignments (ij ), j ∈ J̃ , where

                            ij = arg min{c̃ij | i ∈ Ĩ},  j ∈ J̃ .

 4. As the result, algorithm à outputs the solution X̃ = ((x̃i ), (x̃ij )), where

            x̃i = 1 if i ∈ Ĩ and 0 otherwise;    x̃ij = 1 if i = ij and 0 otherwise.

   After outputting the solution X̃ and the objective value L̃ = L(X̃), algorithm Ã
terminates.
   The time complexity of algorithm à is O(m̃n) [8].
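Steps 1–4 above admit a compact sketch. The following minimal implementation is an assumed reconstruction (the function name and the instance layout are illustrative); it works with the shifted costs c̃ij and returns the shifted objective F̃ from equation (7), so that L̃ = n + F̃:

```python
import math

def algorithm_a_tilde(c_shift, b0, alpha):
    """Sketch of algorithm A~ for the UFLP with I = J (assumed reconstruction).

    c_shift : n x n matrix of shifted costs c~_ij = c_ij - 1 (diagonal unused),
    b0      : common facility opening cost b_0n,
    alpha   : distribution parameter alpha_n used to size the open set.
    Returns (opened facilities, assignment, shifted objective F~).
    """
    n = len(c_shift)
    # Step 1: m0 = sqrt(n * alpha_n / b_0n), m~ = ceil(m0); we assume m~ < n/2.
    m_tilde = math.ceil(math.sqrt(n * alpha / b0))
    # Step 2: open facilities at the last m~ points (all opening costs are
    # equal, so any m~ points would do); the remaining points form J~.
    opened = list(range(n - m_tilde, n))
    rest = list(range(n - m_tilde))
    # Step 3: assign each point of J~ to its cheapest open facility.
    dest = {j: min(opened, key=lambda i: c_shift[i][j]) for j in rest}
    # Step 4: shifted objective F~ = m~ * b_0n + C_m~, cf. equation (7).
    obj = m_tilde * b0 + sum(c_shift[dest[j]][j] for j in rest)
    return opened, dest, obj
```

Steps 2 and 3 dominate the running time, which matches the stated O(m̃n) bound.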


3    Probabilistic Analysis of Algorithm Ã

In the analysis of the algorithm we use the following notions [4]:

Definition 1. We say that an algorithm A has estimates (ε_n^A , δ_n^A ) in a class Kn of
minimization problems of dimension n if

                            P{ fA > (1 + ε_n^A ) f* } ≤ δ_n^A ,

where ε_n^A is the relative error and δ_n^A is the failure probability of the algorithm A,
f* is the optimal value, and fA is the value of the solution found by the algorithm A.
                                                                                 ∪∞
    The algorithm is called asymptotically optimal in the problem class K =       n=1 Kn
if there exist estimates ε_n^A , δ_n^A for it such that ε_n^A , δ_n^A → 0 as n → ∞.

    To prove the main theorem we use:

Theorem 1 (Petrov, V. [5]). Let X1 , . . . , Xn be independent random variables, and
suppose there exist positive constants T and hj , j = 1, . . . , n, such that for all t,
0 ≤ t ≤ T,
                            E e^{tXj} ≤ e^{hj t²/2} ,  j = 1, . . . , n.

Set H = Σ_{j=1}^n hj and S = Σ_{j=1}^n Xj . Then

                            P{S > x} ≤ exp(−x²/(2H)),   0 ≤ x ≤ HT,
                            P{S > x} ≤ exp(−xT/2),      x > HT.
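Theorem 1 can be sanity-checked numerically. The sketch below (all parameters are illustrative) verifies the moment condition for centered unit exponentials with h = 3 and T = 1/2 — the constants that Lemma 4 later supplies for m̃ = αn = 1 — and compares a Monte Carlo tail frequency of S against the sub-Gaussian bound exp(−x²/(2H)):

```python
import math
import random

random.seed(0)

# Moment condition: for X' = Exp(1) - 1, E exp(tX') = exp(-t)/(1 - t),
# which stays below exp(h t^2 / 2) with h = 3 on 0 <= t <= T = 1/2.
h, T = 3.0, 0.5
for k in range(51):
    t = T * k / 50
    assert math.exp(-t) / (1.0 - t) <= math.exp(h * t * t / 2) + 1e-12

# Tail bound: P{S > x} <= exp(-x^2 / (2H)) for 0 <= x <= HT.
n, trials = 200, 5000
H = n * h                      # H = sum of h_j = 600, so HT = 300
x = 30.0                       # a point well inside the sub-Gaussian regime
exceed = sum(
    sum(random.expovariate(1.0) - 1.0 for _ in range(n)) > x
    for _ in range(trials)
)
empirical = exceed / trials
bound = math.exp(-x * x / (2 * H))   # exp(-0.75), roughly 0.47
```

The empirical frequency (about 2% here) sits comfortably below the bound, as the theorem guarantees.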

   In the case of an exponential distribution with parameter αn , the density of the
off-diagonal elements of the matrix (c̃ij ) is

                p(x) = (1/αn ) e^{−x/αn} ,  0 ≤ x < ∞,   and p(x) = 0 otherwise;

the corresponding distribution function is

                            F(x) = P{c̃ij < x} = 1 − e^{−x/αn} .

Lemma 1. For the expectation and the variance of the random variable

                            Cm̃ = Σ_{j∈J̃} min_{1≤i≤m̃} c̃ij ,

where the c̃ij are independent random variables with the common distribution function
F(x) = 1 − e^{−x/αn} , 0 ≤ x < ∞, the following identities hold:

                            E Cm̃ = ñαn / m̃ ,                                              (5)

                            D Cm̃ = ñαn² / m̃² .                                            (6)
Proof. When computing the vector of assignments at Step 3 of algorithm Ã, for each
fixed j ∈ J̃ we choose the minimum among m̃ independent random variables c̃ij . Denote
this random variable by ξj (m̃) = min_{1≤i≤m̃} c̃ij , and denote the total service cost
obtained by algorithm à by Cm̃ = Σ_{j=1}^{ñ} ξj (m̃).
    For the solution X̃ chosen by algorithm à we have

            F̃ = F(X̃) = Σ_{i=1}^{m̃} c0i + Σ_{j=1}^{ñ} ξj (m̃) = m̃b0n + Cm̃ .              (7)

    The distribution function of ξj (m̃) equals

                            F_{ξj(m̃)}(x) = 1 − (1 − F(x))^{m̃} .

Hence

        E ξj (m̃) = ∫_0^∞ x m̃ (1 − F(x))^{m̃−1} dF(x) = ∫_0^∞ (x m̃/αn ) e^{−xm̃/αn} dx

                 = [−x e^{−xm̃/αn}]_0^∞ + ∫_0^∞ e^{−xm̃/αn} dx = αn / m̃ .

    For the expectation of Cm̃ we obtain

                            E Cm̃ = Σ_{j=1}^{ñ} E ξj (m̃) = ñαn / m̃ .

    For the variance of ξj (m̃):

        D ξj (m̃) = E ξj²(m̃) − (E ξj (m̃))² = ∫_0^∞ x² m̃ (1 − F(x))^{m̃−1} dF(x) − αn²/m̃²

                 = [−x² e^{−xm̃/αn}]_0^∞ + 2 ∫_0^∞ x e^{−xm̃/αn} dx − αn²/m̃²

                 = 2αn²/m̃² − αn²/m̃² = αn²/m̃² .

Finally, by independence,

                            D Cm̃ = Σ_{j=1}^{ñ} D ξj (m̃) = ñαn²/m̃² .

Lemma 2. Algorithm à provides a solution such that for the objective function F̃ valid
the upper bounds:

                                                         ñαn             √
                        F̃ ≤ m̃βn + (1 + ε′ñ )               ,    m̃ <    ñαn /βn ,                (8)
                                                           m̃
                                          √                              √
                           F̃ ≤ (2 + ε′n ) ñβn αn ,              m̃ ≥    ñαn /βn ,                 (9)
           √
where ε′n = ln n/n → 0, n → ∞.

Proof. Consider steps 2.2 and 2.3 of algorithm Ã. From Lemma 1 and equation (7)
follows
                                             ñαn
                               F̃ ≤ m̃βn +          = F̂
                                               m̃
                                                ñαn
                                  F̂′m̃ = βn − 2
                                                 m̃
We
√ study the behavior of the objective √    function with respect to the point m0 =
   nαn /βn . When m < m0 and εn = ln n/n → 0, n → ∞:

                           ∑
                           m̃            ∑                                               nαn
            F̃ = F(X̃) =         c0i +           min cij = m̃βn + Cm̃ ≤ m̃βn +               = F̂
                           i=1
                                                1≤i≤m̃                                    m̃
                                         j∈J˜

We show with high probability
                                                                  nαn
                                   F̃ ≤ m̃βn + (1 + ε′n )             = F̂.
                                                                   m̃
Actually,
                  P{F̃ > m̃βn + (1 + ε′n )Cm̃ } ≤ P{Cm̃ > (1 + ε′n )ECm̃ } ≤
                                                                       DCm̃        1
                     ≤ P{|Cm̃ − ECm̃ | > ε′n ECm̃ } ≤                 ′       2
                                                                                =
                                                                    (εn ECm̃ )    ln n
    From the mentioned above it is clear that with such select of the parameter m̃ the
upper bound of objective function F̂ derived as the result of algorithm Ã, for small εn ,
is close to its minimum value.
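The trade-off minimized in the proof — opening cost m̃βn against expected service cost ñαn/m̃ — can be checked directly; the values below are illustrative:

```python
import math

n_t, alpha, beta = 10_000, 3.0, 12.0
f_hat = lambda m: m * beta + n_t * alpha / m   # upper bound F^(m~)

m_star = math.sqrt(n_t * alpha / beta)         # stationary point, here 50.0
f_min = 2 * math.sqrt(n_t * alpha * beta)      # predicted minimum, here 1200.0
# f_hat(m_star) equals f_min, and nearby integer choices of m~ do worse
```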

Lemma 3. [10] For every real x ≥ 0 and γ ≥ 1/2,

                            1 + x + γx² ≤ e^x · e^{(γ−0.5)x²} .
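The inequality is easy to verify on a grid; note that γ ≥ 1/2 is essential, since for γ < 1/2 the right-hand side tends to 0 as x grows while the left-hand side grows. The ranges below are illustrative:

```python
import math

# Grid check: 1 + x + g*x^2 <= exp(x + (g - 0.5)*x^2) for x >= 0, g >= 1/2.
ok = True
for xi in range(0, 101):
    x = xi / 10.0
    for gi in range(5, 31):          # g from 0.5 to 3.0 in steps of 0.1
        g = gi / 10.0
        lhs = 1 + x + g * x * x
        rhs = math.exp(x + (g - 0.5) * x * x)
        if lhs > rhs * (1 + 1e-12):
            ok = False
```

The lemma also follows analytically from e^u ≥ 1 + u + u²/2 applied to u = x + (γ − 1/2)x² ≥ 0, since u² ≥ x².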


4   Main Result

Theorem 2. Let the service costs cij be independent identically distributed random
variables with values in the interval [1, ∞), unbounded from above, whose shifted
values c̃ij have the exponential or the truncated normal distribution with parameter
αn or σn , respectively. Algorithm à finds a solution under the conditions of
asymptotic optimality

                            γn = o( √n / ln n ),                                           (10)

                            βn = O( √n / ln n ),                                           (11)

with the relative error and the failure probability

                            ε_n^A = O( 1 / ln n ),                                         (12)

                            δ_n^A = exp{ −3n/8 }.                                          (13)
Lemma 4. Let X̃j = ξj (m̃), j = 1, . . . , ñ. Put T = m̃/(2αn ) and hj = 3αn²/m̃² ,
j = 1, . . . , ñ. Then for every j = 1, . . . , ñ and every t, 0 ≤ t ≤ T ,

                            E e^{t(X̃j − EX̃j)} ≤ exp( hj t²/2 ).

Proof.

        E e^{tX̃j} = ∫_0^∞ e^{tx} dF_{ξj(m̃)}(x) = ∫_0^∞ (m̃/αn ) e^{tx} e^{−xm̃/αn} dx

                  = (m̃/αn ) / (m̃/αn − t) = 1 / (1 − tαn /m̃) .

Since 0 ≤ t ≤ T = m̃/(2αn ), we have tαn /m̃ ≤ 1/2, and using Lemma 3 (with γ = 2) we
obtain

        E e^{tX̃j} = Σ_{k=0}^∞ (tαn /m̃)^k = 1 + tαn /m̃ + (t²αn²/m̃²) · 1/(1 − tαn /m̃)

                  ≤ 1 + tαn /m̃ + 2t²αn²/m̃² ≤ exp(tαn /m̃) exp(1.5 t²αn²/m̃²) .

Thereby, since E X̃j = αn /m̃,

            E e^{t(X̃j − EX̃j)} ≤ exp(1.5 t²αn²/m̃²) = exp( hj t²/2 ).
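The bound of Lemma 4 can be compared with the exact moment generating function of ξj(m̃), which is exponentially distributed with mean αn/m̃ (parameter values below are illustrative):

```python
import math

alpha, m = 2.0, 5                # illustrative alpha_n and m~
T = m / (2 * alpha)              # T = m~ / (2 alpha_n) = 1.25

for k in range(101):
    t = T * k / 100
    # Exact centered MGF of xi_j(m~) ~ Exp with mean alpha/m:
    #   E exp(t (xi - E xi)) = exp(-t*alpha/m) / (1 - t*alpha/m)
    exact = math.exp(-t * alpha / m) / (1 - t * alpha / m)
    bound = math.exp(1.5 * (t * alpha / m) ** 2)   # exp(h_j t^2 / 2)
    assert exact <= bound + 1e-12
```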

Proof (of Theorem 2). From Lemma 4 it follows that the variables X̃′j = X̃j − EX̃j ,
1 ≤ j ≤ ñ, satisfy the conditions of the Petrov theorem.
    Denote S = Σ_{j=1}^{ñ} X̃′j . With T = m̃/(2αn ) and H = Σ_{j=1}^{ñ} hj = 3ñαn²/m̃² ,
we have
                            HT = 3ñαn /(2m̃) .

    Let us estimate the failure probability δ_n^A of algorithm à with the obvious
inequalities F* ≥ ñ and EF̃ ≤ m̃βn + ECm̃ :

        P{FÃ > (1 + ε_n^A ) F*} ≤ P{F̃ + ñ > (1 + ε_n^A ) ñ} ≤ P{m̃βn + Σ_{j=1}^{ñ} X̃j > ñε_n^A }

            ≤ P{Σ_{j=1}^{ñ} X̃j > ε_n^A ñ − m̃βn } ≤ P{Σ_{j=1}^{ñ} X̃′j > ε_n^A ñ − (m̃βn + ñαn /m̃)}.  (14)

Since m0 ≤ m̃ ≤ m0 + 1 and m0 = √(ñαn /βn ), we have

            m̃βn + ñαn /m̃ ≤ (m0 + 1)βn + ñαn /m0 = 2√(ñαn βn ) + βn .

    We extend inequality (14) by choosing ε_n^A so that ε_n^A ñ − (2√(ñαn βn ) + βn ) = 3n/(2 ln n):

        P{S > ε_n^A ñ − (m̃βn + ñαn /m̃)} ≤ P{S > ε_n^A ñ − (2√(ñαn βn ) + βn )} = P{S > 3n/(2 ln n)}.

    Hence for the relative error of the algorithm we obtain

                ε_n^A = 3n/(2ñ ln n) + 2√(αn βn /ñ) + βn /ñ .

Under the conditions on αn and βn we have

                            ε_n^A = O( 1/ln n ) → 0,  n → ∞.

    Because

            HT = 3ñαn /(2m̃) ≤ 3nαn /(2m0 ) = (3/2)√(nαn βn ) ≤ 3n/(2 ln n) = x,

and

        xT = (3n/(2 ln n)) · m̃/(2αn ) ≥ (3n/(2 ln n)) · m0 /(2αn )

           = (3n/(2 ln n)) · (1/2)√(n/(αn βn )) ≥ 3n/4 ,

the Petrov theorem yields

                            P{S > x} ≤ e^{−xT/2} ≤ e^{−3n/8} = δ_n^A .

    Since ε_n^A → 0 and δ_n^A → 0 as n → ∞, algorithm à is asymptotically optimal.
Thus, in the case of the exponential distribution of the independent identically
distributed random variables c̃ij , Theorem 2 holds with γn = αn .
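To get a feel for the rates in (12)–(13), one can plug in parameter sequences satisfying (10)–(11); the particular choices below are illustrative:

```python
import math

def eps_a(n, alpha, beta):
    # Relative-error expression from the proof, with n~ replaced by n.
    return 3 / (2 * math.log(n)) + 2 * math.sqrt(alpha * beta / n) + beta / n

values = []
for n in (10**3, 10**6, 10**9):
    alpha = math.sqrt(n) / math.log(n) ** 2   # alpha_n = o(sqrt(n)/ln n)
    beta = math.sqrt(n) / math.log(n)         # beta_n  = O(sqrt(n)/ln n)
    values.append(eps_a(n, alpha, beta))
# values decreases toward 0 at the O(1/ln n) rate, while exp(-3n/8)
# is already negligible for n in the hundreds
```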

4.1    Case of the Truncated Normal Distribution
In the case of the truncated normal law, consider the symmetric right half of the
normal law with parameter σn as the density of the off-diagonal elements of the
matrix (c̃ij ):

        p(x) = √(2/(πσn²)) e^{−x²/(2σn²)} ,  0 ≤ x < ∞,   and p(x) = 0 otherwise,

with the corresponding distribution function

            G(x) = P{c̃ij < x} = √(2/(πσn²)) ∫_0^x e^{−u²/(2σn²)} du .

Definition 2. [11] We say that the distribution function F1 (x) majorizes the distri-
bution function F2 (x) if F1 (x) ≥ F2 (x) for all x.
Lemma 5. [10] For any x ≥ 0 and σ > 0 the following inequality holds:

            √(2/(πσ²)) ∫_0^x exp( −u²/(2σ²) ) du ≥ 1 − exp( −x/(2σ) ).
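Numerically, the left-hand side of Lemma 5 is the half-normal distribution function, expressible through the error function as erf(x/(σ√2)); a grid check with an illustrative σ:

```python
import math

sigma = 1.5   # illustrative sigma_n
for k in range(201):
    x = 0.05 * k
    half_normal_cdf = math.erf(x / (sigma * math.sqrt(2.0)))   # G(x)
    exp_cdf = 1.0 - math.exp(-x / (2.0 * sigma))               # Exp(2 sigma) CDF
    assert half_normal_cdf >= exp_cdf - 1e-12
```

This is precisely the majorization that lets the truncated normal case inherit the exponential-case guarantees with γn = 2σn.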

Lemma 6. [11] Let ξ1 , . . . , ξm be independent identically distributed random
variables with distribution function F(x), and let F̂(x) be the distribution function of
ξ = min_{1≤i≤m} ξi ; let η1 , . . . , ηm be independent identically distributed random
variables with distribution function G(x), and let Ĝ(x) be the distribution function of
η = min_{1≤i≤m} ηi . Then, for any x,

                            F(x) ≤ G(x) ⇒ F̂(x) ≤ Ĝ(x).
Lemma 7. [11] Let Pξ , Pη , Pχ , Pζ be the distribution functions of random variables
ξ, η, χ, ζ, respectively, where ξ and η are independent and χ and ζ are independent.
Then

        (∀x Pξ (x) ≤ Pχ (x)) ∧ (∀y Pη (y) ≤ Pζ (y)) ⇒ (∀z Pξ+η (z) ≤ Pχ+ζ (z)).
Lemma 8. [11] Let the distribution function F (x) of the random variable c̃ij satisfy
F (x) ≥ F ′(x) for all x. Then algorithm à retains the performance guarantees
(ε_n^A , δ_n^A ) established for inputs with the distribution function F ′(x).
    Taking F ′(x) = 1 − e^{−x/(2σn)} , i.e. the exponential distribution function with
parameter γn = 2σn , and F (x) = G(x), Lemmas 5–8 yield the validity of Theorem 2 in
the case of the truncated normal distribution.

5     Conclusion
In this paper a probabilistic analysis, based on the Petrov theorem, of the
approximation algorithm à for the Uncapacitated Facility Location Problem is
presented for the case when the service costs are independent identically distributed
random variables from an interval unbounded from above, with exponential or truncated
normal distribution.
    The performance guarantees of the algorithm, namely the relative error and the
failure probability, and sufficient conditions for its asymptotic optimality are
presented.

References
 1. Mirchandani, P., Francis, R. (eds.): Discrete Location Theory. Wiley-Interscience Series
    in Discrete Mathematics and Optimization. Wiley, New York, 1990.
 2. Borovkov, A.: On a probabilistic formulation of two economic problems. Doklady AN
    SSSR, 146(5), 983–986 (1962) (in Russian).
 3. Beresnev, V., Gimadi, Ed., Dementiev, V.: Extremal Problems of Standardization. Nauka,
    Moscow, 334 p. (1978) (in Russian).
 4. Gimadi, Ed., Glebov, N., Perepelica, V.: Algorithms with bounds for discrete optimization
    problems, 31, 35–42 (1974) (in Russian).
 5. Petrov, V.V.: Limit Theorems of Probability Theory. Sequences of Independent Random
    Variables. Clarendon Press, Oxford, 304 p. (1995).
 6. Garey, M., Johnson, D.: Computers and Intractability. 416 p. (1982).
 7. Chudak, F.: Improved approximation algorithms for uncapacitated facility location. In:
    Integer Programming and Combinatorial Optimization, 180–194 (1998).
 8. Gimadi, Ed.: Apriori performance estimates of approximate solutions for the Problem of
    Standardization. Upravlyaemye Sistemy, Novosibirsk, 27, 25–29 (1987) (in Russian).
 9. Flaxman, A., Frieze, A., Vera, J.: On the average case performance of some greedy
    approximation algorithms for the uncapacitated facility location problem. Comb. Probab.
    Comput. 16(5), 713–732 (2007).
10. Gimadi, E.Kh., Le Gallu, A., Shakhshneider, A.V.: Probabilistic analysis of one
    approximate algorithm for the traveling salesman problem with unbounded inputs.
    Diskretn. Anal. Issled. Oper., 15(1), 23–43 (2008).
11. Gimadi, Ed., Glazkov, Yu.: An asymptotically optimal algorithm for one modification of
    planar three-index assignment. Diskretn. Anal. Issled. Oper., Ser. 2, 13(1), 10–26 (2006).
12. Piersma, N.: A probabilistic analysis of the capacitated facility location problem. J. Comb.
    Optim. 3(1), 31–50 (1999).
13. Angluin, D., Valiant, L.: Fast probabilistic algorithms for Hamiltonian circuits and
    matchings. J. Comput. System Sci. 19(2), 155–193 (1979).
14. Shmoys, D., Tardos, E., Aardal, K.: Approximation algorithms for facility location
    problems. In: Proc. 29th Annual ACM Symposium on Theory of Computing, 265–274 (1997).
15. Guha, S., Khuller, S.: Greedy strikes back: improved facility location algorithms. In:
    Proc. 9th Annual ACM-SIAM Symposium on Discrete Algorithms, 649–657 (1998).