=Paper= {{Paper |id=Vol-1987/paper85 |storemode=property |title=Algorithm for Solving Stochastic Transportation Problem |pdfUrl=https://ceur-ws.org/Vol-1987/paper85.pdf |volume=Vol-1987 |authors=Anna V. Zykina,Olga N. Kaneva }} ==Algorithm for Solving Stochastic Transportation Problem== https://ceur-ws.org/Vol-1987/paper85.pdf
Algorithm for Solving Stochastic Transportation Problem

                     Anna V. Zykina                                      Olga N. Kaneva
              Omsk State Technical University                     Omsk State Technical University
                    prospekt Mira 11,                                   prospekt Mira 11,
                  644050 Omsk, Russia                                 644050 Omsk, Russia
                    avzykina@mail.ru                                   okaneva@yandex.ru




                                                        Abstract
                       The paper deals with the transportation problem in a stochastic formu-
                       lation. The problem is solved using a two-stage approach, in which the
                       second stage is formulated as a complementarity problem. An algorithm
                       for solving the transportation problem is obtained. The solution is based
                       on the stochastic gradient and the Monte Carlo method.




1    Introduction
Significant interest is paid to the stochastic formulation of the transportation problem with random demand
[Williams, 1963] in applications that build on the classical transportation problem [Koopmans, 1949]. In this case
the objective function represents the mathematical expectation of the total losses during transportation of the
product, the damages from unsatisfied demand, and the costs of storing excess product.
   A transportation problem with random demand and a continuous distribution function can be turned into a de-
terministic problem of convex programming with linear constraints. In the paper [Biswal et al., 2013] the stochastic
transportation problem is converted to an equivalent deterministic problem and solved by the goal programming
method. However, such a transformation does not always give an acceptable solution; therefore it is necessary to
use other approaches to study the problem.
   Another approach for solving stochastic optimization problems is the two-stage solution scheme [Judin,
2010]. The process of solving the problem is divided into two stages: at the first stage we select a preliminary
plan from the deterministic conditions, and at the second stage we compensate the discrepancies that are
identified after the realization of the random events. This approach can be used for stochastic problems where
a preliminary decision should be taken and put into effect before the values of the random parameters are
known. For the transportation problem with random demand, the preliminary decision can be determined by
the distribution of material supplies and the determined reserves of the materials. An algorithm based on the
reduction of the original problem to an equivalent mixed-integer linear programming problem after discretization
is considered in the paper [Kibzun et al., 2016].
   In general, the difficulty of analyzing two-stage stochastic transportation problems lies in choosing the best
preliminary plan in the original problem, one that guarantees the existence of a residual compensation for all
realizations of the uncertain parameters.
   In this paper we propose an algorithm for solving the two-stage stochastic transportation problem considered
in [Zykina et al., 2016]. In this problem the choice of the compensation plan satisfies conditions defined by a
linear complementarity problem. The proposed algorithm is based on the stochastic gradient and the Monte
Carlo method [Sakalauskas, 2004].
Copyright © by the paper’s authors. Copying permitted for private and academic purposes.
In: Yu. G. Evtushenko, M. Yu. Khachay, O. V. Khamisov, Yu. A. Kochetov, V.U. Malkova, M.A. Posypkin (eds.): Proceedings of
the OPTIMA-2017 Conference, Petrovac, Montenegro, 02-Oct-2017, published at http://ceur-ws.org




2   Statement of the Linear Complementarity Problem
Consider the linear complementarity problem LCP(B, q) in the form [Bazaraa et al., 1998]:

                                w − By = q, wj ≥ 0, yj ≥ 0, wj yj = 0, j = 1, ..., l.

Here B is a given square matrix of order l, and (wj , yj ) are pairs of complementary variables.
   The condition wj yj = 0, j = 1, ..., l, is similar to the complementary slackness condition in duality theory for
the inequalities By + q ≥ 0 and y ≥ 0: in each pair of conjugate inequalities, at least one must hold as an
equality.
   Non-negative definiteness of the matrix B ensures the solvability of problem LCP(B, q). When B is positive
definite, problem LCP(B, q) has a unique solution y.
   Lemke’s complementary pivoting algorithm can be used to solve problem LCP(B, q) [Bazaraa et al., 1998]. It
can be viewed as an analog of the simplex algorithm for linear programming.
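To make the definitions concrete, here is a minimal brute-force sketch in Python (not Lemke's pivoting method): for small l it enumerates the 2^l complementary index sets and returns the first pair (w, y) satisfying all conditions. The function name and the test instance are our own illustrations.

```python
import itertools
import numpy as np

def solve_lcp(B, q, tol=1e-9):
    """Find w, y with w - B y = q, w >= 0, y >= 0, w_j y_j = 0
    by trying every complementary index set (2^l cases, small l only)."""
    B, q = np.asarray(B, float), np.asarray(q, float)
    l = q.size
    for mask in itertools.product([False, True], repeat=l):
        idx = [j for j in range(l) if mask[j]]   # indices where y_j is basic (w_j = 0)
        y = np.zeros(l)
        if idx:
            try:
                y[idx] = np.linalg.solve(B[np.ix_(idx, idx)], -q[idx])
            except np.linalg.LinAlgError:
                continue
        w = B @ y + q                             # from w - B y = q
        if (y >= -tol).all() and (w >= -tol).all():
            return w, y                           # complementarity holds by construction
    return None  # no solution found

# With B = I and q = (-1, 2): y = (1, 0), w = (0, 2) satisfies all conditions.
w, y = solve_lcp(np.eye(2), [-1.0, 2.0])
```

For a positive definite B the loop always finds exactly one such pair, matching the uniqueness statement above.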

3   Mathematical Model of the Transportation Problem
Let us consider the classical transportation problem: minimize the total cost
                                     ∑_{i=1}^{m} ∑_{j=1}^{n} cij xij → min                                         (1)

for specified volumes of homogeneous product transportation X = {xij }, i = 1, . . . , m, j = 1, . . . , n, with restrictions:

                                     ∑_{j=1}^{n} xij = ai , i = 1, . . . , m,                                      (2)

                                     ∑_{i=1}^{m} xij = bj , j = 1, . . . , n,                                      (3)

                                     xij ≥ 0, i = 1, . . . , m, j = 1, . . . , n.                                  (4)
   Conditions (2) determine the distribution of the transportation X in accordance with the reserves a = (a1 , a2 , . . . , am ).
Conditions (3) ensure the fulfillment of the demand b = (b1 , b2 , . . . , bn ).
   We relate the transportation plan matrix X and the cost matrix C to the transportation plan vector x and
the cost vector c by stacking the rows of each matrix into a vector:

                                          xij = x(i−1)n+j , cij = c(i−1)n+j .

Then the group of relations (1), (2), (4) can be written as

                                              cx → min, Ax = a, x ≥ 0,

where the matrix A corresponds to constraints (2).
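The row-stacking correspondence between the matrix and vector forms can be illustrated with NumPy's reshape on a small hypothetical instance:

```python
import numpy as np

# Hypothetical 2x3 plan: the rule x_ij = x_{(i-1)n+j} is row-stacking.
X = np.array([[1, 2, 3],
              [4, 5, 6]])              # m = 2, n = 3
x = X.reshape(-1)                      # -> [1, 2, 3, 4, 5, 6]
assert x[(2 - 1) * 3 + 1 - 1] == X[1, 0]   # entry i = 2, j = 1 (1-based indices)
X_back = x.reshape(2, 3)               # the inverse rule recovers the matrix
```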

4   Two-stage Stochastic Transportation Problem
Let us now consider the stochastic formulation of the transportation problem (1) – (4).
   Let b = b(ω) be the demand, represented as a random variable, and let c = c(ω) be the cost of product
transportation, which is also random.
   The transportation problem with random demand can be presented as the following two-stage stochastic
programming problem [Zykina et al., 2016]:

                                          F (x) = M {cx + f (x, b)} → min,                                         (5)
                                                                              x∈D

                                          D = {x ∈ Rnm | Ax = a, x ≥ 0},                                           (6)
where
                                   f (x, b) = {p+ y + (x, b) + p− y − (x, b)} →  min  ,                            (7)
                                                                               y + ,y −




                              w+ − B + y + = q + , w+ ≥ 0, y + ≥ 0, wj+ yj+ = 0, j ∈ J + ,                      (8)

                              w− − B − y − = q − , w− ≥ 0, y − ≥ 0, wj− yj− = 0, j ∈ J − .                      (9)
We give the following meaningful interpretation in the terms of the transportation problem:
   – the components of the vectors q + and q − , with coordinates

                                        qj+ = ∑_{i=1}^{m} xij − bj , j ∈ J + ,

and
                                        qj− = bj − ∑_{i=1}^{m} xij , j ∈ J − ,

specify the amount of the deficit or excess of the resource that can occur for a given plan x ∈ D under the
realization of the random variable b;
   – the components of the vectors y + = y + (x, b) and y − = y − (x, b) are defined as the compensation plan for
the deficit or excess at each point of resource consumption;
   – the element bij of the matrix B + or B − specifies the amount of the resource purchased for the i-th consumer
under the compensation plan y = (0, ..., 1, ..., 0), where the positive unit is located in the j-th place;
   – the components of the vectors p+ and p− determine the fines for conducting corrective actions.
   For the solvability of the second-stage problem (7)–(9) for all realizations of the random variable b and
for any preliminary plan x ∈ D, it is necessary and sufficient that the matrices B + and B − be positive definite.
Under these conditions there exist unique solutions y + (x, b) and y − (x, b) of the complementarity problems
LCP(B + , q + ) and LCP(B − , q − ). In this case problem (5)–(9) is a nonlinear stochastic programming problem
with linear constraints of the form:

                                F (x) = M {cx + p+ y + (x, b) + p− y − (x, b)} → min,                          (10)
                                                                                          x∈D



                                              D = {x|Ax = a, x ≥ 0} ,                                          (11)
here y + (x, b), y − (x, b) are the unique solutions of the corresponding complementarity problems. The objective
function F (x) is the expectation of a random function that depends on a random vector from a certain probability
space. The feasible set D, D ̸= ∅, is a bounded convex polyhedral set.
   To solve problem (10) – (11), we can use methods of stochastic approximation [Sakalauskas, 2004].
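As a minimal illustration of the two-stage objective (10), the following sketch estimates F(x) by Monte Carlo under the simplifying assumption B+ = B− = I (so each complementarity problem has the closed-form solution y = max(0, −q)), with the random cost c fixed at its mean. The instance data in the example are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def estimate_F(X, C_mean, b_mean, b_var, p_plus, p_minus, N=20000):
    """Monte Carlo estimate of the two-stage objective with B+ = B- = I,
    so each LCP has the closed-form solution y = max(0, -q); the random
    cost c is fixed at its mean for simplicity."""
    supplied = X.sum(axis=0)              # sum_i x_ij delivered to consumer j
    transport = (C_mean * X).sum()        # first-stage transport cost
    total = 0.0
    for _ in range(N):
        b = rng.normal(b_mean, np.sqrt(b_var))   # one realization of demand
        diff = supplied - b
        y_plus = np.maximum(0.0, -diff)   # deficit compensation, solves LCP(I, q+)
        y_minus = np.maximum(0.0, diff)   # excess compensation, solves LCP(I, q-)
        total += transport + p_plus @ y_plus + p_minus @ y_minus
    return total / N

# Supply exactly matches mean demand; with unit fines and unit demand
# variance the expected penalty is roughly 2 * sqrt(2/pi) per consumer pair.
X = np.array([[10.0, 0.0], [0.0, 10.0]])
est = estimate_F(X, np.zeros((2, 2)), np.array([10.0, 10.0]),
                 np.array([1.0, 1.0]), np.ones(2), np.ones(2))
```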

5    Notation
Let us define the transformation of a matrix Hm×n into a vector h1×nm :

                                        h(i−1)n+j = Hij , i = 1, . . . , m, j = 1, . . . , n.                  (12)
The inverse transformation of the vector h1×nm into the matrix Hm×n is:

                                        Hij = h(i−1)n+j , i = 1, . . . , m, j = 1, . . . , n.                  (13)
    The vectors q s+ and q s− can be presented as:

                                         qjs+ (X k ) = ∑_{i=1}^{m} xkij − bsj , j ∈ J s+ ,                     (14)

                                         qjs− (X k ) = bsj − ∑_{i=1}^{m} xkij , j ∈ J s− .                     (15)

    Problem LCP(B, q):
                               w − Bl×l y = q, wj ≥ 0, yj ≥ 0, wj yj = 0, j = 1, . . . , l.                    (16)




6      Algorithm
6.1     Starting
1) Input: a matrix M C of dimension m × n containing the mean values of the transport costs, a matrix DC of
dimension m × n containing their variances, a vector M b of mean demand values at the points j = 1, . . . , n, a
vector Db of demand variances, and a vector a of warehouse stocks, i = 1, . . . , m.
   2) Let ε̂, ρ̂ and δ be given positive constants: ε̂ > 0, ρ̂ > 0, δ > 0.
   3) Define the matrix of corrective measures Bn×n and the vectors p+ and p− (of dimension n) of fines for
conducting corrective actions.
   4) Select an initial sample size N 0 for the Monte Carlo estimates.
   5) Transform the matrix M C into a vector c̄ by the rule (12).
   6) Find the initial approximation x0 ∈ Rnm as a solution of the linear programming problem:

                                                      c̄x → min, Ax = a, x ≥ 0.                                                         (17)
     7) Set k = 0 and turn to the main step.
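Since problem (17) only carries the row constraints (2), it decomposes warehouse by warehouse: each warehouse ships its entire stock to its cheapest consumer. This hypothetical sketch exploits that structure instead of calling a general LP solver:

```python
import numpy as np

def initial_plan(MC, a):
    """Solve (17) for the row-structured constraints (2): since each
    row i only requires sum_j x_ij = a_i, the LP decomposes row-wise
    and the optimum ships all stock a_i to the cheapest consumer."""
    m, n = MC.shape
    X0 = np.zeros((m, n))
    X0[np.arange(m), MC.argmin(axis=1)] = a   # cheapest column per row
    return X0.reshape(-1)                      # vectorize by rule (12)

# Hypothetical instance: m = 2 warehouses, n = 2 consumers.
MC = np.array([[4.0, 6.0],
               [5.0, 3.0]])
x0 = initial_plan(MC, np.array([30.0, 20.0]))  # -> [30, 0, 0, 20]
```

The resulting plan typically ignores the demand side entirely; the compensation stage of the algorithm is what corrects for that.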

6.2     Main Steps
Step 1.

    1. Let xk , N k and ∆k > 0 be given. Transform the vector xk into the matrix X k by the rule (13).
    2. Generate N k realizations of the random vector b: b1 , b2 , . . . , bN k , and of the random matrix C:
       C 1 , C 2 , . . . , C N k .
    3. For each realization s = 1, . . . , N k of the random vector b:
        (a) form the index sets J s+ and J s− from the elements j = 1, . . . , n according to the following rule:
            if
                                                  ∑_{i=1}^{m} xkij < bsj ,

            then the corresponding index j is placed in the set J s+ ;
            if
                                                  ∑_{i=1}^{m} xkij > bsj ,

            then the corresponding index j is placed in the set J s− ;
        (b) form the matrix B s+ from the elements of the matrix B at the intersection of the rows and columns
            with numbers from the set J s+ , and similarly form the matrix B s− using the index set J s− ;
        (c) form the vectors:
                                                   pjs+ , j ∈ J s+ ,

                                                   pjs− , j ∈ J s− ;

        (d) form the vectors q s+ (X k ) and q s− (X k ) in accordance with (14) and (15), respectively;
        (e) for each t = 1, . . . , nm:
               i. transform the vector (xk1 , xk2 , . . . , xkt + ∆k , . . . , xknm ) into the matrix Xtk by the rule (13);
              ii. compute y s+ (X k , bs ) as a solution of the complementarity problem LCP(B s+ , q s+ (X k ));
             iii. compute y s− (X k , bs ) as a solution of the complementarity problem LCP(B s− , q s− (X k ));
             iv. compute y s+ (Xtk , bs ) as a solution of the complementarity problem LCP(B s+ , q s+ (Xtk ));
              v. compute y s− (Xtk , bs ) as a solution of the complementarity problem LCP(B s− , q s− (Xtk ));
             vi. calculate coordinate t of the stochastic gradient g s (X k , bs ) by the formula:

                 gts (X k , bs ) = c̄t + ps+ (y s+ (Xtk , bs ) − y s+ (X k , bs ))/∆k + ps− (y s− (Xtk , bs ) − y s− (X k , bs ))/∆k ;




        (f) calculate the values:

                 f (X k , C s , bs ) = ∑_{i=1}^{m} ∑_{j=1}^{n} csij xkij + ps+ y s+ (X k , bs ) + ps− y s− (X k , bs );

    4. after calculating all N k values of the objective function f (X k , C s , bs ) and the stochastic gradients
       g s (X k , bs ), find the following estimates:

                                   F̃ (X k ) = (1/N k ) ∑_{s=1}^{N k} f (X k , C s , bs ),

                              D̃ 2 (X k ) = (1/(N k − 1)) ∑_{s=1}^{N k} (f (X k , C s , bs ) − F̃ (X k ))2 ,

    5. find the estimate of the gradient of the objective function:

                                  ∇F̃ (X k ) = (1/N k ) ∑_{s=1}^{N k} g s (X k , bs ),

    6. find the sample covariance matrix:

                 Q(X k ) = (1/N k ) ∑_{s=1}^{N k} (g s (X k , bs ) − ∇F̃ (X k ))(g s (X k , bs ) − ∇F̃ (X k ))T ;

    7. determine Φγ as the γ-quantile of the Fisher distribution with (N k − n, n) degrees of freedom, and ηβ as
       the β-quantile of the standard normal distribution;

    8. if (N k − n)(∇F̃ (X k ))T (Q(X k ))−1 ∇F̃ (X k ) ≤ Φγ and 2ηβ D̃(X k )/√N k ≤ δ, then STOP. Otherwise go
       to Step 2.
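Items 3(a)–3(e) of Step 1 can be sketched for one realization b^s as follows, under the simplifying assumption B+ = B− = I so that each LCP solves in closed form; the function name, the finite-difference helper, and the test instance are illustrative only:

```python
import numpy as np

def stochastic_gradient(x, b, c_bar, p_plus, p_minus, m, n, delta=1e-4):
    """One-sample stochastic gradient g^s(X^k, b^s), items 3(a)-3(e),
    under the simplifying assumption B+ = B- = I, so that each LCP
    reduces to the closed form y = max(0, -q)."""
    def penalty(x_vec):
        X = x_vec.reshape(m, n)          # rule (13)
        diff = X.sum(axis=0) - b         # q^{s+} on the deficit set, -q^{s-} on the excess set
        y_plus = np.maximum(0.0, -diff)  # solves LCP(I, q^{s+})
        y_minus = np.maximum(0.0, diff)  # solves LCP(I, q^{s-})
        return p_plus @ y_plus + p_minus @ y_minus

    g = np.empty(m * n)
    base = penalty(x)
    for t in range(m * n):               # finite difference in coordinate t, step Delta^k
        x_t = x.copy()
        x_t[t] += delta
        g[t] = c_bar[t] + (penalty(x_t) - base) / delta
    return g

# Hypothetical instance: m = 1, n = 2, empty plan, unit fines.
g = stochastic_gradient(np.zeros(2), np.array([1.0, 1.0]),
                        np.array([0.5, 0.5]), np.ones(2), np.ones(2),
                        m=1, n=2)
```

Shipping one more unit reduces the deficit fine by one while costing c̄_t, so both coordinates come out negative here, pointing the descent toward increasing supply.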
 Step 2.
1. Calculate the value

                 εxk (∇F̃ (X k )) = ε̂ max { min{xkt , ρ̂ ∇F̃t (X k )} : ∇F̃t (X k ) ≥ 0, 1 ≤ t ≤ nm }.

2. Find the ε-feasible direction dk by solving the following problem:

                                             ∥d − ∇F̃ (X k )∥ → min,

                                                     Ad = 0,

                                                dj ≤ 0, j ∈ J k ,

   where J k is the set of indices j for which the inequality

                                            0 ≤ xkj ≤ εxk (∇F̃ (X k ))

   holds.
3. Compute ρk . If there exists t for which dkt > 0, then

                                ρk = min{ ρ̂, min{ xkt /dkt : dkt > 0, 1 ≤ t ≤ nm } },

   else ρk = ρ̂.


4. Find
                                               xk+1 = xk − ρk dk
   and
                                    N k+1 = ρ̂ Φγ / (ρk (dk )T (Q(X k ))−1 dk ).                               (18)

5. Replace k by k + 1 and go to step 1.
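Items 3 and 4 of Step 2 can be sketched as follows, with Φγ and the covariance matrix Q(X^k) taken as given inputs; the function name and test values are ours:

```python
import numpy as np

def step2_update(x, d, Q, rho_hat, Phi_gamma):
    """Items 3-4 of Step 2: the feasibility-preserving step size rho^k
    and the next Monte Carlo sample size N^{k+1} from rule (18)."""
    pos = d > 0
    if pos.any():                                # keep x - rho*d nonnegative
        rho_k = min(rho_hat, (x[pos] / d[pos]).min())
    else:
        rho_k = rho_hat
    x_next = x - rho_k * d
    N_next = int(np.ceil(rho_hat * Phi_gamma /
                         (rho_k * (d @ np.linalg.solve(Q, d)))))
    return x_next, rho_k, N_next

# Hypothetical values: the step is capped at x_1 / d_1 = 2.
x_next, rho_k, N_next = step2_update(np.array([2.0, 3.0]),
                                     np.array([1.0, -1.0]),
                                     np.eye(2), 5.0, 4.0)
```

Note how (18) grows the sample as the direction shrinks: a small d^T Q^{-1} d (a gradient estimate drowning in noise) forces a larger N^{k+1} at the next iteration.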




7   Convergence
Convergence of the proposed algorithm is ensured [Sakalauskas, 2004] by determining the sample size from
condition (18). For this, the objective function (10) must be differentiable and its gradient must satisfy the
Lipschitz condition. As mentioned above, when the positive definite matrices B + and B − are chosen, there exist
unique solutions y + (x, b) and y − (x, b) of the corresponding complementarity problems. In this case it is possible
to obtain a linear dependence for the functions y + (x, b) and y − (x, b) [Kaneva, 2005]. This fact ensures the
differentiability of the objective function (10) and the Lipschitz condition for its gradient.

8   Conclusion
The effectiveness of the proposed numerical method for solving the stochastic transportation problem is due to
the following features:
   – We use a two-stage scheme in which the choice of the compensation plan at the second stage satisfies the
conditions of a linear complementarity problem. This allows us to compensate not only in the case when the
demand is not satisfied, but also when the supply exceeds the demand. In addition, using the complementarity
problem provides compensation on the limit of compatibility, which allows us to reduce the costs of compensation.
   – The stated conditions on the choice of the compensation scheme (the positive definiteness of the matrices in
the complementarity problems) ensure the uniqueness of the complementarity problem solutions. This fact allows
us to use the solutions of the complementarity problems to construct the stochastic gradient.

Acknowledgements
This work was supported by the Russian Foundation for Basic Research, Project 15-41-04436-a-Siberia, and by
the Ministry of Education and Science of Russia, Project 2.9314.2017.

References
[Koopmans, 1949] Koopmans, T.C. (1949). Optimum utilization of the transportation system. Econometrica, 3-4.
[Williams, 1963] Williams, A. C. (1963). A stochastic transportation problem. Operations Research, 759 – 770.

[Biswal et al., 2013] Biswal, M. P. and Samal, H. K. (2013). Stochastic Transportation Problem with Cauchy
          Random Variables and Multi Choice Parameters. Journal of Physical Sciences, 17, 117 - 130.

[Judin, 2010] Judin, D.B. (2010). Mathematical methods of control under incomplete information. Objectives and
          methods of stochastic programming. Moscow: Krasand.

[Kibzun et al., 2016] Kibzun, A.I. and Khromova, O.M. (2016). Mathematical modelling of a transport system
         with minimal maintenance costs. Vestn. YUUrGU. Ser. Matem. Modelirovanie i Programmirovanie,
         9:3, 41 - 54. DOI: 10.14529/mmp160304

[Zykina et al., 2016] Zykina, A.V.& Kaneva, O.N. (2016). Stochastic optimization models in transportation lo-
         gistics. Sovremennye informacionnye tekhnologii i IT-obrazovanie, 12(1), 251 – 255.

[Sakalauskas, 2004] Sakalauskas, L. (2004). Application of the Monte-Carlo Method to Nonlinear Stochastic
         Optimization with Linear Constraints. Informatica, 2, 271 - 282.

[Bazaraa et al., 1998] Bazaraa, M. S. & Shetty, C. (1998). Nonlinear Programming. Theory and Algorithms. New
         York: Wiley.

[Kaneva, 2005] Kaneva, O.N. (2005). Non-linear two-stage problem of stochastic programming with a determin-
         istic matrix correction. Mathematical structures and modeling , 25 - 33.



