=Paper= {{Paper |id=Vol-2851/paper11 |storemode=property |title=Efficient Algorithms of Linear Optimization Problems Solution |pdfUrl=https://ceur-ws.org/Vol-2851/paper11.pdf |volume=Vol-2851 |authors=Sergiy Chernov,Sergiy Titov,Ludmila Chernova,Nataliia Kunanets,Varvara Piterska,Lubava Chernova,Yuriy Shcherbyna,Lubomyr Petryshyn |dblpUrl=https://dblp.org/rec/conf/itpm/ChernovTCKPCSP21 }}
Efficient Algorithms of Linear Optimization Problems Solution
Sergiy Chernov^a, Sergiy Titov^a, Ludmila Chernova^a, Nataliia Kunanets^{b,c}, Lubava Chernova^a,
Varvara Piterska^d, Yuriy Shcherbyna^c and Lubomyr Petryshyn^e

^a Admiral Makarov National University of Shipbuilding, Heroiv Ukrainy ave 9, Mykolaiv, 54025, Ukraine
^b Lviv Polytechnic National University, Stepana Bandery Street 32-a, Lviv, 79013, Ukraine
^c Ivan Franko National University of Lviv, Universytetska Street 1, Lviv, 79000, Ukraine
^d Odesa National Maritime University, Mechnikova Street 34, Odesa, 65029, Ukraine
^e AGH University of Science and Technology, 30 Mickiewicza str., Cracow, 30-059, Poland


        Abstract
        The modern approaches to selecting a project lifecycle model are in many cases based on the
        use of iterations that can be reduced to linear optimization (LO) problems. To study and solve
        such problems, researchers use the program libraries of well-known software packages such as
        Mathematica®, Maple® and MathCad®. Computer-aided calculation makes it possible to solve
        complicated types of combinatorial integer linear optimization problems, including large-scale
        ones. The known methods of exact or approximate solution of such problems are studied with
        account taken of the fact that they belong to the so-called P or NP class (polynomial or
        exponential solution algorithms). In view of this, modern computer-based methods for the
        practical solution of discrete linear optimization problems require algorithms that yield an
        approximate solution with a guaranteed estimate of the deviation from the optimum of the
        target function. For such problems, it is very important to improve (simplify) the mathematical
        model itself, which has to be prepared before computer calculation begins. For a wide range of
        LO problems, this practical need motivates the development of new efficient algorithms, and
        the improvement of existing ones, for preparing problems for computer calculation. Applying
        such algorithms considerably reduces both the duration of computer calculation and the
        hardware requirements of the computer. The subject of this research is the development of a
        chain of efficient algorithms that simplify the initial mathematical model of a problem and its
        computer calculation. The objective of this research is the construction of efficient algorithms
        and general principles of preparing LO problems for computer solution, with illustrations on
        various project management model problems.

        Keywords
        Linear optimization, polyhedron, target function, simplex method, basis vectors, primary plan,
        reference plan, polyhedron apex, reduction, duality.

1         Introduction
   Quality is managed during the whole project lifecycle, and all of its stages and elements have to
be analyzed. This process provides for project, organizational and management decisions aimed at
assuring the quality of the processes fulfilled in the course of project implementation. Quality
management is realized through compliance with specified requirements and international standards
for the quality of information systems, and through a system of quality control and support in the
development process. Efficient construction of a quality management model relies on the use of
various project management approaches [16-20].
   Modern mathematical models describing project management processes can be reduced in many
cases to linear optimization (LO) problems [1,2]. To study and solve such problems, researchers

Proceedings of the 2nd International Workshop IT Project Management (ITPM 2021), February 16-18, 2021, Slavsko, Lviv region, Ukraine
EMAIL: 19chsk56@gmail.com (A.1); ss1-ss10@rambler.ru (A.2); lyudmylachernova@gmail.com (A.3); nek.lviv@gmail.com (A.4);
19chls92@gmail.com (A.5); varuwa@ukr.net (A.6); yshcherbyna@yahoo.com (A.7); l.b.petryshyn@gmail.com (A.8)
ORCID: 0000-0002-9069-0409 (A.1); 0000-0001-8772-9889 (A.2); 0000-0002-0666-0742 (A.3); 0000-0003-3007-2462 (A.4); 0000-0001-
7846-9034 (A.5); 0000-0001-5849-9033 (A.6); 0000-0002-4942-2787 (A.7); 0000-0003-4168-3891 (A.8)
            ©️ 2021 Copyright for this paper by its authors.
            Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
            CEUR Workshop Proceedings (CEUR-WS.org)
use the subprogram libraries of such well-known software packages as Mathematica®, Maple® and
MathCad®. Computer-aided calculation makes it possible to solve complicated types of combinatorial
integer linear optimization problems, including large-scale ones [10,12].
   For such problems, it is very important to improve (simplify) the mathematical model itself before
computer calculation begins. For a wide range of LO problems, this practical need motivates the
development of new efficient algorithms, and the improvement of existing ones, for preparing
problems for computer calculation [15]. Applying such algorithms considerably reduces both the
duration of computer calculation and the hardware requirements of the computer.
   The subject of this research is the development of a chain of efficient algorithms that simplify the
initial mathematical model of a problem and its computer calculation.

2           Review of related research and problem statement
    In many cases, mathematical models of active systems management are interpreted in the form of
linear optimization problems [1–3,12,13].
    Preparing LO problems for solution can be simplified by using the notion of “duality”. A couple
of dual problems formed according to certain rules allows a researcher to select the one that is
simpler for computer calculation.
    Simplification algorithms provide an efficient method of searching for the solution of an
optimization problem. If we project a multidimensional process onto a two-dimensional plane, this
method enables graphic visualization of the sets of problem solutions. This research proposes a
method of simplifying the combinatorial solution of a discrete optimization problem. It is based on
decomposition of the system of constraints of a five-dimensional initial problem onto a
two-dimensional coordinate plane. Such a method yields a simple system of graphic solutions to a
complicated linear discrete optimization problem. From the practical point of view, the proposed
method reduces the computational complexity of optimization problems belonging to this class.
    The solution of linear optimization problems is based on the algorithm of the classic, or common,
simplex method. It consists in intelligent iteration over the apexes of the polyhedron $\Omega_I$ (the
allowable area of the optimization problem). A plan, i.e. an apex of the polyhedron $\Omega_I$, is
specified by a system of $n$ basis vectors $a_1, a_2, \dots, a_n$. The number of possible apexes of
the polyhedron equals the number of combinations $C_n^m$ ($n$ is the problem dimension, and
$m = \operatorname{rank}(\Omega_I)$). Real linear optimization problems that interpret models of
management are characterized by big values of $m$. In view of this, an algorithm ensuring ordered
iteration over the angular points of the polyhedron had to be developed. Such a method was
developed [1,2] and is called the simplex method. It obtains the optimum solution of the optimization
problem from a known primary reference plan $X_0$ within a finite number of steps. Each iteration
step of the simplex method corresponds to a new plan that improves the target function value. The
algorithmic process continues until the optimum value of the target function is found or the absence
of a solution of the optimization problem is established.
   The number of simplex method iterations is determined by the primary reference plan $X_0$ and
the number of angular points of $\Omega_I$. As there are several “ways” of transition from $X_0$ to
the optimum $X_{opt}$, we encounter the need to find the shortest (in terms of the number of apexes)
“way” of iteration. So far there are no publications with such assessments and their correlation to the
classic simplex method algorithm.
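To make the count $C_n^m$ concrete, the following sketch (an illustration added here, not part of the original calculation) enumerates all basic solutions of a small equality-form system by brute force; the data is the three-equation, five-variable system of Model Example No. 3 below, so $C_5^3 = 10$ candidate bases have to be inspected.

```python
from itertools import combinations

def solve_square(A, b):
    """Gaussian elimination with partial pivoting for a small square system;
    returns None if the chosen basis matrix is singular."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if abs(M[piv][col]) < 1e-12:
            return None
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * p for a, p in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def basic_feasible_solutions(A, b):
    """Brute-force iteration over all C(n, m) choices of basis columns."""
    m, n = len(A), len(A[0])
    for basis in combinations(range(n), m):
        xb = solve_square([[row[j] for j in basis] for row in A], b)
        if xb is None or any(v < -1e-9 for v in xb):
            continue                      # singular basis or infeasible point
        x = [0.0] * n
        for j, v in zip(basis, xb):
            x[j] = v
        yield basis, x

# Equality system of model problem (15): 3 equations, 5 unknowns.
A = [[10, 10, 1, 1, 1],
     [19, 14, 1, 2, 2],
     [ 4,  5, 0, 1, 0]]
b = [179, 298, 69]
verts = list(basic_feasible_solutions(A, b))
```

The simplex method replaces this exhaustive inspection by an ordered walk over a small subset of these apexes, which is what makes big values of $m$ tractable.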

3           The objective and the tasks of research
   The research objective provides for the use and development of efficient algorithms for preparing
mathematical models of the LO theory, with subsequent realization of their solution with the help of a
computer. To achieve the stated objective, the following tasks were specified:
      • Provide the general problem statement for the construction of efficient algorithms;
      • Give model examples illustrating the efficiency of the algorithms at the stage of computer
          calculation.
4 General statement of a solution simplification problem. Use of the duality
  theory in Linear Optimization problems
    The intensity in development of modern information technologies results in complication of
information systems for various sectors of public activities, implementation of IT projects on their
development. Big IT projects are characterized by:
        • complexity of description, the need for thorough modeling and analysis of data and
             processes;
        • a big number of interlinks between the components (subsystems);
        • the impossibility to use typical project solutions;
        • the need for agreement with applications available in the market;
        • different levels of developers’ qualification and a wide range of tools and means of
             development;
        • growing project scale through involvement of developer teams.
    The modern approaches to the construction of information systems are determined by the
complexity of defining the total scope of data and analytical tasks of the project. Developers have to
define and analyze the requirements to an information system during the whole lifecycle of its
development. Certainly, the project lifecycle model depends on the complexity of the software
product to be created. For the development of complicated information systems, the iteration model is
used most frequently; it divides the project lifecycle into a sequence of iterations, each of which is
presented as a “mini-project”. The objective of each iteration is to obtain a functional version of the
information system that includes the supplements and modifications realized at previous iterations.
The final iteration result includes all the necessary product functionality. After each iteration, the
information system acquires some growth, i.e. some evolutionary development.
    Projects on the development of complicated information systems whose architecture includes a
set of components, each of which can have several versions, encounter the problem of studying their
interrelation with the purpose of creating a unified structure and assuring the development of the
whole system. The said model contributes to a systemic approach to information system
development, with due control and account taken of changes introduced at all stages of the lifecycle.
A reduction in the number of iterations is provided by the use of linear optimization problems.
Linear optimization, long established as a separate part of optimization theory, in most cases applies
sustainable classic algorithms for the solution of its problems [1–6]. Typical problems include such
steps of a classic algorithm as: obtaining the primary reference plan, constructing a chain of reference
plans, evaluating their optimality, and improving the plan and the target function value [7–11]. Each
of the reference plans has a set of linearly independent basis vectors. The transition to a new basis,
which is nothing more than a transition to the neighboring polyhedron apex along its edge, is effected
within a rigorous algorithm. According to the theory of the simplex method algorithm, this transition
is to be carried out in the direction of the best alteration of the target function values [12,13]. Such a
“firm” calculation requirement results, in some linear optimization problems, in the appearance of too
many iterations as compared with a transition not to the neighboring apex but to another one
determined by additional requirements.
Algorithms of preparing a LO problem for computer calculation can be essentially simplified by
using the notion of “duality” in LO problems [9,10]. The existing patterns of transition from a primal
problem to a dual one enable easy creation of a couple of dual problems.
Formalizing the classic pattern of constructing a dual problem, we obtain couples of mutually dual
problems. This, in its turn, allows selecting the problem that is simplest in terms of calculation for
further solution. Let us consider the most common case of presenting a primal LO problem in the
form of a general LO problem
$$
W_I = CX \to \max,
\qquad
\mathrm{I}:\ \Omega_I:\
\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}
\begin{pmatrix} X_1 \\ X_2 \end{pmatrix}
\begin{matrix} \le \\ = \end{matrix}
\begin{pmatrix} B_1 \\ B_2 \end{pmatrix},
\qquad
x_j \ge 0,\quad j = 1, 2, \dots, l,
\tag{1}
$$
   or in extended form of record
$$
W_I = \sum_{j=1}^{n} c_j x_j \to \max,
\qquad
\mathrm{I}:\ \Omega_I:\
\begin{cases}
\sum\limits_{j=1}^{n} a_{ij} x_j \le b_i, & i = 1, 2, 3, \dots, k,\\
\sum\limits_{j=1}^{n} a_{ij} x_j = b_i, & i = k+1, k+2, \dots, m,\\
x_j \ge 0, & j = 1, 2, \dots, l.
\end{cases}
\tag{2}
$$
  A problem of the following form is called dual to it:

$$
W_{II} = YB \to \min,
\qquad
\mathrm{II}:\ \Omega_{II}:\
(Y_1\ \ Y_2)
\begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}
\begin{matrix} \ge \\ = \end{matrix}
(C_1\ \ C_2),
\qquad
y_i \ge 0,\quad i = 1, 2, \dots, k,
\tag{3}
$$
  or in other form of record
$$
W_{II} = \sum_{i=1}^{m} b_i y_i \to \min,
\qquad
\mathrm{II}:\ \Omega_{II}:\
\begin{cases}
\sum\limits_{i=1}^{m} y_i a_{ij} \ge c_j, & j = 1, 2, 3, \dots, l,\\
\sum\limits_{i=1}^{m} y_i a_{ij} = c_j, & j = l+1, l+2, \dots, n,\\
y_i \ge 0, & i = 1, 2, \dots, k.
\end{cases}
\tag{4}
$$

4.1       Model Example No. 1
  A dual problem is to be constructed for the following primal linear optimization problem:

$$
W_I = x_1 + 2x_2 - x_3 + 7x_4 - x_5 \to \max,
\qquad
\mathrm{I}:\ \Omega_I:\
\begin{cases}
3x_1 + 2x_2 - x_3 + 2x_4 + x_5 \ge 12,\\
x_1 - x_2 + 8x_3 - x_4 + 7x_5 \le 1,\\
x_1 + 4x_2 + x_3 - 2x_4 - x_5 = 8,\\
x_1 \ge 0,\ x_2 \ge 0,\ x_3 \ge 0,\ x_4 \ge 0.
\end{cases}
\tag{5}
$$

   S o l u t i o n . To begin the transition to the dual problem, we prepare the system of constraints of
the primal problem: for a maximization problem, we only need inequalities of the form ≤. We
therefore reverse the sign of the first inequality.

$$
W_I = x_1 + 2x_2 - x_3 + 7x_4 - x_5 \to \max,
\qquad
\mathrm{I}:\ \Omega_I:\
\begin{cases}
-3x_1 - 2x_2 + x_3 - 2x_4 - x_5 \le -12,\\
x_1 - x_2 + 8x_3 - x_4 + 7x_5 \le 1,\\
x_1 + 4x_2 + x_3 - 2x_4 - x_5 = 8,\\
x_1 \ge 0,\ x_2 \ge 0,\ x_3 \ge 0,\ x_4 \ge 0.
\end{cases}
\tag{6}
$$
  Transition to dual problem is presented in Table 1.

Table 1.
Transition to the dual problem for Model Example No. 1

      Y \ X      x1 ≥ 0   x2 ≥ 0   x3 ≥ 0   x4 ≥ 0   x5 free      B
      y1 ≥ 0       –3       –2        1       –2       –1        –12
      y2 ≥ 0        1       –1        8       –1        7          1
      y3 free       1        4        1       –2       –1       =  8
      sign          ≥        ≥        ≥        ≥        =
      C             1        2       –1        7       –1

   The dual problem has the following form:

$$
W_{II} = -12y_1 + y_2 + 8y_3 \to \min,
\qquad
\mathrm{II}:\ \Omega_{II}:\
\begin{cases}
-3y_1 + y_2 + y_3 \ge 1,\\
-2y_1 - y_2 + 4y_3 \ge 2,\\
y_1 + 8y_2 + y_3 \ge -1,\\
-2y_1 - y_2 - 2y_3 \ge 7,\\
-y_1 + 7y_2 - y_3 = -1,\\
y_1 \ge 0,\ y_2 \ge 0.
\end{cases}
\tag{7}
$$

   The dual problem can be reduced to a two-dimensional one: if $y_3$ is excluded by means of the
equation of the system, only two variables remain, $y_1$ and $y_2$. The solution of the primal
problem is then recovered using the known theorems of duality [4,5].
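The table transition above is, in essence, a transposition. As a sketch (reusing the coefficients of the prepared system (6)), the dual constraint matrix can be obtained mechanically:

```python
# Rows of the prepared primal system (6): two inequality rows, one equality row.
A = [[-3, -2, 1, -2, -1],   # <= -12  ->  dual variable y1 >= 0
     [ 1, -1, 8, -1,  7],   # <=   1  ->  dual variable y2 >= 0
     [ 1,  4, 1, -2, -1]]   # =    8  ->  dual variable y3 sign-free
b = [-12, 1, 8]             # becomes the dual objective W_II = -12y1 + y2 + 8y3
c = [1, 2, -1, 7, -1]       # becomes the dual right-hand sides

# Columns of the primal become rows of the dual: the j-th dual constraint is
# sum_i y_i * a_ij >= c_j for x_j >= 0 (j = 1..4) and "=" for the free x5.
dual_rows = [list(col) for col in zip(*A)]
```

Reading `dual_rows` against `c` row by row reproduces the dual system obtained from Table 1, with the free variable $x_5$ producing the single equality constraint.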

4.2       Reduction in general Linear Optimization problems
   Let us have a general linear optimization problem in the form:
$$
W_I = \sum_{j=1}^{n} c_j x_j \to \max,
\qquad
\Omega_I:\
\begin{cases}
\sum\limits_{j=1}^{n} a_{ij} x_j \le b_i, & i = 1, \dots, k,\\
\sum\limits_{j=1}^{n} a_{ij} x_j = b_i, & i = k+1, \dots, m,\\
x_j \ge 0, & j = 1, \dots, l.
\end{cases}
\tag{8}
$$
   We know that such a problem can always be reduced to the canonical form of record:

$$
W_I = \sum_{j=1}^{n} c_j x_j \to \max,
\qquad
\Omega_I:\
\begin{cases}
\sum\limits_{j=1}^{n} a_{ij} x_j = b_i, & i = 1, \dots, m,\\
x_j \ge 0, & j = 1, \dots, n.
\end{cases}
\tag{9}
$$

    We have to remember that linear optimization problems have equivalent forms that preserve the
set of solutions; equivalence is achieved by the transformation methods used for the transition from
one form of the problem to another. Thus, an equation in the system of constraints of a linear
optimization problem is equivalent to a system of two inequalities:

$$
\sum_{j=1}^{n} a_{ij} x_j = b_i
\;\Leftrightarrow\;
\begin{cases}
\sum\limits_{j=1}^{n} a_{ij} x_j \le b_i,\\
-\sum\limits_{j=1}^{n} a_{ij} x_j \le -b_i.
\end{cases}
\tag{10}
$$

   Variables with arbitrary sign can be presented as the difference of two nonnegative variables:

$$
x_j = u_j - v_j, \qquad u_j \ge 0,\quad v_j \ge 0.
\tag{11}
$$

   The transition from inequality constraints to equation constraints is made by adding a nonnegative
(balancing) variable:

$$
\sum_{j=1}^{n} a_{ij} x_j \le b_i
\;\Leftrightarrow\;
\sum_{j=1}^{n} a_{ij} x_j + x_{n+i} = b_i,
\qquad x_{n+i} \ge 0,\quad i = 1, \dots, k.
\tag{12}
$$

   For simplification of linear optimization problem transformations, we can also use the transition
from maximization to minimization of the target function and vice versa:

$$
W_I = \sum_{j=1}^{n} c_j x_j \to \max
\;\Leftrightarrow\;
-W_I = -\sum_{j=1}^{n} c_j x_j \to \min.
\tag{13}
$$
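Transformations (10)-(13) can be combined into a small routine. The sketch below (a hypothetical helper written for this text, covering the common case of a standard-form maximization problem) appends one balancing variable per inequality, as in (12), and switches max to min by negating the objective, as in (13):

```python
def to_canonical(c, A_ub, b_ub):
    """Convert  max c.x,  A_ub x <= b_ub,  x >= 0
    into       min c'.x', A_eq x' = b_eq,  x' >= 0
    by appending one balancing (slack) variable per row and negating c."""
    m = len(A_ub)
    A_eq = [row[:] + [1.0 if k == i else 0.0 for k in range(m)]
            for i, row in enumerate(A_ub)]       # [A | I]
    c_min = [-cj for cj in c] + [0.0] * m        # slacks do not enter the target
    return c_min, A_eq, list(b_ub)

# Example: max 2x1 + 3x2 with x1 + x2 <= 4 becomes
# min -2x1 - 3x2 + 0*x3 with x1 + x2 + x3 = 4.
c_min, A_eq, b_eq = to_canonical([2, 3], [[1, 1]], [4])
```

A free variable, if present, would additionally be split as in (11) before this step; the helper assumes all variables are already nonnegative.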

   In view of this, without loss of generality, let us have a linear discrete optimization problem
presented in canonical form:

$$
W_I = CX \to \max,
\qquad
\Omega_I:\ AX = B,\qquad X \ge 0,
\tag{14}
$$

   where the rank of the coefficient matrix of the system of constraints equals $\operatorname{rank} A = m$.
In this case, solving the system by the Gauss-Jordan method with an arbitrary basis combination of
variables, we obtain a projection of the n-dimensional initial problem onto an (n − m)-dimensional
space. In the case n − m = 2, we project onto the two-dimensional plane.
Let us consider a model example of solving a five-dimensional linear optimization problem based on
such a projection of the multidimensional space onto the two-dimensional one.
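The projection step can be sketched as follows (a minimal float-based Gauss-Jordan routine, assuming a non-degenerate choice of basis columns). Applied to system (15) with the basis $x_3, x_4, x_5$, it reproduces, up to row order, the two-variable constraints obtained in the model example below.

```python
def gauss_jordan(A, b, basis):
    """Reduce the augmented matrix [A | b] so that the columns listed in
    `basis` become unit columns (one pivot per row, in the given order)."""
    M = [row[:] + [float(bi)] for row, bi in zip(A, b)]
    for r, col in enumerate(basis):
        piv = next(i for i in range(r, len(M)) if abs(M[i][col]) > 1e-12)
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][col] for v in M[r]]     # normalize the pivot row
        for i in range(len(M)):
            if i != r:                           # eliminate the column elsewhere
                f = M[i][col]
                M[i] = [v - f * p for v, p in zip(M[i], M[r])]
    return M

# System (15) of the model example, basis variables x3, x4, x5 (columns 2, 3, 4).
A = [[10, 10, 1, 1, 1],
     [19, 14, 1, 2, 2],
     [ 4,  5, 0, 1, 0]]
b = [179, 298, 69]
M = gauss_jordan(A, b, basis=(2, 3, 4))
```

Dropping the unit basis columns from the result leaves exactly the two-variable coefficients of $x_1$ and $x_2$, i.e. the projection onto the plane $Ox_1x_2$.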

4.3           Model Example No. 3
   Let us solve the following linear optimization problem using the simplification algorithm:

$$
W_I = 13x_1 + 7x_2 + 2x_3 - x_4 + 2x_5 \to \max,
\qquad
\Omega_I:\
\begin{cases}
10x_1 + 10x_2 + x_3 + x_4 + x_5 = 179,\\
19x_1 + 14x_2 + x_3 + 2x_4 + 2x_5 = 298,\\
4x_1 + 5x_2 + x_4 = 69,\\
x_1 \ge 0,\ x_2 \ge 0,\ x_3 \ge 0,\ x_4 \ge 0,\ x_5 \ge 0.
\end{cases}
\tag{15}
$$

   S o l u t i o n . The system of constraints (15) consists of three independent equations. We move
from the canonical presentation form of the problem to the standard one. The transition is made by
solving system (15) by the Gauss-Jordan method (Table 2). As basis variables, we select the
following three: $x_3, x_4, x_5$.

Table 2
Calculation under basis x3, x4, x5

         x1     x2     x3     x4     x5       b      ∑
         10     10      1      1      1     179    202
         19     14      1      2      2     298    336
          4      5      0      1      0      69     79
  W_I    13      7      2     −1      2     151

         10     10      1      1      1     179    202
         −1     −6     −1      0      0     −60    −68
          4      5      0      1      0      69     79
  W_I    −7    −13      0     −3      0    −207

          9      4      0      1      1     119    134
          1      6      1      0      0      60     68
          4      5      0      1      0      69     79
  W_I    −7    −13      0     −3      0    −207

          5     −1      0      0      1      50     55
          1      6      1      0      0      60     68
          4      5      0      1      0      69     79
  W_I     5      2      0      0      0       0


   From the last block of Table 2, we obtain the solved system.

$$
\begin{cases}
4x_1 + 5x_2 + x_3 = 69,\\
5x_1 - x_2 + x_4 = 50,\\
x_1 + 6x_2 + x_5 = 60.
\end{cases}
\tag{16}
$$



   Dropping the basis variables, we perform a transition to two-dimensional inequalities. The
projection of the five-dimensional initial problem onto the coordinate plane $Ox_1x_2$, with the
reduced target function taken from the final row of Table 2, has the following form:

$$
W_I = 5x_1 + 2x_2 \to \max,
\qquad
\Omega_I^{Ox_1x_2}:\
\begin{cases}
4x_1 + 5x_2 \le 69,\\
5x_1 - x_2 \le 50,\\
x_1 + 6x_2 \le 60,\\
x_1 \ge 0,\ x_2 \ge 0.
\end{cases}
\tag{17}
$$

   The graphic solution is given in Figure 1.

Figure 1: Projection onto Ox1x2.
   The coordinates of the extreme apex provide the solution of the system

$$
X_{opt}^{\max}:\ \omega_1 \cap \omega_2:\
\begin{cases}
4x_1 + 5x_2 = 69,\\
5x_1 - x_2 = 50,
\end{cases}
\qquad\Rightarrow\qquad
x_1 = 11,\quad x_2 = 5.
\tag{18}
$$
    We can obtain the other coordinates from the solved system (16). Therefore, the optimum solution
of the initial problem equals

$$
X_{opt}^{\max} = (11,\ 5,\ 19,\ 0,\ 0).
\tag{19}
$$
    The biggest value of the target function will be $W_I^{\max} = 65$. Note: three basis variables
can be chosen from five in $C_5^3 = 10$ ways. Let us consider another possible combination in order
to reduce the problem. We select $x_2, x_3, x_4$ as our basis variables and solve the system of
constraints for them by the Gauss-Jordan method (Table 3), starting from system (16), which is
equivalent to the initial system (15).

Table 3
Calculation under basis x2, x3, x4

         x1     x2     x3     x4     x5       b      ∑
          5     −1      0      0      1      50     55
          1      6      1      0      0      60     68
          4      5      0      1      0      69     79
  W_I     5      2      0      0      0       0

         −5      1      0      0     −1     −50    −55
         31      0      1      0      6     360    398
         29      0      0      1      5     319    354
  W_I    15      0      0      0      2     100

   Table 3 provides the solved system with basis variables x2, x3, x4,

$$
\begin{cases}
-5x_1 - x_5 + x_2 = -50,\\
31x_1 + 6x_5 + x_3 = 360,\\
29x_1 + 5x_5 + x_4 = 319.
\end{cases}
\tag{20}
$$

   Ignoring the nonnegative basis variables $x_2, x_3, x_4$, we perform a transition to
constraints-inequalities. The projection of the five-dimensional polyhedron of initial problem (15)
onto the coordinate plane $Ox_1x_5$, i.e. an equivalent transition from the canonical form of the LO
problem to the standard one, looks like:

$$
W_I = 15x_1 + 2x_5 - 100 \to \max,
\qquad
\Omega_I^{Ox_1x_5}:\
\begin{cases}
-5x_1 - x_5 \le -50,\\
31x_1 + 6x_5 \le 360,\\
29x_1 + 5x_5 \le 319,\\
x_1 \ge 0,\ x_5 \ge 0.
\end{cases}
\tag{21}
$$

The graphic solution of the problem is given in Figure 2, where $\omega_1: 5x_1 + x_5 = 50$,
$\omega_2: 31x_1 + 6x_5 = 360$, $\omega_3: 29x_1 + 5x_5 = 319$, $\omega_4: x_5 = 0$,
$\omega_5: x_1 = 0$.

Figure 2: Projection onto Ox1x5.
The optimum apex has the coordinates $x_1 = 11$, $x_5 = 0$. We calculate the coordinates
$x_2, x_3, x_4$ from system (20). Finally, $X_{opt}^{\max} = (11,\ 5,\ 19,\ 0,\ 0)$. The obtained
optimum solution matches the previous one, which confirms that the calculations carried out are
correct.
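The optimum plan found twice above can also be verified directly against the original system (15); a minimal check:

```python
# Optimum plan of Model Example No. 3 and the data of system (15).
x = [11, 5, 19, 0, 0]
A = [[10, 10, 1, 1, 1],
     [19, 14, 1, 2, 2],
     [ 4,  5, 0, 1, 0]]
b = [179, 298, 69]

# Residual of every equation and the sign conditions.
residuals = [sum(a * v for a, v in zip(row, x)) - bi for row, bi in zip(A, b)]
feasible = all(r == 0 for r in residuals) and all(v >= 0 for v in x)
```

All residuals vanish and all components are nonnegative, so the plan obtained from either basis choice is indeed an apex of the initial five-dimensional polyhedron.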

4.4        Acceleration of the Linear Optimization problem solution convergence
    In most cases, the solution of linear optimization problems is searched for by the simplex method.
However, this classic algorithm of linear optimization problem solution may create additional
iterations in the immediate process of calculation. If we relax some components of the standard
simplex method algorithm, we can accelerate the convergence of the simplex calculation, i.e. reduce
the number of simplex tables.
    For acceleration of the simplex method convergence, it is proposed to deviate from the canonical
algorithm: as the next problem plan we choose not the neighboring apex, but a verified apex selected
according to an evaluation of the biggest and the smallest target function values. We consider the
general approach to the solution of a linear optimization problem according to the classic simplex
method algorithm [1,2,3]. Without loss of generality, let us have a linear optimization problem in the
standard form of record


$$
W_I = \sum_{j=1}^{n} c_j x_j \to \max,
\qquad
\Omega_I:\
\begin{cases}
\sum\limits_{j=1}^{n} a_{ij} x_j \le b_i, & i = 1, \dots, m,\\
x_j \ge 0, & j = 1, \dots, n,
\end{cases}
\tag{22}
$$

where $b_i \ge 0$, $i = 1, 2, \dots, m$.
Adding a balancing nonnegative variable $x_i \ge 0$, $i = n+1, n+2, \dots, n+m$, to each inequality
and rewriting the problem in vector form allows obtaining the canonical form of the optimization
problem record:

$$
W_I = (c, x) \to \max,
\qquad
\Omega_I:\ (a_j, x) = b,\qquad x \ge 0,
\tag{23}
$$
    or in extended form of record:

$$
x_1 a_1 + x_2 a_2 + \dots + x_n a_n + x_{n+1} a_{n+1} + \dots + x_{n+m} a_{n+m} = b,
\tag{24}
$$

    where

$$
x = (x_1, x_2, \dots, x_{n+m})^T,\qquad c = (c_1, c_2, \dots, c_n),
$$
$$
a_1 = (a_{11}, a_{21}, \dots, a_{m1})^T,\quad
a_2 = (a_{12}, a_{22}, \dots, a_{m2})^T,\quad \dots,\quad
a_n = (a_{1n}, a_{2n}, \dots, a_{mn})^T,
$$
$$
a_{n+1} = e_{n+1} = (1, 0, \dots, 0)^T,\quad
a_{n+2} = e_{n+2} = (0, 1, \dots, 0)^T,\quad \dots,\quad
a_{n+m} = e_{n+m} = (0, 0, \dots, 1)^T,\qquad
b = (b_1, b_2, \dots, b_m)^T.
$$
Vectors $a_{n+1}, a_{n+2}, \dots, a_{n+m}$ are unit vectors; they are linearly independent and
constitute a basis. The resolution of the right-hand side vector of the optimization problem's system
of constraints over this basis has the following form:

$$
b = b_1 e_{n+1} + b_2 e_{n+2} + \dots + b_m e_{n+m}.
\tag{25}
$$
  As all bi  0 , we obtain the allowable primary reference plan X0 . The following basis resolution
corresponds to the primary plan.
    X0 = b1e n+1 + b2e n+ 2 +                   + bme n+ m = [0,0,                             ,0, b1 , b2 ,       , bm ]
                                                                                          (26)
                                                                                           n

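The passage from (22) to the canonical form (23)–(26) can be sketched in a few lines of Python. This is a minimal illustration of the construction, not code from the paper; the function name `to_canonical` and the list-based matrix layout are our own assumptions:

```python
def to_canonical(A, b, c):
    """Append one slack variable per '<=' constraint, giving A_ext x = b.

    The slack columns form an identity block, so they are linearly
    independent and can serve as the starting basis; the primary
    reference plan is x = (0, ..., 0, b_1, ..., b_m), allowable
    whenever every b_i >= 0.
    """
    m, n = len(A), len(c)
    A_ext = [row + [1 if i == j else 0 for j in range(m)]
             for i, row in enumerate(A)]
    c_ext = c + [0] * m            # slack variables do not enter W_I
    x0 = [0] * n + list(b)         # primary reference plan X_0, as in (26)
    return A_ext, c_ext, x0

# Constraints of the model example considered below
A = [[3, -4], [1, 2], [-1, 1], [0, 1]]
b = [6, 12, 3, 4]
A_ext, c_ext, x0 = to_canonical(A, b, [2, 3])
print(x0)   # [0, 0, 6, 12, 3, 4]
```

The printed plan coincides with the primary reference plan X0 of the model example, since the slack columns of `A_ext` are exactly the unit vectors en+1, …, en+m of (25).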
The main idea of the simplex method algorithm is sequential iteration over allowable reference
plans. At each step, one vector is excluded from the basis and another is included by the
Gauss-Jordan method [14]. Subject to compliance with these criteria, we build a chain of
calculations. The beginning of the chain is located at the starting apex X0 of polyhedron ΩI and
corresponds to the first simplex table. Moving to the next reference plan X1 by the classic
algorithm corresponds to a transition to a neighboring apex. In fact, each table is a numeric
description of an apex of ΩI. The process continues until the optimum apex Xopt is found or its
absence is confirmed.
   At an arbitrary step of the common simplex method, we can move not to a neighboring apex but to
an arbitrary apex located around the optimum apex. Such an apex can be selected by various
evaluation methods, e.g. the half-interval method. With this selection, the alternative chain of
simplex calculations may require far fewer iterations. Let us consider a model example of a
two-dimensional linear optimization problem to confirm this, solving it first by the standard
procedure and then by breaking the rule for selecting the basis vector combination.
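A single basis exchange of the kind described above is just a Gauss-Jordan elimination step on the rows of the simplex table. The sketch below is our own illustration (exact arithmetic via `Fraction`; the tableau layout `[B | a1 … a6]` and the function name are assumptions, not the paper's notation):

```python
from fractions import Fraction

def gauss_jordan_pivot(T, r, s):
    """One Gauss-Jordan step around pivot element T[r][s]: scale the pivot
    row so the pivot becomes 1, then eliminate column s from every other
    row. This is exactly the basis exchange between two simplex tables."""
    T = [[Fraction(v) for v in row] for row in T]
    piv = T[r][s]
    T[r] = [v / piv for v in T[r]]
    for i in range(len(T)):
        if i != r and T[i][s] != 0:
            f = T[i][s]
            T[i] = [a - f * p for a, p in zip(T[i], T[r])]
    return T

# Rows [B | a1 ... a6] of Table 4 from the model example below; the solving
# element a52 = 1 sits in row 2 (basis vector a5), column 2 (vector a2).
T4 = [[6, 3, -4, 1, 0, 0, 0],
      [12, 1, 2, 0, 1, 0, 0],
      [3, -1, 1, 0, 0, 1, 0],
      [4, 0, 1, 0, 0, 0, 1]]
T5 = gauss_jordan_pivot(T4, 2, 2)
print([int(row[0]) for row in T5])   # [18, 6, 3, 1]
```

The printed column reproduces the B column of Table 5, confirming that one pivot step carries the calculation from one reference plan to the next.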

4.5        Model Example № 5
   LO problem solution:

      WI = 2x1 + 3x2 → max,
      ΩI : 3x1 − 4x2 ≤ 6,
           x1 + 2x2 ≤ 12,
           −x1 + x2 ≤ 3,
           x2 ≤ 4,
           xj ≥ 0, j = 1, 2.                                                                            (27)

   S o l u t i o n . Basis nonnegative unknowns x3, x4, x5, x6 are to be added to the left sides of
the inequalities. As a result, we obtain the canonical form of the linear optimization problem:

      WI = 2x1 + 3x2 → max,
      ΩI : 3x1 − 4x2 + x3 = 6,
           x1 + 2x2 + x4 = 12,
           −x1 + x2 + x5 = 3,
           x2 + x6 = 4,
           xj ≥ 0, j = 1, …, 6.                                                                         (28)

We have the primary reference plan X0 = [0, 0, 6, 12, 3, 4] ∈ ΩI. Let us draw the reference
simplex table (Table 4).

Table 4
Simplex table, vertex X0

                           a1     a2     a3     a4     a5     a6
   Basis    C      B       2      3      0      0      0      0
   a3       0      6       3      −4     1      0      0      0
   a4       0      12      1      2      0      1      0      0
   a5       0      3       −1     1      0      0      1      0
   a6       0      4       0      1      0      0      0      1
   Δj       WI =   0       −2     −3     0      0      0      0

The index row Δj has two negative evaluations, meaning that plan X0 = [0, 0, 6, 12, 3, 4] ∈ ΩI is
not optimum and can be improved. The pivot column is found by the rule of selecting the smallest
negative evaluation: this is column a2, as min{−2, −3} = −3 → a2. The pivot row is set by the rule
of the smallest simplex ratio over the positive components of the pivot column. We have

      min{ bi / ai2 : ai2 > 0, i = 4, 5, 6 } = min{ 12/2, 3/1, 4/1 } = 3 → a5.                          (29)

Our solving element is a52 = 1. For it, we make a Gauss-Jordan transformation and, following the
algorithm, obtain (Table 5):
Table 5
Simplex table, vertex X1

                           a1     a2     a3     a4     a5     a6
   Basis    C      B       2      3      0      0      0      0
   a3       0      18      −1     0      1      0      4      0
   a4       0      6       3      0      0      1      −2     0
   a2       3      3       −1     1      0      0      1      0
   a6       0      1       1      0      0      0      −1     1
   Δj       WI =   9       −5     0      0      0      3      0
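The two selection rules applied between Table 4 and Table 5 — the smallest (most negative) evaluation for the column and the smallest simplex ratio for the row — can be written down directly. This is an illustrative sketch with our own index conventions (column indices 0…5 for a1…a6, row indices 0…3 for the basis rows):

```python
from fractions import Fraction

def choose_pivot(delta, B, A):
    """delta: index row; B: right-hand sides; A: constraint rows.
    Column s: the most negative evaluation Delta_j (None if optimum).
    Row r: the smallest ratio B_i / a_is among rows with a_is > 0."""
    s = min(range(len(delta)), key=lambda j: delta[j])
    if delta[s] >= 0:
        return None, None          # all Delta_j >= 0: the plan is optimum
    candidates = [i for i in range(len(A)) if A[i][s] > 0]
    r = min(candidates, key=lambda i: Fraction(B[i], A[i][s]))
    return r, s

# Table 4 data: basis vectors a3, a4, a5, a6
delta = [-2, -3, 0, 0, 0, 0]
B = [6, 12, 3, 4]
A = [[3, -4, 1, 0, 0, 0],
     [1, 2, 0, 1, 0, 0],
     [-1, 1, 0, 0, 1, 0],
     [0, 1, 0, 0, 0, 1]]
r, s = choose_pivot(delta, B, A)
print(s, r)   # 1 2
```

The result (column index 1, row index 2) is the pivot column a2 with Δ2 = −3 and the row of basis vector a5 with ratio 3/1, i.e. exactly the solving element a52 = 1 of (29).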




   From the table (Table 5):

      X1 = [0, 3, 18, 6, 0, 1],  WI(X1) = 9.                                                            (30)

The index row Δj has a negative evaluation. Plan X1 = [0, 3, 18, 6, 0, 1] is not optimum and can be
improved. The pivot column is a1, as only this column contains a negative evaluation, Δ1 = −5. We
select the pivot row from the condition of the smallest simplex ratio over the positive components
of the pivot column. We have

      min{ bi / ai1 : ai1 > 0, i = 4, 6 } = min{ 6/3, 1/1 } = min{2, 1} = 1 → a6.                       (31)

In the new basis, instead of a6 we involve a1. After the respective calculation, we have the third
simplex table (Table 6).

Table 6
Simplex table, vertex X2

                           a1     a2     a3     a4     a5     a6
   Basis    C      B       2      3      0      0      0      0
   a3       0      19      0      0      1      0      3      1
   a4       0      3       0      0      0      1      1      −3
   a2       3      4       0      1      0      0      0      1
   a1       2      1       1      0      0      0      −1     1
   Δj       WI =   14      0      0      0      0      −2     5



From the table (Table 6):

      X2 = [1, 4, 19, 3, 0, 0],  WI(X2) = 14.                                                           (32)

The index row Δj has a negative evaluation. Plan X2 = [1, 4, 19, 3, 0, 0] is not optimum and can be
improved. The pivot column is a5, as only this column contains a negative evaluation, Δ5 = −2. We
select the pivot row by the condition of the smallest simplex ratio over the positive components of
the pivot column. We have

      min{ bi / ai5 : ai5 > 0, i = 3, 4 } = min{ 19/3, 3/1 } = 3 → a4.                                  (33)

   In the new basis, instead of a4 we involve a5. After the respective calculation, we have the
fourth simplex table (Table 7).

Table 7
Simplex table, vertex X3

                           a1     a2     a3     a4     a5     a6
   Basis    C      B       2      3      0      0      0      0
   a3       0      10      0      0      1      −3     0      10
   a5       0      3       0      0      0      1      1      −3
   a2       3      4       0      1      0      0      0      1
   a1       2      4       1      0      0      1      0      −2
   Δj       WI =   20      0      0      0      2      0      −1

From the fourth table (Table 7):

      X3 = [4, 4, 10, 0, 3, 0],  WI(X3) = 20.

The index row Δj has a negative evaluation. Plan X3 = [4, 4, 10, 0, 3, 0] is not optimum and can be
improved. The pivot column will be a6, as only this column contains a negative evaluation, Δ6 = −1.
We select the pivot row by the condition of the smallest simplex ratio over the positive components
of the pivot column. We have

      min{ bi / ai6 : ai6 > 0, i = 2, 3 } = min{ 4/1, 10/10 } = 1 → a3.                                 (34)


In the new basis, instead of a3 we involve a6. After the respective calculation, we have the fifth
simplex table (Table 8).

Table 8
Simplex table, vertex Xopt

                           a1     a2     a3      a4      a5     a6
   Basis    C      B       2      3      0       0       0      0
   a6       0      1       0      0      1/10    −3/10   0      1
   a5       0      6       0      0      3/10    1/10    1      0
   a2       3      3       0      1      −1/10   3/10    0      0
   a1       2      6       1      0      1/5     2/5     0      0
   Δj       WI =   21      0      0      1/10    17/10   0      0
All evaluations are nonnegative, Δj ≥ 0. This means that we have found the optimum solution:

      Xopt = [6, 3],  WI(Xopt) = 21.                                                                    (35)

   Therefore, the calculation within the classic simplex calculus contains the following chain of
sequential iteration over apexes:

      ΩI : X0 → X1 → X2 → X3 → Xopt.                                                                    (36)

   Let us confirm that breaking the canonical simplex method algorithm can essentially reduce the
length of the calculation chain, i.e. the number of simplex tables. We break the algorithm at the
first step: instead of selecting the smallest negative evaluation, as in the common simplex method,
we select the biggest one. The respective calculation is given in Table 9.

Table 9
Alternative simplex algorithm

                           a1     a2      a3      a4      a5     a6
   Basis    C      B       2      3       0       0       0      0
   a3       0      6       3      −4      1       0       0      0
   a4       0      12      1      2       0       1       0      0
   a5       0      3       −1     1       0       0       1      0
   a6       0      4       0      1       0       0       0      1
   Δj       WI =   0       −2     −3      0       0       0      0
   a3       0      −30     0      −10     1       −3      0      0
   a1       2      12      1      2       0       1       0      0
   a5       0      15      0      3       0       1       1      0
   a6       0      4       0      1       0       0       0      1
   Δj       WI =   24      0      1       0       2       0      0
   a2       3      3       0      1       −1/10   3/10    0      0
   a1       2      6       1      0       1/5     2/5     0      0
   a5       0      6       0      0       3/10    1/10    1      0
   a6       0      1       0      0       1/10    −3/10   0      1
   Δj       WI =   21      0      0       1/10    17/10   0      0


The length of the calculation chain has been reduced almost by half: X0 → X4 → Xopt. The problem
considered is two-dimensional, so we can also solve it graphically, which provides a geometric
interpretation of both calculation chains. We set up the equations of the limit lines

      ω1: 3x1 − 4x2 = 6,  ω2: x1 + 2x2 = 12,  ω3: −x1 + x2 = 3,  ω4: x2 = 4,  ω5: x1 = 0,  ω6: x2 = 0,

and mark the half-planes determined by the respective inequalities of the set of constraints. As a
result, we can draw polyhedron ΩI (Fig. 3).
Figure 3: Interpretation of compliance with the classic simplex method algorithm and of breaking it

   At the coordinate origin, we draw the gradient vector grad(WI) = [2, 3]. Perpendicular to it, we
draw the level line. Moving the line parallel to itself in the gradient direction, we find the
maximum point Xopt, the apex of the level line's outreach (Figure 3). The coordinates of this
extreme apex are found as the coordinates of the crossing point of the respective limit lines:

      Xopt : ω1 ∩ ω2 :  3x1 − 4x2 = 6,  x1 + 2x2 = 12  ⇒  x1 = 6,  x2 = 3.                              (37)
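The crossing point (37) can be checked by Cramer's rule. A small sketch with exact rational arithmetic (the function name `intersect` is ours):

```python
from fractions import Fraction

def intersect(l1, l2):
    """Crossing point of two limit lines a*x1 + b*x2 = c via Cramer's rule."""
    (a1, b1, c1), (a2, b2, c2) = l1, l2
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("lines are parallel")
    return Fraction(c1 * b2 - c2 * b1, det), Fraction(a1 * c2 - a2 * c1, det)

# omega1: 3x1 - 4x2 = 6 and omega2: x1 + 2x2 = 12
x1, x2 = intersect((3, -4, 6), (1, 2, 12))
print(x1, x2, 2 * x1 + 3 * x2)   # 6 3 21
```

The output reproduces Xopt = [6, 3] and the optimum value WI(Xopt) = 21 obtained by the simplex calculation.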

Therefore, the target function reaches its maximum value WI(Xopt) = 21 at the apex Xopt = [6, 3].
The geometric interpretation of the classic simplex calculation is that the first simplex table
(Table 4) corresponds to apex X0. The calculation up to the second table (Table 5) corresponds to
the transition to the neighboring apex X1 in the direction of the biggest target function growth.
The third, fourth and fifth simplex tables (Table 6, Table 7, Table 8) correspond to the transitions
X1 → X2 → X3 → Xopt (Figure 3). Therefore, solving the problem by the classic algorithm requires
five simplex tables. To reduce the number of iterations, we break this algorithm and select not the
smallest but the biggest negative evaluation, Δ1 = −2, in the initial simplex table. The further
calculation is given in Table 9. As we can see, the number of simplex tables has been reduced from
five to three.
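The two chains can be reproduced with a compact tableau simplex whose column rule is a parameter. This is our own illustrative sketch, not the authors' code; note that, unlike Table 9, it keeps the standard smallest-ratio row rule, so every intermediate table stays feasible — yet the count of simplex tables still drops from five to three:

```python
from fractions import Fraction

def simplex(c, A, b, rule):
    """Maximize (c, x) subject to A x <= b, x >= 0, with all b_i >= 0.
    rule = 'smallest': classic column choice, the most negative Delta_j;
    rule = 'largest' : the negative Delta_j closest to zero (the broken rule).
    Returns (optimum value, number of simplex tables built).
    Assumes the problem is bounded."""
    m, n = len(A), len(c)
    # Tableau rows: [B | original columns | slack (unit) columns]
    T = [[Fraction(b[i])] + [Fraction(v) for v in A[i]]
         + [Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    # Index row: Delta_j = -c_j at the start; cost[0] accumulates W_I
    cost = [Fraction(0)] + [Fraction(-v) for v in c] + [Fraction(0)] * m
    tables = 1
    while True:
        neg = [j for j in range(1, n + m + 1) if cost[j] < 0]
        if not neg:
            return cost[0], tables
        pick = min if rule == 'smallest' else max
        s = pick(neg, key=lambda j: cost[j])
        rows = [i for i in range(m) if T[i][s] > 0]
        r = min(rows, key=lambda i: T[i][0] / T[i][s])   # smallest simplex ratio
        piv = T[r][s]
        T[r] = [v / piv for v in T[r]]                   # Gauss-Jordan step
        for i in range(m):
            if i != r and T[i][s] != 0:
                f = T[i][s]
                T[i] = [a - f * p for a, p in zip(T[i], T[r])]
        f = cost[s]
        cost = [a - f * p for a, p in zip(cost, T[r])]
        tables += 1

c, A, b = [2, 3], [[3, -4], [1, 2], [-1, 1], [0, 1]], [6, 12, 3, 4]
w, t = simplex(c, A, b, 'smallest')
print(int(w), t)   # 21 5 — the classic chain builds five tables
w, t = simplex(c, A, b, 'largest')
print(int(w), t)   # 21 3 — the broken column rule needs only three
```

Both runs reach WI = 21; only the number of tables differs, which matches the comparison of the classic chain (36) with Table 9.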

5 Research results and summary
   The illustrative examples of searching for efficient algorithms of preparing an optimization
problem for calculation, based on the notion of duality-based problem reduction and on the method
of breaking the standard simplex algorithm, have visually demonstrated the reasonability of such
methods for solving linear optimization problems. From the practical point of view, the proposed
approaches reduce the computational complexity of problems of this class. Comparative solutions of
model problems have shown that initial problems can be essentially simplified. The results obtained
allow the conclusion that, in the common case, it is reasonable to search for justified deviations
from the currently established standard algorithmic patterns. The application value of the proposed
approach consists in using the obtained scientific result to improve the canonical methods of
solving optimization problems and, accordingly, to simplify computer calculation using the
libraries of standard subprograms within the well-known mathematical packages.

Conclusions
   It has been shown that there are classes of linear optimization problems for which it is
reasonable to search for more efficient algorithms with the purpose of preparing linear
optimization problems for computer calculation. It has been proved, on the example of illustrative
solutions of typical model problems, that the proposed approach allows us to essentially simplify
the initial model for obtaining the solution of linear optimization problems in project management.

References
[1]  G. Dantzig, Linear Programming and Extensions, Progress, Moscow, 1966.
[2]  L. Kantorovich, A. Gorstko, Optimum Solutions in Economics, Nauka, Moscow, 1972.
[3]  N. Unger, S. Dempe, Lineare Optimierung, Springer, Wiesbaden, 2010.
[4]  V. Hetmantsev, Linear Algebra and Linear Programming, Lybid, Kyiv, 2001.
[5]  I. Bahaienko, V. Hryhorkiv, M. Boichuk, M. Riumshyn, Mathematic Programming, Logos, Kyiv,
     1996.
[6] G. Teschl, S. Teschl, Mathematik für Informatiker. Band 1: Diskrete Mathematik und Lineare
     Algebra, Springer, Berlin, 2008.
[7] M. Buhir, Linear Algebra, Linear Models, Akademia, Kyiv, 1998.
[8] M. Gavurin, V. Malozemov, Extreme Problems with Linear Constraints, LGU, Leningrad, 1984.
[9] S. Ashmanov, Linear Programming. Chief Ed. Board of Phys. & Math. Lit, Moscow, 1981.
[10] I. Sigal, A. Ivanova, Introduction to Applied Discrete Programming: Models and Calculation
     Algorithms, Physmathlit, Moscow, 2003.
[11] A. Rzheuskyi, N. Kunanets, V. Kut, Methodology of research the library information services:
     the case of USA university libraries. Advances in Intelligent Systems and Computing 689 (2018)
     450–460. doi:10.1007/978-3-319-70581-1_32.
[12] T. Romaniuk, T. Tereshchenko, H. Prysenko, I. Horodkova, Mathematic Programming, Kyiv
     State University of Economics, Kyiv, 1996.
[13] V. Stepaniuk, Methods of Mathematic Programming, Vyshcha Shkola, Kyiv, 1984.
[14] S. Titov, L. Chernova, Higher and Applied Mathematics: Training Manual: in 2 Parts, part 1,
     Fakt, Kharkiv, 2017.
[15] S. Chernov, S. Titov, Ld. Chernova, V. Gogunskii, Lb. Chernova, K. Kolesnikova, Algorithm for
     the Simplification of Solution to Discrete Optimization Problems, East Europ. J. of Enterprise
     Technol. 3/4 (93) (2018) 34–43.
[16] A. Bondar, S. Bushuyev, S. Onyshchenko, H. Hiroshi, Entropy Paradigm of Project-Oriented
     Organizations Management, in: Proceedings of the 1st International Workshop IT Project
     Management (ITPM 2020), Volume 1, Lviv, Ukraine, February 18-20, 2020, CEUR Workshop
     Proceedings (2020) 233–243.
[17] S. Bushuyev, B. Kozyr, N. Rusan, Modeling of Empathy, Emotional Intelligence and
     Transformational Leadership to the Project Success, in: Mathematical Modelling and Simulation
     of Systems, Advances in Intelligent Systems and Computing 972, Springer, 2020, pp. 209–222.
[18] A. Rzheuskyi, H. Matsuik, N. Veretennikova, R. Vaskiv, Selective Dissemination of Information –
     Technology of Information Support of Scientific Research, Advances in Intelligent Systems and
     Computing 871 (2019) 235–245.
[19] V. Tomashevskyi, A. Yatsyshyn, V. Pasichnyk, N. Kunanets, A. Rzheuskyi, Data Warehouses of
     Hybrid Type: Features of Construction, Advances in Intelligent Systems and Computing 938
     (2019) 325–334.
[20] H. Lypak, V. Lytvyn, O. Lozynska, R. Vovnyanka, Y. Bolyubash, A. Rzheuskyi, D. Dosyn,
     Formation of Efficient Pipeline Operation Procedures Based on Ontological Approach. Advances
     in Intelligent Systems and Computing 871 (2019) 571–581.