<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Parallelization of the Simplex Method Based on the OpenMP Technology</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Nataliia Boyko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nataliia Petryshyn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Lviv 79013</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>In this work we propose a parallelization of the simplex method based on the OpenMP parallel programming technology, which is used for solving linear programming problems. The advantage of this approach is improved speedup and efficiency, which in turn is important when processing large data. The results of the parallel algorithm were compared with the conventional simplex method. Application of this method is important considering the modern trend toward multi-core processor architectures. Multithreading was also used in this work. The proposed parallelization algorithm scales easily to a different number of processor cores. Without loss of generality, a number of numerical experiments were carried out on the example of solving the problem of distributing cars between the freight fronts of a railway station. Data analysis showed that the maximum performance of the program implementing the parallelized simplex method is achieved as the number of threads and cores increases. These findings are depicted as bar charts.</p>
      </abstract>
      <kwd-group>
        <kwd>OpenMP parallel computing technology</kwd>
        <kwd>multithreading</kwd>
        <kwd>finite difference</kwd>
        <kwd>linear programming</kwd>
        <kwd>simplex method</kwd>
        <kwd>parallelization</kwd>
        <kwd>multicore</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1 Introduction</title>
      <p>Linear programming (LP) was developed in response to problems in economics, as a
way to find the best decisions while using limited resources. The development and
growing complexity of economic processes and of computing stimulate the extensive use
of mathematical methods in management and contribute to the growing role of linear
programming as one of the important topics of applied mathematics [1-3].</p>
      <p>According to American experts, about 75% of all practical optimization problems
are LP problems. About a quarter of the machine time spent in recent years on
scientific research was allocated to solving LP tasks and their numerical
modifications.</p>
      <p>The simplex method is generic, and it can be used to solve a wide range of tasks:
─ maximum matching (graph theory);
─ maximum flow;
─ the transportation problem;
─ zero-sum games.
Improving the efficiency of algorithms for solving linear programming problems has
important practical significance. Industrial production, resources in the economy,
the disposition of forces in hostilities: each is a complex of numerous interrelated
processes. In the management of these quite different phenomena one can distinguish
essential features of similarity. When formulating optimization problems, it turns out
that problems with different content can be represented by the same type of
mathematical model [4].</p>
      <p>The linear programming model includes three main points:
─ a set of nonnegative variables characterizing the investigated process or phenomenon;
─ relations that establish the relationship between the variables (the restrictions) and
reflect the requirements of the task;
─ a criterion of optimality (the goal function).</p>
      <p>In linear programming problems the constraints are a system of linear inequalities
and equations expressing conditions of material balance (i.e., that the consumption of
any kind of raw material does not exceed the available stock of that raw material,
etc.). The goal function is also linear.</p>
      <p>The aim of this work is to parallelize the algorithm of the simplex method in order
to achieve maximum speedup and efficiency when processing large volumes of input
data.</p>
      <p>Although the simplex method is a very efficient algorithm that has shown good
results in solving linear programming problems, it is an algorithm of exponential
worst-case complexity [4], as the classical Klee-Minty examples demonstrate. The
reason for this is the combinatorial nature of the algorithm, which sequentially
traverses the vertices of the polyhedron of admissible solutions in search of an
optimal one.</p>
      <p>Parallelization of the simplex method is a highly relevant and popular topic.
Available studies prove the effectiveness of a parallel algorithm, which makes it
possible to increase the accuracy of the result. However, combining the simplex
method with modern trends in the development of computer technology becomes even
more important, namely the promising approach of building computer systems based
on multicore processors.</p>
    </sec>
    <sec id="sec-2">
      <title>2 Review of the Literature</title>
      <p>Nowadays, most tasks of linear programming are solved using algorithms which are
based on the simplex method. Other known algorithms, including those with
polynomial complexity, yield to the effectiveness of the simplex method in solving
specific applications [5, 6].</p>
      <p>The problem of linear programming is considered in the following canonical
representation:
\[ \max\, W = \sum_{i=1}^{n} b_i x_i, \quad x = (x_i), \; i = 1, \dots, n, \tag{1} \]
with the restrictions
\[ x_i \ge 0, \; i = 1, \dots, n; \qquad \psi_j(x) = \sum_{i=1}^{n} a_{ji} x_i \le c_j, \quad c_j \ge 0, \; j = 1, \dots, M. \tag{2} \]
This formulation is considered classic, and any linear programming problem can be
brought to it [7]. The canonical representation is convenient because in this case the
values b_i match the values of the Lagrange multipliers calculated for the active
constraints x_i \ge 0, i = 1, \dots, n.</p>
      <p>The simplex method has one important advantage: when the dimension of the
problem is increased, the computational costs remain small, because at each step only
one value of the objective function is calculated.</p>
      <p>The LP task is usually written in abbreviated form and is represented as follows:
\[ Z = \max(\min) \sum_{j=0}^{n} c_j x_j . \]
Linear programming covers a wide range of control tasks for the functioning of
economic systems.</p>
      <p>Linear programming problems are closely related to the tasks of game theory, so
numerical methods of game theory can be applied to solve them [8].</p>
      <p>The geometric meaning of the simplex method consists in a consecutive transition
along the edges of the solution polyhedron from one vertex (basic plan) to another in
the direction of the vertex X*, at which the objective function reaches its highest
(lowest) value (see Fig. 1).</p>
    </sec>
    <sec id="sec-3">
      <title>3 Statement of the Problem</title>
      <p>The general task of linear programming (GLP), presented in any form of record, is
a task in which one must determine the optimum (maximum or minimum) of the
objective function
\[ F = c_1 x_1 + c_2 x_2 + \dots + c_n x_n \tag{3} \]
under the following restrictions:
\[ \sum_{j=1}^{n} a_{ij} x_j \le b_i, \; (i = 1, \dots, s); \quad \sum_{j=1}^{n} a_{ij} x_j = b_i, \; (i = s+1, \dots, m); \quad x_j \ge 0, \; j = 1, \dots, n, \tag{4} \]
where a_{ij}, b_i, c_j are given coefficients.</p>
      <p>We introduce the notation
\[ \Delta_j = \sum_{i=1}^{m} C_{B_i} a_{ij} - c_j, \quad (j = 1, \dots, n), \]
where C_B is the vector of objective function coefficients at the basic variables.</p>
      <p>Matrix А, that consists of elements aij , the dimension of m  n is called the
matrix of the problem, the column vector is called the vector of free members (a vector
of constraints in the problem), a row vector ck – the coefficients of the objective
 x1 
 x 
function at variable xk , a column vector X   ..2.  – the vector of unknowns
(vari xn 
ables).</p>
      <p>In vector form, the problem (3)-(4) has the following form:
\[ F = Cx, \tag{5} \]
\[ p_1 x_1 + p_2 x_2 + p_3 x_3 + \dots + p_m x_m + p_{m+1} x_{m+1} + \dots + p_n x_n = p_0, \]
where
\[ p_1 = \begin{pmatrix} a_{11} \\ a_{21} \\ a_{31} \\ \vdots \\ a_{m1} \end{pmatrix}; \quad
p_2 = \begin{pmatrix} a_{12} \\ a_{22} \\ a_{32} \\ \vdots \\ a_{m2} \end{pmatrix}; \quad
p_3 = \begin{pmatrix} a_{13} \\ a_{23} \\ a_{33} \\ \vdots \\ a_{m3} \end{pmatrix}; \; \dots; \quad
p_m = \begin{pmatrix} a_{1m} \\ a_{2m} \\ a_{3m} \\ \vdots \\ a_{mm} \end{pmatrix}; \quad
p_{m+1} = \begin{pmatrix} 1 \\ 0 \\ 0 \\ \vdots \\ 0 \end{pmatrix}; \; \dots; \quad
p_n = \begin{pmatrix} 0 \\ 0 \\ 0 \\ \vdots \\ 1 \end{pmatrix}. \]
When the simplex table is built, the value p_0, the right-hand side of the problem
constraints, is used.</p>
      <p>Without loss of generality, consider the problem of the distribution of cars on the
freight fronts of a railway station, which is usually solved by methods of linear
programming in a multicriteria statement.</p>
    </sec>
    <sec id="sec-4">
      <title>4 Methods of Solving</title>
      <p>Suppose a certain number of cars arrives at a freight station and must be unloaded
during the day on three cargo fronts. With different technical equipment, the railway's
unloading costs differ; let them on the three fronts be, respectively, 9, 10 and 16 s.u.
(standard units).</p>
      <p>Find the maximum number of wagons that can be unloaded by the cargo station
during the day:
\[ F = 9x_1 + 10x_2 + 16x_3 \to \max . \]
There are also the following constraints for the problem:
\[ \begin{cases} 18x_1 + 6x_2 + 5x_3 \le 360, \\ 15x_1 + 4x_2 + 3x_3 \le 192, \\ 12x_1 + 8x_2 + 3x_3 \le 180. \end{cases} \]</p>
      <p>Let us demonstrate a few of the steps of the simplex method on this problem.</p>
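      <p>For illustration, the example can be brought to standard form with the slack
variables x_4, x_5, x_6 and loaded into data structures like those used by the code
listings in Section 5. The sketch below is a minimal setup for this purpose; the array
names follow the later listings, while the concrete layout is an assumption made here
for illustration.
// A minimal sketch (assumption): restrictionCoefs is a row-major m-by-n
// matrix, and currentBasis holds the indices of the initial basic (slack)
// variables x4, x5, x6.
#include &lt;vector&gt;

int numberOfVariables = 6;                                  // x1..x3 plus slacks x4..x6
int numberOfRestrictions = 3;
std::vector&lt;double&gt; functionCoefs = { 9, 10, 16, 0, 0, 0 }; // F = 9x1 + 10x2 + 16x3
std::vector&lt;double&gt; restrictionCoefs = {
    18, 6, 5, 1, 0, 0,   // 18x1 + 6x2 + 5x3 + x4 = 360
    15, 4, 3, 0, 1, 0,   // 15x1 + 4x2 + 3x3 + x5 = 192
    12, 8, 3, 0, 0, 1    // 12x1 + 8x2 + 3x3 + x6 = 180
};
std::vector&lt;double&gt; restrictionVals = { 360, 192, 180 };
std::vector&lt;int&gt;    currentBasis    = { 3, 4, 5 };          // zero-based indices of x4, x5, x6</p>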
      <p>The problem can be represented in the form of Table 1.
In Fig. 2 the value of the function equals 0.0; however, there are negative values
\Delta_j, in particular \Delta_1 = -9.00, \Delta_2 = -10.00, \Delta_3 = -16.00. One
needs to pass to the next support plan with the list of vector values p_1, p_2, p_3,
p_4, p_5, p_6.</p>
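      <p>This agrees with the notation \Delta_j introduced in Section 3 (a quick check
using the definition given there): for the initial basis of slack variables the basic
cost vector C_B is zero, so
\[ \Delta_j = \sum_{i=1}^{m} C_{B_i} a_{ij} - c_j = -c_j, \quad \text{i.e.} \;
\Delta_1 = -9.00, \; \Delta_2 = -10.00, \; \Delta_3 = -16.00. \]</p>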
      <p>The result of the next iteration is shown in Fig. 3.
As can be seen from Fig. 3, the value of the objective function increased
(F = 384.00); however, the optimum has not yet been reached, because
\Delta_2 = -2.00.</p>
      <p>After the last iteration (see Fig. 4) the simplex method yields the optimal solution
(see Fig. 5): x_1 = 0.00, x_2 = 8.00, x_3 = 20.00, x_4 = 0.00, x_5 = 0.00 and
x_6 = 96.00. Since all values \Delta_j \ge 0, the problem is solved.</p>
      <p>Based on the obtained results, a table of time measurements of the execution of
the sequential algorithm was constructed for the respective computer architecture.</p>
    </sec>
    <sec id="sec-5">
      <title>5 Parallelization of the Simplex Method</title>
      <p>To achieve maximum performance, the critical sections of the program were
parallelized. Performing multiple tasks simultaneously reduces the computation time
of the algorithm; it also led to a decrease in the number of iterations, making the
algorithm more effective for problems with a large amount of data [9, 13]. In
particular, one of the most important critical areas of the program, the calculation of
the parameters \Delta_j from the coefficients of the objective function, was
distributed. For this we used the OpenMP technology: a set of compiler directives,
library procedures and environment variables intended for programming
multithreaded applications on multiprocessor systems with shared memory in C and
C++ [10].</p>
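      <p>As a minimal illustration of this directive-based style (the snippet below is a
generic sketch, not part of the solver), a single pragma distributes the iterations of a
loop among the threads, while the rest of the code stays sequential:
#include &lt;omp.h&gt;
#include &lt;cstdio&gt;

int main() {
    const int n = 8;
    double data[n];
    // The pragma distributes the loop iterations among the threads; without
    // it the same code runs sequentially, which is the "incremental
    // parallelization" idea mentioned below.
    #pragma omp parallel for
    for (int i = 0; i &lt; n; i++) {
        data[i] = i * i;
    }
    printf("computed %d values with up to %d threads\n", n, omp_get_max_threads());
    return 0;
}</p>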
      <p>Advantages of OpenMP [11, 12]:
─ Thanks to the idea of "incremental parallelization", OpenMP best suits developers
seeking to quickly parallelize computational programs with large parallel loops. The
developer does not create a new parallel program but simply adds OpenMP directives
to the text of a sequential program.
─ At the same time, OpenMP is a flexible mechanism that gives the developer great
control over the behavior of the parallel program.
─ An OpenMP program can be used on a uniprocessor platform as a sequential
program, i.e. there is no need to maintain separate serial and parallel versions. The
OpenMP directives are simply ignored by a sequential compiler, and OpenMP
procedure calls can be replaced with stubs, the text of which is given in the
specification [14].
─ One of the advantages developers find in OpenMP is the support for so-called
"orphan" (detached) directives, that is, directives for synchronization and distribution
of work that need not enter directly into the lexical context of a parallel region [15].
In this work the critical sections of the algorithm were parallelized, in particular:
─ The scalar product of the vectors Cb and Pi, to speed up the check of the plan for
optimality:
#pragma omp parallel for
for (int i = 0; i &lt; numberOfVariables; i++) {
    double sum = 0;
    // scalar product of the basic cost vector Cb and the column Pi
    for (int j = 0; j &lt; numberOfRestrictions; j++) {
        sum += functionCoefs[currentBasis[j]] * restrictionCoefs[j*numberOfVariables + i];
    }
    currentDeltas[i] = sum - functionCoefs[i];
    if (currentDeltas[i] &lt; 0) {
        // every writer stores the same value; the atomic write keeps the
        // shared flag race-free
        #pragma omp atomic write
        negativeDeltaPresent = true;
    }
}
// the accumulation needs a reduction clause; otherwise the += on the
// shared variable is a data race
#pragma omp parallel for reduction(+:currentFunctionValue)
for (int i = 0; i &lt; numberOfRestrictions; i++) {
    currentFunctionValue += functionCoefs[currentBasis[i]] * restrictionVals[i];
}
─ The process of determining the \Delta_j that takes the maximum absolute value
(fabs from &lt;cmath&gt; is used, since the integer abs would truncate doubles):
#pragma omp parallel for
for (int i = 0; i &lt; numberOfVariables; i++) {
    if (i != maxVal_num &amp;&amp; fabs(currentDeltas[i]) == fabs(currentDeltas[maxVal_num])) {
        // push_back on the shared vector must be serialized
        #pragma omp critical
        equalDeltas.push_back(i);
    }
}
#pragma omp parallel for
for (int i = 0; i &lt; numberOfVariables; i++) {
    // the shared argmax index is updated under a critical section; a
    // user-defined max-location reduction would avoid the serialization
    #pragma omp critical
    if (fabs(currentDeltas[i]) &gt; fabs(currentDeltas[max])) {
        max = i;
    }
}
#pragma omp parallel for
for (int i = 0; i &lt; (int)equalDeltas.size(); i++) {
    // tie-breaking: among equal deltas prefer the largest objective coefficient
    #pragma omp critical
    if (functionCoefs[equalDeltas[i]] &gt; functionCoefs[equalDeltas[validMax]]) {
        validMax = equalDeltas[i];
    }
}
─ The construction of the next simplex table and the determination of all of its
coefficients according to the new basis:
#pragma omp parallel for
for (int i = 1; i &lt; numberOfRestrictions; i++) {
    // minimum-ratio test for the pivot row; the shared index is updated
    // under a critical section to avoid a data race
    if (restrictionCoefs[i*numberOfVariables + absMaxDelta] &gt; 0) {
        #pragma omp critical
        if (allDivisions[i] &lt; allDivisions[minDivVal]) {
            minDivVal = i;
        }
    }
}
#pragma omp parallel for
for (int i = 0; i &lt; numberOfRestrictions; i++) {
    // break is not allowed inside an OpenMP parallel for, so each thread
    // simply tests its own indices
    if (i == minDivVal) {
        currentBasis[i] = absMaxDelta;
    }
}
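The thread count used by these parallel regions can be set outside the algorithm; a
conventional way is sketched below (the GCC compile line and the helper name are
assumptions, as the original build setup is not shown):
// Build with OpenMP enabled, e.g. with GCC:
//   g++ -O2 -fopenmp simplex.cpp -o simplex
#include &lt;omp.h&gt;

// Hypothetical helper: overrides the OMP_NUM_THREADS environment variable
// for all subsequent parallel regions.
void setThreadCount(int threads) {
    omp_set_num_threads(threads);
}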
A form was created to process the input data (see Fig. 6). The form accepts the input
parameters of the three cargo fronts and performs the data processing after the
"Calculate" button is clicked.
The form was created to simplify the user's work with the program and was made
using the Windows Forms technology. This is a smart-client technology for the .NET
Framework platform: a collection of managed libraries that simplify common tasks
such as reading from and writing to the file system. Using the Visual Studio
development environment, intelligent Windows Forms applications can be created
that display information, request input from users and interact with remote computers
over the network.</p>
      <p>In Windows Forms, a form is a visual surface on which information is displayed
to the user. A Windows Forms application is usually built by dragging controls onto a
form and writing code to respond to user actions such as mouse clicks or keystrokes.
A control is a separate element of the user interface that can be used to display or
enter data.</p>
      <p>When a user performs an action with a form or one of its controls, an event is
created. The application responds to these events with code that processes them when
they occur. If an existing control does not meet the needs, Windows Forms allows
creating custom controls using the UserControl class.</p>
    </sec>
    <sec id="sec-6">
      <title>6 Numerical Experiments</title>
      <p>The numerical experiments showed that with fairly small input data the parallel
implementation of the simplex method algorithm ceases to work efficiently. It was
found that for input sizes below 10^2 the results of parallel data processing do not
differ significantly from the results obtained with sequential execution of the program
(see Fig. 7).
Further, Fig. 8 shows the result of parallelizing the algorithm of the simplex method
for solving the problem of distributing cars between the freight fronts of the railway
station when processing large amounts of data.
Table 3 shows the time measurements for parallel execution of the program. As can
be seen, the parallelization of the simplex method speeds the program up several
times when processing large data.</p>
      <p>Table 4. Time measurements of parallel algorithm execution on a four-core CPU.
In this case, the values of the speedup and efficiency of the parallel algorithm on the
quad-core processor are shown in Table 4. Fig. 9 shows a graph of the execution time
(in seconds) depending on the number of threads on the quad-core processor.
The effectiveness of the simplex method depends on the following factors:
1. the number of iterations;
2. the machine time.</p>
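      <p>The speedup (acceleration) and efficiency reported here are understood in the
standard sense (the usual definitions are assumed, as they are not restated above): for
p threads, with sequential time T_1 and parallel time T_p,
\[ S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p}. \]</p>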
      <p>The numerical experiments showed that the machine time is proportional to m^2.
The number of constraints has a greater impact on computational efficiency than the
number of variables; therefore, it was concluded that when formulating a linear
programming problem one should aim to reduce the number of constraints.</p>
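      <p>A rough operation count is consistent with this observation (a sketch under the
standard dense-tableau assumption, not a derivation from the experiments): one
iteration of the tableau simplex method costs O(mn) arithmetic operations, and the
number of iterations observed in practice grows roughly linearly with m, so
\[ T_{\text{machine}} \sim m \cdot O(mn) = O(m^2 n), \]
which is quadratic in the number of constraints m for a fixed number of variables
n.</p>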
      <p>The software product developed in this work calculates the solution without
losing data and presents the results as integers. The program interface makes it easy
to enter the task data. However, on a single-core processor the maximum performance
cannot be achieved even with parallel execution of the program. It was also found that
with sharply contrasting input data for the freight fronts, incorrect parallelization and
failures in the execution of the algorithm occur.</p>
      <p>Fig. 12. Performance results on a single-core processor.
Data analysis (see Fig. 12-14) shows that the maximum productivity of the program is
reached as the number of threads and cores increases. With a small number of threads
on a single-core processor the algorithm runs relatively long compared with the other
processors, and minor differences between runs are possible for the same, sufficiently
large, input parameters.</p>
    </sec>
    <sec id="sec-7">
      <title>7 Conclusion</title>
      <p>The linear programming problem and the solution method described in this work
are only a single example from a huge number of linear programming tasks.</p>
      <p>Linear programming is the most commonly used method of optimization. It is
one of the main parts of the section of modern mathematics called mathematical
programming.</p>
      <p>The simplex method is a typical example of the iterative computations used in
solving most optimization tasks. For the computer implementation of the simplex
method, a method of using artificial variables has been developed, which allows
finding an initial basic solution of the problem.</p>
      <p>The main task of this work was to develop a program for solving linear
programming problems of sufficiently large dimensionality and ratio of the number
of variables to the number of constraints. According to the results of the numerical
experiments, we can conclude that the parallelized algorithm of the simplex method
based on the OpenMP parallel programming technology is highly efficient. After
parallelizing the critical sections of the algorithm to speed up the program, it was
noted that the acceleration factor increases with the number of cores, which means it
directly depends on the architecture of the computer. This approach is aimed at
supporting the latest developments in multi-core processors. However, the total
amount of computation can be further reduced by some optimizations of the
implemented algorithm, which are goals for further work.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Yakovlev</surname>
            ,
            <given-names>M.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gerasymova</surname>
            ,
            <given-names>T.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nesterenko</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          :
          <article-title>Characteristic feature of the solving both of non-linear systems and systems of ordinary differential equations on parallel computer</article-title>
          . In Proceedings of international symposium “
          <article-title>Optimization problems of computations” (OPC - XXXV)</article-title>
          . Kyiv: V.M. Glushkov Institute of cybernetics of NAS of Ukraine,
          <year>2009</year>
          . Kyiv: Vol. 2. P.
          <volume>435</volume>
          -
          <fpage>439</fpage>
          . (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Yakovlev</surname>
          </string-name>
          , MV.F.,
          <string-name>
            <surname>Nesterenko</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brusnikin</surname>
            ,
            <given-names>V.N.</given-names>
          </string-name>
          :
          <article-title>Problems of the efficient solving of non-linear systems on multi-processor MIMD-architecture computers</article-title>
          .
          <source>Mathematical machines and systems. (4)</source>
          . P.
          <volume>12</volume>
          -
          <fpage>17</fpage>
          . (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Khymych</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Molchanov</surname>
            ,
            <given-names>Y.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Popov</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          <article-title>other: Parallel algorithms for solving problems of computational mathematics</article-title>
          . Kiev: Scientific Opinion,
          <volume>248</volume>
          pp. (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Autar</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <string-name>
            <surname>NONLINEAR EQUATIONS - Newton-Raphson Method-More</surname>
            <given-names>Examples</given-names>
          </string-name>
          ,
          <source>Civil Engineering. August</source>
          <volume>7</volume>
          , 4 pp. (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Autar</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <string-name>
            <surname>NONLINEAR EQUATIONS - Newton-Raphson Method-More</surname>
            <given-names>Examples</given-names>
          </string-name>
          ,
          <source>Mechanical Engineering. August</source>
          <volume>7</volume>
          , 3 pp. (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>6. Kahaner, D., Moler, C., Nash, S.: Numerical Methods and Software. Mir Publishing House, 575 pp. (1998).</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>7. Mochurad, L.I.: Method of reduction model for calculation of electrostatic fields of electronic optics systems. Science journal Radioelektronika, Informatics, Management, 1(48), pp. 29-40 (2019). (In Ukrainian)</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>8. Voss, M.: OpenMP Shared Memory Parallel Programming. Toronto, Canada (2003).</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>9. Chapman, B., Jost, G.: Using OpenMP: Portable Shared Memory Parallel Programming (Scientific and Engineering Computation). The MIT Press, Cambridge, Massachusetts (2008).</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>10. Chandra, R., Menon, R., Dagum, L., Kohr, D., Maydan, D., McDonald, J.: Parallel Programming in OpenMP. Morgan Kaufmann Publishers (2000).</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>11. Grama, A., Gupta, A., Karypis, G., Kumar, V.: Introduction to Parallel Computing. Addison Wesley, ISBN 0-201-64865-2, 856 p. (2003).</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>12. Mochurad, L., Boyko, N.: Solving Systems of Nonlinear Equations on Multi-core Processors. Advances in Intelligent Systems and Computing IV: Selected Papers from the International Conference on Computer Science and Information Technologies, CSIT 2019, September 17-20, pp. 90-106, Lviv, Ukraine (2019).</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>13. Shakhovska, N., Boyko, N., Zasoba, Y., Benova, E.: Big data processing technologies in distributed information systems. Procedia Computer Science, 10th International Conference on Emerging Ubiquitous Systems and Pervasive Networks (EUSPN-2019), 9th International Conference on Current and Future Trends of Information and Communication Technologies in Healthcare (ICTH-2019), Vol. 160, pp. 561-566, Lviv, Ukraine (2019).</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>14. Boyko, N., Shakhovska, Kh., Mochurad, L., Campos, J.: Information System of Catering Selection by Using Clustering Analysis. Proceedings of the 1st International Workshop on Digital Content &amp; Smart Multimedia (DCSMart 2019), pp. 94-106, Lviv, Ukraine (2019).</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>