Dynamic game task of executors incentives in projects for the
development of new production in continuous time

                O V Pavlov1

                1 Samara National Research University, Moskovskoye shosse, 34, Samara, Russia, 443086

                e-mail: pavlov@ssau.ru


                Abstract. The article explores the problem of providing incentives to the executors of a new
                product development project at an industrial enterprise in continuous time. In the process of
                developing new products, the learning curve effect manifests itself: labor intensity decreases
                as the cumulative production volume grows. The new product development project is
                considered as a managed hierarchical dynamic system consisting of a project management
                board (principal) and executors (agents). The interaction of the project participants is
                formalized as a hierarchical differential game. To solve the formulated dynamic problem of
                material incentives, the well-known principle of cost compensation is applied. The original
                problem is divided into the task of coordinated incentives and the task of coordinated
                planning. The study shows that the task of coordinated dynamic planning is for the principal
                to determine the optimal planned production volumes that minimize the labor costs of the
                agents. The initial dynamic problem of material incentives is thus reduced to an optimal
                control problem, which is solved analytically in continuous time using the Pontryagin
                maximum principle. The study identifies a condition on the optimal production volumes that
                coordinates the interests of the principal and the agents.



1. Introduction
The article explores the problem of providing incentives to the executors of a new product development
project at an industrial enterprise in continuous time. In the process of developing new products, the
learning curve effect manifests itself: the labor time (labor intensity) required to perform repetitive
manufacturing operations decreases. The new product development project is considered as a managed
hierarchical dynamic system consisting of the project management (principal) and the executors
(agents). The dynamics of the controlled production system depends only on the actions of the agents,
while the principal affects the agents' objective functions by choosing the material incentive function.
The state of the hierarchical dynamic system in each time period depends on its position and on the
actions of the participants in the previous period. Production activity in a project for the development
of new production is characterized by diverging interests of the principal and the agents, which leads
to a decrease in the economic efficiency of the entire production system. These contradictions can be
resolved by coordinating management mechanisms that encourage the agents to choose actions that are
beneficial to the principal.



    Dynamic models of the interaction of unequal players are considered in the theory of active systems
[1-2], the theory of hierarchical information systems [3-5] and the theory of dynamic games [6-8].
    In terms of dynamic game theory, the dynamic task of providing incentives to agents is called an
inverse Stackelberg game. A review of reverse Stackelberg game models has been carried out in the
scientific publications [9-12]. In the theory of hierarchical information systems [3-5] the dynamic
incentive task is called Germeier's game Г2.
    The theory of active systems [1] develops an approach based on the principle of cost compensation.
The principal compensates the agent's costs if the agent chooses the optimal planned trajectory and
pays no material compensation otherwise. The original problem is divided into the task of coordinated
incentives and the task of coordinated planning. The task of coordinated planning is reduced to an
optimal control problem. The recent study [13] obtains results that generalize the theorems of the
monograph [1].
    The theory of hierarchical systems [3-5] suggests an approach in which the principal chooses a
program of joint actions with the agent and a punishment for deviation from this program. As a result,
the initial problem is transformed into an optimization problem.
    In dynamic game theory [7], the principal's plan is implemented using trigger strategies. The basic
idea is that the agents agree to follow a certain trajectory and punish any agent who deviates.
    The current study formulates and analytically solves the dynamic task of providing incentives to
agents under learning-by-doing, within the framework of the approach proposed in the monograph [1].

2. Dynamic game task of executors incentives in projects for the development of new production

2.1. The general statement and solution algorithm of the task of executor incentives in projects for the
development of new production
In this dynamic game model there are both the dynamics of decision making and the dynamics of the
managed system. The inequality of the participants is fixed by the order of moves: the first move is
made by the principal. It is assumed that the agents are not linked to each other and perform their
actions independently.
   The incentive problem is formalized as a dynamic game in positional strategies for two players with
feedback on control:
                J_p = ∫_0^T e^{-ρt} [p·u(t) − σ(x(t))] dt → max,

                J_a = ∫_0^T e^{-μt} [σ(x(t)) − C(x(t), u(t))] dt → max,

                dx(t)/dt = u(t),

                0 ≤ u(t) ≤ x_0 + R − x(t),   t ∈ [0, T],

                x(0) = x_0,

                x(T) = x_0 + R,
where J_p is the decision-making criterion of the principal, J_a is the decision-making criterion of the
agent, ρ is the principal's discount rate, u(t) is the agent's production volume at time point t, p is the
product price, σ(x(t)) is the incentive function of the principal, x(t) is the cumulative production
volume, T is the project planning horizon, μ is the agent's discount rate, C(x(t), u(t)) is the agent's
labor cost function in production (costs at time point t), x_0 is the production volume produced by the
agent before the start of the project, and R is the production volume to be produced by the time point T.
   The agent's labor cost function (costs at time point t) in monetary terms is defined as the product of
the labor intensity c(x(t)), the production volume u(t) and the cost of one hour of work s:

                C(x(t), u(t)) = s·c(x(t))·u(t).                                            (1)


   The dynamics of the labor intensity of products as a function of the cumulative production volume is
described by different models of the learning curve. The most typical models are the power, exponential
and logistic ones, which are described in the scientific literature [14-17].
   The power model of the learning curve has the following form:

                c(x(t)) = a·x(t)^{-b},                                                     (2)

where a is the cost of producing the first product and b is the learning index.
    The learning index characterizes the speed of decrease in the unit costs of the product with an
increase in the cumulative production volume.
    The exponential model of the learning curve:

                c(x(t)) = k·e^{-γ·x(t)},

where γ is the learning index and k is a parameter of the exponential model.
    The logistic model of the learning curve:

                c(x(t)) = c_min + (c_max − c_min)/(1 + θ·e^{γ·x(t)}),

where c_min, c_max are the minimum and maximum values of the unit costs of manufacturing the
product, γ is the learning index, and θ is a parameter of the logistic model.
   To solve the formulated incentive problem, the principle of cost compensation is applied [1].
   In accordance with the principle of cost compensation, it is enough for the principal to compensate
the agent's costs in order to encourage the agent to choose the planned trajectory:

                σ(x(t)) = C(x(t), u(t)).                                                   (3)

   Taking into account (3) and (1), the goal function of the principal is written as

                J_p = ∫_0^T e^{-ρt} [p − s·c(x(t))]·u(t) dt → max.

   Since the price of the part p is a constant value, the maximization of the integral income of the
principal can be replaced by the minimization of the integral labor costs of the agent:

                J_p = ∫_0^T e^{-ρt} C(x(t), u(t)) dt → min.
   The solution algorithm consists of dividing the original problem into the task of coordinated
incentives and the task of coordinated planning.
1. The task of coordinated dynamic incentives.
   The principal chooses a compensatory incentive system, which compensates the agent's costs if the
optimal planned trajectory of the principal is followed and provides no material payments otherwise:

                σ(x(t)) = C(x(t), u(t)),  if x(t) = x^R(t) for all t ∈ [0, T],
                σ(x(t)) = 0,              if x(t) ≠ x^R(t) for some t ∈ [0, T],

where x^R(t) is the planned trajectory.
2. The task of coordinated dynamic planning.
   The optimal planned trajectory of the principal is determined from the solution of the optimal control
problem:

                J_p = ∫_0^T e^{-ρt} C(x(t), u(t)) dt → min,                                (4)

                dx(t)/dt = u(t),                                                           (5)

                0 ≤ u(t) ≤ x_0 + R − x(t),   t ∈ [0, T],                                   (6)

                x(0) = x_0,                                                                (7)


                x(T) = x_0 + R.                                                            (8)
   The task of the principal is to select the optimal production volumes of parts u(t)^{opt} satisfying
constraint (6) that transfer the production process (5) from the initial state (7) to the final state (8)
and minimize the integral discounted labor costs of the agent (4).
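   The planning problem (4)-(8) also lends itself to a direct numerical treatment. The sketch below
discretizes it on a uniform time grid and solves it with SciPy's SLSQP solver; the parameter values, the
use of the power learning curve, and the simplification of constraint (6) to a non-negativity bound plus
the terminal condition (8) are assumptions made purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

# Discretized sketch of the coordinated planning problem (4)-(8):
# choose piecewise-constant production volumes u_k >= 0 that move the
# cumulative output from x0 to x0 + R while minimizing discounted labor costs.
# All parameter values are illustrative assumptions.
rho, s = 0.05, 1.0                 # principal discount rate, hourly rate
a, b = 100.0, 0.3                  # power learning curve c(x) = a * x**(-b)
x0, R, T, N = 10.0, 90.0, 1.0, 50  # initial state, target volume, horizon, grid size
dt = T / N
t = np.linspace(0.0, T, N, endpoint=False)      # left endpoints of the time steps

def discounted_costs(u):
    """Criterion (4) on the grid: sum of e^(-rho*t) * s * c(x) * u * dt."""
    x = x0 + np.concatenate(([0.0], np.cumsum(u[:-1] * dt)))   # x(t) at step starts
    return float(np.sum(np.exp(-rho * t) * s * a * x ** (-b) * u * dt))

# Terminal condition (8); the state-dependent upper bound in (6) is omitted here.
constraints = ({'type': 'eq', 'fun': lambda u: np.sum(u * dt) - R},)
bounds = [(0.0, None)] * N          # u(t) >= 0
u_start = np.full(N, R / T)         # uniform initial guess

result = minimize(discounted_costs, u_start, method='SLSQP',
                  bounds=bounds, constraints=constraints)
print(result.fun)                   # minimal discounted labor costs
print(result.x[:5])                 # production volumes on the first steps
```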

2.2. Solution of the dynamic production planning problem
To solve the formulated optimal control problem (4)-(8) in continuous time, we apply the Pontryagin
maximum principle [18]. A direct application of the Pontryagin maximum principle to the formulated
optimal control problem is impossible, since in this case a singular control arises [19].
    As the principal's optimality criterion, we therefore consider the economically close criterion of
minimizing the integral discounted growth rate of the agent's labor cost function C(t):

                J_p = ∫_0^T e^{-ρt} (Ċ(t)/C(t)) dt → min,

where Ċ(t)/C(t) = [ln C(t)]′ is the logarithmic derivative of the labor cost function, which has the
economic meaning of the growth rate of the labor cost function.
Statement 1.
For a positive and absolutely continuous function C(t), the maximization (minimization) of the
functional

                J̃ = ∫_0^T e^{-ρt} (Ċ(t)/C(t)) dt                                          (9)

is equivalent to the maximization (minimization) of the functional

                J = ∫_0^T e^{-ρt} ln C(t) dt.                                             (10)

The proof of the statement is given in the Appendix.
   Taking this statement into account, we take the minimization of the integral discounted logarithmic
function of labor costs (10) as the optimality criterion. Substituting the expression for the labor cost
function (1) into the functional (10), we obtain

                J_p = ∫_0^T e^{-ρt} ln[s·c(x(t))·u(t)] dt → min.                          (11)
   To solve the formulated optimal control problem (5)-(8), (11), we apply the Pontryagin maximum
principle [18]. The Hamiltonian function is stated below:

                H(t, x, ψ, u) = ψ(t)·u(t) − e^{-ρt}·ln s − e^{-ρt}·ln[c(x(t))] − e^{-ρt}·ln[u(t)],

where ψ(t) is an auxiliary (adjoint) variable that satisfies the following conjugate equation:

                dψ/dt = −∂H/∂x = e^{-ρt}·∂{ln[c(x(t))]}/∂x.

   In accordance with the Pontryagin maximum principle, at each point of the optimal trajectory the
Hamiltonian function reaches its maximum with respect to the control parameters. The maximum of the
Hamiltonian over the control is found from the condition

                ∂H/∂u = ψ(t) − e^{-ρt}/u(t) = 0.                                          (12)

   From condition (12) we define the optimal control:

                u(t)^{opt} = e^{-ρt}/ψ(t).                                                (13)
    The system of conjugate equations can be written as follows:



                dx/dt = e^{-ρt}/ψ(t),
                dψ/dt = e^{-ρt}·∂{ln[c(x(t))]}/∂x.                                        (14)

   From the equations of system (14) it follows that

                dt = e^{ρt}·ψ·dx,                                                         (15)

                dt = e^{ρt}·(∂{ln[c(x(t))]}/∂x)^{-1}·dψ.                                  (16)

   The symmetric form of the system (14), taking into account equations (15) and (16), is

                dt = e^{ρt}·ψ·dx = e^{ρt}·(∂{ln[c(x(t))]}/∂x)^{-1}·dψ.                    (17)

   Separating the variables in the second equality of (17) gives

                dψ/ψ = (∂{ln[c(x(t))]}/∂x)·dx.                                            (18)

   The general solution of the differential equation (18) is

                ψ = C_0·c(x(t)),                                                          (19)

where C_0 is the integration constant.
   The optimal control (13), taking into account (19), takes the following form:

                u(t)^{opt} = e^{-ρt}/(C_0·c(x(t))).                                       (20)
    Based on the obtained condition for the optimal control (20), we formulate the following statement.
Statement 2.
Taking discounting into account, the optimal production volumes for any model of the learning curve at
each time point should be inversely proportional to the labor intensity of the products and directly
proportional to the discount factor e^{-ρt}.
    In the absence of discounting (discount rate ρ = 0), the optimal control becomes

                u(t)^{opt} = 1/(C_0·c(x(t))).

    Based on the obtained condition for the optimal control without discounting, we formulate the
following statement.
Statement 3.
In the absence of discounting, the optimal production volumes for any model of the learning curve at
each time point should be inversely proportional to the labor intensity of the products.
    Let us find the optimal control and the optimal trajectory for the power model of the learning curve
(2). Substituting formula (2) into the obtained expression (19) for the conjugate variable gives

                ψ = C_1·x(t)^{-b},                                                        (21)

where C_1 = C_0·a is the integration constant.
   We substitute formula (21) into the differential equation (15):

                dt = e^{ρt}·C_1·x^{-b}·dx.                                                (22)

    The general solution of equation (22) has the form

                t = −(1/ρ)·ln[C_2 − ρ·C_1·x^{1-b}/(1 − b)].                               (23)
    We define the integration constants C1 and C2 from the boundary conditions (7) and (8):



                C_1 = (1 − e^{-ρT})·(1 − b)/{ρ·[(x_0 + R)^{1-b} − x_0^{1-b}]},            (24)

                C_2 = 1 + (1 − e^{-ρT})·x_0^{1-b}/[(x_0 + R)^{1-b} − x_0^{1-b}].          (25)
   Substituting the integration constants (24) and (25) into formula (23), we find the equation of the
optimal trajectory of the cumulative production volume:

                x(t)^{opt} = {x_0^{1-b} + [(1 − e^{-ρt})/(1 − e^{-ρT})]·[(x_0 + R)^{1-b} − x_0^{1-b}]}^{1/(1-b)}.      (26)
   We define the optimal control by substituting formula (21) into condition (13), with the found
expression (24) for C_1:

                u(t)^{opt} = [ρ·e^{-ρt}/(1 − e^{-ρT})]·{x_0^{1-b} + [(1 − e^{-ρt})/(1 − e^{-ρT})]·[(x_0 + R)^{1-b} − x_0^{1-b}]}^{b/(1-b)}·[(x_0 + R)^{1-b} − x_0^{1-b}]/(1 − b).      (27)

    Let us find the labor cost function (1) on the optimal trajectory with the optimal control, taking
into account formulas (26) and (27):

                C(t, x^{opt}, u^{opt}) = s·a·ρ·e^{-ρt}·[(x_0 + R)^{1-b} − x_0^{1-b}]/[(1 − e^{-ρT})·(1 − b)].      (28)

   Analyzing (28), we conclude that under the optimal control the change of the instantaneous cost
function over time depends only on the discount factor e^{-ρt}.
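   The closed-form solution for the power model can be evaluated directly. The sketch below (with
assumed, purely illustrative parameter values) computes the optimal trajectory (26), the optimal control
(27) and the costs (28), and checks the boundary conditions and the consistency of the trajectory with
the control:

```python
import numpy as np

# Sketch of the closed-form solution for the power learning curve (2).
# All parameter values are illustrative assumptions.
rho, s = 0.05, 1.0       # discount rate, hourly rate
a, b = 100.0, 0.3        # power model c(x) = a * x**(-b)
x0, R, T = 10.0, 90.0, 1.0

delta = (x0 + R) ** (1 - b) - x0 ** (1 - b)

def x_opt(t):
    """Optimal cumulative production volume, formula (26)."""
    return (x0 ** (1 - b)
            + (1 - np.exp(-rho * t)) / (1 - np.exp(-rho * T)) * delta) ** (1 / (1 - b))

def u_opt(t):
    """Optimal production volume, formula (27)."""
    return (rho * np.exp(-rho * t) / (1 - np.exp(-rho * T))
            * delta / (1 - b) * x_opt(t) ** b)

def cost_opt(t):
    """Agent labor costs on the optimal trajectory, formula (28)."""
    return s * a * rho * np.exp(-rho * t) * delta / ((1 - np.exp(-rho * T)) * (1 - b))

t = np.linspace(0.0, T, 1001)
assert np.isclose(x_opt(0.0), x0)                            # boundary condition (7)
assert np.isclose(x_opt(T), x0 + R)                          # boundary condition (8)
dx_dt = np.gradient(x_opt(t), t)                             # numerical derivative of (26)
assert np.allclose(dx_dt[1:-1], u_opt(t)[1:-1], rtol=1e-3)   # dynamics (5): dx/dt = u
print(cost_opt(t)[:3])                                       # costs decay as exp(-rho * t)
```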

3. Conclusion
The paper explores the dynamic game task of providing incentives to executors in projects for the
development of new production in continuous time.
    To solve the formulated incentive problem, the principle of cost compensation was applied. The
original task is divided into the task of coordinated incentives and the task of coordinated planning.
The task of coordinated incentives is as follows: the principal chooses a compensatory incentive system,
which compensates the agent's expenses if the optimal planned trajectory of the principal is followed
and provides no material payments otherwise.
    As a result of the study, a condition on the optimal production volumes that coordinates the
interests of the principal and the agents was found: for any model of the learning curve, the optimal
production volumes at each time point should be chosen inversely proportional to the labor intensity of
the product and directly proportional to the discount factor. In the absence of discounting, the optimal
production volumes for any model of the learning curve at each time point should be chosen inversely
proportional to the labor intensity of the products.
    As a result of the analytical solution of the problem for the power model of the learning curve, the
following formulas were obtained: a formula for the optimal production volumes at each time point, the
optimal trajectory of the cumulative production volume, and a formula for the agent's labor costs at
each time point on the optimal trajectory with the optimal control.

Appendix
Proof of the statement.
   We integrate the functional (9) by parts:

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = e^{-ρT}·ln C(T) − ln C(0) + ρ·∫_0^T e^{-ρt}·ln C(t) dt.      (29)
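   As a numerical sanity check of identity (29), the following sketch evaluates both sides with the
trapezoidal rule for an assumed, purely illustrative cost profile C(t):

```python
import numpy as np

# Numerical check of the integration-by-parts identity (29) for an
# assumed positive cost profile C(t); all numbers are illustrative.
rho, T = 0.05, 1.0
t = np.linspace(0.0, T, 100001)
C = 50.0 * np.exp(-0.4 * t) + 10.0              # positive, decreasing C(t)
dC = -0.4 * 50.0 * np.exp(-0.4 * t)             # its exact derivative dC/dt

def trap(y, x):
    """Simple trapezoidal rule."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

lhs = trap(np.exp(-rho * t) * dC / C, t)
rhs = (np.exp(-rho * T) * np.log(C[-1]) - np.log(C[0])
       + rho * trap(np.exp(-rho * t) * np.log(C), t))
print(lhs, rhs)                                  # the two values should agree closely
```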
    We introduce the function g(t):

                g(t) = e^{-ρt}·ln C(t).


    Then the values of the function at the initial and final moments of time are g(0) = ln C(0) and
g(T) = e^{-ρT}·ln C(T). Expression (29) takes the form

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = g(T) − g(0) + ρ·∫_0^T g(t) dt.             (30)
Case A. Increasing function g(t).
The geometric interpretation of the integral S_g = ∫_0^T g(t) dt is the area of the curvilinear trapezium
bounded above by the positive function g(t), below by the abscissa axis, and by the straight lines t = 0
and t = T. The area of the rectangle bounded above by the straight line g(t) = g(T), below by the
abscissa axis, and by the straight lines t = 0 and t = T can be defined, on the one hand, through an
integral and, on the other hand, as the product of its length and height:

                S_T = ∫_0^T g(T) dt = T·g(T).                                             (31)

   Similarly, the area of the rectangle bounded above by the line g(t) = g(0), below by the abscissa
axis, and by the straight lines t = 0 and t = T can be found:

                S_0 = ∫_0^T g(0) dt = T·g(0).                                             (32)

    From formulas (31) and (32) it follows that

                g(T) = (1/T)·∫_0^T g(T) dt,                                               (33)

                g(0) = (1/T)·∫_0^T g(0) dt.                                               (34)

    Then the functional (30), taking into account formulas (33) and (34), can be written as

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = g(T) − g(0) + ρ·∫_0^T g(t) dt = (1/T)·∫_0^T [g(T) − g(0)] dt + ρ·∫_0^T g(t) dt.      (35)

    The integral ∫_0^T [g(T) − g(0)] dt = S_{T0} defines the area of the rectangle bounded above by the
straight line g(t) = g(T), below by the straight line g(t) = g(0), and by the straight lines t = 0 and
t = T.
    The term ρ·∫_0^T g(t) dt can be interpreted geometrically as the area of the curvilinear trapezium
S_g compressed by the factor ρ, since ρ < 1. In the case of an increasing function the condition
g(T) > g(0) is satisfied. The expression (1/T)·∫_0^T [g(T) − g(0)] dt = (1/T)·S_{T0} is a positive value
and equals the area of the rectangle S_{T0} compressed by the factor 1/T.
    The sum of the areas of the compressed curvilinear trapezium ρ·S_g and the compressed rectangle
(1/T)·S_{T0} can be defined as the area of a curvilinear trapezium bounded above by the positive
function γ_1·g(t) (γ_1 is a constant factor), below by the abscissa axis, and by the straight lines t = 0
and t = T:

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = (1/T)·∫_0^T [g(T) − g(0)] dt + ρ·∫_0^T g(t) dt = ∫_0^T γ_1·g(t) dt.




    Since γ_1 is a constant factor, the maximization of the functional ∫_0^T γ_1·g(t) dt is equivalent to
the maximization of the functional ∫_0^T g(t) dt = ∫_0^T e^{-ρt}·ln C(t) dt. Thus, the statement is
proved for this case.
Case B. Decreasing function g(t).
In the case of a decreasing function the condition g(T) < g(0) is satisfied. Formula (35) takes the form

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = −(1/T)·∫_0^T [g(0) − g(T)] dt + ρ·∫_0^T g(t) dt.

    The integral ∫_0^T [g(0) − g(T)] dt = S_{0T} defines the area of the rectangle bounded above by the
straight line g(t) = g(0), below by the straight line g(t) = g(T), and by the straight lines t = 0 and
t = T.
    The expression (1/T)·∫_0^T [g(0) − g(T)] dt = (1/T)·S_{0T} is a positive value and equals the area of
the rectangle S_{0T} compressed by the factor 1/T.
    Option 1: the conditions ρ ≥ 1/T and ρ·g(T) ≥ (1/T)·g(0) are met.
    In this case, the difference of the areas of the compressed curvilinear trapezium ρ·S_g and the
compressed rectangle (1/T)·S_{0T} can be defined as the area of a curvilinear trapezium bounded above
by the positive function γ_2·g(t) (γ_2 is a constant factor), below by the abscissa axis, and by the
straight lines t = 0 and t = T:

                ∫_0^T e^{-ρt}·(Ċ(t)/C(t)) dt = −(1/T)·∫_0^T [g(0) − g(T)] dt + ρ·∫_0^T g(t) dt = ∫_0^T γ_2·g(t) dt.

    The minimization of the functional ∫_0^T γ_2·g(t) dt is equivalent to the minimization of the
functional ∫_0^T g(t) dt = ∫_0^T e^{-ρt}·ln C(t) dt. The statement is proved.
    Option 2: the conditions ρ ≤ 1/T and ρ·g(T) ≤ (1/T)·g(0) are met.
    In this case, the difference of the areas of the compressed curvilinear trapezium ρ·S_g and the
compressed rectangle (1/T)·S_{0T} can be defined as the area of an inverted curvilinear trapezium
bounded above by the straight line g(t) = (1/T)·g(0), below by the function ρ·g(t), and by the straight
lines t = 0 and t = T. The negative difference of the areas can be calculated as

                −∫_0^T [(1/T)·g(0) − ρ·g(t)] dt = ρ·∫_0^T g(t) dt − g(0).

    Since g(0) = const, the minimization of this expression is equivalent to the minimization of the
functional ∫_0^T g(t) dt = ∫_0^T e^{-ρt}·ln C(t) dt. The statement is proved.
    Option 3: the conditions ρ ≥ 1/T and ρ·g(T) ≤ (1/T)·g(0) are met.



    In this case, the difference of the areas of the compressed curvilinear trapezium ρ·S_g and the
compressed rectangle (1/T)·S_{0T} can be defined as the difference of the areas of two curvilinear
triangles.
    The area of the first curvilinear triangle is bounded above by the function ρ·g(t), below by the
straight line g(t) = (1/T)·g(0), and by the straight lines t = 0 and t = τ (the abscissa of the
intersection point of the function ρ·g(t) and the straight line g(t) = (1/T)·g(0)). The area of the
second curvilinear triangle is bounded above by the straight line g(t) = (1/T)·g(0), below by the
function ρ·g(t), and by the straight lines t = τ and t = T.
    The difference of the areas can be calculated as

                ∫_0^τ [ρ·g(t) − (1/T)·g(0)] dt − ∫_τ^T [(1/T)·g(0) − ρ·g(t)] dt = ρ·∫_0^T g(t) dt − g(0).

Since g(0) = const, the minimization of this expression is equivalent to the minimization of the
functional ∫_0^T g(t) dt = ∫_0^T e^{-ρt}·ln C(t) dt. The statement is proved.


4. References
[1] Novikov D A, Smirnov M I and Shokhina T E 2002 Mechanisms of Dynamic Active Systems
      Control (Moscow: IPU RAN) p 124
[2] Ugolnitsky G A 2016 Management of Sustainable Development of Active Systems (Rostov on
      Don: Publishing House of the Southern Federal University) p 940
[3] Gorelik V A, Gorelov M A and Kononenko A F 1991 Analysis of Conflict Situations in Control
      Systems (Moscow: Radio i svyaz) p 228
[4] Gorelik V A and Kononenko A F 1982 Game-theoretic Models of Decision Making in
      Ecological–economic Systems (Moscow: Radio i svyaz) p 144
[5] Gorelov M A and Kononenko A F 2015 Dynamic conflict models III. Hierarchical games
      Automation and Remote Control 2 89-106
[6] Basar T and Olsder G J 1999 Dynamic Noncooperative Game Theory (Philadelphia: SIAM) p
      519
[7] Dockner E, Jorgensen S, Long N V and Sorger G 2000 Differential Games in Economics and
      Management Science (Cambridge: Cambridge University Press) p 382
[8] Li T and Sethi S P 2017 A review of dynamic Stackelberg game models Discrete and
      Continuous Dyn. Syst. B 22(1) 125-159
[9] Olsder G J 2009 Phenomena in inverse Stackelberg games. Part 2: dynamic problems J. Optim.
      Theory Appl. 143(3) 601-618
[10] Groot N, De Schutter B and Hellendoorn H 2012 Reverse Stackelberg games. Part I: basic
      framework Proc. of the 2012 IEEE Int. Conf. on Control Applications 421-426
[11] Groot N, De Schutter B and Hellendoorn H 2012 Reverse Stackelberg games. Part II: results
      and open issues Proc. of the IEEE Int. Conf. on Control Applications 427-432
[12] Groot N, Zaccour G and De Schutter B 2017 Hierarchical game theory for system-optimal
      control: applications of reverse Stackelberg games in regulating marketing channels and traffic
      routing IEEE Control Systems Magazine 37(2) 129-152
[13] Rokhlin D B and Ougolnitsky G A 2018 Stackelberg equilibrium in a dynamic stimulation
      model with complete information Automation and Remote Control 79(4) 701-712
[14] Wright T P 1936 Factors affecting the cost of airplanes Journal of the Aeronautical Sciences 3(4)
      122-128
[15] Yelle L E 1979 The learning curve: historical review and comprehensive survey Decision
      Sciences 10(2) 302-328


[16] Badiru A 1992 Computational survey of univariate and multivariate learning curve models
     IEEE Transactions on Engineering Management 39(2) 176-188
[17] Jaber M Y 2011 Learning Curves: Theory, Models, and Applications (Boca Raton: CRC
     Press) p 476
[18] Pontryagin L S, Boltyansky V G, Gamkrelidze R V and Mishchenko E F 1983 Mathematical
     Theory of Optimal Processes (Moscow: Nauka) p 392
[19] Afanasyev V N 2003 Mathematical Theory of Control Systems Design (Moscow: Vysshaya
     shkola) p 614



