The Criterion of Optimality in the Convex Vector Problem of
Optimization
Andrii Sokolov a, Oksana Sokolova a and Viktor Khodakov b
a Kherson National Technical University, Berislav highway 24, Kherson, 73008, Ukraine
b Admiral Ushakov Maritime Institute of Postgraduate Education, Druzhby str. 4, Kherson, 73000, Ukraine


                 Abstract
                 The article deals with the solution of the vector optimization problem. It is shown that a
                 solution which is optimal according to Lagrange exists in the problem with convex particular
                 target functions, which is characteristic of organizational and economic systems.

                 Keywords
                 Vector objective function, Lagrange method, optimality, Pareto optimality, vector
                 optimization

1. Introduction
    In the history of the development and use of information technologies, three stages of the use of
models and methods of decision-making can be distinguished.
    At the first stage, attempts were made to solve mainly optimization problems on a continual set of
alternatives. This area was developed in connection with the need to obtain an economic effect in
industrial and transport technologies, both civil and military [1]. The massive nature of certain areas
of use and development stimulated the interest of industrialists and the military in solving such
problems, and of scientists in finding methods for solving optimization problems. At this stage, an
important role was played by such scientists as L.V. Kantorovich, A.N. Kolmogorov, E.S. Wentzel,
G. Dantzig, L. Ford, R. Bellman and others [2-5].
    The second stage of development is associated with the emergence and widespread penetration of
computer technology into almost all spheres of human activity.
    In the second half of the twentieth century, automated control systems for various purposes were
developed [6]. When solving the problem of human-machine interaction, much attention was paid to
the role of the human factor, which stimulated the development of expert methods and decision-making
theory, as well as of the appropriate technical means and technologies based on these
methods.
    The third stage is associated with the rapid growth of databases and knowledge bases in general-
and special-purpose information systems, which stimulated research to identify regularities that could
be used for decision-making. Within the framework of this new paradigm of data mining (knowledge
discovery), systems began to be developed for analyzing large amounts of data. Among other
regularities, a significant proportion is the identification of preferences on a set of objects, which is
directly related to decision making.
    The increased role of the human factor in creating knowledge bases and the problem of
formalizing inference were the basis for including decision-making theory in the new scientific
direction "Artificial Intelligence", which unites research in the field of brain-like
structures. Decision support systems that combine optimization and expert methods, databases


COLINS-2021: 5th International Conference on Computational Linguistics and Intelligent Systems, April 22–23, 2021, Kharkiv, Ukraine
EMAIL: sokolovandrew84@gmail.com (A. Sokolov); ksushkaariyka@gmail.com (O. Sokolova); hodakov.victor@gmail.com (V. Khodakov)
ORCID: 0000-0001-8442-6137 (A. Sokolov); 0000-0001-7251-3284 (O. Sokolova); 0000-0002-8188-9125 (V. Khodakov)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)
and knowledge have become a practical tool for decision-makers.
    Among the works that influenced the formation of the new scientific direction, one can single out
the works of N. Nilsson on the creation of a universal problem solver; H. Simon, who studied the
problem of uncertainty in the science of the artificial proposed by him; L. Zadeh, who proposed a
formalization of subjective uncertainty; D.A. Pospelov, who studied the problem of choice within the
framework of situational management; and N.G. Zagoruiko, who studied the selection of the best
objects based on their attributive representation within the framework of pattern recognition [7-14].
    Thus, at present, it becomes possible to create brain-like computing structures for ultra-high
performance computers [15].
    One of the most important tasks, especially in demand at the second and third stages of the
development of decision-making theory, is the study of methods of multicriteria choice on a finite set
of alternatives (objects). These include vector and scalar optimization.
    Vector optimization methods are based on establishing preferences on a set of vector object
estimates. In the early twentieth century, Pareto and Edgeworth proposed the dominance relation,
later called Pareto dominance. Research in this direction was continued by B. Roy, V.V. Podinovsky and
V.D. Nogin [16-19]. The ordering of objects based on the qualitative values of features took shape in
the direction of verbal decision analysis, developed in the works of O.I. Larichev,
A.B. Petrovsky and others [1, 20, 21].
    Methods for scalarization of vector estimates are based on transforming a multi-criteria
optimization problem into a single-criteria optimization problem using a multi-criteria utility function.
The issues studied in the framework of this direction were formalized into a multicriteria utility
theory. In the works of J. von Neumann and O. Morgenstern, an axiomatic approach was developed and
the main directions of multicriteria utility theory were formulated, which were then developed in
the works of H. Raiffa, R. Keeney, P. Fishburn, W. Armstrong, S.V. Emelyanov, N.V. Khovanov and
others [22-28].
    Of the Ukrainian scientists working fruitfully in this scientific direction, we consider it useful to
single out the scientific school of E.G. Petrov [29].
    Modern information technologies are largely based on the use of optimization procedures, which
ensures the efficiency of their implementation [30].
    At the same time, the complexity and coherence of the problems being solved leads to significant
time consumption for optimization, which manifests itself as the effect of the dynamics of
optimization procedures [31].
    It should be noted that, with the development and improvement of information technology, the
problem of finding the best solutions will take an increasing share among all processes.
    The desire to increase the productivity and accuracy of information processing led to a revision
of the approach to determining the amount of information and to the transition to the analysis and
synthesis of information systems in the information space [32].
    The development of information theory was historically associated with the works of R. Hartley,
A.N. Kolmogorov, C. Shannon, A.Ya. Khinchin, V.A. Kotelnikov, V.D. Goppa and
A.M. Yaglom [33-42]. At present, information processes and the processes of forming models of
systems remain insufficiently elaborated.
    Human activity is associated with different types and methods of decision-making. These include,
for example, the problem of finding optimal options for managing organizational systems and the
problem of designing systems that provide, for a given object, a control law or a certain control
sequence of actions delivering either the maximum or the minimum of a given set of system quality
criteria.
    The principles assumed in the construction and analysis of information systems are the saving
principle, the optimality principle, and the principle of the unidirectional flow of time. In fact, the
saving principle and the unidirectional flow of time are feasibility criteria that allow avoiding
mistakes in analysis and synthesis, while the optimality principle is a tool providing protection from
the mistakes of "simple" solutions.
    That is why it is logical to thoroughly analyze the initial criteria of optimality before studying
intellectual systems, where one deals with rather complex systems [43].
    Organizational control systems occupy an important place among control systems in general.
Here organizational control is meant, which differs from the control of technological processes most
notably in its object, which is not machinery (equipment) but people and groups of people. It stands
to reason that the border defining the difference is rather conditional. Insofar as control is
implemented by people, it should be considered organizational control [44].
    We are interested first and foremost in the optimal control of organizational systems. The optimal
control of organizational systems is a problem of designing a system that provides, for a predefined
object of control or a process, a law of control or a controlling sequence of influences delivering the
maximum or minimum of a predefined manifold of system quality criteria [45].
    The problems of optimal control of organizational systems require solving problems of
multicriteria, or vector, optimization. It seems reasonable to use an approach in which the convexity
of the particular target functions is viewed as a definite peculiarity of the control system when
solving problems of vector optimization.

2. The modern state of the vector optimization problem
   To solve problems of choice, one needs to know:
    • the selectable alternatives;
    • a set of functions that characterize the properties of an object;
    • a set of goals targeted at the evaluated features;
    • the group of people taking part in decision-making;
    • the preference relation;
    • the degree of conformity of the alternatives to the target state [44].
   The optimality principle, in its implementation, is associated with the concept of the goal function.
For the problem of choosing a variant or the value of an impact, the goal function is a mapping onto a
set of numbers, that is, a function. In this case, for the goal function f(x), the value of the vector x is
sought which determines the solution of the optimization problem.
   The problem of mutual optimization of several particular objective functions is referred to as
multiparameter optimization [46]:
                                 x^* \to \min f_i(x), \quad i = \overline{1,n}.                    (1)
   The most accurate name of problem (1), vector optimization, emphasizes that it is the
simultaneous optimization of all components of the objective function vector that the issue is about:
                                 x^* \to \min f(x)                                                 (2)
   Indeed, the particular objective functions of (1) can form a vector whose components are the
particular objective functions, as in (2). The problem could be called multicriteria, since each
objective function can be called a criterion; however, the notion of "criterion" has a reasonably
definite meaning in modeling, the theory of automatic control and decision theory. In the endeavor to
decrease the complexity, we define (2) as the problem with many objective functions, or the problem
of vector optimization.
   It is an outwardly simple problem: one has to find the vector x* bringing the minimum
simultaneously to all components of the target vector. Since dim x = n and dim f = m, the stationarity
condition in the form of the matrix equation [47,48]:
                                 \frac{\partial f}{\partial x} = 0,                                (3)
   induces the gradient matrix:
            \frac{\partial f}{\partial x} =
            \begin{pmatrix}
              \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
              \cdots & \cdots & \cdots \\
              \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}
            \end{pmatrix}                                                                          (4)
of dimension m by n. Solving the set of equations (3) lets us find the particular optima, but it does not
make it possible to find the optimum of the vector objective function, since the m links among the
components of the objective function vector are absent. Consequently, to determine the optimum
point it is essential to have m additional links.
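   As a numerical illustration (ours, not the authors'), the gradient matrix (4) can be approximated by finite differences; the vector objective f below is a hypothetical example with dim x = dim f = 2.

import numpy as np

def jacobian(f, x, h=1e-6):
    """Finite-difference approximation of the m-by-n gradient matrix (4)."""
    fx = np.asarray(f(x), dtype=float)          # m-vector of objective values
    J = np.zeros((fx.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (np.asarray(f(xp), dtype=float) - fx) / h   # column j holds df_i/dx_j
    return J

# Hypothetical vector objective: two convex components with different minima.
f = lambda x: np.array([x[0]**2 + x[1]**2, (x[0] - 1)**2 + x[1]**2])

print(jacobian(f, np.array([0.5, 0.0])))
# approximately [[1, 0], [-1, 0]]: the two rows never vanish at the same point,
# so the stationarity condition (3) alone does not determine a common optimum.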
   Methods for solving vector optimization problems differ in the way these additional links are
organized [48].
   The global criterion method implies ranking the criteria and objectives by significance. When
solving problems by the method of the main criterion, there is no enumeration of alternatives; one
objective function and a set of restrictions are used. That is, one criterion is chosen as the main target
and the rest act as restrictions. The objective function plays the role of the main criterion, and it is
believed that the problem of multicriteria optimization is solved by the main criterion method. A
one-to-one correspondence between alternatives and outcomes is characteristic, and the considered
problems are solved under conditions of certainty [44,49].
   In such a case the problem is solved simply, but one has to decide what the main criterion is:
                                    𝑥 ∗ → min𝑓𝑘 (𝑥) = 𝛼𝑘                                           (5)
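   A minimal numerical sketch of the main criterion method follows; the objectives f1, f2 and the bound c = 0.5 on the secondary criterion are assumptions of ours, not taken from the paper.

import numpy as np

# Assumed particular objectives of one variable.
f1 = lambda x: x**2
f2 = lambda x: (x - 1)**2

# Main criterion method: minimize f1 while f2 acts as the restriction f2(x) <= c.
c = 0.5
x = np.linspace(-2.0, 3.0, 100001)
feasible = f2(x) <= c                     # admissible set defined by the secondary criterion
x_star = x[feasible][np.argmin(f1(x[feasible]))]
print(x_star)                             # ~0.293 = 1 - sqrt(0.5): the feasible point closest to 0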
   The method of forming a global objective function [50] uses polynomial models of the kind:
            f(x) = \sum_{i=1}^{k} a_i f_i(x); \quad a_i > 0, \ i = \overline{1,k}; \quad
            \sum_{i=1}^{k} a_i = 1                                                                 (6)
   The weight coefficients a_i determine the sensitivity of the global objective function f to changes
of the particular objective functions f_i:
            a_i = \frac{\partial f}{\partial f_i}, \quad a_i > 0, \ i = \overline{1,k}             (7)
   Obviously, the coefficients a_i are normalized Lagrange multipliers in the problem:
            x^* \to \min f(x), \quad f_i(x) - c_i = 0, \ i = \overline{1,k}.                       (8)
   with the Lagrange function:
            L(x, \lambda) = \lambda_0 f(x) - \sum_{i=1}^{k} \lambda_i (f_i(x) - c_i)               (9)
which leads to the Lagrange problem with minimization of the influence of constraints [48].
Obviously, the weights can also be assigned “based on the requirements of the problem”.
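   For illustration, a sketch of the weighted-sum model (6); the convex objectives and the weights a1 = 0.3, a2 = 0.7 are assumed, so this is not the authors' code.

import numpy as np

f1 = lambda x: x**2              # assumed particular objectives
f2 = lambda x: (x - 1)**2
a1, a2 = 0.3, 0.7                # assumed weights: a_i > 0, a1 + a2 = 1, as (6) requires

x = np.linspace(-1.0, 2.0, 300001)
g = a1 * f1(x) + a2 * f2(x)      # global objective (6)
print(x[np.argmin(g)])           # 0.7: for a1*x^2 + a2*(x-1)^2 the minimizer is x = a2

   Changing the weights moves the solution between the particular optima x = 0 and x = 1, which is exactly the sensitivity role that (7) assigns to the coefficients a_i.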
   It is also possible to use posynomial models of the form:
            f(x) = \prod_{i=1}^{k} f_i^{a_i}(x); \quad f_i > 0, \ i = \overline{1,k}               (10)
   When structural links and feedbacks are present, it is reasonable to use a posynomial model:
            f(x) = \sum_{j=1}^{p} C_j \prod_{i=1}^{k} f_i^{a_{ij}}(x); \quad
            f_i > 0, \ i = \overline{1,k}; \quad C_j \ge 0, \ j = \overline{1,p}                   (11)
   This approach is called “signomial” optimization and is based on the method of geometric
programming [51].
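   A small sketch of the product model (10); the strictly positive objectives and the equal exponents are assumptions of ours.

import numpy as np

f1 = lambda x: 1.0 + x**2          # shifted so that f_i > 0, as (10) requires
f2 = lambda x: 1.0 + (x - 1)**2
a1, a2 = 0.5, 0.5                  # assumed exponents

x = np.linspace(-1.0, 2.0, 300001)
g = f1(x)**a1 * f2(x)**a2          # product-form global objective (10)
print(x[np.argmin(g)])             # 0.5: with equal exponents the two factors balance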
   Min-max methods are based on a game-theoretic formulation, in which the researcher and the
external environment are considered as a pair of players with mutually contradictory interests. The
goal of the researcher, as before, is to minimize the criterion by choosing an estimation operator from
a certain class, but with min-max methods the estimate is sought for the worst state of the system
under study. Thus, the estimation problem is reduced to minimizing the exact upper bound of the
criterion, calculated over a given set of uncertainty. Therefore, unlike asymptotic estimation methods,
min-max methods are designed to provide the best quality of recovery of unknown parameters and
processes from a fixed volume of observations [52].
   Min-max methods are based on forming “benchmarks” [53, 54]:
            f_i(x) \le t_i, \quad i = \overline{1,k}; \quad
            g_i(x) = 0, \quad i = \overline{1,m}; \quad
            g_i(x) \le 0, \quad i = \overline{m+1,l}                                               (12)
which ensures the non-negativity of the difference t_i - f_i(x) for all admissible values of x.
    In such a case one can pass to the minimum of the maximal deviation in the metric C_0 in order to
find a strong optimum:
            x^* \to \min_x \max_i (t_i - f_i(x)), \quad i = \overline{1,k}.                        (13)
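    A sketch of rule (13) with assumed benchmarks t1 = t2 = 2 and the same assumed objectives as above; the search region is chosen so that t_i - f_i(x) >= 0 holds throughout.

import numpy as np

f1 = lambda x: x**2                 # assumed particular objectives
f2 = lambda x: (x - 1)**2
t1, t2 = 2.0, 2.0                   # assumed benchmarks with t_i >= f_i(x) on the region

x = np.linspace(-0.2, 1.2, 140001)  # region where both differences stay non-negative
dev = np.maximum(t1 - f1(x), t2 - f2(x))    # max_i (t_i - f_i(x)), as in (13)
print(x[np.argmin(dev)])            # 0.5: the deviations balance where x^2 = (x-1)^2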
    The method of Pareto priorities is based on determining an extremum or on its necessary
condition.
    When optimizing by the Pareto method, the convention is used that one object is preferred to
another only if the first object is not worse than the second in all respects and is better in at least one
of them. When this condition is met, the first object is considered dominating and the second
dominated. The Pareto optimum is assumed to be an equilibrium of all criteria; therefore, it can be
considered the preferred optimization criterion, since the "rights" of none of them can be infringed in
favor of other criteria [44, 55].
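    The dominance convention of this paragraph translates directly into code; below is a sketch over an assumed finite set of vector estimates, with all criteria minimized.

import numpy as np

def pareto_front(F):
    """Return the non-dominated rows of F. Row u dominates row v if u <= v in
    every component and u < v in at least one (all criteria minimized)."""
    F = np.asarray(F, dtype=float)
    keep = [i for i, v in enumerate(F)
            if not any(np.all(u <= v) and np.any(u < v)
                       for j, u in enumerate(F) if j != i)]
    return F[keep]

# Assumed vector estimates (f1, f2) of five alternatives.
print(pareto_front([[1, 4], [2, 2], [4, 1], [3, 3], [2, 5]]))
# keeps [1,4], [2,2], [4,1]; [3,3] is dominated by [2,2] and [2,5] by [1,4]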
    In the problem:
            x^* \to \min f_i(x), \quad i = \overline{1,k}; \quad
            g_i(x) = 0, \quad i = \overline{1,m}; \quad
            g_i(x) \le 0, \quad i = \overline{m+1,l}                                               (14)
    let us introduce additional constraints:
            f_i(x) \le f_i(x_d), \quad i = \overline{1,s}; \quad
            f_i(x) < f_i(x_d), \quad i = \overline{s+1,k}                                          (15)
    Thus, we come to the problem:
            x^* \to \min f_i(x), \quad i = \overline{1,k}; \quad
            g_i(x) \le 0, \quad i = \overline{1,l}; \quad
            f_i(x) - f_i(x_d) \le 0, \quad i = \overline{1,k}                                      (16)
which leads to the Lagrange problem with the Lagrange function [48]:
            L(x, \lambda) = \lambda_0 f(x) - \lambda g - \lambda (f(x) - f(x_d))                   (17)
where \lambda is a matrix of Lagrange multipliers, the coefficients of sensitivity of the objective
functions to the constraints:
            \lambda = \begin{pmatrix}
              \frac{\partial f_1}{\partial f_1} & \cdots & \frac{\partial f_1}{\partial f_k} \\
              \cdots & \cdots & \cdots \\
              \frac{\partial f_k}{\partial f_1} & \cdots & \frac{\partial f_k}{\partial f_k}
            \end{pmatrix} = \begin{pmatrix}
              1 & \lambda_{12} & \cdots & \lambda_{1k} \\
              \cdots & 1 & \cdots & \cdots \\
              \lambda_{k1} & \lambda_{k2} & \cdots & 1
            \end{pmatrix}                                                                          (18)
    Therewith, the condition \lambda_{ij} = 0 determines the location of an optimum inside the
admissible area. In the given case the admissible points are considered to be the points at which the
conditions hold as strict inequalities. The fulfillment of an inequality can be described as the
fulfillment of a preference relation. The search over admissible points leads to a local optimum,
which is called an efficient solution, or a Pareto-optimal solution. If one or several constraints cannot
be turned into strict ones, the solution is called weakly efficient. As a matter of fact, the Pareto
method comes down to the procedure of choosing a solution with minimal mutual influence of the
particular targets, that is, to the analysis of the matrix \lambda in problem (16).
    Thus, we can conclude that modern methods of solving vector optimization problems can be
considered as variants of the solution of the Lagrange problem, which allows us to pose the problem
of rigorous justification of the optimality criterion in the vector optimization problem.

3. The statement of the investigation problems

  The purpose of the investigation is the analysis and refinement of the optimality criterion proposed
in this article for the convex vector problem [47].
4. The subject matter of the investigation problems
    In accordance with the formulated purpose, the following is considered: the problems of optimal
management of organizational systems are problems of designing a system that provides, for a
control object or a control process, a control law or a control sequence of actions delivering the
maximum or minimum of a given set of system quality criteria.
    We consider the simple problem of vector optimization [46]:
                                            𝑥 ∗ → min𝑓(𝑥)                                         (19)
where dimx=n, dimf=m.
    To be specific, we will talk about the minimum of the function, as is generally accepted. In this
case the stationarity condition, as a necessary condition, has the form of the matrix equation:
            \nabla f(x) = \begin{bmatrix} \nabla f_1(x) \\ \vdots \\ \nabla f_m(x) \end{bmatrix} =
            \begin{pmatrix}
              \frac{\partial f_1}{\partial x_1} & \cdots & \frac{\partial f_1}{\partial x_n} \\
              \cdots & \cdots & \cdots \\
              \frac{\partial f_m}{\partial x_1} & \cdots & \frac{\partial f_m}{\partial x_n}
            \end{pmatrix} =
            \begin{pmatrix}
              0 & \cdots & 0 \\
              \cdots & \cdots & \cdots \\
              0 & \cdots & 0
            \end{pmatrix} \Leftrightarrow
            \frac{\partial f_i}{\partial x_j} = 0, \quad (i = \overline{1,m}, \ j = \overline{1,n})  (20)
    As shown above, to find the minimum of the vector objective function it is required to introduce
m additional relations.
    We apply the standard method of constraint variation, except that the objective vector itself is
used as the constraint vector:
                                            𝑓 (𝑥) − 𝛼 = 0,                                        (21)
where the vector α is determined by the value of the objective vector at the point of minimum:
                                               𝑓 (𝑥 ∗ ) = 𝛼,                                      (22)
    In such a case we have the vector problem with an equality-type constraint, by analogy with (17):
            x^* \to \min f(x), \quad f(x) - \alpha = 0.                                            (23)
    The Lagrange function for the given problem has the following form:
                                𝐿(𝑥, 𝜆) = 𝜆0 𝑓 (𝑥) − 𝜆(𝑓(𝑥) − 𝛼 ).                                (24)
    In this case the necessary conditions of the minimum take the form [56]:
            \nabla L(x^*, \lambda^*) = 0, \quad f(x^*) - \alpha^* = 0                              (25)
    The obtained set of equations cannot be solved, since the number of variables does not
correspond to the number of links.
    As an additional link we introduce the assumption that all particular target functions are convex
[57]. In this case, owing to duality properties, we obtain:
            x^* \to \min_{\lambda = \lambda^*} L(x, \lambda), \quad
            \lambda^* \to \max_{x = x^*} L(x, \lambda)                                             (26)
    Due to the stationarity of the problem and the use of the first condition from (26) we have:
            x^* \to \min_{\lambda = \lambda^*} \{\lambda_0 f(x) - \lambda (f(x) - \alpha)\} = \mathrm{const}  (27)
    Since the target vector f is equal to the vector of optimal values at the point of minimum, the
expression can be written in the form:
                               𝜆0 𝑓 (𝑥 ∗ ) − 𝜆∗ (𝑓(𝑥 ∗ ) − 𝛼 ∗ ) = 𝜆0 𝛼 ∗                         (28)
    Thus, the constraints take the form of complementary slackness conditions [58, 59]:
                                     (𝜆0 − 𝜆)(𝑓(𝑥) − 𝛼 ) = 0                                      (29)
    Considering the minimality of the mutual influence of the target functions, the condition of
complementary slackness appears quite naturally in the Lagrange function.
    Essentially the same requirements arise under Pareto optimality.
    Hence, the Lagrange function with the constraints in the form of slackness conditions can be
written as:
            x^* \to \min_{\lambda = \lambda^*} \{\lambda_0 f(x) - (\lambda - \lambda_0)(f(x) - \alpha)\} = 0  (30)
    Consequently, according to Lagrange, in the stationary convex problem of vector optimization
the necessary condition of optimality has the form:
            \nabla L(x^*, \lambda^*) = 0; \quad f(x^*) - \alpha^* = 0; \quad
            \lambda_0 f(x^*) - (\lambda^* - \lambda_0)(f(x^*) - \alpha^*) = 0                      (31)
    Introducing the matrix of reduced Lagrange multipliers \lambda^{**}:
            \lambda^{**} = \begin{pmatrix}
              1 & \lambda_{12} & \lambda_{13} & \cdots & \lambda_{1m} \\
              \lambda_{21} & 1 & \lambda_{23} & \cdots & \lambda_{2m} \\
              \cdots & \cdots & \cdots & \cdots & \cdots \\
              \lambda_{m1} & \lambda_{m2} & \lambda_{m3} & \cdots & 1
            \end{pmatrix}                                                                          (32)
we can write down the optimality condition in the form:
            \nabla L(x^*, \lambda^*) = 0; \quad f(x^*) - \alpha^* = 0; \quad
            \lambda^{**} (f(x^*) - \alpha^*) = 0                                                   (33)
    In effect, the additional condition here is the complementary slackness condition.

5. Practical part
   Let us consider the well-known example [60].
   The target function vector:
            f(x) = \begin{bmatrix} x^2 \\ (x-1)^2 \end{bmatrix}                                    (34)
   Using the condition (20) we obtain:
            \nabla f(x) = \begin{bmatrix} \frac{d(x^2)}{dx} \\ \frac{d((x-1)^2)}{dx} \end{bmatrix} =
            \begin{bmatrix} 0 \\ 0 \end{bmatrix}
which provides two particular minimum points, x_1^* = 0 and x_2^* = 1 (Fig. 1). To solve the
problem with a vector target function we introduce the constraints:
            f_1(x) - \alpha_1 = 0, \quad f_2(x) - \alpha_2 = 0                                     (35)
   The Lagrange functions of the problem have the following form:
            L_1(x, \lambda_{12}) = x^2 - \lambda_{12}((x-1)^2 - \alpha_2); \quad
            L_2(x, \lambda_{21}) = (x-1)^2 - \lambda_{21}(x^2 - \alpha_1)                          (36)
   and the necessary conditions of the optimum are:
            \frac{\partial L_1}{\partial x} = 2x - 2\lambda_{12}(x-1) = 0; \quad
            \frac{\partial L_2}{\partial x} = 2(x-1) - 2\lambda_{21}x = 0;
            x^2 - \alpha_1 = 0; \quad (x-1)^2 - \alpha_2 = 0;
            x^2 - \lambda_{12}((x-1)^2 - \alpha_2) = 0; \quad
            (x-1)^2 - \lambda_{21}(x^2 - \alpha_1) = 0                                             (37)
   From the stationarity conditions, taking into account the links (35), we have:
            2x - 2\lambda_{12}(x-1) = 0; \quad 2(x-1) - 2\lambda_{21}x = 0.                        (38)
   We obtain \lambda_{12} = 1/\lambda_{21}. Since the optimum point is unique, from the constraints
we have \alpha_1 - \alpha_2 = 2x - 1. Consequently, we can write:
            x^2 - \frac{x}{x-1}\left((x-1)^2 - \alpha\right) = 0; \quad
            (x-1)^2 - \frac{x-1}{x}\left(x^2 - 2x + 1 - \alpha\right) = 0,                         (39)
   from which we obtain the connection:
            (x-1)^2 = x^2                                                                          (40)
   Consequently, the optimum is reached at the point x^* = 1/2, and the target functions take the
value \alpha^* = 1/4 at the optimum (Fig. 1).




Figure 1: Graphic Solution of the Vector Optimization Problem

   Hence, the assumption of convexity of the particular target functions lets us obtain the optimal
solution of the problem without invoking compromises or expert estimates.
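   As a cross-check (our sketch, not part of the original paper), the conditions (37)-(38) can be verified numerically at x* = 1/2:

import numpy as np

f1 = lambda x: x**2            # target function vector (34)
f2 = lambda x: (x - 1)**2

x_star = 0.5
lam12 = x_star / (x_star - 1.0)        # from 2x - 2*lam12*(x - 1) = 0 in (38)
lam21 = (x_star - 1.0) / x_star        # from 2(x - 1) - 2*lam21*x = 0 in (38)

print(lam12 * lam21)                   # 1.0, i.e. lam12 = 1/lam21, as stated in the text
print(f1(x_star), f2(x_star))          # 0.25 0.25: both components reach alpha* = 1/4

# The link (40), (x - 1)^2 = x^2, holds only at x = 1/2:
x = np.linspace(-1.0, 2.0, 300001)
print(x[np.argmin(np.abs((x - 1)**2 - x**2))])   # 0.5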

6. Conclusions
   The analysis and research carried out made it possible to adopt the assumption of convexity of the
particular target functions. This assumption, in turn, made it possible to obtain optimal solutions to
the problem without involving compromises and expert assessments.
   All of the above led to the following important conclusions, which can be used in the search for
optimal solutions when building management systems for organizational systems.
   It is shown that the methods used for solving the vector optimization problem can be reduced to
the necessary conditions of the Lagrange problem.
   It is shown that the vector optimization problem has a solution based on duality, i.e., an optimal
one according to Lagrange, in the case of convexity of the particular target functions.
   The optimality condition contains the complementary slackness condition.

7. References
[1] S. V. Mikoni, Teoriya prinyatiya upravlencheskih reshenij: uchebnoe posobie, Lan', Sankt-
    Peterburg, 2015.
[2] L.V. Kantorovich, V.I. Krylov, Priblizhennye metody vysshego analiza: monografiya,
    Gosudarstvennoe izdatel'stvo tekhniko-teoreticheskoj literatury, Leningrad, Moskva, 1950.
[3] A.N. Kolmogorov, Vvedenie v matematicheskuyu logiku, Izdatel'stvo Moskovskogo universiteta,
    Moskva, 1982.
[4] E.S. Ventcel', Issledovanie operacij: zadachi, principy, metodologiya, 2-e izd., Nauka, Moskva,
     1988.
[5] R. Bellman, R. Kalaba, Dinamicheskoe programmirovanie i sovremennaya teoriya upravleniya,
     Per. s angl., Nauka, Moskva, 1969.
[6] S. Mikoni, Theoretical justification of systematization of methods of multicriteria optimization on
     a finite set of alternatives, in: Sbornik trudov Sankt-Peterburgskoj mezhdunarodnoj konferencii
     Regional'naya informatika i informacionnaya bezopasnost', SPOISU, Sankt-Peterburg, 2015, pp.
     48-52.
[7] S.I. Rodzin, Teoriya prinyatiya reshenij: lekcii i praktikum: Uchebnoe posobie, TTI YUFU,
     Taganrog, 2010.
[8] N. Nil'son, Iskusstvennyj intellekt. Metody poiska reshenij, Perevod s angl. V. Stefanyuk, Mir,
     Moskva, 1973.
[9] N. Nil'son, Principy iskusstvennogo intellekta, Perevod s anglijskogo, Radio i svyaz', Moskva,
     1985.
[10] G. Sajmon, Nauki ob iskusstvennom, Editorial URSS, Moskva, 2009.
[11] L. Zade, Ponyatie lingvisticheskoj peremennoj i ego primenenie k prinyatiyu priblizhennyh
     reshenij, Mir, Moskva, 1976.
[12] M.G. Gaaze-Rapoport, D.A. Pospelov, Ot amyoby do robota: modeli povedeniya, Nauka,
     Moskva, 1987.
[13] D.A. Pospelov, Ekspertnye sistemy: sostoyanie i perspektivy, Nauka, Moskva, 1989.
[14] N.G. Zagorujko, Kognitivnyj analiz dannyh, Geo, Novosibirsk, 2013
[15] M.F. Bondarenko, YU.P. SHabanov-Kushnarenko, Mozgopodobnye struktury: Spravochnoe
     posobie, Naukova dumka, Kiev, 2011.
[16] V. D. Nogin, Mnozhestvo i princip Pareto, Izdatel'sko-poligraficheskaya associaciya vysshih
     uchebnyh zavedenij, Sankt-Peterburg, 2020.
[17] V. D. Nogin, Suzhenie mnozhestva Pareto: aksiomaticheskij podhod, FIZMATLIT, Moskva,
     2016.
[18] L.A. Petrosyan, N.A. Zenkevich, E.V. SHevkoplyas, Teoriya igr, BHV-Peterburg, Sankt-
     Peterburg, 2012.
[19] V.V. Podinovskij, V.D. Nogin, Pareto-optimal'nye resheniya mnogokriterial'nyh zadach, Nauka,
     Moskva, 1982.
[20] O.I. Larichev, Verbal'nyj analiz reshenij, Nauka, Moskva, 2006.
[21] A.B. Petrovskij, Gruppovoj verbal'nyj analiz reshenij, Nauka, Moskva, 2019.
[22] R.L. Kini, H. Rajfa, Prinyatie reshenij pri mnogih kriteriyah: predpochteniya i zameshcheniya,
     Radio i svyaz', Moskva, 1981.
[23] R.L. Kini, Razmeshchenie energeticheskih ob"ektov: vybor reshenij, Energoatom, Moskva,
     1983.
[24] S.V. Emel'yanov, Teoriya sistem s peremennoj strukturoj, Nauka, Moskva, 1970.
[25] P. Fishbern, Teoriya poleznosti dlya prinyatiya reshenij, Nauka, Moskva, 1978.
[26] N. V. Hovanov, Matematicheskie osnovy teorii shkal izmereniya kachestva, LGU, Leningrad,
     1982.
[27] N. V. Hovanov, Analiz i sintez pokazatelej pri informacionnom deficite, SPbGU, Sankt-
     Peterburg, 1996.
[28] N. V. Hovanov, Matematicheskie modeli riska i neopredelennosti, SPbGU, Sankt-Peterburg,
     1998.
[29] E.G. Petrov, M.V. Novozhilova, I.V. Grebennik, N.A. Sokolova, Metody i sredstva prinyatiya
     reshenij v social'no-ekonomicheskih i tekhnicheskih sistemah, Oldi-plyus, Herson, 2003.
[30] B.A. Esipov, Optimization methods and operations research, Samara state aerospace University,
     Samara, 2007.
[31] I. G. Chernorutsky, Optimization methods in control theory, Peter, St. Petersburg, 2004.
[32] Yu.I. Larionov, L.S. Marchenko and M.A. Khazhmuradov, After the operation. Part II, Inzhek,
     Kharkiv, 2005.
[33] K. Shennon, Raboty po teorii informatsii i kibernetike, Inostrannaia literatura, Moscow, 1963.
[34] R.V.L. Hartley, Transmission of information, Bell System Technical Journal 7 (1928)
     535-563.
[35] A.N. Kolmogorov, Osnovnye poniatia teorii veroiatnostey, volume of Teoria veroiatnostei i
     matematicheskaia statistika, Moscow, 1977.
[36] A.N. Kolmogorov, Teoria informatsii i teoria algoritmov, Nauka, Moscow, 1987.
[37] V. Feller, Vvedenie v teoriu veroiatnostei i ee prilozhenia, Mir, Moscow, 1964.
[38] A.M. Yaglom, Veroiatnost i informatsia, KomKniga, Moscow, 2007.
[39] V.D. Goppa, Vvedenie v algebraicheskuiu teoriu informatsii, Nauka Fizmatlit, Moscow, 1995.
[40] F.E. Temnikov, Informatika, Izvestia vuzov, Vol.11: Elektrotehnika, 1963.
[41] F.E. Temnikov, Teoreticheskie osnovy informatsionnoy tekhniki, Energia, Moscow, 1971.
[42] V.A. Kotelnikov, O propusknoy sposobnosti efira i provoloki v elektrosviazi, Vsesoiuznyi
     energeticheskii komitet, MGU, Moscow, 1933.
[43] Stuart J. Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 2nd ed.,
     Prentice Hall, Upper Saddle River, New Jersey, 2004.
[44] S. V. Mikoni, Multicriteria choice on a finite set of alternatives, Lan, St. Petersburg, 2009.
[45] V.M. Glushkov, Fundamentals of paper informatics, Fizmatlit, Moscow, 1982.
[46] V.V. Podinovskiy, Pareto-Optimal Solutions of Multicriteria Problems, Fizmatlit, Moscow,
     2007.
[47] V.D. Nogin, Decision – Making in the Multicriteria Environment: Quantitative Approach,
     Fizmatlit, Moscow, 2005.
[48] A.P. Karpenko, A.S. Semenikhin and E.V. Mitina, Population Methods in Approximation of
     Pareto Set in Multicriteria Optimization Problem: Overview, Nauka i Obrazovanie, FGBOU
     VPO “MSTU named after N.E. Bauman”, 2012.
[49] D.E. SHaposhnikov, Vybor variantov v proektirovanii apparatno-programmnyh kompleksov,
     Nizhegorodskij Gosudarstvennyj Universitet Im. N.I. Lobachevskogo, Nizhnij Novgorod, 2020.
[50] B. A. Murtagh, Sovremennoe linejnoe programmirovanie: Teoriya i praktika, Perevod s
     anglijskogo N.K. Burkovskij, Mir, Moskva, 1984.
[51] R. Daffin i E. Piterson, Geometricheskoe programmirovanie, Perevod s angl. D. A. Babaeva,
     Mir, Moskva, 1972.
[52] K. V. Semenihin, Metody minimaksnogo ocenivaniya v mnogomernyh linejnyh modelyah
     nablyudeniya pri nalichii geometricheskih ogranichenij na momentnye harakteristiki avtoreferat
     doktora fiziko-matematicheskih nauk, Moskovskij aviacionnyj institut (gosudarstvennyj
     tekhnicheskij universitet) «MAI», Moskva, 2010.
[53] M.L. Lidov, Minimaksnye metody ocenivaniya, Preprinty IPM im. M.V.Keldysha, Moskva,
     2010.
[54] D. YU. Muromcev i V. N. SHamkin, metody optimizacii i prinyatie proektnyh reshenij, FGBOU
     VPO «TGTU», Tambov, 2015.
[55] T. Saati, Prinyatie reshenij: Metod analiza ierarhij, Perevod s anglijskogo R.G. Vachnadze,
     Radio i svyaz', Moskva, 1993.
[56] S.V. Lutmanov, Course of Lectures in Optimization Methods, RDE “Regular and Chaotic
     Dynamics”, Izhevsk, 2001.
[57] A.V. Attetkov, S.V. Galkin and V.S. Zarubin, Optimization Methods, Texbook for Technical
     Colleges, PH MSTU, Moscow, 2001.
[58] A.V. Panteleev and T.A. Letova, Optimization Methods In Examples and Problems: Study
     Guide, Vyssh. Shk., Moscow, 2002.
[59] B.N. Pshenichniy, The Convex Analysis and Extremum Problems, volume of Nonlinear Analysis
     and Its Applications, Nauka, Moscow, 1980.
[60] I.E. Shupik, Models and Methods of Informational Technologies in Correction of Physical
     Condition of Students, Candidate of Technical Sciences, Kherson National Technical University
     (KNTU), Kherson, Ukraine.