<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Optimal Planning in Systems Consisting of Rational Agents</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Igor</forename><surname>Sinitsyn</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Institute of Software Systems</orgName>
								<orgName type="institution">National Academy of Sciences of Ukraine</orgName>
								<address>
									<addrLine>Glushkov prosp. 40, build. 5</addrLine>
									<postCode>03187</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anatoliy</forename><surname>Doroshenko</surname></persName>
							<email>a-y-doroshenko@ukr.net</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Software Systems</orgName>
								<orgName type="institution">National Academy of Sciences of Ukraine</orgName>
								<address>
									<addrLine>Glushkov prosp. 40, build. 5</addrLine>
									<postCode>03187</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Serhii</forename><surname>Pashko</surname></persName>
							<email>pashko1955@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="department">Institute of Software Systems</orgName>
								<orgName type="institution">National Academy of Sciences of Ukraine</orgName>
								<address>
									<addrLine>Glushkov prosp. 40, build. 5</addrLine>
									<postCode>03187</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Optimal Planning in Systems Consisting of Rational Agents</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">ADC59311113EB5F94DDE0494CCBDFB30</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Method</term>
					<term>planning</term>
					<term>rational agent</term>
					<term>system</term>
					<term>multi-criteria optimization</term>
					<term>neural network</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The paper is devoted to mathematical methods of planning in systems consisting of rational agents. An agent is an autonomous object that has sources of information about the environment and influences this environment. A rational agent is an agent that has a goal and uses optimal behavioral strategies to achieve it. It is assumed that there is a utility function, defined on the set of possible sequences of the agent's actions and taking values in the set of real numbers. The goal of a rational agent is to maximize the utility function. If rational agents form a system, they have a common goal and act in an optimal way to achieve it. Agents use the optimal solution of the optimization problem that corresponds to the goal of the system. A linear programming problem is considered in which the number of product sets produced by the system is maximized. To solve the nonlinear problem of optimizing the production plan, the conditional gradient method is used, which at each iteration computes an a posteriori estimate of the error of the solution and can stop the calculation process once the required accuracy is reached. Since the rational agents that are part of the system can have separate optimality criteria, multi-criteria optimization problems appear. The article discusses methods for solving such problems, among them a human-machine procedure that is based on the conditional gradient method and at each iteration uses information from the decision maker (DM). The difficulty of this approach is that the DM is not able to make decisions many times when the nonlinear programming method performs a significant number of iterations. The article proposes to replace the DM with an artificial neural network. Nonlinear and stochastic programming methods are used to find the optimal parameters of this network.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, an agent-oriented approach has been successfully developed within the framework of the theory of artificial intelligence. Agents are considered rational or intelligent; they can form multi-agent systems and act to achieve a common goal. Agent systems are studied from different points of view: from the standpoint of the theory of conflict processes, information theory, social psychology, software engineering, and concepts of electronics. The present paper considers the aspect of optimizing the actions of agents as part of a multi-agent system.</p><p>The paper <ref type="bibr" target="#b0">[1]</ref> shows that the main types of activities related to the management of systems of rational agents and of individual agents are cooperation (the formation of agent systems), planning and coordination of agent actions, system placement, and recognition. Problems of cooperation, planning, and coordination of agents' actions are studied in detail in the monograph <ref type="bibr" target="#b1">[2]</ref>. Recognition problems are studied in <ref type="bibr">[3 - 6]</ref>, and placement problems in <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>. In <ref type="bibr" target="#b8">[9]</ref>, the concept of a rational agent is defined and used.</p><p>To solve optimization planning problems, that is, to find optimal action plans in multi-agent systems consisting of rational agents, mathematical programming methods are used, in particular methods of linear, nonlinear, stochastic, and discrete programming <ref type="bibr">[10 - 14]</ref>. Multi-criteria optimization problems and the corresponding mathematical methods are important for multi-agent systems <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref>. 
In paper <ref type="bibr" target="#b15">[16]</ref>, human-machine procedures are used to solve multi-criteria problems; in them, a nonlinear programming method is combined with the work of a DM. The difficulty of this approach is that the DM is not able to make decisions many times when the nonlinear programming method performs a significant number of iterations.</p><p>This paper considers the problem of optimal planning of the actions of a multi-agent system consisting of rational agents. Planning optimization problems and the corresponding mathematical methods for their solution are described. For multi-criteria optimization problems, it is proposed to use an artificial neural network instead of a DM, and methods for determining the optimal values of the parameters of such a network are proposed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Agents and multi-agent systems</head><p>The concept of a rational agent is widely used in the theory of artificial intelligence, economics, and game theory and has a unifying meaning, making it possible to define the main tasks facing agents and multi-agent systems, the relationships between these tasks, and so on. In <ref type="bibr" target="#b8">[9]</ref>, an autonomous object that perceives the environment with the help of sensors and influences this environment with the help of executive mechanisms is considered an agent. The agent's program runs on a computing device; it receives data from sensors, recognizes and analyzes the data, calculates the optimal strategy of the agent's behavior, and issues commands to executive mechanisms. An agent can be a computer program, a robot, or a person. The following definition of a rational agent is given in <ref type="bibr" target="#b8">[9]</ref>: "For each possible perceptual sequence, a rational agent must choose the actions that are expected to maximize its performance, given the facts provided by the perceptual sequence and all the built-in knowledge the agent has".</p><p>Usually, there are different permissible sequences of actions of an agent that lead to a goal. In this case, it is appropriate to consider that there is a utility function defined on the set of action sequences of an agent (or of a system of agents), which takes values from the set of real numbers. A rational agent is an agent that, to achieve a goal, uses the optimal behavior strategy, maximizing the utility function. A multi-agent system is a system consisting of rational agents that have a common goal and use an optimal strategy to achieve it. It can be assumed that an optimization problem is formulated for the system and the agents form a behavior strategy using the optimal solution of this problem. 
Let us give examples of systems consisting of rational agents:</p><p>• the system of state administration bodies, enterprises, institutions, and organizations of the country's defense-industrial complex;</p><p>• the system of economies of different countries developing in cooperation;</p><p>• a group of drones chasing a target.</p><p>Note that a system of agents may include a central agent that performs some of the management functions.</p><p>In the literature, there are definitions of agents of other types that differ from rational agents. In <ref type="bibr" target="#b1">[2]</ref>, the concept of an intelligent agent is defined, which should have the following properties:</p><p>• reactivity, i.e. the ability to perceive the state of the surrounding environment and act accordingly;</p><p>• proactivity, i.e. the ability to take the initiative;</p><p>• social activity, i.e. the ability to interact with other agents to achieve a goal.</p><p>Agents are considered autonomous. Autonomy means that the agent's behavior is determined not only by the environment, but also, to a large extent, by the properties of the agent.</p><p>The definition of an agent is called "weak" if it contains only the described features, that is, autonomy, reactivity, proactivity, and social activity <ref type="bibr" target="#b16">[17]</ref>. If, in addition to the above features, the definition of the agent contains additional ones, then the definition is called "strong". Additional properties may include:</p><p>• desires, i.e. situations desirable for the agent;</p><p>• intentions, i.e. what needs to be done to satisfy desires or to fulfill obligations to other agents;</p><p>• goals, i.e. a set of intermediate and final goals of the agent;</p><p>• obligations to other agents;</p><p>• knowledge, i.e. the part of knowledge that does not change during the agent's existence;</p><p>• beliefs, i.e. 
the part of the agent's knowledge that can change.</p><p>We assume that agents have the above and possibly other properties to the extent necessary to solve problems and use the resulting solutions.</p><p>Planning is the development of a method of action of a multi-agent system and of individual agents in the future depending on the situations that may arise, the choice of an effective method of action, and the optimal allocation of resources. Centralized planning, distributed development of a centralized plan, and distributed development of a distributed plan are possible <ref type="bibr" target="#b17">[18]</ref>. Centralized planning is performed by a dedicated agent that has the necessary resources and information.</p><p>Distributed development of a distributed plan means that there is no dedicated agent; agents build individual plans while interacting with each other. In this case, it is considered that the agents have limited computing capabilities, information about the local environment, and limited communication capabilities. Examples include groups of autonomous vehicles, mobile sensor networks, routing in data networks, transportation systems, multiprocessor computing, and energy systems <ref type="bibr" target="#b18">[19]</ref>.</p><p>Centralized planning is discussed next.</p></div>
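The definition of a rational agent above can be sketched in a few lines of code: enumerate the admissible action sequences and pick the one that maximizes the utility function. The action alphabet, the horizon, and the utility function below are invented purely for illustration and do not come from the paper.

```python
from itertools import product

ACTIONS = ["wait", "move", "collect"]   # assumed action alphabet
HORIZON = 3                             # assumed planning horizon

def utility(seq):
    # Assumed toy utility: "move" costs 1, "collect" is worth 2 but only
    # succeeds after the agent has moved at least once, "wait" is neutral.
    score, moved = 0.0, False
    for a in seq:
        if a == "move":
            score -= 1.0
            moved = True
        elif a == "collect" and moved:
            score += 2.0
    return score

def rational_plan():
    """Enumerate all action sequences and return the utility-maximizing one."""
    return max(product(ACTIONS, repeat=HORIZON), key=utility)

best = rational_plan()
print(best, utility(best))  # ('move', 'collect', 'collect') with utility 3.0
```

For realistic horizons this exhaustive enumeration is intractable, which is exactly why the paper turns to mathematical programming formulations of planning in the next section.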
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Optimization planning models in multi-agent systems and corresponding numerical methods</head><p>Consider optimization problems of planning in multi-agent systems. Such problems can be applied, in particular, in the process of planning the activities of the defense-industrial complex of Ukraine. The defense-industrial complex is a system consisting of interrelated research centers, state organizations, and industrial enterprises that produce goods for military purposes. We consider each element of this system to be a rational agent. Suppose there is a dedicated agent (for example, a ministry) that has the information needed for planning.</p><p>Let the system contain m rational agents that produce several types of products. Some agents may not produce final products, performing managerial functions instead. Let n be the number of types of products, let x_{ij} denote the value of the j-th product produced by the i-th agent, and let α_j be the given specific weight of the j-th product in the set of final products, x = (x_{11}, ..., x_{mn}). The number of manufactured sets of final products is maximized:</p><formula xml:id="formula_0">\min_{1 \le j \le n} \frac{1}{\alpha_j} \sum_{i=1}^{m} x_{ij} \to \max_{x}, \quad (1)
\quad \text{subject to} \quad
\sum_{j=1}^{n} a_{ijk} x_{ij} \le b_{ik}, \quad i = 1, ..., m, \; k = 1, ..., l, \quad (2)
x_{ij}^{\min} \le x_{ij} \le x_{ij}^{\max}, \quad i = 1, ..., m, \; j = 1, ..., n. \quad (3)</formula><p>Here a_{ijk} is the cost of the k-th resource for the production of the j-th product of unit value at the i-th enterprise, b_{ik} is the stock of the k-th resource at the i-th enterprise, l is the number of types of resources, and x_{ij}^{\min}, x_{ij}^{\max} are the minimum and maximum permissible quantities of the j-th product produced at the i-th enterprise,</p><formula xml:id="formula_1">0 \le x_{ij}^{\min} \le x_{ij}^{\max} &lt; \infty.</formula><p>Let X be the set of vectors x that satisfy conditions (2), (3). 
Using an additional variable y, we write problem (1) - (3) as follows:</p><formula xml:id="formula_2">y \to \max_{x, \, y}, \quad (4)
\sum_{i=1}^{m} x_{ij} \ge \alpha_j y, \quad j = 1, ..., n, \quad (5)
x \in X. \quad (6)</formula><p>Problem (4) - (6) is a linear programming problem and is solved by linear programming methods. Note that the numbers appearing in problems (1) - (3) and (4) - (6) represent part of the agents' knowledge and beliefs, while the objective function expresses the intentions of the system.</p><p>Suppose that in problem (1) - (3), instead of the objective function (1), some objective function f(x) is used. It is necessary to solve the problem</p><formula xml:id="formula_3">f(x) \to \max, \quad x \in X. \quad (7)</formula><p>We consider the function f(x) to be concave (that is, convex upwards) on the set X and smooth on some neighborhood of the set X. Smoothness means that the function is defined and has continuous partial derivatives with respect to all variables; an open set containing X is called a neighborhood of X. To solve this problem, it is advisable to use the conditional gradient method (Frank-Wolfe method) <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>, which consists of the following. Choose x^0 \in X. At the s-th step (s = 0, 1, 2, ...) we calculate</p><formula xml:id="formula_4">\bar{x}^s = \arg\max_{x \in X} \langle \nabla f(x^s), x \rangle, \quad (8)
t_s = \arg\max_{0 \le t \le 1} f(x^s + t(\bar{x}^s - x^s)), <label>(9)</label>
x^{s+1} = x^s + t_s (\bar{x}^s - x^s). \quad (10)</formula><p>We denote by x^* the optimal solution of problem (7). The following inequality holds:
</p><formula xml:id="formula_5">f(x^*) - f(x^s) \le \langle \nabla f(x^s), \; \bar{x}^s - x^s \rangle. \quad (11)</formula><p>Indeed, the concavity of the function f(x) implies the inequality</p><formula xml:id="formula_6">f(x^*) - f(x^s) \le \langle \nabla f(x^s), \; x^* - x^s \rangle, \qquad \text{and obviously} \qquad \langle \nabla f(x^s), x^* \rangle \le \langle \nabla f(x^s), \bar{x}^s \rangle.</formula><p>The last two inequalities imply (11).</p><p>The limit points of the sequence {x^s} are maximum points of the function f(x) on the set X. In [10] it is proved that</p><formula xml:id="formula_7">f(x^*) - f(x^s) = O(1/s); \quad (12)</formula><p>this estimate cannot be improved. It follows from (12) and the results of <ref type="bibr" target="#b11">[12]</ref> that the conditional gradient method is not optimal with respect to the rate of convergence. It is nevertheless advisable to use this method for solving problem (7) because of the possibility of applying estimate (11), which allows stopping the computational process after the required accuracy is reached, and because of its simplicity and satisfactory speed of convergence in practice. The conditional gradient method is used in the next section as a component of multi-criteria optimization.</p><p>If the function f is not smooth, or if instead of the gradient (generalized gradient) of the function f only its stochastic analogue is known, non-smooth optimization and stochastic programming methods <ref type="bibr">[11 - 13]</ref> are used to solve problem (7). 
If, instead of problem (7), a discrete programming problem is considered, branch-and-bound methods, heuristic algorithms, random search methods with local optimization, and polynomial approximation schemes can be used <ref type="bibr" target="#b13">[14]</ref>.</p><p>The system of economies of different countries interacting with each other through the export and import of goods, services, and capital can be considered a system of rational agents, provided that the governing bodies of the countries make rational economic decisions under market conditions. There are problems of optimizing the interaction of an individual economy with other economies and with the world market. The corresponding mathematical models and numerical methods are considered in <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>.</p></div>
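The conditional gradient scheme (8) - (10), together with the a posteriori stopping rule based on estimate (11), can be sketched as follows. The concave quadratic objective and the box-shaped feasible set X are assumed toy choices (a box makes the linear subproblem (8) solvable in closed form at a vertex); they are not an instance from the paper.

```python
# Conditional gradient (Frank-Wolfe) method for maximizing a concave function
# over a box, with the a posteriori stopping rule from estimate (11).
# Assumed toy objective: f(x) = -(x1 - 1)^2 - (x2 - 2)^2, X = [0,3] x [0,3].

LO, HI = [0.0, 0.0], [3.0, 3.0]  # bounds of the box X

def grad(x):
    # Gradient of f(x) = -(x1 - 1)^2 - (x2 - 2)^2.
    return [-2.0 * (x[0] - 1.0), -2.0 * (x[1] - 2.0)]

def fw_maximize(x, eps=1e-6, max_iter=10000):
    for _ in range(max_iter):
        g = grad(x)
        # Step (8): over a box, <g, x> is maximized at a vertex.
        v = [HI[i] if g[i] > 0 else LO[i] for i in range(len(x))]
        # Estimate (11): f(x*) - f(x^s) <= <grad f(x^s), v - x^s>.
        gap = sum(g[i] * (v[i] - x[i]) for i in range(len(x)))
        if gap <= eps:
            return x  # required accuracy reached: stop, as the text describes
        # Steps (9)-(10): for a quadratic, the exact line search is closed-form.
        t = min(1.0, gap / (2.0 * sum((v[i] - x[i]) ** 2 for i in range(len(x)))))
        x = [x[i] + t * (v[i] - x[i]) for i in range(len(x))]
    return x

print(fw_maximize([0.0, 0.0]))  # approaches the maximizer (1, 2)
```

The loop terminates as soon as the gap ⟨∇f(x^s), x̄^s − x^s⟩ from (11) falls below the tolerance, which bounds the remaining error f(x*) − f(x^s) without knowing x*.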
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Multi-criteria optimization and human-machine procedures</head><p>In addition to a common goal, each rational agent of a multi-agent system can have its own optimality criterion. The problem is to find a quality solution that satisfies all agents. Let the extremal problem for the i-th agent have the form</p><formula xml:id="formula_8">f_i(x) \to \max, \quad x \in X, \quad (13)</formula><p>where i \in I, I = \{1, ..., m\}, x = (x_1, ..., x_n), and X is a convex closed bounded set of n-dimensional Euclidean space. We assume that the criteria are non-negative and normalized, that is, each criterion f_i(x) is replaced by f_i(x)/f_i^*, where</p><formula xml:id="formula_9">f_i^* = \max_{x \in X} f_i(x).</formula><p>To solve a multi-criteria extremal problem, the convolution method can be applied, replacing the set of criteria (13) with a single criterion. Such a replacement can be performed in several ways <ref type="bibr" target="#b14">[15]</ref>: the minimum of the values α_i f_i(x) is maximized, the weighted sum of the values f_i(x) is maximized, or the weighted product of the values f_i(x) is maximized:</p><formula xml:id="formula_10">\min_{i \in I} \alpha_i f_i(x) \to \max_{x \in X}; \qquad \sum_{i=1}^{m} \alpha_i f_i(x) \to \max_{x \in X}; \qquad \prod_{i=1}^{m} \alpha_i f_i(x) \to \max_{x \in X}.</formula><p>Here α_i are given non-negative numbers,</p><formula xml:id="formula_11">\sum_{i=1}^{m} \alpha_i = 1.</formula><p>The method of the main criterion, the method of goal programming, and the method of concessions are described in <ref type="bibr" target="#b14">[15]</ref>. The main criterion method consists of maximizing one selected criterion while all the others are considered bounded from below:</p><formula xml:id="formula_12">f_k(x) \to \max, \quad f_i(x) \ge f_i^{\min}, \; i \in I \setminus \{k\}, \quad x \in X.</formula><p>Here f_i^{\min} are given numbers.</p></div>
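The convolution rules above are easy to compare on a toy instance. The two quadratic criteria, the discretized set X, and the weights below are assumed for illustration only; with equal weights, both the weighted-minimum and the weighted-sum convolutions pick the symmetric compromise between the two agents' individual optima.

```python
# Comparing two convolution (scalarization) rules from [15] on an assumed
# toy instance: two concave, non-negative criteria on X = [0, 4].

def f1(x):
    return 9.0 - (x - 1.0) ** 2  # assumed criterion of agent 1 (peak at x = 1)

def f2(x):
    return 9.0 - (x - 3.0) ** 2  # assumed criterion of agent 2 (peak at x = 3)

X = [i / 100.0 for i in range(401)]   # discretized X = [0, 4]
f1_star = max(f1(x) for x in X)       # normalization constants f_i*
f2_star = max(f2(x) for x in X)
alpha = (0.5, 0.5)                    # weights alpha_i, summing to 1

def weighted_min(x):
    # First rule: maximize the minimum of alpha_i * f_i(x) (normalized).
    return min(alpha[0] * f1(x) / f1_star, alpha[1] * f2(x) / f2_star)

def weighted_sum(x):
    # Second rule: maximize the weighted sum of normalized criteria.
    return alpha[0] * f1(x) / f1_star + alpha[1] * f2(x) / f2_star

x_min = max(X, key=weighted_min)
x_sum = max(X, key=weighted_sum)
print(x_min, x_sum)  # with equal weights both rules pick the compromise x = 2
```

Changing the weights α_i shifts the compromise toward the more important agent's optimum, which is how the weights encode relative importance of the criteria.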
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Here k \in I. The goal programming method consists of setting desired values f_1, ..., f_m of the criteria and solving the extremal problem</p><formula xml:id="formula_13">\left( \sum_{i=1}^{m} \alpha_i \left| f_i(x) - f_i \right|^p \right)^{1/p} \to \min_{x \in X}.</formula><p>Here p \ge 1 is a given number. In the concession method, the criteria are numbered in order of decreasing importance. In the first step, the first criterion is maximized:</p><formula xml:id="formula_14">f_1^* = \max_{x \in X_1} f_1(x), \quad X_1 = X.</formula><p>In the second step, the set</p><formula xml:id="formula_15">X_2 = \{ x : x \in X_1, \; f_1(x) \ge f_1^* - \Delta_1 \}</formula><p>is determined, where Δ_1 is the amount of concession for the first criterion, and the problem f_2^* = \max_{x \in X_2} f_2(x) is solved. In the third step, the set</p><formula xml:id="formula_16">X_3 = \{ x : x \in X_2, \; f_2(x) \ge f_2^* - \Delta_2 \}</formula><p>is determined, where Δ_2 is the concession amount for the second criterion, and the problem f_3^* = \max_{x \in X_3} f_3(x) is solved. The process continues until the last, m-th step.</p><p>Consider the human-machine method of multi-criteria optimization <ref type="bibr" target="#b15">[16]</ref>. This method uses a nonlinear programming method for which the directions of movement and step sizes are determined with the help of a DM. We consider the multi-criteria optimization problem in the following form:</p><formula xml:id="formula_17">f(x) = u(f_1(x), ..., f_m(x)) \to \max_{x \in X}. \quad (14)</formula><p>Here u is the utility function and X is a convex closed bounded set of n-dimensional Euclidean space. We assume that the function f(x) is concave on X. 
The following conditions <ref type="bibr" target="#b15">[16]</ref> are sufficient for the concavity of the function f on the set X:</p><p>• the function u is concave, and the functions f_i, i = 1, ..., m, are linear;</p><p>• the functions u and f_i, i = 1, ..., m, are concave, and the function u is nondecreasing in each variable.</p><p>Let us assume that the conditions are fulfilled that guarantee the existence of continuous partial derivatives of the function</p><formula xml:id="formula_18">u(f_1(x), ..., f_m(x))</formula><p>and the validity of the rule for their calculation <ref type="bibr" target="#b21">[22]</ref>. To solve problem (14), we use the conditional gradient method (8) - (10). According to the rule for calculating partial derivatives, we have</p><formula xml:id="formula_19">\nabla f(x^s) = \nabla u(f_1(x^s), ..., f_m(x^s)) = \sum_{i=1}^{m} \left( \frac{\partial u}{\partial f_i} \right)_{\!s} \nabla f_i(x^s).</formula><p>Here (∂u/∂f_i)_s is the i-th partial derivative of u calculated at the point (f_1(x^s), ..., f_m(x^s)), and the gradient ∇f_i(x^s) is calculated at the point x^s. Since in (8) the value of ⟨∇f(x^s), x⟩ can be divided by any positive number, in particular by the coefficient (∂u/∂f_1)_s, we can write (8) as follows:</p><formula xml:id="formula_20">\bar{x}^s = \arg\max_{x \in X} \left\langle \sum_{i=1}^{m} w_i^s \nabla f_i(x^s), \; x \right\rangle, <label>(15)</label></formula><p>where</p><formula xml:id="formula_21">w_i^s = \left( \frac{\partial u}{\partial f_i} \right)_{\!s} \Big/ \left( \frac{\partial u}{\partial f_1} \right)_{\!s}.</formula><p>In (15), all quantities except w_i^s are considered known. The values of w_i^s, showing the relative importance of the first and the i-th criteria, are determined with the help of the DM, after which optimization (15) is performed. In <ref type="bibr" target="#b15">[16]</ref>, two ways of determining w_i^s are proposed. Suppose the function u satisfies the conditions of the theorem on the total increment of a function of several variables <ref type="bibr" target="#b21">[22]</ref>. 
Let us assume that at the point (f_1(x^s), ..., f_m(x^s)) criterion f_1 received an increment Δ_1, criterion f_i received an increment Δ_i, and the values of the remaining criteria did not change. We have</p><formula xml:id="formula_22">\Delta u = u(f_1(x^s) + \Delta_1, f_2(x^s), ..., f_i(x^s) + \Delta_i, ..., f_m(x^s)) - u(f_1(x^s), ..., f_m(x^s))</formula><formula xml:id="formula_23">= \left( \frac{\partial u}{\partial f_1} \right)_{\!s} \Delta_1 + \left( \frac{\partial u}{\partial f_i} \right)_{\!s} \Delta_i + o\!\left( \sqrt{\Delta_1^2 + \Delta_i^2} \right).</formula><p>Let the DM choose the values Δ_1, Δ_i so that Δu = 0. In this case, we have approximately</p><formula xml:id="formula_24">\left( \frac{\partial u}{\partial f_1} \right)_{\!s} \Delta_1 + \left( \frac{\partial u}{\partial f_i} \right)_{\!s} \Delta_i = 0, \quad \text{that is,} \quad w_i^s = -\Delta_1 / \Delta_i. <label>(16)</label></formula><p>The second way of calculating w_i^s is based on the fact that the gradient of a function indicates the direction of its fastest growth. Let all criteria except the first and the i-th remain unchanged, and let the first and the i-th grow by amounts δ_1 and δ_i chosen by the DM so that utility grows fastest; then</p><formula xml:id="formula_25">w_i^s = \delta_i / \delta_1. <label>(17)</label></formula><p>At step (9) of the conditional gradient method, the problem</p><formula xml:id="formula_26">\max_{0 \le t \le 1} u(f_1(x^s + t(\bar{x}^s - x^s)), ..., f_m(x^s + t(\bar{x}^s - x^s))) <label>(18)</label></formula><p>is solved. Here the objective function is a function of one variable t. One graph shows the curves</p><formula xml:id="formula_27">f_j(x^s + t(\bar{x}^s - x^s)), \quad j = 1, 2, ..., m,</formula><p>and, analyzing the constructed graphs, the DM selects the best value of t from the segment [0, 1].</p><p>In <ref type="bibr" target="#b15">[16]</ref>, an example of the application of the described human-machine method of multi-criteria optimization to the organization of the educational process at one of the university faculties is given. Human-machine procedures can be combined not only with the conditional gradient method, but also with other methods of nonlinear programming.</p><p>Obviously, the DM cannot always perform qualitatively the tasks assigned to it in the process of multi-criteria optimization. In addition, the nonlinear programming method can perform a large number of iterations, which is unacceptable for the DM. Therefore, it is advisable to use an artificial intelligence system instead of the DM. 
In particular, it is possible to use an artificial neural network that calculates the values of w_i^s using formulas (16) and (17), or calculates the gradient of the function f in another way, as well as the optimal value of t in problem (18). Such networks are considered in the next section.</p></div>
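Formulas (16) and (17) themselves are one-liners; what the DM supplies at each iteration is the increments, and this is exactly the input the paper proposes a neural network could produce instead. A minimal sketch, with all numeric values invented for illustration:

```python
# Formulas (16) and (17) as code. The increments below stand in for what the
# DM - or, as the paper proposes, a neural surrogate for the DM - would supply
# at iteration s; all numbers are assumed for illustration.

def weight_from_neutral_increments(delta_1, delta_i):
    """Formula (16): w_i^s = -Delta_1 / Delta_i, where (Delta_1, Delta_i) is a
    pair of increments the DM judges to leave the utility u unchanged."""
    return -delta_1 / delta_i

def weight_from_growth_direction(delta_1, delta_i):
    """Formula (17): w_i^s = delta_i / delta_1, where (delta_1, delta_i) is the
    direction of fastest utility growth indicated by the DM."""
    return delta_i / delta_1

# "+0.2 on criterion 1 is exactly offset by -0.1 on criterion i", so
# criterion i is twice as important as criterion 1:
print(weight_from_neutral_increments(0.2, -0.1))
print(weight_from_growth_direction(1.0, 2.0))
```

Both queries are repeated at every iteration of the method, which is why a human DM becomes the bottleneck and an automated substitute is attractive.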
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Use of artificial neural networks in optimization methods</head><p>Artificial neural networks (hereinafter referred to as neural networks) have become widespread as part of artificial intelligence systems. We consider an artificial neuron in which the real numbers x_1, ..., x_r are the inputs, the real number y is the output, and the weighting coefficients are real numbers. The scheme of an artificial neuron is shown in Figure <ref type="figure" target="#fig_6">1</ref>.</p><p>A neural network combining a large number of artificial neurons is capable of solving complex problems. We assume that the network has n inputs and one output and calculates some function f(x_1, ..., x_n). By Theorem 1 (the Kolmogorov superposition theorem), every continuous function of n variables can be represented in the form</p><formula xml:id="formula_28">f(x_1, ..., x_n) = \sum_{i=1}^{2n+1} \varphi_i \left( \sum_{j=1}^{n} \psi_{ij}(x_j) \right). \quad (19)</formula><p>The number of artificial neurons in such a neural network is not limited. We call a function σ(t) sigmoid if the condition</p><formula xml:id="formula_29">\sigma(t) \to \begin{cases} 1, &amp; t \to +\infty, \\ 0, &amp; t \to -\infty \end{cases}</formula><p>is satisfied. If one chooses a sigmoid function σ(t) that has a continuous derivative, then the quantity G will have continuous partial derivatives with respect to all quantities a_j, θ_j</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>The number of manufactured sets of final products is maximized.</figDesc></figure>
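The sigmoid-sum network of Theorem 2, G(x) = Σ_j a_j σ(⟨y_j, x⟩ + θ_j), is straightforward to evaluate. The parameter values below are arbitrary assumptions chosen only to show the structure of one forward pass through N = 2 neurons:

```python
import math

# A minimal sketch of the sigmoid-sum network from Theorem 2:
# G(x) = sum_j a_j * sigma(<y_j, x> + theta_j). All parameters are assumed.

def sigma(t):
    # The logistic function: a sigmoid with a continuous derivative.
    return 1.0 / (1.0 + math.exp(-t))

def G(x, a, y, theta):
    return sum(
        a_j * sigma(sum(yi * xi for yi, xi in zip(y_j, x)) + th_j)
        for a_j, y_j, th_j in zip(a, y, theta)
    )

a = [1.5, -0.5]                    # output weights a_j
y = [[2.0, -1.0], [0.5, 0.5]]      # input weight vectors y_j
theta = [0.0, -1.0]                # biases theta_j
val = G([1.0, 1.0], a, y, theta)   # one forward pass
print(val)
```

Because the logistic σ has a continuous derivative, G is differentiable in all parameters a_j, θ_j and in the components of the vectors y_j, which is the property the text relies on when optimizing the network.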
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>The DM calculates the values of Δ_i or δ_i, the optimal value of t in problem (18) and, possibly, selects the starting point x^0; all other calculations are performed by a computer program.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Scheme of an artificial neuron.</figDesc><graphic coords="8,198.52,415.48,198.24,75.12" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Scheme of a neural network for calculating the values of a continuous real function f(x_1, ..., x_n).</figDesc><graphic coords="9,95.08,72.04,404.88,261.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head></head><label></label><figDesc>The following theorem was proved in [24]. Theorem 2. Let σ be any continuous sigmoid function. Finite sums of the form G(x) = \sum_{j=1}^{N} a_j \sigma(\langle y_j, x \rangle + \theta_j), (20) where a_j and θ_j are real numbers and y_j are vectors with real components, can approximate any continuous function with any given accuracy. A sum of the form (20) can be represented in the form of a neural network (Figure 3) containing N + 1 artificial neurons, which are used to calculate the values of the function G. Theorems 1 and 2 justify the suitability of neural networks for calculating any continuous function. The neural network that is built on the basis of Theorem 1 and accurately calculates the values of the function f contains a limited number of neurons; the functions φ_i, ψ_ij, whose values are calculated in the neurons, can be different and need not have continuous derivatives. The neural network that is built on the basis of Theorem 2 and approximately calculates the values of the function f contains artificial neurons that use a single sigmoid function σ.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head></head><label></label><figDesc>Here E denotes mathematical expectation. Suppose that the sigmoid function σ(t) has a continuous derivative and that the conditions sufficient for differentiation under the expectation sign are met; then the derivative of the objective function can be calculated.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Here (∂u/∂f_i)_s is the i-th partial derivative of u calculated at the point (f_1(x^s), ..., f_m(x^s)), and the gradient ∇f_i(x^s) is calculated at the point x^s. The linear function ⟨∇f(x^s), x⟩ included in (8) is not fully known, since the values of (∂u/∂f_i)_s are not known. Note that in (8) the value of ⟨∇f(x^s), x⟩ can be multiplied by a positive number; in particular, it can be divided by any positive coefficient (∂u/∂f_i)_s. (A negative coefficient can also be chosen; such a case is treated similarly.) Without loss of generality, we consider that the first coefficient (∂u/∂f_1)_s is chosen as the divisor, and write (8) in the form (15).</figDesc></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>and for all components of the vectors y_j. This circumstance can play an important role in the process of finding the optimal values of these parameters. Suppose a probability distribution is given on the set X and x is a random element [25]; here x ∈ X, and X is a closed bounded set of n-dimensional Euclidean space. Let … of function (20) using one of the described methods. We can use the appropriate neural network instead of the DM to solve the multi-criteria optimization problem (14).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusions</head><p>The article deals with the problem of optimal planning of actions in multi-agent systems consisting of rational agents. Definitions of the concepts of agent, rational agent, and a system consisting of rational agents are given. Properties of agents and ways of developing action plans for a multi-agent system are considered, with a focus on centralized planning. Optimization problems of planning and the conditional gradient method are described. Methods for solving multi-criteria optimization problems are considered, including a human-machine procedure in which the DM (decision maker) takes part. It is proposed to use an artificial neural network instead of the DM, and methods are proposed for determining the optimal values of the parameters of such a network.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Optimal solutions in systems consisted of rational agents</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">P</forename><surname>Sinitsyn</surname></persName>
		</author>
		<idno type="DOI">10.15407/jai2023.02.016</idno>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="16" to="26" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">An introduction to multiagent systems</title>
		<author>
			<persName><forename type="first">M</forename><surname>Wooldridge</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>John Wiley &amp; Sons, Ltd</publisher>
			<pubPlace>United Kingdom</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Ten lectures on statistical and structural recognition</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Shlesinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Glavach</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2004">2004</date>
			<publisher>Nauk. Dumka</publisher>
			<pubPlace>Kyiv</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Optimal recognition procedures</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Gupal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Sergienko</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>Nauk. Dumka</publisher>
			<pubPlace>Kyiv</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Efficiency of Bayesian classification procedure</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Gupal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Sergienko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="543" to="554" />
			<date type="published" when="1995">1995</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Complexity of classification problems</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Sergienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Gupal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="519" to="533" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Optimal placement of multi-sensor system for threat detection</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10559-018-0026-z</idno>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="249" to="257" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Optimal sensor placement for underwater threat detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Pashko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Molyboha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zabarankin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gorovyy</surname></persName>
		</author>
		<idno type="DOI">10.1002/nav.20311</idno>
	</analytic>
	<monogr>
		<title level="j">Naval Research Logistics</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<biblScope unit="page" from="684" to="699" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Russell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Norvig</surname></persName>
		</author>
		<title level="m">Artificial intelligence: a modern approach</title>
				<meeting><address><addrLine>Hoboken, NJ</addrLine></address></meeting>
		<imprint>
			<publisher>Pearson</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>4th Edn</note>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Introduction to optimization</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">T</forename><surname>Polyak</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1979">1979</date>
			<publisher>Nauka</publisher>
			<pubPlace>Moscow</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">S</forename><surname>Mikhalevich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Gupal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Norkin</surname></persName>
		</author>
		<title level="m">Methods of non-convex optimization</title>
				<meeting><address><addrLine>Moscow</addrLine></address></meeting>
		<imprint>
			<publisher>Nauka</publisher>
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Complexity of problems and effectiveness of optimization methods</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Nemirovskii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Yudin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1979">1979</date>
			<publisher>Nauka</publisher>
			<pubPlace>Moscow</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Methods of stochastic programming</title>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">M</forename><surname>Ermoliev</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1976">1976</date>
			<publisher>Nauka</publisher>
			<pubPlace>Moscow</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Combinatorial optimization: Algorithms and Complexity</title>
		<author>
			<persName><forename type="first">C</forename><surname>Papadimitriou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Steiglitz</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1982">1982</date>
			<publisher>Prentice-Hall, Inc</publisher>
			<pubPlace>Englewood Cliffs, New Jersey</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Kondruk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Malyar</surname></persName>
		</author>
		<title level="m">Multi-criteria optimization of linear systems</title>
				<meeting><address><addrLine>Uzhhorod</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
		<respStmt>
			<orgName>DVNZ ; Uzhgorod National University</orgName>
		</respStmt>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">An Interactive Approach for Multi-Criterion Optimization, with an Application to the Operation of an Academic Department</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Geoffrion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Dyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fienberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Management Science</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="357" to="368" />
			<date type="published" when="1972">1972</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Multiagent systems: review of the current state of theory and practice</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Gorodetsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Grushinsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Khabalov</surname></persName>
		</author>
		<ptr target="https://www.slideshare.net/rudnichenko/mas-10320580.2015" />
		<imprint>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Distributed problem solving and planning</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Durfee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">ECCAI Advanced Course on Artificial Intelligence</title>
				<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="118" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<title level="m" type="main">Cooperative control of distributed multi-agent systems</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shamma</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Optimization of Capital Investment Distribution in an Open Economy on the Basis of the &quot;Input-Output&quot; Model</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10559-022-00454-1</idno>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="225" to="232" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Optimizing the Capital Investment Distribution Based on a Dynamic Mathematical Model</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Gupal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Pashko</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10559-024-00678-3</idno>
	</analytic>
	<monogr>
		<title level="j">Cybernetics and Systems Analysis</title>
		<imprint>
			<biblScope unit="volume">60</biblScope>
			<biblScope unit="page" from="375" to="382" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Fikhtengolts</surname></persName>
		</author>
		<title level="m">Course of Differential and Integral Calculus</title>
				<meeting><address><addrLine>Moscow</addrLine></address></meeting>
		<imprint>
			<publisher>Nauka</publisher>
			<date type="published" when="1969">1969</date>
			<biblScope unit="volume">1</biblScope>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">On the representation of continuous functions of several variables in the form of superpositions of continuous functions of one variable and addition</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Kolmogorov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">DAN USSR</title>
		<imprint>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="page" from="953" to="956" />
			<date type="published" when="1957">1957</date>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Approximation by Superpositions of a Sigmoidal function</title>
		<author>
			<persName><forename type="first">G</forename><surname>Cybenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mathematics of Control, Signals and Systems</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="303" to="314" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Shiryaev</surname></persName>
		</author>
		<title level="m">Probability</title>
				<meeting><address><addrLine>Moscow</addrLine></address></meeting>
		<imprint>
			<publisher>Nauka</publisher>
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<title level="m" type="main">Introduction to minimax</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">F</forename><surname>Demyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">N</forename><surname>Malozemov</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1972">1972</date>
			<publisher>Nauka</publisher>
			<pubPlace>Moscow</pubPlace>
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
