<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">The Stochastic Game Model for Solving Self-organization Problem of the Hamilton Graph Cycle</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Petro</forename><surname>Kravets</surname></persName>
							<email>petro.o.kravets@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Bandera Street</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mykola</forename><surname>Prodaniuk</surname></persName>
							<email>mykola.m.prodaniuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Bandera Street</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Pasichnyk</surname></persName>
							<email>volodymyr.v.pasichnyk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>12 Bandera Street</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">The Stochastic Game Model for Solving Self-organization Problem of the Hamilton Graph Cycle</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">3D5CD2FEDC9127AEB778F6CA6AD5C23F</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-12-29T06:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>0000-0001-8569-423X (P. Kravets)</term>
					<term>0000-0001-9544-3792 (M. Prodaniuk)</term>
					<term>0000-0002-5231-6395 (V. Pasichnyk)</term>
					<term>Self-organization</term>
					<term>Behavioral pattern</term>
					<term>Graph</term>
					<term>Hamiltonian cycle</term>
					<term>Stochastic agent game</term>
					<term>Markov recurrent method</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A new application of a stochastic game model for solving the problem of self-organization of the Hamiltonian graph cycle is proposed. To do this, game agents are placed at the vertices of an undirected graph; their pure strategies are the options for choosing one of the incident edges. The agents' random choice of strategies forms a set of local paths that begin at each vertex of the graph. Current player payoffs are defined as loss functions that depend on the strategies of neighboring players who control adjacent vertices of the graph. These functions are formed from a penalty for the choice of opposing strategies by neighboring players and a penalty for strategies that shorten the length of the local path. Computer simulation of the stochastic game yielded a pattern of self-organization of agents' strategies in the form of several local cycles or a global Hamiltonian cycle in the graph, depending on how the current losses of players are formed. The results of the study can be used in practice for game-based solutions of NP-complete problems, transport and communication problems, building authentication protocols in distributed information systems, and collective decision-making under uncertainty.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The rational functioning of natural and artificial distributed systems is possible due to the coordinated work of their active elements, or agents. In such systems, an agent has a local behavior strategy determined by the criteria of survival and reproduction in a changing environment. In the absence of coordination, the agents' actions will be chaotic, and their populations have no chance of survival. Coordination of agents' actions should be aimed at forming a global strategy for the behavior of the system as a whole: achieving a certain balance of the local goals of the system of agents, deviation from which becomes disadvantageous to individual agents or groups of agents <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>Coordination can be centralized (vertical) or decentralized <ref type="bibr">(horizontal)</ref>. Vertical coordination has one center or hierarchically subordinate centers, while horizontal coordination is realized through the coordination of actions directly between all agents or within small groups of agents. Hierarchical centralized coordination requires a high level of social organization of agents. In centralized systems, information flows move rapidly, mostly from top to bottom; entropy and reaction time in such systems are minimal, but the cost of maintaining their existence can be high. Decentralized coordination does not require a high level of social organization of agents and is more democratic, since information flows are formed mainly at the grassroots levels of the system and costs are minimized, but the entropy and the time needed to produce a coordinated response will be much higher than in centralized systems. 
Decentralized coordination is a more natural form of coexistence of agents during their organizational evolution <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>Decentralized coordination of agents' actions is formed during their direct local interaction and multi-round exchange of information among themselves. The result of purposeful decentralized coordination is the spontaneous achievement of global coordination, or the self-organization of agents, which manifests itself as established behavioral patterns at the scale of the entire decentralized system.</p><p>Self-organization is a purposeful spatiotemporal process of creating, ordering, or improving the structure and functions of a complex open dynamic system due to its internal factors, without an organizing external influence. The necessary conditions for the self-organization of complex systems are the multivariance, alternativeness, and randomness of agents' actions, guided by a process of self-learning and adaptation to uncertainties <ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref><ref type="bibr" target="#b6">[7]</ref>.</p><p>A structure of self-organization is a regular (non-chaotic) distribution of the structural elements of a system, or of their states, in time and space, which allows a holistic description of the system without characterizing all its constituent elements. Patterns are manifested at the macro level and are a consequence of local interactions of elements at the micro level <ref type="bibr" target="#b7">[8]</ref>.</p><p>Behavioral patterns determine the form of orderly actions of agents that arise from chaos through a process of coordination. 
The behavioral pattern is a systemic, globally coordinated behavior of a group of agents, which arises from their local interaction during multi-step adaptive learning <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>.</p><p>The structural model of a decentralized system is conveniently represented as a graph, whose vertices denote agents and whose edges determine the possibility of observing the internal states of neighboring agents and the local exchange of information between them. Then, depending on the agents' behavior strategies, self-organization patterns can manifest themselves in the form of combinatorial entities within the graph: spanning trees, colored vertices, Eulerian or Hamiltonian cycles, etc. <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>.</p><p>Of particular practical importance are those combinatorial formations on graphs whose construction problem belongs to the NP-complete class, that is, does not allow finding an exact solution by full enumeration of variants for large-order graphs in acceptable polynomial time <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>. One such problem is the search for Hamiltonian cycles in a graph. A Hamiltonian cycle in a graph is a closed path that passes through every vertex of the graph exactly once. For example, the Travelling Salesman Problem is to find the minimal Hamiltonian cycle in a weighted (loaded) graph <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>.</p><p>Not every graph has a Hamiltonian cycle. Unlike Euler cycles, no convenient criteria are known for determining whether a given graph has a Hamiltonian cycle. 
In some cases, one is guided by statements about the dependence of the existence of Hamiltonian cycles on the degrees of the vertices of the graph <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b15">16]</ref>.</p><p>Exact or approximate (heuristic) methods can be used to search for Hamiltonian cycles. Although heuristic methods do not guarantee an optimal solution, they move the problem into a polynomial complexity class by optimizing the enumeration of possible options.</p><p>This class also includes the stochastic game method described in this article for the self-organization of the Hamiltonian cycle of an undirected graph. Its possible advantage is that, thanks to self-learning and adaptation to uncertainties, it can find the Hamiltonian cycle in deterministic and random graphs. Its disadvantages include a relatively slow, power-law rate of convergence caused by the a priori uncertainty of a distributed system.</p></div>
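To make the defining property concrete, whether a given vertex sequence is a Hamiltonian cycle (a closed path visiting every vertex exactly once) can be checked directly. The following minimal Python sketch is illustrative only; the function and variable names are not part of the paper's method:

```python
def is_hamiltonian_cycle(adj, cycle):
    """Check whether `cycle` (a list of vertices) is a Hamiltonian cycle
    of the undirected graph given by the adjacency dict `adj`
    (vertex -> set of neighboring vertices)."""
    n = len(adj)
    # The cycle must visit every vertex of the graph exactly once.
    if len(cycle) != n or set(cycle) != set(adj):
        return False
    # Every consecutive pair, including the closing edge back to the
    # start, must be an edge of the graph.
    return all(cycle[(i + 1) % n] in adj[cycle[i]] for i in range(n))
```

For a 4-cycle graph, `[0, 1, 2, 3]` passes the check, while `[0, 2, 1, 3]` fails because vertices 0 and 2 are not adjacent.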
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Practical applications of Hamiltonian cycles</head><p>Hamiltonian graph cycles are widely used in various fields <ref type="bibr" target="#b16">[17]</ref>. They are used to solve various variants of the travelling salesman problem for optimal routing <ref type="bibr" target="#b17">[18]</ref><ref type="bibr" target="#b18">[19]</ref><ref type="bibr" target="#b19">[20]</ref><ref type="bibr" target="#b20">[21]</ref><ref type="bibr" target="#b21">[22]</ref>, modelling the assembly of genomes from selected fragments <ref type="bibr" target="#b22">[23]</ref>, identification by fingerprints <ref type="bibr" target="#b23">[24]</ref>, communication management <ref type="bibr" target="#b24">[25]</ref>, the construction of zero-knowledge cryptographic authentication protocols <ref type="bibr" target="#b25">[26]</ref>, and others.</p><p>A zero-knowledge proof is an interactive method by which one party (the prover) can prove to another party (the verifier) that it knows some value (a statement, object, or secret key) without revealing the value itself. This method was first proposed in 1985 by S. Goldwasser, S. Micali, and C. Rackoff in "The Knowledge Complexity of Interactive Proof Systems".</p><p>A zero-knowledge proof has the following properties: 1. Completeness: if the statement 𝑥 is true, then A will convince B of this. 2. Soundness: if the statement 𝑥 is false, even a dishonest A cannot convince B, except with a very small probability.</p><p>3. Zero disclosure: if the statement 𝑥 is true, then anyone, even a dishonest verifier, will learn nothing except the fact that the statement 𝑥 is true.</p><p>Let a graph consist of 𝑛 vertices numbered 1, 2, …, 𝑛 and have a Hamiltonian cycle. Then, by enumerating all permutations of the symmetric group, the Hamiltonian cycle can be found. Since the number of permutations grows factorially with 𝑛, such a method of complete enumeration of options can hardly be realized in a reasonable time. 
It is known that the problem of finding a Hamiltonian cycle in a graph is NP-complete.</p></div>
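The factorial blow-up of complete enumeration described above can be illustrated by a brute-force search over vertex permutations. This is a minimal sketch under illustrative names, feasible only for graphs of small order:

```python
from itertools import permutations

def hamiltonian_cycle_brute_force(adj):
    """Try all (n-1)! vertex orders starting from a fixed first vertex;
    return the first order that forms a Hamiltonian cycle, or None.
    `adj` maps each vertex to the set of its neighbors."""
    vertices = sorted(adj)
    first, rest = vertices[0], vertices[1:]
    for perm in permutations(rest):
        cycle = [first, *perm]
        # Check every consecutive edge, including the closing one.
        if all(cycle[(i + 1) % len(cycle)] in adj[cycle[i]]
               for i in range(len(cycle))):
            return cycle
    return None
```

For a 5-vertex ring the very first permutation already succeeds, but in general the loop may inspect all (n-1)! orders, which is exactly why this approach is impractical for large n.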
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Overview of methods for finding Hamiltonian graph cycles</head><p>The search for a Hamiltonian cycle is a special case of the travelling salesman problem, which finds the shortest Hamiltonian path of a weighted (loaded) graph.</p><p>Combinatorial methods for finding Hamiltonian cycles of a graph are divided into those that provide exact solutions and heuristics, which give approximate solutions <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b27">28]</ref>.</p><p>The methods of the first group are based on a complete enumeration of options or on deterministic calculations and can be applied to problems of relatively small dimensions. Heuristic methods use various improvements and intuitive guesses to reduce the number of enumerated options. Some of them are probabilistic methods that rely on a random selection of options with the ability to select and memorize better solutions. Heuristic methods provide an approximately optimal solution to the problem in acceptable polynomial time.</p><p>The formulation of the travelling salesman problem as a linear programming problem was first performed by N. Deo and S. L. Hakimi in 1965.</p><p>Let [𝑐 𝑖,𝑗 &gt; 0 | 𝑖, 𝑗 = 1. . 𝑛] be a matrix of weights (or distances) of the edges of a graph of order 𝑛, and [𝑥 𝑖,𝑗 | 𝑖, 𝑗 = 1. . 𝑛] be a matrix of transitions between the vertices of the graph. The element 𝑥 𝑖,𝑗 = 1 if the edge between vertices 𝑖 and 𝑗 (𝑖 ≠ 𝑗) is included in the Hamiltonian path, and 𝑥 𝑖,𝑗 = 0 otherwise. 
The traveling salesman problem can then be formulated as follows.</p><p>The objective function that minimizes the length of the Hamiltonian cycle is:</p><formula xml:id="formula_0">𝑓(𝑥) = ∑ 𝑖=1..𝑛 ∑ 𝑗=1..𝑛 𝑐 𝑖,𝑗 𝑥 𝑖,𝑗 → 𝑚𝑖𝑛 𝑥 <label>(1)</label></formula><p>System of restrictions: ∑ 𝑖=1..𝑛 𝑥 𝑖,𝑗 = 1 (1 ≤ 𝑗 ≤ 𝑛): there can be only one entrance to each vertex; ∑ 𝑗=1..𝑛 𝑥 𝑖,𝑗 = 1 (1 ≤ 𝑖 ≤ 𝑛): there can be only one exit from each vertex; ∑ 𝑖∈𝑆 ∑ 𝑗∉𝑆 𝑥 𝑖,𝑗 ≥ 1 (𝑆 ≠ ∅, 𝑆 ⊂ {1, . . . , 𝑛}): the exclusion of isolated cycles that together cover all vertices of the graph; there must be only one (Hamiltonian) cycle, so for any subset of vertices 𝑆 there exists at least one arc that leads to the other vertices of the graph; 𝑥 𝑖,𝑗 ∈ {0,1} (1 ≤ 𝑖, 𝑗 ≤ 𝑛) is the desired matrix, whose elements denote the Hamiltonian path.</p><p>The formalization of the problem <ref type="formula" target="#formula_0">(1)</ref>, although it provides an exact solution by integer programming methods, for example, by the branch-and-bound method, is cumbersome, contains many variables and constraints, and requires many calculations. A number of other methods can be used to solve the traveling salesman problem exactly, for example, dynamic programming, Lagrange multipliers, the modified branch-and-bound method (Little's method), and cutting planes (the Dantzig-Fulkerson-Johnson method).</p><p>Heuristic methods provide a solution close to optimal for large-dimensional problems within acceptable time limits. 
These include the nearest neighbor, greedy, and insertion methods, local k-opt optimization to improve an initial solution, the Lin-Kernighan method, decomposition and cross-linking <ref type="bibr" target="#b28">[29]</ref>, the elastic network, artificial neural networks, genetic and ant algorithms, and others.</p><p>Similar exact and approximate methods are used to solve the problem of finding Hamiltonian cycles of an unweighted (unloaded) graph.</p><p>Exact methods include brute force, backtracking, the algebraic method, the Roberts-Flores method, linear programming, integer programming (Gomory cuts, the branch-and-bound method), dynamic programming, and others.</p><p>Since the problem of determining Hamiltonian cycles is NP-complete, the brute force method can be applied only to graphs of small order.</p><p>To speed up the enumeration of options, one can use the backtracking method (search with return), which excludes a significant number of options from consideration with a single check by building a decision tree and traversing it in depth <ref type="bibr" target="#b29">[30]</ref>.</p><p>First, select an arbitrary vertex of the graph, for example the first, and one of the edges incident to it, along which we proceed to the next vertex. The vertices passed are memorized. Suppose that the 𝑘-th vertex of the cycle has already been found. If 𝑘 is equal to the number 𝑛 of vertices of the graph and there is an edge that connects the 𝑘-th vertex to the first, then the cycle is found. If there are edges extending from the 𝑘-th vertex to still unvisited vertices, then we include one of them in the solution and continue the search from it. If no such edges exist, then we go back one step to the (𝑘 − 1)-th vertex and continue the search. In this way, all the vertices of the Hamiltonian cycle will be found.</p><p>In <ref type="bibr" target="#b30">[31]</ref> a new method and a corresponding polynomial algorithm for solving the problem of finding the Hamiltonian cycle of a graph are proposed. 
Based on the given graph, another graph of shortest paths is constructed using Dijkstra's algorithm. The search for a Hamiltonian cycle in the graph is then reduced to the search for a closed path in the shortest path graph. The enumeration space consists of solutions that are constructed from each vertex of the graph in the shortest path graph.</p><p>The algebraic method is based on repeated symbolic multiplication of a modified adjacency matrix 𝐵 of order 𝑛 (the unit elements of the adjacency matrix are replaced by the literal notation of graph vertices) by a matrix of sums of internal (without extreme vertices) products of the notation of vertices of nonzero chains between the vertices of the graph: 𝑃 𝑖+1 = 𝐵 ⋅ 𝑃 𝑖 <ref type="bibr" target="#b10">[11]</ref>. Multiplication is performed until the matrix 𝑃 𝑛−1 , containing all Hamiltonian chains between all pairs of vertices, is computed. Hamiltonian cycles are obtained from those chains of 𝑃 𝑛−1 whose end vertices are connected by an edge or arc. The disadvantage of this method is that for graphs of large order each element of the matrix 𝑃 𝑖 will consist of many constituent sub-elements, which require large amounts of memory to store.</p><p>Unlike the algebraic method, which ensures obtaining all existing Hamiltonian cycles of a graph, the Roberts-Flores enumeration method works with only one path, which is extended until a Hamiltonian cycle is found, or until it turns out that this path cannot lead to a Hamiltonian cycle <ref type="bibr" target="#b10">[11]</ref>. After that, the path is modified in some systematic way and the cycle search continues. By reducing the number of computations required, this method is efficient for large graphs.</p><p>For practical applications, heuristic methods for finding Hamiltonian cycles are more effective, since they reduce the complete enumeration of options. 
These include the nearest neighbor and greedy methods, simulated annealing, tabu search, the Hopfield artificial neural network, and evolutionary, genetic, ant colony, and other algorithms.</p><p>In the context of self-organization, those heuristic methods that have the property of self-learning are important. Among these, it is worth noting the neural network, genetic, ant colony, and stochastic game methods, which have polynomial complexity.</p><p>The problem of finding the Hamiltonian cycle can be solved using the Hopfield neural network, which implements learning without a teacher <ref type="bibr" target="#b31">[32]</ref>. To do this, the conditions of problem (1) must be translated into the parameters of the connections between neurons.</p><p>For the Hopfield network to determine the Hamiltonian cycle, the following requirements must be imposed:</p><p>1. The neural network should consist of 𝑁 = 𝑛 × 𝑛 neurons, which can be considered as a square matrix of 𝑛 rows and 𝑛 columns, where 𝑛 is the order of the graph.</p><p>2. There are connections between all pairs (𝑘, 𝑙) of neurons, to which weights 𝑊 𝑘,𝑙 are attributed.</p><p>3. Not all network weights can be negative at the same time.</p><p>4. The active neuron in each column specifies the corresponding vertex of the graph (or the next route city for the traveling salesman problem). 5. The network response must contain only one active neuron in each row and each column. For this, the network weights must be constructed so that each neuron interferes with the activation of the other neurons in its row and in its column. 6. 
To minimize the path length, it is necessary that the neuron in the 𝑗-th column interferes with the activation of neurons in the (𝑗 + 1)-th and (𝑗 − 1)-th columns the more actively, the greater the distance between them (required to solve the traveling salesman problem).</p><p>All these conditions are satisfied by the following formula for calculating the weight between the neuron corresponding to city 𝑥 (row) at the 𝑖-th position (column) and the neuron corresponding to city 𝑦 (row) at the 𝑗-th position (column): 𝑊 𝑘,𝑙 = 𝑊 𝑥𝑖,𝑦𝑗 = −𝐴𝛿 𝑥𝑦 (1 − 𝛿 𝑖𝑗 ) − 𝐵𝛿 𝑖𝑗 (1 − 𝛿 𝑥𝑦 ) − 𝐶𝑑(𝑥, 𝑦)(𝛿 𝑖,𝑗+1 + 𝛿 𝑖,𝑗−1 ) + 𝐷, where 𝑘 = (𝑥, 𝑖), 𝑙 = (𝑦, 𝑗) are the coordinates of the connections between the neurons of the matrix; 𝐴, 𝐵, 𝐶, 𝐷 are constants; 𝑑(𝑥, 𝑦) is the distance between cities 𝑥 and 𝑦; 𝛿 𝑥𝑦 is the Kronecker symbol, taking the value 1 if 𝑥 = 𝑦 and the value 0 otherwise. It is easy to see that the first term equals −𝐴 for all connections in the same row (𝑥 = 𝑦), except the connection of the neuron with itself (at 𝑖 = 𝑗). The second term equals −𝐵 for all connections in the same column (𝑖 = 𝑗), except the connection with itself (𝑥 = 𝑦). The third term is proportional to the distance between cities 𝑥 and 𝑦 if these cities are adjacent in the route (𝑖 = 𝑗 − 1 or 𝑖 = 𝑗 + 1).</p><p>Starting from an initial random state, the Hopfield network can provide a suboptimal solution to the problem (for the traveling salesman problem, the resulting cycle may differ from the optimal one). The experiment can be conducted several times and the best solution chosen.</p><p>Heuristic methods based on genetic algorithms model the evolutionary process of natural selection, inheritance, and mutation. Each Hamiltonian cycle is treated as a chromosome. 
At the initial stage, there is a set of such chromosomes (the initial population), and at each subsequent cycle a new population is produced from the existing one by pairwise crossover and mutations <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b32">33]</ref>.</p><p>A simplified genetic algorithm thus repeats selection, crossover, and mutation over the current population of candidate cycles. The genetic algorithm allows one to obtain more than one solution in a single application and select the best one. Several runs of the algorithm can be performed and the average value of the best results calculated. As the order of the graph increases, the probability of obtaining an optimal solution decreases.</p><p>The ant algorithm is based on the behavior of an ant colony and simulates the deposition and evaporation of pheromones. Ants find their way between an anthill and a food source by the smell of the pheromones they leave behind. The more ants use the same path, the higher the concentration of pheromones on it. This "ant logic" allows a shorter path to be chosen between the end points of the route <ref type="bibr" target="#b26">[27,</ref><ref type="bibr" target="#b33">34]</ref>.</p><p>For each ant, the transition from point 𝑖 to point 𝑗 is determined by its memory 𝑀 (the list of points that can still be visited), the visibility 𝜇 𝑖,𝑗 between points (the presence of an edge 𝑒 𝑖,𝑗 in the graph), and the pheromone trail 𝜏 𝑖,𝑗 . 
The selection of the next vertex of the graph is carried out with the probability:</p><formula xml:id="formula_2">𝑝 = 𝜒(𝑗 ∈ 𝑀) ⋅ 𝜇 𝑖,𝑗 𝛼 𝜏 𝑖,𝑗 𝛽 / ∑ 𝑘∈𝑀 𝜇 𝑖,𝑘 𝛼 𝜏 𝑖,𝑘 𝛽</formula><p>, where 𝜒(𝑗) ∈ {0,1} is the indicator function of the event; 𝛼, 𝛽 are the weight parameters.</p><p>For the considered neural, ant, and genetic algorithms, it is problematic to choose initial parameter values that will ensure convergence to the optimal solution.</p><p>Even though the described heuristic methods provide only an approximately optimal Hamiltonian cycle, their methodological value lies in demonstrating different implementations of self-learning in active systems for solving NP-complete problems without the need to use classical algorithms. Such methods implement "soft" computations that do not require traditional programming and can easily be adapted to solve other problems. The stochastic game method also has similar properties.</p><p>In this article, we propose a new application of the stochastic game method <ref type="bibr" target="#b34">[35]</ref> for the self-organization of Hamiltonian cycles of an unweighted (unloaded) graph.</p></div>
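Among the exact methods surveyed above, the backtracking (search with return) procedure can be sketched compactly. The following Python sketch follows the verbal description given earlier (extend the path vertex by vertex, step back when stuck, close the cycle when all n vertices are covered); it is illustrative and not any of the cited authors' implementations:

```python
def hamiltonian_cycle_backtracking(adj):
    """Depth-first search with return: grow a path from a fixed start
    vertex; when all n vertices are covered and the last vertex is
    adjacent to the first, a Hamiltonian cycle has been found.
    `adj` maps each vertex to the set of its neighbors."""
    vertices = sorted(adj)
    n = len(vertices)
    start = vertices[0]
    path = [start]
    visited = {start}

    def extend():
        if len(path) == n:
            # All vertices used: the cycle closes if the last vertex
            # is connected to the first by an edge.
            return path[0] in adj[path[-1]]
        for v in adj[path[-1]]:
            if v not in visited:
                path.append(v)
                visited.add(v)
                if extend():
                    return True
                path.pop()          # go back one step, to the (k-1)-th vertex
                visited.discard(v)
        return False

    return path[:] if extend() else None
```

Worst-case behavior is still exponential, but the single adjacency check at each extension prunes whole subtrees of the decision tree, which is exactly the speed-up the backtracking method provides over brute force.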
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The purpose of the work</head><p>The aim of the work is to solve the stochastic game problem of self-organization of strategies for constructing a Hamiltonian cycle of an unweighted undirected graph. Achievement of this goal is ensured by using an adaptive method for solving the stochastic game, proper adjustment of its parameters, planning of a computer experiment, development of a software model of the stochastic game, analysis of the results, and recommendations for their practical application.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Formulation of the game problem of finding Hamiltonian cycles</head><p>Let the undirected graph 𝐺 = (𝑉, 𝐸) be given by a finite set of vertices 𝑉 and edges 𝐸. The graph is ordered, that is, its edges are numbered in ascending order, for example, clockwise. We assume that the graph is simple (has no loops or multiple edges) and connected (contains no isolated vertices). Furthermore, given the need to cover all vertices by a cycle, it can be assumed without loss of generality that the minimum degree of the vertices of a deterministic graph is greater than 1 (a graph without leaves).</p><p>At each vertex of the graph we place a game agent 𝐴 𝑖 , who can choose one of the edges incident to its vertex 𝑖 ∈ 𝑉, belonging to the local set 𝐸 𝑖 ⊆ 𝐸. To do this, each agent with number 𝑖 ∈ 𝑉 has 𝑁 𝑖 ≥ 2 pure strategies 𝑋 𝑖 = (𝑥 𝑖 [1], 𝑥 𝑖 [2], … , 𝑥 𝑖 [𝑁 𝑖 ]), where 𝑥 𝑖 ∈ 𝐸 𝑖 is an edge number and the value 𝑁 𝑖 is determined by the degree of the corresponding vertex. Hereinafter, a superscript denotes an index rather than a power, except for symbols with a minus sign.</p><p>The variants of the possible choice can be given by a directed graph of players' strategies. Let the arcs coming out of the vertex with number 𝑖 denote the strategies of the 𝑖-th player regarding the choice of one of the neighboring players, and let the arcs that enter the 𝑖-th vertex denote the dependence of the winnings or losses of the 𝑖-th player on the strategies of neighboring players. The correspondence between a given undirected graph and the directed graph of strategies formed on its basis is shown in Figure <ref type="figure" target="#fig_0">1</ref>. The choice of pure strategies is made by the players independently and synchronously at discrete moments in time 𝑛 = 1,2, …. 
The choice made is estimated by the current loss 𝜁 𝑛 𝑖 , which is formed depending on how much the agent's action brought it closer to the target solution of the problem.</p><p>To do this, each game agent 𝑖 ∈ 𝑉 makes an independent choice of one of its 𝑁 𝑖 own pure strategies 𝑥 𝑛 𝑖 = 𝑥 𝑖 ∈ 𝑋 𝑖 according to conditional probabilities. To select pure strategies {𝑥 𝑛 𝑖 }, a random mechanism is used, built on a discrete dynamic distribution given by the vectors of mixed strategies 𝑝 𝑛 𝑖 :</p><formula xml:id="formula_3">𝑥 𝑛 𝑖 = {𝑥 𝑖 (𝑙) | 𝑙 = 𝑚𝑖𝑛{𝑙 : ∑ 𝑘=1..𝑙 𝑝 𝑛 𝑖 [𝑘] &gt; 𝜔}, 𝑙 = 1. . 𝑁 𝑖 },<label>(2)</label></formula><p>where 𝜔 ∈ [0, 1] is a real random variable with uniform distribution.</p><p>After all players complete the selection of pure strategies, they receive random current losses 𝜁 𝑛 𝑖 = 𝜁 𝑛 𝑖 (𝑥 𝑛 𝑉 𝑖 ), whose values are a function of the combined strategies 𝑥 𝑉 𝑖 ∈ 𝑋 𝑉 𝑖 = × 𝑗∈𝐷 𝑖 𝑋 𝑗 of neighboring players from the subsets 𝑉 𝑖 ⊆ 𝑉 (𝑉 𝑖 ≠ ∅) ∀𝑖 ∈ 𝑉. It is assumed that the current losses have a constant mathematical expectation and a bounded second moment, which are not known to the agents a priori. Depending on the selected criteria and the initial parameters of the game method, several autonomous cycles or one global cycle may arise in the graph.</p><p>To create autonomous disjoint cycles of the graph, the condition must be satisfied that the choice of an edge in the direction of the 𝑖-th agent (∀𝑖 ∈ 𝑉) can be carried out by only one of its neighboring agents from the set 𝑉 𝑖 .</p><p>Let 𝑘 𝑛 𝑖 = ∑ 𝑗∈𝑉 𝑖 𝜒(𝑥 𝑛 𝑗 → 𝑥 𝑛 𝑖 ) be the current number of neighboring agents whose strategies are directed to the vertex of the graph where the 𝑖-th agent is located (in the direction of the 𝑖-th agent). 
The heterogeneity of the directions of strategies within a local subset 𝑉 𝑖 is accounted for by the difference penalty:</p><formula xml:id="formula_4">𝜉 𝑛 𝑖 [1] = 𝐶 𝑖 −1 ∑ 𝑗∈𝑉 𝑖 |𝑘 𝑛 𝑖 − 𝑘 𝑛 𝑗 | ,<label>(3)</label></formula><p>where 𝜉 𝑛 𝑖 [1] ∈ 𝑅 + 1 is a real number; 𝐶 𝑖 = |𝑉 𝑖 | is the number of agents of the local subset 𝑉 𝑖 . This criterion provides the possibility of forming cycles of movement of agents on the graph. To form a Hamiltonian cycle, each agent must choose a strategy (an edge incident to its vertex) so as to maximize the length of the local path emanating from the agent-controlled vertex of the graph. Let 𝑟 𝑛 𝑖 be a set of vertices (or a sequence of vertex numbers) that defines such a path with root vertex number 𝑖, formed at the current time 𝑛 by the strategies (directions of movement) 𝑥 𝑛 𝑖 of the players. The path 𝑟 𝑛 𝑖 consists of the values 𝑚 ≔ 𝑥 𝑛 𝑚 determined by the players' pure strategies 𝑥 𝑛 𝑚 . The recurrent procedure for determining the current path that starts at the vertex with number 𝑖 is:</p><formula xml:id="formula_5">𝑟 𝑛 𝑖 (𝑡) = 𝑟 𝑛 𝑖 (𝑡 − 1) ⋃ {𝑚| 𝑚 ≔ 𝑥 𝑛 𝑚 } {𝑚}∩𝑟 𝑛 𝑖 (𝑡−1)≠∅ 𝑚=𝑖 , ∀𝑖 ∈ 𝑉,<label>(4)</label></formula><p>where 𝑟 𝑛 𝑖 (0) = ∅.</p><p>The set-union notation in (4) should be read so that the operation is performed repeatedly, starting with the value 𝑚 = 𝑖, until an element reappears during the formation of the path, that is, until 𝑚 ∈ 𝑟 𝑛 𝑖 (𝑡 − 1). For a connected graph with vertex degrees greater than 1, this condition will always hold: following the players' strategies from the 𝑖-th vertex (∀𝑖 ∈ 𝑉), entry into a local cycle or a global Hamiltonian cycle will occur. The minimal local cycle covers two edge-connected vertices of the graph and is formed by the opposite strategies of two neighboring players. The maximal cycle covers all vertices of the graph and is one of the Hamiltonian cycles. 
The deviation of the length of the local path from the maximum possible is considered by the penalty:</p><formula xml:id="formula_6">𝜉 𝑛 𝑖 [2] = 𝐶 −1 (𝐶 − 𝑆(𝑟 𝑛 𝑖 )),<label>(5)</label></formula><p>where 𝜉 𝑛 𝑖 <ref type="bibr" target="#b1">[2]</ref> ∈ 𝑅 + 1 is a real number; 𝐶 = |𝑉| is a number of game agents; 𝑆(𝑟 𝑛 𝑖 ) = |𝑟 𝑛 𝑖 | is the length of the local path (the number of vertices of the graph on the way to entering the cycle), starting at the 𝑖th vertex (0 ≤ 𝑆(𝑟 𝑛 𝑖 ) ≤ 𝐶).</p><p>This criterion approximates the length of the path 𝑟 𝑛 𝑖 , leaving the vertex 𝑖, to the number of vertices 𝑉 of the graph 𝐺 (to the length of the Hamiltonian cycle).</p><p>Penalties (3) and ( <ref type="formula" target="#formula_6">5</ref>) can be used individually or comprehensively. The total current losses of players are calculated after the selection of moving options 𝑥 𝑛 𝑖 ∀𝑖 ∈ 𝑉 is completed:</p><formula xml:id="formula_7">𝜁 𝑛 𝑖 = 𝜆𝜉 𝑛 𝑖 [1] + (1 − 𝜆)𝜉 𝑛 𝑖 [2],<label>(6)</label></formula><p>where 𝜆 ∈ [0,1] is the weighting factor that determines the share of penalty criteria in the formation of agents' losses. Due to the random selection of pure strategies, current losses 𝜁 𝑛 𝑖 ∀𝑖 ∈ 𝑉 are random variables with a priori unknown stochastic characteristics. To formulate criteria for player behavior, time-averaged losses are used:</p><formula xml:id="formula_8">𝛧 𝑛 𝑖 = 1 𝑛 ∑ 𝜁 𝑡 𝑖 𝑛 𝑡=1 , 𝑛 = 1,2, … , ∀𝑖 ∈ 𝑉 . (<label>7</label></formula><formula xml:id="formula_9">)</formula><p>Strategies of players should be aimed at minimizing their own functions of average losses <ref type="bibr" target="#b6">(7)</ref>:</p><formula xml:id="formula_10">𝑙𝑖𝑚 𝑛→∞ Ζ 𝑛 𝑖 → 𝑚𝑖𝑛 𝑥 𝑛 𝑖 ∀𝑖 ∈ 𝑉. 
(<label>8</label></formula><formula xml:id="formula_11">)</formula><p>The stochastic game of finding a Hamiltonian cycle consists in each player 𝑖 ∈ 𝑉 learning to choose pure strategies {𝑥 𝑛 𝑖 } (2) at moments of time 𝑛 = 1,2, … based on the observation of the locally determined current losses {𝜁 𝑛 𝑖 } (6) so as to ensure that the system of criteria ( <ref type="formula" target="#formula_10">8</ref>) is met.</p><p>The method of forming the sequences of strategies {𝑥 𝑛 𝑖 } ∀𝑖 ∈ 𝑉 (𝑛 = 1,2, …) of the stochastic game determines which condition of collective optimality is fulfilled, for example Nash, Pareto or another <ref type="bibr" target="#b35">[36]</ref>.</p><p>The learning process of the stochastic game, which leads to self-organization of the movement strategies of the game agents, is evaluated by the following characteristics.</p><p>1. The system function of average losses, which is the average of the individual average loss functions of the players <ref type="formula" target="#formula_8">(7)</ref>:</p><formula xml:id="formula_12">𝛧 𝑛 = 𝐶 −1 ∑ 𝛧 𝑛 𝑖 𝑖∈𝑉 . (<label>9</label></formula><formula xml:id="formula_13">)</formula><p>2. The strategy coordination coefficient, which is the relative number of coordinated strategies of the players:</p><formula xml:id="formula_14">𝐾 𝑛 = (𝑛𝐶) −1 ∑ ∑ 𝜒(|𝜁 𝑡 𝑖 | ≤ 𝛿) 𝑖∈𝑉 𝑛 𝑡=1 , (<label>10</label></formula><formula xml:id="formula_15">)</formula><p>where 𝜒( * ) ∈ {0,1} is the indicator function of an event: if the condition holds, then 𝜒( * ) = 1, otherwise 𝜒( * ) = 0; 0 &lt; 𝛿 &lt;&lt; 1 is a small positive real number.</p><p>The self-organization of the Hamiltonian cycle is indicated by a decrease in the system loss function 𝛧 𝑛 ≥ 0 and an increase in the coordination coefficient 𝐾 𝑛 ∈ [0,1] of the agents' strategies.</p></div>
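The two penalties and their convolution (6) can be combined in a few lines. A sketch under the simplifying assumption that a player's strategy 𝑘 𝑖 is identified with the number of the vertex it points to (function and variable names are hypothetical):

```python
def current_loss(strategies, neighbors, i, lam=0.5):
    """Combined current loss (6) of player i: lam * xi1 + (1 - lam) * xi2,
    where xi1 is the difference penalty (3) over the local subset V_i and
    xi2 is the path-length penalty (5). Strategies are identified with
    chosen vertex numbers -- an illustrative simplification."""
    Vi = neighbors[i]                                    # local subset V_i
    # (3): mean absolute difference between i's choice and its neighbors'
    xi1 = sum(abs(strategies[i] - strategies[j]) for j in Vi) / len(Vi)
    # (5): relative shortfall of the local path r_i (Eq. (4)) from |V|
    path, m = [], i
    while m not in path:
        path.append(m)
        m = strategies[m]
    C = len(strategies)
    xi2 = (C - len(path)) / C
    return lam * xi1 + (1 - lam) * xi2                   # convolution (6)
```

For the Hamiltonian configuration on a five-vertex full-connected graph the path penalty (5) vanishes for every player, so with 𝜆 = 0 the loss is exactly zero.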
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Method for solving a stochastic game.</head><p>Formation of sequences {𝑥 𝑛 𝑖 } with the desired properties will be performed using a recurrent method for changing the vectors of mixed strategies <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref>:</p><formula xml:id="formula_16">𝑝 𝑛+1 𝑖 = 𝜋 𝜀 𝑛+1 𝑁 𝑖 {𝑝 𝑛 𝑖 − 𝛾 𝑛 𝑅(𝑝 𝑛 𝑖 , 𝑥 𝑛 𝑖 , 𝜁 𝑛 𝑖 )},<label>(11)</label></formula><p>where 𝜋 𝜀 𝑛+1 𝑁 𝑖 is the projector onto the unit 𝜀-simplex 𝑆 𝜀 𝑁 𝑖 ⊆ 𝑆 𝑁 𝑖 <ref type="bibr" target="#b36">[37]</ref>; 𝛾 𝑛 is a monotonically decreasing sequence of positive values that regulates the step size of the method; 𝑅 is the vector of the method step; 𝜀 𝑛 is a monotonically decreasing sequence of positive quantities that regulates the rate of expansion of the 𝜀-simplex.</p><p>The change in the elements of the vector of mixed strategies is constructed in such a way that when a strategy 𝑥 𝑛 𝑖 (𝑗) is chosen, the element 𝑝 𝑛 𝑖 (𝑗) decreases in proportion to the amount of the current loss 𝜁 𝑛 𝑖 . The other elements of the vector of mixed strategies do not change or grow proportionally to 𝜁 𝑛 𝑖 . After recalculation, the vectors of mixed strategies are normalized by projection onto the unit 𝜀-simplex, so that they perform small displacements on the unit 𝜀-simplex.</p><p>To substantiate the recurrent method, we use results of the theory of stochastic approximation <ref type="bibr" target="#b37">[38]</ref>. For this, suppose that the mathematical expectation of the random losses 𝑀{𝜁 𝑛 𝑖 (𝑥)} = 𝑙 𝑖 (𝑥) is constant for all 𝑥 ∈ 𝑋 = ⊗ 𝑖∈𝑉 𝑋 𝑖 . 
Then the average loss function of the matrix game is calculated as:</p><formula xml:id="formula_17">𝐿 𝑖 (𝑝 𝑉 𝑖 ) = ∑ 𝑥 𝑉 𝑖 ∈𝑋 𝑉 𝑖 𝑙 𝑖 (𝑥 𝑉 𝑖 ) ∏ 𝑗∈𝑉 𝑖 ; 𝑥 𝑗 ∈𝑥 𝑉 𝑖 𝑝 𝑗 (𝑥 𝑗 ) ,<label>(12)</label></formula><p>where 𝑝 𝑉 𝑖 ∈ 𝑆 𝑉 𝑖 = ∏ 𝑆 𝑁 𝑗 𝑗∈𝑉 𝑖 ; 𝑝 𝑖 ∈ 𝑆 𝑁 𝑖 .</p><p>The goal of the players is to minimize their average loss functions with respect to the mixed strategies 𝑝 𝑖 : 𝐿 𝑖 (𝑝 𝑉 𝑖 ) → 𝑚𝑖𝑛 𝑝 𝑖 . Let the expectation of the motion vector of method <ref type="formula" target="#formula_16">(11)</ref> be the gradient of the average loss function <ref type="formula" target="#formula_17">(12)</ref>:</p><p>𝑀{𝑅(𝑝 𝑛 𝑖 , 𝑥 𝑛 𝑖 , 𝜁 𝑛 𝑖 )} = 𝛻 𝑝 𝑖 𝐿 𝑖 (𝑝 𝑉 𝑖 ).</p><p>Considering that</p><formula xml:id="formula_18">𝛻 𝑝 𝑖 𝐿 𝑖 = 𝑀 { 𝜁 𝑛 𝑖 𝑒 𝛵 (𝑥 𝑛 𝑖 )𝑝 𝑛 𝑖 𝑒(𝑥 𝑛 𝑖 )|𝑝 𝑛 𝑖 = 𝑝 𝑖 },</formula><p>where 𝑒(𝑥 𝑛 𝑖 ) is the unit vector-indicator of the choice of the pure strategy 𝑥 𝑛 𝑖 ∈ 𝑋 𝑖 , on the basis of stochastic approximation we obtain a gradient method for solving the game problem:</p><formula xml:id="formula_19">𝑝 𝑛+1 𝑖 = 𝜋 𝜀 𝑛+1 𝑁 𝑖 {𝑝 𝑛 𝑖 − 𝛾 𝑛 𝜁 𝑛 𝑖 𝑒(𝑥 𝑛 𝑖 ) 𝑒 𝛵 (𝑥 𝑛 𝑖 )𝑝 𝑛 𝑖 }.<label>(13)</label></formula><p>The parameter 𝛾 𝑛 reduces the step size of the method so as to reach collectively optimal solutions of the game, ‖𝑝 𝑛 𝑖 − 𝑝 * 𝑖 ‖ → 0, and the parameter 𝜀 𝑛 expands the 𝜀-simplex: 𝑆 𝜀 𝑛+1 𝑁 𝑖 → 𝑆 𝑁 𝑖 . The values of these parameters can be calculated as follows:</p><formula xml:id="formula_21">𝛾 𝑛 = 𝛾𝑛 −𝛼 , 𝜀 𝑛 = 𝜀𝑛 −𝛽 ,<label>(14)</label></formula><p>where 𝛾 &gt; 0, 𝛼 ∈ (0,1], 𝜀 &gt; 0, 𝛽 &gt; 0.</p><p>The convergence of the stochastic game strategies to collectively optimal values 𝑝 * 𝑖 is determined by the ratios of the parameters 𝛾 𝑛 and 𝜀 𝑛 <ref type="formula" target="#formula_21">(14)</ref>, which must meet the fundamental conditions of stochastic approximation <ref type="bibr" target="#b36">[37,</ref><ref type="bibr" target="#b37">38]</ref>. 
As shown in <ref type="bibr" target="#b38">[39]</ref>, to ensure the root mean square (rms) convergence of the game method (13) to a neighborhood of the Nash point, the following relations must be fulfilled:</p><formula xml:id="formula_22">0 &lt; 𝛽 &lt; 𝛼 &lt; 1.<label>(15)</label></formula><p>The theoretical order of the asymptotic rate of convergence of method ( <ref type="formula" target="#formula_19">13</ref>) is 𝑛 −𝜃 , where 𝜃 = 𝑚𝑖𝑛{ 𝛽, 𝛼 − 𝛽, 1 − 𝛼} is the order parameter. The maximum value of the parameter 𝜃 = 1/2 is reached for</p><formula xml:id="formula_23">𝛼 − 𝛽 = 1/2 .</formula><p>The stochastic game begins with untrained mixed strategies of the agents: 𝑝 0 𝑖 = (1/𝑁 𝑖 , … , 1/𝑁 𝑖 ) ∀𝑖 ∈ 𝑉. At moments of time 𝑛 = 1, 2, … the mixed strategies are dynamically rearranged according to <ref type="formula" target="#formula_19">(13)</ref> for adaptive selection of pure strategies. One step of the repeated stochastic game consists in each player 𝑖 ∈ 𝑉 choosing a pure strategy 𝑥 𝑛 𝑖 (2) at time 𝑛 and receiving by time 𝑛+1 a current loss 𝜁 𝑛 𝑖 (6), which is used to calculate the new mixed strategy 𝑝 𝑛+1 𝑖 <ref type="formula" target="#formula_19">(13)</ref>. The stochastic agent game implements adaptive learning by trial and error, which requires a significant number of trials to find the desired solution. The learning process can be significantly accelerated by a computer implementation of the stochastic game with an appropriate adjustment of its parameters.</p></div>
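One update step of (13) with the learning-rate schedule (14) can be sketched as follows. The ε-simplex projector of [37] is replaced here by an exact Euclidean projection onto {p : Σ p(j) = 1, p(j) ≥ ε}, computed with the standard sort-based simplex projection after a change of variables; all names and default parameter values are illustrative:

```python
def project_simplex(v):
    """Euclidean projection of v onto the unit probability simplex."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for k, uk in enumerate(u, 1):
        css += uk
        t = (css - 1.0) / k
        if uk - t > 0.0:
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

def project_eps_simplex(v, eps):
    """Projection onto {p : sum(p) = 1, p_j >= eps}, assuming
    len(v) * eps < 1, via the substitution p = eps + scale * w."""
    scale = 1.0 - len(v) * eps
    w = project_simplex([(vi - eps) / scale for vi in v])
    return [eps + scale * wi for wi in w]

def update_mixed_strategy(p, x, loss, n, gamma=1.0, alpha=0.5,
                          eps=0.05, beta=0.25):
    """One step of (13)-(14): the chosen component x is pushed down in
    proportion to the current loss, then the vector is projected back."""
    gamma_n = gamma * n ** -alpha        # step size, schedule (14)
    eps_n = eps * (n + 1) ** -beta       # simplex parameter, schedule (14)
    v = list(p)
    v[x] -= gamma_n * loss / p[x]        # zeta * e(x) / (e^T(x) p)
    return project_eps_simplex(v, eps_n)
```

A positive loss on the chosen component pushes its probability down and redistributes the mass to the remaining strategies, while every component stays at least ε_n above zero.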
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">A game algorithm for finding the Hamiltonian cycle of a graph.</head><p>Step 1. Set the initial values of the parameters: 𝐶 = |𝑉| is the number of players, equal to the number of vertices of the graph; 𝑀 𝐶×𝐶 is the adjacency matrix of the graph; 𝑁 𝑖 is the number of pure strategies of the 𝑖-th player, equal to the degree of the 𝑖-th vertex of the graph; 𝑋 𝑖 = (𝑥 𝑖 [1], 𝑥 𝑖 [2], … , 𝑥 𝑖 [𝑁 𝑖 ]), 𝑖 = 1. . 𝐶 are the vectors of the players' pure strategies, where 𝑥 𝑖 [𝑗] is the number of the 𝑗-th edge incident to the 𝑖-th vertex of the graph; 𝑝 0 𝑖 = (1/𝑁 𝑖 , … , 1/𝑁 𝑖 ), 𝑖 = 1. . 𝐶 are the initial mixed strategies of the players; 𝛾 &gt; 0 is a parameter of the learning step; 𝛼 ∈ (0,1] is the order of the learning step; 𝜀 is a parameter of the 𝜀-simplex; 𝛽 &gt; 0 is the order of the expansion rate of the 𝜀-simplex; 𝜆 ∈ [0,1] is the weighting factor that determines the share of the penalty criteria in the formation of the players' losses; 𝑛 = 0 is the initial time moment; 𝑛 𝑚𝑎𝑥 is the maximum number of method steps.</p><p>Step 2. Select the action options 𝑥 𝑛 𝑖 ∈ 𝑋 𝑖 , 𝑖 = 1. . 𝐶 according to (2).</p><p>Step 3. Get the values of the current losses 𝜁 𝑛 𝑖 according to (6).</p><p>Step 4. Calculate the parameter values 𝛾 𝑛 , 𝜀 𝑛 according to <ref type="formula" target="#formula_21">(14)</ref>.</p><p>Step 5. Calculate the elements of the mixed strategy vectors 𝑝 𝑛 𝑖 , 𝑖 = 1. . 𝐶 according to (13).</p><p>Step 6. Calculate the self-organization characteristics of the stochastic game: 𝑍 𝑛 according to <ref type="formula" target="#formula_12">(9)</ref> and 𝐾 𝑛 according to (10).</p><p>Step 7. Set the next time moment 𝑛: = 𝑛 + 1.</p><p>Step 8. If 𝑛 &lt; 𝑛 𝑚𝑎𝑥 , then go to Step 2; otherwise the algorithm stops.</p></div>
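Steps 1-8 can be sketched end-to-end. The sketch below runs the game on a complete graph, uses only the path-length penalty (5) (i.e. 𝜆 = 0) and replaces the ε-simplex projection by a simple floor-and-renormalize step; the function names, parameter values, and these simplifications are assumptions, not the authors' exact implementation:

```python
import random

def stochastic_game(C=5, n_max=2000, gamma=1.0, alpha=0.5,
                    eps=0.01, beta=0.25, seed=7):
    """End-to-end sketch of Steps 1-8 on a complete graph with C vertices."""
    rng = random.Random(seed)
    acts = {i: [j for j in range(C) if j != i] for i in range(C)}   # X_i
    p = {i: [1.0 / (C - 1)] * (C - 1) for i in range(C)}            # p_0
    nxt = {}
    for n in range(1, n_max + 1):
        gamma_n, eps_n = gamma * n ** -alpha, eps * n ** -beta      # (14)
        # Step 2: each player samples a pure strategy from its mixed one
        idx = {i: rng.choices(range(C - 1), weights=p[i])[0]
               for i in range(C)}
        nxt = {i: acts[i][idx[i]] for i in range(C)}
        for i in range(C):
            # Step 3: current loss by the path-length criterion (5)
            path, m = [], i
            while m not in path:
                path.append(m)
                m = nxt[m]
            loss = (C - len(path)) / C
            # Step 5: penalize the chosen component, floor and renormalize
            q = p[i][:]
            q[idx[i]] = max(q[idx[i]] - gamma_n * loss, eps_n)
            s = sum(q)
            p[i] = [qj / s for qj in q]
    return nxt   # last joint move; a Hamiltonian cycle once self-organized

moves = stochastic_game()
```

Whether the returned joint move forms one Hamiltonian cycle can be checked by following it from vertex 0 and counting distinct vertices; runs typically settle into a cycle as 𝐾 𝑛 grows, although convergence for a particular seed and step budget is not guaranteed.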
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8.">Results of a computer experiment</head><p>First, consider an undirected full-connected graph 𝐺 = (𝑉, 𝐸) without loops. In the program, the graph is conveniently specified either by its adjacency matrix or by its incidence matrix. In each vertex of the graph we place one agent, so the graph is controlled by 𝐶 = |𝑉| agents. At each repetition of the stochastic game, every agent chooses one of the edges incident to its vertex. The strategies jointly chosen by several linked agents form a local path. The task of the game is the adaptive fusion of the local paths of the agents into one global Hamiltonian cycle.</p><p>To solve the stochastic game, we use the game method <ref type="formula" target="#formula_19">(13)</ref> with the following parameters: 𝜆 = 0.5, 𝛾 = 1, 𝜀 = 0.999 𝑁 𝑖 , 𝛼 = 0.5, 𝛽 = 0.25. In addition to the parameter values satisfying condition <ref type="formula" target="#formula_22">(15)</ref>, for the stochastic game to converge it is necessary that the graph have a Hamiltonian cycle.</p><p>The number of vertices of the graph and their degrees significantly affect the order of the rate of convergence of the game method. The functions of the average player losses 𝛧 𝑛 <ref type="formula" target="#formula_12">(9)</ref> and the strategy coordination coefficient 𝐾 𝑛 <ref type="formula" target="#formula_14">(10)</ref> characterize the course of the stochastic game of agent movement (Figure 2). A decrease in the average loss function 𝛧 𝑛 indicates that the target conditions (8) of convergence of the stochastic game are met. The approach of the value of the coordination coefficient 𝐾 𝑛 to 1 (that is, to logarithmic zero) indicates the self-organization of the players' strategies. For the given parameters of the game method, increasing the order 𝐶 of a full-connected graph leads to a significant increase in the number of learning steps of the stochastic game. 
Thus, for 𝐶 = 5, the rapid growth of the coordination coefficient begins at about 𝑛 = 10 3 steps of learning the stochastic game, for 𝐶 = 10 it takes a little more than 𝑛 = 10 4 steps, and for 𝐶 = 15 it takes about 𝑛 = 10 5 steps.</p><p>In addition to the order of a Hamiltonian graph, the convergence time of the stochastic game is also determined by its connectivity and by the parameters of the game method. Reducing the connectivity of the graph accelerates the self-organization of cycles.</p><p>Depending on the selected criteria and the initial parameters of the game method, several autonomous loops or one global loop may appear in the graph.</p><p>If the parameter 𝜆 of the complex penalty <ref type="formula" target="#formula_7">(6)</ref> takes values close to 1, then the dominant influence on the course of the game is exerted by the penalty criterion <ref type="formula" target="#formula_4">(3)</ref>, which minimizes the selection of each vertex of the graph by neighboring players. The application of this criterion can lead to the formation of several separate (autonomous) cycles in the graph, as shown in Figure <ref type="figure">3</ref>. For the parameter values given above, we obtain solutions of the stochastic game in pure strategies. The stochastic game thus provides a multivariate solution of the problem of constructing graph cycles, for example the cycles {2, 3, 2} and {1, 4, 5, 1}. The obtained results show that the simple interaction of agents within local subsets in the course of learning the stochastic game leads to a complex, coordinated behavior of the system, resulting in self-organizing patterns in the form of several autonomous cycles or one Hamiltonian cycle of the graph. In the process of learning, the stochastic game moves from the initial chaotic choice of the agents' strategies to their purposeful choice in the form of cycles.</p><p>Let us complicate the problem by extending it to random graphs <ref type="bibr" target="#b39">[40]</ref>. We apply the game method (13) to find the Hamiltonian cycle of a random graph. 
To do this, we assign to each vertex (or game agent) a probability 𝑞 𝑖 of failure. A recoverable failure of a vertex leads to the temporary loss of all its incident edges. The agent corresponding to such a vertex skips the current move of the stochastic game, and the neighboring players do not take its chosen strategy into account (since it is absent) when calculating their own current losses.</p><p>Criterion (3) is calculated only for those players (or vertices of the graph) and their neighbors that have not failed at the current step of the game. Criterion (5) for determining the length of the local path, in addition to the condition of entering a cycle, additionally considers the condition of reaching the first failed player encountered on the way.</p><p>Several implementations of player strategies for a random full-connected graph are shown in Figure <ref type="figure" target="#fig_4">5</ref>. In addition to vertex failures, failures of the edges of the graph can be introduced in a similar way. In case of failures, a temporary violation of the connectivity of the graph is allowed. Dashed lines with arrows indicate strategies that the participants of the game can choose when forming a path towards the failed players; reaching such a player is used as a limiting condition for calculating the current penalties <ref type="formula" target="#formula_6">(5)</ref>.</p><p>Figure <ref type="figure" target="#fig_5">6</ref> shows the graphs of the coefficient 𝐾 𝑛 of coordination of the players' strategies in the course of the game self-organization of Hamiltonian cycles of a random graph, which is an implementation of a full-connected graph with |𝑉| = 5 vertices. The weighting factor of criteria (3) and ( <ref type="formula" target="#formula_6">5</ref>) in convolution ( <ref type="formula" target="#formula_7">6</ref>) is 𝜆 = 0.5. 
The failure probabilities 𝑞 𝑖 = 𝑞 ∀𝑖 ∈ 𝑉 are taken the same for all vertices of the graph.</p><p>The graphs in Figure <ref type="figure" target="#fig_5">6</ref> are obtained for the following values of the failure probability: 𝑞 ∈ {0; 0.05; 0.1; 0.15; 0.2}. For higher failure probabilities, the convergence time of the game method increases significantly. As can be seen in Figure <ref type="figure" target="#fig_5">6</ref>, player failures lead to an increase in the number of search steps necessary for self-organization of the Hamiltonian cycle. This is because the game method must additionally adapt to random implementations of the given graph.</p><p>To isolate the Hamiltonian cycle of a random graph, for each vertex the conditional expectation of the edge number chosen by the corresponding agent at the current time is determined and rounded to an integer value. During the adaptive game, as 𝑛 → ∞ the limit values of the mathematical expectations of the edge numbers converge to one of the deterministic Hamiltonian cycles of the graph, for example the one shown in Figure <ref type="figure">4a</ref>. The stochastic game method cannot compete in efficiency with the well-known methods of constructing Hamiltonian cycles in deterministic graphs. It has other advantages and purposes: it can find Hamiltonian cycles of random graphs under conditions of uncertainty (the probability of failure of the vertices of the graph is not known a priori). On the other hand, the method of stochastic play is a good illustration of the self-organization of a Hamiltonian cycle based on the collection and processing of local data without the exchange of information among all players. The solution of the stochastic game manifests itself in the form of a global pattern of self-organization of the players' strategies, which is one of the Hamiltonian cycles of the graph. 
The slow (power) convergence of the game is explained by its stochastic nature and the lack of information among players about the full structure of the graph or its random implementations. A stochastic game simulates the evolutionary process of self-organization of the Hamiltonian cycle through self-training of game agents.</p><p>Consequently, the self-organization of the considered stochastic game consists in the formation of patterns of strategies of game agents in the form of Hamiltonian cycles that arise in a deterministic or random graph in the process of learning the recurrent method <ref type="bibr" target="#b12">(13)</ref> based on local interaction between agents, which leads to global coordination of the entire distributed system.</p></div>
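The rounded conditional expectation used above to isolate the deterministic Hamiltonian cycle from the learned mixed strategies of a random graph can be sketched as follows (a minimal illustration; the data layout is an assumption):

```python
def expected_moves(p, acts):
    """Rounded conditional expectation of each agent's chosen neighbor,
    the integer estimate used in Section 8 to isolate the Hamiltonian
    cycle; `p` maps vertex -> mixed strategy vector and `acts` maps
    vertex -> list of neighbor numbers (illustrative layout)."""
    return {i: int(round(sum(pj * xj for pj, xj in zip(p[i], acts[i]))))
            for i in p}
```

With a mixed strategy already concentrated on one neighbor, the estimate coincides with that neighbor, so in the limit of the adaptive game the map reproduces one deterministic Hamiltonian cycle.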
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9.">Conclusions</head><p>1. The complex problem of self-organization of Hamiltonian cycles of an undirected graph is solved on the basis of a stochastic game model. 2. Global Hamiltonian cycles arise as a result of the purposeful, locally conditioned choice of conflict-free strategies during adaptive learning of the stochastic game. 3. Self-organization of the Hamiltonian cycles of a graph is possible if the constraints on the parameters of the game method are derived from the general conditions of stochastic approximation. 4. The considered game method provides a power-law order of the convergence rate and requires a significant number of learning steps, since it works under conditions of incomplete a priori information. 5. Increasing the graph order extends the search process over a longer period and requires proper configuration of the parameters of the stochastic game. 6. The stochastic game method, as a method of random trials with adaptive data processing, requires more game steps than known deterministic methods, but it can work with random graphs with a priori unknown distributions. 7. Compared to deterministic graphs, the search for Hamiltonian cycles in random graphs requires more steps of the game method, since at each step of the game another implementation of the connections between the vertices of the graph is possible. 8. The stochastic game method of self-organization of Hamiltonian cycles can be used to build cryptographic key-exchange protocols, in zero-knowledge proof systems, and for solving distributed flow and transport problems and collective decision-making under uncertainty. 9. A promising study in this direction is the game simulation of self-organization of sensor networks and of ring oscillations of signals in neural networks, with application of the results in artificial intelligence systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Correspondence of graphs</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>) = 𝑃{𝑥 𝑛 𝑖 = 𝑥 𝑖 (𝑗)|𝑥 𝑡 𝑖 , 𝜁 𝑡 𝑖 (𝑡 = 1,2, … , 𝑛 − 1)}, 𝑗 = 1. . 𝑁 𝑖 , where {𝑥 𝑡 𝑖 |𝑡 = 1,2, … , 𝑛 − 1} is the background of the strategies chosen by the player with the number 𝑖; {𝜁 𝑡 𝑖 |𝑡 = 1,2, … , 𝑛 − 1} is the background of the losses received for this. The probability set 𝑝 𝑛 𝑖 = (𝑝 𝑛 𝑖 (1), 𝑝 𝑛 𝑖 (2), … , 𝑝 𝑛 𝑖 (𝑁 𝑖 )) ∀𝑖 ∈ 𝑉 specifies the vectors of mixed strategies of players who take values 𝑝 𝑛 𝑖 ∈ 𝑆 𝑁 𝑖 on 𝑁 𝑖 -dimensional unit simplexes: 𝑆 𝑁 𝑖 = {𝑝 |∑ 𝑝(𝑗) 𝑁 𝑖 𝑗=1 = 1; 𝑝(𝑗) ≥ 0 (𝑗 = 1. . 𝑁 𝑖 )}.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>characterize the course of a stochastic game of agent movement. Graphs of these functions for different values of order 𝐶 = |𝑉| (number of vertices) of a full-connected graph are shown in Figure 2 on a logarithmic scale. The image is bounded by the coordinates of the rectangular output area of the graphs. Self-organization indicators of a stochastic game for different values of the order of the full graph</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Self-organized isolated cycles of graphs. Figure 3a shows two possible variants of self-organizing isolated loops for a full-connected graph. An edge (2, 3) is a degenerate variant of a loop (a minimal loop), where agent 2 has chosen vertex 3 and agent 3 has chosen vertex 2. Figures 3b-3d show variants of isolated cycles for incomplete graphs of different structures. To determine the Hamiltonian cycle, the parameter 𝜆 must be set close to 0. Then criterion (5), which maximizes the lengths of local paths and leads to their asymptotic fusion in time into one global Hamiltonian cycle, will have a predominant effect on the course of the game. Variants of Hamiltonian cycles, as a result of self-organization of the stochastic game, are shown in Figure 4 for different graphs. The game problem has a multivariate solution in pure strategies, for example the cycle {…, 3, 8, 7, 12, 13, 14, 15, 11, 6, 10, 9, 4, 5, 1}. Self-organized Hamiltonian graph cycles</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Implementations of a random graph of player strategies</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: The dependence of the coordination coefficient on the probability of failure of vertices of a random graph</figDesc><graphic coords="13,192.50,435.79,209.25,146.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>𝑥n𝑖 = 𝑖𝑛𝑡(∑ 𝑝 𝑛 𝑖 𝑥 𝑛 𝑖 𝑁 𝑖 𝑖=1 |𝑥 𝑡 𝑖 , 𝜁 𝑡 𝑖 (𝑡 = 1,2, … , 𝑛 − 1)).</figDesc></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Camazine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-L</forename><surname>Deneubourg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">R</forename><surname>Franks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sneyd</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Theraulaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bonabeau</surname></persName>
		</author>
		<title level="m">Self-Organization in Biological Systems</title>
				<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Cooperative Coordination and Formation Control for Multi-agent Systems</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Game strategies for decision making in hierarchical systems: I. Mathematical model of the stochastic game</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
		<idno type="DOI">10.20535/SRIT.2308-8893.2019.3.06</idno>
	</analytic>
	<monogr>
		<title level="j">System Research and Information Technologies</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="63" to="75" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Game strategies for decision making in hierarchical systems: II. Computer simulation of the stochastic game</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
		<idno type="DOI">10.20535/SRIT.2308-8893.2019.4.11</idno>
	</analytic>
	<monogr>
		<title level="j">System Research and Information Technologies</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="105" to="118" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m">Self-organization: Theories and Methods</title>
				<editor>
			<persName><forename type="first">W</forename><forename type="middle">J</forename><surname>Zhang</surname></persName>
		</editor>
		<meeting><address><addrLine>USA</addrLine></address></meeting>
		<imprint>
			<publisher>Nova Science Publishers</publisher>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Game model of self-organizing of multiagent systems</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Information systems and networks</title>
				<imprint>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">829</biblScope>
			<biblScope unit="page" from="161" to="176" />
		</imprint>
		<respStmt>
			<orgName>Bulletin of Lviv Polytechnic National University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Game self-organization of agent&apos;s system with an individual estimation of strategies</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer systems and networks</title>
		<imprint>
			<biblScope unit="volume">546</biblScope>
			<biblScope unit="page" from="75" to="85" />
			<date type="published" when="2005">2005</date>
		</imprint>
		<respStmt>
			<orgName>Lviv Polytechnic national university</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Bulletin of</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Self-Organization in Pattern Formation: Review</title>
		<author>
			<persName><forename type="first">F</forename><surname>Schweisguth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Corson</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.devcel.2019.05.019</idno>
	</analytic>
	<monogr>
		<title level="j">Developmental Cell</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="659" to="677" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Patterns of self-organizing strategies in the game of mobile agents</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jurinets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Kis</surname></persName>
		</author>
		<idno type="DOI">10.23939/sisn2020.07.024</idno>
	</analytic>
	<monogr>
		<title level="m">Information systems and networks</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="24" to="34" />
		</imprint>
		<respStmt>
			<orgName>Lviv polytechnic national university</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
		<idno type="DOI">10.23939/sisn2021.09.131</idno>
		<title level="m">Self-organizing strategies in the game of agent movement</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="131" to="141" />
		</imprint>
		<respStmt>
			<orgName>Lviv polytechnic national university</orgName>
		</respStmt>
	</monogr>
	<note>Information systems and networks</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Christofides</surname></persName>
		</author>
		<title level="m">Graph theory: an algorithmic approach</title>
				<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Academic Press</publisher>
			<date type="published" when="1975">1975</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Graph Theory: An Introduction to Proofs</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">R</forename><surname>Saoub</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Algorithms, and Applications</title>
				<imprint>
			<publisher>Chapman and Hall/CRC</publisher>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">The Planar Hamiltonian Circuit Problem is NP-Complete</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Garey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Endre</surname></persName>
		</author>
		<idno type="DOI">10.1137/0205049</idno>
	</analytic>
	<monogr>
		<title level="j">SIAM Journal on Computing</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="704" to="714" />
			<date type="published" when="1976">1976</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Efficient solution for finding Hamilton cycles in undirected graphs</title>
		<author>
			<persName><forename type="first">W</forename><surname>Alhalabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kitanneh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alharbi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Balfakih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sarirete</surname></persName>
		</author>
		<idno type="DOI">10.1186/s40064-016-2746-8</idno>
	</analytic>
	<monogr>
		<title level="j">SpringerPlus</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="1" to="14" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The Traveling Salesman Problem</title>
		<idno type="DOI">10.1007/978-3-540-71844-4_21</idno>
	</analytic>
	<monogr>
		<title level="m">Combinatorial Optimization: Theory and Algorithms</title>
		<title level="s">Algorithms and Combinatorics</title>
		<author>
			<persName><forename type="first">B</forename><surname>Korte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vygen</surname></persName>
		</author>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page" from="527" to="562" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A survey on existing conditions of Hamiltonian graphs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kumar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mishra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Mathematical Control Science and Applications</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="28" to="35" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Application of Hamilton&apos;s graph theory in new technologies</title>
		<author>
			<persName><forename type="first">Ł</forename><surname>Waligóra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">World Scientific News</title>
		<imprint>
			<biblScope unit="volume">89</biblScope>
			<biblScope unit="page" from="71" to="81" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Optimal Routing and Hamiltonian Cycle in Petersen-Torus Networks</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Seo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Jang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2008 Third International Conference on Convergence and Hybrid Information Technology</title>
		<meeting>the 2008 Third International Conference on Convergence and Hybrid Information Technology</meeting>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="303" to="308" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Optimization of routes of aircraft performing agro-aviation works</title>
		<author>
			<persName><forename type="first">F</forename><surname>Sharifov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Kandiba</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Science-intensive technology</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">23</biblScope>
			<biblScope unit="page" from="319" to="325" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Methods of solving problems of finding optimal tourist routes by ant colony imitation algorithms</title>
		<author>
			<persName><forename type="first">V</forename><surname>Lytvyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ugrin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">1193</biblScope>
			<biblScope unit="page" from="47" to="60" />
			<pubPlace>Kharkiv</pubPlace>
		</imprint>
		<respStmt>
			<orgName>Bulletin of the National Technical University "Kharkiv Polytechnic Institute", Collection of scientific works: Computer Science and Modeling</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Energy-efficient renewable scheme for rechargeable sensor networks</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zheng</surname></persName>
		</author>
		<idno type="DOI">10.1186/s13638-020-01687-4</idno>
	</analytic>
	<monogr>
		<title level="j">EURASIP Journal on Wireless Communications and Networking</title>
		<imprint>
			<biblScope unit="volume">2020</biblScope>
			<biblScope unit="issue">74</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">An improved Routing Optimization Algorithm Based on Travelling Salesman Problem for Social Networks</title>
		<author>
			<persName><forename type="first">N</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.3390/su9060985</idno>
	</analytic>
	<monogr>
		<title level="j">Sustainability</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1" to="15" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">What do Eulerian and Hamiltonian cycles have to do with genome assembly?</title>
		<author>
			<persName><forename type="first">P</forename><surname>Medvedev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pop</surname></persName>
		</author>
		<idno type="DOI">10.1371/journal.pcbi.1008928</idno>
	</analytic>
	<monogr>
		<title level="j">PLoS Computational Biology</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="5" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Identification of fingerprints based on Hamiltonian cycle of distribution of local features</title>
		<author>
			<persName><forename type="first">O</forename><surname>Melkozerova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rassomakhin</surname></persName>
		</author>
		<idno type="DOI">10.26565/2304-6201-2019-44-06</idno>
	</analytic>
	<monogr>
		<title level="j">Bulletin of V. N. Karazin Kharkiv National University, Series: Mathematical Modelling, Information Technology, Automated Control Systems</title>
		<imprint>
			<biblScope unit="volume">44</biblScope>
			<biblScope unit="page" from="51" to="65" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Application of graph theory in communication management problems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kavun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Revak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Scientific Bulletin of Lviv State University of Internal Affairs</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="225" to="240" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">B</forename><surname>Barak</surname></persName>
		</author>
		<ptr target="https://intensecrypto.org/public/lec_14_zero_knowledge.html" />
		<title level="m">An Intensive Introduction to Cryptography, Chapter 13: Zero knowledge proofs</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Gulyanytsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mulesa</surname></persName>
		</author>
		<title level="m">Applied methods of combinatorial optimization: Tutorial</title>
				<meeting><address><addrLine>Kyiv</addrLine></address></meeting>
		<imprint>
			<publisher>Publishing and printing center</publisher>
			<date type="published" when="2016">2016</date>
		</imprint>
		<respStmt>
			<orgName>Kyiv University</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Graph Learning for Combinatorial Optimization: A Survey of State-of-the-Art</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<idno type="DOI">10.1007/s41019-021-00155-3</idno>
	</analytic>
	<monogr>
		<title level="j">Data Science and Engineering</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="119" to="141" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Investigation of the efficiency of common edges decomposition algorithm for solving large size traveling salesman problem</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kutelmakh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Uhrynovskyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Young Scientist</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">52</biblScope>
			<biblScope unit="page" from="1" to="5" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<title level="m" type="main">Backtracking (the) Algorithms on the Hamiltonian Cycle Problem</title>
		<author>
			<persName><forename type="first">J</forename><surname>Sleegers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Den Berg</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2107.00314v1 [cs.DS]</idno>
		<ptr target="https://arxiv.org/pdf/2107.00314v1.pdf" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">A new method for finding a Hamiltonian cycle on a graph</title>
		<author>
			<persName><forename type="first">V</forename><surname>Prokopenkov</surname></persName>
		</author>
		<idno type="DOI">10.20998/2413-3000.2020.2.6</idno>
	</analytic>
	<monogr>
		<title level="j">Bulletin of the National Technical University "Kharkiv Polytechnic Institute", Series: Strategic management, portfolio management, and projects</title>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="43" to="49" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Solving the Hamiltonian cycle problem via an artificial neural network</title>
		<author>
			<persName><forename type="first">T</forename><surname>Tambouratzis</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0020-0190(00)00116-2</idno>
	</analytic>
	<monogr>
		<title level="j">Information Processing Letters</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="237" to="242" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">A genetic Algorithm for a Hamiltonian Path Problem</title>
		<author>
			<persName><forename type="first">E</forename><surname>Ponce-De-Leon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ochoa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Santana</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Industrial and Engineering Applications of Artificial Intelligence and Expert Systems</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="13" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">A modification of the ant colony method for solving the travelling salesman problem by a team of autonomous agents</title>
		<author>
			<persName><forename type="first">O</forename><surname>Muliarevych</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Golembo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of Lviv Polytechnic National University: Computer systems and networks</title>
		<imprint>
			<biblScope unit="volume">717</biblScope>
			<biblScope unit="page" from="24" to="30" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">B.-S</forename><surname>Chen</surname></persName>
		</author>
		<title level="m">Stochastic Game Strategies and their Applications</title>
				<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Ungureanu</surname></persName>
		</author>
		<title level="m">Pareto-Nash-Stackelberg Game and Control Theory: Intelligent Paradigms and Applications</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<title level="m" type="main">Learning Automata: Theory and Applications</title>
		<author>
			<persName><forename type="first">K</forename><surname>Najim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">S</forename><surname>Poznyak</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2014">2014</date>
			<publisher>Elsevier</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m" type="main">Stochastic Approximation and Recursive Algorithms and Applications</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Kushner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">G</forename><surname>Yin</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Convergence of the game gradient method in sign-positive environments</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kravets</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bulletin of Lviv Polytechnic National University: Computer systems and networks</title>
		<imprint>
			<biblScope unit="volume">438</biblScope>
			<biblScope unit="page" from="83" to="89" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Frieze</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1901.07139v20 [math.CO]</idno>
		<ptr target="https://arxiv.org/pdf/1901.07139v20.pdf" />
		<title level="m">Hamilton Cycles in Random Graphs: a bibliography</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
