Game Task of Ontological Project Coverage

Petro Kravets, Vasyl Lytvyn, Victoria Vysotska, Yevhen Burov and Ivanna Andrusyak
Lviv Polytechnic National University, S. Bandera street, 12, Lviv, 79013, Ukraine

Abstract
In this work, a multi-agent game model is developed to form virtual teams of project implementers based on subject ontology libraries. The competencies and abilities of agents required to execute projects are defined by subsets of ontologies from an ontology library. Intelligent agents randomly, simultaneously and independently select one of the projects at discrete times. Agents that choose the same project determine the current composition of the team for its execution. For each team, a current loss is calculated for failing to cover the required project competencies with the agents' combined capabilities. This penalty is used to adaptively adjust the players' mixed strategies. The probability of forming teams whose current composition has reduced the penalty for not covering ontologies increases. During the repetitive stochastic game, agents form vectors of mixed strategies that minimize average losses for non-coverage of projects. To solve the problem of game-based project coverage, an adaptive recurrent Markov method is developed based on a stochastic approximation of the modified complementary slackness condition, which holds at Nash equilibrium points. Computer simulation has confirmed the viability of using the stochastic game model to form project teams with the necessary ontological support under uncertainty. The convergence of the game method is ensured by conforming to the fundamental conditions and limitations of stochastic optimization. The validity of the experimental results is confirmed by the repeatability of the results obtained for different sequences of random variables.

Keywords
Multi-agent system, ontology, project, stochastic game, adaptive game method.

1.
Introduction

In today's information society, with advanced means of communication through mobile devices and computer networks, the formation of various virtual organizations and communities is relevant. Such virtual associations of people around professional or other interests are intended to solve a variety of problems promptly and efficiently, such as completing regular project tasks, creating start-ups to attract investors, organizing network marketing or distance learning, solving complex issues in science, economics and public administration, building various Internet services, discussing political and social processes, etc. [1-3]. Virtual communities as project execution teams provide such advantages as geographical distribution, decentralization, lack of departmental barriers, high mobilization ability, integration of the best experience and modern technologies for the implementation of projects, and the possibility to attract specialists with diverse expertise, resulting in the creation of favorable conditions for professional partnership. Furthermore, they provide better coordination of efforts to achieve the goal, competitiveness, prompt resolution of urgent problems, flexibility of structure and function, adaptability to changes in the outside world, and remote access to computing and information resources.

Proceedings of the 2nd International Workshop IT Project Management (ITPM 2021), February 16-18, 2021, Slavsko, Lviv region, Ukraine
EMAIL: Petro.O.Kravets@lpnu.ua (P. Kravets); vasyl.v.lytvyn@lpnu.ua (V. Lytvyn); victoria.a.vysotska@lpnu.ua (V. Vysotska); yevhen.v.burov@lpnu.ua (Y. Burov); andrusyak.ivanna@gmail.com (I. Andrusyak)
ORCID: 0000-0001-8569-423X (P. Kravets); 0000-0002-9676-0180 (V. Lytvyn); 0000-0001-6417-3689 (V. Vysotska); 0000-0001-8653-1520 (Y. Burov); 0000-0001-6601-4374 (I. Andrusyak)
© 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org)

The modeling of the dynamics of virtual associations in a distributed environment can be performed using multi-agent systems [4-7]. An agent is an informational object with elements of artificial intelligence that can make autonomous decisions and interact with other agents and with humans to achieve its goal. A group of such agents that solve a common problem in a computer information network is called a multi-agent system. In order to accomplish their task, the agents need to have knowledge in one or more subject areas, represented formally in the form of ontologies. Interacting with each other, agents can query and compare ontologies, integrate them to gain new knowledge, intersect them to discover shared knowledge, and enrich or correct them [8-12]. Generally, agents' knowledge is highly specialized. Several ontologies describing different subject areas are usually required to complete a project. On the other hand, for full intellectual and informational support of the project, the agents must be able to form teams (communities, groups, coalitions). A team is a community of agents formed to reach a goal or to accomplish a task using shared knowledge and collaboration. For a project to be successful, the combined capabilities of the agent team in knowledge of domain-specific ontologies must cover the competencies required to complete the project. In addition, the team facilitates the organization and coordination of agents, decreases the complexity of the communication process and reduces the reaction time when changes occur in the information environment. In order to create a team, agents must be able to find and identify each other on the network by negotiating goals and attributes. Centralized team formation limits agent autonomy and is problematic when using distributed data sources.
Ideally, ontology agents should be able to group themselves using self-organizing mechanisms, through coordinated interaction and the application of adaptive decision-making rules that use only local information [13-15]. Ontological project support is a dynamic process with elements of uncertainty arising from unclearly defined goals, uncertain initial data, changes during project implementation, the development of ontologies over time, imperfect knowledge of project contractors, and uncontrolled external factors [16-19]. Therefore, ontology agents involved in project implementation must be built as adaptive, self-learning systems. Multi-agent support for virtual communities has received considerable attention in the current scientific literature [20-22]. However, the problem of adaptive ontological project coverage based on the formation of teams of agents with problem-oriented knowledge has not been sufficiently researched. Project coverage belongs to the class of NP-hard combinatorial optimization problems. Approximate algorithms, such as greedy, genetic, ant colony, artificial neural networks, and others [23-28], are used to solve such problems in acceptable polynomial time. In this paper, we propose a new method of approximating the solution of the project coverage problem based on the results of stochastic game theory [29-31]. The formation of agent teams to execute projects is formulated as a competitive task of assigning each agent to one of the projects. In the process of finding the optimal coverage, an agent may move from one project to another, which may temporarily disrupt the ontological support of a project and make its execution impossible. Competition problems are studied by game theory and, under uncertainty, by stochastic game theory. A discrete deterministic game can be solved in a finite number of computational steps. A discrete stochastic repetitive game unfolds over an unlimited period.
This game provides a multi-step adaptive search for one of the solutions of the problem with a given accuracy and within a practically acceptable time limit. It can be used to solve deterministic or stochastic multiple-criteria optimization problems, but it is especially useful and efficient under stochastic uncertainty, when a complete iteration over variants cannot be performed due to the random response of a controlled system to the choice of the same strategy at different times. The adaptive stochastic game mechanism compensates for the lack of a priori information by collecting and processing current data at each step of the game. Considering the presence of competitive goals and a priori uncertain factors in project management, it is important to apply stochastic game methods for the ontological support of projects. The purpose of this work is to develop a self-learning game model for ontological project support by forming teams of agents in an uncertain environment. To achieve this goal it is necessary to do the following:
• Formulate a stochastic game problem of covering projects by ontology agents;
• Create an adaptive method and algorithm for the game problem solution;
• Develop a computer program model of game-based selection of ontology agents to execute projects;
• Analyze and discuss the results obtained.

2. Mathematical model of stochastic game

Let us start with a library of ontologies $\Omega = \{O_1, O_2, \dots, O_q\}$, where each element specifies knowledge in a specific subject area. It is required to organize the development of $m$ projects $\Pi = \{\pi_1, \pi_2, \dots, \pi_m\}$ with corresponding ontological support. Each project has an associated set of ontological knowledge (competence) $\Omega_k = \{O_1, O_2, \dots, O_r\} \subseteq \Omega$, required for its execution. The set of agents $A = \{A_1, A_2, \dots, A_n\}$, $n \ge m$, defines the qualified labor on the market. The capabilities of each agent are determined by a set of ontologies $A_i = \{O_1^i, O_2^i, \dots, O_s^i\} \subseteq \Omega$.
In the general case $A_i \cap A_j \neq \emptyset$, so the agents can have the same capabilities in one or several knowledge areas. Let us assume the completeness of the capabilities of the set of agents with respect to the competencies required to execute the projects. Without loss of generality, we will also assume that the knowledge of all agents covers the library of ontologies, $\bigcup_{A_i \in A} A_i = \Omega$, required for the execution of the projects, $\bigcup_{\pi_k \in \Pi} \Omega_k = \Omega$. Therefore, the agents' ontological knowledge is sufficient for the execution of all projects. We need to form a set of virtual agent teams $\Gamma = \{G_1, G_2, \dots, G_m\}$ for the execution of all projects. Each team corresponds to the agent group $G_k = \{A_1^k, A_2^k, \dots, A_g^k\}$, $k = 1..m$, where $\bigcup_{k=1..m} G_k = A$ and $G_k \cap G_j = \emptyset$ for $k \neq j$. The capabilities of the agent teams must meet the competency requirements of the respective projects. The formation of virtual teams of agents will be performed by the stochastic game method, which is specified by the tuple
$( A, U^i, \xi^i \mid A_i \in A )$,   (1)
where $A$ is the set of agents or players; $U^i = \{u_1^i, u_2^i, \dots, u_m^i\}$ is the set of pure strategies of player $i$, which determines his affiliation with one of the teams; $\xi^i : U \to R^1$ is the cost function of player $i$; $U = \prod_{A_i \in A} U^i$ is the set of combined strategies obtained by the joint choice of all players. Agents choose one of the teams themselves. Possible choices are given by the strategy vectors $U$. The choices are made independently and randomly at times $t = 1, 2, \dots$. An agent $A_i \in A$ is included in group $G_k$ if its selected pure strategy $u_t^i$ corresponds to the strategy of the group $u_t^k$:
$G_k = \bigcup_{A_i \in A} \chi( u_t^i = u_t^k ) \cdot A_i$, $k = 1..m$,   (2)
where $\chi(\cdot) \in \{0,1\}$ is the indicator function of an event, $1 \cdot A_i = A_i$, $0 \cdot A_i = \emptyset$. A prerequisite for the successful completion of a project is its full ontological support by a team of agents. The capabilities of the agent team should cover the competencies required to complete the project:
$\bigcup_{A_j^k \in G_k} A_j^k \supseteq \Omega_k$, $k = 1..m$.   (3)
It is advisable to ensure the perfect coverage of all projects, i.e., that condition (3) holds for every $k = 1..m$. For a breach of project coverage, a penalty is assigned, measured by the relative number of uncovered project ontologies (a dimensionless value):
$\xi_t^k[1] = \bigl| \Omega_k \setminus \bigcup_{A_j^k \in G_k} A_j^k \bigr| \,/\, |\Omega_k|$, $k = 1..m$,   (4)
where $|\cdot|$ is the cardinality of a set, $\Omega_k \subseteq \Omega$, $\xi_t^k[1] \in [0,1]$. Formula (4) does not rule out the possibility that all agents select one or a few projects while the remaining projects remain uncovered. Thus, by selecting only one of the projects, the agents knowingly (due to the completeness of the agents' coverage of the ontology library and of the competencies required to execute the projects) ensure its coverage and receive a minimal loss, which further encourages them to remain on that project. Therefore, such an assessment is insufficient when the coverage of all projects is required. Since several agents may have knowledge of the same subject areas, a combinatorial optimization problem arises, related to the minimum required project coverage. This problem belongs to the class of NP-hard problems. Instead of solving the complex problem of minimum set coverage, we introduce a payment for deviation from the planned cost of the project (a dimensionless relative value):
$\xi_t^k[2] = \bigl( C^*(\pi_k) - C(\pi_k) \bigr) \,/\, \max\bigl( C^*(\pi_k), C(\pi_k) \bigr)$,   (5)
where $C^*(\pi_k) = \sum_{A_j^k \in G_k} C(A_j^k)$ is the cost of coverage of project $k$; $C(A_j^k) > 0$ is the cost of services of agent $j$ taking part in the execution of project $k$ (a self-assessment of the agent's abilities in monetary terms); $C(\pi_k) > 0$ is the planned cost of execution of project $k$; $\xi_t^k[2] \in [-1,1]$. If $\xi_t^k[2] < 0$, then we have an incentive (a negative penalty), otherwise a penalty. This encourages the selection of a team of agents with a minimum total cost of services offered.
If the total cost of the offered services exceeds the planned cost of the project, the selection of such team members will be penalized. Thus we avoid the ontological over-coverage of the project by the team of contractors (within the project's estimated cost), or, in other words, the duplication of the capabilities of the agents who selected the specific project. The combined fine for failure to organize the implementation of project $\pi_k$ consists of penalties (4) and (5):
$\xi_t^k = \lambda \xi_t^k[1] + (1 - \lambda) \xi_t^k[2] + \theta_t$,   (6)
where $\lambda \in [0,1]$ is a weight coefficient and $\theta_t$ is a random value (additive white Gaussian noise), which reflects the stochastic uncertainty of the task. If $\theta_t = 0$ for all $t = 1, 2, \dots$, then the current loss $\xi_t^k \in [-1,1]$. All agents of the team $G_k$ taking part in the execution of project $\pi_k$ obtain the same current loss (6):
$\xi_t^i = \xi_t^k$ for all $A_i \in G_k$, $G_k \in \Gamma$.   (7)
We assume that the random losses $\{\xi_t^i(u)\}$ of the players are independent for all $u \in U$, $i = 1..n$, $t = 1, 2, \dots$, have a constant mean value $E\{\xi_t^i(u)\} = v(u) = \mathrm{const}$ and a bounded second moment $\sup_t E\{[\xi_t^i(u)]^2\} = \sigma^2(u) < \infty$. The stochastic parameters of the random losses are not known to the players a priori. The course of the game is evaluated by the functions of the average losses of the agents:
$Z_t^i = \frac{1}{t} \sum_{\tau=1}^{t} \xi_\tau^i = \lambda \bar{\xi}_t^i[1] + (1 - \lambda) \bar{\xi}_t^i[2] + \bar{\theta}_t$, $A_i \in A$,   (8)
where $\bar{\xi}_t^i[1] = \frac{1}{t} \sum_{\tau=1}^{t} \xi_\tau^i[1]$ is the average-loss function for insufficient ontological support of the project; $\bar{\xi}_t^i[2] = \frac{1}{t} \sum_{\tau=1}^{t} \xi_\tau^i[2]$ is the average-loss function for deviation from the projected cost of the project; $\bar{\theta}_t = \frac{1}{t} \sum_{\tau=1}^{t} \theta_\tau$ is the current average of the random noise, i.e., the current estimate of its mean value (for Gaussian white noise $\lim_{t \to \infty} \bar{\theta}_t = 0$). The goal of each player is to minimize its own function of average losses (8) over time:
$\lim_{t \to \infty} Z_t^i \to \min$, $A_i \in A$.   (9)
Thus, the stochastic game of ontological support for projects is as follows.
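A minimal sketch of the combined loss (6) and the running average in (8), assuming the noise term is Gaussian; the incremental-average form and all names are illustrative, not from the paper:

```python
import random

def combined_loss(xi1, xi2, lam, noise_sd=0.0):
    """Combined current loss (6): weighted sum of penalties (4) and (5)
    plus optional white Gaussian noise."""
    return lam * xi1 + (1.0 - lam) * xi2 + random.gauss(0.0, noise_sd)

def running_average(prev_avg, t, value):
    """Incremental form of the average loss (8):
    Z_t = Z_{t-1} + (xi_t - Z_{t-1}) / t."""
    return prev_avg + (value - prev_avg) / t
```

The incremental form avoids storing the whole loss history, which matters when the game runs for many thousands of steps.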
By calculating the current losses $\{\xi_t^i\}$, each player $A_i \in A$ should learn how to choose pure strategies $\{u_t^i\}$, $t = 1, 2, \dots$, so as to satisfy the criteria system (9) over time. The quality of the formation of agent teams in the game is evaluated by the following characteristics: 1) the system function of the average losses of the multi-agent system:
$Z_t = \frac{1}{n} \sum_{i=1}^{n} Z_t^i = \lambda \bar{\xi}_t[1] + (1 - \lambda) \bar{\xi}_t[2] + \bar{\theta}_t$,   (10)
where $n$ is the number of agents; $\bar{\xi}_t[1] = \frac{1}{n} \sum_{i=1}^{n} \bar{\xi}_t^i[1]$ is the systemic loss component reflecting the lack of project coverage; $\bar{\xi}_t[2] = \frac{1}{n} \sum_{i=1}^{n} \bar{\xi}_t^i[2]$ is the systemic loss component reflecting the deviation from the project cost balance; 2) the average norm of the players' mixed strategies:
$\Delta_t = \frac{1}{n t} \sum_{\tau=1}^{t} \sum_{i=1}^{n} \| p_\tau^i \|$,   (11)
where $\|\cdot\|$ is the Euclidean vector norm. Game solutions should satisfy one of the equilibrium conditions, such as Nash, Pareto, or another, depending on the method of forming the strategy sequences $\{u_t^i\}$, $A_i \in A$.

3. Stochastic game solving method

We create the sequences of pure strategies $\{u_t^i\}$ needed to solve the game task based on random distributions obtained from dynamic vectors of mixed strategies $p_t^i = ( p_t^i[1], p_t^i[2], \dots, p_t^i[m] )$, which contain the conditional probabilities of agent $i$ belonging to team $k$:
$p_t^i[k] = P\{ u_t^i = u^i[k] \mid u_\tau^i, \xi_\tau^i \ (\tau = 1, 2, \dots, t-1) \}$, $k = 1..m$,   (12)
where $\{u_\tau^i\}$ is the history of strategies chosen by player $i$ and $\{\xi_\tau^i\}$ is the history of losses associated with those choices. We construct the stochastic game solving method on the basis of a stochastic approximation of the complementary slackness condition of the deterministic game, which holds for mixed strategies at Nash equilibrium points [32]. To do this, we define the polylinear function of the average losses of the deterministic game:
$V^i(p) = \sum_{u \in U} v^i(u) \prod_{A_j \in A;\, u^j \in u} p^j(u^j)$,   (13)
where $v^i(u) = E\{\xi_t^i(u)\}$.
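For a small game, the polylinear average-loss function (13) can be evaluated directly by enumerating the joint pure-strategy profiles. The sketch below and its loss table are a hypothetical example, not data from the paper:

```python
from itertools import product

def polylinear_loss(v, p):
    """Function (13): expected loss under mixed strategies p, where v maps a
    joint pure-strategy profile (u^1, ..., u^n) to the mean loss v(u)."""
    total = 0.0
    for u in product(*(range(len(p_i)) for p_i in p)):
        prob = 1.0
        for i, k in enumerate(u):
            prob *= p[i][k]  # product of the players' choice probabilities
        total += v[u] * prob
    return total

# Two players, two strategies each: loss 1 when both pick the same project.
v = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 1.0}
```

With uniform mixed strategies [[0.5, 0.5], [0.5, 0.5]] this table yields an expected loss of 0.5, while the pure profile (0, 1) yields 0.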
Then the complementary slackness condition in vector form is:
$\nabla_{p^i} V^i(p) - e_m V^i(p) = 0$, $A_i \in A$,   (14)
where $\nabla_{p^i} V^i(p)$ is the gradient of the polylinear function of average losses; $e_m = (1_k \mid k = 1..m)$ is the all-ones vector; $p \in S^M$ is a combined mixed strategy defined on the unit simplex $S^M$ ($M = mn$). To account for solutions on the boundaries of the unit simplex, we weigh the complementary slackness vector by the elements of the mixed strategy vector:
$\mathrm{diag}(p^i) \bigl( \nabla_{p^i} V^i(p) - e_m V^i(p) \bigr) = 0$,   (15)
where $\mathrm{diag}(p^i)$ is the square diagonal matrix of order $m$ composed of the elements of the vector $p^i$, $A_i \in A$. Taking into consideration that
$\mathrm{diag}(p^i) [\nabla_{p^i} V^i - e_m V^i] = E\{ \xi_t^i [e(u_t^i) - p_t^i] \mid p_t^i = p^i \}$,   (16)
and using the stochastic approximation method [34-36], we obtain the recurrent expression:
$p_{t+1}^i = \pi_{\varepsilon_{t+1}}^m \bigl[ p_t^i - \gamma_t \xi_t^i ( e(u_t^i) - p_t^i ) \bigr]$, $A_i \in A$,   (17)
where $E$ is the expectation sign; $\pi_{\varepsilon_{t+1}}^m$ is the projection onto the $m$-dimensional unit simplex $S^m$ [33]; $\gamma_t > 0$ and $\varepsilon_t > 0$ are monotonically decreasing sequences of positive numbers; $e(u_t^i)$ is a unit vector indicating the agent's choice of pure strategy $u_t^i = u^i \in U^i$. The projection $\pi_{\varepsilon_{t+1}}^m$ onto the expandable $\varepsilon$-simplex $S_{\varepsilon_{t+1}}^m \subseteq S^m$ ensures that the condition $p_t^i[k] \ge \varepsilon_t$, $k = 1..m$, holds, which is necessary for the completeness of statistical information on the choice of pure strategies, and the parameter $\varepsilon_t \to 0$ is used as an additional control element for the convergence of the recurrent method. The parameters $\gamma_t$ and $\varepsilon_t$ can be calculated as follows:
$\gamma_t = \gamma t^{-\alpha}$, $\varepsilon_t = \varepsilon t^{-\beta}$,   (18)
where $\gamma > 0$; $\alpha > 0$; $\varepsilon > 0$; $\beta > 0$. The convergence of the mixed strategies (17) to optimal values with probability 1 and in the root-mean-square sense is determined by the ratios of the parameters $\gamma_t$ and $\varepsilon_t$, which must satisfy the fundamental conditions of stochastic approximation [33-35].
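The recurrent update (17) with the step sequences (18) can be sketched as follows. This is an assumption-laden illustration: the projection onto the $\varepsilon$-simplex is approximated here by a simple clip-and-renormalize step rather than the exact Euclidean projection, and the default parameter values are only plausible placeholders.

```python
def project_eps_simplex(p, eps):
    """Approximate projection onto the eps-simplex: keep every component at
    least eps, then renormalize to sum to one (a heuristic stand-in for the
    exact projection in (17))."""
    q = [max(x, eps) for x in p]
    s = sum(q)
    return [x / s for x in q]

def update_strategy(p, chosen, loss, t, gamma=1.0, alpha=0.01, eps0=0.4, beta=2.0):
    """One step of (17)-(18): move probability away from a strategy that
    incurred a positive loss, then project back onto the shrinking eps-simplex."""
    gamma_t = gamma * t ** (-alpha)  # search step, formula (18)
    eps_t = eps0 * t ** (-beta)      # simplex parameter, formula (18)
    e = [1.0 if k == chosen else 0.0 for k in range(len(p))]
    q = [p[k] - gamma_t * loss * (e[k] - p[k]) for k in range(len(p))]
    return project_eps_simplex(q, eps_t)

def choose_strategy(p, omega):
    """Sampling a pure strategy: first index whose cumulative probability
    exceeds the uniform random number omega."""
    acc = 0.0
    for k, pk in enumerate(p):
        acc += pk
        if omega < acc:
            return k
    return len(p) - 1
```

A positive loss on the chosen strategy shifts probability mass toward the other strategies, which is exactly the adaptive mechanism the method relies on.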
The choice of a pure strategy $u_t^i[k]$, $A_i \in A$, is accomplished by each player based on the dynamic random distribution (17):
$k = \arg\min_{k \in \{1..m\}} \bigl\{ \sum_{j=1}^{k} p_t^i[j] > \omega \bigr\}$,   (19)
where $\omega \in [0,1]$ is a real random number with a uniform distribution. The stochastic game begins with untrained mixed strategies with element values $p_0^i[k] = 1/m$, $k = 1..m$. Later, the dynamics of the vectors of mixed strategies are determined by the Markov recurrent method (17)-(19). Thus, at each moment of time $t = 1, 2, \dots$, each player selects a pure strategy $u_t^i$ (19) using the mixed strategy $p_t^i$, and by the time moment $t+1$ receives the current loss $\xi_t^i$ (7). After this, it calculates the mixed strategy $p_{t+1}^i$ according to (17)-(18). Due to the purposeful dynamic restructuring of mixed strategies based on the processing of current losses, the method (17)-(19) provides an adaptive choice of pure strategies over time.

4. Stochastic game solving algorithm

Step 1. Set the initial parameter values: $t = 0$, the starting time moment; $m$, the number of projects (equal to the number of virtual teams and the number of pure strategies of the game agents); $n$, the number of agents; $\Omega = \{O_1, O_2, \dots, O_q\}$, the library of ontologies; $\Omega_k = \{O_1, O_2, \dots, O_r\} \subseteq \Omega$, $k = 1..m$, the sets of ontological domain knowledge (competencies) needed for project execution; $A_i = \{O_1^i, O_2^i, \dots, O_s^i\} \subseteq \Omega$, $i = 1..n$, the sets of ontologies describing the capabilities of the agents; $C(\Pi) = ( C(\pi_1), C(\pi_2), \dots, C(\pi_m) )$, the planned costs of the projects; $C(A) = ( C(A_1), C(A_2), \dots, C(A_n) )$, the costs of services provided by the agents; $U^i = \{u^i[1], u^i[2], \dots, u^i[m]\}$, $i = 1..n$, the vectors of pure strategies of the agents; $p_0^i = ( 1/m \mid k = 1..m )$, $i = 1..n$, the initial values of the agents' mixed strategies; $\gamma > 0$, the learning step parameter; $\alpha \in (0,1]$, the learning step order coefficient; $\varepsilon$, the parameter of the $\varepsilon$-simplex; $\beta > 0$, the expansion order factor of the $\varepsilon$-simplex; $\lambda \in [0,1]$, the weight coefficient; $t_{\max}$, the maximal number of steps of the method.
Step 2. Choose the pure strategies (teams) $u_t^i \in U^i$ of the agents $i = 1..n$ according to (19).
Step 3. Calculate the current loss values $\xi_t^i$, $i = 1..n$, according to (7).
Step 4. Calculate the values of the parameters $\gamma_t$ and $\varepsilon_t$ according to (18).
Step 5. Calculate the elements of the mixed strategy vectors $p_t^i$, $i = 1..n$, according to (17).
Step 6. Calculate the quality characteristics $Z_t$ (10) and $\Delta_t$ (11) of the project coverage.
Step 7. Move to the next time moment $t := t + 1$.
Step 8. If $t \le t_{\max}$, then continue from Step 2; otherwise the game is completed.

5. A test case

Agents need to be selected to carry out two projects $\Pi = \{\pi_1, \pi_2\}$ with the ontological library $\Omega = \{O_1, O_2, O_3, O_4, O_5\}$. The knowledge (competencies) required for the execution of the projects is given by the sets of ontologies $\Omega_1 = \{O_1, O_3, O_5\}$ and $\Omega_2 = \{O_2, O_4\}$. The applicants for participation in the projects are described by the set of agents $A = \{A_1, A_2, A_3, A_4, A_5, A_6\}$. For each agent, the subset of ontologies describing its capabilities is given: $A_1 = \{O_1, O_2\}$, $A_2 = \{O_2, O_3\}$, $A_3 = \{O_3, O_4\}$, $A_4 = \{O_4, O_5\}$, $A_5 = \{O_1, O_4\}$, $A_6 = \{O_2, O_5\}$. The planned costs of the projects are $C(\Pi) = (10000, 6000)$ monetary units. The cost of skilled labor on the labor market is determined by the array $C(A) = (4000, 2500, 1500, 3500, 2500, 2000)$. It is required to form two virtual teams of agents $\Gamma = \{G_1, G_2\}$ whose ontological knowledge covers every project within the project cost constraints $C^*(\Pi) \le C(\Pi)$, where $C^*(\Pi)$ is the array of project coverage costs. The ontological project support problem can have several solutions.
For the considered test case, the following options of project coverage are possible:
1) $G_1 = \{A_1, A_2, A_4\}$, $G_2 = \{A_3, A_5, A_6\}$, $C^*(\Pi) = (10000, 6000)$;
2) $G_1 = \{A_1, A_3, A_6\}$, $G_2 = \{A_2, A_4, A_5\}$, $C^*(\Pi) = (7500, 8500)$;
3) $G_1 = \{A_2, A_4, A_5\}$, $G_2 = \{A_1, A_3, A_6\}$, $C^*(\Pi) = (8500, 7500)$;
4) $G_1 = \{A_3, A_5, A_6\}$, $G_2 = \{A_1, A_2, A_5\}$, $C^*(\Pi) = (6000, 9000)$;
5) $G_1 = \{A_2, A_4, A_5, A_6\}$, $G_2 = \{A_1, A_3\}$, $C^*(\Pi) = (10500, 5500)$;
6) $G_1 = \{A_2, A_3, A_5, A_6\}$, $G_2 = \{A_1, A_4\}$, $C^*(\Pi) = (8500, 7500)$;
7) $G_1 = \{A_2, A_3, A_4, A_6\}$, $G_2 = \{A_1, A_5\}$, $C^*(\Pi) = (9500, 6500)$;
8) $G_1 = \{A_1, A_4, A_5, A_6\}$, $G_2 = \{A_2, A_3\}$, $C^*(\Pi) = (12000, 4000)$;
9) $G_1 = \{A_1, A_3, A_5, A_6\}$, $G_2 = \{A_2, A_4\}$, $C^*(\Pi) = (10000, 6000)$;
10) $G_1 = \{A_1, A_3, A_4, A_6\}$, $G_2 = \{A_2, A_5\}$, $C^*(\Pi) = (11000, 5000)$;
11) $G_1 = \{A_1, A_2, A_4, A_5\}$, $G_2 = \{A_3, A_6\}$, $C^*(\Pi) = (12500, 3500)$;
12) $G_1 = \{A_1, A_2, A_3, A_5\}$, $G_2 = \{A_4, A_6\}$, $C^*(\Pi) = (10500, 5500)$;
13) $G_1 = \{A_1, A_2, A_3, A_4\}$, $G_2 = \{A_5, A_6\}$, $C^*(\Pi) = (11500, 4500)$.
All options provide redundant coverage of both projects by the teams of agents representing the ontologies. Redundant coverage arises naturally, because the agents have knowledge of more than one subject area. Options 1 and 9 determine quasi-optimal solutions of the problem because they stay within the planned costs of the projects, but in both cases there is redundancy of project coverage by the agent groups:
• option 1: $\Omega_1(G_1) = A_1 \cup A_2 \cup A_4 = \{O_1, O_2, O_3, O_4, O_5\} \supset \Omega_1$, $\Omega_2(G_2) = A_3 \cup A_5 \cup A_6 = \{O_1, O_2, O_3, O_4, O_5\} \supset \Omega_2$;
• option 9: $\Omega_1(G_1) = A_1 \cup A_3 \cup A_5 \cup A_6 = \{O_1, O_2, O_3, O_4, O_5\} \supset \Omega_1$, $\Omega_2(G_2) = A_2 \cup A_4 = \{O_2, O_3, O_4, O_5\} \supset \Omega_2$.
In addition, if multiple agents have knowledge of the same subject area, competition may arise within the team for applying this knowledge to the project.
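The feasibility of the coverage options above can be checked directly against condition (3) and the cost constraint. The following sketch encodes the test-case data; the function name and representation are illustrative, not from the paper:

```python
# Test-case data from Section 5: agent capabilities, service costs, project
# competencies and planned costs.
capabilities = {
    "A1": {"O1", "O2"}, "A2": {"O2", "O3"}, "A3": {"O3", "O4"},
    "A4": {"O4", "O5"}, "A5": {"O1", "O4"}, "A6": {"O2", "O5"},
}
costs = {"A1": 4000, "A2": 2500, "A3": 1500, "A4": 3500, "A5": 2500, "A6": 2000}
competencies = [{"O1", "O3", "O5"}, {"O2", "O4"}]
planned = [10000, 6000]

def feasible(teams):
    """A partition into teams is feasible if each team covers its project's
    competencies (condition (3)) without exceeding the planned cost."""
    for k, team in enumerate(teams):
        covered = set().union(*(capabilities[a] for a in team))
        if not competencies[k] <= covered:
            return False
        if sum(costs[a] for a in team) > planned[k]:
            return False
    return True
```

Options 1 and 9 pass this check, while, for example, option 11 fails because the cost of team $G_1$ exceeds the planned 10000 units.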
Thus, in option 1, agents $A_1$ and $A_2$ from group $G_1$ compete over the application of ontology $O_2$ to the execution of project $\pi_1$, and agents $A_3$ and $A_5$ from group $G_2$ compete over ontology $O_4$ for the execution of project $\pi_2$. In option 9, in group $G_1$ executing project $\pi_1$, there is competition for the use of ontology $O_1$ between agents $A_1$ and $A_5$, for the use of ontology $O_2$ between agents $A_1$ and $A_6$, and for the use of ontology $O_4$ between agents $A_3$ and $A_5$. An alternative to competition is cooperation and the mutual enhancement of the agents' knowledge in the same subject area. Options 2-4, 6 and 7 exceed the planned cost of the project executed by team $G_2$. In options 5, 8 and 10-13, the implementation cost of the project executed by team $G_1$ is exceeded. The game algorithm must learn how to choose one of the described project coverage options (within their stated costs) by the contracting agents, which possess the necessary knowledge in the form of ontology sets.

6. The results of computer simulation

Computer simulation was performed using the game method (17)-(19) with the following parameters: $U^i = \{u_1^i, u_2^i\}$, $\gamma = 1$, $\varepsilon = 0.999/m$, $\alpha = 0.01$, $\beta = 2$, $\lambda = 0.5$, $t_{\max} = 10^4$. Fig. 1 and Fig. 2 present, on a logarithmic scale, the graphs of the functions of the average losses of the players $Z_t$, $\bar{\xi}_t[1]$, $\bar{\xi}_t[2]$ and the average norm of the mixed strategies $\Delta_t$, which characterize the convergence of the stochastic game of ontological project support. The choice of the logarithmic scale is due to the need for a compact representation of simulation results with a large range of values. Exceptions, when the current values of the game's characteristic functions are less than or equal to zero, were handled programmatically for the logarithmic scale. As can be seen in Fig. 1 and Fig. 2, the game method (17)-(19) minimizes the average loss functions $Z_t$, $\bar{\xi}_t[1]$, $\bar{\xi}_t[2]$ over time.
The function of the average norm of mixed strategies $\Delta_t$ reaches logarithmic zero, which illustrates that the game is solved in pure strategies.

Figure 1: The solution of the stochastic game without noise: a) for option 1; b) for option 10

The linear (on the logarithmic scale) drop of the graphs of the players' systemic loss functions $Z_t$, $\bar{\xi}_t[1]$, $\bar{\xi}_t[2]$ shown in Fig. 1a indicates the achievement of a quasi-optimal solution of the ontological project coverage problem. The computer simulation shows that in almost 90% of the experiments, the method (17)-(19) provides a quasi-optimal coverage corresponding to option 1 or 9. For other permissible coverage options, for example option 10, the value of the average loss function $\bar{\xi}_t[1]$ for project coverage deviations decreases linearly, indicating that all projects are covered (Fig. 1b). The value of the average loss function $\bar{\xi}_t[2]$ for disrupting the projects' cost balance tends to a stable value, which indicates the deviation of the project implementation costs from the projected cost. The convergence of the game method (17)-(19) to the optimal collective solution depends on the accuracy of the tuning of its parameters, whose ratios should satisfy the fundamental requirements of stochastic approximation. It has been found experimentally that reducing the parameter $\beta \in (0,2]$ slows down the rate of expansion of the $\varepsilon$-simplex and leads to an increase in the number of stochastic game steps required to find one of the project coverage options. A similar effect is observed with an increase of the parameter $\alpha \in (0,1]$, which accelerates the decrease of the search step of the recurrent method. In other words, for the implemented recurrent transformation (17) with the stated method of forming the penalties (7), the expansion of the $\varepsilon$-simplex should be fast enough, and the reduction of the search step should be slow. The rapid expansion of the $\varepsilon$-simplex practically does not limit the magnitude of the search step of the method.
The initially large value of the search step leads to significant dynamics of the mixed strategy vectors, enabling players to randomly choose other pure strategies (moving from one project to another) while looking for the optimal ontological coverage of the projects. Over time, the magnitude of the search step becomes smaller and the dynamics of the mixed strategy vectors stabilize, attaching the formed agent teams to the execution of specific projects. The progress of the stochastic game under the conditions of interference is shown in Fig. 2. The stochastic uncertainty of project coverage is given by a normal distribution $\theta_t \sim \mathrm{Normal}(e, d)$ with mathematical expectation $e = 0$ and variance $d = 0.25$. The empirical normal distribution is obtained using the formula:
$\theta_t = e + \sqrt{d} \bigl( \sum_{j=1}^{12} \omega_j - 6 \bigr)$,   (20)
where $\omega \in [0,1]$ is a real random number with a uniform distribution.

Figure 2: The solution of the stochastic game in the condition of interference: a) for option 1; b) for option 10

The effect of random noise causes an irregularity in the magnitude of the search step of the recurrent method, which at each step of the stochastic game varies in proportion to the variance of the interference. This is reflected in the shape of the systemic average loss function $Z_t$, which behaves as a more pronounced random process. The additional randomization of the current losses with white Gaussian noise of small variance (for example, $d = 0.25$) does not have a significant impact on the learning outcome of the stochastic game. However, increasing the variance of the noise slows down the solution process or makes it impossible to reach the solution.

7. Conclusions

In today's information society with advanced telecommunications through mobile devices and computer networks, it is important to form a variety of virtual organizations and communities.
Such virtual associations of people by professional or other interests are designed to quickly solve various tasks: performing project tasks, creating start-ups to attract investors, network marketing, distance learning, solving complex problems in science, economics and public administration, building various Internet services, discussing political and social processes, etc. The objective of the study is to develop an adaptive Markov recurrent method, based on the stochastic approximation of the modified complementary slackness condition valid at Nash equilibrium points, for solving the problem of game-based coverage of projects. This paper proposes a new self-learning game-based method of forming virtual agent teams to execute projects under uncertainty. At the beginning of the game, the mixed strategies of the game agents are untrained and provide an equiprobable choice of projects. The agent training process consists in the purposeful change of the vectors of mixed strategies at each step of the game in order to minimize the functions of average losses for insufficient ontological support of the projects. At the final stage of learning the stochastic game, the mixed strategies of the agents stabilize. The elements of the learned mixed strategies set the probabilities of the agents belonging to one of the teams. The result of the game is the formation of teams of agents whose ontological knowledge covers the competencies needed to complete the projects. The adaptive multi-step stochastic game solving method is based on a stochastic approximation of the modified complementary slackness condition of the Nash equilibrium. The convergence of the game is determined by the fundamental conditions of stochastic approximation and depends on the size of the game (the number of players and strategies) and on the ratios of the parameters of the recurrent solution method.
The proposed method is based on processing the reactions of the game environment under a priori uncertainty and therefore has a slow convergence rate, which is offset by the high computing power of modern computer systems.

In this work, a multi-agent game model for the formation of virtual teams of project executors based on libraries of subject ontologies is developed. The competencies and abilities of agents required to carry out projects are specified by sets of ontologies. Intelligent agents randomly, simultaneously and independently choose one of the projects at discrete times. Agents who have chosen the same project determine the current composition of the team of its executors. For agent teams, a current penalty is calculated for insufficient coverage of competencies by the combined capabilities of the agents. This penalty is used to adaptively recalculate the mixed strategies of the players. The probabilities of selecting those teams whose current composition has led to a reduction in the penalty for non-coverage of ontologies increase. During the repetitive stochastic game, agents form vectors of mixed strategies that minimize average penalties for the non-coverage of projects.

To solve the problem of game coverage of projects, an adaptive Markov recurrent method based on the stochastic approximation of the modified condition of complementary non-rigidity, valid at Nash equilibrium points, was developed. Computer simulation confirmed the possibility of using the stochastic game model to form teams of project executors with the necessary ontological support under conditions of uncertainty. The convergence of the game method is ensured by compliance with the fundamental conditions and limitations of stochastic optimization. The reliability of the experimental studies is confirmed by the repeatability of the results obtained for different sequences of random variables.
The disadvantage of the proposed game method is that, in general, it does not guarantee perfect coverage, since it does not provide for the elimination of agents whose capabilities are excessive for project execution. As a possible solution, additional dummy projects could be created with proposals attractive to the redundant workforce from other projects.

8. References

[1] IGI Global, Virtual Communities: Concepts, Methodologies, Tools and Applications (4 Volumes). Information Resources Management Association (USA), 2011. doi: 10.4018/978-1-60960-100-3.
[2] T. Hutchings, Real Virtual Community, volume 35 (2) of Word & World, 2015, pp. 151-161.
[3] A. Roy, A Typology of Virtual Communities on the Internet: Contingency Marketing Approaches, in: Proceedings of the First International Academic Research Conference on Marketing & Tourism, MTCI15Dubai Conference, Dubai-UAE, 22-24 May, 2015, pp. 1-11.
[4] G. Weiss, Multiagent Systems. Second Edition. The MIT Press, 2013.
[5] A. Byrski, M. Kisiel-Dorohinicki, Evolutionary Multi-Agent Systems: From Inspirations to Applications. Springer, 2017. doi: 10.1007/978-3-319-51388-1.
[6] N. Radley, Multi-Agent Systems – Modeling, Control, Programming, Simulations and Applications. Scitus Academics LLC, 2017.
[7] F. Dignum, J. Bradshaw, B.G. Silverman, W. Doesburg, Agents for Games and Simulations: Trends in Techniques, Concepts and Design. Springer, 2009. doi: 10.1007/978-3-642-11198-3.
[8] P. Kravets, Y. Burov, V. Lytvyn, V. Vysotska, Gaming Method of Ontology Clusterization, volume 16 (1) of Webology, 2019, pp. 55-76.
[9] D. Stuart, Practical Ontologies for Information Professionals. Facet Publishing, 2016.
[10] Y. Aleman, M. J. Somodevilla, A proposal for domain ontological learning, volume 133 of Research in Computing Science, 2017, pp. 63-70.
[11] C. M. Keet, An Introduction to Ontology Engineering, 2018, http://hdl.handle.net/11427/28312.
[12] C. Thomas, Ontology in Information Science. IntechOpen, 2018.
doi: 10.5772/65599.
[13] Z. Sun, Cooperative Coordination and Formation Control for Multi-agent Systems. Springer, 2018. doi: 10.1007/978-3-319-74265-6.
[14] S. Yang, J.-X. Xu, X. Li, D. Shen, Iterative Learning Control for Multi-agent Systems Coordination. Wiley-IEEE Press, 2017.
[15] P. Scerri, R. Vincent, R.T. Mailler, Coordination of Large-Scale Multiagent Systems. Springer, 2010. doi: 10.1007/0-387-27972-5.
[16] O. Perminova-Harikovski, M. Gustafsson, K. Wikstrom, Defining Uncertainty in Projects – A New Perspective, volume 26 (1) of International Journal of Project Management, 2008, pp. 73-79. doi: 10.1016/j.ijproman.2007.08.005.
[17] E.Z.H. Zheng, M.M. Carvalho, Managing Uncertainty in Projects: A Review, Trends and Gaps, volume 7 (2) of Revista de Gestao e Projetos (GeP). Journal of Business and Projects, 2016, pp. 95-99.
[18] D. Cleden, Managing Project Uncertainty (Advances in Project Management). 1st Edition. Routledge, 2017.
[19] K. Macedo, M. Marinho, S. Santos, Uncertainty Management in Software Projects: A Case Study in a Public Company, volume 14 (1) of Journal of Convergence Information Technology, 2019, pp. 61-67.
[20] V. Bryl, P. Giorgiani, S. Fante, ToothAgent: A Multi-agent System for Virtual Communities Support, in: M. Kolp, B. Henderson-Sellers, H. Mouratidis, A. Garcia, A.K. Ghose, P. Bresciani (Eds.), Agent-Oriented Information Systems IV, AOIS 2006, volume 4898 of Lecture Notes in Computer Science, Springer, Berlin, Heidelberg, 2008, pp. 212-230. doi: 10.1007/978-3-540-77990-2_13.
[21] M. Fahad, O. Boissier, P. Maret, N. Moalla, C. Gravier, Smart Places: Multi-Agent based Smart Mobile Virtual Community Management System, volume 41 (4) of Applied Intelligence, Springer Verlag, Germany, 2014, pp. 1024-1042. doi: 10.1007/s10489-014-0569-2. hal-01015456.
[22] Y. Lee, Q. Chong, Multi-agent Systems Support for Community-Based Learning, volume 15 (1) of Interacting with Computers, 2003, pp. 33-55. doi: 10.1016/S0953-5438(02)00057-7.
[23] M. Hidalgo-Herrero, P. Rabanal, I. Rodriguez, F. Rubio, Comparing problem solving strategies for NP-hard optimization problems, volume 124 (1-2) of Fundamenta Informaticae, 2013, pp. 1-25.
[24] S.M. Abdulrahman, Using Swarm Intelligence for Solving NP-hard Problems, volume 6 (3) of Academic Journal of Nawroz University, 2017, pp. 46-50.
[25] X. Huang, A polynomial-time algorithm for solving NP-hard problems in practice, volume 34 (1) of ACM SIGACT News, 2003, pp. 101-108.
[26] B. Reus, How to Solve NP-Complete Problems, in: Limits of Computation, Undergraduate Topics in Computer Science, Springer, Cham, 2016, pp. 275-297. doi: 10.1007/978-3-319-27889-6_21.
[27] G. Panchal, D. Panchal, Solving NP-hard Problems using Genetic Algorithm, volume 6 (2) of International Journal of Computer Science and Information Technology, 2015, pp. 1824-1827.
[28] M. Prates, P.H.C. Avelar, H. Lemos, L.C. Lamb, M.Y. Vardi, Learning to solve NP-complete problems: A graph neural network for decision TSP, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 2019, pp. 4731-4738. doi: 10.1609/aaai.v33i01.33014731.
[29] L. Chyrun, E. Leshchynskyy, V. Lytvyn, A. Rzheuskyi, V. Vysotska, Y. Borzov, Intellectual analysis of making decisions tree in information systems of screening observation for immunological patients, CEUR Workshop Proceedings 2488 (2019) 281–296.
[30] M. Ummels, Stochastic Multiplayer Games: Theory and Algorithms. Amsterdam University Press, 2014.
[31] V. Ungureanu, Pareto-Nash-Stackelberg Game and Control Theory: Intelligent Paradigms and Applications. Springer, 2018. doi: 10.1007/978-3-319-75151-1.
[32] S. K. Neogy, R. B. Bapat, D. Dubey, Mathematical Programming and Game Theory. Springer, 2018. doi: 10.1007/978-981-13-3059-9.
[33] A. Rzheuskyi, O. Kutyuk, V. Vysotska, Y. Burov, V. Lytvyn, L. Chyrun, The Architecture of Distant Competencies Analyzing System for IT Recruitment, in: Proceedings of the 14th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2019, 2019, pp. 254–261.
[34] H. Kushner, G. G. Yin, Stochastic Approximation and Recursive Algorithms and Applications. Springer Science & Business Media, 1997. doi: 10.1007/978-1-4899-2696-8.
[35] A. Benveniste, M. Metivier, P. Priouret, Adaptive Algorithms and Stochastic Approximations. Springer-Verlag Berlin Heidelberg, 1990. doi: 10.1007/978-3-642-75894-2.
[36] V. Lytvyn, V. Vysotska, V. Mykhailyshyn, A. Rzheuskyi, S. Semianchuk, System Development for Video Stream Data Analyzing, Advances in Intelligent Systems and Computing 1020 (2020) 315–331.