A Model of Information Influences on the Base of Rectangular
Stochastic Matrices in Chains of Reasoning with Possible
Contradictions
Oleksiy Oletsky
National University of Kyiv-Mohyla Academy, 2 Skovorody St., Kyiv, 04070, Ukraine

                Abstract
                A way of incorporating the model “state-probability of choice”, which describes the probabilities
                of making a choice between given alternatives, into chains of uncertain reasoning is suggested.
                A special matrix, called a rectangular stochastic matrix, is associated with each node of the
                reasoning chain, where a node corresponds either to a given fact or to a rule of inference, possibly
                an uncertain one. The binary case, in which a resolution has to be either accepted or rejected, is
                regarded. A situation of dynamic equilibrium, meaning that neither of the alternatives has an
                advantage over the other, is demonstrated, along with possible ways in which agents of influence
                can break such a situation. Some ways of resolving conflicts between contradictory sources of
                evidence are suggested.

                Keywords
                Dynamic equilibrium of alternatives, chain of reasoning, contradictory evidence, rectangular
                stochastic matrices, agents of influence

1. Introduction
    Uncertain reasoning plays a significant role in decision making. An agent who has to make a
choice between two or more given alternatives is often not certain enough about the facts affecting
their decisions. Moreover, an agent’s knowledge may be inconsistent, incomplete and contradictory.
And there may be agents of influence, who try to affect what other agents believe in or how they
behave. Belief networks, typically based on Bayesian reasoning (Bayesian networks [1]) or on the
Dempster-Shafer theory [2], are commonly used for modelling processes related to reasoning. It
appears promising to integrate such approaches with knowledge graphs, which are referred to as a new
trend in artificial intelligence [3, 4].
    Studies of information influence comprise modeling information dissemination and exploring how
information impacts affect an agent’s levels of trust and belief. Issues relating to social modeling, the
spread of information and news (especially of fake news), and establishing trust networks [5, 6, 7] are
of great interest now. There are many approaches to modeling individual and collective behavior of
agents in multi-agent systems; a review of these approaches and models can be found in [8]. It appears
helpful and important to strengthen the behavioral aspects of these and related models.
    The model “state-probability of choice”, based on the concept of rectangular stochastic matrices
[9], was suggested in [10]. It brings into consideration probability distributions regarded as certain
states and a Markov chain describing changes of those probabilities in terms of transitions between the
states. Within the model, an agent of influence can try to change transition probabilities, and thereby
probabilities of choice and, more generally, agents’ behavior. That can be carried out by providing
influences which affect the transition probabilities of the Markov chain.
    In this paper we are developing an approach aimed at introducing the model “state-probability of
choice” into a chain of reasoning, which might be a part of a more complicated belief network, by


Information Technology and Implementation (IT&I-2021), December 01–03, 2021, Kyiv, Ukraine
EMAIL: oletsky@ukr.net (A. 1)
ORCID: 0000-0002-0553-5915 (A. 1)
             ©️ 2022 Copyright for this paper by its authors.
             Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
             CEUR Workshop Proceedings (CEUR-WS.org)



associating main elements of this model with nodes of the network. Insofar as an agent’s knowledge
may be uncertain or contradictory, we can talk about probabilities that an agent is going to accept or
reject a specific resolution. Some experiments illustrating possible information influences will be
presented.

2. Methods and tools
    The model “state-probability of choice” can be shortly described as follows [10].
    Let n be the number of alternatives to be chosen by an agent, and m be the number of states, where
each state represents a certain distribution of probabilities among the alternatives.
    The model comprises two main components:
    •    the matrix Z = (z_ij), i = 1, …, m; j = 1, …, n, called the “state-probability of choice” matrix,
where z_ij is the probability that, being in the i-th state, the agent will choose the j-th alternative;
    •    the stochastic matrix of transition probabilities Π = (π_ij), i, j = 1, …, m, where π_ij is the
probability that, being in the i-th state, the agent will move on to the j-th one.
    The sum of every row of the matrix Z equals 1. By analogy with square stochastic matrices, such
matrices can be referred to as rectangular stochastic matrices [9]. In [10] some properties of rectangular
stochastic matrices have been established. A matrix Z can be chosen rather arbitrarily, but this
arbitrariness can be significantly mitigated on the basis of dividing the states into groups having a more
or less clear meaningful interpretation. This will be explained below.
    The matrix Π determines a Markov chain, which describes how the probabilities of choice may
change. It is well known that under some conditions a vector of stationary probabilities p = (p_1, …, p_m),
which is the main left eigenvector of Π, exists. Moreover, the corresponding eigenvalue equals 1, and
this vector satisfies the equation
                                             pΠ = p                                                    (1)
    Therefore, in some situations within the model “state-probability of choice” we may postulate p
instead of Π. Then the overall probability that an agent will choose the j-th alternative is given by [10]
                                             v = pZ                                                    (2)
   The vector 𝑣 = (𝑣1 , … , 𝑣𝑛 ) contains such probabilities for all alternatives. A very important problem
within the model is a problem of reaching a dynamic equilibrium, which is a situation when
                                            ∀i: v_i = 1/n                                              (3)
    Dynamic equilibrium is of special importance when n = 2 and decisions are made collectively by
majority of votes. Then it means that neither of the two alternatives has an advantage over the other. If
the number of agents is large enough, a situation of dynamic equilibrium is practically the only situation
in which the parity of alternatives holds, so they are rotated and win by turns, as was showcased in
[10]. For the case n = 2, the relation (3) takes the form
                                       𝑣 = (0.5, 0.5)                                                  (4)
    Finding a dynamic equilibrium is related to the concept of balanced matrices [10]. A rectangular
stochastic matrix is said to be balanced if the sums of all its columns are equal to each other. An
important theorem proved in [10] states that if n = 2 and Z and p in (2) are a balanced matrix and a
symmetric vector respectively, then the dynamic equilibrium holds. If an agent of influence wants to
promote an alternative which is losing at the moment, they may need first to reach the nearest point of
dynamic equilibrium and then to move away from this point in the desired direction. Some illustrations
of such a situation, and of moving away from it, were presented in [10, 11]. Now we are going to discuss
how to use the model “state-probability of choice” in chains of inference and reasoning.

3. A simple rule of inference
   Let’s consider a simple rule
                                          𝐴⇒𝐵                                                         (5)

   We are interested in the probability that the resolution B will be accepted or rejected. So, we have
two alternatives: ACCEPT and REJECT, and therefore n=2. In the simplest case, we may take
                                      Z = ( 1   0
                                            0   1 )                                                    (6)
    So, in this case the matrix Z is binary, that is, its elements equal either 0 or 1. This matrix is obviously
a balanced rectangular stochastic one. For the rule 𝐴 ⇒ 𝐵 we have to construct another “state-
probability of choice” matrix R, which connects the states of A and B. We introduce two states for it:
supporting A, called FOR_A, and opposing A, called AGAINST_A. Let’s introduce two random
variables: ξ^(A⇒B), which can take the values r_1 = FOR_A or r_2 = AGAINST_A, and ξ^B, whose
possible values correspond to the states z_1 and z_2 represented by the rows of Z. Then we can consider
R having the elements
                               r_ij = P(ξ^B = z_j | ξ^(A⇒B) = r_i)                                     (7)
    For instance, we may take
                                      R = ( 0.8   0.2
                                            0.2   0.8 )                                                (8)
    This matrix was chosen to be a balanced rectangular stochastic matrix. It is important that such
matrices are also centrosymmetric [12, 13].
    Given A as a known fact, we have to postulate either the stationary probabilities p or the transition
probabilities Π and then obtain p from Π. Either way, it is easy to see that
                                         𝑣 = 𝑝𝑅𝑍                                                          (9)
    which ensues from the total probability rule.
    It is important to mention that the matrix product RZ is itself a balanced stochastic matrix, so we can
talk about a chain of balanced stochastic matrices along the whole chain of inference or reasoning. For
this simple case, we will postulate p directly. In addition, the analysis carried out in [10, 11] allows us
to choose p so that the dynamic equilibrium will hold. Insofar as p ought to be symmetric, there is no
other way to reach that than to put 𝑝 = (0.5, 0.5). It gives
                                𝑣 = (0.5, 0.5)                                                          (10)
   So, dynamic equilibrium holds. If an influencer manages to change p, they may break the dynamic
equilibrium. For example, taking p=(0.55, 0.45) gives v=(0.53, 0.47). The analysis performed in [10]
shows that if a decision is being made by majority of votes and the number of agents is large enough,
the probability that B will be accepted is close to 1. This simplest illustrative example gives the same
results as the total probability rule does; it is merely another way of performing the calculation. Now
let’s consider a more comprehensive and more flexible example involving larger systems of states and
transition probabilities. The following example also aims to demonstrate some behavioral aspects.
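The numbers above are easy to reproduce; a short Python check of eq. (9) for this example (the helper function is ours, added for illustration) might look like:

```python
# Checking v = p R Z (eq. (9)) for the simple inference rule A => B.

def left_multiply(p, M):
    """Row vector p times matrix M (given as a list of rows)."""
    return [sum(p[i] * M[i][j] for i in range(len(p)))
            for j in range(len(M[0]))]

Z = [[1, 0], [0, 1]]          # eq. (6): binary "state-probability" matrix
R = [[0.8, 0.2], [0.2, 0.8]]  # eq. (8): matrix attached to the rule A => B

# A symmetric p keeps the dynamic equilibrium ...
v_eq = left_multiply(left_multiply([0.5, 0.5], R), Z)    # -> (0.5, 0.5)
# ... while a slightly shifted p breaks it.
v_sh = left_multiply(left_multiply([0.55, 0.45], R), Z)  # -> (0.53, 0.47)
```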

4. A more comprehensive example
   The inference rule is the same as regarded above: 𝐴 ⇒ 𝐵. Now we are going to introduce a “state-
probability of choice” matrix which represents more states, reflecting possible distributions of
probabilities of accepting or rejecting B, as was suggested in [10].
   The matrix Z will represent some of such distributions. It should be balanced. Some techniques of
building such matrices were suggested in [10]. Let Z be as follows:
                                    Z = (  1      0
                                           0.9    0.1
                                           0.75   0.25
                                           0.6    0.4
                                           0.5    0.5                                          (11)
                                           0.4    0.6
                                           0.25   0.75
                                           0.1    0.9
                                           0      1  )


   As mentioned above, these basic states can be chosen rather arbitrarily. This arbitrariness can be
reduced by explicitly distinguishing groups of states which are directly related to the states represented
by Z. A similar effect can be reached if we build a matrix R corresponding to the rule of inference in
the way illustrated above.
   Now we’ll consider three groups of states:
    •   convinced proponents of accepting B;
    •   those who hesitate;
    •   convinced proponents of rejecting B.
   Let’s take R as follows:
                 R = ( 0.9   0.08   0.02   0     0     0     0      0      0
                       0     0      0.1    0.2   0.4   0.2   0.1    0      0                           (12)
                       0     0      0      0     0     0     0.02   0.08   0.9 )
   Whereas it appears problematic to postulate transition probabilities between the states of Z, possible
transitions between the three new states appear to be much clearer.
   Based on the results of [10, 11], we are able to choose a matrix of transition probabilities Π so that
the vector of stationary probabilities p is symmetric, and thereby dynamic equilibrium holds. In detail,
the following statement was proven.
   Let a (𝑚 × 𝑚) -matrix A satisfy any of the following relations:
                ∀𝑖, 𝑗: 𝑎𝑖𝑗 = 𝑎𝑖,𝑚−𝑗+1                                                                 (13)

   (i.e. the j-th and the (m − j + 1)-th columns of A are equal to each other) or
                       ∀𝑖, 𝑗: 𝑎𝑖𝑗 = 𝑎𝑚−𝑖+1,𝑚−𝑗+1                                                      (14)
   (i.e. A is a centrosymmetric matrix).
   Then the main eigenvector x of the matrix A is symmetric, i.e.
                                 ∀𝑗: 𝑥𝑗 = 𝑥𝑚−𝑗+1                                                      (15)
   Let’s take
                                   Π = ( 0.5   0.4   0.1
                                         0.2   0.6   0.2                                         (16)
                                         0.1   0.4   0.5 )
   The matrix Π is centrosymmetric, i.e. it satisfies (14). Then the stationary probabilities
of being in the groups are as follows:
                          𝑝 = (0.25, 0.5, 0.25)                                                       (17)
   This vector is symmetric, indeed. It gives the following vector of final probabilities of rejecting and
accepting B:
                         𝑣 = 𝑝𝑅𝑍 = (0.5, 0.5)                                                         (18)
    So, dynamic equilibrium holds. An obvious way for an influencer to break the dynamic equilibrium
may be related to affecting the transition probabilities. Changes of transition probabilities may occur
when receiving additional information or on the base of reinforcement learning [14]. Some ways of
applying reinforcement learning to establishing trust networks within the model called the Integrated
Trust Establishment Model (ITE) were proposed in [15]. Indeed, if an agent gets a positive experience
related to B, or if they find out a good fact about it, the probability of their transition to a group with a
better attitude to B may increase. Vice versa, negative experience or information may push an agent to
a group with a worse attitude.
    Let Π be slightly changed and take the following form:
                                   Π = ( 0.5    0.4   0.1
                                         0.25   0.6   0.15                                       (19)
                                         0.1    0.4   0.5 )
   Now
                                  𝑣 ≈ (0.5406, 0.4594)                                                (20)

    and the dynamic equilibrium has been broken. If the number of agents is large enough and decisions
are made by majority of votes, B will be permanently accepted.
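Assuming the third row of Π reads (0.1, 0.4, 0.5), which makes Π centrosymmetric as the argument requires, the whole computation of this section can be reproduced with a short Python sketch (the helper functions are ours):

```python
# Reproducing the Section 4 example: v = p R Z with the 9-state matrix Z,
# the 3-group matrix R, and group-level transition matrices Pi.
# Row 3 of Pi is taken as (0.1, 0.4, 0.5) so that Pi is centrosymmetric.

def left_multiply(p, M):
    """Row vector p times matrix M (given as a list of rows)."""
    return [sum(p[i] * M[i][j] for i in range(len(p)))
            for j in range(len(M[0]))]

def stationary(P, iters=2000):
    """Left eigenvector solving p = pP, by power iteration."""
    p = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        p = left_multiply(p, P)
    return p

Z = [[1, 0], [0.9, 0.1], [0.75, 0.25], [0.6, 0.4], [0.5, 0.5],
     [0.4, 0.6], [0.25, 0.75], [0.1, 0.9], [0, 1]]              # eq. (11)
R = [[0.9, 0.08, 0.02, 0,   0,   0,   0,    0,    0],
     [0,   0,    0.1,  0.2, 0.4, 0.2, 0.1,  0,    0],
     [0,   0,    0,    0,   0,   0,   0.02, 0.08, 0.9]]         # eq. (12)

Pi_eq = [[0.5, 0.4, 0.1], [0.2, 0.6, 0.2], [0.1, 0.4, 0.5]]     # eq. (16)
Pi_br = [[0.5, 0.4, 0.1], [0.25, 0.6, 0.15], [0.1, 0.4, 0.5]]   # eq. (19)

v_eq = left_multiply(left_multiply(stationary(Pi_eq), R), Z)
# -> (0.5, 0.5): equilibrium
v_br = left_multiply(left_multiply(stationary(Pi_br), R), Z)
# -> approx. (0.5406, 0.4594): equilibrium broken
```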

5. Contradictory rules of inference
     Let there be two rules of inference with the same corollary:
                                                   𝐴1 ⇒ 𝐵,                                           (21)
                                                   𝐴2 ⇒ 𝐵
    Here we are coming back to postulating stationary probabilities explicitly. Let R and Z be the same
as in Section 4. Both 𝐴1 and 𝐴2 may be associated with different vectors 𝑝1 and 𝑝2.
    Let’s take
                                𝑝1 = (0.8,    0.1,     0.1)                                       (22)
                               𝑝2 = (0.1,     0.3,     0.6)
     Then we get two respective vectors, corresponding to different probabilities of accepting or rejecting
B:
                                  𝑣1 ≈ (0.8409,                0.1591)                               (23)
                                  𝑣2 ≈ (0.2565,                0.7435)
    This means that when reasoning on the basis of 𝐴1 , B should be accepted, but on the basis of 𝐴2 it
should be rejected. There are many approaches to combining such contradictory pieces of evidence.
We are developing one related to the importance of the evidence.
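The two vectors in (23) are easy to recompute; a sketch reusing the matrices (11) and (12) from Section 4 (the helper function is ours):

```python
# Two contradictory sources of evidence: the same R and Z as in Section 4,
# but different stationary vectors p1 and p2 associated with A1 and A2.

def left_multiply(p, M):
    """Row vector p times matrix M (given as a list of rows)."""
    return [sum(p[i] * M[i][j] for i in range(len(p)))
            for j in range(len(M[0]))]

Z = [[1, 0], [0.9, 0.1], [0.75, 0.25], [0.6, 0.4], [0.5, 0.5],
     [0.4, 0.6], [0.25, 0.75], [0.1, 0.9], [0, 1]]              # eq. (11)
R = [[0.9, 0.08, 0.02, 0,   0,   0,   0,    0,    0],
     [0,   0,    0.1,  0.2, 0.4, 0.2, 0.1,  0,    0],
     [0,   0,    0,    0,   0,   0,   0.02, 0.08, 0.9]]         # eq. (12)

p1 = [0.8, 0.1, 0.1]    # eq. (22)
p2 = [0.1, 0.3, 0.6]

v1 = left_multiply(left_multiply(p1, R), Z)  # -> (0.8409, 0.1591): accept B
v2 = left_multiply(left_multiply(p2, R), Z)  # -> (0.2565, 0.7435): reject B
```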

6. Weighting evidence
   An agent may regard some pieces of evidence more important than some other ones. Then we may
consider a convex combination
                                        p = ∑ᵢ αᵢ pᵢ                                                 (24)
 where
                                   0 ≤ αᵢ ≤ 1,    ∑ᵢ αᵢ = 1

   The weight coefficients are interpreted as degrees of importance assigned to the known facts: the
larger αᵢ is, the more important the evidence. So, for achieving their goals, an influencer can try to affect
not only transition probabilities but the degrees of importance αᵢ as well.
   Let’s look at how to find weight coefficients ensuring a situation of dynamic equilibrium. Relation
(9) gives a clear clue to this. Since
                                        ∀𝑖 𝑣𝑖 = 𝑝𝑖 𝑅𝑍,                                               (25)
     substituting (24) into (9) gives

                         v = pRZ = (∑ᵢ αᵢ pᵢ)RZ = ∑ᵢ αᵢ vᵢ                                           (26)
     Therefore, for ensuring a dynamic equilibrium we have to choose such 𝛼𝑖 , for which the relation
                                  ∑ᵢ αᵢ vᵢ = (0.5, … , 0.5)                                          (27)
     holds.
     For two vectors, the relation (27) takes the form
                                 α c1 + (1 − α) c2 = 0.5,                                            (28)
     where c1 and c2 are the first elements of 𝑣1 and 𝑣2 respectively.

   After some simple transformations we get
                                  α = (0.5 − c2) / (c1 − c2)                                         (29)
   assuming c1 > 0.5 > c2.
   On this basis, let’s find a situation of dynamic equilibrium for the example from Section 5. We have
there
                          𝑐1 = 0.8409; 𝑐2 = 0.2565,                                             (30)
                                     𝛼 ≈ 0.417,
    and
            𝑝 = 𝛼𝑝1 + (1 − 𝛼 )𝑝2 = (0.3917,          0.2167,     0.3917)
   The vector p is symmetric. Then
                             𝑣 = 𝑝𝑅𝑍 = (0.5, 0.5)                                                     (31)
   which means the dynamic equilibrium.
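Using the rounded values c1 and c2 from (23), the weight and the resulting symmetric vector can be checked with a few lines of Python:

```python
# Finding the weight alpha that restores the dynamic equilibrium, eq. (29).

c1, c2 = 0.8409, 0.2565          # first elements of v1 and v2, eq. (23)
alpha = (0.5 - c2) / (c1 - c2)   # eq. (29) -> approx. 0.417

p1 = [0.8, 0.1, 0.1]             # eq. (22)
p2 = [0.1, 0.3, 0.6]
p = [alpha * a + (1 - alpha) * b for a, b in zip(p1, p2)]
# p -> approx. (0.3917, 0.2167, 0.3917): symmetric, so v = pRZ = (0.5, 0.5)
```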

7. Another way of getting weights
   For this simple case, there is another way of getting weight coefficients.
   For finding 𝛼, which would make a symmetric convex combination, instead of solving the equation
(28) we might try to apply a similar technique directly to 𝑝1 and 𝑝2.
   Let’s re-write elements of 𝑝1 and 𝑝2 in the following form:
                                      𝑝1 = (𝑎, 𝑥, 𝑏)
                                                                                                       (32)
                                      𝑝2 = (𝑐, 𝑦, 𝑑)
   with given a, x, b, c, y, d.
   A convex combination
                                  𝑝 = 𝛼𝑝1 + (1 − 𝛼)𝑝2                                                 (33)
   will be symmetric if the relation
                           𝛼𝑎 + (1 − 𝛼 )𝑐 = 𝛼𝑏 + (1 − 𝛼 )𝑑                                            (34)
   holds. The middle elements x and y don’t matter within this context.
   From (34) we can get the solution
                                  α = (d − c) / (a − c − b + d)                                         (35)
   This gives the same results as those in Section 6. However, this is a very simple case: there are only
two vectors representing two different facts, and each of them has only three elements representing
three different levels either of certainty about the given facts or of attitude to the resolution under
consideration. In more complicated cases, for instance when considering more groups of states and
agents, or when there are many different and possibly contradictory sources of evidence, the situation
will probably be more intricate. But on the other hand, possible ways of reaching situations of dynamic
equilibrium may become more flexible and multi-faceted.
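A quick check of (35) on the numbers of Section 5 (a sketch; the variable names are ours):

```python
# The same weight alpha, obtained directly from p1 and p2 via eq. (35).

a, x, b = 0.8, 0.1, 0.1   # p1 = (a, x, b), eq. (32)
c, y, d = 0.1, 0.3, 0.6   # p2 = (c, y, d)

alpha = (d - c) / (a - c - b + d)   # eq. (35): 0.5 / 1.2 = 5/12, approx. 0.417
p = [alpha * u + (1 - alpha) * w
     for u, w in zip([a, x, b], [c, y, d])]
# The outer elements of p coincide, so p is symmetric
# (the middle elements x and y do not matter, as noted above).
```

This confirms that the direct formula (35) yields the same weight as eq. (29) in Section 6.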

8. Conclusions and discussion
    This paper reports how the model “state-probability of choice”, which was described and explored
in [10], can be implemented in chains of uncertain inference and reasoning, which may be a part of a
more complicated belief network. It has been suggested that each node of such a chain of reasoning,
either a given fact or a rule of inference, shall be associated with a special rectangular stochastic matrix.
So, it was shown in the paper that a sequence of reasoning can be represented as a chain of matrix
products along the chain of reasoning.
    In this paper, the states corresponding to the rules of reasoning were related mainly to various levels
of support for accepting or rejecting the final resolution under consideration. But systems



of states may be very different. It is possible to consider different levels of confidence about the rules
or something else.
    We regarded two basic cases: the first of them is a simple act of reasoning on the basis of the modus
ponens rule, and the other is a situation of two inference rules with the same corollary. Various systems
having different numbers of states were considered. Even though these cases are very simple, they
should be considered as building blocks for constructing AND-OR graphs, belief networks based either
on Bayesian networks or on the Dempster-Shafer theory, production systems [1, 2] etc. As regards a
knowledge graph, stationary probabilities and intermediate matrices might be associated with its ground
extensional and intensional components respectively, and the reasoning process within the model “state-
probability of choice” is carried out by means of multiplying vectors and matrices. Approaches to
implementing the model “state-probability of choice” on the basis of merging the regarded blocks into
such composite structures should be developed.
    We have considered a case of two contradictory rules in order to contribute to a wide range of studies
related to logical systems which may be incomplete or inconsistent [16, 17]. Some ways of resolving
conflicts based on introducing weight coefficients, which reflect degrees of certainty about the evidence
or of trust in it, were suggested in the paper. There is an important question about how to assign these
coefficients. Some approaches to that, based first of all on estimating experts’ competence, and which
appear most promising to combine with the model “state-probability of choice”, were suggested in
[18, 19].
    It appears very reasonable to consider what may be a common point of paraconsistent and uncertain
reasoning. Indeed, if an inconsistent system of assertions allows us to infer both a statement and its
negation, we are able to talk about the plausibility of this statement, and therefore about a probability
of accepting it, even though the available evidence and the process of reasoning itself may not be of a
probabilistic nature. Such considerations appear to be of great importance for social modelling in terms
of multi-agent systems, in which resolutions are being made collectively, first of all by majority of
votes. For studying such systems, behavioral aspects and factors relating to affecting and changing
opinions of agents appear to be of a first-rank significance. The model “state-probability of choice” just
aims to point out such behavioral aspects in a more or less clear and articulate way.
    We consider possible wishes of influencers who are trying to increase or decrease levels of support
for certain decisions and resolutions. We have explicitly introduced states representing possible agents’
attitudes to certain resolutions, and within the suggested approach agents of influence can affect the
transition probabilities between these states. Another way of influence is to affect degrees of certainty
about the evidence and levels of trust in the sources of evidence.
    Within this context, finding situations of a dynamic equilibrium appears to be a quite important
issue. Dynamic equilibrium means that none of the alternatives has an advantage over the others. In the
particular case regarded in the paper, dynamic equilibrium means that there is an equal probability that
a considered resolution will be accepted or rejected. So, if the resolution is going to fail, and an
influencer wants to maintain and push it forward, they may try first of all to reach the nearest point of
the dynamic equilibrium and then to move away from it in a desired direction [11]. In the paper, some
ways of finding a dynamic equilibrium for the case of two contradictory facts with the same corollary
based on selecting relevant weights of these facts have been suggested.
    Even though the model figures out some parameters that agents of influence can affect, namely the
transition probabilities between states and the levels of certainty and trust, influencers typically are not
able to affect these parameters directly. Instead, some information influences should be delivered, and
exploring the possible effects of such information influences is a special issue in itself. It appears
promising to apply methods of game theory, algorithmic game theory and the theory of mechanism
design [20, 21] as well. After all, it is worth mentioning that the multi-node network approach
considered in this paper seems to be rather fruitful for developing distributed multi-agent architectures
for various applications such as [22, 23, 24, 25] etc.

9. References
[1] A. Darwiche. Modeling and Reasoning with Bayesian Networks. Cambridge University Press, 2009.
[2] P. Jackson. Introduction to expert systems. Addison Wesley, 1999.


[3] L. Bellomarini, D. Fakhoury, G. Gottlob, E. Sallinger. Knowledge graphs and enterprise AI: the
     promise of an enabling technology. In ICDE, pp. 26–37. IEEE, 2019.
[4] P. Atzeni, L. Bellomarini, M. Iezzi, E. Sallinger, A. Vlad. Weaving enterprise knowledge graphs:
     The case of company ownership graphs. In EDBT, pp. 555–566. OpenProceedings.org (2020).
[5] S. Vosoughi, D. Roy, S. Aral. The spread of true and false news online. Science 359 (2018)
     1146–1151.
[6] A. Aref, T. Tran. An integrated trust establishment model for the internet of agents. Knowledge
     and Information Systems, 62:79–105, 2020.
[7] K. K. Fullam, T. B. Klos, G. Muller, J. Sabater, A. Schlosser, Z. Topol, K. S. Barber, J. S.
     Rosenschein, L. Vercouter, M. Voss, A specification of the agent reputation and trust (art) testbed:
     Experimentation and competition for trust in agent societies. In Proc. 4th Int. Joint Conf. Auto.
     Agents Multiagent Syst. (2005) 512-518.
[8] H. Yu, Z. Shen, C. Leung, C. Miao, V. Lesser. A survey of multi-agent trust management systems.
     IEEE Access, 1 (2013) 35–50.
[9] R. Beneduci. Stochastic matrices and a property of the infinite sequences of linear functionals.
     Linear Algebra and its Applications 433 (2010) 1224–1239.
[10] O.V. Oletsky, E.V. Ivohin. Formalizing the Procedure for the Formation of a Dynamic
     Equilibrium of Alternatives in a Multi-Agent Environment in Decision-Making by Majority of
     Votes. Cybern Syst Anal Vol.57 (2021) 47-56. doi: https://doi.org/10.1007/s10559-021-00328-y.
[11] O. Oletsky. Exploring Dynamic Equilibrium Of Alternatives On The Base Of Rectangular
     Stochastic Matrices. CEUR Workshop Proceedings 2917, CEUR-WS.org 2021. http://ceur-
     ws.org/Vol-2917/ (2021) 151-160.
[12] J.R. Weaver. Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues, and
     eigenvectors. Amer. Math. Monthly 92 (1985) 711–717.
[13] A. Melman. Symmetric centrosymmetric matrix-vector multiplication. Linear Algebra Appl. 320
     (2000) 193–198
[14] S. Russell, P. Norvig. Artificial Intelligence: A Modern Approach. Pearson Education, Inc., 2003.
[15] A. Aref, T. Tran. Rlte: A reinforcement learning based trust establishment model. In 2015 IEEE
     Trustcom/BigDataSE/ISPA, pp. 694–701, 2015.
[16] J.-Y. Béziau, W. Carnielli, D. Gabbay (Eds.). Handbook of Paraconsistency. London, King's
     College, 2007.
[17] W. Carnielli, M. Coniglio, J. Marcos (2007). Logics of Formal Inconsistency, in: D. Gabbay, F.
     Guenthner (Eds.). Handbook of Philosophical Logic, Volume 14 (2nd ed.). The Netherlands:
     Kluwer Academic Publishers 1–93 (2007).
[18] A.F. Voloshin, G.N. Gnatienko, E.V. Drobot. A Method of Indirect Determination of Intervals of
     Weight Coefficients of Parameters for Metricized Relations Between Objects. Journal of
     Automation and Information Sciences, Vol. 35 (2003) 1-4.
[19] H. Hnatiienko, V. Snytyuk. A posteriori determination of expert competence under uncertainty.
     Selected Papers of the XIX International Scientific and Practical Conference "Information
     Technologies and Security" (ITS 2019) (2019) 82–99.
[20] T. Borgers, D. Krahmer, R. Strausz. An introduction to the theory of mechanism design. Oxford
     University Press, 2015.
[21] T. Roughgarden. Twenty lectures on algorithmic game theory. Cambridge University Press, 2016.
[22] A.S. Palau, M.H. Dhada, A.K. Parlikad. Multi-agent system architectures for collaborative
     prognostics. J Intell Manuf 30 (2019) 2999–3013. https://doi.org/10.1007/s10845-019-01478-9.
[23] K. Upasani, M. Bakshi, V. Pandhare, B.K. Lad. Distributed maintenance planning in
     manufacturing industries. Computers & Industrial Engineering, 108, 1–14 (2017).
[24] N. Kiktev, A. Didyk, M. Antonevych. Simulation of Multi-Agent Architectures for Fruit and Berry
     Picking Robot in Active-HDL. IEEE International Conference on Problems of
     Infocommunications Science and Technology, PIC S and T 2020 - Proceedings, 2021, 635–640
     (2021). DOI: 10.1109/PICST51311.2020.9467936.
[25] S. Wang, J. Wan, D. Zhang, D. Li, C. Zhang. Towards smart factory for industry 4.0: A self-
     organized multi-agent system with big data based feedback and coordination. Computer Networks,
     101, 158–168 (2016).

