On Applying the Structured Model “State-Probability of Action”
to Multi-Criteria Decision Making and Contradictory Reasoning
Oleksiy Oletsky1, Ivan Peleshchak2
1 National University of Kyiv-Mohyla Academy, Skovorody St., 2, Kyiv, 04070, Ukraine
2 Lviv Polytechnic National University, Lviv, 79013, Ukraine


                 Abstract
                 The approach to applying the structured model “state-probability of action” to decision making
                 and reasoning under contradictory information is further developed. It is formulated in terms of
                 nodes, each of which should be associated with a rectangular stochastic matrix “state-probability
                 of action”; decision making, reasoning etc. are carried out mainly by operating with matrices and
                 vectors. A way of constructing matrices “state-probability of action” on the basis of fuzzy sets
                 and their membership functions is suggested. The problem of equilibrium between two
                 alternatives is explored for such a network. Some examples are provided; one of them shows how
                 the structured model “state-probability of action” can be combined with some elements of the
                 Analytic Hierarchy Process for tackling the problem of multi-criteria decision making.

                 Keywords
                 Multi-criteria decision making, model “state-probability of action”, fuzzy sets, equilibrium of
                 alternatives, contradictory reasoning, Analytic Hierarchy Process

1. Introduction. Related works
    Currently there is a growing interest in models of individual and collective decision making in a
multi-agent environment, especially in models that take into account behavioral aspects of agents’
actions and various factors affecting their decisions. We also consider agents of influence, whose goal
is to make other agents accept decisions desirable for the influencers. Provided that an influencer is
aware of a supposed parameterized model describing the behavior of agents, they can try to affect the
decisions of other agents by manipulating the parameters of that model, probably in an indirect way,
by sharing certain information with other agents.
    In [1], one possible approach to constructing such models was suggested. This approach is based
on considering a system of states, each corresponding to a possible distribution of probabilities that an
agent will choose the available decisions while being in that state. A random walk across those states
was considered as well. A model based on this approach was called the “state-probability of action”
(or sometimes “state-probability of choice”) model. In [2], some parameters for this sort of model were
introduced and explored.
    For describing complex decision making influenced by a set of different factors, it appears
promising to join together separate nodes, each of which corresponds to a particular judgement and is
described by a separate model “state-probability of action”, into a network reflecting relations between
those nodes. A basic approach to describing such relations, that is the structured model
“state-probability of action”, was suggested in [3], but this approach needs further development.


MoMLeT+DS 2022: 4th International Workshop on Modern Machine Learning Technologies and Data Science, November, 25-26, 2022,
Leiden-Lviv, The Netherlands-Ukraine
EMAIL: oletsky@ukma.edu.ua (O. Oletsky); ivan.r.peleshchak@lpnu.ua (I. Peleshchak)
ORCID: 0000-0002-0553-5915 (O.Oletsky); 0000-0002-7481-8628 (I. Peleshchak)
              © 2022 Copyright for this paper by its authors.
              Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
              CEUR Workshop Proceedings (CEUR-WS.org)
    A specific issue is how to use fragments of the network formed by “state-probability of action”
nodes for describing uncertain logical reasoning, especially when this reasoning is carried out on the
basis of contradictory evidence. This is the problem this paper is devoted to.


2. Methodology, model, and techniques: “state-probability of choice” nodes
   and connections between them
    Formally, for each node $Q$ we have to specify a system of states $S^{(Q)} = \{s_1^{(Q)}, s_2^{(Q)}, \dots\}$. Let $\xi^{(Q)}$ be a
random variable indicating which state from $S^{(Q)}$ an agent is located in at the moment. We have to
consider a vector of probabilities

$$\bar{p}^{(Q)} = (\bar{p}_1^{(Q)}, \bar{p}_2^{(Q)}, \dots),$$

where $\bar{p}_i^{(Q)}$ is the probability that the agent is in the state $s_i^{(Q)}$:

$$\bar{p}_i^{(Q)} = P(\xi^{(Q)} = s_i^{(Q)}).$$

    Instead of specifying $\bar{p}^{(Q)}$ explicitly, we might consider a random walk across $Q$ and the transitional
probabilities of the corresponding Markov chain, as it has been done in [1].
    Let $A$ and $B$ be connected nodes, where $B$ is a successor of $A$. We consider the relation $A \to B$ as an
uncertain one, which means that the state from $S^{(B)}$ the agent is located in depends on their state from
$S^{(A)}$. We specify the following probabilities:

$$y_{ij}^{(A \to B)} = P\left(\xi^{(B)} = s_j^{(B)} \mid \xi^{(A)} = s_i^{(A)}\right).$$

    We can introduce the $(m_A \times m_B)$-matrix $Y^{(A \to B)} = (y_{ij}^{(A \to B)})$, where $m_A = |S^{(A)}|$, $m_B = |S^{(B)}|$, and
$y_{ij}^{(A \to B)}$ are described above.
    The matrix $Y^{(A \to B)}$ belongs to the class of so-called rectangular stochastic matrices [1, 3, 4], which
is a generalization of the well-known square stochastic matrices. A rectangular matrix $W = (w_{ij})$,
$i = \overline{1, m}$, $j = \overline{1, n}$, is said to be rectangular stochastic if it satisfies the following requirements:

$$\forall i: \ \sum_{j=1}^{n} w_{ij} = 1,$$

$$\forall i, j: \ 0 \le w_{ij} \le 1,$$

which means that the sum of elements in each row equals 1, but the matrix may not be square.
    As it was stated in [3], within such a notation in terms of vectors and matrices the following relation
takes place:

$$\bar{p}^{(B)} = \bar{p}^{(A)} \cdot Y^{(A \to B)}.$$

    Terminal nodes, which have no successors, have another meaning. They are related directly to the
action of decision making. Let there be $n$ alternatives. We can introduce the “state-probability of action”
model as follows: let its $m$ states represent possible distributions of choice probabilities, and let each
element $h_{ij}$, $i = \overline{1, m}$, $j = \overline{1, n}$, of the matrix $H = (h_{ij})$ be the probability that an agent will choose the
$j$-th alternative while being in the $i$-th state.
    Let’s introduce the vector $p = (p_j)$, $j = \overline{1, n}$, where $p_j$ is the probability that an agent will choose the
$j$-th alternative. Then

$$p = \bar{p}^{(H)} \cdot H.$$
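
    To make this matrix-and-vector machinery concrete, here is a minimal sketch in Python/NumPy (the function names are ours and purely illustrative) that checks the rectangular stochastic property, propagates a state-probability vector along a relation A → B, and computes the choice probabilities at a terminal node.

```python
import numpy as np

def is_rectangular_stochastic(W: np.ndarray, tol: float = 1e-9) -> bool:
    """All entries lie in [0, 1] and every row sums to 1 (the matrix need not be square)."""
    return bool(np.all(W >= -tol) and np.all(W <= 1 + tol)
                and np.allclose(W.sum(axis=1), 1.0, atol=tol))

def propagate(p_a: np.ndarray, Y_ab: np.ndarray) -> np.ndarray:
    """p^(B) = p^(A) * Y^(A->B): push a state-probability vector through the relation A -> B."""
    assert is_rectangular_stochastic(Y_ab)
    return p_a @ Y_ab

def choice_probabilities(p_h: np.ndarray, H: np.ndarray) -> np.ndarray:
    """p = p^(H) * H: probabilities of choosing each of the n alternatives at a terminal node."""
    assert is_rectangular_stochastic(H)
    return p_h @ H
```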

3. Equilibrium of alternatives
    We will consider the most important particular case when there are two alternatives ($n = 2$). For this
case it is especially important to consider the situation of equilibrium of alternatives. This means that
no alternative has an advantage over the other, and

$$p = (0.5, 0.5).$$

    Equilibrium of alternatives is of great importance for collective decision making by majority of votes
[1, 2]. It can be shown that if the number of agents is large enough, a situation of equilibrium is the only
situation in which the alternatives are on a par with each other and are chosen in turn. Otherwise, one
alternative holds a steady advantage over the other and wins permanently. But there can be different
situations of equilibrium. So, if an agent of influence wants to boost an alternative which is currently
losing, they can try to reach one of the equilibrium situations and then move away from it in the desired
direction. In [1] some sufficient conditions for equilibrium situations were found, which rely
significantly on principles of symmetry. This will be illustrated below.
    To make the further considerations clearer, let’s consider a basic illustrative example.

   Example 1
   Let the terminal matrix H be as follows [1]:

$$H = \begin{pmatrix} 1 & 0 \\ 0.9 & 0.1 \\ 0.75 & 0.25 \\ 0.6 & 0.4 \\ 0.5 & 0.5 \\ 0.4 & 0.6 \\ 0.25 & 0.75 \\ 0.1 & 0.9 \\ 0 & 1 \end{pmatrix}$$


   As it was shown in [1], for securing the situation of equilibrium between the alternatives, the vector
$\bar{p}^{(H)}$ should be symmetric. For example, let’s take the following one:

$$\bar{p}^{(H)} = (0.3, 0.1, 0, 0, 0.2, 0, 0, 0.1, 0.3).$$

   Then

$$p = \bar{p}^{(H)} \cdot H = (0.5, 0.5).$$

    This means that the alternatives should be chosen with equal probabilities, and the equilibrium of
alternatives holds.
    Let’s assume that an agent of influence succeeds in changing the vector $\bar{p}^{(H)}$, for example, as
follows:

$$\bar{p}^{(H)} = (0.3, 0.1, 0, 0.1, 0.1, 0, 0, 0.1, 0.3).$$

    Then $p = (0.51, 0.49)$. The equilibrium of alternatives has been broken, and the first alternative will
now win permanently.
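
    A quick numerical check of Example 1 (a minimal NumPy sketch; the matrix and both vectors are exactly those given above) confirms both outcomes.

```python
import numpy as np

# Terminal matrix H from Example 1: 9 states over 2 alternatives.
H = np.array([[1.0, 0.0], [0.9, 0.1], [0.75, 0.25], [0.6, 0.4], [0.5, 0.5],
              [0.4, 0.6], [0.25, 0.75], [0.1, 0.9], [0.0, 1.0]])

p_sym = np.array([0.3, 0.1, 0.0, 0.0, 0.2, 0.0, 0.0, 0.1, 0.3])   # symmetric vector
print(p_sym @ H)    # [0.5  0.5 ]  -> equilibrium of alternatives

p_new = np.array([0.3, 0.1, 0.0, 0.1, 0.1, 0.0, 0.0, 0.1, 0.3])   # perturbed vector
print(p_new @ H)    # [0.51 0.49]  -> equilibrium is broken
```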
    The problem is that the matrix H is postulated in a rather arbitrary way. There is no sound reason for
its elements to be chosen as they are, and they could have very different values. In addition, the states
represented by H can hardly be clearly interpreted. Now we are going to discuss how this problem might
be tackled.

4. Getting matrices for terminal nodes
    One approach to making the model “state-probability of choice” more structured and intelligible
was suggested in [3]. Firstly, it comprises distinguishing groups of states having a more or less clear
interpretation. For instance, such groups can be as follows:
    • proponents of a decision;
    • those who hesitate;
    • opponents of a decision.
    The number of groups can be larger.
    In terms of the concepts introduced in Section 2, this is a separate node associated with a model
“state-probability of action” with its own system of states $L$ connected to the terminal one. So, we should
somehow determine a transition matrix $Y^{(L \to H)}$ from $L$ to $H$. Then we can get the more meaningful
aggregated matrix $H^* = Y^{(L \to H)} \cdot H$. Given a vector $\bar{p}^{(L)}$, which specifies the probabilities that an agent
is in each state of $L$, the vector of choice probabilities $p$ takes the form

$$p = \bar{p}^{(L)} \cdot Y^{(L \to H)} \cdot H = \bar{p}^{(L)} \cdot H^*.$$

   Again, a question of equilibrium arises, and now it is connected with so-called centrosymmetric
matrices [5, 6]. An $(m \times n)$-matrix $A$ is said to be centrosymmetric if

$$a_{ij} = a_{m+1-i,\,n+1-j}, \quad \forall i = \overline{1, m}; \ j = \overline{1, n}.$$

   It is known that if both matrices $Y^{(L \to H)}$ and $H$ are centrosymmetric, then their product $H^*$ is
centrosymmetric as well. Then, if $\bar{p}^{(L)}$ is a symmetric vector and $H^*$ is a centrosymmetric matrix, the
vector $p$ shall be symmetric, that is $p = (0.5, 0.5)$, and therefore the equilibrium of alternatives holds.
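
    Both the aggregation and the centrosymmetry condition are easy to verify computationally; below is a minimal NumPy sketch (the helper names are ours).

```python
import numpy as np

def is_centrosymmetric(A: np.ndarray, tol: float = 1e-9) -> bool:
    """a_ij == a_{m+1-i, n+1-j}: the matrix is invariant under rotation by 180 degrees."""
    return bool(np.allclose(A, A[::-1, ::-1], atol=tol))

def aggregate(Y_lh: np.ndarray, H: np.ndarray) -> np.ndarray:
    """H* = Y^(L->H) * H: aggregated matrix over the interpretable groups of states."""
    return Y_lh @ H

# If Y^(L->H) and H are both centrosymmetric, is_centrosymmetric(aggregate(Y_lh, H))
# returns True, and any symmetric vector p^(L) then yields p = p^(L) @ H* = (0.5, 0.5).
```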
   In [3] the matrix $Y^{(L \to H)}$ was just specified explicitly. But it seems reasonable to elaborate more
flexible approaches to obtaining this matrix.

5. A fuzzy approach to getting the transitional matrix

    For specifying the transitional matrix $Y^{(L \to H)}$, we suggest an approach based on fuzzy sets. To
formulate the idea more or less formally, let’s consider a family of fuzzy sets $U(l, H)$ with membership
functions $\mu_{U(l,H)}(x)$, which indicate the grade to which a state $x$ represented by $H$ relates to a certain
state $l$ from $L$. For instance, the state corresponding to the distribution (0.6, 0.4) relates in large measure
to the state corresponding to hesitating agents, but not to proponents or opponents of the decision.
    So, we can get a matrix $U = (u_{lj})$, where $u_{lj} = \mu_{U(l,H)}(s_j^{(H)})$. The matrix $Y^{(L \to H)}$ can be obtained
from this matrix by means of the well-known exponential transformation

$$y_{lj}^{(L \to H)} = \frac{e^{\beta u_{lj}}}{\sum_{k} e^{\beta u_{lk}}},$$

    where $\beta > 0$ is a certain parameter.
    This looks similar to what we did in [2], but in that paper we took into consideration neither related
systems of states nor fuzzy sets.
    Let’s illustrate this with the following example.

   Example 2
   Let’s take the terminal matrix H corresponding to a certain decision to be the same as in Example 1,
and let there be three groups of states on the level L: proponents of the decision, hesitating agents and
opponents of the decision. We can take U as follows:

$$U = \begin{pmatrix} 1 & 0.8 & 0.3 & 0.1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0.1 & 0.4 & 0.9 & 1 & 0.9 & 0.4 & 0.1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0.1 & 0.3 & 0.8 & 1 \end{pmatrix}$$


   This matrix is centrosymmetric, so Y has to be centrosymmetric as well. If we take $\beta = 1$, it
approximately equals

$$Y^{(L \to H)} \approx \begin{pmatrix} 0.2192 & 0.1795 & 0.1089 & 0.0891 & 0.0807 & 0.0807 & 0.0807 & 0.0807 & 0.0807 \\ 0.0674 & 0.0745 & 0.1006 & 0.1658 & 0.1833 & 0.1658 & 0.1006 & 0.0745 & 0.0674 \\ 0.0807 & 0.0807 & 0.0807 & 0.0807 & 0.0807 & 0.0891 & 0.1089 & 0.1795 & 0.2192 \end{pmatrix}$$


   Then the aggregated matrix

$$H^{*(\beta=1)} = Y^{(L \to H)} \cdot H \approx \begin{pmatrix} 0.6167 & 0.3833 \\ 0.5 & 0.5 \\ 0.3833 & 0.6167 \end{pmatrix}$$

    is centrosymmetric. Equilibrium of alternatives will hold for any symmetric vector $\bar{p}^{(L)}$.
    As in [2], the parameter $\beta$ can be interpreted as a parameter indicating the degree of agents’
decisiveness, and an influencer can try to manipulate this parameter. The resulting matrix
$H^{*(\beta=1)}$ appears not to be very good due to the low value of $\beta$. If we take an increased value, for
instance $\beta = 5$, which means that the agents become more decisive, we will get

$$H^{*(\beta=5)} \approx \begin{pmatrix} 0.9487 & 0.0513 \\ 0.5 & 0.5 \\ 0.0513 & 0.9487 \end{pmatrix}$$
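
    The whole of Example 2 can be reproduced with the following NumPy sketch (U and H are the matrices given above; the helper name exp_transform is ours).

```python
import numpy as np

def exp_transform(U: np.ndarray, beta: float) -> np.ndarray:
    """Row-wise exponential transformation: y_lj = exp(beta*u_lj) / sum_k exp(beta*u_lk)."""
    E = np.exp(beta * U)
    return E / E.sum(axis=1, keepdims=True)

# Membership grades of the 9 terminal states w.r.t. proponents / hesitating agents / opponents.
U = np.array([[1, 0.8, 0.3, 0.1, 0, 0, 0, 0, 0],
              [0, 0.1, 0.4, 0.9, 1, 0.9, 0.4, 0.1, 0],
              [0, 0, 0, 0, 0, 0.1, 0.3, 0.8, 1]], dtype=float)

# Terminal matrix H from Example 1.
H = np.array([[1, 0], [0.9, 0.1], [0.75, 0.25], [0.6, 0.4], [0.5, 0.5],
              [0.4, 0.6], [0.25, 0.75], [0.1, 0.9], [0, 1]], dtype=float)

for beta in (1.0, 5.0):
    Y = exp_transform(U, beta)            # rectangular stochastic and centrosymmetric
    H_star = Y @ H                        # aggregated matrix H*
    print(beta, np.round(H_star, 4))      # matches the H* values above (up to rounding)
```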


   Another promising approach is related to introducing fuzzy numbers of different kinds, such as
triangular or trapezoidal ones.

6. The model “state-probability of action” and reasoning
    Now we are going to show how the structured model “state-probability of action” can be applied to
uncertain and contradictory reasoning. Reasoning is typically carried out on the basis of knowledge, by
applying certain rules of inference to the known facts. But if we regard an agent’s knowledge as a
logical system, it may be contradictory (especially if the agent is a human being). Therefore, for a
statement $A$ an agent may infer both $A$ and its negation $\bar{A}$. In such a situation, we consider a probabilistic
approach, which means that an agent can accept $A$ with a certain probability, and this probability can
be calculated with the help of the model “state-probability of action”. In this paper we are going to
develop an approach preliminarily outlined in [7], re-formulate it in terms of systems of states, and
provide some more detailed examples.
    Let an agent consider a decision $L$ and therefore two related alternatives: to accept $L$ or to reject $L$.
Within the model “state-probability of action” we have to introduce systems of states corresponding to
the rules of inference. For a single inference rule $A \Rightarrow B$ we consider two systems of states: $S^{(A)}$ and
$S^{(B)}$. So, we have to specify the vector $\bar{p}^{(A)}$ and the matrix $Y^{(A \Rightarrow B)}$; then $\bar{p}^{(B)}$ can be obtained by the
formula given in Section 2. If the rule is a terminal one, that is the rule is $A \Rightarrow L$, then we should specify
the vector $\bar{p}^{(A)}$, the matrix $Y^{(A \Rightarrow L)}$ and, in addition, the matrix $H^*$ described above.
     Finally, for the rule $A \Rightarrow L$ the vector of probabilities equals

$$p = \bar{p}^{(A)} \cdot Y^{(A \Rightarrow L)} \cdot H^*. \qquad (1)$$

   Multipliers in equation (1) can be grouped in different ways. It can be rewritten in the following
form:

$$p = \bar{p}^{(A)} \cdot R,$$

   where $R = Y^{(A \Rightarrow L)} \cdot H^*$.
   Surely, a chain of logical inference may be longer.
   If both $Y^{(A \Rightarrow L)}$ and $H^*$ are centrosymmetric rectangular stochastic matrices, then their product $R$
shall be a centrosymmetric rectangular stochastic matrix as well. Provided that $\bar{p}^{(A)}$ is a symmetric
vector and $R$ is a centrosymmetric matrix, the equilibrium of alternatives holds.
   Another equivalent form of (1) can be constructed as follows:

$$p = \bar{p}^{(L)} \cdot H^*, \qquad (2)$$

$$\bar{p}^{(L)} = \bar{p}^{(A)} \cdot Y^{(A \Rightarrow L)}.$$

   We will use this form below.
    It is possible that the decision $L$ can be affected by different factors, and this typically can lead to
contradictions. To take a closer look, let’s consider the following set of rules:

$$A_1 \Rightarrow L,$$
$$\dots$$
$$A_q \Rightarrow L.$$

   By applying equation (2) we can get $q$ different vectors:

$$\bar{p}^{(L)(k)} = \bar{p}^{(A_k)} \cdot Y^{(A_k \Rightarrow L)}, \quad k = \overline{1, q}.$$

   The vector $\bar{p}^{(L)}$ can be obtained by combining all $\bar{p}^{(L)(k)}$. In particular, we can take their convex
combination:

$$\bar{p}^{(L)} = \sum_{k=1}^{q} w_k \, \bar{p}^{(L)(k)}, \qquad (3)$$

$$0 \le w_k \le 1,$$

$$\sum_{k=1}^{q} w_k = 1.$$


   We assume that the matrix $H^*$ is centrosymmetric. It can be shown that if all vectors $\bar{p}^{(L)(k)}$ are
symmetric, their convex combination is symmetric, and therefore the equilibrium of alternatives holds
for any values of $w_k$. Otherwise, for ensuring equilibrium, proper values of $w_k$ should be specially
picked.
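
   Computationally, formula (3) is just a weighted average of state-probability vectors; a minimal sketch (the helper name combine_evidence is ours) is shown below and is reused implicitly in the examples of the next section.

```python
import numpy as np

def combine_evidence(vectors, weights):
    """Formula (3): convex combination of the vectors p^(L)(k) with weights w_k."""
    vectors = np.asarray(vectors, dtype=float)   # shape (q, number of states)
    weights = np.asarray(weights, dtype=float)   # shape (q,)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return weights @ vectors
```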
   Let’s consider some examples.
7. Examples of combining evidence
   Example 3
   Firstly, we are going to illustrate one popular rule of what may be called paranormal logic, namely
the rule “If A implies B and B is desirable, then A is true”. Surely, this rule is incorrect from the logical
point of view, but people are often driven by it in practice.
   So, we have the rule $A \Rightarrow B$, where $B$ is the terminal node here. As it was explained before,

$$p = \bar{p}^{(A)} \cdot Y^{(A \Rightarrow B)} \cdot H^*.$$

   As for $A$, one piece of evidence is the uncertain information on whether it is true or false. It appears
more flexible to pick out a separate rule $K \Rightarrow A$, where $K$ denotes the statement “There is evidence
that A is true”. Based on this rule, we can get the vector

$$\bar{p}^{(A)(1)} = \bar{p}^{(K)} \cdot Y^{(K \Rightarrow A)}.$$

   Another rule is $W \Rightarrow A$, where $W$ denotes the statement “B is desirable”. So,

$$\bar{p}^{(A)(2)} = \bar{p}^{(W)} \cdot Y^{(W \Rightarrow A)},$$

    and the vector $\bar{p}^{(A)}$ should be obtained by combining $\bar{p}^{(A)(1)}$ and $\bar{p}^{(A)(2)}$.
    For all rules we take systems of states similar to the one we used for getting the aggregated matrix
$H^*$ (proponents, opponents and hesitating agents).
    Let’s postulate the specific values.
    The matrix $H^*$ will be taken from Example 2 with $\beta = 5$:

$$H^* = \begin{pmatrix} 0.9487 & 0.0513 \\ 0.5 & 0.5 \\ 0.0513 & 0.9487 \end{pmatrix}$$

   For the rule $A \Rightarrow B$ we take the matrix

$$Y^{(A \Rightarrow B)} = \begin{pmatrix} 0.8 & 0.2 & 0 \\ 0.3 & 0.4 & 0.3 \\ 0 & 0.2 & 0.8 \end{pmatrix}$$

   For the sake of simplicity, we take the same matrices $Y^{(K \Rightarrow A)}$ and $Y^{(W \Rightarrow A)}$:

$$Y^{(K \Rightarrow A)} = Y^{(W \Rightarrow A)} = \begin{pmatrix} 0.9 & 0.1 & 0 \\ 0.1 & 0.8 & 0.1 \\ 0 & 0.1 & 0.9 \end{pmatrix}$$

   But the vectors $\bar{p}^{(K)}$ and $\bar{p}^{(W)}$ are very different. Assuming that there is no reliable information about
$A$, we may take

$$\bar{p}^{(K)} = (0.1, 0.2, 0.7).$$

   But if a majority of agents wants B to be accepted, we may take

$$\bar{p}^{(W)} = (0.7, 0.2, 0.1).$$

   Then

$$\bar{p}^{(A)(1)} = \bar{p}^{(K)} \cdot Y^{(K \Rightarrow A)} = (0.11, 0.24, 0.65),$$

$$\bar{p}^{(A)(2)} = \bar{p}^{(W)} \cdot Y^{(W \Rightarrow A)} = (0.65, 0.24, 0.11).$$

  If we performed reasoning on the basis of the available knowledge about $A$ only, our further
calculations would be as follows:

$$p = \bar{p}^{(A)(1)} \cdot Y^{(A \Rightarrow B)} \cdot H^* = (0.3062, 0.6938),$$

   which means that B should be rejected.
   Similarly, if we proceeded with the reasoning on the basis of $\bar{p}^{(A)(2)}$ only, B should be accepted. And
if we carry out the combining

$$\bar{p}^{(A)} = 0.5 \cdot \bar{p}^{(A)(1)} + 0.5 \cdot \bar{p}^{(A)(2)} = (0.38, 0.24, 0.38),$$

   we will get

$$p = \bar{p}^{(A)} \cdot Y^{(A \Rightarrow B)} \cdot H^* = (0.5, 0.5),$$

   and the equilibrium of alternatives will hold.
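
   Example 3 can be reproduced end to end with the following NumPy sketch (all matrices and vectors are the ones specified above).

```python
import numpy as np

H_star = np.array([[0.9487, 0.0513], [0.5, 0.5], [0.0513, 0.9487]])      # H* with beta = 5
Y_AB   = np.array([[0.8, 0.2, 0.0], [0.3, 0.4, 0.3], [0.0, 0.2, 0.8]])   # rule A => B
Y_KA   = np.array([[0.9, 0.1, 0.0], [0.1, 0.8, 0.1], [0.0, 0.1, 0.9]])   # rule K => A
Y_WA   = Y_KA                                                            # rule W => A

p_K = np.array([0.1, 0.2, 0.7])    # no reliable evidence that A is true
p_W = np.array([0.7, 0.2, 0.1])    # the majority wants B to be accepted

p_A1 = p_K @ Y_KA                  # (0.11, 0.24, 0.65)
p_A2 = p_W @ Y_WA                  # (0.65, 0.24, 0.11)

print(np.round(p_A1 @ Y_AB @ H_star, 4))   # evidence only: [0.3062 0.6938] -> B is rejected
print(np.round(p_A2 @ Y_AB @ H_star, 4))   # wishes only:   [0.6938 0.3062] -> B is accepted

p_A = 0.5 * p_A1 + 0.5 * p_A2              # combining with weights w = (0.5, 0.5)
print(np.round(p_A @ Y_AB @ H_star, 4))    # [0.5 0.5] -> equilibrium of alternatives
```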
   Another example illustrates how the structured model “state-probability of action” can be combined
with some elements of the Analytic Hierarchy Process (AHP) [8-12], which is a well-known method of
hierarchical multi-factor decision making.

   Example 4
   Assume there are two alternatives and agents are to choose one of them. Similarly to the AHP, we
consider decision making affected by multiple criteria. But instead of constructing pairwise comparison
matrices for each criterion, we try to introduce states reflecting degrees of advantage of one alternative
over the other. We stipulate the rule $A \Rightarrow L$ with the following meaning: “if an alternative L is better
with respect to any criterion, then L should be chosen”. We introduce the states for A as follows:
    • L is significantly better than the competing alternative;
    • L is better to some extent;
    • both alternatives are equivalent;
    • L is worse to some extent;
    • L is significantly worse.
    Certainly, systems of states may be quite different. For instance, states may correspond to the
standard grades of the Saaty scale, or we may use any other scale of preferences. Some reviews of
different scales for pairwise comparisons can be found in [13, 14].
    For the rule $A \Rightarrow L$ we will specify the following matrix:


$$Y^{(A \Rightarrow L)} = \begin{pmatrix} 0.95 & 0.05 & 0 \\ 0.6 & 0.3 & 0.1 \\ 0.1 & 0.8 & 0.1 \\ 0.1 & 0.3 & 0.6 \\ 0 & 0.05 & 0.95 \end{pmatrix}$$


    For each $k$-th criterion we will specify its particular vector $\bar{p}^{(A)(k)}$. Let there be 4 criteria, and

$$\bar{p}^{(A)(1)} = (0.9, 0.1, 0, 0, 0),$$
$$\bar{p}^{(A)(2)} = (0.6, 0.2, 0.1, 0.1, 0),$$
$$\bar{p}^{(A)(3)} = (0, 0.1, 0.1, 0.2, 0.6),$$
$$\bar{p}^{(A)(4)} = (0, 0, 0, 0.1, 0.9).$$

    Either L or the competing alternative gains an advantage with respect to particular criteria. For
combining the criteria by using formula (3), we have to specify the coefficients $w_k$. For instance, we can
take the Perronian vector (that is, the normalized principal eigenvector) of a pairwise comparison matrix
across criteria, which is absolutely typical for the AHP.
    Let the comparison matrix be

$$C = \begin{pmatrix} 1 & 2 & 3 & 4 \\ \frac{1}{2} & 1 & 2 & 3 \\ \frac{1}{3} & \frac{1}{2} & 1 & 2 \\ \frac{1}{4} & \frac{1}{3} & \frac{1}{2} & 1 \end{pmatrix}$$


    Then its Perronian vector approximately equals

$$w \approx (0.4673, 0.2772, 0.1601, 0.0954).$$


    With respect to this combined vector, the final probabilities of the alternatives are

$$p = (0.6836, 0.3164),$$

    and the chosen alternative shall be L.
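
    The computations of Example 4 can be reproduced with the sketch below (NumPy). The Perronian vector is obtained via eigendecomposition; as an assumption on our part, the aggregated matrix H* with β = 5 from Example 2 is used as the terminal matrix, which reproduces the probabilities given above.

```python
import numpy as np

# Pairwise comparison matrix across the 4 criteria.
C = np.array([[1, 2, 3, 4],
              [1/2, 1, 2, 3],
              [1/3, 1/2, 1, 2],
              [1/4, 1/3, 1/2, 1]])

# Perronian vector: the normalized eigenvector of the largest eigenvalue.
eigvals, eigvecs = np.linalg.eig(C)
v = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = v / v.sum()                      # ~ (0.4673, 0.2772, 0.1601, 0.0954)

# Criterion-specific state-probability vectors over the 5 preference states.
P = np.array([[0.9, 0.1, 0.0, 0.0, 0.0],
              [0.6, 0.2, 0.1, 0.1, 0.0],
              [0.0, 0.1, 0.1, 0.2, 0.6],
              [0.0, 0.0, 0.0, 0.1, 0.9]])

Y_AL = np.array([[0.95, 0.05, 0.0],
                 [0.6, 0.3, 0.1],
                 [0.1, 0.8, 0.1],
                 [0.1, 0.3, 0.6],
                 [0.0, 0.05, 0.95]])

H_star = np.array([[0.9487, 0.0513], [0.5, 0.5], [0.0513, 0.9487]])  # assumed: beta = 5

p_A = w @ P                                # formula (3): convex combination across criteria
print(np.round(p_A @ Y_AL @ H_star, 4))    # ~ [0.6836 0.3164] -> L is chosen
```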
    Equilibrium of alternatives will hold if the Perronian vector of C is symmetric. This can take place
if C is centrosymmetric, for example if it is as follows:

$$C = \begin{pmatrix} 1 & \frac{1}{4} & \frac{1}{4} & 1 \\ 4 & 1 & 1 & 4 \\ 4 & 1 & 1 & 4 \\ 1 & \frac{1}{4} & \frac{1}{4} & 1 \end{pmatrix}$$


    In this paper we don’t consider the possible consistency or inconsistency of centrosymmetric
pairwise comparisons; this issue needs to be studied separately.
    If a pairwise comparison matrix is considered as a parameter of a behavioral model, an influencer
can try to affect these comparisons, maybe by influencing experts to change their opinions and thereby
to modify their comparisons.

8. Results, conclusions and discussion
   In this paper, the structured model “state-probability of action” has been further developed, which
opens the prospect of constructing a model of decision making under uncertain and/or contradictory
information on the basis of a network of connected nodes, each of which implements the model “state-
probability of action”. At a closer look, each node should be associated with a rectangular stochastic
matrix “state-probability of action”, and decision making, reasoning etc. are carried out mainly by
operating with matrices and vectors. For given or assumed statements, vectors of initial probabilities
that an agent is in a certain state related to these statements are to be provided. Such vectors might be
specified explicitly, but they can also be obtained from a Markov chain with given transitional
probabilities, that is, the probabilities of transitions across the states.
    Such a model comprises operating with chains of reasoning and combining contradictory evidence.
To some extent it is similar to models such as probabilistic Bayesian networks, belief networks,
knowledge graphs etc. [15-18]. But the suggested model places a more articulate emphasis on behavioral
aspects of decision making and on considering possible contradictions in the available evidence.
    An approach to constructing matrices “state-probability of action” on the basis of fuzzy sets and their
membership functions has been suggested. It appears that such a fuzzy approach might be considerably
strengthened by using fuzzy numbers of different kinds (triangular, trapezoidal etc.).
    The model “state-probability of action” was designed to be parametrized. If such a model describes
a real situation of decision making and an agent of influence is aware of this model, then they can try
to manipulate the parameters with the aim of making other agents accept decisions desirable for
influencers. They can do it by sharing information with other agents; models aimed at describing
dissemination of information across communities are rapidly developing now [19-22].
    Some parameters for nodes implementing the structured model “state-probability of action” have
been suggested and discussed in [2] and, following that, in this paper. The main ones are as follows
(including but not limited to):
    • decisiveness of agents;
    • pairwise comparisons between the different criteria the final decision depends on, if the suggested
    approach is combined with the Analytic Hierarchy Process;
    • weighting coefficients in formula (3), which are proposed to be used for combining evidence;
    meaningfully, these coefficients may reflect how important and/or reliable those pieces of evidence
    are and how much we trust them;
    • fuzzy membership functions and types of fuzzy numbers which can be applied for forming
    matrices “state-probability of action”, especially for terminal nodes;
    • transitional probabilities across states.
    This list of parameters of the model surely can be extended.
    Two examples, which illustrate possible ways of applying the structured model “state-probability of
action” to contradictory decision making and reasoning, have been provided. One of them illustrates the
widespread, despite its actual incorrectness, rule of reasoning “If A implies B and B is desirable, then A
is true”, which relates to what can be characterized as paranormal logic and may be closely bound with
a conflict between the knowledge and the wishes of an agent, especially of a human being. The other
example illustrates the prospect of how the structured model “state-probability of action” can be
combined with some elements of the Analytic Hierarchy Process for tackling the problem of multi-
criteria decision making. This appears especially important if decisions are made algorithmically on the
basis of certain parametrized procedures.
    For both examples, the problem of equilibrium between alternatives, which can be found within the
model by combining contradictory pieces of evidence, has been explored. Such situations of equilibrium
can be found on the basis of symmetric vectors and centrosymmetric matrices; it appears interesting to
search for non-symmetric ones as well. If the model is combined with the AHP, it appears important to
investigate how consistent or inconsistent pairwise comparison matrices may be.
    As an overall final remark: the suggested model “state-probability of action”, insofar as it is a
probabilistic model admitting a clear fuzzy generalization and a model placing special emphasis on
behavioral aspects of decision making, can find various applications both for modeling individual and
collective decisions in socio-economic systems (political activity, information wars, fluctuations of
ratings gained by political parties, voting in elections etc.) and in multi-agent systems of algorithmic
decision making, especially if the decision rules applied in those systems are weakly formalized, unclear
and volatile.


9. References
[1] O. Oletsky, E. Ivohin, Formalizing the Procedure for the Formation of a Dynamic Equilibrium of
     Alternatives in a Multi-Agent Environment in Decision-Making by Majority of Votes, Cybern Syst
     Anal. 57-1 (2021) 47-56. doi: https://doi.org/10.1007/s10559-021-00328-y.
[2] O. Oletsky, Exploring Dynamic Equilibrium of Alternatives on the Base of Rectangular Stochastic
     Matrices, in: CEUR Workshop Proceedings 2917, CEUR-WS.org, 2021. http://ceur-ws.org/Vol-2917/,
     pp. 151-160.
[3] E. Ivokhin, O. Oletsky, Restructuring of the Model “State–Probability of Choice” Based on
     Products of Stochastic Rectangular Matrices, Cybern. Syst. Anal. 58-2 (2022) 242-250.
     doi: https://doi.org/10.1007/s10559-022-00456-z.
[4] R. Beneduci, Stochastic matrices and a property of the infinite sequences of linear functionals,
     Linear Algebra and its Applications 433 (2010) 1224–1239.
[5] J. R. Weaver, Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues and
     eigenvectors, Amer. Math. Monthly 92 (1985) 711–717.
[6] A. Melman, Symmetric centrosymmetric matrix-vector multiplication, Linear Algebra Appl. 320
     (2000) 193–198.
[7] O. Oletsky, A Model of Information Influences on the Base of Rectangular Stochastic Matrices in
     Chains of Reasoning with Possible Contradictions, in: CEUR Workshop Proceedings 3179 (IT&I
     Workshops 2021), pp. 354-361.
[8] T.L. Saaty, The Analytic Hierarchy Process, McGraw-Hill, New York, 1980.
[9] M. Brunelli, Introduction to the Analytic Hierarchy Process, Springer, Cham, 2015.
[10] O.S. Vaidya, S. Kumar, Analytic hierarchy process: An overview of applications, European
     Journal of Operational Research 169(1) (2006) 1-29
[11] A. Ishizaka, A. Labib, Review of the main developments in the analytic hierarchy process, Expert
     Syst. Appl. 38 (2011) 14336–14345.
[12] W. Ho. Integrated analytic hierarchy process and its applications. A literature review, European
     Journal of Operational Research 186(1) (2008) 211-228.
[13] E. Choo, W. Wedley, A Common Framework for Deriving Preference Values from Pairwise
     Comparison Matrices, Comput. Oper. Res. 31(6) (2004) 893-908.
[14] V. Tsyganok, S. Kadenko, O. Andriichuk, Usage of Scales with Different Number of Grades for
     Pair Comparisons in Decision Support Systems, International Journal of the Analytic Hierarchy
     Process 8(1) (2016) 112-130. doi: 10.13033/ijahp.v8i1.259.
[15] A. Darwiche, Modeling and Reasoning with Bayesian Networks, Cambridge University Press,
     2009.
[16] P. Jackson, Introduction to expert systems, Addison Wesley, 1999.
[17] L. Bellomarini, D. Fakhoury, G. Gottlob, E. Sallinger, Knowledge graphs and enterprise AI: the
     promise of an enabling technology, in: ICDE, IEEE, 2019, pp. 26–37
[18] P. Atzeni, L. Bellomarini, M. Iezzi, E. Sallinger, A. Vlad, Weaving enterprise knowledge graphs:
     The case of company ownership graphs, in: EDBT, OpenProceedings.org (2020), pp. 555–566.
[19] S. Vosoughi, D. Roy, S. Aral, The spread of true and false news online, Science 359 (03 2018)
     1146–1151.
[20] A. Aref, T. Tran, An integrated trust establishment model for the internet of agents, Knowledge
     and Information Systems, 62 (2020) 79–105
[21] K. K. Fullam, T. B. Klos, G. Muller, J. Sabater, A. Schlosser, Z. Topol, K. S. Barber, J. S.
     Rosenschein, L. Vercouter, M. Voss, A specification of the agent reputation and trust (art) testbed:
     Experimentation and competition for trust in agent societies, in: Proc. 4th Int. Joint Conf. Auto.
     Agents Multiagent Syst. (2005), pp. 512-518
[22] H. Yu, Z. Shen, C. Leung, C. Miao, V. Lesser, A survey of multi-agent trust management systems,
     IEEE Access, 1 (2013) 35–50