<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Modern Machine Learning Technologies, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Modeling changes of opinions as transition probabilities within one- and two-level model “State-Probability of Action”</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksiy Oletsky</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Peleshko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Moholivskyi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ivan Franko National University of Lviv</institution>
          ,
          <addr-line>Universytetska Str., 1, Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National University of Kyiv-Mohyla Academy</institution>
          ,
          <addr-line>Skovorody Str., 2, Kyiv, 04070</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>14</volume>
      <issue>2025</issue>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Approaches to modeling and simulating processes related to elections and to changing voters' opinions in bipartisan democracies on the basis of Markov chains are discussed. The basic approach suggested by the “state-probability of action” model (SPA model) is combined with ideas featured by pairwise comparisons and the Analytic Hierarchy Process. The one-level SPA model, focusing on election results, and the two-level model regarding criteria which affect decisions are considered. Some modifications of traditional homogeneous Markov chains, such as switching roles or random transition probabilities, are explored. Some approaches to using non-homogeneous Markov chains are outlined.</p>
      </abstract>
      <kwd-group>
        <kwd>Markov chains</kwd>
        <kwd>switching roles</kwd>
        <kwd>“state-probability of action” model</kwd>
        <kwd>pairwise comparisons</kwd>
        <kwd>Analytic Hierarchy Process</kwd>
        <kwd>making decisions</kwd>
        <kwd>elections</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], an approach to modeling elections in an established bipartisan democracy, which combines uncertain decisions with the Analytic Hierarchy Process (AHP) [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6 ref7">3-7</xref>
        ], was introduced. The main idea is to apply what is referred to as the “state-probability of action” model, or SPA-model, suggested in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], a parametrized version of this basic model was considered; it incorporates the ideas of pairwise comparisons and of the AHP, and also takes into account degrees of decisiveness typical for reinforcement learning techniques [
        <xref ref-type="bibr" rid="ref9">9, 10</xref>
        ].
      </p>
      <p>
        Decisions are typically made on the grounds of some criteria, which leads us to the very common and ubiquitous two-level scheme of the AHP, in which the top level corresponds to eventual decisions and the bottom level corresponds to the specific criteria. Some ways to implement similar ideas for the two-level SPA-model of decision making, on the basis of the approach suggested in [11], have been discussed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] as well.
      </p>
      <p>
        The question of modeling changes in voters’ opinions, which result in different outcomes of elections and therefore in changes of the parties in power, is of great interest. The basic and simplest point is finding and exploring situations of so-called dynamic equilibrium, when no alternative has an advantage over the others. Some sufficient conditions for an equilibrium between alternatives within the SPA-model, for the case when there are only two alternatives, were reported in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]; in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], some ways in which agents of influence might move the situation away from the equilibrium were described. But an approach considering only equilibrium situations is very basic and not sufficient; the question is much more profound. So, in this paper we are going to point out some other approaches to modeling changes in voters’ opinions.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>
        The basic one-level SPA-model, as described in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], is as follows. Let there be n alternatives forming a set A = {a_1, …, a_n}, one of which is to be chosen. Let ξ be a random variable corresponding to a certain choice. Then we can consider a vector p = (p_1, …, p_n), where p_i is the probability that the alternative a_i is chosen, that is
      </p>
      <p>p_i = P(ξ = a_i).</p>
      <p>Then we regard some states, each of which corresponds to a certain distribution of probabilities of making certain choices. More formally, let S = {s_1, …, s_m} be a given set of states, and η be a random variable denoting the state in which the agent is at the specific moment. We consider the conditional probabilities</p>
      <p>h_ij = P(ξ = a_j | η = s_i).</p>
      <p>These probabilities form a matrix denoted as H, which is called the SPA-matrix. Eventually, we have to specify the vector of input probabilities π̄ = (π_1, …, π_m), where</p>
      <p>π_i = P(η = s_i).</p>
      <p>
        Then, as was shown in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ],
      </p>
      <p>p = π̄ H.</p>
      <p>In this paper we regard the most important case n = 2, when there are only two competing alternatives. This particular case is typical for bipartisan democracies, where two competing political parties gain power in turn according to the results of elections. For this case, equilibrium means that p = (0.5, 0.5).</p>
      <p>
        If n = 2, equilibrium between alternatives holds if the π̄ vector is symmetric and the H matrix is centrosymmetric [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]; this follows directly from the properties of centrosymmetric matrices [12, 13].
      </p>
      <p>
        The H matrix might be specified in very different ways. The parametrized model with parameters (q, b, γ) suggested in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] appears to be quite suitable for our purposes. Within this approach, each state of the SPA-model corresponds to a certain level of preference of one alternative over the other in terms of pairwise comparisons. The q parameter reflects the granularity of preference levels: we suppose there are 2q + 1 preference levels, numbered from −q to q. For quantifying, that is, for ascribing certain values to those levels, we use so-called transitive scales of comparisons [14]. Then the value ascribed to the k-th grade of preference shall equal b^k.
      </p>
      <p>
        The γ parameter reflects the agent’s level of confidence. Given the preference values (v_1^(k), …, v_n^(k)) for the k-th state, the corresponding probabilities should be obtained as follows [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
      </p>
      <p>h_kj = (v_j^(k))^γ / Σ_l (v_l^(k))^γ.</p>
      <p>
        The bigger γ is, the more decisive the agent is. This approach is typical for reinforcement learning techniques [
        <xref ref-type="bibr" rid="ref9">9, 10</xref>
        ]. For example, the model with the parameters q = 3, b = 1.4, γ = 4.5 yields the following SPA-matrix (approximately):
      </p>
      <p>
        H ≈
        ⎛ 1.0    0.0    ⎞
        ⎜ 0.9985 0.0015 ⎟
        ⎜ 0.9563 0.0437 ⎟
        ⎜ 0.5    0.5    ⎟
        ⎜ 0.0437 0.9563 ⎟
        ⎜ 0.0015 0.9985 ⎟
        ⎝ 0.0    1.0    ⎠
      </p>
      <p>In this paper we are going to use this matrix as a starting point.</p>
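      <p>The construction just described can be sketched in code. This is a minimal illustration under stated assumptions: the parameter names b (base of the transitive scale, 1.4 in the example) and gamma (confidence, 4.5) are assigned here for illustration, the value ascribed to grade k is taken as b^k, and the probabilities follow the power rule.</p>

```python
# Sketch: building the SPA-matrix for two alternatives from the parametrized
# model described above. Parameter names are illustrative assumptions:
#   q     - granularity (2q + 1 preference levels, numbered -q..q)
#   b     - base of the transitive scale of comparisons (grade k maps to b**k)
#   gamma - the agent's level of confidence (power-rule exponent)

def spa_matrix(q, b, gamma):
    rows = []
    for k in range(q, -q - 1, -1):       # from the strongest preference for
        v1, v2 = b ** k, b ** (-k)       # alternative 1 down to alternative 2
        s = v1 ** gamma + v2 ** gamma
        rows.append([v1 ** gamma / s, v2 ** gamma / s])
    return rows

H = spa_matrix(q=3, b=1.4, gamma=4.5)
print(H[3])  # the middle (indifferent) state -> [0.5, 0.5]
```

      <p>Each row sums to 1 and the matrix is centrosymmetric; the concrete entries may differ slightly from the approximate matrix quoted in the text, depending on the exact scale used.</p>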
      <p>Given the input probabilities π̄ = (0.1, 0.1, 0.15, 0.3, 0.15, 0.1, 0.1), we obtain the resulting probabilities p = (0.5, 0.5). Equilibrium holds; no alternative has advantages.</p>
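      <p>The equilibrium check above can be reproduced numerically. A minimal sketch in plain Python; the H values are the approximate entries quoted in the example, completed by centrosymmetry:</p>

```python
# Sketch: one-level SPA-model, p = pi_bar * H (row vector times matrix).

H = [
    [1.0,    0.0],
    [0.9985, 0.0015],
    [0.9563, 0.0437],
    [0.5,    0.5],
    [0.0437, 0.9563],
    [0.0015, 0.9985],
    [0.0,    1.0],
]
pi_bar = [0.1, 0.1, 0.15, 0.3, 0.15, 0.1, 0.1]   # symmetric input probabilities

def choice_probabilities(pi_bar, H):
    """p_j = sum_i pi_i * h_ij."""
    return [sum(pi_bar[i] * H[i][j] for i in range(len(H)))
            for j in range(len(H[0]))]

p = choice_probabilities(pi_bar, H)
print([round(x, 6) for x in p])   # -> [0.5, 0.5]: no alternative has an advantage
```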
      <p>
        The two-level multicriteria decision-making model, based on combining the two-level SPA-model [11] with the classical two-level AHP, has been suggested in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Shortly, we consider a two-level state system: the bottom level corresponds to separate criteria, and the top one corresponds to the eventual choice. For connecting these levels, the following logical rules are suggested: if the alternative A has a preference over the other alternative B by the separate k-th criterion, then A has the overall preference over B. For reflecting these rules, we introduce transitional matrices for each criterion. For the sake of simplicity, we may consider the same transitional matrix R for all criteria, as long as we believe that the actual measures of influence for different criteria can be adequately reflected by weighting coefficients for these criteria. These weighting coefficients form the vector w = (w_1, …, w_K), where K is the total number of criteria, w_k is the importance of the k-th criterion, and Σ_k w_k = 1.
      </p>
      <p>As for the top-level SPA-matrix, the states for the bottom-level matrices related to specific criteria correspond to grades of preference between alternatives. For each k-th criterion we have to specify the probabilities of being in certain states. Let’s introduce the matrix D = (d_ki, k = 1, …, K, i = 1, …, m), where d_ki is the probability of being in the i-th state with respect to the k-th criterion.</p>
      <p>
        Then [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]
      </p>
      <p>p = w ∙ D ∙ R ∙ H.</p>
      <p>
        Equilibrium holds if w is a symmetric vector, and D, R, H are centrosymmetric matrices [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. It is important to mention that, like the similar condition for the one-level model, this is only a sufficient condition for equilibrium, not a necessary one.
      </p>
      <p>
        There might be many approaches to specifying the matrix R [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. However, for our present purposes it appears sufficient to take it as the unit matrix, R = I. Then
      </p>
      <p>p = w ∙ D ∙ H.</p>
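      <p>A small sketch of the two-level computation. The product form p = w ∙ D ∙ R ∙ H is assumed here from the dimensions and the equilibrium condition on w, D, R, H; the concrete numbers are illustrative only (2 criteria, 3 states, 2 alternatives):</p>

```python
# Sketch: two-level SPA/AHP combination, assuming p = w * D * R * H, where
# w (1 x K) are criteria weights, D (K x m) holds per-criterion state
# probabilities, R (m x m) is the transitional matrix (here R = I), and
# H (m x n) is the SPA-matrix. All concrete numbers are illustrative.

def row_times(v, M):
    """Row vector v (length r) times matrix M (r x c)."""
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def two_level(w, D, R, H):
    x = row_times(w, D)     # mix criteria into a distribution over states
    x = row_times(x, R)     # pass through the transitional matrix
    return row_times(x, H)  # map states to choice probabilities

w = [0.6, 0.4]
D = [[0.2, 0.5, 0.3],       # state probabilities for criterion 1
     [0.4, 0.5, 0.1]]       # state probabilities for criterion 2
I3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
H = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]

p = two_level(w, D, I3, H)
```

      <p>With R = I the middle step is a no-op, matching the simplification used in the text.</p>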
      <p>It is necessary to mention that if the number of voters is large enough, then a losing alternative has almost no chance of being chosen. So, if equilibrium holds permanently, both alternatives have equal chances, and they are chosen in turn. Indeed, political parties in bipartisan democracies are chosen in turn, replacing each other in power. But the assumption of a permanently held equilibrium seems to be unrealistic. By the moment of elections, either one or the other party should get an advantage, but afterwards this shall change.</p>
      <p>In our previous papers we mainly considered input probabilities as given. But for exploring and modeling the processes of changing voters’ opinions more rigorously, we should look at the Markov chain from which these probabilities result.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Regarding homogeneous Markov chains</title>
      <p>Let’s consider a one-level SPA model with m states. Let’s introduce a Markov chain having the transition matrix Π = (p_ij(t), i, j = 1, …, m), where p_ij(t) is the probability that an agent who is in the i-th state at the moment t jumps to the j-th one. Note that, meaningfully, the transitional probabilities p_ij(t) describe changes in voters’ opinions, since each state reflects a certain opinion about which alternative is better and to what extent. A Markov chain is said to be homogeneous if the transitional probabilities p_ij do not depend on t.</p>
      <p>Provided a Markov chain is homogeneous, the stationary probability distribution exists under certain conditions; it can be found from the following well-known equation:</p>
      <p>π Π = π.</p>
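      <p>The stationary distribution can be approximated by power iteration; a minimal sketch with an illustrative 2-state matrix:</p>

```python
# Sketch: stationary distribution of a homogeneous Markov chain,
# i.e. the solution of pi * Pi = pi, found by power iteration.

def stationary(Pi, iters=10000):
    m = len(Pi)
    pi = [1.0 / m] * m                   # start from the uniform distribution
    for _ in range(iters):
        pi = [sum(pi[i] * Pi[i][j] for i in range(m)) for j in range(m)]
    return pi

Pi = [[0.7, 0.3],                        # illustrative transition matrix
      [0.4, 0.6]]
pi = stationary(Pi)
print([round(x, 4) for x in pi])         # -> [0.5714, 0.4286], i.e. (4/7, 3/7)
```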
      <p>But can we really describe the change of power in bipartisan democracies on the basis of a homogeneous Markov process within the SPA model? Seemingly not: if we consider a homogeneous process in the traditional sense, the only situation in which both parties win elections in turn is the situation of permanent equilibrium of alternatives, which is unrealistic.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Homogeneous chain with switching roles</title>
      <p>However, we can try to retain the homogeneity assumption if we consider a process which is homogeneous in some generalized sense. Namely, we can consider a Markov chain in which the states are bound not to a certain party but to the role of being in power.</p>
      <p>Let’s take the matrix H from Section 2, but the alternatives now will be of another sort. The first
alternative is to vote for the party which is currently in power, and the other is to vote for their
opponents. We can consider two important cases.</p>
      <p>Case 1. Let’s take the transition matrix as follows:</p>
      <p>The resulting probabilities of voting results are as follows:</p>
      <p>Case 2. This is the situation opposite to the previous one. Let the transition matrix be as follows:</p>
      <p>Now the ruling party loses, and the power moves on to their opponents.</p>
      <p>Summarizing the section and combining it with the considerations about the equilibrium of alternatives, we can formulate the following statement.</p>
      <p>Let p_rule be the probability that an agent will vote for the ruling party. If, within the SPA model, changes of voters’ opinions are described by a homogeneous Markov chain with switching roles, and the chain always reaches its stationary mode before the moment of election, then repeated changes of power are possible if and only if p_rule ≤ 0.5.</p>
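      <p>The switching-roles mechanics can be illustrated by a short simulation. The 2-state transition matrix below is a hypothetical stand-in (not the Case 1 or Case 2 matrix from the text); its stationary probability of voting for the ruling party is below 0.5, so, in line with the statement above, power changes at every election:</p>

```python
# Sketch: homogeneous Markov chain with switching roles. States are bound to
# the ROLE (ruling party vs. opponents), not to a specific party. The matrix
# below is a hypothetical illustration; relabeling of role-states after a
# change of power is omitted, since the chain reaches its stationary mode
# before each election anyway.

def step(pi, Pi):
    m = len(Pi)
    return [sum(pi[i] * Pi[i][j] for i in range(m)) for j in range(m)]

# Role-states: 0 = "vote for the ruling party", 1 = "vote for the opponents".
Pi = [[0.4, 0.6],
      [0.5, 0.5]]

pi = [0.5, 0.5]
in_power = "A"
winners = []
for election in range(4):
    for _ in range(200):        # let the chain reach its stationary mode
        pi = step(pi, Pi)
    if pi[0] <= 0.5:            # the ruling party loses: roles switch
        in_power = "B" if in_power == "A" else "A"
    winners.append(in_power)

print(winners)  # -> ['B', 'A', 'B', 'A']: repeated change of power
```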
    </sec>
    <sec id="sec-5">
      <title>5. Why the winners are to lose: some explaining models</title>
      <p>It appears interesting to explore how the fact that the party which recently won the election
promptly gets into the losing situation can be explained within the SPA model. In this section we are
going to provide some considerations about this issue.</p>
      <p>Firstly, modern bipartisan democracies are typically considered mature and established democracies. On the other hand, they have probably already harnessed most available resources to reach an overall situation which is close to the best achievable. No party can improve the social situation drastically. Some betterments surely happen. But good things are usually taken for granted, whereas faults in governance, which are absolutely inevitable, rapidly get into the focus of public discussion.</p>
      <p>So, if the ruling party does things similarly to what their opponents did while in power, and their mistakes are similar as well, the overall situation can likely be described in terms of the homogeneous Markov chain with switching roles. In addition, in established democracies most people, generally satisfied with their lives, are usually conservative and are not going to approve of drastic changes.</p>
      <p>People usually tend to estimate ongoing betterment or worsening not just as they are, but rather
in comparison with what they think other politicians could secure.</p>
      <p>And last, but not least, the situation is near to being Pareto-optimal with respect to criteria
indicating the quality of social well-being. This means that it is hard or even impossible to improve
any criterion without worsening some others.</p>
      <p>We are going to illustrate the latter with the following example.</p>
      <p>Let’s regard the two-level model. We are considering two criteria: average income and
environmental issues. Of course, there actually are many other criteria, but this example is very basic
and illustrative.</p>
      <p>Average income is considered more important, so the weighting coefficients are: 0.6 for average income and 0.4 for environmental issues. Then w = (0.6, 0.4).</p>
      <p>The average income rises gradually and permanently. Voters take this for granted. Nevertheless, the transitional matrix with respect to this criterion indicates gradual movement in favor of the ruling party. Let it be as follows:</p>
      <p>
        Π =
        ⎛ 0.5 0.5 0   0   0   0   0   ⎞
        ⎜ 0.3 0.5 0.2 0   0   0   0   ⎟
        ⎜ 0   0.3 0.5 0.2 0   0   0   ⎟
        ⎜ 0   0   0.3 0.5 0.2 0   0   ⎟
        ⎜ 0   0   0   0.3 0.5 0.2 0   ⎟
        ⎜ 0   0   0   0   0.3 0.5 0.2 ⎟
        ⎝ 0   0   0   0   0   0.6 0.4 ⎠
      </p>
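      <p>This matrix can be checked numerically: each row is stochastic, and its long-run state distribution can be found by plain power iteration. A sketch:</p>

```python
# Sketch: long-run state distribution for the transition matrix of the
# average-income criterion given above.

Pi = [
    [0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0],
    [0.3, 0.5, 0.2, 0.0, 0.0, 0.0, 0.0],
    [0.0, 0.3, 0.5, 0.2, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.3, 0.5, 0.2, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.3, 0.5, 0.2, 0.0],
    [0.0, 0.0, 0.0, 0.0, 0.3, 0.5, 0.2],
    [0.0, 0.0, 0.0, 0.0, 0.0, 0.6, 0.4],
]
assert all(abs(sum(row) - 1.0) < 1e-12 for row in Pi)   # each row is stochastic

pi = [1.0 / 7] * 7
for _ in range(5000):                                    # power iteration
    pi = [sum(pi[i] * Pi[i][j] for i in range(7)) for j in range(7)]
print([round(x, 3) for x in pi])
```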
      <p>However, environmental issues are deteriorating, and people are really worried about this. So,
the transitional matrix with respect to this criterion reflects sharper movement in favor of the
opponents. Let it be as follows:</p>
      <p>Then the resulting vector of probabilities is</p>
      <p>The opponents win.</p>
      <p>Considering the homogeneous Markov chain with switching roles as a basis for modeling changes in voters’ opinions appears to be a sound starting point. But there is an issue which can hardly be explained within such a model.</p>
      <p>If real-world processes strictly complied with the assumptions about homogeneity, changes of power would occur with perfect regularity. The party which won the latest election would definitely lose the next one. However, that doesn’t hold in the real world. So, we are going to discuss possible non-homogeneous enhancements.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Non-homogeneous modeling</title>
      <p>Generally, non-homogeneous Markov chains, featuring varying transitional probabilities, are not studied very well. However, some approaches, which at least could be used for modeling and simulation, might be outlined.</p>
      <p>We consider discrete moments of time t_1, t_2, …. For each moment of time the respective transitional matrix Π(t) = (p_ij(t), i, j = 1, …, m, t = 1, 2, …) exists.</p>
      <p>For a one-level SPA-model, given the SPA-matrix H, the current input probabilities π̄(t), and the current transitional matrix Π(t), we can calculate the current probabilities of choosing alternatives as follows:</p>
      <p>p(t) = π̄(t) H,</p>
      <p>the input probabilities at the next moment of time are</p>
      <p>π̄(t + 1) = π̄(t) Π(t),</p>
      <p>and the probabilities of choosing alternatives at the next moment of time are</p>
      <p>p(t + 1) = π̄(t + 1) H = π̄(t) Π(t) H.</p>
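      <p>These update rules can be iterated directly. A minimal sketch; the concrete time dependence of Π(t) below is a hypothetical illustration (a drift capped so that the rows stay stochastic):</p>

```python
# Sketch: non-homogeneous chain. At each step:
#   p(t)          = pi_bar(t) * H
#   pi_bar(t + 1) = pi_bar(t) * Pi(t)
# The concrete form of Pi(t) below is a hypothetical illustration.

def row_times(v, M):
    return [sum(v[i] * M[i][j] for i in range(len(v))) for j in range(len(M[0]))]

def Pi_t(t):
    a = min(0.4, 0.05 * t)            # drift, capped so rows stay stochastic
    return [[0.5 + a, 0.5 - a],
            [0.5 - a, 0.5 + a]]

H = [[0.9, 0.1],
     [0.1, 0.9]]
pi_bar = [0.7, 0.3]                   # current input probabilities

history = []
for t in range(10):
    p = row_times(pi_bar, H)          # current choice probabilities
    history.append(p)
    pi_bar = row_times(pi_bar, Pi_t(t))
```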
      <p>Both homogeneous Markov chains with switching roles and non-homogeneous Markov chains are suitable for modeling changes in voters’ opinions, but their areas of relevance are quite different. Whereas homogeneous Markov chains appear to describe rather established situations, non-homogeneous ones are to reflect unstable dynamics in situations featuring significant changes. What appears to trigger changes in the dynamics of opinions is various events occurring in real life and information occasions, which may or may not relate to the actual ongoing flow of events.</p>
      <p>A simple view on the matter may be as follows. When an information occasion arises, transitional probabilities start changing. Eventually they stabilize, and then the Markov chain reaches the stationary distribution of probabilities across the states, which in our case correspond to different opinions. There are various approaches to modeling the dissemination of information itself [15-22]. The question of how information occasions really affect changes in opinions has not been studied well so far, but some approaches can be outlined.</p>
      <p>However, this simple view is not sufficient. The issue is that information occasions may arise too frequently, so there may not be enough time for the Markov chain to settle and to reach the new stationary point. Moreover, agents of influence are often prone to creating information occasions, sometimes far-fetched, scripted and choreographed, or even merely fake. Their opponents do the same but aim to affect the situation in the opposite direction.</p>
      <p>Such things can be considered as informational noise causing occasional fluctuations in voters’ opinions, which don’t affect the situation very much unless something crucial happens. This prompts the idea of considering a homogeneous model, but one that features random transition probabilities.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Homogeneous modeling with random transition probabilities</title>
      <p>For simplicity, let’s consider the one-level SPA-model with random transition probabilities of the following form:</p>
      <p>Π′ = Π + Θ,</p>
      <p>where Π is a constant matrix, and Θ is a random matrix. The important thing to take care of is that Π′ must remain stochastic. That’s why the components of Θ must not be distributed normally or the like.</p>
      <p>Under some conditions, this assumption allows us to estimate upper and lower bounds for the resulting probabilities of choice, for instance on the basis of the following evident inequality for the components of the stationary eigenvector v of a given stochastic matrix A:</p>
      <p>min_i a_ij ≤ v_j ≤ max_i a_ij.</p>
      <p>Such a model, based on the homogeneous Markov chain with switching roles and with random transition probabilities, can explain how both sides can win elections in turn even if the equilibrium of alternatives does not hold. We are now going to illustrate this with the following experiment.</p>
      <p>For simplicity, we took the one-level model with three states only (m = 3) and a fixed SPA matrix H. At each step of the experiment:</p>
      <p>either state 1 or state 3 is randomly chosen; let’s denote the chosen state as L;</p>
      <p>the transition probabilities into the state L are increased: p′_jL = p_jL + θ_j, j = 1, …, m, where θ_j are random values from the interval [0, 1] multiplied by 0.2;</p>
      <p>the transition probabilities into the opposite extreme state are decreased by the same values θ_j, so that the matrix remains stochastic.</p>
      <p>After 15 steps, we obtained a series of choice probability distributions in which the two alternatives took the lead in turn.</p>
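      <p>A simulation in the spirit of this experiment; the base SPA matrix H and transition matrix below are hypothetical stand-ins, and the perturbation is additionally clamped so that the matrix provably stays stochastic:</p>

```python
import random

# Sketch of the random-perturbation experiment (m = 3, two alternatives).
# At each step either state 1 or state 3 is chosen (denote it L); transition
# probabilities INTO L are increased by random values from [0, 1] scaled by
# 0.2, and probabilities into the opposite extreme state are decreased by the
# same values, so every row keeps summing to 1. H and Pi0 are hypothetical.

random.seed(0)                          # reproducible run

H = [[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]
Pi0 = [[0.4, 0.3, 0.3],
       [0.3, 0.4, 0.3],
       [0.3, 0.3, 0.4]]

def perturbed(Pi):
    L = random.choice([0, 2])           # state 1 or state 3 (0-based)
    other = 2 - L                       # the opposite extreme state
    out = [row[:] for row in Pi]
    for j in range(3):
        theta = 0.2 * random.random() * min(out[j][other], 1.0 - out[j][L])
        out[j][L] += theta              # more probability of moving into L
        out[j][other] -= theta          # ... taken from the opposite state
    return out

pi = [1.0 / 3] * 3
for step in range(15):
    Pi = perturbed(Pi0)
    pi = [sum(pi[i] * Pi[i][j] for i in range(3)) for j in range(3)]
    p = [sum(pi[i] * H[i][j] for i in range(3)) for j in range(2)]
    # p fluctuates around (0.5, 0.5): either alternative may lead at a step
```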
    </sec>
    <sec id="sec-8">
      <title>8. Conclusions and discussion</title>
      <p>This illustrates that, within the described model, competing parties really can win elections in turn and replace each other in power.</p>
      <p>The paper addresses issues related to modeling processes in bipartisan democracies, including changes in voters’ opinions and elections, within the model named “state-probability of action” (SPA model). Changes in opinions are described in terms of the transition matrices of Markov chains, where the states represent different levels of preference of one alternative over another, which is typical for pairwise comparisons and the Analytic Hierarchy Process. Both the one-level model, regarding final choices only, and the two-level model, regarding criteria influencing possible choices, are considered.</p>
      <p>Possibilities related both to homogeneous models, with constant transition matrices, and to non-homogeneous ones, with varying matrices, are discussed and illustrated by the provided examples. Homogeneous models turn out to be very constrained: under purely homogeneous assumptions, the only situation in which two competing parties have chances to win elections in turn, that is, to replace each other in power, is the situation of permanent equilibrium between alternatives. This means that the probability for each alternative must permanently equal 0.5, which in practice seems to be unrealistic.</p>
      <p>The model based on homogeneous Markov chains with switching roles, which binds the states of the SPA model not to specific parties but to their roles, that is, to whether they are in power or not, is considered. This model doesn’t require an equilibrium of alternatives for modeling changeable results of elections. So, the homogeneous model can be used as a basis for modeling, but it has its own constraints. If an equilibrium isn’t held permanently, then either one party wins every time, or the parties replace each other in power with perfect regularity. This means that, according to the model, a party which won the previous election must lose the very next one. Evidently, neither of those options takes place in the real world.</p>
      <p>Using Markov chains with switching roles, basically homogeneous but featuring random transitional probabilities, is considered in the paper. It was shown experimentally that such a model behaves properly in the sense that it demonstrates that alternatives can be chosen in turn.</p>
      <p>Possible approaches to modeling and simulation on the basis of non-homogeneous Markov chains are discussed as well.</p>
      <p>In the paper, only possibilities related to describing changes of opinions on the basis of transition matrices across the states are discussed. However, other parameters affecting opinions, such as measures of confidence or the weighting coefficients reflecting the importance of different criteria, should be considered as well.</p>
      <p>Last but not least, it is very important to consider models of spreading information influence and of how information influence and information occasions affect the change of opinions. Since agents of influence counteract each other, game-theoretic aspects of the matter should be considered as well.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
      <p>[10] R. Sutton, A. Barto, Reinforcement Learning: An Introduction, second ed., MIT Press, London, 2018.</p>
      <p>[11] E. Ivokhin, O. Oletsky, Restructuring of the Model “State-Probability of Choice” Based on Products of Stochastic Rectangular Matrices, Cybern. Syst. Anal. 58(2) (2022) 242-250. https://doi.org/10.1007/s10559-022-00456-z.</p>
      <p>[12] J. R. Weaver, Centrosymmetric (cross-symmetric) matrices, their basic properties, eigenvalues and eigenvectors, Amer. Math. Monthly 92 (1985) 711-717.</p>
      <p>[13] A. Melman, Symmetric centrosymmetric matrix-vector multiplication, Linear Algebra Appl. 320 (2000) 193-198.</p>
      <p>[14] E. Choo, W. Wedley, A Common Framework for Deriving Preference Values from Pairwise Comparison Matrices, Comput. Oper. Res. 31(6) (2004) 893-908.</p>
      <p>[15] S. Vosoughi, D. Roy, S. Aral, The spread of true and false news online, Science 359 (2018) 1146-1151.</p>
      <p>[16] A. Aref, T. Tran, An integrated trust establishment model for the internet of agents, Knowledge and Information Systems 62 (2020) 79-105.</p>
      <p>[17] K. K. Fullam, T. B. Klos, G. Muller, J. Sabater, A. Schlosser, Z. Topol, K. S. Barber, J. S. Rosenschein, L. Vercouter, M. Voss, A specification of the agent reputation and trust (ART) testbed: Experimentation and competition for trust in agent societies, in: Proc. 4th Int. Joint Conf. on Autonomous Agents and Multiagent Systems, 2005, pp. 512-518.</p>
      <p>[18] H. Yu, Z. Shen, C. Leung, C. Miao, V. Lesser, A survey of multi-agent trust management systems, IEEE Access 1 (2013) 35-50.</p>
      <p>[19] S. Wasserman, K. Faust, Social Network Analysis: Methods and Applications, Cambridge University Press, 1994.</p>
      <p>[20] L. Dang, Z. Chen, J. Lee, M.-H. Tsou, X. Ye, Simulating the spatial diffusion of memes on social media networks, International Journal of Geographical Information Science 33 (2019) 1545-1568.</p>
      <p>[21] P. Sobkowicz, M. Kaschesky, G. Bouchard, Opinion mining in social media: Modeling, simulating, and forecasting political opinions in the web, Government Information Quarterly 29 (2012) 470-479.</p>
      <p>[22] R. Mallipeddi, S. Kumar, C. Sriskandarajah, Y. Zhu, A Framework for Analyzing Influencer Marketing in Social Networks: Selection and Scheduling of Influencers, SSRN Electronic Journal (2018).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Dosyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Oletsky</surname>
          </string-name>
          ,
          <article-title>An approach to modeling elections in bipartisan democracies on the base of the “state-probability of action” model</article-title>
          ,
          <source>in: CEUR Workshop Proceedings</source>
          ,
          <year>2024</year>
          ,
          <volume>3723</volume>
          , pp.
          <fpage>74</fpage>
          -
          <lpage>85</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>O.</given-names>
            <surname>Oletsky</surname>
          </string-name>
          ,
          <source>Exploring Dynamic Equilibrium Of Alternatives On The Base Of Rectangular Stochastic Matrices</source>
          , in: CEUR Workshop Proceedings 2917, CEUR-WS.org
          <year>2021</year>
          . http://ceur-ws.org/Vol-2917/, pp.
          <fpage>151</fpage>
          -
          <lpage>160</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>T.L.</given-names>
            <surname>Saaty</surname>
          </string-name>
          , The Analytic Hierarchy Process, McGraw-Hill, New York,
          <year>1980</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Brunelli</surname>
          </string-name>
          , Introduction to the
          <source>Analytic Hierarchy Process</source>
          , Springer, Cham,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Ishizaka</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Labib</surname>
          </string-name>
          ,
          <article-title>Review of the main developments in the analytic hierarchy process</article-title>
          ,
          <source>Expert Syst. Appl</source>
          .
          <volume>38</volume>
          (
          <year>2011</year>
          )
          <fpage>14336</fpage>
          -
          <lpage>14345</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.</given-names>
            <surname>Ho</surname>
          </string-name>
          .
          <article-title>Integrated analytic hierarchy process and its applications. A literature review</article-title>
          ,
          <source>European Journal of Operational Research</source>
          <volume>186</volume>
          (
          <issue>1</issue>
          ) (
          <year>2008</year>
          )
          <fpage>211</fpage>
          -
          <lpage>228</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>W.</given-names>
            <surname>Koczkodaj</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Mikhailov</surname>
          </string-name>
          , G.Redlarski,
          <string-name>
            <given-names>M.</given-names>
            <surname>Soltys</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Szybowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Tamazian</surname>
          </string-name>
          , E.Wajch, Kevin Kam Fung Yuen,
          <source>Important Facts and Observations about Pairwise Comparisons, Fundamenta Informaticae</source>
          <volume>144</volume>
          (
          <year>2016</year>
          )
          <fpage>1</fpage>
          -
          <lpage>17</lpage>
          . doi:10.3233/FI-2016-1336.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>O.</given-names>
            <surname>Oletsky</surname>
          </string-name>
          , E. Ivohin,
          <article-title>Formalizing the Procedure for the Formation of a Dynamic Equilibrium of Alternatives in a Multi-Agent Environment in Decision-Making by Majority of Votes, Cybern Syst Anal</article-title>
          .
          <volume>57</volume>
          -
          <fpage>1</fpage>
          (
          <year>2021</year>
          )
          <fpage>47</fpage>
          -
          <lpage>56</lpage>
          . doi: https://doi.org/10.1007/s10559-021-00328-y.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>Russell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Norvig</surname>
          </string-name>
          , Artificial Intelligence:
          <article-title>A Modern Approach, 4th Edition</article-title>
          . Pearson Education, Inc.,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>