Recurrent Estimation of the Information State Vector
and the Correlation of Measuring Impact Matrix using
a Multi-Agent Model
Zoreslava Brzhevskaa, Halyna Haidura, Nadiia Dovzhenkoa, Andrii Anosovb,
and Maksym Vorokhobb
a State University of Telecommunications, 7 Solomenska str., Kyiv, 03110, Ukraine
b Borys Grinchenko Kyiv University, 18/2 Bulvarno-Kudriavska str., Kyiv, 04053, Ukraine

                 Abstract
                 The information-oriented model is a multi-agent simulation model in which the integrated
                 characteristics of information resources emerge from many locally interacting individual units.
                 The information-oriented approach to modeling involves creating simulation models that
                 reproduce certain criteria of information reliability and their local interactions, from which
                 integrated models of many information resources are built. Within this model, information is
                 treated as a unique, discrete unit with a set of characteristics that change over its life cycle.

                 Keywords
                 Information, information-oriented model, multi-agent model, measuring impact matrix.

1. Introduction
   Building a model at the level of describing a particular information resource provides a number of
advantages: transparency of the underlying mechanisms, the ability to describe the object under study
with a high degree of detail, and the possibility of obtaining more useful information from the
simulation results.
   Based on the information-oriented model, we obtain data that fully correspond to the usual state of
the data in the information space. To this end, the level of threat is introduced into the model as a result
of obtaining and perceiving data. That is, in this case, each cell contains a demand for data and some
class of threat. Under the new rules, a data unit is moved to the free cell where the ratio
(demand / threat class) is maximal.
   Later modifications of the information-oriented model consider different types of interactions
between information units, as well as other complications. This makes it possible to analyze a wider
range of social processes and procedures.
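The relocation rule just described can be sketched as one agent step: a data unit scans the free cells and moves to the one where demand/threat is maximal. A minimal sketch in Python; the grid size, attribute ranges, and occupancy rate below are illustrative assumptions, not values from the model.

```python
import numpy as np

rng = np.random.default_rng(6)

# Illustrative grid: each cell carries a demand value and a threat class (1 = lowest threat).
size = 10
demand = rng.uniform(0.0, 1.0, (size, size))
threat = rng.integers(1, 5, (size, size)).astype(float)
occupied = rng.random((size, size)) < 0.3          # cells currently holding data units

score = demand / threat                            # the (demand / threat class) ratio

def best_free_cell():
    """Index of the free cell where demand/threat is maximal."""
    masked = np.where(occupied, -np.inf, score)
    return np.unravel_index(np.argmax(masked), masked.shape)

def move(cell):
    """Relocate the data unit at `cell` to the best free cell, per the rule above."""
    target = best_free_cell()
    occupied[cell] = False
    occupied[target] = True
    return target

# Example: relocate one occupied unit.
src = tuple(np.argwhere(occupied)[0])
dst = move(src)
assert not occupied[src] and occupied[dst]
```

One relocation step is enough to illustrate the mechanism; a full simulation would repeat this step for every unit over many time steps.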

2. Information-Oriented Model Framework
   The following issues are investigated within the framework of the information-oriented model:
       Distribution of the amount of information among the data units.
       Distribution of data by significance.
       Data migration.
       Introduction of new properties into the model, such as the demand/threat class ratio, and the
corresponding modification of the rules.
       Introduction of new properties of information, such as the impact of information on the
individual.

Cybersecurity Providing in Information and Telecommunication Systems, January 28, 2021, Kyiv, Ukraine
EMAIL: zoreska.puzniak@gmail.com (A.1); gaydurg@gmail.com (A.2); nadezhdadovzhenko@gmail.com (A.3); a.anosov@kubg.edu.ua
(B.4); m.vorokhob.asp@kubg.edu.ua (B.5)
ORCID: 0000-0002-7029-9525 (A.1); 0000-0003-0591-3290 (A.2); 0000-0003-4164-0066 (A.3); 0000-0002-2973-6033 (B.4); 0000-0001-
5160-7134 (B.5)
              ©️ 2021 Copyright for this paper by its authors.
              Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
              CEUR Workshop Proceedings (CEUR-WS.org)



        Changes to the rules governing the emergence of new data.
        Introduction of inheritance rules, for example, when the amount of information that loses
demand is evenly distributed among newly appeared information.
        Introduction of multiple units, such as demand.
        Introduction of rules for the exchange of demand between information units.
   Information-oriented modeling encompasses spatially distributed models in which each unit of
information is associated with a specific position in space. Thus, the properties of the model
significantly depend on its space-time scale. Models also differ in the amount of information
considered. The scope of calculations directly depends on the scale of the problem [1, 2].
   It should be noted that the information-oriented model requires more computation than the analytical
one. However, in many areas, the development of an information-oriented model is justified due to the
fact that:
        Real observational data on the studied parameters are often insufficient to identify an
analytical model.
        It is necessary to take into account spatial aspects.
        It is necessary to take into account the mechanisms of the information space.

3. Main Part
     To assess the state of information in the information-oriented model, we use partial a priori
statistical uncertainty, in which the distribution law of the estimated and measured random processes
is known up to a certain set of parameters. The parametric description must meet two sometimes
conflicting requirements. First, it must qualitatively and correctly reflect the limited a priori
knowledge. Second, the number of parameters should not be too large: an increase in the number of
parameters degrades the quality of the main task, both because of the complexity of the technical
implementation and because of the loss of input data needed to determine parameter values or to
exclude unknown and unnecessary parameters. Thus, in the case of parametric a priori uncertainty,
instead of a single probability distribution law for the random processes, we define a whole class of
distributions. The estimation algorithm must select from this class of distributions so as to satisfy the
optimization criterion; that is, the estimation algorithm must be parametrically adaptive [3].
    By a parametrically adaptive estimation algorithm we mean an algorithm that, on the basis of
processing the measurement information, is capable not only of estimating the required components of
the random process, but also of restoring the statistical characteristics of the a priori description of the
dynamic system.
    Consider the construction of a recurrent algorithm for estimating the information state vector x_k
and the constant correlation matrix R of the measurement impact from a sample of measurements of
increasing volume Y_1^k = \{y_i, i = \overline{1,k}\} [4]. Consider the case when the linear models of the
system and of the measurements are described by stochastic difference equations, and the measurement
model includes the influence of v_k in the form B_k v_k, where B_k is a time-dependent matrix.
    So, given:
   1.   System model:

   x_{k+1} = a_0(k) + \Phi(k+1|k) x_k + b(k) w_k.                                                    (1)

   2.   Measurement model:

   y_k = A_0(k) + H_k x_k + B_k v_k.                                                                 (2)




   3.       A priori data:

   w_k \sim N(0, Q_R); \quad v_k \sim N(0, R);                                                       (3)

where R = const; Q_R is an unknown matrix; x_0 \sim N(x(0|0), P_0);

   cov(w_k, w_j) = cov(v_k, v_j) = cov(x_0, v_k) = cov(x_0, w_k) = 0, \quad k \ne j.

   4.       Optimization criterion: the maximum of the joint probability density of the estimated and
measured parameters,

   \max_{X_1^k, R} \pi(X_1^k, Y_1^k \mid R),                                                         (4)

where X_1^k = \{x_i, i = \overline{1,k}\}.
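The problem statement (1)–(3) can be made concrete with a short numerical sketch. The Python fragment below simulates a hypothetical two-dimensional instance of the system model (1) and measurement model (2); all matrices (Φ, H, B, Q, R) and dimensions are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dimensions (illustrative): n-dimensional state, m-dimensional measurement.
n, m = 2, 2
steps = 200

# Model matrices (hypothetical values chosen for the sketch).
Phi = np.array([[1.0, 0.1],
                [0.0, 0.9]])         # transition matrix Phi(k+1|k)
a0  = np.zeros(n)                    # deterministic input a0(k)
b   = np.eye(n)                      # process-noise gain b(k)
H   = np.eye(m, n)                   # measurement matrix H_k
A0  = np.zeros(m)                    # measurement offset A0(k)
B   = np.eye(m)                      # measurement-noise gain B_k (time-invariant here)

Q = 0.01 * np.eye(n)                 # process-noise covariance Q_R
R = np.diag([0.25, 0.04])            # true measurement correlation matrix R (to be estimated)

# Generate a state trajectory according to (1) and measurements according to (2).
x = rng.multivariate_normal(np.zeros(n), np.eye(n))     # x_0 ~ N(x(0|0), P_0)
xs, ys = [], []
for k in range(steps):
    y = A0 + H @ x + B @ rng.multivariate_normal(np.zeros(m), R)    # (2)
    xs.append(x); ys.append(y)
    x = a0 + Phi @ x + b @ rng.multivariate_normal(np.zeros(n), Q)  # (1)
xs, ys = np.array(xs), np.array(ys)
```

The arrays `xs` and `ys` play the roles of X_1^k and Y_1^k in the criteria that follow.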
   We first convert criterion (4) to an equivalent form. Since the natural logarithm is a monotonically
increasing function of its argument, instead of (4) we can use the equivalent criterion

   \max_{X_1^k, R} \ln \pi(X_1^k, Y_1^k \mid R).                                                     (5)
   Using the formula of total probability, the Markov property of the process (x_k, y_k), and the
independence of the measurements y_i \in Y_1^k, i = \overline{1,k}, we represent the joint density of
X_1^k and Y_1^k in two forms:

   \ln \pi(X_1^k, Y_1^k | R) = \ln \pi(X_1^{k-1}, Y_1^{k-1} | R) + \ln \pi(x_k, y_k | X_1^{k-1}, Y_1^{k-1}, R)
                             = \ln \pi(X_1^{k-1}, Y_1^{k-1} | R) + \ln \pi(y_k | x_{k-1}, y_{k-1}, R) + \ln \pi(x_k | x_{k-1}, y_k, R);        (6)

   \ln \pi(X_1^k, Y_1^k | R) = \ln \pi(X_1^k) + \ln \pi(Y_1^k | X_1^k, R) = \ln \pi(X_1^k) + \sum_{i=1}^{k} \ln \pi(y_i | X_1^k, R).          (7)

   Given (6) and (7), we decompose criterion (5) into the components

   \max_{x_k} \ln \pi(x_k | y_k, x(k-1|k-1), \hat{R}_k),                                             (8)

   \max_{R} \sum_{i=1}^{k} \ln \pi(y_i | \hat{X}_1^k, R),                                            (9)

where \hat{X}_1^k = \{x(i|i), i = \overline{1,k}\}; x(k|k) and \hat{R}_k are the estimates of x_k and R
obtained from the sample of measurements Y_1^k.
   Optimization with respect to criterion (8), under the condition R = \hat{R}_k, gives the usual
Kalman-type extrapolation and filtering algorithms.
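A minimal sketch of one extrapolation-plus-filtering step of this Kalman-type algorithm, assuming an illustrative two-dimensional model (Φ, H, B, Q are hypothetical values, with a_0 = 0, A_0 = 0, b = I) and a fixed current estimate R̂_k:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative model (hypothetical values): x_{k+1} = Phi x_k + w_k,  y_k = H x_k + B v_k.
Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.eye(2)
B = np.eye(2)
Q = 0.01 * np.eye(2)
R_hat = 0.1 * np.eye(2)                      # current estimate of R from criterion (9)

def kalman_step(x_filt, P_filt, y):
    """One extrapolation + filtering step for criterion (8) with R fixed at R_hat."""
    # Extrapolation (prediction) using the system model (1).
    x_pred = Phi @ x_filt
    P_pred = Phi @ P_filt @ Phi.T + Q
    # Filtering (update) using the measurement model (2).
    nu = y - H @ x_pred                      # innovation v_k
    S = H @ P_pred @ H.T + B @ R_hat @ B.T   # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
    x_filt = x_pred + K @ nu
    P_filt = (np.eye(2) - K @ H) @ P_pred
    return x_filt, P_filt, nu

# Run the filter over simulated measurements.
x_true = np.zeros(2)
x_filt, P_filt = np.zeros(2), np.eye(2)
for _ in range(100):
    x_true = Phi @ x_true + rng.multivariate_normal(np.zeros(2), Q)
    y = H @ x_true + rng.multivariate_normal(np.zeros(2), R_hat)
    x_filt, P_filt, nu = kalman_step(x_filt, P_filt, y)
```

The innovations `nu` collected from this recursion are exactly the quantities ν_i that enter the estimation of R below.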

   Let us now turn to the optimization with respect to criterion (9). First, we write
\ln \pi(y_i | \hat{X}_1^k, R) explicitly. To do this, note that, owing to the linearity of the
transformation B_i v_i and condition (3),

   B_i v_i \sim N(0, B_i R B_i^{\tau}).                                                              (10)

   It follows from relation (2) that

   B_i v_i = y_i - H_i x_i - A_0(i),                                                                 (11)

and the Jacobian of the transformation from the variables B_i v_i to the variables y_i equals one;
therefore, given (10) and (11), we can write

   J(R) = \sum_{i=1}^{k} \ln \pi(y_i | \hat{X}_1^k, R)
        = -(km/2) \ln 2\tilde{\pi} + \frac{1}{2} \sum_{i=1}^{k} \ln |(B_i R B_i^{\tau})^{-1}| - \frac{1}{2} \sum_{i=1}^{k} \nu_i^{\tau} (B_i R B_i^{\tau})^{-1} \nu_i,          (12)

where \nu_i = y_i - H_i x(i|i-1) - A_0(i); \tilde{\pi} = 3.1415....
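For a time-invariant B_i, the functional J(R) in (12) can be evaluated directly. A sketch under illustrative assumptions (the innovations are stand-ins drawn from a known covariance, so the qualitative behavior of J can be checked):

```python
import numpy as np

rng = np.random.default_rng(5)

m, k = 2, 500
B = np.eye(m)
# Stand-in innovations v_i with true covariance 0.5 I (illustrative).
nus = rng.multivariate_normal(np.zeros(m), 0.5 * np.eye(m), size=k)

def J(R):
    """Log-likelihood (12) of the innovations under measurement covariance B R B^T."""
    S = B @ R @ B.T
    S_inv = np.linalg.inv(S)
    _, logdet = np.linalg.slogdet(S)        # ln|S|; note ln|S^-1| = -ln|S|
    quad = sum(nu @ S_inv @ nu for nu in nus)
    return -(k * m / 2) * np.log(2 * np.pi) - (k / 2) * logdet - quad / 2

# J is larger near the covariance the innovations were drawn from than far from it.
assert J(0.5 * np.eye(m)) > J(5.0 * np.eye(m))
```

Criterion (9) asks for the R maximizing this functional; the closed-form maximizer is derived next.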
   Using the definition and properties of the pseudo-inverse matrix [5], we perform the following
transformations:

   (B_i R B_i^{\tau})^{-1} = B_i B_i^{+} (B_i R B_i^{\tau})^{+} (B_i^{\tau})^{+} B_i^{\tau}
                           = B_i [(B_i^{\tau} B_i) R (B_i^{\tau} B_i)]^{+} B_i^{\tau}
                           = B_i (B_i^{\tau} B_i)^{-1} R^{-1} (B_i^{\tau} B_i)^{-1} B_i^{\tau}.      (13)

   Also, note that

   |(B_i R B_i^{\tau})^{-1}| = |B_i R B_i^{\tau}|^{-1}.                                              (14)

   Substituting (13) and (14) into (12), we obtain

   J(R) = -(km/2) \ln 2\tilde{\pi} - \frac{1}{2} \sum_{i=1}^{k} \ln |B_i R B_i^{\tau}| - \frac{1}{2} \sum_{i=1}^{k} \nu_i^{\tau} B_i (B_i^{\tau} B_i)^{-1} R^{-1} (B_i^{\tau} B_i)^{-1} B_i^{\tau} \nu_i.          (15)


   The necessary condition for an extremum of the functional (15) is the equation

   \frac{dJ(R)}{dR}\bigg|_{R=\hat{R}_k} = 0,

which can be written explicitly as

   -k (\hat{R}_k^{-1})^{\tau} + \left( \hat{R}_k^{-1} \left( \sum_{i=1}^{k} (B_i^{\tau} B_i)^{-1} B_i^{\tau} \nu_i \nu_i^{\tau} B_i (B_i^{\tau} B_i)^{-1} \right) \hat{R}_k^{-1} \right)^{\tau} = 0.          (16)


   Transposing (16) and multiplying it on the left and on the right by \hat{R}_k, we obtain

   \hat{R}_k = \frac{1}{k} \sum_{i=1}^{k} (B_i^{\tau} B_i)^{-1} B_i^{\tau} \nu_i \nu_i^{\tau} B_i (B_i^{\tau} B_i)^{-1}.          (17)
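A sketch of the batch estimate (17). To keep it self-contained, the innovations ν_i are drawn directly from N(0, B R Bᵀ), as in (10), which corresponds to an ideally tuned extrapolator; B and R are illustrative values, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

m, k = 2, 20000
B = np.array([[2.0, 0.0], [0.5, 1.0]])       # time-invariant B_i for the sketch
R_true = np.diag([0.5, 0.2])                 # true measurement correlation matrix R

# Innovations v_i ~ N(0, B R B^T), per (10), assuming an ideally tuned filter.
nus = rng.multivariate_normal(np.zeros(m), B @ R_true @ B.T, size=k)

# Batch estimate (17): R_k = (1/k) sum (B^T B)^-1 B^T v_i v_i^T B (B^T B)^-1.
G = np.linalg.inv(B.T @ B) @ B.T             # left inverse B^+ of B
R_hat = sum(np.outer(G @ nu, G @ nu) for nu in nus) / k
# For large k, R_hat approaches R_true.
```

For an invertible B, (BᵀB)⁻¹Bᵀ is the left inverse B⁺, so each term of the sum is simply (B⁺ν_i)(B⁺ν_i)ᵀ, and the estimate is the sample covariance of the back-transformed innovations.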



   We write (17) in the recurrent form

   \hat{R}_k = \frac{1}{k} (B_k^{\tau} B_k)^{-1} B_k^{\tau} \nu_k \nu_k^{\tau} B_k (B_k^{\tau} B_k)^{-1} + \frac{k-1}{k} \cdot \frac{1}{k-1} \sum_{i=1}^{k-1} (B_i^{\tau} B_i)^{-1} B_i^{\tau} \nu_i \nu_i^{\tau} B_i (B_i^{\tau} B_i)^{-1}
             = \frac{1}{k} (B_k^{\tau} B_k)^{-1} B_k^{\tau} \nu_k \nu_k^{\tau} B_k (B_k^{\tau} B_k)^{-1} + \frac{k-1}{k} \hat{R}_{k-1}.          (18)
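The recurrent form (18) updates R̂_k one measurement at a time without storing the whole sample. The sketch below (all values illustrative) verifies numerically that the recursion reproduces the batch formula (17).

```python
import numpy as np

rng = np.random.default_rng(3)

m = 2
B = np.array([[2.0, 0.0], [0.5, 1.0]])       # time-invariant B_i for the sketch
G = np.linalg.inv(B.T @ B) @ B.T             # (B^T B)^-1 B^T
nus = rng.standard_normal((500, m))          # stand-in innovations v_i

# Batch estimate (17).
R_batch = sum(np.outer(G @ nu, G @ nu) for nu in nus) / len(nus)

# Recurrent estimate (18): R_k = (1/k) T_k + ((k-1)/k) R_{k-1},
# where T_k = (B^T B)^-1 B^T v_k v_k^T B (B^T B)^-1.
R_rec = np.zeros((m, m))
for k, nu in enumerate(nus, start=1):
    T_k = np.outer(G @ nu, G @ nu)
    R_rec = T_k / k + (k - 1) / k * R_rec

assert np.allclose(R_rec, R_batch)           # (18) is algebraically equivalent to (17)
```

This is the form suited to processing a sample of increasing volume: each new innovation ν_k updates the estimate in O(m²) work.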



4. Conclusions
    The study of this algorithm using software and hardware reveals several features. If the diagonal
elements of \hat{R}_k lie in the range R \le (\hat{R}_k)_{ii} \le 15R, i = \overline{1,m} (where R is the
true correlation matrix of the measurements), then the estimate x(k|k) is not very sensitive to changes
in the elements of the matrix C_k. If (\hat{R}_k)_{ii} < R, this can lead to large errors in estimating
x(k|k) due to the deterioration of the conditioning of the matrix C_k. To improve the operation of the
algorithm, it is advisable to use an iterative procedure for calculating the matrix \hat{R}_k: three
iterations are sufficient for the estimate to practically stop changing.
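The iterative procedure suggested above can be sketched as alternating between (i) running the Kalman-type filter with the current R̂ and (ii) re-estimating R̂ from the resulting innovations via (17). All model values below are illustrative; note also that filter innovations carry the prediction-error term H P Hᵀ in addition to B R Bᵀ, so this plain re-estimation somewhat overestimates R — it is a sketch of the procedure, not the paper's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative two-dimensional setup (hypothetical values).
Phi = np.array([[1.0, 0.1], [0.0, 0.9]])
H = np.eye(2); B = np.eye(2)
Q = 0.01 * np.eye(2)
R_true = np.diag([0.5, 0.1])

# Simulate measurements from models (1)-(2).
x = np.zeros(2); ys = []
for _ in range(2000):
    x = Phi @ x + rng.multivariate_normal(np.zeros(2), Q)
    ys.append(H @ x + rng.multivariate_normal(np.zeros(2), R_true))

def innovations(R_hat):
    """Run the Kalman-type filter with covariance R_hat; return the innovations v_i."""
    xf, Pf, nus = np.zeros(2), np.eye(2), []
    for y in ys:
        xp, Pp = Phi @ xf, Phi @ Pf @ Phi.T + Q
        nu = y - H @ xp
        S = H @ Pp @ H.T + B @ R_hat @ B.T
        K = Pp @ H.T @ np.linalg.inv(S)
        xf, Pf = xp + K @ nu, (np.eye(2) - K @ H) @ Pp
        nus.append(nu)
    return np.array(nus)

# Iterative procedure: re-estimate R via (17) from the latest innovations.
G = np.linalg.inv(B.T @ B) @ B.T
R_hat = np.eye(2)                      # crude initial guess
for _ in range(3):                     # three iterations, as suggested above
    nus = innovations(R_hat)
    R_hat = sum(np.outer(G @ nu, G @ nu) for nu in nus) / len(nus)
```

After the third pass the estimate changes only marginally between iterations, in line with the observation above.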

5. References
   [1] V. Grimm, Ten years of individual-based modelling in ecology: what have we learned and what
could we learn in the future?, Ecological Modelling 115 (1999) 129–148. doi:10.1016/S0304-
3800(98)00188-4.
   [2] D. Lande, Fundamentals of Modeling and Evaluation of Electronic Information Flows:
Monograph, Engineering, 2006.
   [3] M. Ogarkov, Methods for Statistical Estimation of the Parameters of Random Processes,
EnergoAtomIzdat, 1990.
   [4] B. Teipley, D. Born, A sequential procedure for estimating the state and covariance matrix of
observation errors, Rocket Technology and Astronautics 9(2) (1971) 27–34.
   [5] R. Liptser, A. Shiryaev, Statistics of Stochastic Processes, Science, 1974.



