Application of Decision-making Support, Nonlinear Dynamics,
 and Computational Linguistics Methods during Detection of
                   Information Operations
   © Dmitry V. Lande              © Oleh V. Andriichuk            © Anastasiya M. Hraivoronska
        Institute for Information Recording of National Academy of Sciences of Ukraine,
                                          Kyiv, Ukraine
   dwlande@gmail.com               oleh.andriichuk@i.ua               nastya_graiv@ukr.net
                                      © Natalia A. Guliakina
                 Belarusian State University of Informatics and Radioelectronics,
                                    Minsk, Republic of Belarus
                                        guliakina@bsuir.by

                                                         Abstract
          The paper describes the application of decision-making support, nonlinear dynamics, and computational linguistics
methods to the detection of information operations. The term "information operation" denotes a complex of information
measures meant to change public opinion about a certain object or process.
          The dynamics of the number of publications about the target object during a given period is represented as a time
series. To identify distinctive features of such series, we use the Morlet and "Mexican Hat" wavelets. In addition, we
check the correlation between the time series and the respective information operation template, and we also apply the
ΔL-method based on Detrended Fluctuation Analysis.
          Linguistic analysis of publications on the target object is another important aspect of information operation
detection. Using sentiment analysis, the emotional coloring of the texts of these publications is determined. To detect
sub-topics related to the basic topic, the collection of publications is used as a text corpus, to which TF-IDF and
visibility-graph approaches are applied to determine keywords. The keywords defined in this way are used to clarify
queries in content-monitoring systems.
          Decision-making support system tools are used to decompose information operation topics and to evaluate the
efficiency ratings of these topics over time. During the decomposition of information operation topics, knowledge
obtained from an expert group is used. As a result of the decomposition, a goal hierarchy graph of information
operation topics is built. Based on this graph, recommendations for decision-makers are produced (using the method of
target-oriented dynamic evaluation of alternatives) in the form of dynamic efficiency ratings of information operation
topics. These recommendations are used to evaluate the damage caused by the information operation, as well as to
form information counteractions.
          A concept of the information-analytical system for the detection of information operations is proposed.
          Keywords: information operations detection, time series, wavelet-analysis, keywords network, decision
support system, expert decomposition.

                                                     1 Introduction
          Sources of information have a substantial influence on people. The last several years have shown that mass media
can be used efficiently to propagate misinformation. Furthermore, social experiments show that people often believe
unconfirmed news and disseminate it. For example, in [1] the authors present a review of known false beliefs and
misinformation in American society. In [2] the author describes his experiments examining belief in political rumors
surrounding the health care reforms enacted by Congress in 2010. Depending on the details presented, 17-20% of the
respondents believed the rumors, 24-35% did not have a specific opinion, and 45-58% rejected them.
          Consequently, investigating processes in the information space is a topical issue. The problems of high
significance are processing information streams, identifying trends and anomalies, and detecting critical and meaningful
events in real-time mode.
          Let us define the information operation [3-4]. An information operation (IO) is a complex of informational
events (news on the Internet and in the media, comments on social networks, forums, etc.) aimed at changing public
opinion about a particular object (person, organization, institution, country, etc.). Most IOs have a typical structure
(Fig. 1). If an IO has the phases «background publications» – «calm» – «preparatory bombardment» – «calm» –
«attack», then the first three phases make it possible to predict future events with high probability.
        To illustrate the techniques and approaches presented in this article, we use the “Brexit” topic. A referendum
was held on Thursday, 23 June 2016, to decide whether the UK should leave or remain in the European Union. Leave
won by 51.9% to 48.1%. Naturally, this event is connected with many informational processes on the Internet. Brexit
remains a topical issue widely researched by the scientific community [5].
                                        Fig. 1. Information Operation Roadmap.

                2 Nonlinear Dynamics Methods for Analyzing Informational Streams
          When investigating thematic information streams, we consider how the number of publications changes over
time. A content-monitoring system provides time series data for a particular topic. In this case, the time series is the
sequence of daily counts of publications dedicated to the topic over a specific period. For example, we gathered such a
time series for the “Brexit” topic (Fig. 2).




                              Fig. 2. The Number of Publications on “Brexit” during 2016.

         To deal with time series data, we use wavelet-analysis [6]. A wavelet is a function that is well localized in
time. In practice, we often use the Mexican Hat wavelet:

$$\psi(t) = C\,(1 - t^2)\exp\left(-\frac{t^2}{2}\right),$$

and the Morlet wavelet:

$$\psi(t) = \exp\left(ikt - \frac{t^2}{2}\right).$$

        The essence of the wavelet-transform is to identify regions of the time series that are similar to the wavelet.
To explore different parts of the original signal with various degrees of detail, the wavelet is transformed by
stretching/squeezing and moving along the time axis. Therefore, the continuous wavelet-transform has a location
parameter $l$ and a scale parameter $s$. By definition, the continuous wavelet-transform of a function $x \in L^2(\mathbb{R})$
is:

$$W(l, s) = \frac{1}{\sqrt{s}} \int_{-\infty}^{\infty} x(t)\, \psi^{*}\!\left(\frac{t - l}{s}\right) dt,$$

where $l, s \in \mathbb{R}$, $s \neq 0$, and $\psi^{*}$ is the complex conjugate of $\psi$. The values $W(l, s)$ are called the
coefficients of the wavelet transform, or wavelet-coefficients. The wavelet-coefficients are visualized in a plot with a
location axis and a scale axis.
          The reason to use the Mexican Hat and Morlet wavelets is the possibility of detecting spikes in the time
series. The wavelet-coefficients for the “Brexit” time series are shown in Fig. 3 (Mexican Hat) and Fig. 4 (Morlet). In
both cases, the spike in the time series in the second half of June is strongly highlighted. One can also notice smaller
spikes in the second part of the time series.




             Fig. 3. The Wavelet-coefficients for the “Brexit” Time Series Using the Mexican Hat Wavelet.




                Fig. 4. The Wavelet-coefficients for the “Brexit” Time Series Using the Morlet Wavelet.
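The transform defined above can be illustrated with a short numerical sketch (our own simplified discretization over a toy spike series, not the InfoStream data):

```python
import numpy as np

def mexican_hat(t):
    # psi(t) = C (1 - t^2) exp(-t^2 / 2); C normalizes the wavelet in L2
    C = 2.0 / (np.sqrt(3.0) * np.pi ** 0.25)
    return C * (1.0 - t ** 2) * np.exp(-t ** 2 / 2.0)

def cwt(x, scales):
    # Discretized W(l, s) = (1 / sqrt(s)) * sum_t x(t) psi((t - l) / s)
    T = len(x)
    t = np.arange(T)
    W = np.empty((len(scales), T))
    for j, s in enumerate(scales):
        for l in range(T):
            W[j, l] = np.sum(x * mexican_hat((t - l) / s)) / np.sqrt(s)
    return W

# toy series: a single spike, mimicking the late-June "Brexit" peak
x = np.zeros(128)
x[64] = 1.0
W = cwt(x, scales=np.arange(1, 17))
```

In the resulting coefficient matrix, the magnitude peaks at the spike location across scales, which is exactly the behavior exploited in Figs. 3 and 4.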

         A wavelet must meet certain mathematical requirements to be used in the continuous wavelet-transform. More
generally, we can explore the correlation of the time series with a pattern of our choice. For example, if we aim to
detect an IO, we can use the pattern shown in Fig. 5, whose shape matches the stages of an IO. Now we use the
number of points in the pattern instead of the scale.
         The pattern moves along the time axis in the same way as a wavelet. To calculate each wavelet-coefficient we
use the entire time series, but in this case we need $k$ points of the series and a pattern with $k$ points to calculate the
correlation coefficient by the formula:

$$C(l, k) = \frac{\sum_{i=1}^{k} \bigl(x(l+i) - \bar{x}\bigr)\bigl(s(i) - \bar{s}\bigr)}
{\sqrt{\sum_{i=1}^{k} \bigl(x(l+i) - \bar{x}\bigr)^2 \sum_{i=1}^{k} \bigl(s(i) - \bar{s}\bigr)^2}}.$$

          To visualize the results, we make a plot similar to that for the wavelet-coefficients (Fig. 6). When this
method is applied, the small spikes in the second part of the time series are highlighted more noticeably.
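This sliding correlation can be sketched as follows (the series and pattern values here are illustrative, not the IO template of Fig. 5):

```python
import numpy as np

def pattern_correlation(x, pattern):
    # Pearson correlation C(l, k) between a k-point pattern s and the
    # k points of the series starting at location l.
    s = np.asarray(pattern, dtype=float)
    k = len(s)
    x = np.asarray(x, dtype=float)
    coeffs = []
    for l in range(len(x) - k + 1):
        w = x[l:l + k]
        num = np.sum((w - w.mean()) * (s - s.mean()))
        den = np.sqrt(np.sum((w - w.mean()) ** 2) * np.sum((s - s.mean()) ** 2))
        coeffs.append(num / den if den > 0 else 0.0)
    return np.array(coeffs)

# the coefficient reaches 1 exactly where the series repeats the pattern shape
series = [5, 5, 1, 4, 9, 5, 5]
c = pattern_correlation(series, [1, 4, 9])
```

Plotting such coefficients against the location l and the pattern length k gives the two-axis picture of Fig. 6.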
          Another approach to analyzing time series is the ΔL-method, which is based on the DFA (Detrended
Fluctuation Analysis) method [7]. The essence of the approach is to determine and visualize the absolute deviation of
the accumulated time series from the corresponding values of a linear approximation.
          First, fix a segment length $s$. We split the time series into overlapping segments: for the point $x_t$ we
choose the segment of length $s$ centered at the point $t$ (or at the point $t-1$ if $s$ is even). For each segment,
fit the points in it with a linear function. Denote by $L_{t,l,s}$ the value of the local approximation at the point $t$
for the segment centered at $l$. Next, calculate the absolute deviation of $x_t$ from the approximation line as follows:

$$\Delta_{t,l,s} = |x_t - L_{t,l,s}|.$$

          According to the method, we calculate the values $\Delta_{t,l,s}$ for all $l \in \{1, \ldots, T\}$ and
$s \in \{1, \ldots, \lfloor T/4 \rfloor\}$. Finally, we calculate the standard deviations:

$$E(l, s) = \sqrt{\frac{1}{s} \sum_{t=1}^{s} |x_t - L_{t,l,s}|^2} = \sqrt{\frac{1}{s} \sum_{t=1}^{s} \Delta_{t,l,s}^2}.$$




                                     Fig. 5. Pattern with Different Numbers of Points.




              Fig. 6. Correlation Coefficients for the “Brexit” Time Series and the Pattern Shown in Fig. 5.



         Coefficients $E(l, s)$ are plotted in the same way as wavelet-coefficients. We apply the ΔL-method to the
“Brexit” time series and present the results in Fig. 7.
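A minimal sketch of the computation of E(l, s) for one fixed segment length (a simplified reading of the method, using an ordinary least-squares fit for the local linear approximation):

```python
import numpy as np

def delta_L_row(x, s):
    # For one fixed segment length s: fit a line to the s points centered at
    # each l and return E(l, s), the RMS deviation of the segment from the fit.
    x = np.asarray(x, dtype=float)
    half = s // 2
    t = np.arange(s)
    E = np.full(len(x), np.nan)   # NaN where the segment would leave the series
    for l in range(half, len(x) - (s - half) + 1):
        seg = x[l - half : l - half + s]
        a, b = np.polyfit(t, seg, 1)          # local linear approximation L
        E[l] = np.sqrt(np.mean((seg - (a * t + b)) ** 2))
    return E

# a purely linear series deviates (numerically) not at all from its local trend
E = delta_L_row(np.arange(60, dtype=float), 9)
```

Stacking such rows for all segment lengths s up to T/4 yields the two-dimensional picture of Fig. 7.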




                     Fig. 7. Coefficients Obtained by the ΔL-method for the “Brexit” Time Series.

         The continuous wavelet-transform, the correlation with a pattern, and the ΔL-method help to identify spikes,
edges, periodicity, and local features of the time series. Plots (scalograms) obtained by these methods are used to
visualize special features of time series.

            3 Computational Linguistics Methods for Analyzing Informational Streams
         In the previous section, we discussed methods from nonlinear dynamics for analyzing the time series
associated with an information stream. In that case, we used only the numbers of publications dedicated to a particular
topic. In this section, we introduce some techniques of computational linguistics to extract useful information from the
text of the publications.
         Many publications and messages on social networks have strong sentiment. When analyzing the information
stream, it is useful to determine the attitude of the text, so we apply methods of sentiment analysis. In the simplest
case, we classify a publication as positive, negative, or neutral, using lists of words commonly associated with a
specific sentiment.
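In this simplest, lexicon-based form, the classification can be sketched as follows (the word lists are illustrative placeholders, not a real sentiment lexicon):

```python
# illustrative word lists; a real system would use a curated sentiment lexicon
POSITIVE = {"growth", "benefit", "success", "support"}
NEGATIVE = {"crisis", "fear", "loss", "damage"}

def sentiment(text):
    # count lexicon hits and classify the text as positive, negative, or neutral
    tokens = text.lower().split()
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```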
         The second useful task is to select significant words from the text. For this, one can apply the widely
accepted TFIDF score. To calculate TFIDF scores, a text consisting of N words is divided into equal parts of M
words. Then, for each i-th word in the text, TF(i) is the frequency of the word in the document, and DF(i) is the
number of parts in which the word occurs. The TFIDF score for each word is calculated as follows:

$$\mathrm{TFIDF}(i) = \mathrm{TF}(i)\,\log\frac{N}{M \cdot \mathrm{DF}(i)}.$$
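A direct sketch of this scoring (assuming TF(i) is the raw count of the word in the whole text, which is one plausible reading of the definition above):

```python
import math

def tfidf_scores(words, M):
    # Divide the N-word text into parts of M words; DF(i) counts the parts
    # containing word i, and TFIDF(i) = TF(i) * log(N / (M * DF(i))).
    N = len(words)
    parts = [set(words[j:j + M]) for j in range(0, N, M)]
    scores = {}
    for w in set(words):
        tf = words.count(w)
        df = sum(1 for part in parts if w in part)
        scores[w] = tf * math.log(N / (M * df))
    return scores

text = "leave remain leave vote leave trade vote trade".split()
scores = tfidf_scores(text, M=4)
```

Words spread across every part (here "leave" and "vote") score zero, while words concentrated in one part score high, which is the intended notion of significance.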
         Another estimate of word significance is the standard deviation estimate [8]. Let us number all words in the
text from 1 to N. Denote by $A^k(n)$ an occurrence of a certain word A, where k is the index of the occurrence and
n is its position in the text. The distance between occurrences of the word A is
$\Delta A_k = A^{k+1}(m) - A^{k}(n) = m - n$, where m and n are the positions of the (k+1)-th and k-th occurrences
of the word A in the text, respectively. Denote by $\Delta A$ the vector $(\Delta A_1, \Delta A_2, \ldots, \Delta A_K)$.
The standard deviation estimate is:

$$\sigma_A = \frac{\sqrt{\overline{\Delta A^2} - \overline{\Delta A}^{\,2}}}{\overline{\Delta A}}.$$
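A sketch of this estimate: σ_A is the coefficient of variation of the inter-occurrence distances, so a word spread evenly through the text scores near zero, while a clustered (and thus potentially significant) word scores high:

```python
import numpy as np

def sigma_estimate(words, target):
    # distances between successive occurrences of `target` in the word sequence
    positions = [n for n, w in enumerate(words) if w == target]
    d = np.diff(positions).astype(float)
    # sigma_A = sqrt(<d^2> - <d>^2) / <d>
    return float(np.sqrt(np.mean(d ** 2) - np.mean(d) ** 2) / np.mean(d))

# target at positions 1, 3, 5: evenly spaced, distances 2, 2 -> sigma = 0
even = ["x", "a", "x", "a", "x", "a", "x"]
# target at positions 0, 1, 10: clustered, distances 1, 9 -> sigma = 0.8
clustered = ["a", "a", "x", "x", "x", "x", "x", "x", "x", "x", "a"]
```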

         Using the TFIDF score or the standard deviation estimate, one can match each word in the text with a
numerical value, or weight. The next step might be to transform these pairs into a horizontal visibility graph [9]. The
horizontal visibility graph (HVG) allows constructing a keyword network. Such a network leads to detecting the
primary and secondary words essential for understanding the text, as well as significant collocations.
         The process of constructing the language network using the HVG consists of two stages. At the first stage, the
traditional HVG is constructed. At the second stage, all the nodes corresponding to a single word are combined into a
single node, and the connections of these nodes are combined as well. It has been shown that the largest-degree nodes
reflect the meaning of the text; thus, these nodes are of informational significance. The keywords can be used to
identify subtopics connected with the main topic of interest (Fig. 8).
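The two stages can be sketched as follows, under the common HVG rule that positions i < j are linked when every intermediate weight lies strictly below both endpoint weights (an assumption; [9] gives the precise construction):

```python
from collections import defaultdict

def compactified_hvg_degrees(words, weights):
    # Stage 1: horizontal visibility edges over the weight sequence.
    # Stage 2: merge all positions of the same word into one node and
    # return each word's degree in the merged (compactified) network.
    neighbors = defaultdict(set)
    n = len(weights)
    for i in range(n):
        for j in range(i + 1, n):
            if all(weights[k] < min(weights[i], weights[j]) for k in range(i + 1, j)):
                if words[i] != words[j]:
                    neighbors[words[i]].add(words[j])
                    neighbors[words[j]].add(words[i])
    return {w: len(nb) for w, nb in neighbors.items()}

# "eu" carries the largest weight, so it "sees" both of its neighbors
deg = compactified_hvg_degrees(["uk", "eu", "vote"], [1.0, 3.0, 2.0])
```

On a real corpus, the highest-degree nodes of this merged network are the keyword candidates shown in Fig. 8.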

                                         Fig. 8. Keywords for the “Brexit” Topic.

        4 Decomposition of Information Operations Topics and Their Rating Calculation
          Publications [10-12] show that IOs belong to weakly structured domains. In order to deal with such
domains, decision support systems (DSS) are used [13]. A DSS produces recommendations for a decision-maker by
modeling the weakly structured subject domain in its knowledge base (KB).
          Within the “goal hierarchy graph” model, a hierarchy of goals, or KB, represented in the form of an oriented
network-type graph (an example for Brexit is shown in Fig. 9), is built by experts. Nodes (vertices) of the graph
represent goals, or KB objects. Edges reflect the impact of one set of goals on the achievement of other goals: they
connect sub-goals to their immediate “ancestors” in the graph (super-goals). Goals can be quantitative or qualitative.




                                   Fig. 9. Structure of Goals Hierarchy of KB for Brexit

         To build the goal hierarchy, the method of hierarchic target-oriented evaluation of alternatives is used [13]: a
hierarchy of goals is built, the respective partial impact coefficients are set, and the relative efficiency of projects is
calculated. First, the main goal of the problem solution is formulated, as well as potential options for achieving it
(projects), which are to be estimated at further steps. After that, a two-phase procedure of goal hierarchy graph
construction takes place: “top-to-bottom” and “bottom-to-top” [13]. The “top-to-bottom” phase envisions step-by-step
decomposition of every goal into sub-goals or projects that influence the achievement of the given goal. The main goal
is decomposed into more specific components: sub-goals that influence it. These lower-level goals are then further
decomposed into even more specific sub-components, which are, in their turn, also decomposed. When a goal is
decomposed, the list of its sub-goals may include (besides newly formulated ones) goals already present in the
hierarchy that were formulated during the decomposition of other goals. The decomposition process stops when the
sets of sub-goals that influence higher-level goals include only already decomposed goals and the decision variants
being evaluated. Thus, when the decomposition process is finished, no goals are left unclear. The “bottom-to-top”
phase envisions the definition of all upper-level goals (super-goals, “ancestors”) for each sub-goal or project (i.e., the
goals this project influences).
          As mentioned above, the experts build a hierarchy of goals represented by an oriented network-type
graph (Fig. 9). Its nodes are marked by goal formulations. An edge connecting one node (goal) to another indicates
the impact of the first goal upon the achievement of the other. As a result of the goal hierarchy building process
described above, we obtain a graph that is unilaterally connected, because from each node there is a path to the node
marking the main goal. Each goal is assigned an indicator of its achievement level, from 0 to 1. This indicator equals 0
if there is no progress at all in achieving the goal, and 1 if the goal is completely achieved. The impact of one goal
upon another can be positive or negative. Its degree is reflected by the respective value: a partial impact coefficient
(PIC). The method of target-oriented dynamic evaluation of alternatives (MTDEA) also takes the delay of impact into
consideration [13]. In the case of projects, their implementation time is taken into account as well. PICs are defined by
experts, and, in order to improve the credibility of the expert estimation process, pair-wise comparison-based methods
are used.
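The aggregation over the hierarchy can be sketched as follows (a simplified reading that ignores impact delays and project implementation times, and takes a goal's achievement level to be the PIC-weighted sum of its sub-goals' levels, clipped to [0, 1]):

```python
def achievement_levels(graph, leaf_levels):
    # graph maps a goal to [(sub_goal, pic), ...]; PICs may be negative.
    # leaf_levels gives the achievement levels (0..1) of terminal nodes.
    memo = dict(leaf_levels)

    def level(goal):
        if goal not in memo:
            total = sum(pic * level(sub) for sub, pic in graph[goal])
            memo[goal] = min(1.0, max(0.0, total))  # clip to the 0..1 indicator
        return memo[goal]

    return {goal: level(goal) for goal in graph}

# hypothetical fragment: two sub-goals with PICs 0.6 and 0.5
levels = achievement_levels(
    {"main": [("immigration", 0.6), ("standards", 0.5)]},
    {"immigration": 0.5, "standards": 0.2},
)
```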
          Application of the approach and methodology set forth in [10-11] calls for the availability of a group of
experts. The work of experts is rather costly and requires considerable time, so reducing the usage of expert
information during the building of the DSS knowledge base (KB) in the process of IO detection is a relevant issue.
          The essence of the methodology of DSS KB building during IO detection [14-15] is as follows:
          1) Group expert estimation is conducted in order to define and decompose the goals of the information
operation. Thus, the IO is decomposed as a weakly structured system. For this purpose, the means of the system for
distributed collection and processing of expert information (SDCPEI) are used.
          2) Using the DSS, the respective KB is built, based on the results of the expert examinations conducted via
the SDCPEI as well as on available objective information.
          3) The dynamics of the thematic information flow is analyzed by means of a content-monitoring system
(CMS). PICs are entered into the KB of the DSS.
          4) Recommendations for the decision-maker are calculated by means of the DSS, based on the KB already built.
          The methodology is illustrated by the example of Brexit.
          The “Consensus-2” system [16], intended for evaluation of alternatives by a group of territorially distributed
experts, was used as the SDCPEI for group decomposition.
          Based on the interpretation of the database formed in the SDCPEI “Consensus-2”, the knowledge engineer
created the respective KB of the “Solon-3” DSS [17]. The structure of the goal hierarchy of this KB is shown in Fig. 9.
          After that, the dynamics of the thematic information flow was analyzed by means of the InfoStream CMS [18].
For this purpose, queries corresponding to each component of the IO were formulated in the specialized language.
Based on these queries, the dynamics of publications on the target topic was analyzed. During the formulation of
queries in accordance with the goal hierarchy structure (Fig. 9), the following rules were used:
          1) when moving from top to bottom, the queries of lower-level components of the IO were supplemented
with the queries of higher-level components (using the “&” symbol), for clarification;
          2) for abstract, non-specific IO components, movement from bottom to top took place, and the respective
queries were supplemented with the queries of lower-level components (using the “|” symbol);
          3) for specific IO components, the query was made unique.
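Rule 1 can be sketched as a top-down pass over the hierarchy (the tree shape and the query strings are assumptions for illustration; the actual InfoStream query language is not reproduced here):

```python
def refine_queries(node, parent_query=None):
    # node: {"topic": str, "query": str, "children": [...]} (assumed shape).
    # Moving top-down, each sub-topic's query is AND-ed ("&") with its
    # parent's accumulated query, narrowing the retrieved stream.
    own = node["query"]
    full = f"({parent_query}) & ({own})" if parent_query else own
    result = {node["topic"]: full}
    for child in node.get("children", []):
        result.update(refine_queries(child, full))
    return result

tree = {
    "topic": "Brexit",
    "query": "brexit | (EU referendum)",
    "children": [{"topic": "Immigration and refugees",
                  "query": "immigration | refugees", "children": []}],
}
queries = refine_queries(tree)
```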
          Based on the results of the queries to the InfoStream system, particularly on the number of documents
retrieved, the respective PICs were calculated for each component. PICs were calculated under the assumption that the
degree of impact of an IO component is proportional to the number of respective documents retrieved. The obtained
PIC values were input into the KB. Thus, we managed to avoid addressing the experts for the evaluation of the impact
degrees of IO components.
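Under this assumption, the PIC calculation reduces to a normalization of the retrieved document counts, sketched below (the counts are hypothetical):

```python
def pics_from_counts(doc_counts):
    # degree of impact proportional to the number of retrieved documents;
    # normalize so the sibling components' PICs sum to 1
    total = sum(doc_counts.values())
    return {topic: count / total for topic, count in doc_counts.items()}

pics = pics_from_counts({"Immigration and refugees": 910,
                         "Additional taxes": 210,
                         "Unemployment": 108})
```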
          Recommendations (in the form of dynamic efficiency ratings of information operation topics) produced in
the way described above are used to evaluate the damage caused by the information operation [12], as well as to form
information counteractions [20].
          The structure of the goal hierarchy of the KB for Brexit is shown in Fig. 9. Based on the KB, the “Solon-3”
DSS provided recommendations. The following rating was obtained for the information impact of the publications:
"Immigration and refugees" (0.364), "EU standards versus UK standards" (0.177), "EU laws versus UK laws" (0.143),
"Expansion of foreign trade relations" (0.109), "Additional taxes" (0.084), "Sales quota" (0.08),
"Unemployment" (0.043).

 5 Use Case Diagram of a Concept of the Information-Analytical System for the Detection of
                                Information Operations
         Shown in Fig. 10 is the use case diagram of a concept of the information-analytical system for IO detection.
The information-analytical system is an implementation of the methods described above via system integration of a
DSS, a CMS, an SDCPEI, and an expert estimation system (EES). Our concept of the information-analytical system
has the following actors:
       Expert is a specialist who is invited or hired to provide a qualified opinion, judgment, or estimation on some
issues.

       Knowledge engineer is a specialist who constructs the knowledge base and provides recommendations with
the help of the decision-making support toolkit.
       DSS is a system that provides recommendations based on the data from its knowledge base (both objective and
expert data).
       CMS is a system that collects information from websites automatically in real time. In addition, the CMS
performs structuring, grouping by semantic features, thematic selection, provision of access to information databases
in search mode, and analysis of the dynamics of thematic information streams.
       SDCPEI is a system for remote group work of experts in the global network.
       EES is a complex of software tools for expert estimation. An EES must adapt flexibly to the level of the experts
and allow obtaining full and exact knowledge from them. The “Level” (“Riven”) system [19] is an example of an EES.




   Fig. 10. Use Case Diagram of the Concept of Information-analytical System for Information Operations Detection

          Use Cases:
       Building a knowledge base. The knowledge engineer builds a knowledge base of DSS for modeling of the
weakly structured domain and producing the recommendations. In this process, both objective and expert information
can be used.
       Group decomposition. Experts take part in the group decomposition. According to the target of the IO, the
knowledge engineer forms a group of experts who are specialists in the field. The group decomposition process is
carried out through dialogue with the experts of the group in order to decompose (divide) the target into subparts. At
each stage of the decomposition, experts are asked to formulate a set of goals (they may choose from the existing ones
or formulate their own). These goals must directly affect the target. The decomposition of subsequent goals is
performed similarly. The decomposition process ends when we obtain the specific components of the IO. These
components must have precise formulations that can be used as queries for the content-monitoring system.
       Group expert estimation. The process can be initiated by the knowledge engineer after the group
decomposition process. During this process, the knowledge engineer identifies the type of the sub-target influences
(qualitative or quantitative, positive or negative), as well as their degrees.
       Objective Information Entering to the Knowledge Base. The knowledge engineer enters the objective
information (content-monitoring results) to the knowledge base.


       Analyzing the dynamics of the thematic information stream. The process is initiated by the knowledge
engineer. During this process, the CMS performs the search queries. The publications found are used to study the
dynamics of the number of publications on target topics.
       Calculation of recommendations. The process aims to calculate the relative efficiency of each component
of the IO (information throw-in). The relative efficiency means the contribution of the component to achieving the
target. The calculation is based on data from the knowledge base. In addition, we evaluate the aggregated estimation
of the effectiveness of the components. The dynamics of the process is taken into account. Based on past data, it is
possible to change the quantitative values of the target.
         The concept integrates all components presented above.

                                                    6 Conclusions
         In order to detect IOs, it is necessary to involve a complex of analytical techniques, including methods of
nonlinear dynamics and computational linguistics.
         The wavelet transform and its modifications are applied to detect periodicity, peaks, spikes, and behavioral
patterns in the dynamics of the number of thematic publications.
         To extract meaningful and significant words from the text of publications, we use the TFIDF score, the
standard deviation estimate, and horizontal visibility graphs.
         Applying decision-making support methods, we decompose the topics of the IO and assess the efficiency
ratings of these topics.
         A concept of a new information-analytical system for IO detection is proposed, based on the integration of a
decision support system, a content-monitoring system, a system for distributed collection and processing of expert
information, and an expert estimation system.

         This paper was prepared as part of project #F73/23558 “Development of Decision-making Support Methods
and Means for Detection of Information Operations”. The project won contest #F73 for grant support of scientific
research projects, held by the State Fund for Fundamental Research of Ukraine and the Belarusian Republican
Foundation for Fundamental Research.

                                                      References
     1. Lewandowsky S., Ecker U. K. H., Seifert C. M., Schwarz N., Cook J. Misinformation and Its Correction: Continued
         Influence and Successful Debiasing. Psychological Science in the Public Interest, 2012, Volume 13, Issue 3,
         pp. 106-131.
     2. Berinsky A. J. Rumors and Health Care Reform: Experiments in Political Misinformation, British Journal of Political
         Science, Volume 47, Issue 2 April 2017, pp. 241-262.
     3. Horbulin V. P., Dodonov O. G., Lande D. V. Information Operations and the Security of Society: Threats,
         Counteraction, Modelling: monograph. Kyiv, Intertekhnolohiia, 2009. 164 p. (in Ukrainian).
     4. Lande D. V., Dodonov V. A., Kovalenko T. V. Information Operations in Computer Networks: Modelling, Detection,
         Analysis. In: MODELING-2016: Proceedings of the Fifth International Conference, Kyiv, 25-27 May 2016. IPME
         NAS of Ukraine, 2016, pp. 198-201. (in Russian).
     5. Bachmann V., Sidaway J. D. Brexit geopolitics. Geoforum, 2016, Vol. 77, pp. 47-50.
     6. Addison Paul S. The illustrated wavelet transform handbook: introductory theory and applications in science,
         engineering, medicine and finance. Boca Raton, FL: CRC Press, Taylor & Francis Group, 2016. 446 p.
     7. Peng C.-K., Buldyrev S. V., Havlin S. et al. Mosaic organization of DNA nucleotides. Physical Review E, 1994,
         Vol. 49, No. 2, pp. 1685-1689.
     8. Ortuño M., Carpena P., Bernaola P., Muñoz E., Somoza A.M. Keyword detection in natural languages and DNA //
         Europhys. Lett, – 57(5). – P. 759-764 (2002).
     9. Lande D., Snarskii A. Compactified Horizontal Visibility Graph for the Language Network. arXiv preprint
         arXiv:1302.4619, 2013.
10. Andriichuk O.V., Kachanov P.T. A Methodology for Application of Expert Data-based Decision Support Tools while Identifying Informational Operations // CEUR Workshop Proceedings (ceur-ws.org), Vol-1813, urn:nbn:de:0074-1813-0. Selected Papers of the XVI International Scientific and Practical Conference "Information Technologies and Security" (ITS 2016), Kyiv, Ukraine, December 1, 2016, pp. 40-47. [http://ceur-ws.org/Vol-1813/paper6.pdf]
11. Kadenko S.V. Prospects and Potential of Expert Decision-making Support Techniques Implementation in Information Security Area // CEUR Workshop Proceedings (ceur-ws.org), Vol-1813, urn:nbn:de:0074-1813-0. Selected Papers of the XVI International Scientific and Practical Conference "Information Technologies and Security" (ITS 2016), Kyiv, Ukraine, December 1, 2016, pp. 8-14. [http://ceur-ws.org/Vol-1813/paper2.pdf]
12. Andriichuk O.V., Kachanov P.T. Usage of Expert Decision-making Support Systems in Information Operations Detection // Proceedings of the International Symposium for the Open Semantic Technologies for Intelligent Systems (OSTIS 2017), Minsk, Republic of Belarus, February 16-18, 2017, pp. 359-364.
13. Totsenko V. G. Methods and Systems of Decision-making Support: The Algorithmic Aspect. Kyiv: Naukova Dumka, 2002. 382 p. (In Russian)
14. Andriichuk O.V., Lande D.V. Construction of Knowledge Bases of Decision Support Systems for Detection of Information Operations // Data Recording, Storage and Processing: Proceedings of the Annual Final Scientific Conference, May 17-18, 2017 / NAS of Ukraine, Institute for Information Recording. Kyiv: IPRI NAS of Ukraine, 2017, pp. 101-103. (In Ukrainian)
15. Andriichuk O.V., Lande D.V. Application of Decision-making Support Tools for Detection of Information Operations // Actual Problems of State Information Security Management: Proceedings of the Scientific and Practical Conference (Kyiv, May 24, 2017). Kyiv: National Academy of the Security Service of Ukraine, 2017, pp. 161-163. (In Ukrainian)
16. Tsyganok V.V., Roik P.D., Andriichuk O.V., Kadenko S.V. Copyright Registration Certificate No. 75023, Ministry of Economic Development and Trade of Ukraine. Computer program "System of Distributed Collection and Processing of Expert Information for Decision Support Systems «Consensus-2»", registered November 27, 2017. (In Ukrainian)
17. Totsenko V.G., Kachanov P.T., Tsyganok V.V. Certificate of State Copyright Registration No. 8669, Ministry of Education and Science of Ukraine, State Department of Intellectual Property. Computer program "Decision Support System SOLON-3" (DSS SOLON-3), registered October 31, 2003. (In Ukrainian)
18. Grigoriev A.N., Lande D.V., Borodenkov S.A., Mazurkevich R.V., Patsiora V.N. InfoStream. Monitoring of News from the Internet: Technology, System, Service: Scientific and Methodological Manual. Kyiv: Start-98, 2007. 40 p. (In Russian)
19. Tsyganok V.V., Andriichuk O.V., Kachanov P.T., Kadenko S.V. Copyright Registration Certificate No. 44521, State Intellectual Property Service of Ukraine. Computer program "Software Complex for Expert Estimation by Pairwise Comparisons «Riven» (Level)", registered July 3, 2012. (In Ukrainian)
20. Tsyganok V. Decision-Making Support for Strategic Planning // Proceedings of the International Symposium for the Open Semantic Technologies for Intelligent Systems (OSTIS 2017), Minsk, Republic of Belarus, February 16-18, 2017, pp. 347-352.



