Modeling of Cognitive Process Using Complexity Theory
                       Methods

      Vladimir Soloviev[0000-0002-4945-202X], Natalia Moiseienko[0000-0002-3559-6081] and
                           Olena Tarasova [0000-0002-6001-5672]

    Kryvyi Rih State Pedagogical University, 54 Gagarin Ave., Kryvyi Rih, 50086, Ukraine
       {vnsoloviev2016, n.v.moiseenko, e.ju.tarasova}@gmail.com



       Abstract. The features of modeling the cognitive component of social and
       humanitarian systems are considered. An example based on multiscale,
       multifractal and network measures of complexity shows that these and other
       synergetic models and methods make it possible to describe the quantitative
       differences between cognitive systems correctly. The cognitive process is
       proposed to be regarded as a separate realization of an individual cognitive
       trajectory, which can be represented as a time series whose static and
       dynamic features can be investigated by the methods of complexity theory.
       The prognostic possibilities of complex systems theory will make it possible
       to correct the corresponding pedagogical technologies.

       Keywords: cognitive systems, complex systems, complex networks,
       synergetics, degree of complexity, new pedagogical technologies.


1      Introduction

Recently, it has become clear that pedagogical science operates on the transmission of
a kind of structured information, namely knowledge. Information, as the main concept
of cybernetics, is characterized by a metric function; thus, the search for optimal
management of educational processes is translated into the plane of mathematical
modeling [1-3].
   In science, starting with R. Descartes, I. Newton and P.-S. Laplace, determinism
and strict cause-and-effect constructions were predominant for a long time. Initially,
these views were developed in natural science and mathematics, and then moved into
the humanities, in particular into pedagogy. As a result, many attempts have been
made to organize education as a perfectly functioning machine. According to the
ideas dominant at the time, educating a person required only learning how to manage
such a “machine”, that is, turning education into a kind of production and
technological process. The emphasis was on standardized training procedures and
fixed patterns of learning. Thus appeared the beginnings of the technological approach
in teaching and, consequently, the predominance of teaching students reproductive
activity.
    For many complex systems, the phenomenon of self-organization is characteristic
[4]. Very often it leads to the situation where an object described by a large or even
infinite number of variables can be captured by just a few variables, the so-called
order parameters [5]. These parameters “subordinate” the other variables, defining
their values. Researchers know the mechanisms of self-organization that lead to the
emergence of order parameters, the methods of their description, and the
corresponding mathematical models. However, it is likely that our brain has a brilliant
ability to find these parameters, to “simplify reality”, finding more effective
algorithms for their selection. The process of learning and education allows one to
find successful combinations that can serve as the order parameter in certain
situations, or the mechanisms of searching for such parameters (“learn to study”,
“learn to solve non-standard tasks”).
   It is also advisable to use the ideas of soft (or fuzzy) modeling. Everything that
V.I. Arnold said about hard and soft models [6] also holds in pedagogical science.
Since in humanitarian systems the results of interaction and development cannot be
predicted in detail, by analogy with complex quantum systems one can speak of an
uncertainty principle for humanitarian systems. In the process of learning, unplanned
small changes and fluctuations always occur in the various pedagogical systems (the
individual, the team of students, and the systems of knowledge). Therefore, the
principle of uncertainty in a number of managerial and educational parameters should
lie at the basis of modern educational models.
   Network education belongs to a new educational paradigm [7], which is called
“networking”. Its distinctive feature is learning based on the synthesis of the objective
world and virtual reality by activating both the sphere of rational consciousness and
the sphere of the intuitive and unconscious. The networking of a student and a
computer is characterized as an intellectual partnership representing so-called
“distributed intelligence”. Unlike the traditional one, the network education strategy
is focused not on the systematization of knowledge and the assimilation of yet another
core of information, but on the development of the ability and motivation to generate
one’s own ideas [8].
   Within the framework of recent research presented at the Davos forum, the 10
skills most in demand by 2022 were identified [9]: (1) Analytical thinking and innovation;
(2) Active learning and learning strategies; (3) Creativity, originality and initiative;
(4) Technology design and programming; (5) Critical thinking and analysis;
(6) Complex problem-solving; (7) Leadership and social influence; (8) Emotional
intelligence; (9) Reasoning, problem-solving and ideation; (10) Systems analysis and
evaluation. Obviously, the cognitive component is dominant in the transformation
processes of Industry 4.0, which makes the study of cognitive processes especially
relevant.
   The difficulty here is that cognitive processes are poorly formalized. Therefore,
until recently the field of theoretical work was virtually empty. The picture has
changed fundamentally with the application of recent synergetic studies. The point is
that the doctrine of the unity of the scientific method asserts that events in social-
humanitarian systems can be studied with the same methods and criteria as natural
phenomena. Significant success has been achieved within the framework of
interdisciplinary approaches and the theory of self-organization – synergetics [4, 5].
   The process of intellection is a cognitive process characterized by an individual
cognitive trajectory, whose complexity is an integro-differential characteristic of the
individual. The task is to quantify cognitive trajectories and present them in the form
of time series that can be analyzed quantitatively. The theory of complexity, by
introducing various measures of complexity, allows us to classify cognitive
trajectories by complexity and to select the more complex ones as the more efficient.
The analysis can be carried out dynamically, correcting the trajectories by means of
progressive pedagogical technologies.
   Previously, we introduced various quantitative measures of complexity for
particular time series, in particular algorithmic, fractal, chaos-dynamic, recurrent,
irreversibility, network, and other measures [10]. A significant advantage of the
introduced measures is their dynamism, that is, the ability to monitor the change of
the chosen measure over time and to compare it with the corresponding dynamics of
the original time series. This allowed us to match changes in the dynamics of the
system described by the time series with characteristic changes in particular measures
of complexity and to draw conclusions about the properties of the cognitive trajectory.
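   To make this “dynamism” concrete, the sketch below evaluates an arbitrary
complexity measure in a window sliding along a series, so that its evolution can be
compared with the original dynamics; the function name, the window parameters and
the use of Python/NumPy are our own illustrative assumptions, not procedures fixed
in [10].

```python
import numpy as np

def sliding_measure(series, measure, window, step=1):
    """Evaluate a complexity measure in a window sliding along the series,
    so that its evolution can be compared with the original dynamics."""
    x = np.asarray(series, dtype=float)
    starts = range(0, len(x) - window + 1, step)
    return np.array([measure(x[s:s + window]) for s in starts])

# Example with a trivial "measure" (the standard deviation); any of the entropy,
# fractal or network measures discussed below can be passed instead.
x = np.cumsum(np.random.randn(2000))
profile = sliding_measure(x, np.std, window=250, step=25)
```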
   Objects of research are cognitive processes that control neurophysiological and
other cognitive characteristics of a person:

─ the length of the full step of children of different ages [11], of healthy young people
  and of the elderly, and of those with neurodegenerative diseases (Alzheimer’s,
  Parkinson’s, Huntington’s, etc.) [12];
─ human recall of words [13];
─ objects of cognitive linguistics – the works of various authors, different genres,
  written in different languages [14];
─ discretized multi-genre musical compositions [15].

The corresponding databases in the form of time series are in open access [16].
   In this paper, we consider some of the informative measures of complexity and
adapt them in order to study the cognitive processes. The paper is structured as
follows. Section 2 describes previous studies in these fields. Section 3 presents
information mono- and multiscale measures of complexity. Section 4 describes the
technique of fractal and multifractal analysis. Network measures of complexity and their
effectiveness in the study of cognitive processes are presented in Section 5.


2      Analysis of previous studies

Researchers interested in human cognitive processes have long used computer
simulations to try to identify the principles of cognition [17]. Existing theoretical
developments in this scientific field describe complex, dynamic, and emergent
processes that shape intra- (e.g., cognition, motivation and emotion) and inter- (e.g.,
teacher-student, student-student, parent-child interactions, collaborative teams) person
phenomena at multiple levels. These processes are fundamental characteristics of
complex systems, but the research methods used sometimes do not match the
complexity of the processes that need to be described.
   From the set of methods of the theory of complex systems we consider only those
related to information, fractal, and network complexity measures.
   Entropic measures in general are relevant for a wide variety of linguistic and
computational subfields. In the context of quantitative linguistics, entropic measures
are used to understand laws in natural languages, such as the relationship between
word frequency, predictability and the length of words, or the trade-off between word
structure and sentence structure [18]. Together with Shannon’s entropy, more
elaborate versions are used: Approximate entropy and Sample entropy [19].
   In order to demonstrate the scale-invariant properties of cognitive processes, these
types of entropy were used in their multiscale versions in studies of cerebral activity
[20], of human locomotion [21], and in linguistics [19].
   Cognitive processes, like most complex systems [22], exhibit fractal properties
[23, 24], so their analysis and the use of the results require careful research.
   In recent years, the methods of complex networks [25] have become widespread.
They allow not only the construction and exploration of networks with explicitly
given nodes and links, as in linguistics [26], but also of networks reproduced from
time series by actively developing methods [27, 28].
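   One widely used representative of this class of methods is the natural visibility
graph of Lacasa et al., in which every sample of the series becomes a node and two
nodes are linked if the straight line connecting them stays above all intermediate
samples. The sketch below, relying on the networkx library, is an assumed
illustration of the idea rather than a reproduction of the particular algorithms of
[27, 28].

```python
import numpy as np
import networkx as nx

def visibility_graph(series):
    """Natural visibility graph: samples i and j are linked if the straight line
    between (i, x_i) and (j, x_j) passes above every intermediate sample."""
    x = np.asarray(series, dtype=float)
    g = nx.Graph()
    g.add_nodes_from(range(len(x)))
    for i in range(len(x) - 1):
        g.add_edge(i, i + 1)                 # neighbouring samples always see each other
        for j in range(i + 2, len(x)):
            k = np.arange(i + 1, j)
            line = x[i] + (x[j] - x[i]) * (k - i) / (j - i)
            if np.all(x[k] < line):
                g.add_edge(i, j)
    return g

# Example: degree distribution of the graph built from a random walk
g = visibility_graph(np.cumsum(np.random.randn(500)))
print(nx.degree_histogram(g)[:10])
```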
    In our recent works, we have used some of the modern methods of the theory of
complex systems for the analysis of such a complex system as cryptocurrency [29,
30]. In this paper, we adapt them to cognitive signals.


3      Information mono- and multiscale measures of
       complexity

Depending on the nature of the methods underlying the construction of a complexity
measure, different measures place particular demands on the time series that serve as
input. For example, information measures require stationarity of the input data. At the
same time, the measures differ in their sensitivity to such characteristics as
determinism, stochasticity, causality and correlation. In this paper, we do not use
classical information measures (for example, Kolmogorov complexity or entropy
measures), since complex signals manifest their inherent complexity on various
spatial and temporal scales, that is, they have scale-invariant properties. These
properties manifest themselves, in particular, through power-law distributions.
   Obviously, the classical indicators of algorithmic complexity are inadequate here
and lead to erroneous conclusions. To overcome such difficulties, multiscale methods
are used.
   The idea of this group of methods includes two consecutive procedures: 1) coarse
graining (“granulation”) of the initial time series, that is, averaging the data over non-
intersecting segments whose size (the averaging window) increases by one when
moving to the next larger scale; 2) computing, at each of the scales, a definite (still
monoscale) complexity indicator.
   The coarse-graining procedure consists in averaging the series over consecutive
non-intersecting windows whose size τ increases in the transition from scale to
scale [31].
   Each element of the “granular” time series is defined by the expression

      y_j^{(\tau)} = \frac{1}{\tau} \sum_{i=(j-1)\tau+1}^{j\tau} x_i , \qquad 1 \le j \le N/\tau ,                    (1)

where τ characterizes the scale factor. The length of each “granular” series depends on
the size of the window and is equal to N/τ. For a scale factor equal to 1, the “granular”
series is exactly identical to the original one.
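   A minimal Python sketch of the coarse-graining step (1) might look as follows; the
function name and the use of NumPy are assumptions made for illustration only.

```python
import numpy as np

def coarse_grain(series, tau):
    """Coarse-grain a series by averaging over non-overlapping windows of
    length tau, as in Eq. (1); the result has length N // tau."""
    x = np.asarray(series, dtype=float)
    n = len(x) // tau
    return x[:n * tau].reshape(n, tau).mean(axis=1)

# A scale factor of 1 reproduces the original series (up to a truncated tail)
assert np.allclose(coarse_grain([1.0, 2.0, 3.0, 4.0], 1), [1.0, 2.0, 3.0, 4.0])
```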
    We demonstrate the work of multiscale measures of complexity on the examples
of Approximate Entropy and Sample Entropy [19]. Approximate Entropy (ApEn) is a
“regularity statistic” that quantifies the predictability of fluctuations in a time series.
Intuitively, the presence of repetitive patterns of fluctuation (sequences of a certain
length built from successive elements of the series) makes a time series more
predictable than one in which such patterns are absent. A comparatively large value
of ApEn indicates that similar observation patterns are unlikely to follow one another.
In other words, a time series containing a large number of repetitive patterns has a
relatively small ApEn, while the ApEn value of a less predictable (more complex)
process is greater.
    When calculating ApEn for a given time series SN consisting of N values t(1), t(2),
t(3), ..., t(N), two parameters, m and r, are chosen. The first of them, m, specifies the
length of the template, and the second, r, defines the similarity criterion. The
sequences of elements of SN consisting of m numbers taken starting from the number i
are called vectors pm(i). Two vectors (patterns), pm(i) and pm(j), are considered
similar if all the differences between their respective coordinates are less than r, that
is, if |t(i+k) – t(j+k)| < r for 0 ≤ k < m.
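   A compact Python sketch of ApEn, together with its multiscale version (ApEn
applied to the coarse-grained copies produced by the function given above), is shown
below; the default values m = 2 and r = 0.2·SD are common heuristic choices and not
parameters prescribed by this paper.

```python
import numpy as np

def approximate_entropy(series, m=2, r=None):
    """Approximate Entropy ApEn(m, r): Phi(m) - Phi(m + 1), where Phi(m) is the
    average logarithm of the fraction of templates of length m that are similar
    (Chebyshev distance <= r) to a given one; self-matches are counted."""
    x = np.asarray(series, dtype=float)
    if r is None:
        r = 0.2 * x.std()                    # a common heuristic tolerance

    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # pairwise Chebyshev distances; O(n^2) memory, fine for short series
        dist = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        frac = (dist <= r).sum(axis=1) / n
        return np.log(frac).mean()

    return phi(m) - phi(m + 1)

def multiscale_apen(series, scales, m=2):
    """ApEn of the coarse-grained series for each scale factor; the tolerance r
    is fixed from the original (scale 1) series. Relies on coarse_grain() from
    the sketch above."""
    x = np.asarray(series, dtype=float)
    r = 0.2 * x.std()
    return [approximate_entropy(coarse_grain(x, tau), m, r) for tau in scales]
```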