<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Classification Technology Based on Hyperplanes for Visual Analytics with Implementations for Different Subject Areas</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Glushkov Cybernetics Institute</institution>
          ,
          <addr-line>Kyiv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Kherson national technical university</institution>
          ,
          <addr-line>Kherson</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Khmelnytskyi National University</institution>
          ,
          <addr-line>Khmelnytskyi</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>1868</year>
      </pub-date>
      <fpage>0000</fpage>
      <lpage>0003</lpage>
      <abstract>
<p>The integration of human intellectual capabilities into the process of building a machine learning model is one of the most promising research directions. Its advantage is the effective combination of the capabilities of human and machine through the use of visual analytics. Visual analytics combines machine learning, data transformation, and data visualization, enabling people to understand large and complex data. Using this approach, a human can form a mental model of a decision-making mechanism based on data analysis. To enable the machine to use this model, it must be transformed into the form the machine uses. The paper proposes an information technology for transforming a model from the domain of human understanding into a machine representation through formalization. The practical application of this technology is presented using the data classification method as an example. Data is visualized by lowering the dimension of the feature space. Using visual analytics, a human forms a classification model that is transformed into machine form through formalization. This research demonstrates the effectiveness of human-machine interaction in the process of model building and the model transformation technique.</p>
      </abstract>
      <kwd-group>
        <kwd>Visual Analytics</kwd>
        <kwd>Classification</kwd>
        <kwd>Test Tasks</kwd>
        <kwd>Model Building</kwd>
        <kwd>Recognition</kwd>
        <kwd>Human-Machine Interaction</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
<p>The use of human analytical abilities in machine learning significantly improves and
expands the possibilities of the practical application of artificial intelligence. A human
can understand the information content of data, its relations, and its structure. This is
the main reason for involving humans in, and integrating them into, machine learning-based
decision-making systems. However, effectively integrating a human, namely his or her
intellectual capabilities, is complicated from the point of view of developing
information technologies that are embodied in practical tools. The related works section
describes solutions presented in the current research areas of machine learning. The
purpose of this paper is the efficient transformation of data from various applied areas
for classification based on hyperplanes. This is justified by the fact that the
effectiveness of the technology largely depends on the correct conversion of data with no
loss of information content, which in turn depends on the application area. The main
prerequisites for the information technology, and the process of obtaining decisions
using a model built by a human, are presented in the section on the concept of
integrating the intellectual capabilities of a human on the basis of the construction of
a mental model.</p>
    </sec>
    <sec id="sec-2">
      <title>Related works</title>
      <p>
The interaction between a computer and a human is very important, and sometimes
crucial. Interaction involves the exchange of information, and the presentation of this
information must be understandable: information must be transformed and presented in
such a way that its recipient can understand and interpret it. The obtained information
may itself be the end product for the consumer; examples include the various tables,
graphs, and charts prepared for a human. Humans make the fullest and most comprehensive
use of visual representations of information, so various types of visualization are
important for them. Often, the graphics that the machine prepares are the final product
for the human. For the machine, a human presents information in the form of numbers
that form volumes of data. However, a human, providing the machine with data, wants to
receive the result of the work on that data in an informational presentation that is
convenient to use. Based on the calculation results, a person can change the data or
algorithms, providing feedback to the machine and improving its results. Thus, the
human becomes involved in the process of obtaining the necessary computed results,
cyclically participating in the computational process and becoming a necessary part of
it. This direction in the development of human-machine interaction is called
“human-in-the-loop” [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1, 2, 3</xref>
        ].
      </p>
      <p>
Improving the process of visual analysis shortens the path of cognition, and in this
direction the interaction between machine and human is being improved. Interface
interaction is the main part of visual analytics, as it is the only connecting link,
and it plays a key role in the transfer of information between human and machine. An
important issue is the quality of this interaction. Since the interaction is
process-oriented, it is difficult to assess how much knowledge has expanded and, as a
consequence, to evaluate the quality of the interaction itself. In the general case, in
our opinion, a qualitative interaction should reduce the number of cyclic interactions,
for example, the number of “sensemaking loops” [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>
A further development of the process of human-machine interaction is the knowledge
generation model for visual analytics [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The proposed interaction scheme separates
the processes and is focused on obtaining a specific result, which is knowledge.
      </p>
<p>This process is also human-oriented, but it allows the result of using
visualization to be specified. The knowledge acquired by a human is the result of using
visual analytics, and it can be formalized to a greater degree than a focus on the
process of cognition allows. The final product in this case is knowledge.</p>
<p>Effective visualization shows as much information as possible in its simplest
form. This aspect is very important because it expands the use of visual analytics to
analysts with different qualifications. Thus, the trends in the use of visual analytics
indicate an increasing involvement of the human in the process of extracting knowledge
from data and in the development of the visual analytics workflow.</p>
    </sec>
    <sec id="sec-3">
      <title>The concept of integrating the intellectual capabilities of a human on the basis of the construction of a mental model</title>
      <p>
As in the approaches above, visual analysis serves the end user: the human.
However, the machine may also be the final consumer of the product of visual analytics.
In this case, the human acts as a necessary, integrated part of the system for
obtaining the final product that the machine will use. Areas such as
interactive machine learning [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
] allow the intellectual capabilities of a
human to be used. The machine produces the final product while including the human in a
cyclic process of improving the result. Visual analytics, or more precisely human
intellectual abilities, are used to build the final product of machine learning, as in
the VIS4ML ontology proposed in [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ].
      </p>
      <p>
        Thus, a human makes a valuable contribution and helps the machine to achieve the
development goal, which is the creation of a model. This model, by definition in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ],
is formal and is used by the machine.
      </p>
      <p>
Two forms of the model are distinguished: a formal one, for machine use, and a mental one [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10, 11,
12</xref>
        ], for human use. Visualization is by far the most informative for humans. In the
investigations [
        <xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15</xref>
], an approach was proposed in which a mental
model is formed and then used by the machine as another execution environment,
together with tools for projecting the use of a mental model by a machine. The
approach is demonstrated on the example of data classification based on clustering.
Piecewise linear restrictive rules define class regions and make it possible to
determine visually whether a class region needs to be expanded or restricted, which is
important for borderline data, especially in applications [
        <xref ref-type="bibr" rid="ref15 ref16">15, 16</xref>
]. A mental model is created that the
human uses to form hypervolumes and class boundaries. This provides tools through
which the system obtains additional information and the classification process remains
controllable. The results of the system's work are well understood and manageable due
to the visual presentation and the interactivity of the restrictive rules [
        <xref ref-type="bibr" rid="ref17 ref18 ref19">17, 18, 19</xref>
        ].
Further, we consider the application of the proposed approach to tasks from the
following subject areas: classification of textual information, classification of
facial expressions of emotions, marking of the ECG signal, and classification of test
tasks for adaptive testing.
In the study [
        <xref ref-type="bibr" rid="ref20">20</xref>
] the main areas of the face are identified whose changes form the
facial expressions inherent in a particular emotional state of a human. At a certain
level of aggregation, the facial areas most influential on mimic expressions can be
distinguished: those containing the eyebrows, eyes, and mouth [
        <xref ref-type="bibr" rid="ref21 ref22 ref23">21, 22, 23</xref>
]. By grouping the structural
components of mimic displays, a set of qualitative characteristics of the displacements
of points or groups of points can be formed, as given in Table 1. The points are
determined on the face from pictures obtained with an Intel RealSense camera [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] and
correspond to the locations of the muscular structures of the face.
Given the need to identify facial expressions by means of conventional low-resolution
cameras, and based on the results in Table 1, the following gradation is introduced
for the features located in the facial areas:
─ eyes: {open, narrowed, normal};
─ lips: {stretched, compressed, normal};
─ eyebrows: {raised, lowered, normal}.
      </p>
      <p>According to the above gradation, the mimic expressions of emotions are presented
as follows (Table 2).</p>
      <table-wrap id="tab2">
        <label>Table 2.</label>
        <caption>
          <p>Mimic expressions of emotional states.</p>
        </caption>
        <table>
          <thead>
            <tr><th>Emotion</th><th>eyes</th><th>lips</th><th>eyebrows</th></tr>
          </thead>
          <tbody>
            <tr><td>Joy</td><td>normal</td><td>stretched</td><td>raised</td></tr>
            <tr><td>Grief</td><td>normal</td><td>compressed</td><td>lowered</td></tr>
            <tr><td>Fear</td><td>open</td><td>normal</td><td>raised</td></tr>
            <tr><td>Anger</td><td>narrowed</td><td>normal</td><td>lowered</td></tr>
            <tr><td>Delight</td><td>normal</td><td>stretched</td><td>lowered</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <p>The representation of mimic displays in the context of emotional states shown in
Table 2 serves as the basis for the subsequent synthesis of the model by which detection
is carried out. The empirically defined features are formalized as follows:
– x1 – the feature of facial expressions of the eyes area;
– x2 – the feature of facial expressions of the lips area;
– x3 – the feature of facial expressions of the eyebrows area.</p>
      <p>
        x1, x2, x3 ∈ [0,1], where x1 ∈ [0,0.2] for narrowed eyes, x1 ∈ [0.4,0.6] for
normal eyes, and x1 ∈ [0.8,1] for open eyes; x2 ∈ [0,0.2] for compressed lips,
x2 ∈ [0.4,0.6] for normal lips, and x2 ∈ [0.8,1] for stretched lips; x3 ∈ [0,0.2] for
lowered eyebrows, x3 ∈ [0.4,0.6] for normal eyebrows, and x3 ∈ [0.8,1] for raised
eyebrows. The unused gaps in the proposed synthetic model, (0.2,0.4) and (0.6,0.8),
serve to ensure good separability of the different emotional states during
classification.
      </p>
      <p>
        The validity of the proposed model is verified by the data obtained in the study
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].
      </p>
      <p>According to the images of the relevant emotions from the above study and
according to Table 2, the features are formed within the relevant intervals. The
generated input is visualized in two-dimensional space by the proposed approach (Fig. 2).</p>
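The paper does not specify which dimension-lowering transformation is used, so the following is only an illustration under that assumption: the three-dimensional feature vectors are projected to two dimensions with a PCA computed via SVD, and synthetic samples near two Table 2 prototypes stay grouped after projection. All names are hypothetical.

```python
import numpy as np

def project_2d(X: np.ndarray) -> np.ndarray:
    """Project rows of X (n_samples x n_features) onto the first two principal axes."""
    Xc = X - X.mean(axis=0)                      # center the data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                         # coordinates in the 2-D principal plane

# Synthetic samples near the Table 2 prototypes for "joy" and "grief":
# joy = (normal eyes, stretched lips, raised brows), grief = (normal, compressed, lowered).
rng = np.random.default_rng(0)
joy = rng.uniform([0.4, 0.8, 0.8], [0.6, 1.0, 1.0], size=(20, 3))
grief = rng.uniform([0.4, 0.0, 0.0], [0.6, 0.2, 0.2], size=(20, 3))
P = project_2d(np.vstack([joy, grief]))          # 40 x 2 array; the groups remain separated
```

Any projection that preserves the grouping of the data would serve the same purpose; PCA is used here only as a common default.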
      <p>As Fig. 2 shows, the synthesized data are grouped by emotion, which confirms
that the proposed model can be used to classify emotional states. Further, following
the steps of the proposed approach, piecewise linear dividers are indicated for the
classes that correspond to the emotional states, and the hyperplane parameters are
obtained. Using these parameters, a decision tree was constructed for the hyperplane
classification of mimic expressions of emotional states.
As can be seen in Fig. 2, the classes of the emotions "anger" and "grief" can each be
visually divided into two groups. For the emotion "grief", this is explained by the fact
that some of the respondents in the photographs have narrowed eyelids ( x1 ∈ [0,0.2] ),
while the rest have their eyelids in the normal state ( x1 ∈ [0.4,0.6] ) (Table 3). For
the emotion "anger", some of the respondents have narrowed eyelids ( x1 ∈ [0,0.2] ),
while others have widened ones ( x1 ∈ [0.8,1] ) (Table 3).</p>
      <sec id="sec-3-1">
        <title>ECG signal marking</title>
        <p>
          The application of the proposed approach to the task of marking the
electrocardiographic (ECG) signal is considered. When analyzing an ECG signal, an important
step is partitioning it into intervals containing QRS complexes (Fig. 3).
The QRS complex is the combination of three of the graphical deflections seen on a
typical electrocardiogram (EKG or ECG) [
          <xref ref-type="bibr" rid="ref26">26</xref>
]. It is usually the central and most
visually obvious part of the tracing; in other words, it is the main spike seen on an ECG
line. It corresponds to the depolarization of the right and left ventricles of the human
heart and the contraction of the large ventricular muscles.
        </p>
        <p>The Q, R, and S waves occur in rapid succession and reflect a single event and thus
are usually considered together. A Q wave is any downward deflection immediately
following the P wave. An R wave follows as an upward deflection, and the S wave is
any downward deflection after the R wave. The T wave follows the S wave, and in
some cases, an additional U wave follows the T wave.</p>
        <p>To realize the partitioning, the training ECG signal is divided into two classes:
intervals containing QRS complexes (positive samples) and intervals without QRS
complexes, or with only their partial presence (negative samples) (Fig. 4). The resulting
intervals are reduced to one dimension (a given number of values).</p>
        <p>For the training sample, we apply the proposed approach and obtain the parameters of
a hyperplane that separates the two classes. Sliding a window of a given size along the
ECG signal, we form an interval and check whether it belongs to one of the classes. If the
interval contains a QRS complex, we mark the signal (Fig. 5).</p>
        <p>
          The training and testing of the proposed approach was carried out on the data from
[
          <xref ref-type="bibr" rid="ref27">27</xref>
] and demonstrated the ability of the approach to partition the ECG signal into
intervals containing QRS complexes.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Classification of test tasks for adaptive testing</title>
        <p>
The proposed approach also has the potential to solve the problem of classifying test
tasks in the process of adaptive testing of the level of knowledge. Adaptive methods
are widespread in modern e-education, in particular for determining the level of
knowledge [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. There are different adaptive testing algorithms. With an adaptive
approach to the testing process, regardless of the algorithm used, every next test task
is selected depending on the user's response to the previous test task.
        </p>
        <p>For the automatic selection of the next test task, the tasks from the initial set that
have not yet been used in the testing process (the relevant ones) are analyzed. The
analysis is performed over many parameters of the test tasks, including:
1. The classic parameters provided by the test task for any testing algorithm, both
adaptive and classic: the type of test task; the number of correct answers;
the level of difficulty; the number of characters in the task and the answers;
the maximum response time, etc.
2. The semantic parameters that are required for adaptive testing and relate each
test task to the elements of the semantic structure of the educational course:
the current heading of the educational material; a key term whose knowledge is
tested; a snippet of the content of the educational material that was used to
create the test task.</p>
        <p>When classifying the current test tasks by each parameter, some classes may contain
no samples, or the samples in the classes may be unevenly distributed. The cumulative set
of parameters forms a polycube of relevant test tasks, which must be compared with the
polycube formed by the irrelevant (already used) test tasks. The result is the choice of
the next test task for the user, one whose parameters differ as much as possible from
those of the irrelevant test tasks. This process is repeated each time the user answers
the next test task, and the elements of the model of the given multidimensional
representation of parameters are updated every time.</p>
        <p>
          The proposed approach to defining the boundaries of classes and determining the
need for their transformation makes it possible to automatically provide the required
interactivity and dynamism of the classification system. The set of test tasks, obtained
as a result by the user, will be balanced in their parameters, representative of the
semantic structure of the educational course and corresponding to the chosen algorithm
of adaptive testing [
          <xref ref-type="bibr" rid="ref28">28</xref>
          ]. Possibilities for visual dynamic presentation provide tools for
understanding, controlling and validating the actions performed at different stages of
the adaptive testing process of the level of knowledge [
          <xref ref-type="bibr" rid="ref29">29</xref>
          ].
        </p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Conclusions</title>
      <p>The use of visual analytics must be focused on the formation of a final product in
the form of a model. This direction is the most promising, as it allows not only finding
solutions but also creating a mechanism for making decisions. In this way, both a mental
model for the human and a formal model for use by the machine are formed. The resulting
models are the product of effective interaction between human and machine, productively
exploiting the advantages of each. Orienting the use of visual analytics not toward the
process of cognition but toward obtaining a model expands its use.</p>
      <p>Based on their analytical abilities, humans themselves determine the possibility and
degree of data separation from the visualization. A necessary condition is to minimize
losses and to identify relationships between the data: the visual representation of the
space should not distort or reduce informational connections. The use of data from
diverse application areas has shown that visualization is effective when the
transformation is chosen correctly. The data presented in visual form aggregate into
groups that distance themselves from one another. This suggests the possibility of
widespread use of the approach of transforming a mental model into a machine-executed
space.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Endert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hossain</surname>
            ,
            <given-names>M. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramakrishnan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>North</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fiaux</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrews</surname>
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>The human is the loop: new directions for visual analytics</article-title>
          .
          <source>In: Journal of Intelligent Information Systems</source>
          ,
          <volume>43</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>411</fpage>
          -
          <lpage>435</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Endert</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ribarsky</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turkay</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wong</surname>
            ,
            <given-names>B. L. W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nabney</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Díaz Blanco</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rossi</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>The state of the art in integrating machine learning into visual analytics</article-title>
          ,
          <source>Computer Graphics Forum</source>
          ,
          <volume>36</volume>
          (
          <issue>8</issue>
          ), pp.
          <fpage>458</fpage>
          -
          <lpage>486</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Thomas</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cook</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Illuminating the Path: Research and Development Agenda for Visual Analytics</article-title>
          . IEEE-Press (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pirolli</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Card</surname>
            ,
            <given-names>S.:</given-names>
          </string-name>
          <article-title>The Sensemaking Process and Leverage Points for Analyst Technology as Identified Through Cognitive Task Analysis</article-title>
          .
          <source>In: Proceedings of International Conference on Intelligence Analysis</source>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Sacha</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoffel</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stoffel</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwon</surname>
            ,
            <given-names>B. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ellis</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keim</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>Knowledge generation model for visual analytics</article-title>
          .
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          ,
          <volume>20</volume>
          (
          <issue>12</issue>
          ), pp.
          <fpage>1604</fpage>
          -
          <lpage>1613</lpage>
          (
          <year>2014</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Lee</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johnson</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheng</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>An Interactive Machine Learning Framework</article-title>
          . ArXiv, abs/1610.05463 (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plass</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kickmeier-Rust</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Holzinger</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Crisan</surname>
            ,
            <given-names>G. C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pintea</surname>
            ,
            <given-names>C. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Palade</surname>
            ,
            <given-names>V.:</given-names>
          </string-name>
          <article-title>Interactive machine learning: Experimental evidence for the human in the algorithmic loop</article-title>
          .
          <source>Appl. Intell</source>
          .
          <volume>49</volume>
          (
          <issue>7</issue>
          ), pp.
          <fpage>2401</fpage>
          -
          <lpage>2414</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Sacha</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kraus</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keim</surname>
            ,
            <given-names>D. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          <article-title>Vis4ml: An ontology for visual analytics assisted machine learning</article-title>
          .
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>25</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>385</fpage>
          -
          <lpage>395</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Andrienko</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lammarsch</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrienko</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fuchs</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Keim</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Miksch</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rind</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Viewing visual analytics as model building</article-title>
          . In: Computer Graphics Forum,
          <volume>37</volume>
          (
          <issue>6</issue>
          ), pp.
          <fpage>275</fpage>
          -
          <lpage>299</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stasko</surname>
            ,
            <given-names>J. T.</given-names>
          </string-name>
          :
          <article-title>Mental models, visual reasoning and interaction in information visualization: A top-down perspective</article-title>
          .
          <source>In: IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>16</volume>
          ,
          <issue>6</issue>
          , pp.
          <fpage>999</fpage>
          -
          <lpage>1008</lpage>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Keim</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Andrienko</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fekete</surname>
            ,
            <given-names>J.-D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Görg</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kohlhammer</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melançon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Visual analytics: Definition, process, and challenges</article-title>
          . In:
          <source>Information Visualization: Human-Centered Issues and Perspectives</source>
          ,
          <string-name>
            <surname>Kerren</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stasko</surname>
            ,
            <given-names>J. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fekete</surname>
            ,
            <given-names>J.-D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>North</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          (Eds.). Springer, Berlin, pp.
          <fpage>154</fpage>
          -
          <lpage>175</lpage>
          (
          <year>2008</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Yi</surname>
            ,
            <given-names>J. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kang</surname>
            ,
            <given-names>Y.-A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stasko</surname>
            ,
            <given-names>J. T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jacko</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          :
          <article-title>Toward a deeper understanding of the role of interaction in information visualization</article-title>
          .
          <source>In: IEEE Trans. on Visualization and Computer Graphics</source>
          ,
          <volume>13</volume>
          (
          <issue>6</issue>
          ), pp.
          <fpage>1224</fpage>
          -
          <lpage>1231</lpage>
          (
          <year>2007</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Manziuk</surname>
            ,
            <given-names>E. A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barmak</surname>
            ,
            <given-names>O. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krak</surname>
            ,
            <given-names>Iu. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kasianiuk</surname>
            ,
            <given-names>V. S.</given-names>
          </string-name>
          :
          <article-title>Definition of information core for documents classification</article-title>
          .
          <source>Journal of Automation and Information Sciences</source>
          ,
          <volume>50</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>25</fpage>
          -
          <lpage>34</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Ren</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Suganthan</surname>
            ,
            <given-names>P. N.</given-names>
          </string-name>
          :
          <article-title>Ensemble classification and regression-recent developments, applications and future directions</article-title>
          .
          <source>In: IEEE Computational intelligence magazine</source>
          ,
          <volume>11</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>41</fpage>
          -
          <lpage>53</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Kryvonos</surname>
            ,
            <given-names>I. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krak</surname>
            ,
            <given-names>I. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barmak</surname>
            ,
            <given-names>O. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kulias</surname>
            ,
            <given-names>A. I.</given-names>
          </string-name>
          :
          <article-title>Methods to Create Systems for the Analysis and Synthesis of Communicative Information</article-title>
          .
          <source>Cybernetics and Systems Analysis</source>
          ,
          <volume>53</volume>
          (
          <issue>6</issue>
          ), pp.
          <fpage>847</fpage>
          -
          <lpage>856</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Krak</surname>
            ,
            <given-names>I. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kryvonos</surname>
            ,
            <given-names>I. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barmak</surname>
            ,
            <given-names>O. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ternov</surname>
            ,
            <given-names>A. S.</given-names>
          </string-name>
          :
          <article-title>An Approach to the Determination of Efficient Features and Synthesis of an Optimal Band-Separating Classifier of Dactyl Elements of Sign Language</article-title>
          .
          <source>In: Cybernetics and Systems Analysis</source>
          ,
          <volume>52</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>173</fpage>
          -
          <lpage>180</lpage>
          (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Barmak</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Manziuk</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Krak</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Using piecewise hyper linear classification in multidimensional feature space for text content</article-title>
          .
          <source>In: IEEE 14th International Conference on Computer Sciences and Information Technologies (CSIT)</source>
          , Vol.
          <volume>2</volume>
          , pp.
          <fpage>119</fpage>
          -
          <lpage>123</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>de Leeuw</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mair</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Multidimensional scaling using majorization: SMACOF in R</article-title>
          .
          <source>In: Journal of Statistical Software</source>
          ,
          <volume>31</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>1</fpage>
          -
          <lpage>30</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Guttman</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>A general nonmetric technique for finding the smallest coordinate space for a configuration of points</article-title>
          .
          <source>In: Psychometrika</source>
          ,
          <volume>33</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>469</fpage>
          -
          <lpage>506</lpage>
          (
          <year>1968</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Martinez</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Du</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>A model of perception of facial expressions of emotion by human: research overview and perspectives</article-title>
          .
          <source>In: Journal of Machine Learning Research</source>
          ,
          <volume>13</volume>
          , pp.
          <fpage>1589</fpage>
          -
          <lpage>1608</lpage>
          (
          <year>2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Ekman</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Friesen</surname>
            ,
            <given-names>W. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hager</surname>
            ,
            <given-names>J. C.</given-names>
          </string-name>
          :
          <article-title>The Facial Action Coding System</article-title>
          .
          <source>Research Nexus eBook</source>
          , Salt Lake City, UT (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Duchenne de Boulogne</surname>
            ,
            <given-names>G.-B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cuthbertson</surname>
            ,
            <given-names>A. R.</given-names>
          </string-name>
          :
          <source>The Mechanism of Human Facial Expression</source>
          . Cambridge UK; New York; etc.: Cambridge University Press,
          <volume>227</volume>
          (
          <year>1990</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>S. Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jain</surname>
            ,
            <given-names>A. K.</given-names>
          </string-name>
          :
          <source>Handbook of Face Recognition</source>
          , New York, Springer Science &amp; Business Media,
          <volume>395</volume>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>24. Intel RealSense Camera https://www.intelrealsense.com/</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Wingenbach</surname>
            ,
            <given-names>T. S. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ashwin</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brosnan</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Correction: Validation of the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions</article-title>
          .
          <source>PLOS ONE</source>
          <volume>11</volume>
          (
          <issue>12</issue>
          ): e0168891 (
          <year>2016</year>
          ). https://doi.org/10.1371/journal.pone.0168891
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Ellis</surname>
            ,
            <given-names>R. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhu</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Koenig</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Thayer</surname>
            ,
            <given-names>J. F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          :
          <article-title>A careful look at ECG sampling frequency and R-peak interpolation on short-term measures of heart rate variability</article-title>
          .
          <source>In: Physiol. Meas. 36</source>
          , pp.
          <fpage>1827</fpage>
          -
          <lpage>1852</lpage>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Goldberger</surname>
            ,
            <given-names>A. L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Amaral</surname>
            ,
            <given-names>L. A. N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Glass</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hausdorff</surname>
            ,
            <given-names>J. M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ivanov</surname>
            ,
            <given-names>P. Ch.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mark</surname>
            ,
            <given-names>R.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mietus</surname>
            ,
            <given-names>J. E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Moody</surname>
            ,
            <given-names>G. B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            ,
            <given-names>C.-K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stanley</surname>
            ,
            <given-names>H. E.</given-names>
          </string-name>
          :
          <article-title>PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals</article-title>
          .
          <source>Circulation</source>
          .
          <volume>101</volume>
          (
          <issue>23</issue>
          ):
          <fpage>e215</fpage>
          -
          <lpage>e220</lpage>
          (
          <year>2000</year>
          ) https://physionet.org/content/mitdb/1.0.0/
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Pasichnyk</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melnyk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasichnyk</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Turchenko</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Method of adaptive control structure learning based on model of test's complexity</article-title>
          .
          <source>In: Proceedings of the 6th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications</source>
          , IDAACS'
          <year>2011</year>
          , Vol.
          <volume>2</volume>
          , pp.
          <fpage>692</fpage>
          -
          <lpage>695</lpage>
          (
          <year>2011</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Krak</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Barmak</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mazurets</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>The practice implementation of the information technology for automated definition of semantic terms sets in the content of educational materials</article-title>
          .
          <source>In: CEUR Workshop Proceedings 2139</source>
          , pp.
          <fpage>245</fpage>
          -
          <lpage>254</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          30. ECG signal with QRS complexes. https://www.wikiwand.com/en/QRS_complex (accessed March
          <volume>20</volume>
          ,
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>