<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>On the Problem of Development of Methods and Algorithms Based on the Object-Oriented Logic Programming for Intelligent Video Monitoring of Laboratory Rats</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>A A Morozov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>O S Sushkova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kotel'nikov Institute of Radio Engineering and Electronics of RAS</institution>
          ,
          <addr-line>Mokhovaya str. 11-7</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Moscow, Russia</institution>
          ,
          <addr-line>125009</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>29</fpage>
      <lpage>37</lpage>
      <abstract>
        <p>The problem of video monitoring of laboratory rats by means of object-oriented logic programming is considered. The main task of the video monitoring is the analysis of the behavior of the animals in cognitive testing. An essential feature of the video records is that the experiments are conducted in the same cage where the animal lives, that is, the background of the cage is sawdust. The color of the animals is about the same as the color of the sawdust; thus the detection of the animals is not a simple task. An additional difficulty is that the videos were recorded simultaneously with electroencephalograms (EEG) of the animals; thus the head of the rat is connected with an EEG cable that moves and causes false detections by the recognition algorithms. In the paper, the development of low-level algorithms for video analysis as well as logical methods for the analysis of the animal behavior is discussed. The methods and algorithms are implemented in the Actor Prolog object-oriented logic language.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent years, automation of neurophysiological experiments on laboratory animals has been
recognized as an important direction in computer vision and intelligent video monitoring [
        <xref ref-type="bibr" rid="ref1 ref2">1,
2</xref>
        ]. The methods of computer vision enable automation of the routine analysis of animal behavior
and, more importantly, make the results of the analysis independent of the human factor.
Usually, a neurophysiological research is based on the relative change of the quantity of given
events (for instance, actions performed by the laboratory animal) under given experimental
conditions, but not on the absolute quantity of these events. Moreover, it is often not
possible to estimate the exact quantity of the events because the behavior of the animal is
not clearly expressed; in this case, the recognition of the events depends on the experience
and subjective opinion of the experimenter. The state of the experimenter including his
fatigue and his current ideas about the importance of various experimental events can influence
the results of the recognition too. Therefore it is important to provide the constancy and
uniformity of the recognition of required elements of animal behavior. This uniformity can be
provided by the methods of automatic video analysis (intelligent video monitoring).
      </p>
      <p>
        Another important problem that can be solved using intelligent video monitoring is
standardization of neurophysiological experiments and providing reproducibility [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] of the
experimental results by independent researchers in various laboratories.
      </p>
      <p>
        There are free and commercial software available for automation of video processing and
laboratory animal behavior analysis in biomedical experiments [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7 ref8">4–8</xref>
        ]. Nevertheless, new
problems constantly arise in neurophysiological experiments that require video processing
beyond the capabilities of the existing software. It is expedient to use high-level programming
languages specialized for intelligent video monitoring for solving these problems. In this paper, a
video monitoring problem of such kind and video analysis methods used for solving this problem
are considered.
      </p>
      <p>The videos considered below are produced in neurophysiological experiments on the study
of a convulsive electrical activity of the brain cortex. In the experiments, videos of a behavior
of laboratory rats were recorded simultaneously with EEG signals. A comparison of EEG data
with the behavior of the animals is necessary because sharp motions of the animals can result in
EEG artifacts that are very similar to the epileptic discharges. Thus, the first task of the video
monitoring is recognition of the sharp motions of the animals and using this information for
proper interpretation of the results of the experiments. The second task of the video monitoring
is the analysis of the behavior of animals in cognitive testing (in the tests on social recognition
and recognition of a new object). An essential feature of the video records is that the
experiments are conducted in the same cage where the animal lives, that is, the background of
the cage is sawdust. The color of the animals is about the same as the color of the sawdust;
thus the detection of the animals is not a simple task.</p>
      <p>
        Initial experiments with the video analysis have demonstrated that the methods of object
detection implemented in commercial software based on the analysis of brightness, analysis of
color, and background subtraction cannot provide stable recognition of the laboratory rats on
the sawdust background. Thus, we have applied more sophisticated texture-oriented methods
implemented in the Actor Prolog logic programming system for the recognition of the animals.
The texture-oriented methods provide stable detection of the animals at the expense of a
decrease in the spatial resolution of the detection. In practice, this implies a loss of information about
the coordinates of the contour of the animal and impossibility of using modern model-based
tracking methods for recognition of postures and actions of the animals. We have developed
logical methods for analysis of the behavior of laboratory rats based on the information about
the coordinates and velocity of objects that can be obtained using the low-level video processing
means implemented in the Actor Prolog system [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref9">9–16</xref>
        ]:
(i) The coordinates and velocity of the centroid of the animal are computed using the
texture-based methods of image processing.
(ii) The coordinates of the EEG cap that connects the animal with the EEG cable are computed
using the color-based methods.
(iii) The exact coordinates of auxiliary objects placed in the cage are computed using the
color-based methods too.
      </p>
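As an illustration of item (i), here is a minimal sketch (not the authors' implementation) of computing a blob centroid and its velocity from a binary foreground mask; the frame rate and the centimetres-per-pixel scale are assumed calibration parameters:

```python
import numpy as np

def blob_centroid(mask: np.ndarray) -> tuple:
    """Centroid (x, y) of the foreground pixels of a binary mask."""
    ys, xs = np.nonzero(mask)
    return float(xs.mean()), float(ys.mean())

def centroid_velocity(prev_xy, curr_xy, fps: float, cm_per_px: float) -> float:
    """Speed of the centroid in cm/s between two consecutive frames."""
    dx = curr_xy[0] - prev_xy[0]
    dy = curr_xy[1] - prev_xy[1]
    return float(np.hypot(dx, dy)) * cm_per_px * fps
```

In the Actor Prolog system these quantities are produced by the built-in low-level video processing classes; the sketch only shows the geometry involved.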
      <p>We have developed a logical definition (a set of logical rules) of the exploratory behavior of the
laboratory rats that provides an acceptable quality of recognition of the required behavior in
the cognitive tests.</p>
      <p>
        The logical approach to the definition and analysis of laboratory animal behavior is described
in Section 2 of the paper. The experimental conditions and peculiarities of the video data
to be processed are described in Section 3. A description of an experimental program for the
video analysis implemented in the Actor Prolog logic language and the results of the experiment
are discussed in Section 4.
      </p>
      <p>
        2. The Logic Programming Approach to the Intelligent Video Monitoring
      </p>
      <p>
The idea of using mathematical logic and logic programming for intelligent video surveillance was
developed in research projects W4 [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ], VidMAP [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], VERSA [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ], LTAR [20], RoboSherlock [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ],
Actor Prolog [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ], etc. The idea is that one applies logical formulae/rules for the description and
recognition of objects, situations, and events. One can explain the advantage of the logical
approach to the intelligent video monitoring in the following way. The activity and behavior
notions [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] differ in that the behavior of an object is the activity of the object related to the
context information about the place, time, object attributes, etc. The information about the
context allows deciding, for instance, whether the behavior of the object is abnormal and/or
dangerous. Thus the analysis of the behavior is a more complicated problem than the analysis
of the activity. It is necessary to describe and analyze the information about the context of the
activity, and mathematical logic is perhaps the best instrument that can be used for this
purpose.
      </p>
      <p>
        The Actor Prolog language is an object-oriented logic language, that is, it combines
expressiveness of the logical and object-oriented approaches to the programming [
        <xref ref-type="bibr" rid="ref23 ref24 ref25 ref26">23–26</xref>
        ]. This
combination has widened the application area of logic programming. In particular, the
object-oriented features make it possible to solve the problem of storing and
processing big arrays of binary data (such as audio/video data) in logic languages. The
problem is that plain logic languages do not implement data arrays directly, but use lists and
structures for storing data because these data structures correspond to the Skolem functions in
the first-order Predicate Calculus [
        <xref ref-type="bibr" rid="ref27">27</xref>
        ]. In the object-oriented logic languages, the arrays of data
can be encapsulated in the instances of some specialized built-in classes. This enables fast and
effective processing of the big data arrays in the logic languages.
      </p>
      <p>
        The method of object-oriented logic programming of intelligent video surveillance was
developed for the analysis of people behavior and recognition of abnormal activity [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref9">9–16</xref>
        ]. The
idea of the method is as follows:
(i) The stages of low-level and high-level processing of the video stream are separated.
(ii) The stage of the low-level video processing includes background subtraction, extraction of
blobs, computing trajectories of the movements of the blobs, etc. The low-level processing
is performed directly upon the video data arrays using special built-in classes of the logic
language. The built-in classes are implemented in a procedural programming language to
increase the speed of the data processing.
(iii) The stage of the high-level video processing includes analysis of trajectories/graphs of the
blob movements. The algorithms of the high-level analysis are implemented in the Actor
Prolog logic language in a form of logical rules. The graphs of the blob movements are
described using the terms of the logic language: structures, lists, and underdetermined
sets [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ].
(iv) The logic programs written in Actor Prolog are translated to Java [
        <xref ref-type="bibr" rid="ref14 ref28">14, 28</xref>
        ]. The Java
language is used as an intermediate language in the translation scheme to provide high
performance and stable work of the intelligent video monitoring software.
      </p>
      <p>New built-in classes of the Actor Prolog language were developed to use the language for the
intelligent video monitoring of laboratory animals. These new built-in classes implement
new means of low-level video processing that are necessary to analyze simultaneously blobs
of different kinds extracted using different methods. The standard methods of blob extraction
based on background subtraction are not applicable in the case of the sawdust background
because the sawdust surface changes permanently as an effect of the movements of the
rat, which causes multiple false results during the background subtraction.
It turned out that the methods of blob extraction based on the analysis of the brightness and hue of
objects are of little use for the recognition of the rats in the cage with the sawdust background too.
The reason for this is that the color of the rats is about the same as the color of the sawdust.
Theoretically speaking, computer vision methods can differentiate the hues of the rats and the
sawdust, but, in practice, the illumination inside the cage is often non-uniform and the color of the
rats and the sawdust background is influenced by the shadows and reflections of light from the
colored objects and plastic walls of the cage. As a result, the standard methods of
blob extraction based on brightness and color also make mistakes.</p>
    </sec>
    <sec id="sec-2">
      <title>3. The Experimental Conditions</title>
      <p>Let us consider a problem of intelligent video monitoring of laboratory rats by the example of a
neurophysiological experiment on the study of the cognitive abilities of the animal. By the terms
of the experiment, one puts new objects into the cage with the animal. The animal explores
the new objects and the experimenter estimates the total time spent by the animal to explore
the objects. After a time, the experiment is repeated with the same objects. If the time
spent by the animal to explore the objects is less than that in the first test, the experimenter can
conclude that the animal remembers the objects. If the time is about the same, it means that
the animal has forgotten the objects.</p>
      <p>The exploratory activity of the rat usually manifests itself in that the rat approaches the object and
sniffs around it (see Figure 1). It is difficult to describe the exploratory activity of the rat
in a formal way; thus a simplified approach to the detection of the exploratory activity is often used
in neurophysiological experiments: the distance between the rat and the object is estimated
and the cumulative time when the rat is situated close enough to the object is calculated.
Sometimes one estimates just the number of approaches of the rat to the object. This simplified
method leads to mistakes and is not applicable to the experiment under consideration because
the cage is small and the rat is situated not far from the object almost all the time. Meanwhile, the
rat can sniff around the object or ignore it; it can just lie or dig the sawdust near the object.</p>
      <p>An additional problem is that we cannot detect the contour of the animal against the background
of sawdust. Thus we cannot recognize in a reliable way the face and forelegs of the animal.
Instead, we estimate the following attributes of the experimental setting with the help of
the low-level image processing procedures:
(i) The coordinates and the velocity of the centroid of the blob related to the body of the
rat are estimated using the texture-oriented method of image analysis that is sensitive to the
smoothness of object surfaces. The body of the rat is well visible against the sawdust background
because the rat hair has a smoother surface in comparison with the sawdust.
(ii) The coordinates of the EEG cap are detected using the color in the HSB space.
(iii) The exact coordinates of the objects placed in the cage are estimated using the color information
too. The only difference is that the coordinates of the object are estimated permanently
in all the frames of the video and then averaged to avoid mistakes when the rat covers the
object.</p>
      <p>Let B be the centroid of the blob corresponding to the body of the rat (see Figure 2). Let
C be the centroid of the blob corresponding to the EEG cap of the rat. Let O be the object that is
nearest to the C point. Let A be the point on the contour of the object O that is nearest to the
C point. Let D be the point on the contour of the same object O that is nearest to the B point.
The following combinations of these attributes have been recognized as useful for the detection
of the exploratory activity of the rats during the experiments:
(i) If the distance between the A and C points is less than 1 cm, the rat probably sniffs the
object. If the distance between the A and C points is more than 4 cm, the rat probably
does not explore the object. If the distance lies in the interval 1–4 cm, additional analysis
is necessary to determine whether the rat explores the object or not. The analysis is
complicated by the fact that the rat can lie down sideways while sniffing the object;
in this case, the A − C distance can be rather large.
(ii) If the angle between the B − C and B − A lines is more than 50 degrees, the rat probably
does not investigate the object. This heuristic rule reflects the fact that the rat turns its
face to the object during the sniffing.
(iii) If the velocity of the B point is more than 3 cm per second, the rat probably just walks
around the object, but does not explore it.
(iv) If the ratio between the D − B and A − C distances is less than 1.3, the rat probably does
not explore the object. This rule reflects the fact that the rat usually stands aside from the
object and pulls its face toward the object during the sniffing.</p>
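The four heuristic rules can be summarized as a crisp (non-fuzzy) decision procedure. The sketch below is an illustrative Python rendering under the assumption that rules (ii)–(iv) are only consulted in the uncertain 1–4 cm zone; the paper's actual program uses fuzzy versions of these thresholds:

```python
def sniffing_is_detected_crisp(ac_cm, db_cm, angle_deg, v_cm_s):
    """Crisp rendering of rules (i)-(iv): distances in cm,
    angle in degrees, velocity in cm/s."""
    if ac_cm < 1.0:          # rule (i): nose very close -> sniffing
        return True
    if ac_cm > 4.0:          # rule (i): too far -> not exploring
        return False
    # uncertain zone 1-4 cm: rules (ii)-(iv) must all hold
    if angle_deg > 50.0:     # rule (ii): face not turned to the object
        return False
    if v_cm_s > 3.0:         # rule (iii): the rat is just walking around
        return False
    if db_cm / ac_cm < 1.3:  # rule (iv): body not standing aside
        return False
    return True
```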
      <p>These heuristic rules were implemented in a logic program for detection of the exploratory
activity of the rats and tested in neurophysiological experiments.</p>
    </sec>
    <sec id="sec-3">
      <title>4. The Logic Program for the Video Analysis</title>
      <p>The Actor Prolog language has no built-in means of fuzzy logical inference; however, one can
easily implement a kind of fuzzy logical reasoning using the standard top-down resolution and
standard built-in arithmetical operations. In particular, one can define the heuristic rules of rat
behavior analysis described above in the following way:</p>
      <p>IV International Conference on "Information Technology and Nanotechnology" (ITNT-2018)</p>
      <preformat>
PREDICATES:
determ:
    sniffing_is_detected(REAL,REAL,REAL,REAL)
imperative:
    fuzzy_metrics(REAL,REAL,REAL) = REAL
- (i,i,i,i);
- (i,i,i);
      </preformat>
      <p>The sniffing_is_detected predicate succeeds if an exploratory behavior of the rat is detected.
The predicate has four input arguments: the A − C distance; the D − B distance; the angle between
the B − C and B − A lines; and the velocity of the B point.</p>
      <preformat>
CLAUSES:
sniffing_is_detected(AC,_,_,_):-
    AC &lt; 0.01,!.
sniffing_is_detected(AC,_,_,_):-
    AC &gt; 0.04,!,
    fail.
sniffing_is_detected(AC,DB,A,V):-
    M1== ?fuzzy_metrics(DB/AC,1.3,0.1),
    M2== 1-?fuzzy_metrics(A,50.0,10.0),
    M3== 1-?fuzzy_metrics(V,0.03,0.01),
    P== M1*M2*M3,
    P &gt; ?power(0.5,3).
fuzzy_metrics(X,T,H) = 1.0 :-
    X &gt;= T + H,!.
fuzzy_metrics(X,T,H) = 0.0 :-
    X &lt;= T - H,!.
fuzzy_metrics(X,T,H) = V :-
    V== (X - T + H) * (1 / (2*H)).
      </preformat>
      <p>
        The fuzzy_metrics function is an auxiliary one. It is used for the definition of fuzzy
thresholds. The function has three input arguments: the value to be checked; the threshold; and
the width of the interval of uncertainty [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ].
      </p>
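For readers unfamiliar with Actor Prolog syntax, the same logic can be sketched in Python (assuming, as in the clauses above, SI units: distances in metres, the angle in degrees, the velocity in m/s). fuzzy_metrics is a piecewise-linear ramp from 0 to 1 across the interval of uncertainty:

```python
def fuzzy_metrics(x, t, h):
    """Piecewise-linear ramp: 0 below t-h, 1 above t+h, linear in between."""
    if x >= t + h:
        return 1.0
    if x <= t - h:
        return 0.0
    return (x - t + h) / (2.0 * h)

def sniffing_is_detected(ac, db, angle, v):
    """Fuzzy detector mirroring the Actor Prolog clauses:
    ac and db in metres, angle in degrees, v in m/s."""
    if ac < 0.01:            # closer than 1 cm: sniffing
        return True
    if ac > 0.04:            # farther than 4 cm: not exploring
        return False
    m1 = fuzzy_metrics(db / ac, 1.3, 0.1)        # rule (iv)
    m2 = 1.0 - fuzzy_metrics(angle, 50.0, 10.0)  # rule (ii)
    m3 = 1.0 - fuzzy_metrics(v, 0.03, 0.01)      # rule (iii)
    return m1 * m2 * m3 > 0.5 ** 3
```

The product of the three membership values is compared against 0.5 cubed, so the detector fires only when every rule is satisfied at least "half-way" on average.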
      <p>In Figure 3, the user interface of the logic program intended for the analysis of the rat behavior
is shown. Control elements of the dialog window of the program allow selection of the video files
and examination of the video frames at various rates in the forward and backward directions.
The information on the total distance of movement, the average velocity, and the cumulative time
expended by the rat for the investigation of new objects in the cage is shown at the top of the dialog
window. In the graphics window, the program demonstrates the frames of the video, detected
blobs, and auxiliary information related to the behavior analysis. The blob related to the body
of the rat is indicated by the orange color. The blob related to the EEG cap is indicated by
the cyan color. The green and blue blobs correspond to the new objects placed in the cage in
the course of the experiment. The logical rules defined above are used for the recognition of the
exploratory activity of the rat.</p>
      <p>The logic program was tested on five r ats. T he r esults o f t he t ests a re g iven i n Table 1: the
total distance passed by the rat during the test; the average velocity of the rat; and the time of
the exploratory activity of the rat (in percents of the total time of the test).</p>
      <p>All five videos were marked manually to estimate the quality of the algorithm of detection
of the exploratory activity.</p>
      <p>
        The results of the automatic recognition were compared with the results of the manual
marking to compute the sensitivity and specificity of the detection (see Table 2). The results
of the application of standard algorithms SVM [
        <xref ref-type="bibr" rid="ref29">29</xref>
        ] and ANFIS [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ] are given in the table
for reference.
      </p>
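The sensitivity and specificity figures can be reproduced from frame-level comparisons. The following is an illustrative sketch (the paper does not specify the exact granularity of the comparison) that assumes the automatic detections and the manual marks are aligned boolean sequences:

```python
def sensitivity_specificity(predicted, manual):
    """Sensitivity and specificity of a detector against manual
    marking; both arguments are aligned sequences of booleans."""
    tp = sum(p and m for p, m in zip(predicted, manual))
    tn = sum((not p) and (not m) for p, m in zip(predicted, manual))
    fn = sum((not p) and m for p, m in zip(predicted, manual))
    fp = sum(p and (not m) for p, m in zip(predicted, manual))
    return tp / (tp + fn), tn / (tn + fp)
```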
      <p>The tests demonstrated that the logic program ensures about the same sensitivity and
specificity (≈ 80%) as the SVM and ANFIS algorithms do on the basis of the same data.
This quality of detection is comparable with the quality of manual detection and is sufficient
for conducting the neurophysiological experiment. Note that an advantage of the logic
programming approach to the detection is that it does not require a preliminary training of
the program and, more importantly, the logical rules are understandable to the experimenter
and can be manually fixed and/or improved at any time. The values of the attributes used in
the rules (the thresholds of angles and distances, etc.) can be assigned manually or computed
(automatically or semi-automatically) on the basis of statistical analysis of marked videos.</p>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusions</title>
      <p>
        A method of intelligent video monitoring of laboratory animals based on object-oriented
logic programming has been developed. The method is intended for intelligent video monitoring of
laboratory rats in non-standard experimental conditions when one cannot apply the existing software
for the automation of neurophysiological experiments. In particular, this method is applicable to the
analysis of the behavior of rats in cages with a sawdust background when the contrast of the
images is low and the illumination is non-uniform. In the framework of the method, heuristic
rules and fuzzy definitions are used for the description and recognition of the animal behavior. The
method is implemented on the basis of the Actor Prolog [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] object-oriented logic language.
      </p>
      <p>
        Acknowledgments. The authors are grateful to Natalia V. Gulyaeva, Ilya G. Komoltsev, Anna O. Manolova, Margarita R.
Novikova, and Irina P. Levshina (IHNA and NPh RAS) for the experimental video data, to Yury V.
Obukhov (IRE RAS) for the problem statement, and to Alexander F. Polupanov (IRE RAS) for the help
in the research. We thank the anonymous reviewers for their attention and useful remarks on the paper.
This research is funded by the Russian Science Foundation (project No. 16-11-10258).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Robinson</surname>
            <given-names>L</given-names>
          </string-name>
          and
          <string-name>
            <surname>Riedel</surname>
            <given-names>G</given-names>
          </string-name>
          <source>2014 Journal of neuroscience methods</source>
          <volume>234</volume>
          <fpage>13</fpage>
          -
          <lpage>25</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Tscharke</surname>
            <given-names>M</given-names>
          </string-name>
          and
          <string-name>
            <surname>Banhazi</surname>
            <given-names>T M</given-names>
          </string-name>
          <year>2016</year>
          <source>Journal of Agricultural Informatics</source>
          <volume>7</volume>
          <fpage>23</fpage>
          -
          <lpage>42</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Spruijt</surname>
            <given-names>B M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peters</surname>
            <given-names>S M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Heer</surname>
            <given-names>R C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pothuizen</surname>
            <given-names>H H</given-names>
          </string-name>
          and
          <string-name>
            <surname>van der Harst</surname>
            <given-names>J E</given-names>
          </string-name>
          <year>2014</year>
          <source>Journal of neuroscience methods</source>
          <volume>234</volume>
          <fpage>2</fpage>
          -
          <lpage>12</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>van Dam</surname>
            <given-names>E A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>van der Harst</surname>
            <given-names>J E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>ter Braak</surname>
            <given-names>C J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tegelenbosch</surname>
            <given-names>R A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spruijt</surname>
            <given-names>B M</given-names>
          </string-name>
          and
          <string-name>
            <surname>Noldus</surname>
            <given-names>L P</given-names>
          </string-name>
          <year>2013</year>
          <source>Journal of neuroscience methods</source>
          <volume>218</volume>
          <fpage>214</fpage>
          -
          <lpage>224</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Noldus</surname>
            <given-names>L P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Spink</surname>
            <given-names>A J</given-names>
          </string-name>
          and
          <string-name>
            <surname>Tegelenbosch</surname>
            <given-names>R A</given-names>
          </string-name>
          2001
          <source>Behavior Research Methods</source>
          <volume>33</volume>
          <fpage>398</fpage>
          -
          <lpage>414</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Ohayon</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Avni</surname>
            <given-names>O</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Taylor</surname>
            <given-names>A L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Perona</surname>
            <given-names>P</given-names>
          </string-name>
          and
          <string-name>
            <surname>Egnor</surname>
            <given-names>S R</given-names>
          </string-name>
          2013
          <source>Journal of neuroscience methods</source>
          <volume>219</volume>
          <fpage>10</fpage>
          -
          <lpage>19</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Weissbrod</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shapiro</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vasserman</surname>
            <given-names>G</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Edry</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dayan</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yitzhaky</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hertzberg</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Feinerman</surname>
            <given-names>O</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kimchi</surname>
            <given-names>T</given-names>
          </string-name>
          <year>2013</year>
          <source>Nature Communications</source>
          <volume>4</volume>
          <fpage>2018</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Giancardo</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sona</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Huang</surname>
            <given-names>H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sannino</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Managò</surname>
            <given-names>F</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Scheggia</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Papaleo</surname>
            <given-names>F</given-names>
          </string-name>
          and
          <string-name>
            <surname>Murino</surname>
            <given-names>V</given-names>
          </string-name>
          <year>2013</year>
          <source>PloS ONE</source>
          <volume>8</volume>
          <fpage>e74557</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sushkova</surname>
            <given-names>O S</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Real-time analysis of video by means of the Actor Prolog language</article-title>
          <source>Computer Optics</source>
          <volume>40</volume>
          (
          <issue>6</issue>
          )
          <fpage>947</fpage>
          -
          <lpage>957</lpage>
          DOI: 10.18287/2412-6179-2016-40-6-947-957
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sushkova</surname>
            <given-names>O S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Object-oriented logic programming of 3D intelligent video surveillance: The problem statement</article-title>
          <source>IEEE 26th International Symposium on Industrial Electronics (ISIE), Edinburgh, United Kingdom</source>
          <fpage>1631</fpage>
          -
          <lpage>1636</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sushkova</surname>
            <given-names>O S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Towards the distributed logic programming of intelligent visual surveillance applications</article-title>
          <source>Proceedings Advances in Soft Computing: 15th Mexican International Conference on Artificial Intelligence</source>
          <volume>2</volume>
          <fpage>42</fpage>
          -
          <lpage>53</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>2015</year>
          <source>Pattern Recognition and Image Analysis</source>
          <volume>25</volume>
          <fpage>481</fpage>
          -
          <lpage>492</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Development of the logic programming approach to the intelligent monitoring of anomalous human behaviour</article-title>
          <source>OGRW (Koblenz: University of Koblenz-Landau)</source>
          <fpage>82</fpage>
          -
          <lpage>85</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          <year>2014</year>
          <article-title>Intelligent visual surveillance logic programming: Implementation issues</article-title>
          <source>CICLOPS-WLPE (Aachener Informatik Berichte)</source>
          <fpage>31</fpage>
          -
          <lpage>45</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vaish</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Antciperov</surname>
            <given-names>V E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lychkov</surname>
            <given-names>I I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alfimtsev</surname>
            <given-names>A N</given-names>
          </string-name>
          and
          <string-name>
            <surname>Deviatkov</surname>
            <given-names>V V</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Development of concurrent object-oriented logic programming platform for the intelligent monitoring of anomalous human activities</article-title>
          <source>BIOSTEC</source>
          <volume>511</volume>
          <fpage>82</fpage>
          -
          <lpage>97</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Sushkova</surname>
            <given-names>O S</given-names>
          </string-name>
          <year>2018</year>
          <article-title>The intelligent visual surveillance logic programming</article-title>
          (Access mode: http://www.fullvision.ru)
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Haritaoglu</surname>
            <given-names>I</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harwood</surname>
            <given-names>D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Davis</surname>
            <given-names>L S</given-names>
          </string-name>
          <year>1998</year>
          <article-title>W4: Who? When? Where? What? A real time system for detecting and tracking people</article-title>
          <source>FG (Japan)</source>
          <fpage>222</fpage>
          -
          <lpage>227</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Shet</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harwood</surname>
            <given-names>D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Davis</surname>
            <given-names>L</given-names>
          </string-name>
          <year>2005</year>
          <article-title>VidMAP: Video monitoring of activity with Prolog</article-title>
          <source>IEEE AVSS</source>
          <fpage>224</fpage>
          -
          <lpage>229</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>O'Hara</surname>
            <given-names>S</given-names>
          </string-name>
          <year>2008</year>
          <article-title>VERSA - video event recognition for surveillance applications</article-title>
          (M.S. thesis, University of Nebraska at Omaha)
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Artikis</surname>
            <given-names>A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sergot</surname>
            <given-names>M</given-names>
          </string-name>
          and
          <string-name>
            <surname>Paliouras</surname>
            <given-names>G</given-names>
          </string-name>
          <year>2010</year>
          <article-title>A logic programming approach to activity recognition</article-title>
          <source>International Workshop on Events in Multimedia 3-8</source>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Worch</surname>
            <given-names>J H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bálint-Benczédi</surname>
            <given-names>F</given-names>
          </string-name>
          and
          <string-name>
            <surname>Beetz</surname>
            <given-names>M</given-names>
          </string-name>
          <year>2016</year>
          <source>KI - Künstliche Intelligenz</source>
          <volume>30</volume>
          <fpage>21</fpage>
          -
          <lpage>27</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Borges</surname>
            <given-names>P V K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Conci</surname>
            <given-names>N</given-names>
          </string-name>
          and
          <string-name>
            <surname>Cavallaro</surname>
            <given-names>A</given-names>
          </string-name>
          <year>2013</year>
          <source>IEEE Transactions on Circuits and Systems for Video Technology</source>
          <volume>23</volume>
          <fpage>1993</fpage>
          -
          <lpage>2008</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>1999</year>
          <article-title>Actor Prolog: an object-oriented language with the classical declarative semantics</article-title>
          <source>IDL (Paris, France)</source>
          <fpage>39</fpage>
          -
          <lpage>53</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>2002</year>
          <article-title>On semantic link between logic, object-oriented, functional, and constraint programming</article-title>
          <source>MultiCPL</source>
          <fpage>43</fpage>
          -
          <lpage>57</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>2007</year>
          <article-title>Operational approach to the modified reasoning, based on the concept of repeated proving and logical actors</article-title>
          <source>CICLOPS</source>
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          [26]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>2003</year>
          <source>Pattern Recognition and Image Analysis</source>
          <volume>13</volume>
          <fpage>640</fpage>
          -
          <lpage>649</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          [27]
          <string-name>
            <surname>Chang</surname>
            <given-names>C L</given-names>
          </string-name>
          and
          <string-name>
            <surname>Lee</surname>
            <given-names>R C T</given-names>
          </string-name>
          <year>1973</year>
          <source>Symbolic logic and mechanical theorem proving</source>
          (New York: Academic Press)
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [28]
          <string-name>
            <surname>Morozov</surname>
            <given-names>A A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sushkova</surname>
            <given-names>O S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Polupanov</surname>
            <given-names>A F</given-names>
          </string-name>
          <year>2015</year>
          <article-title>A translator of Actor Prolog to Java</article-title>
          <source>RuleML DC and Challenge</source>
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          [29]
          <string-name>
            <surname>Cristianini</surname>
            <given-names>N</given-names>
          </string-name>
          and
          <string-name>
            <surname>Shawe-Taylor</surname>
            <given-names>J</given-names>
          </string-name>
          <year>2000</year>
          <source>An Introduction to Support Vector Machines and Other Kernel-based Learning Methods</source>
          (Cambridge: Cambridge University Press)
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          [30]
          <string-name>
            <surname>Jang</surname>
            <given-names>J S R</given-names>
          </string-name>
          <year>1993</year>
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          <volume>23</volume>
          <fpage>665</fpage>
          -
          <lpage>685</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>