<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Video-based human smoking event detection method</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Anna V. Pyataeva</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maria S. Eliseeva</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Reshetnev Siberian State University of Science and Technology</institution>
          ,
          <addr-line>Krasnoyarsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Siberian Federal University</institution>
          ,
          <addr-line>Krasnoyarsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>343</fpage>
      <lpage>352</lpage>
      <abstract>
        <p>The paper proposes a method for detecting human smoking events from visual data. The method uses a three-dimensional convolutional neural network based on ResNet, which makes it possible to work with video using spatio-temporal features. According to the WHO Framework Convention on Tobacco Control [1], there is no safe level of tobacco smoke exposure. Creating a completely smoke-free environment is the only way to protect people from the harmful effects of breathing even second-hand smoke. Human action analysis based on visual processing is significant for many applications, such as intelligent video surveillance and the analysis of employee and customer behavior. Recognizing a person's smoking while driving can significantly increase road safety [2]. Smoking activity can also be recognized from smartwatch sensors with a state-transition model that consists of the mini-gestures hand-to-lip, hand-on-lip, and hand-off-lip [3]. Wu et al. [4] proposed a color-based ratio histogram analysis to extract visual clues from the appearance interactions between a lighted cigarette and its human holder; the techniques of color re-projection and Gaussian Mixture Models enable cigarette segmentation and tracking over the background pixels. Smoke detection in the area around human faces and hands can likewise be applied to recognition of the smoking action [5, 6, 7]. Reliable smoke detection is difficult due to the great variability of shape, color, transparency, turbulence, non-stable motion, boundary roughness, and time-varying flicker effects at the boundaries of smoke, as well as shooting artifacts such as low resolution, blurring, and weather conditions. The key problem of smoking behavior recognition is its irregular appearance: different ways of holding a cigarette, different types of tobacco products, and bad weather and shooting conditions.</p>
      </abstract>
      <kwd-group>
        <kwd>Smoking event detection</kwd>
        <kwd>convolutional neural network</kwd>
        <kwd>spatio-temporal features</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
    </sec>
    <sec id="sec-2">
      <title>2. Smoking event detection method</title>
      <p>In this paper, a spatio-temporal features based smoking activity detection algorithm is proposed, which allows recognizing human smoking activity regardless of the person's appearance, the way of holding a cigarette, the type of cigarette, the distance to the object of interest, and movement patterns.</p>
      <sec id="sec-2-1">
        <title>2.1. Spatio-temporal features of smoking activity</title>
        <p>Smoking activity belongs to a group of atomic actions that can be recognized only if there is a
certain set of spatio-temporal features. Four atomic action groups are considered:
∙ arm position changes. The sequence of actions: the hand rises to the level of the lips,
pause, falls down, pause, rises again;
∙ lighting a cigarette:
∘ tilt of the head;
∘ using a cigarette lighter, which involves a sequence of actions:
— bringing the lighter to the face with one hand;
— the thumb of this hand starts the mechanism (the action can be repeated several times);
— the other hand can prevent the cigarette from fading and block the view to
recognize previous actions (in this case, both hands are at the level of the lips);
— the hands are lowered;
∘ lighting up with matches, which consists of the following actions:
— the cigarette is clamped between the teeth;
— both hands are at chest level or just below the chest;
— one hand performs a small wave (the action can be repeated);
— one hand remains at chest level, while the second changes position, moving higher to
the chin or lips;
— a wave of the hand (to extinguish the match);
— lowering the hands;
∙ lip movement on close-up scenes;
∙ flicking the ash from the cigarette (the action may not be present in the frame), which consists
of the following steps: the withdrawal of the hand with the cigarette down and the
characteristic movement of the hand or its fingers.</p>
        <p>Smoking activity recognition is implemented using a three-dimensional neural network based
on the spatio-temporal features in the entire video data.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Image pre-processing</title>
        <p>Visual information captured during real-time video shooting may include objects with dynamic
behavior, noise from the hardware or transmission lines, as well as artefacts caused by weather
conditions (for example, rain or snow, or poor luminance in the morning or evening). Because of
this, the quality of smoking action recognition significantly degrades. Therefore, scaling and
mean subtraction [8] are used to address this problem. The preprocessing algorithms were implemented
using the computer vision library OpenCV (Open Source Computer Vision Library) [9]. Thus,
the video sequence preprocessing is performed according to the expressions:</p>
        <p>R′ = (R − μR)/σ,  G′ = (G − μG)/σ,  B′ = (B − μB)/σ,  (1)</p>
        <p>where R, G, B are the values of the red, green and blue channels of the image, respectively;
μ = {μR, μG, μB} is the average color intensity for each image channel; σ is a scaling coefficient.
The σ value can be the standard deviation over the training set; however, σ can also be set
manually to scale the input image space to a specific range.</p>
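        <p>As an illustration, expression (1) can be implemented in a few lines of NumPy; the helper name and the example statistics below are assumptions made for this sketch, not values from the paper:</p>
        <preformat>
import numpy as np

def preprocess(frame_bgr, mean_rgb, sigma):
    """Scaling and mean subtraction according to expression (1).

    frame_bgr: H x W x 3 uint8 image as read by OpenCV (BGR channel order).
    mean_rgb:  average intensities (mu_R, mu_G, mu_B) over the training set.
    sigma:     scaling coefficient (e.g. a standard deviation, or a manual value).
    """
    rgb = frame_bgr[:, :, ::-1].astype(np.float32)             # BGR -> RGB
    return (rgb - np.asarray(mean_rgb, dtype=np.float32)) / sigma

# Hypothetical dataset statistics, for illustration only:
# processed = preprocess(frame, mean_rgb=(110.2, 100.6, 95.9), sigma=58.4)
        </preformat>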
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Neural network architecture</title>
        <p>AlexNet [10], VGG [11] and ResNet [9] neural networks are most often used to classify images
and video sequences. The ResNet neural network is fully convolutional, so it is used for
space-time volume extraction, unlike many architectures with fully connected layers, including
AlexNet and VGG-16, which contain several max pooling layers that can degrade the
evaluation of actions. The ResNet network contains only one pooling layer, immediately after the
conv1 layer. The reduced number of pooling layers makes ResNet more suitable for visual
recognition of smoking, since spatial details must be preserved to recognize this process.</p>
        <p>In this work the 34-layer ResNet neural network was used, which shows computational efficiency
in solving classification problems [12]. In order to use ResNet to estimate multi-frame optical
flow, it is necessary to extend this architecture, replacing all k × k two-dimensional convolutional
kernels with three-dimensional k × k × 3 kernels, as described in article [13]. The pooling
layers in the decoder are expanded in a similar way. The neural network transformed in this
way is called ResNetM in this paper; its composition is presented in Table 1.</p>
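        <p>A common way to perform such an extension is to repeat (inflate) each pretrained 2D kernel along the new temporal axis; the sketch below shows one possible form of this transformation as an assumption, not the authors' exact procedure:</p>
        <preformat>
import torch

def inflate_conv2d(weight_2d, time_depth=3):
    """Inflate a 2D kernel of shape (out, in, k, k) into a 3D kernel
    of shape (out, in, time_depth, k, k).

    The 2D weights are repeated along the new temporal axis and divided
    by its depth, so the initial response to a static clip is unchanged.
    """
    weight_3d = weight_2d.unsqueeze(2).repeat(1, 1, time_depth, 1, 1)
    return weight_3d / time_depth
        </preformat>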
        <p>In Table 1 the residual blocks are grouped in square brackets. Batch normalization is used
after each convolutional layer. The main difference between this architecture and ResNet is
the use of 3D kernels and a modified downsampling operation, whereby feature maps in a
convolution layer are combined with several adjacent frames in the previous layer, thereby
capturing motion information.</p>
        <p>The dimensions of the convolutional kernels are 3 × 3 × 3. The network uses 16-frame RGB
clips as inputs. The dimensions of the input clips are 3 × 16 × 112 × 112. Downsampling of
inputs is performed periodically in steps of 2.</p>
        <table-wrap id="table1">
          <label>Table 1</label>
          <caption>
            <p>Composition of the ResNetM neural network</p>
          </caption>
          <table>
            <thead>
              <tr>
                <th>Layer</th>
                <th>Kernel</th>
                <th>Number of filters</th>
              </tr>
            </thead>
            <tbody>
              <tr><td>conv1</td><td>7 × 7 × 7</td><td>64</td></tr>
              <tr><td>conv2_x</td><td>[3 × 3 × 3, 3 × 3 × 3] × 3</td><td>64</td></tr>
              <tr><td>conv3_x</td><td>[3 × 3 × 3, 3 × 3 × 3] × 4</td><td>128</td></tr>
              <tr><td>conv4_x</td><td>[3 × 3 × 3, 3 × 3 × 3] × 6</td><td>256</td></tr>
              <tr><td>conv5_x</td><td>[3 × 3 × 3, 3 × 3 × 3] × 3</td><td>512</td></tr>
            </tbody>
          </table>
        </table-wrap>
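        <p>For concreteness, a minimal PyTorch sketch of one residual block following the Table 1 pattern (two 3 × 3 × 3 convolutions with batch normalization after each, and stride-2 downsampling between stages) is given below; the class name and the stage example are illustrative assumptions:</p>
        <preformat>
import torch
import torch.nn as nn

class BasicBlock3D(nn.Module):
    """Residual block with two 3x3x3 convolutions, as in Table 1."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv3d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm3d(out_ch)        # batch norm after each convolution
        self.conv2 = nn.Conv3d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm3d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        self.down = None
        if stride != 1 or in_ch != out_ch:       # periodic downsampling in steps of 2
            self.down = nn.Sequential(
                nn.Conv3d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm3d(out_ch))

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + identity)

# 34-layer pattern from Table 1: 3, 4, 6, 3 blocks with 64-512 filters, e.g.
# stage3 = nn.Sequential(BasicBlock3D(64, 128, stride=2),
#                        *[BasicBlock3D(128, 128) for _ in range(3)])
        </preformat>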
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Smoking activity detection algorithm</title>
        <p>The proposed method uses a deep learning network for smoking action detection by recognizing
actions that are characteristic of a person in the process of smoking. The block diagram
of the smoking activity detection algorithm is shown in Figure 1.</p>
        <p>Stochastic Gradient Descent (SGD) with momentum is used to train the neural network.
Training samples are randomly generated from the videos in the training set. Time
positions are selected uniformly. Next, 32-frame clips are formed around the selected time positions. If
a video is shorter than 32 frames, it is looped as many times as necessary to reach the required
duration. Then the spatial positions are randomly selected from the four corners or the center.
In addition to the positions, the spatial scales of each sample are also specified for
multiscale cropping. The frame is cropped at the selected spatio-temporal positions. The size of each sample is
3 channels × 32 frames × 112 pixels × 112 pixels, and each sample is flipped horizontally with a 1/2
probability. The average of the dataset is also subtracted from each sample for each color channel.
All created samples retain the same class labels as their original videos. Model training uses
cross entropy as the loss function. The training parameters include a weight decay of 0.001 and a
momentum of 0.9. The learning rate starts at 0.1 and is divided by 10 after saturation of the validation loss.
When fine-tuning is performed, the learning rate is 0.001 and the weight decay is 1e − 5.</p>
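        <p>Under the assumption that these hyperparameters map onto the standard PyTorch options (and with a hypothetical patience value for the plateau scheduler), the training setup could be sketched as:</p>
        <preformat>
import torch
import torch.nn as nn

model = nn.Conv3d(3, 2, kernel_size=3)        # stand-in for the ResNetM network
criterion = nn.CrossEntropyLoss()             # cross entropy loss

optimizer = torch.optim.SGD(model.parameters(),
                            lr=0.1,           # initial learning rate
                            momentum=0.9,     # momentum
                            weight_decay=0.001)

# Divide the learning rate by 10 once the validation loss saturates;
# the patience value here is an assumption for this sketch.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.1, patience=10)

# After each epoch: scheduler.step(validation_loss)
        </preformat>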
        <p>At the first stage, the neural network is initialized and its parameters are set, after which the
video sequence is fed to the input. Initialization of the classes used to label the dataset is
performed: “smoking” and “no smoking”. The duration of a sample is determined,
that is, the number of frames for classification is 32, and the spatial size of a sample is
112 × 112. To create input clips, the sliding window method is used, in which only the oldest
frame in the list is discarded, making room for the newest frame. Each video is then split into
non-overlapping 32-frame clips. This operation occurs in a loop that reads frames from the
video stream and checks whether a frame was captured. If a frame is captured, each clip is cropped
around the center position at the maximum scale, mean subtraction is performed, and the
new frame is added to the queue; otherwise the loop exits. A further check determines whether
the queue is full. At the end of this loop, a blob object is created. A “blob object”, or “blob”, is
a collection of frames with the same spatial dimensions, expressed in width and height, and
the same depth, that is, the number of channels, that must be preprocessed in the same way. The
blob object has the dimensions (3, 32, 112, 112): the number 3 denotes the number
of channels in the input frames, 32 is the total number of frames in the blob, and the following
numbers represent the height and width, respectively.</p>
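        <p>A minimal OpenCV/NumPy sketch of this capture loop is given below; the file name, the simplified center crop (a plain resize) and the queue handling are illustrative assumptions, not the authors' exact code:</p>
        <preformat>
from collections import deque

import cv2
import numpy as np

CLIP_LEN, SIDE = 32, 112
frames = deque(maxlen=CLIP_LEN)               # the oldest frame is discarded automatically

cap = cv2.VideoCapture("input.mp4")           # hypothetical input video
while True:
    grabbed, frame = cap.read()
    if not grabbed:                           # exit the loop when no frame is captured
        break
    frame = cv2.resize(frame, (SIDE, SIDE))   # center crop simplified to a resize
    frames.append(frame)
    if len(frames) == CLIP_LEN:               # process only when the queue is full
        clip = np.stack(frames).astype(np.float32)
        clip -= clip.mean(axis=(0, 1, 2))     # mean subtraction per color channel
        blob = clip.transpose(3, 0, 1, 2)     # (32, 112, 112, 3) -> (3, 32, 112, 112)
cap.release()
        </preformat>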
        <p>Next, in order to extract the spatio-temporal characteristics, each instance is passed through
the 3D convolutional neural network. Smoking is recognized by estimating multi-frame optical flow:
the optical flow is calculated at each point, and then a motion map is formed. Each feature map
of a convolutional layer is associated with several consecutive adjacent frames in the upper
layer. The next step is to estimate the probability of smoking in the clips. The network “scans”
the sequence of thirty-two frames, generates motion paths, analyzes the similarity to a known
smoking pattern, and finds the probability of smoking in each frame; these probabilities are then averaged
over all clips. The class that has the highest score indicates the action in the given video
sequence. If the probability is greater than or equal to 0.5, then smoking in these frames is
recognized.</p>
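        <p>Assuming the network outputs one logit per class and that class index 1 corresponds to “smoking”, the averaging and thresholding step could be sketched as:</p>
        <preformat>
import torch

@torch.no_grad()
def classify_video(model, clips, threshold=0.5):
    """Average per-clip class probabilities over the whole video.

    clips: tensor of shape (num_clips, 3, 32, 112, 112); class index 1
    is assumed to be "smoking".
    """
    probs = torch.softmax(model(clips), dim=1)   # per-clip class probabilities
    mean_probs = probs.mean(dim=0)               # average over all clips
    return "smoking" if mean_probs[1] >= threshold else "no smoking"
        </preformat>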
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiments and results</title>
      <p>For the video-based smoking detection model to work, the following specifications are
required: an NVIDIA graphics card with a minimum of 2 GB of memory and installed CUDA and cuDNN software.
The model uses Anaconda and Python packages including OpenCV, matplotlib, and PyTorch.
Experimental studies were carried out on a laptop with the following characteristics: Intel(R) Core(TM)
i7-6700HQ processor with a 2.60 GHz clock rate, 8 GB RAM, Windows 10 operating system, and an NVIDIA
GeForce GTX 960M graphics processor with 2 GB of dedicated graphics memory. The
modified neural network was trained on 6766 videos from the HMDB51 dataset [14]. The videos
show actions that can be grouped into five groups:
(1) general facial actions: smile, laugh, chew, talk;
(2) facial actions with object manipulation: smoking, eating, drinking;
(3) general body movements: do a cartwheel, applaud, climb, climb stairs, dive, fall to the floor,
put the hands behind the back, do a handstand, jump, pull up, push up, run, sit down, climb down from
something, do somersaults, get up, turn around, walk, wave;
(4) body movements when interacting with an object: combing hair, catching, drawing a sword,
dribbling a ball, playing golf, hitting a ball, picking, pouring, pushing something, riding a
bicycle, riding a horse, shooting a ball, shooting a bow, shooting a gun, throwing a ball;
(5) body movements for human interaction: fencing, hugging, kicking, kissing, punching,
shaking hands, sword fighting.</p>
      <p>Actions of categories (1)–(5) are combined into one class, “no
smoking”, for the experimental research. For the experimental studies, 70 “smoking” videos were used, in which people of different ages,
body types, genders and races, holding cigarettes of different shapes and types in different ways,
were filmed in the process of smoking, together with 6766 “no smoking” videos.
To ensure consistency, at least two observers have reviewed each clip. The algorithm
results are shown in Table 2.</p>
      <p>Tables 3 and 4 show frames of some of the video sequences used and the results of
smoking recognition. The results of the smoking recognition method are marked with the labels
“smoking” and “no smoking”.</p>
      <p>The test video data is supplemented with videos in which the action is visually similar
to smoking; thanks to the spatio-temporal features of the neural network and the
identified pattern of characteristic smoking movements, the method is able to distinguish these actions
from the smoking action. In video 9 a girl eats a lollipop, in video 11 a girl bites a pen, and in video 15 a man
eats ice cream. The training sample was 80% and the test sample was 20% of the total sample. To
evaluate the effectiveness of the human smoking activity detection and recognition algorithms,
the indicators of detection accuracy (TR), false positives (FAR) and false negatives (FRR) were
used. The results of smoking detection for the ResNet neural network architecture and the modified
network ResNetM are shown in Table 5.</p>
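      <p>Assuming the usual definitions of these indicators (TR as the share of correctly classified clips, FAR as the share of “no smoking” clips labeled “smoking”, FRR as the share of missed “smoking” clips), they can be computed as follows; the function below is an illustrative sketch, not the authors' evaluation code:</p>
      <preformat>
def detection_metrics(y_true, y_pred):
    """TR, FAR and FRR for binary labels (1 = smoking, 0 = no smoking).

    Assumed definitions: TR is the correctly classified fraction,
    FAR is the false positive rate, FRR is the false negative rate.
    """
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)
    tr = (tp + tn) / len(pairs)
    far = fp / (fp + tn) if fp + tn else 0.0
    frr = fn / (fn + tp) if fn + tp else 0.0
    return tr, far, frr
      </preformat>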
      <table-wrap id="table2">
        <label>Table 2</label>
        <caption>
          <p>Algorithm results by training epoch</p>
        </caption>
        <table>
          <thead>
            <tr>
              <th>Epoch</th>
              <th>Training loss</th>
              <th>Accuracy when training</th>
              <th>Test loss</th>
              <th>Accuracy when checking</th>
            </tr>
          </thead>
          <tbody>
            <tr><td>1</td><td>0.2024</td><td>0.6699</td><td>0.9198</td><td>–</td></tr>
            <tr><td>2</td><td>0.2058</td><td>0.7346</td><td>0.9280</td><td>–</td></tr>
            <tr><td>3</td><td>0.2448</td><td>0.7613</td><td>0.9095</td><td>–</td></tr>
            <tr><td>4</td><td>0.2259</td><td>0.7984</td><td>0.9280</td><td>–</td></tr>
            <tr><td>5</td><td>0.2267</td><td>0.7984</td><td>0.9125</td><td>–</td></tr>
          </tbody>
        </table>
      </table-wrap>
      <sec id="sec-3-5">
        <title>Alias: Video 11.</title>
      </sec>
      <sec id="sec-3-6">
        <title>Number of frames: 107.</title>
        <p>Resolution: 1280× 720.</p>
      </sec>
      <sec id="sec-3-7">
        <title>Video duration: 4.47 sec.</title>
      </sec>
      <sec id="sec-3-8">
        <title>Alias: Video 12.</title>
      </sec>
      <sec id="sec-3-9">
        <title>Number of frames: 117.</title>
        <p>Resolution: 270× 360.</p>
      </sec>
      <sec id="sec-3-10">
        <title>Video duration: 3.90 sec.</title>
      </sec>
      <sec id="sec-3-11">
        <title>Alias: Video 15.</title>
      </sec>
      <sec id="sec-3-12">
        <title>Number of frames: 151.</title>
        <p>Resolution: 1280× 720.</p>
      </sec>
      <sec id="sec-3-13">
        <title>Video duration: 5.07 sec. 349</title>
      </sec>
      <sec id="sec-3-14">
        <title>Description and results of some used videos (continued). Description of test video Sample frame 1 Sample frame 2</title>
      </sec>
      <sec id="sec-3-15">
        <title>Alias: Video 19.</title>
      </sec>
      <sec id="sec-3-16">
        <title>Number of frames: 87.</title>
        <p>Resolution: 480× 360.</p>
      </sec>
      <sec id="sec-3-17">
        <title>Video duration:2.93 sec.</title>
      </sec>
      <sec id="sec-3-18">
        <title>Alias: Video 18.</title>
      </sec>
      <sec id="sec-3-19">
        <title>Number of frames: 81.</title>
        <p>Resolution: 1280× 720.</p>
      </sec>
      <sec id="sec-3-20">
        <title>Video duration: 2.73 sec.</title>
        <p>Experimental studies conducted on 20 video sequences obtained in real-world shooting
conditions confirm the eficiency of the proposed method for recognizing smoking. The ResNet
neural network architecture, modified to a three-dimensional neural network, ensures that the
spatial-temporal signs of smoking are taken into account and shows, on average, 15% higher
accuracy in recognizing the smoking actions compared to the basic architecture. The developed
software implementation of the smoking recognition method provides real-time operation.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <article-title>WHO framework convention on tobacco control</article-title>
          . Available at: https://www.who.int/fctc/text_download/en.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Chien</surname>
            <given-names>T.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            <given-names>C.C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            <given-names>C.P.</given-names>
          </string-name>
          <article-title>Deep learning based driver smoking behavior detection for driving safety //</article-title>
          <source>Journal of Image and Graphics</source>
          .
          <source>2020</source>
          . Vol.
          <volume>8</volume>
          . No. 1. P.
          <volume>15</volume>
          -
          <fpage>20</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Odhiambo</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cole</surname>
            <given-names>C.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Torkjazi</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Valafar</surname>
            <given-names>H.</given-names>
          </string-name>
          .
          <article-title>State transition modeling of the smoking behavior using LSTM recurrent neural networks //</article-title>
          <source>International Conference on Computational Science and Computational Intelligence (CSCI)</source>
          .
          <year>2019</year>
          . P.
          <volume>898</volume>
          -
          <fpage>904</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Wu</surname>
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hsieh</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheng</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cheng</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tseng</surname>
            <given-names>S.</given-names>
          </string-name>
          <article-title>Human smoking event detection using visual interaction clues // 20th</article-title>
          <source>International Conference on Pattern Recognition</source>
          .
          <year>2010</year>
          . P.
          <volume>4344</volume>
          -
          <fpage>4347</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Dunne</surname>
            <given-names>É.</given-names>
          </string-name>
          .
          <article-title>Smoking detection in video footage. A Dissertation Submitted in Partial Fulfilment of the Requirements for the Degree of MAI (Computer Engineering)</article-title>
          . Submitted to the University of Dublin, Trinity College,
          <year>2018</year>
          . 43 P.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Iwamoto</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Inoue</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matsubara</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tanaka</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Cigarette smoke detection from captured image sequences // Image Processing: Machine Vision Applications III</article-title>
          .
          <source>International Society for Optics and Photonics</source>
          .
          <year>2010</year>
          . Vol.
          <volume>7538</volume>
          . P.
          <volume>82</volume>
          -
          <fpage>87</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Iwamoto</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Inoue</surname>
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matsubara</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tanaka</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Cigarette smoke detection using feature values based on the kernel LMS algorithm //</article-title>
          <source>IEICE Technical Report. Circuits and Systems</source>
          .
          <year>2010</year>
          . Vol.
          <volume>109</volume>
          (
          <issue>434</issue>
          ). P.
          <volume>237</volume>
          -
          <fpage>248</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Varma</surname>
            <given-names>V.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sasidharan</surname>
            <given-names>K.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ramachandran</surname>
            <given-names>K.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Nair</surname>
            <given-names>B.</given-names>
          </string-name>
          .
          <article-title>Real time detection of speed hump/bump and distance estimation with deep learning using GPU and ZED stereo camera //</article-title>
          <source>Procedia Computer Science</source>
          .
          <year>2018</year>
          . Vol.
          <volume>143</volume>
          . P.
          <volume>988</volume>
          -
          <fpage>997</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <article-title>OpenCV (Open source computer vision library)</article-title>
          . Available at: https://opencv.org.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Krizhevsky</surname>
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sutskever</surname>
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hinton</surname>
            <given-names>G.E.</given-names>
          </string-name>
          <article-title>ImageNet classification with deep convolutional neural networks //</article-title>
          <source>Advances in Neural Information Processing Systems</source>
          .
          <year>2012</year>
          . P.
          <volume>1097</volume>
          -
          <fpage>1105</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Simonyan</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zisserman</surname>
            <given-names>A.</given-names>
          </string-name>
          .
          <article-title>Very deep convolutional networks for large-scale image recognition // CoRR</article-title>
          .
          <year>2014</year>
          . abs/1409.1556.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Ji</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            <given-names>K.</given-names>
          </string-name>
          <article-title>3D convolutional neural networks for human action recognition //</article-title>
          <source>IEEE Transactions on Pattern Analysis &amp; Machine Intelligence</source>
          .
          <year>2013</year>
          . Vol.
          <volume>35</volume>
          . No. 1. P.
          <volume>221</volume>
          -
          <fpage>231</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Yu</surname>
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>T.</given-names>
          </string-name>
          <article-title>Recognition of human continuous action with 3D CNN //</article-title>
          <source>International Conference on Computer Vision Systems</source>
          . Springer,
          <year>2017</year>
          . P.
          <volume>314</volume>
          -
          <fpage>322</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <article-title>HMDB: a large human motion database</article-title>
          . Available at: https://serre-lab.clps.brown.edu/resource/hmdb-a-large-human-motion-database.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>