<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Information system for analyzing the movement of complexly identifiable objects against a stationary background</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Mykhaylo Palamar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykhaylo Strembitskyi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Oleg Ulyanov</string-name>
          <email>oulyanov@rian.kharkov.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Myroslava Yavorska</string-name>
          <email>myavorska@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andriy Palamar</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Institute of Radio Astronomy NASU Kharkiv</institution>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>Ruska str., 56, 46001, Ternopil</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>In recent years, an increasing number of studies have focused on the detection of moving objects and the identification of static backgrounds. The most common approaches rely on deep learning methods and the creation of extensive databases and test datasets. However, such approaches face challenges related to the configuration of numerous parameters and the complexity of data storage systems. The proposed method addresses these limitations by restricting frame processing and localizing the search area to isolate the region containing a moving object against a stationary background. In addition, filters are applied to highlight characteristic areas of the static background and the probable location of dynamic objects. A method and corresponding software have been developed in the MATLAB environment to assess the dynamic behavior of complexly identifiable moving objects against a stationary background, based on the analysis of successive video surveillance frames. Experimental results demonstrate that the proposed approach offers high overall performance in identifying the movement trajectories of dynamic objects in static scenes. Furthermore, it shows resilience to variations in dynamic object parameters and environmental changes, effectively distinguishing between moving and stationary objects.</p>
      </abstract>
      <kwd-group>
        <kwd>video surveillance</kwd>
        <kwd>moving object detection</kwd>
        <kwd>data analysis</kwd>
        <kwd>image processing</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The detection and classification of dynamic objects is a critical task in numerous fields of research
[
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4">1–4</xref>
        ]. In intelligent transportation systems, for instance, the detection of moving objects is a
fundamental research direction. Moving objects are identified in real time using appropriate
algorithms within intelligent video surveillance systems [5], for detecting anomalous events [6],
and in the development of tracking systems [7].
      </p>
      <p>Developing a reliable system for detecting moving objects remains a challenge due to various
factors, one of the most prominent being a non-uniform background. The simplest approach
involves background subtraction, where the static background model is subtracted from the input
image and the resulting pixel differences beyond a defined threshold are treated as foreground.
Background subtraction methods are well-suited for strictly static scenes, but degrade in the presence of slowly changing backgrounds, such as waving tree leaves or water surface movement.</p>
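      <p>The thresholding step described above can be sketched as follows. This is an illustrative Python/NumPy fragment, not the paper's MATLAB implementation, and the threshold of 25 intensity levels is an arbitrary assumption:</p>
      <preformat>
```python
import numpy as np

def foreground_mask(frame, background, threshold=25):
    """Mark as foreground every pixel whose absolute difference
    from the static background model exceeds the threshold."""
    diff = np.abs(frame.astype(np.int32) - background.astype(np.int32))
    return np.greater(diff, threshold)

# Synthetic example: a flat background and one changed pixel.
background = np.full((4, 4), 100, dtype=np.uint8)
frame = background.copy()
frame[1, 2] = 200                 # a single "moving" pixel
mask = foreground_mask(frame, background)
print(mask.sum())                 # 1 foreground pixel detected
```
      </preformat>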
      <p>Common techniques for adaptive background modeling include Gaussian Mixture Models
(GMM) [8] and Visual Background Extractor (ViBe) [9].</p>
      <p>ORCID: 0000-0002-8255-8491 (M. Palamar); 0000-0002-5713-1672 (M. Strembitskyi); 0000-0003-0934-0952 (O. Ulyanov); 0000-0001-8033-7348 (M. Yavorska); 0000-0003-2162-9011 (A. Palamar)</p>
      <p>To enable early intervention, it is important to identify the current position of foreign objects as
well as their speed and acceleration [10-13]. However, in video surveillance scenarios where object
parameters significantly differ from those of the background, visual identification becomes
difficult.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement and proposed method</title>
      <p>For the implementation of preventive measures, it is crucial to identify the current position of a
foreign object, as well as its speed and acceleration.</p>
      <p>When the parameters of the observed object and the general background are not comparable
during video surveillance, the object may not be visually identifiable. For example, the surveillance
frames in Figure 1 (a) and (b) differ by the presence of an additional element in one of them, shown
in Figure 1 (c).</p>
      <p>Below, we propose an approach for analyzing the pixel-level images of video frames that allows
for the fixation of the current position and movement of objects with complex identification
against a stationary background.</p>
      <p>Considering the image as a set of pixels defined by their position and intensity, we assign to each element the coordinate values i, j and the intensity level p_ij.</p>
      <p>To the image as a whole, we associate a conditional "center of mass," analogously to the center
of mass of material objects distributed on a plane.</p>
      <p>If the size of the selected image for analysis is (m, n) pixels, with rows indexed by i = 1, …, m and columns by j = 1, …, n, the abscissa of its "center of mass" can be calculated as:</p>
      <p>X_C = ( Σ_{i=1..m} X_i · P_i ) / ( Σ_{i=1..m} P_i ) (1)</p>
      <p>and the ordinate as:</p>
      <p>Y_C = ( Σ_{j=1..n} Y_j · P_j ) / ( Σ_{j=1..n} P_j ) (2)</p>
      <p>where</p>
      <p>X_i = ( Σ_{j=1..n} j · p_ij ) / ( Σ_{j=1..n} p_ij ), P_i = Σ_{j=1..n} p_ij (3)</p>
      <p>Y_j = ( Σ_{i=1..m} i · p_ij ) / ( Σ_{i=1..m} p_ij ), P_j = Σ_{i=1..m} p_ij (4)</p>
      <p>Thus, for the case shown in Figure 1 (c), we obtain: X_C = 295.9176 pix., Y_C = 214.8757 pix.</p>
      <p>Accordingly, by observing the displacement of a given object against a stationary background,
we can track the change in the position of its "center of mass" in the pixel image, which results
from the difference between the current frame and the background frame.</p>
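      <p>The "center of mass" computation amounts to an intensity-weighted mean of the pixel coordinates. A minimal sketch (illustrative Python/NumPy rather than the paper's MATLAB; the test image is synthetic):</p>
      <preformat>
```python
import numpy as np

def center_of_mass(p):
    """Intensity-weighted 'center of mass' of a 2-D difference image p,
    using 1-based pixel coordinates as in the text."""
    total = p.sum()
    rows = np.arange(1, p.shape[0] + 1)   # i = 1..m
    cols = np.arange(1, p.shape[1] + 1)   # j = 1..n
    yc = (rows * p.sum(axis=1)).sum() / total   # ordinate
    xc = (cols * p.sum(axis=0)).sum() / total   # abscissa
    return xc, yc

# Single bright pixel at row 3, column 4 (1-based coordinates).
p = np.zeros((5, 5))
p[2, 3] = 10.0
xc, yc = center_of_mass(p)
print(xc, yc)                     # 4.0 3.0
```
      </preformat>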
    </sec>
    <sec id="sec-3">
      <title>3. Case study</title>
      <p>When monitoring the movement of the object shown in Figure 1 (c) against the background in
Figure 1 (a), we obtain a sequence of frames similar to Figure 1 (b), in which the localization of the
object is problematic.</p>
      <p>The result of applying the developed software based on the proposed method is shown in Figure 2.</p>
      <p>During the analysis of 14 consecutive frames sized (300×300) pixels, the computed positions of
the "center of mass" are marked with red dots, the object boundaries on the general background are
outlined in green, and the trajectory of movement is shown as a yellow line.</p>
      <p>From the resulting trajectory, we can estimate discrete values of instantaneous velocity and acceleration. The magnitude of the velocity vector is calculated as:</p>
      <p>V_{i+1} = √( V_{x,i+1}² + V_{y,i+1}² ) (5)</p>
      <p>where</p>
      <p>V_{x,i+1} = ( x_{i+1} − x_i ) / Δt, V_{y,i+1} = ( y_{i+1} − y_i ) / Δt (6)</p>
      <p>and Δt is the time interval between adjacent frames.</p>
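      <p>The discrete speed estimates can be sketched as follows (illustrative Python; the trajectory points and the value of Δt are made-up values, not measurements from the paper):</p>
      <preformat>
```python
import math

def velocities(xs, ys, dt):
    """Speed magnitudes between consecutive 'center of mass' positions,
    via finite differences over the frame interval dt."""
    out = []
    for i in range(len(xs) - 1):
        vx = (xs[i + 1] - xs[i]) / dt
        vy = (ys[i + 1] - ys[i]) / dt
        out.append(math.hypot(vx, vy))   # sqrt(vx**2 + vy**2)
    return out

xs = [0.0, 3.0, 6.0]
ys = [0.0, 4.0, 8.0]
print(velocities(xs, ys, dt=1.0))        # [5.0, 5.0]
```
      </preformat>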
      <p>The change in velocity magnitude during the object's observation interval shown in Figure 2 is
illustrated in Figure 3.</p>
      <p>The computed discrete values are marked with red dots. A continuous change, represented by
the dashed blue line, is obtained through spline approximation of the discrete data.</p>
      <p>The components of the acceleration vector magnitude are calculated similarly:</p>
      <p>a_{x,i+1} = ( V_{x,i+1} − V_{x,i} ) / Δt, a_{y,i+1} = ( V_{y,i+1} − V_{y,i} ) / Δt (7)</p>
      <p>and the argument of the acceleration vector magnitude is calculated as:</p>
      <p>φ_{a,i+1} = arctan( a_{y,i+1} / a_{x,i+1} ) (8)</p>
      <p>To adequately assess the state of the object at time moments between available video frames, in addition to the magnitudes of the velocity and acceleration vectors, it is also important to determine their directions. The direction of motion, i.e., the argument of the velocity vector, is calculated as:</p>
      <p>φ_{V,i+1} = arctan( V_{y,i+1} / V_{x,i+1} ) (9)</p>
      <p>Thus, the movement of the detected object during the observation interval, shown in Figure 2, is
accompanied by changes in its velocity and acceleration vectors, as illustrated in Figure 5. The
object’s positions in the video frames are marked with red dots. Velocity vectors at specific
moments are shown in blue, while acceleration vectors are shown in green.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Influence of random noise on video frames</title>
      <p>The accuracy of determining the current position of the observed object may be affected by
random noise superimposed on the working frames.</p>
      <p>As shown in Figure 6 and Figure 7, the effect depends on both the dimensions of the working
fields and the comparative evaluation of the overall intensity level of the pixel image of the tracked
object versus the noise level in the analyzed frame:</p>
      <p>k n
∑ ∑ pobject ij
Q= i=1 j=1
m l
∑ ∑ pnoise ij
i=1 j=1
where pobject ij – are the pixel intensities of the object image of size ( k × n), and pnoise ij – are the
pixel intensities of the noise in the working frame.</p>
      <p>Figure 6 presents an example of the trajectory shift (red line) compared to the one obtained
without noise (yellow line) for parameters: k = 16, n = 12, m = 300, l = 300, Q = 6.05%.
(8)
(9)
(10)</p>
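      <p>The object-to-noise intensity ratio Q can be computed as in the following sketch (illustrative Python/NumPy; the array contents are synthetic stand-ins for the object patch and the noisy working frame, not data from the experiment):</p>
      <preformat>
```python
import numpy as np

def intensity_ratio(p_object, p_noise):
    """Q = total object intensity / total noise intensity, in percent."""
    return 100.0 * p_object.sum() / p_noise.sum()

p_object = np.full((16, 12), 3.0)      # k x n object patch
p_noise = np.full((300, 300), 0.1)     # m x l noise frame
q = intensity_ratio(p_object, p_noise)
print(round(q, 2))                     # 6.4
```
      </preformat>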
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>Moving object detection plays a pivotal role in video surveillance and computer vision systems.
Traditional trajectory identification methods often fail in accurately isolating the background,
especially in the presence of noise. Deep learning-based approaches partially solve this issue but
are often too complex and resource-intensive for real-time video surveillance systems.</p>
      <p>In this work, we propose a method and implement a software tool in MATLAB to evaluate the
dynamic behavior of complexly identifiable moving objects using live video frames. We also
investigate the impact of random image noise on observation outcomes.</p>
      <p>The proposed method combines the strengths of object detection models with background
modeling techniques. Experimental results indicate that this method performs well with complex
dynamic objects, even in noisy visual environments.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[5] K. Muchtar, A. Bahri, M. Fitria, T. W. Cenggoro, B. Pardamean, A. Mahendra, M. R. Munggaran, and C.-Y. Lin, Moving pedestrian localization and detection with guided filtering, IEEE Access, vol. 10, pp. 89181–89196, 2022.</p>
      <p>[6] M.-I. Georgescu, A. Barbalau, R. T. Ionescu, F. S. Khan, M. Popescu, and M. Shah, Anomaly detection in video via self-supervised and multi-task learning, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 12742–12752.</p>
      <p>[7] F. R. Valverde, J. V. Hurtado, and A. Valada, There is more than meets the eye: Self-supervised multi-object detection and tracking with sound by distilling multimodal knowledge, in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021, pp. 11612–11621.</p>
      <p>[8] C. Stauffer and W. E. L. Grimson, Adaptive background mixture models for real-time tracking, in Proceedings of the 1999 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Cat. No. PR00149), vol. 2, IEEE, 1999, pp. 246–252.</p>
      <p>[9] O. Barnich and M. Van Droogenbroeck, ViBe: a powerful random technique to estimate the background in video sequences, in 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, IEEE, 2009, pp. 945–948.</p>
      <p>[10] M. Palamar, V. Pohrebennyk, I. Puleko, V. Chumakevych, and V. Ptashnyk, Automated decryption of bodies of water on the basis of Landsat-8 satellite images with reference to controlled classification, Przeglad Elektrotechniczny, vol. 96, no. 11, 2020, pp. 115–118.</p>
      <p>[11] M. Palamar, M. Yavorska, M. Strembitskyi, and V. Strembitskyi, Selection of the efficient video data processing strategy based on the analysis of statistical digital images characteristics, Scientific Journal of the Ternopil National Technical University, 2018, pp. 107–114.</p>
      <p>[12] F. Erich, B. Bourreau, C. K. Tan, G. Caron, Y. Yoshiyasu, and N. Ando, Neural Scanning: Rendering and determining geometry of household objects using Neural Radiance Fields, in 2023 IEEE/SICE International Symposium on System Integration (SII), 2023, pp. 1–6.</p>
      <p>[13] F. Erich, N. Chiba, A. Mustafa, R. Hanai, N. Ando, Y. Yoshiyasu, and Y. Domae, NeuralMeshing: Complete Object Mesh Extraction from Casual Captures, arXiv preprint arXiv:2508.16026, 2025.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Redmon</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <article-title>YOLOv3: An incremental improvement</article-title>
          , arXiv preprint arXiv:1804.02767,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Grycuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Scherer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Marchlewska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Napoli</surname>
          </string-name>
          ,
          <article-title>Semantic hashing for fast solar magnetogram retrieval</article-title>
          ,
          <source>Journal of Artificial Intelligence and Soft Computing Research</source>
          , vol.
          <volume>12</volume>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>He</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Girshick</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Faster R-CNN: Towards real-time object detection with region proposal networks</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          , vol.
          <volume>28</volume>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>W.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Anguelov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Erhan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Szegedy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Reed</surname>
          </string-name>
          , C.-Y. Fu, and
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Berg</surname>
          </string-name>
          ,
          <article-title>SSD: Single shot multibox detector</article-title>
          ,
          <source>in European conference on computer vision</source>
          . Springer,
          <year>2016</year>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>