<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Modeling of a Neural Network-Based Motor Position Controller in a System for Tracking Objects of Complex Shapes</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr Laktionov</string-name>
          <email>itm.olaktionov@nupp.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alina Yanko</string-name>
          <email>al9_yanko@ukr.net</email>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alina Hlushko</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victor Krasnobayev</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>National University «Yuri Kondratyuk Poltava Polytechnic»</institution>
          ,
          <addr-line>Pershotravnevyj Ave 24, Poltava, 36011</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>This research is dedicated to enhancing the efficiency of a system for tracking objects of complex shapes through the integration of movable cameras and a neural network-based motor position controller. The aim of this work is to ensure accurate and reliable real-time object tracking. In this study, a system for tracking objects of complex shapes was developed and investigated, utilizing a camera mounted on an electric motor, with and without a neural network-based motor position controller. A key aspect of the research is the training of a neural network model based on electric motor position data during tracking. The model's output data are used to predict the electric motor's position, enabling proactive motion correction and improved tracking accuracy. A distinctive feature of this research is the adaptation of the neural network-based motor position controller for localized use in a system for tracking objects of complex shapes, specifically designed to address current challenges faced by regional industrial enterprises. The practical value of this work lies in the potential application of the developed system in industry and educational processes to enhance technical safety. The system's flexibility allows for its use with or without a neural network-based motor position controller, ensuring rapid configuration and adaptation to various conditions. The current prototype utilizes a 2MP camera, and while the integration of an LSTM-based motor position controller showed a minor reduction in the standard deviation of positioning errors (from 168.88 to 164.11), future work will focus on incorporating higher-resolution cameras with improved low-light performance and further optimization of the neural network architecture and training dataset to enhance tracking accuracy.</p>
      </abstract>
      <kwd-group>
        <kwd>neural network-based controller</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>computer vision</kwd>
        <kwd>object tracking</kwd>
        <kwd>electric motor position prediction</kwd>
        <kwd>neural network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In modern automated systems and robotics, object tracking plays a pivotal role, finding
applications in various domains ranging from video surveillance to automated production control.
Precise tracking, especially of objects with complex shapes, necessitates continuous and real-time
correction of movable mechanism positions. Electric motors are a crucial component of such
systems, providing high-precision positioning, yet their stable operation requires the use of
sophisticated control algorithms capable of mitigating diverse external influences and errors [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>Current approaches to electric motor control include the use of traditional controllers, but
ensuring high accuracy and adaptability under unpredictable environmental changes and object
variations requires more advanced methods, such as neural network-based controllers. Built upon
neural networks, these controllers demonstrate the ability to adapt to changing conditions,
optimizing control parameters in real time. They enable the reduction of noise, positional
estimation errors, and other unforeseen factors that arise during the tracking of objects with
complex geometries. This minimizes static and dynamic positioning errors, enhances system
resistance to external influences, and ensures optimal real-time operation.</p>
      <p>The task of tracking complex-shaped objects in the context of regional industrial enterprises is
particularly relevant, where the accuracy and reliability of video surveillance systems are critical
for ensuring the safety and efficiency of production processes. In this context, the development of a
tracking system utilizing a Neural Network-Based Controller capable of predicting object motion
and proactively adjusting the camera position is not only a scientific but also a practical necessity.</p>
      <p>The foundation of this research is the development concept that combines computer vision with
an actuator that adjusts the camera position based on object movement, integrated with a neural
network model. The study is based on the principles of stacking, adaptive learning, and neural
network control, allowing the integration of computer vision capabilities with precise actuator
control. The research emphasizes the creation of a model for predicting signals several seconds
ahead, enabling preemptive activation of the stepper motor. The scientific novelty of this work lies
in adapting a well-established scientific approach to localize the use of neural network control in a
tracking system to address specific challenges faced by regional industrial enterprises.</p>
      <p>The aim of this research is to enhance the operational efficiency of computer vision models by
implementing movable cameras and a tracking system with neural network control, thereby
ensuring more accurate and reliable real-time tracking of complex-shaped objects.</p>
      <p>To achieve this goal, the following research tasks were defined:
1. Develop a tracking system with and without a neural network-based controller, capable of
independent operation.
2. Conduct an experimental study for a comparative analysis of the effectiveness of both
systems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Analysis of existing scientific approaches</title>
      <p>
        Early prototypes of tracking systems were implemented on Arduino controllers [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. The software
involved object detection followed by tracking. This technology enhances tracking accuracy
through the use of cascaded classifiers [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Over time, the hardware and software have actively
evolved and transformed into an integrated ecosystem incorporating artificial intelligence.
      </p>
      <p>
        Artificial intelligence tools continuously learn from specific data, thereby updating the model
[
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Therefore, contemporary research is oriented towards developing new artificial intelligence
models to address specific tasks. The process of model parameter identification is frequently
considered [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This allows for the discovery of new parameters and enhances prediction accuracy.
      </p>
      <p>
        Prediction accuracy is also improved by creating neural network controllers, as exemplified in
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. A notable feature of the proposed solution is the existence of a real-world model from which
data is collected and fed into the artificial intelligence model. This work is among the pioneering
efforts that have unlocked new possibilities for artificial intelligence applications. In addition to
developing models that predict the position of actuators, the selection and integration of other
equipment, particularly video cameras, with the control system is crucial. Study [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] proposes a PTZ
camera control system that automatically detects and tracks moving objects in real-time, utilizing
their center, direction of motion, distance, and speed, regardless of the camera's focusing function.
Implemented on a TI DM6446 DSP processor, this system demonstrates high efficiency in tracking
high-speed vehicles. The study also highlights the limitations of software camera focus,
underscoring the necessity for a motor-driven movement system.
      </p>
      <p>
        Article [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] focuses on the development of a complementary metal-oxide-semiconductor (CMOS) image
sensor and its applications in the aerospace, medical, and automotive fields. The sensor can be created
in specialized software and manufactured at the enterprise. Such sensors can expand the
capabilities of computer vision systems in interaction with other equipment, primarily cameras.
Therefore, this study expands on previous work [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        For construction applications, [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ] proposes an automated tracking system for construction
machinery on unmanned construction sites, combining image processing and machine learning
techniques to improve accuracy and reliability. This system utilizes a platform that adjusts
direction as needed, but via manual command. Although the algorithm provides stable and
continuous imaging, it is hampered by the issue of manual control. Research [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] introduces a
novel Position Alignment Method (PAM) that automatically, accurately, and rapidly aligns
coordinate systems, ensuring error-free calibration in remote camera control. Experimental
comparisons show that PAM outperforms manual methods in terms of accuracy, stability, and
operational speed, and is more flexible for use in telerobotic camera control.
      </p>
      <p>
        The use of motors increases the camera's range of motion, but in [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], an algorithm for
automatic detection, tracking, and zooming of active targets using a camera with an already wide
range of motion is presented, improving the resolution of distant objects. The proposed system
optimizes disk space usage by stopping recording when no targets are present and provides
adaptive tracking of multiple objects with motion prediction to minimize image quality loss and
reduce the need for camera movement. Study [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] demonstrates a developed automatic position
correction module for an image inspection system, which enables the camera to adjust its pose and
position based on detected object displacement or rotation errors. Results show that the system
with position correction significantly enhances productivity by automating the optical quality
inspection process.
      </p>
      <p>
        Any platform movement destabilizes the camera, reducing image quality and tracking accuracy.
The visual tracking system for mobile robots proposed in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ] stabilizes images during motion
using a combination of feedforward control from gyroscope and encoder data (VOR) and periodic
feedback correction (OKR). Study [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] presents a visual tracking system for a mobile robot that
uses stereo camera and motion sensor data to maintain a line of sight to a stationary target.
Vision-based compensation is applied to correct motion measurement errors, activated when the robot
stops or moves slowly, ensuring high tracking accuracy without overloading the system. A
background suppression algorithm that accounts for camera motion, minimizing the impact of
oscillations caused by wind or heavy-vehicle vibrations (especially critical at long focal
lengths), is presented in [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. A data-processing-based stabilization approach proposed in [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] compensates for camera rotation in real time during motor movement, improving the tracking
accuracy of features and the estimation of independent camera motion. Experiments show that
stabilization increases accuracy by 27.37% for feature tracking and 34.82% for independent motion
estimation, and reduces processing time by 25%.
      </p>
      <p>
        The use of motors is also necessary for calibrating installed cameras. Study [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] proposes a
rotation-based camera and gyroscope calibration method that eliminates the need for targets and
accurately estimates intrinsic camera parameters and extrinsic system parameters. The method is
verified on real data from a low-cost platform, making it suitable for lightweight robotic platforms
equipped with cameras and gyroscopes.
      </p>
      <p>
        The developed camera stabilization control system on a gimbal for unmanned aerial vehicles
(UAVs), used for tasks such as target tracking, surveillance, and aerial photography in [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], shows
that traditional PID control is less effective compared to PID control with settings tuned by the
PSO algorithm.
      </p>
      <p>
        In the context of enhancing the reliability and efficiency of data processing systems, it is crucial
to use methods that ensure error resistance and high performance [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. In this context, Residue
number systems (RNS) can play a key role. Prior studies [20] and [21] analyze the impact of
Residue Number Systems on error resistance and the efficiency of computer systems, particularly
in the context of error diagnostics in data processing devices.
      </p>
      <p>Considering the advantages of RNS in providing parallel computations and error resistance,
their application in tracking systems can enhance data processing speed and reliability, especially
in industrial settings where speed and accuracy are critical. Further research will focus on
integrating RNS into tracking systems to improve their performance.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Tools for development of a neural network-based motor position controller in a system for tracking objects of complex shapes</title>
      <p>This research was conducted using a Raspberry Pi single-board computer, a Nema 17 stepper
motor, a TB6600 stepper motor driver, and an HD 2MP video camera, as illustrated in Fig. 1.</p>
      <p>Two approaches to action strategies were considered. The first action strategy involved
developing an object tracking system with camera movement when the tracked object reached the
edge of the graphical interface. The second action strategy involved developing an object tracking
system with camera movement controlled by a neural network when the tracked object reached
the edge of the graphical interface.</p>
      <p>As neural networks, Facebook Prophet [22] and Long Short-Term Memory (LSTM) [23] were
studied. To determine the signal shape and train the neural network, a single video sequence was
used, and the tracking object's movement data were recorded in a .csv file for in-depth analysis.
The input data included the object's displacement from the center of the graphical interface and
time in seconds, which were recorded in a Table 1.</p>
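      <p>The recording step described above can be sketched as follows. The column names, sample values, and the in-memory buffer are illustrative assumptions; the text states only that the object's displacement from the center of the graphical interface and the time in seconds were written to a .csv file.</p>

```python
# Minimal sketch of the offset/time logging step. Column names and sample
# values are assumptions; the paper only states that displacement from the
# interface center and time in seconds were recorded to a .csv file.
import csv
import io

def log_offsets(samples):
    """Write (time_s, offset_px) pairs as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time_s", "offset_px"])  # header row (assumed names)
    for t, offset in samples:
        writer.writerow([t, offset])
    return buf.getvalue()

# Example: three samples of the tracked object's displacement.
csv_text = log_offsets([(0.0, 12), (0.5, -4), (1.0, 30)])
print(csv_text)
```

On the Raspberry Pi the same rows would be appended to a file opened with the csv module rather than kept in memory.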
      <p>Justification of the neural network model selection required the use of a test video sequence
(self-created) to capture the signal shape, as shown in Fig. 2.</p>
      <p>In the research process, the Python programming language [24] and external libraries installed
in the single-board computer's environment, myenv, were used, as depicted in Fig. 3.</p>
      <p>The primary video stream processing library was OpenCV (cv2) [25]. The gpiozero library [26] was used for
stepper motor control. The analysis of accumulated data involved several standard libraries,
including pandas, numpy, and matplotlib, which are implemented in the programming language
[24]. Neural networks required the use of the sklearn [27] and tensorflow [28] libraries. In addition
to these library packages, the Prophet library [22] was used. To build the models, the methodology
was used [27], [28].</p>
      <p>The configuration of the Prophet library, with the prior installation of additional libraries
necessary for Prophet to function, particularly plotly [29] for graphical interpretation of the results,
is shown in Fig. 3.</p>
      <p>The developed models were saved in a .h5 file for use on the Raspberry Pi. Thus, the basic tools
for conducting the research were prepared.</p>
      <p>The main idea of tracking is to maintain the tracked object, especially during camera movement
by the stepper motor. For this purpose, an interface with specific functionality was created. The
buttons included settings for region of interest (ROI): w for forward, s for backward, a for left, d for
right, enter to start tracking, +/- for scaling, q to end tracking, r to start recording offset and time,
and t to finish recording. Key elements of the tracking system are the tracking algorithms, which in
this study included Channel and Spatial Reliability Tracker, Kernelized Correlation Filters,
Minimum Output Sum of Squared Error Filter, and Multiple Instance Learning.</p>
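      <p>The interface bindings listed above can be modeled as a simple dispatch table. The action strings below are illustrative paraphrases of the button descriptions, not the authors' code; the real handlers live in the tracking loop.</p>

```python
# Dispatch table for the tracking-interface keys described in the text.
# Action strings are illustrative; the real handlers run in the GUI loop.
KEY_ACTIONS = {
    "w": "move ROI forward",
    "s": "move ROI backward",
    "a": "move ROI left",
    "d": "move ROI right",
    "enter": "start tracking",
    "+": "zoom in",
    "-": "zoom out",
    "q": "end tracking",
    "r": "start recording offset and time",
    "t": "finish recording",
}

def handle_key(key):
    """Return the action bound to a key, or None for unbound keys."""
    return KEY_ACTIONS.get(key)

print(handle_key("r"))
```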
      <p>The development of the tracking or object following system was carried out according to the
following subtasks:
1. The ROI should appear in the center of the graphical interface, and upon ending tracking
(button q), the ROI should return to the center position.
2. The operator selects the tracking area and adjusts the ROI scale.
3. Upon starting tracking, the program should determine the distance from the ROI to the left
and right boundaries and to the center of the graphical interface.
4. If the distance between the ROI and the boundary is less than 20 px, the camera should
move 1 step left or right (depending on the ROI position).
5. When the distance from the ROI to the edge of the interface is too large, the stepper motor
should perform one step at a time to avoid losing the detected object.
6. Do not accelerate the stepper motor's movement, even when the ROI is close to the edge of
the graphical interface. Limit the number of steps.
7. Display information about the ongoing action on the interface.
8. Use the gpiozero library to control the stepper motor. Stepper motor configuration: dirPin =
16, stepPin = 12, MAX_ANGLE = 30 # -30 to +30 degrees, STEP_ANGLE = 1.8 # Stepper
motor step in degrees (e.g., 200 steps per revolution -&gt; 1.8 degrees per step).
9. To determine the stepper motor's rotation direction, self.direction.value = 0, use the
condition if direction &gt; 0 else 1 (1 for clockwise rotation, otherwise counterclockwise).</p>
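      <p>The boundary and step-limiting rules in subtasks 3-9 can be sketched as plain decision logic. The gpiozero pin writes are omitted here, and decide_step and direction_value are hypothetical helpers operating on assumed pixel coordinates, not the authors' code.</p>

```python
# Decision logic for one control tick, following subtasks 3-9 above.
# Pin control via gpiozero is omitted; this sketch only decides whether
# to take one step and in which direction. `decide_step` is a
# hypothetical helper, not the authors' implementation.
MAX_ANGLE = 30       # camera may rotate from -30 to +30 degrees
STEP_ANGLE = 1.8     # 200 steps per revolution -> 1.8 degrees per step
EDGE_THRESHOLD = 20  # px; closer than this to a boundary triggers a step

def decide_step(roi_left, roi_right, frame_width, current_angle):
    """Return +1 (one step right), -1 (one step left), or 0 (no step)."""
    dist_left = roi_left
    dist_right = frame_width - roi_right
    # Never more than one step per tick (subtasks 5 and 6), and the
    # rotation angle is clamped to the +/-30 degree range (subtask 8).
    if dist_left < EDGE_THRESHOLD and current_angle - STEP_ANGLE >= -MAX_ANGLE:
        return -1
    if dist_right < EDGE_THRESHOLD and current_angle + STEP_ANGLE <= MAX_ANGLE:
        return +1
    return 0

def direction_value(step):
    """Direction pin value as read from subtask 9 (exact mapping assumed)."""
    return 0 if step > 0 else 1
```

A usage example: for a 640 px frame with the ROI 10 px from the left edge and the camera at 0 degrees, decide_step returns -1, and the motor makes exactly one step left.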
      <p>The software also had requirements for debugging the implemented program texts. During
debugging data recording, if the object is lost, recording should stop, and resume when the object
reappears in the frame. If the object is lost and cannot be found, the operator should exit tracking
mode, reconfigure tracking mode, and continue data recording.</p>
      <p>Comparative analysis of the studied values was carried out using the standard deviation
criterion (see Table 1).</p>
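      <p>The standard deviation criterion can be computed directly from the logged offsets; a minimal sketch follows, with illustrative sample values rather than the paper's recorded data.</p>

```python
# The comparison criterion is the standard deviation of the recorded ROI
# offsets. The sample lists here are illustrative, not the paper's data.
import statistics

def offset_std(offsets):
    """Population standard deviation of the logged ROI offsets."""
    return statistics.pstdev(offsets)

without_nn = [120, -150, 200, -210, 180, -160]
with_nn    = [110, -140, 190, -200, 170, -150]
print(round(offset_std(without_nn), 2))
print(round(offset_std(with_nn), 2))
```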
    </sec>
    <sec id="sec-4">
      <title>4. Results of modeling a neural network-based motor position controller in a system for tracking objects of complex shapes</title>
      <sec id="sec-4-1">
        <title>4.1. Development and debugging of the basic functionality of object tracking without a neural network controller</title>
        <p>According to the research program, the primary step involved the implementation of software for
object tracking with a video camera displacement system upon the tracked object reaching the
extreme position of the graphical interface. The software implementation was carried out in
several files, specifically import_cv2.py, import_cv2-1.py, import_cv2-2.py, and subsequent revisions up to import_cv2-41.py.</p>
        <p>During the debugging of the proposed solution, implemented in the import_cv2.py file, certain
errors arose, notably the selection of an excessive number of stepper motor steps. This led to a
technical loss of the detected object, as the camera rotation angle was too large. It is important to
note that the system at this stage utilized a 2MP camera, the resolution of which, while sufficient
for initial testing, presented an inherent limitation in capturing fine details and could potentially
impact tracking accuracy, especially for distant or small objects.</p>
        <p>The subsequent version of the software implementation, import_cv2-1.py, addressed the
aforementioned issues. For instance, a decision was made to create a bounding box 10% smaller on
the left and right sides than the main graphical interface. When the region of interest approached
this frame, the stepper motor carrying the camera was to be activated to adjust the camera
position. However, this idea was also imperfect: if the tracked object moved beyond the video
stream while the region of interest was near the frame, it was lost.</p>
        <p>In the import_cv2-2.py version, the number of stepper motor steps for camera movement per
unit time was reduced. To enhance control over camera movements, the detection threshold of the
bounding box of the studied area was increased to 15%. Text messages regarding the stepper motor
speed and the tracking object position were added to the interface. Consequently, the following
limitations were observed: if the detection object moves along the X-axis and exits the study area,
the stepper motor does not rotate the camera. If the detection object is in the center of the study
area, the motor rotates the camera by a specified step. These and other contradictions were
addressed in subsequent software versions, with the desired result achieved only in
import_cv2-41.py, as shown in Fig. 4.</p>
        <p>As shown in Fig. 4, the graphical interface has a classic layout. Control commands are
located in the lower part, and status messages are displayed in the upper part. The tracking
object is in the center of the frame.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Rationale for selecting models of a neural network controller for tracking objects of complex shapes and their construction</title>
        <p>The subsequent part of the research was dedicated to the development and selection of an optimal
neural network controller model, where two models, Prophet and LSTM, were compared. To verify
the functionality of the Prophet model on a single-board computer, the first simple program was
implemented (Fig. 5).</p>
        <p>As shown in Fig. 5, the model functions. At the next stage of the research, according to the
research program, data accumulation was performed. As can be seen from the graph, the signal is
close to sinusoidal, so a sinusoidal signal form was generated for 300 seconds (Fig. 6). The main
goal of creating the model is to predict the signal a few seconds ahead, so that the stepper motor is
activated in advance.</p>
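      <p>The signal generation and the look-ahead goal described above can be sketched as follows. The sampling rate, signal period, window length, and lead time are assumed values; the text states only that a near-sinusoidal signal was generated for 300 seconds so the model can predict a few seconds ahead.</p>

```python
# Sketch of training-signal preparation: a 300-second sinusoid sampled at
# 1 Hz, turned into (history window, value a few seconds ahead) pairs so
# the model learns to activate the stepper motor in advance. The window
# length, lead time, and period are assumptions, not taken from the paper.
import math

DURATION_S = 300
WINDOW = 10   # seconds of history fed to the model (assumption)
LEAD = 3      # predict this many seconds ahead (assumption)

signal = [math.sin(2 * math.pi * t / 60) for t in range(DURATION_S)]

def make_supervised(series, window, lead):
    """Pair each history window with the value `lead` steps ahead."""
    xs, ys = [], []
    for i in range(len(series) - window - lead + 1):
        xs.append(series[i:i + window])
        ys.append(series[i + window + lead - 1])
    return xs, ys

X, y = make_supervised(signal, WINDOW, LEAD)
print(len(X), len(X[0]))  # number of samples and window length
```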
        <p>Despite various ways of using the Prophet model, it does not reproduce the input signal in the
form of a sinusoid, so it will not work adequately in the system being developed. Attempts to
represent a modified formulation, Facebook Prophet = g(t·x1)·s(t·x2)·h(t·x3)·noise, as a
mathematical notation and implement it programmatically did not show the desired result.
Additionally, Auto Regressive Integrated Moving Average (ARIMA) tools were used [30], but they have
limitations on the number of variables. An LSTM model with the Adam optimizer was therefore used;
training was performed over 50 epochs with batch_size=16. Before training the network, the classic
steps of its construction were performed [31]. The sample was split into training/test subsets in a
ratio of 75/25. The criteria for the quality of the model construction were the coefficients of
determination on both subsamples and the value of loss='mse'. The actual and predicted values of
the LSTM model are shown in Fig. 7.</p>
        <p>As can be seen from Fig. 7, the actual and predicted values almost coincide, as evidenced by the
calculated data. The coefficients of determination on the training/test samples are 0.98/0.98, which
indicates the absence of overfitting, with loss: 0.0038 - val_loss: 0.0054.</p>
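        <p>The quality criteria above (a 75/25 split and coefficients of determination on both subsamples) can be sketched without the deep-learning stack. The functions below are hypothetical helpers, and perfect predictions stand in for the LSTM outputs purely to demonstrate the metric.</p>

```python
# Sketch of the model-quality check: split the sample 75/25, then compute
# the coefficient of determination R^2 on each subsample. The helpers are
# hypothetical; "predictions" here are synthetic stand-ins for LSTM output.

def train_test_split_75_25(data):
    """Split a sequence into 75% training and 25% test parts."""
    cut = int(len(data) * 0.75)
    return data[:cut], data[cut:]

def r_squared(actual, predicted):
    """Coefficient of determination R^2 = 1 - SS_res / SS_tot."""
    mean = sum(actual) / len(actual)
    ss_res = sum((a - p) ** 2 for a, p in zip(actual, predicted))
    ss_tot = sum((a - mean) ** 2 for a in actual)
    return 1 - ss_res / ss_tot

data = [float(i % 7) for i in range(100)]
train, test = train_test_split_75_25(data)
print(len(train), len(test))    # 75 25
print(r_squared(train, train))  # perfect predictions give R^2 = 1.0
```

Equal R^2 values on the training and test subsamples, as reported in the paper (0.98/0.98), are what indicate the absence of overfitting.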
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental research and practical application</title>
      <p>Following the theoretical modeling, experimental modeling of the tracking system was
conducted. For this purpose, a dataset was accumulated, as shown in Fig. 8.</p>
      <p>The data in Fig. 8 were initially examined for gaps, analyzed, and fed into the neural network, as
shown in Fig. 9.</p>
      <p>The constructed model does not exhibit signs of overfitting, as evidenced by R2 values of 0.99
for both subsamples. The graphical interpretation of the stepper motor displacement prediction
result indicates high accuracy, as the actual and predicted data coincide. This is also confirmed by
the training error loss: 0.0011 - val_loss: 0.0023.</p>
      <p>Based on the comparative analysis using the standard deviation criterion, the tracking system
without a neural network controller demonstrates a standard deviation of 168.88, while the system
with a neural network controller shows a standard deviation of 164.11. This allows for predicting
the motor activation time for camera displacement depending on the position of the studied
detection object. Let us apply the created solutions to practical tasks. The tracking distance was
investigated from 0 to 300 meters. Algorithms such as Channel and Spatial Reliability Tracker,
Kernelized Correlation Filters [32] and others do not detect images at a distance of 200-300 meters
with a region of interest size of 30x30 px, even with image zooming. However, various factors
influence this, including the object size. The quality of daytime object tracking is affected by
natural factors (sunlight entering the camera lens) and by the object rapidly changing its
trajectory, which can cause the tracker to lose it. As practice shows, the proposed solution works
at a distance of up to 50-60 meters in daylight. For example, a car of any color is tracked in
daylight, as shown in Fig. 10.</p>
      <p>As shown in Fig. 10, the system tracks the car and person even in the presence of obstacles,
such as trees. The solution can be used in industry or education for safety support.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Analysis of limitations and shortcomings of the system</title>
      <p>The developed system for tracking objects of complex shapes, while demonstrating high efficiency
under certain conditions, has a number of limitations that need to be considered for its further
improvement and practical application.</p>
      <p>At this stage, the development does not include a specialized case for transportation, which
limits its mobility and usability in field conditions. To expand the scope of application of the
system, it is necessary to develop a reliable and convenient case that will ensure the protection of
components during transportation and rapid deployment on-site [33]. To assess the economic
feasibility and reliability of the system, it is necessary to conduct a detailed analysis of the cost of
its components (camera, single-board computer, electric motor) and study their resistance to
external influences [34]. Differentiation of components will allow determining the optimal ratio
between cost and quality. The effectiveness of the system can vary significantly depending on the
lighting level and the type of objects being tracked. To ensure stable operation of the system in
different conditions, it is necessary to conduct experiments with different lighting levels (day,
night, artificial) and different detection objects (people, vehicles, industrial parts).</p>
      <p>At this stage, the system is controlled using a keyboard, which limits its convenience and the
possibility of remote control. To expand the functionality of the system, it is necessary to
implement remote control using radio signals, Wi-Fi, or other wireless technologies, such as Mesh
Networking [35]. To ensure autonomous operation of the system in field conditions, it is necessary
to use specialized power modules, such as batteries or solar panels. The choice of power module
should take into account the power consumption of the system components and the duration of
autonomous operation. The effectiveness of using a neural network controller depends on the
quality and volume of training data. To ensure adequate functioning of the system in different
scenarios, it is necessary to pre-train the neural network on a large dataset that reflects different
types of movements and observation conditions.</p>
      <p>Considering these limitations and shortcomings, further research will focus on their elimination
and expansion of the functionality of the developed system.</p>
    </sec>
    <sec id="sec-7">
      <title>7. Conclusion</title>
      <p>The findings of this research demonstrate the successful development of a system for tracking
complex-shaped objects within a 50-60 meter range under daylight conditions. The system
effectively employs a 2MP camera and a stepper motor for automated camera position correction
based on object movement at the interface edges. The implementation allows for operation both
with and without an LSTM-based neural network controller, enabling a comparative analysis of
their effectiveness. A comparative analysis of the tracking system with and without the
LSTM-based neural network revealed a marginal reduction in the standard deviation (164.11 vs. 168.88),
suggesting a potential for enhanced positioning accuracy; however, it necessitates further rigorous
optimization of tracking parameters, filtering, and signal smoothing. Optimization of operation is
also possible by using more powerful equipment, such as graphics cards or hardware accelerators.</p>
      <p>Future work will focus on a detailed experimental setup refinement, expanding system
functionality through remote control and autonomous power, developing a protective
transportation case, and a thorough investigation into cost-effectiveness and reliability across
diverse lighting conditions and object types. Critically, upcoming research will prioritize enhancing
the neural network's performance through experimentation with various architectures (e.g.,
CNN-LSTM hybrids), increasing the size and diversity of the training dataset, and applying advanced
optimization techniques to improve the motor position prediction accuracy.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.
Association 29(5) (2023) 818 835. URL:
https://scibulcom.net/en/article/L8nV7It2dVTBPX09mzWB.
[20] S. Onyshchenko, A. Yanko, A. Hlushko, Improving the efficiency of diagnosing errors in
computer devices for processing economic data functioning in the class of residuals,
EasternEuropean Journal of Enterprise Technologies 5(4(125)) (2023) 63 73.
doi:10.15587/17294061.2023.289185.
[21] A. Yanko, V. Krasnobayev, A. Martynenko, Influence of the number system in residual classes
on the fault tolerance of the computer system, Radioelectronic and Computer Systems, 3(107)
(2023) 159 172. doi:10.32620/reks.2023.3.13.
[22] Facebook/Prophet. URL: https://github.com/facebook/prophet.
[23] R. C. Staudemeyer, E. R. Morris, Understanding LSTM a tutorial into long short-term
memory recurrent neural networks, 2019. URL: https://arxiv.org/pdf/1909.09586.
[24] Python, 2025. URL: https://www.python.org/.
[25] OpenCV, 2025. URL: https://opencv.org/.
[26] Gpiozero, 2025. URL: https://gpiozero.readthedocs.io/en/latest/.
[27] Scikit-learn. Machine Learning in Python, 2025. URL: https://scikit-learn.org/.
[28] TensorFlow, 2025. URL: https://www.tensorflow.org/.
[29] Plotly, 2025. URL: https://plotly.com/.
[30] G. Sunitha, A. Sriharsha, Y. Olimjon, I. Mamatov, Interactive Visualization With Plotly
Express, in: M. G. Galety, et al. (Eds.), Advanced Applications of Python Data Structures and
Algorithms, IGI Global, 2023, pp. 182 206. doi:10.4018/978-1-6684-7100-5.ch009.
[31] Sh. Shalev-Shwartz, Sh. Ben-David (Ed.), Understanding machine learning: from theory to
algorithms, Cambridge University Press, 2014. URL:
https://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/.
[32] A. H. Mbouombouo Mboungam, Y. Zhi, C. K. Fonzeu Monguen, Real-time tracking target
system based on kernelized correlation filter in complicated areas, Sensors 24 (2024) 6600.
doi:10.3390/s24206600.
[33] A. Yanko, N. Pedchenko, O. Kruk, Enhancing the protection of automated ground robotic
platforms in the conditions of radio electronic warfare, Naukovyi Visnyk Natsionalnoho
Hirnychoho Universytetu 6 (2024) 136 142. doi:10.33271/nvngu/2024-6/136.
[34] S. Onyshchenko, V. Skryl, A. Hlushko, O. Maslii, Inclusive Development Index. In:
Onyshchenko, V., Mammadova, G., Sivitska, S., Gasimov, A. (eds), Proceedings of the 4th
International Conference on Building Innovations (ICBI 2022), volume 299 of Lecture Notes in
Civil Engineering, Springer, Cham, 2023, pp. 779 790. doi:10.1007/978-3-031-17385-1_66.
[35] P. Cardullo, D. J. Roio, Mesh Networking, OSF Preprints tepgc, Center for Open Science, 2019.
doi:10.31219/osf.io/tepgc.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] B. Boriak, A. Yanko, O. Laktionov, Model of an automated control system for the positioning of radio signal transmission/reception devices, Radioelectronic and Computer Systems 4(112) (2024) 156-167. doi:10.32620/reks.2024.4.13.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Face-Detection. URL: https://github.com/rizkydermawan1992/face-detection.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] O. Laktionov, A. Yanko, N. Pedchenko, Identification of air targets using a hybrid clustering algorithm, Eastern-European Journal of Enterprise Technologies 5(4(131)) (2024) 89-95. doi:10.15587/1729-4061.2024.314289.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] D. O'Shaughnessy, Understanding Automatic Speech Recognition, Computer Speech &amp; Language (2023) 101538. doi:10.1016/j.csl.2023.101538.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Technology analysis and predicting a multiwave number of new COVID-19 disease based on prophet model, Visnyk of Vinnytsia Politechnical Institute 153(6) (2020) 65-75. doi:10.31649/1997-9266-2020-153-6-65-75.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] S. Vladov, A. Sachenko, V. Sokurenko, O. Muzychuk, V. Vysotska, Helicopters turboshaft engines neural network modeling under sensor failure, Journal of Sensor and Actuator Networks 13(5) (2024) 66. doi:10.3390/jsan13050066.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] Cheol-Jun Jeong, Goo-Man Park, Real-time auto tracking system using PTZ camera with DSP, International Journal of Advanced Smart Convergence 2(1) (2013) 32-35. doi:10.7236/IJASC.2013.2.1.032.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] G. Li, F. Zhao, Z. Wang, Z. Chen, The development and application of CMOS image sensor, Applied and Computational Engineering 7 (2023) 767-777. doi:10.54254/2755-2721/7/20230460.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] M. Fujitake, M. Inoue, T. Yoshimi, Development of an Automatic Tracking Camera System Integrating Image Processing and Machine Learning, Journal of Robotics and Mechatronics 33(6) (2021) 1303-1314. doi:10.20965/jrm.2021.p1303.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] J. He, J. Huang, W. Zhang, F. Xu, Q. Liu, H. Li, PAM: Research on posture alignment method for camera robot system, International Journal of Advanced Robotic Systems 21(5) (2024). doi:10.1177/17298806241278908.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] S.-C. Hsia, S.-H. Wang, C.-M. Wei, C.-Y. Chang, Intelligent object tracking with an automatic image zoom algorithm for a camera sensing surveillance system, Sensors 22(22) (2022) 8791. doi:10.3390/s22228791.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] A. K. Bedaka, S.-C. Lee, A. M. Mahmoud, Y.-S. Cheng, C.-Y. Lin, A camera-based position correction system for autonomous production line inspection, Sensors 21(12) (2021) 4071. doi:10.3390/s21124071.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] J. Park, W. Hwang, W. Bahn, C. Lee, T. Kim, M. M. Shaikh, K. Kim, D. Cho, Pan/Tilt camera control for vision tracking system based on the robot motion and vision information, in: Proceedings of the 18th World Congress of the International Federation of Automatic Control, volume 44 of IFAC, Milano, Italy, 2011, pp. 3165-3170. doi:10.3182/20110828-6-IT-1002.01781.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] W. Bahn, J. Park, C. Lee, T. Kim, T. Lee, M. M. Shaikh, K. Kim, D. Cho, A motion-information-based vision-tracking system with a stereo camera on mobile robots, in: Proceedings of the 5th International Conference on Robotics, Automation and Mechatronics (RAM), IEEE, Qingdao, China, 2011, pp. 252-257. doi:10.1109/RAMECH.2011.6070491.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] P. Mazurek, K. Okarma, Background suppression for video vehicle tracking systems with moving cameras using camera motion estimation, in: J. Mikulski (Ed.), Proceedings of the 12th International Conference on Transport Systems Telematics, volume 329 of TST 2012, Springer, Berlin, Heidelberg, 2012, pp. 372-379. doi:10.1007/978-3-642-34050-5_42.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] J. P. Rodríguez-Gómez, J. R. M.-d. Dios, A. Ollero, G. Gallego, On the benefits of visual stabilization for frame- and event-based perception, IEEE Robotics and Automation Letters 9(10) (2024) 8802-8809. doi:10.1109/LRA.2024.3450290.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] P. Li, J. H. Ren, An Efficient Gyro-Aided Optical Flow Estimation in Fast Rotations With Auto-Calibration, IEEE Sensors Journal 18(8) (2018) 3391-3399. doi:10.1109/JSEN.2018.2810060.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] R. J. Rajesh, C. M. Ananda, PSO tuned PID controller for controlling camera position in UAV using 2-axis gimbal, in: Proceedings of the 2015 International Conference on Power and Advanced Control Engineering (ICPACE), Bengaluru, India, 2015, pp. 128-133. doi:10.1109/ICPACE.2015.7274930.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] S. Onyshchenko, A. Yanko, A. Hlushko, O. Maslii, A. Cherviak, Cybersecurity and improvement of the information security system, Journal of the Balkan Tribological Association 29(5) (2023) 818-835. URL: https://scibulcom.net/en/article/L8nV7It2dVTBPX09mzWB.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>