<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.1109/MEMSTECH55132.2022.10002907</article-id>
      <title-group>
        <article-title>Method for measuring torques of electric motors using machine vision</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andriy Dudnik</string-name>
          <email>a.s.dudnik@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Kvashuk</string-name>
          <email>dmytro.kvashuk@npp.nau.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vitalii Ostapenko</string-name>
          <email>vt.ostapenko@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Maksym Zhdanovych</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Nazarii Lytvyn</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vira Mykolaichuk</string-name>
          <email>viramykolaichuk@knu.ua</email>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Interregional Academy of Personnel Management</institution>
          ,
          <addr-line>Frometivska Str., 2, Kyiv, 03039</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Open University of Human Development “Ukraine”</institution>
          ,
          <addr-line>Lvivska Str., 23, Kyiv, 04071</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Volodymyrska Str., 60, Kyiv, 03022</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2014</year>
      </pub-date>
      <volume>2732</volume>
      <fpage>33</fpage>
      <lpage>36</lpage>
      <abstract>
        <p>The object of research is the process of determining the angular displacement of dynamometric transmission mechanisms used to measure the torques of electric motors. The paper analyzes methods of converting torques into a unified signal, taking destabilizing factors into account, and studies the measurement accuracy achieved when these methods are applied. In the course of the conducted research, various measurement methods and means were considered, in particular tensometry and inductive transducers; it was found that such tools require periodic maintenance. Mechanical transmission dynamometers, in particular, measure torque by visual observation of angular deformation: the angle of rotation of the shaft is read from a Nonius (vernier) angle scale and converted into torque. It is substantiated that this measurement method remains relevant, because instead of involving an operator to read the results, it can be supplemented with machine vision. Such an approach can find practical application in aggressive environments where electronic measuring transducers cannot be used. In this regard, a method of measuring torques of electric motors based on machine vision is proposed. It was tested using modeling tools and a specially developed prototype: a tensometric measuring transducer that visualizes the measured signal so that it can be captured by machine vision. A method of determining the measured value using known software and hardware solutions is proposed, and the solution of similar tasks with machine vision is discussed, taking into account the disadvantages caused by low speed and sensitivity.</p>
      </abstract>
      <kwd-group>
        <kwd>torque</kwd>
        <kwd>electric motor</kwd>
        <kwd>measurement accuracy</kwd>
        <kwd>measurement error</kwd>
        <kwd>measuring device</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Ensuring the accuracy of measurements of dynamometric moments of electric motors in difficult conditions is one of the key tasks of metrological support. In contrast to methods of measuring static moments, which are carried out with a stationary stator, it requires taking into account variable loads and rotation speeds, which places higher demands on the accuracy and response speed of the measuring equipment. At the same time, high measurement accuracy can be achieved only with force-measuring sensors placed directly on the shaft.</p>
      <p>Although modern technologies for the conversion of measurement information allow such measurements to be performed with high accuracy using tensometric, optical, inductive and capacitive methods, these methods require periodic maintenance, specific power-supply conditions, special vibration-filtering techniques, etc.</p>
      <p>At the same time, torsion dynamometers, which visualize the twisting angle of the dynamometric element on a Nonius scale, have existed for quite some time; the twisting angle is proportional to the rotational load. Thus, the angular deformation of the dynamometric element placed on the shaft is transformed into a torque through visual observation of its twisting. This traditional method is simple and reliable but requires the participation of an operator. It can be used in aggressive environments where electronics cannot be applied, especially when measuring directly on the motor shaft. In recent years, the relevance of such measuring instruments has increased due to the need to master environments where conventional electronics may be ineffective, such as radiation exposure, open-space conditions, high pressure at great depths, and other extreme conditions. Therefore, there is a need to automate the measurement process while preserving the proven methodology, as well as to modernize outdated force-measuring devices.</p>
      <p>In such conditions, machine vision tools can be used to recognize the measurement results. However,
this requires additional research into the speed, accuracy, sensitivity, and linearity of such solutions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Literature review</title>
      <p>The study of ways to identify measurable quantities using machine vision is becoming more and more
popular. The main idea of using machine vision to determine torques is the analysis of the angular
displacement of the shaft.</p>
      <p>
        The methodology of identifying the movement of moving objects has long been used in various industries. For example, work [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] proposed a displacement estimation technique that determines the displacement of objects by integrating asynchronous acceleration measurements using a Kalman filter. An improved feature-matching algorithm was developed for better object tracking, which allowed combining asynchronous measurements with different sampling rates to improve displacement estimation. The effectiveness and practicality of the proposed method were confirmed by tests, in all of which the technique accurately estimated displacement with an error of 3%.
      </p>
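      <p>The fusion idea of [1] can be illustrated with a minimal one-dimensional Kalman filter that predicts displacement by integrating acceleration samples and corrects the prediction with slower vision-based displacement measurements; the constant-velocity model, noise levels and sampling rates below are illustrative assumptions, not the implementation of [1]:
```python
import numpy as np

def fuse_displacement(accel, vision, dt, vision_every):
    """Fuse high-rate acceleration with low-rate vision displacement (1-D)."""
    x = np.zeros(2)                           # state: [displacement, velocity]
    P = np.eye(2)                             # state covariance
    F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity transition
    B = np.array([0.5 * dt ** 2, dt])         # effect of acceleration input
    Q = 1e-4 * np.eye(2)                      # process noise (assumed)
    H = np.array([[1.0, 0.0]])                # vision observes displacement
    R = np.array([[1e-3]])                    # vision noise (assumed)
    out = []
    for k, a in enumerate(accel):
        # Predict by integrating the current acceleration sample
        x = F @ x + B * a
        P = F @ P @ F.T + Q
        # Correct whenever a (slower) vision measurement is available
        if k % vision_every == 0 and k // vision_every < len(vision):
            y = vision[k // vision_every] - H @ x        # innovation
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
            x = x + (K @ y).ravel()
            P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

# Constant acceleration of 1 m/s^2: the true displacement is 0.5 * t^2
dt = 0.01
t = np.arange(200) * dt
est = fuse_displacement(np.ones(200), 0.5 * t[::10] ** 2, dt, 10)
```
The high-rate prediction keeps the estimate responsive between vision frames, while the corrections prevent the integrated acceleration from drifting.
      </p>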
      <p>
        This technique can be applied to determine the angle of rotation of a shaft by placing special marks on the shaft, with automatic determination of a scaling factor to translate measurements from pixel units into angular units. However, such a method depends on visual accessibility, is prone to errors at high rotation speeds, is sensitive to external influences such as lighting conditions, and requires complex calibration and system setup. In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], a method of displacement measurement using the edge contrast enhancement (EEM) technique is proposed, which is a significant improvement over the earlier orientation coding (OCM) technique. First of all, EEM improves the ability to track low-contrast objects, especially in low-light conditions. Unlike OCM, which uses image orientation gradients only, EEM also applies magnitude gradients, which allows subtle edge features to be highlighted and enhanced. This significantly improves edge identification, which in turn increases tracking accuracy and reliability.
      </p>
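      <p>The magnitude-gradient idea behind EEM can be sketched as follows; this is a simplified, magnitude-only illustration, not the EEM algorithm itself, and the blending weights are arbitrary:
```python
import numpy as np

def enhance_edges(gray):
    """Emphasize faint edges by blending an image with its gradient magnitude."""
    # Per-pixel intensity gradients along rows and columns
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    # Normalize the magnitude to the 0..255 range before blending
    if magnitude.max() > 0:
        magnitude = magnitude / magnitude.max() * 255.0
    # Weighted sum: keep the scene while boosting edge pixels
    return 0.5 * gray + 0.5 * magnitude

# Synthetic low-contrast test image: a dim square on a dark background
img = np.full((64, 64), 40, dtype=np.uint8)
img[16:48, 16:48] = 60
enhanced = enhance_edges(img)
```
On the synthetic image, the border of the square ends up markedly brighter than both flat regions, which is exactly what makes a low-contrast edge easier to track.
      </p>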
      <p>The main disadvantage of the EEM method is the high requirements for computing resources
and processing time due to the complexity of the algorithms for detecting and enhancing fine edge
characteristics, especially in conditions of low contrast and variable lighting.</p>
      <p>
        Given the possible increase in error, adapting the method of identifying the angular displacements of the dynamometric elements of measuring transducers can be quite appropriate in situations where the main priority is uninterrupted operation with minimal maintenance costs. In addition, there are quite simple solutions for implementing the recognition of the angular deformation of the dynamometric element of the shaft. For example, the method presented in this article can be implemented on a cheap Raspberry Pi mini-computer, whose power, as shown in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], is sufficient to recognize moving objects by determining their spatial coordinates. This approach can be useful in areas where high torque-measurement accuracy is not required; more accurate measurement requires more computing power.
      </p>
      <p>At the same time, it poses new challenges to developers, requiring constant improvement of both
algorithmic and optical means of identifying moving objects. But there are approaches where, despite
the low resolution of the camera and the processing speed of the video stream, the application of
machine vision can be improved.</p>
      <p>
        Thus, in work [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], Faster Region-Based Convolutional Neural Network (Faster R-CNN) machine learning models were used to identify and classify red and blue boxes by processing images obtained from a low-resolution camera on a Raspberry Pi 3 B+. This became possible thanks to deep learning algorithms, which are able to effectively analyze and recognize objects even with limited image quality. Experimental results demonstrated that even with limitations related to image quality, the system achieved an accuracy of 78.8% in detecting and sorting red and blue boxes. Regarding the analysis of rotational motion, however, a number of limitations related to the speed of image processing should be highlighted.
      </p>
      <p>
        In work [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the problem of the processing speed of rotary movement was solved by introducing additional signal labels into the design, which greatly facilitates the recognition of the visual features of the object. The obtained results demonstrate that the proposed framework effectively solves the problem of the image-processing speed required for rotational motion analysis and can serve as a reliable platform for future applications. At the same time, a significant number of destabilizing factors substantially reduces the accuracy of the measurement. Therefore, attention should be paid to the possibility of building a three-dimensional model of the electric motor shaft, where special marks on the dynamometric element can be recognized using keypoint detectors such as SIFT. The experience of using this method can be taken from the article [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], where the determination of the position of objects was implemented using stereoscopic binoculars.
      </p>
      <p>This approach can be adapted to measure torques, using the system’s ability to accurately determine
the spatial position and orientation of objects. The use of computational algorithms for image processing
with recognition of SIFT detectors allows you to create accurate 3D models of rotating objects. This can
be useful for monitoring and controlling torques in various engineering applications.</p>
      <p>But this method cannot be used to determine the angular movements of the shaft under conditions
of high speeds, since there is a need for additional recognition tools.</p>
      <p>
        For example, in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], a method of self-measuring the speed of a robot and controlling the torque of its electric motor is proposed. However, the methods proposed in the article do not take into account the risk of inaccuracies in speed and torque measurements, especially under variable loads or unstable power supply. In control systems, especially those that depend on computational algorithms, there can be a delay between the measurement of speed and the corresponding adjustment of torque, which can affect the smoothness and accuracy of the robot’s movement. From this example, one can borrow approaches to synchronizing data from different sensors for accurate measurement of parameters in dynamic conditions, which makes it possible to accurately track the time and place of image capture. This can be adapted to monitor changes in shaft position in real time.
      </p>
      <p>
        Gyroscopic methods, which also use visual cues of motion but do not focus on images, should also be considered. For example, methods of probe scanning of shaft movement are usually based on measuring frequency characteristics with various optical sensors, as well as resistive methods based on the proportionality between the shaft rotation angle and the illumination of the strain gauge receiver, which records the level of illumination intensity resulting from the angular movement of the dynamometric element. The principle of operation of such methods is described in works [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ]. However, they have significant limitations related to the need for precise alignment of optical components and high sensitivity to external influences, such as light pollution and mechanical vibrations, which can significantly affect the accuracy and reliability of measurements.
      </p>
      <p>
        An alternative to the existing methods of measuring torques can be methods based on computer vision, which determine the angle of rotation of the shaft depending on the load moment [
        <xref ref-type="bibr" rid="ref11">11, 12</xref>
        ]. However, the main unsolved problem of the considered methods, when applied to estimating the rotational moment, is the difficulty of accurately determining angular displacements at high rotation speeds and under variable lighting conditions, which can lead to measurement errors [13, 14, 15, 16, 17].
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. The purpose and objectives of the research</title>
      <p>The purpose of the study is to develop a method of measuring the torques of electric motors based on
machine vision, which will make it possible to apply this method in conditions where conventional
measuring transducers cannot be used.</p>
      <p>To achieve the goal, the following tasks were set:
• develop a prototype of a dynamometric clutch that signals the level of rotational load on the
electric motor shaft;
• propose a method of processing visual information that characterizes the measurement parameter.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Research materials and methods, object and research hypothesis</title>
      <p>The object of the research is the development of a method of measuring the torques of electric motors using machine vision. The research hypothesis suggests that machine vision can be applied in conditions where traditional measuring devices are ineffective.</p>
      <p>This study involves the development of a prototype of a dynamometric clutch capable of recording
the level of rotational load on the shaft of an electric motor, as well as the development of a method of
processing the received visual information from this clutch.</p>
      <p>It is assumed that the use of these approaches will ease the introduction of machine vision into the process of measuring the dynamic characteristics of electric motors and will contribute to further improving the accuracy and speed of this method, which can be used in particularly difficult operating conditions of measuring devices [18, 19, 20, 21].</p>
    </sec>
    <sec id="sec-5">
      <title>5. Requirements for the development of a dynamometric clutch that signals the level of rotational load on the shaft of an electric motor and a method of processing visual information from a dynamometric clutch</title>
      <p>The development of a prototype of a dynamometric clutch involves the creation of a device capable of
measuring rotational loads on an electric motor shaft in real time. The design of the coupling includes
sensors that can withstand mechanical loads and ensure the stability of measurements. The coupling
is also equipped with an interface for easy integration with electric motor control systems and data
transmission in digital format. The prototype must be tested for its ability to withstand long-term use
in various operating modes of the electric motor, including maximum revolutions and load changes.</p>
      <p>In parallel with the development of the coupling, the development of a method of processing visual
information coming from the coupling is proposed. The processing system automatically analyzes
visual information to determine measurement parameters [22, 23]. The development of a graphical
user interface for visualization and analysis of measurement results in real time is also expected
[24, 25, 26, 27, 28, 29, 30].</p>
    </sec>
    <sec id="sec-6">
      <title>6. Development of a prototype for measuring the torque based on recognition of the angle of rotation of the shaft. Principle of operation</title>
      <p>The method that can be used to measure the rotational moment of an electric motor is based on the well-known principle of converting the elastic deformation of the dynamometric element into torque (Figure 1).</p>
      <p>The angle of twist of two neighboring cross-sections of the dynamometric element relative to each other can be represented by the expression:
dφ = dα, (1)
where dα is the infinitesimally small value by which the angle of inclination changes between two cross-sections perpendicular to the axis of the dynamometric element, the angle between which is minimal (their distance is dl); α is the angle of inclination of the helical line of the dynamometric element.</p>
      <p>Taking into account the length of the dynamometric element, the twist angle can be determined as follows:
φ = ∫₀ˡ T/(G·J₀) dl = T·l/(G·J₀), (2)
J₀ = π·d⁴/32, (3)
T = F·R, (4)
where l is the length of the dynamometric element; dl is the distance between two sections; R is the radius of the turns of the spring of the dynamometric element; J₀ is the moment of inertia of the cross-section of the spring wire relative to the center of this cross-section; F is the force acting at the moment of twisting the spring; d is the diameter of the wire from which the spring of the dynamometric element is wound; G is the shear modulus of the spring material; T = F·R is the twisting moment acting on the cross-section of the spring.</p>
      <p>Deformations ε₁ and ε₂ associated with deformations in the x and y directions can be described by the following dependencies:
ε₁ = (εx + εy)/2 + ((εx − εy)/2)·cos 2θ₁ + (γxy/2)·sin 2θ₁, (5)
ε₂ = (εx + εy)/2 + ((εx − εy)/2)·cos 2θ₂ + (γxy/2)·sin 2θ₂, (6)
where θ₁ and θ₂ are the angles that determine the orientation of the axes relative to the main axes of deformation of the elastic element.</p>
      <p>Hence, the shear deformation is expressed by the formula:
γxy = (2(ε₁ − ε₂) − (εx − εy)·(cos 2θ₁ − cos 2θ₂)) / (sin 2θ₁ − sin 2θ₂), (7)
which is valid when sin 2θ₁ ≠ sin 2θ₂; in particular, for gauge orientations satisfying 2θ₂ = 2θ₁ − π this requires 2θ₁ ≠ 0, ±π, ±2π, …</p>
      <p>Using the Mohr diagram, the maximum angular twist (shear) of the dynamometric element and the principal deformations can be calculated as follows:
γmax = √((εx − εy)² + γxy²),
ε_max,min = (εx + εy)/2 ± γmax/2. (8)</p>
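      <p>The standard torsion relations used here (the polar moment of inertia J₀ = π·d⁴/32 and the twist angle φ = T·l/(G·J₀), with T = F·R) can be evaluated numerically; the spring geometry and material constants in this sketch are illustrative, not data from the prototype:
```python
import math

def twist_angle(force_n, turn_radius_m, length_m, wire_d_m, shear_modulus_pa):
    """Twist angle (rad) of the dynamometric element under a torque T = F * R."""
    torque = force_n * turn_radius_m            # T = F * R, newton-metres
    j0 = math.pi * wire_d_m ** 4 / 32.0         # polar moment of inertia, m^4
    return torque * length_m / (shear_modulus_pa * j0)

# Illustrative values: 2 mm steel wire (G ~ 79 GPa), 10 mm turn radius,
# 50 mm element length, 10 N load
phi = twist_angle(10.0, 0.010, 0.050, 0.002, 79e9)
```
The twist angle grows linearly with the applied force, which is the proportionality that the visual scale of the prototype exploits.
      </p>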
      <p>At the same time, determining the dependences of dynamometric elements on the shaft twist angle can be complicated by difficulties in setting up and calibrating such a system. Therefore, to determine the rotational force, it is possible to use a strain gauge sensor, which has greater sensitivity and known grading characteristics.</p>
      <p>In this case, the structure of the measuring transducer will have the following form (Figure 2).</p>
      <p>Designations in Figure 2: 1 is an electric motor; 2 is a strain gauge sensor rigidly fixed to the body of the strain gauge coupling; 3 is a strain gauge clutch connected to the electric motor shaft through strain gauge sensor 2.</p>
      <p>Based on this, a prototype of a dynamometric coupling for measuring torque is proposed (Figure 3). It consists of an AS5600 angular displacement sensor (Figure 4) and a strain gauge sensor (Figure 5) built around an HX711 chip, which converts the analog signal received from a strain gauge bridge housed in an aluminum case measuring 75x12.7x12.7 mm. The sensor converts deformation into an electrical signal and supports a maximum load of up to 1 kg.</p>
      <p>The prototype also includes an electrical circuit for indicating the level of the sensor’s output signal, built around the LM3914 integrated circuit (Figure 6), which converts the output signal of the microcircuit into a qualitative indication of the measured value.</p>
      <p>Thus, the measurement of the torque is based on the conversion of the mechanical load into an
analog signal using a strain gauge bridge. This primary signal is fed to an analog-to-digital converter
(ADC), which processes and converts it into digital form. Further processing of the signal is carried
out using the HX711 chip, which forms a unified output signal. This signal is fed to the LM3914 chip,
which allows visualization of the shaft load level using an indicator, thereby providing the ability to
determine the shaft load in real time [31, 32, 33, 34, 35, 36].</p>
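      <p>The conversion chain described above (strain gauge bridge → ADC → unified signal → LED level) can be sketched as two small helper functions; the calibration constants and the lever arm are hypothetical, and this is not the HX711 driver API:
```python
def counts_to_torque(raw_counts, offset_counts, counts_per_newton, lever_arm_m):
    """Convert raw ADC counts from the load cell into a torque estimate (Nm).

    offset_counts and counts_per_newton come from a two-point calibration
    (reading at no load, reading under a known load); lever_arm_m is the
    radius at which the strain gauge senses the force.
    """
    force_n = (raw_counts - offset_counts) / counts_per_newton
    return force_n * lever_arm_m

def torque_to_led_level(torque_nm, full_scale_nm=10.0, n_leds=10):
    """Map a torque to the number of lit LEDs on an LM3914-style bar display."""
    level = int(round(torque_nm / full_scale_nm * n_leds))
    return max(0, min(n_leds, level))
```
For example, with an offset of 50000 counts and a sensitivity of 10000 counts per newton, a reading of 150000 counts on a 50 mm lever arm corresponds to 0.5 Nm.
      </p>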
    </sec>
    <sec id="sec-7">
      <title>7. A method of recognizing a change in a signal indicator using machine vision</title>
      <p>
        Considering the display of the signal level as a scale placed in the form of an LED matrix on the housing of the dynamometric coupling (Figure 3), the program code for identifying the signal level can be compiled by setting a virtual scale:
import cv2
import numpy as np

# Path to the captured frame and the x-coordinate of the scale's zero
# position (the indicator position at no load, recorded during calibration)
image_path = "frame.png"
zero_x = 0
# Load the image using OpenCV
image = cv2.imread(image_path)
# Convert the image to the HSV color space to detect red more accurately
hsv_image = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# Define ranges for red in HSV (red wraps around the hue axis)
lower_red = np.array([0, 120, 70])
upper_red = np.array([10, 255, 255])
lower_red2 = np.array([170, 120, 70])
upper_red2 = np.array([180, 255, 255])
light_red = np.array([160, 100, 100])
dark_red = np.array([180, 255, 255])
# Create masks for red color and combine them
mask1 = cv2.inRange(hsv_image, lower_red, upper_red)
mask2 = cv2.inRange(hsv_image, lower_red2, upper_red2)
mask3 = cv2.inRange(hsv_image, light_red, dark_red)
combined_mask = mask1 | mask2 | mask3
# Find contours in the mask ([-2:] keeps the unpacking compatible with
# both OpenCV 3 and OpenCV 4 return values)
contours, hierarchy = cv2.findContours(combined_mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)[-2:]
# Collect the x-coordinates of the centers of the detected lines
line_x_coords = []
for contour in contours:
    # Get the bounding box of the contour
    x, y, w, h = cv2.boundingRect(contour)
    # Store the x-coordinate of the center of the bounding box
    line_x_coords.append(x + w // 2)
# Sort the x-coordinates from left to right
line_x_coords = sorted(line_x_coords)
# The leftmost line is assumed to be the indicator and the rightmost
# line the full-scale reference mark; at least two lines are required
if len(line_x_coords) >= 2:
    indicator_x = line_x_coords[0]
    reference_x = line_x_coords[-1]
    # The reference mark corresponds to 10 Nm on the scale
    scale_factor = 10 / (reference_x - zero_x)
    # Offset of the indicator from the zero position, converted to Nm
    measurement = (indicator_x - zero_x) * scale_factor
else:
    measurement = None
print(measurement)
      </p>
      <p>As part of this code, the image is first loaded using OpenCV library functions. For better identification
of red elements in the image, it is converted from the standard BGR color space to the HSV model. This
allows you to more accurately determine the ranges of red color, in particular for its various shades.
Next, color masks are created that highlight red areas in the image. The program uses these masks
to find the contours corresponding to the red lines. Each contour found is analyzed and a central
x-coordinate is determined for it. After finding all the x-coordinates, they are sorted to select the
extreme points that represent the left and right red lines on the scale. The left line is a moment indicator,
and the right line is a standard that remains unchanged and corresponds to the maximum value of the
measurement (Figure 7).</p>
      <p>The program determines the scale factor by dividing the known maximum value of the scale by the pixel distance between the indicator and the standard. Finally, it measures the torque using the location of the indicator line and converts it to a scale of 0 to 10 Nm. Testing of this method with a direct-current electric motor made it possible to obtain the following results (Figure 8).</p>
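      <p>The scaling step can be distilled into a small standalone helper; zero_x, the pixel position of the scale’s zero mark, is assumed to be known from calibration, and the pixel values below are hypothetical:
```python
def pixels_to_torque(indicator_x, reference_x, zero_x, full_scale_nm=10.0):
    """Convert the indicator's pixel position into a torque reading (Nm).

    zero_x and reference_x are the pixel positions of the scale's zero and
    full-scale marks; the result is clamped to the scale's range.
    """
    if reference_x == zero_x:
        raise ValueError("degenerate scale: zero and reference marks coincide")
    nm_per_pixel = full_scale_nm / (reference_x - zero_x)
    torque = (indicator_x - zero_x) * nm_per_pixel
    return max(0.0, min(full_scale_nm, torque))
```
With the zero mark at pixel 100 and the reference mark at pixel 300, an indicator at pixel 150 reads 2.5 Nm.
      </p>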
      <p>Statistical observation of the measurement results showed a standard deviation of 0.22 in the stabilized operating mode, with a relative error of 20%. Thus, this method of determining the loads on the shaft using machine vision and the proposed dynamometric clutch made it possible to obtain the load-intensity characteristics in a non-contact way. The developed software code enabled a comparative analysis of pixels on the signal and reference marks of the dynamometric clutch.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Discussion of the results of the assessment of the accuracy of measurements of dynamic moments and angular accelerations of electric motors</title>
      <p>The results of testing the method of determining torques on the shaft of a direct current electric
motor using machine vision demonstrate the significant prospects of this technology for non-contact
measurement. With the help of a specially developed dynamometric coupling, measurements were
obtained, which are supported by statistical data. The standard deviation in the stabilized operating mode
was 0.22, which indicates a fairly high stability of the measurement process. The relative measurement
error was 25%, which indicates potential areas for further improvement of the technique.</p>
      <p>The application of the developed software code for the comparative analysis of pixels on the signal and
reference markings of the dynamometer coupling made it possible to analyze the load intensity in more
detail. This made it possible to obtain more accurate data on the distribution of loads, which contributes
to a better understanding of the behavior of the electric motor in various operating conditions.</p>
      <p>Considering the obtained results, the method of using machine vision together with the dynamometric coupling proved to be effective for measuring torques. However, to achieve greater accuracy and reduce the relative error, it is necessary to optimize both the image-processing algorithms and the physical components of the measurement system. Further research should focus on improving these aspects in order to reduce errors and improve measurement reliability.</p>
    </sec>
    <sec id="sec-9">
      <title>9. Conclusions</title>
      <p>The proposed method of measuring torque uses the capabilities of machine vision, which greatly
simplifies traditional measurement methods. Testing of the method confirmed the possibility of its use
for non-contact measurement of torques. The use of a specially developed dynamometric coupling
ensured the stability of measurements with a standard deviation of 0.22. However, taking into account
the complex cycle of transformations of the measured value, the relative error was 25%. Although
this method does not provide high measurement accuracy, it can be applied in conditions where it is
necessary to use visual data to obtain measurement information, in particular in those situations where
traditional measuring equipment is unsuitable or cannot be applied.</p>
    </sec>
    <sec id="sec-10">
      <title>Declaration on Generative AI</title>
      <p>The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ma</surname>
          </string-name>
          , J. Choi,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sohn</surname>
          </string-name>
          ,
          <article-title>Real-time structural displacement estimation by fusing asynchronous acceleration and computer vision measurements</article-title>
          ,
          <source>Micromachines</source>
          <volume>12</volume>
          (
          <year>2021</year>
          )
          <fpage>1034</fpage>.
          doi:10.1111/mice.12767.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>L.</given-names>
            <surname>Luo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Q.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <article-title>Edge-enhanced matching for gradient-based computer vision displacement measurement</article-title>
          ,
          <source>Computer-Aided Civil and Infrastructure Engineering</source>
          <volume>33</volume>
          (
          <year>2018</year>
          )
          <fpage>1019</fpage>
          -
          <lpage>1040</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>V.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Minghua</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Rizwan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Shaikh</surname>
          </string-name>
          , X. Liu,
          <article-title>Computer vision based object grasping 6dof robotic arm using picamera</article-title>
          ,
          <source>in: 2018 4th International Conference on Control, Automation and Robotics (ICCAR)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>312</fpage>
          -
          <lpage>316</lpage>
          . doi:10.1109/ICCAR.2018.8384653.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Sawant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tyagi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sawant</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Tade</surname>
          </string-name>
          ,
          <article-title>Implementation of faster rcnn algorithm for smart robotic arm based on computer vision</article-title>
          ,
          <source>in: 2022 6th International Conference On Computing, Communication, Control And Automation (ICCUBEA)</source>
          , Pune, India,
          <year>2022</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/ICCUBEA54992.2022.10010930.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>F. D.</given-names>
            <surname>Secuianu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lupu</surname>
          </string-name>
          ,
          <article-title>Implementation of a home appliance mobile platform based on computer vision: self-charging and mapping</article-title>
          ,
          <source>in: 2018 21st International Conference on System Theory, Control and Computing (ICSTCC)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>6</lpage>
          . doi:10.1109/ICSTCC.2018.8540685.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>J.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>A system and method for person identification and positioning incorporating object edge detection and scale-invariant feature transformation</article-title>
          ,
          <source>Measurement</source>
          <volume>223</volume>
          (
          <year>2023</year>
          )
          <fpage>113759</fpage>.
          doi:10.1016/j.measurement.2023.113759.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cubero</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Marco-Noales</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Aleixos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Barbé</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Blasco</surname>
          </string-name>
          ,
          <article-title>Robhortic: A field robot to detect pests and diseases in horticultural crops by proximal sensing</article-title>
          ,
          <source>Agriculture</source>
          <volume>10</volume>
          (
          <year>2020</year>
          )
          <fpage>276</fpage>.
          doi:10.3390/agriculture10070276.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Sue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Farr</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kretschmar</surname>
          </string-name>
          ,
          <article-title>High precision torque measurement on a rotating load coupling for power generation operations</article-title>
          ,
          <source>in: Proc. IEEE Int. Instrumentation and Measurement Technology Conf. (I2MTC)</source>
          , Graz, Austria,
          <year>2012</year>
          , pp.
          <fpage>518</fpage>
          -
          <lpage>523</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Veyrat Durbex</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. Nachajon</given-names>
            <surname>Schwartz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Tacca</surname>
          </string-name>
          ,
          <article-title>Solutions for torque and speed measurement on electric machine controllers test benches</article-title>
          ,
          <source>Elektron</source>
          <volume>5</volume>
          (
          <year>2021</year>
          ).
          doi:10.37537/rev.elektron.5.1.131.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zappalá</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bezziccheri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. J.</given-names>
            <surname>Crabtree</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Paone</surname>
          </string-name>
          ,
          <article-title>Non-intrusive torque measurement for rotating shafts using optical sensing of zebra tapes</article-title>,
          <source>Measurement Science and Technology</source>
          (
          <year>2018</year>
          ). doi:10.1088/1361-6501/aab74a.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>O.</given-names>
            <surname>Sushchenko</surname>
          </string-name>
          , et al.,
          <article-title>Airborne sensor for measuring components of terrestrial magnetic field</article-title>
          ,
          <source>in: IEEE International Conference on Electronics and Nanotechnology (ELNANO)</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>687</fpage>
          -
          <lpage>691</lpage>
          . doi:10.1109/ELNANO54667.2022.9926760.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>