<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Journal of Engineering Research &amp; Technology (IJERT)</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.5604/01.3001.0012.1104</article-id>
      <title-group>
        <article-title>Moving Targets on the Projection Image from the Laser Emitter of the Multimedia Trainer</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Serhii Yaremenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iurii Krak</string-name>
          <email>krak@univ.kiev.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Glushkov Cybernetics Institute</institution>
          ,
          <addr-line>Kyiv, 40, Glushkov ave., 03187</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Taras Shevchenko National University of Kyiv</institution>
          ,
          <addr-line>Kyiv, 64/13, Volodymyrska str., 01601</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2007</year>
      </pub-date>
      <volume>9</volume>
      <issue>2</issue>
      <fpage>99</fpage>
      <lpage>110</lpage>
      <abstract>
        <p>The problems of creating a multimedia simulator with a laser emitter for teaching the correct use of various weapons, with the aim of acquiring the skills of aimed high-speed shooting, are investigated. A hit on the target is determined by the coincidence of the central pixel of the laser response (spot) on the projection image with one of the target pixels. The hardware and algorithmic components of the shooting error are investigated, and methods for reducing them are developed. An analysis of the spots from the laser emitter on the projection image is carried out in order to find the centroid of the spot. An algorithm for determining the centroid of a spot through two-stage binarization of the image has been developed and tested, and the binarization thresholds optimal for solving the problem have been determined. The test results showed that the obtained accuracy is within the hardware component of the error and that decision-making is performed in real time.</p>
      </abstract>
      <kwd-group>
        <kwd>image processing</kwd>
        <kwd>multimedia simulator</kwd>
        <kwd>laser emitter</kwd>
        <kwd>video stream</kwd>
        <kwd>clustering</kwd>
        <kwd>centroids</kwd>
      </kwd-group>
      <conference>
        <conf-name>IntelITSIS'2022: 3rd International Workshop on Intelligent Information Technologies &amp; Systems of Information Security</conf-name>
        <conf-date>March 23-25</conf-date>
      </conference>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The development of computer graphics methods and real-time image processing, together with technical means for the effective visualization of dynamic processes, has created the conditions for building new training systems based on virtual environments that simulate a given subject area. This study proposes mathematical methods and their algorithmic and software implementation for creating a multimedia simulator that teaches the correct use of various weapons and the acquisition of high-speed aimed shooting skills. Note that the laser emitter is attached to a real weapon: when the trigger is pulled, a spot appears on the screen as the result of a shot, which gives the user a complete feeling of the naturalness of the learning process. Along with the use of real weapons, appropriate virtual environments are created by means of computer graphics, and various situations are simulated in them, as close as possible to real conditions (Fig. 1) [1]. Such multimedia laser shooting training systems can be used for effective training and the rapid mastering of new types of weapons. At the same time, it is necessary to develop more complex modeling systems that support the entire combat training process [2]-[4].</p>
      <p>The principle of operation of a multimedia weapon simulator with a laser emitter (see Fig. 1) is as follows. The projector reproduces on the screen a training video with a moving target (for example, a tank). Hitting the target is determined by the coincidence of the central pixel of the laser response (spot) on the projection image with one of the target pixels. The receiving camera-sensor transmits the video stream to the computer, where the corresponding program processes each frame of the video stream in real time [5].</p>
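      <p>As a minimal sketch of this hit criterion, the check can be written as a set-membership test; the target mask and the sample coordinates below are illustrative assumptions, not simulator data:</p>
      <preformat>
```python
# Sketch of the hit test: a hit is registered when the central pixel of
# the laser spot coincides with one of the target pixels. The target mask
# and coordinates are illustrative assumptions, not simulator data.
def is_hit(spot_center_px, target_pixels):
    return spot_center_px in target_pixels

# A rectangular 20x10-pixel target region:
target = {(i, j) for i in range(100, 120) for j in range(50, 60)}

is_hit((105, 55), target)   # True
is_hit((10, 10), target)    # False
```
      </preformat>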
      <p>Depending on the results of processing, the video clip may change - for example, an explosion is
played (Fig. 2), which is accompanied by the corresponding sounds from the computer speaker.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Hardware and algorithmic components of the shooting errors</title>
      <p>The response of the laser emitter is displayed on the screen as a spot about 8-10 mm in size. The accuracy of shooting from a weapon with a laser emitter depends on the error in determining the central pixel of this spot and on the pixel size [6], [7].</p>
      <p>Consider two components of the shooting error:
• hardware - depends on the parameters of the laser emitter, the dimensions and characteristics of the projection screen, and the resolution and characteristics of the receiving camera-sensor;
• algorithmic - depends on the selected algorithm and frame-processing methods.</p>
      <p>The hardware and algorithmic components of the error are interconnected. Algorithmic processing
of the spot pixels determines the coordinates of the center in real numbers, then these numbers are
rounded up to discrete values of the coordinates of the central pixel of the spot. The accuracy of
choosing the central pixel of the spot depends on the accuracy of the algorithm. On the other hand, the
accuracy of determining the algorithmic component to a certain extent depends on the number of pixels
that characterize the spot, i.e. on the image resolution [8].</p>
      <p>Figure 3 shows images of the same spot at different resolutions of the image displayed on the
receiving camera-sensor.</p>
      <p>As follows from the figure, for a frame with a resolution of 1280*720, the coordinates of the central pixel of the spot were determined relatively accurately. After reducing the resolution to 640*360, the problem arose of which of the neighboring pixels to choose as the centroid of the spot - the one with coordinates (i, j) or the one with coordinates (i, j + 1). In both cases, we obtain an error in determining the center equal to Δ/2, where Δ is the pixel size. Note that Δ/2 is the maximum possible error in determining the centroid, and it depends on the resolution of the receiving camera-sensor. With a resolution of 1280*720 and a screen size of 1670mm*940mm, the maximum possible hardware error is 0.5*1670/1280 ≈ 0.65mm horizontally (or 0.5*940/720 ≈ 0.65mm vertically). With a resolution of 640*360 and the same screen size, the maximum hardware error doubles (Δ/2 ≈ 1.3mm).</p>
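      <p>The arithmetic above can be sketched as follows; the screen width and resolutions are the values from the example:</p>
      <preformat>
```python
# Sketch of the hardware-error estimate: the worst-case centroid error is
# half a pixel (delta/2). Screen width and resolutions are taken from the
# example in the text.
def max_hardware_error_mm(screen_mm, resolution_px):
    pixel_size_mm = screen_mm / resolution_px
    return pixel_size_mm / 2

err_hi = max_hardware_error_mm(1670, 1280)   # ~0.65 mm
err_lo = max_hardware_error_mm(1670, 640)    # ~1.30 mm, i.e. doubled
```
      </preformat>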
      <p>From this we can conclude that it is not necessary to push the algorithmic accuracy far beyond the hardware error, since after the coordinates of the spot center are determined, the pixel closest to this center is ultimately selected, i.e. the residual error is determined by the pixel size. Therefore, one of the important requirements for the algorithm is that it select the pixel nearest to the true center of the spot [9], [10].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Analysis of spots from a laser emitter on a projection image</title>
      <p>All frames of video images usually have various kinds of defects and noise. This is due to both the
hardware component (aberrations) and the influence of external factors, while the noise components of
the images are combined with the useful signal.</p>
      <p>Depending on the nature of their origin, aberrations are divided into color (chromatic) and geometric (called distortion). Chromatic aberrations are optical distortions caused by the different angles of refraction of light waves of different wavelengths. The aberrations of optical systems also include noise due to the properties of the sensitive elements of cameras, for example, dark dots on a light background (the so-called black defect) caused by non-working pixels that occur during the production of photosensors.</p>
      <p>External factors include the ratio between the illumination of the camera lens (useful signal) and the light flux reflected from surrounding objects (noise), blurring of the image due to the movement of the subject, etc.</p>
      <p>Below are frames (Fig. 4) obtained on a projection image with point responses from a laser emitter.
The figures show defects that can affect the accuracy of solving the problem:</p>
      <p>In Fig. 4a), the shape of the spot, which is in the middle, is distorted under the influence of
background points, and on the spot on the right, a dark pixel stands out against a light background (black
defect) - the last pixel in the upper left corner.</p>
      <p>In Fig. 4b), the spots are on a black background; however, the color of the pixels is still distorted
along the edges of the spot due to chromatic aberrations - an excess of violet color is noticeable.</p>
      <p>In Fig. 4c), the brightness of the extreme pixels and the shape of the spot are distorted under the
influence of an external factor (glare on the frame).</p>
      <p>Analyzing the images as a whole, we note the features of the spots that are important for the problem being solved:
• the intensity (brightness) of the spot pixels is close to the maximum (255, 255, 255) and contrasts well with the background;
• a sharp drop in intensity is observed at the edges of the spot;
• the intensity of the pixels increases towards the center of the spot, with the steepest change at the edges;
• the intensity gradient of the edge points depends essentially on their contrast with the surrounding background points;
• the size of the spots varies from 6 to 12 mm (10-20 pixels at an image resolution of 1280*720), while the characteristic region is approximately 10% larger;
• the spots are drop-shaped, close in shape to a circle or ellipse.</p>
      <p>Let us analyze these features. The first two allow us to assume that, with a certain degree of accuracy, the central pixel of the spot can be determined as the center of mass of the pronounced light pixels. The third feature shows that the center of the spot can be determined through the dependence of the change in point intensity towards the center. The fourth feature indicates that, when determining the center of the spot through changes in point intensity, it should be taken into account that the intensity of the points along the outer contour of the spot differs significantly. Finally, the last two features indicate that it is necessary to determine the size and shape of the characteristic region for processing the pixels of each spot.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Algorithm for determining the spot centroid from a laser emitter</title>
      <p>The term "centroid" is used mainly as a substitute for the physical terms "center of gravity" and "center of mass" when it is necessary to emphasize the geometric aspects of this concept, including in images. In this case, the centroid of a fragment of a discrete halftone image is determined by analogy with the center of gravity of physical objects, where instead of the weight of each point (x, y) its intensity I(x, y) is taken [11], [12]. The centroid coordinates Cx, Cy are determined in terms of the moments Mij of a discrete image with pixel intensity I(x, y):</p>
      <p>Cx = M10 / M00, Cy = M01 / M00, where Mij = Σx Σy x^i y^j I(x, y). (1)</p>
      <p>Important properties of sub-images derived from the moments include:
• the area (for binary images) or the sum of color levels (for halftone images), given by the moment M00;
• the centroid with coordinates Cx, Cy.</p>
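      <p>Formula (1) can be illustrated with a minimal sketch on a toy grayscale fragment; plain Python lists stand in for the image, and the 3x3 values are illustrative:</p>
      <preformat>
```python
# Sketch of formula (1): image moments M_ij and the centroid (Cx, Cy)
# of a small grayscale fragment represented as nested lists.
def moment(img, i, j):
    """M_ij = sum over x, y of x^i * y^j * I(x, y)."""
    return sum((x ** i) * (y ** j) * row[x]
               for y, row in enumerate(img)
               for x in range(len(row)))

def centroid(img):
    m00 = moment(img, 0, 0)
    return moment(img, 1, 0) / m00, moment(img, 0, 1) / m00

# A symmetric 3x3 spot: the centroid falls on the central pixel (1, 1).
spot = [[0, 10, 0],
        [10, 255, 10],
        [0, 10, 0]]
centroid(spot)   # (1.0, 1.0)
```
      </preformat>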
      <p>To determine the centroid of a spot from a laser emitter, it is necessary to solve two problems:
• find the points of the spot;
• determine the centroid from the points of the spot.</p>
      <p>Determining the centroid of the spot is solved in two stages (Fig. 5):
• The position of several points of the spot is approximately determined through image
binarization by the maximum possible lower threshold, which cuts off noise effects;
• The size of the spot is refined through binarization of the image fragment with a lower
threshold selected around the detected points of the spot, and the coordinates of the centroid
are already determined from this fragment.</p>
      <p>The value of the upper threshold at both stages of processing is 255, since the intensity of the central
pixels of the spot is close to the maximum.</p>
      <p>The spot centroid determination algorithm consists of the following steps:</p>
      <p>One signal is extracted from the original color image - usually the RGB (Red, Green, Blue) image
is converted to grayscale. There may be other options, see below for details.</p>
      <p>Image smoothing is not required, since at this stage it is only necessary to find a few bright dots of
the spot, and when smoothing, their intensity decreases.</p>
      <p>Image binarization is performed with the BINARY threshold type [13] (Fig. 6). The threshold is chosen approximately from 246 to 254 (see the test cases below). Note that too high a threshold leaves few spot points, while too low a threshold leaves background pixels in the image.</p>
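      <p>A minimal sketch of this binarization step (OpenCV's cv2.threshold with THRESH_BINARY behaves analogously; the sample row of intensities is illustrative):</p>
      <preformat>
```python
# Sketch of the BINARY threshold: pixels above the lower threshold become
# white (255), the rest black (0). This mirrors the behavior of OpenCV's
# cv2.threshold with THRESH_BINARY; the sample row is illustrative.
def threshold_binary(img, lower, upper=255):
    return [[upper if p > lower else 0 for p in row] for row in img]

threshold_binary([[120, 250, 247, 90]], 246)   # [[0, 255, 255, 0]]
```
      </preformat>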
      <p>All closely located points (within a radius of 20 mm, i.e. 35 pixels) are grouped into spots; random spots of up to 3 pixels are discarded.</p>
      <p>The "centers of mass" of the found spots are determined through the moments (1).</p>
      <p>A fragment of the image around the spot - a square whose side exceeds the size of the spot detected at the 1st stage by approximately two times - is determined from the center of mass.</p>
      <p>We return to the processing of the halftone image. The image fragment selected in the previous step is smoothed. Smoothing is implemented with a discrete approximation of a Gaussian kernel with σ = 1. It is not advisable to use a larger value of σ, due to the loss of information about the real intensity of the spot points, which is important for solving the problem. Smoothing with large-radius filters suppresses noise but blurs the image.</p>
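      <p>A sketch of a discrete Gaussian kernel with σ = 1, normalized so that smoothing preserves overall intensity; the 5x5 size is an illustrative choice (cv2.GaussianBlur builds a similar kernel internally):</p>
      <preformat>
```python
import math

# Sketch: discrete approximation of a 2D Gaussian smoothing kernel with
# sigma = 1; the 5x5 size is an illustrative choice.
def gaussian_kernel(size=5, sigma=1.0):
    c = size // 2
    k = [[math.exp(-((x - c) ** 2 + (y - c) ** 2) / (2 * sigma ** 2))
          for x in range(size)]
         for y in range(size)]
    s = sum(map(sum, k))
    # Normalize so that smoothing does not change the overall intensity.
    return [[v / s for v in row] for row in k]

kernel = gaussian_kernel()   # weights sum to 1, peak at the center
```
      </preformat>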
      <p>Binarization of the halftone image fragment (2nd stage of processing) is performed with the TOZERO threshold type (see Fig. 6). This type of threshold zeroes the outer pixels of the spot while leaving the intensity of the pixels inside and at the edges of the spot in grayscale. The value of the lower threshold is chosen so that only points characterizing the spot remain within the image fragment; the remaining points become black (intensity 0). As a result, the image is divided into points displayed in shades of gray with gradations from 180 to 255 (see the test results below) and black points outside the spot, cut off by thresholds up to 180.</p>
      <p>The position of the center of gravity of the image fragment is determined through the moments (1).
After binarization, those pixels that have become black are not taken into account when calculating the
center of gravity of an image fragment, since their intensity is 0. Therefore, the center of gravity is
determined only by the points characterizing the spot.</p>
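      <p>The second stage can be sketched as follows: a pure-Python analogue of OpenCV's THRESH_TOZERO followed by the moment-based centroid of formula (1); the 3x3 fragment values are illustrative:</p>
      <preformat>
```python
# Sketch of the 2nd-stage threshold (a pure-Python analogue of OpenCV's
# THRESH_TOZERO): pixels at or below the lower threshold become 0, while
# brighter pixels keep their grayscale intensity.
def threshold_tozero(img, lower):
    return [[p if p > lower else 0 for p in row] for row in img]

# Intensity-weighted centroid of the fragment, as in formula (1);
# zeroed pixels contribute nothing.
def fragment_centroid(img):
    m00 = sum(map(sum, img))
    cx = sum(x * p for row in img for x, p in enumerate(row)) / m00
    cy = sum(y * p for y, row in enumerate(img) for p in row) / m00
    return cx, cy

# Illustrative 3x3 fragment: corners below the 180 threshold are zeroed.
fragment = [[170, 200, 170],
            [200, 255, 200],
            [170, 200, 170]]
fragment_centroid(threshold_tozero(fragment, 180))   # (1.0, 1.0)
```
      </preformat>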
      <p>The original image can be represented by one of the color models (RGB, HSV (Hue, Saturation,
Value), etc.)[13], [14].</p>
      <p>When using the RGB image model, it is difficult to give preference to any of the signals. Usually,
the image is converted to grayscale by the formula</p>
      <p>Y' = 0.299R + 0.587G + 0.114B (2)</p>
      <p>The disadvantage of such a conversion is that signals of different colors can acquire the same intensity in the grayscale image, so some of the information needed for determining boundaries by gradient methods may be lost.</p>
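      <p>A sketch of conversion (2) for a single pixel (the standard ITU-R BT.601 luma weights; rounding to an integer intensity is assumed):</p>
      <preformat>
```python
# Sketch of formula (2): luma-style grayscale value of one RGB pixel
# (ITU-R BT.601 weights; rounding to an integer intensity is assumed).
def to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

to_gray(255, 255, 255)   # 255 (the weights sum to 1)
to_gray(255, 0, 0)       # 76
```
      </preformat>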
      <p>When determining the centroid using the HSV model, the V (value, i.e. brightness) channel can be preferred, since the points of the laser spot are characterized by high brightness. However, the brightness of the spot points depends significantly on the external illumination. Therefore, the gradation range of the signal on the histogram may decrease if the lighting is too bright, and the image contrast deteriorates. In this case, it will be more difficult to calculate the gradient points. We also note that image contrast correction can be performed using linear histogram stretching [14], [15].</p>
    </sec>
    <sec id="sec-5">
      <title>5. Algorithm testing and results analysis</title>
      <p>The algorithm described above provides for the assignment of image binarization thresholds at 2
stages:
• When determining the position of the spot.
• When all points of the spot are selected and the position of the centroid is determined from
them.</p>
      <p>In order to determine the optimal lower thresholds for image binarization at the first and second
stages of image processing, as well as to evaluate the resulting accuracy of determining the centroid
coordinates, an experimental testing of the algorithm is carried out. To test the algorithm, an image of
the maximum possible complexity is selected - laser spots against the background of snow cover (see
Fig. 4a). Before binarization by BINARY type (see Fig. 6), a color image (RGB model) is converted to
grayscale (GRAY) according to formula (2).</p>
      <p>At the first stage (see Fig. 5), complete cutting off of noise (points that do not belong to laser spots) starts from a binarization threshold of 248 (Fig. 7).</p>
      <p>Figure 7: Determination of the binarization threshold (1st stage of processing), after which only the points of the laser spot remain on the image.</p>
      <p>At the second stage (see Fig. 5), cutting off noise (dots that do not belong to laser spots) may also
be necessary. This problem is solved, as in the first stage, by choosing the lower binarization threshold.
At the same time, the threshold value at the second stage can be significantly lower compared to the
binarization threshold at the first stage, since the area of the selected image fragment is much smaller
and therefore the probability of the appearance of noise points within the image fragment around the
spot is negligible.</p>
      <p>Fig. 8 shows the results of tests with a fragment of the image of the left spot (see Fig. 7a), enlarged in scale. They allow us to estimate how the position of the center of mass changes for different values of the lower binarization threshold (1st stage of processing).</p>
      <p>Fig. 9 shows how the value of the lower binarization threshold (2nd stage of processing) affects the position of the centroid.</p>
      <p>The top row (see Fig. 9) shows the results of tests to determine the centroid for an image fragment
with shades of gray and low lower intensity thresholds (130-180). Here, the accuracy of determining
the coordinates of the centroid goes beyond one pixel. The spread of coordinates is explained by the
fact that the intense points of the image fragment include, together with the points of the spot, the points
of the background.</p>
      <p>The middle row in Fig. 9 shows the results of tests with intensity thresholds from 180 to 220. As can be seen from the figure, the position of the centroid practically does not change at different thresholds. Here the thresholds are optimal - all spot points are captured and, basically, background points are cut off. Some background points still remain on the image fragment, but their influence on the position of the centroid is small due to their lower intensity compared to the central points of the spot.</p>
      <p>The bottom row in Fig. 9 shows the results of tests with intensity thresholds from 220 to 253. Here, the accuracy of determining the centroid coordinates is within one pixel. The spread of coordinates is explained by the fact that the intensities of the edge points of the spot are not taken into account. In addition, the shape of the spot at high thresholds is distorted due to the random spread of intensities around the center of the spot.</p>
      <p>Thus, the tests performed allow us to evaluate the accuracy that is achieved at the 1st and 2nd stages
of image processing.</p>
      <p>At the 1st stage, the position of the center of mass changes within one pixel (see Fig. 8) when the lower intensity threshold is selected from 246 to 254. An error of one pixel corresponds to approximately 1.3mm at a resolution of 1280*720 and a screen size of 1670mm*940mm.</p>
      <p>At the second stage, the position of the center of gravity remains stable at intensity thresholds from 180 to 220. Therefore, the algorithmic component of the error does not affect the shooting accuracy, since the spread of the actual values of the centroid coordinates is much less than the pixel size. Ultimately, the task is not to find the values of the centroid coordinates as real numbers (x, y), but to find the corresponding indices (i, j) of the central pixel of the spot, which are determined through mathematical rounding: i = Round(x / w), j = Round(y / w), where w is the pixel size.</p>
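      <p>This final rounding step can be sketched as follows; the sample coordinates and pixel size are illustrative:</p>
      <preformat>
```python
# Sketch: converting real-valued centroid coordinates (x, y) into the
# indices (i, j) of the central pixel; w is the pixel size in the same
# units as x and y. The sample values are illustrative.
def central_pixel(x, y, w):
    return round(x / w), round(y / w)

central_pixel(834.2, 470.9, 1.3)   # (642, 362)
```
      </preformat>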
      <p>Therefore, taking into account the wide range of admissible thresholds at both the first and second stages of image processing, as well as the fact that the tests were carried out on an image with a complex background texture, we can conclude that the accuracy of centroid determination by the proposed algorithm is quite high and does not depend significantly on external factors (background textures, lighting, etc.) [16]-[20].</p>
    </sec>
    <sec id="sec-6">
      <title>6. Discussion and conclusions</title>
      <p>The hardware and algorithmic components of the error in determining the centroid of the spot from the laser emitter of the small-arms simulator have been evaluated.</p>
      <p>Research has been carried out on image processing methods related to the problem of spot
recognition and determination of their characteristics.</p>
      <p>An algorithm for determining the spot centroid through two-stage image binarization has been
developed and tested, and optimal binarization thresholds for solving the problem have been
determined.</p>
      <p>The accuracy of the algorithm is sufficient - within the limits of the hardware component of the
error. The speed of the algorithm is high due to its simplicity.</p>
      <p>In further work, it is planned to refine the algorithm, providing the possibility of an adaptive choice of thresholds for binarization, as well as determining the contours of the spot using gradient methods.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <title>References</title>
      <ref id="ref1">
        <mixed-citation>[1] Yaremenko, S., Krak, I.: Determination of the position of the laser spot in the plane of the photo sensor of the multimedia shooting gallery. EUROCON 2021 - 19th IEEE International Conference on Smart Technologies, Proceedings, 531-536 (2021). doi:10.1109/EUROCON52738.2021.9535549</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Jedrasiak, K., Daniec, K., Sobel, D., Bereska, D., Nawrat, A.: The concept of development and test results of the multimedia shooting detection system. 2016 Future Technologies Conference (FTC), San Francisco, CA, USA, 1057-1064 (2016). doi:10.1109/FTC.2016.7821734</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>