=Paper= {{Paper |id=Vol-2344/short3 |storemode=property |title=Use of point clouds for video surveillance system cover zone imitation |pdfUrl=https://ceur-ws.org/Vol-2344/short3.pdf |volume=Vol-2344 |authors=Aleksandr Mezhenin,Vera Izvozchikova,Viktoriia Ivanova |dblpUrl=https://dblp.org/rec/conf/micsecs/MezheninII18 }} ==Use of point clouds for video surveillance system cover zone imitation== https://ceur-ws.org/Vol-2344/short3.pdf
       Use of point clouds for video surveillance system
                    cover zone imitation
 Aleksandr Mezhenin1[0000-0002-7150-9811], Vera Izvozchikova2[0000-0002-8707-9510] and
                    Viktoriia Ivanova 1[0000-0001-8247-7344]
1 ITMO University, Kronverksky Ave. 49, St. Petersburg, Russia
a.v.mezhenin@gmail.com, vikaaiv17@gmail.com
2 Orenburg State University, Ave. Pobedy 13, Orenburg, Russia
viza-8.11@mail.ru



      Abstract. The article is devoted to assessing the coverage areas of projected
      video surveillance systems, determining the optimal number of cameras, and
      identifying so-called "blind zones" at the design stage. For this purpose, it
      is proposed to use a three-dimensional modeling system in which a polygonal
      model of a video surveillance scene is created. The resulting model is
      converted from polygons to a point cloud. Using different camera locations,
      we obtain different coverage areas. During the experiments, the objects of
      observation and the surveillance cameras, whose locations were varied, were
      modeled. The resulting 2.5D surfaces, represented as point clouds, are
      transferred to a common coordinate system and compared. The comparison
      results are shown as a heat map. Experimental modeling was carried out using
      specially created objects in Autodesk 3ds Max and CloudCompare.

      Keywords: Video surveillance systems, Design, Virtual modeling, Point
      clouds, Heat maps.


1     Introduction

The main purpose of video surveillance systems is to provide visual inspection
of the object they are installed on. When designing such a system, it is
necessary to take into account the combination of factors that increase its
efficiency. One of the most important factors is the coverage of the observed
area by the cameras and the absence of so-called "blind zones". Their appearance
depends on many factors: the characteristics of the cameras being installed,
their location and number, and the data formats used [16].
   Obviously, the more cameras there are, the easier it is to place them so as
to obtain the most complete coverage of the observed scene, but this increases
the cost of the developed system. The proposed computer simulation of the
coverage area in the form of a point cloud of the observed surfaces of the 3D
space will, in the authors' opinion, allow preliminary estimates of the
effectiveness of the developed video surveillance system [16] to be obtained at
the design stage, which will improve design solutions. A mathematical model is
proposed, as well as algorithms for modeling a point cloud and obtaining heat
maps for comparing coverage areas.


2     Overview of analogues

As analogues, we consider special software designed for the development of
video surveillance systems. The first analogue is JVSG [13], a program for the
design of video surveillance systems. It allows the user to set the number and
location of surveillance cameras, calculate the developed system, determine the
viewing area, place cameras on an existing or newly created floor plan, and
place obstacles in three-dimensional space to identify blind spots. The result
of the calculation is a project that includes drawings of the camera coverage
zones, a block diagram, and an explanatory note.
   The second analogue is the D-Link Surveillance Floor Planner PRO [14]
software. This program also helps to create a camera deployment project for
video surveillance systems. It allows the user to select cameras, place them on
a floor plan, and visually display their coverage areas.
   Both of the reviewed programs allow adjusting camera settings, importing a
floor plan, and determining certain camera visibility areas. Thus, they make it
possible to evaluate effectiveness using such system capabilities as the choice
of cameras with specific settings, their location, and their cost. However,
coverage areas are presented in the form of plans, which, in the authors'
opinion, does not allow them to be evaluated fully.
   Currently, a widespread method of representing spatial objects is the point
cloud. A point cloud is a collection of vertices in three-dimensional or
two-dimensional space, obtained from an object and representing its surface.
The data for a point cloud can be obtained by scanning 3D objects with special
devices or by computer simulation. In addition to coordinates, points may carry
additional information, such as colors or normals.
   There are several packages available for visualizing point clouds: MeshLab,
CloudCompare, and the Point Cloud Library (PCL), which can be integrated with
OpenCV. At this stage, the program CloudCompare [15] was used, which allows
editing and processing 3D point clouds. CloudCompare supports import and export
of files in the OBJ, E57, ASCII, PLY, and other formats. The program also
allows the user to create cameras, place them in the workspace, and display
their visibility areas.
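   Point clouds exported in ASCII format are plain text with one record of
x, y, z (and optionally extra) values per line. A minimal loader for such files
might look like the following sketch; the function name, the comment convention,
and the sample data are our own illustrative assumptions, not part of any tool's
API:

```python
def load_ascii_cloud(lines):
    """Parse an ASCII point cloud: one 'x y z [extras...]' record per line.

    Empty lines and lines starting with '//' (a common comment style in
    ASCII cloud exports) are skipped; only the first three columns are
    kept as coordinates.
    """
    points = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("//"):
            continue
        parts = line.split()
        x, y, z = (float(v) for v in parts[:3])
        points.append((x, y, z))
    return points


# Example: a tiny cloud with a comment line and per-point color columns.
raw = [
    "// X Y Z R G B",
    "0.0 0.0 0.0 255 0 0",
    "1.0 0.5 0.2 0 255 0",
]
cloud = load_ascii_cloud(raw)  # -> [(0.0, 0.0, 0.0), (1.0, 0.5, 0.2)]
```

The extra color columns are simply ignored here; a real pipeline would map
them onto per-point attributes.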


3     Mathematical formulation of the surface comparison problem

For a three-dimensional one-sided surface constructed from a point cloud $S$
containing $N$ points, there exists a coordinate system $Oxyz$, constructed in
such a way that the point cloud $S = \{(x_i, y_i, z_i)\}_{i=1}^{N}$ can be seen
as a function $z = f(x, y)$ defined on the discrete point set
$\{(x_i, y_i)\}_{i=1}^{N}$.
   Here is an example. Two one-sided surfaces, $S_1$ and $S_2$, are formed
respectively from the point clouds $\{(x_i^1, y_i^1, z_i^1)\}_{i=1}^{N_1}$ and
$\{(x_i^2, y_i^2, z_i^2)\}_{i=1}^{N_2}$ in the $Oxyz$ coordinate system, so
that $S_1$ and $S_2$ are projected onto the plane $Oxy$ in $E^3$. It is
required to find a measure for comparing the surfaces $S_1$ and $S_2$ that
satisfies the semimetric axioms, and to calculate it [1].
   As the quantity that shows the difference between the clouds we will use
$d(x, y)$, the Euclidean distance between two points of the clouds, calculated
by the following formula:

$$d(x, y) = \sqrt{\sum_{i=1}^{3} (x_i - y_i)^2}. \tag{1}$$

   We denote the set of pairs of closest points of the two clouds $S_1$ and
$S_2$ by $\{(s_i^1, s_i^2)\}_{i=1}^{N}$, and let $E$ be the error measure,
which is then lowered gradually to the required value:

$$E = \frac{1}{N} \sum_{i=1}^{N} d(s_i^1, s_i^2) \to \min. \tag{2}$$
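   The measure $E$ of formula (2), the mean distance between paired closest
points, can be computed directly with a brute-force nearest-neighbor search. A
minimal sketch in plain Python (the helper names are ours, chosen for
illustration):

```python
import math


def dist(p, q):
    """Euclidean distance between two 3D points, as in formula (1)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))


def mean_closest_distance(cloud1, cloud2):
    """The measure E from formula (2): for every point of cloud1, find its
    nearest neighbor in cloud2 (brute force), then average the distances."""
    total = 0.0
    for p in cloud1:
        total += min(dist(p, q) for q in cloud2)
    return total / len(cloud1)


# Two tiny clouds offset by 1 unit along z.
s1 = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
s2 = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
print(mean_closest_distance(s1, s2))  # 1.0: each point is 1 unit away
```

The brute-force search is quadratic in the cloud sizes; practical tools such as
CloudCompare use spatial index structures for the same computation.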


4        Algorithm for comparing object surfaces

The iterative closest point (ICP) algorithm is the most fitting one for
comparing two different point clouds of the same object. The algorithm is based
on minimizing the average distance between the two point clouds: a special
procedure is used to minimize the distance between the two clouds in question.
The algorithm can be used to compare point clouds based on the same object
after some changes have been made to it, and it also works if the object was
scanned from a different angle. In either case, the point clouds must have
similar areas that cover each other. The algorithm moves and adjusts these
corresponding areas until they overlap to the extent denoted by the margin of
error E.
   For the algorithm to start, an initial estimate of the similarity between
the clouds is required, which becomes more accurate as the cycle is repeated.
The two point clouds, $S_1$ and $S_2$, gradually become one cloud. Maximal
accuracy is achieved if the positions of the points in space were similar
initially and the similar areas of the clouds are large. The pairs of closest
points located in the similar areas of the clouds must be picked in a quantity
that does not exceed a set limit: if the similar areas contain too many pairs
of points, the accuracy of the algorithm becomes much lower.
   During the process, the error margin should be lowered as much as possible:

                                       𝐸 → 𝑚𝑖𝑛                                  (3)

   The proposed algorithm for solving the given problem includes the following
steps:
   1. Find the corresponding closest points $(s_i^1, s_i^2)$, such that

$$D(s^1, s^2) = \|s^1 - s^2\|, \tag{4}$$

$$(s_i^1, s_i^2) = \operatorname*{argmin}_{s^1 \in S_1,\; s^2 \in S_2} D(s^1, s^2). \tag{5}$$

   Here $s_i^2$ denotes a given point in the space $\mathbb{R}^3$, for which we
must find the closest point $s_i^1$ of the set $S_1$; that is, we need to find
the point $s_i^1$ that makes the functional (5) as small as possible.
   2. Find the optimal transformation of the first point cloud, a rotation and
a change of the points' positions, using the least squares method. This step is
repeated until the error margin E is satisfied. We obtain the translation
vector $t$ and the rotation matrix $R$ that minimize the functional:

$$\mathcal{J}(R, t) = \sum_{i=1}^{N} \|(R s_i^1 + t) - s_i^2\|_2^2, \tag{6}$$

$$(R, t) = \operatorname*{argmin}_{R \in SO(3),\; t \in \mathbb{R}^3} \mathcal{J}(R, t), \tag{7}$$

   where the matrix $R$ is an orthogonal $3 \times 3$ matrix whose determinant
equals 1, and therefore belongs to the rotation group $SO(3)$;
   the vector $t$ describes the translation of the set of points along the
vector $[x, y, z]^T$;
   $\|\cdot\|_2^2$ is the square of the $l_2$ norm, i.e. the square of the
Euclidean distance between two given points.
   3. Adjust the point cloud being transformed using the rotation matrix and
translation vector found in the previous steps, thus obtaining a new point
cloud:

$$s_i^1 \leftarrow R s_i^1 + t. \tag{8}$$

    4. Repeat steps 1 through 3.

   The algorithm stops only when the degree of similarity imposed by the error
margin E is met. The remaining difference between the positions of the points
in clouds S1 and S2 is the quantity we sought to find. This amount can also be
expressed as a percentage of the original distance between the points.
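   The loop of steps 1 through 4 can be sketched in plain Python. For brevity
the sketch is restricted to the plane, where the least-squares problem of step
2 has a simple closed form (the optimal angle is the atan2 of the cross- and
dot-product sums of the centered pairs); all helper names are ours, and a full
3D implementation would solve step 2 with an SVD instead:

```python
import math


def icp_2d(src, dst, iterations=20):
    """Simplified 2D ICP: repeat closest-point matching (step 1),
    least-squares rotation and translation (step 2), and update (step 3)."""
    src = [list(p) for p in src]
    for _ in range(iterations):
        # Step 1: for each source point, find the closest destination point.
        pairs = [(p, min(dst, key=lambda q: (p[0]-q[0])**2 + (p[1]-q[1])**2))
                 for p in src]
        # Step 2: closed-form least-squares R, t. Center both point sets,
        # then recover the optimal rotation angle from the centered pairs.
        n = len(pairs)
        csx = sum(p[0] for p, _ in pairs) / n
        csy = sum(p[1] for p, _ in pairs) / n
        cdx = sum(q[0] for _, q in pairs) / n
        cdy = sum(q[1] for _, q in pairs) / n
        s_cross = sum((p[0]-csx)*(q[1]-cdy) - (p[1]-csy)*(q[0]-cdx)
                      for p, q in pairs)
        s_dot = sum((p[0]-csx)*(q[0]-cdx) + (p[1]-csy)*(q[1]-cdy)
                    for p, q in pairs)
        a = math.atan2(s_cross, s_dot)
        c, s = math.cos(a), math.sin(a)
        tx = cdx - (c*csx - s*csy)
        ty = cdy - (s*csx + c*csy)
        # Step 3: apply the transform (formula (8)) to the source cloud.
        src = [[c*x - s*y + tx, s*x + c*y + ty] for x, y in src]
    return src


# Usage: register a rotated and shifted copy of a small cloud back onto it.
dst = [[0, 0], [2, 0], [2, 1], [0, 1], [1, 3]]
a = math.radians(10)
src = [[math.cos(a)*x - math.sin(a)*y + 0.5,
        math.sin(a)*x + math.cos(a)*y - 0.3] for x, y in dst]
aligned = icp_2d(src, dst)  # aligned is close to dst again
```

A fixed iteration count stands in for the stopping condition on E; a full
implementation would also track the residual E between iterations and stop
once it falls below the required margin.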


5      Experimental results

Figure 1 (a) shows the point cloud S1, which was constructed from a three-
dimensional scene in the point cloud processing software CloudCompare. Figure 1
(b) shows another point cloud, which imitates a scan of the same object from a
different angle.




                    a)                                          b)

                      Fig. 1. Point cloud S1 (a); point cloud S2 (b).

   The point cloud $S_2$ was generated by applying rotation and translation
operations, based on the matrix shown below, to the original point cloud $S_1$:

$$\begin{pmatrix} 0.86 & 0.5 & 0 & 60 \\ -0.5 & 0.86 & 0 & -10 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$$

   This homogeneous transformation matrix contains two parts: a rotation matrix

$$R = \begin{pmatrix} 0.86 & 0.5 & 0 \\ -0.5 & 0.86 & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

   and a translation vector

$$t = \begin{pmatrix} 60 \\ -10 \\ 0 \end{pmatrix}.$$
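   Applying such a homogeneous matrix to a point amounts to computing
$Rs + t$. A small sketch in plain Python (the function name and the sample
point are our own illustrative choices):

```python
def transform(point, R, t):
    """Apply a rigid transform p' = R p + t, i.e. the action of the 4x4
    homogeneous matrix on a 3D point."""
    x, y, z = point
    return tuple(R[i][0]*x + R[i][1]*y + R[i][2]*z + t[i] for i in range(3))


# The rotation matrix and translation vector used in the experiment above.
R = [[0.86, 0.5, 0.0],
     [-0.5, 0.86, 0.0],
     [0.0, 0.0, 1.0]]
t = [60.0, -10.0, 0.0]

print(transform((1.0, 2.0, 3.0), R, t))  # approximately (61.86, -8.78, 3.0)
```

Generating $S_2$ from $S_1$ is just this operation applied to every point of
the cloud.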

   Figure 2 shows the relative position of clouds S1 and S2, both for a view
from a random angle (see Fig. 2 (a)) and for a view from the top (see Fig. 2
(b)).




                    a)                                          b)
    Fig.2. Point clouds S1 and S2, view from a random angle (a), view from the
                                      top (b).

Figure 3 displays a graphic rendering of the distance of point cloud S1
relative to point cloud S2. This distance is calculated as the distance between
corresponding points, shown both from a random viewing angle (see Fig. 3 (a))
and from the top (see Fig. 3 (b)). The areas colored blue correspond to the
minimal distance between similar points on the two surfaces, while areas
colored red denote the maximal distance. The mapping of distance to color is
shown on the scale in the right part of the figure.




                    a)                                           b)

 Fig.3. Distance between point clouds S1 and S2, random angle view (a), top
                                  view (b).

The initial average distance between the points of cloud S1 and cloud S2 was
26.231, with a standard deviation of 21.597; for S2 relative to S1, the average
distance was 26.894, with a standard deviation of 22.061.
   Figure 4 shows the results of applying the point cloud registration
operation, which utilizes the ICP algorithm, in the cloud processing program
CloudCompare, for an arbitrary view (see Fig. 4 (a)) and a top view (see Fig.
4 (b)).




                    a)                                           b)

 Fig. 4. The result of applying the ICP algorithm to point clouds S1 and S2,
                        arbitrary view (a), top view (b).

The calculated distance of point cloud S1 relative to point cloud S2, and of
point cloud S2 relative to point cloud S1, is displayed on Figure 5, both from
an arbitrary view (see Fig. 5 (a)) and from a top view (see Fig. 5 (b)).




                    a)                                        b)

 Fig.5. Distances between point clouds S1 and S2 after registration by using
the ICP algorithm, rendered from a random angle view (a), and top view (b).

   After registration, the average distance from point cloud S1 to point cloud
S2 is 0.315, with a standard deviation of 0.32; for S2 relative to S1, the
average distance is 0.322, with a standard deviation of 0.325.
   As a result of the execution of the ICP algorithm, the following rotation
matrix was applied to the point cloud $S_2$:

$$R = \begin{pmatrix} 0.865 & -0.502 & 0.000 \\ 0.502 & 0.865 & 0.000 \\ 0.000 & 0.000 & 1.000 \end{pmatrix}$$

   together with the translation vector:

$$t = \begin{pmatrix} -56.917 \\ -21.375 \\ 0.878 \end{pmatrix}.$$

    The results fit the allowed margin of error.
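   As a sanity check (our own, not part of the paper's procedure), the
transform recovered by ICP should approximately invert the transform originally
applied to $S_1$: composing the two rotations should give roughly the identity
matrix, and the recovered translation should be close to $-R_{icp}\,t_{orig}$.
A short sketch using the numbers reported above:

```python
# Verify that the ICP-recovered transform approximately undoes the transform
# originally applied to the cloud. The numbers are taken from the experiment.
R_orig = [[0.86, 0.5, 0.0], [-0.5, 0.86, 0.0], [0.0, 0.0, 1.0]]
t_orig = [60.0, -10.0, 0.0]
R_icp = [[0.865, -0.502, 0.0], [0.502, 0.865, 0.0], [0.0, 0.0, 1.0]]

# R_icp @ R_orig should be close to the 3x3 identity matrix.
prod = [[sum(R_icp[i][k] * R_orig[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]

# -R_icp @ t_orig should be close to the recovered translation
# t = (-56.917, -21.375, 0.878) reported above.
t_pred = [-sum(R_icp[i][k] * t_orig[k] for k in range(3)) for i in range(3)]
print(t_pred)  # roughly [-56.92, -21.47, 0.0]
```

The small residuals come from the rounded matrix entries (e.g. 0.86 and 0.5 do
not form an exactly orthonormal pair) and from the registration error itself.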


6      Conclusion

The proposed approach to modeling coverage areas and their visualization will
allow obtaining more effective design solutions for video surveillance systems.
The possibility of using universal 3D modeling and data processing systems for
solving a specific applied problem has been shown. The comparison results,
presented in the form of a heat map, make it possible to estimate the degree of
coverage of the observation zone and to reveal the presence of blind zones.
These provisions form the basis for further research in this area.

Acknowledgments. The research has been supported by the RFBR, according
to research project No. 17-07-00700 A.


References

 1. Dyshkant, N.: Measures for Surface Comparison on Unstructured Grids with Differ-
    ent Density. Lecture Notes in Computer Science: Discrete Geometry for Computer
    Imagery, Vol. 6607, pp. 501–512 (2011).
 2. Dyshkant, N.: Disparity Measure Construction for Comparison of 3D Objects' Sur-
    faces. In: Proceedings of the Workshop IMTA, Lisbon, Portugal: INSTICC Press, pp.
    43–52 (2009).
 3. Dyshkant, N.: An algorithm for calculating the similarity measures of surfaces rep-
    resented as point clouds. Pattern Recognition and Image Analysis: Advances in
    Mathematical Theory and Applications, vol.20, No. 4, pp. 495–504 (2010).
 4. Tomaka, A.: The application of 3D surfaces scanning in the facial features analysis.
    Journal of Medical Informatics and Technologies, pp. 233–240 (2005).
 5. Szymczak, A., Rossignac, J., King, D.: Piecewise regular meshes: Construction and
    compression. Graphical Models, vol. 64, No. 3-4, pp. 183–198 (2002).
 6. Stepanyants, D.G., Knyaz, V.A.: PC-Based Digital Close-Range Photogrammetric
    System for Rapid 3D Data Input in CAD Systems. International Archives of Pho-
    togrammetry and Remote Sensing, vol. 33, part B5, pp. 756–763 (2000).
 7. Mitra, N.J., Guibas, L.J., Pauly, M.: Partial and approximate symmetry detection
    for 3D geometry. In proceedings ACM SIGGRAPH, pp. 560–568 (2006).
 8. Liu, Y., Rodrigues, M.A.: Geometrical analysis of two sets of 3D correspondence
    data patterns for the registration of free-form shapes. J. Int. and Rob. Systems, vol.
    33, pp. 409–436 (2002).
 9. Knyaz, V.A., Zheltov, S.Yu.: Photogrammetric Techniques for Dentistry Analysis,
    Planning and Visualisation. In: Proceedings ISPRS Congress Beijing 2008, Proceed-
    ings of Commission V, pp. 783–788 (2008).
10. Koidis, P., Patias, P., Tsioukas, V.: 3D Visualization of Dental Data for Virtual
    Treatment Planning. In: Proceedings ISPRS Congress Istanbul 2004, Proceedings of
    Commission V, pp. 996–1001 (2004).
11. MathWorks - Makers of MATLAB and Simulink, URL: https://www.mathworks.com/.
12. Unity, URL: https://unity3d.com/ru.
13. JVSG: CCTV Design Software, URL: http://www.jvsg.com/.
14. D-Link Surveillance Floor Planner, URL: http://tools.dlink.com/intro/sfp/.
15. CloudCompare - Open Source project, URL: https://www.danielgm.net/cc/.
16. Shardakov, V.M., Parfenov, D.I., Zaporozhko, V.V., Izvozchikova, V.V.:
    Development of an Adaptive Module for Visualization of the Surrounding Space
    for Cloud Educational Environment. In: 11th International Conference Management
    of Large-Scale System Development, MLSD (2018).