  Determination of the Spatial Orientation of Objects in
                Automated Production

          Rahim Mammadov[0000-0003-4354-3622], Timur Aliyev[0000-0001-9347-3904]

   “Instrumentation Engineering” Department, Azerbaijan State Oil and Industry University,
                                  Baku, Azerbaijan
                      rahim1951@mail.ru, a_tima1@mail.ru



       Abstract. At present, industrial enterprises manufacture products with a con-
       stantly changing nomenclature by introducing robotic systems with elements of
       artificial intelligence into the production process. Such systems increase
       productivity under conditions where it is difficult or impossible for a person
       to perform certain production operations.
          The most important production stages where industrial robots are needed are
       the machining of workpieces and their assembly. At these stages, the work-
       pieces must enter the working area in strict sequence, at the right time and
       place, and in a defined position.
          However, the position of parts may change during delivery, which makes
       identification systems necessary. Mechanical orientation systems have proven
       themselves well, but they are cumbersome and difficult to maintain.
          The use of technical vision systems, which allow both recognition and spa-
       tial orientation of an object, appears promising. The study is based on a sys-
       tem of three equations obtained by the authors that describes how the moments
       of inertia of a plane figure change during its triple rotation in space.
          Triple rotation means the rotation of an object around three mutually per-
       pendicular coordinate axes OX, OY and OZ at certain angles α, β and γ, re-
       spectively.
          By solving this system of equations for the variables α, β and γ, the spa-
       tial orientation of the object can be determined.
          The theoretical results were verified by computer modeling of the proposed
       method for determining the position of spatial objects. The experiments were
       carried out on four reference images rotated in space at arbitrary angles.
          The simulation results showed that the proposed system of equations, de-
       scribing the change in the moments of inertia of a plane figure during its
       triple rotation in space, is valid for each rotated image.
          This approach can be used in flexible automated production to determine
       the spatial position of workpieces entering the working area of a processing
       or assembly machine.

       Keywords: industrial robots, object recognition, object features, moment of inertia



Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons
License Attribution 4.0 International (CC BY 4.0). ICST-2020
1      Introduction

   The application of artificial intelligence technology in robotic systems makes it
possible to increase efficiency by adapting to constant changes in working conditions
and to achieve greater manufacturing flexibility [1, 2]. Intelligent multi-robot sys-
tems are in demand above all for work under restricted-access conditions and where
effective decisions must be made with minimal participation of a human operator [3, 4].
   Contemporary robotic systems are widely used in industrial production [5, 6]. They
have great functional flexibility thanks to advanced actuating mechanisms, micropro-
cessor control systems with developed software, vision and other sensors, and adaptive
capabilities, and they can replace people in various types of operations [7, 8]. In
addition, computer-aided systems are actively used in the food industry for sorting
food products [9].


2      Problem statement

    Technical vision systems are among the general-purpose subsystems of intelligent
robots [10, 11]. They solve such problems as finding the object of study, identifying
it, determining its coordinates in the manipulator's working area and the geometric
parameters needed to grasp it, as well as industrial assembly and quality control
[4, 9].
    In the manufacturing process, the main task of industrial robots is to place pre-
oriented workpieces in the working area of the machine and to remove finished parts
from the machine with subsequent placement on a conveyor line. The parts are deliv-
ered to the assembly area strictly in sequence, i.e., at the right time and place
[12, 13].
    However, during transportation from one working area to another, external factors
may change the spatial orientation of products. It is therefore promising to identify
the blanks at each stage of processing.
    A number of methods have been developed for recognizing spatial images of objects,
but each has its shortcomings. In [11], parallel translation and rotation of the im-
age, a special case of spatial displacement, are studied. In [14, 15] the analysis
relies on reference points whose numbering must remain constant under spatial distor-
tions, which is not always achievable. In [16, 17], image analysis is carried out
along the contour, which is the feature most sensitive to distorting factors. In [18],
moments of inertia are considered as the main features; however, an analysis of their
properties showed that in this case the features are integral over a wide range of
objects, which makes it difficult to recognize objects within one cluster.
    Recognizing a spatial object in images is significantly complicated by changes in
all of its geometric features, since the image undergoes distortions such as rotation
and shear. Thus, the task of identifying features invariant to spatial distortions of
object images has not been solved completely.


3      Problem solving method

  In this article, statistical moments are proposed to be used as reference points for
the image, and on the basis of this principle an effective method for determining the
orientation of objects in space has been developed.
  One of the methods for recognizing an object located arbitrarily in space is to rec-
ognize two-dimensional images of its sides by comparing them with reference images.
  In essence, the side image of an object is a flat closed single-contour or multi-
contour figure. Thus, the process of recognizing an object located arbitrarily in
space can be reduced to recognizing several plane figures located arbitrarily in
space. If the flat figure is a solid body, then its position in space can be analyzed
through the position of a fixed marker point located on the figure. In this case, an
arbitrary location of the plane figure, and consequently of the marker point, is con-
sidered as rotations of the plane figure around three coordinate axes. The position of
the plane figure is analyzed through the position of the projection of the marker
point onto the frontal plane; the origin of coordinates is placed at the center of the
plane figure.
   In the course of the research [19], the authors obtained the dependence of the
change in the coordinates of the marker point projection during the rotation of the
frontal plane around the horizontal axis OX, followed by rotation around the vertical
axis OY:
                                x2 = x0 · cos β,                                 (1)

                                y2 = y0 · cos α − x0 · sin α · sin β,            (2)

where x0, y0 are the coordinates of the marker point on the initial frontal plane; x2,
y2 are the coordinates of the marker point after its double rotation; α is the rota-
tion angle around the horizontal axis; β is the rotation angle around the vertical axis.
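The projection formulas (1)–(2) can be reproduced by composing two elementary 3-D rotation matrices and dropping the depth coordinate. The sketch below (Python with NumPy) is our own check, not part of the paper; the rotation order and sign conventions are an assumption, chosen so that the matrices reproduce Eqs. (1)–(2):

```python
import numpy as np

def project_after_double_rotation(x0, y0, alpha, beta):
    """Rotate the point (x0, y0, 0) around OY by beta, then around OX by
    alpha, and project the result back onto the frontal (XY) plane."""
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(alpha), -np.sin(alpha)],
                   [0, np.sin(alpha),  np.cos(alpha)]])
    Ry = np.array([[np.cos(beta), 0, -np.sin(beta)],
                   [0,            1,  0],
                   [np.sin(beta), 0,  np.cos(beta)]])
    p = Rx @ Ry @ np.array([x0, y0, 0.0])
    return p[0], p[1]            # drop the depth coordinate

# Check against Eqs. (1)-(2) for arbitrary illustrative values
x0, y0, a, b = 3.0, 2.0, 0.4, 0.7
x2, y2 = project_after_double_rotation(x0, y0, a, b)
assert np.isclose(x2, x0 * np.cos(b))
assert np.isclose(y2, y0 * np.cos(a) - x0 * np.sin(a) * np.sin(b))
```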
   After integrating expressions (1) and (2) for all figures, the inertia moment of the
plane figure is obtained after its double rotation in space:

         JX2 = cos³α · cos β · JX0 − 2 cos²α · sin α · sin β · cos β · JX0Y0 +

          + cos α · sin²α · cos β · sin²β · JY0,                                 (3)

         JY2 = cos α · cos³β · JY0,                                              (4)

         JX2Y2 = cos²α · cos²β · JX0Y0 − cos α · sin α · cos²β · sin β · JY0,    (5)

where JX0, JY0, JX0Y0 are, respectively, the moments of inertia of the initial figure
about the axes OX and OY and its centrifugal moment of inertia; JX2, JY2, JX2Y2 are,
respectively, the moments of inertia of the plane figure after its double rotation in
space about the axes OX and OY and its centrifugal moment of inertia.
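Equations (3)–(5) can be verified numerically: discretize a flat figure into unit-area elements, project each element with Eqs. (1)–(2), scale the summed moments by the area-compression factor cos α · cos β of the projection, and compare with the closed-form expressions. The harness below is our own sketch (the random point cloud merely stands in for a pixelated figure):

```python
import numpy as np

rng = np.random.default_rng(0)

# A random centred "plate": each point stands in for a unit-area pixel.
pts = rng.uniform(-1, 1, size=(5000, 2))
pts -= pts.mean(axis=0)                      # origin at the figure's centre
x0, y0 = pts[:, 0], pts[:, 1]

JX0, JY0, JX0Y0 = (y0**2).sum(), (x0**2).sum(), (x0 * y0).sum()

a, b = 0.5, 0.3                              # alpha, beta in radians
ca, sa, cb, sb = np.cos(a), np.sin(a), np.cos(b), np.sin(b)

# Projected coordinates, Eqs. (1)-(2)
x2 = x0 * cb
y2 = y0 * ca - x0 * sa * sb

# Each area element shrinks by cos(alpha)*cos(beta) under the projection.
scale = ca * cb
JX2 = scale * (y2**2).sum()
JY2 = scale * (x2**2).sum()
JX2Y2 = scale * (x2 * y2).sum()

# Closed-form values, Eqs. (3)-(5)
JX2_f = ca**3*cb*JX0 - 2*ca**2*sa*sb*cb*JX0Y0 + ca*sa**2*cb*sb**2*JY0
JY2_f = ca*cb**3*JY0
JX2Y2_f = ca**2*cb**2*JX0Y0 - ca*sa*cb**2*sb*JY0

assert np.isclose(JX2, JX2_f)
assert np.isclose(JY2, JY2_f)
assert np.isclose(JX2Y2, JX2Y2_f)
```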
   As is known [20], when the axes lying in the plane of a section are rotated around
the axis perpendicular to that plane, the moments of inertia of the section are relat-
ed by equations (6) ÷ (8):

                    JU = JX · cos²γ + JY · sin²γ − JXY · sin 2γ,                 (6)

                    JV = JX · sin²γ + JY · cos²γ + JXY · sin 2γ,                 (7)

                    JUV = JXY · cos 2γ + ((JX − JY) / 2) · sin 2γ,               (8)

where JX, JY, JXY are the axial and centrifugal moments of inertia of the section with
respect to the initial axes; JU, JV, JUV are the axial and centrifugal moments of in-
ertia with respect to the rotated axes; γ is the rotation angle.
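A useful sanity check on Eqs. (6)–(8) is that the polar moment JU + JV = JX + JY is invariant under any in-plane rotation γ, and that JUV reduces to JXY at γ = 0. A minimal sketch with illustrative values (not taken from the paper):

```python
import numpy as np

def rotated_axis_moments(JX, JY, JXY, gamma):
    """Moments of inertia about axes rotated by gamma in the section
    plane, per Eqs. (6)-(8)."""
    c2, s2 = np.cos(2 * gamma), np.sin(2 * gamma)
    JU = JX * np.cos(gamma)**2 + JY * np.sin(gamma)**2 - JXY * s2
    JV = JX * np.sin(gamma)**2 + JY * np.cos(gamma)**2 + JXY * s2
    JUV = JXY * c2 + (JX - JY) / 2 * s2
    return JU, JV, JUV

JU, JV, JUV = rotated_axis_moments(8.0, 3.0, 1.5, np.deg2rad(25))
# The polar moment is invariant under in-plane axis rotation.
assert np.isclose(JU + JV, 8.0 + 3.0)
# At gamma = 0 the centrifugal moment is unchanged.
assert np.isclose(rotated_axis_moments(8.0, 3.0, 1.5, 0.0)[2], 1.5)
```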
   Substituting expressions (3) ÷ (5) into equations (6) ÷ (8) and denoting the con-
stants JU, JV, JUV as JX3, JY3, JX3Y3, respectively, the dependence of the moments of
inertia of the plane figure on its triple rotation in space is obtained:

                    JX3 = f(α, β, γ, JX0, JY0, JX0Y0),                           (9)

                    JY3 = f(α, β, γ, JX0, JY0, JX0Y0),                           (10)

                    JX3Y3 = f(α, β, γ, JX0, JY0, JX0Y0).                         (11)

  As a result, a system of three equations in the three unknowns α, β and γ is ob-
tained. Solving the system of equations (9) ÷ (11) yields the position of the object's
side in space and, consequently, the position of the object itself.
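One way to sketch the inversion of system (9)–(11) numerically is to build the forward model by substituting (3)–(5) into (6)–(8) and then search for the angles that reproduce the measured moments. The coarse grid search below is our own simplification of the inversion step (a practical system would refine the best grid cell with a local solver); the initial moments are illustrative values only:

```python
import numpy as np

def moments_after_triple_rotation(a, b, g, JX0, JY0, JX0Y0):
    """Forward model: Eqs. (3)-(5) substituted into Eqs. (6)-(8)."""
    ca, sa, cb, sb = np.cos(a), np.sin(a), np.cos(b), np.sin(b)
    JX2 = ca**3*cb*JX0 - 2*ca**2*sa*sb*cb*JX0Y0 + ca*sa**2*cb*sb**2*JY0
    JY2 = ca*cb**3*JY0
    JXY2 = ca**2*cb**2*JX0Y0 - ca*sa*cb**2*sb*JY0
    JX3 = JX2*np.cos(g)**2 + JY2*np.sin(g)**2 - JXY2*np.sin(2*g)
    JY3 = JX2*np.sin(g)**2 + JY2*np.cos(g)**2 + JXY2*np.sin(2*g)
    JX3Y3 = JXY2*np.cos(2*g) + (JX2 - JY2)/2*np.sin(2*g)
    return JX3, JY3, JX3Y3

JX0, JY0, JX0Y0 = 8.0, 3.0, 1.5                  # illustrative initial moments
true = np.deg2rad([30.0, 20.0, 40.0])            # hidden (alpha, beta, gamma)
target = moments_after_triple_rotation(*true, JX0, JY0, JX0Y0)

# Coarse grid search over (alpha, beta, gamma) in [0, 90) degrees.
grid = np.deg2rad(np.arange(0, 90, 5.0))
A, B, G = np.meshgrid(grid, grid, grid, indexing="ij")
JX3, JY3, JX3Y3 = moments_after_triple_rotation(A, B, G, JX0, JY0, JX0Y0)
resid = (JX3 - target[0])**2 + (JY3 - target[1])**2 + (JX3Y3 - target[2])**2
i, j, k = np.unravel_index(resid.argmin(), resid.shape)
assert np.allclose(np.rad2deg([grid[i], grid[j], grid[k]]), [30, 20, 40])
```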


4       Computer Simulations
    In order to check the obtained theoretical results, computer simulation was carried
out.
    For a more detailed analysis of the proposed method, it is desirable to study ob-
jects that belong to the same cluster but differ significantly in shape. The "shape
indicator" ρ, defined by the expression [21], was chosen as the feature characterizing
the shape of the object:

                                  ρ = Perimeter² / Area                          (12)

    Studies have shown that for real parts produced in flexible automated manufactur-
ing, the shape indicator does not exceed 80. Therefore, to exercise the method over a
wider range of shapes, abstract images were used in the computer modeling. For sim-
plicity, stylized animal figures were chosen as images of the object.
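Expression (12) is easy to evaluate for familiar shapes, which gives a feel for the scale of the indicator: a circle attains the theoretical minimum 4π ≈ 12.57, and less compact contours score higher (the values in Table 1 range from 41 to 160). A short illustrative sketch:

```python
import math

def shape_indicator(perimeter, area):
    """Shape indicator rho = perimeter^2 / area, Eq. (12)."""
    return perimeter**2 / area

# A circle is the most compact figure: rho = 4*pi, independent of radius.
r = 3.0
rho_circle = shape_indicator(2 * math.pi * r, math.pi * r**2)   # 4*pi

# A square of side s gives rho = (4s)^2 / s^2 = 16 for any s.
rho_square = shape_indicator(4.0, 1.0)
```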
   Figure 1 presents four flat figures, which are the original images.
                                    Fig. 1. Original images (panels a–d)

   Table 1 summarizes the main parameters of these figures.

                            Table 1. Parameters of original images
     Figure       Object height,    Object width,    Object area,    Object shape
                  pixels            pixels           pixels          indicator (ρ)
    Fig. 1,a.          254               214             30211             41
    Fig. 1,b.          254               356             30718             79
    Fig. 1,c.          254               123              8766            119
    Fig. 1,d.          254               201             23992            160

    At the first stage, using the AutoCAD system, the original images were rotated in
the image plane at angles γ in increments of 40º (40º; 80º; 120º; 160º; 200º; 240º;
280º; 320º). As a result, 8 new images were obtained. Further, the moments of inertia
(JX3_meas, JY3_meas, JXY3_meas) were measured for the obtained images. In addition,
using formulas (9)÷(11), the moments of inertia (JX3_calc, JY3_calc, JXY3_calc) were
calculated for these images with α=0 and β=0. The relative divergences (D) between the
corresponding moments of inertia were also calculated:

  DX = (JX3_meas − JX3_calc) / JX3_meas;   DY = (JY3_meas − JY3_calc) / JY3_meas;
  DXY = (JXY3_meas − JXY3_calc) / JXY3_meas.                                     (13)
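The divergence of formula (13) is a plain signed relative error. A one-line helper makes this explicit (the numeric values below are illustrative, not taken from the experiments):

```python
def relative_divergence(measured, calculated):
    """Relative divergence D of Eq. (13): (measured - calculated) / measured."""
    return (measured - calculated) / measured

# Illustrative values only: a calculated moment 1% below the measured one.
D = relative_divergence(30000.0, 29700.0)
```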
    Figure 2 presents the graphs of the dependence of relative divergences D on the
rotation angle γ for the moments of inertia JX3, JY3, JXY3, respectively.
    At the second stage, using the AutoCAD system, the original images were rotated
around the horizontal axis at angles α in increments of 10º (10º; 20º; 30º; 40º; 50º;
60º; 70º; 80º). As a result, 8 new images were obtained. Further, similarly to the
first stage, the moments of inertia (JX3_meas, JY3_meas, JXY3_meas) were measured for
the obtained images; according to formulas (9)÷(11), the moments of inertia (JX3_calc,
JY3_calc, JXY3_calc) were calculated for these images with β=0 and γ=0; and according
to formulas (13), the relative divergences (D) between the corresponding moments of
inertia were calculated.
    Figure 3 presents graphs of the dependence of the relative divergences D at rota-
tion angle α for the moments of inertia JX3, JY3, JXY3, respectively.




  Fig. 2. The dependence of the relative divergences between the corresponding moments
  of inertia on the angle of rotation of the object in the image plane (panels a–c:
  JX3, JY3, JXY3)
  Fig. 3. The dependence of the relative divergence between the corresponding moments
  of inertia on the angle of rotation of the object around the horizontal axis (panels
  a–c: JX3, JY3, JXY3)

    At the third stage, using the AutoCAD system, the original images were rotated
around the vertical axis at angles β in increments of 10º (10º; 20º; 30º; 40º; 50º;
60º; 70º; 80º). As a result, 8 new images were obtained. Further, similarly to the
first stage, the moments of inertia (JX3_meas, JY3_meas, JXY3_meas) were measured for
the obtained images; according to formulas (9)÷(11), the moments of inertia (JX3_calc,
JY3_calc, JXY3_calc) were calculated for these images with α=0 and γ=0; and using for-
mulas (13), the relative divergences (D) between the corresponding moments of inertia
were calculated. Figure 4 presents the graphs of the dependence of the relative diver-
gence D on the rotation angle β for the moments of inertia JX3, JY3, JXY3, respectively.




  Fig. 4. The dependence of the relative divergence between the corresponding moments
  of inertia on the angle of rotation of the object around the vertical axis (panels
  a–c: JX3, JY3, JXY3)

    At the fourth stage, using the AutoCAD system, the original images were rotated in
the image plane at angles γ in increments of 40º, with subsequent rotations around the
horizontal axis at angles α in increments of 10º and around the vertical axis at an-
gles β in increments of 10º (10º, 10º, 40º; 20º, 20º, 80º; 30º, 30º, 120º; 40º, 40º,
160º; 50º, 50º, 200º; 60º, 60º, 240º; 70º, 70º, 280º; 80º, 80º, 320º). As a result, 8
new images were obtained. Further, similarly to the first stage, the moments of iner-
tia (JX3_meas, JY3_meas, JXY3_meas) were measured for the obtained images; the moments
of inertia (JX3_calc, JY3_calc, JXY3_calc) were calculated for these images according
to formulas (9)÷(11); and the relative divergences (D) between the corresponding mo-
ments of inertia were calculated using formulas (13).
   Figure 5 presents the graphs of the dependence of the relative divergence D at the
rotation angles α, β, and γ for the moments of inertia JX3, JY3, JXY3, respectively.




  Fig. 5. The dependence of the relative divergence between the corresponding moments
  of inertia on the rotation angles of the object in the image plane and around the
  horizontal and vertical axes (panels a–c: JX3, JY3, JXY3)
5      Conclusions

    As can be seen from Figure 2, the graphs are oscillatory in nature. One can there-
fore conclude that the angle of rotation of the object in the image plane does not
significantly affect the value of the divergence D, which stays within certain accept-
able intervals. The reason for the scatter of the divergence D is the distortion of
discrete images at low resolution.
    An additional criterion for assessing the divergence D is the shape indicator: the
higher the shape indicator, the greater the scatter of the divergence. This is caused
by distortion of the pixels of the object's contour during rotation.
    It can be seen from Figure 3 and Figure 4 that the graphs are exponential in na-
ture. One can therefore conclude that the angle of rotation of the object around the
horizontal or vertical axis significantly affects the value of the divergence D only
at its extreme values; at small and average values, D stays within certain acceptable
intervals. Here, too, the shape indicator serves as an additional criterion for as-
sessing the divergence D: the higher the shape indicator, the greater the growth rate
of the exponent. The reason for this, in addition to low resolution, is the change in
the image area.
    As is seen from Figure 5, the graphs are of a complex oscillating-exponential na-
ture, and their exponents grow faster than those in Figures 3 and 4. It follows that,
for objects located randomly in space, the value of the divergence D becomes substan-
tial already close to the extreme values of the rotation angles. At the same time, as
the shape indicator increases, the graphs acquire an N-shaped form.
    It follows from the above that, taking computational error into account, the sys-
tem of equations (9) ÷ (11) is valid for each rotated image and, with the exception of
extreme angle values, applies to all positions of the object. Thus, using the system
of equations obtained by the authors, which describes the dependence of the moments of
inertia of a flat figure on its triple rotation in space, recognition and spatial ori-
entation of the object can be carried out in parallel. The orientation of the object
in space is thereby determined in a fixed time.
    This system of equations can be used in flexible automated production to refine
the spatial position of workpieces in the working area of a processing or assembly
machine. It can also be applied to autonomous mobile robots searching for objects in
such an area.


References
 1. Bo-hu Li, Bao-cun Hou, Wen-tao Yu, Xiao-bing Lu & Chun-wei Yang.: Applications of
    artificial intelligence in intelligent manufacturing: a review. Frontiers of information tech-
    nology & electronic engineering, vol. 18, pp. 86–96 (2017).
 2. C. Renzi, F. Leali, M. Cavazzuti & A. O. Andrisano.: A review on artificial intelligence
    applications to the optimal design of dedicated and reconfigurable manufacturing systems.
    The International Journal of Advanced Manufacturing Technology, vol. 72, pp. 403–418
    (2014).
 3. Crisan N., Pop I. & Coman I.: Robotic Surgical Approach in Limited Access Anatomical
    Areas. New Trends in Medical and Service Robots. Assistive, Surgical and Educational
    Robotics. Mechanisms and Machine Science, 38, pp.165-177 (2016).
 4. Gerlind Wisskirchen, Blandine Thibault Biacabe et al.: Artificial intelligence and robotics
    and their impact on the workplace. IBA Global Employment Institute. (2017).
 5. Alp Ustundag, Emre Cevikcan.: Industry 4.0: Managing The Digital Transformation.
    Springer International Publishing, Switzerland (2018).
 6. Miller M.R., Miller R.: Robots and Robotics: Principles, Systems, and Industrial
    Applications. McGraw-Hill Education (2017).
 7. Eugenio Brusa.: Mechatronics. Principles, technologies and applications. Nova Science
    Publishers Inc., New York (2015).
 8. Héctor C. Terán, Oscar Arteaga, Guido R. Torres, A. Eduardo Cárdenas, R. Marcelo Ortiz,
    Miguel A. Carvajal, O. Kevin Pérez.: Mobile robotic table with artificial intelligence ap-
    plied to the separate and classified positioning of objects for computer-integrated manufac-
    turing. Russian Conference on Artificial Intelligence - RCAI 2018, pp. 218-229 (2018).
 9. Caldwell D.G.: Robotics and automation in the food industry. Current and future technolo-
    gies. Woodhead Publishing Limited (2013).
10. Phansak Nerakae, Pichitra Uangpairoj, Kontorn Chamniprasart.: Using machine vision for
    flexible automatic assembly system. International Conference on Knowledge Based and
    Intelligent Information and Engineering Systems, pp. 428 – 435 (2016).
11. Mamedov R.K., Mutallimova A.S., Aliyev T.Ch.: Using image moments of inertia for
    recognition invariant to affine transformations. Eastern-European Journal of En-
    terprise Technologies, No. 4/3 (58), pp. 4-7 (2012). (In Russian)
12. KLS Sharma.: Overview of Industrial Process Automation, second edition. Elsevier Inc.
    (2017).
13. Siciliano B., Khatib. O.: Springer Handbook of Robotics. 2nd edition. Springer-Verlag
    Berlin Heidelberg (2016).
14. Vimal Sudhakar Bodke, Omkar S. Vaidya: Object Recognition in a Cluttered Scene
    using Point Feature Matching. International Journal for Research in Applied Sci-
    ence & Engineering Technology, pp. 286-290 (2017).
15. Toshiaki Ejima, Shuichi Enokida, Toshiyuki Kouno.: 3D Object Recognition based on the
    Reference Point Ensemble. International Conference on Computer Vision Theory and Ap-
    plications pp. 261-269 (2014).
16. Farnoosh Ghadiri, Robert Bergevin, Guillaume-Alexandre Bilodeau: Carried Object De-
    tection Based on an Ensemble of Contour Exemplars. 14th European Conference Comput-
    er Vision – ECCV 2016. Amsterdam, October 11–14, pp. 852-866 (2016).
17. Xin Li, Fan Yang, Hong Cheng, Wei Liu, Dinggang Shen: Contour Knowledge Transfer
    for Salient Object Detection. 15th European Conference Computer Vision – ECCV 2018.
    Munich, September 8-14, pp. 370-385 (2018).
18. Mohammad Arafah, Qusay Abu Moghli.: Efficient Image Recognition Technique Using
    Invariant Moments and Principle Component Analysis. Journal of Data Analysis and In-
    formation Processing, pp. 1-10 (2017).
19. Mamedov R.K., Aliyev T.Ch.: Position control of 3D objects in flexible automated
    systems: improving recognition reliability. LAP LAMBERT Academic Publishing
    (2016). (In Russian)
20. Slocum S.E., Hancock E.L.: Text-book on the strength of materials. Revised edition.
    FB&c Ltd (2016).
21. Jens Feder.: Fractals. Plenum Press, New York (1988).