Reducing Average Risk by Providing Invariance of Two-Dimensional Binary Images

Rahim Mammadov1, Elena Rahimova2, Gurban Mammadov3 and Volodymyr Sherstjuk4

1 Azerbaijan State Oil and Industry University, Azadliq av. 16/21, Baku, AZ1010, Azerbaijan
2 Azerbaijan State Oil and Industry University, Azadliq av. 16/21, Baku, AZ1010, Azerbaijan
3 Azerbaijan State Scientific Research Institute for Labor Protection and Occupational Safety, Tabriz st. 108, Baku, AZ1008, Azerbaijan
4 Kherson National Technical University, str. Instytutska 11, Khmelnytskyi, 29016, Ukraine

Abstract
Technical vision systems in intelligent robotic complexes increase productivity and reduce the costs associated with quality errors by providing automated control of product quality. When recognizing images of objects, however, difficulties arise due to linear movement of the image (rotation of the image around its center of gravity and displacement in the coordinate plane). Such linear displacements lead to methodological errors in estimating the measure of proximity between reference and recognized objects. Since these destabilizing factors reduce the reliability of image recognition, it is imperative to solve the problem of invariance to linear displacements of images. In the proposed algorithm, the contour points of the two-dimensional binary images of objects at the output of the vision system are represented as coordinates in the Cartesian coordinate plane of the display. These coordinates are not invariant to linear displacement and rotation of the image; therefore, for correct recognition of such images, the image points must be made invariant to displacement and rotation. To make the image invariant to orthogonal displacement, the coordinate system is moved to the center of gravity of the image.
The rotation angle of the object relative to the starting position of the reference is then determined from the moments of inertia about the coordinate axes of the image. After the rotation angle has been estimated, the reference image stored in computer memory is rotated by this angle and the coordinates of its contour points are found. The coordinates of the contour points of the current image are then compared with those of the rotated reference image. This comparison provides accurate information on whether the current image is the same as, or different from, the reference image. Thus, the proposed algorithm allows invariant recognition of two-dimensional binary images: the higher the level of invariance, the lower the average risk. The proposed algorithm was simulated on a computer, and positive results were obtained.

Keywords
Pattern recognition, invariance, linear displacement, image rotation, vision system, average risk

CITRisk'2022: 3rd International Workshop on Computational & Information Technologies for Risk-Informed Systems, January 12, 2023, Neubiberg, Germany
EMAIL: rahim1951@mail.ru (R. Mammadov); elena1409_mk@mail.ru (E. Rahimova); Qurban_9492@mail.ru (G. Mammadov); vgsherstyuk@gmail.com (V. Sherstjuk)
ORCID: 0000-0003-4354-3622 (R. Mammadov); 0000-0003-1921-4992 (E. Rahimova); 0000-0002-2874-6221 (G. Mammadov); 0000-0002-9096-2582 (V. Sherstjuk)
© 2022 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org)

1. Introduction

Depending on the problems to be solved, the technical vision systems of adaptive robots and flexible manufacturing systems must perceive and recognize both objects and technological processes in order to prepare and process the relevant signals.
Their effectiveness and efficiency depend directly on the reliability of the measurement process, on the compatibility of the parameters of the recognized and reference images, and on making the necessary decision from the results obtained in pattern recognition. Problems arise from the presence of destabilizing factors, whose influence introduces errors into the measurement and imaging parameters [1,2]. Errors of this kind make it difficult for adaptive robots to make correct decisions. The accuracy of decision-making in image recognition depends on the measure of proximity between the recognized and reference images and on the relevant information contained in the database. Currently, vision systems have become an alternative to the human factor in visual or manual product quality control operations. By adopting them, companies can increase productivity and reduce the costs of quality errors that may occur under human supervision. When recognizing images of objects, difficulties arise due to their linear displacement (rotation of the image around its center of gravity or a change of its position in the coordinate plane). Such problems arise, for example, when a product lies on a conveyor line or changes its orientation after production. Determining the spatial orientation of objects is both a complex and an expensive task. These factors lead to changes in the number of detected features, changes in the absolute values of coordinate parameters, and random measurement errors in the values of object parameters. These and other destabilizing factors reduce the accuracy of image recognition, so the recognition system must be invariant to changes in image position [3,4]. Various methods and tools have been proposed to ensure invariance to image rotation around the center of gravity and to changes in image position.
However, these methods cannot provide complete invariance in image recognition. Their main focus is on static moments, considered the principal features, in order to ensure invariance to linear displacement in object images. Analysis of these methods showed that the resulting features are very complex and their reliability is therefore low. Research aimed at finding better methods and tools for achieving invariance of image recognition in technical vision systems thus remains relevant [5,6,7].

2. Problem statement

During the recognition of images of objects in robotic systems, certain difficulties arise due to rotation of the image around the center of gravity and scaling of the image. Such problems lead to the loss of information about the number, location, and absolute values of features, and to random errors in calculating those values. Since such destabilizing factors reduce the reliability of image recognition, it is imperative to solve the problem of invariance to linear displacements of images [8,9]. Various methods and tools have been proposed to ensure invariance to rotation of object images around the center of gravity and to scale changes of the image. However, these methods and tools cannot ensure fully accurate invariance in object recognition. Therefore, research aimed at finding better methods and tools for achieving invariance in image recognition remains relevant [10,11].

3. Comparative analysis

To clarify the solution of the problem of image detection and measurement of image parameters, the recognition methods used in applied systems were analyzed, with particular attention to geometric features (Table 1). There are several ways to achieve insensitivity to transformations in recognition systems; in particular, two groups of approaches can be distinguished among those most commonly applied.
The first group of methods relies on spatially insensitive features (for example, the method of moments or Fourier descriptors of images). Methods of the second group work with object models and try to match the observed objects to those used in training by fitting model parameters [12,13,14]. The method based on analyzing the amplitudes of the individual harmonics of the Fourier spectrum of images has a number of advantages: a small number of significant features and an unambiguous relationship between a rotation of the image and the corresponding rotation of its spectrum. A harmonic shift of the spectrum can likewise be used to measure the corresponding shift in the image [15,16,17]. The Mellin and Fourier-Mellin transforms, like the Fourier transform, also reduce the number of features, which simplifies the recognition scheme. The shift-and-scale invariance of the latter method stabilizes the statistical characteristics of the measured images and thereby increases measurement accuracy. These methods are implemented only in coherent optical systems and require fairly sophisticated image analyzers to achieve some degree of invariance [18,19,20]. The secant method is used to recognize images that are large enough for their contours to be approximated by segments or straight-line sequences. For this method it is necessary not only to trace the contours of the images but also to divide the angular area of the image into segments, each of which may contain one or more objects. The most important condition for using this method is the stability of the visible shape of the object [21,22,23].

Table 1. Comparative analysis of the methods used

Method | Main feature
Method of geometric moments | Geometric moments of the image
Optical correlation method | Position of the main maximum of the correlation function between the image and its reference
Fourier transform (analysis of the harmonic spectra of the image) | Amplitudes of individual harmonics of the Fourier spectrum
Mellin transform | Harmonic amplitudes
Fourier-Mellin transform | Harmonic amplitudes
Secant method | Distribution of secant lengths and angles
Geometric moments of the space-frequency spectrum | Geometric moments of the Fourier spectrum

The remaining rows of the table compared each method's invariance (full, partial, or absent) to image displacement, rotation, and scale change.

After comparing different algorithms and schemes for the recognition and identification of known objects, it can be concluded that among the most promising are schemes in which the characteristics of the object are determined (controlled) by synchronous detection of the center of the image and the geometric moments of the image or of its Fourier transform, and correlation schemes that use the position of the main maximum of the correlation function of the object description together with a priori synthesized discriminant functions. However, when the geometric parameters of the object, for example the scale and shape of its description, undergo sufficiently arbitrary and a priori unknown changes, the methods considered are not effective enough [24,25].

4. Problem solving

The image is given in an X0Y0 coordinate system. When the coordinate axes are rotated, the coordinates of the image also change. The task, therefore, is to determine the angle of rotation of the coordinate axes and, accordingly, the angle by which the image is rotated.
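The axis-rotation convention adopted here can be written directly in code. The following is a minimal illustrative sketch (the function name is ours, not the paper's):

```python
import math

def rotate_axes(x, y, alpha_deg):
    """New coordinates of a point (x, y) when the coordinate axes are
    rotated by alpha: x1 = y*sin(a) + x*cos(a), y1 = y*cos(a) - x*sin(a)."""
    a = math.radians(alpha_deg)
    return (y * math.sin(a) + x * math.cos(a),
            y * math.cos(a) - x * math.sin(a))
```

Under this convention, rotating the axes by 90° maps the point (1, 0) to (0, −1).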
According to the task, the following sequence of steps was performed.

1. The initial coordinates x0, y0 of the image points are set.
2. When the axes are rotated by an angle α, the new image coordinates are calculated by the formulas

x1 = y·sin α + x·cos α
y1 = y·cos α − x·sin α

3. The initial axial and centrifugal moments of inertia of the figure relative to the axes OX0 and OY0 are determined by the formulas

J_x = Σ_{i=1..n} (x_{0i} − x_s)²
J_y = Σ_{i=1..n} (y_{0i} − y_s)²
J_xy = Σ_{i=1..n} (x_{0i} − x_s)(y_{0i} − y_s)

where

x_s = (1/n)·Σ_{i=1..n} x_i,  y_s = (1/n)·Σ_{i=1..n} y_i

4. The axial and centrifugal moments of inertia of the figure relative to the rotated axes OX1 and OY1 are found:

J_{x1} = J_{x0}·cos²α + J_{y0}·sin²α − J_{x0y0}·sin 2α    (1)
J_{y1} = J_{y0}·cos²α + J_{x0}·sin²α + J_{x0y0}·sin 2α
J_{x1y1} = ((J_{x0} − J_{y0})/2)·sin 2α + J_{x0y0}·cos 2α

The aim of the study is to find the angle of rotation of the coordinate axes, i.e., the angle by which the figure is rotated relative to the original X0Y0 axes. A solution was found using the ArcSin trigonometric function. For this purpose, the last of the expressions (1) is rewritten in the following form:

((J_{x0} − J_{y0})/2)·sin 2α + J_{x0y0}·cos 2α = J_{x1y1}    (2)

To further simplify the expressions, the following substitutions are made:

a = (J_{x0} − J_{y0})/2,  b = J_{x0y0},  c = J_{x1y1},  φ = 2α

Then equation (2) becomes the simplified expression

a·sin φ + b·cos φ = c    (3)

Dividing both sides of the equation by d = √(a² + b²) yields

(a/d)·sin φ + (b/d)·cos φ = c/d

From the triangles BCD and BEF, the following expressions are obtained according to Fig. 1:

cos β = a/d,  sin β = b/d

Figure 1. The coordinates of the figure relative to the new axes inclined at an angle α to the original coordinate axes

Applying this substitution to the previous formula gives

cos β·sin φ + sin β·cos φ = c/d

or, simplifying,

sin(β + φ) = c/d

so that

β = ArcCos(a/d)

or, converting to another trigonometric function,

β = ArcSin(b/d)

Therefore, the doubled rotation angle φ of the coordinate axes, and hence of the figure under consideration, is

φ = ArcSin(c/d) − ArcSin(b/d)    (4)

and the desired rotation angle is half of the angle found:

α = φ/2

As a result, the final formula for determining the rotation angle of the figure is

α = [ArcSin(c/d) − ArcSin(b/d)]/2 + µ    (5)

Here µ is the angle that takes into account in which quadrant the figure lies in the Cartesian coordinate plane as a result of the rotation of the coordinate axes (Table 2).

Table 2. Determination of the angle µ

Quadrant | Range of angles | Angle µ, taking the quadrant into account
I | 0° ≤ α < 90° | 0°
II | 90° ≤ α < 180° | 90°
III | 180° ≤ α < 270° | 180°
IV | 270° ≤ α < 360° | 270°

Here, as in the arcsine solution above, one must take into account in which quadrant the figure lies as a result of the rotation. Based on formula (5), a program was written to confirm the correctness of the derived formulas. As an example, the rotation of the quadrilateral whose contour coordinates are listed below was considered (Table 3).

Table 3. Coordinates of the rotated quadrilateral in the Cartesian plane

x: 20 36 51 51 51 36 20 20
y: 50 50 50 34 17 17 17 35

5.
Computer simulation

In the proposed algorithm, the contour points of the two-dimensional binary images of objects at the output of the vision system are represented as coordinates in the Cartesian coordinate plane of the display. These coordinates are not invariant to linear displacement and rotation of the image; therefore, for correct recognition of such images, the image points must be made invariant to displacement and rotation. To make the image invariant to orthogonal displacement, the coordinate system is moved to the center of gravity of the image. The rotation angle of the object relative to the starting position of the reference is then determined from the moments of inertia about the coordinate axes of the image. After the rotation angle has been estimated, the reference image stored in computer memory is rotated by this angle and the coordinates of its contour points are found. The coordinates of the contour points of the current image are then compared with those of the rotated reference image. This comparison provides accurate information on whether the current image is the same as, or different from, the reference image. Thus, the proposed algorithm allows invariant recognition of two-dimensional binary images. The block diagram of the algorithm used for computer simulation is given in Figure 2. The main program consists of a subroutine-entry block, a block for inputting the figure coordinates after rotation, the rotation-angle calculation subroutine, and a result-printing block. The initial data are entered into the computer in the subroutine-entry block; the Cartesian coordinates of a plane figure are taken as the initial information. The arbitrarily drawn plane figure is then rotated by a certain angle, and the new coordinates of the figure are entered into the computer. A subroutine is used to calculate the angle formed during rotation.
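The sequence just described — shifting the origin to the center of gravity, estimating the rotation angle from the moments of inertia by formula (5), rotating the stored reference contour, and comparing it point by point with the current contour — can be sketched in Python. This is our illustrative reconstruction, not the authors' program: the quadrant correction µ is omitted (so the estimate is valid while the doubled angle stays within the principal arcsine branch), and the contour points of both images are assumed to be listed in the same order.

```python
import math

def centroid(pts):
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)

def centered(pts):
    # Move the coordinate origin to the center of gravity of the contour
    # (this gives invariance to orthogonal displacement).
    cx, cy = centroid(pts)
    return [(x - cx, y - cy) for x, y in pts]

def moments(pts):
    # Axial and centrifugal moments of inertia about the centroid.
    c = centered(pts)
    jx = sum(u * u for u, _ in c)
    jy = sum(v * v for _, v in c)
    jxy = sum(u * v for u, v in c)
    return jx, jy, jxy

def rotation_angle(ref, cur):
    # Formula (5) without the quadrant term mu:
    # alpha = [arcsin(c/d) - arcsin(b/d)] / 2
    jx0, jy0, jx0y0 = moments(ref)
    _, _, jx1y1 = moments(cur)
    a = (jx0 - jy0) / 2.0
    b, c = jx0y0, jx1y1
    d = math.hypot(a, b)
    s = max(-1.0, min(1.0, c / d))   # guard the arcsine domain
    return math.degrees(math.asin(s) - math.asin(b / d)) / 2.0

def rotate(pts, alpha_deg):
    # Axis-rotation formulas from Section 4.
    a = math.radians(alpha_deg)
    return [(v * math.sin(a) + u * math.cos(a),
             v * math.cos(a) - u * math.sin(a)) for u, v in pts]

def same_image(ref, cur, tol=0.5):
    # Rotate the centred reference by the estimated angle and compare it
    # point by point with the centred current contour.
    alpha = rotation_angle(ref, cur)
    rot = rotate(centered(ref), alpha)
    return all(math.hypot(x1 - x2, y1 - y2) <= tol
               for (x1, y1), (x2, y2) in zip(rot, centered(cur)))
```

For the quadrilateral of Table 3, rotating the contour by 20° and translating it leaves `same_image` true, since the moment-based estimate recovers the 20° angle; the tolerance `tol` is an assumption of ours.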
The subroutine also interacts directly with the block that inputs the coordinates after rotation.

Figure 2: Block diagram of the proposed algorithm

The new coordinates during rotation are calculated according to the formulas

x1 = y·sin α + x·cos α
y1 = y·cos α − x·sin α

The rotation-angle calculation subroutine implements the following formulas:

J_{x1} = J_{x0}·cos²α + J_{y0}·sin²α − J_{x0y0}·sin 2α
J_{y1} = J_{y0}·cos²α + J_{x0}·sin²α + J_{x0y0}·sin 2α
J_{x1y1} = ((J_{x0} − J_{y0})/2)·sin 2α + J_{x0y0}·cos 2α
α = [ArcSin(c/d) − ArcSin(b/d)]/2 + µ

In the program, the coordinates of the rotated figure were calculated as described above, and the results of applying formula (5) were obtained. The tables give the coordinates of the plane figure when it is rotated from 0 to 360 degrees.

Table 4. Coordinates (x, y) of the plane figure when it is rotated through the first and second quadrants of the Cartesian coordinate system

Point | 0° | 30° | 60° | 90° | 120° | 150° | 180°
1 | (20, 50) | (42, 33) | (53, 8) | (20, 50) | (42, 33) | (53, 8) | (20, 50)
2 | (36, 50) | (56, 25) | (61, −6) | (36, 50) | (56, 25) | (61, −6) | (36, 50)
3 | (51, 50) | (69, 18) | (69, −19) | (51, 50) | (69, 18) | (69, −19) | (51, 50)
4 | (51, 34) | (61, 4) | (55, −27) | (51, 34) | (61, 4) | (55, −27) | (51, 34)
5 | (51, 17) | (53, −11) | (40, −36) | (51, 17) | (53, −11) | (40, −36) | (51, 17)
6 | (36, 17) | (40, −3) | (33, −23) | (36, 17) | (40, −3) | (33, −23) | (36, 17)
7 | (20, 17) | (26, 5) | (25, −9) | (20, 17) | (26, 5) | (25, −9) | (20, 17)
8 | (20, 35) | (35, 20) | (40, 0) | (20, 35) | (35, 20) | (40, 0) | (20, 35)

Table 5. Coordinates (x, y) of the plane figure when it is rotated through the third and fourth quadrants of the Cartesian coordinate system

Point | 210° | 240° | 270° | 300° | 330° | 360°
1 | (42, 33) | (53, 8) | (20, 50) | (42, 33) | (53, 8) | (20, 50)
2 | (56, 25) | (61, −6) | (36, 50) | (56, 25) | (61, −6) | (36, 50)
3 | (69, 18) | (69, −19) | (51, 50) | (69, 18) | (69, −19) | (51, 50)
4 | (61, 4) | (55, −27) | (51, 34) | (61, 4) | (55, −27) | (51, 34)
5 | (53, −11) | (40, −36) | (51, 17) | (53, −11) | (40, −36) | (51, 17)
6 | (40, −3) | (33, −23) | (36, 17) | (40, −3) | (33, −23) | (36, 17)
7 | (26, 5) | (25, −9) | (20, 17) | (26, 5) | (25, −9) | (20, 17)
8 | (35, 20) | (40, 0) | (20, 35) | (35, 20) | (40, 0) | (20, 35)

As can be seen in Figure 3, the applied algorithm yields the same angle values. The proposed algorithm performs the necessary operations by returning an angle α that falls into another quadrant back to the first quadrant each time. Thanks to this, 2D binary images can be recognized invariantly with respect to rotation, regardless of the rotation angle.

Figure 3. Determination of angles during the application of the proposed algorithm

The first column of the table in Figure 3 gives the rotation angles of the figure; the second column gives the rotation angles calculated by formula (5) based on the arcsine function.

6. Conclusion

A comparative analysis of algorithms for invariant recognition of two-dimensional binary images showed that no existing method fully solves this problem; there are only algorithms that solve it partially, within certain constraints. One reason is that the trigonometric functions of the rotation angle take different signs in different quadrants as the image rotates, so the results obtained in different quadrants are not consistent with one another. Theoretical analysis and computer modeling show that the derived formula gives accurate results only in the first quadrant; because the trigonometric functions have different signs elsewhere, the same formula does not give correct results in the other quadrants. The proposed algorithm eliminates this drawback by returning an angle α that falls into another quadrant back to the first quadrant each time. Thanks to this, 2D binary images can be recognized invariantly with respect to rotation, regardless of the rotation angle. Computer modeling confirms that the proposed method is correct.
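The quadrant handling used in the algorithm, that is, returning an angle that falls into another quadrant to the first quadrant, amounts to subtracting the correction µ of Table 2. A minimal sketch of our reading of that table (function names are ours):

```python
def quadrant_offset(alpha_deg):
    # Angle mu from Table 2: 0, 90, 180 or 270 degrees depending on the
    # quadrant in which the angle alpha lies.
    return int((alpha_deg % 360.0) // 90.0) * 90

def fold_to_first_quadrant(alpha_deg):
    # First-quadrant representative of alpha, obtained by removing mu.
    a = alpha_deg % 360.0
    return a - quadrant_offset(a)
```

Folding 210° gives 30°, which matches the repetition of the 30° coordinates in the 210° column of Table 5.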
References

[1] D. A. Forsyth, Computer Vision, First Indian Edition, Pearson Education, 2003
[2] J. Wäldchen, P. Mäder, Plant Species Identification Using Computer Vision Techniques: A Systematic Literature Review, Archives of Computational Methods in Engineering, 25(2), 2017, pp. 507–543. DOI:10.1007/s11831-016-9206-z
[3] R. O. Duda, P. E. Hart, D. G. Stork, Pattern Classification, 2nd Edition, Wiley-Interscience, 2001
[4] L. Jianzhuang, 2D Shape Matching by Contour Flexibility, IEEE Transactions on Pattern Analysis and Machine Intelligence, 19, 2008, pp. 1–7
[5] R. G. Mammadov, Correction of estimation errors of the measure of affinity between objects at recognition patterns for intellectual systems, Proceedings of the International Scientific Conference "Problems of Cybernetics and Informatics", Baku, October 23-25, 2006, pp. 21-24
[6] K. Senthil, Object Recognition Using Shape Context with Canberra Distance, Asian Journal of Applied Science and Technology (AJAST), 1(2), March 2017, pp. 268-273
[7] M. Ayaz, T. Sinelnikova, S. K. Mustafa, V. Lyashenko, Features of the Construction and Control of the Navigation System of a Mobile Robot, International Journal of Emerging Trends in Engineering Research, 8(4), 2020, pp. 1445-1449. DOI:10.30534/ijeter/2020/82842020
[8] D. G. Lowe, Distinctive image features from scale-invariant keypoints, Int. J. Comp. Vision, 60(2), 2004, pp. 91–110
[9] S. Nantogma, Y. Xu, W. Ran, A Coordinated Air Defense Learning System Based on Immunized Classifier Systems, Symmetry, 13, 2021, 271
[10] R. Mammadov, E. Rahimova, G. Mammadov, Increasing the Reliability of Pattern Recognition by Analyzing the Distribution of Errors in Estimating the Measure of Proximity between Objects, Pattern Recognition and Information Processing (PRIP'2021): Proceedings of the 15th International Conference, 21-24 Sept. 2021, Minsk, Belarus, UIIP NASB, 2021, pp. 111-114
[11] R. F. Abdel-Kader, R. M. Ramadan, F. W. Zaki, E. El-Sayed, Rotation-Invariant Pattern Recognition Approach Using Extracted Descriptive Symmetrical Pattern, (IJACSA) International Journal of Advanced Computer Science and Applications, 3(5), 2012, pp. 151-158
[12] H. Jiang, S. X. Yu, Linear solution to scale and rotation invariant object matching, IEEE Conf. Computer Vision and Pattern Recognition, 2009, pp. 2474–2481
[13] B. Liu, H. Wu, W. Su, W. Zhang, J. Sun, Rotation-invariant object detection using sector-ring HOG and boosted random ferns, The Visual Computer, 34(5), 2018, pp. 707–719
[14] R. G. Mammadov, T. Ch. Aliyev, G. M. Mammadov, 3D Object Recognition by Unmanned Aircraft to Ensure the Safety of Transport Corridors, International Conference on Problems of Logistics, Management and Operation in the East-West Transport Corridor (PLMO), Baku, Azerbaijan, October 27-29, 2021, pp. 209-216
[15] R. G. Mammadov, E. G. Rahimova, G. M. Mammadov, Reducing the estimation error of the measure of proximity between objects in pattern recognition, The International Conference on Automatics and Informatics (ICAI'2021), IEEE, Varna, Bulgaria, 30 Sept.-2 Oct. 2021, pp. 76-81
[16] R. G. Mammadov, T. Ch. Aliev, Definition of orientation of objects by the system of technical vision, The Third International Conference "Problems of Cybernetics and Informatics PCI'2010", Baku, 2010, vol. 1, pp. 259-262
[17] M. V. Khachumov, Invariant moments and metrics in pattern recognition, Modern High Technologies, 2020, pp. 69-77. DOI:10.17513/snt.37975
[18] T. Ejima, Sh. Enokida, T. Kouno, 3D Object Recognition based on the Reference Point Ensemble, International Conference on Computer Vision Theory and Applications, Portugal, 2014, pp. 261-269
[19] V. H. S. Ha, J. M. F. Moura, Affine-permutation invariance of 2-D shape, IEEE Trans. Image Process., 14(11), 2005, pp. 1687–1700
[20] J. Cortadellas, J. Amat, F. de la Torre, Robust normalization of silhouettes for recognition application, Pattern Recognition Lett., 25, 2004, pp. 591–601
[21] R. G. Mammadov, T. Ch. Aliyev, G. M. Mammadov, Minimization of the Average Risk in Pattern Recognition for Smart Grid Systems, 6th "Computational Linguistics and Intelligent Systems" COLINS 2022, Gliwice, Poland, May 12-13, 2022, pp. 365-375
[22] O. E. Ogri, H. Karmouni, M. Sayyouri, H. Qjidaa, 3D image recognition using new set of fractional-order Legendre moments and deep neural networks, Signal Processing: Image Communication, 98, 2021, 116410, ISSN 0923-5965. DOI:10.1016/j.image.2021.116410
[23] N. Zaeri, F. Baker, Thermal Face Recognition Using Moments Invariants, International Journal of Signal Processing Systems, 3(2), December 2015, pp. 94-99
[24] A. A. Khan, A. A. Shaikh, Z. A. Shaikh, et al., IPM-Model: AI and metaheuristic-enabled face recognition using image partial matching for multimedia forensics investigation with genetic algorithm, Multimed Tools Appl, 81, 2022, pp. 23533–23549. DOI:10.1007/s11042-022-12398-x
[25] S. H. Abdulhussain, B. M. Mahmmod, A. AlGhadhban, J. Flusser, Face Recognition Algorithm Based on Fast Computation of Orthogonal Moments, Mathematics, 10, 2022, 2721. DOI:10.3390/math10152721