Program-algorithm complex for image imposition in aircraft vision systems

А.I. Efimov¹, А.I. Novikov¹

¹ Ryazan State Radio Engineering University, Ryazan, 390005, Russia

Abstract

One of the most important tasks solved on board an aircraft is the imposition of real images and images synthesized from the digital terrain map. The set of auxiliary tasks and the imposition task itself must be solved in real time (at a rate of 25 frames per second) and must meet strict accuracy requirements for the imposition of heterogeneous images. Traditional correlation-extremal imposition methods ensure the necessary accuracy but require unacceptably large amounts of computing time. The paper describes an imposition algorithm based on affine transformation of the synthesized image to the plane of the real video image, as well as algorithms for solving the auxiliary tasks.

Keywords: preprocessing; skeleton; image enhancement; affine transformations; projective transformation; image imposition

1. Introduction

The need to improve aircraft flight safety and to ensure safe landing requires the development of new methods for integrating and interpreting information obtained from onboard technical vision systems (TVS) of various spectral ranges, as well as from navigation devices and the digital terrain map (DTM) [1,2]. Video information obtained from the TVS, together with the synthesized image of the terrain relief and the cartographic and navigation information obtained in real time, helps the crew to pilot and land under conditions of low visibility. An onboard TVS can contain a television (TV) camera, an infrared imager, a lidar and a radar, which form TV, thermal imaging (TI) and location images of the underlying surface, respectively.

2. Object of research

The object of the research is the process of imposition of a real television (TV) image, obtained from a TV camera installed on board an aircraft, and a synthesized image in the onboard TVS. The synthesized image is formed in the onboard computer from the digital terrain map. Imposition of real and synthesized images in the onboard TVS is one of the most important and complicated tasks solved in the onboard computer complex. The difficulties arising in its solution are caused by several factors. One of the main factors is errors in determining the current coordinates of the aircraft as a material point in space (latitude φ, longitude λ and height h), as well as errors in determining the orientation of the aircraft as an extended object, i.e. errors in measuring the yaw ψ, pitch ϑ and roll γ angles. Errors can also be present in the digital terrain map (DTM), and the sensors forming the images are another source of errors. The different nature of the real and synthesized images is one more factor complicating the image imposition task. Positioning errors can be further increased by geometrical distortions appearing on the processed images at the stages of detecting brightness-jump boundaries and forming closed contours.

3. Methods of research

Widely known correlation-extremal methods of image imposition, as the practice of their application shows, ensure sufficiently good imposition quality [3]. However, in the present case the search for the global extremum of the objective function involves the formation of about 10^6 grid nodes (sets of six numbers, the coordinates of the navigation parameter vector ν = (x, y, h, ψ, ϑ, γ)) and, as a consequence, unacceptably large amounts of computing time.
There are known approaches that reduce the dimension of the search space by using an extended viewing angle when forming the synthesized image and by applying a pyramid of images at different scales for consecutive refinement of the global optimum point of the objective function within the correlation-extremal approach to image imposition [4]. Such modernization of the correlation-extremal methods reduces the computational effort but does not fully solve the problem with respect to either the time or the accuracy of the solution.

The limiting values of the errors in the navigation parameters define a parallelepiped in the 6-dimensional parameter space. Within this parallelepiped a grid of nodes is formed. For the parameter vector at each node a synthesized image is constructed; it is overlapped onto the real image, and the value of the objective function, which serves as a measure of imposition quality, is calculated and stored. After all nodes have been enumerated, the global extremum of the objective function and the vector of navigation parameters ν_opt at which this optimum is achieved are found.
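To make the cost of this exhaustive enumeration concrete, the following is a minimal sketch, not the authors' onboard implementation: the NavVector type, the node-to-parameter mapping and the objective callback are illustrative assumptions.

```cpp
// Sketch of the exhaustive correlation-extremal search over the 6-D grid of
// navigation parameters (x, y, h, yaw, pitch, roll). Hypothetical types.
#include <array>
#include <functional>
#include <limits>

using NavVector = std::array<double, 6>;   // x, y, h, yaw, pitch, roll

NavVector gridSearch(const NavVector& lo, const NavVector& hi,
                     const std::array<int, 6>& steps,
                     const std::function<double(const NavVector&)>& objective)
{
    NavVector best{};
    double bestValue = -std::numeric_limits<double>::infinity();
    std::array<int, 6> idx{};                       // current grid node
    for (;;) {
        NavVector v;
        for (int k = 0; k < 6; ++k)                 // node -> parameter values
            v[k] = (steps[k] > 1)
                 ? lo[k] + (hi[k] - lo[k]) * idx[k] / (steps[k] - 1)
                 : lo[k];
        // synthesize an image for v, overlap it onto the real frame and
        // measure the imposition quality (the objective function)
        double value = objective(v);
        if (value > bestValue) { bestValue = value; best = v; }
        int k = 0;                                  // odometer-style increment
        while (k < 6 && ++idx[k] == steps[k]) idx[k++] = 0;
        if (k == 6) break;                          // all nodes enumerated
    }
    return best;
}
```

Even at only 10 steps per parameter this loop evaluates 10^6 synthesized images per frame, which is what makes the exhaustive approach impractical at 25 frames per second.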
An alternative approach to image imposition reduces to searching for the same objects on a pair of heterogeneous images, comparing them, calculating the geometrical transformation of one image to the plane of the other, and imposing the images for presentation to the pilot. Algorithms based on a geometrical transformation of the synthesized image to the plane of the real one require less computational effort than correlation-extremal methods and provide acceptable imposition quality. However, these methods are applicable only when contours of continuously present objects (water bodies, roads, large buildings, etc.) are stably distinguishable on the underlying surface. Besides, the bottleneck of these methods is the search for key (corresponding) points on the pair of imposed images, by means of which the transformation of the synthesized image to the plane of the real one is constructed. The correctness of the selection of key points determines how precisely the images are "imposed", i.e. the quality of the imposition [5]. Methods for improving image imposition by means of projective transformations in the presence of low-informative areas on the image and, as a consequence, incorrect pairs of key points are suggested in [6,7]. An alternative and widespread method for constructing a high-quality projective transformation when the set of key-point pairs contains a subset of incorrect pairs is the RANSAC algorithm [8].

The imposition algorithm considered below is based on the analysis and comparison of contours on a pair of images. Its basis is an affine transformation of the synthesized image to the plane of the real video image. Affine transformations do not take into account the projective distortions that inevitably appear in aerial photography [9]. Their advantage is simplicity of implementation and low computational cost, which allows operation in real time with satisfactory imposition quality. The suggested algorithm therefore holds an intermediate position between correlation-extremal imposition methods and the methods of projective geometry. It is a certain compromise for conditions under which the mentioned methods cannot be implemented in automatic mode with acceptable accuracy and time characteristics.

Although the imposition of images is the final and most important procedure, the possibility and quality of imposition depend significantly on how successfully the auxiliary tasks are solved. These tasks include the detection of contours on images, image enhancement, identification of contours and establishment of a one-to-one correspondence between contours on a pair of heterogeneous images, and formation of a set of key-point pairs.

3.1. Imposition algorithm on the basis of transformation in the complex plane

To realize the algorithm it is necessary to have contours of continuously present objects both on the real image and on the corresponding synthesized image. Specific requirements are imposed on the quality of the contours used in the algorithm; they are considered below. The algorithm is based on an affine transformation of points in the complex plane according to the formula

z_k^(r) = z_np · z_k^(s),  z = x + iy,

where z_k^(s) is a point of a contour on the synthesized image, z_k^(r) is the result of its transformation to the plane of the real image, and z_np = x_np + i·y_np is the complex number that transforms points of one image to the plane of the other. To determine the complex number z_np, a pair of corresponding (key) points has to be found on the first and second contours.

Let D be some area on the image with boundary ∂D. In the suggested variant of the algorithm, the points belonging to the ends of the diameter of the area are taken as the corresponding points, i.e.

(M_1, M_2) = arg max_{M_i, M_j ∈ ∂D} ρ(M_i, M_j).

Such points are found for corresponding objects both on the real image and on the corresponding synthesized image. The vectors a = (x_2 − x_1; y_2 − y_1) and a′ = (x_2′ − x_1′; y_2′ − y_1′) are associated with the points M_1(x_1, y_1), M_2(x_2, y_2) found on the real image and the corresponding points M_1′(x_1′, y_1′), M_2′(x_2′, y_2′) on the synthesized one. In the complex plane these vectors correspond to the complex numbers z_r = (x_2 − x_1) + i(y_2 − y_1) and z_s = (x_2′ − x_1′) + i(y_2′ − y_1′). The complex number z_np that transforms all points of the virtual image to the plane of the real one is determined by the formula

z_np = z_r / z_s = (z_r · z̄_s) / (z_s · z̄_s).

The algorithm for detecting the points M_1, M_2 belonging to the ends of the diameter is as follows. A random point M on the contour is taken, a direction of contour tracing is chosen, and in this direction a point M̃ is found at which a local maximum is realized:

ρ(M, M̃) = max_{M_j ∈ ∂D} ρ(M, M_j).   (1)

Then the point M̃ is taken as the initial one, and for it the next point at which the local maximum (1) is achieved is found. After a full tracing of the contour in the chosen direction, the global maximum is separated from the set of local maxima and, as a consequence, the sought points M_1, M_2 are found. A large volume of experimental research on this algorithm for determining the points M_1, M_2 at the ends of a contour diameter confirms its correctness.

To search for the points M_1, M_2 belonging to the ends of the diameter of a certain contour, the contour must be closed. Contours of the real image obtained as a result of the brightness-jump boundary detection algorithm can contain breaks. On the basis of information from the digital terrain map the type of the object is known and, in particular, it is clear that the contours of its boundaries should be closed.
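The two computational steps of Section 3.1 – finding the ends M_1, M_2 of a contour diameter and forming the complex coefficient z_np – can be sketched as follows. This is only an illustration under stated assumptions: the Point type is hypothetical, a brute-force O(n²) diameter search replaces the tracing heuristic (1) for brevity, and the mapping is applied exactly as z_k^(r) = z_np · z_k^(s); in a practical implementation a translation aligning corresponding diameter endpoints would normally accompany the rotation and scaling encoded in z_np.

```cpp
// Sketch of the complex-plane imposition step (hypothetical types).
#include <complex>
#include <utility>
#include <vector>

struct Point { double x, y; };

// Ends of the diameter: the pair of contour points at maximum distance.
// The contour is assumed to be non-empty.
std::pair<Point, Point> diameterEnds(const std::vector<Point>& contour)
{
    std::pair<Point, Point> best{contour.front(), contour.front()};
    double maxDist2 = 0.0;
    for (size_t i = 0; i < contour.size(); ++i)
        for (size_t j = i + 1; j < contour.size(); ++j) {
            double dx = contour[j].x - contour[i].x;
            double dy = contour[j].y - contour[i].y;
            double d2 = dx * dx + dy * dy;
            if (d2 > maxDist2) { maxDist2 = d2; best = {contour[i], contour[j]}; }
        }
    return best;
}

// z_np = z_r / z_s, where z_r and z_s are the diameter vectors of the
// corresponding objects on the real and synthesized images.
std::complex<double> transformCoefficient(const Point& r1, const Point& r2,
                                          const Point& s1, const Point& s2)
{
    std::complex<double> zr(r2.x - r1.x, r2.y - r1.y);
    std::complex<double> zs(s2.x - s1.x, s2.y - s1.y);
    return zr / zs;   // equivalently z_r * conj(z_s) / (z_s * conj(z_s))
}

// Apply z_k^(r) = z_np * z_k^(s) to every point of a synthesized contour.
std::vector<Point> mapContour(const std::vector<Point>& contour,
                              std::complex<double> znp)
{
    std::vector<Point> out;
    out.reserve(contour.size());
    for (const Point& p : contour) {
        std::complex<double> z = znp * std::complex<double>(p.x, p.y);
        out.push_back({z.real(), z.imag()});
    }
    return out;
}
```

Note that std::complex performs the division z_r / z_s exactly as the formula (z_r · z̄_s) / (z_s · z̄_s).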
The algorithm for the formation of contours and their additional processing is described below. The algorithm is aimed at enhancing the contour image and, in particular, at removing short breaks in contours that should be closed.

The quality of image imposition by means of the described algorithm depends on the quality of brightness-jump boundary detection. As a rule, the skeletons obtained as a result of boundary detection contain a great number of short lines. They significantly complicate the search for the objects of interest and the determination of a one-to-one correspondence between such objects (between object contours) on the real and corresponding virtual images. To eliminate these disadvantages, an algorithm for additional processing of contour images has been developed. It eliminates both closed and unclosed lines of short length. The algorithm is described in [10]. Fig. 1 shows the original TV image, the boundaries extracted by the Canny detector [11], the enhanced contour image, and the synthesized image corresponding to the original TV image.

Fig. 1. Images at various stages of the technological chain: a – original TV image; b – boundaries extracted by the Canny detector; c – enhanced contour image; d – synthesized image constructed from the digital terrain map.

After the enhanced boundary image (Fig. 1c) has been obtained, it is necessary to obtain a description of the contours as connected sets of points. This procedure is realized as follows:
1) the pixels of the bitmap black-and-white image obtained as a result of the previous processing are scanned;
2) if a black pixel is found, it is taken as the start of a contour and marked as the current pixel for analysis; iterative execution of steps 3–4 starts. Black pixels previously included in some contour are removed from consideration;
3) the pixels adjoining the current one are examined in the order shown in Fig. 2. If a neighbour is found in positions 1–8, it is added to the contour and marked as the current one; the operation is repeated;
4) if there are no black pixels in positions 1–8, tracing of the contour is stopped and a return to step 1 occurs;
5) when all pixels of the image have been scanned, the algorithm terminates.

Fig. 2. Order in which the neighbours of the current pixel X are examined during contour tracing:
5 1 6
3 X 4
8 2 7

In practice, the contour of an extended object often contains insignificant breaks. For this reason, an additional joining operation is applied to the contour descriptions obtained by the above approach: if the distance between the end points (the beginning and the end) of a certain pair of contours does not exceed a threshold value (taken as 7 pixels), the pixels of one contour are appended to the pixel list of the other. This removes small breaks and increases the quality of the subsequent imposition procedures.
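A minimal sketch of steps 1–5 follows; it is not the authors' code. The boolean raster representation is an assumption, and the neighbour offsets reproduce the Fig. 2 order with the y axis pointing down.

```cpp
// Sketch of the contour tracing procedure (steps 1-5).
#include <utility>
#include <vector>

struct Pixel { int x, y; };

// Neighbour offsets in the Fig. 2 order: up, down, left, right, then diagonals.
static const int DX[8] = { 0, 0, -1, 1, -1, 1, 1, -1 };
static const int DY[8] = {-1, 1,  0, 0, -1, -1, 1, 1 };

std::vector<std::vector<Pixel>> traceContours(std::vector<std::vector<bool>> black)
{
    const int h = static_cast<int>(black.size());
    const int w = h ? static_cast<int>(black[0].size()) : 0;
    std::vector<std::vector<Pixel>> contours;

    for (int y0 = 0; y0 < h; ++y0)
        for (int x0 = 0; x0 < w; ++x0) {
            if (!black[y0][x0]) continue;          // step 1: find a black pixel
            std::vector<Pixel> contour{{x0, y0}};  // step 2: start a new contour
            black[y0][x0] = false;                 // used pixels leave consideration
            int x = x0, y = y0;
            for (;;) {                             // steps 3-4: follow neighbours
                bool found = false;
                for (int k = 0; k < 8; ++k) {
                    int nx = x + DX[k], ny = y + DY[k];
                    if (nx >= 0 && nx < w && ny >= 0 && ny < h && black[ny][nx]) {
                        black[ny][nx] = false;
                        contour.push_back({nx, ny});
                        x = nx; y = ny; found = true;
                        break;
                    }
                }
                if (!found) break;                 // no neighbour: contour finished
            }
            contours.push_back(std::move(contour));
        }
    return contours;                               // step 5: all pixels processed
}
```

The joining of contours whose end points are closer than 7 pixels can then be applied to the returned lists before short contours are discarded.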
As a result, after processing of the whole image we obtain a set of connected contours. It is natural to remove contours of short length, i.e. contours in which the number of pixels is less than a threshold value (experimental research has shown that for images with a resolution of 704×576 pixels the contour length should be more than 120 pixels). In this way we obtain long connected contours corresponding to the extended objects of the original image. In addition to the check against the minimum length threshold, a number of further conditions are verified (the corresponding quantities are described in detail below):
– the coordinates (x, y) of the "centre of mass" of the object must lie in some neighbourhood of at least one object of the virtual map; otherwise no correspondence is found for the object, which negatively affects the final result of the imposition;
– the length d of the contour diameter must be no more than 2/3 of the contour length L;
– the width w of the contour must be no less than 12 pixels.
Fulfilment of these conditions guarantees that only contours of extended closed objects suitable for the subsequent imposition remain on the images.

3.2. Automatic identification of contours and matching between them on real and virtual images

The digital terrain map contains information on the types of the objects whose contours are reflected on the synthesized image. This information allows the corresponding objects on the synthesized image to be identified. The contour analogue of the real image may lack the contours of some objects which are nevertheless present on the synthesized image. Conversely, contours may be present on the real image that differ noticeably from the corresponding contours on the synthesized image. These distinctions can be explained both by the outdated state of the digital terrain map and by shortcomings of the contour extraction algorithms at all stages of real image processing. At the visual level, the one-to-one correspondence between object contours is determined easily enough; determining such a correspondence by a computer in automatic mode is a rather complicated task. To solve it, we suggest an algorithm based on several numerical characteristics calculated for each contour on the real and synthesized images:
– the coordinates (x, y) of the "centre of mass" of the object;
– the length L of the contour;
– the length d of the contour diameter;
– the width w of the contour.

Let M_i^r(x_i^r, y_i^r), M_j^s(x_j^s, y_j^s), i = 1,…,I, j = 1,…,J denote the centres of mass of the objects whose contours are selected on the real and synthesized images respectively; here i is the number of an object on the real image and j its number on the synthesized one. The coordinates of the "centres of mass" are found as the mean values of the corresponding coordinates over all pixels of the contour. For the sets of objects on the real and virtual images, the matrix of distances between the centres of mass, of size I×J, is constructed:

ρ(M_1^r, M_1^s)  ρ(M_1^r, M_2^s)  …  ρ(M_1^r, M_J^s)
…                …                …   …
ρ(M_I^r, M_1^s)  ρ(M_I^r, M_2^s)  …  ρ(M_I^r, M_J^s)

The distances are calculated in the Euclidean metric. The analysis of correspondences between the object centres of mass is based on the quite realistic supposition that the shift of an object on the synthesized image relative to the corresponding object on the real image does not exceed a certain limiting value T. Thus, if in some row i_0 of the distance matrix all distances are greater than this value, then no object on the synthesized image corresponds to the object with number i_0 on the real image. Correspondingly, if such a situation occurs in column j_0, then no object on the real image corresponds to the object with number j_0 on the synthesized image. Such objects take no further part in the procedure for determining correspondences between objects.
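A minimal sketch of this first matching step is given below; the contour and point types are hypothetical, and only the row test is shown – the same test applied to the columns of the matrix discards synthesized objects without counterparts.

```cpp
// Sketch of the centre-of-mass distance matrix and the threshold test.
#include <cmath>
#include <vector>

struct Point { double x, y; };

// Centre of mass of a contour: mean of its pixel coordinates (non-empty contour).
Point centreOfMass(const std::vector<Point>& contour)
{
    Point c{0.0, 0.0};
    for (const Point& p : contour) { c.x += p.x; c.y += p.y; }
    c.x /= contour.size();  c.y /= contour.size();
    return c;
}

// Distance matrix rho[i][j] between objects of the real (i) and synthesized (j) images.
std::vector<std::vector<double>> distanceMatrix(const std::vector<Point>& real,
                                                const std::vector<Point>& synth)
{
    std::vector<std::vector<double>> rho(real.size(),
                                         std::vector<double>(synth.size()));
    for (size_t i = 0; i < real.size(); ++i)
        for (size_t j = 0; j < synth.size(); ++j)
            rho[i][j] = std::hypot(real[i].x - synth[j].x, real[i].y - synth[j].y);
    return rho;
}

// An object whose every distance exceeds T takes no further part in matching.
std::vector<bool> keepRows(const std::vector<std::vector<double>>& rho, double T)
{
    std::vector<bool> keep(rho.size(), false);
    for (size_t i = 0; i < rho.size(); ++i)
        for (double d : rho[i])
            if (d <= T) { keep[i] = true; break; }
    return keep;
}
```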
After the objects having no counterpart on the other image are removed from the matrix, the procedure for determining correspondences between objects begins. Let I_1, J_1 be the numbers of the remaining objects on the real and synthesized images respectively. For all remaining objects, both on the real and on the virtual image, the following are calculated:
– the contour lengths L_i^r, L_j^s, i = 1,…,I_1, j = 1,…,J_1;
– the diameter lengths d_i^r, d_j^s, i = 1,…,I_1, j = 1,…,J_1;
– the object widths w_i^r, w_j^s, i = 1,…,I_1, j = 1,…,J_1.
In this algorithm the contour length is the number of pixels in the contour. The width of an area is the length |b| of a vector b = N_1N_2 whose ends lie on the contour in its middle part and which is orthogonal to the vector a, the diameter of the area.

After the numerical characteristics of all objects have been found, a chain of computational procedures and comparisons is performed. Sequentially, in a cycle over i from 1 to the end of the list of real objects, the following actions are performed for each object:
1) the fulfilment of the inequality ρ(M_i^r, M_j^s) ≤ T is checked; the objects j_1, j_2, …, j_k for which this inequality holds participate in the subsequent comparison, the rest do not;
2) for each of the three parameters L, d, w, the nearest "neighbour" of the i-th contour of the real image is searched for on the synthesized image:

j_1* = arg min_j { |L_i − L_{j_1}|, |L_i − L_{j_2}|, …, |L_i − L_{j_k}| },
j_2* = arg min_j { |d_i − d_{j_1}|, |d_i − d_{j_2}|, …, |d_i − d_{j_k}| },      (2)
j_3* = arg min_j { |w_i − w_{j_1}|, |w_i − w_{j_2}|, …, |w_i − w_{j_k}| }.

The nearest "neighbour" in each of the three conditions in (2) must additionally satisfy the inequalities |L_i − L_{j_1*}| ≤ ε_1·L_i, |d_i − d_{j_2*}| ≤ ε_2·d_i, |w_i − w_{j_3*}| ≤ ε_3·w_i, where ε_ν < 1, ν = 1, 2, 3. These inequalities are based on the supposition that the values of each of the three parameters for corresponding objects on the real and virtual images cannot differ by more than the fraction ε_ν < 1. If j_1* = j_2* = j_3* = j*, then the decision is made that the object with number j* on the virtual image corresponds to the object with number i on the real image. If the values j_1*, j_2*, j_3* do not all coincide, then the decision is made that no corresponding object on the virtual image has been found for the object with number i on the real image.
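The decision rule can be sketched as follows. The Features type is hypothetical, and a single relative tolerance eps stands in for the three tolerances ε_1, ε_2, ε_3 of the text.

```cpp
// Sketch of decision rule (2): nearest neighbour by L, d and w with consensus.
#include <cmath>
#include <cstddef>
#include <vector>

struct Features { double L, d, w; };       // contour length, diameter, width

// Returns the index of the matched synthesized object, or -1 if none.
int matchObject(const Features& real,
                const std::vector<Features>& synth,
                const std::vector<std::size_t>& candidates,   // objects with rho <= T
                double eps)                                    // eps < 1
{
    if (candidates.empty()) return -1;
    auto nearest = [&](auto diff) {
        std::size_t best = candidates.front();
        for (std::size_t j : candidates)
            if (std::fabs(diff(synth[j])) < std::fabs(diff(synth[best]))) best = j;
        return best;
    };
    std::size_t j1 = nearest([&](const Features& f) { return real.L - f.L; });
    std::size_t j2 = nearest([&](const Features& f) { return real.d - f.d; });
    std::size_t j3 = nearest([&](const Features& f) { return real.w - f.w; });

    bool withinTolerance =
        std::fabs(real.L - synth[j1].L) <= eps * real.L &&
        std::fabs(real.d - synth[j2].d) <= eps * real.d &&
        std::fabs(real.w - synth[j3].w) <= eps * real.w;

    // a correspondence is accepted only when all three "votes" agree
    return (j1 == j2 && j2 == j3 && withinTolerance) ? static_cast<int>(j1) : -1;
}
```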
4. Results

Fig. 3 shows the contours of the objects selected on the real and synthesized images after the image enhancement stage.

Fig. 3. Contours of objects selected on the real (a) and synthesized (b) images.

The distances between each object on the real image and all objects on the virtual image are calculated according to the algorithm for determining correspondences on a pair of images. These distances are shown in Table 1. Object SI-0 (the object with number 0 on the synthesized image) must be excluded from the correspondence procedure, because the distance from it to each of the four objects on the real image exceeds the threshold value T (T = 100 was used in this experiment).

Table 1. Distances between objects on the real and synthesized images.

Object   RI-0   RI-1   RI-2   RI-3
SI-0      187    233    196    376
SI-1       42     80    130    307
SI-2      152     39     82    200
SI-3      234    145     63    134
SI-4      371    266    208     35
SI-5      408    297    253     62
SI-6      411    299    259     69

Analysis of the data in Table 1 suggests that, by the minimum-distance criterion between objects on the real and synthesized images (Fig. 3), the following correspondences hold: RI-0 ↔ SI-1; RI-1 ↔ SI-2; RI-2 ↔ SI-3; RI-3 ↔ SI-4. However, verification of the found correspondences against the other object parameters according to rule (2) leaves only three correspondences for the final imposition: RI-0 ↔ SI-1; RI-1 ↔ SI-2; RI-2 ↔ SI-3.

Now the final stage of processing can be performed – imposition of the contours of the synthesized image onto the real TV image. Fig. 4a shows the result of simply overlapping the synthesized image on the real one. Significant discrepancies between the contours are visible, expressed both in a shift of the synthesized image relative to the real one and in the contour dimensions. Fig. 4b shows the final result of imposition performed by the proposed algorithm. At the visual level the imposition quality can be estimated as satisfactory.

Fig. 4. a – result of overlapping the real and synthesized images; b – result of imposition.

To estimate the efficiency of the algorithm, every eighth frame of a video sequence corresponding to 4 s of flight has been imposed. The imposition quality has been estimated using the index α introduced in [7]. The main idea of this method is the following. The image is divided into square blocks (cells) of a specified size, e.g. 100×100 pixels. This makes it possible to obtain not only an integral estimate of the imposition quality but also local estimates in each of the square blocks. In each cell, for every informative (non-background) point of one of the images, the informative points of the other image lying in a square neighbourhood of size (2k+1)×(2k+1) centred at the processed informative point are searched for. As a rule, k = 1 or k = 2; the value k = 1 is equivalent to thickening a thin line of the first contour from one pixel to two pixels, and k = 2 – to three pixels. A sliding window of the chosen size (5×5 in the considered experiment) is moved along the rows of the image. As soon as an informative point of the first image gets into the centre of this neighbourhood, the informative pixels of the second image falling into the neighbourhood and not marked at the previous stages are searched for and marked. After the whole image has been scanned, the number m_i of marked points of the imposed (second) contour is calculated in each square block, and the ratio of this number to the total number M_i of informative points of the first contour is found, i.e. α_i = m_i / M_i. We call the coefficient α_i the imposition quality index in the i-th block of the image, and the coefficient α = Σ_i m_i / Σ_i M_i the integral index of the imposition quality of the whole contour.
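A minimal sketch of the computation of the integral index α is given below; the boolean contour-image representation is an assumption, and the per-block indices α_i are omitted for brevity (they are obtained by restricting the same counts to each cell).

```cpp
// Sketch of the imposition quality index alpha of [7]: for every informative
// pixel of the first contour image, a (2k+1)x(2k+1) window is scanned and
// not-yet-marked informative pixels of the second (imposed) contour image
// inside it are marked; alpha = (marked pixels of image 2) / (pixels of image 1).
#include <vector>

double impositionQuality(const std::vector<std::vector<bool>>& first,
                         const std::vector<std::vector<bool>>& second,
                         int k = 2)
{
    const int h = static_cast<int>(first.size());
    const int w = h ? static_cast<int>(first[0].size()) : 0;
    std::vector<std::vector<bool>> marked(h, std::vector<bool>(w, false));

    long long M = 0, m = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (!first[y][x]) continue;           // only informative pixels of image 1
            ++M;
            for (int dy = -k; dy <= k; ++dy)
                for (int dx = -k; dx <= k; ++dx) {
                    int ny = y + dy, nx = x + dx;
                    if (ny < 0 || ny >= h || nx < 0 || nx >= w) continue;
                    if (second[ny][nx] && !marked[ny][nx]) {
                        marked[ny][nx] = true;    // each pixel of image 2 counted once
                        ++m;
                    }
                }
        }
    return M > 0 ? static_cast<double>(m) / static_cast<double>(M) : 0.0;
}
```

With k = 2 the window is 5×5, as in the experiment described above.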
The results of the imposition quality estimation for the 13 pairs of real and DTM-synthesized frames from this video sequence are shown in Table 2. During the processing of the first frame an imposition disruption occurred, because the correspondence between contours on the real and corresponding synthesized images could not be determined. In two frames no enhancement of the imposition quality was achieved. In 10 frames the imposition quality increased by between 24% and 108%. In [12] the imposition of heterogeneous images on the same 13 frames was performed using piecewise-linear transformations, which allow projective distortions to be taken into account; that work achieved higher image imposition quality indices. However, the imposition technology described in [12] requires greater computational effort and has not yet been fully automated.

Table 2. Results of the image imposition estimation.

Frame pair   Index α before   Index α after   Change of α          Expert estimation
             imposition       imposition      (absolute / percent)  of imposition
1            0.281            ---             ---                   imposition disruption
9            0.282            0.402            0.120 /  42.6%       enhanced
17           0.247            0.421            0.174 /  70.4%       enhanced
25           0.231            0.481            0.250 / 108.2%       enhanced
33           0.232            0.418            0.186 /  80.2%       enhanced
41           0.334            0.236           -0.098 / -29.3%       worsened
49           0.300            0.389            0.089 /  29.7%       enhanced
57           0.313            0.241           -0.072 / -23.0%       worsened
65           0.229            0.364            0.135 /  59.0%       enhanced
73           0.324            0.426            0.102 /  31.5%       enhanced
81           0.279            0.381            0.102 /  36.6%       enhanced
89           0.261            0.362            0.101 /  38.7%       enhanced
97           0.194            0.240            0.046 /  23.7%       enhanced
Mean values  0.268            0.363            0.095 /  35.4%

The algorithm was also tested over a long time interval on real video sequences obtained from a TV camera forming part of the onboard vision system. Fig. 5 shows the quality estimation diagram for 900 frames (36 s of flight). The estimate of the mathematical expectation of the imposition quality over this video sequence was 0.27, and the estimated variance 0.004. The influence of the scene content on the imposition quality estimate has also been studied. It was found that the quality estimate of the results of the imposition algorithm based on transformation in the complex plane exceeds the index before imposition on those fragments where at least two objects can be extracted and put into correspondence. For the frames where no correspondence was determined, the quality estimate remains unchanged.

5. Description of the program-algorithm complex

The program-algorithm complex for image imposition in aviation vision systems, one constituent of which is the imposition algorithm based on transformation in the complex plane, contains the following main blocks:
1) a block for registration of the images obtained from a vision sensor (in the case of a bench implementation – frame capture from a video sequence);
2) a block for construction of a virtual image from the virtual terrain model;
3) a block for pre-processing of real and virtual images;
4) a block for imposition and removal of the geometrical mismatch between real and virtual images;
5) a block for visualization of the imposition result.
Blocks 2, 3 and 4 are the most important for the whole complex and the most complicated to construct.
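The five blocks listed above can be summarized, purely as an illustration, by the following skeleton; all type and function names are hypothetical stand-ins rather than the interfaces of the actual complex.

```cpp
// Sketch of one processing pass through the five blocks (hypothetical interfaces).
struct Frame {};            // image container, stand-in type
struct NavData {};          // current location and orientation
struct DigitalMap {};       // digital terrain map (SXF)

Frame captureFrame() { return {}; }                                         // block 1
Frame renderVirtualFrame(const DigitalMap&, const NavData&) { return {}; }  // block 2
Frame preprocess(const Frame& f) { return f; }                              // block 3
bool  impose(const Frame&, const Frame&, Frame& out) { out = {}; return true; } // block 4
void  visualize(const Frame&) {}                                            // block 5

void processOnePass(const DigitalMap& map, const NavData& nav)
{
    Frame real  = preprocess(captureFrame());
    Frame synth = preprocess(renderVirtualFrame(map, nav));
    Frame result;
    if (impose(real, synth, result))    // geometrical mismatch removed
        visualize(result);
}
```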
The construction of the virtual terrain model is realized by a separate software module in the developed implementation of the image imposition complex. Its main functions are positioning of the virtual camera according to the specified position coordinates and construction of the image from the available digital map in the SXF format. A frame can be obtained both as thin lines (wireframe) and with overlaid textures.

The pre-processing block realizes the auxiliary operations required for the geometrical imposition: boundary detection, removal of low-informative lines and extraction of connected contours for real images; removal of short lines and extraction of connected contours for virtual images. The quality of pre-processing largely determines the effectiveness of the subsequent imposition.

The key element of the program-algorithm complex for image imposition is the block of geometrical imposition. The imposition algorithm based on transformation in the complex plane, being a sufficiently fast and reliable approach, is suggested as one of the algorithms for removal of the geometrical mismatch. Its performance is as follows: the total cost of pre-processing does not exceed 0.3 s per frame, and the imposition procedures take about 0.05 s per frame. The calculations were performed on a computer with an Intel i7-3630QM processor (2.40 GHz) and 8 GB of random-access memory; the pre-processing and imposition algorithms are implemented in the C++ programming language. The organization of the program-algorithm complex is shown in Fig. 6.

Fig. 5. Diagram of the quality estimation for the video sequence.

Fig. 6. Organization of the program-algorithm complex for image imposition: the image from the TVS, the location and orientation data and the DTM feed the pre-processing and enhancement, imposition and visualization blocks, producing the output imposition result.

6. Conclusion

As mentioned above, correlation-extremal imposition methods provide the necessary accuracy but require unacceptably large amounts of computing time. Piecewise-linear transformations of the synthesized image to the plane of the real video image are vulnerable because of the difficulties of finding a set of key-point pairs in automatic mode [9]. The suggested algorithm is a certain compromise between the accuracy requirements for algorithms of heterogeneous image imposition and the requirements for their implementation in automatic mode and in real time. The considered algorithm can operate under large navigation parameter errors only if the similarity of the contours of corresponding objects on the real and synthesized images is preserved. The developed algorithm can be applied independently for image imposition, and it can also be used in combined schemes together with higher-level algorithms as a pre-imposition step.

References

[1] Elesina SI. Imposition of images in correlation-extreme navigation systems. Ed. by Kostyashkin LN, Nikiforov MB. Moscow: Radio Engineering, 2015; 208 p.
[2] Wisilter YuV. Aviation systems of enhanced and synthesized vision: analytical review of materials of foreign information sources. State Scientific Center of the Russian Federation, State Scientific and Research Institute of Aviation Systems (Federal State Unitary Enterprise "GosNIIAS"), Scientific Information Centre; ed. by Fedosov EA. Moscow, 2011; 77 p.
[3] Baklitsky VK. Correlation-extreme methods of navigation and guidance. Tver: Book Club, 2009; 360 p.
[4] Babayan PV, Ershov MD. Algorithms for removal of mismatches in the onboard vision system. Vestnik of RSREU 2015; 4(2): 32–38.
[5] Crum WR, Hartkens T, Hill DLG. Non-rigid image registration: theory and practice. The British Journal of Radiology 2004; 77: 140–153.
[6] Goshin EV, Kotov AP, Fursov VA. Two-stage formation of a spatial transformation for image imposition. Computer Optics 2014; 38(4): 886–891.
[7] Efimov AI, Novikov AI. An algorithm for multistage projective transformation adjustment for image superimposition. Computer Optics 2016; 40(1): 258–266. DOI: 10.18287/2412-6179-2016-40-2-258-266.
[8] Hast A, Nysjö J, Marchetti A. Optimal RANSAC – towards a repeatable algorithm for finding the optimal set. Journal of WSCG 2013; 21(1): 21–30.
[9] Gruzman IS, Kirichuk VS, Kosykh VP, Peretryagin GI, Spector AA. Digital image processing in information systems. Novosibirsk: Publishing House of NSTU, 2002; 351 p.
[10] Novikov AI, Sablina VA, Efimov AI. Image superimposition technique in computer vision systems using contour analysis methods. Proceedings of the 5th Mediterranean Conference on Embedded Computing (MECO) 2016: 132–137.
[11] Canny J. A computational approach to edge detection. IEEE Transactions on Pattern Analysis and Machine Intelligence 1986; PAMI-8(6): 679–698.
[12] Novikov AI, Sablina VA, Nikiforov MB. Algorithms for automatic identification of objects on heterogeneous images and image imposition. Proceedings of the III International Conference and Youth School "Information Technologies and Nanotechnologies" 2017: 599–607.