<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Machine Vision for Astronomical Images Using the Canny Edge Detector</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergii Khlamov</string-name>
          <email>sergii.khlamov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iryna Tabakova</string-name>
          <email>iryna.tabakova@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Trunova</string-name>
          <email>tetiana.trunova@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhanna Deineko</string-name>
          <email>zhanna.deineko@nure.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Kharkiv National University of Radio Electronics</institution>
          ,
          <addr-line>Nauki avenue 14, Kharkiv, 61000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <abstract>
<p>In this paper we present a realization of machine vision purposes in the scope of processing astronomical frames by the Canny edge detector. Its main purpose is to select the image borders of an object of unknown shape against the frame background using different known recognition patterns. Parameters for the Canny edge detector are determined automatically from the results of frame pre-processing and are unique for each image in the input data set. The Canny edge detector was realized as a tool using the C++ programming language. The implementation of the machine vision purposes was tested on astronomical frames with different patterns, object shapes, sizes, and resolutions. Keywords: machine vision, image recognition, image processing, Canny edge detector, object detection, recognition patterns.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Astronomical frames are in the common case produced by charge-coupled devices (CCD) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] and can be received from different sources: archives, servers, predefined series, or "live" (online)
data streams. There are also various machine vision purposes [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] that include analyzing and understanding digital images, and methods for acquiring, processing, and extracting high-dimensional
information to produce symbolic or numerical data in the form of decisions [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        The machine vision and data mining [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] purposes regarding astronomical frame processing are
related to the following features: filtering [5], brightness equalization with background alignment [6],
detection of objects in images [7], motion detection of objects [8], astrometry (estimation of the object’s
position in an image, which can be converted to a position in the sky) [9], photometry (estimation of the
object’s brightness) [10], determination of the object’s parameters and its visible motion [11],
cataloging of reference objects [12], object recognition [13, 14], wavelet coherence analysis [15], etc.
There are several mathematical filters that differ in their nature but can be used at the
preprocessing stage in the pipeline for CCD-images [16] before the general frame processing method:
 fast Fourier transform (FFT) [17] – an algorithm that computes the discrete Fourier
transform (DFT) of a sequence, or its inverse (IDFT);
 low-pass filter [18] – a data cleaning process for bypassing the different artifacts of the
instrumental measurements. Such a filtering algorithm attenuates signals with frequencies
higher than the cutoff frequency and passes only the signals with a frequency lower than the
selected cutoff frequency;
 edge detection methods [19] that are aimed at identifying edges as curves in a frame at
which the image brightness changes sharply or has discontinuities;
 corner detection methods [20] that are used in the machine vision process to extract the
appropriate kinds of features of an image;
      </p>
      <p>2022 Copyright for this paper by its authors.</p>
      <p>
 blob (point) detection methods [21] that are aimed at identifying the various
regions of the investigated objects in the astronomical CCD-images. Such regions differ
in color and brightness/gray shade from the adjacent
regions. A blob is a kind of region whose points or even pixels share the same constant or
approximately constant properties;
 ridge detection methods [22] that are aimed at localizing ridges in the frame,
defined as curves whose points are local maxima of the image function, analogous to
geographical ridges.</p>
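<p>As a minimal illustration of the low-pass filtering idea listed above, the following sketch (an illustrative assumption, not the pipeline's actual filter) smooths a 1D signal with a sliding-window average, attenuating high-frequency noise while passing slow variations:</p>

```cpp
#include <cassert>
#include <vector>

// Sliding-window moving average: a crude low-pass filter. High-frequency
// fluctuations are averaged out, while slowly varying trends pass through.
// 'radius' is the half-width of the averaging window.
std::vector<double> movingAverage(const std::vector<double>& signal, int radius) {
    const int n = static_cast<int>(signal.size());
    std::vector<double> out(n, 0.0);
    for (int i = 0; i < n; ++i) {
        double sum = 0.0;
        int count = 0;
        // Clamp the window at the signal borders.
        for (int j = i - radius; j <= i + radius; ++j) {
            if (j >= 0 && j < n) {
                sum += signal[j];
                ++count;
            }
        }
        out[i] = sum / count;
    }
    return out;
}
```

<p>A constant signal passes through unchanged, while a sharp spike is spread out and attenuated, which is the desired low-pass behavior.</p>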
      <p>The object recognition process using a predefined recognition pattern can be simplified by
preparing more accurately determined image borders of an object [23]. There are many different
patterns or types of objects in an image: point, long, blurred, and objects with flare or intersection with
other objects. Among them are galaxies, stars, robots, drones [24], rockets, satellites [25], and even
small Solar System objects [26].</p>
      <p>So, for our research we selected the Canny edge detector from the edge detector family [19] to
analyze its ability to identify the edges of different astronomical objects in images, which have a more
complicated structure and random background noise.</p>
      <p>In this paper we present the several most widely known recognition patterns for astronomical
frames, which are commonly used during astronomical image processing tasks such as classification. Also,
we present real examples of applying the Canny edge detector implemented in the developed
tool using the C++ programming language. This tool is a realization of the machine vision purposes
for processing astronomical frames with different patterns, sizes, object shapes, and resolutions.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Astronomical object classification</title>
      <p>
        One of the main directions of image processing in astronomy is object classification. This
process corresponds to one of the steps of the general knowledge discovery in databases (KDD) process [
        <xref ref-type="bibr" rid="ref5">27</xref>
        ].
      </p>
      <p>There are two well-known helpful concepts in astronomical object classification:
completeness (also known as recall) and efficiency (also known as precision). The completeness
concept is defined in terms of the true positives (TP) and false negatives (FN). The efficiency concept is
defined in terms of the true positives (TP) and false positives (FP).</p>
      <p>The completeness concept is the fraction of objects that are really related to the appropriate class
with some pattern, as given by the following formula [7, 14]:
Completeness = TP / (TP + FN). (1)</p>
      <p>The efficiency concept is the fraction of already classified objects that are really related to
the appropriate pattern, as given by the following formula [8, 14]:
Efficiency = TP / (TP + FP). (2)</p>
      <p>These two metrics are of great interest for astrophysics research when both completeness and
efficiency are high. The importance of each quantity often depends on the situation. For
example, an investigation of rare objects generally requires a high completeness while allowing
a lower efficiency, but the statistical clustering or classification of cosmological and astrophysical
objects requires a high efficiency, even if the completeness of such a procedure is very expensive.</p>
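<p>The two concepts above reduce to simple ratios over the confusion counts, as in the following sketch:</p>

```cpp
#include <cassert>

// Completeness (recall): fraction of all real objects of a class that were
// found, i.e. TP / (TP + FN).
double completeness(int tp, int fn) {
    return static_cast<double>(tp) / (tp + fn);
}

// Efficiency (precision): fraction of classified objects that really belong
// to the class, i.e. TP / (TP + FP).
double efficiency(int tp, int fp) {
    return static_cast<double>(tp) / (tp + fp);
}
```

<p>For example, finding 8 of 10 real objects gives a completeness of 0.8, even if extra false detections keep the efficiency lower.</p>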
      <p>
        Astronomical object classification can also be performed via the bivariate luminosity function
and the morphology-density relation, where both a digital sky survey of this size and detailed Hubble
types are used. Such a realization uses a committee of Artificial Neural Networks (ANNs) [
        <xref ref-type="bibr" rid="ref6">28</xref>
        ] in a
"waterfall" arrangement, in which the output of one ANN is the input of another ANN. This
refinement produces more detailed classes and subclasses, which improves the results using the
spectral principal components and investigation of their kinematics.
      </p>
      <p>
        Different genetic algorithms with evolving ANNs are used for attribute selection and
classification of the "bent-double" galaxies in the FIRST (Faint Images of the Radio Sky at
Twenty-cm) [
        <xref ref-type="bibr" rid="ref7">29</xref>
        ] radio survey data. FIRST is a project that was developed to generate a radio
equivalent of the Palomar Observatory Sky Survey over 10,000 square degrees of the North and South
Galactic Caps.
      </p>
      <p>
        Using the NRAO Very Large Array (VLA) and an automated mapping pipeline, the FIRST project
produced images with the following characteristics: 1.8" pixels, a typical root mean square (RMS) of
0.15 mJy, and an image resolution of 5". The detection threshold was set at a 1 mJy source, and there
were about 90 sources per square degree, of which about 35 percent have a formed structure on
scales from 2-30". Only 30 percent of the FIRST sources have analogues in the known modern
Sloan Digital Sky Survey (SDSS) catalogue [
        <xref ref-type="bibr" rid="ref8">30</xref>
        ].
      </p>
      <p>The radio morphology includes the compact nucleus of a radio galaxy and extremely long
jets; in this way the bent-double morphology indicates the presence of a galaxy cluster. Such
morphology detection can be performed by different approaches, such as a combination of
ensembles of ANNs with locally weighted regression, or even by using fuzzy algebra and
heuristic methods, anticipating the importance of probabilistic studies that are just now beginning to
emerge.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Recognition patterns</title>
      <p>
        Pattern recognition techniques are the base of the machine vision purposes in processing of
astronomical frames. Such recognition techniques are related to astronomical object classification
and simplify this process using special methods, templates, patterns, etc. These algorithms are used
for the assignment of the initial input data to a certain class or group by selecting the major features
that characterize this group [
        <xref ref-type="bibr" rid="ref9">31</xref>
        ]. So, for the objects in images received as real sets of astronomical
frames, the following different recognition patterns can be applied [
        <xref ref-type="bibr" rid="ref10">32</xref>
        ].
      </p>
    </sec>
    <sec id="sec-4">
      <title>Point objects</title>
      <p>The point objects in an astronomical frame have a round shape. These objects have only one
brightness peak in their center. An example of point objects in an astronomical frame is presented
in Figure 1.</p>
    </sec>
    <sec id="sec-5">
      <title>Long objects</title>
      <p>The extended or long objects in an astronomical frame have a round shape at the ends of their
form and a semi-axes ratio of at least 1:4 or even more. These objects have several brightness peaks that
lie very close together along the line of the object’s direction. An example of long objects in an
astronomical frame is presented in Figure 2.</p>
    </sec>
    <sec id="sec-5a">
      <title>Blurred objects</title>
      <p>
        The blurred objects in an astronomical frame have a round shape at the ends of their form and up
to a 1:4 semi-axes ratio. These objects have the same properties as extended/long objects but have
a different origin. The reasons for the creation of such frames are the following: various telescope
aberrations, different failures in the diurnal tracking, telescope coma [
        <xref ref-type="bibr" rid="ref11">33</xref>
        ], etc. An example of
blurred objects in an astronomical frame is presented in Figure 3.
      </p>
    </sec>
    <sec id="sec-6">
      <title>Objects with flare or intersection with other objects</title>
      <p>This type of recognition pattern is not so common, but it can appear as a combination of
other patterns, e.g., point, long, and blurred objects. Its major singularity is the crossing with other
objects in the astronomical frame, where the brightness of the closely located object is greater than the
brightness of the investigated object.</p>
      <p>Another cause of the appearance of such a superfluous flare is a very long exposure time. This situation is
very complicated for automated machine vision because the brightness peaks of the intersected
objects are mixed and unspecified.</p>
      <p>An example of objects with flare and intersection with other objects in an astronomical
frame is presented in Figure 4.</p>
    </sec>
    <sec id="sec-7">
      <title>4. Canny edge detector</title>
      <p>The Canny edge detector is an operator from the edge detection family that uses a multi-step
algorithm to detect a wide range of edges or corners in CCD-images. Canny edge detection is a
method that helps to extract useful structural information from various vision objects and reduce the
amount of data for further processing. Such a filter is widely used in different computer vision
systems.</p>
      <p>
        Canny found that the requirements for applying edge detection on diverse
vision systems are comparatively similar. Thus, an edge detection technique can be used in a very
wide range of conditions [
        <xref ref-type="bibr" rid="ref12">34</xref>
        ].
      </p>
    </sec>
    <sec id="sec-8">
      <title>Criteria</title>
      <p>There are several common criteria for edge detection:
 detection of edges with a low error rate – the detection should precisely catch as many
edges presented in the CCD-image as possible;
 the edge point detected by the operator should accurately localize on the center of the
CCD-image’s edge or corner;
 a given edge in the CCD-image should be marked only once, and, where possible, image
noise should not create false edges or corners.</p>
    </sec>
    <sec id="sec-9">
      <title>Processing steps</title>
      <p>
        The processing algorithm of the Canny edge detector consists of the following five processing
steps [
        <xref ref-type="bibr" rid="ref13">35</xref>
        ]:
 Applying the Gaussian filter [18] to remove noise by smoothing the CCD-image under
processing. The (2k + 1) x (2k + 1) Gaussian filter kernel H is given by the following equation:
H(i, j) = (1 / (2πσ²)) exp(-((i - (k + 1))² + (j - (k + 1))²) / (2σ²)), (3)
where 1 ≤ i, j ≤ (2k + 1).
      </p>
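<p>The Gaussian smoothing step can be sketched as follows (a self-contained illustration; k and σ are the kernel half-width and standard deviation from equation (3), and the final normalization to unit sum is a common practical addition):</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Build the (2k+1) x (2k+1) Gaussian kernel H from equation (3):
// H(i,j) = 1/(2*pi*sigma^2) * exp(-((i-(k+1))^2 + (j-(k+1))^2) / (2*sigma^2)),
// for 1 <= i, j <= 2k+1, then normalize the weights to sum to 1 so that
// smoothing preserves the overall image brightness.
std::vector<std::vector<double>> gaussianKernel(int k, double sigma) {
    const int size = 2 * k + 1;
    const double pi = std::acos(-1.0);
    std::vector<std::vector<double>> h(size, std::vector<double>(size));
    double sum = 0.0;
    for (int i = 1; i <= size; ++i) {
        for (int j = 1; j <= size; ++j) {
            const double di = i - (k + 1);
            const double dj = j - (k + 1);
            h[i - 1][j - 1] =
                std::exp(-(di * di + dj * dj) / (2.0 * sigma * sigma)) /
                (2.0 * pi * sigma * sigma);
            sum += h[i - 1][j - 1];
        }
    }
    for (auto& row : h)
        for (double& v : row) v /= sum;
    return h;
}
```

<p>The kernel weight is largest at the center cell (i = j = k + 1) and falls off symmetrically, so convolving with it blurs the frame isotropically.</p>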
      <p>Determining the image intensity gradients using the following equations:
G = √(Gx² + Gy²), (4)
Θ = atan2(Gy, Gx), (5)
where Gx and Gy are two images which at each point contain the horizontal and vertical
derivative approximations respectively, obtained by convolving the source image A with a pair of
derivative kernels; A is the source image.</p>
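<p>The gradient step can be illustrated with the 3x3 Sobel kernels as the derivative approximations (one common choice; the detector only requires some pair of derivative kernels):</p>

```cpp
#include <cassert>
#include <cmath>
#include <vector>

using Image = std::vector<std::vector<double>>;

// Horizontal and vertical derivative approximations Gx, Gy at an interior
// pixel (y, x), computed with the 3x3 Sobel kernels, followed by the
// gradient magnitude G (equation (4)) and direction Theta (equation (5)).
void gradientAt(const Image& a, int y, int x, double& g, double& theta) {
    const double gx = (a[y - 1][x + 1] + 2 * a[y][x + 1] + a[y + 1][x + 1]) -
                      (a[y - 1][x - 1] + 2 * a[y][x - 1] + a[y + 1][x - 1]);
    const double gy = (a[y + 1][x - 1] + 2 * a[y + 1][x] + a[y + 1][x + 1]) -
                      (a[y - 1][x - 1] + 2 * a[y - 1][x] + a[y - 1][x + 1]);
    g = std::sqrt(gx * gx + gy * gy); // equation (4)
    theta = std::atan2(gy, gx);       // equation (5)
}
```

<p>For a vertical step edge (brightness rising from left to right), the gradient points horizontally and Theta comes out as 0, as expected.</p>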
      <p>Applying gradient magnitude thresholding with a lower bound cut-off
suppression to get rid of spurious responses to edge detection.</p>
      <p>Applying the predefined double threshold to determine the potential edges in the
CCD-image under processing.</p>
      <p>Finalizing the edge detection by suppressing all the weak edges that are not connected to
strong edges.</p>
    </sec>
    <sec id="sec-10">
      <title>Thresholds</title>
      <p>For the last processing step of the Canny edge detector, the higher and lower threshold values are
selected. If the gradient value of an edge pixel is higher than the higher threshold value, it is marked
as a strong edge pixel.</p>
      <p>If the gradient value of an edge pixel is lower than the higher threshold value and higher than the
lower threshold value, it is marked as a weak edge pixel. If the gradient value of an edge pixel is lower
than the lower threshold value, it is rejected.</p>
      <p>
        These two threshold values are determined according to the given input astronomical image [
        <xref ref-type="bibr" rid="ref14">36</xref>
        ].
      </p>
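<p>The thresholding rule above can be written directly (a sketch; the handling of values exactly equal to a threshold is a free choice here):</p>

```cpp
#include <cassert>

enum class EdgeClass { Strong, Weak, Suppressed };

// Double-threshold classification of a pixel's gradient magnitude:
// above 'high' -> strong edge; between 'low' and 'high' -> weak edge,
// kept later only if connected to a strong edge; below 'low' -> rejected.
EdgeClass classifyPixel(double gradient, double low, double high) {
    if (gradient > high) return EdgeClass::Strong;
    if (gradient > low) return EdgeClass::Weak;
    return EdgeClass::Suppressed;
}
```
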
    </sec>
    <sec id="sec-11">
      <title>5. Image recognition test examples</title>
      <p>
        The Canny edge detector can be used as a pre-processing method in the pipeline. The proper place
for it is before the main image processing algorithm (classification [
        <xref ref-type="bibr" rid="ref9">31</xref>
        ] and recognition
[5] of objects, estimation of the image parameters or apparent motion of an object [11]).
      </p>
      <p>In our research we applied the Canny edge detector after the brightness alignment and background
equalization processing using the inverse median filter [6]. We chose this order because the object’s
edges are detected more precisely when the Canny edge detector is applied to an astronomical frame
without redundant noise.</p>
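<p>The abstract notes that the detector's parameters are determined automatically from the pre-processed frame. One widely used heuristic for such automatic selection (an illustrative assumption here, not necessarily the rule used in the developed tool) derives both thresholds from the median pixel value of the frame:</p>

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Median-based automatic threshold selection: with spread s in (0, 1),
// low = (1 - s) * median and high = (1 + s) * median, so the thresholds
// adapt to each frame's overall brightness level.
void autoThresholds(std::vector<double> pixels, double s,
                    double& low, double& high) {
    const std::size_t mid = pixels.size() / 2;
    std::nth_element(pixels.begin(), pixels.begin() + mid, pixels.end());
    const double median = pixels[mid];
    low = (1.0 - s) * median;
    high = (1.0 + s) * median;
}
```

<p>Because the thresholds scale with the frame's median, the same spread parameter yields sensible values for both faint and bright frames.</p>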
      <p>For the current research we took astronomical frames for image recognition purposes with
the following various resolutions: 512 x 512, 768 x 512, 3056 x 3056, and 4008 x 2672 pixels. Also,
different conditions were used during the observations. Such astronomical frames were
received from several real observatories and used as the test examples.</p>
      <p>
        These test data were selected in the scope of the current research from the following real observatories:
ISON-NM [
        <xref ref-type="bibr" rid="ref15">37</xref>
        ], ISON-Kislovodsk [
        <xref ref-type="bibr" rid="ref15">37</xref>
        ], Vihorlat Observatory in Humenne [10], and the Mayaki observing
station of the "Astronomical Observatory" Research Institute of I. I. Mechnikov Odessa National
University [
        <xref ref-type="bibr" rid="ref16">38</xref>
        ] with unique observatory codes "H15", "D00", "Humenne", and "583" respectively.
      </p>
      <p>
        The information about these observatories is provided in the Table 1. The observatory codes are
unique and approved by the Minor Planet Center (MPC) [
        <xref ref-type="bibr" rid="ref17">39</xref>
        ] of the International Astronomical Union
(IAU) [
        <xref ref-type="bibr" rid="ref18">40</xref>
        ].
      </p>
      <p>The image recognition test examples consist of different series of astronomical CCD-frames
that were collected during regular observations by various CCD-cameras. The information
about the CCD-cameras that are installed on the telescopes of the observatories listed above is
presented in the Table 2.</p>
      <p>This table contains the following information about each CCD-camera: model and its parameters, like
resolution, pixel size, and exposure time.</p>
      <p>Each series of CCD-frames includes different investigated objects in each frame of the series.
The processing results for some examples of astronomical frames, according to the different recognition
patterns of objects described above, obtained using the developed tool with the implemented Canny edge detector,
are presented in the Figure 5.</p>
      <p>The processing results and subsequent analysis showed that the Canny edge detector has good
accuracy of edge detection in astronomical frames that contain point objects and
single long objects. The processing results of the edge detection for blurred objects are not so
precise. The reason is that a few brightness peaks lie very close together along the
line of the object’s direction, so the images of several objects are mistakenly merged into one
object, which makes further processing very uncertain and complicated. As a result, the Canny
edge detector is not so effective for such astronomical recognition patterns.</p>
    </sec>
    <sec id="sec-12">
      <title>6. Conclusions</title>
      <p>
        The tool implementing the machine vision purposes in processing of astronomical
frames using the Canny edge detector was developed as research under the CoLiTec project [
        <xref ref-type="bibr" rid="ref5">27</xref>
        ]. The
tool with a realization of the Canny edge detector uses the C++ programming language, the OpenCV
mathematical library, and the Qt graphical library. The special test data in the form of sets of
astronomical CCD-frames [
        <xref ref-type="bibr" rid="ref19">41</xref>
        ] were selected for the current research from the various observatories
equipped with different telescopes. Such astronomical frames had various observation
conditions, resolutions of the required quality, and a presence of objects in the frames that can be
recognized by the predefined investigated patterns. The quality indicators, like the conditional probability
of true detection and the accuracy of the edge detection of objects, were analyzed, which made it possible to
draw conclusions.
      </p>
      <p>
        By applying the Canny edge detector at the pre-processing stage in a pipeline for
astronomical frames, the precision of the major image processing methods (classification [
        <xref ref-type="bibr" rid="ref5">27</xref>
        ] and
recognition [5] of objects, matched filtration [
        <xref ref-type="bibr" rid="ref20">42</xref>
        ], estimation of the image parameters or apparent
motion of an object [11], photometry [
        <xref ref-type="bibr" rid="ref21">43</xref>
        ]) increases by 5-20%. As the results showed, applying the Canny edge
detector increases the precision of the edge detection for point and single long
objects. For blurred objects in astronomical frames, it is not so effective because of the
increased difficulty of detection and the chance of losing the object. The received results,
including the generated experiments, will also be used for machine learning [
        <xref ref-type="bibr" rid="ref22">44</xref>
        ] and
time series [
        <xref ref-type="bibr" rid="ref23">45</xref>
        ] analysis.
      </p>
    </sec>
    <sec id="sec-13">
      <title>7. Acknowledgements</title>
      <p>The authors thank all observers, online services, and tools that provided their data for the
current investigation and for testing of the developed tool implementing the machine vision
purposes in processing of astronomical frames using the Canny edge detector.</p>
    </sec>
    <sec id="sec-14">
      <title>8. References</title>
      <p>Instrumentation, Software and Cyberinfrastructure for Astronomy II, vol. 8451, 2012. doi:
10.1117/12.925321.
[5] S. Khlamov, et al., Recognition of the astronomical images using the Sobel filter, Proceedings of
the International Conference on Systems, Signals, and Image Processing, IWSSIP 2022, 4 p.,
2022. doi: 10.1109/IWSSIP55020.2022.9854425.
[6] V. Savanevych, et al., CoLiTecVS software for the automated reduction of photometric
observations in CCD-frames, Astronomy and Computing, vol. 40 (100605), 15 p., 2022. doi:
10.1016/j.ascom.2022.100605.
[7] V. Savanevych, et al., Formation of a typical form of an object image in a series of digital frame,
Eastern-European Journal of Enterprise Technologies, vol. 6, issue 2 (120), pp. 51–59, 2022. doi:
10.15587/1729-4061.2022.266988.
[8] T. Ando, Bayesian model selection and statistical modeling, CRC Press, 2010.
[9] V. Kudak, V. Klimik, and V. Epishev, Evaluation of disturbances from solar radiation in orbital
elements of geosynchronous satellites based on harmonics, Astrophysical Bulletin, vol. 65 (3),
pp. 300-310, 2010. doi: 10.1134/S1990341310030120.
[10] K. M. Hampson, D. Gooding, R. Cole, and M. J. Booth, High precision automated alignment
procedure for two-mirror telescopes, Applied Optics, vol. 58, pp. 7388-7391, 2019.
[11] A. Massone, A. Perasso, C. Campi, and M. Beltrametti, Profile detection in medical and astronomical
images by means of the Hough transform of special classes of curves, Journal of Mathematical
Imaging and Vision, vol. 51 (2), pp. 296-310, 2015. doi: 10.1007/s10851-014-0521-4.
[12] V. Savanevych, et al., Selection of the reference stars for astrometric reduction of CCD-frames,
Advances in Intelligent Systems and Computing, vol. 1080, pp. 881–895, 2020. doi:
10.1007/978-3-030-33695-0_57.
[13] J. Luan, Book review: Practical algorithms for image analysis: Description, examples, and code,
Discrete Dynamics in Nature and Society, vol. 6 (3), pp. 219–220, 2001. doi:
10.1155/s1026022601000243.
[14] R. Gonzalez, and R. Woods, Digital image processing, 4th edition, NY: Pearson, 2018.
[15] M. Dadkhah, et al., Methodology of wavelet analysis in research of dynamics of phishing
attacks, International Journal of Advanced Intelligence Paradigms, vol. 12(3-4), pp. 220-238,
2019. doi: 10.1504/IJAIP.2019.098561.
[16] J. R. Janesick, Scientific charge-coupled devices, SPIE press, 2001. doi: 10.1117/3.374903.
[17] S. Haynal, and H. Haynal, Generating and searching families of FFT algorithms, Journal on
Satisfiability, Boolean Modeling and Computation, vol. 7(4), pp. 145-187, 2011. doi:
10.48550/arXiv.1103.5740.
[18] J. Karki, Active low-pass filter design, Texas Instruments application report, 2000.
[19] J. M. Park, and Y. L. Murphey, Edge detection in grayscale, color, and range images, Wiley
Encyclopedia of Computer Science and Engineering, pp. 1-16, John Wiley &amp; Sons, Inc., 2008.
doi: 10.1002/9780470050118.ecse603.
[20] E. Rosten, and T. Drummond, Machine learning for high-speed corner detection, Lecture Notes
in Computer Science, vol. 3951, pp. 430-443, 2006.
[21] T. Lindeberg, Scale selection properties of generalized scale-space interest point detectors,
Journal of Mathematical Imaging and vision, vol. 46(2), pp. 177-210, 2013. doi:
10.1007/s10851-012-0378-3.
[22] T. Lindeberg, Edge detection and ridge detection with automatic scale-selection, International
journal of computer vision, vol. 30(2), pp. 117–156, 1998.
[23] M. Svensén, and C. M. Bishop, Pattern recognition and machine learning, Springer, 2007.
[24] M. Ivanov, et al., Effective informational entropy reduction in multi-robot systems based on
realtime TVS, IEEE International Symposium on Industrial Electronics, pp. 1162–1167, 2019. doi:
10.1109/ISIE.2019.8781209.
[25] V. Akhmetov, et al., Cloud computing analysis of Indian ASAT test on March 27, 2019,
Proceedings of the 2019 IEEE International Scientific-Practical Conference: Problems of
Infocommunications Science and Technology, PIC S and T 2019, pp. 315–318, 2019. doi:
10.1109/PICST47496.2019.9061243.
[26] M. Mahlke, et al., The ssos pipeline: Identification of Solar System objects in astronomical
images, Astronomy and Computing, vol. 28, 100289, 2019. doi: 10.1016/j.ascom.2019.100289.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Smith,</surname>
          </string-name>
          <article-title>The invention and early history of the CCD, Rev</article-title>
          .
          <source>Mod. Phys</source>
          , vol.
          <volume>3</volume>
          ,
          <issue>issue</issue>
          82, pp.
          <fpage>2307</fpage>
          -
          <lpage>2312</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>R.</given-names>
            <surname>Klette</surname>
          </string-name>
          , Concise computer vision, Springer, London,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>C.</given-names>
            <surname>Steger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ulrich</surname>
          </string-name>
          , and
          <string-name>
            <given-names>C.</given-names>
            <surname>Wiedemann</surname>
          </string-name>
          ,
          <article-title>Machine vision algorithms and applications</article-title>
          , John Wiley &amp; Sons,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Cavuoti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brescia</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>Data mining and knowledge discovery resources for astronomy in the Web 2.0 age, Proceedings of the SPIE Astronomical Telescopes</article-title>
          and
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>K.</given-names>
            <surname>Borne</surname>
          </string-name>
          ,
          <article-title>Scientific data mining in astronomy, Data Mining and Knowledge Discovery Series, Chapman</article-title>
          and Hall/CRC, pp.
          <fpage>115</fpage>
          -
          <lpage>138</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A. A.</given-names>
            <surname>Collister</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Lahav</surname>
          </string-name>
          ,
          <source>ANNz: Estimating Photometric Redshifts Using Artificial Neural Networks, The Publications of the Astronomical Society of the Pacific</source>
          , vol.
          <volume>116</volume>
          , issue 818, pp.
          <fpage>345</fpage>
          -
          <lpage>351</lpage>
          ,
          <year>2004</year>
          . doi:
          <volume>10</volume>
          .1086/383254.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>N.</given-names>
            <surname>Thyagarajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Helfand</surname>
          </string-name>
          ,
          <string-name>
            <surname>R. L. White,</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. H.</given-names>
            <surname>Becker</surname>
          </string-name>
          ,
          <article-title>Variable and Transient Radio Sources in the FIRST Survey, The Astrophysical Journal</article-title>
          , vol.
          <volume>742</volume>
          , id
          <volume>49</volume>
          , 15 p.,
          <year>2011</year>
          . doi: 10.1088/0004-637X/742/1/49.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>Zhenping</given-names>
            <surname>Yi</surname>
          </string-name>
          , et al.,
          <article-title>Automatic detection of low surface brightness galaxies from SDSS images</article-title>
          ,
          <source>MNRAS</source>
          , stac775,
          <year>2022</year>
          . doi: 10.1093/mnras/stac775.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>W.</given-names>
            <surname>Burger</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Burge</surname>
          </string-name>
          ,
          <source>Principles of digital image processing: fundamental techniques</source>
          , New York, NY: Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>W. R.</given-names>
            <surname>Howard</surname>
          </string-name>
          ,
          <article-title>Pattern recognition and machine learning</article-title>
          ,
          <source>Kybernetes</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>D. J.</given-names>
            <surname>Schroeder</surname>
          </string-name>
          ,
          <source>Astronomical optics</source>
          , Elsevier,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>T.</given-names>
            <surname>Moeslund</surname>
          </string-name>
          ,
          <article-title>Canny edge detection</article-title>
          ,
          <source>Laboratory of Computer Vision and Media Technology</source>
          , Aalborg University, Denmark,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>H.</given-names>
            <surname>Scharr</surname>
          </string-name>
          ,
          <article-title>Optimal operators in digital image processing</article-title>
          , Doctoral dissertation, Rupertus Carola University,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [36]
          <string-name>
            <given-names>R.</given-names>
            <surname>Kimmel</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Bruckstein</surname>
          </string-name>
          ,
          <article-title>Regularized Laplacian zero crossings as optimal edge integrators</article-title>
          ,
          <source>International Journal of Computer Vision</source>
          , vol.
          <volume>53</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>225</fpage>
          -
          <lpage>243</lpage>
          ,
          <year>2003</year>
          . doi: 10.1023/A:1023030907417.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [37]
          <string-name>
            <given-names>I.</given-names>
            <surname>Molotov</surname>
          </string-name>
          , et al.,
          <article-title>ISON worldwide scientific optical network</article-title>
          ,
          <source>Fifth European Conference on Space Debris</source>
          ,
          <source>ESA SP-672</source>
          ,
          <issue>7</issue>
          ,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [38]
          <string-name>
            <given-names>B.</given-names>
            <surname>Carry</surname>
          </string-name>
          , et al.,
          <article-title>Potential asteroid discoveries by the ESA Gaia mission: Results from follow-up observations</article-title>
          ,
          <source>Astronomy and Astrophysics</source>
          ,
          <volume>648</volume>
          ,
          <issue>A96</issue>
          ,
          <year>2021</year>
          . doi: 10.1051/0004-6361/202039579.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [39]
          <article-title>The Minor Planet Center (MPC) of the International Astronomical Union</article-title>
          . URL: https://minorplanetcenter.net.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [40]
          <article-title>List of Observatory Codes: IAU Minor Planet Center</article-title>
          . URL: https://minorplanetcenter.net/iau/lists/ObsCodesF.html
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>G.</given-names>
            <surname>Adam</surname>
          </string-name>
          , et al.,
          <article-title>Embedded microcontroller with a CCD camera as a digital lighting control system</article-title>
          ,
          <source>Electronics</source>
          , vol.
          <volume>8</volume>
          (
          <issue>1</issue>
          ), p.
          <fpage>33</fpage>
          ,
          <year>2019</year>
          . doi: 10.3390/electronics8010033.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>S.</given-names>
            <surname>Khlamov</surname>
          </string-name>
          , et al.,
          <article-title>Development of computational method for matched filtration with analytic profile of the blurred digital image</article-title>
          ,
          <source>Eastern-European Journal of Enterprise Technologies</source>
          , vol.
          <volume>5</volume>
          , issue 4-119, pp.
          <fpage>24</fpage>
          -
          <lpage>32</lpage>
          ,
          <year>2022</year>
          . doi: 10.15587/1729-4061.2022.265309.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [43]
          <string-name>
            <given-names>O.</given-names>
            <surname>Ilbert</surname>
          </string-name>
          , et al.,
          <article-title>Accurate photometric redshifts for the CFHT legacy survey calibrated using the VIMOS VLT deep survey</article-title>
          ,
          <source>Astronomy &amp; Astrophysics</source>
          , vol.
          <volume>457</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>841</fpage>
          -
          <lpage>856</lpage>
          ,
          <year>2006</year>
          . doi: 10.1051/0004-6361:20065138.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [44]
          <string-name>
            <given-names>N. M.</given-names>
            <surname>Ball</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Brunner</surname>
          </string-name>
          ,
          <article-title>Data mining and machine learning in astronomy</article-title>
          ,
          <source>International Journal of Modern Physics D</source>
          , vol.
          <volume>19</volume>
          (
          <issue>7</issue>
          ), pp.
          <fpage>1049</fpage>
          -
          <lpage>1106</lpage>
          ,
          <year>2010</year>
          . doi: 10.1142/S0218271810017160.
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [45]
          <string-name>
            <given-names>L.</given-names>
            <surname>Kirichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.S.A.</given-names>
            <surname>Alghawli</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Radivilova</surname>
          </string-name>
          ,
          <article-title>Generalized approach to analysis of multifractal properties from short time series</article-title>
          ,
          <source>International Journal of Advanced Computer Science and Applications</source>
          , vol.
          <volume>11</volume>
          , issue 5, pp.
          <fpage>183</fpage>
          -
          <lpage>198</lpage>
          ,
          <year>2020</year>
          . doi: 10.14569/IJACSA.2020.0110527.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>