<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Linear Objects Detection on SAR Images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleg Yu. Ivanov</string-name>
          <email>iv@list.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ural Federal University</institution>
          ,
          <addr-line>Yekaterinburg, Mira st., 19, Russia</addr-line>
        </aff>
      </contrib-group>
      <fpage>58</fpage>
      <lpage>65</lpage>
      <abstract>
        <p>Allocation of linear structures on images is required for solving a large number of thematic problems of Earth remote sensing. Roads and railways, power lines, pipelines, and borders of natural areas ("land-sea", "forest-field") are examples of such structures. It is known that traditional algorithms for detecting linear targets and structures are not effective, because radar images have special distinctive features (i.e., speckle noise) that complicate the target detection problems. In the paper, a neural network algorithm based on the Hough transform and Kohonen neural networks is suggested and studied. At the first stage, a radar image is transformed into a Hough plane, where linear targets give peak responses; then a Kohonen neural network is used to find these peaks. It is shown that the neural gas algorithm for adjusting the network weights is more suitable than the "Winner takes all" rule. Also, an exponential weight calibration function for better convergence is offered. Examples of processing real radar space images obtained by RADARSAT-1 are given. The processing results show that the suggested algorithm is suitable for linear target detection on radar images.</p>
      </abstract>
      <kwd-group>
        <kwd>Linear objects detection algorithm</kwd>
        <kwd>synthetic aperture radar</kwd>
        <kwd>Hough transformation</kwd>
        <kwd>Kohonen neural network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Nowadays, satellite imagery is becoming increasingly popular due to the wide
spectrum of problems it solves. Successful operation of a variety of orbital remote
monitoring systems allows one to obtain quality images of the Earth's surface in
different bands of the electromagnetic spectrum. Among these, synthetic
aperture radar (SAR) technology plays an important role, since it can carry out
imaging regardless of weather conditions and natural lighting of the surface [
        <xref ref-type="bibr" rid="ref1 ref6 ref7">1, 6, 7</xref>
        ].
      </p>
      <p>
        There are several algorithms for automatic detection of linear structures in
remote sensing data; functional analysis (Fourier analysis, the Gabor wavelets
algorithm) and parametric analysis algorithms [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] are most commonly used.
The problem is further complicated by the fact that coherent radar images
(obtained by SAR) have their own very specific characteristics, the most
important of which is their distinctive spotting, the so-called speckle noise
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. Speckle significantly reduces the effectiveness of the mentioned algorithms. In
this paper, we consider an algorithm for linear structure detection based on the
Hough transformation with subsequent analysis by a Kohonen neural network.
Application of neural networks enables parallel processing and provides high
noise immunity.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Classification algorithm</title>
      <p>The proposed algorithm consists of several steps. At first, the Hough
transformation is performed on a fragment of the original image I(m, n); it transforms
pixel coordinates (m, n) into a Hough space A(ρ, θ):</p>
      <p>ρ = m cos θ + n sin θ,
where m and n are the point's pixel Cartesian coordinates, and ρ and θ are the
point's polar coordinates.</p>
      <p>
        As a result of this transformation, each line of the original image
corresponds to a point in the Hough space (Fig. 1). For a binary image, the
transformation is performed for non-zero pixels only. For a gray image, the transformation
result values in the Hough plane are multiplied by the pixel value [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The
optimal sampling interval of the parameter ρ is one pixel, and the sampling
interval of the parameter θ should not exceed 0.02 radians. Choosing the origin at
the center of the original image also helps to reduce errors in subsequent calculations.
      </p>
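<p>As an illustrative sketch (not the authors' code), the weighted Hough accumulation described above can be written in Python with NumPy; the θ step near 0.02 rad, the one-pixel ρ step, the centered origin, and the pixel-value weighting follow the text, while the function name and the number of θ samples are assumptions:</p>

```python
import numpy as np

def hough_transform(image, n_theta=158):
    """Accumulate a gray-level image into a Hough plane A(rho, theta).

    Each non-zero pixel (m, n) votes for every line rho = m*cos(theta) +
    n*sin(theta); for a gray image the vote is weighted by the pixel value.
    The origin is placed at the image center, the rho step is one pixel,
    and n_theta = 158 keeps the theta step near 0.02 rad.
    """
    rows, cols = image.shape
    thetas = np.arange(n_theta) * (np.pi / n_theta)
    diag = int(np.ceil(np.hypot(rows, cols) / 2.0))   # max |rho| from center
    acc = np.zeros((2 * diag + 1, n_theta))
    ms, ns = np.nonzero(image)                        # binary case: non-zero only
    weights = image[ms, ns].astype(float)
    mc = ms - rows / 2.0                              # centered coordinates
    nc = ns - cols / 2.0
    for t, theta in enumerate(thetas):
        rho = np.round(mc * np.cos(theta) + nc * np.sin(theta)).astype(int)
        np.add.at(acc[:, t], rho + diag, weights)     # shift rho to array index
    return acc, thetas

# a single bright image row should produce one dominant peak
img = np.zeros((64, 64))
img[32, :] = 1.0
acc, thetas = hough_transform(img)
peak = np.unravel_index(np.argmax(acc), acc.shape)    # (rho index, theta index)
```

<p>For this test image the peak lands at θ = 0 and ρ = 0, i.e., the centered row, which is the point-response behavior of straight lines described above.</p>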
      <p>
        Further, the image in the Hough plane is converted into an array of training
vectors for a neural network. The easiest way is to match each value pair (ρ, θ) on
the Hough plane with vectors x, whose number for this cell is proportional
or equal to the value A(ρ, θ) of this cell. To improve convergence of the learning
algorithm, it is better to take a number of vectors proportional
to (A(ρ, θ))<sup>q</sup>, q = 1.5…2.0 (Fig. 2, 3) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
      <p>The next step consists in normalization of the vectors; this is performed by the
formulas
x<sub>1</sub> = 0.7(2i/N<sub>ang</sub> − 1)/(N<sub>ang</sub>/N<sub>norm</sub>);
x<sub>2</sub> = 0.7(2i/N<sub>norm</sub> − 1);
x<sub>3</sub><sup>2</sup> = 1 − x<sub>1</sub><sup>2</sup> − x<sub>2</sub><sup>2</sup>;
where i is the cell index along the corresponding axis, and N<sub>norm</sub> and N<sub>ang</sub> are the
sizes of the Hough plane (in pixels). As a result,
the vectors become three-dimensional.</p>
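<p>One possible reading of this step, sketched in Python (the replication factor of ten vectors per normalized unit of A<sup>q</sup> and the function name are assumptions; the exponent q and the 0.7 scaling follow the text):</p>

```python
import numpy as np

def training_vectors(acc, q=1.8, scale=0.7):
    """Turn Hough-plane cells into 3-D training vectors for a Kohonen net.

    Each cell (r, t) is replicated in proportion to A(rho, theta)**q, its
    two coordinates are mapped into [-scale, scale], and a third component
    completes each vector to unit length.
    """
    n_norm, n_ang = acc.shape                       # Hough plane sizes (pixels)
    counts = np.round((acc / acc.max()) ** q * 10).astype(int)
    vecs = []
    for r in range(n_norm):
        for t in range(n_ang):
            if counts[r, t] == 0:
                continue
            x1 = scale * (2.0 * t / n_ang - 1.0)    # theta axis -> [-0.7, 0.7]
            x2 = scale * (2.0 * r / n_norm - 1.0)   # rho axis   -> [-0.7, 0.7]
            x3 = np.sqrt(max(0.0, 1.0 - x1 ** 2 - x2 ** 2))
            vecs.extend([[x1, x2, x3]] * counts[r, t])
    return np.array(vecs)

# two Hough peaks: the stronger one yields more copies of its vector
acc = np.zeros((10, 10))
acc[3, 7] = 4.0
acc[5, 5] = 2.0
X = training_vectors(acc)
```

<p>Every produced vector has unit Euclidean length; the third coordinate exists only to make that so.</p>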
      <p>Typically, in self-organizing networks each neuron is connected to all
components of the input vector through synaptic connections. The weights of the neurons'
synaptic connections form a vector w, which needs to be initialized before
training. To reduce the number of "dead" neurons, it is better to use
uniform or random initialization. The number of neurons should be not less than
the number of lines to be detected. Usually, it is enough to take a few dozen,
since too many neurons lead to an unnecessary increase in computational cost and
increase the possibility of detecting false objects.</p>
      <p>
        The next step is the self-organization (learning) of a competitive-type
neural network (Kohonen network). There are several algorithms for learning
self-organizing neural networks, such as the WTA algorithm (Winner Takes All),
the WTM algorithm (Winner Takes Most), the neural gas algorithm, and others.
These algorithms differ in the rate of convergence, in efficient use of neurons,
and in the implementation complexity of calculations at each iteration. In this work,
the choice was made in favor of the neural gas algorithm with the coordinate-wise
("Manhattan") metric [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. The Manhattan metric has the form
      </p>
      <p>d(x, w<sub>i</sub>) = Σ<sub>j=1</sub><sup>N</sup> |x<sub>j</sub> − w<sub>ij</sub>|,
where N is the dimension of the vectors; it requires the minimum computational
cost among the metrics. With this metric, the neural gas algorithm has the best
convergence among the self-organization algorithms.</p>
      <p>The weights adjustment in the neural gas algorithm is performed with the
coefficient G(i, x):</p>
      <p>
        G(i, x) = exp(−m(i)/λ),
where m(i) is the rank of the neuron after sorting by the Manhattan distance,
and λ is a neighborhood range parameter that decreases during training.
During the neural network training, the vectors x are sequentially supplied to
the input. To improve the convergence, they are presented to the network in a
random order. Further, taking into account the chosen metric, the neuron
sorting and neuron weight adjustment procedures are performed. Next, the process
repeats until a predetermined number of cycles is completed (Fig. 4). After that,
a fine weight adjustment procedure with an exponential calibration coefficient
[
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ] is recommended:
      </p>
      <p>F(i, x) = exp(−a·d),
where d is the Manhattan distance and a is an experimentally adjusted coefficient
(it may decrease during network training).</p>
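<p>A minimal sketch of the training loop under these formulas (the decay schedules for the learning rate and for λ, the neuron count, and the function name are assumptions; the Manhattan metric, the random presentation order, and G(i, x) = exp(−m(i)/λ) follow the text):</p>

```python
import numpy as np

def neural_gas(X, n_neurons=4, epochs=30, lr0=0.5, seed=0):
    """Self-organize neuron weights over training vectors X by neural gas.

    For every input vector the neurons are sorted by Manhattan distance;
    each weight is then pulled toward the input with the factor
    G(i, x) = exp(-m(i)/lambda), where m(i) is the neuron's rank after
    sorting.  The learning rate and lambda decay exponentially.
    """
    rng = np.random.default_rng(seed)
    w = rng.uniform(X.min(), X.max(), size=(n_neurons, X.shape[1]))
    lam0 = n_neurons / 2.0
    steps = epochs * len(X)
    step = 0
    for _ in range(epochs):
        for x in rng.permutation(X):                 # random presentation order
            frac = step / steps
            lr = lr0 * (0.01 / lr0) ** frac          # 0.5 -> 0.01
            lam = lam0 * (0.1 / lam0) ** frac        # lam0 -> 0.1
            d = np.abs(w - x).sum(axis=1)            # Manhattan distances
            rank = np.argsort(np.argsort(d))         # m(i): 0 for the winner
            w += lr * np.exp(-rank / lam)[:, None] * (x - w)
            step += 1
    return w

# two tight clusters: the neurons should settle on both of them
X = np.vstack([np.full((50, 2), 0.2), np.full((50, 2), 0.8)])
w = neural_gas(X)
```

<p>Because even non-winning neurons receive a rank-weighted pull, this rule is less prone to "dead" neurons than the plain "Winner takes all" update, which matches the argument for neural gas above.</p>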
      <p>As a result of the network self-organization and the fine weight adjustment,
most neurons group near the centers of the training vector clusters, but some of
them become "dead" or "wandering" (i.e., they do not find any vector cluster).
The formed neuron groups are combined, and the "dead" and "wandering" neurons
are discarded, which reduces the number of detected lines.</p>
      <p>The peculiarity of this problem is that the neural network works only in
the training mode. After the training process ends, the weights of the neurons are
determined; then they are denormalized and descaled, which yields the coordinates of
the desired straight lines. The inverse Hough transformation allows one to display
these linear structures in the spatial coordinates.</p>
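<p>The inverse mapping can be sketched as follows (the half-pixel tolerance and the function name are assumptions; the centered origin matches the forward transform described earlier):</p>

```python
import numpy as np

def line_points(rho, theta, shape):
    """Map a detected Hough peak (rho, theta) back to pixel coordinates.

    Every pixel (m, n) whose centered coordinates satisfy
    m*cos(theta) + n*sin(theta) ~ rho lies on the detected straight line.
    """
    rows, cols = shape
    mc, nc = np.meshgrid(np.arange(rows) - rows / 2.0,
                         np.arange(cols) - cols / 2.0, indexing="ij")
    mask = np.abs(mc * np.cos(theta) + nc * np.sin(theta) - rho) < 0.5
    return np.argwhere(mask)                # (m, n) pixel indices on the line

# rho = 0, theta = 0 recovers the image's central row
pts = line_points(rho=0.0, theta=0.0, shape=(64, 64))
```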
      <p>
        The final processing stage is the statistical analysis of the weights of the pixels on
the selected lines [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. It is used, first, to find the borders of a linear structure
segment if it does not pass through the full image and, second, to eliminate the
remaining false lines. The first problem may be solved by correlation analysis
algorithms, and the second by image brightness distribution
analysis [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
    </sec>
    <sec id="sec-3">
      <title>Experimental results</title>
      <p>The developed algorithm has been tested both on models and on real radar
images. Figure 5a shows an image obtained by the synthetic aperture radar
RADARSAT-1 (spatial resolution 8 m). In the picture, there are images of
four forest belts of different lengths, as well as a bright fragment in the
lower right corner. In addition, the fragment has a horizontal stripe of medium
brightness, which is an image defect.</p>
      <p>The resulting linear structures are presented in Fig. 5d. It is seen that all
lines are detected correctly. A line truncation in the lower right corner takes
place due to the image brightness decrease there.
The proposed algorithm can significantly improve the efficiency of linear element
detection on radar images compared with traditional algorithms. This becomes
possible due to the application of a neural network, which has sufficiently low
sensitivity to the purity of the input data. The neural network algorithm
automates the process of linear structure detection by excluding threshold
selection from the algorithm.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgment</title>
      <p>This work was supported by the RFBR grants nos. 13-07-12168, 13-07-00785 and
by the Ural Federal University's Center of Excellence in "Quantum and Video
Information Technologies: from Computer Vision to Video Analytics" (according
to the Act 211 of the Government of the Russian Federation, contract 02.A03.21.0006).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Kondratenkov</surname>
            ,
            <given-names>G. S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Frolov</surname>
            ,
            <given-names>A. Yu.</given-names>
          </string-name>
          :
          <article-title>Radiovideniye. Radiolokatsionnye sistemy distantsionnogo zondirovaniya Zemli [Radiovision. Radar systems for remote sensing of the Earth] (in Russian)</article-title>
          .
          <source>Radiotekhnika</source>
          , Moscow (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Ivanov</surname>
            ,
            <given-names>O. Yu.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kobernichenko</surname>
            ,
            <given-names>V. G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Neronsky</surname>
            ,
            <given-names>L. B.</given-names>
          </string-name>
          :
          <article-title>Bystriy algoritm tsifrovogo sintezirovaniya aperturi [Fast digital aperture synthesis algorithm] (in Russian)</article-title>
          .
          <source>Radiotekhnika</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>23</fpage>
          –
          <lpage>29</lpage>
          (
          <year>1994</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Dorosinsky</surname>
            ,
            <given-names>L. G.</given-names>
          </string-name>
          :
          <article-title>The research of the distributed objects on radar image recognition algorithms</article-title>
          .
          <source>Proc. of the CriMiCo 2013 23rd International Crimean Conference Microwave and Telecommunication Technology, Crimea</source>
          .
          <fpage>1216</fpage>
          –
          <lpage>1218</lpage>
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Pratt</surname>
            ,
            <given-names>W. K.</given-names>
          </string-name>
          <article-title>Digital image processing</article-title>
          . Mir, Moscow (
          <year>1982</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Ivanov</surname>
            ,
            <given-names>O. Yu.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Korkunov</surname>
            <given-names>P. V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sosnovsky</surname>
            ,
            <given-names>A. V.</given-names>
          </string-name>
          :
          <article-title>Algoritm vydeleniya lineinykh struktur na izobrazheniyakh, osnovannyi na preobrazovanii Khafa [An algorithm for linear structures detection on the imagery based on the Hough transformation] (in Russian)</article-title>
          .
          <source>Proc. of VII nauchno-prakticheskaya konferentsiya "Sviaz-Prom 2010"</source>
          , Yekaterinburg.
          <fpage>321</fpage>
          -
          <lpage>326</lpage>
          , (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Ossovsky</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          :
          <article-title>Neural networks for information processing</article-title>
          .
          <source>Finansy i statistika</source>
          , Moscow (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Duda</surname>
            ,
            <given-names>R. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hart</surname>
            ,
            <given-names>P. E.</given-names>
          </string-name>
          :
          <article-title>Pattern classification and scene analysis</article-title>
          .
          <source>Mir</source>
          , Moscow (
          <year>1976</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>