<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>High performance radar images modelling and recognition of real objects</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>D A Zherdev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>V V Prokudin</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Image Processing Systems Institute of RAS - Branch of the FSRC "Crystallography and Photonics" RAS</institution>
          ,
          <addr-line>Molodogvardejskaya street 151, Samara, Russia, 443001</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Samara National Research University</institution>
          ,
          <addr-line>Moskovskoe Shosse 34А, Samara, Russia, 443086</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2019</year>
      </pub-date>
      <fpage>549</fpage>
      <lpage>552</lpage>
      <abstract>
<p>This work modernizes a parallel algorithm for forming synthetic aperture radar images of 3D models. In forming the scene description, data structures are used that enable more efficient calculations. In addition, recognizing objects in radar images is a topical task. Thus, on the basis of the implemented parallel modelling program, the high performance required for simulating multiple radar images can be achieved.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        This research continues the ideas and methods of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], where a high-performance radar image modelling approach was considered. The goal of this work is to further accelerate the parallel program for synthetic aperture radar modelling by building a kd-tree that describes a three-dimensional scene [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2-4</xref>
        ]. To achieve this goal, we used CUDA to perform high-performance computing on a graphics card. The main difficulty is forming the trajectory signal along the radar path and then computing a radar image. In this study, an algorithm for obtaining radar characteristics was implemented with the construction of a kd-tree structure, which allows any three-dimensional scene to be described.
      </p>
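      <p>As a rough illustrative sketch only, not the implementation used in this work, a kd-tree over triangle centroids can be built with a cycling-axis median split; all names below (KdNode, build_kdtree, leaf_size) are our own, and a production SAR scene tree would typically use bounding boxes and a surface-area heuristic instead:</p>

```python
# Hypothetical sketch: median-split kd-tree over triangle centroids.
# A real scene tree would store bounding boxes and split by a surface-area
# heuristic, but the cycling-axis median split shows the basic structure.

class KdNode:
    def __init__(self, axis=None, split=None, left=None, right=None, tris=None):
        self.axis = axis      # 0=x, 1=y, 2=z for inner nodes
        self.split = split    # splitting coordinate of an inner node
        self.left = left
        self.right = right
        self.tris = tris      # triangle indices stored at a leaf

def build_kdtree(centroids, indices=None, depth=0, leaf_size=4):
    """Recursively partition triangle indices by the median centroid."""
    if indices is None:
        indices = list(range(len(centroids)))
    if len(indices) <= leaf_size:
        return KdNode(tris=indices)
    axis = depth % 3
    indices.sort(key=lambda i: centroids[i][axis])
    mid = len(indices) // 2
    split = centroids[indices[mid]][axis]
    return KdNode(axis=axis, split=split,
                  left=build_kdtree(centroids, indices[:mid], depth + 1, leaf_size),
                  right=build_kdtree(centroids, indices[mid:], depth + 1, leaf_size))
```

Once built, such a tree lets a ray query descend only into the half-spaces it actually crosses, which is the source of the reduction in computed operations discussed below.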
    </sec>
    <sec id="sec-2">
      <title>2. Modelling and recognition</title>
      <p>
        The main computational expense of the radar image modelling algorithm lies in processing the radiated and reflected radar signals. In this work, a modification of the CUDA algorithm was implemented that differs at the trajectory-signal construction stage: a kd-tree is precomputed for the three-dimensional scene. The subsequent synthesis of the radar aperture was performed in the same way as discussed in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. By reducing the number of computed operations, this approach achieved a three-fold acceleration over the previous implementation of the algorithm.
      </p>
      <p>
        In addition, during the research we carried out experiments on object recognition in radar images. There are many recognition methods and approaches, among which the popular ones are convolutional neural networks [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], support vector machines, nearest neighbours, etc. [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. In this work, object recognition was performed using the support subspaces method [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. We used three-dimensional models of a tank, a BMP, and a BTR to construct the radar images. All model dimensions were matched to the corresponding sizes of their real prototypes in the three-dimensional coordinate system. When generating the training set, we used the modelling parameters presented in Table 1. The bearing angle was 17°, which corresponds to the conditions of the real SAR images in the training dataset. We modelled 100 images for each object, with a rotation step of 3.6° in the observation plane. Figure 3 shows real and modelled SAR images of the BTR.
      </p>
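      <p>The support subspaces method itself is detailed in [7]; as a loose, hypothetical sketch of the general subspace-classification idea only (not that exact algorithm), each class can be given a low-dimensional subspace spanned by leading singular vectors of its training images, with a query assigned to the class of smallest reconstruction residual:</p>

```python
import numpy as np

# Hedged sketch of subspace-based classification (not the exact support
# subspaces algorithm of [7]): each class gets a subspace spanned by the
# leading right singular vectors of its stacked training images, and a
# query image goes to the class with the smallest reconstruction residual.

def fit_subspaces(train, dim=5):
    """train: {label: array of shape (n_images, n_pixels)} -> class bases."""
    bases = {}
    for label, X in train.items():
        # Rows are vectorized images; Vt rows span the image (row) space.
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        bases[label] = Vt[:dim]          # (dim, n_pixels), orthonormal rows
    return bases

def classify(x, bases):
    """Assign vectorized image x (n_pixels,) to the nearest subspace."""
    best, best_err = None, np.inf
    for label, B in bases.items():
        proj = B.T @ (B @ x)             # orthogonal projection onto span(B)
        err = np.linalg.norm(x - proj)   # reconstruction residual
        if err < best_err:
            best, best_err = label, err
    return best
```

Under this scheme the training set of 100 modelled images per object would simply be three stacked matrices, one per class.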
      <p>The results below relate to constructing a training sample from model images and then recognizing real images using those model images at the training stage. Figures 1a and 1b show real radar images of a tank from the widely known MSTAR database, and figures 2a and 2b show radar images obtained by modelling with the synthetic aperture radar method, with a bearing angle of 17° and elevation angles of 15° and 100°, respectively.</p>
      <p>Table 1. Modelling parameters: 1) radar start point (x, y, z), m; 2) radar observation mode; 3) synthesis length, m; 4) wavelength (chirp), m; 5) azimuth resolution, m; 6) range resolution, m; 7) impulse duration, mcs; 8) min range, m; 9) max range, m; 10) azimuth step, m; 11) range step, ns.</p>
      <p>Figure 1. MSTAR dataset SAR images: a) bearing angle 17°, elevation angle 15°; b) bearing angle 17°, elevation angle 100°.</p>
      <p>
        The recognition results for three classes, obtained using the recognition contingency index based algorithm [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] (without subclassing), are listed in Table 2. Note that the result of 62.78% correct recognition of the three classes was obtained on a relatively small training set (300 images); the MSTAR training sample contains 587 image samples.
      </p>
      <p>Figure 3. SAR images: a) model and b) real BTR target.</p>
      <p>Table 2. Recognition results for three classes: BMP2, BTR70, T72.</p>
      <p>For binary classification of the BMP and the tank using the same algorithm, a result of 80.24% was achieved. The recognition results for the two classes without division into subclasses are shown in Table 3.</p>
      <p>Table 3. Recognition results for two classes: BMP2, T72.</p>
      <p>
        In addition, we carried out an experiment on recognizing real images by training on model images obtained using the ray tracing approach [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Table 4 shows the recognition results. The total percentage of correct recognition was 27.6%. These results show that images modelled via ray tracing are not well suited for recognizing real images; perhaps they can be used to create a simplified model of a three-dimensional scene.
      </p>
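      <p>The ray tracing approach of [8] ultimately rests on ray-triangle intersection tests against the scene geometry. As a minimal illustrative sketch (our own code, using the standard Möller-Trumbore test, not taken from the cited implementation):</p>

```python
# Minimal Moller-Trumbore ray-triangle intersection (illustrative only;
# a production ray tracer for SAR modelling would run such tests on the
# GPU against a kd-tree rather than per triangle in Python).

def ray_triangle(orig, direc, v0, v1, v2, eps=1e-9):
    """Return hit distance t along the ray, or None if no intersection."""
    def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
    def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0])
    def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    pvec = cross(direc, e2)
    det = dot(e1, pvec)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv_det = 1.0 / det
    tvec = sub(orig, v0)
    u = dot(tvec, pvec) * inv_det        # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    qvec = cross(tvec, e1)
    v = dot(direc, qvec) * inv_det       # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, qvec) * inv_det          # distance along the ray
    return t if t > eps else None
```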
      <p>Table 4. Recognition results when training on ray-traced model images: BMP2, BTR70, T72.</p>
      <p>It should be noted that the MSTAR training sample has 587 images. In contrast to the presented dataset of model images, the rotation of an object in the MSTAR images was performed with an irregular step and with rather large positioning errors.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Conclusion</title>
      <p>The paper shows that real object images can be recognized effectively with the developed software: images obtained by modelling can form the training dataset for the proposed algorithm. In addition, the acceleration of the parallel algorithm obtained by kd-tree construction enables high-performance and efficient calculation of scattering on various object surfaces.</p>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The work was funded by the Russian Federation Ministry of Education and Science (agreement
007GZ/Ch3363/26) and RFBR (project # 17-29-03112 ofi_m).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Zherdev</surname>
            <given-names>D A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prokudin</surname>
            <given-names>V V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Minaev</surname>
            <given-names>E Y</given-names>
          </string-name>
          <year>2018</year>
          <article-title>HPC implementation of radar images modelling method using CUDA</article-title>
          <source>Journal of Physics: Conference Series</source>
          <volume>1096</volume>
          012083
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Horn</surname>
            <given-names>D R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sugerman</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Houston</surname>
            <given-names>M</given-names>
          </string-name>
          and
          <string-name>
            <surname>Hanrahan</surname>
            <given-names>P</given-names>
          </string-name>
          <year>2007</year>
          <article-title>Interactive k-d tree GPU raytracing</article-title>
          <source>Proceedings of the symposium on Interactive 3D graphics and games</source>
          <fpage>167</fpage>
          -
          <lpage>174</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Wehr</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Radkowski</surname>
            <given-names>R</given-names>
          </string-name>
          <year>2018</year>
          <article-title>Parallel kd-tree construction on the gpu with an adaptive split and sort strategy</article-title>
          <source>International Journal of Parallel Programming</source>
          <volume>46</volume>
          (
          <issue>6</issue>
          )
          <fpage>1139</fpage>
          -
          <lpage>1156</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Vinkler</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Havran</surname>
            <given-names>V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Bittner</surname>
            <given-names>J</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Performance comparison of Bounding Volume Hierarchies and kd-trees for GPU Ray Tracing</article-title>
          <source>Computer Graphics Forum</source>
          <volume>35</volume>
          (
          <issue>8</issue>
          )
          <fpage>68</fpage>
          -
          <lpage>79</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Savchenko</surname>
            <given-names>A V</given-names>
          </string-name>
          <year>2018</year>
          <article-title>Trigonometric series in orthogonal expansions for density estimates of deep image features</article-title>
          <source>Computer Optics</source>
          <volume>42</volume>
          (
          <issue>1</issue>
          )
          <fpage>149</fpage>
          -
          <lpage>158</lpage>
          DOI: 10.18287/2412-6179-2018-42-1-149-158
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Borodinov</surname>
            <given-names>A A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Myasnikov</surname>
            <given-names>V V</given-names>
          </string-name>
          <year>2018</year>
          <article-title>Classification of radar images with different methods of image preprocessing</article-title>
          <source>CEUR Proceedings</source>
          <volume>2210</volume>
          <fpage>6</fpage>
          -
          <lpage>13</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Fursov</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zherdev</surname>
            <given-names>D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kazanskiy</surname>
            <given-names>N</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Support subspaces method for synthetic aperture radar automatic target recognition</article-title>
          <source>International Journal of Advanced Robotic Systems</source>
          <volume>13</volume>
          (
          <issue>5</issue>
          ) DOI: 10.1177/1729881416664848
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Zherdev</surname>
            <given-names>D A</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fursov</surname>
            <given-names>V A</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Support plane method applied to ground objects recognition using modelled SAR images</article-title>
          <source>Applications of Digital Image Processing XXXVIII, International Society for Optics and Photonics</source>
          <volume>9599</volume>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>