<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Performance Analysis of Different Feature Detection Techniques for Modern and Old Buildings</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>S. Rayhan Kabir</string-name>
          <email>rayhanhemel@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Md. Akhtaruzzaman</string-name>
          <email>azaman01@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Rafita Haque</string-name>
          <email>rafitahaque93@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dept. of CSE, Asian University of Bangladesh</institution>
          ,
          <addr-line>Dhaka</addr-line>
          ,
          <country country="BD">Bangladesh</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Building detection and feature detection are now significant research fields in computer vision. To the human eye, it is easy to tell old buildings from modern ones; computationally, that distinction depends on feature detection, since different building structures carry different characteristics and features. Various feature detection methods are used to collect these features. This paper presents four computational methods for detecting the features of several modern and old buildings. In this experiment, we analyze the Canny Edge Detection, Hough Line Transform, Find Contours and Harris Corner Detector techniques on modern and old buildings, compare their feature detection performance, and discuss why these four techniques are suitable for the task.</p>
      </abstract>
      <kwd-group>
        <kwd>Building Detection</kwd>
        <kwd>Computer Vision</kwd>
        <kwd>Image Processing</kwd>
        <kwd>Feature Detection</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Object detection is currently an important research
area in computer vision and image processing. Several
kinds of detection approaches are used in present
applications and research; building detection is one of
them. In recent years, experiments have applied
computer vision approaches to both ancient
and modern architecture.</p>
      <p>One detection technique identifies damaged and
collapsed buildings based on digital surface
models [MYLY18]. Another method builds on
"Light Detection and Ranging" (LiDAR) and
detects buildings using a feature compressor
[NSS+18]. A further approach detects buildings from
their shadow, shape, and color features [GJ18]. A
feature recognition method based on deep learning has
been applied to ancient structures [ZWZZ18]; there,
the researchers proposed a neural network to
distinguish several features of old buildings. Another
recent work focuses on recognition and visualization of
ancient Maya glyphs [COG18].</p>
      <p>This literature review shows that feature
detection of buildings is a significant research area
and a recent trend in computer science. However,
these earlier experiments did not offer a combined
view of how different feature detection techniques
perform on modern versus old buildings. Moreover,
modern and old buildings differ structurally, so
feature detection techniques behave differently
on each.</p>
      <p>Motivated by these research gaps, we conducted
this study of how different feature detection
techniques perform on modern and old buildings. To
construct our research, we used the Canny Edge Detector
[CCWT18], Hough Line Transform [TWBW18], Find
Contours [SMNC18] and Harris Corner Detector
[SIV18] techniques, and measured their performance
on various modern and ancient buildings. Finally, in
this paper we report the detection accuracy rates of
these four feature detection techniques for modern
and old architecture.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Feature Identification for Buildings</title>
      <p>In computer vision, several types of features are
used for identification [GPP15], such as corners,
points, and edges [TG16]. In our experiment, we
applied techniques that collect building features
from both modern and old structures.</p>
      <sec id="sec-2-1">
        <title>2.1 Canny Edge Detection</title>
        <p>Edge detection covers a variety of mathematical
processes whose goal is to identify the points in an
image where the intensity changes sharply. In our test
we used the Canny edge detection method, which
recognizes an extensive variety of edges in a picture;
several studies agree that the Canny technique gives
the best results in edge detection [MA09] [KS16].
The image is first filtered to obtain the gradient
intensity in the horizontal (Gx) and vertical (Gy)
directions, and the edge gradient and angle [MK13]
are then computed for every pixel:</p>
        <p>G = sqrt(Gx^2 + Gy^2) (1)</p>
        <p>Theta = arctan(Gy / Gx) (2)</p>
        <p>After applying these equations, the gradient
direction is always perpendicular to the edge, and the
angle is rounded to one of the vertical, horizontal and
diagonal directions. Figure 1 illustrates the output
of the Canny method for modern and old buildings and
its simulation graphs.</p>
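        <p>The gradient step above can be sketched in a few lines of NumPy. This is a minimal illustration of Eqs. (1) and (2) only, not the full OpenCV Canny pipeline used in our experiment; the Sobel kernels and the tiny test image are illustrative assumptions.</p>

```python
import numpy as np

def gradient_magnitude_angle(img):
    """Compute Sobel-style gradients Gx, Gy, then the gradient
    magnitude and edge angle used by the Canny method."""
    img = img.astype(float)
    # Horizontal (kx) and vertical (ky) derivative kernels.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            window = pad[y:y + 3, x:x + 3]
            gx[y, x] = np.sum(window * kx)
            gy[y, x] = np.sum(window * ky)
    magnitude = np.hypot(gx, gy)            # G = sqrt(Gx^2 + Gy^2)
    angle = np.degrees(np.arctan2(gy, gx))  # Theta = atan2(Gy, Gx)
    return magnitude, angle

# A vertical step edge: the gradient points horizontally (0 degrees).
img = np.zeros((5, 5))
img[:, 3:] = 255.0
mag, ang = gradient_magnitude_angle(img)
```

In a full Canny implementation this step is followed by non-maximum suppression along the rounded angle and by double-threshold hysteresis.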
        <p>[Figure 1: Original and Canny images of a modern building and an old building, with the simulation of Canny Edge Detection.]</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2 Hough Line Transform</title>
        <p>The Hough Line Transform is a feature extraction
technique for identifying lines in an image. A line can
be described by two variables [Open17]: we use m and b
in the Cartesian coordinate form and r and θ in the
polar coordinate form [AOL+92]. Both forms are used
in the Hough Line Transform to identify lines in the
buildings (see Figure 2). In our research, a line is
written as</p>
        <p>y = mx + b (3)</p>
        <p>or, in parametric (polar) form,</p>
        <p>r = x cos θ + y sin θ (4)</p>
        <p>Figure 3 shows the input and output images for this technique.</p>
        <p>[Figures 2 and 3: Original and Hough Line Transform images of a modern building and an old building.]</p>
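        <p>The voting idea behind the transform can be sketched as follows. This is a simplified illustration of the (r, θ) accumulator, not OpenCV's cv2.HoughLines (which discretizes r and θ far more finely); the point set and angle list are hypothetical examples.</p>

```python
import numpy as np

def hough_votes(points, thetas_deg, r_max):
    """Each edge point (x, y) votes for every (r, theta) satisfying
    r = x*cos(theta) + y*sin(theta); collinear points pile votes
    onto one accumulator cell."""
    acc = {}  # (r, theta_deg) -> vote count
    for x, y in points:
        for t_deg in thetas_deg:
            t = np.radians(t_deg)
            r = int(np.round(x * np.cos(t) + y * np.sin(t)))
            if -r_max <= r <= r_max:
                key = (r, t_deg)
                acc[key] = acc.get(key, 0) + 1
    return acc

# Ten points on the vertical line x = 4: they all vote for (r=4, theta=0).
pts = [(4, y) for y in range(10)]
votes = hough_votes(pts, thetas_deg=[0, 45, 90], r_max=20)
best = max(votes, key=votes.get)
```

The cell with the most votes identifies the dominant line, which is how straight building edges stand out from clutter.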
      </sec>
      <sec id="sec-2-3">
        <title>2.3 Find Contours Technique</title>
        <p>Contours can be described as curves joining all the
continuous points along a boundary that share the same
color or intensity. This method was used for shape
analysis and object detection in the building images.
In our experiment, we used the Image Moment [ZWSP15]
approach to find the contours of buildings of
different ages. The spatial moment of an image is
denoted m_ij, where i and j index the nested loops
over the pixel coordinates. The image moment [Open14]
is computed as:</p>
        <p>m_ij = Σ_x Σ_y x^i y^j I(x, y) (6)</p>
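        <p>The raw moment formula can be sketched directly. This is a minimal pure-NumPy illustration of m_ij, not OpenCV's cv2.moments; the 7x7 test image is a hypothetical example.</p>

```python
import numpy as np

def raw_moment(img, i, j):
    """Spatial (raw) image moment m_ij = sum_x sum_y x^i * y^j * I(x, y).
    The nested loops mirror the i, j ordering mentioned in the text."""
    h, w = img.shape
    m = 0.0
    for y in range(h):        # rows
        for x in range(w):    # columns
            m += (x ** i) * (y ** j) * float(img[y, x])
    return m

# Centroid of a shape from its moments: (m10/m00, m01/m00).
img = np.zeros((7, 7))
img[2:5, 2:5] = 1.0           # a 3x3 square centered at (3, 3)
m00 = raw_moment(img, 0, 0)
cx = raw_moment(img, 1, 0) / m00
cy = raw_moment(img, 0, 1) / m00
```

Moments summarize a detected contour's area and centroid, which is useful when comparing building shapes.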
      </sec>
      <sec id="sec-2-4">
        <title>2.4 Harris Corner Detection</title>
        <p>Corner detection is a method used to extract the
corner features of an image; in computer vision, a
corner can also be regarded as a point. The Harris
corner detection technique extracts corners by
measuring how the image intensity changes for a small
window displacement (u, v), with a Gaussian window
function assigning weights to nearby pixels. The
mathematical formulation of this technique [Nelli17],
as used in our experiment, is:</p>
        <p>E(u, v) = Σ_{x, y} w(x, y) [I(x + u, y + v) − I(x, y)]^2 (7)</p>
        <p>Here, E is the difference between the original and
the shifted window; u and v are the window's
displacement in the x and y directions; w(x, y) is the
Gaussian window function at position (x, y); I is the
image intensity, so I(x + u, y + v) is the shifted
intensity and I(x, y) the original. In OpenCV, the
Harris corner detector function is named
cv2.cornerHarris(). The Harris technique has also been
improved by using directional derivatives and high
threshold values (see Figure 6) [CZZD09]. In Figure 7,
we display the Harris approach for modern and old
buildings.</p>
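        <p>The corner score derived from E(u, v) can be sketched via the structure tensor. This is an illustrative NumPy version, not cv2.cornerHarris: a uniform 3x3 window stands in for the Gaussian w(x, y), and the test image is a hypothetical bright square.</p>

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response: for each pixel, build the structure
    tensor M from summed gradient products over a window, then score
    R = det(M) - k * trace(M)^2.  R > 0 at corners, R < 0 on edges."""
    img = img.astype(float)
    iy, ix = np.gradient(img)          # gradients along rows, columns
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    h, w = img.shape
    R = np.zeros_like(img)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Uniform 3x3 window in place of the Gaussian weights.
            sxx = ixx[y - 1:y + 2, x - 1:x + 2].sum()
            syy = iyy[y - 1:y + 2, x - 1:x + 2].sum()
            sxy = ixy[y - 1:y + 2, x - 1:x + 2].sum()
            det = sxx * syy - sxy * sxy
            trace = sxx + syy
            R[y, x] = det - k * trace * trace
    return R

# A bright square on a dark background: R peaks near its corners
# and goes negative along its straight edges.
img = np.zeros((20, 20))
img[5:15, 5:15] = 255.0
R = harris_response(img)
```

Thresholding R and keeping local maxima yields the corner points marked on the building images.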
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Result and Analysis</title>
      <p>Applying the Canny Edge Detection, Hough Line
Transform, Find Contours and Harris Corner Detector
techniques to modern and old buildings yields
different performance results. We ran this experiment
on images of several old and modern buildings.
Figure 9 illustrates the true and false feature
detections among the images, and Table 2 gives the
accuracy percentages of the true and false feature
detection rates.</p>
      <p>[Figure 9: True and false feature detection in Canny, Find Contours, Hough Line Transform, and Harris images.]</p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion and Future Works</title>
      <p>Our research has presented an application-based
analysis of the performance of feature detection
techniques on buildings of different ages. The study
centers on period identification through feature
detection. This work is currently being extended in
the direction of Deep Learning, which we expect to
improve feature detection and period identification in
real time in our future work. The present approach can
be converted into a Machine Learning model by using a
Feedforward Neural Network (FNN).</p>
      <p>[MYLY18] L. Moya, F. Yamazaki, W. Liu and M. Yamada. Detection of collapsed buildings from lidar data due to the 2016 Kumamoto earthquake in Japan. Natural Hazards and Earth System Sciences, 18:65-68, January 2018.</p>
      <p>[NSS+18] F. H. Nahhas, H. Z. M. Shafri, M. I. Sameen, B. Pradhan and S. Mansor. Deep Learning Approach for Building Detection Using LiDAR-Orthophoto Fusion. Journal of Sensors, 2018: article ID 7212307, August 2018.</p>
      <p>[GJ18] A. J. Ghandour and A. A. Jezzini. Autonomous Building Detection Using Edge Properties and Image Color Invariants. Buildings, 8(5): article 65, May 2018.</p>
      <p>[ZWZZ18] Z. Zou, N. Wang, P. Zhao and X. Zhao. Feature recognition and detection for ancient architecture based on machine vision. In Proceedings SPIE 10602, Smart Structures and NDE for Industry 4.0, 1060209, United States, 2018.</p>
      <p>[COG18] G. Can, J. Odobez and D. Gatica-Perez. How to Tell Ancient Signs Apart? Recognizing and Visualizing Maya Glyphs with CNNs. ACM Journal on Computing and Cultural Heritage, 1(1): article 1, May 2018.</p>
      <p>[CCWT18] J. Cao, L. Chen, M. Wang and Y. Tian. Implementing a Parallel Image Edge Detection Algorithm Based on the Otsu-Canny Operator on the Hadoop Platform. Computational Intelligence and Neuroscience, 2018: article ID 3598284, May 2018.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[TWBW18] M. Tatsubori, A. Walcott-Bryant, R. Bryant and J. Wamburu. A Probabilistic Hough Transform for Opportunistic Crowd-sensing of Moving Traffic Obstacles. In 2018 SIAM International Conference on Data Mining, California, USA, pages 217-215, 2018.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[SMNC18] S. Soomro, A. Munir and K. N. Choi. Hybrid two-stage active contour method with region and edge information for intensity inhomogeneous image segmentation. PLoS ONE, 13(1): e0191827, January 2018.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[SIV18] Y. Sun, E. Ientilucci and S. Voisin. Improvement of the Harris corner detector using an entropy-block-based strategy. In Proceedings SPIE 10644, Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XXIV, 1064414, Florida, United States, 2018.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[GPP15] Comparison of Different Feature Detection Techniques for Image Mosaicing. ACCENTS Transactions on Image Processing and Computer Vision, 1(1): 1-7, November 2015.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[TG16] M. Thareja and A. Goyal. Performance Analysis of Edges, Corners and the genres: A Subjective Estimation. IOSR Journal of Electronics and Communication Engineering, 1: 98-104, Conf. 15010, 2016.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[MA09] R. Maini and H. Aggarwal. Study and Comparison of Various Image Edge Detection Techniques. International Journal of Image Processing, 3(1): 1-12, February 2009.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[KS16] S. Kaur and I. Singh. Comparison between Edge Detection Techniques. International Journal of Computer Applications (0975-8887), 145(15): 15-18, July 2016.</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[MK13] A. Mordvintsev and A. K. Canny Edge Detection. OpenCV-Python Tutorials, 2013.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[Open17] Hough Line Transform. Image Processing (imgproc module), OpenCV Tutorials, OpenCV, 2017.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[AOL+92] J. Alakuijala, J. Oikarinen, Y. Louhisalmi, X. Ying and J. Koivukangas. Image transformation from polar to Cartesian coordinates simplifies the segmentation of brain images. In Proceedings 14th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, pages 1918-1919, Paris, France, 1992.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[ZWSP15] Phillips. Pathological brain detection based on wavelet entropy and Hu moment invariants. International Journal of Image Processing, 26(s1): S1283-S1290, September 2015.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[Open14] Structural Analysis and Shape Descriptors, OpenCV 2.4.13.7 documentation, OpenCV, 2014.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[Nelli17] F. Nelli. OpenCV &amp; Python - Harris Corner Detection - a method to detect corners in an image. Meccanismo Complesso, February 2017.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[CZZD09] J. Chen, L. Zou, J. Zhang and L. Dou. The Comparison and Application of Corner Detection Algorithms. Journal of Multimedia, 4(6): 1-7, December 2009.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>