<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Edge detection of objects on the satellite images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>E E Kurbatova</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Vyatka State University</institution>
          ,
          <addr-line>Moskovskaya 36, Kirov, Russia, 610006</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>115</fpage>
      <lpage>122</lpage>
      <abstract>
<p>Image segmentation is an important stage in image processing. An approach to satellite image segmentation based on object edge detection is proposed. The approach uses Markov random fields as a mathematical model of an image. It is proposed to apply contour and texture segmentation methods to different color components of a satellite image. Contour segmentation detects objects of different colors and is applied to the component carrying color information. The transition probability in two-dimensional Markov chains is used as a texture feature. Texture segmentation is applied to the component carrying brightness information. Simulation results for the proposed approach in different color models (RGB, HSV, Lab) are presented. The accuracy of contour detection was estimated on a set of test images using five criteria. Combining the color and texture characteristics of regions made it possible to improve the accuracy of object edge detection.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Remote sensing data are widely used in different applications, including agriculture, forestry and
water management, monitoring of the environment and emergencies, urban planning, cartography, etc.
Thematic processing is one of the ways to process satellite images in such systems. It includes
detection, decoding and object recognition stages. Using thematic decoding of satellite
images, it is possible to allocate different classes of objects, such as forests, fields, rivers, urban zones,
etc. [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The obtained decoding results can be used to calculate the characteristics of objects and to
trace their changes over time. Decoding satellite images requires a complex approach consisting of
several consecutive stages, and different image processing methods can be used at different stages.
      </p>
      <p>In general, image decoding consists of the following stages:
- image acquisition;
- image enhancement (filtering, contrast enhancement, increase of resolution, etc.);
- object detection (edge analysis, segmentation into homogeneous regions);
- object classification (sorting the allocated objects into a finite number of classes).</p>
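      <p>The stages above can be sketched as a simple processing chain; the function and stage names below are illustrative placeholders, not part of the described system:</p>

```python
def decode(image, enhance, detect, classify):
    """Run the decoding stages in order; the three callables stand in for
    concrete stage implementations (filtering, segmentation, classification)."""
    enhanced = enhance(image)     # image enhancement stage
    objects = detect(enhanced)    # object detection stage
    return classify(objects)      # classification stage
```

      <p>Because each stage consumes the output of the previous one, an error early in the chain propagates all the way to the final classification.</p>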
      <p>Each stage of this process uses the data obtained at the previous stage. Therefore, the quality of each
stage affects the accuracy of the recognition results. Usually, more complex algorithms are used at each
subsequent stage; they require more processing time and allow a lower degree of automation. Therefore,
it is preferable to use algorithms that have a small number of tuning parameters, require few
computational resources and minimal operator participation, while still providing high-quality
processing. This is especially true for the algorithms applied at the first stages.</p>
      <p>
        This work addresses the object detection stage, whose main method is segmentation. Different
features can be used for image segmentation, among them object brightness, color, texture and shape.
In general, all segmentation methods can be divided into two classes: contour analysis methods
and texture methods. Contour methods are based on detecting object edges with respect to some feature [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2-4</xref>
        ].
Texture methods find homogeneous regions. Such regions are characterized by a texture feature that is
constant or changes little within the region, while varying significantly between different regions.
Different statistical, structural, morphological and spectral image characteristics can be used as
texture features [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref9">9-12</xref>
        ]. However, a single characteristic is often not enough for
object detection. Therefore, modern approaches to image segmentation often use a combination of
different features and algorithms [
        <xref ref-type="bibr" rid="ref13 ref14 ref15 ref16 ref17">13-17</xref>
        ].
      </p>
      <p>In this work, an approach to the segmentation of satellite images is proposed. It is based on
detecting object edges using both color and texture information, which increases the accuracy of
edge detection. The approach uses a mathematical model based on Markov random fields for image
description.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Image segmentation method</title>
      <p>
        In the previous work [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ], a mathematical model based on Markov random fields was proposed
for image description. Based on this model, in [
        <xref ref-type="bibr" rid="ref19 ref20">19,20</xref>
        ] the contour and texture
segmentation methods were developed. They provide high efficiency and low
computational complexity. In this work, it is proposed to use these methods jointly.
      </p>
      <sec id="sec-2-1">
        <title>2.1. Mathematical model of an image</title>
        <p>According to the used model, g-bit digital halftone images (DHI) are represented by a set of g bit
binary images (BBI). Each BBI is the superposition of two one-dimensional Markov chains
with two equiprobable states M1 and M2 and matrices of transition probabilities in the horizontal and
vertical directions:
¹Π = ‖¹π₁₁ ¹π₁₂; ¹π₂₁ ¹π₂₂‖,  ²Π = ‖²π₁₁ ²π₁₂; ²π₂₁ ²π₂₂‖. (1)</p>
        <p>
          The entropy approach was applied to calculate the probabilities of the binary element states.
Thus, the amount of information in the element ν₃ relative to the states of the neighboring
elements ν₁, ν₂ is calculated by equation (2) [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ].
        </p>
        <p>I(ν₃ | ν₁, ν₂) = −log [ p(ν₃ | ν₁) ∙ p(ν₃ | ν₂) ⁄ p(ν₃ | ν₂, ν₁) ], (2)</p>
        <p>where p(ν₃ | ν₁) and p(ν₃ | ν₂) are the one-dimensional transition probability densities of the neighboring
elements, and p(ν₃ | ν₂, ν₁) is the transition probability density in the two-dimensional Markov chain.</p>
        <p>The transition probability density in the binary two-dimensional Markov chain can be expressed by
equation (3), where δ(∙) is the delta function:</p>
        <p>p(ν₃ | ν₂, ν₁) = ∑_{i,j,k=1}^{2} π(ν₃ = Mᵢ | ν₁ = Mⱼ, ν₂ = Mₖ) × δ(ν₁ − Mⱼ) × δ(ν₂ − Mₖ). (3)</p>
        <p>Taking into account equation (3), the transition probability matrix for the various combinations of
the neighboring element states has the form (4):</p>
        <p>Π′ = ‖π₁ π₁′; π₂ π₂′; π₃ π₃′; π₄ π₄′‖, i, j = 1, 2; i ≠ j. (4)
The elements of this matrix are related to the elements of the matrices ¹Π and ²Π by the relations (5) [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]:
π₁ = π(ν₃ = M₁ | ν₁ = M₁; ν₂ = M₁) = ¹π₁₁ ∙ ²π₁₁ ⁄ ³π₁₁, π₄ = 1 − π₁;
π₂ = π(ν₃ = M₁ | ν₁ = M₁; ν₂ = M₂) = ¹π₁₁ ∙ ²π₁₂ ⁄ ³π₁₂, π₃ = 1 − π₂, (5)
where ³πᵢⱼ, i, j = 1, 2, i ≠ j, are the elements of the transition probability matrix ³Π = ¹Π × ²Π.</p>
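        <p>As an illustration of the model, one line of a BBI can be generated as a two-state Markov chain; the symmetric transition matrix and the function name below are simplifying assumptions for the sketch:</p>

```python
import numpy as np

def markov_chain_row(n, p_same=0.95, seed=None):
    """Generate n elements of a binary Markov chain with equiprobable states
    M1=0, M2=1 and transition matrix [[p_same, 1-p_same], [1-p_same, p_same]]."""
    rng = np.random.default_rng(seed)
    row = np.empty(n, dtype=int)
    row[0] = rng.integers(0, 2)            # equiprobable initial state
    for i in range(1, n):
        stay = rng.random() < p_same       # keep the previous state w.p. p_same
        row[i] = row[i - 1] if stay else 1 - row[i - 1]
    return row
```

        <p>A large p_same produces long runs of identical elements, i.e. large homogeneous regions; a small p_same produces a noisy, fine-grained texture.</p>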
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Texture segmentation method</title>
        <p>
          This method [
          <xref ref-type="bibr" rid="ref19 ref20">19,20</xref>
          ] is based on the two-dimensional mathematical model of an image. In general, a texture
is a region where some statistical properties are constant or change slowly. The estimate of the transition
probability in the two-dimensional Markov chain is used as a texture feature. It is calculated using the
sliding window method.
        </p>
        <p>For the first line of the window, the estimate of the horizontal transition probability ¹π̂₁₁ is calculated as
¹π̂₁₁ = 1 − 1 ⁄ (2 p₁ k̂(ν)), (6)
where k̂(ν) is the estimate of the average length of runs of identical BBI elements and p₁ is the initial
probability (p₁ = 0.5).</p>
        <p>From the second line onward, the estimate of the vertical transition probability ²π̂₁₁ and the estimate π̂ of the
transition probability in the two-dimensional Markov chain are calculated using the matrix (4).</p>
        <p>All the obtained estimates are averaged within the window to produce a mean estimate of the transition
probability π̃:
π̃(i, j) = (1 ⁄ (m ∙ n)) ∙ ∑_{i=1}^{m} ∑_{j=1}^{n} π̂(i, j), (7)
where m and n are the height and width of the sliding window.</p>
        <p>This mean value is used as a texture feature for the central element of the window.</p>
        <p>A window of fixed size is moved from left to right and from top to bottom over the l-th BBI to obtain a texture
feature for each image element.</p>
        <p>Each image element is then labeled by comparing the calculated texture feature with a threshold.</p>
        <p>As a result, each image element has a label corresponding to a certain texture. The threshold can
be selected on the basis of the texture feature histogram. If there are several textures in the
image, several thresholds need to be selected.</p>
        <p>In the case of color image processing, each color component can be represented as a DHI. All
color components are processed separately, and a threshold is selected for each component. The
segmentation results obtained on the different color components are combined into a single color image,
on which different colors correspond to regions of different textures.</p>
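        <p>A minimal sketch of the texture feature computation for one BBI row, following equation (6); the function name and the simple run-length loop are illustrative:</p>

```python
import numpy as np

def horiz_transition_estimate(bbi_row, p1=0.5):
    """Estimate the horizontal transition probability of a binary row via the
    average run length of identical elements, as in equation (6):
    pi_hat = 1 - 1 / (2 * p1 * k_hat)."""
    runs, count = [], 1
    for prev, cur in zip(bbi_row[:-1], bbi_row[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    k_hat = float(np.mean(runs))   # average length of runs of identical elements
    return 1.0 - 1.0 / (2.0 * p1 * k_hat)
```

        <p>A homogeneous row with long runs gives an estimate close to 1, while an alternating row gives an estimate close to 0, so thresholding this feature separates textures with different element correlation.</p>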
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Contour segmentation method</title>
        <p>To detect object edges, the amount of information between the element ν₃ and the various
combinations of the neighboring elements is calculated. It is determined with the matrix (4) and
equation (2) for each element of the l-th BBI.</p>
        <p>
          The amount of information in the element ν₃ is minimal if the neighboring elements ν₁ and ν₂
have the same state as ν₃ [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ].
        </p>
        <p>On the edge of a region of different brightness, one or both neighboring elements have states different from
ν₃. In this case, the amount of information in the element ν₃ increases. If the amount of information
in the element ν₃ is greater than the threshold h, the pixel belongs to a contour; otherwise, the element ν₃
belongs to a homogeneous region.</p>
        <p>The threshold h is calculated for each BBI taking into account the minimal amount of information
and the amount of information when one of the neighboring elements has a different state:
h = 0.5 ∙ (I(ν₃ = Mᵢ | ν₁ = Mᵢ, ν₂ = Mᵢ) + I(ν₃ = Mᵢ | ν₁ = Mᵢ, ν₂ = Mⱼ)). (8)
It is assumed that the transition probability matrices are known a priori.</p>
        <p>In the case of a color image, the contours are detected on each color component. The contour
maps of the components are then combined into a single contour image.</p>
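        <p>The contour rule of this subsection can be sketched as follows for a single BBI; p_h and p_v are the assumed a-priori probabilities of repeating a state along a row and a column (the diagonal elements of ¹Π and ²Π), and the explicit loops are illustrative rather than optimized:</p>

```python
import numpy as np

def contour_map(bbi, p_h=0.9, p_v=0.9):
    """Mark a pixel as a contour element when the amount of information
    relative to its left (v1) and upper (v2) neighbors, equation (2),
    exceeds the threshold h of equation (8)."""
    # elements of 3Pi = 1Pi x 2Pi for two symmetric two-state chains
    p3_same = p_h * p_v + (1 - p_h) * (1 - p_v)
    p3_diff = p_h * (1 - p_v) + (1 - p_h) * p_v

    def info(v3, v1, v2):
        pa = p_h if v3 == v1 else 1 - p_h        # p(v3 | v1), horizontal chain
        pb = p_v if v3 == v2 else 1 - p_v        # p(v3 | v2), vertical chain
        if v1 == v2:                             # two-dimensional probability, (5)
            p2d = (p_h * p_v if v3 == v1 else (1 - p_h) * (1 - p_v)) / p3_same
        else:
            p2d = (p_h * (1 - p_v) if v3 == v1 else (1 - p_h) * p_v) / p3_diff
        return -np.log(pa * pb / p2d)            # amount of information, (2)

    # threshold (8): mean of "all states equal" and "one neighbor differs"
    h = 0.5 * (info(1, 1, 1) + info(1, 1, 0))
    out = np.zeros(bbi.shape, dtype=bool)
    for i in range(1, bbi.shape[0]):
        for j in range(1, bbi.shape[1]):
            out[i, j] = info(bbi[i, j], bbi[i, j - 1], bbi[i - 1, j]) > h
    return out
```

        <p>Inside a homogeneous region the information stays at its minimum and no contour is marked; on a boundary, where a neighbor state differs, the information jumps above h.</p>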
      </sec>
      <sec id="sec-2-4">
        <title>2.4. Objects edge detection on satellite images</title>
        <p>
          Most often, satellite images are multispectral (multicomponent). They are displayed as color
images with three channels, which may be three multispectral bands of the same
scene. Various types of color images can be prepared from different band combinations.
True-color images use the visible red, green and blue bands. False-color images use a combination of near-infrared,
red and green bands. Pseudocolor images contain medium-infrared, near-infrared and green
bands [
          <xref ref-type="bibr" rid="ref21 ref22">21,22</xref>
          ].
        </p>
        <p>Color is an important characteristic of objects that often simplifies their segmentation and
recognition. There are several ways to specify colors. The RGB color model is the simplest and
most natural. In this model, a color image consists of three components (red, green and blue), described
by their corresponding intensities. The model has a large color gamut, but it is poorly suited for
processing tasks, because color and brightness information are mixed in all three channels.
In the Lab and HSV color models, color and brightness information are separated into different
components, so they are much more convenient for processing.</p>
        <p>In the Lab color model, the a and b components encode color: the component a determines the
color position between green and magenta, and the component b its position between blue and
yellow. The third component L is independent of color information and encodes brightness only.</p>
        <p>The HSV color model uses only one channel to describe color. The image consists of three
components: the hue H, the saturation S and the value (or brightness) V. The hue
H is the color position, the saturation S is the amount of gray in the color, and the
value V is the brightness or intensity of the color.</p>
        <p>The main idea of the proposed approach is that different segmentation methods are applied to
different components of a color image. The texture segmentation method described in
subsection 2.2 is used on the component with brightness information. The contour segmentation
method described in the previous subsection is applied to the component with color information. As a
result of texture segmentation, regions of different textures are marked by different labels. To obtain
the contour map, a second stage is added after texture segmentation: the
contour segmentation is applied to the labeled image output by the texture segmentation. The
contours detected on the different components are then combined into a single contour image. Thus, the
proposed approach takes into account both color and texture information for image segmentation, which
improves the accuracy of object edge detection.</p>
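        <p>The combination step of the proposed approach can be sketched as follows; contour_fn and texture_fn are placeholders for the methods of subsections 2.2 and 2.3, passed in as callables:</p>

```python
import numpy as np

def combine_contours(hue, value, contour_fn, texture_fn):
    """HSV variant of the approach: contour segmentation on the color (H)
    component; texture segmentation of the brightness (V) component followed
    by contour detection on the label image; logical OR of the two maps."""
    color_edges = contour_fn(hue)         # edges of differently colored objects
    labels = texture_fn(value)            # labeled texture regions
    texture_edges = contour_fn(labels)    # edges of the texture regions
    return np.logical_or(color_edges, texture_edges)
```

        <p>With the Lab model the same scheme applies, except that the contour method runs on both the a and b components and the texture method on L.</p>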
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Simulation results</title>
      <p>To estimate the performance of the proposed approach, we simulated it on images in the RGB, Lab
and HSV color models. The software used was Matlab. The experiments were designed to analyse the
roles that texture and color characteristics play in image segmentation. In the first experiment, we used the
RGB color model and the contour segmentation method discussed in subsection 2.3: the image was
divided into its R, G and B components, and each component was processed by the contour
segmentation method. In the second experiment, we applied the texture segmentation method to each
component of the RGB image; to detect the edges, the contour segmentation method was applied to the
segmented regions output by the texture segmentation. Then, we simulated the combination of texture
and contour segmentation on images in the Lab color model. The component L (lightness) was processed
by the texture segmentation method with subsequent contour detection, while the color components a and
b were processed by the contour segmentation method. In the final experiment, images in the HSV color
model were processed: the texture segmentation was applied to the V (brightness) component, the contour
segmentation was applied to the H (hue) component, and the S (saturation) component was not used.</p>
      <p>The qualitative performance was evaluated by comparing the segmentation results with a benchmark.
In the absence of benchmarks for real satellite images, we tested the proposed approach on images
from the Berkeley Segmentation Dataset [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. It contains the human-annotated ground-truth
segmentations corresponding to each test image.
      </p>
      <p>
        The quantitative comparison of performance is based on five metrics. They are
FOM (figure of merit) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], RMS (root mean squared error) [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], P(precision), R(recall), F-measure
[
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Because the result of the proposed approach is a contour image, we used contour-based metrics
for the quantitative evaluation of segmentation accuracy.
      </p>
      <p>The FOM (figure of merit) is an empirical distance between the contour image from the
segmentation result g and the corresponding ground truth f. It shows how similar the ground truth and
the segmentation result are. The FOM is defined as
FOM = (1 ⁄ max(card(f), card(g))) ∙ ∑_{i=1}^{card(f)} 1 ⁄ (1 + dᵢ²), (9)
where card(f) is the number of contour elements in the image f, card(g) is the number of contour
elements in the image g, and dᵢ is the distance between the i-th pixel in f and the nearest pixel to it in g.</p>
      <p>The RMS is the root mean squared error. It shows how different the ground truth and the segmented
image are. The RMS is defined as
RMS = ((1 ⁄ (w ∙ h)) ∙ ∑ᵢ (fᵢ − gᵢ)²)^{1⁄2}, (10)
where w and h are the width and height of the image, and fᵢ and gᵢ are the intensities of the i-th pixel in the
ground truth and segmented contour image.</p>
      <p>The P (precision) is the ratio of the correctly detected contour elements to all elements detected as
contours in the image g. The R (recall) is the ratio of the correctly detected contour elements to all contour
elements in the ground truth image f. They are calculated by the equations
P = TP ⁄ card(g), R = TP ⁄ card(f), (11)
where TP is the number of true positive decisions of the algorithm, i.e. the number of image
elements that are contours both in the segmented image and in the ground truth.</p>
      <p>The F-measure is a widely used metric for evaluating segmentation results that combines precision and
recall. It is the harmonic mean of precision and recall, calculated by equation (12):
F = 2 ∙ P ∙ R ⁄ (P + R). (12)</p>
      <p>For the FOM, P, R and F-measure, the higher the value, the better the segmentation result; for the RMS,
the lower the value, the higher the segmentation accuracy. Table 1 shows the values of the quality criteria
for different algorithms and color models (bold indicates the best value); the values are averaged over all
processed test images. We use the highest BBI for segmentation, because it contains the most significant
region details.</p>
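      <p>A small sketch of the contour metrics (10)-(12) for binary contour maps; f is the ground truth, g the segmentation result, and the function name is illustrative:</p>

```python
import numpy as np

def contour_scores(f, g):
    """RMS (10), precision and recall (11) and F-measure (12) for binary maps."""
    f = np.asarray(f, dtype=bool)
    g = np.asarray(g, dtype=bool)
    rms = float(np.sqrt(np.mean((f.astype(float) - g.astype(float)) ** 2)))
    tp = float(np.logical_and(f, g).sum())      # contours present in both images
    p = tp / g.sum() if g.sum() else 0.0        # TP / card(g)
    r = tp / f.sum() if f.sum() else 0.0        # TP / card(f)
    fm = 2 * p * r / (p + r) if p + r else 0.0
    return rms, p, r, fm
```

      <p>The FOM additionally needs, for every ground-truth contour pixel, the distance to the nearest detected pixel, which is conveniently computed with a distance transform.</p>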
      <sec id="sec-3-2">
        <title>Table 1. F-measure values for the compared segmentation methods</title>
        <p>Contour segmentation based on RGB color model: F = 0.249. Texture segmentation based on RGB
color model: F = 0.274. Segmentation based on Lab color model: F = 0.287. Segmentation based on HSV
color model: F = 0.322.</p>
        <p>Figure 3. Segmentation results.</p>
        <p>Figure 3b shows the results of contour segmentation using the RGB color model. The edges are
detected by the method based on two-dimensional Markov chains on each color component (R, G,
B), and all contours are then combined into one resulting image (figure 3b). Figure 3c shows the result of
the second experiment, in which the texture segmentation method is applied to each color component of the
RGB image. It is assumed that the initial image contains only two different textures. On the
segmented image, the regions of the first texture are marked as “1” and the regions of the other
texture as “0”, so the texture segmentation produces binary images. The contour
segmentation method is then applied to the texture segmentation results, detecting the edges of the texture
regions. The contour images of the three components are combined into one resulting image
(figure 3c).</p>
        <p>Figure 3d illustrates the image segmentation result in the Lab color model. Here, only the final resulting
contour image is shown. It is a combination of the contour segmentation results of the a and b components
and the contours of the texture regions detected on the L component. Figure 3e shows the resulting
contour image obtained on the satellite image in the HSV color model. It is a combination of the contours
detected on the H component and the contours of the texture regions detected on the V component.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>From the simulation results, it can be concluded that the contour segmentation method allows
detecting the edges of regions of different colors in the image, but it gives unsatisfactory results for
texture regions. Such texture regions are often observed on satellite images; they do not have
pronounced edges in terms of brightness or color, which leads to significant over-segmentation. This
case is shown in figure 3b. The texture segmentation detects the edges of texture regions
more clearly, but at the same time some edges between objects of different colors can be lost, as
illustrated in figure 3c. The sea region and the flat part of the coast differ significantly in color, but
they have close values of the transition probability of the elements in the two-dimensional
Markov chain. Consequently, the algorithm detects them as one region.</p>
      <p>The segmentation based on the Lab and HSV color models provides similar results. In both cases, the
edges of objects of different colors and the edges of different texture regions are detected more
precisely. The HSV color model has the additional advantage that only two components need to be
processed, which reduces the computational time significantly.</p>
      <p>The results of simulation on the test images confirm these conclusions. In table 1, segmentation
based on the HSV color model achieves one of the best values in most quality criteria. We can also see
that the contour segmentation based on the RGB color model has the best values of FOM and P. The
reason is that such segmentation over-segments the image, producing more details;
therefore, there are many coincidences between the ground truth and the segmented contours, but
there are also many falsely detected contour pixels. As a result, this segmentation
has the worst values of RMS and R, so the results of contour segmentation based on the RGB color model
are unsatisfactory.</p>
      <p>Thus, the proposed approach consists of the joint use of contour and texture segmentation on
different image components. It takes into account the color and texture characteristics of objects, which
makes the segmentation results more accurate. The HSV color model is recommended, because it
showed the best results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Vorobiova</surname>
            <given-names>N S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sergeyev</surname>
            <given-names>V V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Chernov</surname>
            <given-names>A V</given-names>
          </string-name>
          <year>2016</year>
          <article-title>Information technology of early crop identification by using satellite images</article-title>
          <source>Computer Optics</source>
          <volume>40</volume>
          (
          <issue>6</issue>
          )
          <fpage>929</fpage>
          -
          <lpage>938</lpage>
          DOI: 10.18287/2412-6179-2016-40-6-929-938
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Gonzalez R C and Woods R E 2008</surname>
          </string-name>
          <article-title>Digital image processing</article-title>
          (New York: Prentice Hall) p
          <fpage>954</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Verma</surname>
            <given-names>S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Chugh</surname>
            <given-names>A 2016</given-names>
          </string-name>
          <article-title>An increased modularity based contour detection</article-title>
          <source>International Journal of Computer Applications</source>
          <volume>135</volume>
          (
          <issue>12</issue>
          )
          <fpage>41</fpage>
          -
          <lpage>44</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Swami</surname>
            <given-names>D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Chaurasia B J 2017</surname>
          </string-name>
          <article-title>Super-pixel and Neighborhood based contour detection Comp</article-title>
          .
          <source>&amp; Math. Sci. 8</source>
          (
          <issue>6</issue>
          )
          <fpage>226</fpage>
          -
          <lpage>234</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Borne</surname>
            <given-names>F</given-names>
          </string-name>
          and
          <string-name>
            <surname>Viennois G 2017</surname>
          </string-name>
          <article-title>Texture-based classification for characterizing regions on remote sensing images</article-title>
          <source>Journal of Applied Remote Sensing</source>
          <volume>11</volume>
          (
          <issue>3</issue>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Hemalatha</surname>
            <given-names>S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Anouncia S M 2017</surname>
          </string-name>
          <article-title>Unsupervised segmentation of remote sensing images using FD based texture analysis model</article-title>
          and ISODATA
          <source>International Journal of Ambient Computing and Intelligence</source>
          <volume>8</volume>
          (
          <issue>3</issue>
          )
          <fpage>58</fpage>
          -
          <lpage>75</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Prudente</surname>
            <given-names>V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Da Silva</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Johann</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mercante</surname>
            <given-names>E</given-names>
          </string-name>
          and
          <string-name>
            <surname>Oldoni L 2017</surname>
          </string-name>
          <article-title>Comparative assessment between per-pixel and object-oriented for mapping land cover</article-title>
          and
          <source>use Journal of the Brazilian Association of Agricultural Engineering</source>
          <volume>37</volume>
          (
          <issue>5</issue>
          )
          <fpage>1015</fpage>
          -
          <lpage>1027</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Abbas</surname>
            <given-names>A W</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Minallh</surname>
            <given-names>N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ahmad</surname>
            <given-names>N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Abid</surname>
            <given-names>S A R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khan M A A 2016 K-Means</surname>
          </string-name>
          and
          <article-title>ISODATA Clustering Algorithms for Landcover Classification Using Remote Sensing Sindh Univ</article-title>
          .
          <source>Res. Jour. (Sci. Ser</source>
          .)
          <volume>48</volume>
          (
          <issue>2</issue>
          )
          <fpage>315</fpage>
          -
          <lpage>318</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Baya</surname>
            <given-names>A E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Larese</surname>
            <given-names>M G</given-names>
          </string-name>
          and
          <string-name>
            <surname>Namias</surname>
            <given-names>R</given-names>
          </string-name>
          2017 Clustering stability for
          <source>automated color image segmentation Expert Systems with Applications</source>
          <volume>86</volume>
          <fpage>258</fpage>
          -
          <lpage>273</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Li</surname>
            <given-names>M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>B</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Wu C 2014</surname>
          </string-name>
          <article-title>A Review of Remote Sensing Image Classification Techniques: the Role of Spatio-contextual Information</article-title>
          <source>European Journal of Remote Sensing</source>
          <volume>47</volume>
          <fpage>389</fpage>
          -
          <lpage>411</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Haralick R M 1979</surname>
          </string-name>
          <article-title>Statistical and structural approaches to texture</article-title>
          <source>Proceedings of the IEEE</source>
          <volume>67</volume>
          (
          <issue>5</issue>
          )
          <fpage>786</fpage>
          -
          <lpage>804</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Hemalatha</surname>
            <given-names>S</given-names>
          </string-name>
          and
          <string-name>
            <surname>Anouncia</surname>
            <given-names>S M</given-names>
          </string-name>
          <year>2016</year>
          <article-title>A computational model for texture analysis in images with fractional differential filter for texture detection</article-title>
          <source>International Journal of Ambient Computing and Intelligence</source>
          <volume>7</volume>
          (
          <issue>2</issue>
          )
          <fpage>93</fpage>
          -
          <lpage>113</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Zhang</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gao</surname>
            <given-names>Y W</given-names>
          </string-name>
          and
          <string-name>
            <surname>Feng</surname>
            <given-names>S W</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Image segmentation with texture clustering based JSEG</article-title>
          <source>International Conference on Machine Learning and Cybernetics (ICMLC)</source>
          DOI: 10.1109/ICMLC.2015.7340623
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Hu</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ding</surname>
            <given-names>Y</given-names>
          </string-name>
          and
          <string-name>
            <surname>Liu</surname>
            <given-names>Y</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Accurate and fast building detection using binary bag-of-features</article-title>
          <source>ISPRS Hannover Workshop: HRIGI 17 - CMRT 17 - ISA 17 - EuroCOW 17 XLII-1/W1</source>
          <fpage>613</fpage>
          -
          <lpage>617</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Liu</surname>
            <given-names>L X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fan</surname>
            <given-names>S M</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ning</surname>
            <given-names>X D</given-names>
          </string-name>
          and
          <string-name>
            <surname>Liao</surname>
            <given-names>L J</given-names>
          </string-name>
          <year>2017</year>
          <article-title>An efficient level set model with self-similarity for texture segmentation</article-title>
          <source>Neurocomputing</source>
          <volume>266</volume>
          <fpage>150</fpage>
          -
          <lpage>164</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>El Merabet</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Meurie</surname>
            <given-names>C</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ruichek</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sbihi</surname>
            <given-names>A</given-names>
          </string-name>
          and
          <string-name>
            <surname>Touahni</surname>
            <given-names>R</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Building roof segmentation from aerial images using a line-and region-based watershed segmentation technique</article-title>
          <source>Sensors</source>
          <volume>15</volume>
          (
          <issue>2</issue>
          )
          <fpage>3172</fpage>
          -
          <lpage>3203</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Myasnikov</surname>
            <given-names>E V</given-names>
          </string-name>
          <year>2017</year>
          <article-title>Hyperspectral image segmentation using dimensionality reduction and classical segmentation approaches</article-title>
          <source>Computer Optics</source>
          <volume>41</volume>
          (
          <issue>4</issue>
          )
          <fpage>564</fpage>
          -
          <lpage>572</lpage>
          DOI: 10.18287/2412-6179-2017-41-4-564-572
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <surname>Petrov</surname>
            <given-names>E P</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Trubin</surname>
            <given-names>I S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Medvedeva</surname>
            <given-names>E V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Smolskiy</surname>
            <given-names>S M</given-names>
          </string-name>
          <year>2013</year>
          <article-title>Mathematical Models of Video-Sequences of Digital Half-Tone Images</article-title>
          <source>Integrated Models for Information Communication Systems and Networks: Design and Development</source>
          (IGI Global)
          <fpage>207</fpage>
          -
          <lpage>241</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <surname>Medvedeva</surname>
            <given-names>E V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Kurbatova</surname>
            <given-names>E E</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Image segmentation based on two-dimensional Markov chains</article-title>
          <source>Computer Vision in Control Systems-2: Innovations in Practice</source>
          (Springer International Publishing Switzerland)
          <fpage>277</fpage>
          -
          <lpage>295</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <surname>Kurbatova</surname>
            <given-names>E E</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Medvedeva</surname>
            <given-names>E V</given-names>
          </string-name>
          and
          <string-name>
            <surname>Okulova</surname>
            <given-names>A A</given-names>
          </string-name>
          <year>2015</year>
          <article-title>Method of isolating texture areas in images</article-title>
          <source>Pattern Recognition and Image Analysis</source>
          <volume>25</volume>
          (
          <issue>1</issue>
          )
          <fpage>47</fpage>
          -
          <lpage>52</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <surname>Burnett</surname>
            <given-names>C</given-names>
          </string-name>
          and
          <string-name>
            <surname>Blaschke</surname>
            <given-names>T</given-names>
          </string-name>
          <year>2003</year>
          <article-title>A multi-scale segmentation/object relationship modelling methodology for landscape analysis</article-title>
          <source>Ecological Modelling</source>
          <volume>168</volume>
          (
          <issue>3</issue>
          )
          <fpage>233</fpage>
          -
          <lpage>249</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <surname>Krautsou</surname>
            <given-names>S L</given-names>
          </string-name>
          <year>2008</year>
          <article-title>Processing of remote sensing images (methods analysis)</article-title>
          (Minsk: UIIP NAS Belarus) p
          <fpage>256</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23] Berkeley Segmentation Dataset (Access mode: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench) (accessed 01.11.2017)
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <surname>Zhang</surname>
            <given-names>Y</given-names>
          </string-name>
          <year>2006</year>
          <article-title>Advances in Image and Video Segmentation</article-title>
          (USA: IRM Press) p
          <fpage>473</fpage>
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <surname>Martin</surname>
            <given-names>D</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fowlkes</surname>
            <given-names>C</given-names>
          </string-name>
          and
          <string-name>
            <surname>Malik</surname>
            <given-names>J</given-names>
          </string-name>
          <year>2004</year>
          <article-title>Learning to detect natural image boundaries using local brightness, color and texture cues</article-title>
          <source>IEEE Trans. on Pattern Analysis and Machine Intelligence</source>
          <volume>26</volume>
          <fpage>530</fpage>
          -
          <lpage>549</lpage>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>