=Paper=
{{Paper
|id=Vol-2210/paper16
|storemode=property
|title=Edge detection of objects on the satellite images
|pdfUrl=https://ceur-ws.org/Vol-2210/paper16.pdf
|volume=Vol-2210
|authors=Ekaterina Kurbatova
}}
==Edge detection of objects on the satellite images==
E E Kurbatova

Vyatka State University, Moskovskaya 36, Kirov, Russia, 610006
Abstract. Image segmentation is an important stage in image processing. An approach to satellite image segmentation based on object edge detection is proposed. The approach uses Markov random fields as a mathematical model of an image. It is proposed to apply contour and texture segmentation methods to different color components of a satellite image. Contour segmentation detects objects of different colors and is applied to the components carrying color information. The transition probability in two-dimensional Markov chains is used as a texture feature. Texture segmentation is applied to the component carrying brightness information. Simulation results of the proposed approach in different color models (RGB, HSV, Lab) are presented. The accuracy of contour detection was estimated on a set of test images using five criteria. The combination of color and texture characteristics of regions makes it possible to improve the accuracy of object edge detection.
1. Introduction
Remote sensing data are widely used in different applications, including agriculture, forestry and water management, monitoring of the environment and emergencies, urban planning, cartography, etc. Thematic processing is one of the ways to process satellite images in such systems. It includes detection, decoding and object recognition stages. Using thematic decoding of satellite images, it is possible to allocate different classes of objects, such as forests, fields, rivers, urban zones, etc. [1]. The obtained decoding results can be used for calculating the characteristics of objects and for tracing their changes over time. Decoding of satellite images requires a complex approach consisting of several consecutive stages, and different image processing methods can be used at different stages.
In general, image decoding consists of the following stages:
- image acquisition;
- image enhancement (filtering, contrast enhancement, increase of resolution, etc.);
- object detection (edge analysis, segmentation into homogeneous regions);
- object classification (sorting the allocated objects into a finite number of classes).
Each stage of this process uses data obtained at the previous stage. Therefore, the quality of each stage affects the accuracy of the recognition results. Usually, more complex algorithms are used at each subsequent stage; they require more processing time and offer a lower degree of automation. Therefore, it is preferable to use algorithms that have a small number of tuning parameters, require modest computational resources and minimal operator participation, while still providing high processing quality. This is especially relevant for the algorithms applied at the first stages.
This work addresses the object detection stage, whose main method is segmentation. Different features can be used for image segmentation, among them object brightness, color, texture, shape, etc. In general, all segmentation methods can be divided into two classes: methods of contour analysis,
and texture methods. Contour methods are based on detecting object edges with respect to some feature [2-4]. Texture methods find homogeneous regions, within which a texture feature is unchanged or changes little, while varying significantly between different regions. Different statistical, structural, morphological and spectral image characteristics can be used as texture features [9-12]. However, a single characteristic is often not enough for object detection. Therefore, combinations of different features and algorithms are often used in modern approaches to image segmentation [13-17].
In this work, an approach to the segmentation of satellite images is proposed. It is based on detecting object edges using both color and texture information, which increases the accuracy of edge detection. The approach uses a mathematical model based on Markov random fields for image description.
2. Image segmentation method
In the previous work [18], a mathematical model based on Markov random fields was proposed for image description. Based on this model, contour and texture segmentation methods have been developed in [19, 20]. They provide high efficiency and have low computational complexity. In this work, it is proposed to use these methods jointly.
2.1. Mathematical model of an image
According to the used model, a g-bit digital halftone image (DHI) is represented by a set of g bit binary images (BBI). Each BBI is a superposition of two one-dimensional Markov chains with two equiprobable states $M_1$ and $M_2$ and matrices of transition probabilities in the horizontal and vertical directions:

$$ {}^1\Pi = \begin{Vmatrix} {}^1\pi_{11} & {}^1\pi_{12} \\ {}^1\pi_{21} & {}^1\pi_{22} \end{Vmatrix}, \qquad {}^2\Pi = \begin{Vmatrix} {}^2\pi_{11} & {}^2\pi_{12} \\ {}^2\pi_{21} & {}^2\pi_{22} \end{Vmatrix}. \qquad (1) $$
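As an illustration of this representation, the following sketch (Python with NumPy is used here only as an example; the function name bit_planes is hypothetical) decomposes a g-bit DHI into its g bit binary images:

```python
import numpy as np

def bit_planes(dhi, g=8):
    """Represent a g-bit digital halftone image (DHI) as a list of g bit binary
    images (BBI); plane g-1 is the highest (most significant) BBI."""
    dhi = np.asarray(dhi).astype(np.uint16)
    return [((dhi >> l) & 1).astype(np.uint8) for l in range(g)]
```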
Figure 1 shows the l-th BBI divided into areas $F_i$ ($i = \overline{1,4}$) that are Markov chains of different dimensions. The regions $F_2$ and $F_3$ are one-dimensional Markov chains, and the region $F_4$ is a two-dimensional Markov chain. As shown in figure 2, the neighborhood of an image element in this region consists of three elements: $\nu_3 = \nu_{i,j}$ is the current element and $\nu_1 = \nu_{i,j-1}$, $\nu_2 = \nu_{i-1,j}$, $\nu_4 = \nu_{i-1,j-1}$ are its neighbors.

Figure 1. The areas of binary Markov random field. Figure 2. The fragment of F4 region.
The entropy approach was applied for calculating the probabilities of the binary element states. Thus, the amount of information in the element $\nu_3$ relative to the states of the neighboring elements $\nu_1$, $\nu_2$ is calculated by equation (2) [19]:

$$ I(\nu_3 \mid \nu_1, \nu_2) = -\log \frac{w(\nu_3 \mid \nu_1)\, w(\nu_3 \mid \nu_2)}{w(\nu_3 \mid \nu_2, \nu_1)}, \qquad (2) $$

where $w(\nu_3 \mid \nu_1)$, $w(\nu_3 \mid \nu_2)$ are the one-dimensional transition probability densities of the neighboring elements, and $w(\nu_3 \mid \nu_2, \nu_1)$ is the transition probability density in the two-dimensional Markov chain.
The transition probability density in the binary two-dimensional Markov chain can be expressed by equation (3), where $\delta(\cdot)$ is the delta function:

$$ w(\nu_3 \mid \nu_2, \nu_1) = \sum_{i,j,k=1}^{2} \pi(\nu_3 = M_k \mid \nu_1 = M_i, \nu_2 = M_j) \times \delta(\nu_1 - M_i) \times \delta(\nu_2 - M_j). \qquad (3) $$

Taking into account equation (3), the transition probability matrix for the various combinations of the neighboring element states has the form (4):

$$ \Pi = \left\| \pi_{ijk} \right\| = \begin{Vmatrix} \pi_{iii} & \pi_{iij} \\ \pi_{iji} & \pi_{ijj} \\ \pi_{jii} & \pi_{jij} \\ \pi_{jji} & \pi_{jjj} \end{Vmatrix} = \begin{Vmatrix} \alpha_1 & \alpha_1' \\ \alpha_2 & \alpha_2' \\ \alpha_3 & \alpha_3' \\ \alpha_4 & \alpha_4' \end{Vmatrix}; \quad i, j = \overline{1,2};\ i \neq j. \qquad (4) $$

The elements of this matrix are related to the elements of the ${}^1\Pi$, ${}^2\Pi$ matrices by the relations (5):

$$ \alpha_1 = \pi_{iii} = \pi(\nu_3 = M_1 \mid \nu_1 = M_1, \nu_2 = M_1) = {}^1\pi_{ii} \cdot {}^2\pi_{ii} \,/\, {}^3\pi_{ii}, \qquad \alpha_4 = 1 - \alpha_1; $$
$$ \alpha_2 = \pi_{iji} = \pi(\nu_3 = M_1 \mid \nu_1 = M_1, \nu_2 = M_2) = {}^1\pi_{ii} \cdot {}^2\pi_{ij} \,/\, {}^3\pi_{ij}, \qquad \alpha_3 = 1 - \alpha_2, \qquad (5) $$

where ${}^3\pi_{ij}$, $i, j = \overline{1,2}$, $i \neq j$, are the elements of the transition probability matrix ${}^3\Pi = {}^1\Pi \times {}^2\Pi$.
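A minimal sketch of how the elements of matrix (4) and the amount of information (2) could be computed is given below. It assumes that the states M1 and M2 are encoded as 0 and 1 and that 1Pi and 2Pi are given as 2x2 NumPy arrays; the function names alphas and information are hypothetical, and the sketch is an illustration of equations (2), (4) and (5), not the authors' implementation.

```python
import numpy as np

def alphas(pi1, pi2):
    """Conditional probabilities of eq. (5): alpha_k = pi(nu3 = M1 | nu1, nu2).

    pi1 and pi2 are the 2x2 horizontal and vertical transition matrices;
    their product gives 3Pi = 1Pi x 2Pi."""
    pi3 = pi1 @ pi2
    a1 = pi1[0, 0] * pi2[0, 0] / pi3[0, 0]   # nu1 = M1, nu2 = M1
    a2 = pi1[0, 0] * pi2[0, 1] / pi3[0, 1]   # nu1 = M1, nu2 = M2
    return a1, a2, 1.0 - a2, 1.0 - a1        # alpha3 = 1 - alpha2, alpha4 = 1 - alpha1

def information(s1, s2, s3, pi1, pi2):
    """Amount of information I(nu3 | nu1, nu2) of eq. (2) for binary states
    s1, s2, s3 in {0, 1} (0 corresponds to M1, 1 to M2)."""
    a1, a2, a3, a4 = alphas(pi1, pi2)
    # pi(nu3 = M1 | nu1, nu2) taken from matrix (4) for the given neighbour states
    alpha = {(0, 0): a1, (0, 1): a2, (1, 0): a3, (1, 1): a4}[(s1, s2)]
    w12 = alpha if s3 == 0 else 1.0 - alpha   # w(nu3 | nu2, nu1)
    w1 = pi1[s1, s3]                          # w(nu3 | nu1) from 1Pi
    w2 = pi2[s2, s3]                          # w(nu3 | nu2) from 2Pi
    return -np.log(w1 * w2 / w12)
```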
2.2. Texture segmentation method
This method [19, 20] is based on the two-dimensional mathematical model of an image. In general, a texture is a region in which some statistical properties are constant or change slowly. The estimate of the transition probability in the two-dimensional Markov chain is used as a texture feature. It is calculated using the sliding window method.
For the first line of the window, the estimate of the transition probability ${}^1\hat{\pi}_{ii}$ in the horizontal direction is calculated as [18]:

$$ {}^1\hat{\pi}_{ii} = 1 - \frac{2 p_1}{\hat{L}^{(l)}}, \qquad (6) $$

where $\hat{L}^{(l)}$ is the estimate of the average length of sequences of identical BBI elements and $p_1$ is the initial probability ($p_1 = 0.5$).
Starting from the second line, the estimate ${}^2\hat{\pi}_{ii}$ of the transition probability in the vertical direction and the estimate $\hat{\pi}_{iii}$ of the transition probability in the two-dimensional Markov chain are calculated by the matrix (4). All the obtained estimates are averaged within the window to produce a mean estimate of the transition probability $\bar{\pi}_{iii}$:

$$ \bar{\pi}_{iii} = \frac{1}{m \cdot n} \sum_{i=1}^{m} \sum_{j=1}^{n} \hat{\pi}_{iii}^{(i,j)}, \qquad (7) $$

where m and n are the height and width of the sliding window.
This mean value is used as a texture feature for the central element of the window.
A window of fixed size is moved from left to right and top to bottom over the l-th BBI to obtain a texture feature for each image element. Each image element is then labeled by comparing the calculated texture feature with a threshold. As a result, each image element receives a label corresponding to a certain texture. The threshold can be selected on the basis of the texture feature histogram. If there are several textures in the image, several thresholds need to be selected.
In the case of color image processing, each color component can be represented as a DHI. All color components are processed separately, and a threshold is selected for each component. The segmentation results obtained on the different color components are combined into a single color image, in which different colors correspond to the regions of different textures.
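A simplified sketch of the sliding-window texture feature is shown below. It uses only the horizontal estimate of equation (6) as the feature, and the window size and threshold (win = 15, threshold = 0.85) are illustrative assumptions, not values from the paper; the full method also averages the two-dimensional estimates according to equation (7).

```python
import numpy as np

def horizontal_transition_estimate(window, p1=0.5):
    """Estimate 1pi_ii from the average run length of identical elements, eq. (6)."""
    runs = []
    for row in window:
        # lengths of runs of identical binary values in this row
        change = np.flatnonzero(np.diff(row)) + 1
        bounds = np.concatenate(([0], change, [row.size]))
        runs.extend(np.diff(bounds))
    L_hat = np.mean(runs)
    return 1.0 - 2.0 * p1 / L_hat

def texture_feature_map(bbi, win=15, threshold=0.85):
    """Slide a win x win window over a bit binary image (BBI) and label each
    central element by comparing the texture feature with a threshold."""
    h, w = bbi.shape
    r = win // 2
    labels = np.zeros((h, w), dtype=np.uint8)
    for i in range(r, h - r):
        for j in range(r, w - r):
            window = bbi[i - r:i + r + 1, j - r:j + r + 1]
            feature = horizontal_transition_estimate(window)
            labels[i, j] = 1 if feature > threshold else 0
    return labels
```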
2.3. Contour segmentation method
To detect object edges, the amount of information between the element $\nu_3$ and the various combinations of the neighboring elements is calculated. It is determined with the matrix (4) and equation (2) for each element of the l-th BBI.
The amount of information in the element $\nu_3$ is minimal if the neighboring elements $\nu_1$, $\nu_2$ have the same states as $\nu_3$ [18]. On the edge of a region of different brightness, one or two neighboring elements have states that differ from $\nu_3$; in this case the amount of information in the element $\nu_3$ increases. If the amount of information in the element $\nu_3$ is greater than the threshold h, the pixel belongs to a contour; otherwise, the element $\nu_3$ belongs to a homogeneous region.
The threshold h is calculated for each BBI taking into account the minimal amount of information and the amount of information obtained when one of the neighboring elements has a different state:

$$ h = 0.5 \cdot \big( I(\nu_3 = M_i \mid \nu_1 = M_i, \nu_2 = M_i) + I(\nu_3 = M_i \mid \nu_1 = M_i, \nu_2 = M_j) \big). \qquad (8) $$
It is supposed that the transition probability matrices are a priori known.
In the case of a color image, the contours are detected on each color component. Then the contour maps of all the components are combined into a single contour image.
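A minimal sketch of contour detection by thresholding the amount of information is given below. It reuses information() from the earlier sketch, assumes the transition matrices are known a priori, and uses the hypothetical function name contour_map; it is an illustration of equations (2) and (8) rather than the authors' implementation.

```python
import numpy as np

def contour_map(bbi, pi1, pi2):
    """Mark an element as a contour point when the amount of information (2)
    exceeds the threshold h of eq. (8)."""
    rows, cols = bbi.shape
    # threshold: mean of I for "both neighbours equal" and "one neighbour differs"
    h = 0.5 * (information(0, 0, 0, pi1, pi2) + information(0, 1, 0, pi1, pi2))
    contours = np.zeros((rows, cols), dtype=np.uint8)
    for i in range(1, rows):
        for j in range(1, cols):
            s1 = int(bbi[i, j - 1])   # horizontal neighbour nu1
            s2 = int(bbi[i - 1, j])   # vertical neighbour nu2
            s3 = int(bbi[i, j])       # current element nu3
            if information(s1, s2, s3, pi1, pi2) > h:
                contours[i, j] = 1
    return contours
```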
2.4. Objects edge detection on satellite images
Most often, satellite images are multispectral (multicomponent). They are displayed as color images with three channels. The three channels may be three multispectral bands of the same scene, and various types of color images can be prepared from different band combinations. True-color images use the visible red, green and blue bands. False-color images use a combination of near-infrared, red and green bands. Pseudocolor images contain medium-infrared, near-infrared and green bands [21, 22].
Color is an important characteristic of objects that often simplifies their segmentation and recognition. There are several ways to specify colors. The RGB color model is the simplest and the most natural. In this case a color image consists of three components (red, green and blue) described by their corresponding intensities. This model has a wide color gamut, but it is poorly suited for processing tasks, because the color and the brightness information are encoded together in all three channels. In the Lab and HSV color models, color and brightness information are separated into different components, so they are much more convenient for processing.
In the Lab color model, the a and b components encode color. The first component, a, determines the color position between green and magenta; the second component, b, determines its position between blue and yellow. The third component, L, is independent of color information and encodes brightness only.
The HSV color model uses only one channel to describe color. The image consists of three components: the hue H, the saturation S and the value (or brightness) V. The hue H component is the color position; the saturation S component is the amount of gray in the color; the value V is the brightness or intensity of the color.
The main idea of the proposed approach is that different segmentation methods are applied to different components of a color image. The texture segmentation method described in subsection 2.2 is used on the component carrying brightness information. The contour segmentation method described in the previous subsection is applied to the components carrying color information. As a result of texture segmentation, regions of different textures are marked with different labels. To obtain the contour map, a second stage is added after texture segmentation: the contour segmentation is applied to the labeled image output by the texture segmentation. Then the contours detected on the different components are combined into a single contour image. Thus, the proposed approach takes into account both the color and texture information for image segmentation, which improves the accuracy of object edge detection.
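A minimal sketch of this pipeline for the HSV color model is given below. It reuses bit_planes(), texture_feature_map() and contour_map() from the earlier sketches, uses scikit-image only for the RGB-to-HSV conversion, and combines the contour maps by a simple logical OR; all of these choices are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np
from skimage import color

def segment_hsv(rgb_image, pi1, pi2):
    """Proposed approach sketched for the HSV color model: texture segmentation
    (followed by contour detection) on the V component, contour segmentation on
    the H component, results combined into a single contour image."""
    hsv = color.rgb2hsv(rgb_image)                                  # H, S, V in [0, 1]
    h_bbi = bit_planes((hsv[..., 0] * 255).astype(np.uint8))[-1]    # highest BBI of hue
    v_bbi = bit_planes((hsv[..., 2] * 255).astype(np.uint8))[-1]    # highest BBI of value
    texture_labels = texture_feature_map(v_bbi)             # texture regions from brightness
    texture_edges = contour_map(texture_labels, pi1, pi2)   # edges of the texture regions
    color_edges = contour_map(h_bbi, pi1, pi2)              # edges from the hue component
    return np.logical_or(texture_edges, color_edges).astype(np.uint8)
```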
3. Simulation results
To estimate the performance of the proposed approach, we simulated it on images in the RGB, Lab and HSV color models. The software used was Matlab. The experiments were designed to analyse the role that texture and color characteristics play in image segmentation. In the first experiment, we used the RGB color model and the contour segmentation method discussed in subsection 2.3. In this case the image was divided into three components (R, G and B), and each component was processed by the contour segmentation method. In the second experiment, we applied the texture
segmentation method to each component of the RGB image. To detect the edges, the contour segmentation method was applied to the segmented regions output by the texture segmentation. Then, we simulated the combination of texture and contour segmentation on images in the Lab color model. The L component (lightness) was processed by the texture segmentation method with subsequent contour detection. The color components a and b were processed by the contour segmentation method. In the final experiment, images in the HSV color model were processed. The texture segmentation was applied to the V component (brightness), and the contour segmentation was applied to the H component (hue). The S component (saturation) was not used.
The segmentation quality was evaluated by comparing the segmentation results with a benchmark. Due to the lack of benchmarks for real satellite images, we tested the proposed approach on images from the Berkeley Segmentation Dataset [23], which contains human-annotated ground truth segmentations for each test image.
The quantitative comparison of performance is based on five metrics: FOM (figure of merit) [24], RMS (root mean squared error) [24], P (precision), R (recall) and the F-measure [25]. Because the result of the proposed approach is a contour image, we used metrics based on the contour representation for the quantitative evaluation of segmentation accuracy.
The FOM (figure of merit) is an empirical distance between the contour image obtained from the segmentation result g and the corresponding ground truth f. It shows how similar the ground truth and the segmentation result are. The FOM is defined as:

$$ FOM = \left(\max\{card(f), card(g)\}\right)^{-1} \sum_{i=1}^{card(f)} \left(1 + d_i^2\right)^{-1}, \qquad (9) $$

where card(f) is the number of contour elements in the image f, card(g) is the number of contour elements in the image g, and d_i is the distance between the i-th contour pixel in f and the nearest contour pixel to it in g.
The RMS is the root mean squared error. It shows how different the ground truth and the segmented image are. The RMS is defined as:

$$ RMS = \left( \frac{1}{w \cdot h} \sum_{i=1}^{w \cdot h} (f_i - g_i)^2 \right)^{1/2}, \qquad (10) $$

where w and h are the width and height of the image, and f_i and g_i are the intensities of the i-th pixel in the ground truth and the segmented contour image, respectively.
The P (precision) is the ratio of the correctly detected contour elements to all elements detected as contours in the image g. The R (recall) is the ratio of the correctly detected contour elements to all contour elements in the ground truth image f. They are calculated by the equations:

$$ P = \frac{TP}{card(g)}; \qquad R = \frac{TP}{card(f)}, \qquad (11) $$

where TP is the number of true positive decisions of the algorithm, i.e. the number of image elements that are contours both in the segmented image and in the ground truth.
The F-measure is a widely used metric for evaluating segmentation results that combines precision and recall; it is their weighted harmonic mean. The F-measure is calculated by equation (12):

$$ F = 2 \cdot \frac{P \cdot R}{P + R}. \qquad (12) $$

The higher the FOM, P, R and F-measure values, the better the segmentation results; the lower the RMS, the higher the segmentation accuracy.
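The sketch below computes RMS, precision, recall and the F-measure for a pair of binary contour maps. Exact pixel coincidence is used for matching, which is a simplifying assumption; benchmark protocols such as [25] usually allow a small localization tolerance.

```python
import numpy as np

def contour_metrics(f, g):
    """RMS (10), precision and recall (11) and F-measure (12) for binary
    contour maps: f is the ground truth, g is the segmentation result."""
    f = np.asarray(f, dtype=bool)
    g = np.asarray(g, dtype=bool)
    tp = np.count_nonzero(f & g)                    # correctly detected contour pixels
    precision = tp / max(np.count_nonzero(g), 1)
    recall = tp / max(np.count_nonzero(f), 1)
    f_measure = 2 * precision * recall / max(precision + recall, 1e-12)
    rms = float(np.sqrt(np.mean((f.astype(float) - g.astype(float)) ** 2)))
    return rms, precision, recall, f_measure
```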
Table 1 gives the values of these quality metrics for the segmentation results in different color models (the best value of each criterion is shown in bold). The values are averaged over all processed test images.
Figure 3 presents the segmentation results on a true-color satellite image (figure 3a) obtained using different algorithms and color models. We use the highest BBI for segmentation, because it contains the most significant region details.
Table 1. Quality criteria for test images.

| Segmentation method | FOM | RMS | R | P | F |
|---|---|---|---|---|---|
| Contour segmentation based on RGB color model | **0.144** | 0.437 | **0.772** | 0.164 | 0.249 |
| Texture segmentation based on RGB color model | 0.137 | 0.322 | 0.519 | 0.197 | 0.274 |
| Segmentation based on Lab color model | 0.127 | **0.268** | 0.384 | **0.256** | 0.287 |
| Segmentation based on HSV color model | 0.140 | 0.304 | 0.543 | 0.247 | **0.322** |
Figure 3. Segmentation results: a) original true-color satellite image; b) contour segmentation in the RGB color model; c) texture and contour segmentation in the RGB color model; d) segmentation in the Lab color model; e) segmentation in the HSV color model.
Figure 3b shows the results of contour segmentation using the RGB color model. The edges are detected by the method based on two-dimensional Markov chains on each color component (R, G, B). Then all contours are combined into one resulting image (figure 3b). Figure 3c shows the result of the second experiment. Here the texture segmentation method is applied to each color component of the RGB image. It is assumed that the initial image contains only two different textures. In the segmented image, the regions of the first texture are marked as "1" and the regions of the other texture as "0". As a result of texture segmentation, binary images are obtained. The contour segmentation method is then applied to the texture segmentation results, so the edges of the texture regions are detected. The contour images of the three components are combined into one resulting image (figure 3c).
Figure 3d illustrates the image segmentation result in the Lab color model. Here only the final resulting contour image is shown. It is a combination of the contour segmentation results of the a and b components and the contours of the texture regions detected on the L component. Figure 3e shows the resulting contour image obtained for the satellite image in the HSV color model. It is a combination of the contours detected on the H component and the contours of the texture regions detected on the V component.
4. Conclusion
From the simulation results, it can be concluded that the contour segmentation method allows detecting the edges of regions of different colors in the image, but it gives unsatisfactory results for texture regions. Such texture regions are often observed on satellite images; they do not have pronounced edges in terms of brightness or color, which leads to significant over-segmentation. This case is shown in figure 3b. The texture segmentation allows detecting the edges of texture regions
more clearly. At the same time, some edges between objects of different colors can be lost, as illustrated in figure 3c. The sea region and the flat part of the coast differ significantly in color, but they have close values of the transition probability of the elements in the two-dimensional Markov chain. Consequently, the algorithm detects them as one region.
The segmentation based on the Lab and HSV color models provides similar results. In both cases, the edges of objects of different colors and the edges of different texture regions are detected more precisely. The HSV color model has the additional advantage that it is enough to process only two components, which allows the computational time to be reduced significantly.
The results of the simulation on the test images confirm these conclusions. In table 1, the segmentation based on the HSV color model achieves among the best values for most quality criteria. We can also see that the contour segmentation based on the RGB color model has the best values of FOM and R. The reason is that such segmentation produces over-segmentation, which gives more detail; therefore, there are many coincidences between the ground truth and the segmented contours. However, there are also many falsely detected contour pixels, so this segmentation has the worst values of RMS and P and its results are unsatisfactory.
Thus, the proposed approach consists of the joint use of contour and texture segmentation on different image components. It takes into account the color and texture characteristics of objects for segmentation, which makes the segmentation results more accurate. It is recommended to use the HSV color model, because it showed the best results.
5. References
[1] Vorobiova N S, Sergeyev V V and Chernov A V 2016 Information technology of early crop
identification by using satellite images Computer Optics 40(6) 929-938 DOI: 10.18287/2412-
6179-2016-40-6-929-938
[2] Gonzalez R C and Woods R E 2008 Digital image processing (New York: Prentice Hall) p 954
[3] Verma S and Chugh A 2016 An increased modularity based contour detection International
Journal of Computer Applications 135(12) 41-44
[4] Swami D and Chaurasia B J 2017 Super-pixel and Neighborhood based contour detection
Comp. & Math. Sci. 8(6) 226-234
[5] Borne F and Viennois G 2017 Texture-based classification for characterizing regions on remote
sensing images Journal of Applied Remote Sensing 11(3)
[6] Hemalatha S and Anouncia S M 2017 Unsupervised segmentation of remote sensing images
using FD based texture analysis model and ISODATA International Journal of Ambient
Computing and Intelligence 8(3) 58-75
[7] Prudente V, Da Silva B, Johann J, Mercante E and Oldoni L 2017 Comparative assessment
between per-pixel and object-oriented for mapping land cover and use Journal of the Brazilian
Association of Agricultural Engineering 37(5) 1015-1027
[8] Abbas A W, Minallh N, Ahmad N, Abid S A R, Khan M A A 2016 K-Means and ISODATA
Clustering Algorithms for Landcover Classification Using Remote Sensing Sindh Univ. Res.
Jour. (Sci. Ser.) 48(2) 315-318
[9] Baya A E, Larese M G and Namias R 2017 Clustering stability for automated color image
segmentation Expert Systems with Applications 86 258-273
[10] Li M, Zhang S, Zhang B, Li S and Wu C 2014 A Review of Remote Sensing Image
Classification Techniques: the Role of Spatio-contextual Information European Journal of
Remote Sensing 47 389-411
[11] Haralick R M 1979 Statistical and structural approaches to texture Proceedings of the IEEE
67(5) 786-804
[12] Hemalatha S and Anouncia S M 2016 A computational model for texture analysis in images
with fractional differential filter for texture detection International Journal of Ambient
Computing and Intelligence 7(2) 93-113
[13] Zhang J, Gao Y W and Feng S W 2015 Image segmentation with texture clustering based JSEG
International Conference on Machine Learning and Cybernetics (ICMLC) DOI:
10.1109/ICMLC.2015.7340623
[14] Hu Y, Li Z, Li P, Ding Y and Liu Y 2017 Accurate and fast building detection using binary
bag-of-features ISPRS Hannover Workshop: HRIGI 17 โ CMRT 17 โ ISA 17 โ EuroCOW 17
XLII-1/W1 613-617
[15] Liu L X, Fan S M, Ning X D and Liao L J 2017 An efficient level set model with self-similarity
for texture segmentation Neurocomputing 266 150-164
[16] El Merabet Y, Meurie C, Ruichek Y, Sbihi A and Touahni R 2015 Building roof segmentation
from aerial images using a line-and region-based watershed segmentation technique Sensors
15(2) 3172-3203
[17] Myasnikov E V 2017 Hyperspectral image segmentation using dimensionality reduction and
classical segmentation approaches Computer Optics 41(4) 564-572 DOI: 10.18287/2412-6179-
2017-41-4-564-572
[18] Petrov E P, Trubin I S, Medvedeva E V and Smolskiy S M 2013 Mathematical Models of
Video-Sequences of Digital Half-Tone Images Integrated models for information
communication systems and net-works : design and development (IGI Global) 207-241
[19] Medvedeva E V and Kurbatova E E 2015 Image segmentation based on two-dimensional
Markov chains Computer Vision in Control Systems-2. Innovations in practice (Springer
International Publishing Switzerland) 277-295
[20] Kurbatova E E, Medvedeva E V and Okulova A A 2015 Method of isolating texture areas in
images Pattern Recognition and Image Analysis 25(1) 47-52
[21] Burnett C and Blaschke T 2003 A multi-scale segmentation/object relationship modelling
methodology for landscape analysis Ecological Modelling 168(3) 233-249
[22] Krautsou S L 2008 Processing of remote sensing images (methods analysis) (Minsk: UIIP NAS
Belarus) p 256
[23] Berkeley Segmentation Dataset (Access mode: http://www.eecs.berkeley.edu/Research/Projects/CS/vision/grouping/segbench) (01.11.2017)
[24] Zhang Y 2006 Advances in Image And Video Segmentation (USA: IRM Press) p 473
[25] Martin D, Fowlkes C and Malik J 2004 Learning to detect natural image boundaries using local
brightness, color and texture cues IEEE Trans. on Pattern analysis and Machine Intelligence 26
530-549