=Paper= {{Paper |id=Vol-3006/08_regular_paper |storemode=property |title=Statistical texture analysis of forest areas from very high spatial resolution satellite images |pdfUrl=https://ceur-ws.org/Vol-3006/08_regular_paper.pdf |volume=Vol-3006 |authors=Egor V. Dmitriev,Timofei V. Kondranin,Petr G. Melnik,Sergey A. Donskoy }} ==Statistical texture analysis of forest areas from very high spatial resolution satellite images== https://ceur-ws.org/Vol-3006/08_regular_paper.pdf
Statistical texture analysis of forest areas from very
high spatial resolution satellite images
Egor V. Dmitriev1,2 , Timofei V. Kondranin2 , Petr G. Melnik3 and Sergey A. Donskoy4
1
  Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences, Moscow, Russia
2
  Moscow Institute of Physics and Technology (National Research University), Dolgoprudny, Moscow Region, Russia
3
  Mytischi Branch of Bauman Moscow State Technical University, Mytischi, Moscow Region, Russia
4
  Federal Forestry Agency ROSLESINFORG, Moscow, Russia


Abstract
Aerospace images with a spatial resolution of less than 1 m are actively used by regional services to obtain and update information about various environmental objects. Considerable efforts are being devoted to the development of remote sensing methods for forest areas. The structure of the forest canopy depends on various parameters, most of which are determined by ground-based methods during forest management works. Remote sensing methods for assessing the structural parameters of forest stands are based on texture analysis of panchromatic and multispectral images. A statistical approach is often used to extract texture features. The basis of this approach is the description of the distributions characterizing the mutual arrangement of image pixels in grayscale. This paper compares the effectiveness of matrix-based statistical methods for extracting textural features for solving the problem of classifying various natural and manmade objects, as well as structures of the forest canopy. We consider statistics of various orders based on estimates of the distributions of gray levels, as well as the mutual occurrence, frequency, difference and structuring of gray levels. The results of assessing the informativeness of statistical textural characteristics in determining various structures of the forest canopy are presented. Dependences of the classification results on the choice of distribution parameters are determined. For the quantitative validation of the results obtained, data from ground surveys and expert visual classification of very high resolution WorldView-2 images of the territories of the Savvatyevskoe and Bronnitskoe forestries are used.

Keywords
Remote sensing, pattern recognition, texture analysis, very high resolution images, soil-vegetation cover.




1. Introduction
In recent years, machine learning methods have been widely used for various tasks of automation
and increasing the information content of procedures for thematic processing and analysis of
aerospace images in the visible and near infrared spectral ranges. Multispectral satellite images
of low and medium spatial resolution are traditionally used for survey of the soil and vegetation
cover and the construction of large-scale thematic maps [1]. With the increase in the spatial
and spectral resolution of satellite equipment, a number of novel tasks associated with remote
sensing of natural and anthropogenic objects have arisen.
   The high (1–4 m) and very high (< 1 m) spatial resolution of panchromatic satellite images
forms the basis of methods for solving new tasks of monitoring land, forest and water resources,

SDM-2021: All-Russian conference, August 24–27, 2021, Novosibirsk, Russia
" yegor@mail.ru (E. V. Dmitriev); melnik_petr@bk.ru (P. G. Melnik); lesshii@bk.ru (S. A. Donskoy)
                                       © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073, http://ceur-ws.org



                                                                                                          56
Egor V. Dmitriev et al. CEUR Workshop Proceedings                                             56–66


searching for mineral deposits and assessing the ecological situation, tasks that are more
demanding owing to increased consumer requirements. There is a need to develop special
approaches for analyzing large amounts of information and obtaining remote estimates of
the characteristics of the examined objects with a given accuracy. Improving the efficiency
of thematic processing of aerospace images of high spatial and spectral resolution is in high
demand in many applications in the fields of natural resource management, agriculture, forestry
and environmental monitoring [2].
   The current trend in the development of methods for thematic processing of high resolution
images is the combined use of spectral and texture features. For example, a method of
spectral-texture processing of aerial hyperspectral images of a forest canopy was presented
in [3]. Analysis of the results of test calculations for selected areas of the Savvatyevskoe forestry
(Russia, Tver) showed that the proposed approach provides a significant increase in the accuracy
of classification of the species composition and age groups, in comparison with the averaged
spectral characteristics. It should be noted that for synthesized multispectral images, taking
texture features into account increased the accuracy by more than 10%.
   The presented results on improving the accuracy due to the use of texture features are
also confirmed by the comparison with the previously obtained results of thematic processing
of hyperspectral images of nearby territories presented in [4]. Both effective nonparametric
methods of cluster analysis [5] and optimized ensemble machine learning algorithms [6] can be
successfully used for spectral-texture classifications.
   The work [7] shows new possibilities of using statistical texture analysis of satellite images of
very high spatial resolution to retrieve the structural parameters of forest stands, characterizing
the variety of sizes and density of crowns, as well as the relative position of individual trees.
The presented technique is based on the parameterization of linear relationships between the
Haralick texture features and the structural parameters of pine stands. The results obtained can
be effectively used to provide more accurate estimates of the aboveground biomass of forest
stand fractions.
   The accuracy of texture analysis depends on the chosen feature extraction method [8]. In this
paper, we discuss the possibilities of using various statistical methods for measuring textures
based on the matrix representation.


2. Texture feature extraction and classification methods
The panchromatic satellite images presented in grayscale are considered as an object of texture
analysis since they have the highest spatial resolution. The texture is formed by the spatial ar-
rangement and the mutual combination of structural elements. Natural objects are characterized
by a random arrangement of structural elements and significant variations in their parameters,
for example, tone and size. Thus, the task of constructing parameters characterizing a particular
texture is associated with obtaining statistical estimates.
   Statistical methods of texture analysis are based on assessing the spatial distribution of local
characteristics of structural elements for all possible locations in the image and extracting
statistical parameters from the obtained distributions of local characteristics. Matrix methods
assume that the desired distribution is discrete and has a finite number of elements. An image





for which texture extraction is performed must contain a sufficiently large number of structural
elements to obtain a reliable estimate of the probability mass function. Texture features obtained on
the basis of matrix methods are subdivided into characteristics of the 1st and 2nd orders.
   The construction of first order texture characteristics implies that the structural elements are
individual pixels of the original image [9]. Gray-Level Matrix (GLM) is a vector of frequencies
of gray level occurrence in the processed image 𝐼(𝑥, 𝑦) with the size 𝐿𝑥 × 𝐿𝑦 :

                    GLM(𝑘) = # {(𝑥, 𝑦)|𝐼(𝑥, 𝑦) = 𝑘, (𝑥, 𝑦) ∈ 𝐿𝑥 × 𝐿𝑦 }

where # means the number of elements in the set, 𝑘 = 1, . . . , 𝑁𝑔𝑙 , and 𝑁𝑔𝑙 is the number of
gray scales.
   For building the Gray Level Difference Matrix (GLDM), the original image 𝐼 is converted into
a difference image 𝐷𝐼:

                          𝐷𝐼Δ𝑥,Δ𝑦 = |𝐼(𝑥, 𝑦) − 𝐼(𝑥 + Δ𝑥, 𝑦 + Δ𝑦)|

where parameters Δ𝑥 and Δ𝑦 are displacements along the horizontal and vertical directions,
respectively. GLDM is a vector of frequencies of occurrence of absolute values of differences in
gray levels at the given displacement:

GLDM(𝑘) = # {(𝑥, 𝑦)|𝐷𝐼(𝑥, 𝑦) = 𝑘, (𝑥, 𝑦) ∈ (𝐿𝑥 − Δ𝑥)×(𝐿𝑦 − Δ𝑦)} ,               𝑘 = 0, . . . , 𝑁𝑔𝑙 −1.

  An example of constructing GLM and GLDM matrices is shown in Figure 1.
   For calculating texture features, GLM and GLDM are converted into the corresponding
probability mass function estimates:

    F_GLM(k) = GLM(k) / Σ_{k=1}^{N_gl} GLM(k),        F_GLDM(k) = GLDM(k) / Σ_{k=0}^{N_gl−1} GLDM(k).

   The corresponding 1st order texture characteristics are presented in Table 1.
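As a minimal illustration (a sketch, not the authors' code: the function and key names are ours, and gray levels are assumed coded 1..N_gl as in the text), the GLM and GLDM histograms and the first-order features of Table 1 can be computed as follows:

```python
import numpy as np

def glm(img, n_gl):
    # GLM(k): number of pixels with gray level k, k = 1..N_gl
    return np.bincount(img.ravel(), minlength=n_gl + 1)[1:]

def gldm(img, dx, dy, n_gl):
    # Difference image |I(x,y) - I(x+dx, y+dy)| over the overlapping region
    h, w = img.shape
    di = np.abs(img[:h - dy, :w - dx].astype(int) - img[dy:, dx:].astype(int))
    # GLDM(k): frequency of absolute difference k, k = 0..N_gl-1
    return np.bincount(di.ravel(), minlength=n_gl)

def first_order_features(img, dx=1, dy=0, n_gl=8):
    f_glm = glm(img, n_gl).astype(float)
    f_glm /= f_glm.sum()
    f_gldm = gldm(img, dx, dy, n_gl).astype(float)
    f_gldm /= f_gldm.sum()
    k_glm = np.arange(1, n_gl + 1)
    k_gldm = np.arange(n_gl)
    mu = img.mean()
    nz, nzd = f_glm > 0, f_gldm > 0          # avoid log(0) in the entropies
    return {
        "mean": mu,
        "mean_square": (img.astype(float) ** 2).mean(),
        "entropy": -np.sum(f_glm[nz] * np.log(f_glm[nz])),
        "energy": np.sum(f_glm ** 2),
        "variance": np.sum((k_glm - mu) ** 2 * f_glm),
        "gldm_expectation": np.sum(k_gldm * f_gldm),
        "gldm_contrast": np.sum(k_gldm ** 2 * f_gldm),
        "gldm_asm": np.sum(f_gldm ** 2),
        "gldm_entropy": -np.sum(f_gldm[nzd] * np.log(f_gldm[nzd])),
    }
```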
   The extraction of second order texture characteristics is primarily associated with the con-
struction of two-dimensional distributions. Structural elements in this case consist of two pixels
or two groups of pixels, for each of which a corresponding characteristic is determined. The best
known is the method proposed in [10]. The method uses structural elements consisting of two




Figure 1: Construction of GLM and GLDM from sample image.






Table 1
First order texture features.

  GLM features:
    Mean:         μ = (1 / (L_x L_y)) Σ_{x=1}^{L_x} Σ_{y=1}^{L_y} I(x, y)
    Mean-square:  (1 / (L_x L_y)) Σ_{x=1}^{L_x} Σ_{y=1}^{L_y} I²(x, y)
    Entropy:      −Σ_{k=1}^{N_gl} F_GLM(k) log F_GLM(k)
    Energy:       Σ_{k=1}^{N_gl} F²_GLM(k)
    Variance:     Σ_{k=1}^{N_gl} (k − μ)² F_GLM(k)

  GLDM features:
    Expectation:            Σ_{k=0}^{N_gl−1} k F_GLDM(k)
    Contrast:               Σ_{k=0}^{N_gl−1} k² F_GLDM(k)
    Angular Second Moment:  Σ_{k=0}^{N_gl−1} F²_GLDM(k)
    Entropy:                −Σ_{k=0}^{N_gl−1} F_GLDM(k) log F_GLDM(k)



pixels at a certain specified distance (adjacency distance). One of these pixels is called reference.
For the reference pixel, the neighboring one is selected in a given direction of adjacency. To
describe the spatial relationship between the reference and neighboring pixels, the frequencies
of occurrence of the corresponding gray-scale pairs are calculated for all possible positions of
the reference pixel in the original image. Based on these frequencies, we can form a matrix
known as the Gray-Level Co-occurrence Matrix (GLCM) or Spatial Gray-Level Dependency
Matrix (SGLDM). An example of constructing the GLCM is shown in Figure 2. The GLCM is a




Figure 2: Construction of GLCM and GGCM from sample image.






square matrix of integer counts whose size is determined by the number of gray levels. In the
example presented in Figure 2, the original image has 8 gray levels, so the GLCM has a size of
8 × 8. The matrix is symmetric if the order of the gray levels in the reference and neighboring
pixels is disregarded.
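The construction just described can be sketched as follows (a hedged example; the helper name is ours, and gray levels are again assumed coded 1..N_gl):

```python
import numpy as np

def glcm(img, dx=1, dy=0, n_gl=8, symmetric=True):
    """Gray-Level Co-occurrence Matrix for displacement (dx, dy)."""
    h, w = img.shape
    ref = img[:h - dy, :w - dx].ravel()   # reference pixels
    nbr = img[dy:, dx:].ravel()           # neighbors in the adjacency direction
    m = np.zeros((n_gl, n_gl), dtype=int)
    np.add.at(m, (ref - 1, nbr - 1), 1)   # count each (reference, neighbor) pair
    if symmetric:
        m = m + m.T                       # ignore the order of the pair, as noted above
    return m
```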
   The original image I(x, y) is a function of two spatial coordinates, so for each pixel we can
calculate a finite-difference estimate of the gradient. The gradient magnitude

    g(x, y) = √( (∂I/∂x)² + (∂I/∂y)² )

characterizes the rate of change of the image tone at the reference pixel. To obtain a
finite-difference estimate, we use the Sobel operator:

    S(x, y) = √( S_x²(x, y) + S_y²(x, y) ) ≃ g(x, y)

where

  𝑆𝑥 (𝑥, 𝑦) = [𝐼(𝑥 + 1, 𝑦 − 1) + 2𝐼(𝑥 + 1, 𝑦) + 𝐼(𝑥 + 1, 𝑦 + 1)] −
                                       − [𝐼(𝑥 − 1, 𝑦 − 1) + 2𝐼(𝑥 − 1, 𝑦) + 𝐼(𝑥 − 1, 𝑦 + 1)] ,


  𝑆𝑦 (𝑥, 𝑦) = [𝐼(𝑥 − 1, 𝑦 + 1) + 2𝐼(𝑥, 𝑦 + 1) + 𝐼(𝑥 + 1, 𝑦 + 1)] −
                                       − [𝐼(𝑥 − 1, 𝑦 − 1) + 2𝐼(𝑥, 𝑦 − 1) + 𝐼(𝑥 + 1, 𝑦 − 1)] .

Thus, by setting the number of gradient gradations equal to N_gl, we can build an image of
gradient magnitudes

    G(x, y) = int[ (S(x, y) − S_min) / (S_max − S_min) · N_gl ]
in pixels of the original image. Gray Gradient Co-occurrence Matrix (GGCM) [11] is built in the
same way as GLCM, only for the image 𝐺(𝑥, 𝑦).
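Under the same assumptions, the Sobel magnitude and the quantized gradient image G(x, y) used for the GGCM might be computed as below. Clipping the top level into range is our implementation detail, since the formula above maps the maximum value exactly to N_gl:

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel estimate S(x, y) of the gradient magnitude at interior pixels."""
    I = img.astype(float)
    # 3x3 neighborhoods via slicing; x is the column index, y the row index
    sx = (I[:-2, 2:] + 2 * I[1:-1, 2:] + I[2:, 2:]) - \
         (I[:-2, :-2] + 2 * I[1:-1, :-2] + I[2:, :-2])
    sy = (I[2:, :-2] + 2 * I[2:, 1:-1] + I[2:, 2:]) - \
         (I[:-2, :-2] + 2 * I[:-2, 1:-1] + I[:-2, 2:])
    return np.hypot(sx, sy)

def gradient_image(img, n_gl=8):
    """Quantize the Sobel magnitude into n_gl gradations (levels 0..n_gl-1)."""
    s = sobel_magnitude(img)
    smin, smax = s.min(), s.max()
    if smax == smin:                      # flat image: a single gradation
        return np.zeros_like(s, dtype=int)
    g = ((s - smin) / (smax - smin) * n_gl).astype(int)
    return np.minimum(g, n_gl - 1)        # keep the maximum in range
```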
   The estimate of the probability mass function of the co-occurrence of a given pair of gray
levels can be obtained as the normalized GLCM

    p(i, j) = GLCM(i, j) / Σ_{i,j=1}^{N_gl} GLCM(i, j),

where i, j are indices of GLCM elements. For GGCM, this estimate is calculated in a similar way.
   Based on the values 𝑝(𝑖, 𝑗), statistics known as Haralick texture features are calculated. Ini-
tially, 14 different texture features (Haralick features) were proposed in the original paper [10],
however a few additional features were proposed in subsequent years. At present, 19 different
texture features are known: Autocorrelation, Cluster Prominence, Cluster Shade, Contrast,
Correlation, Difference Entropy, Difference Variance, Dissimilarity, Energy, Entropy, Homo-
geneity, Local homogeneity, Information Measure of Correlation 1, Information Measure of





Correlation 2, Maximum Probability, Sum Average, Sum Entropy, Sum Squares, Sum Variance.
A detailed description of all of them is given in [12]. It is also noted in [13] that for most
practical tasks it is sufficient to use 5 of them, which are given in Table 2. The necessary
marginal expectations and marginal standard deviations can be calculated as:

    μ_i = Σ_{i=1}^{N} Σ_{j=1}^{N} i · p(i, j),    μ_j = Σ_{i=1}^{N} Σ_{j=1}^{N} j · p(i, j),    σ_i = √( Σ_{i=1}^{N} Σ_{j=1}^{N} (i − μ_i)² · p(i, j) ).

   Texture segmentation of panchromatic satellite images is based on the moving window
method. The moving window is a rectangular contour selecting the analyzed part of the image
under processing. The size of the window is determined by the characteristic scale of recognized
textures. If the window size is chosen too small, the result of the texture classification will
represent the high frequency noise. On the other hand, too large size of the window leads to
excessive smoothing of the contours of recognized objects. The center of the window runs
through all the points of the panchromatic image. When panchromatic and multispectral images
are processed together, it is sufficient, in order to reduce the amount of computation in practical
tasks, to visit only the pixels whose coordinates correspond to the pixel centers of the joint
multispectral image.
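The moving-window procedure can be sketched as follows; `texture_map` and `extract_features` are hypothetical names, with `extract_features` standing for any of the extractors discussed (GLM, GLDM, GLCM, GGCM):

```python
import numpy as np

def texture_map(img, extract_features, win=109, step=1):
    """Slide a win x win window over the image and compute a feature
    vector at each sampled center position.

    extract_features: callable taking a gray-level patch and returning
    a 1-D feature vector.
    """
    half = win // 2
    h, w = img.shape
    rows = range(half, h - half, step)
    cols = range(half, w - half, step)
    out = []
    for y in rows:
        for x in cols:
            patch = img[y - half:y + half + 1, x - half:x + half + 1]
            out.append(extract_features(patch))
    return np.array(out).reshape(len(rows), len(cols), -1)
```

Setting `step` larger than 1 corresponds to visiting only the pixel centers of the coarser multispectral grid, as described above.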
   To carry out the supervised classification of texture features, we employed an ensemble
algorithm known as Error Correcting Output Codes (ECOC). The algorithm combines the
responses of binary learners into a multiclass classifier using results from information and
coding theory. The binary classification algorithm is the Support Vector Machine (SVM) with a
Gaussian kernel. The response of the SVM algorithm is the classification score, i.e. the
normalized distance from the classified sample to the discriminant surface in the area of the
relevant class. The coding stage of the ECOC algorithm consists of calculating classification
scores for each of the SVM binary learners defined by the one-versus-one coding design matrix,
and the corresponding hinge binary losses for each of the considered classes. The decoding
stage consists of selecting the class corresponding to the minimum average loss. For


Table 2
Informative Haralick texture features.

  Contrast (Inertia):   Σ_{i=1}^{N} Σ_{j=1}^{N} (i − j)² · p(i, j)
  Correlation:          Σ_{i=1}^{N} Σ_{j=1}^{N} (i − μ_i)(j − μ_j) · p(i, j) / (σ_i σ_j)
  Energy:               Σ_{i=1}^{N} Σ_{j=1}^{N} p²(i, j)
  Entropy:              −Σ_{i=1}^{N} Σ_{j=1}^{N} p(i, j) · ln p(i, j)
  Local Homogeneity:    Σ_{i=1}^{N} Σ_{j=1}^{N} p(i, j) / (1 + (i − j)²)
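The five features of Table 2 can be computed from a normalized co-occurrence matrix roughly as follows (a sketch under the assumption that gray levels are coded 1..N; the function name is ours):

```python
import numpy as np

def haralick5(glcm):
    """Contrast, correlation, energy, entropy and local homogeneity
    from a (symmetric) gray-level co-occurrence matrix."""
    p = glcm / glcm.sum()                      # normalized co-occurrence p(i, j)
    n = p.shape[0]
    i, j = np.indices((n, n)) + 1              # gray levels 1..N
    mu_i, mu_j = (i * p).sum(), (j * p).sum()  # marginal expectations
    sig_i = np.sqrt(((i - mu_i) ** 2 * p).sum())
    sig_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    nz = p > 0                                 # avoid log(0) in the entropy
    return {
        "contrast": ((i - j) ** 2 * p).sum(),
        "correlation": ((i - mu_i) * (j - mu_j) * p).sum() / (sig_i * sig_j),
        "energy": (p ** 2).sum(),
        "entropy": -(p[nz] * np.log(p[nz])).sum(),
        "local_homogeneity": (p / (1 + (i - j) ** 2)).sum(),
    }
```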






optimizing the feature space, the regularized forward selection method is used. The method
has better stability than the standard greedy selection algorithm, which suffers from high
sensitivity of the selected optimal feature sequence to small changes in the training set. On the
other hand, the regularized algorithm has a higher computational cost.
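A minimal sketch of one-versus-one ECOC decoding with Gaussian-kernel SVM learners and hinge losses might look as follows. This is an illustration of the scheme described above, not the authors' implementation; the class name is ours and the SVM parameters are scikit-learn defaults:

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

class EcocOvO:
    """One-versus-one ECOC with RBF-kernel SVM binary learners and
    minimum-average-hinge-loss decoding (illustrative sketch)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.pairs_, self.svms_ = [], []
        for a, b in combinations(self.classes_, 2):
            mask = (y == a) | (y == b)
            # code class a as +1 and class b as -1 for this binary learner
            svm = SVC(kernel="rbf").fit(X[mask], np.where(y[mask] == a, 1, -1))
            self.pairs_.append((a, b))
            self.svms_.append(svm)
        return self

    def predict(self, X):
        losses = np.zeros((len(X), len(self.classes_)))
        counts = np.zeros(len(self.classes_))
        for (a, b), svm in zip(self.pairs_, self.svms_):
            s = svm.decision_function(X)        # classification score
            ia = np.where(self.classes_ == a)[0][0]
            ib = np.where(self.classes_ == b)[0][0]
            # hinge loss of the score against each class's code word (+1 / -1)
            losses[:, ia] += np.maximum(0, 1 - s)
            losses[:, ib] += np.maximum(0, 1 + s)
            counts[ia] += 1
            counts[ib] += 1
        # decoding: the class with the minimum average loss wins
        return self.classes_[np.argmin(losses / counts, axis=1)]
```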
   The classification quality was assessed by the confusion matrix (CM) and related parameters
such as the total error (TE), total omission error (TOE) and total commission error (TCE). CM is
the basic classification quality characteristic allowing a comprehensive visual analysis of different
aspects of the classification method used. Rows of CM represent reference classes and columns
represent predicted classes. TE is defined as the fraction of incorrectly classified samples over
the total number of samples. TOE is the mean omission error over all considered classes, where
the omission error is the fraction of misclassified samples of the selected class among all samples
of this class. TCE is the mean commission error over all possible responses of the classifier,
where the commission error is defined as the probability of false classification for each possible
classification result.
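Using the row/column convention above, these quality characteristics can be computed from a confusion matrix as (the helper name is ours):

```python
import numpy as np

def total_errors(cm):
    """TE, TOE, TCE from a confusion matrix (rows: reference, cols: predicted)."""
    cm = np.asarray(cm, dtype=float)
    te = 1.0 - np.trace(cm) / cm.sum()
    # omission error per reference class: misclassified / all samples of the class
    oe = 1.0 - np.diag(cm) / cm.sum(axis=1)
    # commission error per predicted class: false responses / all such responses
    ce = 1.0 - np.diag(cm) / cm.sum(axis=0)
    return te, oe.mean(), ce.mean()
```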


3. Results and discussion
During the joint spectral-texture processing of satellite images, texture features are usually
employed for solving the following two tasks: segmentation of the contours of natural and
manmade objects (including the selection of building zones and forest areas), and classification
of structural parameters of the forest canopy. Thus, for carrying out numerical experiments
using the above methods, we selected two relevant test plots in WorldView-2 panchromatic
images with the spatial resolution of ∼0.5 m. The first test plot, hereinafter referred to as
Konstantinovsky, is located on the territory of the Savvatyevskoe forestry (Tver region) near the
Domnikovo village. The plot contains several large zones corresponding to 5 types of objects of
varying complexity with clearly differing textures: water surface (Konstantinovsky sand quarry),
pine forest, building zone, field and peat swamp forest. The RGB image of the test plot, the
corresponding expert map of objects and the results of texture analysis are shown in Figure 3.
   The classifier used is sensitive to the difference in the number of training samples for the
considered classes. Increasing the number of training samples for one of the classes leads to
increasing the prior probability of its classification. Thus, in order to avoid this problem, we
used a balanced training set containing 500 samples for each class. Remaining data (testing
set) were used for independent validation. Also, since the accuracy of texture classification
essentially depends on the size of the moving window, we carried out a series of calculations,
which allowed us to determine the range of acceptable size values. As a result, we used a moving
window with a size of 109 pixels in the horizontal and vertical directions. The original image
was reduced to 64 gray levels. The feature optimization described above was used for the
GLCM and GGCM methods to avoid the curse of dimensionality. For the classification
of texture features obtained by the GLM and GLDM methods, we used a full set of features.
   Estimates of total characteristics of classification quality obtained from training (resubstitution
method) and testing sets (independent validation) are presented in Table 3. We can see
that the GLCM method provides the most accurate results: the total error is about 1%, and
the total omission and commission errors are very close. It should be noted that the difference







Figure 3: Texture segmentation of natural and manmade objects from the panchromatic image of
Konstantinovsky test plot.


Table 3
Total characteristics of classification quality.

                    Konstantinovsky        GFP Dementyev
                    Resub     Indep        Resub     Indep
  GLM       TE      0.10      0.12         0.22      0.247
            TOE     0.10      0.11         0.22      0.243
            TCE     0.10      0.12         0.218     0.315
  GLDM      TE      0.25      0.268        0.234     0.319
            TOE     0.25      0.253        0.234     0.250
            TCE     0.25      0.261        0.239     0.365
  GLCM      TE      0.01      0.012        0.015     0.033
            TOE     0.01      0.012        0.015     0.026
            TCE     0.01      0.011        0.015     0.115
  GGCM      TE      0.068     0.083        0.023     0.043
            TOE     0.068     0.079        0.023     0.033
            TCE     0.066     0.098        0.023     0.115


between dependent and independent estimates of the error is insignificant, which indicates a
good generalization ability of the trained ECOC SVM classifier. The total errors of the GGCM
and GLM methods are significantly higher, but remain at an acceptable level. It should be
noted that a visual comparison of the classification results presented in Figure 3 shows that






GGCM reproduces the expert map of objects much better and contains significantly less noise
in comparison with GLM.
   Table 4 contains class-wise omission and commission classification errors for the Konstanti-
novsky test plot. The best classification result corresponds to the water surface, for which all
the methods show high accuracy. The building zone and field are classified with an acceptable
level of errors. The worst accuracies correspond to the conifer and peat swamp forest stands;
however, the errors remain low enough for the GLCM and GGCM methods. GLDM demonstrates
the worst results and cannot be used for texture segmentation of forest areas.
   The second test plot, hereinafter referred to as GFP Dementyev, is located on the territory
of the Bronnitskoe forestry near the Lubninka village. The plot is part of the territory of the
geographical forest plantations (GFP) of the forester P.I. Dementyev. The RGB image of GFP
Dementyev and the corresponding expert classification map are shown in Figure 4. The choice
of this site is due to the large variety of plantations with different structures. By the variety


Table 4
Class-wise characteristics of classification quality for Konstantinovsky test plot.

                    buildings   field     natural conifer   peat swamp   water
                                          forest            forest       surface
  GLM       OE      0.089       0.077     0.22              0.15         0.017
            CE      0.22        0.14      0.15              0.09         0
  GLDM      OE      0.16        0.215     0.538             0.332        0.0195
            CE      0.32        0.222     0.432             0.329        0.001
  GLCM      OE      0.004       0.015     0.019             0.022        0
            CE      0.009       0.009     0.017             0.022        0
  GGCM      OE      0.012       0.058     0.084             0.097        0.14
            CE      0.038       0.29      0.05              0.097        0.017




Figure 4: Texture segmentation of forest structure from the panchromatic image of GFP Dementyev
test plot.






Table 5
Class-wise characteristics of classification quality for GFP Dementyev test plot.

                    normal    field1    field2    dense       cluster      mixed     larch
                    conifer                       deciduous   structured   normal    regular
                    forest                        forest      forest       forest    forest
  GLM       OE      0.275     0.0746    0.0804    0.376       0.367        0.406     0.123
            CE      0.424     0.123     0.0555    0.931       0.462        0.0959    0.114
  GLDM      OE      0.391     0.148     0.131     0.172       0.212        0.554     0.145
            CE      0.586     0.205     0.0934    0.899       0.477        0.174     0.122
  GLCM      OE      0.017     0.018     0.022     0.037       0.0046       0.065     0.017
            CE      0.017     0.039     0.012     0.69        0.025        0.0052    0.017
  GGCM      OE      0.055     0.046     0.022     0.0066      0.011        0.07      0.019
            CE      0.017     0.046     0.028     0.47        0.22         0.0052    0.02


of species, the stands of the Bronnitskoe forestry cover the main forest-forming species of
Russia. From the 1950s to the present, various species and ecotypes of larch, grown here outside
their natural habitat, have been tested in this area. The forest canopy of the test site contains
7 visually distinguishable texture classes: 1 — mixed conifer stand (larch, pine and spruce)
with a dense canopy and high density values; 2 and 3 — agricultural areas of different structure;
4 — deciduous stand with a predominance of birch, a dense canopy and a relative stocking of 0.9;
5 — mixed birch stand with a relative stocking of 0.8; 6 — mixed birch stand with a pronounced
cluster structure of the canopy; 7 — cultivated larch plantations with a regular structure.
   In this case, the GLCM and GGCM methods show similar results, with a total classification
error of about 4%. The accuracy of GLCM appears slightly higher compared to GGCM; however,
for GLCM the difference between the resubstitution and independent error estimates is also
larger than for GGCM. The GLM and GLDM methods demonstrate weak classification results.
Analyzing the class-wise errors presented in Table 5, we can see that the regular structure
of the larch stands corresponds to the minimum errors, about 2% for the GLCM and GGCM methods.
This result is also confirmed by Figure 4. The highest errors correspond to the dense deciduous
stand; however, this is explained by the small number of pixels belonging to this object.
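   To illustrate the statistical approach underlying the best-performing methods, the sketch
below computes a normalized gray-level co-occurrence matrix (GLCM) and two classical Haralick
features (contrast and energy) for a small quantized patch in pure NumPy. The single offset,
the 4-level quantization, and the test pattern are illustrative assumptions, not the exact
configuration used in this study.

```python
import numpy as np

def glcm(patch, levels, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix for one offset (dx, dy).

    patch must contain integer gray levels in [0, levels).
    """
    h, w = patch.shape
    P = np.zeros((levels, levels), dtype=float)
    for y in range(h - dy):
        for x in range(w - dx):
            # count the pair (gray level at (y, x), gray level at the offset pixel)
            P[patch[y, x], patch[y + dy, x + dx]] += 1.0
    return P / P.sum()  # joint probability of co-occurring gray levels

def haralick_features(P):
    """Contrast and energy (angular second moment) of a normalized GLCM."""
    i, j = np.indices(P.shape)
    contrast = np.sum(P * (i - j) ** 2)
    energy = np.sum(P ** 2)
    return contrast, energy

# 4-level test patch with a perfectly regular horizontal texture
regular = np.array([[0, 1, 0, 1],
                    [0, 1, 0, 1],
                    [0, 1, 0, 1],
                    [0, 1, 0, 1]])
P = glcm(regular, levels=4)
contrast, energy = haralick_features(P)
```

A GGCM is built analogously, except that the second index of the co-occurrence matrix is a
quantized gradient magnitude rather than the gray level of an offset pixel. In practice,
several offsets (distances and angles) are accumulated and the features are averaged over them.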


Acknowledgments
The reported study was funded by RFBR, projects No. 20-07-00370 “Fundamental problems of
increasing the informativeness of processing data from optoelectronic aerospace devices of high
spatial and spectral resolution” and No. 19-01-00215 “Investigation of operative opportunities
of hyper-spectral technologies of remote sensing of the Earth to solve regional problems using
updated hyper-spectral cameras from space”.






References
 [1] Egorov V.A., Bartalev S.A., Kolbudaev P.A., Plotnikov D.E., Khvostikov S.A. Land cover map
     of Russia derived from Proba-V satellite data // Sovremennye Problemy Distantsionnogo
     Zondirovaniya Zemli iz Kosmosa. 2018. Vol. 15. P. 282–286.
 [2] Shafri H.Z. Machine learning in hyperspectral and multispectral remote sensing data anal-
     ysis // Artificial Intelligence Science and Technology: Proceedings of the 2016 International
     Conference (AIST2016). 2017. P. 3–9.
 [3] Rylov S.A., Melnikov P.V., Pestunov I.A. Spectral-textural classification of hyperspectral
     images with high spatial resolution // Interexpo GEO-Siberia. 2016. Vol. 4. No. 1. P. 78–84.
     (In Russ.)
 [4] Dmitriev E.V. Classification of the forest cover of Tver’ region using hyperspectral airborne
     imagery // Izvestiya, Atmospheric and Oceanic Physics. 2014. Vol. 50, No. 9. P. 929–942.
 [5] Pestunov I.A., Sinyavsky Yu.N. Nonparametric grid-based clustering algorithm for remote
     sensing data // Optoelectronics, Instrumentation and Data Processing. 2006. Vol. 2. P. 78–87.
 [6] Dmitriev E.V., Kozoderov V.V., Dementyev A.O., Safonova A.N. Combining classifiers in
     the problem of thematic processing of hyperspectral aerospace images // Optoelectronics,
     Instrumentation and Data Processing. 2018. Vol. 54. No. 3. P. 213–221.
 [7] Beguet B., Guyon D., Boukir S., Chehata N. Automated retrieval of forest structure variables
     based on multi-scale texture analysis of VHR satellite imagery // ISPRS J. Photogramm.
     Remote Sens. 2014. Vol. 96. P. 164–178.
 [8] Petrou M.M., Kamata S.I. Image processing: Dealing with texture. John Wiley & Sons, 2021.
 [9] Weszka J.S., Dyer C.R., Rosenfeld A. A comparative study of texture measures for ter-
     rain classification // IEEE Transactions on Systems, Man, and Cybernetics. 1976. No. 4.
     P. 269–285.
[10] Haralick R.M., Shanmugam K., Dinstein I. Textural features for image classification // IEEE
     Transactions on Systems, Man, and Cybernetics, SMC-3. 1973. No. 6. P. 610–621.
[11] Chen S., Wu C., Chen D., Tan W. Scene classification based on gray level-gradient co-
     occurrence matrix in the neighborhood of interest points // 2009 IEEE International
     Conference on Intelligent Computing and Intelligent Systems. 2009. Vol. 4. P. 482–485.
[12] Dmitriev E.V., Kozoderov V.V., Sokolov A.A. The performance of texture features in the
     problem of classification of the soil-vegetation objects // CEUR Workshop Proceedings.
     2019. Vol. 2534. P. 91–98.
[13] Conners R.W., Harlow C.A. A theoretical comparison of texture algorithms // IEEE Trans-
     actions on Pattern Analysis and Machine Intelligence. 1980. Vol. 3. P. 204–222.



