<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Statistical texture analysis of forest areas from very high spatial resolution satellite images</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Egor V. Dmitriev</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Timofei V. Kondranin</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Petr G. Melnik</string-name>
          <xref ref-type="aff" rid="aff3">3</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergey A. Donskoy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Federal Forestry Agency ROSLESINFORG</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Marchuk Institute of Numerical Mathematics of the Russian Academy of Sciences</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>Moscow Institute of Physics and Technology (National Research University)</institution>
          ,
          <addr-line>Dolgoprudny, Moscow Region</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff3">
          <label>3</label>
          <institution>Mytischi Branch of Bauman Moscow State Technical University</institution>
          ,
          <addr-line>Mytischi, Moscow Region</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>56</fpage>
      <lpage>66</lpage>
      <abstract>
<p>Aerospace images with a spatial resolution of less than 1 m are actively used by regional services to obtain and update information about various environmental objects. Considerable efforts are being devoted to the development of remote sensing methods for forest areas. The structure of the forest canopy depends on various parameters, most of which are determined by ground-based methods during forest management works. Remote sensing methods for assessing the structural parameters of forest stands are based on texture analysis of panchromatic and multispectral images. A statistical approach is often used to extract texture features. The basis of this approach is the description of the distributions characterizing the mutual arrangement of image pixels in grayscale. This paper compares the effectiveness of matrix-based statistical methods for extracting textural features for solving the problem of classifying various natural and man-made objects, as well as structures of the forest canopy. We consider statistics of various orders based on estimates of the distributions of gray levels, as well as the mutual occurrence, frequency, difference and structuring of gray levels. The results of assessing the informativeness of statistical textural characteristics in determining various structures of the forest canopy are presented. Dependences of the classification results on the choice of distribution parameters are determined. For the quantitative validation of the results obtained, data from ground surveys and expert visual classification of very high resolution WorldView-2 images of the territories of the Savvatyevskoe and Bronnitskoe forestries are used.</p>
      </abstract>
      <kwd-group>
        <kwd>Remote sensing</kwd>
        <kwd>pattern recognition</kwd>
        <kwd>texture analysis</kwd>
        <kwd>very high resolution images</kwd>
        <kwd>soil-vegetation cover</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        In recent years, machine learning methods have been widely used for various tasks of automation
and increasing the information content of procedures for thematic processing and analysis of
aerospace images in the visible and near infrared spectral ranges. Multispectral satellite images
of low and medium spatial resolution are traditionally used for survey of the soil and vegetation
cover and the construction of large-scale thematic maps [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. With the increase in the spatial
and spectral resolution of satellite equipment, a number of novel tasks associated with remote
sensing of natural and anthropogenic objects have arisen.
      </p>
      <p>
        High (1–4 m) and very high (&lt; 1 m) spatial resolution of panchromatic satellite images
forms the basis of methods for solving new tasks of monitoring land, forest and water resources,
searching for mineral deposits and assessing the ecological situation, which are more complex
from the point of view of increased consumer requirements. There is a need to develop special
approaches for analyzing large amounts of information and obtaining remote estimates of
the characteristics of the examined objects with a given accuracy. Improving the efficiency
of thematic processing of aerospace images of high spatial and spectral resolution is in high
demand in many applications in the fields of natural resource management, agriculture, forestry
and environmental monitoring [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        The current trend in the development of methods for thematic processing of high resolution
images is the combined use of spectral and texture features. For example, a method of
spectral-texture processing of aerial hyperspectral images of a forest canopy was presented
in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. Analysis of the results of test calculations for selected areas of the Savvatyevskoe forestry
(Russia, Tver) showed that the proposed approach provides a significant increase in the accuracy
of classification of the species composition and age groups, in comparison with the averaged
spectral characteristics. It should be noted that for synthesized multispectral images, taking
into account spectral features showed an increase in accuracy by more than 10%.
      </p>
      <p>
        The presented results on improving the accuracy due to the use of texture features are
also confirmed by the comparison with the previously obtained results of thematic processing
of hyperspectral images of nearby territories presented in [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Both effective nonparametric
methods of cluster analysis [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] and optimized ensemble machine learning algorithms [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] can be
successfully used for spectral-texture classifications.
      </p>
      <p>
        The work [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] shows new possibilities of using statistical texture analysis of satellite images of
very high spatial resolution to retrieve the structural parameters of forest stands, characterizing
the variety of sizes and density of crowns, as well as the relative position of individual trees.
The presented technique is based on the parameterization of linear relationships between the
Haralick texture features and the structural parameters of pine stands. The results obtained can
be effectively used to provide more accurate estimates of the aboveground biomass of forest
stand fractions.
      </p>
      <p>
        The accuracy of texture analysis depends on the chosen feature extraction method [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. In this
paper, we discuss the possibilities of using various statistical methods for measuring textures
based on the matrix representation.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Texture feature extraction and classification methods</title>
      <p>The panchromatic satellite images presented in grayscale are considered as an object of texture
analysis since they have the highest spatial resolution. The texture is formed by the spatial
arrangement and the mutual combination of structural elements. Natural objects are characterized
by a random arrangement of structural elements and significant variations in their parameters,
for example, tone and size. Thus, the task of constructing parameters characterizing a particular
texture is associated with obtaining statistical estimates.</p>
      <p>Statistical methods of texture analysis are based on assessing the spatial distribution of local
characteristics of structural elements for all possible locations in the image and on extracting
statistical parameters from the obtained distributions of local characteristics. Matrix methods
assume that the desired distribution is discrete and has a finite number of elements. An image
for which texture extraction is performed must contain a sufficiently large number of structural
elements to obtain a reliable estimate of the probability mass function. Texture features obtained on
the basis of matrix methods are subdivided into characteristics of the 1st and 2nd orders.</p>
      <p>
        The construction of first order texture characteristics implies that the structural elements are
individual pixels of the original image [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. The Gray-Level Matrix (GLM) is a vector of frequencies
of gray-level occurrence in the processed image I(m, n) of size M × N:
      </p>
      <p>GLM(g) = #{(m, n) | I(m, n) = g, (m, n) ∈ M × N},
where # denotes the number of elements in a set, g = 1, . . . , G, and G is the number of
gray levels.</p>
      <p>For building the Gray-Level Difference Matrix (GLDM), the original image I is converted into
a difference image I_Δ:</p>
      <p>I_Δ(m, n) = |I(m, n) − I(m + Δm, n + Δn)|,
where the parameters Δm and Δn are displacements along the horizontal and vertical directions,
respectively. GLDM is the vector of frequencies of occurrence of the absolute values of gray-level
differences at the given displacement:
GLDM(g) = #{(m, n) | I_Δ(m, n) = g, (m, n) ∈ (M − Δm) × (N − Δn)}, g = 0, . . . , G − 1.</p>
      <p>An example of constructing GLM and GLDM matrices is shown in Figure 1.</p>
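      <p>The two definitions above can also be sketched numerically (a minimal illustration, assuming Python with NumPy; gray levels are indexed here from 0 to G − 1, while the text counts GLM levels from 1):</p>

```python
import numpy as np

def glm(img, levels):
    """Gray-Level Matrix: frequency of each gray level in the image."""
    return np.bincount(img.ravel(), minlength=levels)

def gldm(img, dm, dn, levels):
    """Gray-Level Difference Matrix for the displacement (dm, dn)."""
    rows, cols = img.shape
    a = img[:rows - dm, :cols - dn]          # reference pixels
    b = img[dm:, dn:]                        # displaced pixels
    diff = np.abs(a.astype(int) - b.astype(int))
    return np.bincount(diff.ravel(), minlength=levels)

img = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1],
                [3, 2, 3, 3]])
print(glm(img, 4))         # frequencies of gray levels 0..3
print(gldm(img, 1, 0, 4))  # frequencies of absolute differences, one-row shift
```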
      <p>For extracting texture features, GLM and GLDM are converted into the corresponding
probability mass function estimates:</p>
      <p>p_GLM(g) = GLM(g) / Σ_{g′=1}^{G} GLM(g′),   p_GLDM(g) = GLDM(g) / Σ_{g′=0}^{G−1} GLDM(g′).</p>
      <p>The corresponding 1st order texture characteristics are presented in Table 1.</p>
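      <p>A short sketch of the normalization and of typical first-order statistics computed from the resulting probability mass function (whether this exact set matches Table 1 is an assumption; mean, variance, skewness, entropy and energy are common choices):</p>

```python
import numpy as np

def first_order_features(counts):
    """First-order texture statistics from a GLM/GLDM frequency vector."""
    p = counts / counts.sum()              # normalized probability mass function
    g = np.arange(len(p))                  # gray levels (0-based)
    mean = np.sum(g * p)
    var = np.sum((g - mean) ** 2 * p)
    skew = np.sum((g - mean) ** 3 * p) / var ** 1.5 if var > 0 else 0.0
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    energy = np.sum(p ** 2)
    return {"mean": mean, "variance": var, "skewness": skew,
            "entropy": entropy, "energy": energy}

f = first_order_features(np.array([3, 3, 3, 3]))   # uniform gray-level histogram
```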
      <p>
        The extraction of second order texture characteristics is primarily associated with the
construction of two-dimensional distributions. Structural elements in this case consist of two pixels
or two groups of pixels, for each of which a corresponding characteristic is determined. The best
known is the method proposed in [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]. The method uses structural elements consisting of two
pixels at a certain specified distance (adjacency distance). One of these pixels is called reference.
For the reference pixel, the neighboring one is selected in a given direction of adjacency. To
describe the spatial relationship between the reference and neighboring pixels, the frequencies
of occurrence of the corresponding gray-scale pairs are calculated for all possible positions of
the reference pixel in the original image. Based on these frequencies, we can form a matrix
known as the Gray-Level Co-occurrence Matrix (GLCM) or Spatial Gray-Level Dependency
Matrix (SGLDM). An example of constructing the GLCM is shown in Figure 2. The GLCM is a
square matrix containing integer values. The size of GLCM is determined by the number of
gray levels. So in the example presented in Figure 2, the original image has 8 gray levels and,
respectively, the GLCM has a size of 8 × 8. The matrix is symmetric if the order of the gray
levels in the reference and neighboring pixels is not taken into account.
      </p>
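      <p>The GLCM construction described above can be sketched as follows (an illustrative NumPy implementation; the symmetric variant adds the transposed matrix, which corresponds to ignoring the order of the gray levels in each pair):</p>

```python
import numpy as np

def glcm(img, dm, dn, levels, symmetric=True):
    """Gray-Level Co-occurrence Matrix for the adjacency offset (dm, dn)."""
    rows, cols = img.shape
    ref = img[:rows - dm, :cols - dn].ravel()   # reference pixels
    nbr = img[dm:, dn:].ravel()                 # neighbors in the given direction
    m = np.zeros((levels, levels), dtype=int)
    np.add.at(m, (ref, nbr), 1)                 # count co-occurring gray-level pairs
    if symmetric:
        m = m + m.T                             # ignore the order within each pair
    return m

img = np.array([[0, 0, 1, 1],
                [0, 2, 2, 1],
                [3, 2, 3, 3]])
m = glcm(img, 0, 1, 4)                          # horizontal adjacency, distance 1
```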
      <p>The original image I(m, n) is a function of two spatial coordinates, so for each pixel we can
calculate a finite-difference estimate of the gradient. The magnitude of the gradient</p>
      <p>G(m, n) = √[(∂I/∂m)² + (∂I/∂n)²]</p>
      <p>characterizes the rate of change of the image tone at the reference pixel. To obtain a difference
estimate, we use the Sobel operator:</p>
      <p>S(m, n) = √[S_m²(m, n) + S_n²(m, n)] ≃ G(m, n),
where
S_m(m, n) = [I(m + 1, n − 1) + 2I(m + 1, n) + I(m + 1, n + 1)] − [I(m − 1, n − 1) + 2I(m − 1, n) + I(m − 1, n + 1)],
S_n(m, n) = [I(m − 1, n + 1) + 2I(m, n + 1) + I(m + 1, n + 1)] − [I(m − 1, n − 1) + 2I(m, n − 1) + I(m + 1, n − 1)].</p>
      <p>
        Thus, by specifying the number of gradient gradations to be equal to G_s, we can build an image
of the moduli of brightness gradients
g(m, n) = int[(S(m, n) − S_min) / (S_max − S_min) · G_s]
in the pixels of the original image. The Gray-Gradient Co-occurrence Matrix (GGCM) [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ] is built in the
same way as GLCM, only for the image g(m, n).
      </p>
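      <p>A sketch of the gradient-image construction (Sobel magnitude over the interior pixels, followed by quantization into a given number of gradations); GGCM is then obtained by applying the GLCM construction to the quantized gradient image:</p>

```python
import numpy as np

def sobel_magnitude(img):
    """Sobel estimate of the gradient magnitude at the interior pixels."""
    i = img.astype(float)
    sx = (i[2:, :-2] + 2 * i[2:, 1:-1] + i[2:, 2:]) \
       - (i[:-2, :-2] + 2 * i[:-2, 1:-1] + i[:-2, 2:])
    sy = (i[:-2, 2:] + 2 * i[1:-1, 2:] + i[2:, 2:]) \
       - (i[:-2, :-2] + 2 * i[1:-1, :-2] + i[2:, :-2])
    return np.sqrt(sx ** 2 + sy ** 2)

def quantize(s, levels):
    """Rescale gradient magnitudes into integer gradations 0..levels-1."""
    span = s.max() - s.min()
    if span == 0:
        return np.zeros(s.shape, dtype=int)
    q = np.floor((s - s.min()) / span * levels).astype(int)
    return np.minimum(q, levels - 1)   # put the single maximum into the top bin

ramp = np.tile(np.arange(5.0), (5, 1))   # brightness growing left to right
grad = quantize(sobel_magnitude(ramp), 8)
```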
      <p>The estimate of the probability mass function of the co-occurrence of a given pair of gray
levels can be obtained as the normalized GLCM:</p>
      <p>p(i, j) = GLCM(i, j) / Σ_{i,j=1}^{G} GLCM(i, j),
where i, j are the indices of GLCM elements. For GGCM, this estimate is calculated in a similar way.</p>
      <p>
        Based on the values p(i, j), statistics known as Haralick texture features are calculated.
Initially, 14 different texture features (Haralick features) were proposed in the original paper [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ],
and a few additional features were proposed in subsequent years. At present, 19 different
texture features are known: Autocorrelation, Cluster Prominence, Cluster Shade, Contrast,
Correlation, Difference Entropy, Difference Variance, Dissimilarity, Energy, Entropy,
Homogeneity, Local Homogeneity, Information Measure of Correlation 1, Information Measure of
Correlation 2, Maximum Probability, Sum Average, Sum Entropy, Sum of Squares, Sum Variance.
A detailed description of all of them is presented in [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. It should also be noted that in [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]
it is stated that for most practical tasks it is sufficient to use 5 of them, which are given in
Table 2. The necessary marginal expectations and marginal standard deviations can be calculated as
μ_i = Σ_{i,j} i · p(i, j),  μ_j = Σ_{i,j} j · p(i, j),
σ_i = [Σ_{i,j} (i − μ_i)² p(i, j)]^{1/2},  σ_j = [Σ_{i,j} (j − μ_j)² p(i, j)]^{1/2}.
      </p>
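      <p>Several widely used Haralick statistics, including the marginal expectations and standard deviations, can be sketched as follows (which five features appear in Table 2 is not recoverable here; contrast, correlation, energy, entropy and homogeneity are shown as a common choice):</p>

```python
import numpy as np

def haralick_features(counts):
    """A few common Haralick statistics from a co-occurrence matrix."""
    p = counts / counts.sum()                       # normalized matrix, p(i, j)
    g = np.arange(p.shape[0])
    i, j = np.meshgrid(g, g, indexing="ij")
    mu_i, mu_j = np.sum(i * p), np.sum(j * p)       # marginal expectations
    sd_i = np.sqrt(np.sum((i - mu_i) ** 2 * p))     # marginal standard deviations
    sd_j = np.sqrt(np.sum((j - mu_j) ** 2 * p))
    feats = {
        "contrast": np.sum((i - j) ** 2 * p),
        "energy": np.sum(p ** 2),
        "entropy": -np.sum(p[p > 0] * np.log2(p[p > 0])),
        "homogeneity": np.sum(p / (1.0 + np.abs(i - j))),
    }
    if sd_i > 0 and sd_j > 0:
        feats["correlation"] = np.sum((i - mu_i) * (j - mu_j) * p) / (sd_i * sd_j)
    return feats

feats = haralick_features(np.eye(4))   # perfectly diagonal co-occurrence matrix
```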
      <p>Texture segmentation of panchromatic satellite images is based on the moving window
method. The moving window is a rectangular contour selecting the analyzed part of the image
under processing. The size of the window is determined by the characteristic scale of recognized
textures. If the window size is chosen too small, the result of the texture classification will
represent the high frequency noise. On the other hand, too large size of the window leads to
excessive smoothing of the contours of recognized objects. The center of the window runs
through all the points of the panchromatic image. To reduce the amount of computation in
practical tasks when processing panchromatic and multispectral images together, it is sufficient
to visit only those pixels whose coordinates correspond to the pixel centers of the joint
multispectral image.</p>
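      <p>The moving-window scheme can be sketched as follows (an illustrative helper; the function name, window size and step parameter are assumptions, with the step corresponding to the pixel grid of the joint multispectral image):</p>

```python
import numpy as np

def texture_map(img, feature_fn, win, step=1):
    """Evaluate a scalar texture feature in a square moving window."""
    half = win // 2
    rows, cols = img.shape
    out = np.full((rows, cols), np.nan)      # border pixels stay undefined
    for r in range(half, rows - half, step):
        for c in range(half, cols - half, step):
            patch = img[r - half:r + half + 1, c - half:c + half + 1]
            out[r, c] = feature_fn(patch)
    return out

fmap = texture_map(np.ones((9, 9)), np.mean, win=3)
```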
      <p>To carry out the supervised classification of texture features, we employed an ensemble algorithm
known as Error-Correcting Output Codes (ECOC). The algorithm is designed to formalize the
responses of binary learners as a multiclass classifier based on some results of information
and coding theory. Let the algorithm of binary classification be the Support Vector Machine
(SVM) with the Gaussian kernel. The response of the SVM algorithm is the classification score,
which means the normalized distance from the classified sample to the discriminant surface in
the area of the relevant class. The coding stage of the ECOC algorithm consists of calculating
classification scores for each of the SVM binary learners defined by the one-versus-one coding design
matrix, and the corresponding hinge binary losses for each of the considered classes. The decoding
stage consists of selecting the class corresponding to the minimum average loss. To
optimize the feature space, the regularized forward selection method is used. The
method has better stability in comparison with the standard greedy selection algorithm, which suffers
from high sensitivity of the selected optimal sequence of features to small changes in the
training set. On the other hand, the regularized algorithm requires more computational costs.</p>
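      <p>The coding and decoding stages described above can be sketched as follows (an illustrative reimplementation, not the authors' code; the class name, the use of scikit-learn's SVC and the exact hinge form max(0, 1 − y·s) are assumptions):</p>

```python
import numpy as np
from itertools import combinations
from sklearn.svm import SVC

class EcocOneVsOne:
    """ECOC with a one-versus-one coding matrix, Gaussian-kernel SVM binary
    learners and hinge-loss decoding (a sketch of the scheme in the text)."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.pairs_ = list(combinations(self.classes_, 2))
        self.learners_ = []
        for a, b in self.pairs_:
            mask = np.isin(y, (a, b))
            clf = SVC(kernel="rbf").fit(X[mask], np.where(y[mask] == a, 1, -1))
            self.learners_.append(clf)
        return self

    def predict(self, X):
        # coding stage: classification scores of every binary learner
        scores = np.column_stack([c.decision_function(X) for c in self.learners_])
        losses = np.zeros((len(X), len(self.classes_)))
        for k, cls in enumerate(self.classes_):
            # coding-matrix column: +1 / -1 if the class takes part in the pair, else 0
            code = np.array([1 if a == cls else (-1 if b == cls else 0)
                             for a, b in self.pairs_])
            active = code != 0
            hinge = np.maximum(0, 1 - code[active] * scores[:, active])
            losses[:, k] = hinge.mean(axis=1)   # decoding: average binary loss
        return self.classes_[np.argmin(losses, axis=1)]

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(c, 0.2, size=(20, 2)) for c in ((0, 0), (5, 5), (10, 0))])
y = np.repeat(np.array([0, 1, 2]), 20)
model = EcocOneVsOne().fit(X, y)
```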
      <p>The classification quality was assessed by the confusion matrix (CM) and related parameters:
the total error (TE), total omission error (TOE) and total commission error (TCE). CM is the
basic characteristic of classification quality, allowing a comprehensive visual analysis of different
aspects of the classification method used. Rows of CM represent reference classes and columns
represent predicted classes. TE is defined as the number of incorrectly classified samples divided by
the total number of samples. TOE is the mean omission error over all classes considered, where
the omission error is the number of falsely classified samples of the selected class divided by the
number of samples of this class. TCE is the mean commission error over all possible responses of
the classifier used, where the commission error is defined as the probability of false classification
for each possible classification result.</p>
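      <p>The three summary errors can be computed from the confusion matrix as follows (a minimal sketch; the function name is illustrative):</p>

```python
import numpy as np

def total_errors(cm):
    """TE, TOE and TCE from a confusion matrix with reference classes in
    rows and predicted classes in columns."""
    cm = np.asarray(cm, dtype=float)
    te = 1.0 - np.trace(cm) / cm.sum()
    omission = 1.0 - np.diag(cm) / cm.sum(axis=1)     # per reference class
    commission = 1.0 - np.diag(cm) / cm.sum(axis=0)   # per predicted class
    return te, omission.mean(), commission.mean()

te, toe, tce = total_errors([[45, 5], [10, 40]])
```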
    </sec>
    <sec id="sec-3">
      <title>3. Results and discussion</title>
      <p>During the joint spectral-texture processing of satellite images, texture features are usually
employed for solving the following two tasks: segmentation of the contours of natural and
man-made objects (including the selection of building zones and forest areas), and classification
of structural parameters of the forest canopy. Thus, for carrying out numerical experiments
using the above methods, we selected two relevant test plots in WorldView-2 panchromatic
images with a spatial resolution of ∼ 0.5 m. The first test plot, hereinafter referred to as
Konstantinovsky, is located on the territory of the Savvatyevskoe forestry (Tver region) near the
Domnikovo village. The plot contains several large zones corresponding to 5 types of objects of
varying complexity with well-differing textures: water surface (Konstantinovsky sand quarry),
pine forest, building zone, field and peat swamp forest. The RGB image of the test plot, the
corresponding expert map of objects and the results of texture analysis are shown in Figure 3.</p>
      <p>The classifier used is sensitive to the diference in the number of training samples for the
considered classes. Increasing the number of training samples for one of the classes leads to
increasing the prior probability of its classification. Thus, in order to avoid this problem, we
used a balanced training set containing 500 samples for each class. The remaining data (testing
set) were used for independent validation. Also, since the accuracy of texture classification
essentially depends on the size of the moving window, we carried out a series of calculations,
which allowed us to determine the range of acceptable size values. Thus, we used the moving
window with the size of 109 pixels in the horizontal and vertical directions. The original image
has been reduced to 64 gray levels. The feature optimization described above was used for the
GLCM and GGCM methods to avoid the curse-of-dimensionality problem. For the classification
of texture features obtained by the GLM and GLDM methods, we used the full set of features.</p>
      <p>Estimates of the total characteristics of classification quality obtained from the training set
(resubstitution method) and the testing set (independent validation) are presented in Table 3. We can
see that the GLCM method provides the most accurate results; the total error is about 1%, and
the total omission and commission errors are very close. It should be noted that the difference
between the dependent and independent estimates of the error is insignificant, which indicates a
good generalization ability of the trained ECOC SVM classifier. The total errors of the GGCM
and GLM methods are significantly higher, but remain at an acceptable level. It should be
noted that a visual comparison of the classification results presented in Figure 3 shows that</p>
      <sec id="sec-3-3">
        <p>[Table 3: total error (TE), total omission error (TOE) and total commission error (TCE) for the GLM, GLDM, GLCM and GGCM methods, estimated by resubstitution and independent validation; the tabulated values are not reliably recoverable from the extracted layout.]</p>
        <p>GGCM reproduces the expert map of objects much better and contains significantly less noise
in comparison with GLM.</p>
        <p>Table 4 contains class-wise omission and commission classification errors for the Konstantinovsky
test plot. The best classification result corresponds to the water surface, for which all the
methods reveal high accuracy. The building zone and field are classified with an acceptable level
of errors. The worst accuracies correspond to the conifer and peat swamp forest stands; however,
the error is low enough for the GLCM and GGCM methods. GLDM demonstrates the worst results
and cannot be used for texture segmentation of forest areas.</p>
        <p>The second test plot, hereinafter referred to as GFP Dementyev, is located on the territory
of the Bronnitskoe forestry near the Lubninka village. The plot is part of the territory of the
geographical forest plantations (GFP) of the forester P.I. Dementyev. The RGB image of GFP
Dementyev and the corresponding expert classification map are shown in Figure 4. The choice
of this site is due to the large variety of plantations with different structures. By the variety</p>
      </sec>
      <sec id="sec-3-6">
        <p>of species, the stands of the Bronnitskoe forestry cover the main forest-forming species of
Russia. From the 1950s to the present, various species and ecotypes of larch, which are grown
here outside their natural habitat, have been tested in this area. The forest canopy of the test
site contains 7 visually noticeable texture classes: 1 — mixed conifer stand (larch, pine and
spruce) with a dense canopy and high values of density; 2 and 3 — agricultural areas of different
structure; 4 — deciduous stand with a predominance of birch, a dense canopy and a relative
stocking of 0.9; 5 — mixed birch stand with a relative stocking of 0.8; 6 — mixed birch stand
with a pronounced cluster structure of the canopy; 7 — cultivated plantations of larch with a
regular structure.</p>
        <p>[Table 4: class-wise omission (OE) and commission (CE) errors for the GLM, GLDM, GLCM and GGCM methods; the tabulated values are not reliably recoverable from the extracted layout.]</p>
        <p>In this case, the GLCM and GGCM methods show similar results. The total classification error is
about 4%. The accuracy of GLCM seems to be a little higher compared to GGCM; however, for
this method the difference between the resubstitution and independent estimates is also
more significant than for GGCM. The GLM and GLDM methods demonstrate weak classification
results. Analyzing the class-wise errors presented in Table 5, we can see that the regular structure
of larch stands corresponds to minimum errors, about 2% for the GLCM and GGCM methods. This
result is also confirmed by Figure 4. The highest errors correspond to the dense deciduous stand;
however, this is explained by the small number of pixels corresponding to this object.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The reported study was funded by RFBR, projects No. 20-07-00370 “Fundamental problems of
increasing the informativeness of processing data from optoelectronic aerospace devices of high
spatial and spectral resolution” and No. 19-01-00215 “Investigation of operative opportunities
of hyper-spectral technologies of remote sensing of the Earth to solve regional problems using
updated hyper-spectral cameras from space”.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Egorov</surname>
            <given-names>V.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bartalev</surname>
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kolbudaev</surname>
            <given-names>P.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plotnikov</surname>
            <given-names>D.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Khvostikov</surname>
            <given-names>S.A.</given-names>
          </string-name>
          <article-title>Land cover map of Russia derived from Proba-V satellite data</article-title>
          //
          <source>Sovremennye Problemy Distantsionnogo Zondirovaniya Zemli iz Kosmosa</source>
          .
          <year>2018</year>
          . Vol.
          <volume>15</volume>
          . P.
          <fpage>282</fpage>
          -
          <lpage>286</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Shafri</surname>
            <given-names>H.Z.</given-names>
          </string-name>
          <article-title>Machine learning in hyperspectral and multispectral remote sensing data analysis</article-title>
          <source>// Artificial Intelligence Science and Technology: Proceedings of the 2016 International Conference (AIST2016)</source>
          .
          <year>2017</year>
          . P. 3-
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Rylov</surname>
            <given-names>S.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Melnikov</surname>
            <given-names>P.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pestunov</surname>
            <given-names>I.A</given-names>
          </string-name>
          .
          <article-title>Spectral-textural classification of hyperspectral images with high spatial resolution</article-title>
          //
          <source>Interexpo GEO-Siberia</source>
          .
          <year>2016</year>
          . Vol.
          <volume>4</volume>
          . No. 1. P.
          <fpage>78</fpage>
          -
          <lpage>84</lpage>
          . (In Russ.)
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Dmitriev</surname>
            <given-names>E.V.</given-names>
          </string-name>
          <article-title>Classification of the forest cover of Tver' region using hyperspectral airborne imagery</article-title>
          //
          <source>Izvestiya, Atmospheric and Oceanic Physics</source>
          .
          <year>2014</year>
          . Vol.
          <volume>50</volume>
          , No. 9. P.
          <fpage>929</fpage>
          -
          <lpage>942</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Pestunov</surname>
            <given-names>I.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sinyavsky</surname>
            <given-names>Yu.N.</given-names>
          </string-name>
          <article-title>Nonparametric grid-based clustering algorithm for remote sensing data</article-title>
          //
          <source>Optoelectronics, Instrumentation and Data Processing</source>
          .
          <year>2006</year>
          . Vol.
          <volume>2</volume>
          . P.
          <fpage>78</fpage>
          -
          <lpage>87</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Dmitriev</surname>
            <given-names>E.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kozoderov</surname>
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dementyev</surname>
            <given-names>A.O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Safonova</surname>
            <given-names>A.N.</given-names>
          </string-name>
          <article-title>Combining classifiers in the problem of thematic processing of hyperspectral aerospace images</article-title>
          //
          <source>Optoelectronics, Instrumentation and Data Processing</source>
          .
          <year>2018</year>
          . Vol.
          <volume>54</volume>
          . No. 3. P.
          <fpage>213</fpage>
          -
          <lpage>221</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Beguet</surname>
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Guyon</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boukir</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chehata</surname>
            <given-names>N.</given-names>
          </string-name>
          <article-title>Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery</article-title>
          //
          <source>ISPRS J. Photogramm. Remote Sens.</source>
          .
          <year>2014</year>
          . Vol.
          <volume>96</volume>
          . P.
          <fpage>164</fpage>
          -
          <lpage>178</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Petrou</surname>
            <given-names>M.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kamata</surname>
            <given-names>S.I.</given-names>
          </string-name>
          <source>Image processing: Dealing with texture</source>
          . John Wiley &amp; Sons,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Weszka</surname>
            <given-names>J.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dyer</surname>
            <given-names>C.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosenfeld</surname>
            <given-names>A.</given-names>
          </string-name>
          <article-title>A comparative study of texture measures for terrain classification</article-title>
          //
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          .
          <year>1976</year>
          . No. 4. P.
          <fpage>269</fpage>
          -
          <lpage>285</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Haralick</surname>
            <given-names>R.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shanmugam</surname>
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dinstein</surname>
            <given-names>I.</given-names>
          </string-name>
          <article-title>Textural features for image classification</article-title>
          //
          <source>IEEE Transactions on Systems, Man, and Cybernetics</source>
          .
          <year>1973</year>
          . Vol. SMC-
          <volume>3</volume>
          . No. 6. P.
          <fpage>610</fpage>
          -
          <lpage>621</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Chen</surname>
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tan</surname>
            <given-names>W.</given-names>
          </string-name>
          <article-title>Scene classification based on gray level-gradient cooccurrence matrix in the neighborhood of interest points</article-title>
          //
          <source>2009 IEEE International Conference on Intelligent Computing and Intelligent Systems</source>
          .
          <year>2009</year>
          . Vol.
          <volume>4</volume>
          . P.
          <fpage>482</fpage>
          -
          <lpage>485</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Dmitriev</surname>
            <given-names>E.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kozoderov</surname>
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sokolov</surname>
            <given-names>A.A.</given-names>
          </string-name>
          <article-title>The performance of texture features in the problem of classification of the soil-vegetation objects</article-title>
          //
          <source>CEUR Workshop Proceedings</source>
          .
          <year>2019</year>
          . Vol.
          <volume>2534</volume>
          . P.
          <fpage>91</fpage>
          -
          <lpage>98</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Conners</surname>
            <given-names>R.W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Harlow</surname>
            <given-names>C.A.</given-names>
          </string-name>
          <article-title>A theoretical comparison of texture algorithms</article-title>
          //
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          .
          <year>1980</year>
          . Vol.
          <volume>3</volume>
          . P.
          <fpage>204</fpage>
          -
          <lpage>222</lpage>
          .
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>