<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Statistical texture analysis of forest areas from very high spatial resolution satellite images</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Egor</forename><forename type="middle">V</forename><surname>Dmitriev</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Marchuk Institute of Numerical Mathematics</orgName>
								<orgName type="institution">Russian Academy of Sciences</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Moscow Institute of Physics and Technology (National Research University)</orgName>
								<address>
									<settlement>Dolgoprudny, Moscow</settlement>
									<region>Region</region>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Timofei</forename><forename type="middle">V</forename><surname>Kondranin</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Moscow Institute of Physics and Technology (National Research University)</orgName>
								<address>
									<settlement>Dolgoprudny, Moscow</settlement>
									<region>Region</region>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Petr</forename><forename type="middle">G</forename><surname>Melnik</surname></persName>
							<email>melnik_petr@bk.ru</email>
							<affiliation key="aff2">
								<orgName type="department">Mytischi Branch of Bauman</orgName>
								<orgName type="institution">Moscow State Technical University</orgName>
								<address>
									<settlement>Mytischi, Moscow</settlement>
									<region>Region</region>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sergey</forename><forename type="middle">A</forename><surname>Donskoy</surname></persName>
							<affiliation key="aff3">
								<orgName type="institution">Federal Forestry Agency ROSLESINFORG</orgName>
								<address>
									<settlement>Moscow</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Statistical texture analysis of forest areas from very high spatial resolution satellite images</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">63238B4B732B132B96FE314F73AFA491</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T21:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Remote sensing</term>
					<term>pattern recognition</term>
					<term>texture analysis</term>
					<term>very high resolution images</term>
					<term>soil-vegetation cover</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Aerospace images with a spatial resolution of less than 1 m are actively used by regional services to obtain and update information about various environmental objects. Considerable efforts are being devoted to the development of remote sensing methods for forest areas. The structure of the forest canopy depends on various parameters, most of which are determined by ground-based methods during forest management works. Remote sensing methods for assessing the structural parameters of forest stands are based on texture analysis of panchromatic and multispectral images. A statistical approach is often used to extract texture features. The basis of this approach is the description of the distributions characterizing the mutual arrangement of image pixels in grayscale. This paper compares the effectiveness of matrix-based statistical methods for extracting textural features for solving the problem of classifying various natural and manmade objects, as well as structures of the forest canopy. We consider statistics of various orders based on estimates of the distributions of gray levels, as well as the mutual occurrence, frequency, difference and structuring of gray levels. The results of assessing the informativeness of statistical textural characteristics in determining various structures of the forest canopy are presented. Dependences of the classification results on the choice of distribution parameters are determined. For the quantitative validation of the results obtained, data from ground surveys and expert visual classification of very high resolution WorldView-2 images of the territories of the Savvatyevskoe and Bronnitskoe forestries are used.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In recent years, machine learning methods have been widely used for various tasks of automation and increasing the information content of procedures for thematic processing and analysis of aerospace images in the visible and near infrared spectral ranges. Multispectral satellite images of low and medium spatial resolution are traditionally used for surveying the soil and vegetation cover and constructing large-scale thematic maps <ref type="bibr" target="#b0">[1]</ref>. With the increase in the spatial and spectral resolution of satellite equipment, a number of novel tasks associated with remote sensing of natural and anthropogenic objects have arisen.</p><p>The high (1-4 m) and very high (&lt; 1 m) spatial resolution of panchromatic satellite images forms the basis of methods for solving new tasks of monitoring land, forest and water resources, searching for mineral deposits and assessing the ecological situation, tasks that are more demanding because of increased consumer requirements. There is a need to develop special approaches for analyzing large amounts of information and obtaining remote estimates of the characteristics of the examined objects with a given accuracy. Improving the efficiency of thematic processing of aerospace images of high spatial and spectral resolution is in high demand in many applications in the fields of natural resource management, agriculture, forestry and environmental monitoring <ref type="bibr" target="#b1">[2]</ref>.</p><p>The current trend in the development of methods for thematic processing of high resolution images is the combined use of spectral and texture features. For example, a method of spectral-texture processing of aerial hyperspectral images of a forest canopy was presented in <ref type="bibr" target="#b2">[3]</ref>. 
Analysis of the results of test calculations for selected areas of the Savvatyevskoe forestry (Russia, Tver) showed that the proposed approach provides a significant increase in the accuracy of classification of the species composition and age groups in comparison with the averaged spectral characteristics. It should be noted that for synthesized multispectral images, taking spectral features into account showed an increase in accuracy of more than 10%.</p><p>The presented results on improving the accuracy due to the use of texture features are also confirmed by comparison with the previously obtained results of thematic processing of hyperspectral images of nearby territories presented in <ref type="bibr" target="#b3">[4]</ref>. Both effective nonparametric methods of cluster analysis <ref type="bibr" target="#b4">[5]</ref> and optimized ensemble machine learning algorithms <ref type="bibr" target="#b5">[6]</ref> can be successfully used for spectral-texture classifications.</p><p>The work <ref type="bibr" target="#b6">[7]</ref> shows new possibilities of using statistical texture analysis of satellite images of very high spatial resolution to retrieve the structural parameters of forest stands, characterizing the variety of sizes and density of crowns, as well as the relative position of individual trees. The presented technique is based on the parameterization of linear relationships between the Haralick texture features and the structural parameters of pine stands. The results obtained can be effectively used to provide more accurate estimates of the aboveground biomass of forest stand fractions.</p><p>The accuracy of texture analysis depends on the chosen feature extraction method <ref type="bibr" target="#b7">[8]</ref>. In this paper, we discuss the possibilities of using various statistical methods for measuring textures based on the matrix representation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Texture feature extraction and classification methods</head><p>The panchromatic satellite images presented in grayscale are considered as the object of texture analysis since they have the highest spatial resolution. The texture is formed by the spatial arrangement and the mutual combination of structural elements. Natural objects are characterized by a random arrangement of structural elements and significant variations in their parameters, for example, tone and size. Thus, the task of constructing parameters characterizing a particular texture is associated with obtaining statistical estimates.</p><p>Statistical methods of texture analysis are based on assessing the spatial distribution of local characteristics of structural elements for all possible locations in the image and extracting statistical parameters from the obtained distributions of local characteristics. Matrix methods assume that the desired distribution is discrete and has a finite number of elements. An image for which a texture extraction is made must contain a sufficiently large number of structural elements to obtain a reliable estimate of the probability mass function. Texture features obtained on the basis of matrix methods are subdivided into characteristics of the 1st and 2nd orders.</p><p>The construction of first order texture characteristics implies that the structural elements are individual pixels of the original image <ref type="bibr" target="#b8">[9]</ref>. The Gray-Level Matrix (GLM) is a vector of frequencies of gray level occurrence in the processed image 𝐼(𝑥, 𝑦) with the size 𝐿 𝑥 × 𝐿 𝑦 :</p><formula xml:id="formula_0">GLM(𝑘) = # {(𝑥, 𝑦)|𝐼(𝑥, 𝑦) = 𝑘, (𝑥, 𝑦) ∈ 𝐿 𝑥 × 𝐿 𝑦 }</formula><p>where # means the number of elements in the set, 𝑘 = 1, . . . 
, 𝑁 𝑔𝑙 , and 𝑁 𝑔𝑙 is the number of gray levels.</p><p>For building the Gray Level Difference Matrix (GLDM), the original image 𝐼 is converted into a difference image 𝐷𝐼:</p><formula xml:id="formula_1">𝐷𝐼 Δ𝑥,Δ𝑦 = |𝐼(𝑥, 𝑦) − 𝐼(𝑥 + Δ𝑥, 𝑦 + Δ𝑦)|</formula><p>where the parameters Δ𝑥 and Δ𝑦 are displacements along the horizontal and vertical directions, respectively. GLDM is a vector of frequencies of occurrence of the absolute values of differences in gray levels at the given displacement:</p><formula xml:id="formula_2">GLDM(𝑘) = # {(𝑥, 𝑦)|𝐷𝐼(𝑥, 𝑦) = 𝑘, (𝑥, 𝑦) ∈ (𝐿 𝑥 − Δ𝑥)×(𝐿 𝑦 − Δ𝑦)} , 𝑘 = 0, . . . , 𝑁 𝑔𝑙 −1.</formula><p>An example of constructing the GLM and GLDM matrices is shown in Figure <ref type="figure" target="#fig_0">1</ref>. For calculating texture features, GLM and GLDM are converted into the corresponding probability mass function estimates</p><formula xml:id="formula_3">𝐹 GLM (𝑘) = GLM(𝑘) / ∑_{𝑘=1}^{𝑁 𝑔𝑙} GLM(𝑘)</formula><p>and</p><formula xml:id="formula_4">𝐹 GLDM (𝑘) = GLDM(𝑘) / ∑_{𝑘=0}^{𝑁 𝑔𝑙 −1} GLDM(𝑘).</formula><p>The corresponding 1st order texture characteristics are presented in Table <ref type="table" target="#tab_0">1</ref>.</p><p>The extraction of second order texture characteristics is primarily associated with the construction of two-dimensional distributions. Structural elements in this case consist of two pixels or two groups of pixels, for each of which a corresponding characteristic is determined. The best known is the method proposed in <ref type="bibr" target="#b9">[10]</ref>. The method uses structural elements consisting of two  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><formula xml:id="formula_5">GLDM features: Expectation ∑_{𝑘=0}^{𝑁 𝑔𝑙 −1} 𝑘 𝐹 GLDM (𝑘); Contrast ∑_{𝑘=0}^{𝑁 𝑔𝑙 −1} 𝑘² 𝐹 GLDM (𝑘); Angular Second Moment ∑_{𝑘=0}^{𝑁 𝑔𝑙 −1} 𝐹² GLDM (𝑘); Entropy −∑_{𝑘=0}^{𝑁 𝑔𝑙 −1} 𝐹 GLDM (𝑘) log 𝐹 GLDM (𝑘). GLM features: Mean-square (1/(𝐿 𝑥 𝐿 𝑦)) ∑_{𝑥=1}^{𝐿 𝑥} ∑_{𝑦=1}^{𝐿 𝑦} 𝐼²(𝑥, 𝑦); Entropy −∑_{𝑘=1}^{𝑁 𝑔𝑙} 𝐹 GLM (𝑘) log 𝐹 GLM (𝑘); Energy ∑_{𝑘=1}^{𝑁 𝑔𝑙} 𝐹² GLM (𝑘); Variance ∑_{𝑘=1}^{𝑁 𝑔𝑙} (𝑘 − 𝜇)² 𝐹 GLM (𝑘)</formula><p>pixels at a certain specified distance (adjacency distance). One of these pixels is called the reference pixel.</p><p>For the reference pixel, the neighboring one is selected in a given direction of adjacency. To describe the spatial relationship between the reference and neighboring pixels, the frequencies of occurrence of the corresponding gray-scale pairs are calculated for all possible positions of the reference pixel in the original image. Based on these frequencies, we can form a matrix known as the Gray-Level Co-occurrence Matrix (GLCM) or Spatial Gray-Level Dependency Matrix (SGLDM). An example of constructing the GLCM is shown in Figure <ref type="figure" target="#fig_1">2</ref>. The GLCM is a square matrix containing integer values. The size of the GLCM is determined by the number of gray levels. So in the example presented in Figure <ref type="figure" target="#fig_1">2</ref>, the original image has 8 gray levels and, respectively, the GLCM has a size of 8 × 8. The matrix is symmetric if the order of the gray levels in the reference and neighboring pixels is not taken into account. The original image 𝐼(𝑥, 𝑦) is a function of two spatial coordinates, so for each pixel we can calculate a difference estimate of the gradient function. The magnitude of the gradient</p><formula xml:id="formula_6">𝑔(𝑥, 𝑦) = √((𝜕𝐼/𝜕𝑥)² + (𝜕𝐼/𝜕𝑦)²)</formula><p>characterizes the rate of change of the image tone at the reference pixel. 
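The GLM, GLDM and GLCM constructions described above can be sketched in NumPy as follows. This is a minimal illustration, not the authors' code: gray levels are 0-based here (the paper indexes GLM levels from 1), and all function names are ours:

```python
import numpy as np

def glm(image, n_gl):
    """Gray-Level Matrix: occurrence counts of gray levels k = 0..n_gl-1."""
    return np.bincount(image.ravel(), minlength=n_gl)

def gldm(image, dx, dy, n_gl):
    """Gray-Level Difference Matrix for the displacement (dx, dy)."""
    h, w = image.shape
    a = image[:h - dy, :w - dx].astype(int)
    b = image[dy:, dx:].astype(int)
    return np.bincount(np.abs(a - b).ravel(), minlength=n_gl)

def glcm(image, dx, dy, n_gl, symmetric=True):
    """Gray-Level Co-occurrence Matrix: counts of gray-level pairs at the
    adjacency offset (dx, dy); symmetric=True ignores the pair order."""
    h, w = image.shape
    ref = image[:h - dy, :w - dx].astype(int)   # reference pixels
    nbr = image[dy:, dx:].astype(int)           # neighbors in the adjacency direction
    m = np.zeros((n_gl, n_gl), dtype=int)
    np.add.at(m, (ref.ravel(), nbr.ravel()), 1)
    return m + m.T if symmetric else m

def pmf(counts):
    """Normalize a count vector/matrix into a probability mass function estimate."""
    return counts / counts.sum()
```

Adding the transpose makes the matrix symmetric, which corresponds to ignoring the order of the gray levels in the reference and neighboring pixels, as noted above.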
To obtain a difference estimate, we use the Sobel operator:</p><formula xml:id="formula_7">𝑆(𝑥, 𝑦) = √(𝑆²𝑥(𝑥, 𝑦) + 𝑆²𝑦(𝑥, 𝑦)) ≃ 𝑔(𝑥, 𝑦)</formula><p>where</p><formula xml:id="formula_8">𝑆𝑥(𝑥, 𝑦) = [𝐼(𝑥 + 1, 𝑦 − 1) + 2𝐼(𝑥 + 1, 𝑦) + 𝐼(𝑥 + 1, 𝑦 + 1)] − [𝐼(𝑥 − 1, 𝑦 − 1) + 2𝐼(𝑥 − 1, 𝑦) + 𝐼(𝑥 − 1, 𝑦 + 1)], 𝑆𝑦(𝑥, 𝑦) = [𝐼(𝑥 − 1, 𝑦 + 1) + 2𝐼(𝑥, 𝑦 + 1) + 𝐼(𝑥 + 1, 𝑦 + 1)] − [𝐼(𝑥 − 1, 𝑦 − 1) + 2𝐼(𝑥, 𝑦 − 1) + 𝐼(𝑥 + 1, 𝑦 − 1)].</formula><p>Thus, by specifying the number of gradient gradations to be equal to 𝑁 𝑔𝑙 , we can build an image of brightness gradient magnitudes</p><formula xml:id="formula_9">𝐺(𝑥, 𝑦) = int [(𝑆(𝑥, 𝑦) − 𝑆 min)/(𝑆 max − 𝑆 min)] • 𝑁 𝑔𝑙</formula><p>at the pixels of the original image. The Gray Gradient Co-occurrence Matrix (GGCM) <ref type="bibr" target="#b10">[11]</ref> is built in the same way as the GLCM, only for the image 𝐺(𝑥, 𝑦).</p><p>The estimate of the probability mass function of the co-occurrence of a given pair of gray levels can be obtained as the normalized GLCM</p><formula xml:id="formula_10">𝑝(𝑖, 𝑗) = GLCM(𝑖, 𝑗) / ∑_{𝑖,𝑗=1}^{𝑁 𝑔𝑙} GLCM(𝑖, 𝑗)</formula><p>where 𝑖 and 𝑗 are indices of GLCM elements. Based on the values 𝑝(𝑖, 𝑗), statistics known as Haralick texture features are calculated; a detailed description of all of them is presented in <ref type="bibr" target="#b11">[12]</ref>. It should also be noted that in the work <ref type="bibr" target="#b12">[13]</ref> it is stated that for most practical tasks it is sufficient to use 5 of them, which are given in Table <ref type="table" target="#tab_2">2</ref>. The necessary marginal expectations and marginal standard deviations can be calculated as:</p><formula xml:id="formula_11">𝜇𝑖 = ∑_{𝑖=1}^{𝑁} ∑_{𝑗=1}^{𝑁} 𝑖 • 𝑝(𝑖, 𝑗), 𝜇𝑗 = ∑_{𝑖=1}^{𝑁} ∑_{𝑗=1}^{𝑁} 𝑗 • 𝑝(𝑖, 𝑗), 𝜎𝑖 = √(∑_{𝑖=1}^{𝑁} ∑_{𝑗=1}^{𝑁} (𝑖 − 𝜇𝑖)² • 𝑝(𝑖, 𝑗)), 𝜎𝑗 = √(∑_{𝑖=1}^{𝑁} ∑_{𝑗=1}^{𝑁} (𝑗 − 𝜇𝑗)² • 𝑝(𝑖, 𝑗)).</formula><p>Texture segmentation of panchromatic satellite images is based on the moving window method. The moving window is a rectangular contour selecting the analyzed part of the image under processing. The size of the window is determined by the characteristic scale of the recognized textures. 
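The Sobel gradient image, the GGCM built from it, and the five Haralick features of Table 2 can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the axis convention is ours, and the gradient is quantized so that the top level is 𝑁 𝑔𝑙 − 1 to keep indices in range, a detail the int[·] • 𝑁 𝑔𝑙 formula leaves implicit:

```python
import numpy as np

def sobel_magnitude(image):
    """Sobel estimate S of the gradient magnitude at interior pixels
    (rows are y, columns are x; the axis convention is illustrative)."""
    I = image.astype(float)
    sx = (I[:-2, 2:] + 2 * I[1:-1, 2:] + I[2:, 2:]) \
       - (I[:-2, :-2] + 2 * I[1:-1, :-2] + I[2:, :-2])
    sy = (I[2:, :-2] + 2 * I[2:, 1:-1] + I[2:, 2:]) \
       - (I[:-2, :-2] + 2 * I[:-2, 1:-1] + I[:-2, 2:])
    return np.hypot(sx, sy)

def quantize(s, n_gl):
    """Map gradient magnitudes to integer levels 0..n_gl-1."""
    rng = s.max() - s.min()
    return ((s - s.min()) / (rng if rng else 1.0) * (n_gl - 1)).astype(int)

def cooccurrence(g, dx, dy, n_gl):
    """Co-occurrence counts at offset (dx, dy): applied to the quantized
    gradient image this yields the GGCM, applied to a gray-level image the GLCM."""
    h, w = g.shape
    ref, nbr = g[:h - dy, :w - dx], g[dy:, dx:]
    m = np.zeros((n_gl, n_gl), dtype=int)
    np.add.at(m, (ref.ravel(), nbr.ravel()), 1)
    return m + m.T   # symmetric: pair order ignored

def haralick5(m):
    """The five informative Haralick features of Table 2 (0-based indices)."""
    p = m / m.sum()                       # normalized matrix, p(i, j)
    n = p.shape[0]
    i, j = np.indices((n, n))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()          # marginal expectations
    sd_i = np.sqrt(((i - mu_i) ** 2 * p).sum())        # marginal STDs
    sd_j = np.sqrt(((j - mu_j) ** 2 * p).sum())
    contrast = ((i - j) ** 2 * p).sum()
    correlation = ((i - mu_i) * (j - mu_j) * p).sum() / (sd_i * sd_j)
    energy = (p ** 2).sum()
    nz = p[p > 0]
    entropy = -(nz * np.log(nz)).sum()
    homogeneity = (p / (1.0 + (i - j) ** 2)).sum()     # local homogeneity
    return contrast, correlation, energy, entropy, homogeneity
```

A GGCM feature vector for an image patch is then `haralick5(cooccurrence(quantize(sobel_magnitude(patch), n_gl), dx, dy, n_gl))`.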
If the window size is chosen too small, the result of the texture classification will represent high frequency noise. On the other hand, too large a window size leads to excessive smoothing of the contours of recognized objects. The center of the window runs through all the points of the panchromatic image. To reduce the amount of computation in practical tasks, when processing panchromatic and multispectral images together it is sufficient to run over only those pixels whose coordinates correspond to the pixel centers of the joint multispectral image.</p><p>To carry out the supervised classification of texture features we employed an ensemble algorithm known as Error Correcting Output Codes (ECOC). The algorithm is designed to combine the responses of binary learners into a multiclass classifier based on results from information and coding theory. Let the binary classification algorithm be the Support Vector Machine (SVM) with a Gaussian kernel. The response of the SVM algorithm is the classification score, which is the normalized distance from the classified sample to the discriminant surface in the area of the relevant class. The coding stage of the ECOC algorithm consists of calculating classification scores for each of the SVM binary learners defined by the one-versus-one coding design matrix, and the corresponding hinge binary losses for each of the considered classes. The decoding stage consists of selecting the class corresponding to the minimum average loss. To optimize the feature space, the regularized forward selection method is used. The method has better stability in comparison with the standard greedy selection algorithm, which suffers from high sensitivity of the selected optimal sequence of features to small changes in the training set. 
On the other hand, the regularized algorithm requires more computational costs.</p><p>The classification quality was assessed by the confusion matrix (CM) and related parameters, such as the total error (TE), total omission error (TOE) and total commission error (TCE). CM is the basic classification quality characteristic allowing a comprehensive visual analysis of different aspects of the classification method used. Rows of CM represent reference classes and columns represent predicted classes. TE is defined as the number of incorrectly classified samples over the total number of samples. TOE is the mean omission error over all classes considered, where the omission error is the number of falsely classified samples of the selected class over all samples of this class. TCE is the mean commission error over all possible responses of the classifier used, where the commission error is defined as the probability of false classification for each possible classification result.</p></div>
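The confusion matrix and the TE/TOE/TCE measures defined above can be computed as in the following NumPy sketch (0-based integer class labels; the ECOC SVM classifier itself is not reproduced here, and the sketch assumes every class occurs at least once among both references and predictions):

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows: reference classes; columns: predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    np.add.at(cm, (y_true, y_pred), 1)
    return cm

def total_errors(cm):
    """Total error, total omission error and total commission error."""
    total = cm.sum()
    te = (total - np.trace(cm)) / total
    # omission error: misclassified samples of a class over all samples of it
    row = cm.sum(axis=1)
    oe = (row - np.diag(cm)) / row
    # commission error: false classifications among each predicted class
    col = cm.sum(axis=0)
    ce = (col - np.diag(cm)) / col
    return te, oe.mean(), ce.mean()
```

The omission errors are computed row-wise and the commission errors column-wise, matching the row/column convention stated above.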
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results and discussion</head><p>During the joint spectral-texture processing of satellite images, texture features are usually employed for solving the following two tasks: segmentation of the contours of natural and manmade objects (including the selection of building zones and forest areas), and classification of structural parameters of the forest canopy. Thus, for carrying out numerical experiments using the above methods, we selected two relevant test plots in WorldView-2 panchromatic images with the spatial resolution ∼ 0.5 m. The first test plot, hereinafter referred to as Konstantinovsky, is located on the territory of the Savvatyevskoe forestry (Tver region) near the Domnikovo village. The plot contains several large zones corresponding to 5 types of objects of varying complexity with clearly differing textures: water surface (Konstantinovsky sand quarry), pine forest, building zone, field and peat swamp forest. The RGB image of the test plot, the corresponding expert map of objects and the results of texture analysis are shown in Figure <ref type="figure" target="#fig_2">3</ref>.</p><p>The classifier used is sensitive to the difference in the number of training samples for the considered classes. Increasing the number of training samples for one of the classes leads to increasing the prior probability of its classification. Thus, in order to avoid this problem, we used a balanced training set containing 500 samples for each class. The remaining data (testing set) were used for independent validation. Also, since the accuracy of texture classification essentially depends on the size of the moving window, we carried out a series of calculations, which allowed us to determine the range of acceptable size values. Thus, we used a moving window with a size of 109 pixels in the horizontal and vertical directions. The original image has been reduced to 64 gray levels. 
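The moving-window processing with the settings above (a 109-pixel window, 64 gray levels) can be sketched as follows; `extract_features` and `predict` are placeholders for any of the matrix feature extractors and the trained classifier, and the function names are ours:

```python
import numpy as np

def reduce_gray_levels(image, n_gl=64):
    """Reduce a grayscale image to n_gl gray levels (0..n_gl-1)."""
    lo, hi = image.min(), image.max()
    scale = (hi - lo) or 1.0
    return ((image - lo) / scale * (n_gl - 1)).astype(int)

def classify_moving_window(image, extract_features, predict, window=109, step=1):
    """Slide a window x window contour over the image and assign the class
    of each window to its center pixel; border pixels stay -1."""
    half = window // 2
    h, w = image.shape
    labels = np.full((h, w), -1, dtype=int)
    for y in range(half, h - half, step):
        for x in range(half, w - half, step):
            patch = image[y - half:y + half + 1, x - half:x + half + 1]
            labels[y, x] = predict(extract_features(patch))
    return labels
```

Setting `step` larger than 1 corresponds to running the window only over the pixels matching the pixel centers of a joint multispectral image, as suggested above for reducing the amount of computation.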
The feature optimization described above was used for the GLCM and GGCM methods to avoid the curse of dimensionality problem. For the classification of texture features obtained by the GLM and GLDM methods, we used the full set of features.</p><p>Estimates of the total characteristics of classification quality obtained from the training set (resubstitution method) and the testing set (independent validation) are represented in Table <ref type="table" target="#tab_3">3</ref>. We can see that the GLCM method provides the most accurate results. The total error is about 1%, and the total omission and commission errors are very close. It should be noted that the difference between the dependent and independent estimates of the error is insignificant, which indicates a good generalization ability of the trained ECOC SVM classifier. The total errors of the GGCM and GLM methods are significantly higher, but remain at an acceptable level. It should be noted that a visual comparison of the classification results presented in Figure <ref type="figure" target="#fig_2">3</ref> shows that GGCM reproduces the expert map of objects much better and contains significantly less noise in comparison with GLM. Table <ref type="table" target="#tab_4">4</ref> contains class-wise omission and commission classification errors for the Konstantinovsky test plot. The best classification result corresponds to the water surface. We can see that all the methods reveal a high accuracy. The building zone and field are classified with an acceptable level of errors. The worst accuracies correspond to the conifer and peat swamp forest stands; however, the errors are low enough for the GLCM and GGCM methods. GLDM demonstrates the worst results and cannot be used for texture segmentation of forest areas.</p><p>The second test plot, hereinafter referred to as GFP Dementyev, is located on the territory of the Bronnitskoe forestry near the Lubninka village. The plot is part of the territory of the geographical forest plantations (GFP) of the forester P.I. 
Dementyev. The RGB image of GFP Dementyev and the corresponding expert classification map are shown in Figure <ref type="figure" target="#fig_3">4</ref>. The choice of this site is due to the large variety of plantations with different structures. In terms of species variety, the stands of the Bronnitskoe forestry cover the main forest-forming species of Russia. From the 1950s to the present, various species and ecotypes of larch, which are grown here outside their natural habitat, have been tested in this area. The forest canopy of the test site contains 7 visually noticeable texture classes: 1 - mixed conifer stand (larch, pine and spruce) with a dense canopy and high values of density; 2 and 3 - agricultural areas of different structure; 4 - deciduous stand with a predominance of birch, a dense canopy and the relative stocking of 0.9; 5 - mixed birch stand with the relative stocking of 0.8; 6 - mixed birch stand with a pronounced cluster structure of the canopy; 7 - cultivated plantations of larch with a regular structure. In this case, the GLCM and GGCM methods show similar results. The total classification error is about 4%. The accuracy of GLCM seems to be a little higher compared to GGCM; however, for this method the difference between the resubstitution and independent estimates is also more significant than for GGCM. The GLM and GLDM methods demonstrate weak classification results. Analyzing the class-wise errors presented in Table <ref type="table" target="#tab_5">5</ref>, we can see that the regular structure of larch stands corresponds to the minimum errors, about 2% for the GLCM and GGCM methods. This result is also confirmed by Figure <ref type="figure" target="#fig_3">4</ref>. 
The highest errors correspond to the dense deciduous stand; however, this is explained by the small number of pixels corresponding to this object.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Construction of GLM and GLDM from sample image.</figDesc><graphic coords="3,162.21,560.11,270.86,90.19" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Construction of GLCM and GGCM from sample image.</figDesc><graphic coords="4,193.47,440.13,208.34,201.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Texture segmentation of natural and manmade objects from the panchromatic image of Konstantinovsky test plot.</figDesc><graphic coords="8,89.29,84.19,416.68,235.43" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Texture segmentation of forest structure from the panchromatic image of GFP Dementyev test plot.</figDesc><graphic coords="9,103.88,460.53,387.52,182.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>First order texture features.</figDesc><table><row><cell></cell><cell cols="2">GLM</cell><cell></cell><cell></cell><cell>GLDM</cell></row><row><cell>Name of feature</cell><cell></cell><cell cols="3">Formula</cell><cell>Name of feature</cell><cell>Formula</cell></row><row><cell>Mean</cell><cell>𝜇 =</cell><cell>1 𝐿 𝑥 𝐿 𝑦</cell><cell>𝐿𝑥 ∑︀ 𝑥=1</cell><cell>𝐿𝑦 ∑︀ 𝑦=1</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Based on the values 𝑝(𝑖, 𝑗), statistics known as Haralick texture features are calculated; for GGCM, this estimate is calculated in a similar way. Initially, 14 different texture features (Haralick features) were proposed in the original paper [10]; however, a few additional features were proposed in subsequent years. At present, 19 different texture features are known: Autocorrelation, Cluster Prominence, Cluster Shade, Contrast, Correlation, Difference Entropy, Difference Variance, Dissimilarity, Energy, Entropy, Homogeneity, Local Homogeneity, Information Measure of Correlation 1, Information Measure of Correlation 2, Maximum Probability, Sum Average, Sum Entropy, Sum Squares, Sum Variance.</figDesc><table /><note>𝑝(𝑖, 𝑗) is the normalized GLCM(𝑖, 𝑗), where 𝑖 and 𝑗 are indices of GLCM elements.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Informative Haralick texture features.</figDesc><table><row><cell>Name of feature</cell><cell></cell><cell></cell><cell></cell><cell></cell><cell>Formula</cell></row><row><cell>Contrast (Inertia)</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell></cell><cell>𝑁</cell><cell>𝑁</cell><cell></cell><cell></cell></row><row><cell>Correlation</cell><cell>∑︀</cell><cell>∑︀</cell><cell></cell><cell></cell></row><row><cell></cell><cell>𝑖=1</cell><cell></cell><cell></cell><cell></cell></row><row><cell>Energy</cell><cell></cell><cell></cell><cell></cell><cell cols="2">𝑁 ∑︀</cell><cell>𝑁 ∑︀</cell><cell>𝑝 2 (𝑖, 𝑗)</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell cols="2">𝑖=1</cell><cell>𝑗=1</cell></row><row><cell></cell><cell></cell><cell></cell><cell>𝑁</cell><cell></cell><cell>𝑁</cell></row><row><cell>Entropy</cell><cell></cell><cell>−</cell><cell>∑︀</cell><cell></cell><cell>∑︀</cell><cell>𝑝(𝑖, 𝑗) • ln 𝑝(𝑖, 𝑗)</cell></row><row><cell></cell><cell></cell><cell cols="2">𝑖=1</cell><cell cols="2">𝑗=1</cell></row><row><cell>Local Homogeneity</cell><cell></cell><cell></cell><cell cols="2">𝑁 ∑︀ 𝑖=1</cell><cell>𝑁 ∑︀ 𝑗=1</cell><cell>𝑝(𝑖, 𝑗) 1 + (𝑖 − 𝑗) 2</cell></row></table><note>𝑁 ∑︀ 𝑖=1 𝑁 ∑︀ 𝑗=1 (𝑖 − 𝑗) 2 • 𝑝(𝑖, 𝑗) 𝑗=1 (𝑖 − 𝜇 𝑖 ) • (𝑗 − 𝜇 𝑗 ) • 𝑝(𝑖, 𝑗) 𝜎 𝑖 • 𝜎 𝑗</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Total characteristics of classification quality.</figDesc><table><row><cell></cell><cell></cell><cell cols="4">Konstantinovsky GFP Dementyev</cell></row><row><cell></cell><cell></cell><cell>Resub</cell><cell>Indep</cell><cell>Resub</cell><cell>Indep</cell></row><row><cell></cell><cell>TE</cell><cell>0.10</cell><cell>0.12</cell><cell>0.22</cell><cell>0.247</cell></row><row><cell>GLM</cell><cell>TOE</cell><cell>0.10</cell><cell>0.11</cell><cell>0.22</cell><cell>0.243</cell></row><row><cell></cell><cell>TCE</cell><cell>0.10</cell><cell>0.12</cell><cell>0.218</cell><cell>0.315</cell></row><row><cell></cell><cell>TE</cell><cell>0.25</cell><cell>0.268</cell><cell>0.234</cell><cell>0.319</cell></row><row><cell>GLDM</cell><cell>TOE</cell><cell>0.25</cell><cell>0.253</cell><cell>0.234</cell><cell>0.250</cell></row><row><cell></cell><cell>TCE</cell><cell>0.25</cell><cell>0.261</cell><cell>0.239</cell><cell>0.365</cell></row><row><cell></cell><cell>TE</cell><cell>0.01</cell><cell>0.012</cell><cell>0.015</cell><cell>0.033</cell></row><row><cell>GLCM</cell><cell>TOE</cell><cell>0.01</cell><cell>0.012</cell><cell>0.015</cell><cell>0.026</cell></row><row><cell></cell><cell>TCE</cell><cell>0.01</cell><cell>0.011</cell><cell>0.015</cell><cell>0.115</cell></row><row><cell></cell><cell>TE</cell><cell>0.068</cell><cell>0.083</cell><cell>0.023</cell><cell>0.043</cell></row><row><cell>GGCM</cell><cell cols="2">TOE 0.068</cell><cell>0.079</cell><cell>0.023</cell><cell>0.033</cell></row><row><cell></cell><cell cols="2">TCE 0.066</cell><cell>0.098</cell><cell>0.023</cell><cell>0.115</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Class-wise characteristics of classification quality for Konstantinovsky test plot.</figDesc><table><row><cell></cell><cell></cell><cell cols="5">buildings field natural conifer forest peat swamp forest water surface</cell></row><row><cell>GLM</cell><cell>OE CE</cell><cell>0.089 0.22</cell><cell>0.077 0.14</cell><cell>0.22 0.15</cell><cell>0.15 0.09</cell><cell>0.017 0</cell></row><row><cell>GLDM</cell><cell>OE CE</cell><cell>0.16 0.32</cell><cell>0.215 0.222</cell><cell>0.538 0.432</cell><cell>0.332 0.329</cell><cell>0.0195 0.001</cell></row><row><cell>GLCM</cell><cell>OE CE</cell><cell>0.004 0.009</cell><cell>0.015 0.009</cell><cell>0.019 0.017</cell><cell>0.022 0.022</cell><cell>0 0</cell></row><row><cell>GGCM</cell><cell>OE CE</cell><cell>0.012 0.038</cell><cell>0.058 0.29</cell><cell>0.084 0.05</cell><cell>0.097 0.097</cell><cell>0.14 0.017</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5</head><label>5</label><figDesc>Class-wise characteristics of classification quality for GFP Dementyev test plot.</figDesc><table><row><cell></cell><cell></cell><cell>normal</cell><cell></cell><cell></cell><cell>dense</cell><cell>cluster</cell><cell>mixed</cell><cell>larch</cell></row><row><cell></cell><cell></cell><cell>conifer</cell><cell cols="2">field1 field2</cell><cell>deciduous</cell><cell>structured</cell><cell>normal</cell><cell>regular</cell></row><row><cell></cell><cell></cell><cell>forest</cell><cell></cell><cell></cell><cell>forest</cell><cell>forest</cell><cell>forest</cell><cell>forest</cell></row><row><cell>GLM</cell><cell>OE CE</cell><cell>0.275 0.424</cell><cell>0.0746 0.123</cell><cell>0.0804 0.0555</cell><cell>0.376 0.931</cell><cell>0.367 0.462</cell><cell>0.406 0.0959</cell><cell>0.123 0.114</cell></row><row><cell>GLCM</cell><cell>OE CE</cell><cell>0.391 0.586</cell><cell cols="2">0.131 0.205 0.0934</cell><cell>0.172 0.899</cell><cell>0.212 0.477</cell><cell>0.554 0.174</cell><cell>0.145 0.122</cell></row><row><cell>GLCM</cell><cell>OE CE</cell><cell>0.017 0.017</cell><cell>0.018 0.039</cell><cell>0.022 0.012</cell><cell>0.037 0.69</cell><cell>0.0046 0.025</cell><cell>0.065 0.0052</cell><cell>0.017 0.017</cell></row><row><cell>GGCM</cell><cell>OE CE</cell><cell>0.055 0.017</cell><cell>0.046 0.046</cell><cell>0.022 0.028</cell><cell>0.0066 0.47</cell><cell>0.011 0.22</cell><cell>0.07 0.0052</cell><cell>0.019 0.02</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The reported study was funded by RFBR, projects No. 20-07-00370 "Fundamental problems of increasing the informativeness of processing data from optoelectronic aerospace devices of high spatial and spectral resolution" and No. 19-01-00215 "Investigation of operative opportunities of hyper-spectral technologies of remote sensing of the Earth to solve regional problems using updated hyper-spectral cameras from space".</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Land cover map of Russia derived from Proba-V satellite data</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Egorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Bartalev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Kolbudaev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">E</forename><surname>Plotnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Khvostikov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Sovremennye Problemy Distantsionnogo Zondirovaniya Zemli iz Kosmosa</title>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="282" to="286" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Machine learning in hyperspectral and multispectral remote sensing data analysis</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">Z</forename><surname>Shafri</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Intelligence Science and Technology: Proceedings of the 2016 International Conference (AIST2016)</title>
				<meeting>the 2016 International Conference (AIST2016)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="3" to="9" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Spectral-textural classification of hyperspectral images with high spatial resolution</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Rylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V</forename><surname>Melnikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Pestunov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Interexpo GEO-Siberia</title>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="78" to="84" />
		</imprint>
	</monogr>
	<note>in Russian</note>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Classification of the forest cover of Tver&apos; region using hyperspectral airborne imagery</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Dmitriev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Izvestiya, Atmospheric and Oceanic Physics</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="issue">9</biblScope>
			<biblScope unit="page" from="929" to="942" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Nonparametric grid-based clustering algorithm for remote sensing data</title>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">A</forename><surname>Pestunov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yu</forename><forename type="middle">N</forename><surname>Sinyavsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optoelectronics, Instrumentation and Data Processing</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="78" to="87" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Combining classifiers in the problem of thematic processing of hyperspectral aerospace images</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Dmitriev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Kozoderov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">O</forename><surname>Dementyev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Safonova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optoelectronics, Instrumentation and Data Processing</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="213" to="221" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Automated retrieval of forest structure variables based on multi-scale texture analysis of VHR satellite imagery</title>
		<author>
			<persName><forename type="first">B</forename><surname>Beguet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Guyon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Boukir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Chehata</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ISPRS J. Photogramm. Remote Sens</title>
		<imprint>
			<biblScope unit="volume">96</biblScope>
			<biblScope unit="page" from="164" to="178" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Image processing: Dealing with texture</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Petrou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Kamata</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>John Wiley &amp; Sons</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A comparative study of texture measures for terrain classification</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Weszka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">R</forename><surname>Dyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rosenfeld</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">SMC-6</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="269" to="285" />
			<date type="published" when="1976">1976</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Textural features for image classification</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Haralick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Shanmugam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Dinstein</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Systems, Man, and Cybernetics</title>
		<imprint>
			<biblScope unit="volume">SMC-3</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="610" to="621" />
			<date type="published" when="1973">1973</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Scene classification based on gray level-gradient co-occurrence matrix in the neighborhood of interest points</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Tan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Intelligent Computing and Intelligent Systems</title>
				<imprint>
			<date type="published" when="2009">2009</date>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page" from="482" to="485" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The performance of texture features in the problem of classification of the soil-vegetation objects</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Dmitriev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Kozoderov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Sokolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2534</biblScope>
			<biblScope unit="page" from="91" to="98" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A theoretical comparison of texture algorithms</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Conners</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Harlow</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">PAMI-2</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="204" to="222" />
			<date type="published" when="1980">1980</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
