<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Identification of Objects on Satellite Images Using the Image Texture Properties</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Hnatushenko</surname></persName>
							<email>hnatushenko.v.v@nmu.one</email>
							<affiliation key="aff0">
								<orgName type="institution">Dnipro University of Technology</orgName>
								<address>
									<addrLine>19 Dmytra Yavornytskoho Ave</addrLine>
									<postCode>49005</postCode>
									<settlement>Dnipro</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Yana</forename><surname>Shedlovska</surname></persName>
							<email>shedlovskii.i.a@nmu.one</email>
							<affiliation key="aff0">
								<orgName type="institution">Dnipro University of Technology</orgName>
								<address>
									<addrLine>19 Dmytra Yavornytskoho Ave</addrLine>
									<postCode>49005</postCode>
									<settlement>Dnipro</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Igor</forename><surname>Shedlovsky</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Dnipro University of Technology</orgName>
								<address>
									<addrLine>19 Dmytra Yavornytskoho Ave</addrLine>
									<postCode>49005</postCode>
									<settlement>Dnipro</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vyacheslav</forename><surname>Gorev</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Dnipro University of Technology</orgName>
								<address>
									<addrLine>19 Dmytra Yavornytskoho Ave</addrLine>
									<postCode>49005</postCode>
									<settlement>Dnipro</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Identification of Objects on Satellite Images Using the Image Texture Properties</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">DEE95A1B11030FF2BB89C6EDA686F47B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-06-19T15:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Remote sensing data</term>
					<term>segmentation</term>
					<term>texture</term>
					<term>NDVI</term>
					<term>NSVDI</term>
					<term>object identification</term>
					<term>tree counting</term>
					<term>satellite image</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper focuses on identifying objects in satellite images using image texture properties, which is an important problem in agriculture. Texture segmentation can distinguish areas that correspond to tree plantations. Orchards and tree plantations can cover vast areas with thousands of trees, making the automation of harvest estimation crucial. Satellite images enable the creation of an effective automatic system for counting trees in plantations. In this work, we applied image texture segmentation to identify areas corresponding to agricultural plantations. We calculated textural properties of the image using the gray-level co-occurrence matrix, including mean value, variance, homogeneity, second angular moment, correlation, contrast, divergence, and entropy. These characteristics were used for segmentation, with multi-scale segmentation employed to distinguish areas of the image with specific textures. We proposed an algorithm for counting objects in satellite images, based on identifying individual objects that create a texture according to their spectral characteristics. The images used in this work primarily featured three object classes: trees, soil, and tree shadows. Since trees in gardens and plantations are arranged uniformly and have the same size, they can be easily distinguished from other image pixels based on their spectral characteristics. We analyzed NDVI and NSVDI spectral indices for tree detection and used the automatic spectral index histogram splitting method to distinguish objects with a high index value corresponding to trees.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Today, thanks to modern satellites, it is possible to obtain high-quality images of the Earth's surface. High spatial resolution images are of special practical value <ref type="bibr" target="#b0">[1]</ref>. These remote sensing data are used for monitoring the Earth's surface, mapping, tracking deforestation, assessing the consequences of natural disasters, and in many other areas of human activity <ref type="bibr" target="#b1">[2]</ref>. Modern satellites such as the WorldView-2 and the WorldView-3 are capable of producing images with a spatial resolution of up to 0.3 m per pixel in the panchromatic channel and up to 1.24 m per pixel in the multispectral channels. The WorldView-3 satellite is capable of covering up to 680,000 square kilometers of the Earth's surface per day, and the WorldView-2 up to one million square kilometers. The huge number of multi-channel satellite images received every day requires fast, high-quality processing to obtain useful information in a timely manner <ref type="bibr" target="#b2">[3]</ref>. There is therefore a need to develop methods of automated computer processing of these data. Satellite images are actively used in agriculture. With the help of images obtained in the near-infrared part of the spectrum, it is possible to assess the state and growth stages of crops and to determine the type of vegetation. Recently, aerial images have also been actively used in agriculture.</p><p>One of the urgent tasks of remote sensing data processing is the counting of trees in plantations. Cultivation of fruit trees, almonds, walnuts, hazelnuts, and oil palms is an important part of the agricultural industry and a significant source of income in countries such as the USA, Malaysia, and Brazil <ref type="bibr" target="#b3">[4]</ref>. In order to organize work effectively and forecast the harvest, it is necessary to know the number of trees in the cultivated area. 
Until now, human labor was used to count and assess the condition of individual trees on plantations, which was very time-consuming and labor-intensive. Orchards and tree plantations can cover huge areas and contain thousands of trees, so the task of automating harvest counting is very important. Thanks to satellite images, it is possible to create an effective automatic system for counting trees in plantations <ref type="bibr" target="#b4">[5]</ref>.</p><p>To solve this problem, it is necessary to choose effective methods of processing and analyzing remote sensing data. The task of identifying and counting trees can be divided into two subtasks: 1) identification of areas of the Earth's surface where there are agricultural plantations, in particular fruit tree plantations; 2) identification of individual objects in the selected area of the image for the purpose of counting them. In this work, these problems are solved using texture analysis of digital images based on the gray-level co-occurrence matrix, image segmentation based on textural features, and selection of individual objects based on their spectral features.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related works</head><p>Repeated attempts have been made to create an automatic tree identification and counting system <ref type="bibr" target="#b5">[6]</ref>. Remote sensing technology is extensively utilized for monitoring and quantifying canopy growth, detecting potential plant diseases, and tracking changes within forest structures. Timely analysis of these data is critical for optimizing yields and evaluating the response of forests to climate anomalies <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>.</p><p>The cultivation of oil palm is a vital contributor to agricultural productivity in numerous developing countries across the tropics. As such, research aimed at accurately quantifying oil palm cultivation is both valuable and meaningful. CNN and R-CNN architectures have been applied for tree detection and counting <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>In paper <ref type="bibr" target="#b11">[12]</ref>, an algorithm for searching for trees in an image using a geometric-optical tree model was proposed, assuming that the center of the tree is at the point of maximum similarity between the model and the sample. The dome-shaped crown of the tree was taken as the basis of the geometric-optical model of the tree, and information about the lighting in the image was used to create it. The sun height, azimuth, and tree width parameters were determined automatically for each individual image, but the result of automatic parameter determination was no better than the user's visual assessment and needs improvement.</p><p>LiDAR data were used in <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b14">15]</ref>. 
Using a LiDAR-based canopy height model and segmentation methods, the tree crowns were delineated.</p><p>In paper <ref type="bibr" target="#b15">[16]</ref>, a method of counting fruit trees was also proposed. To find the crowns of individual trees, the LoG filter was used, which gives the strongest response to round objects. Since the LoG filter detects all objects that have a shape similar to a tree crown, the authors use vegetation indices and take into account only those objects that are green vegetation. Binarization of the red channel of the image and calculation of the NDVI vegetation index were used to find the vegetation in the image.</p><p>Vegetation indices were used in paper <ref type="bibr" target="#b16">[17]</ref> to identify individual trees in a plantation. The most common vegetation indices were investigated, and those that give the maximum difference between the spectral characteristics of trees and the background were selected. The borders of the plantations were outlined manually in the images; the crowns of the trees were located as local maxima of the vegetation index values.</p><p>In <ref type="bibr" target="#b17">[18]</ref>, an algorithm for searching for young palm trees in a satellite image using Haar features was proposed. Seven Haar features describing the shape of the tree were taken. In order to avoid incorrect identification of objects that are not trees, the obtained results were classified by the support vector machine (SVM) method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Materials and methods</head><p>Satellite images obtained by the WorldView-2 and WorldView-3 satellites were used in this work. The WorldView-2 is the first commercial instrument with an eight-channel high-resolution spectrometer that includes traditional spectral channels as well as four additional ones. They provide higher accuracy in detailed analysis of the state of vegetation, selection of objects, and analysis of the coastline and coastal water area. In its characteristics, the WorldView-2 meets the highest requirements. The data received from this satellite have a root mean square (RMS) error of no worse than 4 m without ground control points <ref type="bibr" target="#b18">[19]</ref>.</p><p>The WorldView-3 satellite is designed for imaging in panchromatic and multispectral modes. Its camera equipment is completely similar to that installed on the WorldView-2 satellite. The planimetric geo-positioning accuracy is 6.5 m, or 4 m RMS, without additional correction of the coordinates by ground control points. The WorldView-3 images in the following modes: VNIR (Visible and Near Infrared), the multispectral visible and near-infrared range, 8 channels; SWIR (Shortwave Infrared), the mid-infrared range, which allows imaging through haze, fog, smog, dust, smoke and clouds, 8 channels; CAVIS (Clouds, Aerosols, Vapors, Ice, Snow), which provides atmospheric correction, 12 channels with a spatial resolution of 30 m at nadir and wavelengths from 0.4 μm to 2.2 μm.</p><p>One of the methods that ensure the extraction of useful information from remote sensing data is image segmentation. 
Segmentation allows one to identify homogeneous areas on the Earth's surface that correspond to certain natural objects <ref type="bibr" target="#b19">[20]</ref>.</p><p>Analysis of a number of satellite images and aerial photographs has shown that each image contains several types of land cover (forest, grass, soil, etc.), which can be characterized by their spectral and textural properties. Fig. <ref type="figure">1</ref> shows a fragment of a satellite image. Texture fragments of vegetation can be divided into two main types: visually different in terms of spectral properties and structure, and visually close in terms of spectral properties and structure. Fragments of the latter type belong to the same class, for example "forest", "soil", "grass", etc. Texture fragments within the same class are close in their characteristics <ref type="bibr" target="#b20">[21]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: Satellite image</head><p>The complex structure of satellite and aerospace images does not allow us to solve these problems using only the spectral properties of the images. The result of segmentation of the satellite image (Fig. <ref type="figure" target="#fig_0">2</ref>) based on spectral characteristics showed that it is impossible to distinguish areas with a uniform texture using this method. Spectral properties of objects on the Earth's surface do not always provide complete information about the objects, as they depend on many factors, such as relief, soil type, climate, geographical location of the area. Additional a priori information such as image acquisition geometry and image context information must be used to improve the reliability of feature class decisions. To identify fruit tree plantations, textural properties of images were used in this work. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Calculation of the texture properties of an image based on the gray-level co-occurrence matrix</head><p>Texture analysis methods are widely used in the segmentation, analysis and interpretation of remote sensing imagery <ref type="bibr" target="#b21">[22]</ref>. Various approaches to the detection and description of textures are known: statistical, geometric, structural, spectral, and model-based <ref type="bibr" target="#b22">[23]</ref>. Texture analysis methods based on a one-dimensional frequency histogram do not take into account the relative position of image pixels. They allow one to take into account only the group properties of the pixels belonging to one object in an aerial image.</p><p>The adjacency matrix of gray levels, or gray-level co-occurrence matrix (GLCM) <ref type="bibr" target="#b23">[24]</ref>, allows one to take into account the relative position of the pixels of the image and thus analyze textures with pronounced spatial regularity. The GLCM is calculated in one of the image channels and has dimensions L×L, where L is the number of gray levels in the image channel. It shows how often pixels with value i border pixels with value j in the horizontal (0°), vertical (90°), or diagonal (45° and 135°) direction. We denote the GLCM as P:</p><p>P_{r,θ}(i, j) = |{((k, s), (t, v)) : I(k, s) = i, I(t, v) = j}|, (1)</p><p>where i, j = 1, ..., L are the brightness levels of the matrix P(i, j); I(k, s) and I(t, v) are the values of the image elements with the coordinates (k, s) and (t, v); r is the distance between the elements I(k, s) and I(t, v); θ is the angle between the elements I(k, s) and I(t, v) relative to the horizontal axis.</p><p>Based on the calculated adjacency matrices, the following textural feature indicators are calculated:</p><p>Mean value:</p><formula xml:id="formula_0">μ_i = μ_j = Σ_{i=0}^{L−1} [ i Σ_{j=0}^{L−1} P(i, j) ] . <label>(2)</label></formula><p>Energy:</p><formula xml:id="formula_1">T_e = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} [P(i, j)]² . <label>(3)</label></formula><p>Variance:</p><formula xml:id="formula_2">σ_i² = Σ_{i=0}^{L−1} [ (i − μ_i)² Σ_{j=0}^{L−1} P(i, j) ] . <label>(4)</label></formula><p>Homogeneity:</p><formula xml:id="formula_3">T_h = Σ_{i=0}^{L−1} Σ_{j=0}^{L−1} P(i, j) / (1 + |i − j|) ; <label>(5)</label></formula><p>where P(i, j) is the frequency of occurrence in the window of two pixels with brightness i and j at an angle α and a distance d; σ² is the root mean square deviation of the pixel values in the window. By utilizing statistics (2)-(5), it is possible to generate texture features that consider the relative positions of neighboring pixels within a given window. As a result, these features are particularly effective at describing textures that exhibit significant spatial regularity. The following measures also apply:</p><p>The second angular moment</p><formula xml:id="formula_4">T_2 = Σ_{i=1}^{L} Σ_{j=1}^{L} (P(i, j) / M)² , <label>(6)</label></formula><p>where M is the total number of pairs of adjacent elements, is a measure of image homogeneity. For d = 1, α = 0, M = 2 L_y (L_x − 1). Contrast</p><formula xml:id="formula_6">T_C = Σ_{n=0}^{L−1} n² [ Σ_{i=1}^{L} Σ_{j=1}^{L} (P(i, j) / M) ] , |i − j| = n , <label>(7)</label></formula><p>is determined by the magnitude of the local variations of pixel values: the larger it is, the higher the contrast.</p><p>The correlation coefficient</p><formula xml:id="formula_7">T_cc = σ_x⁻¹ σ_y⁻¹ [ Σ_{i=1}^{L} Σ_{j=1}^{L} i j (P(i, j) / M) − m_x m_y ] , <label>(8)</label></formula><p>where m_x, m_y, σ_x, σ_y are the mean values and root mean square deviations of p_x(i) = Σ_{j=1}^{L} P(i, j) / M and p_y(j) = Σ_{i=1}^{L} P(i, j) / M, respectively, is a measure of the linearity of the regression dependence of brightness in the image.</p><p>The variance</p><formula xml:id="formula_8">T_D = Σ_{i=1}^{L} Σ_{j=1}^{L} (i − m)² (P(i, j) / M) , <label>(9)</label></formula><p>determines the brightness variations from the average value. The entropy</p><formula xml:id="formula_9">T_H = − Σ_{i=1}^{L} Σ_{j=1}^{L} (P(i, j) / M) ln (P(i, j) / M) , <label>(10)</label></formula><p>characterizes the uneven distribution of the brightness of image elements. The above characteristics were calculated for test satellite images.</p></div>
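As an illustration of the GLCM statistics discussed above, the following is a minimal NumPy-only sketch (the function names and the toy image are our own; production code would typically use scikit-image's `graycomatrix`/`graycoprops` instead). It builds the co-occurrence matrix for a single horizontal offset and computes the mean, energy, variance, homogeneity, contrast, and entropy measures from the normalized matrix.

```python
import numpy as np

def glcm(img, dr=0, dc=1, levels=8):
    """Gray-level co-occurrence matrix for a single offset (dr, dc)."""
    P = np.zeros((levels, levels), dtype=np.float64)
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            P[img[r, c], img[r + dr, c + dc]] += 1
    return P

def texture_features(P):
    """Texture measures computed from a GLCM, following eqs. (2)-(10)."""
    M = P.sum()                       # total number of pixel pairs
    p = P / M                         # normalized co-occurrence probabilities
    L = P.shape[0]
    i, j = np.mgrid[0:L, 0:L]
    mu = (i * p).sum()                            # mean
    energy = (p ** 2).sum()                       # energy / second angular moment
    var = ((i - mu) ** 2 * p).sum()               # variance
    homog = (p / (1 + np.abs(i - j))).sum()       # homogeneity
    contrast = ((i - j) ** 2 * p).sum()           # contrast
    nz = p[p > 0]                                 # avoid log(0)
    entropy = -(nz * np.log(nz)).sum()            # entropy
    return dict(mean=mu, energy=energy, variance=var,
                homogeneity=homog, contrast=contrast, entropy=entropy)

# toy 4-level image with a regular horizontal stripe texture
img = np.array([[0, 0, 1, 1],
                [3, 3, 2, 2],
                [0, 0, 1, 1],
                [3, 3, 2, 2]], dtype=int)
feats = texture_features(glcm(img, levels=4))
```

For segmentation, such features are computed in a sliding window around each pixel, giving a feature vector per pixel that the multi-scale segmentation can operate on.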
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Texture image segmentation</head><p>Image segmentation is a crucial stage in image processing that involves partitioning an image into homogeneous areas based on shared pixel characteristics. This process has a significant impact on subsequent calculations of object properties and classification results <ref type="bibr" target="#b24">[25]</ref>. To achieve the most accurate outcomes, it is essential to carefully select the segmentation method and the parameter values best suited to the specific problem at hand. Segmentation methods can be divided into automatic ones and interactive ones, the latter requiring the participation of the user. Automatic methods are also divided into two classes: 1) selection of image areas with properties specific to a particular subject area (marker methods, binarization); 2) division of the image into homogeneous regions. The methods that divide the image into homogeneous areas are the most versatile, since they are not focused on a specific subject area or specific analysis tasks. Such algorithms are the most widespread in computer vision; they include watershed methods, boundary detection methods, and methods based on multidimensional histogram clustering.</p><p>Assessing the quality of segmentation methods is not a straightforward process, as there is no universally accepted objective criterion for doing so. The optimal choice of method ultimately depends on the specific problem that needs to be addressed. To facilitate comparisons between segmentation techniques, reference image databases with known "ground-truth" segmentations can be utilized to evaluate the quality of each method's performance. 
One of the problems solved in this section is determining which segmentation methods and which parameter values are most suitable for the classification of multidimensional photogrammetric images with high spatial resolution.</p><p>Multi-scale segmentation of an image based on the calculated texture properties was used in this work to select areas of the image corresponding to a specific texture <ref type="bibr" target="#b25">[26]</ref>. The multi-scale (multiresolution) segmentation method is based on the technique of sequential merging of adjacent image elements. It is an optimization procedure that minimizes the average heterogeneity of image objects. The multi-scale segmentation method was applied to the satellite image shown in Fig. <ref type="figure">1</ref> to identify areas with similar textural characteristics. The calculated textural characteristics are taken as input data: the mean value, the variance, the homogeneity, the second angular moment, the correlation, the contrast, the divergence, and the entropy. Fig. <ref type="figure">3</ref> shows the result of segmentation of the satellite image (Fig. <ref type="figure">1</ref>) based on its textural features. The segments belonging to the texture corresponding to a tree plantation are separated using a supervised classification method based on the parallelepiped algorithm. For supervised classification, reference areas are used, which are chosen by the operator in accordance with their belonging to a certain information class. The spectral features representing one class of pixels in the image are determined for each reference area. Each pixel is assigned to one of the classes by successively comparing it with all reference features. In supervised classification, the information classes and their number are determined first, followed by the determination of their corresponding spectral classes.</p><p>Parallelepiped algorithm. 
This classification algorithm is based on Boolean logic and statistical indicators of the training sample in n spectral ranges. First, for each class c and range k, the mean brightness value 𝜇 ck and the standard deviation 𝜎 ck of the training sample are calculated. Then the following rule is applied to classify the pixels of the image. A pixel belongs to a class if and only if its brightness BV ijk satisfies the following condition: 𝜇 ck − 𝜎 ck ≤ BV ijk ≤ 𝜇 ck + 𝜎 ck , (12) where c = 1, 2, 3, ..., m is the class and k = 1, 2, 3, ..., n is the spectral range.</p><p>If we denote the lower and upper bounds of the inequality as:</p><p>𝐿 ck = 𝜇 ck − 𝜎 ck , 𝐻 ck = 𝜇 ck + 𝜎 ck , (13) the inequality takes the form:</p><formula xml:id="formula_10">𝐿 ck ≤ BV ijk ≤ 𝐻 ck . (14)</formula><p>The set of points satisfying inequality ( <ref type="formula">14</ref>) forms a parallelepiped in the n-dimensional space of spectral features. If the brightness values of a pixel belong to the parallelepiped, the pixel belongs to this class. In this way, the segments of the image corresponding to the tree plantation are determined (Fig. <ref type="figure">4</ref>).</p></div>
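The parallelepiped rule above can be sketched in a few lines of NumPy. This is a minimal illustration under the stated definitions: bounds mu ± sigma per class and band, and a pixel assigned to the first class whose box contains it (the function names and the unclassified sentinel are our own conventions).

```python
import numpy as np

def fit_parallelepipeds(samples):
    """samples: {class_label: array of shape (n_pixels, n_bands)}.
    Returns per-class bounds L_ck = mu_ck - sigma_ck, H_ck = mu_ck + sigma_ck."""
    bounds = {}
    for c, X in samples.items():
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        bounds[c] = (mu - sigma, mu + sigma)
    return bounds

def classify(pixels, bounds, unclassified=-1):
    """Assign each pixel (row of shape (n_bands,)) to the first class
    whose parallelepiped contains it in every spectral band."""
    labels = np.full(len(pixels), unclassified)
    for c, (lo, hi) in bounds.items():
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[inside & (labels == unclassified)] = c
    return labels

# toy training samples in two spectral bands for two classes
samples = {0: np.array([[10, 10], [12, 11], [9, 12]]),
           1: np.array([[100, 100], [98, 101], [102, 99]])}
bounds = fit_parallelepipeds(samples)
labels = classify(np.array([[10, 11], [100, 100], [50, 50]]), bounds)
```

Pixels falling outside every class parallelepiped keep the unclassified label; overlapping boxes are resolved here simply by class order, which is one of the known limitations of the method.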
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Counting objects in satellite images</head><p>When the region of an image corresponding to a certain texture is found, it is possible to count the structural elements that make it up. This paper proposes an algorithm for counting objects in satellite images. The algorithm consists of selecting individual objects that create a texture based on their spectral characteristics.</p><p>The proposed algorithm consists of the following steps:</p><p>1. Selection and calculation of the spectral characteristics by which the search is carried out.</p><p>2. Binarization of the image according to the obtained spectral characteristics.</p><p>3. Finding connected components of the binarized image.</p><p>4. Selection of the connected components matching the size parameters.</p><p>5. Counting of the connected components.</p><p>At the first step, the spectral characteristics according to which objects are searched for are selected. When searching for objects such as trees, it is possible to use vegetation indices as spectral features: NDVI, SAVI, ARVI, EVI. The NDVI index is the most commonly used <ref type="bibr" target="#b26">[27]</ref>.</p><p>The Normalized Difference Vegetation Index (NDVI) is used to solve problems requiring quantitative estimates of vegetation cover. The physiological state of the plant cover is largely determined by the content of chlorophyll and the level of moisture. It is advisable to use relative indicators of the state of vegetation, in particular forests, obtained using spectral indices closely correlated with the level of plant chlorophyll and moisture <ref type="bibr" target="#b27">[28]</ref>. 
The relative vegetation index NDVI is an indicator of the amount of photosynthetically active biomass and is calculated according to the formula:</p><formula xml:id="formula_11">𝑁𝐷𝑉𝐼 = (𝑁𝐼𝑅 − 𝑅𝐸𝐷) / (𝑁𝐼𝑅 + 𝑅𝐸𝐷) ,<label>(15)</label></formula><p>where NIR is the reflectance in the near-infrared region of the spectrum and RED is the reflectance in the red region of the spectrum (channels 8 and 5 of WorldView-2 and WorldView-3 images, respectively). The calculation of NDVI is based on the two most stable areas of the spectral reflectance curve of vascular plants. The maximum absorption of solar radiation by chlorophyll takes place in the red region of the spectrum (0.6-0.7 μm); the IR region (0.7-1.0 μm) is the region of maximum reflection by leaf cellular structures.</p><p>High photosynthetic activity, usually associated with dense vegetation, reduces the reflectance in the red region of the spectrum and increases it in the IR region. Comparing these indicators with each other allows one to clearly distinguish plant objects from other environmental objects. The use of the normalized difference between the minimum and maximum reflectance allows one to reduce the effect of the image illumination and of the radiation absorption by the atmosphere. Natural objects not associated with vegetation have a fixed NDVI value, which allows using this parameter for their identification. Depending on the objects on the Earth's surface, the NDVI index takes different values; for example, it takes negative values (on the scale from -1 to 1) for water bodies. The main advantage of NDVI is the ease of calculation: no additional data are required except for the remote sensing data themselves and the survey parameters. The NDVI index can be calculated on the basis of any images that have spectral channels in the red and near-IR ranges. The NDVI index has many modifications: SAVI, ARVI, EVI, etc. 
They are designed to reduce the impact of interfering factors.</p><p>Several high spatial resolution images with plantations of different types of trees were investigated (Fig. <ref type="figure">5 and 7</ref>). The trees of a plantation usually have the same size, shape and spacing. This greatly facilitates the counting task in comparison with counting trees in forest areas, where the trees are located randomly. Three classes of objects are mainly present in the images used in this work: trees, soil, and tree shadows. Since trees in gardens and plantations are arranged in an orderly manner and have the same size, they are easily distinguished from the rest of the image pixels due to their spectral characteristics. Fig. <ref type="figure">5</ref> shows an image fragment and the NDVI index calculated on its basis (Fig. <ref type="figure">5 (b)</ref>). Fig. <ref type="figure">6</ref> shows the histogram of the NDVI index. Using the method of automatic histogram splitting, it is possible to distinguish objects with a high index value, which correspond to trees. The calculation of the vegetation index showed that in some images (Fig. <ref type="figure">7</ref>) its value is too high in the entire area corresponding to the tree plantation. In this case, the histogram of the image is shifted from the center, as shown in Fig. <ref type="figure">8</ref>. This makes it impossible to distinguish the objects that correspond to trees using histogram thresholding. In this case, it is possible to use other spectral characteristics of the image, in particular the NSVDI shadow identification index <ref type="bibr" target="#b28">[29]</ref>:</p><formula xml:id="formula_12">𝑁𝑆𝑉𝐷𝐼 = (𝑆 − 𝑉) / (𝑆 + 𝑉) ,<label>(16)</label></formula><p>where S is the image saturation and V is the brightness (value). To obtain the S and V components of the image, it is transformed from the RGB color model to the HSV color model. 
NSVDI takes values from -1 to 1; shadow areas have high index values. This work uses the NSVDI index to identify and count trees. The proposed object counting algorithm was applied to the image shown in Fig. <ref type="figure">1</ref>.</p><p>The first step of the algorithm is to find shadows in the image. This work used the image whose fragment is shown in Fig. <ref type="figure">1</ref>. The input image was transformed into the HSV color model. The obtained image was then used to find the NSVDI index (Fig. <ref type="figure" target="#fig_6">9</ref>). The pixels of the image that belong to the shadow areas take higher values, while the pixels that take smaller values belong to the non-shadow areas.</p><p>The second step of the algorithm is image binarization. An algorithm for automatically finding the optimal histogram threshold was applied to the image of the NSVDI index (Fig. <ref type="figure" target="#fig_7">10</ref>). The image of the NSVDI index is divided into two classes according to the optimal threshold <ref type="bibr" target="#b29">[30]</ref>. The third step of the algorithm is to find the connected components of the obtained binary image. In this way, we obtained numbered segments of the image corresponding to the shadows (Fig. <ref type="figure" target="#fig_8">11</ref>). The resulting image contains a large number of small segments that may correspond to noise, texture inhomogeneity, etc., while the large segments correspond to the shadows of individual trees.</p><p>The fourth step is to filter the connected components obtained at the third step by size. We discard the segments whose size is smaller than a given threshold, thus leaving only the segments corresponding to the tree shadows. It is then not difficult to count the remaining segments. Fig. <ref type="figure" target="#fig_8">11</ref> shows the result of distinguishing individual objects using the connected components. 
The OpenCV computer vision library was used for image processing and for visualizing the results. Processing the test image yielded 773 image elements corresponding to trees (Fig. <ref type="figure" target="#fig_8">11</ref>).</p></div>
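<p>Steps three and four, labelling connected components and discarding small ones, are roughly what OpenCV's connectedComponentsWithStats provides; a library-free sketch with 4-connectivity follows (the min_size value is illustrative, not the threshold used in the experiments):</p>

```python
import numpy as np
from collections import deque

def count_objects(binary, min_size=5):
    """Label 4-connected components of a boolean mask and count those
    with at least min_size pixels; min_size is an illustrative threshold."""
    h, w = binary.shape
    labels = np.zeros((h, w), dtype=np.int32)
    current = kept = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and labels[i, j] == 0:
                current += 1            # start a new component
                size = 0
                queue = deque([(i, j)])
                labels[i, j] = current
                while queue:            # breadth-first flood fill
                    y, x = queue.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and labels[ny, nx] == 0):
                            labels[ny, nx] = current
                            queue.append((ny, nx))
                if size >= min_size:    # step four: size filtering
                    kept += 1
    return kept, labels
```

<p>Applied to the binarized NSVDI image, the returned count is the number of tree shadows, and hence of trees.</p>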
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions and plans for the future</head><p>The paper proposes a method for processing high spatial resolution satellite images based on textural characteristics to solve the applied problem of identifying areas of the Earth's surface occupied by agricultural plantations, in particular fruit tree plantations. An algorithm is also proposed for identifying individual objects in a selected image area in order to count them.</p><p>Texture segmentation was used to identify areas of the Earth's surface where agricultural plantations are located. The textural properties of the image were calculated on the basis of the gray-level co-occurrence matrix. The multiresolution segmentation method was applied to a satellite image to identify areas with similar textural characteristics. The calculated textural characteristics were taken as input data for segmentation: the mean value, the variance, the homogeneity, the angular second moment, the correlation, the contrast, the divergence, and the entropy.</p><p>The algorithm for selecting individual objects is based on the spectral properties of images. Various spectral characteristics were analyzed, and it was found that vegetation indices and shadow identification indices are well suited for identifying trees in large agricultural plantations. On the basis of the spectral properties of the images and the result of their binarization, the objects that create the texture were distinguished and then counted automatically.</p><p>The actual number of trees was counted on the studied samples and compared with the result of the algorithm. The obtained results showed sufficiently high counting accuracy. 
Further work will be devoted to improving the texture segmentation and to developing methods for automatically determining the minimum size of the segments corresponding to the shadows of individual trees.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The result of multiscale segmentation processing</figDesc><graphic coords="4,103.35,185.84,402.50,307.47" type="bitmap" /></figure>
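<p>The textural characteristics named above are all functions of the normalized gray-level co-occurrence matrix P(i, j); a compact sketch of the matrix and a few of the features, for a single illustrative pixel offset and quantization level count:</p>

```python
import numpy as np

def glcm(img, levels=8, dy=0, dx=1):
    """Normalized gray-level co-occurrence matrix for one pixel offset.

    img must already be quantized to integer values in [0, levels).
    """
    p = np.zeros((levels, levels), dtype=np.float64)
    h, w = img.shape
    for i in range(h - dy):
        for j in range(w - dx):
            p[img[i, j], img[i + dy, j + dx]] += 1
    return p / p.sum()

def texture_features(p):
    """Contrast, homogeneity, angular second moment, and entropy of P(i, j)."""
    i, j = np.indices(p.shape)
    eps = 1e-12  # guards log(0)
    return {
        "contrast": float(np.sum(p * (i - j) ** 2)),
        "homogeneity": float(np.sum(p / (1.0 + (i - j) ** 2))),
        "asm": float(np.sum(p ** 2)),
        "entropy": float(-np.sum(p * np.log(p + eps))),
    }
```

<p>A perfectly uniform patch gives zero contrast, an angular second moment of one, and near-zero entropy, while a textured patch moves each feature the other way; vectors of such features served as the segmentation input.</p>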
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Divergence</head><label></label><figDesc>Divergence: a measure of dispersion in pixel values based on the absolute difference in brightness levels. Entropy: 𝐸 = − ∑ ∑ 𝑃(𝑖, 𝑗)𝑙𝑜𝑔(𝑃(𝑖, 𝑗))</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>−</head><label></label><figDesc>positive and close to zero values for soils and dry vegetation; − maximal values for green vegetation; − intermediate values for different states of the vegetation cover.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :Figure 4 :</head><label>34</label><figDesc>Figure 3: Result of satellite image segmentation based on textural features</figDesc><graphic coords="8,104.35,122.60,400.00,304.08" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :Figure 6 :</head><label>56</label><figDesc>Figure 6: Histogram of the NDVI index</figDesc><graphic coords="9,168.85,567.98,271.47,186.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :Figure 8 :</head><label>78</label><figDesc>Figure 8: Histogram of the NDVI index</figDesc><graphic coords="10,136.32,319.30,336.55,236.99" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: NSVDI spectral index image</figDesc><graphic coords="11,112.93,122.59,369.15,281.98" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Result of the NSVDI spectral index binarization</figDesc><graphic coords="11,116.00,430.67,362.84,217.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Result of individual objects identification</figDesc><graphic coords="12,102.43,109.95,390.14,299.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,101.72,444.61,405.74,309.90" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Accuracy evaluation of automated object recognition using multispectral aerial images and neural network</title>
		<author>
			<persName><forename type="first">D</forename><surname>Mozgovoy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Hnatushenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vasyliev</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.2502905</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Tenth International Conference on Digital Image Processing</title>
				<meeting>Tenth International Conference on Digital Image Processing<address><addrLine>ICDIP</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page">108060H</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Land Cover and Land Use Monitoring Based on Satellite Data within World Bank Project</title>
		<author>
			<persName><forename type="first">N</forename><surname>Kussul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shelestov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lavreniuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kolotii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vasiliev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 10th International Conference Dependable Systems</title>
				<meeting>10th International Conference Dependable Systems<address><addrLine>DESSERT</addrLine></address></meeting>
		<imprint>
			<publisher>Services and Technologies</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="127" to="130" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A Very High Resolution Satellite Imagery Classification Algorithm</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">I</forename><surname>Shedlovska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Hnatushenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/ELNANO.2018.8477447</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 2018 IEEE 38th International Conference on Electronics and Nanotechnology</title>
				<meeting>2018 IEEE 38th International Conference on Electronics and Nanotechnology</meeting>
		<imprint>
			<publisher>ELNANO</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="654" to="657" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Cross-Comparison of Individual Tree Detection Methods Using Low and High Pulse Density Airborne Laser Scanning Data</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sparks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Corrao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M S</forename><surname>Smith</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs14143480</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">3480</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">G</forename><surname>Weinstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marconi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bohlman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zare</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>White</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs11111309</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">1309</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">TreeSatAI Benchmark Archive: a multi-sensor, multi-label dataset for tree species classification in remote sensing</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ahlswede</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Schulz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Helber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Förster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Arias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hees</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Demir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Kleinschmit</surname></persName>
		</author>
		<idno type="DOI">10.5194/essd-15-681-2023</idno>
	</analytic>
	<monogr>
		<title level="j">Earth System Science Data</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="681" to="695" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Individual tree detection from unmanned aerial vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest</title>
		<author>
			<persName><forename type="first">M</forename><surname>Mohan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Klauberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Jat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Catts</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Cardil</surname></persName>
		</author>
		<idno type="DOI">10.3390/f8090340</idno>
	</analytic>
	<monogr>
		<title level="j">Forests</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page">340</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Improving individual tree crown delineation and attributes estimation of tropical forests using airborne LiDAR data</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">Mohd</forename><surname>Jaafar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Woodhouse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Silva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Omar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Abdul Maulud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hudak</surname></persName>
		</author>
		<idno type="DOI">10.3390/f9120759</idno>
	</analytic>
	<monogr>
		<title level="j">Forests</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">759</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Automatic Detection of Oil Palm Tree from UAV Images Based on the Deep Learning Method</title>
		<author>
			<persName><forename type="first">L</forename><surname>Xinni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">K</forename><surname>Ghazali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fengrong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">M</forename><surname>Izzeldin</surname></persName>
		</author>
		<idno type="DOI">10.1080/08839514.2020.1831226</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="13" to="24" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Oil Palm Tree Detection and Counting for Precision Farming Using Deep Learning CNN</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kipli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Lee</forename><surname>Jaw Bin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Huai En</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joseph</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Gan Yong Kien</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Jalil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shamim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><surname>Mahmud</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-16-7597-3_45</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering</title>
		<title level="s">Lecture Notes in Networks and Systems</title>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Kaiser</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Ray</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Bandyopadhyay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Jacob</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Long</surname></persName>
		</editor>
		<meeting>the Third International Conference on Trends in Computational and Cognitive Engineering<address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">348</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Large-Scale Oil Palm Tree Detection from High-Resolution Remote Sensing Images Using Faster-RCNN</title>
		<author>
			<persName><forename type="first">J</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yuan</surname></persName>
		</author>
		<idno type="DOI">10.1109/IGARSS.2019.8898360</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of IGARSS 2019 -2019 IEEE International Geoscience and Remote Sensing Symposium</title>
				<meeting>IGARSS 2019 -2019 IEEE International Geoscience and Remote Sensing Symposium<address><addrLine>Yokohama, Japan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1422" to="1425" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Detection and counting of orchard trees from vhr images using a geometrical-optical model and marked template matching</title>
		<author>
			<persName><forename type="first">P</forename><surname>Maillard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Gomes</surname></persName>
		</author>
		<idno type="DOI">10.5194/isprsannals-III-7-75-2016</idno>
	</analytic>
	<monogr>
		<title level="j">ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences</title>
		<imprint>
			<biblScope unit="volume">III</biblScope>
			<biblScope unit="page" from="75" to="82" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Tree Species Classifications of Urban Forests Using UAV-LiDAR Intensity Frequency Data</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Gong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhu</surname></persName>
		</author>
	<idno type="DOI">10.3390/rs15010110</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page">110</biblScope>
			<date type="published" when="2022-12">Dec 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Predicting Individual Tree Diameter of Larch (Larix olgensis) from UAV-LiDAR Data Using Six Different Algorithms</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pukkala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Li</surname></persName>
		</author>
	<idno type="DOI">10.3390/rs14051125</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">1125</biblScope>
			<date type="published" when="2022-02">Feb 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">UAV-Based LiDAR Scanning for Individual Tree Detection and Height Measurement in Young Forest Permanent Trials</title>
		<author>
			<persName><forename type="first">F</forename><surname>Rodriguez-Puerta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gomez-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Martin-Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Perez-Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Prada</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs14010170</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">170</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Tree crown detection on multispectral VHR satellite imagery</title>
		<author>
			<persName><forename type="first">N</forename><surname>Daliakopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">G</forename><surname>Grillakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Koutroulis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">K</forename><surname>Tsanis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Photogrammetric Engineering &amp; Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">75</biblScope>
			<biblScope unit="page" from="1201" to="1211" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Oil palm tree detection with high resolution multi-spectral satellite imagery</title>
		<author>
			<persName><forename type="first">P</forename><surname>Srestasathiern</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rakwatin</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs6109749</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="9749" to="9774" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Development of young oil palm tree recognition using Haar-based rectangular windows</title>
		<author>
			<persName><forename type="first">S</forename><surname>Daliman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A R</forename><surname>Abu-Bakar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">H</forename><surname>Md Nor Azam</surname></persName>
		</author>
		<idno type="DOI">10.1088/1755-1315/37/1/012041</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 8th IGRSM International Conference and Exhibition on Remote Sensing &amp; GIS, IGRSM 2016</title>
				<meeting>the 8th IGRSM International Conference and Exhibition on Remote Sensing &amp; GIS, IGRSM 2016</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page">12041</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Remote Sensing Satellites for Digital Earth</title>
		<author>
			<persName><forename type="first">W</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Chen</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-32-9915-3_3</idno>
	</analytic>
	<monogr>
		<title level="m">Manual of Digital Earth</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Guo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Goodchild</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Annoni</surname></persName>
		</editor>
		<meeting><address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="55" to="123" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Semantic Segmentation Guided Coarse-to-Fine Detection of Individual Trees from MLS Point Clouds Based on Treetop Points Extraction and Radius Expansion</title>
		<author>
			<persName><forename type="first">X</forename><surname>Ning</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lv</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	<idno type="DOI">10.3390/rs14194926</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">4926</biblScope>
			<date type="published" when="2022-10">Oct 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Image Texture, Texture Features, and Image Texture Classification and Segmentation</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Hung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lan</surname></persName>
		</author>
	<idno type="DOI">10.1007/978-3-030-13773-1_1</idno>
	</analytic>
	<monogr>
		<title level="m">Image Texture Analysis</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="258" to="264" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">On Satellite Image Segmentation via Piecewise Constant Approximation of Selective Smoothed Target Mapping</title>
		<author>
			<persName><forename type="first">V</forename><surname>Hnatushenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kogut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Uvarov</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.amc.2020.125615</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Mathematics and Computation</title>
		<imprint>
			<biblScope unit="volume">389</biblScope>
			<biblScope unit="page">125615</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Vector-difference texture segmentation method in technical and medical express diagnostic systems</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">N</forename><surname>Krylov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">P</forename><surname>Volkova</surname></persName>
		</author>
		<idno type="DOI">10.15276/hait.04.2020.2</idno>
	</analytic>
	<monogr>
		<title level="j">Herald of Advanced Information Technology</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="174" to="186" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Studying the Classification of Texture Images by K-Means of Co-Occurrence Matrix and Confusion Matrix</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Kaduhm</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Abduljabbar</surname></persName>
		</author>
		<idno type="DOI">10.30526/36.1.2894</idno>
	</analytic>
	<monogr>
		<title level="j">Ibn AL-Haitham Journal For Pure and Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="113" to="122" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Multiscale Optimized Segmentation of Urban Green Cover in High Resolution Remote Sensing Image</title>
		<author>
			<persName><forename type="first">P</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Feng</surname></persName>
		</author>
		<idno type="DOI">10.3390/rs10111813</idno>
	</analytic>
	<monogr>
		<title level="j">Remote Sens</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page">1813</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Multi-resolution segmentation parameters optimization and evaluation for VHR remote sensing image based on mean NSQI and discrepancy measure</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Jing</surname></persName>
		</author>
		<idno type="DOI">10.1080/14498596.2019.1615011</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Spatial Science</title>
		<imprint>
			<biblScope unit="page" from="253" to="278" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Significant Remote Sensing Vegetation Indices: A Review of Developments and Applications</title>
		<author>
			<persName><forename type="first">J</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Su</surname></persName>
		</author>
		<idno type="DOI">10.1155/2017/1353691</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Sensors</title>
		<imprint>
			<biblScope unit="volume">2017</biblScope>
			<biblScope unit="page">1353691</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Libraries for Remote Sensing Data Classification by K-Means Clustering and NDVI Computation in Congo River Basin</title>
		<author>
			<persName><forename type="first">P</forename><surname>Lemenkova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Debeir</surname></persName>
		</author>
		<idno type="DOI">10.3390/app122412554</idno>
	</analytic>
	<monogr>
		<title level="j">Appl. Sci</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">12554</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Automatic shadow detection in aerial and terrestrial images</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">L</forename><surname>De Souza Freitas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">M</forename><surname>Da Fonseca Reis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M G</forename><surname>Tommaselli</surname></persName>
		</author>
		<idno type="DOI">10.1590/s1982-21702017000400038</idno>
	</analytic>
	<monogr>
		<title level="j">Boletim de Ciências Geodésicas</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="578" to="590" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Shadow detection and removal using a shadow formation model</title>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">I</forename><surname>Shedlovska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Hnatushenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/dsmp.2016.7583537</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 2016 IEEE First International Conference on Data Stream Mining &amp; Processing</title>
				<meeting>2016 IEEE First International Conference on Data Stream Mining &amp; Processing</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="187" to="190" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
