<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Using the Image Texture Properties</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Volodymyr Hnatushenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Yana Shedlovska</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor Shedlovsky</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vyacheslav Gorev</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Dnipro University of Technology</institution>
          ,
          <addr-line>19 Dmytra Yavornytskoho Ave., 49005 Dnipro</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>This paper focuses on identifying objects in satellite images using image texture properties, which is an important problem in agriculture. Texture segmentation can distinguish areas that correspond to tree plantations. Orchards and tree plantations can cover vast areas with thousands of trees, making the automation of harvest estimation crucial. Satellite images enable the creation of an effective automatic system for counting trees in plantations. In this work, we applied image texture segmentation to identify areas corresponding to agricultural plantations. We calculated textural properties of the image using the gray-level co-occurrence matrix, including the mean value, variance, homogeneity, second angular moment, correlation, contrast, divergence, and entropy, and employed multi-scale segmentation to distinguish areas of the image with specific textures. We proposed an algorithm for counting objects in satellite images based on identifying the individual objects that create a texture according to their spectral characteristics. The images used in this work primarily featured three object classes: trees, soil, and tree shadows. Since trees in gardens and plantations are arranged uniformly and have the same size, they can be easily distinguished from other image pixels based on their spectral characteristics. We analyzed the NDVI and NSVDI spectral indices for tree detection and used the automatic spectral index histogram splitting method to distinguish objects with a high index value corresponding to trees.</p>
      </abstract>
      <kwd-group>
        <kwd>remote sensing data</kwd>
        <kwd>segmentation</kwd>
        <kwd>texture</kwd>
        <kwd>NDVI</kwd>
        <kwd>NSVDI</kwd>
        <kwd>object identification</kwd>
        <kwd>tree counting</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        High spatial resolution images are of special practical value [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. These remote sensing data are used for monitoring the Earth's surface, mapping, tracking deforestation, assessing the consequences of natural disasters, and in many other areas of human activity [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Modern satellites such as the WorldView-2 and the WorldView-3 are capable of producing images with a spatial resolution of up to 0.3 m per pixel in the panchromatic channel and up to 1.24 m per pixel in the multispectral channels. The WorldView-3 satellite can cover up to 680,000 square kilometers of the Earth's surface per day, and the WorldView-2 up to one million square kilometers. The huge number of multi-channel satellite images arriving every day requires fast, high-quality processing to obtain useful information in a timely manner [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. This creates a need for automated computer methods of data processing. Satellite images are actively used in agriculture. With the help of images obtained in the near-infrared part of the spectrum, it is possible to assess the state and stages of crop growth and to determine the type of vegetation. Recently, aerial images have also been actively used in agriculture.
      </p>
      <p>EMAIL: Hnatushenko.V.V@nmu.one (V. Hnatushenko); Shedlovska.Y.I@nmu.one</p>
      <p>2023 Copyright for this paper by its authors.</p>
      <p>
        One of the urgent tasks of remote sensing data processing is the counting of trees in plantations.
Cultivation of fruit trees, almonds, walnuts and hazelnuts, oil palms is an important part of the
agricultural industry and a significant source of income in such countries as the USA, Malaysia, Brazil,
etc. [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. In order to effectively organize work and forecast the harvest, it is necessary to know the number
of trees in the cultivated area. Until now, human labor was used to count and assess the condition of
individual trees on plantations, which was very time-consuming and labor-intensive. Orchards and
tree plantations can cover huge areas and have thousands of trees, so the task of automating harvest
counting is very important. Thanks to satellite images, it is possible to create an effective automatic
system of counting trees in plantations [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>To solve this problem, it is necessary to choose effective methods of processing and analyzing data
obtained by means of remote sensing. The task of identifying and counting trees can be divided into
two subtasks: 1) identification of areas of the Earth's surface where there are agricultural plantations, in
particular fruit tree plantations; 2) identification of individual objects in the selected area of the image
for the purpose of counting them. In this work, these problems are solved using the methods of texture
analysis of digital images based on the adjacency matrix of gray levels, image segmentation based on
textural features, and selection of individual objects based on their spectral features.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Related works</title>
      <p>
        Attempts have been made repeatedly to create an automatic tree identification and counting system
[
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Remote sensing technology is extensively utilized for monitoring and quantifying canopy growth,
detecting potential plant diseases, and tracking changes within forest structures. Timely analysis of this
data is critical for optimizing yields and evaluating the response of forests to climate anomalies [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ].
      </p>
      <p>
        The cultivation of oil palm is a vital contributor to agricultural productivity in numerous developing
countries across the tropics. As such, conducting research to investigate and accurately quantify oil
palm cultivation is both valuable and meaningful. In several papers, CNN and R-CNN models were applied for tree detection and counting [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ].
      </p>
      <p>
        In paper [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ], an algorithm for searching for trees in the image using a geometric-optical tree model
was proposed, assuming that the center of the tree is at the point of maximum similarity between the
model and the sample. The dome-shaped crown of the tree was taken as the basis of the geometric-optical model of the tree. Information about the lighting in the image was used to create it. The sun height, azimuth, and tree width parameters were determined automatically for each individual image, but the result of automatic parameter determination was no better than the user's visual assessment and needed improvement.
      </p>
      <p>
        LiDAR data were used in [
<xref ref-type="bibr" rid="ref13 ref14 ref15">13, 14, 15</xref>
        ]. Using the LiDAR-based canopy height model and segmentation methods, the tree crowns were delineated.
      </p>
      <p>In paper [16], a method of counting fruit trees was also proposed. To find the crowns of individual
trees, the LoG filter was used, which gives the strongest response to round objects. Since the LoG filter
detects all objects that have a shape similar to the crown of a tree, the authors use vegetation indices
and take into account only those objects that are green vegetation. Binarization of the red channel of
the image and calculation of the NDVI vegetation index were used to find the vegetation in the image.</p>
      <p>Vegetation indices were used in paper [17] to identify individual trees in a plantation. The most
common vegetation indices were investigated, and those that give the maximum difference between the
spectral characteristics of trees and the background were selected. The borders of the plantations were
outlined manually on the pictures, the crowns of the trees were located as local maxima of the values
of the vegetation indices.</p>
      <p>In [18], an algorithm for searching for young palm trees in a satellite image using Haar features was
proposed. Seven Haar features describing the shape of the tree were taken. In order to avoid incorrect
identification of objects that are not trees, the obtained results were classified by the support vector machine (SVM) method.</p>
    </sec>
    <sec id="sec-4">
      <title>3. Materials and methods</title>
      <p>Satellite images obtained by the WorldView-2 and the WorldView-3 satellites were used in this
work. The WorldView-2 is the first commercial instrument with an eight-channel high-resolution
spectrometer that includes traditional spectral channels as well as four additional ones. They provide
higher accuracy in detailed analysis of the state of vegetation, selection of objects, analysis of the
coastline and coastal water area. In its characteristics, the WorldView-2 meets the highest requirements.
The data received from this satellite have a root mean square error (RMS) of no worse than 4 m without
ground reference points [19].</p>
      <p>The WorldView-3 satellite is designed for shooting in panchromatic and multispectral modes. Its camera equipment is completely similar to that installed on the WorldView-2 satellite. The geo-positioning accuracy in the plan is 6.5 m, or 4 m RMS, without additional correction of the plan coordinates by ground reference points. The WorldView-3 shoots in the following modes: VNIR (Visible and Near Infrared), the multispectral visible and near-infrared range, 8 channels in total; SWIR (Shortwave Infrared), the mid-infrared range, which allows shooting through haze, fog, smog, dust, smoke and clouds, 8 channels in total; CAVIS (Clouds, Aerosols, Vapors, Ice, Snow), which enables atmospheric correction, 12 channels in total with a spatial resolution of 30 m at nadir and wavelengths from 0.4 μm to 2.2 μm.</p>
      <p>One of the methods that ensure the extraction of useful information from remote sensing data is image segmentation. Segmentation makes it possible to identify homogeneous areas of the Earth's surface that correspond to certain natural objects [20].</p>
      <p>Analysis of a number of satellite images and aerial photographs has shown that each image contains several types of land cover (forest, grass, soil, etc.), which can be characterized by their spectral and textural properties. Fig. 1 shows a fragment of a satellite image. Texture fragments of vegetation can be divided into two main types: those visually different in spectral properties and structure, and those visually close in spectral properties and structure. The latter belong to the same class, for example "forest", "soil", or "grass". Texture fragments within the same class are close in their characteristics [21].</p>
      <p>The complex structure of satellite and aerospace images does not allow us to solve these problems
using only the spectral properties of the images. The result of segmentation of the satellite image (Fig. 2)
based on spectral characteristics showed that it is impossible to distinguish areas with a uniform texture
using this method. Spectral properties of objects on the Earth's surface do not always provide complete
information about the objects, as they depend on many factors, such as relief, soil type, climate,
geographical location of the area. Additional a priori information such as image acquisition geometry
and image context information must be used to improve the reliability of feature class decisions. To
identify fruit tree plantations, textural properties of images were used in this work.</p>
    </sec>
    <sec id="sec-5">
      <title>4. Calculation of the texture properties of an image based on the gray-level co-occurrence matrix</title>
      <p>Texture analysis methods are widely used in the segmentation, analysis and deciphering of remote sensing imagery [22]. Various approaches to the detection and description of textures are known: statistical, geometric, structural, spectral, and model-based [23]. Texture analysis methods based on a one-dimensional frequency histogram do not take into account the relative position of image pixels. They allow one to take into account only the group properties of pixels belonging to one object in an aerial image.</p>
      <p>The adjacency matrix of gray levels, or gray-level co-occurrence matrix (GLCM) [24], allows one to take into account the relative position of the pixels of the image and thus analyze textures with pronounced spatial regularity. The GLCM is calculated in one of the image channels and has dimensions L×L, where L is the number of gray levels in the image channel. It shows how often pixels with value i border pixels with value j in the horizontal (0°), vertical (90°), or diagonal (45° and 135°) direction. We denote the GLCM as P:</p>
      <p>P_{r,θ}(i, j) = |{((x₁, y₁), (x₂, y₂)) : I(x₁, y₁) = i, I(x₂, y₂) = j}|, (1)
where i and j are brightness levels (i, j = 1, …, L); I(x₁, y₁) and I(x₂, y₂) are the pixel values at coordinates (x₁, y₁) and (x₂, y₂); r is the distance between the two pixels; θ is the angle between them relative to the horizontal axis.</p>
      <p>On the basis of the GLCM, the following texture features are calculated. Here p(i, j) = P(i, j)/M is the frequency of occurrence in the window of two pixels with brightness i and j at an angle α and a distance d, and M is the total number of pairs of adjacent elements; for d = 1 and α = 0°, M = 2L_y(L_x − 1).</p>
      <p>Average value:
μ = Σ_i Σ_j i · p(i, j). (2)</p>
      <p>Variance:
σ² = Σ_i Σ_j (i − μ)² p(i, j) (3)
determines the brightness variations from the average value.</p>
      <p>Homogeneity:
H = Σ_i Σ_j p(i, j) / (1 + |i − j|). (4)</p>
      <p>The second angular moment (energy):
F₂ = Σ_i Σ_j p(i, j)² (5)
is a measure of image homogeneity.</p>
      <p>Contrast:
C = Σ_{n=0}^{L−1} n² Σ_{|i−j|=n} p(i, j) (6)
is determined by the magnitude of local variations of pixel values: the larger these are, the higher the contrast is.</p>
      <p>Correlation coefficient:
R = (1 / (σ_x σ_y)) Σ_i Σ_j (i · j · p(i, j) − m_x m_y), (7)
where m_x, m_y and σ_x, σ_y are the mean values and root mean square deviations of the marginal distributions
p_x(i) = Σ_j p(i, j), (8)
p_y(j) = Σ_i p(i, j). (9)
The correlation coefficient is a measure of the linearity of the regression dependence of brightness in the image.</p>
      <p>Divergence:
D = Σ_i Σ_j |i − j| p(i, j) (10)
characterizes the difference in pixel values based on the absolute difference in brightness levels.</p>
      <p>Entropy:
E = −Σ_i Σ_j p(i, j) ln p(i, j) (11)
characterizes the uneven distribution of brightness of image elements.</p>
      <p>By utilizing these statistics, it is possible to generate texture features that consider the relative positions of neighboring pixels within a given window. As a result, these features are particularly effective at describing textures that exhibit significant spatial regularity. The above characteristics were calculated for test satellite images.</p>
    </sec>
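The GLCM features described above can be illustrated with a short NumPy sketch. It builds a symmetric co-occurrence matrix for the horizontal (0°) direction, so the normalizing count P.sum() equals M = 2L_y(L_x − 1) for d = 1; the function name and the choice of direction are illustrative, not from the source.

```python
import numpy as np

def glcm_features(img, d=1):
    """Texture features from a symmetric horizontal (0 degree) GLCM.

    img: 2D array of integer gray levels in [0, L).
    Returns the features used in the paper: mean, variance, homogeneity,
    second angular moment (asm), contrast, correlation, divergence, entropy.
    """
    img = np.asarray(img, dtype=np.int64)
    L = int(img.max()) + 1
    # Count horizontal neighbor pairs symmetrically: P[i, j] and P[j, i].
    P = np.zeros((L, L), dtype=np.float64)
    left, right = img[:, :-d].ravel(), img[:, d:].ravel()
    np.add.at(P, (left, right), 1)
    np.add.at(P, (right, left), 1)
    p = P / P.sum()                      # normalized frequencies p(i, j)
    i, j = np.indices((L, L))
    mu = (i * p).sum()
    feats = {
        "mean": mu,
        "variance": ((i - mu) ** 2 * p).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
        "asm": (p ** 2).sum(),           # second angular moment (energy)
        "contrast": ((i - j) ** 2 * p).sum(),
        "divergence": (np.abs(i - j) * p).sum(),
        "entropy": -(p[p > 0] * np.log(p[p > 0])).sum(),
    }
    # Correlation from the marginal distributions p_x and p_y.
    px, py = p.sum(axis=1), p.sum(axis=0)
    mx, my = (np.arange(L) * px).sum(), (np.arange(L) * py).sum()
    sx = np.sqrt(((np.arange(L) - mx) ** 2 * px).sum())
    sy = np.sqrt(((np.arange(L) - my) ** 2 * py).sum())
    feats["correlation"] = ((i * j * p).sum() - mx * my) / (sx * sy)
    return feats
```

For a perfectly alternating stripe texture the sketch yields maximal contrast and a correlation of −1, which matches the intuition behind features (6) and (7).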
    <sec id="sec-6">
      <title>5. Texture image segmentation</title>
      <p>Image segmentation is a crucial stage in image processing that involves partitioning an image into
homogenous areas based on shared pixel characteristics. This process has a significant impact on
subsequent calculations of object properties and classification results [25]. To achieve the most accurate
outcomes, it's essential to carefully select the optimal segmentation method and its parameters that are
best suited for the specific problem at hand. Segmentation methods can be divided into automatic ones
and interactive ones, the latter requiring the participation of the user. Automatic methods are also
divided into two classes: 1) selection of image areas with certain properties specific to a specific subject
area (marker methods, binarization); 2) dividing the image into homogeneous regions. The methods
that divide the image into homogeneous areas are the most versatile, since they are not focused on a
specific subject area or specific analysis tasks. Such algorithms are the most widespread in the computer
vision, they include methods of water separation, the method of boundary selection, and methods based
on multidimensional histogram clustering.</p>
      <p>Assessing the quality of segmentation methods is not a straightforward process, as there is no
universally accepted objective criterion for doing so. The optimal choice of method ultimately depends
on the specific problem that needs to be addressed. To facilitate comparisons between segmentation
techniques, reference image databases with known "ground-truth" segmentations can be utilized to
evaluate the quality of each method's performance. One of the problems addressed in this section is determining which segmentation methods and which values of their parameters are most suitable for use
in the classification of multidimensional photogrammetric images with high spatial resolution.</p>
      <p>The multi-scale segmentation of an image based on the calculated texture properties was used in this
work to select areas of the image corresponding to a specific texture [26]. The multi-scale segmentation
method (Multiresolution segmentation) is based on the technique of sequential merging of adjacent
image elements. This is an optimization procedure that minimizes the average heterogeneity of image
objects. The multi-scale segmentation method was applied to the satellite image shown in Fig. 1 to
identify areas with similar textural characteristics. The calculated textural characteristics are taken as
input data: the mean value, the variance, the homogeneity, the second angular moment, the correlation,
the contrast, the divergence, and the entropy.</p>
      <p>The segments belonging to the texture corresponding to a plantation of trees are separated using the supervised classification method based on the parallelepiped algorithm. For supervised classification, reference areas are used, which are chosen by the operator according to their belonging to a certain information class. The spectral features representing one class of pixels in the image are determined for each reference area. Each pixel is assigned to one of the classes by successively comparing it with all reference features. In supervised classification, the information classes and their number are determined first, followed by the determination of their corresponding spectral classes.</p>
      <p>Parallelepiped algorithm. This classification algorithm is based on Boolean logic and statistical indicators of the training sample in n spectral ranges. First, for each class c and spectral range k, the mean brightness value μ_ck and the standard deviation σ_ck of the training sample are calculated. Then the following rule is applied to classify the pixels of the image. A pixel belongs to a class if and only if its brightness BV_ijk satisfies the condition:</p>
      <p>μ_ck − σ_ck ≤ BV_ijk ≤ μ_ck + σ_ck, (12)
where c = 1, 2, …, m is the class and k = 1, 2, …, n is the spectral range.</p>
      <p>If we denote the lower and upper bounds of the inequality as:</p>
      <p>L_ck = μ_ck − σ_ck, H_ck = μ_ck + σ_ck, (13)
the inequality takes the form:</p>
      <p>L_ck ≤ BV_ijk ≤ H_ck. (14)</p>
      <p>The set of points corresponding to inequality (14) forms a parallelepiped in the n -dimensional space
of spectral features. If the brightness values of the pixels belong to the parallelepiped, the pixel belongs
to this class. In this way, the segments of the image corresponding to the tree plantation are determined
(Fig. 4).</p>
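The parallelepiped rule can be sketched in a few lines of NumPy, assuming (as in inequality (12)) bounds of one standard deviation around the per-band mean; the function names and the "first matching class wins" tie-breaking are assumptions of this sketch, not details from the source.

```python
import numpy as np

def fit_parallelepipeds(samples):
    """Per-class, per-band bounds L_ck = mu_ck - sigma_ck and
    H_ck = mu_ck + sigma_ck from training pixels.
    samples: dict mapping class id -> array of shape (n_pixels, n_bands)."""
    boxes = {}
    for c, X in samples.items():
        mu, sigma = X.mean(axis=0), X.std(axis=0)
        boxes[c] = (mu - sigma, mu + sigma)
    return boxes

def classify(pixels, boxes, unknown=-1):
    """Assign each pixel (rows of shape (n_pixels, n_bands)) to the first
    class whose parallelepiped contains it in every band, else `unknown`."""
    labels = np.full(len(pixels), unknown)
    for c, (lo, hi) in boxes.items():
        inside = np.all((pixels >= lo) & (pixels <= hi), axis=1)
        labels[(labels == unknown) & inside] = c
    return labels
```

A pixel falling outside every box stays unclassified, which is the usual behavior of the parallelepiped classifier and one reason it is often combined with a fallback rule.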
    </sec>
    <sec id="sec-7">
      <title>6. Counting objects in satellite images</title>
      <p>When the region of an image corresponding to a certain texture is found, it is possible to count the
structural elements that make it up. This paper proposes an algorithm for counting objects in satellite
images. The algorithm consists in selecting individual objects that create a texture based on their
spectral characteristics.</p>
      <p>The proposed algorithm consists of the following steps:
1. Selection and calculation of the spectral characteristics by which the search is carried out.
2. Binarization of the image according to the obtained spectral characteristics.
3. Finding connected components of the binarized image.
4. Selection of the connected components that correspond to the given parameters.
5. Counting of the connected components.</p>
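Steps 3-5 of the algorithm above can be sketched as follows. This is a plain-Python 4-connected labeling with a size filter (the paper's implementation uses OpenCV); the binarized mask and the minimum segment size are assumed to be given.

```python
import numpy as np
from collections import deque

def count_objects(mask, min_size):
    """Count connected components of a binary mask whose pixel count is at
    least min_size (steps 3-5: label components, filter by size, count)."""
    mask = np.asarray(mask, dtype=bool)
    seen = np.zeros_like(mask)
    count = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if seen[sy, sx]:
            continue
        # Breadth-first search over 4-connected foreground neighbors.
        queue, size = deque([(sy, sx)]), 0
        seen[sy, sx] = True
        while queue:
            y, x = queue.popleft()
            size += 1
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    queue.append((ny, nx))
        if size >= min_size:
            count += 1
    return count
```

Raising `min_size` discards the small noise segments mentioned in the text while keeping the large segments that correspond to tree shadows.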
      <p>At the first step, the spectral characteristics according to which objects are searched for are selected.</p>
      <p>When searching for such objects as trees, it is possible to use vegetation indices as spectral features:
NDVI, SAVI, ARVI, EVI. The NDVI index is commonly used [27].</p>
      <p>The normalized vegetation index NDVI (Normalized Difference Vegetation Index) is used to solve problems using quantitative estimates of vegetation cover. The physiological state of the plant cover is largely determined by the content of chlorophyll and the level of moisture. It is advisable to use relative indicators of the state of vegetation, in particular forests, obtained taking into account spectral indices closely correlated with the level of plant chlorophyll and moisture [28]. The relative vegetation index NDVI is an indicator of the amount of photosynthetically active biomass and is calculated according to the formula:
NDVI = (NIR − RED) / (NIR + RED), (15)
where NIR is the reflection of light in the near-infrared region of the spectrum and RED is the reflection in the red region of the spectrum (channels 8 and 5 of WorldView-2 and WorldView-3 images, respectively).</p>
      <p>The calculation of NDVI is based on the two most stable areas of the spectral reflectance curve of
vascular plants. The maximum absorption of solar radiation by chlorophyll takes place in the red region
of the spectrum (0.6-0.7 μm); the IR region (0.7-1.0 μm) is the region of maximum reflection of leaf
cellular structures.</p>
      <p>A high photosynthetic activity, usually associated with dense vegetation, reduces the reflection in
the red region of the spectrum and increases it in the IR one. Comparing these indicators with each other
allows one to clearly distinguish plant objects from other environmental objects. The use of the
normalized difference between the minimum and maximum reflectance allows one to reduce the effect
of the image illumination and the radiation absorption by the atmosphere. Natural objects not associated
with vegetation have a fixed NDVI value, which allows using this parameter for their identification.
Depending on the objects on the Earth's surface, the NDVI index takes the following values:
− negative values (when calculating in the range from -1 to 1) for water bodies;
− positive and close to zero values for soils and dry vegetation;
− maximal values for vegetative vegetation;
− intermediate values for different states of the vegetation cover.</p>
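Formula (15) reduces to a one-line, per-pixel computation; the sketch below assumes reflectance arrays for the NIR and red channels, and the small eps guard against division by zero is an implementation detail not in the source.

```python
import numpy as np

def ndvi(nir, red, eps=1e-12):
    """NDVI = (NIR - RED) / (NIR + RED), computed per pixel.

    nir, red: float arrays of reflectance in the near-infrared and red
    bands (channels 8 and 5 of WorldView-2/3 imagery in this paper)."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return (nir - red) / (nir + red + eps)  # eps guards against 0/0
```

Dense vegetation (high NIR, low red reflectance) gives values close to 1, while water (NIR below red) gives negative values, in line with the value ranges listed above.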
      <p>NDVI values increase with the development of green biomass and decrease with its drying. The
calculation of NDVI is based on a series of images taken at different times. This makes it possible to
obtain a dynamic picture of the behavior of the borders and characteristics of different types of
vegetation (monthly, seasonal, annual variations).</p>
      <p>The NDVI index is widely used in agriculture to perform:
− monitoring the development of agricultural crops during one season;
− vegetation cover mapping;
− drought monitoring, assessment of the productivity of ecosystems and agricultural territories;
− calculation of the soil moisture content, control of vegetation phases, etc.</p>
      <p>The main advantage of NDVI is the ease of calculation because no additional data are required
except for the remote sensing data and the survey parameters. The NDVI index can be calculated on
the basis of any images that have spectral channels in the red and near-IR ranges. The NDVI index has
many modifications: SAVI, ARVI, EVI, etc. They are designed to reduce the impact of interfering
factors.</p>
      <p>Several high spatial resolution images with plantations of different types of trees were investigated
(Fig. 5 and 7). The trees of a plantation usually have the same size, shape and spacing. This greatly
facilitates the task of counting in comparison with counting trees in forest areas where the trees are
located randomly. Three classes of objects are mainly present in the images used in this work: trees,
soil, and tree shadows. Since trees in gardens and plantations are arranged in an orderly manner and have the same size, they are easily distinguished from the rest of the image pixels by their spectral characteristics. Fig. 5 shows an image fragment and the NDVI index
calculated on its basis (Fig. 5 (b)). Fig. 6 shows the histogram of the NDVI index. Using the method of
automatic histogram splitting, it is possible to distinguish objects with a high index value, which
corresponds to trees.</p>
      <p>The calculation of the vegetation index showed that in some images (Fig. 7) its value is too high in the entire area corresponding to the tree plantation. In this case, the histogram of the image is shifted from the center, as shown in Fig. 8, which makes it impossible to distinguish the objects that correspond to trees using histogram thresholding.</p>
      <p>In this case, it is possible to use other spectral characteristics of the image, in particular the NSVDI shadow identification index [29]:
NSVDI = (S − V) / (S + V), (16)
where S is the image saturation and V is the brightness. To obtain the S and V components of the image, it is transformed from the RGB color model to the HSV color model. NSVDI takes values from −1 to 1; shadow areas have high index values.</p>
      <p>This work uses the NSVDI index to identify and count trees. For the image shown in Fig. 1, the proposed object counting algorithm was applied.</p>
      <p>The first step of the algorithm is to find shadows in the image. This work used the image whose fragment is shown in Fig. 1. The input image was transformed into the HSV color model. The obtained image was then used to find the NSVDI index (Fig. 9). The pixels of the image that belong to the shadow areas take higher values, while the pixels that take smaller values belong to the non-shadow areas.</p>
      <p>The second step of the algorithm is image binarization. An algorithm for automatically finding the
optimal histogram threshold was applied to the image of the NSVDI index (Fig. 10). The image of the
NSVDI index is divided into two classes according to the optimal threshold [30].</p>
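The source cites [30] for the automatic optimal-threshold search without naming the algorithm; Otsu's between-class-variance criterion is a common choice for this step and is sketched here as an assumption.

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the cut that maximizes between-class variance of a histogram
    (Otsu's method), splitting an index image into two classes."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(np.float64) / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                 # class-0 probability at each cut
    m = np.cumsum(p * centers)        # cumulative first moment
    mt = m[-1]                        # global mean
    # Between-class variance for every candidate cut position.
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mt * w0 - m) ** 2 / (w0 * (1 - w0))
    sigma_b[~np.isfinite(sigma_b)] = 0
    return centers[np.argmax(sigma_b)]
```

On a bimodal index image such as the NSVDI map, the returned threshold separates the shadow and non-shadow modes of the histogram.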
      <p>The third step of the algorithm is to find connected components of the obtained binary image. In this
way, we obtained numbered segments of the image corresponding to the shadows (Fig. 11). The
resulting image contains a large number of small segments that may correspond to noise, texture
inhomogeneity, etc., while the large segments correspond to the shadows from individual trees.</p>
      <p>The fourth step is to filter the connected components obtained at the third step by size. We discard the segments whose size is smaller than a given threshold, thus leaving only the segments that correspond to the tree shadows. It is then straightforward to count the remaining segments. Fig. 11 shows the result of distinguishing individual objects using the connected components.</p>
      <p>The OpenCV computer vision library was used for image processing and for visualizing the results. As a result of the test image processing, 773 image elements corresponding to trees were obtained (Fig. 11).</p>
    </sec>
    <sec id="sec-8">
      <title>7. Conclusions and plans for the future</title>
      <p>The paper proposes a method of processing satellite images of high spatial resolution based on
textural characteristics to solve the applied problem of identifying areas of the Earth's surface with
agricultural plantations, in particular fruit tree plantations. An algorithm for the identification of
individual objects in the selected area of an image for the purpose of their counting is proposed.</p>
      <p>Texture segmentation was used to solve the problem of identifying areas of the Earth's surface where
agricultural plantations are located. The calculation of textural properties of the image was performed
on the basis of the gray-level co-occurrence matrix. The multiresolution segmentation method was
applied to a satellite image to identify areas with similar textural characteristics. The calculated textural
characteristics were taken as input data for segmentation: the mean value, the variance, the
homogeneity, the second angular moment, the correlation, the contrast, the divergence, and the entropy.</p>
      <p>The algorithm for selecting individual objects is based on the calculation of the spectral properties
of images. Various spectral characteristics of images were analyzed. As a result, it was found that it is
advisable to use vegetation indices and shade identification indices to identify trees in large agricultural
plantations. On the basis of the spectral properties of the images and the result of their binarization, the
objects that create the texture were distinguished. These objects were counted by means of software.</p>
      <p>The actual number of trees was calculated on the studied samples and compared with the result of
the algorithm. The obtained results showed a sufficiently high accuracy of the calculation. Further work
will be devoted to improving texture segmentation, developing methods for automatically determining
the minimum size of segments corresponding to the shadows from individual trees.</p>
    </sec>
    <sec id="sec-9">
      <title>8. References</title>
      <p>[14] Y. Sun, X. Jin, T. Pukkala, F. Li, Predicting Individual Tree Diameter of Larch (Larix olgensis)
from UAV-LiDAR Data Using Six Different Algorithms, Remote Sens. 14 (2022) 1125. doi:
10.3390/rs14051125.
[15] F. Rodriguez-Puerta, E. Gomez-Garcia, S. Martin-Garcia, F. Perez-Rodriguez, E. Prada,
UAV-Based LiDAR Scanning for Individual Tree Detection and Height Measurement in Young Forest
Permanent Trials, Remote Sens. 14 (2022) 170. doi: 10.3390/rs14010170.
[16] I. N. Daliakopoulos, E. G. Grillakis, A. G. Koutroulis, I. K. Tsanis, Tree crown detection on
multispectral VHR satellite imagery, Photogrammetric Engineering &amp; Remote Sensing 75 (2009)
1201–1211.
[17] P. Srestasathiern, P. Rakwatin, Oil palm tree detection with high resolution multi-spectral satellite
imagery, Remote Sens. 6 (2014) 9749-9774. doi:10.3390/rs6109749.
[18] S. Daliman, S. A. R. Abu-Bakar, S. H. Md. Nor Azam, Development of young oil palm tree
recognition using Haar-based rectangular windows, in: Proceedings of the 8th IGRSM International
Conference and Exhibition on Remote Sensing &amp; GIS, IGRSM 2016, IOP Conf. Series: Earth and
Environmental Science 37, 012041, IOP Publishing, 2016. doi: 10.1088/1755-1315/37/1/012041.
[19] W. Fu, J. Ma, P. Chen, F. Chen, Remote Sensing Satellites for Digital Earth, in: Guo, H.,
Goodchild, M.F., Annoni, A. (eds) Manual of Digital Earth, Springer, Singapore, 2020, pp. 55–
123. doi: 10.1007/978-981-32-9915-3_3.
[20] X. Ning, Y. Ma, Y. Hou, Z. Lv, H. Jin and Y. Wang, Semantic Segmentation Guided
Coarse-to-Fine Detection of Individual Trees from MLS Point Clouds Based on Treetop Points
Extraction and Radius Expansion, Remote Sens. 14 (2022) 4926. doi: 10.3390/rs14194926.
[21] C.-C. Hung, E. Song, Y. Lan, Image Texture, Texture Features, and Image Texture Classification
and Segmentation, in: Image Texture Analysis, Springer, Cham, 2019, pp. 258-264. doi:
10.1007/978-3-030-13773-1_1.
[22] V. Hnatushenko, P. Kogut, M. Uvarov, On Satellite Image Segmentation via Piecewise Constant
Approximation of Selective Smoothed Target Mapping, Applied Mathematics and Computation 389
(2020) 125615 (26 pages). doi: 10.1016/j.amc.2020.125615.
[23] V. N. Krylov, N. P. Volkova, Vector-difference texture segmentation method in technical and
medical express diagnostic systems, Herald of Advanced Information Technology 3 (2020) 174–186.
doi: 10.15276/hait.04.2020.2.
[24] H. S. Kaduhm, H. M. Abduljabbar, Studying the Classification of Texture Images by K-Means of
Co-Occurrence Matrix and Confusion Matrix. Ibn AL-Haitham Journal For Pure and Applied
Sciences, 36 (2023) 113–122. doi: 10.30526/36.1.2894.
[25] P. Xiao, X. Zhang, H. Zhang, R. Hu, X. Feng. Multiscale Optimized Segmentation of Urban Green
Cover in High Resolution Remote Sensing Image, Remote Sens. 10 (2018) 1813 (20 pages). doi:
10.3390/rs10111813.
[26] Y. Chen, Q. Chen, C. Jing, Multi-resolution segmentation parameters optimization and evaluation
for VHR remote sensing image based on mean NSQI and discrepancy measure, Journal of Spatial
Science (2019) 253-278. doi: 10.1080/14498596.2019.1615011.
[27] J. Xue, B. Su, Significant Remote Sensing Vegetation Indices: A Review of Developments and
Applications, Journal of Sensors 2017 (2017) 1353691 (17 pages). doi: 10.1155/2017/1353691.</p>
      <p>
[28] P. Lemenkova, O. Debeir, Libraries for Remote Sensing Data Classification by K-Means
Clustering and NDVI Computation in Congo River Basin, DRC. Appl. Sci. 12 (2022) 12554. doi:
10.3390/app122412554.
[29] V. L. De Souza Freitas, B. M. Da Fonseca Reis, A. M. G. Tommaselli, Automatic shadow detection
in aerial and terrestrial images, Boletim de Ciências Geodésicas 23 (2017) 578–590. doi:
10.1590/s1982-21702017000400038.
[30] Y. I. Shedlovska, V. V. Hnatushenko, Shadow detection and removal using a shadow formation
model, in: Proceedings of 2016 IEEE First International Conference on Data Stream Mining &amp;
Processing, DSMP, 2016, pp. 187–190. doi: 10.1109/dsmp.2016.7583537.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Mozgovoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Hnatushenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasyliev</surname>
          </string-name>
          .
          <article-title>Accuracy evaluation of automated object recognition using multispectral aerial images and neural network</article-title>
          ,
          <source>in: Proceedings of Tenth International Conference on Digital Image Processing, ICDIP</source>
          <year>2018</year>
          ,
          <article-title>108060H</article-title>
          .
          doi: 10.1117/12.2502905.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kussul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shelestov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lavreniuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kolotii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vasiliev</surname>
          </string-name>
          .
          <source>Land Cover and Land Use Monitoring Based on Satellite Data within World Bank Project. in: Proceedings of 10th International Conference Dependable Systems, Services and Technologies</source>
          , DESSERT,
          <year>2019</year>
          , pp.
          <fpage>127</fpage>
          -
          <lpage>130</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Y.I.</given-names>
            <surname>Shedlovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.V.</given-names>
            <surname>Hnatushenko</surname>
          </string-name>
          .
          <article-title>A Very High Resolution Satellite Imagery Classification Algorithm</article-title>
          ,
          <source>in: Proceedings of 2018 IEEE 38th International Conference on Electronics and Nanotechnology</source>
          , ELNANO,
          <year>2018</year>
          , pp.
          <fpage>654</fpage>
          -
          <lpage>657</lpage>
          . doi: 10.1109/ELNANO.2018.8477447.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sparks</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Corrao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M. S.</given-names>
            <surname>Smith</surname>
          </string-name>
          ,
          <article-title>Cross-Comparison of Individual Tree Detection Methods Using Low and High Pulse Density Airborne Laser Scanning Data</article-title>
          ,
          <source>Remote Sens</source>
          <volume>14</volume>
          (
          <year>2022</year>
          ) 3480. doi: 10.3390/rs14143480.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.G.</given-names>
            <surname>Weinstein</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Marconi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Bohlman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zare</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>White</surname>
          </string-name>
          ,
          <article-title>Individual Tree-Crown Detection in RGB Imagery Using Semi-Supervised Deep Learning Neural Networks</article-title>
          .
          <source>Remote Sens</source>
          .
          <volume>11</volume>
          (
          <year>2019</year>
          )
          <article-title>1309</article-title>
          . doi: 10.3390/rs11111309.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahlswede</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Schulz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gava</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Helber</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Bischke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Förster</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Arias</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Hees</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Demir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Kleinschmit</surname>
          </string-name>
          , TreeSatAI Benchmark Archive:
          <article-title>a multi-sensor, multi-label dataset for tree species classification in remote sensing</article-title>
          ,
          <source>Earth System Science Data</source>
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <fpage>681</fpage>
          -
          <lpage>695</lpage>
          . doi: 10.5194/essd-15-681-2023.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mohan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Klauberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Jat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Catts</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Cardil</surname>
          </string-name>
          ,
          <article-title>Individual tree detection from unmanned aerial vehicle (UAV) derived canopy height model in an open canopy mixed conifer forest</article-title>
          ,
          <source>Forests</source>
          <volume>8</volume>
          (
          <year>2017</year>
          ) 340. doi: 10.3390/f8090340.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>W. Mohd</given-names>
            <surname>Jaafar</surname>
          </string-name>
          , I. Woodhouse,
          <string-name>
            <given-names>C.</given-names>
            <surname>Silva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Omar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Abdul Maulud</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hudak</surname>
          </string-name>
          ,
          <article-title>Improving individual tree crown delineation and attributes estimation of tropical forests using airborne LiDAR data</article-title>
          ,
          <source>Forests</source>
          <volume>9</volume>
          (
          <year>2018</year>
          ) 759. doi: 10.3390/f9120759.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xinni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. K.</given-names>
            <surname>Ghazali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fengrong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. M.</given-names>
            <surname>Izzeldin</surname>
          </string-name>
          ,
          <source>Automatic Detection of Oil Palm Tree from UAV Images Based on the Deep Learning Method, Applied Artificial Intelligence</source>
          ,
          <volume>35</volume>
          (
          <year>2021</year>
          )
          <fpage>13</fpage>
          -
          <lpage>24</lpage>
          . doi: 10.1080/08839514.2020.1831226.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>K.</given-names>
            <surname>Kipli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lee Jaw Bin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. Huai</given-names>
            <surname>En</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Joseph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gan Yong Kien</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Jalil</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. Shamim</given-names>
            <surname>Kaiser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Mahmud</surname>
          </string-name>
          .
          <article-title>Oil Palm Tree Detection and Counting for Precision Farming Using Deep Learning CNN</article-title>
          , in: Kaiser,
          <string-name>
            <given-names>M.S.</given-names>
            ,
            <surname>Ray</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Bandyopadhyay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            ,
            <surname>Jacob</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            ,
            <surname>Long</surname>
          </string-name>
          , K.S. (eds)
          <source>Proceedings of the Third International Conference on Trends in Computational and Cognitive Engineering. Lecture Notes in Networks and Systems</source>
          , vol
          <volume>348</volume>
          , Springer, Singapore,
          <year>2022</year>
          . doi: 10.1007/978-981-16-7597-3_45.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>J.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Xia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Fu</surname>
          </string-name>
          and
          <string-name>
            <given-names>S.</given-names>
            <surname>Yuan</surname>
          </string-name>
          ,
          <article-title>Large-Scale Oil Palm Tree Detection from High-Resolution Remote Sensing Images Using Faster-RCNN</article-title>
          ,
          <source>in: Proceedings of IGARSS 2019 - 2019 IEEE International Geoscience and Remote Sensing Symposium</source>
          , Yokohama, Japan,
          <year>2019</year>
          , pp.
          <fpage>1422</fpage>
          -
          <lpage>1425</lpage>
          , doi: 10.1109/IGARSS.2019.8898360.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Maillard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. F.</given-names>
            <surname>Gomes</surname>
          </string-name>
          ,
          <article-title>Detection and counting of orchard trees from vhr images using a geometrical-optical model and marked template matching</article-title>
          ,
          <source>in: ISPRS Annals of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume III-7</source>
          , (
          <year>2016</year>
          ) pp.
          <fpage>75</fpage>
          -
          <lpage>82</lpage>
          . doi: 10.5194/isprsannals-III-7-75-2016.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <article-title>Tree Species Classifications of Urban Forests Using UAV-LiDAR Intensity Frequency Data, Remote Sens</article-title>
          .
          <volume>15</volume>
          (
          <year>2023</year>
          )
          <article-title>110</article-title>
          . doi: 10.3390/rs15010110.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>