<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Analysis of the Influence of Vegetation Index Choice on the Classification of Satellite Images for Monitoring Forest Pathology*</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Bryansk State Technical University</institution>
          ,
          <addr-line>Bryansk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>0000</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Rational use of natural resources and control over their recovery, as well as over destruction due to natural and technogenic causes, is currently one of the most urgent problems of the humanity. Forests are no exception. Multispectral images from Earth's satellites are most often used for monitoring changes in forest planting. This is due to the fact that merging images taken in certain spectra makes it possible to recognize vegetation containing chlorophyll quite well. It also allows to detect changes in the level of chlorophyll, which shows the differences between healthy and damaged plants. Large areas of planted forests create the need to process huge amounts of data, which is difficult to do manually. One of the most important stages of image processing is the classification of objects in these images. This paper deals with various classification methods used to solve the problem of classifying images of remote sensing of the Earth. As a result, it was decided to evaluate the accuracy of classification methods on various vegetation indices. In the course of the study, the evaluation algorithm was determined, as well as one of the options for analyzing the results obtained. Conclusions were made about the work of classification methods on different vegetation indices.</p>
      </abstract>
      <kwd-group>
        <kwd>Remote Sensing of the Earth</kwd>
        <kwd>Forest Pathology Monitoring</kwd>
        <kwd>Vegetation Indices</kwd>
        <kwd>Image Processing</kwd>
        <kwd>Methods of Image Classification</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        Today, wood remains a valuable material in many industries, so deforestation has
become a profitable business. It often happens illegally, without control and without
regard for the damage done to forest plantings and the environment. Major damage
to forests is also caused by natural phenomena, such as droughts or windfalls, and by
forest pathologies such as tree diseases or insect pests, which pose an even bigger
problem. In addition, forest fires destroy more than a million hectares of forest per
year. For these reasons, it is necessary to monitor the state of the forest constantly [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. (Publication supported by RFBR grant № 19-07-00844.)
      </p>
      <p>
        The main source of data for monitoring the state of forests is digital images obtained
by artificial Earth satellites. Because of the vast forest territories, it is necessary to track
dozens of images for a single region, and taking their update rate into account (for the
Sentinel-2 satellite system, for example, every 2-3 days), the volume of processed
information increases tenfold [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
      </p>
      <p>
        At the moment, the monitoring system operation can be divided into three parts. The
first part consists of selecting a suitable satellite image, which will be a reference. The
essence of this stage is to search for an image in which the region of interest will not be
blocked by interference, such as clouds, cloud shadows, and so on [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The next stage
is the processing of space images. The problem when working with satellite images is that
the image is taken in different spectra that are difficult for humans to process, so
it is necessary to pre-process the image, i.e. to construct a vegetation index. Then, in
order to search for objects of interest in the image, we need to make a training sample,
classify the satellite image, and vectorize the classification results. Afterwards, the
described actions should be performed for another image of the same territory obtained
after a period of time. The final step is to compare the results of work on the reference image
and the new one. This algorithm is iterative and repeats throughout the vegetation
season [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The complexity of the algorithm is that it is necessary to involve experts to
process images, since monitoring systems are not able to identify problem regions
automatically.
      </p>
      <p>The relevance of this topic is due to the fact that forest monitoring involves checking
large amounts of data received from satellites. At the same time, most of the work is
performed manually and takes a long time, so it is necessary to execute some stages of
data processing semi-automatically or automatically. For example, the region of interest
may be blocked by clouds or other interference, yet the operator still spends time
checking such images. Therefore, it is necessary to investigate methods for identifying
space images suitable for monitoring and to automate this stage. Given that remote
detection of forest pathologies relies on the fact that a stressed tree gradually dries
out, identifying and eliminating pathologies in the shortest possible time is a major
challenge. Therefore, it is necessary to reduce the operator's inefficient
working time as much as possible.</p>
    </sec>
    <sec id="sec-2">
      <title>Vegetation indices</title>
      <p>Almost all satellite systems provide medium and high-resolution images in the form of
multispectral images. This feature allows one to select the channels that provide
the most information about the objects under study, i.e. to cut off information about
extraneous objects and emphasize the data relevant to the task being solved.
The selected channels are combined according to certain rules, forming a single image.
This procedure is the first step in space image processing, so it is performed on all images
used in monitoring [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. Since it is necessary to recognize vegetation in the images in order to
monitor forests, specialized methods for merging image channels – vegetation indices –
are used.
      </p>
      <p>A vegetation index is an indicator calculated from operations on different
spectral ranges (channels) of remote sensing data, and it is related to the vegetation
parameters in a given pixel of the image.</p>
      <p>Let us consider the most common and well-established indices that are used in
research.</p>
      <sec id="sec-2-1">
        <title>Normalized Difference Vegetation Index</title>
        <p>
          Normalized Difference Vegetation Index (NDVI) is the most popular and frequently
used vegetation index, which takes positive values for vegetation, and the larger the
green phytomass, the higher the index is [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]. The index values are also affected by the
species composition of vegetation, its closeness, state, exposure, the angle of the
surface, and the color of the soil under thinned vegetation. NDVI is often used as one of
the tools for conducting complex types of analysis, which can result in maps of forest
and agricultural productivity, maps of landscapes and natural zones, soil, arid,
phytohydrological, phenological and other ecological and climatic maps.
        </p>
        <p>The index is calculated using the formula:</p>
        <p>NDVI = (NIR − RED) / (NIR + RED),   (1)
where NIR is the pixel value in the near-infrared region and RED is the pixel value
in the red region. NDVI varies between −1.0 and +1.0.</p>
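        <p>As an illustration, NDVI can be computed per pixel from the NIR and RED bands; the following is a minimal NumPy sketch (band names and values are hypothetical, not from the paper):</p>
```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - RED) / (NIR + RED); zero-filled where both bands are 0."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    denom = nir + red
    safe = np.where(denom == 0, 1.0, denom)  # avoid division by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe)

# Toy 2x2 reflectance bands: vegetation reflects strongly in NIR, weakly in red.
nir_band = np.array([[0.8, 0.6], [0.1, 0.4]])
red_band = np.array([[0.1, 0.2], [0.1, 0.4]])
index = ndvi(nir_band, red_band)  # values lie in [-1.0, +1.0]
```
        <p>High values correspond to dense green phytomass, while bare soil and water give values near or below zero.</p>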
      </sec>
      <sec id="sec-2-2">
        <title>Infrared Percentage Vegetation Index</title>
        <p>
          Infrared Percentage Vegetation Index (IPVI), in contrast to NDVI, does not subtract
the red component in the numerator, which makes the index faster to
compute [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ].
        </p>
        <p>The index is calculated using the formula:</p>
        <p>IPVI = NIR / (NIR + RED),   (2)
where NIR is the pixel value in the near-infrared region and RED is the pixel value
in the red region. The index varies between 0 and 1.</p>
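        <p>IPVI is an algebraic rescaling of NDVI into [0, 1], since NIR / (NIR + RED) = (NDVI + 1) / 2. This can be sketched and checked as follows (a hypothetical example, not from the paper):</p>
```python
import numpy as np

def ipvi(nir, red):
    """IPVI = NIR / (NIR + RED); values lie in [0, 1]."""
    nir = np.asarray(nir, dtype=np.float64)
    red = np.asarray(red, dtype=np.float64)
    return nir / (nir + red)

nir_band = np.array([0.8, 0.6, 0.3])
red_band = np.array([0.1, 0.2, 0.3])
ndvi_band = (nir_band - red_band) / (nir_band + red_band)
# IPVI carries the same information as NDVI, rescaled to [0, 1].
assert np.allclose(ipvi(nir_band, red_band), (ndvi_band + 1.0) / 2.0)
```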
      </sec>
      <sec id="sec-2-3">
        <title>Atmospherically Resistant Vegetation Index</title>
        <p>
          Atmospherically Resistant Vegetation Index (ARVI) was developed by Kaufman and
Tanre [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]. This index is an improved NDVI, designed to correct for the influence of the
atmosphere. It is most useful in regions with high atmospheric aerosol content, including
tropical areas contaminated with soot. The index is calculated using the formula:
ARVI = (NIR − Rb) / (NIR + Rb),   (3)
where Rb = RED − α ∗ (RED − BLUE); as a rule, α = 1 (for sparse vegetation cover
and an unknown atmosphere type, α = 0.5); NIR is the pixel value in the
near-infrared region; RED stands for the pixel value in the red region; BLUE is the pixel
value in the blue region. The index varies between −1 and 1.
        </p>
      </sec>
      <sec id="sec-2-4">
        <title>Enhanced Vegetation Index</title>
        <p>
          Enhanced Vegetation Index (EVI) is an optimized version of NDVI. It has advantages
when assessing the state of plants, since the influence of soil and atmosphere
on the index values is minimized [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ]. The index allows one to assess the state of
plants under conditions of both dense and thinned vegetation cover.
        </p>
        <p>The index is computed following this equation:</p>
        <p>EVI = ((NIR − RED) / (NIR + C1 ∗ RED − C2 ∗ BLUE + L)) ∗ (1 + L),   (4)
where BLUE stands for the pixel value in the blue region; RED is the pixel value in the
red region; NIR is the pixel value in the near-infrared region; the coefficients C1, C2 and L
are empirically defined as 6.0, 7.5 and 1.0 respectively. The index varies between
−1 and 1.</p>
      </sec>
      <sec id="sec-2-5">
        <title>Soil-Adjusted Vegetation Index</title>
        <p>
          Soil-Adjusted Vegetation Index (SAVI) is a vegetation index that tries to minimize the
impact of soil brightness by using a soil brightness correction factor [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ].
        </p>
        <p>The index is calculated using the formula:</p>
        <p>SAVI = ((NIR − RED) / (NIR + RED + L)) ∗ (1 + L),   (5)
where NIR is the pixel value in the near-infrared region; RED stands for the pixel value
in the red region; L is a canopy background adjustment factor. The index varies
between −1 and 1.</p>
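        <p>The ARVI, EVI and SAVI formulas above translate directly into code; a minimal sketch, assuming float reflectance bands (the default L = 0.5 for SAVI is an assumption, since the text does not fix its value):</p>
```python
import numpy as np

def arvi(nir, red, blue, alpha=1.0):
    """ARVI: Rb corrects the red band for atmospheric effects."""
    rb = red - alpha * (red - blue)
    return (nir - rb) / (nir + rb)

def evi(nir, red, blue, c1=6.0, c2=7.5, L=1.0):
    """EVI, with the empirically defined coefficients from the text."""
    return (nir - red) / (nir + c1 * red - c2 * blue + L) * (1.0 + L)

def savi(nir, red, L=0.5):
    """SAVI; L is the canopy background adjustment factor."""
    return (nir - red) / (nir + red + L) * (1.0 + L)

nir, red, blue = 0.8, 0.2, 0.1
values = arvi(nir, red, blue), evi(nir, red, blue), savi(nir, red)
```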
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Image classification</title>
      <p>The next stage of monitoring, after creating the vegetation index, is the search for
objects in the image - classification of the image. Currently, the most commonly used
approach for topical processing is relative classification, based on widely used
multispectral images and additionally collected data, which are necessary to establish a
correspondence between groups of pixels with similar characteristic values and classes of
the Earth's surface. These data can be collected in field studies, and the amount required
is more limited than in classical field methods, since classes must be identified only
for a small number of pixels [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
      </p>
      <p>There are two types of relative classification: supervised classification (with
training) and unsupervised classification (without training).</p>
      <p>The essence of the supervised classification is to assign each of the image pixels to
a specific class of objects on the ground, which corresponds to a certain area in the
characteristics space.</p>
      <p>Supervised classification includes several stages. The first step is to determine which
object classes will be allocated as a result of the entire procedure. These may include
vegetation types, agricultural crops, forest species, hydrographic objects, and so on. At
the second stage, typical pixels are selected for each of the object classes, i.e. a training
sample is formed. The third stage is the calculation of parameters, the "spectral image"
of each of the classes formed as a result of a set of reference pixels. The set of
parameters depends on the algorithm that is supposed to be used for classification. The fourth
stage of the classification procedure is to view the entire image and assign each pixel
to a particular class. The result of this stage is an image (classification map), as well as
a table that gives the coordinates of the pixel and the name of the class it belongs to.</p>
      <p>Unsupervised classification is based on a fully automatic distribution of pixels into
classes based on statistics of pixel brightness distribution. This type of classification is
used if it is initially unknown which objects are present in the image, or if the number
of objects is large. As a result, the machine itself gives the resulting classes.</p>
      <p>Let us consider the most common classification methods used in research.</p>
      <sec id="sec-3-1">
        <title>Minimum distance method</title>
        <p>This method is used when the spectral characteristics of different classes are similar
and their brightness ranges overlap. In the minimum distance method, the brightness of
a pixel is considered as a vector in the space of spectral characteristics. The spectral
distance between the reference vectors and the brightness vectors of all image pixels is
calculated, and the pixels are then distributed into classes: if the distance from a pixel's
vector to a reference vector is less than a predetermined value (set in advance), the
vector is assigned to that class. If the distance is greater than the specified value,
it is assigned to another class, or it does not belong to any of the classes.</p>
        <p>Minimum distance calculates the spectral distance between the pixel vector and the
average vector for each signature.</p>
        <p>Euclidean distance. Euclidean distance is a common distance function. It represents a
geometric distance in a multidimensional space:</p>
        <p>E = √(∑i=1..n |ti − xi|²),   (6)
where n is the number of ranges; i is a certain range; t is an unknown spectrum; x is a
reference spectrum; E is the Euclidean distance.</p>
        <p>Manhattan distance. Manhattan distance is the sum of the absolute
differences of the coordinates. In most cases, this measure of distance leads to the same
results as the usual Euclidean distance. However, with this measure the impact of
individual large differences is reduced (because they are not squared). The formula for
calculating the Manhattan distance is the following:</p>
        <p>M = ∑i=1..n |ti − xi|,   (7)
where n is the number of ranges; i is a certain range; t is an unknown spectrum; x is a
reference spectrum; M is the Manhattan distance.</p>
        <p>The disadvantage of this method is that it does not take into account the distribution
(dispersion) of the pixel brightness in the reference areas. This can lead to errors during
classification.</p>
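        <p>The minimum distance rule above can be sketched as follows (the class reference spectra and the threshold value are hypothetical):</p>
```python
import numpy as np

def classify_min_distance(pixels, references, threshold=None, metric="euclidean"):
    """Assign each pixel vector to the class of the nearest reference vector.

    pixels:     (n_pixels, n_bands) array of brightness vectors
    references: (n_classes, n_bands) array of class reference spectra
    Returns class indices; -1 marks pixels farther than the threshold from
    every reference, i.e. belonging to no class.
    """
    pixels = np.asarray(pixels, dtype=np.float64)
    references = np.asarray(references, dtype=np.float64)
    diff = pixels[:, None, :] - references[None, :, :]
    if metric == "euclidean":
        dist = np.sqrt((diff ** 2).sum(axis=2))  # Euclidean distance, eq. (6)
    else:
        dist = np.abs(diff).sum(axis=2)          # Manhattan distance, eq. (7)
    labels = dist.argmin(axis=1)
    if threshold is not None:
        labels[dist.min(axis=1) > threshold] = -1
    return labels

refs = [[0.1, 0.1], [0.8, 0.9]]                  # two class reference spectra
px = [[0.15, 0.05], [0.7, 0.95], [0.5, 0.5]]
labels = classify_min_distance(px, refs, threshold=0.3)
```
        <p>With the threshold set, the third pixel is left unclassified because it lies too far from both references; without a threshold it is simply assigned to the nearest class.</p>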
      </sec>
      <sec id="sec-3-2">
        <title>Method of spectral angle</title>
        <p>Classification by the method of spectral angle is used to compare the spectral
characteristics of an image with the spectral characteristics of references. The algorithm
determines the proximity between these two characteristics by calculating the spectral
angle between them. To do this, they are represented as vectors in n-dimensional space,
where n is the number of spectral channels.</p>
        <p>Since the method of spectral angle uses only the direction of vectors, it is not
sensitive to the absolute brightness of pixels, since it is the length of the vector that
determines the measure of their brightness. All possible brightness levels are treated in the
same way, since pixels with lower brightness are simply located closer to the origin of
coordinates of the scatterplot. The color of pixels corresponding to their class in the
n-dimensional characteristics space is determined by the direction of their radius vectors.</p>
        <p>The following formula is used to calculate the spectral angle:</p>
        <p>α = cos⁻¹((t⃗ ∙ x⃗) / (‖t⃗‖ ∗ ‖x⃗‖)),   (8)
where α is the spectral angle between the vectors x and t; t is an unknown spectrum; x is a
reference spectrum.</p>
        <p>The expression can also be represented as:</p>
        <p>α = cos⁻¹((∑i=1..nb ti ∗ xi) / ((∑i=1..nb ti²)^(1/2) ∗ (∑i=1..nb xi²)^(1/2))),   (9)
where nb is the number of image spectral channels.</p>
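        <p>A minimal sketch of the spectral angle computation (the spectra are hypothetical); the check illustrates the brightness invariance discussed above:</p>
```python
import numpy as np

def spectral_angle(t, x):
    """Spectral angle between an unknown spectrum t and a reference x."""
    t = np.asarray(t, dtype=np.float64)
    x = np.asarray(x, dtype=np.float64)
    cos_a = np.dot(t, x) / (np.linalg.norm(t) * np.linalg.norm(x))
    return np.arccos(np.clip(cos_a, -1.0, 1.0))  # clip guards rounding errors

# The angle depends only on direction: scaling a spectrum (changing its
# brightness) leaves the angle unchanged.
t = np.array([0.2, 0.4, 0.6])
angle_same = spectral_angle(t, 3.0 * t)                   # same direction
angle_diff = spectral_angle(t, np.array([0.6, 0.4, 0.2]))  # different shape
```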
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Assessment of classification accuracy</title>
      <p>
        An important step of the classification is to assess the accuracy of the results obtained.
This assessment is performed by comparing the image resulting from the classification
with field measurement data and other data, such as data of relevant thematic maps.
These materials are called reference data. This comparison is possible because each
pixel in the resulting image has geographical coordinates, and it is possible to compare
the type of surface that the pixel belongs to as a result of classification with the actual
surface type known from other sources. The accuracy of classification is assessed by
comparing the classification result with reference data, which are thematic maps, a set
of points studied in the field, etc. Points are selected on the resulting classification, and
the corresponding points on the reference data are considered. The comparison results
are recorded into a table called the matrix of errors (table 1). It contains the number of
right (located on the diagonal) and wrongly classified points [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
      </p>
      <p>The reliability of the obtained assessments of classification accuracy is achieved by
selecting a sufficient number of points for each of the classes obtained during
classification. In the best case, each point of the classification result is compared with the
reference data.</p>
      <p>If we add the diagonal elements (correctly recognized image points) and divide this
number by the total number of points involved in the assessment, we get the overall
classification accuracy. For each class, there are two values: the ratio of correctly
recognized pixels either to the row sum (the number of points in this class) or to the
column sum (the number of points in the reference data). A user error is a value that
indicates the probability that a point marked as class 2 in the classification result is
actually a class 2 point. The Kappa parameter is also calculated based on the matrix of
errors. This parameter compares the number of pixels in each of the matrix cells with
the number expected if the pixels were distributed at random.</p>
      <p>Kappa parameter is defined as follows:
κ = (N ∗ ∑i=1..m Dii − ∑i=1..m Ri ∗ Ci) / (N² − ∑i=1..m Ri ∗ Ci),   (10)
where κ is the Kappa parameter; N stands for the number of image pixels; m is the total
number of classes; ∑Dii stands for the sum of the diagonal elements of the error matrix
(the number of correctly classified pixels of the whole image); Ri is the total number of
pixels in row i (the pixel sum in row i); Ci is the total number of pixels in column i (the
pixel sum in column i).</p>
      <p>Kappa statistics can be calculated for each selected class. For a qualitative
assessment of map matching based on Kappa statistics, the following ratios are used: poor and
very poor matching if κ&lt;0.4, satisfactory if 0.4&lt;κ&lt;0.55, good if 0.55&lt;κ&lt;0.7, very good
if 0.7&lt;κ&lt;0.85, and excellent if κ&gt;0.85.</p>
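      <p>The accuracy assessment described above can be sketched using the standard error-matrix form of the Kappa statistic (the two-class toy matrix is hypothetical):</p>
```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Kappa from an error matrix.

    Rows are classes in the classification result, columns are classes in
    the reference data; the diagonal holds correctly classified points.
    """
    D = np.asarray(error_matrix, dtype=np.float64)
    N = D.sum()                     # total number of assessed points
    diag = np.trace(D)              # correctly classified points
    overall = diag / N
    R = D.sum(axis=1)               # row totals
    C = D.sum(axis=0)               # column totals
    chance = (R * C).sum()          # chance agreement term
    kappa = (N * diag - chance) / (N ** 2 - chance)
    return overall, kappa

# Toy matrix: background vs. deforestation.
matrix = [[45, 5], [5, 45]]
overall, kappa = accuracy_and_kappa(matrix)  # overall 0.9, kappa 0.8
```
      <p>Kappa discounts the agreement that would occur by chance, so it is lower than the overall accuracy whenever the matrix is not perfectly diagonal.</p>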
    </sec>
    <sec id="sec-5">
      <title>Results</title>
      <p>At the initial stage of supervised classification of a satellite image, it is
necessary to identify all classes of the underlying surface present in the territory.
The task of the classification research was to identify deforestation.</p>
      <p>The classification was performed using three methods: the minimum distance
method, which uses Euclidean distance, the minimum distance method, which uses
Manhattan distance, and the spectral angle method. NDVI, IPVI, ARVI, EVI, SAVI
indices were used as vegetation indices for preprocessing of satellite images.</p>
      <p>As a result of the classification of the image fragment, four types of underlying
surface (classes) are defined: deforestations (red), coniferous forests (dark green),
deciduous forests (light green), lakes (blue).</p>
      <p>The result of the classification methods on the selected vegetation indices is shown
in table 2.</p>
      <p>After receiving the results, the classification accuracy was assessed. Accuracy was
evaluated using the matrix of errors and Kappa statistics.</p>
      <p>An image provided by experts was used as reference data. A matrix of classification
errors was formed for the deforestation class (Table 3). The coniferous forest, deciduous
forest, and lake classes were combined into a single background class, while deforestations
formed a separate class.</p>
      <p>The following conclusions were made for a qualitative assessment of map matching
based on the results of Kappa statistics:
• To detect deforestation, the minimum distance method (Euclidean distance), the
minimum distance method (Manhattan distance), and the spectral angle method
showed excellent classification results, using the following indices as the vegetation
ones: NDVI, ARVI, EVI.
• For IPVI and SAVI indices, only two methods showed excellent results: the
minimum distance method (Euclidean distance), and the minimum distance method
(Manhattan distance).
• The spectral angle method performed poorly for the IPVI vegetation index; for
SAVI it performed very well, but not excellently.</p>
      <p>Based on the results of Table 3, Kappa statistics were calculated; Table 4 gives the
calculation results.</p>
      <p>The paper analyzes the monitoring of forest pathologies. The necessity to automate
some stages of the forest monitoring algorithm was identified. Empirical research was
conducted on the use of vegetation indices and methods of forest classification on space
images.</p>
      <p>The research reveals the relationship between the choice of vegetation index and the
classification method. Depending on the area under study, it is proposed to use the
appropriate index (for example, in areas with a tropical climate, it is better to use an
index that takes atmospheric effects into account (ARVI), etc.) together with the suitable
classification method to improve the effectiveness of the results.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <article-title>Forest code of the Russian Federation as amended on December 27, 2018 (part 4, article</article-title>
          60.5).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <source>Earth Observing System, Sentinel-2 Homepage</source>
          , https://eos.com/sentinel-2/c, last accessed
          <year>2020</year>
          /05/15.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Trubakov</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Trubakov</surname>
          </string-name>
          , А.,
          <string-name>
            <surname>Korostelyov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Titarev</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>Selection of Satellite Image Series for the Determination of Forest Pathology Dynamics Taking Into Account Cloud Coverage and Image Distortions Based on the Data Obtained from the Key Point Detector</article-title>
          .
          <source>Proceedings of the 29th International Conference on Computer Graphics and Vision</source>
          , Moscow, pp.
          <fpage>159</fpage>
          -
          <lpage>163</lpage>
          (
          <year>2019</year>
          ). DOI: 10.30987/graphicon-2019-2-159-163
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <source>The order of April 5, 2017, N 156</source>
          «
          <article-title>On approval of the state forest pathology monitoring procedure».</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Showengerdt</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <article-title>Remote sensing. Models and methods of image processing</article-title>
          . M.,
          <year>2010</year>
          . 560 p.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Pettorelli</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Vik</surname>
            ,
            <given-names>J. O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mysterud</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gaillard</surname>
            ,
            <given-names>J.-M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tucker</surname>
            ,
            <given-names>C. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stenseth</surname>
            ,
            <given-names>N. C.</given-names>
          </string-name>
          <article-title>Using the satellite-derived NDVI to assess ecological responses to environmental change</article-title>
          .
          <source>Trends in Ecology and Evolution</source>
          , vol.
          <volume>20</volume>
          , pp.
          <fpage>503</fpage>
          -
          <lpage>510</lpage>
          (
          <year>2005</year>
          ). DOI: 10.1016/j.tree.2005.05.011
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Crippen</surname>
            ,
            <given-names>R. E.</given-names>
          </string-name>
          ,
          <source>Calculating the Vegetation Index Faster. Remote Sensing of Environment</source>
          . vol
          <volume>34</volume>
          . pp.
          <fpage>71</fpage>
          -
          <lpage>73</lpage>
          (
          <year>1990</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Kaufman</surname>
            ,
            <given-names>Y. J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tanre</surname>
            <given-names>D.</given-names>
          </string-name>
          <article-title>Atmospherically resistant vegetation index (ARVI)</article-title>
          .
          <source>Proc. IEEE Int. Geosci. and Remote Sensing Symp</source>
          , IEEE, New York, pp.
          <fpage>261</fpage>
          -
          <lpage>270</lpage>
          (
          <year>1992</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Skakun</surname>
            ,
            <given-names>R.S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wulder</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Franklin</surname>
            ,
            <given-names>S.E.</given-names>
          </string-name>
          <article-title>Sensitivity of the thematic mapper enhanced wetness difference index to detect mountain pine beetle red-attack damage</article-title>
          .
          <source>Remote Sensing of Environment</source>
          . vol.
          <volume>86</volume>
          . pp.
          <fpage>433</fpage>
          -
          <lpage>443</lpage>
          (
          <year>2003</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Mozgovoy</surname>
            ,
            <given-names>D.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kravets</surname>
            ,
            <given-names>O.V.</given-names>
          </string-name>
          <article-title>Using multispectral images for classification of agricultural crops</article-title>
          .
          <source>Ekologiya i Noosphera (1-2)</source>
          , pp.
          <fpage>54</fpage>
          -
          <lpage>58</lpage>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Oreshkina</surname>
            ,
            <given-names>L.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shidlovsky</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kovalenok</surname>
            ,
            <given-names>V.G.</given-names>
          </string-name>
          <article-title>Comparison of classification methods for multi-zone satellite images</article-title>
          .
          <source>Proceedings of the Second Belorussia Space Congress</source>
          , 25-27 October, Minsk, Belarus: OIPI NAS of Belarus, pp. 205-208 (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Foody</surname>
            ,
            <given-names>G.M.</given-names>
          </string-name>
          <article-title>Status of land cover classification accuracy assessment</article-title>
          .
          <source>Remote Sensing of Environment (80)</source>
          , pp.
          <fpage>185</fpage>
          -
          <lpage>201</lpage>
          (
          <year>2002</year>
          ).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>