=Paper= {{Paper |id=Vol-2763/CPT2020_paper_s7-8 |storemode=property |title=Intelligent system of forest area recognition for tasks of geographically distributed economic systems |pdfUrl=https://ceur-ws.org/Vol-2763/CPT2020_paper_s7-8.pdf |volume=Vol-2763 |authors=А.А. Kuzmenko,D.А. Kondrashov,А.S. Sazonova,L.B. Filippova,R.А. Filippov }} ==Intelligent system of forest area recognition for tasks of geographically distributed economic systems== https://ceur-ws.org/Vol-2763/CPT2020_paper_s7-8.pdf
  Intelligent system of forest area recognition for tasks of geographically
                       distributed economic systems
                  А.А. Kuzmenko, D.А. Kondrashov, А.S. Sazonova, L.B. Filippova, R.А. Filippov
      alex-rf-32@yandex.ru | kuzmenko-alexandr@yandex.ru | asazonova@list.ru | libv88@mail.ru | redfil@mail.ru
                                 Bryansk State Technical University, Bryansk, Russia

    For a long period, our country has been undergoing radical transformations of the state economic system, associated with the final transition to a market system of management, the development of local self-government, and the independence of economic entities. In the new conditions of the emerging market, the issues of ensuring the sustainable development of territorial economic systems and sectors of the economy, which are the source and guarantor of social stability, employment, and a high level and quality of life of the population of the regions, come to the fore. The paper deals with an intelligent system for recognizing the dynamics of changes in forest areas based on automatic pattern recognition methods. The existing methods of processing graphical information and the classification and clustering methods that are of value within the framework of the problems being solved are reviewed, and several original algorithms are proposed. LTP and FFT algorithms were selected as feature extractors, of which the simplest and most productive option is LTP. Histogram equalization, median filtering, and Gaussian filtering were chosen for image pre-processing to eliminate noise and remove small image details. Euclidean and Mahalanobis distances were used as separability measures. A naive Bayes classifier is proposed for classification.
    Key words: intelligent systems, pattern recognition, geographically distributed, economic systems

Copyright © 2020 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).

1. Introduction
    The modern software market is able to offer a system for automating or solving almost any task. The problems of forest protection and forest management have not been ignored either: there is a wide range of software that automates accounting activities, is integrated with GIS systems, and provides forest planning capabilities, access to tax and cadastral maps, as well as acquisition and processing of remote sensing data.
    There are not many systems focused on automatic processing of satellite images, and their functionality is unique compared to their analogues. For example, the "ScanEx Image Processor" system is quite versatile and allows processing both the supplied database of images and images from the user's own sources, but the system is closed, provides a trial version only by agreement with the manufacturer, and does not allow modification of the algorithms used. "Forestry and land use" is focused only on processing the vendor's own database of images. The "KEDR" system is available only to state structures of the Amur and Primorye territories and does not even have open documentation. Such introductory conditions complicate the search for a turnkey system.

2. Materials and methods
    When image recognition is based on bitmap graphics, arrays of image pixels play the role of data arrays. Raw data sets carry extra information which, in addition to increasing computational complexity, can lead to overfitting of the classifiers. Feature extraction and classification algorithms are also sensitive, to different extents, to transformations of the data used.
    So, to create a stable algorithm for recognizing the forest texture, it is necessary to set:
− the image scale in metres per pixel (m/px);
− the optimal image segment size suitable for classification;
− an algorithm for reducing the amount of information in the image;
− an algorithm for equalizing the color of photos.
    The scale and size of the window can only be set experimentally, which will be done in the corresponding part of the work. Reducing the amount of information means applying filters to the image that suppress noise and unnecessary details. Color equalization involves equalizing the intensities in the channels used – the so-called equalization of the image histogram.

3. Image Filtering
    Most of the image transformation methods used in this paper are based on convolution. Correlation and convolution are two closely related concepts. Correlation is the process of moving the filter mask over an image and calculating the sum of the products of the mask element values and the pixel values that the corresponding mask elements fall on. The convolution mechanism is the same, except that the filter mask is pre-rotated by 180° [3,9].
    The filter, or convolution kernel, is a square or rectangular matrix with an odd number of rows and columns. The number is odd because the convolution result is assigned to the pixel under the response center of the kernel (Fig. 1).
    Fig. 1. Application of Sobel filter (edge detection)
    Convolution cannot be applied directly to the extreme pixels. This problem is solved by creating an intermediate image with the completed extreme rows and columns. The added pixels can either be of zero intensity or copy the extreme ones. The second method is used in this work.
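The mask rotation and the border handling described above can be sketched as follows. This is an illustrative NumPy implementation, not the authors' code; the test image and the Sobel kernel values are made-up examples.

```python
import numpy as np

def convolve2d(image, kernel):
    """Convolve an image with a kernel (i.e. correlation with the mask
    rotated by 180 degrees), copying the extreme rows/columns at the
    border, as chosen in the paper."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    # Rotating the mask by 180 degrees turns correlation into convolution.
    flipped = kernel[::-1, ::-1]
    # Intermediate image with completed extreme rows and columns.
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.empty_like(image, dtype=float)
    for y in range(image.shape[0]):
        for x in range(image.shape[1]):
            window = padded[y:y + kh, x:x + kw]
            # Sum of products of mask elements and covered pixels.
            out[y, x] = np.sum(window * flipped)
    return out

# Sobel kernel for vertical edges, as in Fig. 1.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]])
img = np.array([[0, 0, 10, 10],
                [0, 0, 10, 10],
                [0, 0, 10, 10]], dtype=float)
edges = convolve2d(img, sobel_x)
```

In practice a library routine such as `scipy.ndimage.convolve` (with `mode='nearest'` for the same border rule) would be used; the explicit loop only makes the mechanics visible.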
    Filtering methods are divided into the spatial and frequency domains. Processing methods in the spatial domain are based on direct manipulation of image pixels. Spatial processing is characterized by the equation [4]:
    g(x, y) = T[f(x, y)], (1)
where f(x, y) is an input image; g(x, y) is an output image; T is an operator on f at a certain point (x, y).
    The main approach to defining a neighborhood is to select a rectangular area around the original pixel (x, y). To find the value of g at a certain point (x, y), the values of f inside a certain neighborhood of the point are used. This approach is based on the use of masks – two-dimensional arrays of function values. The most well-known methods in this category are linear and median filtering.
    An averaging filter is used as a linear filter; its output value is the average value in its mask neighborhood. The same filter is used for removing image graininess caused by impulse noise.
    Fig. 2 shows an example of processing a noisy image with median and linear filters [4,10].
    Fig. 2. Sample of median and linear filtering
    Based on experiments [4], it is concluded that the median filter, which preserves element boundaries well and has a high speed, is more suitable for impulse noise.
    Filtering in the frequency domain is based on the Fourier transform. The transform means that any function that periodically reproduces its values can be represented as a sum of sines/cosines of different frequencies multiplied by some coefficients. This sum is called a Fourier series.
    One of the most important properties of the Fourier transform is that its result can be restored to the original form without loss of information.

4. Feature extractors
    As shown in [9], the use of images in their original form is ineffective within the classification task. The largest amount of data about the surface type in a photo is provided by the patterns in its structure. To obtain these patterns, special algorithms – feature extractors – are used [5]. The best-known feature extractors include artificial neural networks, algorithms based on Fourier transforms, and so-called key-point descriptors.
    The Fourier transform described above also has a discrete form that is suitable for digital image processing:
    X_k = Σ_{n=0}^{N−1} x_n · e^{−2πi·kn/N}, (2)
    x_n = (1/N) · Σ_{k=0}^{N−1} X_k · e^{2πi·kn/N}, (3)
where:
− X_k is the transformation result;
− x_n is the n-th value of the value vector;
− i is the imaginary unit;
− k is a complex sinusoid frequency;
− N is the length of the value vector.
    The original data for the algorithm is a vector of function values with a specified step. The result of the algorithm is a vector of complex numbers, for which the index is the frequency value, and the real and imaginary parts are the coordinates of the radius vector point.
    The amplitude and phase components of the signal are the modulus and the argument of the complex number respectively. The modulus is defined as the length of the radius vector:
    |X_k| = √(a² + b²), (4)
where a is the real part and b is the imaginary part.
    The argument is defined as the angle of the radius vector:
    φ_k = arctan(b / a). (5)
    The image cannot be represented as a one-dimensional vector of numbers without losing important information. To obtain the spectrum of a two-dimensional array of numbers, FFT is first applied to the columns, and then to the rows of the matrix formed (Fig. 3).
    Fig. 3. Feature extracting by means of FFT
    Using FFT gives well-separable vectors of class features, but the described algorithm has a number of disadvantages. First, FFT calculation is quite resource-intensive, especially compared to LTP. Second, the obtained vector describes only a single image segment, whereas, for example, it is sufficient to calculate LTP once for the entire image.
    The result of applying the local binary pattern (LBP) operator to the pixel matrix is a response matrix of values that characterize the brightness distribution in the neighborhood of each central pixel – the so-called bins. Based on the bin matrix, one can build an image histogram which, unlike the brightness histogram, characterizes not the color distribution but the image structure (Fig. 4).
    Fig. 4. Getting image histogram
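To illustrate how the bin matrix and its histogram arise, here is a minimal 8-neighbour LBP sketch in plain NumPy. The tiny test image is hypothetical, and the paper's actual operator may differ in sampling radius and bit order.

```python
import numpy as np

def lbp_histogram(img):
    """Basic LBP: each interior pixel gets a byte whose bits mark the
    neighbours that are >= the centre; the histogram of these codes
    (bins) describes texture rather than colour."""
    # Offsets of the 8 neighbours, clockwise from the top-left corner.
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    h, w = img.shape
    codes = np.zeros((h - 2, w - 2), dtype=int)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y, x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy, x + dx] >= c:
                    code |= 1 << bit
            codes[y - 1, x - 1] = code
    # 256-bin histogram of the response matrix.
    return np.bincount(codes.ravel(), minlength=256)

# Hypothetical 3x3 fragment with a single interior pixel.
img = np.array([[9, 9, 9],
                [1, 1, 1],
                [1, 1, 1]], dtype=float)
hist = lbp_histogram(img)
```

A library implementation such as `skimage.feature.local_binary_pattern` offers the uniform and rotation-invariant variants mentioned below.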

    Based on this, the task of searching for a forest in an image of the earth's surface can be solved by calculating the LBP histogram in a sliding window and comparing it with the standard.
    Another way to reduce the impact of noise, as well as to eliminate some of the texture details, is to introduce a threshold value T into the indicator condition. In this case, three different values can be set when building the code, taking into account the sign of the difference between the central pixel and its neighbors. This method was presented under the name "local ternary pattern" (LTP).
    In order to avoid an increase in the feature space, LTP is divided into two parts – the positive and the negative patterns (Fig. 5).
    Fig. 5. Local ternary pattern
    The dimension of the basic LBP result can be reduced in two specific ways – using only so-called uniform patterns, or patterns that are not sensitive to rotation of the pixel neighborhood.
    Some binary codes carry more information than others. Thus, a local binary pattern is called uniform if it contains no more than three series of "0" and "1" [7,11]. Uniform LBPs define only important local features of the image, such as line ends, edges, corners, and spots (Fig. 6), and also provide significant memory savings: the set of patterns is reduced from 2^P to P(P − 1) + 2.
    Fig. 6. Example of local features detected by LBP

5. Selection of algorithm characteristics
    The maximum window scale was selected as 1 m/px, which corresponds to the capabilities of most types of modern satellite cameras [2] and makes it easier to link to the metric area.
    The simplest way to determine the separability of forest texture classes is to cluster test images and then evaluate the result (Fig. 7).
    Fig. 7. Clustering algorithm, where lbpr is the LBP histogram of size w×h; ws is the window size; tr is the threshold
    To preserve the possibility of comparing classification results with similar ones based on FFT coefficients, it was decided to process squares with a side length equal to a power of two, which follows from the requirements of the FFT algorithm. Accordingly, the window size range 16×16 – 32×32 was selected: if the window size is more than 32×32, the number of arithmetic operations per pixel becomes critical. Since the area of a common pine crown, on the basis of which some of the main comparisons are made, is 8-10 square meters, a 16×16 segment completely covers from 1 to 4 adult trees, which still allows several trees to be covered by a sliding window.
    The texture of the forest is heterogeneous, and the selection of multiple clusters for a forest area is the second condition for applying a feature extractor, since one of the tasks being solved in the current work is the selection of forest stands of different species. The basic condition is a clear separation of the forest from other types of terrain. Since there is no need to allocate full-fledged clusters, the simplest algorithm is used – clusters are allocated by a specified threshold, and the first pattern belonging to the cluster is used as the cluster kernel.
    For comparison of texture patterns it is necessary to introduce a separability measure. There are many separability measures, such as Euclidean distance, city block distance, divergence, and many others [2]. The most common measure in machine learning problems is Euclidean distance, i.e. the distance between two points in n-dimensional space [2,12]. Euclidean distance does not take into account the overlap of class distributions and is not applicable at a low level of separability (Fig. 8), but there are modifications of this measure that eliminate its disadvantages. One of these modifications is the Mahalanobis measure.
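The two measures can be sketched as follows. The feature vectors and the small training set are made-up numbers, not data from the paper; the point is only that the Mahalanobis distance rescales each direction by the spread of the class, which is what lets it account for the class distribution.

```python
import numpy as np

# Hypothetical feature vector to classify and a reference standard.
x = np.array([2.0, 3.0])
standard = np.array([0.0, 0.0])

# Hypothetical training vectors of one class (rows = samples).
train = np.array([[0.2, 0.1], [-0.1, -0.3], [0.4, 0.2],
                  [-0.3, 0.1], [0.1, -0.2]])

# Euclidean distance: the plain distance between two points
# in n-dimensional feature space.
d_euc = float(np.linalg.norm(x - standard))

# Mahalanobis distance: measured from the class mean and weighted by
# the inverse covariance matrix estimated from the training set.
cov = np.cov(train, rowvar=False)
diff = x - train.mean(axis=0)
d_mah = float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))
```

Because the toy class is tightly clustered, the Mahalanobis distance of the same point comes out much larger than the Euclidean one.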
    Fig. 8. Types of class separability [2]
    Application of the Mahalanobis measure makes sense for classification, but not for clustering, since the covariance matrix is calculated from a training set for a class that does not yet exist.
    One pixel is considered as a clustering unit. A certain area is captured around the pixel, for which a histogram is built and compared with the standards. The cluster index is assigned to the center pixel. 17×17 and 33×33 are chosen as window sizes which, according to the data provided above, cover approximately one and four trees per window respectively.
    In the course of checking the separability of classes, it was found that, regardless of the characteristics of the feature extraction algorithm, structural features for different types of terrain can be almost indistinguishable. Fig. 9 shows the result of selecting the threshold.
    Fig. 9. Clustering results for window 33×33
    All the algorithms managed the task to some extent. LBP could not identify the forest, but it accurately identified the transitions between the main types of terrain. CSLBP and MLBP were able to separate the forest, while failing to separate the texture of the forest from that of the vegetable gardens. With the help of ULBP, it was possible to identify the main contours of forest stands, but the border lines (forest/field, forest/clearing) were excluded. The best result was achieved using the LTP method, which accurately marked the contours of the forest and the thickets near the road, while placing them in one cluster with the buildings of the village.
    Unlike other methods, LTP can be configured directly, without using filters, binarization, etc. Fig. 10 shows the results of LTP allocation for various threshold values. At a threshold of 0, the terrain types are almost indistinguishable. At a threshold of 14, the forest, detached trees and the buildings of the village are clearly distinguishable, but the buildings are indistinguishable from the trees. At a threshold of 50, only the lake bridges and part of the road could be identified.
    Fig. 10. LTP allocation for various threshold values

6. Classification tools
    After pre-processing the image and selecting the feature vectors, it is necessary to determine whether these vectors belong to any type of terrain, that is, to classify them. Classifying an object means specifying the number of the class that this object belongs to.
    The previously described similarity measures – the Mahalanobis and Euclidean distances – can be used to classify feature vectors against standards, which was demonstrated when describing threshold clustering. This algorithm is easy to implement and to scale, but the linear dependence of its speed on the number of reference vectors makes it unacceptable within the framework of the described system.
    There are many classification algorithms, and choosing a specific one is not an easy task. Determining the suitability of a classifier for the data formats used requires, at a minimum, the possibility of implementing it with the selected development tools. Making up training and test samples, if there are no ready-made ones freely available, is a long and time-consuming process.
    Taking the above into account, three classifiers with different specific features were selected based on the studied references. The first is a naive Bayes classifier for implementing a search based on a set of small classifiers. The second is a decision tree for optimizing classification based on similarity measures. The third is a multi-layer perceptron for processing the large samples of data accumulated during the operation of the system. Since the perceptron was not fully introduced into the system, it is not described here.

7. Naive Bayes classifier
    The naive Bayes classifier (NBC) is a simple probabilistic classifier based on Bayes' theorem:
    p(C_k | X) = p(X | C_k) · p(C_k) / p(X), (6)
where C_k is the k-th class; X = (x_1, x_2, …, x_n) is a feature vector of size n; p(C_k | X) is the conditional (a posteriori) probability that X belongs to class C_k; p(X | C_k) is the conditional probability of finding vector X in class C_k; p(C_k) is the unconditional (a priori) probability of meeting class C_k; p(X) is the probability of encountering vector X in the training sample.
    The classifier is called "naive" because, for the available set of features, the distributions of their values are assumed to be independent of each other. Despite this simplification, NBC in many cases performs no worse than more complex classifiers [5].
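A minimal sketch of eq. (6) on binary features follows. The toy sample and the class labels are invented for illustration (the system's real features are texture histograms); p(X) is a common denominator for all classes, so the scores are simply normalized at the end, and Laplace smoothing is added so that unseen feature values do not zero out the product.

```python
import numpy as np

# Hypothetical training set of binary feature vectors:
# class 0 = "forest", class 1 = "field".
X = np.array([[1, 1], [1, 0], [1, 1], [0, 0], [0, 1], [0, 0]])
y = np.array([0, 0, 0, 1, 1, 1])

def nbc_posteriors(x, X, y, alpha=1.0):
    """Posterior p(C_k | x) from eq. (6) with Laplace smoothing."""
    scores = []
    for c in np.unique(y):
        Xc = X[y == c]
        prior = len(Xc) / len(X)                      # p(C_k)
        # Per-feature likelihoods p(x_i | C_k), features independent.
        p1 = (Xc.sum(axis=0) + alpha) / (len(Xc) + 2 * alpha)
        like = np.prod(np.where(x == 1, p1, 1 - p1))  # p(x | C_k)
        scores.append(prior * like)
    scores = np.array(scores)
    return scores / scores.sum()                      # divide by p(x)

post = nbc_posteriors(np.array([1, 1]), X, y)
```

For the vector (1, 1) the posterior mass concentrates on the "forest" class, since that pattern dominates its training rows.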
    Since all vectors are represented in the sample with probability 1, the original formula is simplified to:
    p(C_k | X) = p(X | C_k) · p(C_k). (7)
    Given that the possible dependence between the probabilities of features occurring is not taken into account, p(X | C_k) is calculated as the product of the probabilities of all features:
    p(X | C_k) = ∏_{i=1}^{n} p(x_i | C_k). (8)
    To work with features that are vectors of real values, there is a modification of the classifier – the so-called Gaussian naive Bayes classifier (GNBC).
    The Gaussian distribution is also called the normal distribution. The normal distribution graph is a bell-shaped curve that is symmetrical with regard to the average value (Fig. 11).
    Fig. 11. Normal distribution graph
    Because calculating the standard deviation requires recalculating the mathematical expectations of the features again (the mathematical expectation can be updated from its previous value, as opposed to σ), the NBC cannot be further trained in the course of operation. Given the method of determining the a priori probability, an important condition for correct NBC training is the statistical correspondence of the training sample composition to the composition of the studied data.

8. Decision tree
    The task of monitoring the dynamics of changes in the forest area involves processing large amounts of information over a long period of time. This process actively uses classification tools, and it may be necessary to adjust the classifiers for different tasks. Training a classifier is a rather time-consuming process, since the main criterion for its success is the quality and volume of the training sample, which must be collected and provided with appropriate markers. To simplify this task, the system saves the vectors of reference features and their source images to the database. This approach allows not only reusing prepared class maps, but also organizing classification based on the database without training. Classification based on the feature vector library belongs to the group of classification methods based on comparison with a standard [8]. The method of comparison with a standard involves the construction of a graph of feature vectors, while the classification process means finding the shortest path, which is based on the concept of edit distance – the minimum number of changes, insertions and deletions required to change image A into image B [8]. The feature extractors described earlier give vectors of real numbers, on the basis of which the edit distance cannot be calculated. However, they can also be reduced to binary attributes by setting requirements for the values of the features – if a > n, then class A, and so on. In this case, the vector is simplified to a binary tree and comes into compliance with another common classification algorithm – the decision tree.
    Decision tree training consists of selecting nodes based on a training sample, each of which is characterized by the feature vector attribute that most affects the outcome of the classification stage [6]. Node splitting occurs until the threshold probability is reached at which the output takes the required value.
    In general, the condition for reaching this at the i-th level can be represented as follows [6]:
    C_i = (Q_11 ∨ Q_12 ∨ … ∨ Q_1k) ∧ (Q_21 ∨ Q_22 ∨ … ∨ Q_2k) ∧ … ∧ (Q_i1 ∨ Q_i2 ∨ … ∨ Q_ik), (9)
where Q_ij is a required logical condition; i is the node level; k is the number of conditions.
    Since the process is organized on the basis of reference feature vectors, the last node may hide a set of such vectors. At the same time, passing the tree to the end does not guarantee that the sample belongs to the described classes. At the final stage the sample is compared with the standards using the Mahalanobis distance described earlier, which is used to conclude whether it belongs to the class or not. The covariance matrix is calculated for each class separately.
    Fig. 12 shows the tree structure.
    Fig. 12. Structure of hybrid decision tree
    The advantage of the described algorithm is the high speed of a relatively simple search [6]. It is important to note that using a covariance matrix makes instant updating during operation impossible – features are added to the tree, but the matrix can only be recalculated in a background process.

9. Classification scenario
    Previously, the advantage of using multiple algorithms together for extracting features or classifying them has been demonstrated. Not for every classification algorithm can a non-uniform feature vector be created. For example, when classifying by similarity, it is not possible to use LBP and the average values of the RGB spectrum together, because LBP will give two hundred features and the spectrum will give three, which will have a negligible effect on the result. To solve such problems, the concept of a classification/search scenario was formed (Fig. 13).
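The combination of per-algorithm class maps by logical operations, which the scenario mechanism is built on, can be sketched as follows. The two boolean maps are hypothetical outputs of two different algorithms over the same image grid, not results from the paper.

```python
import numpy as np

# Hypothetical class maps: True where a pixel was assigned to the
# "forest" class by each of two independent algorithms.
map_ltp = np.array([[True, True, False],
                    [True, False, False]])
map_rgb = np.array([[True, False, False],
                    [True, True, False]])

# A scenario step combines the maps with a logical operation:
# AND keeps only the pixels both algorithms agree on,
# OR merges everything either algorithm found.
forest_strict = np.logical_and(map_ltp, map_rgb)
forest_union = np.logical_or(map_ltp, map_rgb)
```

A full scenario would chain such steps (filtering, feature extraction, classification) in a declared order, with each step's class map feeding the next combination.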
    Fig. 13. Scheme of classification scenario
    A classification scenario is a data structure that specifies the order in which images are processed by multiple algorithms. The resulting class maps are combined using logical operations. The scenario can also be used to describe one-dimensional algorithms – for example, the selection of trees according to the scenario "median filter" – "spot selection (LBP)" – "center filtering".

10. Prediction of changes in the boundaries
    Changes in forest boundaries can be caused by many factors, many of which are random. Events such as fires, deforestation, and disease outbreaks lead to rapid changes in the structure of plantings, with no pronounced periodicity.
    Fig. 14 shows the boundary changes that need to be taken into account when developing the algorithm.
    Fig. 14. Basic scenarios of boundary changes
    Predicting function values with reference to a time interval requires the use of one-factor forecasting functions [8], but in the absence of a large sample, debugging such a solution is not possible. If the complexity of the requirements for the forecast is reduced, methods that are easier to implement and debug become available, such … values around the average will be S² + (S²/n), where S is the standard deviation.
    To predict the borders, outlines are initially selected – the image pixels are traversed in a cycle, the border pixels are found, and the array is saved. Then the array is traversed and a segment is added for pixels whose distance is greater than the threshold (Fig. 15).
    Fig. 15. Separating boundaries
    After selection, the formula described is applied to the obtained points (junctions of segments). The shortest of three segments is chosen as the direction of extrapolation – two go to the two nearest previous points, and the third is the median of the resulting triangle (Fig. 16).
    Fig. 16. Prediction of changes in boundaries by one step

11. Conclusions
    Within the framework of this paper, a number of algorithms for processing and classifying graphical information were proposed to solve the tasks of studying forest stands based on images of the earth's surface.
    LTP and FFT algorithms were selected as feature extractors, of which the simplest and most productive option is LTP, and the most complete and at the same time resource-intensive is FFT.
    To pre-process the image, histogram equalization algorithms and median and Gaussian filters were selected to eliminate noise and remove small image details.
    Euclidean distance was used as a measure of separability, and the Mahalanobis measure for the purpose of classification. Czekanowski's quantitative index is also available in the system; it gives results similar to Euclidean distance, but with a different distribution of output quantities.
    For classification, it was proposed to use a naive Bayes classifier, a simple but effective statistical classifier based
as step-by-step extrapolation, where the time interval is the    on Bayes theorem. As a less specialized classifier that
interval between sample events.                                  works without training on the basis of features stored in
    Under the assumption that the average level of the           the database, the decision tree algorithm was proposed, an
series has little tendency to change, we can assume that the     algorithm that significantly speeds up classification based
predicted level is equal to the average value of the levels      on comparison with the standard by organizing feature
in the past [8].                                                 vectors into a binary tree. A three-layer perceptron was
    The confidence limits for the average with a small           also proposed as a test solution for working with large
number of observations are defined as follows:                   samples, but it was not possible to test it fully due to the
                                                         (10)    large amount of training sample required.
where 𝑡𝑡𝑎𝑎 is the table value t of Student statistic with n-1        These algorithms were described and tested. On their
degree and probability level 𝑆𝑆𝑦𝑦.                               basis a set of libraries in C# language was developed,
    The total variance associated with both the fluctuation      which form the described system together. MongoDB was
of the sample average and the variation of individual            chosen as the database, which is easy to develop and quite
                                                                 high – performance database that uses BSON documents
                                                                 as a storage format. A web service based on Asp.Net.Core
was developed to provide shared access to the system's tools. Its organization features are described in the project part.

References
[1] Moiseev N.A. On the state of forest use and the need to improve forest management. Forestry Bulletin, 2011, no. 7.
[2] Farutgin I.N. Technological solutions of ScanEx for receiving and processing satellite information. Interexpo Geo-Sybir, 2011, pp. 3-5.
[3] Forests and land use: solution description. Planet Labs Inc., 2019. Available at: https://www.planet.com/markets/forestry/.
[4] Strugailo V.V. Overview of digital image filtering and segmentation methods. Moscow Automobile and Road Construction State Technical University, pp. 270-281.
[5] Kolesenkov A.N. Monitoring of subsurface use processes based on processing of aerospace images. Izvestiya TulGU. Technical Sciences, 2018, no. 2.
[6] Evdokimova N.I. Local patterns in the duplicate detection task. Computer Optics, 2017, vol. 41, no. 1, pp. 79-87.
[7] Polovinkin P.N. Detectors and descriptors of key points. Image classification algorithms. The problem of detecting objects in images and methods for solving it. Lobachevsky State University of Nizhni Novgorod. 30 p.
[8] Olsen R.C., Bergman S., Resmini R.G. Target detection in a forest environment using spectral imagery. SPIE 3118:4b, 1997.
[9] Thenkabail P.S., Lyon J.G., Huete A. (eds.). Hyperspectral remote sensing of vegetation. CRC Press, 2012.
[10] Filippov R.A. Internet of things: basic concepts: tutorial. Bryansk, BSTU, 2016. 112 p. ISBN 978-5-906967-62-6.
[11] Averchenkov A.V. Development of a mathematical model of an information system for inventory and monitoring of software and hardware based on fuzzy logic methods. Kachestvo. Innovatsii. Obrazovaniye, 2018, no. 7, pp. 105-112. ISSN 1999-513X.
[12] Leonov Yu.A., Leonov E.A., Kuzmenko A.A., Martynenko A.A., Averchenkova E.E., Filippov R.A. Selection of rational automation schemes based on working synthesis instruments for technological processes. Yelm, WA, USA: Science Book Publishing House LLC, 2019. 192 p. ISBN 978-5-9765-4023-1.
[13] Leonov E.A., Leonov Y.A., Kazakov Y.M., Filippova L.B. Intellectual subsystems for collecting information from the internet to create knowledge bases for self-learning systems. In: Abraham A., Kovalev S., Tarassov V., Snasel V., Vasileva M., Sukhanov A. (eds.) Proceedings of the Second International Scientific Conference "Intelligent Information Technologies for Industry" (IITI'17). Advances in Intelligent Systems and Computing, vol. 679. Springer, Cham, 2017, pp. 95-103. DOI: 10.1007/978-3-319-68321-8_10.

About the authors
    Alexandr A. Kuzmenko, Bryansk State Technical University, Bryansk, Russia. E-mail: alex-rf-32@yandex.ru
    Dmitriy A. Kondrashov, Bryansk State Technical University, Bryansk, Russia. E-mail: kuzmenko-alexandr@yandex.ru
    Anna S. Sazonova, Bryansk State Technical University, Bryansk, Russia. E-mail: asazonova@list.ru
    Ludmila B. Filippova, Bryansk State Technical University, Bryansk, Russia. E-mail: libv88@mail.ru
    Rodion A. Filippov, Bryansk State Technical University, Bryansk, Russia. E-mail: redfil@mail.ru
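Supplementary note. The confidence-limit step of formula (10) in Section 10 can be illustrated with a minimal sketch. This is not part of the paper's C# system: the function name, the sample data, and the fixed two-sided 95% t-table are assumptions made only for this example.

```python
import math
import statistics

# Two-sided 95% critical values of Student's t, keyed by sample size n
# (degrees of freedom = n - 1).
T_95 = {2: 12.706, 3: 4.303, 4: 3.182, 5: 2.776, 6: 2.571, 7: 2.447, 8: 2.365}

def mean_confidence_limits(levels):
    """Confidence limits for the mean: y_bar +/- t_a * S_ybar (formula (10))."""
    n = len(levels)
    y_bar = statistics.mean(levels)                   # predicted level = past average
    s_ybar = statistics.stdev(levels) / math.sqrt(n)  # standard error of the mean
    t_a = T_95[n]                                     # tabulated Student statistic
    return y_bar - t_a * s_ybar, y_bar + t_a * s_ybar

# Example: five past observations of a forest-area level (hypothetical units).
lo, hi = mean_confidence_limits([102.0, 98.0, 101.0, 99.0, 100.0])
print(round(lo, 2), round(hi, 2))  # prints: 98.04 101.96
```

With a small sample the tabulated t value dominates the interval width, which is why the paper restricts itself to step-by-step extrapolation rather than full one-factor forecasting.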