<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Objects of Interest Detection by Earth Remote Sensing Data Analysis</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Andrey N. Vinogradov</string-name>
          <email>vinogradov_an@rudn.university</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Igor P. Tishchenko</string-name>
          <email>igor.p.tishchenko@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Semion V. Paramonov</string-name>
          <email>s.paramonov@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ailamazyan Program Systems Institute of RAS (PSI RAS) 4a Petra-I st.</institution>
          ,
          <addr-line>s. Veskovo, Pereslavl district, Yaroslavl region, 152021, Russian Federation</addr-line>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Information Technologies Peoples' Friendship University of Russia (RUDN University) 6 Miklukho-Maklaya str.</institution>
          ,
          <addr-line>Moscow, 117198, Russian Federation</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2018</year>
      </pub-date>
      <fpage>4</fpage>
      <lpage>16</lpage>
      <abstract>
        <p>In this paper, the problem of detecting large (commercial) fish schools by analysing remote sensing (RS) images of the sea and ocean surface is considered. Methods and algorithms for detecting and identifying objects of interest (OI) using high-resolution space imagery are discussed. Images obtained by RS of the seas and oceans are characterized by the presence of objects of various types and classes. A classifier for different types of OI is considered, as well as OI searching methods and algorithms whose goal is to obtain data on the most probable locations of OI in the area of analysis. The section on restoring OI boundaries describes the problem of image segmentation: splitting the image into areas corresponding to different objects in such a way that the constructed regions cover the objects of the image as accurately as possible, taking into account their complex shape and inevitable overlaps. An OI detection and classification algorithm is presented, based on a U-net type network architecture, which is able to use a smaller (in comparison with others) dataset for network “learning”, which is critical for the task considered in this paper.</p>
      </abstract>
      <kwd-group>
        <kwd>fish school</kwd>
        <kwd>earth remote sensing</kwd>
        <kwd>image recognition</kwd>
        <kwd>object classification</kwd>
        <kwd>object of interest</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>Copyright © 2018 for the individual papers by the papers’ authors. Copying permitted for private
and academic purposes. This volume is published and copyrighted by its editors.
In: K. E. Samouylov, L. A. Sevastianov, D. S. Kulyabov (eds.): Selected Papers of the 1st Workshop
(Summer Session) in the framework of the Conference “Information and Telecommunication
Technologies and Mathematical Modeling of High-Tech Systems”, Tampere, Finland, 20–23 August,
2018, published at http://ceur-ws.org</p>
    </sec>
    <sec id="sec-2">
      <title>1. Introduction</title>
      <p>
        The key tasks of a technology for searching for large (commercial) fish schools
through automated processing and analysis of remote sensing (RS) data, applied to
monitoring the oceans and seas for commercial fish accumulations, have been formulated in [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. For further research and development it is necessary
to create a complete learning/test dataset of fish school images. Due to the insufficient
quantity of real RS images containing the objects of interest (OI), i.e. fish schools, it is
necessary to generate a sufficient number of artificially synthesized images in which OI
are present in various forms. Another important task is to identify the areas where OI are
most likely located. This task can be solved in two ways: the first is to search for sea/ocean
areas with favorable oceanographic and meteorological conditions using low-resolution RS
images and then to request high-resolution space imagery for them. The second is to analyze
the movements of fishing vessels: during fishing, various types of fishing vessels perform
specific maneuvers, and this activity can be detected by analyzing AIS data and then used
to analyze high-resolution RS images of these areas for OI detection. It would also be
interesting to apply approaches from adjacent areas of data analysis, for
example, approaches of dynamic scaling [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] or queuing theory methods [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The solution
of all these problems requires processing large RS datasets, which demands significant
computing resources. RS data admits parallel processing [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. Therefore, this task
requires the development of a special software and hardware complex that allows massively
parallel data processing. To this end, an experimental sample of the RS data
processing complex [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] has been developed.
      </p>
      <p>
        The space vehicles (satellites) equipped with high-resolution instruments that have
appeared in the last 10 years provide high-quality RS images. A spatial resolution of
1-2 m per pixel or better enables so-called object search and identification tasks for
relatively small objects (meters, tens of meters). Given that typical commercial
fish schools near the surface of the ocean or the sea (the so-called pelagic fish schools)
range from 5-10 meters to 150-200 meters in size, they appear on high-resolution RS
images as detectable and identifiable objects [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        In the process of monitoring fishing areas, both the collected data and the processing
results are geocoded and, accordingly, can be aggregated within a single geospatial
database. Characteristically, the technologies for processing and analysing geospatial
data developed in recent years are based on a qualitative transition from arrays of
numerical characteristics to geospatial objects that have both geographic and temporal
dynamics. A convenient user tool for accessing and managing this geospatial dataset is
a specialized GIS system [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], providing data sampling requests, analysis, editing,
visualization, modeling, etc. The key element of monitoring is the OI classifier, which
makes it possible to identify commercial fish schools effectively by analysing RS data.
      </p>
      <p>A set of methods and algorithms for OI searching, detection, classification and
identification is the key component in processing historical and operational RS data.
The result of applying these methods and algorithms sequentially is information about
the presence of OI in a pre-designated search area, along with their geographical
coordinates and characteristics.</p>
    </sec>
    <sec id="sec-3">
      <title>2. Main section</title>
      <p>The initial data for the entire processing pipeline are:
– data on the search area (coordinates of the vertices of the polygon that bounds
the part of the fishing area to be analysed);
– the time range (start and end date and time, indicating the time interval);
– historical oceanographic and meteorological data, used to estimate the probability
of finding a commercial fish school of a designated type;
– operational data on the movements of fishing vessels in the given sea area.</p>
      <p>Data processing is performed sequentially, since at each stage the results of the
methods and algorithms of the previous stage are used.</p>
    </sec>
    <sec id="sec-4">
      <title>OI Searching</title>
      <p>The purpose of the OI searching methods and algorithms is to obtain data on the
most probable locations of OI in the area of analysis. As a result of their operation,
the coordinates of fragments of marine areas should be obtained, for which high-resolution
RS data are then requested.</p>
      <p>When searching for OI, two main methods and related algorithms are used:
– a method of OI searching based on oceanographic and meteorological characteristics;
– a method of OI searching based on fishing vessel activity.</p>
      <p>It is assumed that the OI search based on oceanographic and meteorological
characteristics (a preliminary search for areas with a high probability of containing OI)
is carried out as follows.</p>
      <p>There is a certain number of zones (in our case, the squares of the explored areas),
each of which is assigned a vector tag (a set of numerical characteristics containing
oceanographic and meteorological parameters). Each square belongs either to class
“0” (it cannot serve as a place for the appearance of a fish school) or to class “1”
(it can serve as such a place).</p>
      <p>This is a typical classification problem. The XGBoost algorithm from the family of
“boosted trees” algorithms is used. Over the past 1-2 years this algorithm has been
widely adopted due to its high efficiency. According to many studies performing tests
on a wide variety of data, it most often shows the highest quality scores in classification
tasks (with 2 or more classes), in particular, the smallest classification error by the
AUC-ROC estimate (area under the ROC curve).</p>
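The AUC-ROC estimate mentioned above can be computed directly from classifier scores via the rank-sum (Mann-Whitney) formulation. The following is a minimal illustrative sketch; the function name and toy data are assumptions for illustration, not part of the described system (ties are not averaged, for brevity).

```python
def auc_roc(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) formulation.

    labels: iterable of 0/1 ground-truth classes ("no school" / "school").
    scores: classifier scores, higher = more likely class 1.
    """
    pairs = sorted(zip(scores, labels))
    # Sum of ranks (1-based) of the positive examples among all examples.
    rank_sum = sum(i + 1 for i, (_, y) in enumerate(pairs) if y == 1)
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic normalised to [0, 1].
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Perfect score separation gives AUC = 1.0; random scores hover near 0.5.
print(auc_roc([0, 0, 1, 1], [0.1, 0.3, 0.7, 0.9]))  # → 1.0
```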
      <p>
        The XGBoost algorithm is based on the procedure of sequentially constructing a
composition of classification trees. The details of its program implementation have been
studied in sufficient detail and are not considered here. A detailed description of the
algorithm and its implementation can be found in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>Algorithm learning (that is, the calculation of the main parameters of the XGBoost
classifier) is based on pre-collected historical data for the search area. During the
analysis of the current situation, the algorithm obtains current (up-to-date) data on the
search area and classifies the squares of the explored sea areas, assigning them the
values 0/1.</p>
      <p>Such an algorithm allows selecting areas that are promising for more detailed
analysis, in particular using sophisticated segmentation, classification and detection
techniques.</p>
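The boosted-trees idea behind XGBoost (sequentially fitting small trees to the residuals of the current ensemble) can be sketched with a toy gradient-boosting loop over one-split decision stumps. This is a minimal stand-in for illustration only; the real system uses the XGBoost library [9], and the features and data below are invented.

```python
def fit_stump(X, residuals):
    """Find the single-feature threshold split minimising squared error of residuals."""
    best = None  # (sse, feature, threshold, left_value, right_value)
    for f in range(len(X[0])):
        for t in sorted({x[f] for x in X}):
            left = [r for x, r in zip(X, residuals) if x[f] <= t]
            right = [r for x, r in zip(X, residuals) if x[f] > t]
            if not left or not right:
                continue
            lv, rv = sum(left) / len(left), sum(right) / len(right)
            sse = sum((r - lv) ** 2 for r in left) + sum((r - rv) ** 2 for r in right)
            if best is None or sse < best[0]:
                best = (sse, f, t, lv, rv)
    return best

def boost(X, y, rounds=20, lr=0.3):
    """Sequentially fit stumps to the residuals of the previous ensemble."""
    pred = [0.5] * len(y)          # constant prior
    model = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(X, residuals)
        if stump is None:
            break
        _, f, t, lv, rv = stump
        model.append((f, t, lv, rv))
        pred = [p + lr * (lv if x[f] <= t else rv) for p, x in zip(pred, X)]
    return model

def predict(model, x, lr=0.3):
    """Class 0/1 by thresholding the ensemble's score at 0.5."""
    s = 0.5 + sum(lr * (lv if x[f] <= t else rv) for f, t, lv, rv in model)
    return 1 if s >= 0.5 else 0

# Hypothetical "squares": [sea surface temperature, plankton index] -> school present?
X = [[14.0, 0.2], [15.0, 0.3], [19.0, 0.8], [20.0, 0.9]]
y = [0, 0, 1, 1]
model = boost(X, y)
print([predict(model, x) for x in X])  # → [0, 0, 1, 1]
```

XGBoost adds regularisation, second-order gradients and deeper trees on top of this basic residual-fitting loop.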
    </sec>
    <sec id="sec-5">
      <title>Detecting objects on RS images</title>
      <p>In the field of image processing methods and algorithms, object detection is one of
the most pressing tasks in view of its wide range of applications. That is why the
development of detection algorithms and methods spans several decades. The classic
statement of the detection task is the processing of a visual scene, fixed in the form of a
digital image (data array), containing some background on top of which one or many
objects are represented; objects may also be absent.</p>
      <p>In the vast majority of cases, objects of several different types can appear in the image.
In this case, object detection can be performed simultaneously with classification.
When implementing simple detection, all types of objects that must be recognized can
be combined into one class.</p>
      <p>In this form, the detection task is to recognize, with a certain probability, the presence
in the image of an object of a given type and to predict its position in the form of a
corresponding bounding box. The object can lie anywhere in the image and can have any
size (scale). In some cases (as in the problem solved in this study), additional image
processing may be required for segmentation and detection of object boundaries.</p>
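A predicted bounding box is usually scored against the ground truth by intersection-over-union (IoU); a minimal sketch, with boxes as illustrative (x1, y1, x2, y2) tuples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)        # overlap area (0 if disjoint)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # overlap 1, union 7 → ≈0.143
```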
    </sec>
    <sec id="sec-6">
      <title>Image segmentation</title>
      <p>The task of image segmentation is, generally speaking, more complex than the task
of object detection. Segmentation is understood as the division of an image into areas
corresponding to different objects. The constructed areas are required to cover the
objects of the image as accurately as possible, taking into account their complex shape
and the inevitable overlaps.</p>
      <p>Images obtained by remote sensing of the oceans and seas are characterized by the
presence of objects of various nature. Such objects can be:
– atmospheric fronts, clouds;
– condensation trails of aircraft in the atmosphere;
– zones of water surface disturbance;
– zones of ice accumulation in arctic latitudes;
– elements of the seabed relief;
– drifting phytoplankton and zooplankton;
– commercial fish schools;
– oil stains;
– zones of fishing vessel activity, etc.</p>
      <p>The images of such objects are areas characterized by certain textural features and
fuzzy, blurred boundaries. In addition, segmentation is complicated by the fact
that the image is effectively multi-layered, that is, objects of interest overlap. For example,
the image of a fish cluster in a photograph can be partly covered by a shadow from the
clouds and, at the same time, superimposed on algae and underwater relief elements
visible from the air (see Figure 1).</p>
      <p>When large OI are considered, one more technical circumstance appears that makes
it difficult to apply known methods of detection, segmentation, and classification directly.
When using high-resolution images, one OI can span several frames, some of
which may be unavailable for some reason (for example, the boundary of the shooting
area is reached or the received image is damaged). In such cases it is useful to try to
restore the shape of the OI boundary on the basis of the available information.</p>
      <p>The method of classifying squares of water areas, described in the OI Searching
section, gives a preliminary prediction of the presence of fish accumulations, but for a
more accurate analysis deep image processing is necessary. In addition, several different
types of objects can be present in the image, and for successful detection they must
first be separated from one another by accurately defining their boundaries.</p>
      <p>These circumstances significantly limit the applicability of the following well-known
image segmentation methods.</p>
      <p>1. Methods based on clustering of image points; methods based on color and
brightness histograms and the choice of threshold values; the “watershed” method.
These methods are poorly applicable to the problem under consideration due to the
overlapping of images of objects of interest.
2. Methods based on graph models: conditional random fields; Markov random fields.</p>
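The histogram-threshold family of methods in item 1 can be illustrated with Otsu's classic method, which picks the brightness threshold maximising between-class variance. The sketch below is self-contained; the toy pixel values are assumptions for illustration.

```python
def otsu_threshold(pixels, levels=256):
    """Otsu's histogram threshold: maximise between-class variance over all cut points."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    w0 = s0 = 0                      # weight and intensity sum of the "dark" class
    best_t, best_var = 0, -1.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        s0 += t * hist[t]
        m0, m1 = s0 / w0, (total_sum - s0) / w1   # class means
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

# Two clearly separated brightness clusters: the threshold lands between them.
pixels = [10, 12, 11, 10] * 10 + [200, 210, 205] * 10
t = otsu_threshold(pixels)
print(10 < t < 200)  # → True
```

As the text notes, such global thresholding fails when objects of interest overlap, which motivates the boundary-based approach chosen below.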
      <p>Such methods are able to model overlapping objects of interest, but require a large
labelled training sample containing objects of interest of various types.</p>
      <p>Therefore, an image segmentation method based on constructing the boundaries of
the sought areas of objects of interest was chosen.</p>
      <p>In addition, this approach constructs the boundary of the object of interest more
accurately, making it possible to use the shape of this boundary as one of the features
for classifying objects of interest.</p>
      <p>The restored boundary of the OI area also makes it possible to estimate the size of
this area, and thus estimate the amount of the resource reserve.</p>
    </sec>
    <sec id="sec-7">
      <title>OI Classification</title>
      <p>OI classification is understood as the assignment of image fragments obtained through
detection and segmentation procedures to one of the predefined types.</p>
      <p>As shown in Table 1, in the subject domain, more than 10 varieties of OI can be
identified, each with its own identifying features.</p>
      <p>The use of machine-learning detection methods, for example those based on convolutional
neural networks or conditional random fields, makes it possible to describe the desired
regions as sets of rectangles bounding the images of the sought objects.
However, for a full solution of the problem posed in this study, it is important not only
to identify the rectangular area in which the OI are located, but also to accurately
determine the boundaries of the presumed objects, since the shape of the object boundary
is in some cases an important identifying feature used to assign the object of interest to
a particular type. Among the objects under consideration, not only the fish accumulations
of primary interest are represented in the photographs, but also other objects, and in
many cases the classes of objects can be distinguished from one another precisely by the
shape of their boundaries.</p>
      <p>For example, Figure 2 shows an accumulation of commercial fish and a zone of
phytoplankton development. It can be seen that the boundaries of fish accumulations
are sufficiently smooth and can have only a few singular angular points. At the same time,
the boundary of a typical zone of phytoplankton development is much more complex:
it has more singular points, and its average curvature is higher.</p>
      <p>In addition, the boundary of an object of interest is an identification feature that
serves to determine its unique characteristics (area, estimated volume of reserves, etc.).</p>
    </sec>
    <sec id="sec-8">
      <title>Restore the OI boundaries</title>
      <p>We have shown above the role of constructing the boundaries of objects of interest
in the procedures of image segmentation and extraction of features for subsequent
classification.</p>
      <p>For the initial delineation of boundaries, the well-known Gabor, Canny, and Sobel
operators are applied to the image. After this procedure, a system of lines running along
the boundaries of various objects of interest appears in the image. As a rule, these lines
intersect and break off. Figure 3 shows a snapshot of an oil spill that was “cut” by a
passing vessel, and the boundaries identified after some filtering.</p>
      <p>Figure 4 shows a snapshot of the phytoplankton development area in a Barents Sea
section with cloud cover. For successful processing of such situations, it is necessary to
trace the boundary of the object of interest under conditions where the image of another
object and its boundaries are overlaid. This problem reduces to the task of reconstructing
interrupted curves in the image.</p>
      <p>
        To solve this problem, an approach to recovering damaged images was used [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ], which, in addition to the obtained sections of smoothed curves,
also takes into account the original image itself, improving the quality of the solution.
The proposed method is universal: it can work both with a flat image and with a
spherical image, i.e. one defined in a region on a sphere of sufficiently large radius.
      </p>
      <p>
        The apparatus of geometric control theory and sub-Riemannian geometry is used [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ].
A corresponding mathematical model is constructed and a neurophysiological motivation
for using just such a model is given.
      </p>
    </sec>
    <sec id="sec-9">
      <title>Algorithm for detecting and classifying objects in images</title>
      <p>In current object recognition practice, the classification characteristics of an object
are the statistical characteristics formed at the output of a convolutional neural
network (CNN) processing the image. Let us consider a method for detecting and
identifying OI in images using a CNN.</p>
      <p>Let the CNN process some image and extract a set of statistical characteristics
(feature maps). The set of obtained maps is compared with the available set of reference
feature maps for all types of OI. The comparison is performed using a classification
algorithm, usually also based on a neural network. The result is a set of probabilities of
the processed image belonging to each of the types of OI; accordingly, the object class
is determined by the greatest of the probabilities.</p>
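The final step (turning per-class comparison scores into probabilities and picking the greatest) can be sketched as follows. The dot-product similarity, the reference maps and the class names are illustrative assumptions, not the actual trained classifier head.

```python
import math

def softmax(scores):
    """Turn raw per-class scores into probabilities summing to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]  # shift for numeric stability
    total = sum(exps)
    return [e / total for e in exps]

def classify(feature_map, references, class_names):
    """Compare a feature map to per-class reference maps; pick the most probable class."""
    scores = [sum(f * r for f, r in zip(feature_map, ref)) for ref in references]
    probs = softmax(scores)
    best = max(range(len(probs)), key=probs.__getitem__)
    return class_names[best], probs

features = [0.9, 0.1, 0.0]               # hypothetical pooled feature map
refs = [[1.0, 0.0, 0.0],                 # "fish school" reference
        [0.0, 1.0, 0.0],                 # "plankton" reference
        [0.0, 0.0, 1.0]]                 # "pollution" reference
label, probs = classify(features, refs, ["fish school", "plankton", "pollution"])
print(label)  # → fish school
```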
      <p>If it is known that an OI is (with high probability) present in the image being
processed, then the detection task requires obtaining the coordinates of the OI location
as a fragment of the image completely containing the object; the boundaries of the
fragment form a so-called bounding box or an object cover mask.</p>
      <p>It should be noted that both detection and classification of an object can rely on the
same identification features when analyzing an image, but in the case of detection it is
also required to localize these features in the image coordinate system.</p>
      <p>To avoid repeating the feature-map extraction step when processing images with a
CNN, research and development in recent years has aimed at creating algorithms that
perform detection and classification of objects simultaneously.</p>
      <p>To define such a problem, the term “semantic segmentation” is used in particular;
in this task, the pixels of the processed image are assigned to one of the classes of
interest (or to the background); a group of pixels of one class forms a mask that
identifies the object(s) of the class.</p>
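Building per-class masks from a semantic segmentation output (a per-pixel class map with 0 as background) can be sketched as:

```python
def masks_from_class_map(class_map, n_classes):
    """Split a per-pixel class map into one binary mask per class (class 0 = background)."""
    h, w = len(class_map), len(class_map[0])
    masks = [[[0] * w for _ in range(h)] for _ in range(n_classes)]
    for i in range(h):
        for j in range(w):
            c = class_map[i][j]
            if c > 0:
                masks[c - 1][i][j] = 1
    return masks

# A 2x3 toy class map: class 1 in the top-right, class 2 in the bottom-left.
cm = [[0, 1, 1],
      [2, 2, 0]]
m = masks_from_class_map(cm, 2)
print(m[0])  # → [[0, 1, 1], [0, 0, 0]]
```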
      <p>
        A number of algorithms based on this principle have been analyzed recently.
Conditionally they can be divided into two main groups:
a) algorithms based on the formation of “proposed regions”, such as: Regions with
CNNs (R-CNN) [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]; Fast R-CNN [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]; Faster R-CNN [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ];
YOLO [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]; SSD (Single Shot Detector) [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ];
b) algorithms based on the encoder-decoder architecture, such as: DenseNet [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]; SegNet [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]; U-net [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ].
      </p>
      <p>
        When analyzing these algorithms for possible use in the development of this PNDI,
the results of tests performed on the same type of computing equipment, on a single
test set of PASCAL VOC images, were used. The comparison criteria included the
following indicators [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]:
– network training time;
– time for searching and detecting objects on the test dataset;
– object mask prediction accuracy;
– accuracy of object class determination;
– CPU load;
– graphics accelerator load;
– memory usage.
      </p>
      <p>Examples of applying these algorithms to applied image processing problems, or to
problems with similar data characteristics, are also considered in this paper.</p>
      <p>
        As a result of the analysis, the following conclusions were drawn:
– On standard data sets of the PASCAL VOC type (according to testing data given
in the literature or presented by the authors of the algorithms), the considered
algorithms show close accuracy indicators (92–97% in object classification,
85–95% in object mask prediction accuracy).
– The U-net network architecture, often used for segmenting objects with a fuzzy
outline on an uneven background, for example in scans of human organs, can be
considered the closest application; it is also used for processing RS data [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ], etc.
– The algorithm based on the U-net type network architecture is able to use a
smaller dataset (in comparison with others) for network “learning”, which is
critical for the task considered in this paper.
      </p>
      <p>The OI detection and classification algorithm is presented in the following description:
1: Form  ×  model images
2: Generate markup of object positions in the images
3: Set the required Pixel Classification Accuracy
4: Cycle over the  ×  model images
5: Get  masks of image pixel classification (probabilities of the  -class in the range
from 0 to 1)
6: Get  binary masks of pixels belonging to the class by the Pixel Classification
Accuracy criterion; a pixel belonging to the class takes the value 1
7: Group all adjacent pixels belonging to the same class into clusters
8: Select each cluster of pixels belonging to the same class into a separate mask of
the selected object of the class
9: Calculate the accuracy of the object mask against the specified markup
10: Calculate the accuracy of classification of the objects selected by the mask
11: End of cycle
12: Remember the CNN (convolutional neural network) status parameters as a set of
values  _
13: Return  _</p>
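The mask-processing steps of the description above (thresholding per-pixel class probabilities and grouping adjacent same-class pixels into clusters, each yielding an object mask) can be sketched as follows; the function names and toy data are illustrative.

```python
def binarize(prob_mask, accuracy):
    """Per-pixel class probabilities -> binary mask by the accuracy threshold."""
    return [[1 if p >= accuracy else 0 for p in row] for row in prob_mask]

def connected_clusters(mask):
    """Group adjacent (4-connected) mask pixels into clusters via flood fill."""
    h, w = len(mask), len(mask[0])
    seen, clusters = set(), []
    for si in range(h):
        for sj in range(w):
            if mask[si][sj] == 1 and (si, sj) not in seen:
                stack, cluster = [(si, sj)], []
                seen.add((si, sj))
                while stack:
                    i, j = stack.pop()
                    cluster.append((i, j))
                    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                        if 0 <= ni < h and 0 <= nj < w and mask[ni][nj] == 1 \
                                and (ni, nj) not in seen:
                            seen.add((ni, nj))
                            stack.append((ni, nj))
                clusters.append(sorted(cluster))   # one cluster = one object mask
    return clusters

probs = [[0.9, 0.8, 0.1],
         [0.2, 0.1, 0.7]]
mask = binarize(probs, 0.5)
print(len(connected_clusters(mask)))  # → 2
```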
      <p>To carry out preliminary testing of this algorithm for applicability within the
problem under consideration, a software implementation was developed, with the
following features.</p>
      <p>As the encoder part of the U-net network, it is suggested to use an implementation
of the VGG-19 architecture, discussed earlier, pre-trained on a large sample of objects
from the standard ImageNet dataset. The availability of a pre-trained model significantly
shortens the process of configuring the network for the task at hand.</p>
      <p>
        During the setup, i.e. additional training of the network, its parameters are tuned
to the typical objects of a given application task (in our case, objects of interest on the
sea surface). This procedure is called “distillation” (knowledge transfer) [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ].
      </p>
      <p>As training data, satellite images of objects of 4 classes on the sea surface, available
in small quantity at the moment, were used: fish school; algae/plankton; pollution;
empty sea surface without objects; as well as generated synthetic images of similar
objects. To enlarge the image sample, a so-called “augmentation” procedure was applied
to each image: modification, resizing and rotation by a random angle. Thus, the number
of images for each class reached 100.</p>
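A simplified version of the augmentation step might look like the sketch below. It is restricted to quarter-turn rotations and horizontal flips; the arbitrary-angle rotation and resizing described in the text would need an image library such as Pillow.

```python
import random

def rotate90(img):
    """Rotate a 2D image (list of rows) 90 degrees clockwise."""
    return [list(row) for row in zip(*img[::-1])]

def flip_h(img):
    """Mirror the image horizontally."""
    return [row[::-1] for row in img]

def augment(img, n, seed=0):
    """Produce n modified copies of an image by random rotations and flips."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        a = img
        for _ in range(rng.randrange(4)):   # 0-3 quarter turns
            a = rotate90(a)
        if rng.random() < 0.5:
            a = flip_h(a)
        out.append(a)
    return out

tile = [[1, 2], [3, 4]]                     # toy 2x2 "image"
copies = augment(tile, 5)
print(len(copies))  # → 5
```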
      <p>In the process of algorithm “learning”, 90% of the images (90 per class) were used for
training and 10 images per class for testing. The following metrics were used to assess
the quality of the algorithm.</p>
      <p>To assess the accuracy of object classification in the image, the F-measure (F1 score)
was used, defined as the harmonic mean of precision and recall:
F1 = 2 · Precision · Recall / (Precision + Recall).</p>
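A minimal implementation of this metric for binary labels:

```python
def f1_score(true, pred):
    """F-measure: harmonic mean of precision and recall for 0/1 labels."""
    tp = sum(1 for t, p in zip(true, pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(true, pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(true, pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(f1_score([1, 1, 0, 0], [1, 0, 1, 0]))  # → 0.5
```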
      <p>It should be noted that at this stage of the assessment, given the available sample
size, there is no point in justifying and refining the parameters of these metrics, and
they are used in their most general form.</p>
      <p>The results of the assessment test are shown in Table 2:</p>
      <p>Within the task of OI searching and detection using RS data of the oceans and seas,
a subtask of constructing the boundaries of OI areas was addressed. The following
results were obtained.</p>
      <p>The analysis of OI areas in RS images of the oceans and seas has been carried out. The
main features characterizing the OI are revealed, and the values of these characteristics
for different types of OI are determined.</p>
      <p>For cases where different OI intersect in one image, a method for determining the
OI boundaries is considered, based on the method of reconstructing curves on a spherical
image. The classes illustrated are:
(a) basic OI class (fish school);
(b) “Seaweeds” OI class;
(c) “Pollution” OI class.</p>
      <p>An algorithm for detecting OI in satellite images was developed. Experiments were
carried out to expand the learning set of the neural network by synthesizing OI images
through modification of the initial set of real images. The experiments showed an
improvement in the quality of OI recognition as the volume of the training dataset
replenished in this way increased.</p>
    </sec>
    <sec id="sec-10">
      <title>Acknowledgments</title>
      <p>The publication has been prepared with the support of the “RUDN University
Program 5-100”. The work is partially supported by state program 0077-2016-0002
«Research and development of machine learning methods for the anomalies detection».</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>A.N.</given-names>
            <surname>Vinogradov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.P.</given-names>
            <surname>Kurshev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Paramonov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.A.</given-names>
            <surname>Belov</surname>
          </string-name>
          ,
          <article-title>Methods and tools for the analysis of remote sensing data of the marine environment for the commercial fish schools detection, Proceedings of the VIII All-Russian Scientific</article-title>
          and Technical Conference “Actual Problems of AeroSpace Engineering and Information Technologies” (Moscow, 1-3 June 2016).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
2. <string-name><given-names>E.S.</given-names> <surname>Sopin</surname></string-name>, <string-name><given-names>A.V.</given-names> <surname>Gorbunova</surname></string-name>, <string-name><given-names>Y.V.</given-names> <surname>Gaidamaka</surname></string-name>, <string-name><given-names>E.R.</given-names> <surname>Zaripova</surname></string-name>, <article-title>Analysis of Cumulative Distribution Function of the Response Time in Cloud Computing Systems with Dynamic Scaling</article-title>, <source>Automatic Control and Computer Sciences</source>, <volume>52</volume> (<issue>1</issue>) (<year>2018</year>) <fpage>60</fpage>–<lpage>66</lpage>. DOI: 10.3103/S0146411618010066.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
3. <string-name><given-names>Y.</given-names> <surname>Gaidamaka</surname></string-name>, <string-name><given-names>E.</given-names> <surname>Zaripova</surname></string-name>, <article-title>Comparison of polling disciplines when analyzing waiting time for signaling message processing at SIP-server</article-title>, <source>Communications in Computer and Information Science</source>, <volume>564</volume> (<year>2015</year>) <fpage>358</fpage>–<lpage>372</lpage>. DOI: 10.1007/978-3-319-25861-4_30.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
4. <string-name><given-names>A.</given-names> <surname>Kondratyev</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Tishchenko</surname></string-name>, <article-title>Concept of Distributed Processing System of Image Flow</article-title>, <source>Robot Intelligence Technology and Applications 4. Results from the 4th International Conference on Robot Intelligence Technology and Applications (RiTA 2015)</source>, ed. by <string-name><given-names>J.-H.</given-names> <surname>Kim</surname></string-name>, <string-name><given-names>F.</given-names> <surname>Karray</surname></string-name>, <string-name><given-names>J.</given-names> <surname>Jo</surname></string-name>, <string-name><given-names>P.</given-names> <surname>Sincak</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Myung</surname></string-name>. <source>Series “Advances in Intelligent Systems and Computing”</source>, <volume>447</volume> (<year>2016</year>) <fpage>479</fpage>–<lpage>487</lpage>. URL: https://link.springer.com/chapter/10.1007/978-3-319-31293-4_38. DOI: 10.1007/978-3-319-31293-4_38.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
5. <string-name><given-names>A.</given-names> <surname>Kondratyev</surname></string-name>, <string-name><given-names>I.</given-names> <surname>Tishchenko</surname></string-name>, <article-title>Concept of Distributed Processing System of Images Flow in Terms of Pi-Calculus</article-title>, <source>18th Conference of Open Innovations Association and Seminar on Information Security and Protection of Information Technology (FRUCT-ISPIT)</source>, St. Petersburg (<year>2016</year>) <fpage>131</fpage>–<lpage>137</lpage>. URL: http://ieeexplore.ieee.org/document/7561518/. DOI: 10.1109/FRUCTISPIT.2016.7561518.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
6. <string-name><given-names>S.V.</given-names> <surname>Paramonov</surname></string-name>, <string-name><given-names>M.V.</given-names> <surname>Pesotsky</surname></string-name>, <string-name><given-names>A.N.</given-names> <surname>Vinogradov</surname></string-name>, <string-name><given-names>E.P.</given-names> <surname>Kurshev</surname></string-name>, <article-title>Creation of a series of photorealistic models of digital space images of the sea surface and objects under its surface using high-performance computing platforms</article-title>. <source>Proceedings of V National Supercomputer Forum</source>, 29.11–03.12.<year>2016</year>, Pereslavl-Zalessky, Russia.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
7. <string-name><given-names>S.A.</given-names> <surname>Belov</surname></string-name>, <string-name><given-names>S.V.</given-names> <surname>Paramonov</surname></string-name>, <string-name><given-names>A.N.</given-names> <surname>Vinogradov</surname></string-name>, <string-name><given-names>E.P.</given-names> <surname>Kurshev</surname></string-name>, <article-title>Perspectives of RS data application in the tasks of fishing intensification</article-title>. <source>Proceedings of 17th International Scientific and Technical Conference “FROM PICTURE TO DIGITAL REALITY: RS and photogrammetry”</source>, October 16–19, <year>2017</year>, Hadera, Israel. <fpage>36</fpage>–<lpage>40</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
8. <string-name><given-names>S.V.</given-names> <surname>Paramonov</surname></string-name>, <string-name><given-names>S.V.</given-names> <surname>Zhuravlev</surname></string-name>, <string-name><given-names>A.N.</given-names> <surname>Vinogradov</surname></string-name>, <string-name><given-names>E.P.</given-names> <surname>Kurshev</surname></string-name>, <article-title>Development of a high-performance system for processing oceanographic data based on distributed architecture</article-title>. <source>National Supercomputer Forum (NSCF-2017)</source>, 28.11–01.12.<year>2017</year>, Pereslavl-Zalessky, Russia.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
9. <string-name><given-names>Tianqi</given-names> <surname>Chen</surname></string-name>, <string-name><given-names>Carlos</given-names> <surname>Guestrin</surname></string-name>, <article-title>XGBoost: A Scalable Tree Boosting System</article-title>. <source>eprint arXiv:1603.02754</source>, March <year>2016</year>.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
10. <string-name><given-names>A.P.</given-names> <surname>Mashtakov</surname></string-name>, <string-name><given-names>A.A.</given-names> <surname>Ardentov</surname></string-name>, <string-name><given-names>Y.L.</given-names> <surname>Sachkov</surname></string-name>, <article-title>Parallel Algorithm and Software for Image Inpainting via Sub-Riemannian Minimizers on the Group of Rototranslations</article-title>, <source>Numerical Mathematics: Theory, Methods and Applications</source>, <volume>6</volume> (<issue>1</issue>) (<year>2013</year>) <fpage>95</fpage>–<lpage>115</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
11. <string-name><given-names>G.</given-names> <surname>Citti</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Sarti</surname></string-name>, <article-title>A cortical based model of perceptual completion in the rototranslation space</article-title>, <source>J. Math. Imaging Vis.</source>, <volume>24</volume> (<year>2006</year>) <fpage>307</fpage>–<lpage>326</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
12. <string-name><given-names>Ross</given-names> <surname>Girshick</surname></string-name>, <string-name><given-names>Jeff</given-names> <surname>Donahue</surname></string-name>, <string-name><given-names>Trevor</given-names> <surname>Darrell</surname></string-name>, <string-name><given-names>Jitendra</given-names> <surname>Malik</surname></string-name>, <article-title>Rich feature hierarchies for accurate object detection and semantic segmentation</article-title>. <source>eprint arXiv:1311.2524</source>, November <year>2013</year>.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
13. <string-name><given-names>Ross</given-names> <surname>Girshick</surname></string-name>, <article-title>Fast R-CNN</article-title>. <source>eprint arXiv:1504.08083</source>, April <year>2015</year>.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
14. <string-name><given-names>Shaoqing</given-names> <surname>Ren</surname></string-name>, <string-name><given-names>Kaiming</given-names> <surname>He</surname></string-name>, <string-name><given-names>Ross</given-names> <surname>Girshick</surname></string-name>, <string-name><given-names>Jian</given-names> <surname>Sun</surname></string-name>, <article-title>Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</article-title>. <source>eprint arXiv:1506.01497</source>, June <year>2015</year>.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
15. <string-name><given-names>J.</given-names> <surname>Redmon</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Divvala</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Girshick</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Farhadi</surname></string-name>, <article-title>You only look once: Unified, real-time object detection</article-title>. <source>In: CVPR</source> (<year>2016</year>).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
16. <string-name><given-names>Wei</given-names> <surname>Liu</surname></string-name>, <string-name><given-names>Dragomir</given-names> <surname>Anguelov</surname></string-name>, <string-name><given-names>Dumitru</given-names> <surname>Erhan</surname></string-name>, <string-name><given-names>Christian</given-names> <surname>Szegedy</surname></string-name>, <string-name><given-names>Scott</given-names> <surname>Reed</surname></string-name>, <string-name><given-names>Cheng-Yang</given-names> <surname>Fu</surname></string-name>, <string-name><given-names>Alexander C.</given-names> <surname>Berg</surname></string-name>, <article-title>SSD: Single Shot MultiBox Detector</article-title>. <source>eprint arXiv:1512.02325</source>, December <year>2015</year>.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
17. <string-name><given-names>Simon</given-names> <surname>Jégou</surname></string-name>, <string-name><given-names>Michal</given-names> <surname>Drozdzal</surname></string-name>, <string-name><given-names>David</given-names> <surname>Vázquez</surname></string-name>, <string-name><given-names>Adriana</given-names> <surname>Romero</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Bengio</surname></string-name>, <article-title>The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation</article-title> (<year>2016</year>). eprint arXiv:1611.09326.
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
18. <string-name><given-names>V.</given-names> <surname>Badrinarayanan</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Kendall</surname></string-name>, <string-name><given-names>R.</given-names> <surname>Cipolla</surname></string-name>, <article-title>SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation</article-title>, <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>, <volume>39</volume> (<issue>12</issue>) <fpage>2481</fpage>–<lpage>2495</lpage>, December <year>2017</year>.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
19. <string-name><given-names>Olaf</given-names> <surname>Ronneberger</surname></string-name>, <string-name><given-names>Philipp</given-names> <surname>Fischer</surname></string-name>, <string-name><given-names>Thomas</given-names> <surname>Brox</surname></string-name>, <article-title>U-Net: Convolutional Networks for Biomedical Image Segmentation</article-title>. <source>Medical Image Computing and Computer-Assisted Intervention (MICCAI)</source>, Springer, LNCS, <volume>9351</volume> (<year>2015</year>) <fpage>234</fpage>–<lpage>241</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
20. <string-name><given-names>Jonathan</given-names> <surname>Huang</surname></string-name>, <string-name><given-names>Vivek</given-names> <surname>Rathod</surname></string-name>, <string-name><given-names>Chen</given-names> <surname>Sun</surname></string-name>, <string-name><given-names>Menglong</given-names> <surname>Zhu</surname></string-name>, <string-name><given-names>Anoop</given-names> <surname>Korattikara</surname></string-name>, <string-name><given-names>Alireza</given-names> <surname>Fathi</surname></string-name>, <string-name><given-names>Ian</given-names> <surname>Fischer</surname></string-name>, et al., <article-title>Speed/Accuracy Trade-Offs for Modern Convolutional Object Detectors</article-title>. arXiv [cs.CV], <year>2016</year>.
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
21. <string-name><given-names>Z.</given-names> <surname>Zhang</surname></string-name>, <string-name><given-names>Q.</given-names> <surname>Liu</surname></string-name> and <string-name><given-names>Y.</given-names> <surname>Wang</surname></string-name>, <article-title>Road Extraction by Deep Residual U-Net</article-title>, <source>IEEE Geoscience and Remote Sensing Letters</source>, <volume>15</volume> (<issue>5</issue>) <fpage>749</fpage>–<lpage>753</lpage>, May <year>2018</year>.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
22. <string-name><given-names>Geoffrey</given-names> <surname>Hinton</surname></string-name>, <string-name><given-names>Oriol</given-names> <surname>Vinyals</surname></string-name>, <string-name><given-names>Jeff</given-names> <surname>Dean</surname></string-name>, <article-title>Distilling the knowledge in a neural network</article-title>, <source>Proceedings of the Deep Learning and Representation Learning Workshop</source> (<year>2014</year>).
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>