<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Objects of Interest Detection by Earth Remote Sensing Data Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andrey</forename><forename type="middle">N</forename><surname>Vinogradov</surname></persName>
							<email>vinogradov_an@rudn.university</email>
							<affiliation key="aff0">
<orgName type="department">Department of Information Technologies</orgName>
								<orgName type="institution">Peoples&apos; Friendship University of Russia (RUDN University)</orgName>
								<address>
									<addrLine>6 Miklukho-Maklaya str</addrLine>
									<postCode>117198</postCode>
									<settlement>Moscow</settlement>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Igor</forename><forename type="middle">P</forename><surname>Tishchenko</surname></persName>
							<email>igor.p.tishchenko@gmail.com</email>
							<affiliation key="aff1">
<orgName type="department">Ailamazyan Program Systems Institute of RAS (PSI RAS)</orgName>
								<address>
<addrLine>4a Petra-I st., s. Veskovo, Pereslavl district</addrLine>
									<postCode>152021</postCode>
									<settlement>Yaroslavl region</settlement>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Semion</forename><forename type="middle">V</forename><surname>Paramonov</surname></persName>
							<email>s.paramonov@gmail.com</email>
							<affiliation key="aff1">
<orgName type="department">Ailamazyan Program Systems Institute of RAS (PSI RAS)</orgName>
								<address>
<addrLine>4a Petra-I st., s. Veskovo, Pereslavl district</addrLine>
									<postCode>152021</postCode>
									<settlement>Yaroslavl region</settlement>
									<country key="RU">Russian Federation</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Samouylov</surname></persName>
						</author>
						<author>
							<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Sevastianov</surname></persName>
						</author>
						<author>
							<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Kulyabov</surname></persName>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Mathematical Modeling of High-Tech Systems&quot;</orgName>
								<orgName type="laboratory">Conference &quot;Information and Telecommunication Technologies</orgName>
								<address>
<addrLine>August 20-23, 2018</addrLine>
									<settlement>Tampere</settlement>
									<country key="FI">Finland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Objects of Interest Detection by Earth Remote Sensing Data Analysis</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">C7AFD4992FC0228B1A1E711BD5D1B8B3</idno>
					<note type="submission">Selected Papers of the 1st Workshop (Summer Session</note>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T12:11+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>fish school</term>
					<term>earth remote sensing</term>
					<term>image recognition</term>
					<term>object classification</term>
					<term>object of interest</term>
					<term>machine learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper considers the problem of detecting large (commercial) fish schools by analysing remote sensing (RS) images of the sea and ocean surface. Methods and algorithms for detecting and identifying objects of interest (OI) in high-resolution space imagery are considered. RS images of the seas and oceans are characterized by the presence of objects of various types and classes, and a classifier for the different types of OI is described. OI search methods and algorithms are also considered, whose goal is to obtain data on the most probable locations of OI in the area of analysis. The section on restoring OI boundaries describes the problem of image segmentation: splitting the image into regions corresponding to different objects in such a way that the constructed regions cover the objects of the image as accurately as possible, taking into account their complex shapes and inevitable overlaps. An OI detection and classification algorithm is presented, based on a U-net type network architecture, which can use a smaller (in comparison with others) dataset for network "learning", which is critical for the task considered in this paper.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The most important tasks of a technology for finding large (commercial) fish schools through automated processing and analysis of remote sensing (RS) data, aimed at monitoring the oceans and seas for commercial fish accumulations, were formulated in <ref type="bibr" target="#b0">[1]</ref>. For further research and development it is necessary to build a complete training/test dataset of fish school images. Because real RS images containing the objects of interest (OI), i.e. fish schools, are scarce, a sufficient number of artificially synthesized images, in which OI appear in various forms, must be generated. Another important task is to identify the areas where OI are most likely located. This task can be solved in two ways. The first is to search for sea/ocean areas with favorable oceanographic and meteorological conditions using low-resolution RS images, and then to request high-resolution space imagery of those areas. The second is to analyze the movements of fishing vessels: during fishing, vessels of various types perform characteristic maneuvers, and this activity can be detected from AIS data and then used to select areas whose high-resolution RS images are analyzed for OI. It would also be interesting to apply approaches from adjacent areas of data analysis, for example dynamic scaling <ref type="bibr" target="#b1">[2]</ref> or queueing theory methods <ref type="bibr" target="#b2">[3]</ref>. Solving all of these problems requires processing large RS datasets, which demands significant computing resources. RS data lends itself to parallel processing <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>; this task therefore requires a special software and hardware complex that allows massively parallel data processing. To this end, an experimental sample of the RS data processing complex <ref type="bibr" target="#b5">[6]</ref> has been developed.</p><p>The space vehicles (satellites) equipped with high-resolution instruments that have appeared in the last 10 years provide high-quality RS images. A spatial resolution finer than 1-2 m per pixel makes it possible to search for and identify relatively small objects (meters to tens of meters in size). Given that typical commercial fish schools near the ocean or sea surface (so-called pelagic fish schools) range from 5-10 meters to 150-200 meters in size, they appear on high-resolution RS images as detectable and identifiable objects <ref type="bibr" target="#b6">[7]</ref>.</p><p>In the process of monitoring fishing areas, both the collected data and the processing results are geocoded and can therefore be aggregated in a single geospatial database. Characteristically, the geospatial data processing and analysis technologies developed in recent years rest on a qualitative transition from arrays of numerical characteristics to geospatial objects with both geographic and temporal dynamics. A convenient user tool for accessing and managing such a geospatial dataset is a specialized GIS <ref type="bibr" target="#b7">[8]</ref>, which provides data sampling requests, analysis, editing, visualization, modeling, etc. The key element of the monitoring implementation is the OI classifier, which makes it possible to identify commercial fish schools effectively by analysing RS data.</p><p>A set of methods and algorithms for OI searching, detection, classification and identification is a key component of the processing of historical and operational RS data. The result of applying these methods and algorithms sequentially is information about the presence of OI in a pre-designated search area, together with their geographical coordinates and characteristics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Main section</head><p>The initial data for the entire processing pipeline are: -Data on the search area (coordinates of the vertices of the polygon that bounds the part of the fishing area to be analysed); -Time range (start and end date and time, defining the time interval); -Historical oceanographic and meteorological data, used to estimate the probability of finding a commercial fish school of a designated type; -Operational data on the movements of fishing vessels in the given sea area.</p><p>Data processing is performed sequentially, since each stage uses the results of the methods and algorithms of the previous stage.</p></div>
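The four inputs listed above can be gathered in a single container; the sketch below is illustrative only, and the class and field names (`SearchRequest`, `polygon`, `fish_type`, etc.) are our assumptions, not from the paper.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List, Tuple

@dataclass
class SearchRequest:
    """Hypothetical container for one OI search run (names are ours)."""
    polygon: List[Tuple[float, float]]   # lat/lon vertices bounding the fishing area
    time_start: datetime                 # start of the analysed time interval
    time_end: datetime                   # end of the analysed time interval
    fish_type: str                       # designated commercial fish school type

# Example request for a small Barents-Sea-like square (made-up coordinates).
req = SearchRequest(
    polygon=[(69.0, 33.0), (69.0, 35.0), (70.0, 35.0), (70.0, 33.0)],
    time_start=datetime(2018, 8, 20),
    time_end=datetime(2018, 8, 23),
    fish_type="pelagic",
)
```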
<div xmlns="http://www.tei-c.org/ns/1.0"><head>OI Searching</head><p>The purpose of the OI search methods and algorithms is to obtain data on the most probable locations of OI in the area of analysis. As a result of their operation, the coordinates of fragments of marine areas are obtained, for which high-resolution RS data are then requested.</p><p>Two main methods and related algorithms are used when searching for OI:</p><p>-OI searching based on oceanographic and meteorological characteristics; -OI searching based on fishing vessel activity.</p><p>The OI search on oceanographic and meteorological characteristics (a preliminary search for areas with a high probability of containing OI) is assumed to proceed as follows. There is a certain number of zones (in our case, the squares of the explored areas), each of which is assigned a tag vector (a set of numerical characteristics containing oceanographic and meteorological parameters). Each square belongs either to class "0" (it cannot serve as a place where a fish school appears) or to class "1" (it can).</p><p>This is a typical classification problem. The XGBoost classification algorithm from the "boosted trees" family is used. Over the past 1-2 years this algorithm has been widely adopted because of its high efficiency: according to many studies, on a wide variety of data it most often shows the highest quality scores in classification tasks (with 2 or more classes), in particular the smallest classification error under the AUC-ROC estimate (area under the ROC curve).</p><p>XGBoost is based on a procedure of sequentially constructing a composition of classification-tree algorithms. The questions of its software implementation have been studied in sufficient detail and are not considered here; a detailed description of the algorithm and its implementation can be found in <ref type="bibr" target="#b8">[9]</ref>.</p><p>The algorithm is trained (that is, the values of the main parameters of the XGBoost classifier are computed) on pre-collected historical data for the search area. When analysing the current situation, the algorithm receives up-to-date data on the search area and classifies the squares of the explored sea areas, assigning them the values 0/1.</p><p>Such an algorithm makes it possible to select areas that are promising for more detailed analysis, in particular with sophisticated segmentation, classification and detection techniques.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Detecting objects on RS images</head><p>In the field of image processing methods and algorithms, object detection is among the most pressing tasks in view of its wide applicability, which is why the development of detection algorithms and methods already spans several decades. The classic setting of the detection task is the processing of a visual scene fixed as a digital image (data array), in which one or many objects appear against some background; objects may also be absent.</p><p>In the vast majority of cases, objects of several different types can appear in the image. Object detection can then be performed simultaneously with their classification. When implementing simple detection, all types of objects that must be recognized can be combined into one class.</p><p>In this form, the detection task is to recognize, with a certain probability, the presence of an object of a given type in the image and to predict its position as a corresponding bounding box. The object can lie anywhere in the image and can have any size (scale). In some cases (as in the problem solved in this study), additional image processing may be required for segmentation and detection of object boundaries.</p></div>
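The bounding-box formulation above is usually scored with the standard intersection-over-union overlap between a predicted and a true box; a minimal sketch (the function name is ours):

```python
def box_iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

A prediction is typically counted as correct when this overlap exceeds a threshold such as 0.5.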
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Image segmentation</head><p>The task of image segmentation is, generally speaking, more complex than object detection. Segmentation means dividing an image into regions corresponding to different objects. The constructed regions must cover the objects of the image as accurately as possible, taking into account their complex shapes and inevitable overlaps.</p><p>Images obtained by remote sensing of the oceans and seas are characterized by the presence of objects of various nature. Such objects can be: atmospheric fronts and clouds; condensation trails of aircraft in the atmosphere; zones of water surface disturbance; zones of ice accumulation in arctic latitudes; elements of the seabed relief; drifting phytoplankton and zooplankton; commercial fish schools; oil stains; zones of fishing vessel activity, etc.</p><p>The images of such objects are regions characterized by certain textural features and fuzzy, blurred boundaries. Segmentation is further complicated by the fact that the image is effectively multi-layered, that is, objects of interest overlap. For example, the image of a fish accumulation in a photograph can be partly covered by cloud shadow and, at the same time, superimposed on algae and underwater relief elements visible from the air (see Figure <ref type="figure" target="#fig_0">1</ref>). When large OI are considered, one more technical circumstance makes it difficult to apply known methods of detection, segmentation, and classification directly: in high-resolution imagery, one OI can span several frames, some of which may be unavailable for some reason (for example, the boundary of the shooting area is reached, or the frame is damaged). In such cases it is useful to try to restore the shape of the OI boundary from the available information.</p><p>The method of classifying water-area squares, described in the OI Searching section, gives a preliminary prediction of the presence of fish accumulations, but a more accurate analysis requires deep image processing. In addition, several different types of objects can be present in one image, and for successful detection they must first be separated from one another, with the object boundaries determined accurately.</p><p>These circumstances significantly limit the applicability of the following well-known image segmentation methods.</p><p>1. Methods based on clustering of image points; methods based on color and brightness histograms with threshold selection; the "watershed" method. These methods are poorly applicable to the problem under consideration because the images of objects of interest overlap. 2. Methods based on graph models: conditional random fields; Markov random fields. Such methods are able to model overlapping objects of interest, but require a large labeled training sample containing objects of interest of various types.</p><p>Therefore, an image segmentation method based on constructing the boundaries of the sought regions of objects of interest was chosen. This approach also constructs the boundary of an object of interest more accurately, making it possible to use the shape of the boundary as one of the features for classifying objects of interest. The restored boundary of an OI region also makes it possible to estimate the size of this region, and thus the amount of the resource reserve.</p></div>
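To make the first family of methods concrete, here is a naive threshold-plus-connected-components segmenter in pure numpy (a sketch with our own function name). As noted above, exactly this kind of method fails when objects of interest overlap, because touching regions merge into one component.

```python
import numpy as np
from collections import deque

def threshold_segments(img, thr):
    """Naive segmentation: threshold the image, then label 4-connected
    components of the resulting binary mask with a BFS flood fill."""
    mask = img > thr
    labels = np.zeros(img.shape, dtype=int)
    current = 0
    for start in zip(*np.nonzero(mask)):
        if labels[start]:
            continue                      # pixel already labeled
        current += 1
        labels[start] = current
        q = deque([start])
        while q:
            r, c = q.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < mask.shape[0] and 0 <= nc < mask.shape[1]
                        and mask[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = current
                    q.append((nr, nc))
    return labels, current
```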
<div xmlns="http://www.tei-c.org/ns/1.0"><head>OI Classification</head><p>OI classification is understood as the assignment of image fragments, obtained through the detection and segmentation procedures, to one of a set of predefined types.</p><p>As shown in Table <ref type="table">1</ref>, more than 10 varieties of OI can be identified in the subject domain, each with its own identifying features.</p><p>Machine learning detection methods, for example those based on convolutional neural networks or conditional random fields, can describe the desired regions as sets of rectangles bounding the images of the sought objects. However, for a full solution of the problem posed in this study it is important not only to identify the rectangular area containing the OI, but also to determine the boundaries of the presumed objects accurately, since the shape of the object boundary is in some cases an important identifying feature used to assign the object of interest to a particular type. Among the objects under consideration, the photographs show not only the fish accumulations of primary interest but also other objects, and in many cases the classes of objects can be distinguished precisely by the shape of their boundaries.</p><p>For example, Figure <ref type="figure">2</ref> shows an accumulation of commercial fish and a zone of phytoplankton development. It can be seen that the boundaries of fish accumulations are rather smooth and have only a few singular angular points, while the boundary of a typical phytoplankton development zone is much more complex: it has more singular points and a higher average curvature.</p><p>In addition, the boundary of an object of interest is an identification feature that serves to determine its unique characteristics (area, estimated volume of reserves, etc.).</p></div>
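One simple boundary-shape feature in the spirit of the "singular angular points" observation above is the number of sharp turns along a closed contour; a smooth fish-school contour should score low, a ragged phytoplankton contour high. This is an illustrative feature of our own, not the paper's actual classifier input.

```python
import numpy as np

def sharp_corner_count(boundary, angle_thr_deg=45.0):
    """Count vertices of a closed polygon whose turning angle (between the
    incoming and outgoing edge vectors) exceeds a threshold in degrees."""
    pts = np.asarray(boundary, dtype=float)
    n = len(pts)
    count = 0
    for i in range(n):
        v1 = pts[i] - pts[i - 1]            # incoming edge
        v2 = pts[(i + 1) % n] - pts[i]      # outgoing edge
        cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle > angle_thr_deg:
            count += 1
    return count
```

A square boundary has four 90° turns, so it scores 4; a near-circular polygon with many small turns scores 0 at this threshold.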
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Restore the OI boundaries</head><p>We have shown above the role of constructing the boundaries of objects of interest in the image segmentation procedures and in extracting features for subsequent classification.</p><p>For the initial delineation of boundaries, the well-known Gabor, Canny, and Sobel operators are applied to the image. After this procedure, a system of lines appears in the image along the boundaries of the various objects of interest. As a rule, these lines intersect and break off. Figure <ref type="figure">3</ref> shows a snapshot of an oil spill that was "cut" by a passing vessel, together with the boundaries identified after some filtering. Figure <ref type="figure" target="#fig_2">4</ref> shows a snapshot of a phytoplankton development area in a section of the Barents Sea with cloud cover. To handle such situations successfully, it is necessary to trace the boundary of an object of interest when the image of another object and its boundaries are superimposed on it. This reduces to the task of reconstructing interrupted curves in the image.</p><p>To solve it, an approach developed for recovering damaged images was used <ref type="bibr" target="#b9">[10]</ref>, which, in addition to the obtained sections of smoothed curves, also takes into account the original image itself, improving the quality of the solution. The proposed method is universal: it works both with a flat image and with a spherical one, i.e. one defined in a region on a sphere of sufficiently large radius.</p><p>The apparatus of geometric control theory and sub-Riemannian geometry is used <ref type="bibr" target="#b10">[11]</ref>. A corresponding mathematical model is constructed, and a neurophysiological motivation for using exactly such a model is given.</p></div>
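The Sobel operator named above can be written directly in numpy as a pair of 3×3 convolutions followed by a gradient magnitude; a deliberately naive sketch (real pipelines use optimized library implementations such as OpenCV's):

```python
import numpy as np

def sobel_edges(img):
    """Sobel gradient magnitude of a 2-D grayscale image (valid region only)."""
    kx = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]], dtype=float)   # horizontal-gradient kernel
    ky = kx.T                                  # vertical-gradient kernel
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for r in range(h - 2):
        for c in range(w - 2):
            patch = img[r:r + 3, c:c + 3]
            gx[r, c] = (patch * kx).sum()
            gy[r, c] = (patch * ky).sum()
    return np.hypot(gx, gy)                    # edge strength per pixel
```

On a vertical brightness step the response peaks along the step and vanishes in flat areas, which is exactly the "system of lines along boundaries" the text describes.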
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Algorithm for detecting and classifying objects in images</head><p>In present-day object recognition, the classification characteristics of an object are the statistical features formed at the output of a convolutional neural network (CNN) processing the image. Let us consider a method for detecting and identifying OI in images using a CNN.</p><p>Let a CNN process some image and extract a set of statistical characteristics (feature maps). The set of obtained maps is compared with an available set of reference feature maps for all types of OI. The comparison is performed by a classification algorithm, usually also based on a neural network. The result is a set of probabilities that the processed image belongs to each of the OI types; the object class is then determined by the greatest of these probabilities.</p><p>If it is known that an OI is (with high probability) present in the processed image, then solving the detection task requires obtaining the coordinates of the OI within the image as a fragment completely containing the object; the boundaries of the fragment form a so-called bounding box or an object cover mask.</p><p>It should be noted that both detection and classification of an object in an image can rely on the same identification features, but detection additionally requires localizing those features in the image coordinate system.</p><p>To avoid repeatedly extracting feature maps when the CNN processes images, recent research and development aims at algorithms that perform detection and classification of objects simultaneously. For such a problem the term "semantic segmentation" is used, among others: when processing an image, its pixels are assigned to one of the classes of interest (or to the background); a group of pixels of one class forms a mask that identifies the object(s) of that class.</p><p>A number of algorithms based on this principle have been analyzed recently. They can be conditionally divided into two main groups: a) Algorithms based on the formation of "proposed regions", such as Regions with CNNs (R-CNN) <ref type="bibr" target="#b11">[12]</ref>; Fast R-CNN <ref type="bibr" target="#b12">[13]</ref>; Faster R-CNN <ref type="bibr" target="#b13">[14]</ref>; YOLO <ref type="bibr" target="#b14">[15]</ref>; SSD (Single Shot Detector) <ref type="bibr" target="#b15">[16]</ref>. b) Algorithms based on the encoder-decoder architecture, such as DenseNet <ref type="bibr" target="#b17">[17]</ref>; SegNet <ref type="bibr" target="#b18">[18]</ref>; U-net <ref type="bibr" target="#b19">[19]</ref>.</p><p>When analysing these algorithms for possible use in this development, the results of tests performed on the same type of computing equipment on the single PASCAL VOC test image set were used. The comparative criteria included the following indicators <ref type="bibr" target="#b20">[20]</ref>: -Network training time; -Time for searching and detecting objects on the test dataset; -Object mask prediction accuracy; -Accuracy of object class determination; -CPU load; -Graphics accelerator load; -Memory usage.</p><p>Examples of applying these algorithms to applied image processing problems, or to problems with similar data characteristics, were also considered.</p><p>As a result of the analysis, the following conclusions were drawn: -On standard datasets of the PASCAL VOC type (according to the testing data given in the literature or presented by the authors of the algorithms), the considered algorithms show close accuracy indicators (92-97% in object classification, 85-95% in object mask accuracy). -The closest application is the U-net type of network architecture, which is often used for segmenting objects with a fuzzy outline on an uneven background, for example in scans of human organs and in RS data processing <ref type="bibr" target="#b21">[21]</ref>, etc. -The algorithm based on the U-net network architecture is able to use a smaller dataset for network "learning" than the others, which is critical for the task considered in this paper. 
The OI detection and classification algorithm is presented in the following description:</p><p>1: Form 𝑁 × 𝑚 model images; 2: Generate markup of object positions in the images; 3: Set the required pixel classification accuracy; 4: Cycle over the 𝑁 × 𝑚 model images; 5: Get 𝑘 masks of image pixel classification (probabilities of each of the 𝑘 classes, in the range from 0 to 1); 6: Get 𝑘 binary masks of pixels belonging to each class by the pixel-classification-accuracy criterion (a pixel belonging to the class takes the value 1); 7: Group all adjacent pixels belonging to the same class into clusters; 8: Select each cluster of same-class pixels as a separate mask of a detected object of that class; 9: Calculate the accuracy of the object mask against the specified markup; 10: Calculate the accuracy of classification of the objects selected by the masks; 11: End of cycle; 12: Remember the CNN (convolutional neural network) state parameters as a set of values 𝑊_𝑑𝑐; 13: Return 𝑊_𝑑𝑐.</p><p>To carry out preliminary testing of this algorithm and assess its applicability within the problem under consideration, a software implementation was created with the following features. As the encoder part of the U-net network, it is suggested to use an implementation of the VGG-19 architecture discussed earlier, pre-trained on a large sample of objects from the standard ImageNet dataset. The presence of a "pre-trained model" significantly shortens the process of configuring the network for the task at hand.</p><p>During the setup, i.e. additional training of the network, its parameters are tuned to the typical objects of the given application task (in our case, objects of interest on the sea surface). This procedure is called "distillation" (knowledge transfer) <ref type="bibr" target="#b22">[22]</ref>.</p><p>As training data, the satellite images of objects of 4 classes on the sea surface that are currently available in small numbers were used: fish school; algae/plankton; pollution; empty sea surface without objects; together with generated synthetic images of similar objects. To enlarge the image sample, a so-called "augmentation" procedure was applied to each image: modification, resizing and rotation by a random angle. As a result, the number of images for each class reached 100.</p><p>In the algorithm "learning" process, 90% of the images (90 per class) were used for training and 10 images per class for testing. The following metrics were used to assess the quality of the algorithm. To assess the accuracy of object classification in the image, the F-measure (F1 score) was used, defined as 𝐹1 = 2 · 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 · 𝑟𝑒𝑐𝑎𝑙𝑙 / (𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 + 𝑟𝑒𝑐𝑎𝑙𝑙). To estimate the segmentation accuracy of an object (mask overlay), the intersection-over-union metric IoU (the Jaccard index) at a detection threshold of 0.5 was used: 𝐼𝑜𝑈 = 𝑡𝑟𝑢𝑒_𝑝𝑜𝑠𝑖𝑡𝑖𝑣𝑒 / (𝑡𝑟𝑢𝑒_𝑝𝑜𝑠𝑖𝑡𝑖𝑣𝑒 + 𝑓𝑎𝑙𝑠𝑒_𝑛𝑒𝑔𝑎𝑡𝑖𝑣𝑒 + 𝑓𝑎𝑙𝑠𝑒_𝑝𝑜𝑠𝑖𝑡𝑖𝑣𝑒). It should be noted that at this stage of the assessment, given the available sample size, there is no point in justifying and refining the parameters of these metrics, and they are used in their most general form.</p><p>The results of the assessment test are shown in Table <ref type="table" target="#tab_1">2</ref>. These estimates are, understandably, not yet satisfactory for applying the obtained implementation directly to practical problems, but already at the "after-training" stage an increase in the quality of the algorithm was obtained on a very small dataset.</p><p>Figure <ref type="figure" target="#fig_5">5</ref> presents the object masks obtained after additional network training on the available dataset, with subsequent testing on several test cases. The mask of an object is constructed according to the standard principle: from the final data array containing, for each pixel, the probability of belonging to a given class, the values exceeding a given threshold (0.5 in the current evaluation experiments) are selected. Pixels assigned to a given class are highlighted in color.</p><p>In the future, the required quality indicators of the algorithm are expected to be achieved by improving its software implementation and, significantly, by increasing the amount of training data.</p></div>
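The 0.5 probability threshold and the two quality metrics used above (pixel-mask IoU and F1) can be sketched in a few lines of numpy; the function name and the toy arrays are ours.

```python
import numpy as np

def mask_metrics(prob_map, truth, thr=0.5):
    """Threshold a per-pixel class-probability map at `thr`, then compute
    IoU = TP / (TP + FN + FP) and F1 = 2PR / (P + R) against a binary
    ground-truth mask, as in the formulas above."""
    pred = prob_map > thr
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    iou = tp / (tp + fn + fp)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return iou, f1
```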
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Conclusions</head><p>Within the scope of the task of searching for and detecting OI using RS data of the oceans and seas, a subtask of constructing the boundaries of OI areas was devised. The following results were obtained.</p><p>An analysis of OI areas in RS images of the oceans and seas has been carried out. The main features characterizing the OI are revealed, and the values of these characteristics for different types of OI are determined.</p><p>For cases where different OI intersect in one image, a method for determining the OI boundaries is considered, based on reconstructing curves on a spherical image.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 .</head><label>1</label><figDesc>Figure 1. Pelagic fish school: a formation in the near-surface layer differing in color and texture</figDesc><graphic coords="4,141.44,296.23,128.14,149.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 .Figure 3 .</head><label>23</label><figDesc>Figure 2. Commercial fish school and the phytoplankton growth zone</figDesc><graphic coords="7,47.13,56.41,131.84,99.22" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 .</head><label>4</label><figDesc>Figure 4. Commercial fish school and the phytoplankton growth zone</figDesc><graphic coords="7,47.13,347.21,158.58,109.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head></head><label></label><figDesc>(a) Basic OI class (fish school) (b) "Seaweeds" OI class (c) "Pollution" OI class</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5 .</head><label>5</label><figDesc>Figure 5. Objects of several classes mask building demonstration</figDesc><graphic coords="11,93.39,279.64,109.89,109.89" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Results of the evaluation test:</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_0">ITTMM-WSS-2018</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" xml:id="foot_1">Vinogradov Andrey N., Tishchenko Igor P., Paramonov Semion V.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The publication has been prepared with the support of the "RUDN University Program 5-100". The work is partially supported by state program 0077-2016-0002 «Research and development of machine learning methods for the anomalies detection».</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Methods and tools for the analysis of remote sensing data of the marine environment for the commercial fish schools detection</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Vinogradov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Kurshev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Paramonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Belov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the VIII All-Russian Scientific and Technical Conference &quot;Actual Problems of AeroSpace Engineering and Information Technologies</title>
				<meeting>the VIII All-Russian Scientific and Technical Conference &quot;Actual Problems of AeroSpace Engineering and Information Technologies<address><addrLine>Moscow</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016-06-03">1-3 June 2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Analysis of Cumulative Distribution Function of the Response Time in Cloud Computing Systems with Dynamic Scaling</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">S</forename><surname>Sopin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Gorbunova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">V</forename><surname>Gaidamaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">R</forename><surname>Zaripova</surname></persName>
		</author>
		<idno type="DOI">10.3103/S0146411618010066</idno>
	</analytic>
	<monogr>
		<title level="j">Automatic Control and Computer Sciences</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="60" to="66" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Comparison of polling disciplines when analyzing waiting time for signaling message processing at SIP-server</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Gaidamaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Zaripova</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-25861-4_30</idno>
	</analytic>
	<monogr>
		<title level="j">Communications in Computer and Information Science</title>
		<imprint>
			<biblScope unit="volume">564</biblScope>
			<biblScope unit="page" from="358" to="372" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Concept of Distributed Processing System of Image Flow</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kondratyev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tishchenko</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-31293-4_38</idno>
	</analytic>
	<monogr>
		<title level="m">Results from the 4th International Conference on Robot Intelligence Technology and Applications</title>
				<editor>
			<persName><forename type="first">J.-H</forename><surname>Kim</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Karray</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Jo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Sincak</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Myung</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2016">RiTA2015. 2016</date>
			<biblScope unit="volume">447</biblScope>
			<biblScope unit="page" from="479" to="487" />
		</imprint>
	</monogr>
	<note>Advances in Intelligent Systems and Computing</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Concept of Distributed Processing System of Images Flow in Terms of Pi-Calculus</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kondratyev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Tishchenko</surname></persName>
		</author>
		<idno type="DOI">10.1109/FRUCT-ISPIT.2016.7561518</idno>
		<ptr target="http://ieeexplore.ieee.org/document/7561518/" />
	</analytic>
	<monogr>
		<title level="m">18th Conference of Open Innovations Association and Seminar on Information Security and Protection of Information Technology (FRUCT-ISPIT)</title>
				<meeting><address><addrLine>St. Petersburg</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="131" to="137" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Creation of a series of photorealistic models of digital space images of the sea surface and objects under its surface using high-performance computing platforms</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Paramonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Pesotsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Vinogradov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Kurshev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of V National Supercomputer Forum</title>
				<meeting>V National Supercomputer Forum<address><addrLine>Russia</addrLine></address></meeting>
		<imprint>
			<publisher>Pereslavl-Zalessky</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page">12</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Perspectives of RS data application in the tasks of fishing intensification</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Belov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Paramonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Vinogradov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Kurshev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of 17th International Scientific and Technical Conference &quot;FROM PICTURE TO DIGITAL REALITY: RS and photogrammetry</title>
				<meeting>17th International Scientific and Technical Conference &quot;FROM PICTURE TO DIGITAL REALITY: RS and photogrammetry<address><addrLine>Hadera, Israel</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">October 16-19, 2017</date>
			<biblScope unit="page" from="36" to="40" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Development of a high-performance system for processing oceanographic data based on distributed architecture</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Paramonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">V</forename><surname>Zhuravlev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Vinogradov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">P</forename><surname>Kurshev</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">National Supercomputer Forum</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page">12</biblScope>
			<date type="published" when="2017">2017</date>
			<publisher>Pereslavl-Zalessky</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">XGBoost: A Scalable Tree Boosting System</title>
		<author>
			<persName><forename type="first">Tianqi</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Carlos</forename><surname>Guestrin</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1603.02754</idno>
		<imprint>
			<date type="published" when="2016-03">March 2016</date>
		</imprint>
	</monogr>
	<note type="report_type">eprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Parallel Algorithm and Software for Image Inpainting via Sub-Riemannian Minimizers on the Group of Rototranslations</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Mashtakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Ardentov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">L</forename><surname>Sachkov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Numerical Mathematics: Theory, Methods and Applications</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="95" to="115" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A cortical based model of perceptual completion in the rototranslation Space</title>
		<author>
			<persName><forename type="first">G</forename><surname>Citti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sarti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Math. Imaging Vis</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="307" to="326" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Rich feature hierarchies for accurate object detection and semantic segmentation</title>
		<author>
			<persName><forename type="first">Ross</forename><surname>Girshick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><surname>Donahue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Trevor</forename><surname>Darrell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jitendra</forename><surname>Malik</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1311.2524</idno>
		<imprint>
			<date type="published" when="2013-11">November 2013</date>
		</imprint>
	</monogr>
	<note type="report_type">eprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Fast R-CNN</title>
		<author>
			<persName><forename type="first">Ross</forename><surname>Girshick</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1504.08083</idno>
		<imprint>
			<date type="published" when="2015-04">April 2015</date>
		</imprint>
	</monogr>
	<note type="report_type">eprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks</title>
		<author>
			<persName><forename type="first">Shaoqing</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kaiming</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ross</forename><surname>Girshick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jian</forename><surname>Sun</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1506.01497</idno>
		<imprint>
			<date type="published" when="2015-06">June 2015</date>
		</imprint>
	</monogr>
	<note type="report_type">eprint</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">You only look once: Unified, real-time object detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Redmon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Divvala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Girshick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Farhadi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CVPR</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">SSD: Single Shot MultiBox Detector</title>
		<author>
			<persName><forename type="first">Wei</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dragomir</forename><surname>Anguelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dumitru</forename><surname>Erhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Christian</forename><surname>Szegedy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Scott</forename><surname>Reed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cheng-Yang</forename><surname>Fu</surname></persName>
		</author>
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title/>
		<author>
			<persName><forename type="first">Alexander</forename><forename type="middle">C</forename><surname>Berg</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1512.02325</idno>
		<imprint>
			<date type="published" when="2015-12">December 2015</date>
		</imprint>
	</monogr>
	<note type="report_type">eprint</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">The One Hundred Layers Tiramisu: Fully Convolutional DenseNets for Semantic Segmentation</title>
		<author>
			<persName><forename type="first">Simon</forename><surname>Jégou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michal</forename><surname>Drozdzal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">David</forename><surname>Vázquez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adriana</forename><surname>Romero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshua</forename><surname>Bengio</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1611.09326</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation</title>
		<author>
			<persName><forename type="first">V</forename><surname>Badrinarayanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kendall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cipolla</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="2481" to="2495" />
			<date type="published" when="2017-12">December 2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">U-Net: Convolutional Networks for Biomedical Image Segmentation</title>
	</analytic>
	<monogr>
		<title level="m">Medical Image Computing and Computer-Assisted Intervention (MICCAI)</title>
				<editor>
			<persName><forename type="first">Olaf</forename><surname>Ronneberger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Philipp</forename><surname>Fischer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Thomas</forename><surname>Brox</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2015">2015</date>
			<biblScope unit="volume">9351</biblScope>
			<biblScope unit="page" from="234" to="241" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Speed/accuracy Trade-Offs for Modern Convolutional Object Detectors</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vivek</forename><surname>Rathod</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chen</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Menglong</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anoop</forename><surname>Korattikara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alireza</forename><surname>Fathi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ian</forename><surname>Fischer</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv</note>
	<note>cs.CV</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Road Extraction by Deep Residual U-Net</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Geoscience and Remote Sensing Letters</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="749" to="753" />
			<date type="published" when="2018-05">May 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Distilling the knowledge in a neural network</title>
		<author>
			<persName><forename type="first">Geoffrey</forename><surname>Hinton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oriol</forename><surname>Vinyals</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jeff</forename><surname>Dean</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Deep Learning and Representation Learning Workshop</title>
				<meeting>the Deep Learning and Representation Learning Workshop</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
