<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Low-Dimensional Representations in Generative Self-Learning Models</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Serge</forename><surname>Dolgikh</surname></persName>
							<email>sdolgikh@nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">National Aviation University</orgName>
								<address>
									<postCode>02000</postCode>
									<settlement>Kyiv, Ukraine</settlement>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Low-Dimensional Representations in Generative Self-Learning Models</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">94BA92C66764A6EA378DDC72B2B0D7EA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:53+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Informative representations play an important role in learning and intelligence. We analyzed distributions of image classes in low-dimensional representations created by a class of deep autoencoder neural network models in unsupervised learning. The representations of real aerial images were shown to contain higher-level concept structures such as low-dimensional surfaces and higher-density clusters that form as a result of unsupervised training with minimization of the generative error. The compact and well-defined character of some distributions was demonstrated, with a positive correlation between the categorization performance of the model and its classification accuracy. The results provide direct empirical support for the connection between unsupervised learning in models with self-encoding and regeneration and the categorization of native concepts in the representations. 1 Introduction: Unsupervised Representations The study of unsupervised representations, with the intent to identify and separate the most informative components in general data, has a long history in machine learning. Unsupervised hierarchical representations created with models like Restricted Boltzmann Machines (RBM), Deep Belief Networks (DBN) [1, 2], and different types of autoencoder models [3] proved to be efficient and improved the accuracy of subsequent classification [4]. The deep relationship between the training of intelligent models and statistical principles such as the minimization of free energy was studied in [5, 6] and other works, leading to the understanding that common methods of training, such as gradient descent in deep neural networks and Contrastive Divergence in DBN, generally produce configurations compatible with the principles of minimization of free energy and variational Bayesian inference. 
On the experimental side, interesting effects of spontaneous high-level concept sensitivity in unsupervised deep neural network models were observed in a number of works. The Google Lab team [7] observed an intriguing effect of spontaneous formation of concept-sensitive neurons, activated by images in a certain higher-level category, in a massive deep and sparse autoencoder neural network model trained in an entirely unsupervised process, without any exposure to ground truth, with very large arrays of YouTube images.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In <ref type="bibr" target="#b7">[8]</ref> a spontaneous formation of grid-like cells, similar to those observed in mammals, was detected in a recurrent neural network with deep reinforcement learning. Higher-level concept-related structures were observed in the representations of deep autoencoder models with strong redundancy reduction, with data representing raw Internet traffic in large public telecommunications networks, in <ref type="bibr" target="#b8">[9]</ref>. The results demonstrated that the density structure that emerges in the representations of such models as a result of unsupervised training with minimization of the generative error can be used in an iterative approach to training artificial learning systems that offers higher flexibility and considerably lower ground truth requirements compared to common methods. Representations of deep variational autoencoder models were studied in <ref type="bibr" target="#b9">[10]</ref>, demonstrating effective disentangled representations with data of several different types in entirely unsupervised learning under the constraints of redundancy reduction. These and a number of further results <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref> may suggest that certain neural networks, whether artificial or biological, in the process of unsupervised learning with an incentive to improve the quality of regeneration of the observable data, may naturally structure information by characteristics of similarity in their representations, thus identifying certain natural or native concepts that can perhaps be correlated with higher-level concepts in the observable data. 
Based on this observation, the hypothesis investigated in this work is that the natural structure in the representations created by certain unsupervised models in self-supervised learning with minimization of the generative error can be correlated with higher-level concepts in the input data, and that this relationship can be used in developing approaches to flexible and iterative learning in environments where prior domain knowledge is scarce or not available. In this study we follow the line of research outlined in <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b8">9]</ref> by first creating a compact representation of the observable dataset with a deep self-encoding neural network model (a two-stage stacked autoencoder), then analysing the parameters of the distributions of the higher-level concepts of the dataset in the representation created in unsupervised training. But unlike <ref type="bibr" target="#b6">[7]</ref>, which investigated single-neuron, that is, essentially one-dimensional, representations and distributions of concepts (Figure <ref type="figure">1</ref>: effective activation of a concept-sensitive neuron, based on <ref type="bibr" target="#b6">[7]</ref>), the design of the models in this study, with physical constraints on the dimensionality of the representation layer, created low-dimensional representations that allowed improving the resolution of the learned concepts from "better than random" across an arbitrarily selected pre-known range of higher-level concepts in <ref type="bibr" target="#b6">[7]</ref> to "better than random" binary and, in a number of cases, "confident" binary classification per concept that was not known previously. 
It is thought that these results can be of interest to the research community in unsupervised learning and self-learning systems because, as some recent studies indicate <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>, similar low-dimensional representations with only a small number of active neurons can play an important role in the sensory networks of biological systems, such as visual and olfactory processing; as well, the connection between unsupervised learning and concept structures in the representations may suggest approaches to self-learning that would be common to biological and artificial systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Methods</head><p>The model used in the study is a stacked two-stage autoencoder with strong physical compression in the layer of the final representation. This choice was based on the earlier cited results as well as some strong arguments in favor of neural network models based on generative self-learning being good candidates for producing effective unsupervised representations. As universal approximators <ref type="bibr" target="#b14">[15]</ref>, feedforward neural networks have virtually unlimited versatility and are well suited to model complex data types. Not least, deep neural networks are widely present in biological systems, which are also highly successful in self-learning with minimal data <ref type="bibr" target="#b15">[16]</ref>. The data was represented by a dataset of raw images obtained in aerial observation of terrain, as described in this section.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Deep Stacked Autoencoder Model</head><p>The diagram of the model is given in Figure <ref type="figure">2</ref>. The model produced two stages of representations of unprocessed aerial image data. The encoder of the first stage was a convolutional-pooling autoencoder that produced a numerical representation of dimension 576 from color images with dimensions (32,32) to (128,128). The aim of this stage was to acquire higher-scale features in the images via a sequence of convolution-pooling stages. The resulting numerical representation was used as the input to the second-stage autoencoder with a strong reduction of the physical dimensionality of the representation layer. The dimension of the representation was chosen based on a principal component analysis of the numerical representation of the first stage, which revealed three components with a combined explained variance of over 0.95. Hence, the maximum compression of information achieved in the representation layer of the model (Figure <ref type="figure">2</ref>: stacked autoencoder model with physical redundancy reduction) was approximately 16,000, from input images in the first stage to the final representation. A certain advantage of the studied models is that they allow measuring and visualizing the distributions created in unsupervised training directly from the central layer of the latent representation. In feed-forward neural networks, the accuracy of regeneration of the input data combined with significant compression in the layer of representation means that the latent representation has retained significant essential information about the original distribution, and observing it directly may yield valuable insights about the character of the concept distributions in the observable data. The models were implemented with Keras/TensorFlow <ref type="bibr" target="#b16">[17]</ref>. 
For measurement and visualization of distributions we used common libraries and packages such as scikit-learn, numpy, matplotlib and others. Models were trained in an unsupervised autoencoder mode to achieve good reproduction of the inputs, measured by a cost function such as the Mean Squared Error (MSE). Several criteria of the effectiveness of unsupervised training were used, such as monitoring the cost function and the cross-categorical accuracy, both of which showed significant improvement in unsupervised training with minimization of the generative error. Additionally, the generative performance of the trained models was measured by calculating the ratio of the mean deviation of the generated output from the input sample to the mean norm of the input sample, with an average result in the range of 0.1. In our view, these training results and the fact that in feedforward neural network models the output is generated only from the information contained in the representation layer indicate that the latent representation has indeed retained significant essential information about the original distribution.</p></div>
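The training objective described above, minimization of the generative MSE through a strong bottleneck, can be sketched in a few lines of numpy. This is a minimal illustrative stand-in, not the paper's convolutional stacked model: the dimensions (16 inputs, a 3-unit representation layer), the linear encoder/decoder, the learning rate and the synthetic data are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes only: the paper's stage-2 encoder maps a 576-dimensional
# feature vector to a 3-unit representation layer; we use 16 -> 3 here.
n, d_in, d_rep = 200, 16, 3

# Synthetic data concentrated near a 3-dimensional subspace, plus noise.
basis = rng.normal(size=(d_rep, d_in))
X = rng.normal(size=(n, d_rep)) @ basis + 0.05 * rng.normal(size=(n, d_in))

W_e = 0.1 * rng.normal(size=(d_in, d_rep))   # encoder weights
W_d = 0.1 * rng.normal(size=(d_rep, d_in))   # decoder weights
lr = 0.01

def mse(A, B):
    return float(np.mean((A - B) ** 2))

err_before = mse(X, X @ W_e @ W_d)
for _ in range(500):
    R = X @ W_e                  # latent representation
    X_hat = R @ W_d              # regenerated input
    G = 2.0 * (X_hat - X) / n    # gradient of the generative (MSE) error
    grad_d = R.T @ G
    grad_e = X.T @ (G @ W_d.T)
    W_d -= lr * grad_d
    W_e -= lr * grad_e
err_after = mse(X, X @ W_e @ W_d)

# Generative performance as described in the text: mean deviation of the
# generated output from the input, relative to the mean norm of the input.
X_hat = X @ W_e @ W_d
rel_dev = np.linalg.norm(X - X_hat, axis=1).mean() / np.linalg.norm(X, axis=1).mean()
print(err_before, err_after, rel_dev)
```

Because the synthetic data lies near a 3-dimensional subspace, the generative error drops sharply while the 3-unit bottleneck retains most of the information, mirroring the behaviour reported for the trained models.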
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Data</head><p>The dataset consisted of approximately 1,100 color images with resolution (64,64), manually labeled with ten higher-level classes of terrain type such as "trees", "buildings", "water", etc., as described in Table <ref type="table" target="#tab_0">1</ref>. The higher-level classes used in the study represented three different broad categories: 1. Background: the area of the class concept spans the entire image or most of it; examples are "trees" or "field". 2. Structure: the concept area spans a significant part of the image area, such as roads; construction structures, e.g. bridges, power lines; excavations. 3. Object: an object occupying a compact area relative to the size of the image; vehicles and miscellaneous machinery were in this category. The composition of the dataset (Table <ref type="table" target="#tab_0">1</ref>), with classes of different categories, allowed investigating the character of concept distributions in the latent representations for different types of higher-level concepts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Unsupervised Representations</head><p>A trained model can perform the encoding transformation from the observable data space to the latent representation, obtained from the activations of the central layer of the Phase 2 encoder, and the generative transformation from the latent representation to the observable space as:</p><formula xml:id="formula_0">R(X) = encoder_model.encode(X)<label>(1)</label> X(Y) = generator_model.decode(Y)<label>(2)</label></formula><p>In the latent representation of a trained model, the emergent density structure can be identified by applying a density-based clustering method such as DBSCAN, Mean Shift and their numerous variations <ref type="bibr" target="#b17">[18]</ref>. This allows identifying density clusters of the encoded samples in the representation space without any need for ground truth data. For example, the associated density cluster for a sample X in the input data space can be calculated as:</p><formula xml:id="formula_1">K_nat(X) = cluster_model.predict(X)<label>(3)</label></formula><p>where cluster_model is a density-based clustering method trained with a general data sample in the latent representation.</p><p>To perform classification, a binary concept classifier can be trained with a subset of labeled concept samples in the latent space. The resulting classifier can be applied to predict the explicit concept class of samples in the input space as:</p><formula xml:id="formula_2">K_exp(X) = classifier.predict(encode(X))<label>(4)</label></formula><p>where K_exp is the explicit or external class of the sample X predicted by the trained classifier. Thus, K_exp and K_nat represent, respectively, the externally known class of the sample and its native or implicit cluster identified from the density distribution in the latent space, requiring no external knowledge of the domain, the distribution, or any other prior knowledge about the data. 
The structure in the latent representation that emerges as a result of unsupervised training, or the "unsupervised landscape", can be measured and observed by the following methods:</p><p>1. By applying unsupervised clustering in the representation to identify the density distribution in a general unlabeled data sample as well as in concept samples; 2. By measuring the parameters of the general and concept distributions in the representation space; 3. By applying multi-dimensional histogram methods in the representation space to measure density and volume distributions in general and concept samples; 4. Via visualization and direct observation of general and concept samples in the representation space.</p></div>
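The pipeline of transformations (1), (3) and (4) can be sketched with scikit-learn. In this sketch PCA stands in for the trained stage-2 encoder, DBSCAN identifies the native clusters as in the text, and logistic regression plays the role of the binary concept classifier; the two synthetic "concepts" and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Two synthetic "concepts" in a 20-dimensional observable space.
X0 = rng.normal(loc=0.0, scale=0.3, size=(100, 20))
X1 = rng.normal(loc=2.0, scale=0.3, size=(100, 20))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# (1) R(X): PCA stands in for the trained encoder here.
encoder = PCA(n_components=3).fit(X)
R = encoder.transform(X)

# (3) K_nat: density clusters in the latent space, no labels needed
# (DBSCAN marks noise points with the label -1).
k_nat = DBSCAN(eps=1.0, min_samples=5).fit_predict(R)

# (4) K_exp: a binary concept classifier trained on latent coordinates.
clf = LogisticRegression().fit(R, y)
k_exp = clf.predict(encoder.transform(X))

print(len(set(k_nat) - {-1}), float((k_exp == y).mean()))
```

With well-separated concepts, the native clusters found without any labels coincide with the externally known classes, which is exactly the correspondence between K_nat and K_exp that the study measures.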
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4">Unsupervised Categorization</head><p>By unsupervised categorization is meant the ability of some models with unsupervised self-encoding and regeneration of the input data to group data samples in the latent representation into compact structures by a certain similarity. Such natively similar samples in the representation are then transformed by the generative stage of the model into samples in the observable data space that are related by association to the same, or related, native concepts.</p><p>To measure the categorization ability of the models, two types of data samples were used: 1) concept samples transformed to the representation space define the concept distribution region, that is, the region in the representation space where samples associated with a certain higher-level concept can be found;</p><p>2) a general sample, a set of non-labeled data points, is used to identify and measure the size and shape of the region in the representation space that is populated by all categories, in other words, the image of a representative subset of the input data set in the latent space of the model.</p><p>Relative measurements of concept versus general distributions allow drawing conclusions about the categorization performance of the model for the given concept, based on the relative size and density of the concept distribution regions, their shape, dimensionality and other parameters that can affect learning of the concept. 
Distributions of data in the latent representations, or the density landscape, created by such models in the process of unsupervised learning can then be analyzed, measured and visualized by transforming labeled subsets of concept samples to the latent space with the encoding transformation (1), while the generative ability of the model can be evaluated by measuring the deviation of the generated output from the input.</p><p>The hypothesis that can be drawn from the results discussed earlier is that the structured information "landscape" that emerges in unsupervised training of the models with the incentive to reduce the regeneration error can be correlated with higher-level concepts that have a strong representation in the input data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Visualization Analysis</head><p>Concept distributions in the latent space created in unsupervised training with minimization of the regeneration error can be visualized and measured directly. To produce visualizations of concept regions, subsets of concept samples were transformed to the latent state of a pre-trained model and visualized with available plotting packages. Continuous approximations of the concept regions in the latent space were obtained with triangular interpolation of the concept samples transformed to the latent space.</p><p>Compact Distributions It was observed that the classes in the "background" and some in the "structure" category, covering a significant part of the image area, generally produced compact and well-defined concept regions in the form of a two-dimensional surface. These distributions are illustrated in Fig. <ref type="figure" target="#fig_0">3</ref>. In the diagram, the top plot shows the distributions of two concepts of the "background" type (classes 3 and 4) in the latent space of a trained unsupervised model. The surface character of the distributions can be clearly observed visually and is confirmed by a PCA analysis of the encoded concept samples that yielded over 80% combined explained variance for the two leading components. The bottom plot visualizes the distributions of several concepts simultaneously in a compact region of the latent space. Interestingly, the distribution regions of the multiple concepts, again in the clear form of two-dimensional surfaces, are layered quite closely together rather than being separated into isolated clusters, as was the case with classes of some other categories. 
In this pattern, concept regions are stacked closely in the same region of the representation space like an "onion shell", a strategy that allows packing multiple concept regions into a compact volume of the latent space. It is worth noting that these results also substantiate the manifold assumption commonly used in unsupervised and semi-supervised learning <ref type="bibr" target="#b18">[19]</ref>. For most of the studied concepts in this category, the distribution regions indeed consisted of connected and smooth manifolds or sets of such manifolds. The results of the measurements of the distribution parameters for these concepts will be presented in the next section.</p><p>Sparse Distributions Distributions of object- and structure-type concepts showed a different pattern that was noticeably sparser and spread over the latent representation. In Fig. <ref type="figure">4</ref> (sparse concept distributions in the representation space), concept regions of "structure" and "object" classes are shown together with the compact classes, allowing a comparison of the relative scales of the variation in the concept regions of classes of different categories: top plot: classes 6 (sparse) and 3 (compact); bottom plot: classes 7 (sparse) and 2, 4 (compact). A clear difference in the character of the distributions of different categories can be observed in the distribution visualizations in Fig. <ref type="figure" target="#fig_0">3 and 4</ref>. Interestingly, while larger-scale background-type concepts appear to occupy a compact and well-defined region in the latent space with a small number of clusters or a single dominant one, classes representing local concepts are spread throughout the representation space in multiple clusters. A possible explanation for the latter observation could be that the relationship between the explicit higher-level concepts that label the samples in the dataset and the internal or native concept clusters (3) in unsupervised mode can be more complex than one-to-one. For example, an explicit higher-level concept may encompass a number of different native clusters, in which case a distribution of the type seen in Fig. <ref type="figure">4</ref> can be observed.</p><p>Another logical possibility is that the complexity and depth of the models used in the study, as well as the size of the dataset, were not sufficient to identify these more complex patterns with sufficient confidence. This question requires further investigation.</p></div>
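The "two-dimensional surface" character reported above can also be checked numerically: for samples lying on a gently curved surface embedded in a 3-dimensional latent space, the two leading PCA components should carry most of the variance. A synthetic sketch, in which the surface, the noise level and the 80% threshold (taken from the text) are illustrative assumptions:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)

# Synthetic stand-in for encoded concept samples: points on a gently
# curved 2-D surface embedded in a 3-D latent space, plus small noise.
u = rng.uniform(-1, 1, size=500)
v = rng.uniform(-1, 1, size=500)
Z = np.column_stack([u, v, 0.1 * np.sin(np.pi * u)]) + 0.02 * rng.normal(size=(500, 3))

# Explained variance of the principal components of the encoded sample.
ratio = PCA(n_components=3).fit(Z).explained_variance_ratio_
surface_like = ratio[0] + ratio[1] > 0.8   # the criterion used in the text
print(ratio, surface_like)
```

For distributions of the compact, surface-like type the criterion holds, while sparse multi-cluster distributions would spread variance more evenly across components.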
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Categorization and Classification</head><p>In this section we attempted to establish the relationship between the categorization properties of concept distributions in the unsupervised representations and the performance of supervised learning with training data in the representation space of a trained model. As mentioned in the previous sections, the categorizing ability of an unsupervised model can be evaluated with two essentially different approaches: first, in a completely unsupervised mode, where the external concept labels are not provided with the samples in the dataset, the parameters of the general distribution can be measured, such as the dimensions, the shape, and the parameters of the density distribution. These measurements are important because they provide an a priori evaluation of the categorization ability of the model before any knowledge of external semantics, such as known higher-level categories associated with the input data, has been applied. For that reason, these methods can be applied to data of any nature in a truly general manner.</p><p>On the other hand, if external labels for a subset of the data are available (as was the case in this study), it is possible to train a classifier with labeled data in a supervised mode, but with the parameters or "features" being the coordinates in the representation space of a pretrained model, as in (4). Comparing the results of the two approaches can indicate how closely the structure that emerges in the representation as a result of unsupervised learning reflects the external concepts used in the supervised approach.</p><p>The results of the measurements of the distribution parameters and the accuracy of classification for selected concepts of each of the scale types are presented in Table <ref type="table" target="#tab_1">2</ref>. 
The parameters of the concept distribution region in the latent space were defined (<ref type="bibr" target="#b8">[9]</ref>) as:</p><p>Spread: a characteristic size of the region relative to that of the general distribution; Concentration: the number of concept density clusters relative to the total number of clusters in the general distribution; Density: the density of the structure, measured as the population per volume in the latent coordinates, relative to the density of a uniform distribution. Finally, Accuracy for the concept was measured as the F1 classification score, which accounts for classification errors of both types. The accuracy of a trained classifier was measured with multiple batches of randomly selected in- and out-of-class test samples. Note that the second value in the accuracy column relates to the self-learning accuracy that will be discussed in the next section. In the results, a clear correlation can be seen between the parameters of a concept distribution in the representation space and the accuracy of the concept classifier trained with a labeled subset of in- and out-of-concept samples in the latent coordinates. It can be seen as another indication, in addition to the results already mentioned, that unsupervised training, perhaps under certain conditions and constraints as discussed in <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b19">20]</ref>, can produce configurations of data in the representation space that are correlated with common higher-level concepts in the observable data.</p></div>
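One plausible reading of the three distribution parameters can be sketched as follows; the exact formulas in [9] may differ, and the synthetic samples, the histogram grid and the DBSCAN settings are all assumptions made for illustration.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(4)

# Illustrative latent samples: a general sample and a compact concept sample.
general = rng.uniform(-1, 1, size=(600, 3))
concept = 0.2 * rng.uniform(-1, 1, size=(100, 3)) + 0.3

def spread(sample, reference):
    # Characteristic size of the region relative to the general distribution.
    return float(sample.std(axis=0).mean() / reference.std(axis=0).mean())

def concentration(sample, reference, eps=0.25, min_samples=5):
    # Concept density clusters relative to clusters in the general sample
    # (DBSCAN labels noise points -1, which are excluded).
    n_c = len(set(DBSCAN(eps=eps, min_samples=min_samples).fit_predict(sample)) - {-1})
    n_g = len(set(DBSCAN(eps=eps, min_samples=min_samples).fit_predict(reference)) - {-1})
    return n_c / max(n_g, 1)

def density(sample, bins=8):
    # Population per occupied volume relative to a uniform distribution:
    # the ratio of the total volume to the volume actually occupied.
    h, _ = np.histogramdd(sample, bins=bins, range=[(-1, 1)] * sample.shape[1])
    occupied = (h > 0).sum() / h.size
    return 1.0 / occupied if occupied > 0 else float("inf")

print(spread(concept, general), concentration(concept, general), density(concept))
```

A compact concept such as the one above yields a small spread and a relative density well above 1, the pattern that Table 2 associates with accurately learnable concepts.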
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Self-Learning with Unsupervised Representations</head><p>As was shown in <ref type="bibr" target="#b8">[9]</ref>, the structure emergent in the latent representations as a result of unsupervised training can be used in learning new concepts with minimal data, down to a handful of positive samples. The approach has unsupervised and semi-supervised learning phases:</p><p>-in the unsupervised phase, which requires no labeled data, the principal density clusters with significant population are identified as outlined earlier in Section 2.3, (3); these structures can be seen as principal native concepts in the observable data; -in the semi-supervised self-learning phase that follows, a small number of positive concept samples is used to tag or mark the clusters that can be associated with the concept being learned, creating a small labeled dataset from the genuine concept samples and those obtained from the unsupervised cluster distribution; -then a binary concept classifier is trained with this dataset and can be used for prediction of the concept being learned for new samples in the input space. Because the genuine labeled samples are used only for tagging the clusters of interest, the method can indeed work with very minimal sets of labeled concept data, down to a single, "signal" sample.</p><p>In this section, single-sample self-learning based on the unsupervised density structure was applied to the image dataset, with the results for representative classes in each category presented in Table <ref type="table" target="#tab_1">2</ref>, the second value in the accuracy column.</p><p>These results show that the concepts with compact representations were learned successfully with a single sample of the concept, while those with more spread and sparse representations achieved only marginally better resolution compared to the random strategy. 
A possible explanation for this effect can be found in the analysis of the distribution patterns for the concepts in Fig. <ref type="figure" target="#fig_0">3, 4</ref>. If the representation image of a higher-level concept comprises several native clusters, the data points generated in the vicinity of the signal sample would not sufficiently cover the entire distribution region of the concept in the latent space, and the resolution of the classifier would be reduced. This was confirmed by further experiments, where it was observed that increasing the size of the learning sample for this type of concept substantially improves the results of learning. Unlike traditional supervised machine learning methods, learning with an unsupervised density structure, or density landscape, is more reminiscent of the learning processes in biological systems, which are often spontaneous, flexible and require minimal data, building accuracy gradually over a sequence of learning iterations. Landscape-based learning can imitate such processes by testing concept distribution regions in an iterative trial-and-error process as and when learning data becomes available, in close interaction with the environment.</p></div>
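The two phases of the self-learning scheme can be sketched end to end. The synthetic latent "landscape" (one compact concept cluster plus a diffuse background), the DBSCAN settings, and the use of logistic regression for the final classifier are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)

# Illustrative latent landscape: a compact concept cluster plus background.
concept = rng.normal(loc=4.0, scale=0.3, size=(80, 3))
background = rng.normal(loc=0.0, scale=1.0, size=(300, 3))
R = np.vstack([concept, background])
truth = np.array([1] * 80 + [0] * 300)

# Unsupervised phase: identify principal density clusters, no labels needed.
clusters = DBSCAN(eps=0.8, min_samples=5).fit_predict(R)

# Self-learning phase: a single "signal" sample tags its cluster...
signal_index = 0                  # one known positive sample of the concept
tagged = clusters[signal_index]

# ...and cluster membership supplies pseudo-labels for a binary classifier.
pseudo = (clusters == tagged).astype(int)
clf = LogisticRegression().fit(R, pseudo)
accuracy = float((clf.predict(R) == truth).mean())
print(accuracy)
```

When the concept region is compact and well separated, the single signal sample recovers nearly the whole concept cluster, matching the "compact representations learn from one sample" observation; for concepts split across many native clusters, one tag would cover only part of the region, reproducing the reduced resolution discussed above.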
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Conclusion</head><p>The analysis of higher-level concept distributions of image data in the latent space of self-learning models presented in this work is in agreement with the earlier findings that unsupervised training of models with self-encoding and regeneration can lead to the emergence of an identifiable structure in the latent representation that can be correlated with higher-level concepts in the observable data. A correlation of the classification accuracy with the categorization parameters of the concept distributions in the latent space of such models has now been shown with data of different types and nature <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b8">9]</ref>, pointing at the possibility of a general character of this effect. Low-dimensional representations can be of interest due to the growing evidence that such representations can play an important role in the processing of sensory data by biological systems. Recent results <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref> demonstrated that effective representations of sensory data such as images and smells can be produced with a small number of active neurons in biological neural networks. Linking these results with the findings in this work, where examples of such low-dimensional representations created artificially were investigated, one can hypothesize that, perhaps, the representations in more complex sparse neural networks, even of a massive kind <ref type="bibr" target="#b6">[7]</ref>, can be modeled as a set or a "stack" of low-dimensional representation regions indexed by the combination of neurons that collectively participate in creating the latent representation, with surface-like concept regions of the kind observed in our results distributed in them (Fig. 
<ref type="figure" target="#fig_1">5</ref>).</p><p>In such a stacked representation, a concept region, for example "cats", can be indexed by the indices of the activated neurons W_k and the index S_k of the concept surface in the representation subregion of W_k: I_cats = (W_cats, S_cats). Thus, prototypes of native concepts in the observable data can form in an unsupervised observation of the environment via self-learning with minimization of the error of regeneration, requiring minimal supervision and prior knowledge of the domain.</p><p>Analysing concept distributions in the representations of deep learning models can offer a novel perspective on the program of Explainable AI [21]. Much effort has been invested by the research community in attempts to describe the learning configurations and rules that emerge in complex and deep learning models in training. Understanding the native structure of information in the latent representation created in training can offer a different and, in some cases, very visual interpretation of the learning processes in these systems.</p><p>All in all, it is believed that the study of the native categorization properties of generative models may lead to a better understanding of the underlying principles of self-learning and to the development of models that could learn in a more natural way <ref type="bibr" target="#b15">[16]</ref>, closer to the spontaneous and iterative learning processes in biological systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Compact concept distributions in the latent space</figDesc><graphic coords="4,307.56,80.50,231.03,238.29" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Concept regions in a sparse latent representation</figDesc><graphic coords="6,307.56,283.20,231.03,68.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Aerial image dataset</figDesc><table><row><cell>Class</cell><cell>Category, Number of samples</cell></row><row><cell cols="2">Buildings (1) background/structure, 100</cell></row><row><cell>Trees (2)</cell><cell>background, 100</cell></row><row><cell>Field (3)</cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Self-learning with unsupervised representations</figDesc><table><row><cell>Class</cell><cell>Categorization</cell><cell>Accuracy</cell></row><row><cell>Background</cell><cell></cell><cell></cell></row><row><cell>Trees (2)</cell><cell cols="2">0.16, 0.06, 246 0.79, 0.65</cell></row><row><cell>Field (3)</cell><cell cols="2">0.18, 0.06, 357 0.81, 0.72</cell></row><row><cell>Water (4)</cell><cell cols="2">0.19, 0.08, 375 0.84, 0.78</cell></row><row><cell>Structure</cell><cell></cell><cell></cell></row><row><cell>Roads (5)</cell><cell cols="2">0.23, 0.11, 228 0.68, 0.57</cell></row><row><cell cols="3">Excavations (6) 0.28, 0.14, 292 0.71, 0.54</cell></row><row><cell>Object</cell><cell></cell><cell></cell></row><row><cell>Vehicles (7)</cell><cell cols="2">0.78, 0.22, 135 0.73, 0.53</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>The author is grateful to Prof. Pilip Prystavka, Chair of the Applied Mathematics Department, National Aviation University (Kyiv), for valuable discussions of the findings and for the opportunity to use the dataset of images used in this work.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A fast learning algorithm for deep belief nets</title>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Osindero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">W</forename><surname>Teh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Comp</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1527" to="1554" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Training restricted Boltzmann machines: an introduction</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fischer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Igel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recogn</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="25" to="39" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Learning deep architectures for AI</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Found. Trends Machine Learning</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="127" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An analysis of single-layer networks in unsupervised feature learning</title>
		<author>
			<persName><forename type="first">A</forename><surname>Coates</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Y</forename><surname>Ng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 14th Intl. Conf. on Artificial Intelligence and Statistics</title>
				<meeting>14th Intl. Conf. on Artificial Intelligence and Statistics</meeting>
		<imprint>
			<date type="published" when="2011">2011</date>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="215" to="223" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A unified energy-based framework for unsupervised learning</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ranzato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-L</forename><surname>Boureau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chopra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lecun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. 11th Intl. Conf. on Artificial Intelligence and Statistics</title>
				<meeting>11th Intl. Conf. on Artificial Intelligence and Statistics</meeting>
		<imprint>
			<date type="published" when="2007">2007</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="371" to="379" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A free energy principle for biological systems</title>
		<author>
			<persName><forename type="first">K</forename><surname>Friston</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Entropy</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="2100" to="2121" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Building high-level features using large scale unsupervised learning</title>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Ranzato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Monga</surname></persName>
		</author>
		<idno>arXiv 1112.6209</idno>
		<imprint>
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Vector-based navigation using grid-like representations in artificial agents</title>
		<author>
			<persName><forename type="first">A</forename><surname>Banino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Barry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kumaran</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature</title>
		<imprint>
			<biblScope unit="volume">557</biblScope>
			<biblScope unit="page" from="429" to="433" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Categorized representations and general learning</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dolgikh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc.10th Intl. Conf. on Theory and Application of Soft Computing, Computing with Words and Perceptions</title>
				<meeting>.10th Intl. Conf. on Theory and Application of Soft Computing, Computing with Words and Perceptions</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1095</biblScope>
			<biblScope unit="page" from="93" to="100" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Early visual concept learning with unsupervised deep learning</title>
		<author>
			<persName><forename type="first">I</forename><surname>Higgins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Matthey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Glorot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Pal</surname></persName>
		</author>
		<idno>arXiv 1606.05579</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Concept learning through deep reinforcement learning with memoryaugmented neural networks</title>
		<author>
			<persName><forename type="first">J</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">110</biblScope>
			<biblScope unit="page" from="47" to="54" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Modeling conceptual understanding in image reference games</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Alaniz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Akata</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Proc. Syst</title>
				<meeting><address><addrLine>Vancouver, BC</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="13155" to="13165" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Natural images are reliably represented by sparse and variable populations of neurons in visual cortex</title>
		<author>
			<persName><forename type="first">T</forename><surname>Yoshida</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ohki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Nature Communications</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">872</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Grid-like neural representations support olfactory navigation of a two-dimensional odor space</title>
		<author>
			<persName><forename type="first">X</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Gjorgieva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">K</forename><surname>Shanahan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neuron</title>
		<imprint>
			<biblScope unit="volume">102</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1066" to="1075" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Multilayer feedforward neural networks are universal approximators</title>
		<author>
			<persName><forename type="first">K</forename><surname>Hornik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Stinchcombe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>White</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="359" to="366" />
			<date type="published" when="1989">1989</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Neuroscience inspired Artificial Intelligence</title>
		<author>
			<persName><forename type="first">D</forename><surname>Hassabis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kumaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Summerfield</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neuron</title>
		<imprint>
			<biblScope unit="volume">95</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="245" to="258" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Keras: Python deep learning library</title>
		<ptr target="https://keras.io/" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">The estimation of the gradient of a density function, with applications in pattern recognition</title>
		<author>
			<persName><forename type="first">K</forename><surname>Fukunaga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">D</forename><surname>Hostetler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Inf. Theory</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="32" to="40" />
			<date type="published" when="1975">1975</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Semi-supervised learning</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Belkin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Acad. Press Lib. in Signal Proc. Elsevier</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1239" to="1269" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Why good generative models categorize</title>
		<author>
			<persName><forename type="first">S</forename><surname>Dolgikh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Int. Journ. Mod. Edu. Comp. Sci.</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>to appear</note>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Explaining explanations: an overview of interpretability of machine learning</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">H</forename><surname>Gilpin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">Z</forename><surname>Yuan</surname></persName>
		</author>
		<idno>arXiv 1806.00069</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
