<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Hyperspectral data dimensionality reduction using nonlinear autoencoders</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Evgeny</forename><surname>Myasnikov</surname></persName>
							<email>mevg@geosamara.ru</email>
							<affiliation key="aff0">
								<orgName type="department">Geoinformatics and Information Security department</orgName>
								<orgName type="institution">Samara National Research University</orgName>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Image Processing Systems Institute of RAS - Branch of the FSRC &quot;Crystallography and Photonics&quot; RAS</orgName>
								<address>
									<settlement>Samara</settlement>
									<country key="RU">Russia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Hyperspectral data dimensionality reduction using nonlinear autoencoders</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">060755CD0EBB518A8B4EC236CC6E6CC1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T00:04+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>autoencoder</term>
					<term>hyperspectral images</term>
					<term>nonlinear mapping</term>
					<term>principal component analysis</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>A well-known feature of hyperspectral images is their high spectral resolution, which allows us to identify materials and classify objects in images with high accuracy. However, hyperspectral images contain substantial redundancy, which can be eliminated with the aid of dimensionality reduction techniques. In this paper, we propose and study several dimensionality reduction techniques based on pretraining an encoder-decoder neural network with the results of the nonlinear mapping and principal component analysis techniques. The experiments performed on an open dataset show that the proposed techniques both provide discriminative low-dimensional features and allow us to reconstruct the source hyperspectral data with little error.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>INTRODUCTION</head><p>Hyperspectral images are widely used nowadays in many fields, such as agriculture, medicine, biology, and chemistry. A distinctive feature of hyperspectral images is their high spectral resolution, which allows us to identify materials and classify depicted objects with high accuracy.</p><p>However, hyperspectral images contain substantial redundancy, which can be eliminated with the aid of dimensionality reduction techniques. The images obtained after the dimensionality reduction stage can be processed efficiently, as much less data volume is involved in processing. It is worth noting that dimensionality reduction techniques are often used in various image analysis problems (see <ref type="bibr" target="#b0">[1]</ref><ref type="bibr" target="#b1">[2]</ref><ref type="bibr" target="#b2">[3]</ref>, for example). The key requirement for dimensionality reduction procedures is that they preserve the quality of solutions to applied problems such as classification, segmentation, and material detection.</p><p>The most commonly used techniques for the dimensionality reduction of hyperspectral data are linear techniques such as Principal Component Analysis (PCA). While a number of general-purpose nonlinear dimensionality reduction procedures exist <ref type="bibr" target="#b3">[4]</ref>, their use in hyperspectral image analysis is limited, as many of them provide only a one-way data mapping and hence no ability to restore the source hyperspectral data.</p><p>In recent years, neural network approaches have become more and more popular. In particular, autoencoder neural networks <ref type="bibr" target="#b4">[5]</ref> have been used for the dimensionality reduction of hyperspectral images. 
Such neural networks both perform nonlinear dimensionality reduction and provide the inverse mapping, which allows us to restore the source hyperspectral data up to some reconstruction error.</p><p>Recently, it was shown <ref type="bibr" target="#b5">[6]</ref> that an autoencoder network can be pretrained using the principal component analysis technique, and that its use for dimensionality reduction made it possible to outperform the PCA technique both in terms of the reconstruction error and the classification accuracy.</p><p>However, it was also shown <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref> that the nonlinear mapping technique <ref type="bibr" target="#b8">[9]</ref> has advantages over the PCA in terms of the classification and segmentation quality of hyperspectral images. For this reason, in this paper, we study the possibility of training an autoencoder-like architecture to capture the nonlinear mapping. In particular, we split the autoencoder into an encoder and a decoder, train both parts separately using the results of the nonlinear mapping, and investigate the effect of the subsequent fine-tuning of the whole network.</p><p>The structure of the paper is as follows. In Section II, we give the necessary theoretical information on the neural network architecture and the nonlinear mapping algorithm. In Section III, we describe the training procedures used in the experimental study and present the results of the experiments. The conclusions and the list of references are given at the end of the paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>II. METHOD</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Autoencoder Neural Network</head><p>The autoencoder neural network proposed in <ref type="bibr" target="#b4">[5]</ref> was earlier referred to as the autoassociative neural network. It consists of two consecutive parts called the encoder and the decoder.</p><p>The encoder takes a multidimensional vector x ∈ R^M as input and produces the corresponding low-dimensional representation y ∈ R^m, where m &lt; M. The encoder consists of at least two fully connected layers. The first layer contains some number of neurons (defined by the parameters of the neural network architecture) connected to all the components of an input vector. The last layer of the encoder contains a number of neurons equal to the desired dimensionality of the reduced space.</p><p>The decoder usually has a mirror-reflected architecture: it has the same number of layers with the same numbers of neurons, although this is not a necessary requirement. In any case, the input layer of the decoder takes the reduced representation y ∈ R^m from the output of the encoder and restores the multidimensional vector x̃ ∈ R^M. Accordingly, the output layer of the decoder has a number of neurons equal to the input dimensionality M. The number of hidden layers and neurons is defined by the parameters of the neural network architecture.</p><p>As the number of neurons in the output layer of the encoder is less than the number of neurons in the input and hidden layers, this layer is often referred to as a bottleneck layer, and the whole network architecture is often referred to as a bottleneck architecture.</p><p>The autoencoder is usually trained in self-learning mode by applying the same multidimensional vectors x ∈ R^M to both the input and output layers of the autoencoder. 
The training process itself is based on the minimization of the following cost function:</p><formula xml:id="formula_0">E = \frac{1}{N} \sum_{i=1}^{N} \left\| x_i - \tilde{x}_i \right\|^2<label>(1)</label></formula><p>where N is the number of samples, and x_i ∈ R^M, x̃_i ∈ R^M are the inputs and outputs of the network. After training, the encoder can be used to perform the dimensionality reduction of the source data (direct mapping), and the decoder can be used to restore the source data from its reduced representation (inverse mapping).</p><p>In this paper, we study whether the encoder and decoder parts can be trained separately to force the neural network to perform a mapping with the desired properties. It was shown earlier that the separate pretraining of the encoder and decoder with the PCA results helped to perform the training more efficiently compared to the standard training.</p><p>In particular, the approach proposed in <ref type="bibr" target="#b5">[6]</ref> consists of the following steps: perform the PCA for the input dataset; pretrain the encoder to produce the PCA results for the input data; pretrain the decoder to reproduce the input data from the encoded data; fine-tune the whole network according to the standard scheme.</p><p>In this paper, we follow a similar scheme but use the results of the nonlinear mapping algorithm instead of the PCA, and perform the fine-tuning optionally, to study whether such an approach can be more efficient than the standard PCA, the nonlinear mapping, or the recently proposed autoencoder pretrained with the PCA <ref type="bibr" target="#b5">[6]</ref>.</p></div>
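The encoder-decoder structure and the cost function E can be sketched in a few lines of NumPy. This is a minimal illustration, not the trained Keras models used in the paper: the layer widths (one hidden layer of 64 neurons, bottleneck of 5) and the untrained random weights are purely illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
M, h, m = 204, 64, 5           # input bands, hidden width, bottleneck size
relu = lambda a: np.maximum(a, 0.0)

# random weights stand in for trained parameters
W1 = 0.05 * rng.normal(size=(M, h))
W2 = 0.05 * rng.normal(size=(h, m))
W3 = 0.05 * rng.normal(size=(m, h))
W4 = 0.05 * rng.normal(size=(h, M))

def encode(x):
    """Encoder: M -> m, one ReLU hidden layer, linear bottleneck."""
    return relu(x @ W1) @ W2

def decode(y):
    """Decoder: m -> M, mirror-reflected architecture."""
    return relu(y @ W3) @ W4

def cost(X):
    """Cost function (1): mean squared reconstruction error over the samples."""
    X_rec = decode(encode(X))
    return float(np.mean(np.sum((X - X_rec) ** 2, axis=1)))

X = rng.normal(size=(16, M))   # one mini-batch of spectra
E = cost(X)
```

In the actual experiments the weights are fitted by minimizing E with the Adam optimizer; here `cost` merely evaluates E for an untrained network.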
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Nonlinear Mapping</head><p>The nonlinear mapping is a numerical procedure that performs a (nonfunctional) mapping of data into a low-dimensional space so that the data structure is preserved (see <ref type="bibr" target="#b7">[8]</ref>, for example). In nonlinear mapping, this structure is defined by all the pairwise distances between the points in the dataset. The Euclidean distance d() is usually used to measure the distances.</p><p>As the pairwise distances cannot be preserved exactly in the general case, the so-called data mapping error is introduced:</p><formula xml:id="formula_1">\varepsilon = \mu \sum_{i &lt; j} \omega_{ij} \left( d(x_i, x_j) - d(y_i, y_j) \right)^2<label>(2)</label></formula><p>Here N is the number of data points, d(x_i, x_j) is the distance between the points x_i and x_j in the multidimensional space, d(y_i, y_j) is the distance between the corresponding points y_i, y_j in the reduced space, and µ and ω_ij are constants. Usually, µ is the inverse of the sum of squared distances between all possible pairs of data points in the multidimensional space, and the ω_ij are equal to one.</p><p>The minimization of the data mapping error is usually performed using the gradient descent technique, with the coordinates of the data points y_i ∈ R^m as the tunable parameters.</p><p>In this paper, we use stochastic gradient descent based on mini-batches to minimize the data mapping error. The overall algorithm for dimensionality reduction using the nonlinear mapping consists of the initialization of the coordinates y_i with the results of the principal component analysis, followed by the refinement of y_i using stochastic gradient descent. The optimization process (refinement) stops when the coordinates of the data points y_i in the reduced space become stable.</p></div>
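The refinement step above can be sketched with plain full-batch gradient descent on the data mapping error (the paper uses mini-batch stochastic gradient descent with a PCA initialization; the learning rate, step count, and the crude coordinate-slice initialization below are assumptions for the sketch):

```python
import numpy as np

def mapping_error(X, Y):
    """Data mapping error (2) with all weights equal to one and
    mu = 1 / (sum of squared distances in the multidimensional space)."""
    dx = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    dy = np.sqrt(((Y[:, None] - Y[None, :]) ** 2).sum(-1))
    iu = np.triu_indices(len(X), k=1)          # each pair i < j once
    mu = 1.0 / (dx[iu] ** 2).sum()
    return float(mu * ((dx[iu] - dy[iu]) ** 2).sum())

def refine(X, Y, lr=0.1, steps=300):
    """Refine the low-dimensional coordinates Y by gradient descent on (2)."""
    Y = Y.astype(float).copy()
    iu = np.triu_indices(len(X), k=1)
    dx = np.sqrt(((X[:, None] - X[None, :]) ** 2).sum(-1))
    mu = 1.0 / (dx[iu] ** 2).sum()
    for _ in range(steps):
        diff = Y[:, None] - Y[None, :]
        dy = np.sqrt((diff ** 2).sum(-1)) + 1e-9
        # gradient w.r.t. y_i: 2*mu * sum_j ((d_y - d_x)/d_y) * (y_i - y_j)
        grad = 2.0 * mu * (((dy - dx) / dy)[..., None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

rng = np.random.default_rng(1)
X = rng.normal(size=(12, 4))   # 12 points in a 4-D space
Y0 = X[:, :2].copy()           # crude initialization (PCA is used in the paper)
Y1 = refine(X, Y0)
```

After refinement the mapping error of `Y1` is lower than that of the initialization `Y0`, which is the behavior the stopping criterion in the text relies on.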
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. The methods used in the study</head><p>As outlined in the introduction, in this paper we study several variants of training an autoencoder-like encoder-decoder network. In particular, we consider the following techniques:</p><p>-The autoencoder network pretrained with the results of the PCA technique (AE-PCA), as described in <ref type="bibr" target="#b5">[6]</ref>; -The neural network with the encoder and decoder trained separately using the results of the nonlinear mapping technique (ED-NLM); -The same autoencoder network pretrained with the results of the nonlinear mapping technique and fine-tuned using the standard approach (AE-NLM).</p></div>
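The pretraining idea underlying AE-PCA can be illustrated with a linear toy model: fitting a linear encoder to PCA targets by least squares recovers the PCA projection exactly. The real technique in [6] uses nonlinear networks trained with gradient descent and a subsequent fine-tuning stage; the random data and dimensions below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))
Xc = X - X.mean(axis=0)        # center the data, as PCA requires

# step 1: PCA targets for the encoder (top m principal components)
m = 3
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
Y_pca = Xc @ Vt[:m].T

# step 2: "pretrain" a linear encoder on the PCA targets; for a linear
# model the least-squares fit recovers the PCA projection exactly
W_enc, *_ = np.linalg.lstsq(Xc, Y_pca, rcond=None)

# step 3: "pretrain" a linear decoder to reconstruct the input from Y_pca
W_dec, *_ = np.linalg.lstsq(Y_pca, Xc, rcond=None)

# reconstruction error (1) of the pretrained linear encoder-decoder pair
X_rec = (Xc @ W_enc) @ W_dec
err = float(np.mean(np.sum((Xc - X_rec) ** 2, axis=1)))
```

For ED-NLM and AE-NLM the same two pretraining fits are performed with the nonlinear mapping coordinates in place of `Y_pca`, using nonlinear networks instead of the linear maps above.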
<div xmlns="http://www.tei-c.org/ns/1.0"><head>III. EXPERIMENTS</head><p>In this section, we describe the results of the experiments, which were performed using the Indian Pines dataset. This dataset was acquired using the AVIRIS hyperspectral sensor and contains 145 x 145 pixels and 224 spectral bands <ref type="bibr" target="#b9">[10]</ref>. Due to high noise and water absorption bands in the source image, we used the version containing 204 spectral channels.</p><p>In all the described experiments, we used the Keras framework and the Python language for the implementation of the neural networks. The experiments were carried out on a GeForce GTX 1070 Ti.</p><p>For each considered neural network technique, we varied the number of hidden layers in the encoder and decoder and performed experiments for one and two hidden layers, which correspond to four and six layers in the corresponding autoencoder networks.</p><p>The number of neurons in the input layer of the encoder and the output layer of the decoder was defined by the dimensionality of the input space, that is, the number of channels in the hyperspectral image. The number of neurons in the bottleneck layer varied from 1 to 10 according to the dimensionality of the reduced space. We also varied the number of neurons in the hidden layers, using 64, 128, and 256 neurons.</p><p>Following the recommendations given in <ref type="bibr" target="#b5">[6]</ref>, we used ReLU activation functions for the hidden layers and linear activations in the output layers of the encoder and decoder. Analogously, we used the Adam optimizer <ref type="bibr" target="#b10">[11]</ref> with the default parameters. The batch size was set to 16; however, we suppose that a bigger batch size could also be used.</p><p>To measure the effectiveness of each particular approach, we estimated both the reconstruction error as defined in (1) and the classification accuracy using the reduced representation. 
The latter indicator plays an important role in hyperspectral image analysis problems, for example, in vegetation type recognition <ref type="bibr" target="#b11">[12]</ref>.</p><p>For the latter indicator, we used the overall accuracy of the one-nearest-neighbor (1-NN) classifier. The accuracy itself was measured as the fraction of correctly classified image pixels. To measure the accuracy, we first performed dimensionality reduction using one of the studied techniques for all the pixels in the considered image. Then we split all the ground truth pixels into training and testing sets in the proportion 60/40. After that, we trained the classifier on the training set and estimated its accuracy on the test set.</p><p>In our first experiment, we compared the different techniques described in Subsection II.C and different architectures from the viewpoint of the reconstruction error (1). The results of this experiment are shown in Fig. <ref type="figure" target="#fig_0">1</ref>. In particular, we pretrained the encoder and decoder of the AE-PCA network for 50 iterations, fine-tuned the entire network for 50 iterations, and then measured the reconstruction quality. For the AE-NLM network, we trained the network with the same strategy but used the NLM results instead of the PCA results at the pretraining stage. For the ED-NLM network, we trained the encoder and decoder separately for 100 epochs. After the training, we measured the error (1) as the quality indicator. The experiment was carried out for different numbers of layers and neurons.</p><p>As can be seen in the figure, the reconstruction error decreases with the growth of the dimensionality m of the reduced space, defined by the number of neurons in the bottleneck layer, which is an expected result.</p><p>While we cannot highlight any winning technique in this experiment, we should note that the AE-NLM technique often shows better results. 
This means that the nonlinear mapping results used for training provide the ability to restore the source data with quite good quality. It also means that the decoder trained on the NLM data can be used as an inverse mapping for the NLM. As can be seen, the proposed techniques provided better results than the classical approaches in most cases. Again, it is difficult to single out any one approach. Nevertheless, we do not observe any substantial advantages of fine-tuning the NLM-initialized network over the version with a separately trained encoder and decoder.</p></div>
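The 1-NN evaluation protocol described above can be sketched as follows. The two-cluster toy data and the fixed seed are illustrative assumptions; in the paper the inputs are the reduced pixel representations and the ground-truth class labels of the Indian Pines image.

```python
import numpy as np

def one_nn_accuracy(train_X, train_y, test_X, test_y):
    """Overall accuracy of a 1-NN classifier with Euclidean distances."""
    d2 = ((test_X[:, None, :] - train_X[None, :, :]) ** 2).sum(-1)
    pred = train_y[d2.argmin(axis=1)]
    return float((pred == test_y).mean())

# toy stand-in for reduced pixel representations: two separated classes in 2-D
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 2)),
               rng.normal(5.0, 1.0, size=(50, 2))])
y = np.repeat([0, 1], 50)

idx = rng.permutation(len(X))
split = int(0.6 * len(X))      # 60/40 train/test split as in the paper
tr, te = idx[:split], idx[split:]
acc = one_nn_accuracy(X[tr], y[tr], X[te], y[te])
```

Brute-force pairwise distances suffice at this scale; for a full 145 x 145 image a tree- or chunk-based nearest-neighbor search would be more practical.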
<div xmlns="http://www.tei-c.org/ns/1.0"><head>IV. CONCLUSION</head><p>In this paper, we studied several dimensionality reduction neural network techniques based on the autoencoder architecture. We compared the proposed techniques from the viewpoint of the reconstruction error and the accuracy of per-pixel classification.</p><p>We showed that the proposed techniques outperformed the baseline (PCA and NLM) approaches in terms of the classification accuracy in almost all the considered cases. The decoder trained using the results of the NLM can be successfully used as an inverse mapping for hyperspectral image analysis.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. The reconstruction error for different techniques and network architectures (a-c).</figDesc><graphic coords="3,72.85,513.70,198.45,146.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. The classification accuracy for different techniques and network architectures (a-c).In our second experiment, we compared the considered techniques from the viewpoint of the classification accuracy. The results of this experiment are shown in Fig.2. In this figure, we added the results for the classical linear (PCA) and nonlinear (NLM) dimensionality reduction techniques.</figDesc><graphic coords="3,333.85,515.05,198.95,140.25" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENT</head><p>The work was partly funded by RFBR according to the research project 18-07-01312-a in parts of «2. Method» -«3. Experiments» and by the Russian Federation Ministry of Science and Higher Education within a state contract with the «Crystallography and Photonics» Research Center of the RAS in parts «1. Introduction» and «4. Conclusion».</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Comparative study of description algorithms for complex-valued gradient fields of digital images using linear dimensionality reduction methods</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Dmitriev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Myasnikov</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2018-42-5-822-828</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="822" to="828" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Optimization of the multidimensional signal interpolator in a lower dimensional space</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Gashnikov</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2019-43-4-653-660</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="653" to="660" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">The study of dimensionality reduction methods in the task of browsing of digital image collections</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">V</forename><surname>Myasnikov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="296" to="301" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Nonlinear Dimensionality Reduction</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Verleysen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2007">2007</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Nonlinear principal component analysis using autoassociative neural networks</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Kramer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">AIChE J</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="233" to="243" />
			<date type="published" when="1991">1991</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Dimensionality Reduction of Hyperspectral Images using Autoassociative Neural Networks</title>
		<author>
			<persName><forename type="first">E</forename><surname>Myasnikov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE Proc. of International Multi-Conference on Engineering, Computer and Information Sciences (SIBIRCON)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="591" to="595" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Evaluation of nonlinear dimensionality reduction techniques for classification of hyperspectral images</title>
		<author>
			<persName><forename type="first">E</forename><surname>Myasnikov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2268</biblScope>
			<biblScope unit="page" from="147" to="154" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Vegetation type recognition in hyperspectral images using a conjugacy indicator</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Bibikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">L</forename><surname>Kazanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Fursov</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2018-42-5-846-854</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="846" to="854" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A nonlinear mapping for data structure analysis</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">W</forename><surname>Sammon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Computers</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="401" to="409" />
			<date type="published" when="1969">1969</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">F</forename><surname>Baumgardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">L</forename><surname>Biehl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Landgrebe</surname></persName>
		</author>
		<idno type="DOI">10.4231/R7RX991C</idno>
		<title level="m">220 Band AVIRIS Hyperspectral Image Data Set</title>
				<imprint>
			<date type="published" when="1992-06-12">June 12, 1992. 2015</date>
		</imprint>
		<respStmt>
			<orgName>Purdue University Research Repository</orgName>
		</respStmt>
	</monogr>
	<note>Indian Pine Test Site 3</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<title level="m" type="main">Adam: A Method for Stochastic Optimization</title>
		<author>
			<persName><forename type="first">D</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.6980v8</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Vegetation type recognition in hyperspectral images using a conjugacy indicator</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Bibikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">L</forename><surname>Kazanskiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">A</forename><surname>Fursov</surname></persName>
		</author>
		<idno type="DOI">10.18287/2412-6179-2018-42-5-846-854</idno>
	</analytic>
	<monogr>
		<title level="j">Computer Optics</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="846" to="854" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
