=Paper=
{{Paper
|id=Vol-2665/paper8
|storemode=property
|title=Hyperspectral data dimensionality reduction using nonlinear autoencoders
|pdfUrl=https://ceur-ws.org/Vol-2665/paper8.pdf
|volume=Vol-2665
|authors=Evgeny Myasnikov
}}
==Hyperspectral data dimensionality reduction using nonlinear autoencoders==
Evgeny Myasnikov
Geoinformatics and Information Security department
Samara National Research University;
Image Processing Systems Institute of RAS - Branch of the FSRC "Crystallography and Photonics" RAS
Samara, Russia
mevg@geosamara.ru
Abstract—The known feature of hyperspectral images is a high spectral resolution, which allows us to identify materials and classify objects in images with high accuracy. However, hyperspectral images contain substantial redundancy, which can be eliminated with the aid of dimensionality reduction techniques. In this paper, we propose and study several dimensionality reduction techniques based on pretraining the encoder-decoder neural network with the results of the nonlinear mapping and principal component analysis techniques. The experiments performed on an open dataset show that the proposed techniques both provide discriminative low-dimensional features and allow us to reconstruct the source hyperspectral data with little error.

Keywords—autoencoder, hyperspectral images, nonlinear mapping, principal component analysis

I. INTRODUCTION

Hyperspectral images are widely used nowadays in different fields such as agriculture, medicine, biology, chemistry, and so on. The known feature of hyperspectral images is high spectral resolution, which allows us to identify materials and classify depicted objects with high accuracy.

However, hyperspectral images contain substantial redundancy, which can be eliminated with the aid of dimensionality reduction techniques. The images obtained after the dimensionality reduction stage can be processed efficiently, as much less data volume is involved in processing. It is worth noting that dimensionality reduction techniques are often used in different problems of image analysis (see [1-3], for example). The key requirement to the

allowed to outperform the PCA technique both in terms of the reconstruction error and the classification accuracy.

However, it was also shown [7, 8] that the nonlinear mapping technique [9] has advantages over the PCA in terms of the classification and segmentation quality of hyperspectral images. For this reason, in this paper, we study the possibility to train an autoencoder-like architecture to capture the nonlinear mapping. In particular, we split the autoencoder into an encoder and a decoder, train both parts separately using the results of nonlinear mapping, and investigate the effect of the subsequent fine-tuning of the whole network.

The structure of the paper is as follows. In Section II, we give the necessary theoretical information on the neural network architecture and the nonlinear mapping algorithm. In Section III, we describe the training procedures used in the experimental study and report the results of the experiments. The conclusions and the list of references are given at the end of the paper.

II. METHOD

A. Autoencoder Neural Network

The autoencoder neural network proposed in [5] was earlier referred to as the autoassociative neural network. It consists of two consecutive parts called the encoder and the decoder.

The encoder part takes a multidimensional vector x ∈ R^M as input and produces the corresponding low-dimensional representation y ∈ R^m, so that m < M
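The encoder-decoder structure and the idea of pretraining its two parts separately against precomputed low-dimensional codes can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: PCA projections serve as the target codes (the paper also uses nonlinear mapping results), both parts are single linear layers fitted in closed form by least squares rather than nonlinear layers trained by backpropagation, and the data, dimensions, and variable names are synthetic.

```python
import numpy as np

# Sketch of split pretraining of an encoder-decoder pair.
# Assumptions (not from the paper): PCA projections as target codes,
# linear layers fitted by least squares, synthetic data.

rng = np.random.default_rng(0)
M, m, n = 16, 3, 300          # input dim M, code dim m (m < M), samples

# Synthetic "spectra": points near an m-dimensional subspace of R^M.
X = rng.normal(size=(n, m)) @ rng.normal(size=(m, M))
X += 0.01 * rng.normal(size=(n, M))
X -= X.mean(axis=0)           # center the data, as PCA assumes

# Target codes: projections onto the first m principal components.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
Y_pca = X @ Vt[:m].T          # shape (n, m)

# 1) Pretrain the encoder to reproduce the codes from the input.
W_enc = np.linalg.lstsq(X, Y_pca, rcond=None)[0]      # (M, m)
# 2) Pretrain the decoder to reconstruct the input from the codes.
W_dec = np.linalg.lstsq(Y_pca, X, rcond=None)[0]      # (m, M)

# Assembled network: encode R^M -> R^m, decode R^m -> R^M.
Y = X @ W_enc
Xr = Y @ W_dec
rel_err = np.sum((X - Xr) ** 2) / np.sum(X ** 2)
print(f"code shape: {Y.shape}, relative reconstruction error: {rel_err:.2e}")
```

Fine-tuning the assembled network end to end, as studied in the paper, would then start from such pretrained weights instead of a random initialization.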