<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>HyperHound: a Framework for Hyperspectral Image Analysis and Target Detection using Deep Learning Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Rosario Di Carlo</string-name>
          <email>rosario.dicarlo.ext@leonardo.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Roberto Morelli</string-name>
          <email>roberto.morelli.ext@leonardo.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Nicolosi</string-name>
          <email>alessandro.nicolosi@leonardo.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lab of Artificial Intelligence, Leonardo Labs</institution>
          ,
          <addr-line>Via Pieragostini 80, Genova, 16149</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lab of Artificial Intelligence, Leonardo Labs</institution>
          ,
          <addr-line>Via Tiburtina Km. 12.400, Roma, 00156</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Hyperspectral images have shown great potential for the target detection task. These images collect the physical reflectance value over a large electromagnetic spectrum, providing a fingerprint that uniquely characterizes distinct materials. In this work, a framework is developed to recognize different materials using several approaches, ranging from classical methods to deep learning ones. Different learning paradigms are investigated, considering both supervised and self-supervised methods. The main difference between these approaches concerns the labeling process: while the former requires labeling the data, the latter is based on the pseudo-label generation described in this contribution.</p>
      </abstract>
      <kwd-group>
        <kwd>Hyperspectral</kwd>
        <kwd>deep learning</kwd>
        <kwd>target detection</kwd>
        <kwd>HSI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <sec id="sec-1-1">
        <p>Hyperspectral imaging (HSI) [<xref ref-type="bibr" rid="ref1">1</xref>] is an advanced technology that allows for the collection of a wide range of spectral data acquired by remote sensors. It has been shown to be useful for various applications, including object detection, classification, and material recognition. In particular, hyperspectral images provide unique material fingerprints that can be used to identify different materials.</p>
      </sec>
      <sec id="sec-1-2">
        <p>In recent years, there has been increasing interest in developing machine learning models that can accurately recognize materials from hyperspectral images. Deep learning has emerged as a promising approach to solving complex problems in various fields. Among the different deep learning models, convolutional neural networks (CNNs) [<xref ref-type="bibr" rid="ref2">2</xref>] have become dominant for processing visual-related tasks. The concept of CNNs was first introduced by Fukushima et al. [<xref ref-type="bibr" rid="ref3">3</xref>] and has since been improved upon by subsequent research [<xref ref-type="bibr" rid="ref4">4</xref>] and refined and simplified by other studies [<xref ref-type="bibr" rid="ref5">5</xref>] [<xref ref-type="bibr" rid="ref6">6</xref>].</p>
      </sec>
      <sec id="sec-1-3">
        <p>This paper proposes a framework that leverages both classical and deep learning approaches for material recognition in hyperspectral images. Different learning paradigms, including supervised and self-supervised methods, are investigated and evaluated for their performance on a benchmark dataset. (Organized by CINI, May 29–31, 2023, Pisa, Italy. ∗Corresponding author.)</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. HyperHound Framework</title>
      <p>HyperHound is a framework developed specifically for analyzing hyperspectral images. It has been designed to allow for easy implementation and testing of various models for target detection. The framework comes with a broad range of capabilities, with its main features described below, and provides a simple user interface (UI), shown in Fig. 1:
• Compatibility with the PIX format: support for loading files in PCI (Geomatics Database File) format, splitting them into smaller patches, and visualizing them for the analysis process.
• Datasets: integration of both publicly available and privately collected datasets to enable model evaluation and comprehensive data analysis.
• Implementation of classic target detection models: target detection is a critical task in computer vision that involves identifying specific objects of interest within an image. HyperHound implements several algorithms that provide a solid foundation for measuring the performance of newer and more advanced models. Implementing these classic models makes it possible to compare the results of different models and evaluate their relative strengths and weaknesses. Some of the classical models implemented are Euclidean distance, CEM, MF, and ACE.
• Data labeling: the interface of HyperHound provides two options for labeling data, individual pixel labeling and bounding box selection. The labeled data can be used to train a classification model.
• Functionalities of inference and training: HyperHound implements functionalities both to perform inference with pre-trained deep learning models and to train models on the fly from the interface. The inference process is optimized by splitting the input image into smaller slices and processing them in parallel on a GPU.
• Database of spectral signatures: consisting of laboratory-sampled materials collected from online sources. This resource enables comparisons between the reflectance of individual pixels and the available materials, enabling the computation of similarity scores.
• Atmospheric correction: integration with Py6S [<xref ref-type="bibr" rid="ref7">7</xref>], a Python implementation of the 6S model [<xref ref-type="bibr" rid="ref8">8</xref>], to compute the atmospheric correction of the spectral image according to the atmospheric conditions during the data acquisition.</p>
      <p>(Fig. 1: UI of the HyperHound framework loading Salinas and analyzing a spectral signature of the selected pixels.)</p>
      <sec id="sec-methods">
        <title>3. Methods</title>
        <sec id="sec-methods-1">
          <title>3.1. Standard Methods</title>
          <p>Classical hyperspectral image target detection algorithms, such as the Spectral Angle Mapper (SAM) [<xref ref-type="bibr" rid="ref9">9</xref>] and Spectral Information Divergence (SID) [<xref ref-type="bibr" rid="ref10">10</xref>], are two straightforward detection algorithms that measure the “distance” between the spectrum of the test pixel and the prior spectral signature of the target. Also, Constrained Energy Minimization (CEM) [<xref ref-type="bibr" rid="ref11">11</xref>] [<xref ref-type="bibr" rid="ref12">12</xref>], the matched filter (MF) [<xref ref-type="bibr" rid="ref13">13</xref>], and the adaptive coherence/cosine estimator (ACE) [<xref ref-type="bibr" rid="ref14">14</xref>] [<xref ref-type="bibr" rid="ref15">15</xref>] are typically developed using constrained least-squares regression methods or hypothesis-testing methods that assume a Gaussian distribution. However, real-world hyperspectral data obtained through remote sensing often exhibit strong non-linearity and non-Gaussianity, which can result in a decline in the performance of these classical detection algorithms.</p>
        </sec>
      </sec>
      <sec id="sec-2-1">
        <title>3.2. Self-supervised</title>
        <p>Self-supervised learning is a type of machine-learning technique in which a model is trained to learn patterns and relationships within a dataset without the need for explicit labeling or supervision. Within the scope of this work, this method is used to learn a space topology that clusters similar hyperspectral signatures. In this sense, starting from a reference signature, the algorithm can detect similar targets in the analyzed data. To overcome the labeling burden, an unsupervised method is used to generate pseudo-labels. The strategy used in this work is described in the evaluation and results section and leverages a clustering pre-text task. Once pseudo-labels are generated, contrastive learning is used to train the model to properly cluster signatures belonging to distinct classes. It is worth remembering that these classes are defined in a self-supervised manner, that is, using an unsupervised pre-text task. A fully connected neural network was chosen to learn the distance metric for class discrimination.</p>
        <sec id="sec-2-1-1">
          <title>3.3. Fully-connected neural network (FCNN)</title>
          <p>A fully-connected neural network consists of a series of fully connected layers that connect every neuron in one layer to every neuron in the next layer. Each neuron represents a computational unit that processes its input and passes its result to each neuron of the next layer. Layer by layer, a hierarchical representation of the input is learned to improve the classification task, which consists of producing, for each pixel, the probability of belonging to the target object. For hyperspectral images, the input of the FCNN is represented by all the channels of a single image pixel, which are then processed by all the fully-connected layers. Indeed, the first layer of the network has an input dimension equal to the number of hyperspectral channels, while the number of neurons in the subsequent layers gradually decreases. The last layer has a number of neurons equal to the dimension of the code used to encode the input pixel. The network is thus trained to encode the input into a sequence of numbers in a latent space. In this way, pixels belonging to the same class are clustered together to reduce their distance in the latent space. To promote this behavior, the training proceeds by means of a metric learning approach, as explained at the beginning of this section.</p>
        </sec>
      </sec>
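      <p>To make the notion of spectral “distance” used by the classical detectors concrete, the SAM score between a test pixel and a reference signature can be sketched as follows (an illustrative Python sketch; the function name is hypothetical and this is not the HyperHound code):</p>
      <preformat>
```python
import math

def spectral_angle(test_pixel, reference):
    """Spectral Angle Mapper (SAM) score: the angle, in radians, between
    a test pixel spectrum and a prior reference signature.
    Smaller angles mean more similar materials."""
    dot = sum(t * r for t, r in zip(test_pixel, reference))
    norm_t = math.sqrt(sum(t * t for t in test_pixel))
    norm_r = math.sqrt(sum(r * r for r in reference))
    # Clamp for numerical safety before taking the arccosine.
    cos_angle = max(-1.0, min(1.0, dot / (norm_t * norm_r)))
    return math.acos(cos_angle)

ref = [0.2, 0.5, 0.9, 0.4]                # prior target signature (toy values)
same_material = [0.4, 1.0, 1.8, 0.8]      # same spectral shape, brighter illumination
other_material = [0.9, 0.1, 0.2, 0.7]
# A scaled copy of the reference yields a near-zero angle, so the score
# is insensitive to overall illumination changes:
assert spectral_angle(other_material, ref) > spectral_angle(same_material, ref)
```
      </preformat>
      <p>Because the angle ignores the magnitude of the spectra, SAM is robust to uniform illumination changes; the deep models described above aim instead to learn richer, non-linear notions of similarity.</p>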
      <sec id="sec-2-2">
        <title>3.4. Supervised</title>
        <p>The supervised learning method involves training a model using labeled training data, which consist of a set of inputs and their corresponding outputs or class labels. The model’s parameters are updated iteratively during the training phase to accurately predict the desired outputs. In the testing phase, the model is evaluated against new input or test data to assess its ability to predict the correct labels. With sufficient training, the model can predict the labels of new input data. However, this approach requires a large amount of labeled training data to fine-tune the model parameters; therefore, it is most appropriate for situations where abundant labeled data are available. The HyperHound framework facilitates this labeling process and the subsequent training. The model adopted to test the framework is a convolutional neural network with 3D convolutional filters.</p>
        <sec id="sec-2-2-1">
          <title>3.5. 3D Convolutional neural network (3D-CNN)</title>
          <p>Identifying ground objects in hyperspectral imaging requires both spectral and spatial information. To effectively classify these objects, a 3D convolutional neural network (3D-CNN) was implemented. The network processes each pixel of the images by considering the relation between adjacent channels, in addition to spatial patterns across neighboring pixels. The input of the 3D-CNN is a patch of 7x7xN pixels, where N is the number of channels in the hyperspectral image. The architecture consists of a series of 3D convolutional layers, with decreasing filter numbers leading to the last fully-connected layer. This final layer takes as input the flattened concatenation of a set of feature maps from the last convolutional layers and outputs the probability that the center pixel of the input patch belongs to a target object. A scheme of this neural network is reported in Fig. 2.</p>
        </sec>
        <sec id="sec-2-2-2">
          <title>3.6. Hyperparameters optimization</title>
          <p>The model included in the HyperHound framework is the result of extensive hyperparameter optimization. To scale the search for optimal hyperparameters, Ray Tune, a Python library designed to execute experiments and tune hyperparameters at any scale, was utilized. This was done using the Leonardo HPC system, specifically the Davinci-1 infrastructure, which comprises a total of 80 nodes, each equipped with four Nvidia A100 GPUs.</p>
        </sec>
      </sec>
      <sec id="sec-2-3">
        <title>4. Evaluation and Results</title>
        <p>The following paragraphs begin by presenting one of the datasets used to validate the self-supervised approach. Subsequently, the training and validation processes were expanded to other hyperspectral datasets [<xref ref-type="bibr" rid="ref16">16</xref>]. Finally, the supervised method, including the labeling process and performance evaluation, is reported.</p>
        <sec id="sec-2-3-1">
          <title>4.1. Salinas</title>
          <p>Salinas is a hyperspectral dataset collected by the 224-band AVIRIS sensor over Salinas Valley, California, and is characterized by high spatial resolution (3.7-meter pixels). The area covered comprises 512 lines by 217 samples. Of the total of 224 bands, 20 water absorption bands ([108-112], [154-167]) were discarded. This image was available only as at-sensor radiance data. It includes vegetables, bare soils, and vineyard fields. The Salinas ground truth contains 16 classes.</p>
        </sec>
        <sec id="sec-2-3-2">
          <title>4.2. Data Labelling</title>
          <p>The self-supervised approach for labeling involves generating samples that are labeled without full supervision. One method for accomplishing this is through the use of endmembers, which are defined as pure spectral signatures that can be linearly combined to represent the hyperspectral image pixels. Endmembers can be thought of as the basis vectors of a geometrical subspace. During image acquisition, due to the relatively low spatial resolution of hyperspectral sensors, some pixels may collect a mix of signatures from different materials. This means that each pixel can be seen as a superimposition of the endmembers. By identifying the endmembers in the hyperspectral image, it is possible to obtain a set of pure spectral signatures that can be used to label the image data. Once the endmembers have been identified, they can be used in a variety of ways to label the image data. For example, one approach is to use spectral unmixing to estimate the abundance fractions of each endmember in each pixel. Nevertheless, although methods exist to unmix each pixel into the basic constituents of each material, their application does not guarantee optimality.</p>
          <p>In this work, each synthetic sample is therefore generated as a random linear combination of the extracted endmembers, x = Σ_i α_i e_i (1), where the α_i are the randomly generated coefficients and the e_i represent the n endmembers extracted from the source image. This sampling is repeated to generate all the dataset samples. The key step in this process is the labeling step, where the endmember corresponding to the highest coefficient is used to label the sample; in other words, label(x) = argmax_i α_i (2). So, in the end, a dataset with a custom number of samples is generated, with a number of classes equal to the number of endmembers.</p>
        </sec>
      </sec>
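      <p>The pseudo-label generation described above, Eqs. (1)–(2), can be sketched as follows: each synthetic pixel is a random convex combination of the extracted endmembers, and its pseudo-label is the index of the dominant coefficient. This is an illustrative Python sketch with hypothetical names, not the framework’s implementation:</p>
      <preformat>
```python
import random

def make_pseudo_labeled_samples(endmembers, n_samples, rng):
    """Generate synthetic pixels as random convex combinations of the
    endmembers (Eq. 1) and pseudo-label each one with the index of the
    endmember having the highest mixing coefficient (Eq. 2)."""
    samples = []
    for _ in range(n_samples):
        # Random non-negative coefficients, normalized to sum to one.
        coeffs = [rng.random() for _ in endmembers]
        total = sum(coeffs)
        coeffs = [c / total for c in coeffs]
        # Eq. (1): mixed spectrum x = sum_i alpha_i * e_i, band by band.
        n_bands = len(endmembers[0])
        x = [sum(a * e[b] for a, e in zip(coeffs, endmembers)) for b in range(n_bands)]
        # Eq. (2): pseudo-label = argmax_i alpha_i.
        label = max(range(len(coeffs)), key=lambda i: coeffs[i])
        samples.append((x, label))
    return samples

endmembers = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]  # two toy pure signatures
rng = random.Random(0)
dataset = make_pseudo_labeled_samples(endmembers, 100, rng)
```
      </preformat>
      <p>Training on such samples yields as many classes as there are endmembers, as noted above.</p>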
      <sec id="sec-2-4">
        <title>4.3. Performance</title>
        <p>All the models were evaluated on different hyperspectral datasets, each containing one or multiple classes to detect. For each dataset, one of the classes was designated as the target class, while the others were considered background classes.</p>
        <p>To identify the target class, a representative pixel was selected and the distance between that pixel and all other pixels was computed.</p>
      </sec>
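      <p>The procedure just described (pick a representative pixel, then compute its distance to every other pixel) can be sketched as follows. This is an illustrative Python sketch with hypothetical names, using the Euclidean distance as one of the simple measures mentioned earlier:</p>
      <preformat>
```python
def distance_map(cube, target_pixel):
    """Per-pixel Euclidean distance between every spectrum in a
    hyperspectral cube (rows x cols x bands, nested lists) and a chosen
    representative target spectrum; a low distance suggests the target class."""
    result = []
    for row in cube:
        out_row = []
        for spectrum in row:
            d = sum((s - t) ** 2 for s, t in zip(spectrum, target_pixel)) ** 0.5
            out_row.append(d)
        result.append(out_row)
    return result

# Toy 2x2 cube with 2 bands; the top-left pixel is the representative one.
cube = [
    [[0.1, 0.9], [0.8, 0.2]],
    [[0.1, 0.8], [0.9, 0.1]],
]
target = [0.1, 0.9]
dmap = distance_map(cube, target)
# dmap[0][0] is 0.0: the representative pixel matches itself exactly.
```
      </preformat>
      <p>Thresholding such a map gives a simple baseline detector against which the learned models can be compared.</p>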
      <sec id="sec-2-5">
        <title>4.4. Proprietary dataset</title>
        <p>The dataset used for the supervised task is a proprietary dataset. It consists of 4 images collected with a hyperspectral sensor during an aerial acquisition. The images were pre-processed by performing the L1 pre-processing chain, which consists of the following operations:
• Spectral and Radiometric Calibration
• Geo-Referencing
• Geo-Rectification</p>
        <p>These operations do not include the L2 pre-processing chain, which involves atmospheric correction and conversion of values to reflectance. In addition, the images have artifacts, probably due to the vibrations the sensor was subjected to during the flight. With these limitations, a single pixel may contain a mixture of multiple hyperspectral signatures, making the detection task harder.</p>
      </sec>
      <sec id="sec-2-6">
        <title>4.5. Data Labelling</title>
        <sec id="sec-2-6-1">
          <p>Each of the 4 images was cropped into 200 smaller tiles, each measuring 613 × 613 pixels, for a total of 800 tiles. Through ground surveys, it was determined that 18 of these tiles contained the targets to identify. Of these 18 tiles, 11 were included in the training-validation sets, while the remaining 7 tiles were used to test the models.</p>
          <p>The labeling process is provided through the HyperHound interface. Through this interface, it is possible to display an image and collect a set of pixels representing both target and background samples. This collection can be performed using either bounding boxes or dot annotations, for a finer pixel selection. This procedure was repeated on all 18 tiles used for training, validation, and testing. A patch of dimension 7x7 was cropped around each collected pixel to provide the input, in the form of images, to the 3D-CNN used for training. The total number of patches collected for training and validation was nearly 1500, with a proportion of 1:14 between target and background samples. The partition of data into training, validation, and testing is summarized in Table 1.</p>
        </sec>
        <sec id="sec-2-6-2">
          <title>4.6. Performance</title>
          <p>The objective was to identify target areas, and a detection metric was used to evaluate the model’s performance. The F1 score was chosen as the evaluation metric since it handles class imbalances better than accuracy and other metrics. An algorithm was developed to associate the model’s predictions with the ground-truth labels and assess the model’s performance. The output of the model is a heat-map representing the probability of a pixel belonging to a target area. Therefore, the first step was to apply a threshold to obtain a binary mask, where each cluster of fully-connected pixels represents a predicted object. Subsequently, if a predicted object partially or fully overlaps with an object in the ground-truth mask, it is considered a true positive (TP). On the other hand, if there is no overlap between a predicted object and a ground-truth object, it is considered a false positive (FP). Finally, if a ground-truth label is not associated with any predicted object, the false negative (FN) count is increased by one unit. The results are provided in Table 2.</p>
          <p>The proposed model was evaluated on seven images that were not included in the training or validation set and achieved an F1 score of 0.6 in identifying the targets. An example of a detection comparison on a test image is reported in Fig. 5. The first patch (top left corner) represents the ground truth, that is, a completely black image with green boxes corresponding to the targets to detect. The image is black in order to preserve sensitive information, and the original image pertaining to this test is not shown. The remaining patches represent the predictions of all the competing methods. Notably, only the 3D-CNN was able to correctly detect all the targets on this test tile.</p>
        </sec>
      </sec>
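      <p>The evaluation protocol described above (threshold the heat-map, group fully-connected pixels into predicted objects, then count TP, FP, and FN by overlap with ground-truth objects) can be sketched as follows. This is an illustrative Python sketch with hypothetical names, not the evaluation code used in this work:</p>
      <preformat>
```python
def label_components(mask):
    """4-connected component labeling of a binary mask (list of lists)
    via iterative flood fill; returns a label grid and the component count."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for r0 in range(rows):
        for c0 in range(cols):
            if mask[r0][c0] and labels[r0][c0] == 0:
                count += 1
                labels[r0][c0] = count
                stack = [(r0, c0)]
                while stack:
                    r, c = stack.pop()
                    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nr, nc = r + dr, c + dc
                        if nr in range(rows) and nc in range(cols) and mask[nr][nc] and labels[nr][nc] == 0:
                            labels[nr][nc] = count
                            stack.append((nr, nc))
    return labels, count

def f1_from_heatmap(heatmap, gt_mask, threshold):
    """Threshold the probability heat-map, group predicted pixels into
    connected objects, and match them against ground-truth objects: any
    overlap makes a TP, a prediction with no overlap is an FP, and an
    unmatched ground-truth object is an FN."""
    pred_mask = [[1 if p > threshold else 0 for p in row] for row in heatmap]
    pred_labels, n_pred = label_components(pred_mask)
    gt_labels, n_gt = label_components(gt_mask)
    pred_hits, gt_hits = set(), set()
    for r in range(len(heatmap)):
        for c in range(len(heatmap[0])):
            if pred_labels[r][c] and gt_labels[r][c]:
                pred_hits.add(pred_labels[r][c])
                gt_hits.add(gt_labels[r][c])
    tp = len(pred_hits)
    fp = n_pred - len(pred_hits)
    fn = n_gt - len(gt_hits)
    return 2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 1.0

# Toy example: one predicted object hits a ground-truth object (TP),
# one prediction misses (FP), and one ground-truth object is missed (FN).
heatmap = [
    [0.9, 0.8, 0.1, 0.0],
    [0.1, 0.1, 0.0, 0.7],
    [0.0, 0.0, 0.0, 0.0],
]
gt = [
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 0, 1],
]
f1 = f1_from_heatmap(heatmap, gt, 0.5)
```
      </preformat>
      <p>With the counts in hand, F1 = 2·TP / (2·TP + FP + FN), the object-level metric reported in Table 2.</p>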
    </sec>
    <sec id="sec-3">
      <title>5. Conclusions</title>
      <sec id="sec-3-1">
        <p>This article presents the HyperHound framework, which has been developed for hyperspectral image analysis. The framework provides an effective solution for analyzing hyperspectral data by applying deep learning techniques. Two types of deep learning models were analyzed using the framework: self-supervised and supervised. The self-supervised approach is particularly useful in addressing the challenges of a lack of labeled data and the difficulty of pixel-level ground truth annotation. The model learns to predict features from the input data itself, without any explicit supervision. This approach is particularly effective when ground truth data are not available, and it has shown good results on many literature datasets. However, the self-supervised models are less robust, and their detection metrics are generally lower compared with supervised models. The supervised model, on the other hand, utilizes ground truth data to train the model. This type of model yields good results, providing accurate predictions even under different real-world conditions where classical and unsupervised models often fail.</p>
        <p>In this study, it is highlighted that many hyperspectral datasets used as benchmarks lack sufficient data, and the training and validation data are often highly correlated, resulting in models that are not robust to different real-world conditions. However, the supervised models have shown significant improvement and are particularly useful for man-in-the-loop applications. They provide an excellent tool for guiding and facilitating the task of an expert analyst in identifying targets, which is a challenging task in hyperspectral data analysis. Therefore, the HyperHound framework and supervised models provide a promising direction for hyperspectral data analysis, and they hold great potential for addressing the challenges of real-world applications.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Landgrebe</surname>
          </string-name>
          ,
          <article-title>Hyperspectral image data analysis</article-title>
          ,
          <source>IEEE Signal processing magazine</source>
          <volume>19</volume>
          (
          <year>2002</year>
          )
          <fpage>17</fpage>
          -
          <lpage>28</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G. E.</given-names>
            <surname>Hinton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. R.</given-names>
            <surname>Salakhutdinov</surname>
          </string-name>
          ,
          <article-title>Reducing the dimensionality of data with neural networks</article-title>
          ,
          <source>science</source>
          <volume>313</volume>
          (
          <year>2006</year>
          )
          <fpage>504</fpage>
          -
          <lpage>507</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Fukushima</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Miyake</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Ito</surname>
          </string-name>
          ,
          <article-title>Neocognitron: A neural network model for a mechanism of visual pattern recognition</article-title>
          ,
          <source>IEEE transactions on systems, man, and cybernetics</source>
          (
          <year>1983</year>
          )
          <fpage>826</fpage>
          -
          <lpage>834</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>Y.</given-names>
            <surname>LeCun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bottou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Bengio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Haffner</surname>
          </string-name>
          ,
          <article-title>Gradient-based learning applied to document recognition</article-title>
          ,
          <source>Proceedings of the IEEE</source>
          <volume>86</volume>
          (
          <year>1998</year>
          )
          <fpage>2278</fpage>
          -
          <lpage>2324</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. C.</given-names>
            <surname>Ciresan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Meier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Masci</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Gambardella</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schmidhuber</surname>
          </string-name>
          ,
          <article-title>Flexible, high performance convolutional neural networks for image classification</article-title>
          , in: Twenty-second
          <source>international joint conference on artificial intelligence, Citeseer</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P. Y.</given-names>
            <surname>Simard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Steinkraus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. C.</given-names>
            <surname>Platt</surname>
          </string-name>
          , et al.,
          <article-title>Best practices for convolutional neural networks applied to visual document analysis</article-title>
          .,
          <source>in: Icdar</source>
          , volume
          <volume>3</volume>
          ,
          Edinburgh
          ,
          <year>2003</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>R. T.</given-names>
            <surname>Wilson</surname>
          </string-name>
          ,
          <article-title>Py6S: A Python interface to the 6S radiative transfer model</article-title>
          .,
          <source>Comput. Geosci</source>
          .
          <volume>51</volume>
          (
          <year>2013</year>
          )
          <fpage>166</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E. F.</given-names>
            <surname>Vermote</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tanré</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Deuze</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Herman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-J.</given-names>
            <surname>Morcette</surname>
          </string-name>
          ,
          <article-title>Second simulation of the satellite signal in the solar spectrum, 6s: An overview</article-title>
          ,
          <source>IEEE transactions on geoscience and remote sensing 35</source>
          (
          <year>1997</year>
          )
          <fpage>675</fpage>
          -
          <lpage>686</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>F. A.</given-names>
            <surname>Kruse</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lefkoff</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Boardman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Heidebrecht</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Shapiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Barloon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Goetz</surname>
          </string-name>
          ,
          <article-title>The spectral image processing system (sips)-interactive visualization and analysis of imaging spectrometer data</article-title>
          ,
          <source>Remote sensing of environment 44</source>
          (
          <year>1993</year>
          )
          <fpage>145</fpage>
          -
          <lpage>163</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-I.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.-C.</given-names>
            <surname>Chang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. O.</given-names>
            <surname>Jensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. M.</given-names>
            <surname>D'Amico</surname>
          </string-name>
          ,
          <article-title>New hyperspectral discrimination measure for spectral characterization</article-title>
          ,
          <source>Optical engineering 43</source>
          (
          <year>2004</year>
          )
          <fpage>1777</fpage>
          -
          <lpage>1786</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>L.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Du</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <article-title>Adjusted spectral matched filter for target detection in hyperspectral imagery</article-title>
          ,
          <source>Remote sensing 7</source>
          (
          <year>2015</year>
          )
          <fpage>6611</fpage>
          -
          <lpage>6634</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cohen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. G.</given-names>
            <surname>Blumberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Rotman</surname>
          </string-name>
          ,
          <article-title>Subpixel hyperspectral target detection using local spectral and spatial information</article-title>
          ,
          <source>Journal of Applied Remote Sensing</source>
          <volume>6</volume>
          (
          <year>2012</year>
          )
          <fpage>063508</fpage>
          -
          <lpage>063508</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>D.</given-names>
            <surname>Manolakis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Truslow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pieper</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cooley</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brueggeman</surname>
          </string-name>
          ,
          <article-title>Detection algorithms in hyperspectral imaging systems: An overview of practical algorithms</article-title>
          ,
          <source>IEEE Signal Processing Magazine</source>
          <volume>31</volume>
          (
          <year>2013</year>
          )
          <fpage>24</fpage>
          -
          <lpage>33</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>E. J.</given-names>
            <surname>Kelly</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Forsythe</surname>
          </string-name>
          ,
          <article-title>Adaptive detection and parameter estimation for multidimensional signal models</article-title>
          ,
          <source>Technical Report, Massachusetts Inst of Tech Lexington Lincoln Lab</source>
          ,
          <year>1989</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>X.</given-names>
            <surname>Jin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Paswaters</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Cline</surname>
          </string-name>
          ,
          <article-title>A comparative study of target detection algorithms for hyperspectral imagery</article-title>
          , in:
          <source>Algorithms and Technologies for Multispectral, Hyperspectral, and Ultraspectral Imagery XV</source>
          , volume
          <volume>7334</volume>
          ,
          <publisher-name>SPIE</publisher-name>
          ,
          <year>2009</year>
          , pp.
          <fpage>682</fpage>
          -
          <lpage>693</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>M.</given-names>
            <surname>Graña</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. A.</given-names>
            <surname>Veganzons</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Ayerdi</surname>
          </string-name>
          ,
          <source>Hyperspectral remote sensing scenes</source>
          ,
          <year>2011</year>
          . URL: https://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>