<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Per-Channel Regularization for Regression-Based Spectral Reconstruction</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yi-Tun Lin</string-name>
          <email>Yi-Tun.Lin@uea.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Graham D. Finlayson</string-name>
          <email>G.Finlayson@uea.ac.uk</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>School of Computing Sciences, University of East Anglia</institution>
          ,
          <addr-line>Norwich</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Spectral reconstruction algorithms seek to recover spectra from RGB images. This estimation problem is often formulated as least-squares regression, and a Tikhonov regularization is generally incorporated, both to support stable estimation in the presence of noise and to prevent over-fitting. The degree of regularization is controlled by a single penalty-term parameter, which is often selected using the cross-validation experimental methodology. In this paper, we generalize the simple regularization approach to admit a per-spectral-channel optimization setting, and a modified cross-validation procedure is developed. Experiments validate our method. Compared to the conventional regularization, our per-channel approach significantly improves the reconstruction accuracy at multiple spectral channels, by up to 17% for all the considered models.</p>
      </abstract>
      <kwd-group>
        <kwd>Spectral reconstruction</kwd>
        <kwd>Spectral imaging</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        The light spectrum is a continuous intensity distribution across wavelengths.
This spectral information is commonly used to help determine and/or
discriminate the physical properties of object surfaces, for example in remote sensing [
        <xref ref-type="bibr" rid="ref14 ref23 ref25 ref8">25,
8, 14, 23</xref>
        ] and medical imaging [
        <xref ref-type="bibr" rid="ref28 ref29">28, 29</xref>
        ]. Also, in various practical applications, the
devices (sensors or displays), light sources and object surfaces are characterized
by spectral measurements [
        <xref ref-type="bibr" rid="ref1 ref16 ref27 ref9">1, 9, 16, 27</xref>
        ].
      </p>
      <p>
        Despite the advantages of spectral capture, almost all images that we record
contain just 3 measurements - the 3 weighted spectral averages over the Red,
Green and Blue spectral regions. Perforce, much spectral information is therefore
lost in this RGB image formation process. Indeed, it is a classical result in color
science, that there are many spectra - called metamers [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ] - which integrate to
the same RGB, and of course, given only one RGB measurement we cannot tell
which physical spectrum induced it. Still, by adopting learning approaches we
can estimate the spectrum that likely corresponds to a given RGB. Estimating
spectra from RGBs is called spectral reconstruction (SR).
      </p>
      <p>In Fig. 1 we illustrate RGB image formation and the SR process. In the
top-left panel we see a single radiance spectrum measured at one location in
the hyperspectral image (bottom left). This spectrum is sampled by 3 sensors,
resulting in the 3-value RGB response (top-right). Repeating this process for all
image locations, the corresponding RGB image in the bottom right is derived
from the hyperspectral image. Then, the spectral reconstruction algorithms
attempt to recover the hyperspectral image back from the RGB image (or an
approximation thereof).</p>
      <p>
        Historically, this SR problem is effectively solved by least-squares regression
[
        <xref ref-type="bibr" rid="ref11 ref15 ref17 ref18 ref2">15, 11, 17, 18, 2</xref>
        ], where the map from RGBs (or the non-linear RGB features) to
spectra is modelled as a simple linear transform. More recently, deep learning
approaches [
        <xref ref-type="bibr" rid="ref22 ref4 ref6">4, 22, 6</xref>
        ] have been developed that provide even better SR performance.
Effectively, this performance increment is achieved by regressing an RGB in the
context of its neighborhood to its corresponding spectrum. Clearly, this patch-based
idea has merit. For example, if the algorithm can identify a patch in the scene
as a `skin region' then spectral recovery is plausibly easier to solve, since skin
spectra have characteristic spectral shapes [
        <xref ref-type="bibr" rid="ref19 ref7">7, 19</xref>
        ].
      </p>
      <p>
        Despite the clear rationale behind the deep-learning approach, Aeschbacher et
al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] show that the regression-based A+ algorithm provides very competitive
performance. Moreover, Lin and Finlayson [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] show that several regression
methods actually generalize better than the leading deep-learning models when
the scene exposure changes.
      </p>
      <p>The main concern of this paper is the regularization step of regression-based
SR algorithms. The classical (multivariate) regression problem from statistics is
written as</p>
      <p>MA ≈ B ,   (1)
where A is an m × N matrix as the table of measured data (m is the dimension of
the measured data and N is the number of data samples, with N ≫ m), and B
is the corresponding target data matrix, of dimension k × N (k is the dimension
of the target data). The aim is to find the k × m linear mapping M that makes
the approximation as good as possible.</p>
      <p>Now, let us suppose small fluctuations in the target data, denoted as a matrix
E of very small numbers (all entries are close to 0). The following regression is
almost identical to Equation (1):</p>
      <p>M′A ≈ B + E .   (2)
And yet, often we find that the best solutions for M and M′ are very different
from one another. The reason for this is that some dimensions of the measured
data (the rows of the data matrix A) could be highly correlated, such that there
can be very different M's that fit B equally well.</p>
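This instability can be demonstrated numerically. The following sketch (synthetic data; all sizes and values are hypothetical) builds a measurement matrix A with two highly correlated rows and shows that a small Tikhonov penalty keeps the fitted maps for B and B + E close together:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Measurement matrix A (m = 2, N = 100) with two highly correlated rows.
a1 = rng.random(N)
A = np.vstack([a1, a1 + 1e-6 * rng.standard_normal(N)])
B = rng.random((1, N))                    # target data
E = 1e-6 * rng.standard_normal((1, N))    # tiny fluctuations of the targets

# The normal matrix A A^T is nearly singular, so the plain least-squares
# solutions for B and B + E can differ wildly.
G = A @ A.T
M_plain = B @ A.T @ np.linalg.inv(G)
M_pert = (B + E) @ A.T @ np.linalg.inv(G)

# A small Tikhonov penalty stabilizes both solutions.
gamma = 1e-3
G_reg = G + gamma * np.eye(2)
M_reg = B @ A.T @ np.linalg.inv(G_reg)
M_pert_reg = (B + E) @ A.T @ np.linalg.inv(G_reg)
```

With the penalty in place, the two fitted maps agree to within the scale of the perturbation, despite the near-singular normal matrix.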
      <p>
        Regularization theory [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ] is a way of dealing with this kind of non-robustness.
Given the example of Equations (1) and (2), we may ask which one among all
plausible (near-optimized) solutions is more likely to generalize to `unseen
data'. Typically, the principle of regularization follows the idea that the
best-fitting function (i.e. M) should be the simplest possible solution that
still fits the data well.
      </p>
      <p>In Fig. 2 we show a 1-D example. In the least-squares sense, the wiggly red
curve is found to best fit the training data (black data points). Yet intuitively,
this is not the correct fit, as the data points appear to follow a much simpler
distribution. In contrast, the regularized fit (blue curve) seems to model the data
better.</p>
      <p>Returning to our spectral reconstruction problem, taking linear regression as
an example, the matrix A corresponds to a set of image RGBs (m = 3), and B
refers to the spectra we are trying to recover (k is the number of spectral
channels we have measured). The insight that we explore in this paper is that there
is no reason why the fits for different spectral channels should be regularized
together. Rather, we consider the per-channel regression problem, where
each spectral channel is recovered in turn, and correspondingly, each row of M
is solved in turn. This simple modification allows us to carry out a per-channel
regularization that ensures individual optimization for each spectral channel in
the spectral reconstruction problem.</p>
      <p>
        Whether for the conventional global regularization or for our per-channel
approach, care must be taken not to overly tune the terms in M to the data at
hand. This led us to develop a modified cross-validation procedure. Our method
separates the data at hand into three subsets, respectively for training,
regularization and testing, which is a novel adjustment of the standard methodology
[
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] and is another contribution of this paper.
      </p>
    </sec>
    <sec id="sec-2">
      <title>Background</title>
      <sec id="sec-2-1">
        <title>Hyperspectral Images and RGB Simulation</title>
        <p>In a hyperspectral image, spectra are measured discretely at some sampled
wavelengths. Supposing the visible spectrum runs from 400 to 700 nanometers and the
spectral sampling is every 10 nanometers, we get a 31-dimensional discrete
representation of spectra, denoted as r ∈ R^31.</p>
        <p>
          Correspondingly, the spectral sensitivities of the R, G and B camera sensors
can also be represented in discrete vector form (i.e. as 31-dimensional vectors),
respectively denoted as sR, sG and sB. Then, as per the illustration in Fig. 1 we
can write image formation as [
          <xref ref-type="bibr" rid="ref26">26</xref>
          ]:
        </p>
        <p>x = (sR^T; sG^T; sB^T) r ,   (3)
where the sensitivities sR^T, sG^T and sB^T are stacked as the rows of a 3 × 31
matrix, and x = (R, G, B)^T is the 3-value RGB camera response.</p>
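As a concrete sketch of Equation (3), the following code simulates one RGB response; the sensitivities and spectrum here are random stand-ins (a real experiment would use measured curves such as the CIE 1964 functions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in 31-channel data: 400-700 nm sampled every 10 nm.
S = rng.random((3, 31))   # rows play the roles of sR^T, sG^T, sB^T
r = rng.random(31)        # one radiance spectrum

x = S @ r                 # the 3-value RGB response of Equation (3)
```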
        <p>In the SR problem (the bottom of Fig. 1) we seek to recover hyperspectral
images from the RGB images. Denote an SR algorithm as Ψ : R^3 → R^31.</p>
      </sec>
      <sec id="sec-2-2">
        <title>Regression-Based Spectral Reconstruction</title>
        <p>The general regression-based formulation of the SR problem is written as</p>
        <p>
          Ψ(x) ≈ r ,   (4)
with the linear model
Ψ(x) = M φ(x) ,   (5)
where φ(·) is a feature transformation which maps each RGB to a
corresponding p-term feature vector, which in turn is mapped to a spectrum by the regression matrix
M. Each of the various regression-based models [
          <xref ref-type="bibr" rid="ref11 ref15 ref17 ref18 ref2">15, 11, 17, 18, 2</xref>
          ] adopts a
bespoke definition of φ(x). For details of the considered models, including Linear,
Root-Polynomial and Adjusted Anchored Neighborhood Regression [
          <xref ref-type="bibr" rid="ref15 ref17 ref2">15, 17, 2</xref>
          ],
see Appendix A.
        </p>
        <p>Least-Squares Optimization The most common least-squares optimization
seeks to minimize the sum of squared errors between the ground-truth
training spectral data and the reconstructions from their corresponding RGBs, Ψ(x).
Given the formulation of Ψ(x) in Equation (5), the least-squares optimization of
M is formulated as:</p>
        <p>M = arg min_M Σ_{i=1}^{N} ||r_i − M φ(x_i)||_2^2 ,   (6)
where N is the number of data points in the training set and i indexes an
individual spectrum.</p>
        <p>Collating all spectral training data in a data matrix R = (r_1, r_2, ..., r_N) and
the corresponding feature vectors in a matrix Φ = (φ(x_1), φ(x_2), ..., φ(x_N)), Equation
(6) can be written as:</p>
        <p>M = arg min_M ||R − M Φ||_F^2 .   (7)
Here ||·||_F^2 is the squared Frobenius norm, which is exactly the sum of squares
of all entries of the enclosed matrix.</p>
        <p>
          Tikhonov Regularization In regression-based SR, the most common method
to regularize a model is Tikhonov regularization [
          <xref ref-type="bibr" rid="ref15 ref24">15, 24</xref>
          ], which hypothesizes
that a more natural fit is obtained when the `magnitude' (or the `matrix norm')
of M is bounded to some extent. Based on this assumption, the least-squares
optimization in Equation (7) is extended to incorporate a regularization term:
        </p>
        <p>
          M = arg min_M ||R − M Φ||_F^2 + γ ||M||_F^2 .   (8)
Here, the ||M||_F^2 term (the regularization term, or penalty term) is controlled
by a user-defined regularization parameter γ ≥ 0, which is usually determined
empirically [
          <xref ref-type="bibr" rid="ref13 ref21">13, 21</xref>
          ].
        </p>
        <p>
          Equation (8) is solved in closed form [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]:
        </p>
        <p>M = R Φ^T (Φ Φ^T + γ I)^{-1} ,   (9)
where I is the p × p identity matrix (recall that p is the dimension of the feature
vectors φ(x)).</p>
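A minimal numpy sketch of the closed-form solution in Equation (9), using hypothetical sizes (k = 31 spectral channels, p = 3 linear features, N = 500 training samples) and synthetic data:

```python
import numpy as np

def tikhonov_regression(R, Phi, gamma):
    """Closed-form Tikhonov solution: M = R Phi^T (Phi Phi^T + gamma I)^(-1)."""
    p = Phi.shape[0]
    return R @ Phi.T @ np.linalg.inv(Phi @ Phi.T + gamma * np.eye(p))

rng = np.random.default_rng(0)
Phi = rng.random((3, 500))                                 # feature matrix (p x N)
M_true = rng.random((31, 3))                               # ground-truth map
R = M_true @ Phi + 0.01 * rng.standard_normal((31, 500))   # noisy spectra (k x N)

M = tikhonov_regression(R, Phi, gamma=1e-4)
```

With plentiful, well-conditioned data the recovered M lands close to the generating map; the penalty mainly matters when Φ Φ^T is poorly conditioned.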
      </sec>
    </sec>
    <sec id="sec-3">
      <title>Proposed Method</title>
      <p>In spectral reconstruction, we wish to recover spectral measurements in the
range from 400 to 700 nanometers (the visible spectrum). Suppose we know the
intensity of light entering the camera at 400 nanometers. Given this knowledge, if
we wished to predict the value of the spectrum at 410 nanometers, it makes sense
to assume a value similar to the one at 400 nanometers. Indeed, the fact that
the intensity values at close-by wavelengths are similar is why we can represent
the continuous visible spectrum at discrete wavelengths. Conversely, one could
not use the knowledge of light at 400 nanometers to predict the spectral value
at, say, 700 nanometers.</p>
      <p>And yet, in the literature when we regularize the regression-based SR models,
we are - in some sense - assuming that all wavelengths are related. Our new
per-channel reformulation of Tikhonov regularization for spectral reconstruction
effectively allows the recovery of the spectral values at distant wavelengths to
be considered more independently from one another.</p>
      <sec id="sec-3-1">
        <title>Per-Channel Regularization</title>
        <p>Let us split the regression matrix M by row: M = (m_1, m_2, ..., m_31)^T, such that
the general form of regression-based SR formulated in Equation (5) becomes</p>
        <p>Ψ(x) = M φ(x) = (m_1^T; m_2^T; ...; m_31^T) φ(x) = (r̂_1; r̂_2; ...; r̂_31)^T ,   (10)
where (r̂_1, r̂_2, ..., r̂_31)^T = r̂ is the reconstructed spectrum. For an arbitrary kth
spectral channel, the estimated intensity value is given by</p>
        <p>r̂_k = m_k^T φ(x) .   (11)</p>
        <p>Note that as we represent the regression model by channel, we do not alter
the original model. That is, regression-based spectral reconstruction
has always been structured such that the reconstruction for each spectral channel
depends exclusively on the corresponding row of M.</p>
        <p>Given this fact, we might expect that each row of M would be optimized
independently. However, this is not the case for the conventional regularized
least-squares solution in Equation (9). Indeed, we see in Equation (8) that the
regularization term is controlled by one single regularization parameter γ, and
the fits for all spectral channels (all rows of M) are regularized by the same γ.
Regardless of how we optimize this parameter, this setting clearly contradicts
the inherent independence between the rows of M.</p>
        <p>Let us split the spectral reconstruction problem into 31 independent
problems, where the function Ψ_k : R^3 → R reconstructs the kth-channel values of
the reconstructed spectra by the kth row of M:</p>
        <p>Ψ_k(x) = m_k^T φ(x) .   (12)
Then, we are to determine m_k^T as the least-squares fit for the kth channel data.
Recall the training spectral data matrix R whose columns are individual training
spectra; now we split R by spectral channel instead: R = (ρ_1, ρ_2, ..., ρ_31)^T,
where ρ_k^T includes the kth-channel values of all training spectral data. m_k^T is
then optimized following the regularized least-squares optimization:</p>
        <p>m_k^T = arg min_{m_k^T} ||ρ_k^T − m_k^T Φ||_2^2 + γ_k ||m_k||_2^2 ,   (13)
and solved in closed form:</p>
        <p>m_k^T = ρ_k^T Φ^T (Φ Φ^T + γ_k I)^{-1} .   (14)
Here γ_k represents the channel-wise regularization parameter that only controls
the regularization for the kth channel.</p>
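A sketch of Equation (14) with toy sizes and random data. As a sanity check, when every γ_k takes the same value, row-by-row solving reproduces the conventional global solution of Equation (9):

```python
import numpy as np

def per_channel_tikhonov(R, Phi, gammas):
    """Solve each row m_k^T of M with its own penalty gamma_k (Equation (14))."""
    p = Phi.shape[0]
    G = Phi @ Phi.T
    rows = [R[k] @ Phi.T @ np.linalg.inv(G + gammas[k] * np.eye(p))
            for k in range(R.shape[0])]
    return np.vstack(rows)

rng = np.random.default_rng(0)
Phi = rng.random((3, 400))     # p = 3 features, N = 400 samples
R = rng.random((31, 400))      # 31 spectral channels

gamma = 1e-3
M_global = R @ Phi.T @ np.linalg.inv(Phi @ Phi.T + gamma * np.eye(3))
M_per = per_channel_tikhonov(R, Phi, np.full(31, gamma))
```

In practice the gammas vector would hold 31 individually tuned values, one per channel.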
        <p>Clearly, our per-channel regularization scheme (Equation (14)) solves the
regression matrix M row-by-row, such that each row is ready to be regularized
independently. The remaining question is then how we are going to optimize
these regularization parameters.
</p>
      </sec>
      <sec id="sec-3-2">
        <title>Modified Cross Validation</title>
        <p>
          Perforce, regularization parameters (conventionally the single γ and our
per-channel γ_k's) are determined empirically. In the literature, a grid-search
approach is adopted, where different parameters are tried to regularize the model.
These `intermediate models' are then used to recover spectra from a set of unseen
RGB images, and the model that minimizes the evaluation criteria is selected.
For example, see [
          <xref ref-type="bibr" rid="ref11 ref17 ref2">11, 17, 2</xref>
          ].
        </p>
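The grid search described above can be sketched as follows (a hypothetical helper with synthetic data; the cited works each use their own grids and error measures):

```python
import numpy as np

def grid_search_gamma(R_tr, Phi_tr, R_hold, Phi_hold, grid):
    """Try each candidate penalty and keep the one whose regularized model
    best recovers the held-out (regularization) data."""
    p = Phi_tr.shape[0]
    best_gamma, best_err = None, np.inf
    for gamma in grid:
        M = R_tr @ Phi_tr.T @ np.linalg.inv(Phi_tr @ Phi_tr.T + gamma * np.eye(p))
        err = np.abs(R_hold - M @ Phi_hold).mean()
        if err < best_err:
            best_gamma, best_err = gamma, err
    return best_gamma

rng = np.random.default_rng(0)
Phi_tr, Phi_hold = rng.random((3, 300)), rng.random((3, 100))
M_true = rng.random((31, 3))
R_tr = M_true @ Phi_tr + 0.01 * rng.standard_normal((31, 300))
R_hold = M_true @ Phi_hold + 0.01 * rng.standard_normal((31, 100))

gamma = grid_search_gamma(R_tr, Phi_tr, R_hold, Phi_hold, np.logspace(-8, 2, 11))
```

The same loop applies per channel by replacing R with a single row ρ_k^T.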
        <p>
          Since we would like to train, regularize and test a model using the images
from the same database, we must partition the database into several subsets
for these different usages. All (to our knowledge) deep-learning models simply
separate the image database into 3 subsets randomly (respectively for training,
validation and testing, in the parlance of deep learning), see [
          <xref ref-type="bibr" rid="ref22 ref6">22, 6</xref>
          ]. However,
this setting can potentially create so-called `unfair' separations, such that if the
database is separated differently, the results may vary.
        </p>
        <p>
          A better practice is using a cross-validation process. In this paper we develop
our own cross-validation scheme, which is modified from the conventional
K-fold cross validation [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]. This is because the conventional K-fold only seeks to
separate a dataset into training and testing data, and here we need an additional
partition as the regularization data.
        </p>
        <p>In Fig. 3 we show the comparison between the conventional 4-fold cross
validation (left) and our method (right). For both methods the same experiment
is conducted 4 times. In each trial, the conventional method selects 3 out of
4 portions of data for training (marked in blue) and the remaining part is for
testing (marked in orange). In our method, however, only 2 out of 4 portions of
data are for training, which allows 1 portion of data (marked in green) to be used for
regularization, that is, to determine the γ and γ_k parameters. Subject to these
terms we solve for the best regression model on the training (blue) data. The
performance statistics are calculated based on the recovery errors on the testing
(orange) data and averaged over the 4 trials.</p>
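One possible rotation implementing the 4-trial scheme (a sketch; the paper does not prescribe which fold plays which role in each trial):

```python
def modified_cv_splits(n_folds=4):
    """Each trial: 2 folds train, 1 fold tunes the regularization
    parameters, and 1 fold tests."""
    folds = list(range(n_folds))
    splits = []
    for t in range(n_folds):
        test_fold = folds[t]
        reg_fold = folds[(t + 1) % n_folds]
        train_folds = [f for f in folds if f not in (test_fold, reg_fold)]
        splits.append({"train": train_folds, "reg": reg_fold, "test": test_fold})
    return splits

splits = modified_cv_splits()
```

Choosing the test fold (4 ways) and then the regularization fold (3 of the remaining ways) gives 4 × 3 = 12 permutations in total, of which this rotation uses 4.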
        <p>Notice that for our cross-validation method there actually exist more possible
permutations than the presented 4-trial setting. To be exact, there are 12
different permutations. We remark that, according to our empirical study,
experimenting with more trials (than the presented setting) does not make a significant
difference in the performance statistics.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Experiments</title>
      <sec id="sec-4-1">
        <title>Considered Models</title>
        <p>
          In this paper we consider 3 regression-based models:
- Linear Regression (LR) [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ]
- Root-Polynomial Regression (RPR) [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]
- Adjusted Anchored Neighborhood Regression (A+) [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
        </p>
        <p>For all the above models we adopt both the original regularization (as reported
in their citations and as per Equation (9)) and our per-channel regularization
(as per Equation (14)).</p>
      </sec>
      <sec id="sec-4-2">
        <title>Database</title>
        <p>
          We use the ICVL hyperspectral image database [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] (Fig. 4), which provides
201 hyperspectral images of spatial dimension 1392 × 1300 and with 31 spectral
channels. The spectral channels represent narrow-band intensity measurements,
respectively at every 10 nanometers (nm) between 400 and 700 nm.
        </p>
        <p>
          The corresponding RGB images are simulated following the linear RGB
simulation setting (Equation (3)), and the CIE 1964 color matching functions [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]
are used as the spectral sensitivities.
The selected evaluation metric is Relative Absolute Error (RAE) [
          <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
          ], which is
defined per channel as:
        </p>
        <p>
          RAE(r_k, r̂_k) = |r_k − r̂_k| / r_k ,   (15)
where r_k and r̂_k are respectively the kth-channel values of the ground-truth and
reconstructed spectra. Effectively, this metric measures the percentage absolute
error. RAE is the most common performance measure used in recent research,
and the rationale for using this metric can be found in [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
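A per-channel RAE computation following Equation (15) can be sketched as (the channel values here are hypothetical):

```python
import numpy as np

def rae(r, r_hat):
    """Relative Absolute Error of Equation (15), evaluated per channel."""
    return np.abs(r - r_hat) / r

r = np.array([0.2, 0.5, 1.0])        # ground-truth channel values (toy)
r_hat = np.array([0.18, 0.55, 1.0])  # reconstructed values (toy)
errors = rae(r, r_hat)               # fractional absolute error per channel
```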
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Results and Discussion</title>
      <p>
        In Table 1, we present the per-channel error statistics of LR (left table), RPR
(middle table) and A+ (right table) under the original settings - where a single
penalty term is used in the regularization [
        <xref ref-type="bibr" rid="ref15 ref17 ref2">15, 17, 2</xref>
        ] - and our per-channel
regularization method. The spectral channels are represented by the wavelengths
(nm). We also calculate the percentage `improvements' as:
      </p>
      <p>Improve (%) = 100 × (RAE_original − RAE_ours) / RAE_original ,   (16)</p>
      <sec id="sec-5-2">
        <p>The improvement is presented in the rightmost column of each table. At the bottom of each
table, the Mean RAE results (averaged over all spectral channels) are shown.</p>
        <p>First, we see that for all considered models, our method improves the RAE
in multiple channels by over 10% (marked in bold and with underlines), with
maximal improvements around 16-17%.</p>
        <p>
          Secondly, in terms of Mean RAE performance, our method improves the
RPR model the most, by an 8.6% increment, compared to 3.2% for A+ and
3.1% for LR. Significantly, the A+ model is the leading sparse coding model,
which is shown able to perform equally well as some deep-learning solutions [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ].
By improving the A+ model, we effectively bring forward the shallow-learned
baseline. Moreover, our method reduces the gap between RPR and A+. Relative
to A+, the RPR model is much simpler (with significantly fewer model parameters)
[
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], which ensures more effective model re-training and shorter runtime.
        </p>
        <p>Lastly, for the A+ model, it seems curious that the per-channel performances
in the first three channels (400, 410 and 420 nm) degrade by minute differences.
Indeed, this means the regularization parameters we chose for these channels are
not actually optimized for the test-set data. We remark that this most likely
originates from the unequal separation of data subsets in cross validation, such
that the best regularization parameter for the regularization-set data does not
correspond to the best for the test-set data. We are investigating how to remedy
this issue.</p>
        <p>
          For one example image in the ICVL database [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ], we visualize the spectral
reconstruction errors as the Mean RAE error maps in Fig. 5. It is clear that for
all models our method improves the Mean RAE in various parts of the image,
for example, the tree stem for LR and RPR, and the grassy ground for A+.
        </p>
        <p>
          In the spectral reconstruction (SR) problem, hyperspectral images are
reconstructed from RGB images. Many approaches are based on least-squares
regression, where the fitting function is modelled by a simple linear transformation,
and a Tikhonov regularization process is applied to improve model
generalizability. Conventionally, the fits for all spectral channels are jointly regularized.
We demonstrate that the fit for each spectral channel can be formulated
independently, such that the fit for each channel is regularized (therefore optimized)
independently. We also provide a novel modification of K-fold cross validation
so that the models can be fairly trained, regularized and tested with a single
image database. Compared to the original models, our per-channel
regularization method improves the accuracy of recovery for individual spectral channels
by up to 17%, and by 3-9% in mean improvement over all spectral
channels.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>Appendix: Regression Models</title>
      <p>
        A.1 Linear Regression (LR)
      </p>
      <p>
        Linear regression (LR) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] assumes a linear map from RGB to spectra. The
spectral estimate is written as
Ψ(x) = M x ,   (17)
where M is the 31 × 3 regression matrix.
      </p>
      <p>
        A.2 Root-Polynomial Regression (RPR)
      </p>
      <p>
        As a simple non-linear extension of LR, in Root-Polynomial Regression (RPR)
[
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] we expand a series of root-polynomial terms from each RGB. Denoting
φ_α : R^3 → R^p as the α-order root-polynomial transformation, the example
transformations for the 2nd, 3rd and 4th order RPR are:
      </p>
      <p>
        Order 2, φ_2(x): R, G, B, √(RG), √(GB), √(BR).
Order 3, φ_3(x): the order-2 terms plus ∛(RG²), ∛(GB²), ∛(BR²), ∛(R²G), ∛(G²B), ∛(B²R), ∛(RGB).
Order 4, φ_4(x): the order-3 terms plus ⁴√(R³G), ⁴√(R³B), ⁴√(G³R), ⁴√(G³B), ⁴√(B³R), ⁴√(B³G), ⁴√(R²GB), ⁴√(G²RB), ⁴√(B²RG).
      </p>
      <p>
        The spectral reconstruction then seeks to linearly map these root-polynomial
vectors to spectra:
Ψ(x) = M φ_α(x) .   (18)
In this paper we set α = 6, i.e. the 6th-order RPR.
      </p>
      <sec id="sec-6-2">
        <title>Adjusted Anchored Neighborhood Regression (A+ Sparse Coding)</title>
        <p>
          The leading sparse coding method `A+' [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] assumes linear maps from RGBs to
spectra (effectively, it operates LR in every neighborhood). Denote Ψ_i(x) as the
spectral reconstruction mapping for the data in the ith neighborhood. On input
of an RGB x, the reconstruction is written as:
Ψ_i(x) = M_i x .   (19)
See [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] for more details about the model.
        </p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Abrardo</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Alparone</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cappellini</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prosperi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Color constancy from multispectral images</article-title>
          .
          <source>In: Proceedings of the International Conference on Image Processing</source>
          . vol.
          <volume>3</volume>
          , pp.
          <volume>570</volume>
          –
          <fpage>574</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>1999</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Aeschbacher</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Timofte</surname>
            ,
            <given-names>R.:</given-names>
          </string-name>
          <article-title>In defense of shallow learned spectral reconstruction from RGB images</article-title>
          .
          <source>In: Proceedings of the International Conference on Computer Vision</source>
          . pp.
          <volume>471</volume>
          –
          <fpage>479</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Arad</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ben-Shahar</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          :
          <article-title>Sparse recovery of hyperspectral signal from natural RGB images</article-title>
          .
          <source>In: Proceedings of the European Conference on Computer Vision</source>
          . pp.
          <volume>19</volume>
          –
          <fpage>34</fpage>
          . Springer (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Arad</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ben-Shahar</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Timofte</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , et al.:
          <article-title>NTIRE 2018 challenge on spectral reconstruction from RGB images</article-title>
          .
          <source>In: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</source>
          . pp.
          <volume>929</volume>
          –
          <fpage>938</fpage>
          .
          <string-name>
            <surname>IEEE</surname>
          </string-name>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Arad</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Timofte</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ben-Shahar</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finlayson</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          , et al.:
          <article-title>NTIRE 2020 challenge on spectral reconstruction from an RGB image</article-title>
          .
          <source>In: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</source>
          . IEEE
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Arun</surname>
            ,
            <given-names>P.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buddhiraju</surname>
            ,
            <given-names>K.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Porwal</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chanussot</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>CNN based spectral super-resolution of remote sensing images</article-title>
          .
          <source>Signal Processing</source>
          <volume>169</volume>
          ,
          <issue>107394</issue>
          (
          <year>2020</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Bashkatov</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Genina</surname>
            ,
            <given-names>E.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kochubey</surname>
            ,
            <given-names>V.I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuchin</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          :
          <article-title>Optical properties of the subcutaneous adipose tissue in the spectral range 400–2500 nm</article-title>
          .
          <source>Optics and Spectroscopy</source>
          <volume>99</volume>
          (
          <issue>5</issue>
          ),
          <fpage>836</fpage>
          –
          <lpage>842</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhao</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jia</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>Spectral–spatial classification of hyperspectral data based on deep belief network</article-title>
          .
          <source>IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing</source>
          <volume>8</volume>
          (
          <issue>6</issue>
          ),
          <fpage>2381</fpage>
          –
          <lpage>2392</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Cheung</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Westland</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hardeberg</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Connah</surname>
            ,
            <given-names>D.R.</given-names>
          </string-name>
          :
          <article-title>Characterization of trichromatic color cameras by using a new multispectral imaging technique</article-title>
          .
          <source>Journal of the Optical Society of America A</source>
          <volume>22</volume>
          (
          <issue>7</issue>
          ),
          <fpage>1231</fpage>
          –
          <lpage>1240</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10. Commission Internationale de l'Eclairage:
          <source>CIE proceedings 1964 Vienna session, committee report E-1.4.1</source>
          (
          <year>1964</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Connah</surname>
            ,
            <given-names>D.R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hardeberg</surname>
            ,
            <given-names>J.Y.</given-names>
          </string-name>
          :
          <article-title>Spectral recovery using polynomial models</article-title>
          .
          <source>In: Color Imaging X: Processing, Hardcopy, and Applications</source>
          . vol.
          <volume>5667</volume>
          , pp.
          <fpage>65</fpage>
          –
          <lpage>75</lpage>
          . International Society for Optics and Photonics
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Finlayson</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morovic</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Metamer sets</article-title>
          .
          <source>Journal of the Optical Society of America A</source>
          <volume>22</volume>
          (
          <issue>5</issue>
          ),
          <fpage>810</fpage>
          –
          <lpage>819</lpage>
          (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Galatsanos</surname>
            ,
            <given-names>N.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Katsaggelos</surname>
            ,
            <given-names>A.K.</given-names>
          </string-name>
          :
          <article-title>Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>1</volume>
          (
          <issue>3</issue>
          ),
          <fpage>322</fpage>
          –
          <lpage>336</lpage>
          (
          <year>1992</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Ghamisi</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dalla Mura</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Benediktsson</surname>
            ,
            <given-names>J.A.</given-names>
          </string-name>
          :
          <article-title>A survey on spectral–spatial classification techniques based on attribute profiles</article-title>
          .
          <source>IEEE Transactions on Geoscience and Remote Sensing</source>
          <volume>53</volume>
          (
          <issue>5</issue>
          ),
          <fpage>2335</fpage>
          –
          <lpage>2353</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Heikkinen</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Lenz</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Jetsu</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parkkinen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hauta-Kasari</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          , Jaaskelainen, T.:
          <article-title>Evaluation and unification of some methods for estimating reflectance spectra from RGB images</article-title>
          .
          <source>Journal of the Optical Society of America A</source>
          <volume>25</volume>
          (
          <issue>10</issue>
          ),
          <fpage>2444</fpage>
          –
          <lpage>2458</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Lam</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sato</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          :
          <article-title>Spectral modeling and relighting of reflective-fluorescent scenes</article-title>
          .
          <source>In: Proceedings of the Conference on Computer Vision and Pattern Recognition</source>
          . pp.
          <fpage>1452</fpage>
          –
          <lpage>1459</lpage>
          . IEEE
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Lin</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Finlayson</surname>
            ,
            <given-names>G.D.</given-names>
          </string-name>
          :
          <article-title>Exposure invariance in spectral reconstruction from RGB images</article-title>
          .
          <source>In: Proceedings of the Color and Imaging Conference</source>
          . vol.
          <volume>2019</volume>
          , pp.
          <fpage>284</fpage>
          –
          <lpage>289</lpage>
          . Society for Imaging Science and Technology (
          <year>2019</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Nguyen</surname>
            ,
            <given-names>R.M.H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prasad</surname>
            ,
            <given-names>D.K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brown</surname>
            ,
            <given-names>M.S.</given-names>
          </string-name>
          :
          <article-title>Training-based spectral reconstruction from a single RGB image</article-title>
          .
          <source>In: Proceedings of the European Conference on Computer Vision</source>
          . pp.
          <fpage>186</fpage>
          –
          <lpage>201</lpage>
          . Springer (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Healey</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Prasad</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tromberg</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>Face recognition in hyperspectral images</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          <volume>25</volume>
          (
          <issue>12</issue>
          ),
          <fpage>1552</fpage>
          –
          <lpage>1560</lpage>
          (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <surname>Refaeilzadeh</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tang</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Cross-Validation</article-title>
          .
          <source>In: Encyclopedia of Database Systems</source>
          , pp.
          <fpage>532</fpage>
          –
          <lpage>538</lpage>
          . Springer US
          , Boston, MA (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <surname>Reginska</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A regularization parameter in discrete ill-posed problems</article-title>
          .
          <source>SIAM Journal on Scientific Computing</source>
          <volume>17</volume>
          (
          <issue>3</issue>
          ),
          <fpage>740</fpage>
          –
          <lpage>749</lpage>
          (
          <year>1996</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>Shi</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chen</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xiong</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images</article-title>
          .
          <source>In: Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</source>
          . pp.
          <fpage>939</fpage>
          –
          <lpage>947</lpage>
          . IEEE
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <surname>Tao</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pan</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zou</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Unsupervised spectral–spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification</article-title>
          .
          <source>IEEE Geoscience and Remote Sensing Letters</source>
          <volume>12</volume>
          (
          <issue>12</issue>
          ),
          <fpage>2438</fpage>
          –
          <lpage>2442</lpage>
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <surname>Tikhonov</surname>
            ,
            <given-names>A.N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Goncharsky</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stepanov</surname>
            ,
            <given-names>V.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yagola</surname>
            ,
            <given-names>A.G.</given-names>
          </string-name>
          :
          <article-title>Numerical Methods for the Solution of Ill-posed Problems</article-title>
          , vol.
          <volume>328</volume>
          . Springer Science &amp; Business Media
          (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <surname>Veganzones</surname>
            ,
            <given-names>M.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tochon</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dalla-Mura</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Plaza</surname>
            ,
            <given-names>A.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Chanussot</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          :
          <article-title>Hyperspectral image segmentation using a new spectral unmixing-based binary partition tree representation</article-title>
          .
          <source>IEEE Transactions on Image Processing</source>
          <volume>23</volume>
          (
          <issue>8</issue>
          ),
          <fpage>3574</fpage>
          –
          <lpage>3589</lpage>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>Wandell</surname>
            ,
            <given-names>B.A.</given-names>
          </string-name>
          :
          <article-title>The synthesis and analysis of color images</article-title>
          .
          <source>IEEE Transactions on Pattern Analysis and Machine Intelligence</source>
          (
          <issue>1</issue>
          ),
          <fpage>2</fpage>
          –
          <lpage>13</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Diao</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ye</surname>
            ,
            <given-names>Z.</given-names>
          </string-name>
          :
          <article-title>Self-training-based spectral image reconstruction for art paintings with multispectral imaging</article-title>
          .
          <source>Applied Optics</source>
          <volume>56</volume>
          (
          <issue>30</issue>
          ),
          <fpage>8461</fpage>
          –
          <lpage>8470</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Mou</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yu</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          :
          <article-title>Tensor-based dictionary learning for spectral CT reconstruction</article-title>
          .
          <source>IEEE Transactions on Medical Imaging</source>
          <volume>36</volume>
          (
          <issue>1</issue>
          ),
          <fpage>142</fpage>
          –
          <lpage>154</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xi</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yang</surname>
            ,
            <given-names>Q.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cong</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhou</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Spectral CT reconstruction with image sparsity and spectral mean</article-title>
          .
          <source>IEEE Transactions on Computational Imaging</source>
          <volume>2</volume>
          (
          <issue>4</issue>
          ),
          <fpage>510</fpage>
          –
          <lpage>523</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>