<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Per-Channel Regularization for Regression-Based Spectral Reconstruction</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Yi-Tun</forename><surname>Lin</surname></persName>
							<email>yi-tun.lin@uea.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computing Sciences</orgName>
								<orgName type="institution">University of East Anglia</orgName>
								<address>
									<settlement>Norwich</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Graham</forename><forename type="middle">D</forename><surname>Finlayson</surname></persName>
							<email>g.finlayson@uea.ac.uk</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computing Sciences</orgName>
								<orgName type="institution">University of East Anglia</orgName>
								<address>
									<settlement>Norwich</settlement>
									<country key="GB">UK</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Per-Channel Regularization for Regression-Based Spectral Reconstruction</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">432D94A614567C3C1DDCCC822E356F36</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T21:13+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Spectral reconstruction</term>
					<term>Hyperspectral imaging</term>
					<term>Multispectral imaging</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Spectral reconstruction algorithms seek to recover spectra from RGB images. This estimation problem is often formulated as least-squares regression, and Tikhonov regularization is generally incorporated, both to support stable estimation in the presence of noise and to prevent over-fitting. The degree of regularization is controlled by a single penalty-term parameter, which is often selected using the cross-validation experimental methodology. In this paper, we generalize the simple regularization approach to admit a per-spectral-channel optimization setting, and a modified cross-validation procedure is developed. Experiments validate our method. Compared to the conventional regularization, our per-channel approach significantly improves the reconstruction accuracy at multiple spectral channels, by up to 17%, for all the considered models.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The light spectrum is a continuous intensity distribution across wavelengths. This spectral information is commonly used to help determine and/or discriminate the physical properties of object surfaces, for example in remote sensing <ref type="bibr" target="#b24">[25,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b13">14,</ref><ref type="bibr" target="#b22">23]</ref> and medical imaging <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b28">29]</ref>. Also, in various practical applications, the devices (sensors or displays), light sources and object surfaces are characterized by spectral measurements <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b26">27]</ref>.</p><p>Despite the advantages of spectral capture, almost all images that we record contain just 3 measurements: the 3 weighted spectral averages over the Red, Green and Blue spectral regions. Perforce, much spectral information is lost in this RGB image formation process. Indeed, it is a classical result in color science that there are many spectra, called metamers <ref type="bibr" target="#b11">[12]</ref>, which integrate to the same RGB, and of course, given only one RGB measurement we cannot tell which physical spectrum induced it. Still, by adopting learning approaches we can estimate the spectrum that likely corresponds to a given RGB. Estimating spectra from RGBs is called spectral reconstruction (SR). In Fig. <ref type="figure" target="#fig_0">1</ref> we illustrate RGB image formation and the SR process. In the top-left panel we see a single radiance spectrum measured at one location in the hyperspectral image (bottom left). This spectrum is sampled by 3 sensors, resulting in the 3-value RGB response (top-right). 
Repeating this process for all image locations, the corresponding RGB image in the bottom right is derived from the hyperspectral image. Then, spectral reconstruction algorithms attempt to recover the hyperspectral image (or an approximation thereof) from the RGB image.</p><p>Historically, this SR problem is effectively solved by least-squares regression <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b1">2]</ref>, where the map from RGBs (or non-linear RGB features) to spectra is modelled as a simple linear transform. More recently, deep-learning approaches <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b5">6]</ref> have been developed that provide even better SR performance. Effectively, this performance increment is achieved by regressing an RGB in the context of its neighborhood to its corresponding spectrum. Clearly, this patch-based idea has merit. For example, if the algorithm can identify a patch in the scene as a 'skin region' then spectral recovery is plausibly easier to solve, since skin spectra have characteristic spectral shapes <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b18">19]</ref>.</p><p>Despite the clear rationale behind the deep-learning approach, Aeschbacher et al. <ref type="bibr" target="#b1">[2]</ref> show that the regression-based A+ algorithm provides very competitive performance. Moreover, Lin and Finlayson <ref type="bibr" target="#b16">[17]</ref> show that several regression methods actually generalize better than the leading deep-learning models when the scene exposure changes.</p><p>The main concern of this paper is the regularization step of regression-based SR algorithms. 
The classical (multivariate) regression problem from statistics is written as</p><formula xml:id="formula_0">MA ≈ B ,<label>(1)</label></formula><p>where A is an m×N matrix as the table of measured data (m is the dimension of the measured data and N is the number of data samples, with N ≫ m), and B is the corresponding target data matrix, of dimension k × N (k is the dimension of the target data). The aim is to find the k × m linear mapping M that makes the approximation as good as possible. Now, let us suppose small fluctuations in the target data, denoted as a matrix E of very small numbers (all entries are close to 0). The following regression is almost identical to Equation (<ref type="formula" target="#formula_0">1</ref>):</p><formula xml:id="formula_1">M'A ≈ B + E .<label>(2)</label></formula><p>And yet, we often find that the best solutions M and M' are very different from one another. The reason for this is that some dimensions of the measured data (the rows of the data matrix A) can be highly correlated, such that very different M's may fit B equally well.</p><p>Regularization theory <ref type="bibr" target="#b23">[24]</ref> is a way of dealing with this kind of non-robustness. Given the examples of Equations (<ref type="formula" target="#formula_0">1</ref>) and (<ref type="formula" target="#formula_1">2</ref>), we may ask which of all plausible (near-optimal) solutions is more likely to generalize better to unseen data. Typically, the principle of regularization follows the idea that the best fitting function (i.e. M) should be the simplest possible solution that still fits the data well.</p><p>In Fig. <ref type="figure">2</ref> we show a 1-D example. In the least-squares sense, the wiggly red curve is found to best fit the training data (black data points). Yet intuitively, this is not the correct fit, as the data points appear to follow a much simpler distribution. 
In contrast, the regularized fit (blue curve) seems to model the data better.</p><p>Fig. <ref type="figure">2</ref>. Example of a least-squares fit (red curve) and a regularized least-squares fit (blue curve). The black data points are the training data. In the least-squares sense, the overall distance between the data points and the red wiggly fit is less than for the blue smooth fit, but the latter looks 'more correct'.</p><p>Returning to our spectral reconstruction problem, taking linear regression as an example, the matrix A corresponds to a set of image RGBs (m = 3), and B refers to the spectra we are trying to recover (k is the number of spectral channels we have measured). The insight that we explore in this paper is that there is no reason why the fits for different spectral channels should be regularized jointly. Rather, we consider the per-channel regression problem, where each spectral channel is recovered in turn, and correspondingly, each row of M is solved in turn. This simple modification allows us to carry out a per-channel regularization that ensures individual optimization for every spectral channel in the spectral reconstruction problem.</p><p>Whether for the conventional global regularization or for our per-channel approach, care must be taken not to overly tune the terms in M to the data at hand. This led us to develop a modified cross-validation procedure. Our method separates the data at hand into three subsets, respectively for training, regularization and testing, which is a novel adjustment of the standard methodology <ref type="bibr" target="#b19">[20]</ref> and is another contribution of this paper.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Hyperspectral Images and RGB Simulation</head><p>In a hyperspectral image, spectra are measured discretely at some sampled wavelengths. Supposing the visible spectrum runs from 400 to 700 nanometers and the spectral sampling is every 10 nanometers, we get a 31-dimensional discrete representation of spectra, denoted as r ∈ R^31.</p><p>Correspondingly, the spectral sensitivities of the R, G and B camera sensors can also be represented in discrete vector form (i.e. as 31-dimensional vectors), respectively denoted as s_R, s_G and s_B. Then, as per the illustration in Fig. <ref type="figure" target="#fig_0">1</ref> we can write image formation as <ref type="bibr" target="#b25">[26]</ref>:</p><formula xml:id="formula_3">x = \begin{pmatrix} R \\ G \\ B \end{pmatrix} = \begin{pmatrix} s_R^T \\ s_G^T \\ s_B^T \end{pmatrix} r ,<label>(3)</label></formula><p>where x = (R, G, B)^T is the 3-value RGB camera response.</p><p>In the SR problem (the bottom of Fig. <ref type="figure" target="#fig_0">1</ref>) we seek to recover hyperspectral images from the RGB images. Denote an SR algorithm as</p><formula xml:id="formula_4">Ψ : R^3 → R^31 , Ψ(x) ≈ r .<label>(4)</label></formula></div>
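The image-formation model of Equation (3) can be sketched numerically. The following minimal numpy example uses made-up Gaussian sensor sensitivities purely for illustration (not the CIE 1964 curves used later in the experiments); only the dimensions (31 spectral samples, 3 sensors) come from the text.

```python
import numpy as np

# Sketch of Equation (3): a 31-channel spectrum (400-700 nm, 10 nm steps)
# is integrated against three sensor sensitivity vectors to give an RGB.
# The Gaussian sensitivities are illustrative assumptions only.
wavelengths = np.arange(400, 701, 10)                        # 31 samples

def gaussian_sensor(center, width=40.0):
    """A hypothetical bell-shaped sensor sensitivity over the 31 samples."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

S = np.stack([gaussian_sensor(c) for c in (600, 550, 450)])  # 3 x 31, rows = s_R, s_G, s_B

def spectrum_to_rgb(r):
    """Equation (3): x = (s_R, s_G, s_B)^T r, collapsing 31 channels to 3."""
    return S @ r

r = np.ones(31)              # a flat (equal-energy) test spectrum
x = spectrum_to_rgb(r)
print(x.shape)               # (3,)
```

Note that this collapse from 31 to 3 numbers is exactly why metamers exist: many distinct vectors r map to the same x.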
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Regression-Based Spectral Reconstruction</head><p>The general regression-based formulation of the SR problem is written as</p><formula xml:id="formula_6">Ψ(x) = Mϕ(x) ,<label>(5)</label></formula><p>where ϕ(•) is a feature transformation which maps each RGB to a corresponding p-term feature vector, which in turn is mapped to a spectrum by the regression matrix M. Each of the various regression-based models <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b1">2]</ref> adopts a bespoke definition of ϕ(x). For details of the considered models, including Linear, Root-Polynomial and Adjusted Anchored Neighborhood Regression <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b1">2]</ref>, see Appendix A.</p><p>Least-Squares Optimization The most common least-squares optimization seeks to minimize the sum of squared errors between the ground-truth training spectral data and the reconstructions Ψ(x) from their corresponding RGBs.</p><p>Given the formulation of Ψ(x) in Equation (<ref type="formula" target="#formula_6">5</ref>), the least-squares optimization of M is formulated as:</p><formula xml:id="formula_7">M = \arg\min_M \sum_{i=1}^{N} ||r_i − Mϕ(x_i)||_2^2 ,<label>(6)</label></formula><p>where N is the number of data points in the training set and i indexes an individual spectrum. 
Collating all spectral training data in a data matrix R = (r_1, r_2, ..., r_N) and the corresponding feature vectors in a matrix Φ = (ϕ(x_1), ϕ(x_2), ..., ϕ(x_N)), Equation (<ref type="formula" target="#formula_7">6</ref>) can be written as:</p><formula xml:id="formula_9">M = \arg\min_M ||R − MΦ||_F^2 .<label>(7)</label></formula><p>Here ||•||_F^2 is the squared Frobenius norm, which is exactly the sum of squares of all entries of the enclosed matrix.</p></div>
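As a sketch, the unregularized least-squares fit of Equation (7) can be computed with numpy's `lstsq`. The toy dimensions and random data below are illustrative assumptions, not the paper's experimental setup; with noiseless targets the true mapping is recovered exactly.

```python
import numpy as np

# Minimal sketch of Equation (7): find M minimizing ||R - M Phi||_F^2.
# Toy dimensions (p=3 features, k=31 channels, N=1000 samples) are
# illustrative assumptions for the demo.
rng = np.random.default_rng(0)
p, k, N = 3, 31, 1000
M_true = rng.normal(size=(k, p))       # hypothetical ground-truth mapping
Phi = rng.normal(size=(p, N))          # feature vectors, one per column
R = M_true @ Phi                       # noiseless targets for the demo

# lstsq solves Phi^T M^T ~= R^T, the same normal equations as
# M = R Phi^T (Phi Phi^T)^{-1}.
M = np.linalg.lstsq(Phi.T, R.T, rcond=None)[0].T

print(np.allclose(M, M_true))          # True (noiseless, well-posed)
```

With real, noisy, correlated data this plain fit is exactly where the non-robustness of Equations (1) and (2) appears, motivating the regularized version below.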
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Tikhonov Regularization</head><p>In regression-based SR, the most common method to regularize a model is Tikhonov regularization <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b23">24]</ref>, which hypothesizes that a more natural fit is obtained when the 'magnitude' (or 'matrix norm') of M is bounded to some extent. Based on this assumption, the least-squares optimization in Equation (<ref type="formula" target="#formula_9">7</ref>) is extended to incorporate a regularization term:</p><formula xml:id="formula_10">M = \arg\min_M ||R − MΦ||_F^2 + γ||M||_F^2 .<label>(8)</label></formula><p>Here, the ||M||_F^2 term (the regularization term, or penalty term) is controlled by a user-defined regularization parameter γ ≥ 0, which is usually determined empirically <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b20">21]</ref>.</p><p>Equation (<ref type="formula" target="#formula_10">8</ref>) is solved in closed form <ref type="bibr" target="#b14">[15]</ref>:</p><formula xml:id="formula_11">M = RΦ^T (ΦΦ^T + γI)^{−1} ,<label>(9)</label></formula><p>where I is the p × p identity matrix (recall that p is the dimension of the feature vectors ϕ(x)).</p></div>
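The closed-form solution of Equation (9) is a one-liner in numpy. This sketch, with illustrative random data, also shows the qualitative effect of the penalty: increasing γ shrinks the Frobenius norm of M.

```python
import numpy as np

# Sketch of the closed-form Tikhonov solution of Equation (9):
# M = R Phi^T (Phi Phi^T + gamma I)^{-1}. Dimensions and data are
# illustrative assumptions.
rng = np.random.default_rng(1)
p, k, N = 3, 31, 500
Phi = rng.normal(size=(p, N))     # feature vectors (columns)
R = rng.normal(size=(k, N))       # target spectra (columns)

def ridge_fit(R, Phi, gamma):
    """Equation (9): regularized least-squares in closed form."""
    p = Phi.shape[0]
    return R @ Phi.T @ np.linalg.inv(Phi @ Phi.T + gamma * np.eye(p))

M0 = ridge_fit(R, Phi, 0.0)       # plain least squares (gamma = 0)
M1 = ridge_fit(R, Phi, 10.0)      # regularized fit

# The penalty shrinks ||M||_F, trading raw fit for stability.
print(np.linalg.norm(M1) < np.linalg.norm(M0))   # True
```

In practice one would factor or solve the p × p system rather than form an explicit inverse, but for small p (p = 3 for linear regression) the direct form mirrors the equation most clearly.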
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Proposed Method</head><p>In spectral reconstruction, we wish to recover spectral measurements in the range from 400 to 700 nanometers (the visible spectrum). Suppose we know the intensity of light entering the camera at 400 nanometers. Given this knowledge, if we wished to predict the value of the spectrum at 410 nanometers, it would make sense to assume a value similar to the one at 400 nanometers. Indeed, the fact that intensity values at close-by wavelengths are similar is why we can represent the continuous visible spectrum at discrete wavelengths. Conversely, one could not use the knowledge of light at 400 nanometers to predict the spectral value at, say, 700 nanometers. And yet, in the literature, when we regularize the regression-based SR models we are, in some sense, assuming that all wavelengths are related. Our new per-channel reformulation of Tikhonov regularization for spectral reconstruction effectively allows the recovery of spectral values at distant wavelengths to be considered more independently from one another.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Per-Channel Regularization</head><p>Let us split the regression matrix M by row: M = (m_1, m_2, ..., m_31)^T, such that the general form of regression-based SR formulated in Equation (5) becomes</p><formula xml:id="formula_13">Ψ(x) = Mϕ(x) = \begin{pmatrix} m_1^T \\ m_2^T \\ \vdots \\ m_{31}^T \end{pmatrix} ϕ(x) = \begin{pmatrix} \hat{r}_1 \\ \hat{r}_2 \\ \vdots \\ \hat{r}_{31} \end{pmatrix} ,<label>(10)</label></formula><p>where (r̂_1, r̂_2, ..., r̂_31)^T = r̂ is the reconstructed spectrum. For an arbitrary k-th spectral channel, the estimated intensity value r̂_k is given by r̂_k = m_k^T ϕ(x). Note that as we represent the regression model by channel, we do not alter the original model. That is, regression-based spectral reconstruction has always been such that the reconstruction for each spectral channel depends exclusively on the corresponding row of M.</p><p>Given this fact, we might expect each row of M to be optimized independently. However, this is not the case for the conventional regularized least-squares solution in Equation (<ref type="formula" target="#formula_11">9</ref>). Indeed, we see in Equation (<ref type="formula" target="#formula_10">8</ref>) that the regularization term is controlled by one single regularization parameter γ, and the fits for all spectral channels (all rows of M) are regularized by the same γ. Regardless of how we optimize this γ parameter, this setting clearly contradicts the inherent independence between the rows of M.</p><p>Let us split the spectral reconstruction problem into 31 independent problems, where the function Ψ_k : R^3 → R reconstructs the k-th-channel value of the reconstructed spectrum using the k-th row of M:</p><formula xml:id="formula_14">Ψ_k(x) = m_k^T ϕ(x) .<label>(11)</label></formula><p>Then, we are to determine m_k^T as the least-squares fit for the k-th channel data. 
Recall the training spectral data matrix R whose columns are individual training spectra; now we split R by spectral channel instead:</p><formula xml:id="formula_15">R = (r_1, r_2, ..., r_N) = \begin{pmatrix} \rho_1^T \\ \rho_2^T \\ \vdots \\ \rho_{31}^T \end{pmatrix} ,<label>(12)</label></formula><p>where ρ_k^T contains the k-th channel values of all training spectral data. m_k^T is then optimized following the regularized least-squares optimization:</p><formula xml:id="formula_16">m_k^T = \arg\min_{m_k^T} ||ρ_k^T − m_k^T Φ||_2^2 + γ_k ||m_k||_2^2 ,<label>(13)</label></formula><p>and solved in closed form:</p><formula xml:id="formula_17">m_k^T = ρ_k^T Φ^T (ΦΦ^T + γ_k I)^{−1} .<label>(14)</label></formula><p>Here γ_k is the channel-wise regularization parameter that controls the regularization for the k-th channel only. Clearly, our per-channel regularization scheme (Equation (<ref type="formula" target="#formula_17">14</ref>)) solves the regression matrix M row by row, such that each row can be regularized independently. The remaining question is then how to optimize these regularization parameters.</p></div>
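A minimal numpy sketch of the per-channel solution in Equation (14), using illustrative random data. The Gram matrix ΦΦᵀ is shared across channels; only γ_k differs per row. As a sanity check, when all γ_k are equal the result coincides with the single-parameter solution of Equation (9).

```python
import numpy as np

# Sketch of Equation (14): each row m_k of M gets its own penalty
# gamma_k. Dimensions and random data are illustrative assumptions.
rng = np.random.default_rng(2)
p, k, N = 3, 31, 500
Phi = rng.normal(size=(p, N))      # feature vectors (columns)
R = rng.normal(size=(k, N))        # target spectra (columns)

def per_channel_ridge(R, Phi, gammas):
    """Solve each row m_k^T = rho_k^T Phi^T (Phi Phi^T + gamma_k I)^{-1}."""
    p = Phi.shape[0]
    G = Phi @ Phi.T                                   # shared p x p Gram matrix
    rows = [R[j] @ Phi.T @ np.linalg.inv(G + g * np.eye(p))
            for j, g in enumerate(gammas)]
    return np.vstack(rows)                            # k x p regression matrix

M_per = per_channel_ridge(R, Phi, np.full(k, 5.0))    # all gamma_k equal ...
M_one = R @ Phi.T @ np.linalg.inv(Phi @ Phi.T + 5.0 * np.eye(p))
print(np.allclose(M_per, M_one))                      # ... reduces to Equation (9): True
```

In the general case each γ_k is chosen separately (e.g. by grid search on held-out regularization data, as in Section 3.2), and the rows then differ from any single-γ solution.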
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Modified Cross Validation</head><p>Perforce, regularization parameters (conventionally the single γ, and our per-channel γ_k's) are determined empirically. In the literature, a grid-search approach is adopted, where different parameters are tried for regularizing the model. These 'intermediate models' are then used to recover spectra from a set of unseen RGB images, and the model that minimizes the evaluation criterion is selected. For example, see <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>Since, as usual, we would like to train, regularize and test a model using images from the same database, we must partition the database into several subsets for these different usages. All (to our knowledge) deep-learning models simply separate the image database into 3 subsets randomly (respectively for training, validation and testing, in the parlance of deep learning), see <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b5">6]</ref>. However, this setting can potentially create so-called 'unfair' separations, such that if the database is separated differently, the results may vary.</p><p>A better practice is to use a cross-validation process. In this paper we develop our own cross-validation scheme, which is modified from the conventional K-fold cross validation <ref type="bibr" target="#b19">[20]</ref>. This is because the conventional K-fold scheme only separates a dataset into training and testing data, and here we need an additional partition: the regularization data.</p><p>In Fig. <ref type="figure" target="#fig_1">3</ref> we show a comparison between the conventional 4-fold cross validation (left) and our method (right). For both methods the same experiment is conducted 4 times. 
In each trial, the conventional method selects 3 out of 4 portions of the data for training (marked in blue) and the remaining portion for testing (marked in orange). In our method, however, only 2 out of 4 portions of the data are used for training, which leaves 1 portion of the data (marked in green) for regularization, that is, for determining the γ and γ_k parameters. Subject to these terms we solve for the best regression model for the training (blue) data. The performance statistics are calculated from the recovery errors on the testing (orange) data and averaged over the 4 trials. Notice that for our cross-validation method there actually exist more possible permutations than the presented 4-trial setting; to be exact, there are 12 different permutations. We remark that, according to our empirical study, experimenting with more trials (than the presented setting) does not make a significant difference to the performance statistics.</p></div>
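The trial structure above can be sketched as follows, assuming a plain ridge-regression model and an illustrative γ grid; the data, the particular fold rotation, and the grid are all hypothetical choices, not the paper's exact protocol.

```python
import numpy as np

# Sketch of the modified cross validation of Section 3.2: 4 folds, of
# which 2 train the model, 1 selects gamma by grid search, 1 tests.
# Random toy data and the gamma grid are illustrative assumptions.
rng = np.random.default_rng(3)
p, k, N = 3, 31, 400
Phi = rng.normal(size=(p, N))                  # feature vectors (columns)
R = rng.normal(size=(k, N))                    # target spectra (columns)

folds = np.array_split(rng.permutation(N), 4)  # 4 equal random portions
grid = [1e-3, 1e-1, 1e1]                       # hypothetical gamma grid

def ridge(R_tr, Phi_tr, gamma):
    """Closed-form regularized fit, as in Equation (9)."""
    return R_tr @ Phi_tr.T @ np.linalg.inv(Phi_tr @ Phi_tr.T + gamma * np.eye(p))

errors = []
for t in range(4):                             # rotate the fold roles over 4 trials
    test, reg = folds[t], folds[(t + 1) % 4]
    train = np.concatenate([folds[j] for j in range(4) if j not in (t, (t + 1) % 4)])
    # choose gamma on the regularization (green) fold ...
    best_g = min(grid, key=lambda g: np.linalg.norm(
        R[:, reg] - ridge(R[:, train], Phi[:, train], g) @ Phi[:, reg]))
    # ... then evaluate on the held-out test (orange) fold
    M = ridge(R[:, train], Phi[:, train], best_g)
    errors.append(np.linalg.norm(R[:, test] - M @ Phi[:, test]))

mean_error = sum(errors) / len(errors)         # averaged over the 4 trials
```

The per-channel variant would run the inner grid search once per row of M (31 searches), reusing the same fold partition.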
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1">Considered Models</head><p>In this paper we consider 3 regression-based models:</p><p>-Linear Regression (LR) <ref type="bibr" target="#b14">[15]</ref> -Root-Polynomial Regression (RPR) <ref type="bibr" target="#b16">[17]</ref> -Adjusted Anchored Neighborhood Regression (A+) <ref type="bibr" target="#b1">[2]</ref>.</p><p>For all the above models we adopt both the original regularization (as reported in their citations and as per Equation (<ref type="formula" target="#formula_11">9</ref>)) and our per-channel regularization (as per Equation (<ref type="formula" target="#formula_17">14</ref>)).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2">Database</head><p>We use the ICVL hyperspectral image database <ref type="bibr" target="#b2">[3]</ref> (Fig. <ref type="figure" target="#fig_2">4</ref>), which provides 201 hyperspectral images of spatial dimension 1392 × 1300, with 31 spectral channels. The spectral channels represent narrow-band intensity measurements, taken at every 10 nanometers (nm) between 400 and 700 nm.</p><p>The corresponding RGB images are simulated following the linear RGB simulation setting (Equation (<ref type="formula" target="#formula_3">3</ref>)), and the CIE 1964 color matching functions <ref type="bibr" target="#b9">[10]</ref> are used as the spectral sensitivities.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3">Evaluation Criteria</head><p>The selected evaluation metric is Relative Absolute Error (RAE) <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>, which is defined per channel as:</p><formula xml:id="formula_19">\mathrm{RAE}(r_k, \hat{r}_k) = \frac{|r_k − \hat{r}_k|}{r_k} ,<label>(15)</label></formula><p>where r_k and r̂_k are respectively the k-th channel values of the ground-truth and reconstructed spectra. Effectively, this metric measures the percentage absolute error. RAE is the most common performance measure used in recent research, and the rationale for using this metric can be found in <ref type="bibr" target="#b3">[4]</ref>.</p></div>
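For concreteness, a direct numpy transcription of Equation (15); the sample channel values below are made up for illustration.

```python
import numpy as np

# Per-channel Relative Absolute Error of Equation (15):
# RAE = |r_k - r_hat_k| / r_k. Sample values are illustrative only.
def rae(r, r_hat):
    """Element-wise relative absolute error between ground truth and estimate."""
    return np.abs(r - r_hat) / r

r = np.array([2.0, 4.0, 8.0])          # hypothetical ground-truth channel values
r_hat = np.array([1.8, 4.4, 8.0])      # hypothetical reconstructed values
errors = rae(r, r_hat)                 # a 10% error on the first two channels, 0 on the third
```

In an image-level evaluation these per-channel errors would then be averaged over all pixels (and, for the Mean RAE, over all 31 channels).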
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Results and Discussion</head><p>In Table <ref type="table" target="#tab_0">1</ref>, we present the per-channel error statistics of LR (left table), RPR (middle table) and A+ (right table) under the original settings, where a single penalty term is used in the regularization <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b1">2]</ref>, and under our per-channel regularization method. The spectral channels are represented by their wavelengths λ (nm). We also calculate the percentage 'improvement' as</p><formula xml:id="formula_20">\mathrm{Improve}\ (\%) = 100 \times \frac{\mathrm{RAE}_{original} − \mathrm{RAE}_{ours}}{\mathrm{RAE}_{original}} ,<label>(16)</label></formula><p>which is presented in the rightmost column of each table. At the bottom of each table, the Mean RAE results (averaged over all spectral channels) are shown. First, we see that for all considered models, our method improves the RAE in multiple channels by over 10% (marked in bold and underlined), with maximal improvements of around 16-17%.</p><p>Secondly, in terms of Mean RAE performance, our method improves the RPR model the most, by 8.6%, compared to 3.2% for A+ and 3.1% for LR. Significantly, the A+ model is the leading sparse-coding model, which has been shown to perform as well as some deep-learning solutions <ref type="bibr" target="#b1">[2]</ref>. By improving the A+ model, we effectively advance the shallow-learned baseline. Moreover, our method reduces the gap between RPR and A+. Relative to A+, the RPR model is much simpler (with significantly fewer model parameters) <ref type="bibr" target="#b16">[17]</ref>, which allows more effective model re-training and shorter runtime.</p><p>Lastly, for the A+ model, it seems curious that the per-channel performance in the first three channels (400, 410 and 420 nm) degrades by minute amounts. 
Indeed, this means the regularization parameters we chose for these channels are not actually optimal for the test-set data. We remark that this most likely originates from the unequal separation of data subsets in cross validation, such that the best regularization parameter for the regularization-set data does not correspond to the best for the test-set data. We are investigating how to remedy this issue.</p><p>For one example image in the ICVL database <ref type="bibr" target="#b2">[3]</ref>, we visualize the spectral reconstruction errors as Mean RAE error maps in Fig. <ref type="figure" target="#fig_3">5</ref>. It is clear that for all models our method improves the Mean RAE in various parts of the image, for example the tree stem for LR and RPR, and the grassy ground for A+.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>In the spectral reconstruction (SR) problem, hyperspectral images are reconstructed from RGB images. Many approaches are based on least-squares regression, where the fitting function is modelled as a simple linear transformation, and a Tikhonov regularization process is applied to improve model generalizability. Conventionally, the fits for all spectral channels are jointly regularized. We demonstrate that the fit for each spectral channel can be formulated independently, such that the fit for each channel is regularized (and therefore optimized) independently. We also provide a novel modification of K-fold cross validation so that models can be fairly trained, regularized and tested on a single image database. Compared to the original models, our per-channel regularization method improves the accuracy of recovery for individual spectral channels by up to 17%, and by 3-9% in mean improvement over all spectral channels.</p><p>The spectral reconstruction then seeks to linearly map these root-polynomial vectors to spectra:</p><formula xml:id="formula_21">Ψ(x) = Mϕ_α(x) .<label>(18)</label></formula><p>In this paper we set α = 6, i.e. the 6th-order RPR.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.3 Adjusted Anchored Neighborhood Regression (A+ Sparse Coding)</head><p>The leading sparse-coding method 'A+' <ref type="bibr" target="#b1">[2]</ref> assumes linear maps from RGBs to spectra (effectively, it operates LR in every neighborhood). Denote Ψ_i(x) as the spectral reconstruction mapping for the data in the i-th neighborhood. On input of an RGB x, the reconstruction is written as:</p><formula xml:id="formula_22">\mathrm{neighborhood}(x) = i ⇒ Ψ_i(x) = M_i x .<label>(19)</label></formula><p>See <ref type="bibr" target="#b1">[2]</ref> for more details about the model.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. RGB image formation (Hyperspectral → RGB image) and spectral reconstruction (RGB → Hyperspectral image). For illustration we color-coded the wavelength scale by the colors we would see when observing each single wavelength of light.</figDesc><graphic coords="2,169.35,115.84,276.67,175.31" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 3 .</head><label>3</label><figDesc>Fig. 3. The conventional 4-fold cross validation (left) and the modified scheme used in this paper (right). Each colored patch represents an equal amount of randomly allocated data. The blue, orange and green patches represent the data for training, testing and regularization, respectively.</figDesc><graphic coords="8,235.68,259.82,144.00,87.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Fig. 4 .</head><label>4</label><figDesc>Fig. 4. Example scenes from the ICVL hyperspectral database [3]. Note that the shown RGB images are rendered only for display (they are not the ground-truth RGB images used in the experiments).</figDesc><graphic coords="9,134.77,240.85,345.84,73.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Fig. 5 .</head><label>5</label><figDesc>Fig. 5. Visualizing the spectral reconstruction outcome of the considered models under the original setting (top row) and our new regularization method (bottom row). The RGB image (left) is rendered for illustration purposes; it is not the RGB input used for spectral reconstruction.</figDesc><graphic coords="11,152.06,340.47,311.26,172.59" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Spectral reconstruction results at each spectral channel for LR (left), RPR (middle) and A+ (right). Significant improvements (&gt; 10%) are marked in bold and with underlines. For each model, the Mean RAE error over all spectral channels are given in the bottom of each table and marked in gray.</figDesc><table><row><cell></cell><cell></cell><cell>LR</cell><cell></cell><cell></cell><cell></cell><cell>RPR</cell><cell></cell><cell></cell><cell>A+</cell></row><row><cell></cell><cell>Heikki-</cell><cell></cell><cell></cell><cell></cell><cell>Lin</cell><cell></cell><cell></cell><cell></cell><cell>Aeschb-</cell></row><row><cell>λ</cell><cell>nen et</cell><cell cols="2">Ours Improve</cell><cell>λ</cell><cell>et al.</cell><cell cols="2">Ours Improve</cell><cell>λ</cell><cell>acher et</cell><cell>Ours Improve</cell></row><row><cell></cell><cell>al. [15]</cell><cell></cell><cell></cell><cell></cell><cell>[17]</cell><cell></cell><cell></cell><cell></cell><cell>al. 
[2]</cell></row><row><cell cols="3">nm RAE (×10 −2 )</cell><cell>%</cell><cell cols="3">nm RAE (×10 −2 )</cell><cell>%</cell><cell cols="2">nm RAE (×10 −2 )</cell><cell>%</cell></row><row><cell cols="3">400 26.96 26.89</cell><cell>0.3%</cell><cell cols="3">400 21.15 19.51</cell><cell>7.8%</cell><cell cols="2">400 16.01 16.08</cell><cell>-0.5%</cell></row><row><cell cols="3">410 17.63 17.57</cell><cell>0.4%</cell><cell cols="3">410 13.36 12.27</cell><cell>8.2%</cell><cell cols="2">410 10.41 10.43</cell><cell>-0.2%</cell></row><row><cell cols="3">420 12.13 12.08</cell><cell>0.4%</cell><cell cols="2">420 9.31</cell><cell>7.83</cell><cell>15.9%</cell><cell cols="2">420 7.10</cell><cell>7.11</cell><cell>-0.1%</cell></row><row><cell cols="3">430 10.10 10.05</cell><cell>0.4%</cell><cell cols="2">430 7.51</cell><cell>6.40</cell><cell>14.7%</cell><cell cols="2">430 5.68</cell><cell>5.68</cell><cell>0.0%</cell></row><row><cell cols="2">440 4.05</cell><cell>4.02</cell><cell>0.8%</cell><cell cols="2">440 2.91</cell><cell>2.50</cell><cell>14.0%</cell><cell cols="2">440 2.28</cell><cell>2.23</cell><cell>2.3%</cell></row><row><cell cols="2">450 2.14</cell><cell>2.07</cell><cell>3.4%</cell><cell cols="2">450 1.87</cell><cell>1.62</cell><cell>13.3%</cell><cell cols="2">450 1.54</cell><cell>1.28</cell><cell>16.7%</cell></row><row><cell cols="2">460 5.07</cell><cell>5.06</cell><cell>0.1%</cell><cell cols="2">460 3.50</cell><cell>3.12</cell><cell>10.9%</cell><cell cols="2">460 2.83</cell><cell>2.47</cell><cell>12.6%</cell></row><row><cell cols="2">470 7.06</cell><cell>7.04</cell><cell>0.3%</cell><cell cols="2">470 4.94</cell><cell>4.18</cell><cell>15.4%</cell><cell cols="2">470 3.80</cell><cell>3.40</cell><cell>10.5%</cell></row><row><cell cols="2">480 8.83</cell><cell>8.79</cell><cell>0.4%</cell><cell cols="2">480 6.05</cell><cell>5.32</cell><cell>12.1%</cell><cell cols="2">480 4.70</cell><cell>4.32</cell><cell>8.1%</cell></row><row><cell cols="2">490 
8.16</cell><cell>8.12</cell><cell>0.6%</cell><cell cols="2">490 5.31</cell><cell>4.88</cell><cell>8.1%</cell><cell cols="2">490 4.41</cell><cell>4.16</cell><cell>5.7%</cell></row><row><cell cols="2">500 7.47</cell><cell>7.42</cell><cell>0.7%</cell><cell cols="2">500 5.05</cell><cell>4.68</cell><cell>7.3%</cell><cell cols="2">500 4.11</cell><cell>3.94</cell><cell>4.1%</cell></row><row><cell cols="2">510 5.49</cell><cell>5.43</cell><cell>1.1%</cell><cell cols="2">510 3.73</cell><cell>3.59</cell><cell>3.6%</cell><cell cols="2">510 3.12</cell><cell>3.01</cell><cell>3.4%</cell></row><row><cell cols="2">520 1.71</cell><cell>1.68</cell><cell>1.6%</cell><cell cols="2">520 1.73</cell><cell>1.58</cell><cell>8.8%</cell><cell cols="2">520 1.31</cell><cell>1.27</cell><cell>2.6%</cell></row><row><cell cols="2">530 1.72</cell><cell>1.71</cell><cell>0.9%</cell><cell cols="2">530 1.27</cell><cell>1.22</cell><cell>4.0%</cell><cell cols="2">530 1.16</cell><cell>1.15</cell><cell>1.3%</cell></row><row><cell cols="2">540 3.30</cell><cell>2.96</cell><cell>10.4%</cell><cell cols="2">540 2.48</cell><cell>2.28</cell><cell>8.1%</cell><cell cols="2">540 1.98</cell><cell>1.97</cell><cell>0.6%</cell></row><row><cell cols="2">550 4.04</cell><cell>3.58</cell><cell>11.4%</cell><cell cols="2">550 3.03</cell><cell>2.75</cell><cell>9.3%</cell><cell cols="2">550 2.38</cell><cell>2.36</cell><cell>0.8%</cell></row><row><cell cols="2">560 4.07</cell><cell>3.56</cell><cell>12.4%</cell><cell cols="2">560 2.72</cell><cell>2.63</cell><cell>3.4%</cell><cell cols="2">560 2.33</cell><cell>2.31</cell><cell>0.7%</cell></row><row><cell cols="2">570 3.18</cell><cell>2.64</cell><cell>16.9%</cell><cell cols="2">570 2.30</cell><cell>1.93</cell><cell>15.8%</cell><cell cols="2">570 1.82</cell><cell>1.80</cell><cell>0.9%</cell></row><row><cell cols="2">580 1.78</cell><cell>1.47</cell><cell>17.4%</cell><cell cols="2">580 1.23</cell><cell>1.14</cell><cell>6.9%</cell><cell cols="2">580 
1.18</cell><cell>1.16</cell><cell>1.8%</cell></row><row><cell cols="2">590 1.51</cell><cell>1.48</cell><cell>1.6%</cell><cell cols="2">590 1.11</cell><cell>1.02</cell><cell>8.0%</cell><cell cols="2">590 1.06</cell><cell>1.05</cell><cell>1.2%</cell></row><row><cell cols="2">600 1.10</cell><cell>1.10</cell><cell>0.1%</cell><cell cols="2">600 1.02</cell><cell>0.96</cell><cell>5.9%</cell><cell cols="2">600 0.88</cell><cell>0.88</cell><cell>0.6%</cell></row><row><cell cols="2">610 2.82</cell><cell>2.55</cell><cell>9.7%</cell><cell cols="2">610 1.93</cell><cell>1.82</cell><cell>6.2%</cell><cell cols="2">610 1.62</cell><cell>1.55</cell><cell>4.2%</cell></row><row><cell cols="2">620 4.05</cell><cell>3.56</cell><cell>12.1%</cell><cell cols="2">620 2.82</cell><cell>2.35</cell><cell>16.9%</cell><cell cols="2">620 2.17</cell><cell>2.07</cell><cell>4.8%</cell></row><row><cell cols="2">630 3.99</cell><cell>3.52</cell><cell>11.7%</cell><cell cols="2">630 2.76</cell><cell>2.50</cell><cell>9.4%</cell><cell cols="2">630 2.26</cell><cell>2.15</cell><cell>4.8%</cell></row><row><cell cols="2">640 5.08</cell><cell>4.51</cell><cell>11.2%</cell><cell cols="2">640 3.30</cell><cell>3.17</cell><cell>3.9%</cell><cell cols="2">640 3.04</cell><cell>2.80</cell><cell>7.7%</cell></row><row><cell cols="2">650 4.87</cell><cell>4.60</cell><cell>5.5%</cell><cell cols="2">650 3.76</cell><cell>3.28</cell><cell>12.8%</cell><cell cols="2">650 3.31</cell><cell>3.12</cell><cell>5.9%</cell></row><row><cell cols="2">660 5.34</cell><cell>5.02</cell><cell>6.0%</cell><cell cols="2">660 4.05</cell><cell>3.93</cell><cell>3.1%</cell><cell cols="2">660 3.86</cell><cell>3.68</cell><cell>4.6%</cell></row><row><cell cols="2">670 7.04</cell><cell>6.58</cell><cell>6.4%</cell><cell cols="2">670 5.44</cell><cell>5.17</cell><cell>5.0%</cell><cell cols="2">670 5.23</cell><cell>4.81</cell><cell>8.0%</cell></row><row><cell cols="2">680 6.79</cell><cell>6.56</cell><cell>3.3%</cell><cell cols="2">680 
5.55</cell><cell>5.24</cell><cell>5.5%</cell><cell cols="2">680 5.16</cell><cell>4.79</cell><cell>7.3%</cell></row><row><cell cols="2">690 5.72</cell><cell>5.62</cell><cell>1.6%</cell><cell cols="2">690 5.51</cell><cell>5.45</cell><cell>1.1%</cell><cell cols="2">690 4.66</cell><cell>4.65</cell><cell>0.3%</cell></row><row><cell cols="3">700 10.39 10.31</cell><cell>0.8%</cell><cell cols="2">700 8.96</cell><cell>8.88</cell><cell>0.9%</cell><cell cols="2">700 8.01</cell><cell>7.97</cell><cell>0.6%</cell></row><row><cell cols="2">All 6.24</cell><cell>6.05</cell><cell>3.1%</cell><cell cols="2">All 4.70</cell><cell>4.30</cell><cell>8.6%</cell><cell cols="2">All 3.85</cell><cell>3.73</cell><cell>3.2%</cell></row></table></figure>
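The per-channel errors in Table 1 are relative absolute errors (RAE) scaled by 10⁻². Assuming RAE at a given channel is the mean over all pixels of |ground truth − estimate| / ground truth (a plausible reading of the metric, not a definition quoted from the paper), it can be computed as:

```python
import numpy as np

def per_channel_rae(gt, est):
    """Mean relative absolute error for each spectral channel.

    gt, est: arrays of shape (n_pixels, n_channels); gt assumed positive.
    Returns an (n_channels,) vector, one RAE per channel.
    """
    return np.mean(np.abs(gt - est) / gt, axis=0)

gt = np.array([[2.0, 4.0], [2.0, 4.0]])
est = np.array([[1.0, 4.0], [3.0, 4.0]])
# channel 0: mean(|2-1|/2, |2-3|/2) = 0.5; channel 1: 0.0
```

The "All" row of each table then corresponds to averaging this vector over the 31 channels.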
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A Appendix: Regression Models</head><p>A.1 Linear Regression Linear regression (LR) <ref type="bibr" target="#b14">[15]</ref> assumes a linear map from RGB to spectra. Given an RGB vector x, the spectral estimate is written as s̃ = Mx, where M is a 31 × 3 regression matrix.</p></div>
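A minimal sketch of LR combined with the per-channel regularization idea of this paper: each of the 31 rows of M is solved as a separate ridge regression with its own penalty λ_k, instead of one λ shared by all channels. The closed-form ridge solution is standard; the variable names and shapes are illustrative assumptions.

```python
import numpy as np

def fit_per_channel_ridge(X, S, lambdas):
    """Fit an (n_channels x 3) matrix M mapping RGBs to spectra.

    X: (n_samples, 3) RGBs; S: (n_samples, n_channels) spectra;
    lambdas: one regularization weight per spectral channel.
    Row k of M solves min ||X m - S[:, k]||^2 + lambdas[k] ||m||^2.
    """
    XtX, XtS = X.T @ X, X.T @ S
    I = np.eye(X.shape[1])
    # one closed-form ridge solve per output channel, each with its own lambda
    M = np.stack([
        np.linalg.solve(XtX + lam * I, XtS[:, k])
        for k, lam in enumerate(lambdas)
    ])
    return M  # shape (n_channels, 3); spectra are estimated as X @ M.T
```

Each λ_k would be tuned on the held-out regularization set of the modified cross-validation scheme.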
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A.2 Root-Polynomial Regression</head><p>As a simple non-linear extension of LR, Root-Polynomial Regression (RPR) <ref type="bibr" target="#b16">[17]</ref> expands each RGB into a series of root-polynomial terms. Denoting ϕα : R³ → Rᵖ as the α-order root-polynomial transformation, the 2nd order transformation, for example, maps an RGB (R, G, B) to the terms R, G, B, √(RG), √(GB) and √(RB); the 3rd and 4th order transformations add the corresponding cube-root and fourth-root terms.</p></div>			</div>
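The 2nd-order root-polynomial expansion used by RPR can be sketched as follows; the term set {R, G, B, √(RG), √(GB), √(RB)} follows the standard 2nd-order expansion, and higher orders add the analogous higher-order root terms.

```python
import numpy as np

def root_poly_2(rgb):
    """2nd-order root-polynomial expansion of an (n, 3) array of RGBs.

    Each RGB (R, G, B) maps to (R, G, B, sqrt(RG), sqrt(GB), sqrt(RB)).
    Every term is degree-1 in the intensities, so scaling the exposure
    scales all features equally (the property RPR relies on).
    """
    R, G, B = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return np.stack([R, G, B,
                     np.sqrt(R * G), np.sqrt(G * B), np.sqrt(R * B)], axis=1)
```

Regression then proceeds as in LR, but from the expanded p-dimensional features instead of the raw 3-vector.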
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Color constancy from multispectral images</title>
		<author>
			<persName><forename type="first">A</forename><surname>Abrardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Alparone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Cappellini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Prosperi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Image Processing</title>
				<meeting>the International Conference on Image Processing</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="570" to="574" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">In defense of shallow learned spectral reconstruction from RGB images</title>
		<author>
			<persName><forename type="first">J</forename><surname>Aeschbacher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Timofte</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Computer Vision</title>
				<meeting>the International Conference on Computer Vision</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="471" to="479" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Sparse recovery of hyperspectral signal from natural RGB images</title>
		<author>
			<persName><forename type="first">B</forename><surname>Arad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ben-Shahar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the European Conference on Computer Vision</title>
				<meeting>the European Conference on Computer Vision</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="19" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">NTIRE 2018 challenge on spectral reconstruction from RGB images</title>
		<author>
			<persName><forename type="first">B</forename><surname>Arad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ben-Shahar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Timofte</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</title>
				<meeting>the Conference on Computer Vision and Pattern Recognition Workshops</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="929" to="938" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">NTIRE 2020 challenge on spectral reconstruction from an RGB image</title>
		<author>
			<persName><forename type="first">B</forename><surname>Arad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Timofte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Ben-Shahar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Finlayson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</title>
				<meeting>the Conference on Computer Vision and Pattern Recognition Workshops</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">CNN based spectral super-resolution of remote sensing images</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V</forename><surname>Arun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Buddhiraju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Porwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chanussot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Signal Processing</title>
		<imprint>
			<biblScope unit="volume">169</biblScope>
			<biblScope unit="page">107394</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Optical properties of the subcutaneous adipose tissue in the spectral range 400-2500 nm</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Bashkatov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Genina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Kochubey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Tuchin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics and Spectroscopy</title>
		<imprint>
			<biblScope unit="volume">99</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="836" to="842" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Spectral-spatial classification of hyperspectral data based on deep belief network</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jia</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="2381" to="2392" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Characterization of trichromatic color cameras by using a new multispectral imaging technique</title>
		<author>
			<persName><forename type="first">V</forename><surname>Cheung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Westland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Hardeberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Connah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Optical Society of America A</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page" from="1231" to="1240" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m">Commission Internationale de l&apos;Eclairage: CIE proceedings 1964 Vienna session</title>
				<imprint>
			<date type="published" when="1964">1964</date>
		</imprint>
	</monogr>
	<note>committee report E-1.4</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Spectral recovery using polynomial models</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">R</forename><surname>Connah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">Y</forename><surname>Hardeberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Color Imaging X: Processing, Hardcopy, and Applications</title>
				<imprint>
			<publisher>International Society for Optics and Photonics</publisher>
			<date type="published" when="2005">2005</date>
			<biblScope unit="volume">5667</biblScope>
			<biblScope unit="page" from="65" to="75" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Metamer sets</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Finlayson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Morovic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Optical Society of America A</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="810" to="819" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Methods for choosing the regularization parameter and estimating the noise variance in image restoration and their relation</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">P</forename><surname>Galatsanos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Katsaggelos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="322" to="336" />
			<date type="published" when="1992">1992</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">A survey on spectral-spatial classification techniques based on attribute profiles</title>
		<author>
			<persName><forename type="first">P</forename><surname>Ghamisi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dalla Mura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Benediktsson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Geoscience and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="2335" to="2353" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Evaluation and unification of some methods for estimating reflectance spectra from RGB images</title>
		<author>
			<persName><forename type="first">V</forename><surname>Heikkinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lenz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Jetsu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Parkkinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hauta-Kasari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Jääskeläinen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the Optical Society of America A</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page" from="2444" to="2458" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Spectral modeling and relighting of reflective-fluorescent scenes</title>
		<author>
			<persName><forename type="first">A</forename><surname>Lam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sato</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="1452" to="1459" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Exposure invariance in spectral reconstruction from RGB images</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Finlayson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Color and Imaging Conference</title>
				<meeting>the Color and Imaging Conference</meeting>
		<imprint>
			<publisher>Society for Imaging Science and Technology</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">2019</biblScope>
			<biblScope unit="page" from="284" to="289" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Training-based spectral reconstruction from a single RGB image</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M H</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">K</forename><surname>Prasad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Brown</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the European Conference on Computer Vision</title>
				<meeting>the European Conference on Computer Vision</meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="186" to="201" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Face recognition in hyperspectral images</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Healey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Prasad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Tromberg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="1552" to="1560" />
			<date type="published" when="2003">2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Cross-Validation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Refaeilzadeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Encyclopedia of Database Systems</title>
				<meeting><address><addrLine>Boston, MA</addrLine></address></meeting>
		<imprint>
			<publisher>Springer US</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="532" to="538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A regularization parameter in discrete ill-posed problems</title>
		<author>
			<persName><forename type="first">T</forename><surname>Regińska</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">SIAM Journal on Scientific Computing</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="740" to="749" />
			<date type="published" when="1996">1996</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">HSCNN+: Advanced CNN-based hyperspectral recovery from RGB images</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Conference on Computer Vision and Pattern Recognition Workshops</title>
				<meeting>the Conference on Computer Vision and Pattern Recognition Workshops</meeting>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="939" to="947" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Unsupervised spectral-spatial feature learning with stacked sparse autoencoder for hyperspectral imagery classification</title>
		<author>
			<persName><forename type="first">C</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zou</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Geoscience and Remote Sensing Letters</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="2438" to="2442" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Tikhonov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Goncharsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">V</forename><surname>Stepanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Yagola</surname></persName>
		</author>
		<title level="m">Numerical Methods for the Solution of Ill-posed Problems</title>
				<imprint>
			<publisher>Springer Science &amp; Business Media</publisher>
			<date type="published" when="2013">2013</date>
			<biblScope unit="volume">328</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Hyperspectral image segmentation using a new spectral unmixing-based binary partition tree representation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Veganzones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tochon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Dalla-Mura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chanussot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="3574" to="3589" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">The synthesis and analysis of color images</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Wandell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Pattern Analysis and Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="2" to="13" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Self-training-based spectral image reconstruction for art paintings with multispectral imaging</title>
		<author>
			<persName><forename type="first">P</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Diao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ye</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Optics</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="issue">30</biblScope>
			<biblScope unit="page" from="8461" to="8470" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Tensor-based dictionary learning for spectral CT reconstruction</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Mou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Medical Imaging</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="142" to="154" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Spectral CT reconstruction with image sparsity and spectral mean</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Cong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Computational Imaging</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="510" to="523" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
