<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Virtual Cleaning of Artworks Using a Deep Generative Network ⋆</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Morteza</forename><forename type="middle">Maali</forename><surname>Amiri</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Chester F. Carlson Center for Imaging Science</orgName>
								<orgName type="institution">Rochester Institute of Technology</orgName>
								<address>
									<addrLine>54 Lomb Memorial Drive</addrLine>
									<postCode>14623</postCode>
									<settlement>Rochester</settlement>
									<region>NY</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">David</forename><forename type="middle">W</forename><surname>Messinger</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Chester F. Carlson Center for Imaging Science</orgName>
								<orgName type="institution">Rochester Institute of Technology</orgName>
								<address>
									<addrLine>54 Lomb Memorial Drive</addrLine>
									<postCode>14623</postCode>
									<settlement>Rochester</settlement>
									<region>NY</region>
									<country key="US">USA</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="institution">Rochester Institute of Technology</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Virtual Cleaning of Artworks Using a Deep Generative Network ⋆</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E3D31B20E11E9A9EA5D75942C6494714</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T19:50+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Deep Generative Network</term>
					<term>Virtual cleaning of artworks</term>
					<term>Varnish removal</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>It is well-known that the varnish applied to artwork yellows with time, changing the work's appearance. Conservators are therefore sometimes prompted to physically clean the artwork in an attempt to recover its original look. At times, the conservators first clean only part of the artwork, and then virtually clean the rest to visualize the result of the cleaning before physically cleaning the entire piece. Many approaches have been proposed to virtually clean a partially cleaned artwork; all have limitations, chief among them low accuracy. In this paper, a deep generative network is proposed to virtually clean a partially cleaned artwork in the RGB domain. The proposed generative model consists of several up-sampling and down-sampling convolution blocks and skip connections in a symmetric architecture. The loss function is calculated on the part of the artwork that has been physically cleaned, for which RGB images both before and after cleaning are available. The network is therefore able to clean the whole artwork using only a small area of it that has already been physically cleaned. A Macbeth ColorChecker and images of the Mona Lisa are used to test the approach, and the results are compared with a recent approach from the literature that uses a Convolutional Neural Network (CNN). The results are found to be acceptable, given that the approach proposed herein has the potential to be applied in a real situation and needs no large training dataset, on which the CNN method relied.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Artworks are usually varnished for protection. Although successful in this main purpose, with time the varnish can change the visual qualities of the artwork <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. Therefore, physically removing the aged varnish to reestablish the original appearance of the artwork becomes of great importance <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. There have been two major approaches to cleaning artwork: physical and virtual. In the physical approach, the conservator removes the varnish layer using a solvent and gel system. This type of cleaning is very time-consuming and can also be detrimental to the artwork <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9]</ref>. Virtual cleaning, on the other hand, simulates the outcome of the physical approach. It could provide the conservator with the likely appearance of the cleaned artwork, helping them to judge whether physical cleaning is necessary and potentially guiding their work.</p><p>Most studies in the area of virtual cleaning are based on first cleaning a small part of the painting physically, using an RGB image of the painting before and after cleaning. Using that small part, for which data exist in both the cleaned and uncleaned states, they attempt to virtually clean the entire painting, producing a visualization of the cleaned work. They typically do this by fitting some type of regression to the data obtained from the small area before and after cleaning. 
They then apply the same regression model to the rest of the painting, which leads to the artwork being virtually cleaned <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>. Pappas and Pitas <ref type="bibr">(2000)</ref> stated that the camera's RGB color space does not work well for this task and proposed using the CIELAB color space instead. Virtually cleaning the Mona Lisa was another breakthrough in the field of virtual cleaning <ref type="bibr" target="#b11">[12]</ref>. Having access to the classical paints used in 16th-century Italy, the authors were able to make varnished and unvarnished color charts from them. They extracted the relationship between the varnished and unvarnished color charts, enabling them to estimate the unvarnished version of the Mona Lisa <ref type="bibr" target="#b11">[12]</ref>. <ref type="bibr" target="#b12">Palomero and Soriano (2011)</ref> developed the first neural network approach to virtually cleaning artworks <ref type="bibr" target="#b12">[13]</ref>. They also first cleaned a part of the artwork and trained a shallow network using that small part, then used the same model to clean the rest of the artwork <ref type="bibr" target="#b12">[13]</ref>. <ref type="bibr" target="#b13">Trumpy, et al. (2015)</ref> developed the first physics-based model for virtually cleaning artworks <ref type="bibr" target="#b13">[14]</ref>, making a few simplifying assumptions, such as that a dark site on the painting is a "perfect" black that absorbs all incident light (perfect meaning not grayish) and that the varnish spectral reflectance is wavelength independent. 
By first finding and cleaning the darkest and lightest parts of the painting, they were able to estimate the spectral transmittance of the varnish layer, which was then used to estimate the cleaned spectral reflectance of the entire painting <ref type="bibr" target="#b13">[14]</ref>. <ref type="bibr" target="#b14">Kirchner, et al. (2018)</ref> used Kubelka-Munk theory to estimate the virtually cleaned artworks <ref type="bibr" target="#b14">[15]</ref>. To do so, they first characterized the varnish layer by cleaning the artworks at a few spots that appeared white, allowing them to compute the spectral transmittance of the varnish. Characterizing the varnish layer enabled them to estimate the cleaned version of the whole painting <ref type="bibr" target="#b14">[15]</ref>. <ref type="bibr" target="#b15">Linhares, et al. (2020)</ref> performed similar work to <ref type="bibr" target="#b14">[15]</ref> by characterizing the varnish layer first. However, they characterized the varnish layer by removing all of the varnish and measuring the spectral reflectance of the painting before and after varnish removal <ref type="bibr" target="#b15">[16]</ref>. The latest work in the area of virtual cleaning of artworks belongs to Maali Amiri and Messinger (2021) <ref type="bibr" target="#b16">[17]</ref>. They developed a Convolutional Neural Network (CNN) model trained on images of natural scenes and humans that were artificially yellowed, mimicking the visual impact of varnish on artwork. They were able to visualize the cleaned version of artworks using their proposed CNN model in a very acceptable manner <ref type="bibr" target="#b16">[17]</ref>. 
The methods proposed until now suffer from a few limitations, namely, the requirement to specify perfect black and white regions on the painting, the need for spectral data, limited generalizability to other works, and the need for a large training dataset.</p><p>In this work, we propose a Deep Generative Network (DGN) to virtually clean a partially cleaned artwork. The generative model we use herein has been used in remote sensing for hyperspectral image denoising and single-image super-resolution <ref type="bibr" target="#b17">[18,</ref><ref type="bibr" target="#b18">19]</ref>. The authors developed a convolutional generative network that takes in a noise cube and outputs a super-resolved remotely sensed image. The network is deep and symmetric and borrows the idea of skip connections from U-Net, enabling it to exploit residual information as fully as possible. In this work, we have modified the network to fit our purpose. Instead of feeding the network a random noise image, we feed in the RGB image of the uncleaned artwork. To be more specific, we have information about a small area of the painting before and after cleaning. The RGB image of the artwork is first converted to CIELAB, and the a*b* channels are used to train the network. The loss function is computed between the uncleaned area and the corresponding cleaned area of the artwork (the small area for which we have access to both cleaned and uncleaned data). The model is tested on a Macbeth ColorChecker and the Mona Lisa, each partially cleaned. The results show that our approach performs better than the model proposed by <ref type="bibr" target="#b16">[17]</ref> on the Mona Lisa, but slightly worse than <ref type="bibr" target="#b16">[17]</ref> on the Macbeth ColorChecker. 
Overall, the method proposed herein is more applicable to a real situation, where the conservator has no access to a large dataset with which to train a model. Comparing our model to that of <ref type="bibr" target="#b16">[17]</ref> seems fair, as in their paper they showed that their model outperformed the only physics-based model proposed for artwork virtual cleaning <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b16">17]</ref>.</p><p>This paper is laid out as follows: the next section presents the specifications of the data and explains the method in detail, along with the evaluation metrics and experimental environment. After that, results are presented along with discussion. Finally, the conclusions are presented.</p></div>
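The regression-based baseline described above (fit a map on the small physically cleaned patch, then apply it everywhere) can be sketched with a toy example. This is an illustrative sketch, not any of the cited methods: the painting, the varnish effect (modeled as a fixed 3 × 3 linear color cast), and all array names are assumptions made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated "painting": H x W x 3 RGB values in [0, 1]. A synthetic
# varnish (a fixed linear color cast) produces the uncleaned version.
clean = rng.random((32, 32, 3))
varnish = np.array([[0.9, 0.1, 0.0],
                    [0.0, 0.8, 0.1],
                    [0.0, 0.0, 0.6]])
uncleaned = clean @ varnish.T

# Small "physically cleaned" patch: the top-left 8 x 8 corner, for which
# both the uncleaned and cleaned pixel values are available.
A_u = uncleaned[:8, :8].reshape(-1, 3)   # uncleaned pixels of the patch
A_c = clean[:8, :8].reshape(-1, 3)       # cleaned pixels of the patch

# Least-squares fit of a 3 x 3 matrix M mapping uncleaned -> cleaned.
M, *_ = np.linalg.lstsq(A_u, A_c, rcond=None)

# Apply the same map to the entire painting ("virtual cleaning").
virtually_cleaned = (uncleaned.reshape(-1, 3) @ M).reshape(clean.shape)

err = np.abs(virtually_cleaned - clean).mean()
print(f"mean abs error after virtual cleaning: {err:.2e}")
```

Because the synthetic varnish here is exactly linear, the fitted map recovers the clean image almost perfectly; real varnish is non-linear and spatially varying, which is why the later literature moved to physics-based and neural models.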
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Methodology</head><p>In this section, the data used are explained and the proposed algorithm is described in detail.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Data</head><p>One of the datasets used to test the proposed method is the Macbeth ColorChecker spectral reflectance data. The spectral reflectances were artificially yellowed in the spectral domain using the same formula suggested by <ref type="bibr" target="#b16">[17]</ref>. The artificially yellowed spectral reflectance mimics the visual impact varnish has on a painting. Because the Macbeth ColorChecker has a wide range of colors along with neutral patches, we use it as an initial test for our approach. The Macbeth ColorChecker is simulated as if "varnished" with a layer of a particular spectral reflectance and transmittance (generally speaking, varnish is yellow, and its spectral reflectance and transmittance should represent that <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>), as explained by <ref type="bibr" target="#b16">[17]</ref>. The yellowed spectral reflectances and the originals were afterwards converted into sRGB data. The Macbeth ColorChecker was primarily used to assess the feasibility of the proposed method before application to a well-known work of art. Subsequently, we apply the network to the Mona Lisa to test it further. The varnished and cleaned versions of the Mona Lisa are taken from <ref type="bibr" target="#b11">[12]</ref>.</p></div>
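The spectral yellowing simulation can be illustrated with a hedged sketch. The exact formula of [17] is not reproduced here; we only use the physical picture cited above, modeling the varnish as a transmittance T(λ) that is low at short (blue) wavelengths and high at long (red) wavelengths, and attenuating the reflectance by T² (light crosses the varnish layer twice). The sigmoid transmittance curve and all names are illustrative assumptions.

```python
import numpy as np

wavelengths = np.arange(400, 701, 10)          # nm, visible range

# A flat 50% gray patch as a stand-in for one ColorChecker reflectance.
reflectance = np.full_like(wavelengths, 0.5, dtype=float)

# Illustrative yellow-varnish transmittance: low in the blue, rising
# through ~470 nm, near 1 in the red (an assumed shape, not [17]'s curve).
T = 1.0 / (1.0 + np.exp(-(wavelengths - 470) / 30.0))

# Light passes through the varnish twice: once in, once out.
yellowed = reflectance * T**2

blue = wavelengths < 480
red = wavelengths > 600
print(yellowed[blue].mean(), yellowed[red].mean())
```

The yellowed spectrum loses most of its short-wavelength reflectance while the long-wavelength end is nearly untouched, which is exactly the yellow cast the conservator sees; converting such spectra to sRGB (e.g., via the CIE color matching functions) yields the image pairs used in the experiments.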
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Deep Generative Network (Architecture and Application)</head><p>In this section, the Deep Generative Network (DGN) developed in this work is described. The method requires only a small area of the artwork to be physically cleaned. Using the data from both the cleaned and varnished conditions of that same area, the network learns how to map from the uncleaned condition to the cleaned one. It then applies the same map to the rest of the artwork, resulting in a virtually cleaned artwork.</p><p>The idea behind a DGN is to learn the relationship 𝑥 = 𝑓 𝜃 (𝑧), which maps an image 𝑧 to another image 𝑥. This approach is used here to recover the virtually cleaned artwork from the unclean one in the RGB color domain. The goal is to generate the image 𝑋, the virtually cleaned image of the varnished artwork. Feeding the varnished image 𝑍 into the generator yields the image 𝑋 with this characteristic. 𝑍 is the RGB image of the artwork before cleaning. As mentioned above, only a small area of the painting is cleaned, and we have the RGB image of that area in both the cleaned and uncleaned conditions. Let us call the area of the painting for which we have both the cleaned and uncleaned data 𝐴. The RGB image of this area after physical cleaning is called 𝐴 𝑐 , and the corresponding RGB image of this area in 𝑍 (which is uncleaned) is 𝐴 𝑢 . When 𝑍 goes through the network, the part corresponding to 𝐴 𝑢 is extracted, and the pixel-wise error between 𝐴 𝑢 and 𝐴 𝑐 is calculated to compute the loss, which is then back-propagated to the generator, optimizing the parameters 𝜃 of the mapping function. Fig. <ref type="figure" target="#fig_0">1</ref> shows the process described. It should be noted that there is no training in a traditional sense in this approach. 
The error computed between 𝐴 𝑢 and 𝐴 𝑐 is back-propagated to the generator, which then cleans the whole image using this error from the loss function. This cleaning proceeds step by step at each epoch, until the network reaches the maximum number of epochs.</p><p>Through trial and error we found that the network works better in the CIELAB color space than in RGB. This improvement in neural network performance from changing the color space to CIELAB has been reported in the literature as well <ref type="bibr" target="#b10">[11]</ref>. Therefore, we first convert the RGB image, 𝑍, into the CIELAB color space. The L* channel is set aside and the a*b* channels, as input, go through two main modules of the network, consisting of several blocks as follows:</p><p>1) The down-sampling block 𝑑 (𝑖): Each 𝑑 (𝑖) is composed of a convolutional layer 𝐶 (1) 𝑑 (𝑖) that also performs the down-sampling operation by setting the stride 𝑆 = 2. After that, batch normalization and the LeakyReLU activation layer are applied. The output is then fed into the next convolutional layer 𝐶 (2) 𝑑 (𝑖) with the same stride. As with the first convolutional layer, this operation is followed by a batch normalization layer and the LeakyReLU activation function. 𝐶 (1) 𝑑 (𝑖) and 𝐶 (2) 𝑑 (𝑖) can be set to different kernel sizes and different numbers of filters, denoted 𝑘 (1) 𝑑 (𝑖), 𝑘 (2) 𝑑 (𝑖) and 𝑛 (1) 𝑑 (𝑖), 𝑛 (2) 𝑑 (𝑖).</p><p>2) The up-sampling block 𝑢 (𝑖): Each 𝑢 (𝑖) consists of a few stacked layers. In contrast to the down-sampling blocks, batch normalization is the first layer. Afterwards, the first convolutional layer 𝐶 (1) 𝑢 (𝑖) with 𝑆 = 1, a batch normalization layer, and the LeakyReLU activation function are applied. The output is then fed into the next convolutional layer 𝐶 (2) 𝑢 (𝑖). The output, after batch normalization and non-linear activation, is input into the bilinear up-sampling layer. 𝐶 (1) 𝑢 (𝑖) and 𝐶 (2) 𝑢 (𝑖), similar to the down-sampling block, can be set to different kernel sizes and different numbers of filters, denoted 𝑘 (1) 𝑢 (𝑖), 𝑘 (2) 𝑢 (𝑖) and 𝑛 (1) 𝑢 (𝑖), 𝑛 (2) 𝑢 (𝑖), respectively.</p><p>The skip connection, shown as 𝑠 (𝑖), is also utilized to connect the down-sampled data to the up-sampled data (the up-sampling and down-sampling blocks are symmetrical), so the residual information can be fully employed. 𝑜 (0) denotes the output block: an up-sampling block modified so that the up-sampling layer is replaced with one convolutional layer followed by a Sigmoid activation layer.</p><p>The network has an hourglass architecture as shown in Fig. <ref type="figure" target="#fig_2">2</ref>. The down-sampling and up-sampling sections each comprise 5 blocks, connected by 5 skip connections. The filter size is 3 × 3 in the up-sampling and down-sampling blocks, but 1 × 1 in the last convolutional layer. There are 128 filters in the convolutional layers of the down-sampling and up-sampling blocks, and only 2 filters (to equal the number of a*b* channels) in the last convolutional layer. As mentioned, only the a*b* channels of the image 𝑍 are input into the network. The output from the network is also the a*b* of the image 𝑋. This output is combined with the L* channel of the image 𝑍 that was first set aside, constructing the CIELAB image of the output 𝑋. The CIELAB image is then converted back into an RGB image following the standard formulae for sRGB.</p><p>As mentioned, the input to the network is the a*b* image of the uncleaned artwork 𝑍 and the generated image is 𝑋. The cost function is defined as the pixel-wise difference between 𝐴 𝑢 and 𝐴 𝑐 . 𝐴 𝑢 belongs to 𝑍 and therefore changes in each iteration. Consequently, the cost function is given as</p><formula xml:id="formula_5">𝑚𝑖𝑛‖𝐴 𝑢 − 𝐴 𝑐 ‖ 2<label>(1)</label></formula><p>It should be noted that the input to the model is replaced with the output of the model after each iteration. The overall algorithm is shown in Algorithm 1.</p></div>
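The masked-loss optimization of Eq. (1) can be sketched with a toy example in which a single linear layer stands in for the deep hourglass generator. Everything here is an illustrative assumption: the synthetic varnish, the shapes, the learning rate, and the simplification of the generator; the input-replacement step of Algorithm 1 is also omitted. The key point the sketch demonstrates is that the loss is computed ONLY on the small cleaned area 𝐴, while the learned map is applied to the whole image.

```python
import numpy as np

rng = np.random.default_rng(1)

H, W = 24, 24
clean_ab = rng.random((H, W, 2))               # "a*b*" channels, cleaned
varnish = np.array([[0.7, 0.2], [0.1, 0.8]])   # synthetic varnish effect
Z = clean_ab @ varnish.T                       # uncleaned input image

mask = np.zeros((H, W), dtype=bool)
mask[:6, :6] = True                            # small physically cleaned area

A_u_pix = Z[mask]                              # (N, 2) uncleaned patch pixels
A_c_pix = clean_ab[mask]                       # (N, 2) cleaned patch pixels

theta = np.eye(2)                              # "generator" parameters
lr = 0.1
for epoch in range(1500):                      # max_epoch, as in the paper
    pred = A_u_pix @ theta                     # generator output on the patch
    # Gradient of the masked loss ||A_u theta - A_c||^2 w.r.t. theta.
    grad = 2.0 * A_u_pix.T @ (pred - A_c_pix) / len(A_u_pix)
    theta -= lr * grad                         # "back-propagate" the loss

X = Z @ theta                                  # clean the WHOLE image
err = np.abs(X - clean_ab).mean()
print(f"mean abs error over the full image: {err:.2e}")
```

Although the parameters are fitted on 36 pixels only, the map generalizes to the whole image because the (toy) varnish acts uniformly everywhere; this mirrors the paper's observation that the cleaned patch must be representative of the whole painting.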
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Evaluation Metrics and Experimental Environment</head><p>Visualization of the results, and the per-pixel Euclidean Distance (ED) and Spectral Angle (SA) between the original (cleaned) image and the virtually cleaned image, are the metrics used in this work for accuracy evaluation <ref type="bibr" target="#b19">[20]</ref>. The color space used is RGB, and each pixel is considered a vector in this space, with the vector tip located at a particular point in the color space according to the RGB values. The Euclidean distance is obtained by calculating the distance between two pixels in that color space. The spectral angle is calculated between two vectors and is reported in radians in the range [0, 3.142], defined as </p><formula xml:id="formula_6">𝑆𝐴 𝑘 = 𝑐𝑜𝑠 −1 (︂ t 𝑘 • r 𝑘 |t 𝑘 ||r 𝑘 | )︂<label>(2)</label></formula><p>where 𝑘 denotes the 𝑘 𝑡ℎ pixel, t 𝑘 and r 𝑘 denote the two pixels belonging to the test and reference images, and 𝑆𝐴 𝑘 denotes the spectral angle between these two pixels. Python 3.9.7 (Anaconda distribution) is used as the base coding environment for the DGN algorithm. More specifically, the DGN code was written and run in the TensorFlow environment, installed via Anaconda. In terms of hardware, the programs are run on a GPU (NVIDIA GeForce MX350). The training of the DGN is performed using only one image, and the method is consequently referred to as an unsupervised learning method <ref type="bibr" target="#b17">[18]</ref>. As mentioned before, only a small area of the image is used to compute the loss function, and the same loss is then used for the whole image to virtually clean it. 1500 epochs are used to train the model. MATLAB R2022a was also used for the evaluation computations and for generating and yellowing the Macbeth ColorChecker.</p></div>
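The two evaluation metrics can be written directly in NumPy. The image names are illustrative; the SA function implements Eq. (2), with a small epsilon and clipping added as numerical safeguards.

```python
import numpy as np

def euclidean_distance(test, ref):
    """Per-pixel ED between two H x W x 3 RGB images."""
    return np.linalg.norm(test - ref, axis=-1)

def spectral_angle(test, ref, eps=1e-12):
    """Per-pixel SA (Eq. (2)), in radians in [0, pi]."""
    dot = (test * ref).sum(axis=-1)
    norms = np.linalg.norm(test, axis=-1) * np.linalg.norm(ref, axis=-1)
    # Clip guards against arccos domain errors from round-off.
    return np.arccos(np.clip(dot / (norms + eps), -1.0, 1.0))

# Identical pixels give ED = 0 and SA = 0; uniformly scaled pixels give
# SA ~ 0 (same direction in color space) while ED is non-zero.
a = np.full((2, 2, 3), 0.5)
b = 0.8 * a
print(euclidean_distance(a, a).max(), spectral_angle(a, b).max())
```

This pairing is why both metrics are reported: ED is sensitive to brightness differences, while SA isolates hue/chromaticity errors, which matter most when judging whether the yellow cast has been removed.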
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Results and Discussions</head><p>In this section, the results of applying the DGN to virtually clean the Macbeth ColorChecker and the Mona Lisa are presented and examined. First, we consider the Macbeth ColorChecker.</p><p>The Macbeth ColorChecker was simulated as varnished and unvarnished and is used to test the approach, similarly to previous work by <ref type="bibr" target="#b16">[17]</ref>. The Macbeth ColorChecker has 24 different color patches, including a range of neutral samples. As mentioned, the DGN needs only a small area of the painting to be physically cleaned; using that small part to learn the transfer function describing the varnish effect, the whole painting is then virtually cleaned. Given that the Macbeth ColorChecker has different color patches, we empirically found that at least three patches need to be physically cleaned. Therefore, we applied the method using the following combinations of patches: a) red, green and blue, b) black, white and a neutral patch, and c) all of the neutral patches, i.e., the six neutral patches on the standard Macbeth ColorChecker. Combination (c) contains more than three patches, but is presented as an alternate approach to training the network for testing. The results are visually compared to the method proposed by <ref type="bibr" target="#b16">[17]</ref>, as shown in Fig. <ref type="figure" target="#fig_7">3</ref>, and quantitatively compared in Table <ref type="table" target="#tab_0">1</ref>. We observe that the DGN has done an acceptable job compared to the CNN proposed by <ref type="bibr" target="#b16">[17]</ref>, even though the number of training samples required by the DGN is significantly smaller than that of the CNN. 
To better understand the results, Table <ref type="table" target="#tab_0">1</ref> shows the quantitative results in terms of the mean values of ED and SA over the whole ColorChecker. These metrics are computed between the virtually cleaned color chart and the original one. As observed from Table <ref type="table" target="#tab_0">1</ref>, the CNN model has done a slightly better job of cleaning the Macbeth ColorChecker. This is not too concerning, as the method proposed herein is more practical than the CNN proposed by <ref type="bibr" target="#b16">[17]</ref>: the DGN needs only a small area of the painting to be cleaned, while the CNN needs a significantly larger number of training samples. While the end goal of each approach is the same, a virtually cleaned work of art, the operational aspects of the two methods are significantly different.</p><p>Finally, we also applied the DGN to clean the Mona Lisa. The results are shown in Fig. <ref type="figure" target="#fig_8">4</ref>. Fig. <ref type="figure" target="#fig_8">4</ref> (c) shows the area of the painting that was used to compute the loss; in other words, that area is used to train the network to go from the unclean to the clean version of the artwork. As shown in Fig. <ref type="figure" target="#fig_8">4</ref> (e), the DGN has again done a visually acceptable job of cleaning the artwork, considering that the area of the painting used to train the network is fairly small. The ED and SA are also computed between the original clean Mona Lisa and the virtually cleaned one. The results are both visualized (Fig. <ref type="figure" target="#fig_9">5</ref>) and reported in terms of the mean values across the whole image (Table <ref type="table" target="#tab_2">2</ref>). The visualization of the ED and SA values shows specific areas of the work that are not well cleaned (note that in Figure <ref type="figure" target="#fig_9">5</ref> all four results are normalized to 1). 
To better gauge absolute performance, the mean values of the ED and SA are also reported, clarifying which method has outperformed the other. As observed from Fig. <ref type="figure" target="#fig_9">5</ref>, the CNN has not done a good job, especially in predicting the cleaned color of the sky, and overall its error is higher and more widespread.</p><p>We see from Table <ref type="table" target="#tab_2">2</ref> that the proposed method has surprisingly outperformed the CNN proposed by <ref type="bibr" target="#b16">[17]</ref>. It is surprising because the CNN outperformed our proposed method on the Macbeth ColorChecker, but the results are the opposite in the case of the Mona Lisa. This could be because of the richness of the colors and structural features present in the Mona Lisa, as opposed to the Macbeth ColorChecker, which is a simple color chart. This would also confirm that the method proposed herein is more practical than the CNN, as asserted above. The method proposed herein has the potential to be applied to a wider range of artworks than the CNN, which requires a large set of training data with content similar to the artwork itself.</p><p>It is important to note that the small area chosen in the artwork should be representative of all the features and materials present in the painting. Looking at Figure <ref type="figure" target="#fig_8">4</ref> (c), one can see that the small area contains a small part of the sky, the human eye and skin, and her dress. This strengthens the performance of the DGN. To examine this point further, another experiment was performed in which the small area differs from the one chosen in Figure <ref type="figure" target="#fig_8">4 (c)</ref>. The new small area comprises only the person (part of her face, her dress, her hair and her skin), as shown in Fig. <ref type="figure" target="#fig_11">6</ref>. As seen from the bottom row of Fig. 
<ref type="figure" target="#fig_11">6</ref>, the sky and everything around the person have not been cleaned as well as in the top row, where the chosen area is a better representative of everything in the image. It is also worth noting that the DGN has still not done terribly; however, a more representative area could lead to a better result.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>In this work, we developed a Deep Generative Network (DGN) to tackle the problem of virtual cleaning of artwork for visualization. We compared our method to the latest method in this area, which used a Convolutional Neural Network (CNN). We used the Macbeth ColorChecker and the Mona Lisa to test our method. We found that the proposed model did not outperform the CNN in the case of the Macbeth ColorChecker, but it did outperform the CNN in the case of the Mona Lisa. This shows the high potential of the work proposed herein to be applied in real cases and to a wider range of artworks. The method proposed herein could help conservators see how a painting would look if it were physically cleaned, or aid them in choosing among different options for a physical cleaning.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>( 1 )</head><label>1</label><figDesc>𝑑 (𝑖) and 𝐶(2) 𝑑 (𝑖) can be set to different kernel sizes and different numbers of filters shown as 𝑘<ref type="bibr" target="#b0">(1)</ref> </figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>( 2 )</head><label>2</label><figDesc>𝑢 (𝑖). The output, after batch normalization and non-linear activation, is input into the bilinear up-sampling layer with factor</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: The overall algorithm of the proposed deep generative network. It should be noted that the generator actually takes in the error and based on that, it generates a new image, which would be the virtually cleaned image. There is no training in the traditional sense here, and the generator only learns to clean the whole image using the error it is computing based on the cleaned parts.</figDesc><graphic coords="5,89.29,84.93,415.96,436.68" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The architecture of the work along with how the input and output are processed.</figDesc><graphic coords="6,106.71,84.56,396.73,150.69" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: a) all neutral patches, b) black and white and a neutral patch, c) CNN output, d) original Macbeth, e) red, green and blue patches and f) unclean (i.e., yellow) Macbeth.</figDesc><graphic coords="8,154.66,136.70,283.46,142.96" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: a) Unclean Mona Lisa, b) original clean Mona Lisa, c) The area of Mona Lisa that is assumed to be physically cleaned, d) virtually cleaned using CNN proposed by [17], e) virtually cleaned using DGN.</figDesc><graphic coords="9,126.32,84.19,340.16,372.89" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: a) ED calculated between the original clean Mona Lisa and the virtually cleaned one using CNN, b) ED calculated between the original clean Mona Lisa and the virtually cleaned one using DGN, c) SA calculated between the original clean Mona Lisa and the virtually cleaned one using CNN, d) SA calculated between the original clean Mona Lisa and the virtually cleaned one using DGN.</figDesc><graphic coords="10,183.01,84.19,226.77,303.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head></head><label></label><figDesc>In this figure, the top row shows the results of Figs. 4 and 5 combined in the case of the DGN, and the bottom row shows the results of the DGN when a different and smaller area is chosen.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: First experiment: a) SA calculated between the original clean Mona Lisa and the virtually cleaned one using DGN, b) ED calculated between the original clean Mona Lisa and the virtually cleaned one using DGN, c) virtually cleaned using DGN, d) The area of Mona Lisa that is assumed to be physically cleaned. Second experiment: e) SA calculated between the original clean Mona Lisa and the virtually cleaned one using DGN, f) ED calculated between the original clean Mona Lisa and the virtually cleaned one using DGN, g) virtually cleaned using DGN, h) The area of Mona Lisa that is assumed to be physically cleaned.</figDesc><graphic coords="11,126.31,173.34,340.16,249.69" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Algorithm 1</head><label>1</label><figDesc>Deep Generative Network Algorithm Procedure: Virtual Cleaning (𝐴 𝑐 ) Input: a*b* image of the uncleaned artwork 𝑍 while epoch &lt; max_epoch do 𝑋 = 𝑀 𝑜𝑑𝑒𝑙(𝑍) (Model here stands for the deep generative model.) 𝐴 𝑢 = 𝑋 (The part of the 𝑋 corresponding to 𝐴 𝑐 is taken out) 𝑚𝑖𝑛‖𝐴 𝑢 − 𝐴 𝑐 ‖ 2 𝑍 = 𝑋 (replace the input with the output of the model in each iteration)</figDesc><table><row><cell>end while</cell></row><row><cell>Return 𝑋</cell></row><row><cell>End Procedure</cell></row></table></figure>
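The loop in Algorithm 1 can be illustrated with a minimal, runnable sketch. This is not the paper's DGN: the deep generative model is replaced here by a hypothetical per-channel affine map (gain and bias) fitted by least squares, purely to show the structure of the iteration — generate 𝑋 from 𝑍, match the region of 𝑋 corresponding to the physically cleaned area 𝐴 𝑐 , then feed the output back as the next input.

```python
import numpy as np

def virtual_cleaning(Z, A_c, mask, max_epoch=10):
    """Sketch of Algorithm 1. A hypothetical per-channel affine map
    (gain, bias) stands in for the deep generative model."""
    Z = Z.astype(float).copy()
    n_ch = Z.shape[-1]
    for _ in range(max_epoch):
        # Fit the 'model' so its output matches A_c on the cleaned region
        # (closed-form least squares replaces gradient-based training).
        gain = np.empty(n_ch)
        bias = np.empty(n_ch)
        for c in range(n_ch):
            A = np.stack([Z[mask][:, c], np.ones(mask.sum())], axis=1)
            sol, *_ = np.linalg.lstsq(A, A_c[:, c], rcond=None)
            gain[c], bias[c] = sol
        X = Z * gain + bias  # X = Model(Z): apply the map to the whole image
        # A_u = X[mask] now approximates A_c (the minimised error term)
        Z = X                # replace the input with the output each iteration
    return Z                 # the virtually cleaned image
```

Because the simulated varnish in this sketch is exactly affine, the loop converges after a single epoch; the actual DGN instead learns the mapping with a convolutional generator driven by the same masked-region error.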
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Euclidean distance and SA mean and standard deviation (SD) values between the original and virtually cleaned Macbeth color chart.</figDesc><table><row><cell>Method</cell><cell cols="2">Euclidean distance</cell><cell cols="2">SA</cell></row><row><cell></cell><cell>Mean</cell><cell>SD</cell><cell>Mean</cell><cell>SD</cell></row><row><cell>All neutral patches</cell><cell>0.06</cell><cell>0.022</cell><cell>0.06</cell><cell>0.021</cell></row><row><cell>Black, white and a neutral patch</cell><cell>0.056</cell><cell>0.026</cell><cell>0.062</cell><cell>0.024</cell></row><row><cell>Red, green and blue patches</cell><cell>0.074</cell><cell>0.042</cell><cell>0.070</cell><cell>0.034</cell></row><row><cell>CNN proposed by [17]</cell><cell>0.021</cell><cell>0.002</cell><cell>0.014</cell><cell>0.004</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Euclidean distance and SA mean and standard deviation (SD) values between the original and virtually cleaned Mona Lisa.</figDesc><table><row><cell>Method</cell><cell cols="2">Euclidean distance</cell><cell cols="2">SA</cell></row><row><cell></cell><cell>Mean</cell><cell>SD</cell><cell>Mean</cell><cell>SD</cell></row><row><cell>DGN</cell><cell>0.0167</cell><cell>0.0015</cell><cell>0.1045</cell><cell>0.0139</cell></row><row><cell>CNN [17]</cell><cell>0.0371</cell><cell>0.0024</cell><cell>0.1489</cell><cell>0.0209</cell></row></table></figure>
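The two error measures summarised in Tables 1 and 2 can be computed per pixel and then reduced to a mean and SD. Below is a small sketch, assuming corresponding pixel vectors are compared with plain Euclidean distance and with the spectral angle (in radians, in [0, π], as in the spectral angle mapper of [19]); the function name is illustrative, and zero-valued pixel vectors are assumed absent.

```python
import numpy as np

def cleaning_metrics(ref, est):
    """Per-pixel Euclidean distance (ED) and spectral angle (SA)
    between a reference (original clean) image and a virtually
    cleaned estimate. Returns ((ED mean, ED SD), (SA mean, SA SD))."""
    ref = ref.reshape(-1, ref.shape[-1]).astype(float)
    est = est.reshape(-1, est.shape[-1]).astype(float)
    # Euclidean distance between corresponding pixel vectors
    ed = np.linalg.norm(ref - est, axis=1)
    # Spectral angle: arccos of the normalised dot product, in [0, pi];
    # assumes no all-zero pixel vectors (which would divide by zero)
    cos = np.sum(ref * est, axis=1) / (
        np.linalg.norm(ref, axis=1) * np.linalg.norm(est, axis=1))
    sa = np.arccos(np.clip(cos, -1.0, 1.0))
    return (ed.mean(), ed.std()), (sa.mean(), sa.std())
```

Identical images yield zero for both measures, while orthogonal pixel vectors give an SA of π/2 regardless of their magnitudes — which is why SA complements ED as a magnitude-insensitive check.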
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Acknowledgments</head><p>This research was funded by the Xerox Chair in Imaging Science in the Chester F. Carlson Center for Imaging Science at the Rochester Institute of Technology.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The Barbizon painters: a guide to their suppliers</title>
		<author>
			<persName><forename type="first">S</forename><surname>Constantin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Studies in Conservation</title>
		<imprint>
			<biblScope unit="volume">46</biblScope>
			<biblScope unit="page" from="49" to="67" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The unvarnished truth: Mattness, &apos;primitivism&apos; and modernity in French painting, c. 1870-1907</title>
		<author>
			<persName><forename type="first">A</forename><surname>Callen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Burlington Magazine</title>
		<imprint>
			<biblScope unit="volume">136</biblScope>
			<biblScope unit="page" from="738" to="746" />
			<date type="published" when="1994">1994</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Bruce-Gardner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hedley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Villers</surname></persName>
		</author>
		<title level="m">Impressionist and post-impressionist masterpieces: The courtauld collection</title>
				<imprint>
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An evaluation of color change in nineteenth-century grounds on canvas upon varnishing and varnish removal</title>
		<author>
			<persName><forename type="first">M</forename><surname>Watson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Burnstock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">New Insights into the Cleaning of Paintings: Proceedings from the Cleaning 2010 International Conference</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
		<respStmt>
			<orgName>Universidad Politecnica de Valencia and Museum Conservation Institute, Smithsonian Institution</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A review of solvent action on oil paint</title>
		<author>
			<persName><forename type="first">L</forename><surname>Baij</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hermans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ormsby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Noble</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Iedema</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Keune</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Heritage Science</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="1" to="23" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Sustainability in art conservation: a novel bio-based organogel for the cleaning of water sensitive works of art</title>
		<author>
			<persName><forename type="first">S</forename><surname>Prati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Volpi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fontana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Galletti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Giorgini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mazzeo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Mazzocchetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Samorì</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sciutto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Tagliavini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pure and Applied Chemistry</title>
		<imprint>
			<biblScope unit="volume">90</biblScope>
			<biblScope unit="page" from="239" to="251" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Characterization of polyvinyl alcoholborax/agarose (pva-b/ag) double network hydrogel utilized for the cleaning of works of art</title>
		<author>
			<persName><forename type="first">E</forename><surname>Al-Emam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Soenen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Caen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Janssens</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Heritage Science</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="1" to="14" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Experimental tests used for treatment of red weathering crusts in disintegrated granite - Egypt</title>
		<author>
			<persName><forename type="first">M</forename><surname>El-Gohary</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Cultural Heritage</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="471" to="479" />
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Setup of a sustainable indoor cleaning methodology for the sculpted stone surfaces of the Duomo of Milan</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gulotta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Saviello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gherardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Toniolo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Anzani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rabbolini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goidanich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Heritage Science</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Image processing for virtual restoration of artworks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Barni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bartolini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Cappellini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Multimedia</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="34" to="37" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Digital color restoration of old paintings</title>
		<author>
			<persName><forename type="first">M</forename><surname>Pappas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Pitas</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Image Processing</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="291" to="294" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Multispectral camera and radiative transfer equation used to depict Leonardo&apos;s sfumato in Mona Lisa</title>
		<author>
			<persName><forename type="first">M</forename><surname>Elias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Cotte</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Optics</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="2146" to="2154" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Digital cleaning and &quot;dirt&quot; layer visualization of an oil painting</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">M T</forename><surname>Palomero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">N</forename><surname>Soriano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics Express</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page" from="21011" to="21017" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Experimental study on merits of virtual cleaning of paintings with aged varnish</title>
		<author>
			<persName><forename type="first">G</forename><surname>Trumpy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Conover</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Simonot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Thoury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Picollo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Delaney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Optics Express</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="33836" to="33848" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Digitally reconstructing van Gogh&apos;s Field with Irises near Arles. Part 1: varnish</title>
		<author>
			<persName><forename type="first">E</forename><surname>Kirchner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Van Der Lans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ligterink</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hendriks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Delaney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Color Research &amp; Application</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="150" to="157" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Chromatic changes in paintings of Adriano de Sousa Lopes after the removal of aged varnish</title>
		<author>
			<persName><forename type="first">J</forename><surname>Linhares</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cardeira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bailão</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pastilha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nascimento</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Conservar Património</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="50" to="64" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Virtual cleaning of works of art using deep convolutional neural networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Maali Amiri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Messinger</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Heritage Science</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A new deep generative network for unsupervised remote sensing single-image super-resolution</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Haut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fernandez-Beltran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">E</forename><surname>Paoletti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Plaza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Pla</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Geoscience and Remote Sensing</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="6792" to="6810" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Deep image prior</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ulyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lempitsky</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="9446" to="9454" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Contaminant classification of poultry hyperspectral imagery using a spectral angle mapper algorithm</title>
		<author>
			<persName><forename type="first">B</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Windham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lawrence</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Smith</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biosystems Engineering</title>
		<imprint>
			<biblScope unit="volume">96</biblScope>
			<biblScope unit="page" from="323" to="333" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
