<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Automated Image Color Mapping for a Historic Photographic Collection</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Taylor Arnold</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lauren Tilton</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Data Science &amp; Linguistics, University of Richmond</institution>
          ,
          <country country="US">U.S.A</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Rhetoric &amp; Communication Studies, University of Richmond</institution>
          ,
          <country country="US">U.S.A</country>
        </aff>
      </contrib-group>
      <fpage>37</fpage>
      <lpage>47</lpage>
      <abstract>
        <p>In the 1970s, the United States Environmental Protection Agency sponsored Documerica, a large-scale photography initiative to document environmental subjects nation-wide. While over 15,000 digitized public-domain photographs from the collection are available online, most of the images were scanned from damaged copies of the original prints. We present and evaluate a modified histogram matching technique based on the underlying chemistry of the prints for correcting the damaged images by using training data collected from a small set of undamaged prints. The entire set of color-adjusted Documerica images is made available in an open repository.</p>
      </abstract>
      <kwd-group>
        <kwd>computer vision</kwd>
        <kwd>color analysis</kwd>
        <kwd>histogram matching</kwd>
        <kwd>documentary photography</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Many of the most important environmental laws and federal agencies in the United States came
into existence during the large-scale political and social environmental movement that formed
during the 1960s and 1970s [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Significant political advances that continue into the present
day include the Clean Air Act (1963), the Clean Water Act (1972), and the Resource
Conservation and Recovery Act (1976). The United States Environmental Protection Agency (EPA) was
founded in 1970, to manage, advocate, and set standards for these new legal frameworks [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ].
The EPA continues today with a staff of over 16,000 and a budget of over $12 billion [<xref ref-type="bibr" rid="ref3">3</xref>].
      </p>
      <p>
        Documerica was an EPA-funded project running from 1972 to 1977 that aimed to
“photographically document subjects of environmental concern in America during the 1970s” [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The
photographs captured a wide range of topics. Mixed through images of water pollution,
chemical spills, and factory smoke plumes are pastoral landscapes from national parks around the
country [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Urban images of junkyards and trash are juxtaposed with images of cleaner
technologies, mass transit, and Americans of all ages at play [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ]. Over 15,000 photographic prints
from the collection have been digitized and made available through the National Archives of
the United States. As a product of the federal government, these images are in the public
domain and serve as a rich potential source of documentary evidence of the United States in the
1970s and the early years of the modern environmental movement.
      </p>
      <p>
        Unfortunately, a damaged set of Documerica prints was used for digitization. The scanned
photographic prints have an intense red shift, causing everything from the sky, to lakes, to
trees to have a red/orange hue in place of their normal expected colors. The sharpness of
the images has not been significantly affected, indicating that the damage was due to a slow
degradation over time through a combination of heat and light. The color shift is sufficiently
strong to reduce the aesthetic and rhetorical power of the images that are available online [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ].
Interestingly, the National Archives holds two additional sets of Documerica prints. These two
other sets reveal the aesthetic qualities of the original photographs and open the door to the
possibility of correcting the damaged colors in digital prints.
      </p>
      <p>
        In this article, we present and evaluate an algorithm to automatically correct the color of the
damaged digitized Documerica images using a small set of the undamaged prints as training
data. Our goal is to restore the aesthetic qualities of the images rather than the impossible task
of perfectly matching the colors in the reference images [
        <xref ref-type="bibr" rid="ref17 ref4">4, 17</xref>
        ]. Through several quantitative
and qualitative analyses, we show that our transformed images more closely represent the
expected colors of the photographed scenes. Our technique is sufficiently tractable that it could
be applied to other color-shifted collections that do not have a reference set to train against.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Data</title>
      <p>
        The photographers employed by the Documerica project took photographs using the
Kodachrome color reversal film produced by Eastman Kodak [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Unlike many other technologies
for color photography, Kodachrome film used a subtractive technology that recorded light by
measuring the amount of cyan, magenta, and yellow dye needed to print a reconstruction of
the image [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ]. A special setup was required to create prints from the images. Special labs were
able to take film and turn it into color photographic slides through three differently colored
developers [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>The National Archives of the United States has three copies of prints from Documerica: (1)
archival copies held in cold storage and available on special request, (2) a non-circulating set
held in their print room, and (3) a damaged set of prints that were used for the digitization
process.1 The authors visited the National Archives and selected a set of 23 slides from across
the collection to manually re-scan from both the cold-storage and non-circulating print set.
We selected images that contained a variety of different colors. After determining that the cold
storage images were indistinguishable from the print room copies, we decided to only re-scan
the latter. We then manually cropped each of these images to match the digitized prints. One
selected image had already been replaced with a corrected scan online. This left a training set
of 22 images for which we had manual reference scans from the print room and the damaged,
digitized copies on National Archives website.
1 It is not entirely certain how the latter set was damaged or why it was selected for digitization. The third set was
previously used as a circulation copy and was likely damaged through its continued circulation. Personal
communications with the current archivists suggest that, because the digitization was done off-site, someone selected
the circulating copy to send off before realizing how damaged it had become.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Overview of Color Mapping Algorithms</title>
      <p>
        Color adjustment of digital photographs is a well-studied topic and an essential element of
the workflow of modern photographers. For example, a common technique for professional
photographers is first to photograph a color reference card containing a variety of boxes with
known hues. Then, commercial software can use this reference image to learn how to adjust
any other images taken with the same equipment in similar lighting conditions [
        <xref ref-type="bibr" rid="ref23">23, 24</xref>
        ]. Adobe
Photoshop provides automated algorithms for matching color across images, which is useful
for applications such as putting together images from a photoshoot done under various lighting
conditions [<xref ref-type="bibr" rid="ref9">9</xref>]. Color adjustment is also an essential stylistic element for photographers [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ].
Commercial photo editing software such as Adobe Lightroom provides a variety of methods
for automatically applying a suite of filters to a set of images—a technique commonly used, for
example, by wedding photographers to create a distinct style—or for manually adjusting the
color of an individual image.
      </p>
      <p>
        In addition to commercial software for color adjustment, there has also been considerable
academic research on the specific task of matching the color between two images, a process
known as color mapping or image color transfer. In one of the earliest studies of color
mapping, Reinhard et al. introduced a technique consisting of a simple standardization of color
intensities in a specific color space [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. They showed that this technique worked well as
both a mild correction to standardize images of the sun setting over the horizon and more
drastic applications of style transfer from an oil painting to a modern photograph. In a more
recent survey, Faridul et al. provide a nomenclature of available techniques for color
transfer: geometry-based, statistical, and user-assisted [<xref ref-type="bibr" rid="ref10">10</xref>]. Statistical techniques extend the ideas
of standardizing the mean and variance of color intensities to more involved transformations
known as histogram matching [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]. Much of the sophistication of novel methods has focused
on making differential transformations of various types over various parts of the image [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ].
One motivation for localized changes is adjusting lighting conditions as a pre-processing step
for other algorithms.
      </p>
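<p>The statistical technique of Reinhard et al. can be sketched in a few lines: shift and scale each color channel of the source image so that its mean and standard deviation match those of a reference image. The sketch below is a simplified illustration on a single channel (the original paper operates in the lαβ color space, a conversion omitted here; function and variable names are ours):</p>

```python
from statistics import mean, pstdev

def reinhard_channel_transfer(source, reference):
    """Standardize one color channel of `source` so that its mean and
    standard deviation match those of `reference`."""
    mu_s, sd_s = mean(source), pstdev(source)
    mu_r, sd_r = mean(reference), pstdev(reference)
    scale = sd_r / sd_s if sd_s > 0 else 1.0
    # Center on the source statistics, rescale to the reference statistics.
    return [(x - mu_s) * scale + mu_r for x in source]

src = [0.2, 0.4, 0.6]
ref = [0.5, 0.7, 0.9]
out = reinhard_channel_transfer(src, ref)
```

Applied to all three channels, this is exactly the mean/variance standardization that the more involved histogram matching methods generalize.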
    </sec>
    <sec id="sec-4">
      <title>4. CMY Median Histogram Matching (MHM)</title>
      <p>
        Our approach to image color mapping with the Documerica collection is closely related to
the well-known histogram matching technique. In this technique we attempt to match the
distribution of colors in one image with the distribution of colors in a reference image through
a monotonic transformation of the pixel intensities [
        <xref ref-type="bibr" rid="ref14 ref18">18, 14</xref>
        ]. Two special considerations in our
application require some minor changes to the standard histogram matching algorithm.
      </p>
      <p>
        First, our understanding of the materiality of the prints suggests that the color shift that
damaged them affected the images along the dimensions of the three developer colors. The
contributing elements of heat, light, and/or humidity would affect each of these dyes differently
according to its chemical composition. Our qualitative analysis of the consistent red shift in the
digitized images further indicates the need for a color correction applied individually to each
of the CMY color channels. So, in contrast to the standard strategy for histogram matching,
which uses color spaces adapted to the sensitivity of the human eye, we will apply our
transformation directly to the RGB/CMY pixel intensities [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ].2
      </p>
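<p>The equivalence noted in the footnote, that CMY intensities are simply one minus the RGB intensities, means a per-channel monotonic correction can be learned in either space. A minimal illustration (helper names are ours):</p>

```python
def rgb_to_cmy(pixel):
    """The CMY representation is the complement of RGB on the [0, 1] scale."""
    r, g, b = pixel
    return (1 - r, 1 - g, 1 - b)

def cmy_to_rgb(pixel):
    """The complement is its own inverse."""
    c, m, y = pixel
    return (1 - c, 1 - m, 1 - y)

# Round trip: converting to CMY and back recovers the RGB pixel exactly
# (values chosen to be exactly representable in binary floating point).
assert cmy_to_rgb(rgb_to_cmy((0.25, 0.5, 1.0))) == (0.25, 0.5, 1.0)
```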
      <p>The second difference from a standard histogram matching algorithm is that we want to
avoid overfitting to the distribution of a single image. In fact, one of the goals of our
transformation is to restore the diversity of lighting and colors in the source materials. Instead
of perfectly matching the distribution for a single image, we create a transformation
by averaging the histogram transformations across the entire training set of 22 pairs of images.</p>
      <p>The goal of our adapted technique, which we will refer to as median histogram matching
(MHM), is to learn three monotonic functions f̂<sub>C</sub>, f̂<sub>M</sub>, and f̂<sub>Y</sub> that each map an input color
intensity (from 0 to 1) into an output color intensity on the same scale. Applying these to the
cyan, magenta, and yellow components of a damaged image should result in a transformed
image that more closely represents the undamaged form of the print. In addition to being
motivated by the chemistry of the prints, the transformation mirrors tools available in popular
photo editing software such as the Photos application on macOS and Adobe Photoshop.</p>
      <p>Differences in the technology available for digitizing the undamaged prints (including the
lighting, orientation, overscan size, and print-specific artifacts) made it infeasible to line up
our reference images pixel-by-pixel. Following related work on color correction, we focused
on lining up the overall distribution of each color channel intensity. For each image i and color
channel c, we computed the percentiles of the color intensities for both the damaged image and
the scanned undamaged print. Then, we constructed a function f̂<sub>i,c</sub> by matching the two sets of
percentiles to one another. For example, if the 20th percentile of the damaged image had a cyan
intensity of 0.6 and the corresponding undamaged percentile was 0.82, f̂<sub>i,c</sub>(0.6) would be set to 0.82.
We then set f̂<sub>i,c</sub>(0) to 0 and f̂<sub>i,c</sub>(1) to 1 and filled in the intermediate values through linear
interpolation. Finally, the transformation for each color channel, f̂<sub>c</sub>, is given by taking the
median across all of the images in our training collection. These transformations are guaranteed
to be monotonic and to be well-defined for all input values.</p>
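<p>The fitting procedure just described can be sketched as follows, in pure Python for one color channel (function names and the probability grid are ours; a real implementation would operate on full pixel arrays):</p>

```python
from statistics import median

def channel_quantiles(values, grid):
    """Empirical quantiles of one channel's intensities at the given
    probability levels."""
    v = sorted(values)
    return [v[min(int(q * (len(v) - 1) + 0.5), len(v) - 1)] for q in grid]

def fit_single_image_map(damaged, undamaged, grid):
    """Percentile matching for one image: the damaged quantile at each
    probability level maps to the undamaged quantile at the same level.
    Returns knot points with the (0, 0) and (1, 1) anchors."""
    xs = [0.0] + channel_quantiles(damaged, grid) + [1.0]
    ys = [0.0] + channel_quantiles(undamaged, grid) + [1.0]
    return xs, ys

def apply_map(xs, ys, x):
    """Evaluate the monotone map at x by linear interpolation."""
    for i in range(1, len(xs)):
        if x <= xs[i]:
            if xs[i] == xs[i - 1]:          # duplicate knots
                return ys[i]
            t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
            return ys[i - 1] + t * (ys[i] - ys[i - 1])
    return ys[-1]

def fit_mhm(pairs, grid):
    """Median histogram matching: evaluate each per-image map on a fixed
    input grid and take the pointwise median. The pointwise median of
    monotone functions is itself monotone."""
    eval_grid = [k / 100 for k in range(101)]
    per_image = [fit_single_image_map(d, u, grid) for d, u in pairs]
    med = [median(apply_map(xs, ys, x) for xs, ys in per_image)
           for x in eval_grid]
    return eval_grid, med
```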
    </sec>
    <sec id="sec-5">
      <title>5. Results</title>
      <sec id="sec-5-1">
        <title>5.1. Transformations</title>
        <p>outlined in the previous section. The grey lines show the estimates  ̂ for each image  , with
the darker colored lines showing the final learned transformations based on the median across
the entire training set. These transformations were also applied in the final column of Figure 1.</p>
        <p>The cyan color channel transformation is the most different from an identity mapping. The
transformation suggests that the amount of cyan intensity should be increased, which aligns
with our qualitative analysis of the collection as having a general red tint (the intensity of
cyan can be computed by taking one minus the red pixel intensity). The transformation
indicates that the cyan dye has degraded at a faster rate than the other two dyes. The magenta
and yellow curves, on the other hand, are much closer to the identity transformation. While the
algorithm does indicate some changes for individual images, these are relatively minor when
we take the median across the entire training set.</p>
        <p>2 The representation of a pixel in CMY space can be computed by simply subtracting the RGB representation
from 1, so matching the CMY distribution is equivalent to matching the RGB representation.</p>
        <p>We see that there are considerable differences across the training images. These differences
are less pronounced than they first appear because over half of all the pixel intensities in the
digitized images are above 0.8. Looking at the upper part of each curve, we see that this region
is much more stable than the rest of the transformation. Additionally, the transformations at
the bottom of the curves correspond to parts of the image that have a small amount of the
given color dye. Often these are very bright parts of the image that are close to white, and the
differences indicated on the chart have a small visual effect, as we see in the following
subsection.</p>
        <p>The differences across training images highlight two points of caution. First, the adjustments
are nowhere near perfect. The final colors can still be quite different from the original
prints, the negatives themselves, or the light reflected by the objects being photographed.
Secondly, care is needed when estimating a transformation for another collection: taking the
median across several hand-selected transformations will likely work well, whereas applying a
filter trained on only one image to a larger collection can quickly lead to overfitting.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Leave-One-Out Cross-Validation</title>
        <p>
          We can use quantitative measurements to see how well the learned median transformations
work on the training data itself. To do this, we use a leave-one-out cross-validation
technique in which we compute the median transformations with the i-th image removed,
which we denote by f̂<sub>−i</sub>, and then compare this to the transformation given by using only
the i-th image itself [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ]. To compare these two functions, we use a distance metric given by the sum of the
ℓ2 norms of the differences between the two transformations across the color channels.
        </p>
        <p>[Table 1: Average errors (sum of the ℓ2 norms across color channels, multiplied by 100) for
predicting the color channel transformation on the training set of images, with rows for the
identity transformation and the leave-one-out estimator and columns for the uniform and
weighted errors. Standard errors are given after the means. Uniform errors are computed equally
across the color intensities; weighted errors use the empirical distribution for each image.]</p>
        <p>To get a sense of the scale of this metric, we compare it with using the identity
transformation to estimate f̂<sub>i</sub>. The uniform version of the metric treats all input color
intensities equally. We also compute a weighted version of the metric, weighted by the density
of the colors in the image itself, to provide a more accurate measurement of how much the two
transformations differ on a specific image.</p>
        <p>The results of the leave-one-out cross-validation are shown in Table 1. The cross-validated
median estimator has an error rate about three times smaller than the baseline error from the
identity transformation. The errors are slightly smaller when weighted by the empirical
distribution of the intensities, but the relative relationships remain similar. Only 1 of the 22
training images has a cross-validation error that is larger for the median estimator than for the
identity transformation, with every other image improved by at least 50%.</p>
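<p>The leave-one-out comparison can be sketched as below (names are ours; `maps` holds each image's fitted transformation sampled on a shared input grid, as equal-length lists):</p>

```python
from statistics import median

def median_map(maps):
    """Pointwise median of per-image transformations on a shared grid."""
    return [median(col) for col in zip(*maps)]

def l2_error(f, g):
    """Discrete analogue of the L2 distance between two transformations,
    multiplied by 100 as in the table."""
    n = len(f)
    return 100 * (sum((a - b) ** 2 for a, b in zip(f, g)) / n) ** 0.5

def loo_errors(maps):
    """For each image i, compare its own map against the median of the
    maps fitted on the remaining images."""
    errs = []
    for i, fi in enumerate(maps):
        rest = maps[:i] + maps[i + 1:]
        errs.append(l2_error(fi, median_map(rest)))
    return errs

# Toy data: three transformations that shift intensities upward by
# different amounts, compared against the identity baseline.
identity = [k / 10 for k in range(11)]
maps = [[min(1.0, x + 0.1 * s) for x in identity] for s in (1, 2, 3)]
cv = loo_errors(maps)
baseline = [l2_error(fi, identity) for fi in maps]
```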
      </sec>
      <sec id="sec-5-3">
        <title>5.3. Comparison to Manual Edits</title>
        <p>One of the motivations for the work in this paper was our perceived ability to make the
digitized, damaged Documerica prints look much closer to what we expected through manual
color adjustments in commercial photo editing tools. One editor on the English version of
Wikipedia, posting under the username Hohum, had a similar experience with the
Documerica photographs. In January 2023, they uploaded manually corrected versions of 14 Documerica
photographs to Wikipedia. No description of the method used was given. It seems unlikely that
the user had access to the undamaged prints; they have uploaded thousands of other corrected
images under a large variety of categories on Wikimedia Commons. While we should not treat
the edits by Hohum as a gold-standard transformation that we should aim to replicate
perfectly, comparing our method to the manually edited photographs offers further quantitative
evidence for the efficacy of our automated technique.</p>
        <p>As we did in comparing the transformations in the leave-one-out cross-validation, we
look at the average distances between the reference image (here, the corrected version uploaded
to Wikimedia Commons) and both the uncorrected image and our automatically corrected
image. Here, though, we look at the average Euclidean distance at the level of individual pixels.
Because we are interested in measuring the visual perception of the transformation, we
first convert the images into two different color spaces designed to represent human visual
perception: CIELAB and CIELUV [<xref ref-type="bibr" rid="ref15">15</xref>]. We report the results with and without
considering the lightness dimension of the colors.</p>
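<p>This comparison can be sketched as follows. We assume the images have already been converted to a perceptual space such as CIELAB (for example via a library conversion routine, not shown) and are given as flat, equal-length lists of (L, a, b) triples; dropping the lightness dimension corresponds to comparing only the last two coordinates:</p>

```python
def mean_pixel_distance(img_a, img_b, use_lightness=True):
    """Average Euclidean distance between corresponding pixels of two
    images represented as equal-length lists of (L, a, b) triples."""
    start = 0 if use_lightness else 1   # skip L when comparing AB only
    total = 0.0
    for pa, pb in zip(img_a, img_b):
        total += sum((x - y) ** 2
                     for x, y in zip(pa[start:], pb[start:])) ** 0.5
    return total / len(img_a)

# Toy example: two-pixel images differing only in the a/b coordinates.
ref = [(50.0, 10.0, 10.0), (60.0, -5.0, 0.0)]
est = [(50.0, 13.0, 14.0), (60.0, -5.0, 0.0)]
```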
        <p>Table 2 compares our transformation to the independent, manual transformations performed
by Wikipedia user Hohum. Across all four selected color spaces, the median transformation
method is roughly twice as close to the manual adjustment as the original image. For all of the
photos, the median transformation is closer to the manual transformation
in the CIELAB and AB color spaces. Only one image, which consists almost entirely of a
field of monochromatic corn shot from above, has a worse CIELUV distance under the median
transformation.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Qualitative comparison</title>
        <p>By looking at a subset of the transformations applied outside of our training set, we can
qualitatively describe how well the algorithm produces more realistic colors in the images. Creating
more realistic images that are more aesthetically interesting and engaging is the ultimate goal
of our work. Figure 3 shows 18 Documerica images and the result of applying our transformation.
The red shift in the original images is very noticeable in the first and third columns. The
transformed images lose this red tone and become more color-appropriate. The image of the two
people fishing and the group of people on a bus both show no elements that still have any red
hues. On the other hand, images such as the sunset and the rocky cliff with a bridge still retain
their expected red characteristics. In other words, we have not overcorrected the images by
making them all too blue. Instead, the updated collection shows a diverse set of colors that
help to re-establish the true scale and scope of the Documerica project itself.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future Work</title>
      <p>
        The Documerica project produced a large set of historically important documentary
photographs [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. With the increased present-day attention to environmental issues, the collection
has the potential to have a visible role in helping to understand the longer history of
environmentalism in the United States. However, the unrealistic red shift of the digitized collection has
until now reduced its aesthetic and rhetorical appeal and thus its general reach. Through the
adjustment of this shift via the algorithmic color transfer outlined in this paper and the
publication of the corrected images, we hope to help rectify this situation.3 As a next step—or,
more accurately, the original motivation for this work—we plan to build an interactive digital
public interface to help explore and understand the Documerica collection across its various
spatial, temporal, and visual components [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ].
3To download the color-adjusted collection, see: https://distantviewing.org/downloads.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>U. S. N.</given-names>
            <surname>Archives</surname>
          </string-name>
          . DOCUMERICA:
          <article-title>The Environmental Protection Agency's Program to Photographically Document Subjects of Environmental Concern,</article-title>
          <year>1972</year>
          -
          <fpage>1977</fpage>
          . https://catalog.archives.gov/id/542493.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>T.</given-names>
            <surname>Arnold</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Tilton</surname>
          </string-name>
          . Distant Viewing:
          <article-title>Computational Exploration of Digital Images</article-title>
          . MIT Press,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>A. J.</given-names>
            <surname>Barnes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Graham</surname>
          </string-name>
          , and
          <string-name>
            <given-names>D. M.</given-names>
            <surname>Konisky</surname>
          </string-name>
          .
          <article-title>Fifty years at the US Environmental Protection Agency: progress, retrenchment, and opportunities</article-title>
          . Rowman &amp; Littlefield Publishers,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Barthes</surname>
          </string-name>
          . “Rhétorique de l'image”.
          <source>In: communications 4.1</source>
          (
          <issue>1964</issue>
          ), pp.
          <fpage>40</fpage>
          -
          <lpage>51</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>B.</given-names>
            <surname>Berman</surname>
          </string-name>
          and
          <string-name>
            <given-names>M. M.</given-names>
            <surname>Cronin</surname>
          </string-name>
          . “
          <article-title>Project DOCUMERICA: A Cautionary Tale”</article-title>
          .
          <source>In: Journalism History 43.4</source>
          (
          <issue>2018</issue>
          ), pp.
          <fpage>186</fpage>
          -
          <lpage>197</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>T. W.</given-names>
            <surname>Bober</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Vacco</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. J.</given-names>
            <surname>Dagon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>H. E.</given-names>
            <surname>Fowler</surname>
          </string-name>
          . “
          <article-title>The Photographic Process”</article-title>
          .
          <source>In: Handbook of Industrial and Hazardous Wastes Treatment</source>
          (
          <year>2004</year>
          ), p.
          <fpage>297</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B. I.</given-names>
            <surname>Bustard</surname>
          </string-name>
          and
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Ferriero</surname>
          </string-name>
          .
          <article-title>Searching for the Seventies: The DOCUMERICA Photography Project</article-title>
          .
          <source>Foundation for the National Archives</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Carter</surname>
          </string-name>
          .
          <article-title>The politics of the environment: Ideas, activism, policy</article-title>
          . Cambridge University Press,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Evening</surname>
          </string-name>
          .
          <article-title>The Adobe Photoshop Lightroom 5 Book: The Complete Guide for Photographers</article-title>
          .
          <source>Pearson Education</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>H. S.</given-names>
            <surname>Faridul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Pouli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Chamaret</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Stauder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Reinhard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Kuzovkin</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Trémeau</surname>
          </string-name>
          . “
          <article-title>Colour mapping: A review of recent methods, extensions and applications”</article-title>
          .
          <source>In: Computer Graphics Forum 35.1</source>
          (
          <issue>2016</issue>
          ), pp.
          <fpage>59</fpage>
          -
          <lpage>88</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hastie</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Tibshirani</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Friedman</surname>
          </string-name>
          .
          <article-title>The elements of statistical learning: data mining, inference, and prediction</article-title>
          . Vol.
          <volume>2</volume>
          . Springer,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>R.</given-names>
            <surname>Hirsch</surname>
          </string-name>
          .
          <article-title>Seizing the Light: A Social &amp; Aesthetic History of Photography</article-title>
          .
          <source>Routledge</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hwang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-Y.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. S.</given-names>
            <surname>Kweon</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Kim</surname>
          </string-name>
          . “
          <article-title>Color transfer using probabilistic moving least squares”</article-title>
          .
          <source>In: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          .
          <year>2014</year>
          , pp.
          <fpage>3342</fpage>
          -
          <lpage>3349</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>W.</given-names>
            <surname>Jia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>He</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Q.</given-names>
            <surname>Wu</surname>
          </string-name>
          . “
          <article-title>A comparison on histogram based image matching methods”</article-title>
          .
          <source>In: 2006 IEEE International Conference on Video and Signal Based Surveillance. IEEE</source>
          .
          <year>2006</year>
          , pp.
          <fpage>97</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Mahy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Van Eycken</surname>
          </string-name>
          , and
          <string-name>
            <given-names>A.</given-names>
            <surname>Oosterlinck</surname>
          </string-name>
          . “
          <article-title>Evaluation of uniform color spaces developed after the adoption of CIELAB and CIELUV”</article-title>
          .
          <source>In: Color Research &amp; Application 19.2</source>
          (
          <year>1994</year>
          ), pp.
          <fpage>105</fpage>
          -
          <lpage>121</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>B.</given-names>
            <surname>Matiash</surname>
          </string-name>
          .
          <article-title>The Visual Palette: Defining Your Photographic Style</article-title>
          . Rocky Nook, Inc.,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Groupe µ</surname>
          </string-name>
          .
          <article-title>Traité du signe visuel: Pour une rhétorique de l'image</article-title>
          .
          <source>Seuil</source>
          ,
          <year>1992</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>L.</given-names>
            <surname>Neumann</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Neumann</surname>
          </string-name>
          . “
          <article-title>Color style transfer techniques using hue, lightness and saturation histogram matching”</article-title>
          .
          <source>In: Computational Aesthetics in Graphics, Visualization and Imaging</source>
          .
          <year>2005</year>
          , pp.
          <fpage>111</fpage>
          -
          <lpage>122</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>S.</given-names>
            <surname>Pénichon</surname>
          </string-name>
          .
          <article-title>Twentieth Century Colour Photographs: The complete guide to processes, identification &amp; preservation</article-title>
          .
          <source>Thames &amp; Hudson</source>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Reinhard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Adhikhmin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Gooch</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Shirley</surname>
          </string-name>
          . “
          <article-title>Color transfer between images”</article-title>
          .
          <source>In: IEEE Computer Graphics and Applications 21.5</source>
          (
          <year>2001</year>
          ), pp.
          <fpage>34</fpage>
          -
          <lpage>41</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Rinde</surname>
          </string-name>
          . “
          <article-title>Richard Nixon and the rise of American environmentalism”</article-title>
          .
          <source>In: Distillations 3.1</source>
          (
          <year>2017</year>
          ), pp.
          <fpage>16</fpage>
          -
          <lpage>29</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Shubinski</surname>
          </string-name>
          . “
          <article-title>From FSA to EPA: Project documerica, the dustbowl legacy, and the quest to photograph 1970s America”</article-title>
          .
          <source>PhD thesis</source>
          . University of Iowa,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>A.</given-names>
            <surname>Temkin</surname>
          </string-name>
          .
          <article-title>Color chart: Reinventing color, 1950 to today</article-title>
          .
          <source>The Museum of Modern Art</source>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>A.</given-names>
            <surname>Varichon</surname>
          </string-name>
          .
          <source>Color Charts: A History</source>
          . Princeton University Press,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>X.</given-names>
            <surname>Xiao</surname>
          </string-name>
          and
          <string-name>
            <given-names>L.</given-names>
            <surname>Ma</surname>
          </string-name>
          . “
          <article-title>Color transfer in correlated color space”</article-title>
          .
          <source>In: Proceedings of the 2006 ACM international conference on Virtual reality continuum and its applications</source>
          .
          <year>2006</year>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>309</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>