<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Appearance Manipulation in Spatial Augmented Reality using Image Differences</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Davit Gigilashvili</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Giorgio Trumpy</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Colourlab, Department of Computer Science, Norwegian University of Science and Technology</institution>
          ,
          <addr-line>Teknologivegen 22, 2815, Gjøvik</addr-line>
          ,
          <country country="NO">Norway</country>
        </aff>
      </contrib-group>
      <author-notes>
        <fn id="fn-equal"><p>These authors contributed equally.</p></fn>
        <corresp>davit.gigilashvili@ntnu.no (D. Gigilashvili); giorgio.trumpy@ntnu.no (G. Trumpy)</corresp>
      </author-notes>
      <pub-date>
        <year>2022</year>
      </pub-date>
      <volume>000</volume>
      <fpage>0</fpage>
      <lpage>0002</lpage>
      <abstract>
        <p>Rapidly emerging augmented reality technologies enable us to virtually alter the appearance of objects and materials in a fast and efficient way. State-of-the-art research shows that the human visual system has a poor ability to invert the optical processes in the scene and rather relies on image cues and the spatial distribution of luminance to perceive appearance attributes, such as gloss and translucency. For this reason, we hypothesize that it is possible to alter gloss and translucency appearance by projecting an image onto the original to mimic the luminance distribution characteristic of the target appearance. To demonstrate the feasibility of this approach, we use pairs of physically-based renderings of glossy and matte, and translucent and opaque materials, respectively; we calculate a compensation image (a luminance difference between them), and subsequently we demonstrate that, by algebraic addition of luminance, an image of a matte object can appear glossy, and an image of an opaque object can appear translucent, when the respective compensation images are projected onto them. Furthermore, we introduce a novel method to increase the apparent opacity of translucent materials. Finally, we propose a future direction, which could enable nearly real-time appearance manipulation.</p>
      </abstract>
      <kwd-group>
        <kwd>Augmented reality</kwd>
        <kwd>appearance</kwd>
        <kwd>translucency</kwd>
        <kwd>gloss</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Appearance of objects and materials, i.e. "the collected visual aspects" [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] is an important property,
which oftentimes defines our appraisal of and interaction with them. Perception of appearance by
the human visual system (HVS) is a complex psychovisual process, and it is usually categorized
into perception of four basic attributes: color, gloss, translucency, and texture [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. Although
the exact mechanisms of appearance perception remain largely unknown, the recent studies
to a large extent agree that the HVS does not invert optical processes in the scene, and rather
relies on particular regularities in the images, dubbed image cues [
        <xref ref-type="bibr" rid="ref3 ref4 ref5 ref6">3, 4, 5, 6</xref>
        ]. Different studies
have proposed and investigated a link between particular image cues and appearance attributes,
such as the contrast between specular and non-specular regions [
        <xref ref-type="bibr" rid="ref7">7, 8</xref>
        ], or brightness of the
edges [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6, 9</xref>
        ] – for translucency, and luminance histogram asymmetry [10, 11], sharpness,
contrast, or coverage area of the specular highlights [12, 13, 14, 15] – for gloss. Manipulation of
appearance by changing the optical properties of a material is a costly and time-consuming
process with limited predictability and a need for trial-and-error [
        <xref ref-type="bibr" rid="ref7">7, 16</xref>
        ].
      </p>
      <p>One way to manipulate the appearance of objects and materials is Spatial Augmented Reality
(SAR) [17, 18, 19]. In typical augmented reality, virtual graphical renderings are superimposed
over real-world images with the help of additional visualization hardware, such as a
head-mounted display. Spatial Augmented Reality manipulates the appearance of real objects and scenes
with real-world illumination design instead of virtual overlays on see-through displays, and
thus, can be observed with the naked eye [20, 21]. The most popular technique for SAR is projection
mapping, in which a projector is used to project a pre-calculated image on top of a real object to
achieve a desired appearance [17, 18, 19]. SAR, as an efficient and minimally invasive technique, is
used in a broad range of applications, such as virtual restoration of artworks [22, 23, 24]
and cultural heritage artifacts [25], architecture [26], product design and evaluation [27], and
enhancement of the apparent dynamic range of textured surfaces [28].</p>
      <p>One of the first demonstrations of the SAR was a seminal work by Raskar et al. [29]. The
authors proposed Shader Lamps – the system that projected virtual textures and shadows onto
a white Lambertian 3D model of Taj Mahal to simulate the virtual appearance. Multiple works
have enhanced the approach since then to highlight scene elements, e.g. edges [21], and to
manipulate the apparent BRDF and surface material [30, 31]. Such systems usually rely on a complex
hardware infrastructure: a camera provides input on the scene, an algorithm uses this input to
calculate an image, and the image is then projected onto the scene element [30, 31]. The
calculations usually involve estimation of surface normals and albedo. Some algorithms are
perceptually-based to ensure generation of perceptually accurate color appearance [32].</p>
      <p>
        A recent work by Amano [20] uses a perceptually-based algorithm to manipulate gloss and
translucency appearance by projection mapping. The author relies on the observations of Motoyoshi,
who argues that skewness of a luminance histogram, or a similar measure of histogram
asymmetry, is used for gloss perception [11], and that "spatial and contrast relationship between
specular highlights and non-specular shading patterns is a robust cue for perceived translucency
of three-dimensional objects" [8]. Amano [20] proposes a projection-camera feedback system,
where an input image detected by a camera is processed by a tone-mapping algorithm, which,
for altering gloss appearance, manipulates the histogram skewness of the image detected by the
camera, and increases the brightness near the contours while inverting the low spatial frequency
component to increase apparent translucency. The validity of this approach depends on the
validity of Motoyoshi’s statements. Subsequent studies revealed a limitation in Motoyoshi’s
work on gloss, as the image cues to gloss turned out to be subject to strict photo-geometric
constraints that are not captured by a spatially blind luminance histogram [33, 10, 34]. Also,
the contrast between specular and non-specular regions is diagnostic for translucency only to a
limited extent [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], and the translucency perception process likely also includes perception of
the 3D surface geometry [35, 36].
      </p>
      <p>Another limitation of all previous studies is the fact that the target appearance is loosely defined
and lacks a ground truth that could enable assessment of performance. In this work, we
hypothesize that a linear difference image (expressing luminance without gamma encoding) between
glossy (target) and matte (original), as well as translucent (target) and opaque (original) objects
(Eq. 1), can mimic the target appearance by simple pixel-by-pixel algebraic addition of this
difference to the original (Eq. 2).</p>
      <p>Difference[i, j, z] = max(0, Target[i, j, z] − Original[i, j, z])
(1)
Enhanced[i, j, z] ≈ Original[i, j, z] ⊕ Difference[i, j, z]
(2)
where Difference is the difference image (negative values are set to 0), Target is either the glossy or
the translucent image, Original is, respectively, either the matte or the opaque image, i and j are spatial pixel
coordinates, z is the channel (RGB) number, and ⊕ signifies projection of the right operand on
top of the left one. The presented model assumes that the image cues that distinguish the target
(glossy or translucent) from the original (matte or opaque) are always brighter in the target
than in the original, so these image cues correspond to positive values in the difference image.
The following chapter presents an alternative approach that circumvents this assumption. The
alternative approach is adopted to project a compensation image that makes a translucent object
look opaque (i.e. with darker image cues; refer to Fig. 5).</p>
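      <p>Under the assumption that ⊕ amounts to pixel-wise addition of radiance in linear units, Eq. 1 and Eq. 2 can be illustrated numerically. A minimal Python/numpy sketch with toy array values (not data from our experiments):</p>

```python
import numpy as np

def compensation(target, original):
    # Eq. 1: per-pixel, per-channel difference; negative values
    # (original brighter than target) are clipped to zero.
    return np.clip(target - original, 0.0, None)

def project(original, difference):
    # Eq. 2: the projection operator modeled as algebraic addition
    # of radiance, limited to the displayable range [0, 1].
    return np.clip(original + difference, 0.0, 1.0)

# Toy 2x2 single-channel example in linear units:
original = np.array([[0.2, 0.5], [0.4, 0.6]])
target = np.array([[0.9, 0.5], [0.3, 0.8]])  # one pixel darker than the original

diff = compensation(target, original)
enhanced = project(original, diff)
# Where target >= original, the enhancement matches the target exactly;
# where the target is darker, the result stays at the (lighter) original value.
```

      <p>The second case, where the result cannot reach a darker target, is exactly the limitation addressed by the boosting approach in the next section.</p>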
      <p>Gigilashvili et al. [10] demonstrated earlier that a photograph of a matte object can appear
glossy if specular highlights are superimposed on top of it. This approach satisfies the
photo-geometric constraints necessary for gloss perception [33, 10, 34], as the ground-truth target
image, which is used for calculating the difference, is a physically accurate rendering as opposed
to a globally processed image. To demonstrate the feasibility of this approach, we implemented a
double-projector system, where the original and compensation (difference) images are projected onto a
white reflective screen. The motivation of this work is threefold:
• Firstly, we want to demonstrate that gloss and translucency appearance can be
manipulated by simple algebraic addition of radiance that mimics the luminance distribution of
the target appearance. For this purpose, we investigate several case studies in order to
test this approach in action and check the validity of the above-mentioned hypotheses.
Unlike many previous studies, we have physically-accurate renderings of the desired target
appearance, which permits us to compare the results with the ground truth and also to
consider the case where translucency and glossiness in the target image can be enhanced
further;
• Secondly, we want to test a novel idea about increasing apparent opacity. We propose and
experimentally test a novel method to increase luminance contrast and simulate radiance
removal from the scene, in order to manipulate translucency and opacity appearance.
• And thirdly, based on these simple manipulations in controlled laboratory conditions, we
want to discuss prospects and obstacles on the way toward full real-time automation of
translucency and gloss enhancement in real-world scenes.</p>
      <p>The article is organized as follows: in the next section we describe the methodology.
Afterward, we present the results of several case studies, followed by a discussion.
Finally, we conclude and outline directions for future work toward a more generic automated
approach.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>2.1. Setup
The schematic representation of the setup is shown in Fig. 1, and the photo of the scene is shown
in Fig. 2. We placed two identical Sony VPL-AW15 projectors stacked on top of each other in a
completely dark room. Each projector was connected to a separate PC. The projectors focused
the image on a flat white screen located 2 meters away parallel to the projection plane. The
second projector was placed right on top of the first one. We projected checkerboard images
of different colors with each projector, which helped us register the projections manually and
ensure a spatial match between the two. We calibrated the two projectors by equalizing their peak
brightness levels and white points. The calibration procedure was the following: the two
projectors shone a full white side-by-side, and a visual match was achieved by adjusting the
brightness and the color balance in their settings panel, while keeping all other settings equal.
For documentation purposes, we also carried out spectral projector characterization. However,
we did not develop any color management module at this stage. We projected primary colors
one-by-one with levels from 0 to 255 to the screen and measured the reflectance spectrum with a
Konica Minolta CS-2000 spectroradiometer in the range of 380-780nm. Afterward, we projected
random chromatic and grayscale colors and measured their spectra to check the additivity of
the three channels. The projector turned out to be nearly perfectly additive<sup>1</sup>.</p>
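      <p>The additivity check can be expressed compactly: for an additive projector, the spectrum of a mixed color equals the sum of the separately measured primary spectra, corrected for the black-level offset being counted once per primary. A minimal numpy sketch with synthetic stand-in spectra (the Gaussian primaries, the black level of 0.01, and the tolerance are illustrative assumptions, not our measured data):</p>

```python
import numpy as np

# Synthetic stand-ins for measured primary spectra (380-780 nm, 5 nm steps);
# in practice these would come from the CS-2000 measurements.
wavelengths = np.arange(380, 781, 5)
black = np.full(wavelengths.shape, 0.01)  # non-zero projector black level
red = black + np.exp(-((wavelengths - 620) / 30.0) ** 2)
green = black + np.exp(-((wavelengths - 540) / 30.0) ** 2)
blue = black + np.exp(-((wavelengths - 460) / 30.0) ** 2)

def additivity_error(measured_mix, primaries, black):
    # Relative deviation between a measured mixed spectrum and the sum of
    # the primaries, with the black level counted only once overall.
    predicted = sum(primaries) - (len(primaries) - 1) * black
    return np.max(np.abs(measured_mix - predicted)) / np.max(predicted)

# A perfectly additive projector reproduces the corrected sum exactly:
white = red + green + blue - 2 * black
err = additivity_error(white, [red, green, blue], black)  # ~0 for an additive projector
```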
      <p>A Nikon D3200 camera was used to document the results. Each manipulation session was
captured with the same exposure parameters. For each session, the shutter speed was adjusted
case-by-case according to the brightest image (the enhancement of the target). For each pair of
stimuli, five different projection scenarios have been documented:
1. One projector: Projecting an original image.
2. One projector: Projecting a target image.
3. One projector: Projecting a compensation image. Compensation image calculation
algorithm is presented in a subsequent section.
4. Two projectors: one projector was projecting an original image, while the second
one was projecting a compensation image (to obtain the target visualization).
5. Two projectors: one projector was projecting a target image, while the second
one was projecting a compensation image (to further modify the appearance).</p>
      <sec id="sec-2-1">
        <title>2.2. Stimuli</title>
        <p>We studied six cases that covered different illumination environments, materials, and shapes of
different complexities (see Fig. 4). All images will be included in the Supplementary Materials<sup>2</sup>.
In four cases, we attempted to make an opaque image appear translucent. In two cases, we studied the
possibility of producing a glossy appearance on a matte image.
<sup>1</sup>For reproducing this work, measured spectra, projector characterization details, and checkerboard images used for
registration are available upon request from the corresponding author.
<sup>2</sup>Supplementary materials are available upon request from the corresponding author and can
also be accessed directly at the following repository: https://github.com/davitgigilashvili/
Appearance-Manipulation-Supplementary-Material/</p>
        <sec id="sec-2-1-7">
          <p>In five cases, synthetic images
have been used, generated with physically-based rendering (a bidirectional path tracer) in Mitsuba
Renderer [37]. An isotropic phase function has been used for all renderings. In the sixth case,
photographs of real-world objects were used. Other properties are as follows:
• Case 1: Bumpy sphere, opaque→translucent. These stimuli are similar to those used
in [
            <xref ref-type="bibr" rid="ref7">7</xref>
            ]. The spheres are rendered in the Virtual Viewing Booth of [38]. Wavelength-independent
absorption and scattering coefficients are 70 cm<sup>−1</sup> for the translucent, and 1000
cm<sup>−1</sup> for the opaque object. The index of refraction of the material is 1.3, and the material is placed in
a vacuum (1.0). Image dimensions are 1000 × 1000 px.
• Case 2: Cuboid, opaque→translucent. The cuboid object was rendered in a Cornell
box with a skimmed-milk material as measured by [
            <xref ref-type="bibr" rid="ref8">39</xref>
            ]. Similarly to [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], the extinction
coefficient was scaled by a factor of 0.005 for the opaque, and 0.0005 for the translucent material.
          </p>
          <p>
            Image dimensions are 1000 × 1000 px.
• Case 3: Front-lit Bust, opaque→translucent. These are the same images as
"Translucent Front-lit" and "Opaque Front-lit" from [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. The bust shape from the Plastique
dataset [
            <xref ref-type="bibr" rid="ref9">40</xref>
            ] has been rendered in Bernhard Vogl’s museum environment map embedded
in Mitsuba [37]. Wavelength-independent absorption and scattering coefficients equal
1 cm<sup>−1</sup> for the translucent object, and 1000000 cm<sup>−1</sup> and 0 cm<sup>−1</sup>, respectively, for the
opaque one. The surface roughness, expressed as the root mean square slope of microfacets,
          </p>
          <p>
            equals 0.15 for both. The material, with an index of refraction of 1.5, is placed
in a vacuum. Image dimensions are 512 × 512 px.
• Case 4: Back-lit bust, opaque→translucent. These are the same images as
"Translucent Back-lit" and "White Diffuse Back-lit" from [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. The translucent material and the
rendering conditions are the same as in Case 3. The diffuse opaque object is modeled as a
Lambertian surface with a hexadecimal surface color equal to #ECECEC<sup>3</sup>.
• Case 5: Bust, matte→glossy. These are the same images as "Opaque and Specular" and
"Opaque Matte" from [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. The rendering conditions are the same as in Case 3. The glossy
and matte versions differ in surface roughness, which equals 0 and 0.15, respectively.
• Case 6: Sphere, matte→glossy. These are the segmented photographs of physical
objects from the Plastique collection [
            <xref ref-type="bibr" rid="ref9">40</xref>
            ]. The images are the same as 2A and 2B in [10].
          </p>
          <p>Additionally, we considered the case where we attempted to make the translucent object
shown in Case 1 appear opaque (see Fig. 5). The principle is the following: we need to increase
the luminance contrast between specular and non-specular areas. We cannot decrease the
luminance in non-specular areas, as a projection system cannot remove energy from the scene.
However, we can increase the luminance contrast by adding energy to the specular
areas. The brightened areas increase the luminance contrast and thus the apparent opacity. The exact
calculations are given in the next section.
<sup>3</sup>In Mitsuba [37], colors can be specified both as RGB triplets and as HTML-type hexadecimal values.</p>
        </sec>
      </sec>
      <sec id="sec-2-2">
        <title>2.3. Calculation of a compensation image</title>
        <p>
          As translucent objects usually exhibit larger luminance in the critical regions (edges, concavities
and convexities, thin parts [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]) than their opaque counterparts, and respectively, glossy objects
have high-intensity specular highlights unlike matte ones, the compensation image is calculated
as a pixel-wise diference for each of the RGB channels between translucent and opaque, and
glossy and matte images. However, we remove gamma in the calculation process to get the
luminance. The images are converted to double precision variable before calculations, all
negative values in the compensation image are set to zero4, and then the compensation image
is converted back to 24-bit unsigned integer. Finally, if synthetic images include noticeable
unintended noise, in order to avoid grainy appearance, compensation image is blurred (here
it was used in Case 1 only). The pseudo-code can be found below. See MATLAB scripts in
Supplementary Materials.
        </p>
        <p>original = (double(Original)/255)^γ
target = (double(Target)/255)^γ
difference = target − original
if difference &lt; 0 then
    difference = 0
end if
compensation = uint8(difference^(1/γ) × 255)
compensation = blur(compensation)</p>
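        <p>The pseudo-code above can be sketched in Python with numpy; the gamma value of 2.2 and the variable names are illustrative assumptions (the actual MATLAB scripts are in the Supplementary Materials), and the optional blur step is omitted:</p>

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma; removed to work in (approximately) linear luminance

def compensation_image(original, target):
    # Linearize both 8-bit images, take the pixel-wise difference,
    # set negative values to zero, and re-encode to 8-bit.
    orig_lin = (original.astype(np.float64) / 255.0) ** GAMMA
    targ_lin = (target.astype(np.float64) / 255.0) ** GAMMA
    diff = np.clip(targ_lin - orig_lin, 0.0, None)  # negative values set to 0
    return np.uint8(np.round(diff ** (1.0 / GAMMA) * 255.0))

# One pixel where the target is brighter (non-zero compensation)
# and one where it is darker (compensation clipped to zero):
original = np.array([[100, 200]], dtype=np.uint8)
target = np.array([[180, 150]], dtype=np.uint8)
comp = compensation_image(original, target)
```

        <p>Applying the same function per RGB channel reproduces the channel-wise calculation described above.</p>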
        <p>The compensation image for simulating radiance removal, used to opacify a translucent object, has
to be calculated in a different manner, as the target image has image cues that are darker than
the original. In this case, we first multiply the target translucent image by a boost factor. We
calculate the boost factor that provides a difference image between the boosted target and the original
with all-positive pixel values. By projecting this difference image on the original image, we
obtain an image brighter than the target that looks opaque, because the image cues that
determine the opacity appear darker in comparison to their brighter surroundings. The
pseudo-code is given below:
target = (double(Target)/255)^γ
original = (double(Original)/255)^γ
target_boosted = target × boost
difference = target_boosted − original
compensation = uint8(difference^(1/γ) × 255)</p>
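        <p>Under the same illustrative assumptions (gamma of 2.2, hypothetical variable names), the boosted variant can be sketched as follows; here the boost factor is chosen as the smallest scale that makes every pixel of the boosted target at least as bright as the original:</p>

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma, as in the previous sketch

def opacifying_compensation(original, target):
    # Boost the (translucent) target until boosted_target - original is
    # non-negative at every pixel, then encode that all-positive
    # difference as the compensation image.
    orig_lin = (original.astype(np.float64) / 255.0) ** GAMMA
    targ_lin = (target.astype(np.float64) / 255.0) ** GAMMA
    mask = targ_lin > 0  # exclude zero-target pixels from the ratio
    boost = max(np.max(orig_lin[mask] / targ_lin[mask]), 1.0)
    diff = np.clip(boost * targ_lin - orig_lin, 0.0, 1.0)
    return np.uint8(np.round(diff ** (1.0 / GAMMA) * 255.0)), boost

# The pixel where the target (80) is darker than the original (120)
# forces a boost factor above 1:
original = np.array([[50, 120]], dtype=np.uint8)
target = np.array([[200, 80]], dtype=np.uint8)
comp, boost = opacifying_compensation(original, target)
```

        <p>Projecting this compensation image onto the original then yields an image brighter than the target, in which the opacity-defining cues appear relatively darker.</p>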
        <p>The assumption in the former method, namely that the target glossy and translucent images exhibit
intensities higher than or equal to those of their matte and opaque counterparts at all pixels, does not always
hold (see Fig. 3). A generalized method can be developed based on the boosting approach, which
will also be applicable for translucency and glossiness enhancement. The respective script can
be found in the Supplementary Materials.
<sup>4</sup>Negative values represent parts where the object is brighter than the target; when the presence of negative values
is a limiting factor for the realism of the result, we use the second type of calculation (see below).</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Results and Discussion</title>
      <p>The results are illustrated in Fig. 4. It is worth mentioning that in order to show the projection
performance, all images shown in Fig. 4 are actual projections captured by a DSLR camera. The
original images used as input to the projection system can be found in Supplementary
Materials. The effects of appearance manipulation can be better detected in video demonstrations
that are also included in Supplementary Materials.</p>
      <p>Fig. 4 shows that the approach has to some extent succeeded in all cases. However, it is worth
mentioning that the enhancement results and respective target images do not match perfectly,
which can be explained by multiple factors: first of all, we have not implemented any color
management solution to compensate for cross-projector discrepancies, and thus, we cannot
rule out that the residual difference between the enhancement and target images is caused by
the difference between the characteristics of the two projectors; secondly, in some cases, the
pixels in the original image have higher values than corresponding pixels in the target image,
which leads to negative values in the compensation image. As negative values are set to 0 in
the compensation image, the enhancement image appears lighter than the target. Furthermore,
black level of the compensating projector is non-zero, which also introduces unintended energy
to the scene and leads to lighter-than-intended appearance of the enhancement result.</p>
      <p>For a bumpy object, where the contrast between specular highlights and shadows below
the bumps is a major cue to translucency, the compensation image primarily brightens the
shadowed areas (Fig. 4 - Case 1). This also eliminates the correlation between intensities
and 3D surface geometry, and the surface normals facing away from the illumination are not
dark anymore, which according to Marlow et al. [35, 36] is one of the fundamental factors
evoking translucency perception. The enhanced result looks more translucent than the target image,
and enhancement of the target further inverts the contrast and produces highly transparent
bumps on an opaque-looking sphere, which appears to some extent unrealistic. In case 2 of Fig. 4,
the compensation image produces a convincing translucent appearance through a luminance gradient
which decreases proportionally with the distance from the incidence surface. Enhancement of
the target further increases apparent penetration depth and thus, translucency.</p>
      <p>In case 3 of Fig. 4, the compensation image produced convincing translucent appearance by
brightening dark areas near the contours, and especially at thin parts. However, the apparent
lightness of the result is higher than that of the target. In case 4, opaque Lambertian back-lit
object is made transparent by projection of high-intensity luminance gradient, which in some
areas is contrast-reversed as noticed by Motoyoshi [8]. Brightening thin parts, which is a major
cue to translucency and transparency in this scene, produces a convincing semi-transparent
or translucent appearance. However, the resulting image is lighter than the target. In case
5, specular highlights are superimposed on top of a matte object, producing a convincing
glossy look. However, this is not due to the increased skewness of the luminance histogram
proposed in [11], even though the skewness is indeed increased by the introduction of the highlights. Due to
perfect registration of the two images and identical geometry, the highlights are superimposed
at the areas where they would naturally occur, satisfying photo-geometric constraints necessary
for evoking gloss perception [33, 10, 34]. If the projectors are misregistered and the highlights
are translated, apparent glossiness will disappear, even though the skewness of the histogram
will remain unchanged. The result is lighter, because in non-specular areas the matte surface is
lighter than the glossy surface. Finally, a convincing gloss appearance is generated in case 6 too,
as the mirror-reflection of the environment is perfectly aligned with the surface geometry of the
matte sphere. However, in this case, both the resulting image and further enhancement appear
darker than the original image (which can be partially attributed to simultaneous contrast
effects, as the pixel values of the DSLR-captured enhanced version are higher than those of the
target).</p>
      <p>We also attempted opacifying a translucent image with a compensation image. The results are
illustrated in Fig. 5. The resulting image is brighter than the target, which was expected, because
the intensities are scaled up in many parts of the image to increase the luminance contrast
with the areas where little or no energy is added. Due to linear addition of the intensities,
adding compensation image to the target image increased the contrast between specular and
non-specular areas. This decreased the apparent translucency, demonstrating that the approach
is promising. However, the opacity of the target material has not been reached, as the energy
below bumps in the resulting translucent image does not appear as dark as the shadows in the
target opaque image. Hence, a larger dynamic range of the projection system might be needed to
achieve a satisfactory result. Besides, as the luminance contrast is sufficiently large in the
target image, its enhancement affected the apparent lightness, but did not further increase the apparent
opacity.</p>
      <p>We compared our results with another work manipulating translucency and gloss by SAR [20].
While their inversion of the low-frequency component and brightening of the contours render a
ghostlike, atmospheric translucency, and their spatially-blind histogram skewness manipulation darkens
significant areas of the image, making them nearly indiscernible (see the dwarf and bear in Fig. 3
of [20]), our method produces a more realistic appearance with fewer unintended artifacts. Besides,
the ground truth enables us to evaluate the performance, while no clear metric is available
in [20] to assess whether the proposed method performs as intended.</p>
      <p>The method introduced in this article can be useful for manipulating appearance in
pre-rendered images and videos under controlled observation conditions, such as screenings of
animation in a dark hall. However, this method has two major limitations: first of all, the
image of the ground-truth target material is not always readily available for calculation of a
compensation image; secondly, real-world scenes might be dynamic and variable, which requires
adaptive recalculation of the compensation image. These limitations especially arise when the
appearance of real 3D objects is to be manipulated instead of the appearance of virtual objects
in projected 2D images. This work is just a first step to test the feasibility of appearance
manipulation by simple algebraic addition of radiance. To manipulate appearance in real-life
scenarios, a camera-feedback system will be needed, which will capture the current scene, analyze
appearance, 3D geometry, and illumination, and calculate a compensation image in real time;
this itself will require the sophisticated hardware and software implementation discussed in
the subsequent paragraphs.</p>
      <p>Our results demonstrate the feasibility of appearance manipulation with simple linear addition of
radiant energy. The availability of the ground-truth target simplified the calculation of the
compensation image. The next stage of this work is extending it to the appearance manipulation
of real 3D objects rather than that of a projected image, and automating the process with a
projector-camera feedback framework, similarly to [20]. A physically-based ground-truth image
and a reliable compensation image can be calculated offline. However, as physically-based
rendering is overly time-consuming, considerable shortcuts will be needed for real-time solutions.
A fundamental aspect of appearance manipulation is the correct estimation of the 3D surface geometry.
Apparent 3D shape and self-occluding contours have been proposed to play a significant role in translucency
appearance. If the luminance intensities co-vary with the 3D surface geometry, the object appears
opaque [35, 36]. To eliminate this co-variation by projecting additional energy (e.g. to the
previously shadowed areas, as in Case 1), it is essential to properly estimate the 3D geometry.
This has a vital importance for gloss perception as well, as the distribution and shape of the
specular highlights are also defined by the 3D geometry of the object [33, 10, 34].</p>
      <p>
        Additional equipment, such as depth cameras, additional projectors for photometric stereo [
        <xref ref-type="bibr" rid="ref10">21,
41</xref>
        ] and further correction of the estimated surface normals [31], as well as machine learning
techniques might be needed to estimate the 3D shape in a reliable manner. Once the 3D shape
is estimated, real-time calculation of a compensation image to eliminate shape and shading
co-variation, or putting specular highlights at geometrically accurate locations becomes feasible,
especially with the help of state-of-the-art machine learning techniques [
        <xref ref-type="bibr" rid="ref11">42</xref>
        ]. We will address
this topic in future works.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>In this work, we demonstrated that gloss and translucency appearance in Spatial Augmented
Reality can be manipulated with simple algebraic addition of radiance energy. The energy – the
compensation image – that needs to be projected is found as the difference between
physically-based renderings of the original material and the target material we want to mimic. Unlike previous studies,
we had access to the ground-truth target image in this work, which enabled us to estimate the
magnitude of the required manipulation, assess the quality of the result, and ensure that the luminance
gradient is both physically realistic and satisfies photogeometric constraints (however, in some
real-life and real-time applications, this might not be possible). Besides, we proposed a novel
method to simulate energy removal from the scene to decrease apparent translucency and
increase apparent opacity of objects.</p>
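As a minimal sketch of the additive manipulation described above (our own illustration; it assumes co-registered, linear radiance-space images and ignores the projector's black level and gamma; the function names are hypothetical), a compensation image can be computed and applied as follows:

```python
import numpy as np

def compensation_image(original, target):
    """Per-pixel luminance that a projector should add so that the
    original surface mimics the target appearance.

    A projector can only add light, so negative differences (regions
    where the target is darker than the original) are clipped to zero.
    """
    return np.clip(target - original, 0.0, None)

def projected_result(original, comp):
    # Additive superposition of projected light onto the original image
    return original + comp
```

Where the target is darker than the original (e.g. when increasing apparent opacity), simple addition is insufficient; the energy-removal simulation proposed in this work addresses precisely those cases.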
      <p>In future work, the appearance of real 3D objects should be manipulated with an automated
projector–camera feedback process. While physically-based renderings can be used to calculate
compensation images offline, a more sophisticated approach is needed for real-time performance.
Calculating a compensation image requires an estimate of the 3D surface geometry, which
can be obtained with depth cameras, photometric stereo, and machine learning techniques.
</p>
      <p>Imaging 5 (2022) 000501–1–000501–18. URL: https://doi.org/10.2352/J.Percept.Imaging.2022.5.000501.
[8] I. Motoyoshi, Highlight–shading relationship as a cue for the perception of translucent and transparent materials, Journal of Vision 10 (2010) 1–11. URL: https://doi.org/10.1167/10.9.6.
[9] I. Gkioulekas, B. Walter, E. H. Adelson, K. Bala, T. Zickler, On the appearance of translucent edges, in: Proceedings of the 28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Computer Vision Foundation, 2015, pp. 5528–5536.
[10] D. Gigilashvili, M. Tanaka, M. Pedersen, J. Y. Hardeberg, Image statistics as glossiness and translucency predictor in photographs of real-world objects, in: Proceedings of the 10th Colour and Visual Computing Symposium (CVCS 2020), volume 2688, CEUR Workshop Proceedings, 2020, pp. 1–15.
[11] I. Motoyoshi, S. Nishida, L. Sharan, E. H. Adelson, Image statistics and the perception of surface qualities, Nature 447 (2007) 206–209. URL: https://doi.org/10.1038/nature05724.
[12] F. B. Leloup, G. Obein, M. R. Pointer, P. Hanselaer, Toward the soft metrology of surface gloss: A review, Color Research &amp; Application 39 (2014) 559–570. URL: https://doi.org/10.1002/col.21846.
[13] P. J. Marlow, B. L. Anderson, Generative constraints on image cues for perceived gloss, Journal of Vision 13 (2013) 1–23. URL: https://doi.org/10.1167/13.14.2.
[14] F. Pellacini, J. A. Ferwerda, D. P. Greenberg, Toward a psychophysically-based light reflection model for image synthesis, in: SIGGRAPH ’00: Proceedings of the 27th Annual Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-Wesley Publishing Co., New York, NY, 2000, pp. 55–64.
[15] J.-B. Thomas, J. Y. Hardeberg, G. Simone, Image contrast measure as a gloss material descriptor, in: Proceedings of the 6th International Workshop on Computational Color Imaging (CCIW2017), Springer, Cham, 2017, pp. 233–245.
[16] D. Gigilashvili, P. Urban, J.-B. Thomas, M. Pedersen, J. Y. Hardeberg, Perceptual navigation in absorption-scattering space, in: Color and Imaging Conference, volume 2021, Society for Imaging Science and Technology, 2021, pp. 328–333.
[17] H. Benko, A. D. Wilson, F. Zannier, Dyadic projected spatial augmented reality, in: Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology, 2014, pp. 645–655.
[18] R. Raskar, G. Welch, H. Fuchs, Spatially augmented reality, Augmented Reality: Placing Artificial Objects in Real Scenes (1999) 64–71.
[19] J. Underkoffler, B. Ullmer, H. Ishii, Emancipated pixels: real-world graphics in the luminous room, in: Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques, 1999, pp. 385–392.
[20] T. Amano, Manipulation of material perception with light-field projection, in: Three-Dimensional Imaging, Visualization, and Display 2019, volume 10997, International Society for Optics and Photonics, 2019, pp. 1099706:1–1099706:13.
[21] O. Wang, M. Fuchs, C. Fuchs, H. P. Lensch, J. Davis, H.-P. Seidel, A context-aware light source, in: 2010 IEEE International Conference on Computational Photography (ICCP), IEEE, 2010, pp. 1–8.
[22] J. Stenger, N. Khandekar, R. Raskar, S. Cuellar, A. Mohan, R. Gschwind, Conservation of a room: A treatment proposal for Mark Rothko’s Harvard Murals, Studies in Conservation 61 (2016) 348–361.
[23] D. Vázquez, A. Fernández-Balbuena, H. Canabal, C. Muro, D. Durmus, W. Davis, A. Benítez, S. Mayorga, Energy optimization of a light projection system for buildings that virtually restores artworks, Digital Applications in Archaeology and Cultural Heritage 16 (2020) 1–13.
[24] T. Yoshida, C. Horii, K. Sato, A virtual color reconstruction system for real heritage with light projection, in: Proceedings of VSMM, volume 3, Citeseer, 2003, pp. 1–7.
[25] D. G. Aliaga, A. J. Law, Y. H. Yeung, A virtual restoration stage for real-world objects, in: ACM SIGGRAPH Asia 2008 papers, 2008, pp. 1–10.
[26] X. Calixte, P. Leclercq, The interactive projection mapping as a spatial augmented reality to help collaborative design: Case study in architectural design, in: International Conference on Cooperative Design, Visualization and Engineering, Springer, 2017, pp. 143–152.
[27] M. K. Park, K. J. Lim, M. K. Seo, S. J. Jung, K. H. Lee, Spatial augmented reality for product appearance design evaluation, Journal of Computational Design and Engineering 2 (2015) 38–46.
[28] O. Bimber, D. Iwai, Superimposing dynamic range, ACM Transactions on Graphics (TOG) 27 (2008) 1–8.
[29] R. Raskar, G. Welch, K.-L. Low, D. Bandyopadhyay, Shader lamps: Animating real objects with image-based illumination, in: Eurographics Workshop on Rendering Techniques, Springer, 2001, pp. 89–102.
[30] T. Amano, S. Ushida, Y. Miyabayashi, Dependent appearance-manipulation with multiple projector-camera systems, in: Proceedings of the 27th International Conference on Artificial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments, 2017, pp. 101–107.
[31] T. Okazaki, T. Okatani, K. Deguchi, A projector-camera system for high-quality synthesis of virtual reflectance on real object surfaces, Information and Media Technologies 5 (2010) 691–703.
[32] A. J. Law, D. G. Aliaga, B. Sajadi, A. Majumder, Z. Pizlo, Perceptually based appearance modification for compliant appearance editing, in: Computer Graphics Forum, volume 30, Wiley Online Library, 2011, pp. 2288–2300.
[33] B. L. Anderson, J. Kim, Image statistics do not explain the perception of gloss and lightness, Journal of Vision 9 (2009) 1–17. URL: https://doi.org/10.1167/9.11.10.
[34] J. Kim, P. Marlow, B. L. Anderson, The perception of gloss depends on highlight congruence with surface shading, Journal of Vision 11(9) (2011) 1–19. URL: https://doi.org/10.1167/11.9.4.
[35] P. J. Marlow, B. L. Anderson, The cospecification of the shape and material properties of light permeable materials, Proceedings of the National Academy of Sciences 118 (2021) 1–10. URL: https://doi.org/10.1073/pnas.2024798118.
[36] P. J. Marlow, J. Kim, B. L. Anderson, Perception and misperception of surface opacity, Proceedings of the National Academy of Sciences 114 (2017) 13840–13845. URL: https://doi.org/10.1073/pnas.1711416115.
[37] W. Jakob, Mitsuba Renderer, 2010. URL: http://www.mitsuba-renderer.org.
[38] P. Urban, T. M. Tanksale, A. Brunton, B. M. Vu, S. Nakauchi, Redefining A in RGBA: Towards a standard for graphical 3D printing, ACM Transactions on Graphics (TOG) 38</p>
    </sec>
    <sec id="sec-5">
      <title>A. Supplementary Materials</title>
      <p>Supplementary materials can be accessed at the following repository:
https://github.com/davitgigilashvili/Appearance-Manipulation-Supplementary-Material/.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>ASTM International</string-name>
          , E284-17 - Standard Terminology of Appearance,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Pointer</given-names>
            <surname>Michael</surname>
          </string-name>
          (
          <article-title>Chairman of Technical Committee 1-65), A framework for the measurement of visual appearance</article-title>
          ,
          <source>Technical Report CIE 175:2006</source>
          , International Commission on Illumination,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <article-title>Visual perception of materials and surfaces</article-title>
          ,
          <source>Current Biology</source>
          <volume>21</volume>
          (
          <year>2011</year>
          )
          <fpage>R978</fpage>
          -
          <lpage>R983</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Chadwick</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Kentridge</surname>
          </string-name>
          ,
          <article-title>The perception of gloss: A review</article-title>
          ,
          <source>Vision Research</source>
          <volume>109</volume>
          (
          <year>2015</year>
          )
          <fpage>221</fpage>
          -
          <lpage>235</lpage>
          . URL: https://doi.org/10.1016/j.visres.2014.10.026.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Fleming</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. H.</given-names>
            <surname>Bülthoff</surname>
          </string-name>
          ,
          <article-title>Low-level image cues in the perception of translucent materials</article-title>
          ,
          <source>ACM Transactions on Applied Perception (TAP) 2</source>
          (
          <year>2005</year>
          )
          <fpage>346</fpage>
          -
          <lpage>382</lpage>
          . URL: https://doi.org/10.1145/1077399.1077409.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gigilashvili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Hardeberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pedersen</surname>
          </string-name>
          ,
          <article-title>Translucency perception: A review</article-title>
          ,
          <source>Journal of Vision</source>
          <volume>21</volume>
          (
          <issue>8</issue>
          ):4 (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>41</lpage>
          . URL: https://doi.org/10.1167/jov.21.
          <issue>8</issue>
          .4.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Gigilashvili</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Urban</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pedersen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Hardeberg</surname>
          </string-name>
          ,
          <article-title>The impact of optical and geometrical thickness on perceived translucency differences</article-title>
          ,
          <source>Journal of Perceptual</source>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>14</lpage>
          . URL: https://doi.org/10.1145/3319910.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [39]
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Jensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. R.</given-names>
            <surname>Marschner</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Levoy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Hanrahan</surname>
          </string-name>
          ,
          <article-title>A practical model for subsurface light transport</article-title>
          ,
          <source>in: Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques</source>
          ,
          <year>2001</year>
          , pp.
          <fpage>511</fpage>
          -
          <lpage>518</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [40]
          <string-name>
            <given-names>J.-B.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Deniel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Hardeberg</surname>
          </string-name>
          ,
          <article-title>The plastique collection: A set of resin objects for material appearance research</article-title>
          ,
          <source>XIV Conferenza del Colore</source>
          , Florence, Italy (
          <year>2018</year>
          ) 12 pages.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [41]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Woodham</surname>
          </string-name>
          ,
          <article-title>Photometric method for determining surface orientation from multiple images</article-title>
          ,
          <source>in: Shape from Shading</source>
          ,
          <year>1989</year>
          , pp.
          <fpage>513</fpage>
          -
          <lpage>531</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [42]
          <string-name>
            <given-names>K. R.</given-names>
            <surname>Storrs</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B. L.</given-names>
            <surname>Anderson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. W.</given-names>
            <surname>Fleming</surname>
          </string-name>
          ,
          <article-title>Unsupervised learning predicts human perception and misperception of gloss</article-title>
          ,
          <source>Nature Human Behaviour</source>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . URL: https://doi.org/10.1038/s41562-021-01097-6.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>