Appearance Manipulation in Spatial Augmented
Reality using Image Differences
Davit Gigilashvili*,† , Giorgio Trumpy†
Colourlab, Department of Computer Science, Norwegian University of Science and Technology, Teknologivegen 22, 2815,
Gjøvik, Norway
The 11th Colour and Visual Computing Symposium, September 08–09, 2022, Gjøvik, Norway
* Corresponding author.
† These authors contributed equally.
davit.gigilashvili@ntnu.no (D. Gigilashvili); giorgio.trumpy@ntnu.no (G. Trumpy)
ORCID: 0000-0002-6956-6569 (D. Gigilashvili); 0000-0001-9534-0507 (G. Trumpy)


Abstract
Rapidly emerging augmented reality technologies enable us to virtually alter the appearance of objects and materials in a fast and efficient way. State-of-the-art research shows that the human visual system has a poor ability to invert the optical processes in the scene and rather relies on image cues and the spatial distribution of luminance to perceive appearance attributes such as gloss and translucency. For this reason, we hypothesize that it is possible to alter gloss and translucency appearance by projecting an image onto the original to mimic the luminance distribution characteristic of the target appearance. To demonstrate the feasibility of this approach, we use pairs of physically-based renderings of glossy and matte, and translucent and opaque, materials, respectively; we calculate a compensation image – the luminance difference between them – and subsequently demonstrate that, by algebraic addition of luminance, an image of a matte object can appear glossy and an image of an opaque object can appear translucent when the respective compensation images are projected onto them. Furthermore, we introduce a novel method to increase the apparent opacity of translucent materials. Finally, we propose a future direction that could enable nearly real-time appearance manipulation.

Keywords
Augmented reality, appearance, translucency, gloss




1. Introduction
Appearance of objects and materials, i.e. "the collected visual aspects" [1], is an important property that oftentimes defines our appraisal of and interaction with them. Perception of appearance by the human visual system (HVS) is a complex psychovisual process, and it is usually categorized into the perception of four basic attributes: color, gloss, translucency, and texture [2]. Although the exact mechanisms of appearance perception remain largely unknown, recent studies to a large extent agree that the HVS does not invert the optical processes in the scene, but rather relies on particular regularities in the images, dubbed image cues [3, 4, 5, 6]. Different studies have proposed and investigated links between particular image cues and appearance attributes, such as the contrast between specular and non-specular regions [7, 8] or the brightness of the edges [5, 6, 9] for translucency, and luminance histogram asymmetry [10, 11] and the sharpness, contrast, or coverage area of the specular highlights [12, 13, 14, 15] for gloss.
Manipulation of appearance by changing the optical properties of a material is a costly and time-consuming process, with limited predictability and a need for trial and error [7, 16].
   One way to manipulate the appearance of objects and materials is Spatial Augmented Reality (SAR) [17, 18, 19]. In typical augmented reality, virtual graphical renderings are superimposed over real-world images with the help of additional visualization hardware, such as a head-mounted display. Spatial Augmented Reality instead manipulates the appearance of real objects and scenes with real-world illumination design rather than virtual overlays on see-through displays, and can thus be observed with the naked eye [20, 21]. The most popular technique for SAR is projection mapping, in which a projector projects a pre-calculated image on top of a real object to achieve the desired appearance [17, 18, 19]. As an efficient and minimally invasive technique, SAR is used in a broad range of applications, such as virtual restoration of artworks [22, 23, 24] and cultural heritage artifacts [25], architecture [26], product design and evaluation [27], and enhancement of the apparent dynamic range of textured surfaces [28].
   One of the first demonstrations of SAR was the seminal work by Raskar et al. [29]. The authors proposed Shader Lamps – a system that projected virtual textures and shadows onto a white Lambertian 3D model of the Taj Mahal to simulate virtual appearance. Multiple works have enhanced the approach since then to highlight scene elements, e.g. edges [21], and to manipulate apparent BRDF and surface material [30, 31]. Such systems usually rely on a complex hardware infrastructure: a camera provides input on the scene, an algorithm uses this input to calculate an image, and the image is then projected onto the scene element [30, 31]. The calculations usually involve estimation of surface normals and albedo. Some algorithms are perceptually based, to ensure the generation of perceptually accurate color appearance [32].
   A recent work by Amano [20] uses a perceptually-based algorithm to manipulate gloss and translucency appearance by projection mapping. The author relies on the observations of Motoyoshi, who argues that skewness of the luminance histogram, or a similar measure of histogram asymmetry, is used for gloss perception [11], and that the "spatial and contrast relationship between specular highlights and non-specular shading patterns is a robust cue for perceived translucency of three-dimensional objects" [8]. Amano [20] proposes a projector-camera feedback system in which an input image detected by a camera is processed by a tone-mapping algorithm: to alter gloss appearance, it manipulates the histogram skewness of the captured image; to increase apparent translucency, it increases the brightness near the contours while inverting the low-spatial-frequency component. The validity of this approach depends on the validity of Motoyoshi's statements. Subsequent studies revealed a limitation in Motoyoshi's work on gloss, as the image cues to gloss turned out to be subject to strict photo-geometric constraints that are not captured by a spatially blind luminance histogram [33, 10, 34]. Also, the contrast between specular and non-specular regions is diagnostic for translucency only to a limited extent [6], and the translucency perception process likely also includes the perception of 3D surface geometry [35, 36].
   Another limitation of all previous studies is the fact that the target appearance is loosely defined and lacks a ground truth that could enable assessment of performance. In this work, we hypothesize that a linear difference image (expressing luminance without gamma encoding) between glossy (target) and matte (original), as well as between translucent (target) and opaque (original) objects (Eq. 1), can mimic the target appearance by simple pixel-by-pixel algebraic addition of this difference to the original (Eq. 2).

$\mathit{Difference}[i,j,z] = \max\big(0,\ \mathit{Target}[i,j,z] - \mathit{Original}[i,j,z]\big)$   (1)

$\mathit{Target}[i,j,z] \approx \mathit{Original}[i,j,z] \oplus \mathit{Difference}[i,j,z]$   (2)

where Difference is the difference image (negative values are set to 0), Target is either the glossy or the translucent image, Original is, respectively, either the matte or the opaque image, i and j are the spatial pixel coordinates, z is the channel (RGB) index, and ⊕ signifies the projection of the right operand on top of the left one. The presented model assumes that the image cues that distinguish the target (glossy or translucent) from the original (matte or opaque) are always brighter in the target than in the original, so these image cues correspond to positive values in the difference image. Section 2.3 presents an alternative approach that circumvents this assumption; the alternative approach is adopted to project a compensation image that makes a translucent object look opaque (i.e. with darker image cues; refer to Fig. 5).
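   As an illustration, the two equations can be simulated numerically in linear light. The following minimal MATLAB-style sketch is our own illustration (the variable names and the encoding exponent gamma = 2.2 are assumptions made for the example): it linearizes two gamma-encoded 8-bit images, computes the clipped difference of Eq. 1, and approximates the projection operator ⊕ of Eq. 2 as a pixel-wise addition of linear radiance:

   gamma = 2.2;                                   % assumed encoding exponent
   orig_lin = (double(Original) / 255) .^ gamma;  % linearize the original image
   targ_lin = (double(Target) / 255) .^ gamma;    % linearize the target image
   diff_lin = max(targ_lin - orig_lin, 0);        % Eq. (1): clip negative values to zero
   result = min(orig_lin + diff_lin, 1);          % Eq. (2): projection adds linear radiance

In the physical setup, the addition in the last line is performed optically by the second projector rather than in software.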
   Gigilashvili et al. [10] demonstrated earlier that a photograph of a matte object can appear glossy if specular highlights are superimposed on top of it. This approach satisfies the photo-geometric constraints necessary for gloss perception [33, 10, 34], because the ground truth target image used for calculating the difference is a physically accurate rendering, as opposed to the result of global image processing. To demonstrate the feasibility of this approach, we implemented a two-projector system, in which the original and compensation (difference) images are projected onto a white reflective screen. The motivation of this work is threefold:

    • Firstly, we want to demonstrate that gloss and translucency appearance can be manipulated by simple algebraic addition of radiance that mimics the luminance distribution of the target appearance. For this purpose, we investigate several case studies in order to test this approach in action and check the validity of the above-mentioned hypothesis. Unlike many previous studies, we have physically accurate renderings of the desired target appearance, which permit us to compare the results with the ground truth and also to consider the case where the translucency and glossiness in the target image are enhanced further;
    • Secondly, we want to test a novel idea about increasing apparent opacity. We propose and experimentally test a novel method to increase luminance contrast and simulate radiance removal from the scene, in order to manipulate translucency and opacity appearance;
    • Thirdly, based on these simple manipulations in controlled laboratory conditions, we want to discuss prospects and obstacles on the way toward full real-time automation of translucency and gloss enhancement in real-world scenes.

  The article is organized as follows: in the next section, we describe the methodology. Afterward, we present the results of several case studies, followed by a discussion. Finally, we conclude and outline directions for future work toward a more generic, automated approach.
2. Methodology
2.1. Setup
The schematic representation of the setup is shown in Fig. 1, and a photograph of the scene is shown in Fig. 2. We placed two identical Sony VPL-AW15 projectors, stacked on top of each other, in a completely dark room. Each projector was connected to a separate PC. The projectors focused the image on a flat white screen located 2 meters away, parallel to the projection plane. We projected checkerboard images of different colors with each projector, which helped us register the projections manually and ensure a spatial match between the two. We calibrated the two projectors by equalizing their peak brightness levels and white points. The calibration procedure was the following: the two projectors shone full white side by side, and a visual match was achieved by adjusting the brightness and the color balance in their settings panels, while keeping all other settings equal. For documentation purposes, we also carried out a spectral characterization of the projectors; however, we did not develop a color management module at this stage. We projected the primary colors one by one, with levels from 0 to 255, onto the screen and measured the reflected spectrum with a Konica Minolta CS-2000 spectroradiometer in the range of 380-780 nm. Afterward, we projected random chromatic and grayscale colors and measured their spectra to check the additivity of the three channels. The projectors turned out to be nearly perfectly additive.¹
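   To illustrate such an additivity check (a minimal sketch with hypothetical variable names, not the exact measurement script), one can compare the measured spectrum of a mixed color whose channels are all at full level with the sum of the separately measured primaries, subtracting the black-level spectrum that would otherwise be counted three times:

   % spd_r, spd_g, spd_b: measured spectra of the full-level primaries (380-780 nm)
   % spd_black: spectrum measured for input level 0; spd_mix: measured mixed color
   predicted = spd_r + spd_g + spd_b - 2 * spd_black;   % additive prediction
   err = norm(predicted - spd_mix) / norm(spd_mix);     % relative additivity error
   fprintf('Relative additivity error: %.2f%%\n', 100 * err);

For arbitrary channel levels, the per-level primary spectra from the characterization would be used in place of the full-level ones.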
   A Nikon D3200 camera was used to document the results. Each manipulation session was captured with the same exposure parameters. For each session, the shutter speed was adjusted case by case according to the brightest image (the enhancement of the target). For each pair of stimuli, five different projection scenarios were documented:

    1. One projector: Projecting an original image.
    2. One projector: Projecting a target image.
    3. One projector: Projecting a compensation image. The algorithm for calculating the compensation image is presented in Section 2.3.
    4. Two projectors: one projector was projecting an original image, while the second
       one was projecting a compensation image (to obtain the target visualization).
    5. Two projectors: one projector was projecting a target image, while the second
       one was projecting a compensation image (to further modify the appearance).


2.2. Stimuli
We studied six cases that covered different illumination environments, materials, and shapes of different complexities (see Fig. 4). All images are included in the Supplementary Materials.²
¹ For reproducing this work, the measured spectra, projector characterization details, and checkerboard images used for registration are available upon request from the corresponding author.
² Supplementary materials are available upon request from the corresponding author and can also be accessed directly at the following repository: https://github.com/davitgigilashvili/Appearance-Manipulation-Supplementary-Material/
Figure 1: Schematic representation of the setup (distances are not to scale). PC1 drives Projector 1 with the original image, and PC2 drives Projector 2 with the compensation image; the enhanced image forms on the screen and is documented by a camera. The illustrated example manipulates an original opaque image (PC1) with a compensation image (PC2) to produce a translucent appearance on the screen.


In four cases, we attempted to make an opaque image appear translucent. In two cases, we studied the possibility of producing a glossy appearance on a matte image. In five cases, synthetic images were used, generated with physically-based rendering (a bidirectional path tracer) in the Mitsuba renderer [37]. An isotropic phase function was used for all renderings. In the sixth case, photographs of real-world objects were used. The other properties are as follows:

    • Case 1: Bumpy sphere, opaque→translucent. These stimuli are similar to those used in [7]. The spheres are rendered in the Virtual Viewing Booth of [38]. The wavelength-independent absorption and scattering coefficients are 70 cm⁻¹ for the translucent object and 1000 cm⁻¹ for the opaque one. The index of refraction of the material is 1.3, and it is placed in vacuum (1.0). Image dimensions are 1000×1000 px.
    • Case 2: Cuboid, opaque→translucent. The cuboid object was rendered in a Cornell box with a skimmed-milk material as measured by [39]. Similarly to [6], the extinction coefficient was scaled by a factor of 0.005 for the opaque and 0.0005 for the translucent material. Image dimensions are 1000×1000 px.
    • Case 3: Front-lit bust, opaque→translucent. These are the same images as "Translucent Front-lit" and "Opaque Front-lit" from [6]. The bust shape from the Plastique dataset [40] was rendered in Bernhard Vogl's museum environment map embedded in Mitsuba [37]. The wavelength-independent absorption and scattering coefficients equal 1 cm⁻¹ for the translucent object, and 1000000 cm⁻¹ and 0 cm⁻¹, respectively, for the opaque one. The surface roughness, expressed as the root mean square slope of the microfacets, equals 0.15 for both. The material, with an index of refraction of 1.5, is placed in vacuum. Image dimensions are 512×512 px.
    • Case 4: Back-lit bust, opaque→translucent. These are the same images as "Translucent Back-lit" and "White Diffuse Back-lit" from [6]. The translucent material and the rendering conditions are the same as in Case 3. The diffuse opaque object is modeled as a Lambertian surface with a hexadecimal surface color of #ECECEC.³
    • Case 5: Bust, matte→glossy. These are the same images as "Opaque and Specular" and "Opaque and Matte" from [6]. The rendering conditions are the same as in Case 3. The glossy and matte versions differ in surface roughness, which equals 0 and 0.15, respectively.
    • Case 6: Sphere, matte→glossy. These are segmented photographs of physical objects from the Plastique collection [40]. The images are the same as 2A and 2B in [10].

   Additionally, we considered the case where we attempted to make the translucent object shown in Case 1 appear opaque (see Fig. 5). The principle is the following: we need to increase the luminance contrast between the specular and non-specular areas. We cannot decrease the luminance in the non-specular areas, as a projection system cannot remove energy from the scene. However, we can increase the luminance contrast by adding energy to the specular areas. The brighter specular areas increase the luminance contrast and, thus, the apparent opacity. The exact calculations are given in the next section.
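   Formally, a projection system can only add radiance ($\Delta \geq 0$). Denoting the luminances of the specular and non-specular areas by $L_s$ and $L_{ns}$ (our notation, introduced here for illustration), adding energy only to the specular areas still increases a Weber-like contrast:

$$C = \frac{L_s - L_{ns}}{L_{ns}} \quad\longrightarrow\quad C' = \frac{(L_s + \Delta) - L_{ns}}{L_{ns}} > C \quad \text{for } \Delta > 0.$$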

³ In Mitsuba [37], colors can be specified both as RGB triplets and as HTML-type hexadecimal values.
2.3. Calculation of a compensation image
As translucent objects usually exhibit higher luminance in the critical regions (edges, concavities and convexities, thin parts [6]) than their opaque counterparts, and glossy objects, unlike matte ones, have high-intensity specular highlights, the compensation image is calculated as a pixel-wise difference, for each of the RGB channels, between the translucent and opaque images, or between the glossy and matte images. We remove the gamma encoding in the calculation process to obtain luminance. The images are converted to double precision before the calculations, all negative values in the compensation image are set to zero⁴, and the compensation image is then converted back to 24-bit unsigned integers. Finally, if the synthetic images include noticeable unintended noise, the compensation image is blurred to avoid a grainy appearance (here, this was needed in Case 1 only). The pseudo-code is given below; see the MATLAB scripts in the Supplementary Materials.
   gamma = 2.2; sigma = 2;                              % example parameter values
   Original = (double(Original) / 255) .^ gamma;        % linearize (remove gamma encoding)
   Target = (double(Target) / 255) .^ gamma;
   Compensation = max(Target - Original, 0);            % negative values are set to zero
   Compensation = uint8((Compensation .^ (1 / gamma)) * 255);  % re-encode, clip to [0, 255]
   Compensation = imgaussfilt(Compensation, sigma);     % optional blur (used in Case 1 only)
   The compensation image for simulating radiance removal, in order to opacify a translucent object, has to be calculated in a different manner, as the target image has image cues that are darker than the original. In this case, we first multiply the translucent image by a boost factor, chosen so that the resulting compensation image – the boosted translucent image plus the (opaque minus translucent) difference – has no negative pixel values. By projecting this compensation image onto the original translucent image, we obtain an image that is brighter than the target but looks opaque, because the image cues that determine opacity look darker relative to their brighter surroundings. The pseudo-code is given below:
   Translucent = (double(Translucent) / 255) .^ gamma;  % linearize
   Opaque = (double(Opaque) / 255) .^ gamma;
   Translucent_Boosted = Translucent * boostFactor;     % boostFactor keeps the compensation non-negative
   Difference = Opaque - Translucent;
   Compensation = Translucent_Boosted + Difference;
   Compensation = uint8((Compensation .^ (1 / gamma)) * 255);  % re-encode, clip to [0, 255]
   The assumption of the former method – that the target glossy and translucent images exhibit intensities higher than or equal to those of their matte and opaque counterparts in all pixels – does not always hold (see Fig. 3). A generalized method can be developed based on the boosting approach, which would also be applicable to translucency and glossiness enhancement; a sketch is given below. The respective script can be found in the Supplementary Materials.
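   A minimal sketch of one way such a generalized boost factor could be chosen (our illustration, not the exact supplementary script; targ_lin and orig_lin are assumed to be linearized images as above): the smallest factor b satisfying b · Target ≥ Original at every pixel is the maximum of the pixel-wise ratio Original/Target, which guarantees a non-negative compensation image:

   ratio = orig_lin ./ max(targ_lin, eps);         % avoid division by zero
   boostFactor = max(ratio(:));                    % smallest b with b*target >= original everywhere
   comp_lin = boostFactor * targ_lin - orig_lin;   % non-negative by construction

Pixels where the target is nearly black but the original is not can drive the boost factor very high; in practice, such pixels may need to be excluded or the compensation clipped to the projector's range.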


⁴ Negative values represent parts where the original is brighter than the target; when the presence of negative values is a limiting factor for the realism of the result, we use the second type of calculation (see below).
Figure 3: The difference between the grayscale versions of the target and original objects in Case 4. As can be seen, the difference is negative in many parts of the object, which could lead to artifacts if the negative values are simply set to 0.


3. Results and Discussion
The results are illustrated in Fig. 4. It is worth mentioning that in order to show the projection
performance, all images shown in Fig. 4 are actual projections captured by a DSLR camera. The
original images used as input to the projection system can be found in Supplementary Mate-
rials. The effects of appearance manipulation can be better detected in video demonstrations
that are also included in Supplementary Materials.

Figure 4: The results show that the compensation image succeeded in all cases in producing a translucent or glossy appearance, respectively. Although some differences from the target image are noticeable, the results are usually convincing. Further enhancement of the target image also enhances gloss and translucency further, but might produce some artifacts.
   Fig. 4 shows that the approach has succeeded, to some extent, in all cases. However, the enhancement results and the respective target images do not match perfectly, which can be explained by multiple factors. First of all, we have not implemented any color management solution to compensate for cross-projector discrepancies, and thus we cannot rule out that the residual difference between the enhancement and target images is caused by the difference between the characteristics of the two projectors. Secondly, in some cases, pixels in the original image have higher values than the corresponding pixels in the target image, which leads to negative values in the compensation image; as the negative values are set to 0, the enhancement result appears lighter than the target. Furthermore, the black level of the compensating projector is non-zero, which also introduces unintended energy to the scene and leads to a lighter-than-intended appearance of the enhancement result.
   For a bumpy object, where the contrast between the specular highlights and the shadows below the bumps is a major cue to translucency, the compensation image primarily brightens the shadowed areas (Fig. 4 - Case 1). This also eliminates the correlation between the intensities and the 3D surface geometry, so the surface normals facing away from the illumination are no longer dark, which, according to Marlow et al. [35, 36], is one of the fundamental factors evoking translucency perception. The enhanced result looks more translucent than the target image, and enhancement of the target further inverts the contrast and produces highly transparent bumps on an opaque-looking sphere, which appears somewhat unrealistic. In Case 2 of Fig. 4, the compensation image produces a convincing translucent appearance through a luminance gradient that decreases proportionally with the distance from the incidence surface. Enhancement of the target further increases the apparent penetration depth and, thus, the translucency.
   In Case 3 of Fig. 4, the compensation image produced a convincing translucent appearance by brightening the dark areas near the contours, especially at the thin parts. However, the apparent lightness of the result is higher than that of the target. In Case 4, the opaque Lambertian back-lit object is made transparent by the projection of a high-intensity luminance gradient, which in some areas is contrast-reversed, as noticed by Motoyoshi [8]. Brightening the thin parts, which is a major cue to translucency and transparency in this scene, produces a convincing semi-transparent or translucent appearance. However, the resulting image is lighter than the target. In Case 5, specular highlights are superimposed on top of a matte object, producing a convincing glossy look. However, this is not due to the increased skewness of the luminance histogram as proposed in [11], even though the skewness is indeed increased by the introduction of the highlights. Due to the perfect registration of the two images and their identical geometry, the highlights are superimposed at the areas where they would naturally occur, satisfying the photo-geometric constraints necessary for evoking gloss perception [33, 10, 34]. If the projectors were misregistered and the highlights translated, the apparent glossiness would disappear, even though the skewness of the histogram would remain unchanged. The result is lighter, because in the non-specular areas the matte surface is lighter than the glossy surface. Finally, a convincing gloss appearance is generated in Case 6 too, as the mirror reflection of the environment is perfectly aligned with the surface geometry of the matte sphere. However, in this case, both the resulting image and its further enhancement appear darker than the original image (which can be partially attributed to simultaneous contrast effects, as the pixel values of the DSLR-captured enhanced version are higher than those of the target).
   We also attempted to opacify a translucent image with a compensation image. The results are illustrated in Fig. 5. The resulting image is brighter than the target, which was expected, because the intensities are scaled up in many parts of the image to increase the luminance contrast with the areas where little or no energy is added. Due to the linear addition of intensities, adding the compensation image to the target image increased the contrast between the specular and non-specular areas. This decreased the apparent translucency, demonstrating that the approach is promising. However, the opacity of the target material has not been reached, as the areas below the bumps in the resulting translucent image do not appear as dark as the shadows in the target opaque image. Hence, a larger dynamic range of the projection system might be needed to achieve a satisfactory result. Besides, as the luminance contrast is already sufficiently large in the target image, its enhancement affected the apparent lightness, but did not further increase the apparent opacity.

Figure 5: Although the opacifier image considerably decreased the translucency of the original image, it did not reach the opacity of the target image.
   We compared our results with another work manipulating translucency and gloss by SAR [20]. While their inversion of the low-frequency component and brightening of the contours render a ghost-like, atmospheric translucency, and their spatially-blind manipulation of histogram skewness darkens significant areas of the image, making them nearly indiscernible (see the dwarf and bear in Fig. 3 of [20]), our method produces a more realistic appearance with fewer unintended artifacts. Besides, the ground truth enables us to evaluate the performance, while no clear metric is available in [20] to assess whether the proposed method performs as intended.
   The method introduced in this article can be useful for manipulating appearance in pre-rendered images and videos under controlled observation conditions, such as screenings of animation in a dark hall. However, the method has two major limitations: first of all, an image of the ground truth target material is not always readily available for the calculation of a compensation image; secondly, real-world scenes might be dynamic and variable, which requires adaptive recalculation of the compensation image. These limitations especially arise when the appearance of real 3D objects is to be manipulated, instead of the appearance of virtual objects in projected 2D images. This work is just a first step to test the feasibility of appearance manipulation by simple algebraic addition of radiance. To manipulate appearance in real-life scenarios, a camera-feedback system will be needed that captures the current scene, analyzes the appearance, 3D geometry, and illumination, and calculates a compensation image in real time; this will itself require the sophisticated hardware and software implementation discussed in the subsequent paragraphs.
   Our results demonstrate the feasibility of appearance manipulation by simple linear addition of radiant energy. The availability of the ground truth target simplified the calculation of the compensation image. The next stage of this work is to extend it to the appearance manipulation of real 3D objects, rather than of a projected image, and to automate the process with a projector-camera feedback framework, similarly to [20]. A physically-based ground truth image and a reliable compensation image can be calculated offline. However, as physically-based rendering is overly time-consuming, considerable shortcuts will be needed for real-time solutions. A fundamental aspect of appearance manipulation is the correct estimation of the 3D surface geometry. Apparent 3D shape and self-occluding contours supposedly play a significant role in translucency appearance: if the luminance intensities co-vary with the 3D surface geometry, the object appears opaque [35, 36]. To eliminate this co-variation by projecting additional energy (e.g. onto the previously shadowed areas, as in Case 1), it is essential to properly estimate the 3D geometry. This is of vital importance for gloss perception as well, as the distribution and shape of the specular highlights are also defined by the 3D geometry of the object [33, 10, 34].
   Additional equipment, such as depth cameras, additional projectors for photometric stereo [21, 41] with further correction of the estimated surface normals [31], as well as machine learning techniques, might be needed to estimate the 3D shape reliably. Once the 3D shape is estimated, real-time calculation of a compensation image that eliminates the shape-shading co-variation, or that places specular highlights at geometrically accurate locations, becomes feasible, especially with the help of state-of-the-art machine learning techniques [42]. We will address this topic in future works.


4. Conclusion
In this work, we demonstrated that gloss and translucency appearance in Spatial Augmented Reality can be manipulated by simple algebraic addition of radiant energy. The energy that needs to be projected – the compensation image – is found as the difference between physically-based renderings of the original material and the target material we want to mimic. Unlike previous studies, we had access to the ground truth target image, which enabled us to estimate the magnitude of the required manipulation, assess the quality of the result, and ensure that the luminance gradient is both physically realistic and satisfies the photo-geometric constraints (however, in some real-life and real-time applications, this might not be possible). Besides, we proposed a novel method to simulate energy removal from the scene in order to decrease the apparent translucency and increase the apparent opacity of objects.
   In future works, the appearance of real 3D objects should be manipulated with an automated projector-camera feedback process. While physically-based renderings can be used to calculate compensation images offline, a more sophisticated approach is needed for real-time performance. For the calculation of a compensation image, estimation of the 3D surface geometry is essential, which can be achieved with depth cameras, photometric stereo, and machine learning techniques.


References
 [1] ASTM International. E284-17 – Standard Terminology of Appearance, 2017.
 [2] M. Pointer (Chairman of CIE Technical Committee 1-65), A framework for the measurement of visual appearance, Technical Report CIE 175:2006, International Commission on Illumination, 2006.
 [3] B. L. Anderson, Visual perception of materials and surfaces, Current Biology 21 (2011)
     R978–R983.
 [4] A. C. Chadwick, R. W. Kentridge, The perception of gloss: A review, Vision Research 109
     (2015) 221–235. URL: https://doi.org/10.1016/j.visres.2014.10.026.
 [5] R. W. Fleming, H. H. Bülthoff, Low-level image cues in the perception of translucent
     materials, ACM Transactions on Applied Perception (TAP) 2 (2005) 346–382. URL: https:
     //doi.org/10.1145/1077399.1077409.
 [6] D. Gigilashvili, J.-B. Thomas, J. Y. Hardeberg, M. Pedersen, Translucency perception: A
     review, Journal of Vision 21(8):4 (2021) 1–41. URL: https://doi.org/10.1167/jov.21.8.4.
 [7] D. Gigilashvili, P. Urban, J.-B. Thomas, M. Pedersen, J. Y. Hardeberg, The impact of optical
     and geometrical thickness on perceived translucency differences, Journal of Perceptual
     Imaging 5 (2022) 000501–1–000501–18. URL: https://doi.org/10.2352/J.Percept.Imaging.
     2022.5.000501.
 [8] I. Motoyoshi, Highlight–shading relationship as a cue for the perception of translucent and
     transparent materials, Journal of Vision 10 (2010) 1–11. URL: https://doi.org/10.1167/10.9.6.
 [9] I. Gkioulekas, B. Walter, E. H. Adelson, K. Bala, T. Zickler, On the appearance of translucent
     edges, in: Proceedings of the 28th IEEE Conference on Computer Vision and Pattern
     Recognition (CVPR), Computer Vision Foundation, 2015, pp. 5528–5536.
[10] D. Gigilashvili, M. Tanaka, M. Pedersen, J. Y. Hardeberg, Image statistics as glossiness and
     translucency predictor in photographs of real-world objects, in: Proceedings of the 10th
     Colour and Visual Computing Symposium (CVCS 2020), volume 2688, CEUR Workshop
     Proceedings, 2020, pp. 1–15.
[11] I. Motoyoshi, S. Nishida, L. Sharan, E. H. Adelson, Image statistics and the perception of
     surface qualities, Nature 447 (2007) 206–209. URL: https://doi.org/10.1038/nature05724.
[12] F. B. Leloup, G. Obein, M. R. Pointer, P. Hanselaer, Toward the soft metrology of surface
     gloss: A review, Color Research & Application 39 (2014) 559–570. URL: https://doi.org/10.
     1002/col.21846.
[13] P. J. Marlow, B. L. Anderson, Generative constraints on image cues for perceived gloss,
     Journal of Vision 13 (2013) 1–23. URL: https://doi.org/10.1167/13.14.2.
[14] F. Pellacini, J. A. Ferwerda, D. P. Greenberg, Toward a psychophysically-based light
     reflection model for image synthesis, in: SIGGRAPH ’00: Proceedings of the 27th Annual
     Conference on Computer Graphics and Interactive Techniques, ACM Press/Addison-
     Wesley Publishing Co., New York, NY, 2000, pp. 55–64.
[15] J.-B. Thomas, J. Y. Hardeberg, G. Simone, Image contrast measure as a gloss material
     descriptor, in: Proceedings of the 6th International Workshop on Computational Color
     Imaging (CCIW2017), Springer, Cham, 2017, pp. 233–245.
[16] D. Gigilashvili, P. Urban, J.-B. Thomas, M. Pedersen, J. Yngve Hardeberg, Perceptual
     navigation in absorption-scattering space, in: Color and Imaging Conference, volume
     2021, Society for Imaging Science and Technology, 2021, pp. 328–333.
[17] H. Benko, A. D. Wilson, F. Zannier, Dyadic projected spatial augmented reality, in: Pro-
     ceedings of the 27th Annual ACM Symposium on User Interface Software and Technology,
     2014, pp. 645–655.
[18] R. Raskar, G. Welch, H. Fuchs, Spatially augmented reality, Augmented Reality: Placing
     Artificial Objects in Real Scenes (1999) 64–71.
[19] J. Underkoffler, B. Ullmer, H. Ishii, Emancipated pixels: real-world graphics in the lumi-
     nous room, in: Proceedings of the 26th Annual Conference on Computer Graphics and
     Interactive Techniques, 1999, pp. 385–392.
[20] T. Amano, Manipulation of material perception with light-field projection, in: Three-
     Dimensional Imaging, Visualization, and Display 2019, volume 10997, International Society
     for Optics and Photonics, 2019, pp. 1099706:1–1099706:13.
[21] O. Wang, M. Fuchs, C. Fuchs, H. P. Lensch, J. Davis, H.-P. Seidel, A context-aware light
     source, in: 2010 IEEE International Conference on Computational Photography (ICCP),
     IEEE, 2010, pp. 1–8.
[22] J. Stenger, N. Khandekar, R. Raskar, S. Cuellar, A. Mohan, R. Gschwind, Conservation of a
     room: A treatment proposal for Mark Rothko’s Harvard Murals, Studies in Conservation
     61 (2016) 348–361.
[23] D. Vázquez, A. Fernández-Balbuena, H. Canabal, C. Muro, D. Durmus, W. Davis, A. Benítez,
     S. Mayorga, Energy optimization of a light projection system for buildings that virtually
     restores artworks, Digital Applications in Archaeology and Cultural Heritage 16 (2020)
     1–13.
[24] T. Yoshida, C. Horii, K. Sato, A virtual color reconstruction system for real heritage with
     light projection, in: Proceedings of VSMM, volume 3, Citeseer, 2003, pp. 1–7.
[25] D. G. Aliaga, A. J. Law, Y. H. Yeung, A virtual restoration stage for real-world objects, in:
     ACM SIGGRAPH Asia 2008 papers, 2008, pp. 1–10.
[26] X. Calixte, P. Leclercq, The interactive projection mapping as a spatial augmented reality to
     help collaborative design: Case study in architectural design, in: International Conference
     on Cooperative Design, Visualization and Engineering, Springer, 2017, pp. 143–152.
[27] M. K. Park, K. J. Lim, M. K. Seo, S. J. Jung, K. H. Lee, Spatial augmented reality for product
     appearance design evaluation, Journal of Computational Design and Engineering 2 (2015)
     38–46.
[28] O. Bimber, D. Iwai, Superimposing dynamic range, ACM Transactions on Graphics (TOG)
     27 (2008) 1–8.
[29] R. Raskar, G. Welch, K.-L. Low, D. Bandyopadhyay, Shader lamps: Animating real objects
     with image-based illumination, in: Eurographics Workshop on Rendering Techniques,
     Springer, 2001, pp. 89–102.
[30] T. Amano, S. Ushida, Y. Miyabayashi, Dependent appearance-manipulation with multiple
     projector-camera systems, in: Proceedings of the 27th International Conference on Artifi-
     cial Reality and Telexistence and 22nd Eurographics Symposium on Virtual Environments,
     2017, pp. 101–107.
[31] T. Okazaki, T. Okatani, K. Deguchi, A projector-camera system for high-quality synthesis
     of virtual reflectance on real object surfaces, Information and Media Technologies 5 (2010)
     691–703.
[32] A. J. Law, D. G. Aliaga, B. Sajadi, A. Majumder, Z. Pizlo, Perceptually based appearance
     modification for compliant appearance editing, in: Computer Graphics Forum, volume 30,
     Wiley Online Library, 2011, pp. 2288–2300.
[33] B. L. Anderson, J. Kim, Image statistics do not explain the perception of gloss and lightness,
     Journal of Vision 9 (2009) 1–17. URL: https://doi.org/10.1167/9.11.10.
[34] J. Kim, P. Marlow, B. L. Anderson, The perception of gloss depends on highlight congruence
     with surface shading, Journal of Vision 11(9) (2011) 1–19. URL: https://doi.org/10.1167/11.
     9.4.
[35] P. J. Marlow, B. L. Anderson, The cospecification of the shape and material properties of
     light permeable materials, Proceedings of the National Academy of Sciences 118 (2021)
     1–10. URL: https://doi.org/10.1073/pnas.2024798118.
[36] P. J. Marlow, J. Kim, B. L. Anderson, Perception and misperception of surface opacity,
     Proceedings of the National Academy of Sciences 114 (2017) 13840–13845. URL: https:
     //doi.org/10.1073/pnas.1711416115.
[37] W. Jakob, Mitsuba Renderer, 2010. URL: http://www.mitsuba-renderer.org.
[38] P. Urban, T. M. Tanksale, A. Brunton, B. M. Vu, S. Nakauchi, Redefining A in RGBA:
     Towards a standard for graphical 3D printing, ACM Transactions on Graphics (TOG) 38
     (2019) 1–14. URL: https://doi.org/10.1145/3319910.
[39] H. W. Jensen, S. R. Marschner, M. Levoy, P. Hanrahan, A practical model for subsurface
     light transport, in: Proceedings of the 28th Annual Conference on Computer Graphics
     and Interactive Techniques, 2001, pp. 511–518.
[40] J.-B. Thomas, A. Deniel, J. Y. Hardeberg, The plastique collection: A set of resin objects for
     material appearance research, XIV Conferenza del Colore, Florence, Italy (2018) 12 pages.
[41] R. J. Woodham, Photometric method for determining surface orientation from multiple
     images, in: Shape from Shading, 1989, pp. 513–531.
[42] K. R. Storrs, B. L. Anderson, R. W. Fleming, Unsupervised learning predicts human
     perception and misperception of gloss, Nature Human Behaviour (2021) 1–16. URL:
     https://doi.org/10.1038/s41562-021-01097-6.



A. Supplementary Materials
Supplementary materials can be accessed at the following repository:
https://github.com/davitgigilashvili/Appearance-Manipulation-Supplementary-Material/.