    The Backward Photon Mapping for the Realistic Image
                       Rendering*

          Dmitry Zhdanov [0000-0001-7346-8155], Andrey Zhdanov [0000-0002-2569-1982]

             ITMO University, 49 Kronverksky Pr., St. Petersburg, 197101, Russia

                     ddzhdanov@mail.ru, adzhdanov@itmo.ru



        Abstract. The current paper is devoted to realistic rendering methods based
        on bidirectional stochastic ray tracing with photon maps. Research on the
        backward photon mapping method, accounting for both caustic and indirect
        illumination, is presented. By using backward photon maps, the authors
        reduced the amount of data that has to be stored in the photon maps, which
        speeds up the indirect luminance calculation. Methods for constructing a
        tree of backward photon maps and methods of efficient parallelization used
        in the algorithms for accumulating and forming the backward photon maps,
        along with tracing forward and backward rays in the rendering process, are
        considered. Methods to estimate the attained luminance error, both for
        single image pixels and for the entire image, with the designed rendering
        method are presented. The rendering results obtained with the use of the
        developed methods and algorithms are also presented.


        Keywords: Realistic Rendering, Photon Maps, Bidirectional Ray Tracing, Ac-
        curacy Estimation, Parallel Computing, Forward Ray Tracing, Backward Ray
        Tracing, Voxelization, Backward Photon Maps.


1       Introduction

Realistic rendering is a research area of modern realistic visualization, virtual proto-
typing, and virtual reality systems. It is used to solve a wide range of applied
problems, including realistic image formation, optical effects simulation, virtual pro-
totyping of complex optical systems, etc.
    The main algorithms used to implement realistic rendering are based on stochastic
ray tracing methods. In general, the task of realistic rendering is to calculate the
luminance of the observed scene at each point of the image. In its turn, this requires
that the realistic rendering algorithms be based on the physically correct laws of
light propagation and transformation. In 1986 James

Copyright Β© 2020 for this paper by its authors. Use permitted under Creative Commons License
Attribution 4.0 International (CC BY 4.0).

*   This work was funded by the Russian Science Foundation (project No. 18-79-10190).

Kajiya proposed to use the following equation for calculating the visible luminance [1],
which is also called the rendering equation:

$$L(\vec{p}, \vec{v}_r) = \tau(d)\left(L_0(\vec{p}, \vec{v}_r) + \iint_{2\pi} L_i(\vec{p}, \vec{v}_i)\, f(\vec{p}, \vec{v}_i, \vec{v}_r) \cos(\vec{n}, \vec{v}_i)\, d\omega\right) \quad (1)$$

In formula (1) $\vec{p}$ is the observation point, $\vec{v}_r$ is the direction to the observer from the
observation point, $\tau(d)$ is the transmittance between the observer and the observation
point, $L_0(\vec{p}, \vec{v}_r)$ is the self-luminance of the object at the observation point in the
direction to the observer, $L_i(\vec{p}, \vec{v}_i)$ is the luminance of the ambient light in the solid
angle $d\omega$ in the direction $\vec{v}_i$ to the observation point $\vec{p}$, $f(\vec{p}, \vec{v}_i, \vec{v}_r)$ is the Bidirectional
Scattering Distribution Function (BSDF) at the point $\vec{p}$ illuminated from direction $\vec{v}_i$
to the observer in direction $\vec{v}_r$, and $\vec{n}$ is the normal to the surface at the observation point.
    Since the rendering equation has an analytical solution only for a limited number of
combinations of optical material properties and geometric shapes, a large number of
numerical methods have been developed to solve it. The most appropriate approaches
are based on Monte Carlo ray tracing: forward, backward, and bidirectional ray
tracing. These approaches use different solutions for the calculation of the direct
visibility luminance component $L_0(\vec{p}, \vec{v}_r)$ and of the direct, indirect, and caustic
luminance components $\iint_{2\pi} L_i(\vec{p}, \vec{v}_i)\, f(\vec{p}, \vec{v}_i, \vec{v}_r) \cos(\vec{n}, \vec{v}_i)\, d\omega$. Both direct
and caustic luminances are formed by the first component of this infinite recursion;
the main difference between them is that caustic luminance is formed through surfaces
(or gradient media) that do not have diffuse surface properties, so the light source
luminance is simply scaled by the specular transmission coefficient to the observed
surface, $\tau(d_r) L_0(\vec{p}, \vec{v}_r)$. The indirect luminance component is defined by the
remaining components of the infinite sum.
    The main methods used for the luminance calculation are forward, backward, and
bidirectional ray tracing. These methods allow obtaining an unbiased solution of the
rendering equation. However, these methods [2] might not be effective for scenes
with complex indirect and caustic illumination. The traditional bidirectional ray tracing
method [3] can be quite effective for scenes where several diffuse events occur on the
way from the light source to the observed surfaces; however, it has very low efficiency
in the case of caustic illumination, especially if the light sources are rather small. Therefore,
the most universal methods for calculating the physically correct luminance of indirect
and caustic illumination are those based on photon maps [4].
    In 1996 Henrik Wann Jensen proposed [5] to use photon maps for the calculation of
global illumination. The rendering algorithm that uses photon maps consists of the
following three main steps: in the first step, rays are emitted from the light sources
and propagated across the scene, forming the distribution of photons on the surfaces
of the scene; in the second step, photon maps are formed from the photon distribution;
and at the last step, the camera reads the luminance distribution from the photons
observed from the corresponding image pixels. The rendering process with the photon
mapping method is shown in Fig. 1.




                             Fig. 1. The photon mapping method.

The observed luminance is the sum of the luminances of all photons observed on the
backward ray paths:
$$L_{idc}(\vec{p}, \vec{v}_r) \approx \frac{1}{\pi r^2} \sum_{i=1}^{K} f(\vec{p}, \vec{v}_i, \vec{v}_r)\, \Delta\Phi(\vec{v}_i) \quad (2)$$

In formula (2) $K$ is the number of observed photons and $\Delta\Phi(\vec{v}_i)$ is the photon flux in
direction $\vec{v}_i$. This solution is biased, since the luminance is averaged over the integration
sphere of radius $r$; this is the main disadvantage of the photon mapping method.
There are two commonly used ways to select the radius of the integration sphere.
First, the radius can be set during the forward ray tracing. In this case, it has a high
probability of being either overestimated or underestimated, since at that moment there is
no information on how this sphere will be observed. This can slow down the conver-
gence of the rendering or cause the image to blur. Second, the radius of the sphere
can be calculated when tracing backward rays. When a ray passes near a photon, which
is determined using the corresponding voxel structure, the radius of the integration
sphere is calculated from the given solid angle of integration of the indirect or caus-
tic luminance. The disadvantage of this approach is the complexity of the algorithm for
finding the intersection of the ray with photons and, as a result, slower rendering.
Also, the forward photon mapping method generally requires storing the history of the
ray path, i.e. photons on all diffuse surfaces of the scene. For "bounded and bright"
scenes, this can lead to a significant increase in the memory required to store photon maps,
which slows down the rendering process as quite a large amount of data has to be
processed.
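    For illustration, the estimate of formula (2) can be written as a short routine. The following C++ sketch is an assumption about the implementation: the Photon layout, the BSDF callback, and the nearest-photon query producing the `nearby` set are not specified in the paper.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One stored photon: the direction it arrived from and the flux it carries
// (the RGB flux vector is collapsed to a single float for brevity).
struct Photon {
    Vec3  direction;   // v_i
    float flux;        // delta Phi(v_i)
};

// Density estimate of formula (2): luminance at point p towards v_r,
// averaged over the integration sphere of radius r. 'nearby' is assumed to
// hold the photons already located inside that sphere (e.g. by a kd-tree
// query); 'f' is the BSDF callback f(p, v_i, v_r).
template <typename BSDF>
float estimateLuminance(const Vec3& p, const Vec3& v_r, float r,
                        const std::vector<Photon>& nearby, BSDF&& f)
{
    float sum = 0.0f;
    for (const Photon& ph : nearby)
        sum += f(p, ph.direction, v_r) * ph.flux;  // f(p, v_i, v_r) * dPhi
    return sum / (3.14159265f * r * r);            // divide by pi * r^2
}
```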
    In 2005 Vlastimil Havran proposed [6] to use backward photon mapping to ac-
celerate the final gathering for the calculation of indirect illumination. In our opinion,
this method deserves further research and development. So, in this paper, we present
the results of our further research on the backward photon mapping method and the
corresponding methods for the efficient calculation of the indirect and caustic lumi-
nance components with backward photon mapping in a parallel computing environ-
ment.


2      Backward photon mapping

If we assess the physical correctness of the rendering process, then the backward
photon mapping method is similar to the forward photon mapping method. The luminance
of the indirect and caustic illumination is calculated with formula (2), just as in the
forward photon mapping method. The fundamental difference is that instead of forming
illumination photon maps, maps of how the illumination will be observed, i.e.
visibility maps, are formed. Fig. 2 illustrates how backward photon maps, or visibility
maps, work.




                       Fig. 2. The backward photon mapping method.

There are two fundamental differences between the forward and backward photon
mapping methods in the physics of the ray propagation process and the accumulation of
photon maps. First, in the case of forward photon maps, the propagation of the light
rays (or photons) transfers the luminous flux $\Delta\Phi(\vec{v}_i)$. In the case of backward photon
maps, the ray emitted from the camera does not carry a luminous flux; the backward
ray transfers the transmittance $\tau(d)$ from the camera to the point where the luminance
is calculated. Despite the different physical meanings of $\Delta\Phi(\vec{v}_i)$ and $\tau(d)$, for RGB and
simple spectral modeling there are no fundamental differences between them, as both
are vectors. However, if polarization effects are taken into account, then the structure
of the $\Delta\Phi(\vec{v}_i)$ and $\tau(d)$ parameters changes. If the polarization effect is represented by
Stokes vectors, then each color component of $\Delta\Phi(\vec{v}_i)$ becomes a Stokes vector of 4 com-
ponents, and each color component of $\tau(d)$ turns into a 4x4 Mueller matrix. As
a result, if the rendering is performed with polarization effects in mind, then each color
component of the backward photon must store a Mueller matrix, which significantly in-
creases the memory load for backward photon map storage. However, as the polariza-
tion effect is not important in most scenes, it can be neglected.
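    The growth of the photon payload can be made concrete with a small sketch; the structure names and layouts below are illustrative assumptions, not the actual data structures of the implementation.

```cpp
#include <array>

// Payload of a backward photon without polarization: one transmittance
// value tau(d) per color channel, structurally identical to the flux
// delta Phi(v_i) carried by a forward photon.
struct BackwardPhotonRGB {
    std::array<float, 3> tau;                       // tau(d) per R, G, B
};

// Payload with polarization taken into account: each color channel of the
// transmittance turns into a 4x4 Mueller matrix (48 floats overall), while
// a forward photon would carry a 4-component Stokes vector per channel.
struct BackwardPhotonPolarized {
    std::array<std::array<float, 16>, 3> mueller;   // 4x4 matrix per channel
};
```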
    The second important difference is in the method for calculating $BSDF(\vec{p}, \vec{v}_i, \vec{v}_r)$.
According to the Helmholtz reciprocity principle, the following condition must be sat-
isfied:

$$BSDF(\vec{p}, \vec{v}_i, \vec{v}_r) = BSDF(\vec{p}, \vec{v}_r, \vec{v}_i) \quad (3)$$

This condition is valid only for a limited number of surface optical property models,
for example, Lambert or Blinn-Phong. For most other models, including models
obtained by measuring the properties of real-world objects, this condition is not satis-
fied. This means that the rendering result obtained with forward ray tracing will differ
from the result obtained with backward ray tracing. Typically, the result obtained by
forward ray tracing is chosen as the ground truth, so the backward ray tracing result
must be matched with the forward ray tracing one.
    To ensure that the Helmholtz reciprocity principle is satisfied when implementing the
ray scattering on a diffuse surface, we used importance sampling corresponding to
the BSDF of the forward ray path, followed by compensation of $\tau(d)$ by the
BSDF value in the backward path of the same ray. Since the variation of the BSDF between
the forward and backward ray paths generally does not exceed a factor of two, the variation
of the obtained $\tau(d)$ values also does not exceed a factor of two.
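    A minimal sketch of this compensation, under assumed interfaces (both callbacks are placeholders for the renderer's sampling API): when a backward ray scatters on a diffuse surface, the new direction is importance-sampled according to the BSDF of the forward ray path, and the carried transmittance is weighted by the forward-path BSDF value divided by the sampling pdf.

```cpp
#include <functional>

struct Vec3 { float x, y, z; };

// Scatter a backward ray at a diffuse event. v_i is drawn proportionally to
// the forward-path BSDF f(p, v_i, v_r), and tau is compensated so that the
// backward estimate stays consistent with the forward-traced ground truth
// even for non-reciprocal BSDF models.
void scatterBackwardRay(
    const Vec3& p, const Vec3& v_r, Vec3& v_i, float& tau,
    const std::function<Vec3(const Vec3&, const Vec3&, float*)>& sampleForwardBSDF,
    const std::function<float(const Vec3&, const Vec3&, const Vec3&)>& forwardBSDF)
{
    float pdf = 1.0f;
    v_i = sampleForwardBSDF(p, v_r, &pdf);  // sample as the forward path would
    tau *= forwardBSDF(p, v_i, v_r) / pdf;  // compensation of tau
}
```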
    The main advantage of the backward photon mapping method over the forward pho-
ton mapping method is the ability to limit the number of diffuse events that occur with
a ray before the backward photon is created in the map. This limitation of the diffuse ray
tracing depth can significantly reduce the amount of data required to store backward
photon maps, especially for "bounded and bright" scenes with a large number of diffuse
reflections.
    The research has shown that the optimal diffuse ray tracing depth varies between
zero and one for most scenes. A further increase of the diffuse ray tracing depth
leads to a slowdown of the rendering process without a noticeable improvement of the
generated image quality. Even though from a formal point of view (based on the results
of the image accuracy evaluation) a zero diffuse ray tracing depth seems preferable,
for most scenes the optimal diffuse ray tracing depth (based on a visual assessment of
the image quality) is equal to one. This difference is explained by the fact that at
zero diffuse ray path depth the image is more biased, i.e. the sampling error caused
by the limited radius of the integration sphere is more noticeable. Visually, this appears
as high graininess of the image and shadowing around the sharp edges of the scene
objects. It can be noted that for each point in the scene its own "optimal" depth of
the diffuse ray tracing can be chosen, determined by the variation of luminance at the
image point (for example, due to the proximity of the observation point to the light
source or the excess "sharpness" of the BRDF at the observed point). Moreover, the
diffuse ray tracing depth can vary during the luminance calculation.
    The second important advantage of the backward photon mapping method is the
ability to determine the radius of the integration sphere when forming the backward
photon maps. The radius of the integration sphere is based on the value of the acceptable
luminance discretization error, i.e. on the solid angle in which the luminance will be aver-
aged. For convenience, this solid angle can be defined based on the solid angle of one
screen pixel as it is seen by the virtual camera. The sampling error is directly related to
the kind of luminance that is accumulated at the observation point. In the case of
zero diffuse depth of the backward ray, all diffuse surfaces on the ray path accumulate
only direct and caustic illumination before the first diffuse event. This means that the
radius of the integration sphere should be kept minimal, about the size of the image
pixel. After the first diffuse event occurs, only the indirect luminance is accumulated
(the surface illumination also can be either direct or caustic) and the radius of the inte-
gration sphere can be expanded to a size of several pixels.
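    A sketch of deriving the radius from the pixel solid angle follows; the parameter names and the footprint of four pixels after the first diffuse event are assumptions made for illustration.

```cpp
#include <cmath>

// Choose the integration-sphere radius so that the sphere subtends roughly
// 'pixels' pixel footprints at the camera. 'fovY' is the camera's vertical
// field of view in radians, 'imageHeight' the image height in pixels, and
// 'distance' the length of the backward ray path from the camera.
float integrationRadius(float distance, float fovY, int imageHeight,
                        int diffuseDepth)
{
    float pixelAngle = fovY / float(imageHeight);   // angular size of one pixel
    // Before the first diffuse event only direct and caustic illumination is
    // gathered, so the sphere stays about one pixel wide; after it, the
    // indirect luminance allows a footprint of several pixels.
    float pixels = (diffuseDepth == 0) ? 1.0f : 4.0f;
    return distance * std::tan(0.5f * pixels * pixelAngle);
}
```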
    Since the radius of the integration sphere is already known at the moment of forming
the backward photon map, the voxel accelerating structure (a kd-tree) can be built not
just over the backward photon centers but over the whole integration spheres. Studies have
shown that building a voxel acceleration structure for spheres takes from one and a
half to two times longer than building a similar structure for their centers
only; however, the search for the intersection of a forward ray with the photon maps
is executed much faster if the voxel acceleration structure stores integration spheres
rather than centers only. Taking into account the large number of traced
forward rays, the overall gain from using a voxel acceleration structure with
integration spheres can reach one and a half times.
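    One way to build such a tree, shown below as a hedged sketch (the paper does not describe its construction routine), is to reference a sphere in every child node whose half-space it overlaps; straddling spheres go into both children. The duplicated references are consistent with the reported 1.5-2x build time, while a forward ray then only tests the spheres stored in the leaves it traverses.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

static float coord(const Vec3& v, int axis)
{
    return axis == 0 ? v.x : axis == 1 ? v.y : v.z;
}

// Distribute whole integration spheres across the two children of a kd-tree
// node split by 'plane' on 'axis'. A sphere crossing the plane is pushed
// into both 'left' and 'right'.
void splitSpheres(const std::vector<Vec3>& centers,
                  const std::vector<float>& radii,
                  int axis, float plane,
                  std::vector<int>& left, std::vector<int>& right)
{
    for (int i = 0; i < (int)centers.size(); ++i) {
        float c = coord(centers[i], axis);
        if (c - radii[i] <= plane) left.push_back(i);   // overlaps lower half-space
        if (c + radii[i] >= plane) right.push_back(i);  // overlaps upper half-space
    }
}
```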
    Also, a natural solution is to split the backward photon maps by scene objects, i.e.
each scene object has its own backward photon map, in which photons are accumulated
when a ray hits this object. This separation does not increase the memory load when
storing the maps but allows us to speed up both the process of building the voxel
acceleration structure and the search for intersections of rays with the integration spheres.
Edge effects, caused by the fact that the illumination of the edge of one object cannot be
transformed into the luminance at the edge of a neighboring object (even though they
can be covered by the same integration sphere), do not make a significant contribu-
tion to the variation of luminance, since the radius of the integration sphere is rather
small. Also, in some cases, the separation of backward photon maps by scene objects
helps to avoid light leaking into unlit areas.


3      Parallelizing solutions for the backward photon
       mapping method

In the scope of the current research, the presented realistic rendering algorithm based
on backward photon maps was implemented for multicore computers without GPU.
The general scheme of the implemented algorithm is shown in Fig. 3.
   This algorithm resembles the progressive photon mapping rendering algorithm. The
main difference is that the backward rays are traced first, with the light visibility and
direct luminance calculation. Then the backward photon maps are voxelized and the
accelerating structures are built. Subsequent forward ray tracing "throws" photons into
the backward photon maps and forms the indirect and caustic illumination maps. These
maps are used to calculate the luminance of the caustic and secondary illumination.
Since the caustic and indirect illumination maps are relatively small, the process
of "throwing" photons and calculating the luminance is repeated several times, until the
number of backward photons hit by forward rays becomes close to the total number of back-
ward photons. After completing the calculation of all the luminance components, the
accuracy of the generated image is evaluated and, if the required accuracy has not been
achieved, the whole process is repeated from the beginning.




           Fig. 3. Bidirectional ray tracing with backward photon maps algorithm.
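The overall control flow of Fig. 3 can be compressed into a sketch; every identifier below is a placeholder for a stage described in the text, not an actual API of the implementation.

```cpp
// Placeholder stage functions, one per block of Fig. 3.
double targetAccuracy();
void   traceBackwardRays();           // direct luminance + backward photon maps
void   buildAccelerationStructures(); // voxelize per-object backward photon maps
long   hitBackwardPhotons();
long   backwardPhotonCount();
void   tracePortionOfForwardRays();   // e.g. 100,000 - 1,000,000 rays
void   accumulateCausticAndIndirect();
double estimateImageError();

void render()
{
    double error = 1e30;
    while (error > targetAccuracy()) {
        traceBackwardRays();
        buildAccelerationStructures();
        // Forward rays "throw" photons into the backward maps until the
        // number of hit backward photons approaches the total photon count.
        while (hitBackwardPhotons() < backwardPhotonCount()) {
            tracePortionOfForwardRays();
            accumulateCausticAndIndirect();
        }
        error = estimateImageError();  // repeat the whole cycle if too high
    }
}
```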

For the efficient implementation of the rendering process on multicore computers, spe-
cial algorithms were developed for more efficient parallelization of backward ray trac-
ing, including the calculation of the luminance of direct light visibility and direct illumi-
nation, the construction of the accelerating structures, forward ray tracing, and the calculation
of the caustic and indirect illumination components.
    The backward ray tracing is performed in blocks of 32x32, 64x64, or 128x128 pixels
(depending on the number of threads used for parallel computing). Each thread renders
its own part of the image to avoid conflicts when writing to the shared image memory. Also,
each thread forms its own local photon map, which likewise avoids conflicts when writing pho-
tons into the map of the same scene object. In addition to the listed "classical" parallel
methods, a method of parallelization by mask within one block was proposed. The idea
of this method is illustrated in Fig. 4.




                 Fig. 4. The method of thread jobs distribution with a mask.

All threads participate in rendering each block. However, within one block, each thread
traces only the rays that pass through its part of the block pixels, as determined
by the mask. This approach provides a more uniform computational load
across all threads, which is useful for rendering optically complex scenes on multicore
computers. If a small part of the image contains objects that create long ray paths, for
example, light-guiding devices, then the calculation of this part of the image will not
"hang" and will not lead to a general underload of the CPU, since each area of the image
is processed by all threads.
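    A sketch of the per-block mask test, assuming the simplest interleaved mask (the paper does not specify the mask pattern): a thread traces a pixel of a block only if the pixel's linear index maps to that thread.

```cpp
// Returns true if the pixel (xInBlock, yInBlock) of a blockSize x blockSize
// block is assigned to thread 'threadIndex' out of 'threadCount' threads.
bool pixelBelongsToThread(int xInBlock, int yInBlock, int blockSize,
                          int threadIndex, int threadCount)
{
    int linear = yInBlock * blockSize + xInBlock;
    return linear % threadCount == threadIndex;
}
```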
    To create the accelerating structures for the backward photon maps, kd-trees are used.
As each scene object has its own backward photon map and corresponding accelerating
structure, parallel voxelization of the photon maps was implemented. The maps are
sorted in descending order by the number of photons they contain and are queued for
voxelization. Then the first K (where K is the number of threads used in the simulation)
largest maps are voxelized in parallel. As soon as one thread finishes voxelizing a
map, it takes the next map from the queue and starts its voxelization. Since the number
of objects in a scene is usually orders of magnitude greater than the number of threads,
there is no noticeable decrease in the CPU load, and the acceleration of the voxelization
process depends almost linearly on the number of CPU cores.
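    The queueing scheme can be sketched as follows; the PhotonMap type and voxelizeMap stub are assumptions standing in for the per-object map and its kd-tree construction.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <thread>
#include <vector>

struct PhotonMap { std::size_t photonCount = 0; /* photons, kd-tree, ... */ };

void voxelizeMap(PhotonMap&) { /* kd-tree construction omitted */ }

// Sort the per-object maps by photon count in descending order, then let
// each thread repeatedly take the next map from a shared atomic counter:
// the largest maps start first and no thread idles while maps remain.
void voxelizeAll(std::vector<PhotonMap*>& maps, unsigned threadCount)
{
    std::sort(maps.begin(), maps.end(),
              [](const PhotonMap* a, const PhotonMap* b)
              { return a->photonCount > b->photonCount; });

    std::atomic<std::size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < threadCount; ++t)
        pool.emplace_back([&maps, &next] {
            for (std::size_t i = next++; i < maps.size(); i = next++)
                voxelizeMap(*maps[i]);
        });
    for (std::thread& th : pool) th.join();
}
```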
    The next component of the parallel rendering system is the forward ray tracing
and the calculation of the caustic and indirect luminance. The forward Monte Carlo ray trac-
ing algorithm is simple enough for efficient parallelization; the main problem is the
time required to write image data to the shared memory. To avoid memory access con-
flicts and synchronization, an algorithm was developed that splits the forward ray trac-
ing and the calculation of the indirect and caustic luminance into two parts. The first
part traces the forward light rays and searches for the intersections of these rays with the
backward photon maps. To avoid memory conflicts when writing the result of the in-
tersection of a ray with the backward photon map (the integration sphere), an addi-
tional map of forward photons is formed, containing a list of all integration spheres
that the given ray hit. This map is built for each separate thread. After a portion of the
light rays has been traced (as a rule, the portion size is from 100,000 to 1,000,000 rays),
this map is processed, and the caustic and secondary luminance components are calculated.
The calculation of these luminance components is performed in parallel: each
thread chooses its part of the image (according to the masks that were formed for back-
ward ray tracing) and calculates the luminance of the corresponding pixels. Since the generated
map of forward photons contains references to the backward photons and, as a result,
the indices of the screen pixels from which they were emitted, each thread can select
only the pixels that correspond to the screen area it processes and calculate the corresponding
caustic and indirect luminance components.
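    The two-part scheme can be illustrated with a sketch; the ForwardHit layout and the interleaved pixel-ownership test are assumptions made for illustration, and BSDF and area terms are assumed premultiplied into the stored value.

```cpp
#include <vector>

// Per-thread record of one forward-ray hit into a backward photon's
// integration sphere. Because each backward photon keeps the index of the
// image pixel its ray was emitted from, a later pass can route every
// contribution to the right pixel without write conflicts.
struct ForwardHit {
    int   pixelIndex;   // screen pixel that emitted the backward ray
    float value;        // contribution (flux, BSDF, 1/(pi*r^2) premultiplied)
};

// After a portion of forward rays is traced, every thread scans all the
// per-thread hit lists but accumulates only the pixels of its own mask
// region, so no two threads ever write the same pixel of the shared image.
void accumulatePortion(const std::vector<std::vector<ForwardHit>>& perThreadHits,
                       std::vector<float>& image,
                       int threadIndex, int threadCount)
{
    for (const std::vector<ForwardHit>& hits : perThreadHits)
        for (const ForwardHit& h : hits)
            if (h.pixelIndex % threadCount == threadIndex)  // mask ownership
                image[h.pixelIndex] += h.value;
}
```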
   It should be noted that after a series of rendering phases, from backward
ray tracing to the accuracy estimation and intermediate image forming, the masks are
rebuilt, redistributing the backward rays across the image area.


4      Accuracy estimation

One of the important aspects of realistic rendering aimed at the virtual prototyping of
optically complex scenes and optical devices is the correct estimation of the calculation
accuracy. The estimation of the luminance calculation accuracy can be performed both
for an individual pixel of the image and for the entire image.
   Various approaches can be used to estimate the accuracy. One of the simplest solutions
is based on the progressive nature of the luminance calculations. Rendering (from back-
ward ray tracing to intermediate image forming) must be repeated many times to
achieve a good quality image. Therefore, it is possible to create M independ-
ent intermediate images instead of one during the rendering process and then
estimate the accuracy of the total image by calculating the standard deviation be-
tween these images. This method was inherited from the algorithm used for evaluating
the accuracy of calculated illumination maps in the Lumicept rendering system [7]. At
the i-th rendering step, the rendering result is written to the intermediate image with the
index (i mod M). In this case, the final image is the average of all images, and the
accuracy of the pixel luminance $RMS_i$ can be estimated as the standard deviation be-
tween the luminance of the pixels in the M images. Then the accuracy $\delta$ over the entire
image of size $w \times h$ can be estimated as follows:

$$\delta = \sqrt{\frac{\sum_{i=1}^{w \cdot h} RMS_i^2}{\sum_{i=1}^{w \cdot h} L_i^2}} \quad (4)$$


If M is a small value, for example 2, this method correctly estimates the average
accuracy of the entire image; however, the per-pixel luminance estimation might be highly
inaccurate. For a correct estimation of the accuracy of a separate pixel, the M value must be
more than 10, which leads to a significant increase in the amount of memory required
to store the intermediate images. Therefore, this algorithm was replaced by a more
"economical" one, based on the following formula:

$$RMS_i^2 = \frac{1}{N}\left(\frac{\sum_{j=1}^{N} L_{i,j}^2}{N} - \left(\frac{\sum_{j=1}^{N} L_{i,j}}{N}\right)^2\right) \quad (5)$$

In formula (5) $N$ is the total number of luminance calculation phases and $L_{i,j}$ is the lumi-
nance of the i-th pixel calculated at the j-th calculation phase. As can be seen, to esti-
mate the pixel error it is enough to accumulate only one additional value: the square of
the pixel luminance over the calculation phases.
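    A sketch of the "economical" accumulator behind formula (5) follows; the struct name and interface are illustrative assumptions.

```cpp
#include <cmath>

// Per-pixel error accumulator: besides the running sum of the pixel
// luminance, only the running sum of its square is stored across the
// N calculation phases.
struct PixelStats {
    double sumL  = 0.0;  // sum of L_{i,j} over the phases
    double sumL2 = 0.0;  // sum of L_{i,j}^2 over the phases
    long   n     = 0;    // number of completed phases, N

    void addPhase(double L) { sumL += L; sumL2 += L * L; ++n; }

    // RMS_i of formula (5): the variance of the phase luminances divided
    // by N, i.e. the variance of their mean.
    double rms() const {
        double mean = sumL / n;
        return std::sqrt((sumL2 / n - mean * mean) / n);
    }
};
```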
   During the rendering process, the estimation of the current calculation accuracy is
used not only to interrupt the calculations when the required accuracy is achieved; it
can also be used as the probability density function when emitting backward
rays through the image pixels. A larger error results in a higher probability of ray
emission, which should reduce the error in the corresponding pixel.


5      Results

The developed rendering method, based on backward photon maps, was integrated
into the Lumicept physically correct rendering package. Comparison with a prototype
implemented on the basis of stochastic ray tracing with classic forward photon
maps showed that the backward photon mapping method can significantly decrease the
number of photons created by one ray and the total size of the photon maps while keep-
ing the same number of traced rays. As a result, it also decreases the memory load of
the rendering methods.
   The backward and forward photon formation was tested on three scenes: the
Cornell Box scene, the Room2 scene, and the Light Guides scene. Rendered images
of the test scenes acquired with the backward photon mapping method are shown in Figs. 5-
7. Table 1 shows the average number of photons created by a single forward ray in the
forward photon mapping method compared to the number of backward photons created by
one backward ray in the backward photon mapping method for the same scene.




                   Fig. 5. Rendered images of the Cornell Box test scene.




                       Fig. 6. Rendered images of the Room2 test scene.




                    Fig. 7. Rendered images of the Light Guides test scene.

Table 1. The number of photons formed by one ray in the forward photon mapping and the
backward photon mapping methods.

 Scene                                                    Photons per   Photons per    Photon map size
                                                          forward ray   backward ray   reduction (times)
 Cornell Box                                              3.45          1.59           2.17
 Room2                                                    7.6           1.93           3.94
 Light Guides (light sources are inside light guides)     7.6           3.1            2.45
 Light Guides (light sources are directed into light
 guides but are positioned outside of light guides)       5.6           3.1            1.81
 Light Guides (light sources are inside light guides;
 the camera is directed into a light guide side face)     7.6           5.9            1.29

As can be seen, the number of photons per ray depends on the position of the light
sources and the camera. The Light Guides scene with the camera directed into a light
guide side face shows an extreme case for the backward photon mapping method; however, in
real use cases the backward photon maps are about two to three times smaller than
the forward photon maps formed in the same scene with the same number of traced
rays.


6      Conclusion

The developed method of bidirectional stochastic ray tracing with backward photon
maps provides physically correct rendering and high computational efficiency. It can
be used not only for realistic rendering of optically complex scenes but also for the
virtual prototyping of various kinds of optical systems, including observation systems,
lighting systems, and virtual reality systems. The method can be effectively parallel-
ized for use on multicore computers or in distributed computing systems. The effi-
ciency of the CPU implementation of the presented methods can be further in-
creased if the ray tracing, luminance calculation, and photon map forming and processing
methods are implemented with the SSE, AVX, AVX2, and AVX-512 instruction sets. Cur-
rently, the authors are working on rendering algorithms that would calculate the caustic and
indirect illumination by forming independent forward and backward photon maps
for each of the illumination components. These algorithmic solutions might signifi-
cantly improve the performance of realistic rendering.


References
 1. Kajiya, J. T.: The rendering equation. In: SIGGRAPH '86 Proceedings, vol. 20, pp. 143-150.
    ACM, New York, NY, USA (1986).
 2. Veach, E.: Robust Monte Carlo methods for light transport simulation. Ph.D. thesis. Stan-
    ford University, Stanford, CA, USA (1998).
 3. Georgiev, I., Krivanek, J., Slusallek, P.: Bidirectional light transport with vertex merging.
    In: SIGGRAPH Asia 2011 Sketches, pp. 27:1-27:2. ACM, New York, NY, USA (2011).
 4. Kang, C., et al.: A survey of photon mapping state-of-the-art research and future challenges.
    In: Frontiers of Information Technology & Electronic Engineering, vol. 17, pp. 185-199 (2016).
 5. Jensen, H. W.: Global illumination using photon maps. In: Proceedings of the Eurographics
    Workshop on Rendering Techniques '96, pp. 21-30. Springer-Verlag, London, UK (1996).
 6. Havran, V., Herzog, R., Seidel, H.-P.: Fast final gathering via reverse photon mapping.
    In: Proceedings of Eurographics 2005, Computer Graphics Forum, vol. 24, no. 3, pp. 323-333.
    Blackwell Publishing, Oxford, UK (2005).
 7. Lumicept – Hybrid Light Simulation Software, http://www.integra.jp/en.