<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
<journal-title>GraphiCon 2021: 31st International Conference on Computer Graphics and Vision</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Rays in the Bidirectional Photon Mapping Method</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sergey Ershov</string-name>
          <email>ersh@gin.keldysh.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alexey Voloboy</string-name>
          <email>voloboy@gin.keldysh.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Keldysh Institute of Applied Mathematics RAS</institution>
          ,
          <addr-line>Miusskaya sq. 4. Moscow 125047</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The classic Monte-Carlo ray tracing is a powerful technique capable of simulating almost all effects of ray optics, but it may be prohibitively slow for, e.g., the calculation of images seen by a lens camera. Therefore, in practice its various modifications are often used, in particular, bi-directional stochastic ray tracing with photon maps. A well-known flaw of all stochastic methods is their noise. The noise level, that is, the root mean square deviation of the pixel brightness calculated during a given time, depends, among other factors, on the numbers of rays traced from the light source and from the camera. The choice of the optimal parameters must provide the lowest noise level in a fixed time. This article is devoted to the choice of the optimal number of rays that minimizes the noise. It is proved that this minimal noise is at the same time homogeneous over the image. We derive formulae that calculate the optimal number of rays from several coefficients which can be obtained from a bi-directional ray tracing of several auxiliary variants. It happens that this optimum is rather wide, i.e. the noise level changes slowly with the number of rays, which allows one to choose it with regard to other factors, e.g. to limit this number to save memory. Keywords: realistic rendering, bi-directional Monte-Carlo ray tracing, photon maps, denoising. GraphiCon 2021: 31st International Conference on Computer Graphics and Vision, September 27-30, 2021, Nizhny Novgorod, Russia.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Nowadays light simulation is widely used in realistic computer graphics and in the design of new
materials and optical systems [1]. If wave effects may be neglected in the simulation, then stochastic ray
tracing methods are preferable. This group of methods includes light transport simulation with
the Metropolis method [2] and stochastic ray tracing [3]. The classical forward Monte Carlo ray tracing
(which starts tracing from the light source) is inefficient for image generation, and therefore it is
replaced by its bidirectional modifications [4–6]. One of the most popular methods among them is
bidirectional stochastic ray tracing with photon maps (BDPM, Bidirectional Photon Mapping) [5, 7].
A well-known flaw of all stochastic methods is that they produce noisy results. Therefore, the noise
reduction problem is always of primary importance. It is considered in many works, e.g., see [8–10].</p>
      <p>The level of noise in BDPM mainly depends on the random scattering of the forward and backward
rays, on the choice of the vertex for their merging (or, in other words, the vertex of the camera ray
trajectory at which the luminance is estimated from the photon maps), and also on the numbers of forward and
backward rays traced in one iteration step. The majority of studies are devoted to the first two issues
(e.g., [9–12]), while the number of rays has received less attention. However, this is an important factor: it
often happens that the number of forward rays is already redundant, and increasing it further only
increases the computation time without reducing the noise. In other cases it may happen that the
number of forward rays is indeed critical, while the number of traced backward rays is redundant. It is
usually difficult to predict which proportion of forward and backward rays is optimal, yet a good
choice can speed up the computations by several times.</p>
      <p>2021 Copyright for this paper by its authors.</p>
      <p>In this paper, we propose a method that allows one to estimate the optimal number of rays. A general
rule determining the noise in BDPM was derived in [8].</p>
      <p>One must realize that the rate of convergence of stochastic methods like BDPM is actually
determined not by the variance of the image calculated in one iteration (or any other fixed number of them), but by the
variance after a fixed time of calculation. Indeed, if the variance of one iteration decreases by half
while the time to calculate it increases tenfold, the situation obviously worsens.</p>
      <p>Therefore, when calculating the optimum, i.e. the numbers of camera and light paths that provide "the
best image" (or the fastest convergence), we must account for both the variance of one iteration and
the time spent on one iteration.</p>
      <p>In this paper we derive a simple approximating law for that time and, combining it with the law
of how the variance of one iteration depends on the number of rays, we obtain formulae for the
optimal number of rays. It turns out that the distribution of camera rays per pixel that makes the noise
homogeneous over the image is exactly the one that makes it minimal. This is especially convenient because it eliminates
the problem of choosing which spatial aggregate to treat as the "noise amplitude".</p>
    </sec>
    <sec id="sec-2">
      <title>2. Expression for the noise level</title>
      <p>Generally the number of camera rays traced through a pixel may depend on this pixel, i.e. vary across
the image. The variance of the contribution of one iteration of BDPM can be calculated individually for
each pixel and is [11, 13]
V(p) = V_F(p)/N_F + V_B(p)/N_B(p) + (V_FB(p) − V_F(p) − V_B(p))/(N_F N_B(p)),
where
V_FB(p) ≡ ⟨⟨C²⟩⟩(p) − ⟨⟨C⟩⟩²(p),
V_F(p) ≡ ⟨⟨C⟩_B²⟩_F(p) − ⟨⟨C⟩⟩²(p),
V_B(p) ≡ ⟨⟨C⟩_F²⟩_B(p) − ⟨⟨C⟩⟩²(p),
N_F is the number of light paths per iteration, N_B(p) is the number of camera paths through the
pixel p per iteration, C(F, B) is the contribution to the pixel's luminance from merging the light
path F with the camera path B, its average ⟨⟨C⟩⟩ obviously equals the limiting (exact) pixel
luminance L(p), and ⟨·⟩_F and ⟨·⟩_B denote the averaging over (the ensemble of) light and camera paths,
respectively.</p>
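<p>The variance law above can be checked numerically on a toy model. The sketch below uses a hypothetical separable contribution C(F, B) = F·B with uniformly distributed F and B, chosen only because its moments are known in closed form; real BDPM contributions are of course not separable, so this is an illustration of the two-sample variance decomposition, not of a renderer.</p>

```python
import random
import statistics

# Toy contribution of merging light path F with camera path B.
# Hypothetical separable form, chosen so that V_F, V_B, V_FB are known exactly.
def contribution(f, b):
    return f * b

def iteration_estimate(n_f, n_b, rng):
    # One BDPM "iteration": n_f light paths, n_b camera paths,
    # contribution averaged over all n_f * n_b merged pairs.
    fs = [rng.uniform(0.0, 2.0) for _ in range(n_f)]
    bs = [rng.uniform(0.0, 2.0) for _ in range(n_b)]
    total = sum(contribution(f, b) for f in fs for b in bs)
    return total / (n_f * n_b)

rng = random.Random(7)
n_f, n_b = 4, 8
samples = [iteration_estimate(n_f, n_b, rng) for _ in range(20000)]
v_empirical = statistics.pvariance(samples)

# Closed-form moments for U(0, 2): E[x] = 1, E[x^2] = 4/3.
m2 = 4.0 / 3.0
v_fb = m2 * m2 - 1.0   # V_FB: E[C^2] - (E[C])^2, variance of a single pair
v_f = m2 - 1.0         # V_F: variance over F of the B-average of C
v_b = m2 - 1.0         # V_B: variance over B of the F-average of C
v_theory = (v_f / n_f + v_b / n_b
            + (v_fb - v_f - v_b) / (n_f * n_b))
print(v_empirical, v_theory)
```

<p>For this separable toy the decomposition is exact, so the empirical variance of many iterations matches the formula up to sampling error.</p>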
      <p>The variance of the image obtained during time T is obviously V(p) τ / T,
where τ is the (average) time spent on one iteration. Therefore the value that determines the convergence
rate, and which therefore must be minimized to obtain the best image, is
ε(p) ≡ V(p) τ. (1)</p>
      <p>Let us estimate the average time per iteration. An iteration of BDPM consists of three successive
parts:
● BMCRT (tracing from the camera);
● FMCRT (tracing from the light source);
● "merging" of the above sub-paths, when each light path is (attempted to be) joined with each camera path.
The total iteration time is the sum of these partial timings, denoted τ_B, τ_F, τ_FB,
respectively.</p>
      <p>The time spent on the first two parts is obviously linear in the number of rays. For the third part, it is linear
in the number of their pairs. The general functional form is then
τ_B = Σ_p t_B(p) N_B(p), τ_F = t_F N_F, τ_FB = N_F Σ_p t_FB(p) N_B(p), (2)
where t_B(p) has the sense of the average time spent on tracing one camera ray through the pixel p, t_F
has the sense of the average time spent on tracing one light ray, and t_FB(p) is the average time spent
on "merging" one pair of a light ray and a camera ray (through the pixel p). This t_FB(p) can depend on the
pixel because for different pixels the camera rays go to different parts of the scene, where the density of
light rays (and thus the probability of merging) is different.</p>
      <p>Summing, we have
τ = τ_F + τ_B + τ_FB = t_F N_F + Σ_p (t_B(p) + N_F t_FB(p)) N_B(p). (3)
Usually the number of rays is much greater than 1, so in ε(p) = V(p) τ we can neglect the relative corrections of order N_B^(−1)(p) and N_F^(−1):
ε(p) = ((1/N_B(p)) (V_B(p) + V_FB(p)/N_F) + V_F(p)/N_F) × (t_F N_F + Σ_q (t_B(q) + N_F t_FB(q)) N_B(q)). (4)</p>
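<p>The additive time model of this section can be sketched as a small function. The per-pixel costs below are made-up constants (not measured values); the check at the end only illustrates that, at fixed N_B(p), the iteration time is affine in N_F.</p>

```python
# Iteration-time model: tau = t_F * N_F + sum_p t_B(p) * N_B(p)
#                             + N_F * sum_p t_FB(p) * N_B(p).
def iteration_time(t_f, n_f, t_b, t_fb, n_b):
    # t_b, t_fb, n_b are per-pixel lists of equal length.
    tau_f = t_f * n_f
    tau_b = sum(tb * nb for tb, nb in zip(t_b, n_b))
    tau_fb = n_f * sum(tfb * nb for tfb, nb in zip(t_fb, n_b))
    return tau_f + tau_b + tau_fb

t_b = [1e-6, 2e-6, 1.5e-6]   # per-pixel BMCRT cost, seconds per camera ray
t_fb = [3e-8, 5e-8, 4e-8]    # per-pixel merging cost, seconds per ray pair
n_b = [10, 20, 15]           # camera rays per pixel
t_f = 2e-6                   # seconds per light ray

tau1 = iteration_time(t_f, 10000, t_b, t_fb, n_b)
tau2 = iteration_time(t_f, 20000, t_b, t_fb, n_b)
tau3 = iteration_time(t_f, 30000, t_b, t_fb, n_b)
print(tau1, tau2, tau3)
```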
    </sec>
    <sec id="sec-3">
      <title>3. Minimization of noise</title>
      <p>Usually the noise is not homogeneous over the image, i.e. it depends on the pixel. This is not very
convenient because it is difficult to decide which is better: less noise on the floor but stronger noise
on the ceiling, or vice versa? Therefore one usually uses a single measure of noise, e.g. its maximum over
the image, which corresponds well to the visual perception of "noisy" or "clean" images.
Therefore, what we need to minimize is max_p ε(p).</p>
      <p>It happens that (at least if N_B(p) were continuous instead of integer) in this case the optimal noise
is homogeneous, i.e. ε(p) = max_p ε(p) ≡ ε for all pixels.
Indeed, let us suppose the opposite: that in some pixel p* the noise is below the maximum, i.e.
ε(p*) is strictly less than max_p ε(p). Now let us decrease N_B(p*) (infinitesimally, as we now treat it as a continuous parameter!).
Obviously, in all the other pixels but p* the noise will decrease after it, because of the decrease of
the time spent on one iteration,
δτ = (t_B(p*) + N_F t_FB(p*)) δN_B(p*),
while V(p) for these pixels does not change, so that
δε(p) = V(p) (t_B(p*) + N_F t_FB(p*)) δN_B(p*),
which is negative for a negative δN_B(p*). At the same time, in the pixel p* the change of the noise level is
δε(p*) = (V(p*) (t_B(p*) + N_F t_FB(p*)) − (τ/N_B²(p*)) (V_B(p*) + V_FB(p*)/N_F)) δN_B(p*),
and whatever its sign, an infinitesimal change keeps ε(p*) below the maximum. In view of our supposition, the maximum is achieved outside p*, where every ε(p) strictly decreases.
Therefore, as long as the distribution is inhomogeneous, we can make it more optimal (decrease its maximum)
through those pixels where the noise is below the maximum.
In other words, an inhomogeneous distribution cannot be optimal (since it admits an improvement).
Therefore, the optimal noise distribution is homogeneous.</p>
      <p>In fact the requirement that N_B(p) must be integer distorts this simple solution, but when N_B(p) is
large enough, the effect must be weak. One cannot, though, hope to reach the homogeneous distribution
with a small number of camera rays, so it is better to have it large enough, so that it is at least 3 in the
"worst" pixels (recall that the roundoff error is below half a unit!).</p>
      <p>So we should estimate the numbers of camera rays that yield a homogeneous distribution of noise.
For the sake of simplicity let us consider the case when V_F(p)/N_F is negligible (as compared to the other
noise components), which for the case of BDD>0 is, if not always, then very frequent. Then
ε(p) = (1/N_B(p)) (V_B(p) + V_FB(p)/N_F) τ,
and one can see that the homogeneous distribution of noise is achieved when
N_B(p) = N̄_B × (V_FB(p)/N_F + V_B(p)) / (V̄_FB/N_F + V̄_B), (5)
where n_x, n_y denote the image size in pixels and
V̄_FB ≡ (1/(n_x n_y)) Σ_p V_FB(p), V̄_B ≡ (1/(n_x n_y)) Σ_p V_B(p).</p>
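<p>In practice the continuous prescription (5) must be rounded to an integer number of rays with a lower bound of at least one ray per pixel. On a skewed distribution (many pixels with a small prescribed value) this rounding inflates the actual average above the scale factor, which is the effect discussed in the Results section. The pixel values below are synthetic, chosen only to illustrate this:</p>

```python
# Prescribed (continuous) camera rays per pixel; average is exactly 5.0.
# These are made-up numbers: 80 "dark" pixels and 20 "bright" ones.
continuous = [0.25] * 80 + [24.0] * 20

# Round to integers, with at least 1 ray per pixel.
rounded = [max(1, round(x)) for x in continuous]

avg_target = sum(continuous) / len(continuous)
avg_actual = sum(rounded) / len(rounded)
print(avg_target, avg_actual)
```

<p>Here the actual average grows from 5.0 to 5.6, the same kind of inflation the paper reports for its low-bound variant.</p>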
      <p>Now we have just two free parameters: N_F and the scale factor N̄_B for the number of camera rays
(notice that the actual average number of camera rays per pixel will deviate from it because of rounding),
which must be varied to minimize the (now homogeneous!) noise.</p>
    </sec>
    <sec id="sec-3-1">
      <title>4. The optimal N_F and N̄_B</title>
      <p>Substituting (5) in (4) and writing ε instead of ε(p), because now the noise is homogeneous, one
has
ε = s (V̂_FB + N_F V̂_B) + (1/N_F) (t_B V_FB)^ + (t_B V_B)^ + (t_FB V_FB)^ + N_F (t_FB V_B)^,
where the overhat denotes the sum over all pixels, e.g. V̂_B ≡ Σ_p V_B(p) and (t_B V_FB)^ ≡ Σ_p t_B(p) V_FB(p),
and we abbreviated s ≡ t_F/(n_x n_y N̄_B).</p>
      <p>One easily finds that for the given N̄_B (i.e. the given s), the minimum in N_F is achieved for
N_F = √((t_B V_FB)^ / ((t_FB V_B)^ + s V̂_B)). (6)
Denoting a ≡ (t_B V_FB)^, b ≡ (t_FB V_B)^ and c ≡ (t_B V_B)^ + (t_FB V_FB)^ for brevity,
we can write this minimum as
ε_min(N̄_B) = s V̂_FB + c + 2 √(a (b + s V̂_B)).
Obviously, this is a monotone increasing function of s and thus a monotone decreasing function of
N̄_B. Therefore there is no extremum in N̄_B and formally the greater this value, the better. However, too
many rays per pixel are bad as concerns memory etc. Therefore it is reasonable to take such N̄_B that
the noise is only (1 + δ) times, where δ is a small number, the limiting noise value for N̄_B → ∞, i.e.
s V̂_FB + 2 (√(a (b + s V̂_B)) − √(a b)) = δ ε_∞, where ε_∞ ≡ c + 2 √(a b). (7)</p>
      <p>Condition (7) reduces to a quadratic equation in s. Denoting k ≡ δ ε_∞ + 2 √(a b), it reads
(V̂_FB²/4) s² − (a V̂_B + k V̂_FB/2) s + (k²/4 − a b) = 0,
and its smaller root (we naturally discarded the other root, which violates (7) before squaring) is
s = (2/V̂_FB²) (a V̂_B + k V̂_FB/2 − √((a V̂_B + k V̂_FB/2)² − V̂_FB² (k²/4 − a b))), (8)
whence the optimal scale factor is N̄_B = t_F/(n_x n_y s).
Substituting it in (6), we obtain the optimal number of the light (forward) rays N_F.</p>
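<p>The closed-form optimum of this section can be sketched numerically. In the sketch below, a ≡ Σ_p t_B(p)V_FB(p), b ≡ Σ_p t_FB(p)V_B(p), c ≡ Σ_p (t_B(p)V_B(p) + t_FB(p)V_FB(p)) and s ≡ t_F/(n_x n_y N̄_B); all input numbers are arbitrary toy values (so the resulting ray counts are not realistic). The built-in check verifies that the root of the quadratic indeed yields ε_min = (1 + δ) ε_∞.</p>

```python
import math

def eps_min(s, a, b, c, v_fb_hat, v_b_hat):
    # Noise after minimizing over N_F at a fixed scale factor (fixed s).
    return s * v_fb_hat + c + 2.0 * math.sqrt(a * (b + s * v_b_hat))

def solve_scale(delta, a, b, c, v_fb_hat, v_b_hat):
    # Find s such that eps_min(s) = (1 + delta) * eps_min(s -> 0).
    eps_inf = c + 2.0 * math.sqrt(a * b)
    k = delta * eps_inf + 2.0 * math.sqrt(a * b)
    # Quadratic (v_fb_hat^2 / 4) s^2 - (a v_b_hat + k v_fb_hat / 2) s
    #   + (k^2 / 4 - a b) = 0; the smaller root is the physical one.
    qa = v_fb_hat * v_fb_hat / 4.0
    qb = a * v_b_hat + k * v_fb_hat / 2.0
    qc = k * k / 4.0 - a * b
    s = (qb - math.sqrt(qb * qb - 4.0 * qa * qc)) / (2.0 * qa)
    return s, eps_inf

a, b, c = 2.0, 1.0, 2.0            # toy pixel sums, not measured values
v_fb_hat, v_b_hat = 4.0, 3.0       # toy sums of V_FB(p) and V_B(p)
t_f, n_pixels, delta = 2e-6, 1000000, 0.1

s, eps_inf = solve_scale(delta, a, b, c, v_fb_hat, v_b_hat)
n_b_bar = t_f / (n_pixels * s)                # scale factor, from (8)
n_f_opt = math.sqrt(a / (b + s * v_b_hat))    # optimal light rays, from (6)
print(n_b_bar, n_f_opt)
```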
    </sec>
    <sec id="sec-4">
      <title>5. How to estimate the timings</title>
      <p>The value of t_F, i.e. the average time spent on one light (forward) ray, is rather easy to measure: we
trace some large number N_F of FMCRT rays, measure the time spent, and divide it by N_F.</p>
      <p>But t_B(p) and t_FB(p) are more difficult to obtain. In principle, we could perform a series of BMCRT
simulations, one for each single pixel, tracing a large number of rays through this pixel and then dividing
the time spent by their number. This is expensive, though, because an accurate measurement of time
requires that it be at least milliseconds, while there are usually millions of pixels.</p>
      <p>The value of t_FB(p) can also be measured straightforwardly, but this is even more difficult.</p>
      <p>Happily, we do not need all this, because the expressions (7) and (8) include t_B(p) and t_FB(p) only in
the following combinations:
Σ_p t_B(p) V_FB(p), Σ_p t_FB(p) V_FB(p), Σ_p t_B(p) V_B(p), Σ_p t_FB(p) V_B(p).</p>
      <p>By definition, the two former are the BMCRT time and the merging time (per one FMCRT ray) when there are V_FB(p) camera rays
per pixel; the two latter are the same for V_B(p) camera rays per pixel.</p>
      <p>These values can be measured easily just by doing a few iterations of bi-directional ray tracing for
some specially chosen N_B(p) and some (more or less arbitrary) medium N_F of about, say, 10000.</p>
      <p>Namely, let us take N_B(p) = κ × V_FB(p), where the constant κ is chosen so that the average number
of rays per pixel is about, say, 10. Then we measure the time τ_B spent on the backward ray tracing,
the time τ_F spent on the forward ray tracing, and the time τ_FB spent on merging the paths. In view of (2),
t_F = τ_F/N_F, Σ_p t_B(p) V_FB(p) = τ_B/κ, Σ_p t_FB(p) V_FB(p) = τ_FB/(κ N_F).</p>
      <p>Similarly, choosing N_B(p) = κ × V_B(p), we can measure the two remaining sums.</p>
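<p>The measurement trick above can be sketched as follows. The hidden per-pixel costs t_B(p), t_FB(p) and the variances V_FB(p) are synthetic numbers here; in a real renderer the timings τ_B and τ_FB would be wall-clock measurements, and only the division by κ (and κ N_F) is needed to recover the sums.</p>

```python
# Synthetic per-pixel quantities (4 pixels, made-up values).
t_b = [1.0e-6, 2.0e-6, 1.5e-6, 2.5e-6]   # BMCRT cost per camera ray
t_fb = [3.0e-8, 5.0e-8, 4.0e-8, 6.0e-8]  # merging cost per ray pair
v_fb = [0.8, 0.5, 1.2, 0.9]              # V_FB(p)
n_f = 10000
kappa = 12.5                             # scale so rays per pixel are ~10

# Instrumented run with N_B(p) = kappa * V_FB(p); the "measured" timings
# follow the linear model (2).
n_b = [kappa * v for v in v_fb]
tau_b = sum(tb * nb for tb, nb in zip(t_b, n_b))
tau_fb = n_f * sum(tfb * nb for tfb, nb in zip(t_fb, n_b))

# Recover the two sums that enter the optimum formulas.
sum_tb_vfb = tau_b / kappa
sum_tfb_vfb = tau_fb / (kappa * n_f)
print(sum_tb_vfb, sum_tfb_vfb)
```

<p>A second run with N_B(p) = κ × V_B(p) recovers the remaining two sums in the same way.</p>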
    </sec>
    <sec id="sec-5">
      <title>6. Results</title>
      <p>We use the famous Cornell Box scene with an isotropic point light source slightly below the center of the
ceiling. All scene surfaces are gray Lambertian with albedo kd = 50%. Both direct and indirect illumination
were taken from the photon maps. The "diffuse depth", i.e. the maximal allowed number of diffuse events for a
camera ray [11], was BDD = 1. The scene image is shown in Figure 1.</p>
      <p>There are two variants which differ in the radius of the integration sphere R: the "large" one equals 0.0083
of the scene size, and the "small" one is 0.0015 of the scene size. Naturally, one can expect that the smaller
radius requires more FMCRT rays.</p>
      <p>Calculation of V̂_FB and V̂_B was done by the usual bi-directional MCRT, using the radius 0.0083 of the scene size;
10000 iterations had been done with N_F = 10000 light paths and N_B(p) = 25 camera rays per pixel.
The matrices V_FB(p) and V_B(p) were calculated as described in [13]. The sums are presented in Table 1
(averages of the BMCRT term and the cross term for the two test variants, R = 0.0083 and R = 0.0015 of the scene size).</p>
      <p>Calculation of the time-related terms was done as described in Section 5, for 10000 light paths and an
average of 50 camera paths per pixel. Obviously the pure BMCRT and FMCRT terms are independent
of the radius. Since the measurement of time is not very accurate (because of the load from background processes,
adaptive changing of the CPU clock, etc.), the figures are rounded (Table 2: time-related sums
for the two test variants, R = 0.0083 and R = 0.0015 of the scene size).</p>
      <p>With the coefficients taken from Table 2, the approximation (3) of the iteration time, for the number of camera rays given by (5),
becomes the numerical estimate (9). Tables 3 and 4 compare it with the measurements for the radii 0.0083 and 0.0015 of the scene size,
respectively; in each entry the first number is the real value, the second one is obtained from (9).</p>
      <p>One can see that the approximation is quite good for large N̄_B but not for small N̄_B. This is mainly
because (9) does not include the rounding of the number of camera rays, which increases N_B(p) and thus the
real run time is larger. Finally, the optimal numbers of rays are given in Table 5.</p>
      <p>REMARK. The rather large times are because we did not optimize the calculations and did not use
SIMD or even multithreading.</p>
      <sec id="sec-5-1">
        <p>Table 5 lists the numbers of rays for the two test variants.</p>
        <p>Obviously, the small estimate for N̄_B is rather absurd, since it yields less than one ray per pixel
almost everywhere. A small N_B(p) results in strong rounding, and the rounding error, which can reach ±0.5, is
a serious distortion when N_B(p) is about 3. So we used N̄_B = 5 as the lower bound. As explained above,
there is no formal optimum, i.e. the greater the better, but too large values result in unjustified memory
consumption, so we adopted N̄_B = 50 as the upper bound. The distribution of N_B(p) for the low and
high bounds is presented in Figure 2 (left and right, respectively). The actual averages are 5.6 and 54, respectively, because rounding to integers increases the
values as compared to (5). The effect of this distortion is maximal where N_B(p) is small, i.e.
in the blue image areas, mainly the ceiling and the small box. Later we shall see that it is these areas
where the noise deviates from the mean value. (Values printed under the images of Table 6:
walls: 1.6; ceiling: 0.7; small box: 0.22, and
walls: 1.45; ceiling: 1.5; small box: 0.6.)</p>
        <p>As explained in Section 2, what determines the speed of convergence is the product (1) of the
variance in one iteration and the time per iteration τ. We present its square root √ε (notice that it relates to the
absolute, not relative, stochastic error) and visualize it in Table 6 (for the larger radius of the integration
sphere) and Table 7 (for the smaller radius). According to the color bar, the red color corresponds to the value 2,
the yellow color to 1 and the blue color to 0. The middle row is for N̄_B close to optimal.</p>
        <p>One can see that the noise image roughly has three distinct areas: the walls of the main box, the ceiling and
the small box, the noise being nearly homogeneous throughout each area. The corresponding values are
printed below each image for quantitative comparison.</p>
        <p>The areas where the noise considerably deviates from the mean level coincide with the areas where
N_B(p) is below 3 or 5, see Figure 2. Meanwhile the prediction (4) of the noise level itself is quite good, as
can be seen from the comparison of images in Figure 3, so the inhomogeneity is due to the deviation of the
integer N_B(p) from the continuous values (5). (Values printed under the images:
walls: 1.8; ceiling: 0.9; small box: 0.3, and
walls: 1.47; ceiling: 1.46; small box: 0.75.)</p>
        <p>The situation is qualitatively similar to the case of the larger radius (Table 6). Again one sees the same
three distinct areas: the walls of the main box, the ceiling and the small box, the noise being nearly
homogeneous throughout each area. The corresponding values are printed below each image for
quantitative comparison. Again, the areas where the noise level deviates from the mean are those where
N_B(p) is below 2 or 3 (cf. Figure 2), so rounding to integers deviated them from (5).</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>7. Conclusion</title>
      <p>From the comparison of the calculated variants one can see that:
1. The noise is the more homogeneous, the larger the N̄_B (which is natural: the rounding of N_B(p)
becomes less essential) and the larger the ratio N_F/N̄_B (which looks strange).
2. The noise over most of the area is not sensitive to N_F and N̄_B. In other words, the optimum
is "very wide", and deviating from it even threefold does not change the mean noise level much. This
is good in many respects, because in a real simulation of complex scenes it may be difficult to find the
exact optimal numbers of rays (as they use coefficients that must themselves be obtained by ray tracing).
Also, the time per iteration is difficult to measure, as it is a random value, and additionally it may have
regular trends because of repeated caching etc.
3. The areas where the noise considerably deviates from the mean level coincide with the areas where
N_B(p) is below 2 or 3, see Figure 2. Therefore it is reasonable to choose N̄_B (which, all the same,
does not have a precise optimal value) large enough so that N_B(p) exceeds, say, 3 over most of
the image.</p>
      <p>This paper deals with the absolute noise, while sometimes it is better to work with the relative noise
(divided by the luminance of the pixel). Our approach can easily be adapted to this case as well.</p>
    </sec>
    <sec id="sec-7">
      <title>8. Acknowledgements</title>
      <p>We should like to thank Elissey Birukov for his help with computations.</p>
    </sec>
    <sec id="sec-8">
      <title>9. References</title>
      <p>
[9] I. Georgiev, J. Krivánek, T. Davidovič, P. Slusallek, Light transport simulation with vertex
connection and merging, ACM Trans. Graph. 31(6) (2012) 192:1–192:10.
doi:10.1145/2366145.2366211
[10] T. Hachisuka, J. Pantaleoni, H.W. Jensen, A path space extension for robust light transport
simulation, ACM Trans. Graph. 31(6) (2012) 191:1–191:10.
[11] S.V. Ershov, A.G. Voloboy, Calculation of MIS weights for bidirectional path tracing with photon
maps in presence of direct illumination, Mathematica Montisnigri 48 (2020) 86–102.
doi:10.20948/mathmontis-2020-48-8
[12] M. Sbert, V. Havran, L. Szirmay-Kalos, Multiple importance sampling revisited: breaking the
bounds, EURASIP Journal on Advances in Signal Processing 15 (2018) 1–15.
[13] S.V. Ershov, E.D. Birukov, A.G. Voloboy, V.A. Galaktionov, Noise dependence on the number
of rays in bidirectional stochastic ray tracing with photon maps, Programming and Computer
Software 47(3) (2021) 194–200. doi:10.1134/S036176882103004X
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhdanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Galaktionov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Voloboy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zhdanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Garbul</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Potemin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Sokolov</surname>
          </string-name>
          ,
          <article-title>Photorealistic rendering of images formed by augmented reality optical systems</article-title>
          ,
          <source>Programming and Computer Software</source>
          <volume>44</volume>
          (
          <year>2018</year>
          )
          <fpage>213</fpage>
          -
          <lpage>224</lpage>
          . doi:10.1134/S0361768818040126.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>M.</given-names>
            <surname>Sik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Krivanek</surname>
          </string-name>
          ,
          <article-title>Survey of Markov Chain Monte Carlo methods in light transport simulation</article-title>
          ,
          <source>IEEE Transactions on Visualization and Computer Graphics</source>
          <volume>26</volume>
          (
          <issue>4</issue>
          ) (
          <year>2018</year>
          )
          <fpage>1821</fpage>
          -
          <lpage>1840</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pharr</surname>
          </string-name>
          , G. Humphreys, Physically Based Rendering. Second Edition: From Theory To Implementation, Morgan Kaufmann Publishers Inc., San Francisco, CA, USA,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Dodik</surname>
          </string-name>
          ,
          <article-title>Implementing probabilistic connections for bidirectional path tracing in the Mitsuba Renderer</article-title>
          , 2017. URL: https://www.cg.tuwien.ac.at/research/publications/2017/dodik-2017-pcbpt
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.W.</given-names>
            <surname>Jensen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Christensen</surname>
          </string-name>
          ,
          <article-title>High quality rendering using ray tracing and photon mapping, in: ACM SIGGRAPH 2007 Courses, ser</article-title>
          .
          <source>SIGGRAPH '07. ACM</source>
          , New York, NY, USA,
          <year>2007</year>
          . doi:10.1145/1281500.1281593
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Veach</surname>
          </string-name>
          ,
          <article-title>A dissertation: Robust Monte-Carlo methods for light transport simulation</article-title>
          ,
          <year>1997</year>
          . URL: http://graphics.stanford.edu/papers/veach_thesis/thesis.pdf
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vorba</surname>
          </string-name>
          ,
          <article-title>Bidirectional photon mapping</article-title>
          ,
          <source>in: Proceedings of CESCG 2011: The 15th Central European Seminar on Computer Graphics</source>
          . Prague: Charles University,
          <year>2011</year>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>32</lpage>
          . URL: https://cgg.mff.cuni.cz/~jaroslav/papers/2011-bdpm/vorba2011-bdpm.pdf
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Ershov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.D.</given-names>
            <surname>Zhdanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Voloboy</surname>
          </string-name>
          ,
          <article-title>Estimation of noise in calculation of scattering medium luminance by MCRT</article-title>
          ,
          <source>Mathematica Montisnigri</source>
          <volume>45</volume>
          (
          <year>2019</year>
          )
          <fpage>60</fpage>
          -
          <lpage>73</lpage>
          . doi:10.20948/mathmontis-2019-45-5
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>