<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Calculation of MIS weights for bidirectional path tracing with photon maps</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Finally</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Keldysh Institute for Applied Mathematics</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Research University of Information Technologies, Mechanics, and Optics</institution>
          ,
          <addr-line>St. Petersburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>A widely used method for noise reduction in Monte-Carlo ray tracing is combining different means of sampling, known as multiple importance sampling (MIS). In bi-directional Monte-Carlo ray tracing with photon maps (BDPM), join paths are obtained by merging camera and light sub-paths, and since several light paths are checked against the same camera path, and vice versa, the resulting join paths are not statistically independent. Thus the noise in this method obeys laws different from those in simple classic Monte-Carlo with independent samples, so the weights that minimize that noise must also be calculated differently. This paper derives these weights for the simplest case, when we mix contributions from only two vertices of the camera ray. It shows that the weights obey an integral equation which is qualitatively different from the well-known MIS formulae for uncorrelated samples. Besides that, even if we neglect the integral operator, the weights depend on the radius of the integration sphere and on the number of light rays used. The integral equation is solved analytically in a closed form, and it is demonstrated how to perform the necessary calculations in BDPM.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Lighting simulation and calculation of global illumination are widely applied nowadays in architecture and in the design of new devices and materials. The most successful technique is Monte Carlo ray tracing (MCRT) [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. But its main problem is stochastic noise, which strongly depends on the method of ray generation. Therefore many studies are devoted to the choice of the optimal probability distribution of ray scattering [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2–5</xref>
        ].
      </p>
      <p>
        Multiple Importance Sampling (MIS) is a famous and powerful approach here. Its basic idea is to generate several random samples, one for each strategy (i.e. for each probability density which admits efficient generation of samples), and then sum their contributions to the accumulated average with weights (which usually depend on the point) [
        <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
        ]. E. Veach developed and published the mathematics of this approach [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. He elaborated several efficient heuristics of weight calculation and demonstrated that with them the resulting noise is close to its minimal value. But his study applies to the classic MCRT method, where successive random points are absolutely independent.
      </p>
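      <p>As a reminder of how MIS combines strategies for independent samples, here is a minimal self-contained sketch of the balance heuristic; the integrand, the two sampling densities and all names are illustrative, not taken from the cited works:
```python
import math
import random

# Toy 1-D multiple importance sampling with the balance heuristic.
# Two strategies sample x in (0, 1): p1 is uniform, p2 favours small x.
# Both are combined with weights w_i(x) = p_i(x) / (p1(x) + p2(x)),
# which keeps the combined estimator unbiased for any integrand f.

def f(x):
    return 3.0 * x * x          # integral over (0, 1) is exactly 1

def p1(x):
    return 1.0                  # uniform density on (0, 1)

def p2(x):
    return 2.0 * (1.0 - x)      # linear density, integrates to 1

def sample_p2(u):
    # inverse CDF of p2: F(x) = 2x - x^2, hence x = 1 - sqrt(1 - u)
    return 1.0 - math.sqrt(1.0 - u)

def mis_estimate(n, rng):
    total = 0.0
    for _ in range(n):
        x1 = rng.random()                    # one sample per strategy
        x2 = sample_p2(rng.random())
        for x, p in ((x1, p1), (x2, p2)):
            w = p(x) / (p1(x) + p2(x))       # balance heuristic weight
            total += w * f(x) / p(x)
    return total / n

est = mis_estimate(200_000, random.Random(7))
print(est)   # close to 1.0
```
Note that the weighted sum of per-strategy estimates stays unbiased for any weights that sum to one at every point; the balance heuristic is merely a near-optimal choice for independent samples.</p>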
      <p>
        Many lighting simulation systems use more advanced methods such as bi-directional Monte-Carlo path tracing (BDPT), bi-directional Monte-Carlo ray tracing with photon maps (BDPM) [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] or their combination [
        <xref ref-type="bibr" rid="ref7 ref8">7, 8</xref>
        ]. In these methods the successive trajectories are not independent. For example, in the photon map technique (the BDPM method) the same forward path is merged with all the backward paths and vice versa, so the resulting joined full trajectories have a common “tail” and thus are not independent. In [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] it was demonstrated that the noise in these methods follows rules different from those in simple, classic MCRT. Therefore the optimal weights that minimize the resulting noise differ from the optimal weights which minimize the noise functional in the classic MCRT.
      </p>
      <p>
        Since the BDPM noise is a quadratic functional of the ray contribution [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], it is also quadratic in the weights. Thus, basically, the calculation of weights that minimize that noise functional is mathematically trivial. But only “basically”, because in any bi-directional MCRT there is an infinite set of weights [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], with its own set of weights for each join path length. The weights from different sets are defined in different functional spaces (they have a different number of arguments, i.e. vertices, and so on), while all of them are “coupled” in the noise functional. There are also less important problems with ray absorption etc. As a result, the optimal weights obey an infinite system of linear integral equations, which is extremely tedious. And their kernels must be calculated by solving yet other integral equations similar to the “rendering equation”, and so on.
      </p>
      <p>2. BDPM with fixed BDD, adaptive BDD and MIS</p>
      <p>Meanwhile, in practice the BDPM method can be used even without MIS, with a single strategy where camera and light sub-paths merge at pre-defined camera vertices. This approach can be called “fixed BDD”, where BDD means “backward diffuse depth”, i.e. the maximally allowed number of diffuse scattering events for the camera ray. Here,
• the camera ray terminates after BDD diffuse events, so we have only BDD+1 vertices for merging;
• the integration sphere around the last camera vertex “catches” all light rays;
• the integration sphere around the remaining vertices “catches” only direct and caustic light rays, i.e. those which did not undergo any diffuse scattering before.</p>
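      <p>The fixed-BDD merging rules above can be condensed into a small predicate; a minimal sketch with illustrative names:
```python
# A minimal sketch of the fixed-BDD merging rule described above.
# A camera vertex is identified by its diffuse depth k (1-based), and a
# light sub-path by the number of diffuse scattering events it underwent
# before reaching the integration sphere. All names are illustrative.

def may_merge(camera_depth, bdd, light_diffuse_events):
    """Return True if a light hit may be merged at this camera vertex."""
    if camera_depth > bdd:
        return False             # camera ray terminated after BDD events
    if camera_depth == bdd:
        return True              # the last vertex catches all light rays
    # intermediate vertices catch only direct and caustic light rays,
    # i.e. those with no diffuse scattering before the hit
    return light_diffuse_events == 0

# The last camera vertex (k == BDD) accepts everything:
assert may_merge(3, 3, 5)
# Earlier vertices accept only direct/caustic photons:
assert may_merge(1, 3, 0) and not may_merge(1, 3, 2)
```
</p>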
      <p>Here it is important that BDD is not adaptive: it is chosen at the beginning of the camera ray and does not depend on the further ray history.</p>
      <p>
        This rigid approach is not optimal. It seems better to choose BDD adaptively, depending on the ray history. Roughly speaking, the camera ray should not stop at a surface whose optical properties are represented by a too sharp bidirectional distribution function (BDF), because otherwise we would have to collect diffuse illumination there, and in case of a sharp BDF this results in high noise [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. In other words, it would be good to decide whether to terminate or continue the camera path, i.e. to choose between BDD and BDD+1 at each vertex. Then, since both strategies are correct, i.e. give the same mean luminance, we can use both to increase the number of samples and thus reduce noise.
      </p>
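      <p>A toy illustration (not the criterion derived in this paper) of such an adaptive choice, where BDF sharpness is modelled by a glossiness exponent and the threshold is an arbitrary tuning parameter:
```python
# An illustrative criterion for adaptive BDD: continue the camera path
# one more segment when the BDF at the current vertex is too sharp to
# collect diffuse illumination with low noise. Sharpness is modelled
# here by a Phong-like glossiness exponent; both the measure and the
# threshold are assumptions made for this sketch only.

def continue_camera_path(glossiness, threshold=50.0):
    """True -> behave like BDD+1 (go one vertex further), False -> stop."""
    return glossiness > threshold

assert continue_camera_path(200.0)     # sharp BDF: gather at the next vertex
assert not continue_camera_path(1.0)   # diffuse surface: stop here
```
</p>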
      <p>In this paper we shall consider just this simplified problem: “weighting” the results of the BDD and BDD+1 cases. Instead of terminating at BDD we just set zero weight for BDD+1. This is an extreme case; normally the weight is not exactly 0, so diffuse illumination is collected both at the last vertex (as for BDD+1) and at the last but one (as for BDD). The weights are now just two functions. They are calculated so that the variance of the calculated image luminance is minimal. They depend on the camera sub-path.</p>
      <p>
        We derive the integral equation for the weights under these assumptions and obtain its solution in a closed form. One can see from it that even in this simplest case the optimal weights depend on many factors which are absent in the “balance” heuristic [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] for classic MCRT, where they depend on the BDFs and light source emission only. Then we show how the weights can be calculated from photon maps.
      </p>
      <p>3. Weighting the contribution in bi-directional ray tracing</p>
      <p>General note. Calculations below assume that the total flux of all scene lights is unity. For a non-unit flux, the irradiance and radiance entering the formulae are understood as divided by the total flux.</p>
      <p>
        Regardless of the particular sort of bi-directional MCRT, the application of MIS to it is the same. The contribution to pixel luminance from a light ray and a camera ray is [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]
C = Σ_{k,m} w(…) K(x^{(c)}_k − x^{(l)}_m) f(…) E^{(c)}(x^{(c)}_0, …, x^{(c)}_k) E^{(l)}(x^{(l)}_0, …, x^{(l)}_m)
where k cycles over all camera path vertices and m cycles over all light path vertices, K is the integration kernel and f is the BDF in luminance units at the point x^{(c)}_k.
      </p>
      <p>(Figure 1: a join path with numbered camera and light vertices.) Here w_{n,k} is the weight of merging at the k-th camera vertex when the join path has n vertices (i.e. the light part of the join path has n − k vertices). The weights must be functions of that full path such that
Σ_k w_{n,k}(x_0, …, x_n) = 1   (1)</p>
      <p>The weights may depend on all n vertices of the join path, though they can also be independent of some of them; so we have a very high-dimensional configuration space, which is not good for numerical calculations. Happily, it is possible to constrain it, which much simplifies the problem of finding the optimal weights.</p>
      <p>4. Weights for fixed BDD (single strategy)</p>
      <p>Let us consider the case of fixed BDD=M: direct and caustic rays are taken at vertices up to the M-th; diffuse rays at the M-th vertex only; further vertices do not collect rays. For the sake of simplicity we assume that there is no specular scattering (caustic rays) in the scene. Then, since a direct (light) ray has a single segment before the junction, the weights are
w_{n,M} = 1 for n &gt; M
w_{k+1,k} = 1 for k = 1, …, M − 1
the rest being 0.</p>
      <p>Superimposed tables of the weight matrices (with axes n and k) for BDD=M and BDD=M+1 are shown in fig. 2.</p>
      <p>5. Mixing the two strategies: BDD and BDD+1</p>
      <p>
        We can now mix the two above strategies using weights w_0 and w_1 ≡ 1 − w_0 for the BDD=M and BDD=M+1 strategies, respectively, where w_0 is the same for all light sub-paths and is an arbitrary deterministic function of the join path. Then the average pixel luminance calculated with that mixed strategy will be correct, which can be shown as in [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] or [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. We shall also show this simple derivation below.
      </p>
      <p>
        Since an arbitrary function of the full path suits, we can use one which depends only on three vertices: x^{(c)}_{M−1}, x^{(c)}_M and x^{(c)}_{M+1}. With such weights the contribution of the camera + light trajectories is
C = w_0(x^{(c)}_0, …, x^{(c)}_M, y) × (the luminance merged at x^{(c)}_M, including the direct term L_0)
  + w_1(x^{(c)}_0, …, x^{(c)}_{M+1}) × (the luminance merged at x^{(c)}_{M+1})   (2)
where L_0(x; v) is the luminance of a surface point x in direction v under direct illumination, y is the previous hit point of the merged light ray, and x_{M+1} → x_M etc. is the unit vector connecting the two vertices.
      </p>
      <p>For the sake of simplicity we assume that the FMCRT works so that the light ray energy is always 1, and that L_0(x; v) is calculated exactly (without noise!), as it is when there is only a finite number of parallel or point light sources.</p>
      <p>6. How all this works in BDPM</p>
      <p>How is (2) applied in practice? The meaning of this formula is very simple.</p>
      <p>First, everything works as in the case of fixed BDD+1, i.e. we always trace the camera ray to the maximal depth, because so far we do not know whether the last vertex will be used or not (i.e. whether w_1 = 0). The two last vertices are x^{(c)}_M and x^{(c)}_{M+1}.</p>
      <p>Then, we take all the sources of luminance brought to this camera ray. These are
• photons that hit near x^{(c)}_M (if there is direct illumination, it also contributes at this point);
• photons near x^{(c)}_{M+1} (if there is direct illumination, it also contributes there).</p>
      <p>Then, we process photons near x^{(c)}_M as we would do in case of BDD=M, but now the contribution of each photon is scaled by w_0(x^{(c)}_0, …, x^{(c)}_M, y), taking as the last argument of the weight function the previous hit point y of that photon. In other words, y is the start of the FMCRT ray segment which ended near x^{(c)}_M. Direct and caustic photons, if there are any, are taken with unit weight.</p>
      <p>Then, we process photons near x^{(c)}_{M+1} as we would do in case of BDD=M+1, but now the contribution of each photon is scaled by w_1(x^{(c)}_0, …, x^{(c)}_{M+1}); the same scaling is applied to direct and caustic photons, if there are any.</p>
      <p>And this is all. We therefore need the weight function (e.g. w_1, because w_0 is calculated from it) for the arguments (x^{(c)}_{M−1}, x^{(c)}_M, y) and (x^{(c)}_{M−1}, x^{(c)}_M, x^{(c)}_{M+1}), where y are the origins of all diffuse FMCRT ray segments that ended near x^{(c)}_M.</p>
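      <p>The per-photon weighting just described can be sketched as follows; the photon records, the w_0 callable and all names are illustrative:
```python
# Sketch of the two-vertex gathering step of Section 6 (names illustrative).
# Each photon record carries its energy and a flag for direct/caustic.
# w0 may depend on the photon's previous hit point, as described above,
# so it is passed in as a callable of the photon record.

def gather(photons_at_xM, photons_at_xM1, w0_of):
    """Weighted luminance contribution from the two integration spheres."""
    total = 0.0
    for ph in photons_at_xM:
        if ph["direct_or_caustic"]:
            total += ph["energy"]                # unit weight at x_M
        else:
            total += w0_of(ph) * ph["energy"]    # diffuse photons scaled by w0
    for ph in photons_at_xM1:
        w1 = 1.0 - w0_of(ph)                     # w1 = 1 - w0
        total += w1 * ph["energy"]               # same scaling for all photons
    return total

photons_m = [{"energy": 1.0, "direct_or_caustic": False},
             {"energy": 0.5, "direct_or_caustic": True}]
photons_m1 = [{"energy": 2.0, "direct_or_caustic": False}]
lum = gather(photons_m, photons_m1, lambda ph: 0.25)
print(lum)   # 0.25*1.0 + 0.5 + 0.75*2.0 = 2.25
```
</p>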
    </sec>
    <sec id="sec-2">
      <title>7. Calculation of noise</title>
      <p>
        Now let us, for the sake of simplicity, suppose that the direct illumination is negligible. For small M this is really so in rather many scenes. Then the contribution is
C = E(x^{(c)}_0, …, x^{(c)}_M) w_0(…) K f(x^{(c)}_M → x^{(c)}_{M−1}; …)
  + E(x^{(c)}_0, …, x^{(c)}_{M+1}) w_1(x^{(c)}_0, …, x^{(c)}_{M+1}) K f(…)   (3)
where E(x^{(c)}_0, …, x^{(c)}_M) is the energy of the camera ray before it hits x^{(c)}_{M+1}. In BMCRT rays start with E(x^{(c)}_0) = 1 and then
E(x^{(c)}_0, …, x^{(c)}_{M+1}) = E(x^{(c)}_0, …, x^{(c)}_M) σ(x^{(c)}_{M+1} → x^{(c)}_M, x^{(c)}_M)
where
σ(x, v) ≡ ∫ f(x, v; u) |(u ⋅ n(x))| d²u
is the total backward scattering.
      </p>
      <p>
        The noise in (2), i.e. the variance of the pixel luminance calculated with it from N_F forward rays and N_B backward rays (started from the same pixel), obeys the general law [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]:
V = (1/(N_F N_B)) (⟨⟨C²⟩⟩ − ⟨⟨C⟩⟩²) + ((1 − N_F⁻¹)/N_B) (⟨⟨C⟩_F²⟩_B − ⟨⟨C⟩⟩²) + ((1 − N_B⁻¹)/N_F) (⟨⟨C⟩_B²⟩_F − ⟨⟨C⟩⟩²)   (4)
Here ⟨⋅⟩_B is the averaging over the BMCRT ensemble for a fixed FMCRT ray and ⟨⋅⟩_F is the averaging over the FMCRT ensemble for a fixed camera ray. Notice that ⟨⟨C⟩⟩ is independent of the weights. This linear term is independent of the order of averaging, so we drop the subscripts here.
      </p>
      <sec id="sec-2-1">
        <title>Simplified expression of noise</title>
        <p>By experience, in the absolute majority of practical cases the first two terms heavily dominate, so we can drop the third. Then N_B ≫ 1 and we can write the approximate simplified noise law as
V = N_B⁻¹ ⟨ N_F⁻¹ (⟨C²⟩_F − ⟨⟨C⟩⟩²) + (⟨C⟩_F² − ⟨⟨C⟩⟩²) ⟩_B   (5)
        </p>
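        <p>The structure of this law, i.e. the appearance of per-ensemble terms that do not decay as 1/(N_F N_B), can be checked on a toy model where the contribution factorizes into independent forward and backward parts; everything below is an illustrative model, not the BDPM estimator itself:
```python
import random

# Toy check of the structure of the noise law: with N_F forward and N_B
# backward samples, every pair (i, j) contributes C_ij = f(X_i) g(Y_j),
# so the pairs are correlated through shared "tails". The variance of
# the mean over all N_F*N_B pairs then has three terms, not the single
# Var(C)/(N_F*N_B) term of independent sampling.

def trial(rng, nf, nb):
    xs = [rng.random() for _ in range(nf)]
    ys = [rng.random() for _ in range(nb)]
    sf = sum(xs) / nf                      # mean of f(x) = x
    sg = sum(y * y for y in ys) / nb       # mean of g(y) = y^2
    return sf * sg     # equals the mean of C_ij over all N_F*N_B pairs

def empirical_variance(seed, trials, nf, nb):
    rng = random.Random(seed)
    vals = [trial(rng, nf, nb) for _ in range(trials)]
    m = sum(vals) / trials
    return sum((v - m) ** 2 for v in vals) / (trials - 1)

# Analytic moments for f(x) = x, g(y) = y^2 on (0, 1):
Ef, Ef2 = 1 / 2, 1 / 3
Eg, Eg2 = 1 / 3, 1 / 5
vf, vg = Ef2 - Ef ** 2, Eg2 - Eg ** 2

nf, nb = 8, 8
# Var(mean_f * mean_g) = (vf/nf + Ef^2)(vg/nb + Eg^2) - Ef^2 Eg^2,
# which expands into the cross term vf*vg/(nf*nb) plus the two
# single-ensemble terms Eg^2*vf/nf and Ef^2*vg/nb.
predicted = (vf / nf + Ef ** 2) * (vg / nb + Eg ** 2) - Ef ** 2 * Eg ** 2
measured = empirical_variance(1, 40_000, nf, nb)
print(predicted, measured)   # the two agree within a few percent
```
</p>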
        <p>Averaging over the illuminating FMCRT rays is simple:
⟨(⋅)⟩_F = ∫ (⋅) |(u ⋅ n(z))| I(z, u) d²u
where z is the space point where we calculate it and I is the irradiance. Applying it to the ray contribution (3) gives ⟨C⟩_F. Here and below x_{M+1} is the (M+1)-th vertex of the join path (a light vertex in the first line of (3) and a camera vertex in the second), x ≡ (x_0, …, x_M) denotes the camera path common for BDD and BDD+1, and we drop the superscript (c) because there are no light ray vertices.</p>
        <p>Also here and below u is the direction of the camera ray segment before x_M and v is the direction of the join path after x_M (see Figure 3):
u ≡ x_M → x_{M−1}
v ≡ x_{M+1} → x_M
E(x, v) ≡ |(v ⋅ n(x))| I(x, v)
where E(x, v) is the irradiance of x in direction v and L(x, v) is the radiance from x in direction v.</p>
        <p>Then, assuming that the kernel K equals S⁻¹ within the integration sphere and 0 outside it, so that K² = S⁻¹ K, we have in the limit S → 0 an expression for ⟨C²⟩_F whose cross term is o(S⁻¹). Here
E₂(v, x_{M+1}) ≡ ∫ f²(u, v; x_{M+1}) I(x_{M+1}, u) d²u</p>
        <p>Now we must average over the ensemble of camera rays. Let us denote the probability density of the M first vertices of the camera ray as p^{(M)}(x_0, …, x_M).</p>
        <p>
          In a usual BMCRT the distribution of the scattered direction is proportional to the BDF at the hit point, so:
p^{(M+1)}(x, x_{M+1}) = p^{(M)}(x) (f(x_M, u; v) |(v ⋅ n(x_M))| / σ(x_M, u)) G(x_M, x_{M+1})   (6)
where G(…) transforms the angular density into the spatial one, see [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ]:
        </p>
        <p>
G(x_M, x_{M+1}) ≡ (v ⋅ n(x_{M+1})) / |x_M − x_{M+1}|² = ((x_M − x_{M+1}) ⋅ n(x_{M+1})) / |x_M − x_{M+1}|³   (7)
Substituting (6) and (7) into (5) and integrating over p^{(M+1)}(x_0, …, x_{M+1}) we arrive at
V = N_F⁻¹ ∫ w_0²(x, x_{M+1}) f²(u, v; x_M) I(x, v) P d²v d²x_{M+1}
  + N_F⁻¹ ∫ w_1²(x, x_{M+1}) E(x, v) E₂(v, x_{M+1}) P d²v d²x_{M+1}
  + ∫ w_0²(x) (…) P d²(…)
  + ∫ w_1²(x, x_{M+1}) E(x, v) E²(v, x_{M+1}) P d²v d²x_{M+1}
  + 2 ∫ w_1(x, x_{M+1}) E(v, x_{M+1}) w_0(x) P d²v d²x_{M+1} + const
where the “const” is independent of the weights and
P ≡ p^{(M)}(x_0, …, x_M) G(x_M, x_{M+1})
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>8. Optimal weights</title>
      <p>By definition, the optimal weights are those which produce an extreme value of the noise. Recalling that w_0 = 1 − w_1, we calculate the variation of this quadratic expression in response to the change w_1 ↦ w_1 + δw_1:
δV = ∫ Φ(x, x_{M+1}) δw_1(x, x_{M+1}) (…)
where
Φ(x, x_{M+1}) = −N_F⁻¹ w_0(x, x_{M+1}) f(u, v; x_M) I(x, v)
  + N_F⁻¹ w_1(x, x_{M+1}) E(x, v) E₂(v, x_{M+1})
  + w_1(x, x_{M+1}) E(x, v) E²(v, x_{M+1})
  + w_0(x) (E(v, x_{M+1}) − E(x, v))
  − E(x, v) ∫ w_1(x, x′_{M+1}) E(x′, x′_{M+1}) f(u′, v; x_M) |(u′ ⋅ n(x_M))| d²x′_{M+1}
      </p>
      <p>Obviously the extremum condition δV = 0 for an arbitrary δw_1 is satisfied if and only if Φ(x, x_{M+1}) = 0.</p>
      <p>Obviously the diffuse irradiance of a point equals the radiance of the surface seen from this point in that direction, so Φ(x, x_{M+1}) = 0 implies E(x, v) = E(v, x_{M+1}) and
D(x, u; v) w_1(x_0, …, x_{M+1}) − (…) = N_F⁻¹ f̃(x, u; v)   (8)
where
D(x, u; v) ≡ N_F⁻¹ f̃(x, u; v) + N_F⁻¹ (…) + (…)
y being the hit point of the ray fired from x in direction −v, and
f̃(x, u; v) ≡ f(x, u; v) / ∫ f(x, u; v′) |(v′ ⋅ n(x))| d²v′
is the “backward normalized” BDF.</p>
      <p>This equation admits a solution which depends only on x_M and the camera ray directions before and after it, w_1 = w_1(x, u, v), because then
B_1(x, u) ≡ ∫ w_1(x, u, v) f(x, u; v) E(x, v) d²v
and x_{M+1} can be calculated from x_M and v. So
w_1(x, u, v) = N_F⁻¹ f̃(x, u; v) / D(x, u; v) + (∫ w_1(x, u, v′) f̃(x, u; v′) E(x, v′) d²v′) / D(x, u; v)   (9)
      </p>
      <p>
        This is an integral equation, unlike the “balance heuristic” from [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]. In practical cases its solution is very expensive, although it is mathematically trivial.
      </p>
      <p>Substituting w_1 from (9) into (8) we obtain
w_1(x, u, v) = W_1(x, u, v) + (A / (1 − B)) f̃(x, u; v) / D(x, u; v)   (10)
where
W_1(x, u, v) ≡ N_F⁻¹ f̃(x, u; v) / D(x, u; v)
is the weight “calculated neglecting G_1”, and
A ≡ ∫ W_1(x, u, v) f̃(x, u; v) E(x, v) d²v
B ≡ ∫ W_1(x, u, v) E(x, v) d²v
      </p>
      <p>Recall that, as usual, u is the direction of the camera ray and v is the direction of the join path.</p>
      <p>9. Calculation of weight in ray tracing</p>
      <p>The general idea is that all the integrals that enter the weight formula are calculated by the Monte-Carlo method from photon maps.</p>
      <p>In ray tracing, E(v, x_{M+1}) and E₂(v, x_{M+1}) (which is rather similar to E(v, x_{M+1}), just with the BDF squared) are calculated as sums over the photon hits in the integration sphere:
E(v, x_{M+1}) ≈ (1/(N_F S)) Σ_j f(u_j, v; x_{M+1})
E₂(v, x_{M+1}) ≈ (1/(N_F S)) Σ_j f²(u_j, v; x_{M+1})
where the sums are over the FMCRT hits (y_j, u_j) inside the integration sphere around x_{M+1}.</p>
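      <p>A minimal sketch of such photon-map sums, with a toy BDF and illustrative names; the normalization assumes unit photon energy and the kernel K = 1/S:
```python
import math

# Sketch of the photon-map estimates of this section: quantities like
# E(v, x) and E2(v, x) are sums over the N_F forward-ray hits that fall
# into the integration sphere (area S) around x, weighted by the BDF f
# or by its square. The BDF below is a toy Lambertian-like function and
# all names are illustrative.

def f_bdf(u, v):
    # toy symmetric BDF of the angle between incoming u and outgoing v
    cos_uv = max(0.0, u[0] * v[0] + u[1] * v[1] + u[2] * v[2])
    return cos_uv / math.pi

def photon_sums(hits, v, n_forward, sphere_area):
    """Return (E, E2): BDF- and BDF^2-weighted sums over photon hits."""
    norm = 1.0 / (n_forward * sphere_area)
    e = norm * sum(f_bdf(u, v) for u, _energy in hits)
    e2 = norm * sum(f_bdf(u, v) ** 2 for u, _energy in hits)
    return e, e2

# two hits with incoming directions u_j and unit energy
hits = [((0.0, 0.0, 1.0), 1.0), ((0.6, 0.0, 0.8), 1.0)]
v = (0.0, 0.0, 1.0)
e, e2 = photon_sums(hits, v, n_forward=1000, sphere_area=0.01)
print(e, e2)
```
</p>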
      <p>Notice we need the weight for the directions v of:
1. all FMCRT photons in the integration sphere;
2. the camera ray after scattering.</p>
      <p>Thus we calculate E(v, x_{M+1}) and E₂(v, x_{M+1}) for all those directions (or all those x_{M+1}, because v = x_{M+1} → x_M, see Figure 3). Then we have W_1(x, u, v) for all those directions.</p>
      <p>Approximate calculation of the integrals A and B is rather simple. We estimate them as
A ≈ (1/(N_F S)) Σ_j W_1(x, u, v_j) f̃(x, u; v_j)   (11)
B ≈ (1/(N_F S)) Σ_j W_1(x, u, v_j)   (12)
where the sums are over the FMCRT hits (y_j, u_j) inside the integration sphere around x_M; W_1 for those directions has all the same been calculated above. Notice that the direction of the scattered camera ray is not included in that sum, although we know W_1 for it, because it has an angular distribution different from the one needed to estimate this integral.</p>
      <p>The scheme of calculation is shown in Figure 3.</p>
      <p>Knowing A, B and W_1 for all directions of the join path past x_M, we calculate w_1 for them and thus weight the contributions from the continuation of the camera ray past x_M (with weight w_1) and the photons near x_M (with weight 1 − w_1).</p>
      <p>As said at the very beginning, the weights must be deterministic functions of the join path; in other words, for a given (x, u, v) the weight must be calculated the same during the whole MCRT process. But if we calculate the integrals from a photon map, the result will differ from iteration to iteration because the photon maps change. The remedy is to freeze the photon map used for the calculation of the integrals in the weight formula, so that it is the same for all iterations. For example, we can always use the photon map from the 1st iteration.</p>
      <sec id="sec-3-1">
        <title>Discussion</title>
        <p>Notice that the calculation of the luminance brought by the camera ray, described in Section 6, works as usual, i.e. it uses the “main” photon map (from the current iteration). The set of arguments (x, u, v) for which we need the weights is therefore also taken from this photon map. It is only the sums (11) and (12) which are taken over the FMCRT hits (y_j, u_j) from the frozen photon map. Meanwhile the positions of the integration spheres (that collect those hits) come from the current, iteration-dependent photon map. This is natural because they determine only the ray segment we calculate the weight function for. And if in some next iteration we face the same ray, the weight calculated for it will be exactly the same, thus satisfying the conditions of unbiased estimation.</p>
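        <p>The frozen-map bookkeeping can be sketched as a small cache; the map contents and the integral evaluator are illustrative stubs:
```python
# Sketch of the "frozen photon map" rule: the integrals in the weight
# formula are always evaluated against the photon map of the first
# iteration, so that a given camera-ray configuration receives the same
# weight in every iteration (a requirement for unbiased estimation).
# The map contents and the integral evaluator are illustrative stubs.

class WeightCalculator:
    def __init__(self):
        self.frozen_map = None          # fixed after the first iteration

    def begin_iteration(self, current_map):
        if self.frozen_map is None:
            self.frozen_map = list(current_map)   # freeze a copy once

    def weight_integral(self, ray_key):
        # deterministic pseudo-integral over the frozen map only
        return sum(e for k, e in self.frozen_map if k == ray_key)

wc = WeightCalculator()
wc.begin_iteration([("r1", 0.5), ("r2", 0.25)])   # iteration 1: frozen
a1 = wc.weight_integral("r1")
wc.begin_iteration([("r1", 9.0)])                 # iteration 2: ignored
a2 = wc.weight_integral("r1")
assert a1 == a2 == 0.5    # same ray, same weight, in every iteration
```
</p>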
        <p>This paper is an attempt to find a compromise: use sub-optimal weights to simplify their calculation. Our idea is to weight just two strategies: terminate the camera ray at this vertex or continue it by yet one segment. In this case there are only two weights, and they are independent of the “early part” of the light path. They are “sub-optimal” because the weights at all vertices but two are fixed and independent of the “early part” of the light path, so the minimum of noise achieved with them can be reduced further (we hope not by much).</p>
        <p>We show that in this case the optimal weights obey a linear integral equation that admits a solution in closed form, i.e. the solution is calculated analytically from some integrals that must be calculated numerically. We explain how they can be calculated using FMCRT.</p>
        <p>
          From (10) we see that the optimal weights for this situation
• are not local, i.e. depend on scene properties at points other than the current path (via the integrals A and B);
• depend on the BDFs and irradiance;
• depend on the number of forward (light) paths traced in an iteration;
• depend on the area of the integration sphere,
while the “balance/power heuristic”, i.e. the formulae for weights in classic bi-directional ray tracing [
          <xref ref-type="bibr" rid="ref2 ref3">2, 3</xref>
          ], depends only on the BDFs and the distribution of light source emission.
        </p>
        <p>In simple extreme cases the weights give the “intuitively obvious” result:
• if the BDF at x_M is very sharp, w_1 = 1, i.e. we go to x_{M+1} to collect the diffuse illumination;
• if the BDF at x_M is smooth while at x_{M+1} either the BDF is sharp or the illumination is highly inhomogeneous in space, w_1 = 0, i.e. we stop at x_M and collect all the illumination there;
• if the number of light paths N_F (or the integration area S) is very large, then also w_1 = 0, i.e. we stop at x_M;
• if the integration area is very small, w_1 = 1, i.e. we go to x_{M+1} and collect the diffuse illumination there.</p>
        <p>But in less contrast cases, e.g. for a sharp BDF at x_M and inhomogeneous illumination at x_{M+1}, the weight is neither 0 nor 1, cannot be predicted that simply, and (10), with the sums (11) and (12), is needed.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Acknowledgments</title>
      <p>The study was carried out within the framework of
the RFBR grants 18-01-00569 and 20-01-00547.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M.</given-names>
            <surname>Pharr</surname>
          </string-name>
          and
          <string-name>
            <given-names>G.</given-names>
            <surname>Humphreys</surname>
          </string-name>
          .
          <year>2010</year>
          .
          <article-title>Physically Based Rendering, Second Edition: From Theory to Implementation</article-title>
          . Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>E.</given-names>
            <surname>Veach</surname>
          </string-name>
          .
          <article-title>A dissertation: Robust Monte-Carlo methods for light transport simulation</article-title>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Vorba</surname>
          </string-name>
          .
          <article-title>Bidirectional photon mapping</article-title>
          .
          <source>In Proceedings of CESCG 2011: The 15th Central European Seminar on Computer Graphics</source>
          , Prague,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>I.</given-names>
            <surname>Georgiev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Křivánek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Davidovič</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Ph.</given-names>
            <surname>Slusallek</surname>
          </string-name>
          .
          <year>2012</year>
          .
          <article-title>Light transport simulation with vertex connection and merging</article-title>
          .
          <source>ACM Trans. Graph</source>
          .
          <volume>31</volume>
          ,
          <issue>6</issue>
          ,
          Article 192
          (
          <year>November 2012</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ershov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhdanov</surname>
          </string-name>
          ,
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Voloboy</surname>
          </string-name>
          .
          <article-title>Estimation of noise in calculation of scattering medium luminance by MCRT</article-title>
          .
          <source>Mathematica Montisnigri</source>
          , XLV:
          <fpage>60</fpage>
          -
          <lpage>73</lpage>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Jensen</surname>
          </string-name>
          ,
          <article-title>Global illumination using photon maps</article-title>
          ,
          <source>in Proceedings of the Eurographics Workshop on Rendering Techniques '96</source>
          , (London, UK, UK), pp.
          <fpage>21</fpage>
          -
          <lpage>30</lpage>
          , Springer-Verlag,
          <year>1996</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>S.</given-names>
            <surname>Popov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ramamoorthi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Durand</surname>
          </string-name>
          , and
          <string-name>
            <given-names>G.</given-names>
            <surname>Drettakis</surname>
          </string-name>
          ,
          <article-title>Probabilistic Connections for Bidirectional Path Tracing</article-title>
          , Computer Graphics Forum,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Dodik</surname>
          </string-name>
          ,
          <article-title>Implementing probabilistic connections for bidirectional path tracing in the Mitsuba Renderer</article-title>
          , Sept.
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.V.</given-names>
            <surname>Ershov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.D.</given-names>
            <surname>Zhdanov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.G.</given-names>
            <surname>Voloboy</surname>
          </string-name>
          .
          <article-title>Treating diffuse elements as quasispecular to reduce noise in bidirectional ray tracing //</article-title>
          <source>Proceedings of 28th International conference on computer graphics and vision</source>
          GraphiCon-2018, Tomsk, pp.
          <fpage>20</fpage>
          -
          <lpage>25</lpage>
          . URL: https://www.graphicon.ru/html/2018/papers/20-25.pdf
          Sergey V. Ershov, PhD, senior researcher in the Keldysh Institute of Applied Math of RAS. E-mail: measure@spp.keldysh.ru Alexey G. Voloboy, Doctor of Science in physics and mathematics, leading researcher in the Keldysh Institute of Applied Math of RAS. E-mail: voloboy@gin.keldysh.ru Dmitry D. Zhdanov, PhD, associate professor in the Information Technologies, Mechanics and Optics (ITMO) University. E-mail: ddzhdanov@mail.ru Andrey D. Zhdanov, PhD student in the Information Technologies, Mechanics and Optics (ITMO) University. E-mail: adzhdanov@itmo.ru
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>