<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Effective technology to visualize virtual environment using 360-degree video based on cubemap projection</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Federal State Institution «Scientific Research Institute for System Analysis of the Russian Academy of Sciences»</institution>
          ,
          <addr-line>Moscow</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>P.Y. Timokhin</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>The paper deals with the task of increasing the efficiency of high-quality visualization of a virtual environment (VE) based on video with a 360-degree view created using the cubemap projection. Such visualization needs the VE images on the cube faces to be of high resolution, which prevents a smooth change of frames. To solve this task, an effective technology for extracting and visualizing the visible faces of the cube is proposed, which allows the amount of data sent to the graphics card to be significantly reduced without any loss of visual quality. The paper proposes algorithms for extracting visible faces that take into account all possible cases of cube edges hitting or missing the camera's field of view. Based on the obtained technology and algorithms, a software package was implemented and tested on a 360-video of a virtual experiment on observing the Earth from space. Testing confirmed the effectiveness of the developed technology and algorithms in solving the task. The results can be applied in various fields of scientific visualization, in the construction of virtual environment systems, video simulators, virtual laboratories, in educational applications, etc.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>An important part of much current research is the visualization of scientific data and experiments using a 3D virtual environment (VE) that simulates the object of study [1]. This is especially in demand in fields where research is associated with high risk and work in hard-to-reach environments: medicine [2], space [3], the oil and gas industry [4], etc. One of the effective forms of sharing such visualization between researchers is video with a 360-degree view, during playback of which the viewer can rotate the camera in an arbitrary direction and feel the effect of immersion in the VE. For instance, using a 360-video, anyone can explore the virtual model of the center of the galaxy [5].</p>
      <p>To create a 360-video, various methods are used to unwrap a spherical panorama onto a plane [6]: the equidistant cylindrical projection, the cubemap projection, a projection on the faces of the viewer's frustum [7], etc. One of the most widespread is the cubemap projection, in which the panorama is mapped onto the 6 faces of a cube, where each face covers a viewing angle of 90 degrees. When playing such a video, the viewer is inside the cube and looks at its faces bearing the images of the virtual environment. To feel the immersion effect, it is important that the images on the faces have a sufficiently high resolution, i.e. the viewer should not see their discrete (pixel) structure. Increasing the resolution leads to a large stream of graphic data, which impedes the visualization process. In this regard, the task of reducing the amount of streaming data without noticeable loss of visualization quality arises.</p>
      <p>In this paper, an effective technology for solving this task is proposed, based on the extraction and visualization of the cube faces that are visible to the viewer. The technology is implemented in C++ using the OpenGL graphics library.</p>
    </sec>
    <sec id="sec-2">
      <title>2. The pipeline of 360-video visualization</title>
      <p>Consider the task of visualizing a 360-video whose frames comprise the images of cube faces (face textures), as shown in Fig. 1. The faces are named as they are seen by the viewer inside the cube. To visualize the 360-video, a virtual 3D scene is created containing a unit cube model centered at the origin of the World Coordinate System (WCS). The viewer's virtual camera CV is placed at the cube center and is initially directed at the front face (Fig. 2), where v and u are the "view" and "up" vectors of camera CV (in WCS), and r = v × u is the "right" vector. The pipeline of 360-video visualization includes reading a frame from the video file at the frame rate specified in the video; extracting face textures from the frame and applying them to the cube model; and synthesizing the image of the textured cube model from camera CV. When watching a 360-video, we allow camera rotation around the X and Y axes of its local coordinate system, which corresponds to tilting the head up/down and turning it left/right.</p>
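      <p>Schematically, this pipeline can be expressed by the following loop (a minimal C++ sketch; decodeNextFrame, uploadFaceTextures and renderCube are hypothetical helper names standing for the three stages listed above):</p>
      <preformat>
// A minimal sketch of the visualization loop (hypothetical helpers).
while (playing) {
    Frame frame = decodeNextFrame(video);   // read a 360-frame at the video frame rate
    uploadFaceTextures(frame);              // extract face textures, apply them to the cube model
    renderCube(cameraCV);                   // synthesize the image of the textured cube
}
      </preformat>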
      <p>Fig. 1. The location of cube faces in the frame of 360-video</p>
      <p>The bottleneck of the described pipeline is the transfer of face textures from RAM to video memory (VRAM). Transferring all 6 face textures (the entire frame of the 360-video) to VRAM would be extremely inefficient and would impede the smoothness of visualization. To solve this problem, we propose a technology based on extracting and visualizing only those cube faces that are visible to camera CV.</p>
    </sec>
    <sec id="sec-3">
      <title>3. The technology to extract and visualize visible cube faces</title>
      <p>To identify visible cube faces, we introduce the term "face pair" - two cube faces with a common edge. We enumerate the cube vertices as shown in Fig. 2 and specify the edges through them: {0, 1}, {1, 5}, {5, 4}, {4, 0}, {6, 4}, {7, 5}, {3, 1}, {2, 0}, {2, 3}, {3, 7}, {7, 6}, {6, 2}. These edges correspond to the following face pairs: {0, 1}, {0, 3}, {0, 2}, {4, 0}, {2, 4}, {2, 3}, {3, 1}, {1, 4}, {1, 5}, {3, 5}, {5, 2}, {4, 5}, where 0…5 are the face numbers from Fig. 1. Depending on the orientation and projection parameters of camera CV, a cube edge may hit the frustum of the camera or miss it. Every edge hitting the frustum determines a face pair that needs to be rendered; if no edge hits the frustum, camera CV captures a single cube face. These enumerations can be stored as two constant arrays, as sketched below.</p>
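      <p>In C++, the enumerations above can be stored, for example, as two constant arrays (a sketch; the names E and D match the arrays of cube edges and face pairs used in algorithm A3 below):</p>
      <preformat>
// The 12 cube edges as pairs of vertex numbers from Fig. 2, and the
// corresponding face pairs as face numbers from Fig. 1.
static const int E[12][2] = { {0,1}, {1,5}, {5,4}, {4,0}, {6,4}, {7,5},
                              {3,1}, {2,0}, {2,3}, {3,7}, {7,6}, {6,2} };
static const int D[12][2] = { {0,1}, {0,3}, {0,2}, {4,0}, {2,4}, {2,3},
                              {3,1}, {1,4}, {1,5}, {3,5}, {5,2}, {4,5} };
      </preformat>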
      <p>The proposed technology includes five stages. At the first stage, the frustum parameters of camera CV are determined. At the second stage, a boolean table H of cube vertex visibility is created. At the third stage, visible face pairs are extracted using table H. At the fourth stage, a single visible face is extracted (if necessary). At the fifth stage, the extracted cube faces are visualized. Let's consider these stages in detail.</p>
      <sec id="sec-3-1">
        <title>3.1. Frustum parameters</title>
        <p>To determine visible cube faces, we need the following parameters of camera CV's frustum: γ_hor and γ_vert - the horizontal and vertical FOV (field of view) angles; d_n and d_f - the distances to the near and far clipping planes. The angle γ_hor ∈ [δ, π − δ] is user-defined, where δ is a small constant (δ = 1° in our work), and the angle γ_vert is determined by the relation</p>
        <p>tg(γ_vert / 2) = tg(γ_hor / 2) / aspect, (1)</p>
        <p>where aspect ≥ 1 is the aspect ratio of camera CV (the ratio of the frame's width to its height). The distance to the far plane should not be less than half of the longest cube diagonal, so we take d_f = √3/2 + ε, where ε is the machine error of real number representation. The near clipping plane should be located so that the near base of the frustum does not touch the cube faces from the inside. Fig. 3 shows that such a contact point is the intersection of the side line of camera CV's FOV with the center of a cube face. Then, for the distance d_n the following equation can be written:</p>
        <p>(d_n + ε)² = 0.5² − (a² + b²) = 0.5² − (d_n + ε)²·(tg²(γ_hor / 2) + tg²(γ_vert / 2)), (2)</p>
        <p>where a and b are the half-width and half-height of the frustum base at the contact distance d_n + ε. Substituting Eq. (1) into Eq. (2), we find the distance d_n:</p>
        <p>d_n = 1 / (2·√(1 + tg²(γ_hor / 2)·(1 + 1/aspect²))) − ε. (3)</p>
        <p>The described stage is performed when starting the 360-video, as well as each time the user changes γ_hor or aspect.</p>
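        <p>A possible C++ implementation of this stage is sketched below (here the machine error ε is assumed to be taken as FLT_EPSILON):</p>
        <preformat>
#include &lt;cmath&gt;
#include &lt;cfloat&gt;

static const float EPS = FLT_EPSILON;  // machine error ε (an assumption of this sketch)

struct FrustumParams { float gammaHor, gammaVert, dNear, dFar; };

// Computes the frustum parameters of camera CV by Eqs. (1)-(3).
FrustumParams computeFrustumParams(float gammaHor, float aspect) {
    FrustumParams f;
    float t = std::tan(gammaHor / 2.0f);
    f.gammaHor = gammaHor;
    f.gammaVert = 2.0f * std::atan(t / aspect);                 // Eq. (1)
    f.dNear = 1.0f / (2.0f * std::sqrt(1.0f + t * t
              * (1.0f + 1.0f / (aspect * aspect)))) - EPS;      // Eq. (3)
    f.dFar = std::sqrt(3.0f) / 2.0f + EPS;                      // half of the longest cube diagonal
    return f;
}
        </preformat>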
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Table of cube vertices visibility</title>
        <p>To simplify checking the visibility of cube edges, we create a table H that stores boolean flags indicating whether each cube vertex lies in the "+" half-space (where the frustum is located) of each clipping plane of camera CV, except the far plane: relative to that plane, all cube vertices obviously lie in the "+" half-space (see the determination of d_n in Section 3.1). Table H consists of 8 rows (the row index is the cube vertex number), and each row stores 6 flags b0,…,b5:
- in the subgroup b0,…,b4, a raised ith flag means that the vertex lies in the "+" half-space of the ith clipping plane or coincides with it (0 - near, 1 - left, 2 - right, 3 - bottom, 4 - top);
- flag b5 is raised if all flags b0,…,b4 are raised, i.e. the cube vertex is inside the frustum of camera CV.</p>
        <p>Consider some cube vertex P. Denote by P_WCS its coordinates in WCS, and by p the vector O_WCS P_WCS. The calculation of the flags b0,…,b5 for the vertex P is done by the following</p>
        <p>Algorithm A1 to fill a row of the table H
1. Find the projection p_v = (p, v) of the vector p on the "view" vector v of camera CV.
2. Find the projection p_r of the vector p on the "right" vector r similarly to p_v.
3. Find the projection p_up of the vector p on the "up" vector u similarly to p_v.
4. Calculate the size d_hor of the horizontal FOV of camera CV at the line of the vertex P (see Fig. 4): d_hor = 2·p_v·tg(γ_hor / 2).
5. Calculate the size d_vert of the vertical FOV of camera CV at the line of the vertex P similarly to d_hor.
6. Calculate the flags:
b0 = (p_v ≥ d_n), b1 = (p_r ≥ −d_hor/2), b2 = (p_r ≤ d_hor/2),
b3 = (p_up ≥ −d_vert/2), b4 = (p_up ≤ d_vert/2),
b5 = (b0 &amp;&amp; b1 &amp;&amp; b2 &amp;&amp; b3 &amp;&amp; b4).</p>
        <p>Having executed algorithm A1 for each cube vertex in
order, we obtain table H.</p>
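        <p>A possible C++ implementation of algorithm A1 is sketched below (the minimal Vec3 and Camera structures are assumptions of this example; since camera CV is at the WCS origin, the coordinates of a vertex P coincide with the vector p):</p>
        <preformat>
#include &lt;cmath&gt;

struct Vec3 { float x, y, z; };
static float dot(const Vec3&amp; a, const Vec3&amp; b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

struct Camera { Vec3 v, r, u; float gammaHor, gammaVert, dNear; };

// Fills the mth row of table H for cube vertex P (algorithm A1).
void fillRowH(bool H[8][6], int m, const Vec3&amp; P, const Camera&amp; cam) {
    float pv   = dot(P, cam.v);                                 // step 1
    float pr   = dot(P, cam.r);                                 // step 2
    float pup  = dot(P, cam.u);                                 // step 3
    float dhor  = 2.0f * pv * std::tan(cam.gammaHor  / 2.0f);   // step 4
    float dvert = 2.0f * pv * std::tan(cam.gammaVert / 2.0f);   // step 5
    H[m][0] = (pv  >= cam.dNear);                               // step 6: near plane
    H[m][1] = (pr  >= -dhor  / 2.0f);                           // left
    H[m][2] = (pr  &lt;=  dhor  / 2.0f);                        // right
    H[m][3] = (pup >= -dvert / 2.0f);                           // bottom
    H[m][4] = (pup &lt;=  dvert / 2.0f);                        // top
    H[m][5] = H[m][0] &amp;&amp; H[m][1] &amp;&amp; H[m][2] &amp;&amp; H[m][3] &amp;&amp; H[m][4];
}
        </preformat>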
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Extraction of visible face pairs</title>
        <p>Every face pair whose common edge intersects the frustum of camera CV will be extracted for visualization. This is possible in two cases:</p>
        <p>(a) at least one of the edge vertices falls into the frustum. This case is easily detected by checking the b5 flags of the edge vertices in the table H (at least one vertex having a true b5 is enough);</p>
        <p>(b) both vertices lie outside the frustum, but the edge intersects at least one clipping plane of camera CV, and the intersection point falls into the frustum. We divide this case into 3 steps: 1) establishing the fact that the edge intersects the ith clipping plane; 2) finding the coordinates P_I of the intersection point; 3) checking whether the point P_I falls into the frustum.</p>
        <p>Step 1. The fact of an "edge - ith clipping plane" intersection can be easily established using the table H: the edge vertices should have opposite flags b_i. Note that if the edge lies in the ith clipping plane and the edge vertices are outside the frustum, the intersection with this plane will not be established (both flags b_i will be true), but it will surely be established with another clipping plane, so case (b) will be correctly detected. Another important point is that no cube edge can cross the near or far base of the frustum (see Section 3.1), so there is no need to check these planes.</p>
        <p>Step 2. After establishing the fact that a cube edge intersects the ith clipping plane, for instance the right plane (for the remaining planes the derivation is similar), we need to calculate the coordinates P_I of the intersection point. For this, we introduce the following notation: P_A, P_B - the coordinates of the edge vertices in WCS; e - the unit vector P_A P_B; p_A - the vector O_WCS P_A; p_I - the vector O_WCS P_I; p_I,r - the projection of the vector p_I on the vector r of camera CV; p_I,v - the projection of the vector p_I on the vector v of camera CV. In this example, the point P_I lies in the right clipping plane, therefore its projection p_I,r equals half the size of the horizontal FOV at the line of the point P_I (similar to the size d_hor in Fig. 4):</p>
        <p>p_I,r = p_I,v·tg(γ_hor / 2), or (p_I, r) = (p_I, v)·tg(γ_hor / 2). (4)</p>
        <p>Using the distributive property of the dot product, we rewrite Eq. (4) as</p>
        <p>(p_I, χ_right) = 0, where χ_right = r − tg(γ_hor / 2)·v. (5)</p>
        <p>Write another expression for the vector p_I using the vector parametric equation of the line P_A P_B:</p>
        <p>p_I = p_A + t_I·e, (6)</p>
        <p>where t_I is the parameter determining the position of the point P_I on the line P_A P_B. Substituting Eq. (6) into Eq. (5), we find t_I:</p>
        <p>t_I = −(p_A, χ_right) / (e, χ_right). (7)</p>
        <p>Similarly to Eq. (6), write for the coordinates P_I the expression P_I = P_A + t_I·e. Substituting t_I from Eq. (7) into it, we find the required coordinates P_I:</p>
        <p>P_I = P_A − ((p_A, χ_right) / (e, χ_right))·e. (8)</p>
        <p>As one can notice, the coordinates of the intersection points of the edge P_A P_B with the left, top and bottom clipping planes differ from Eq. (8) only by the similar terms χ_left, χ_top and χ_btm, which are derived in the same way:</p>
        <p>χ_left = r + tg(γ_hor / 2)·v, χ_top = u − tg(γ_vert / 2)·v, χ_btm = u + tg(γ_vert / 2)·v. (9)</p>
        <p>Step 3. Having the coordinates P_I of the intersection point, we check whether the point P_I falls into the frustum of camera CV. To do this, we calculate the flag b5 for the point P_I using algorithm A1 and check its value (true means a visible edge). Note that when calculating the flag b5, the calculation of the flag b_i can be omitted, since the point P_I lies in the ith plane and b_i will obviously be true.</p>
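        <p>For the right clipping plane, for instance, the computation of Eqs. (5), (7) and (8) could look as follows (a sketch built on the Vec3 and Camera helpers of the previous listing):</p>
        <preformat>
// Intersection of the edge PAPB with the right clipping plane, Eqs. (5), (7), (8);
// e needs no normalization here, since its length cancels in Eq. (8).
Vec3 intersectRightPlane(const Vec3&amp; PA, const Vec3&amp; PB, const Camera&amp; cam) {
    Vec3 e = { PB.x - PA.x, PB.y - PA.y, PB.z - PA.z };   // direction of the line PAPB
    float tgh = std::tan(cam.gammaHor / 2.0f);
    Vec3 chi = { cam.r.x - tgh * cam.v.x,                 // χ_right, Eq. (5)
                 cam.r.y - tgh * cam.v.y,
                 cam.r.z - tgh * cam.v.z };
    float tI = -dot(PA, chi) / dot(e, chi);               // Eq. (7)
    return { PA.x + tI * e.x, PA.y + tI * e.y, PA.z + tI * e.z };  // Eq. (8)
}
        </preformat>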
        <p>Based on the cases (a) and (b) considered, we define the algorithm for checking the visibility of the edge {m, n}, where m and n are the numbers of the edge vertices introduced at the beginning of Section 3. Denote by b_edge the flag of edge {m, n} visibility. The calculation of b_edge is done by the following</p>
        <p>Algorithm A2 to determine the visibility of the edge {m, n}
1. Check the visibility of the edge vertices (using the table H):
If (H_m,5 || H_n,5) is true, then: b_edge = true, exit the algorithm.
2. Check whether the edge lies in the "-" half-space of any clipping plane:
Loop by i from 0 to 4, where i is the plane number
If (!H_m,i &amp;&amp; !H_n,i) is true, then: b_edge = false, exit the algorithm.
End Loop.
3. Check the presence of at least one visible point P_I of the intersection of the edge with a clipping plane:
Loop by i from 1 to 4
If (H_m,i ^ H_n,i) is true, then:
Calculate the coordinates P_I by Eqs. (8) and (9).
Calculate the flag b5 by algorithm A1.
If b5 is true, then: b_edge = true, exit the algorithm.
End If.
End Loop.
4. b_edge = false.</p>
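        <p>A C++ sketch of algorithm A2 is given below (intersectPlane and pointInFrustum are hypothetical wrappers over Eqs. (8)-(9) and the b5 check of algorithm A1; cubeVertex holds the coordinates of the 8 cube vertices):</p>
        <preformat>
// Visibility of the edge {m, n} (algorithm A2).
bool edgeVisible(bool H[8][6], int m, int n,
                 const Vec3 cubeVertex[8], const Camera&amp; cam) {
    if (H[m][5] || H[n][5]) return true;          // case (a): a vertex is in the frustum
    for (int i = 0; i &lt; 5; ++i)                // the edge lies in a "-" half-space
        if (!H[m][i] &amp;&amp; !H[n][i]) return false;
    for (int i = 1; i &lt;= 4; ++i)               // case (b): side planes only
        if (H[m][i] != H[n][i]) {                 // the edge crosses the ith plane
            Vec3 PI = intersectPlane(i, cubeVertex[m], cubeVertex[n], cam);
            if (pointInFrustum(PI, cam)) return true;  // the b5 check of algorithm A1
        }
    return false;
}
        </preformat>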
        <p>Next, using algorithm A2, visible face pairs are extracted. Denote by B_faces the boolean array of 6 flags of cube face visibility (true/false - the face is visible/not visible), by D and E - the arrays of face pairs and cube edges introduced at the beginning of Section 3, and by b_pair - the flag that at least one face pair is visible. Execute the following</p>
        <p>Algorithm A3 to extract visible face pairs
1. Clear the array B_faces with the value false, b_pair = false.
2. Loop by j from 0 to 11, where j is the edge index
Calculate the flag b_edge of the edge E[j] by algorithm A2.
If b_edge is true, then:
B_faces[D[j][0]] = true.
B_faces[D[j][1]] = true.
b_pair = true.
End If.
End Loop.</p>
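        <p>A compact C++ sketch of algorithm A3, using the arrays E and D listed at the beginning of Section 3 and the edgeVisible function sketched above:</p>
        <preformat>
// Extracts visible face pairs (algorithm A3); returns the flag bpair.
bool extractFacePairs(bool Bfaces[6], bool H[8][6],
                      const Vec3 cubeVertex[8], const Camera&amp; cam) {
    for (int f = 0; f &lt; 6; ++f) Bfaces[f] = false;   // step 1
    bool bpair = false;
    for (int j = 0; j &lt; 12; ++j)                     // step 2: loop over the 12 edges
        if (edgeVisible(H, E[j][0], E[j][1], cubeVertex, cam)) {
            Bfaces[D[j][0]] = true;
            Bfaces[D[j][1]] = true;
            bpair = true;
        }
    return bpair;
}
        </preformat>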
        <p>If algorithm A3 results in a true flag b_pair, we proceed to the stage of visualizing the faces marked in B_faces (see Section 3.5). If b_pair is false (no visible face pairs were found), camera CV captures some single cube face, and we need to extract it for visualization (see Section 3.4).</p>
      </sec>
      <sec id="sec-3-2">
        <title>Extraction of single visible face</title>
        <p>As one can see, the visible face will be the cube face with the smallest angle between its external normal and the vector v of camera CV. To determine the number of such a face, we calculate the cosines of the angles between the normals to the faces and the vector v, and extract the face with the largest cosine. Denote by K the array of cosines for faces 0-5, and by n2, n3 and n5 the normals to the back, right and top cube faces. The sequence of normals for the faces 0-5 is {−n5, −n2, n2, n3, −n3, n5}. Since the normals n2, n3, n5 coincide with the axes OZ_WCS, OX_WCS, OY_WCS, the calculation of the array K reduces to writing the sequence of the vector v coordinates with the signs from the normals' sequence. Execute the following</p>
        <p>Algorithm A4 to extract the single visible kth face
1. K = {−v_y, −v_z, v_z, v_x, −v_x, v_y}.
2. k = 0. // by default, the 0th face is supposed visible
3. Loop by i from 1 to 5, where i is the face number (see Fig. 1)
If K[i] &gt; K[k], then k = i.
End Loop.
4. B_faces[k] = true.</p>
        <p>After executing algorithm A4, the array B_faces will contain one true flag marking the single visible cube face. Visualization of the faces marked in B_faces is performed at the next stage.</p>
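        <p>Algorithm A4 translates directly into C++ (a sketch; v is the "view" vector of camera CV in WCS):</p>
        <preformat>
// Extracts the single visible kth face (algorithm A4).
int extractSingleFace(bool Bfaces[6], const Vec3&amp; v) {
    // Cosines of the angles between v and the face normals {-n5, -n2, n2, n3, -n3, n5}.
    const float K[6] = { -v.y, -v.z, v.z, v.x, -v.x, v.y };
    int k = 0;                        // by default, the 0th face is supposed visible
    for (int i = 1; i &lt;= 5; ++i)
        if (K[i] > K[k]) k = i;
    Bfaces[k] = true;
    return k;
}
        </preformat>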
      </sec>
      <sec id="sec-3-3">
        <title>Visualization of extracted faces</title>
        <p>To each element of the array B_faces corresponds a face texture of d × d pixels, and all 6 face textures, as noted in Section 2, are merged into a 360-frame of 3d × 2d pixels. At this stage, the face textures marked in B_faces are extracted from the 360-frame and applied to the cube model. Face textures are extracted into an array T of 6 texture objects (one object per cube face). Each element of the array T is a contiguous area of VRAM allocated for storing one face texture. It is important to note that in the 360-frame each face texture is stored not as one contiguous block, but as d substrings of d pixels in length (see Fig. 5). Since transferring a large number of small data pieces into VRAM reduces the GPU's performance, we tune the video driver (using the glPixelStorei operator of the OpenGL library) so that the substrings of the face texture are automatically merged and transferred to VRAM as one contiguous piece. This is done in the following</p>
        <p>Algorithm A5 to visualize a 360-frame
1. Clear the frame buffer; set the viewport, as well as the projection and modelview matrices, according to camera CV's parameters.
2. Loop by i from 0 to 5, where i is the face number
If B_faces[i] is true, then:
Set the row length n_RL of the 360-frame, the number n_SR of skipped rows and the number n_SP of skipped pixels in a row (see Fig. 5):
glPixelStorei(…_ROW_LENGTH, 3·d),
glPixelStorei(…_SKIP_ROWS, (i / 3)·d),
glPixelStorei(…_SKIP_PIXELS, (i % 3)·d),
where "…" is a shortening of GL_UNPACK.
Load the ith face texture into the T[i]th texture object by means of the glTexSubImage2D operator.
Render the T[i]th texture object on the ith face.
End If.
End Loop.</p>
        <p>Note that if the 360-frame is not changed during the visualization process (for example, the video is paused), then in step 2 of algorithm A5 the same face textures are not loaded into VRAM repeatedly; the previously loaded ones are used.</p>
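        <p>In OpenGL, the driver tuning and texture loading of step 2 could look as follows (a sketch for the ith face of d × d pixels; the pixel format GL_RGB and the pointer pFrame to the 360-frame in RAM are assumptions of this example):</p>
        <preformat>
glBindTexture(GL_TEXTURE_2D, T[i]);                   // texture object of the ith face
glPixelStorei(GL_UNPACK_ROW_LENGTH, 3 * d);           // nRL: row length of the 360-frame
glPixelStorei(GL_UNPACK_SKIP_ROWS,  (i / 3) * d);     // nSR: rows above the face texture
glPixelStorei(GL_UNPACK_SKIP_PIXELS, (i % 3) * d);    // nSP: pixels to the left of it
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, d, d,
                GL_RGB, GL_UNSIGNED_BYTE, pFrame);    // substrings are merged and loaded to VRAM
        </preformat>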
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>The proposed technology and algorithms were implemented in a software package (a 360-player) written in the C++ language using the OpenGL graphics library. The player performs high-quality visualization of a virtual environment from a 360-video based on cubemap projection. During the visualization, the viewer can rotate the camera (corresponding to tilting the head up/down and turning it left/right), as well as change the camera FOV (viewing angle and aspect).</p>
      <p>The developed solution was tested on a 360-video with a resolution of 3000×2000 pixels, created in the visualization system for virtual experiments on observing the Earth from the International Space Station (ISS) [3]. By means of the developed player, an experiment was reproduced in which the researcher, rotating the observation tool, searches for and analyzes a number of Earth objects along the ISS daily track. Fig. 6a shows an example of a 360-video frame, and Fig. 6b shows the visualization of this frame in the 360-player.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The paper considers the task of increasing the efficiency of VE visualization using 360-degree video based on cubemap projection. High-quality visualization (providing the effect of immersion into the VE) requires high-resolution snapshots of the VE cubemap, which overloads the graphics card and impedes smooth frame changing. To solve this task, an effective technology is proposed, based on the extraction and visualization of visible cube faces, which can significantly reduce the amount of data sent to the graphics card without any loss of visual quality. The paper proposes algorithms to extract visible cube faces both in the case when cube edges fall into the FOV and in the case when no edges are visible (the case of a single visible face). The resulting technology and algorithms were implemented in software and tested on a 360-video containing the visualization of a virtual experiment on observing the Earth from space. The testing of the software confirmed the correctness of the obtained solution, as well as its applicability to virtual environment systems and scientific visualization, video simulators, virtual laboratories, etc. In the future, we plan to extend the results to increase the efficiency of visualization of a VE projected onto a dodecahedron.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgements</title>
      <p>The publication is made within the state task on carrying out basic scientific research (GP 14) on topic (project) “34.9. Virtual environment systems: technologies, methods and algorithms of mathematical modeling and visualization” (0065-2019-0012).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] Bondarev A.E., Galaktionov V.A. Construction of a Generalized Computational Experiment and Visual Analysis of Multidimensional Data // CEUR Workshop Proceedings: Proc. 29th Int. Conf. Computer Graphics and Vision (GraphiCon 2019), Bryansk, 2019, vol. 2485, p. 117-121. http://ceur-ws.org/Vol-2485/paper27.pdf.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] Gavrilov N., Turlapov V. General implementation aspects of the GPU-based volume rendering algorithm // Scientific Visualization. 2011. Vol. 3, № 1. p. 19-31.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] Mikhaylyuk M.V., Timokhin P.Y., Maltsev A.V. A method of Earth terrain tessellation on the GPU for space simulators // Programming and Computer Software. 2017. Vol. 43, p. 243-249. DOI: 10.1134/S0361768817040065.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] Mikhaylyuk M.V., Timokhin P.Yu. Memory-effective methods and algorithms of shader visualization of digital core material model // Scientific Visualization. 2019. Vol. 11, № 5. p. 1-11. DOI: 10.26583/sv.11.5.01.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] Porter M. Galactic Center Visualization Delivers Star Power // https://chandra.harvard.edu/photo/2019/gcenter/ (review date 25.05.2020).</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>El-Ganainy</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hefeeda</surname>
            <given-names>M.</given-names>
          </string-name>
          <string-name>
            <surname>Streaming Virtual</surname>
          </string-name>
          Reality Content // https://www.researchgate.net/publication/3119
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <fpage>25694</fpage>
          _Streaming_Virtual_Reality_
          <source>Content (review date 25.05</source>
          .
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Kuzyakov</surname>
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pio</surname>
            <given-names>D</given-names>
          </string-name>
          .
          <article-title>Next-generation video encoding techniques for 360 video</article-title>
          and VR // https://code.facebook.com /posts/1126354007399553/next-generation-videoencodin
          <source>(review date 25.05</source>
          .
          <year>2020</year>
          ).
          <article-title>Timokhin Petr Yu., senior researcher of Federal State Institution «Scientific Research Institute for System Analysis of the Russian Academy of Sciences»</article-title>
          . E-mail: webpismo@yahoo.de. Mikhaylyuk Mikhail V.,
          <string-name>
            <given-names>Dr. Sc.</given-names>
            (
            <surname>Phys</surname>
          </string-name>
          .-Math.),
          <article-title>chief researcher of Federal State Institution «Scientific Research Institute for System Analysis of the Russian Academy of Sciences». E-mail: mix@niisi</article-title>
          .ras.ru.
        </mixed-citation>
      </ref>
    </ref-list>
    <notes>
      <p>Timokhin Petr Yu., senior researcher of Federal State Institution «Scientific Research Institute for System Analysis of the Russian Academy of Sciences». E-mail: webpismo@yahoo.de.</p>
      <p>Mikhaylyuk Mikhail V., Dr. Sc. (Phys.-Math.), chief researcher of Federal State Institution «Scientific Research Institute for System Analysis of the Russian Academy of Sciences». E-mail: mix@niisi.ras.ru.</p>
    </notes>
  </back>
</article>