<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Development of a Technique for Generating Adaptive Visualization of Three-dimensional Objects in the Cloud Educational Environment</article-title>
      </title-group>
      <contrib-group>
        <aff id="aff0">
          <label>0</label>
          <institution>Orenburg State University</institution>
          ,
          <addr-line>Orenburg</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <fpage>67</fpage>
      <lpage>75</lpage>
      <abstract>
        <p>In this study, we consider cloud virtual reality technology for an educational platform, based on the formation of three-dimensional scenes to prepare students for distance learning. Such advanced learning technologies and learning systems are constantly in demand and in the spotlight. In this work, three-dimensional models were created and integrated into a real scene to provide a 360-degree view of that scene. We analyzed the developed 3D models of the prototype module for adaptive visualization of three-dimensional objects in a virtual educational environment built on a cloud platform. The study revealed the dependence of the minimum number of frames per second on the screen resolution most often used to display the graphical component. We prepared a structural model of the cloud infrastructure for data processing and visualization of three-dimensional objects. Experimental research demonstrates the operation of the prototype module for adaptive visualization of three-dimensional objects on the cloud educational platform.</p>
      </abstract>
      <kwd-group>
        <kwd>Multi-cloud platform visualization</kwd>
        <kwd>3D models</kwd>
        <kwd>Virtual reality</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>
        High-quality visualization of large areas is the main condition for creating
computer graphics in different information systems, such as geographic
information systems (GIS), various simulators, landscape editors, etc. An
important role in forming a photorealistic image of the landscape is played
by the textures imposed on the polygon model. Let us single out a number
of features characteristic of such textures. First, they must have a high
resolution, which is needed when the camera is close to the surface of the landscape:
the closer the camera, the more visible the details. Secondly, from a great height, the
repeatability of the image should not be noticeable if the same type of texture
is repeated many times over the entire surface [
        <xref ref-type="bibr" rid="ref6 ref8">6, 8</xref>
        ].
      </p>
      <p>
        Modern technologies make it possible to move training to a new level through the use
of three-dimensional interactive virtual systems that most adequately reflect the
real world [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
      </p>
      <p>
        The development of 3D training systems can be considered a multi-criteria
optimization problem, the solution of which should ensure the adequacy of the
virtual environment and a speed of computing processes sufficient for the
formation of images in real time [
        <xref ref-type="bibr" rid="ref11 ref12 ref14">11, 12, 14</xref>
        ].
      </p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        To solve the problem of photorealistic texture relief in recent years, many
methods have been proposed. In some works [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], the whole scene is represented by
a single image.
      </p>
      <p>
        The approach of [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] uses a quadtree, each node of which corresponds to a
texture that contains a rectangular area of the original image, filtered into
a full MIP pyramid. Textures that lie higher in the tree cover a larger area
but contain less detail. At runtime, depending on the observation settings, the
desired level of texture detail and the corresponding tree node are selected to
visualize different parts of the surface. Missing sections are loaded from disk,
while unused ones are unloaded from memory.
      </p>
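      <p>The quadtree selection described above can be sketched as follows (a minimal
illustration with hypothetical names and distance-to-level mapping, not the
implementation of [1]): each node covers a square region, and the nodes whose
detail level matches their distance to the camera are chosen for rendering.</p>

```python
import math

class QuadNode:
    """One node of a texture quadtree; deeper nodes cover smaller areas in more detail."""
    def __init__(self, x, y, size, level, max_level):
        self.x, self.y, self.size, self.level = x, y, size, level
        self.children = []
        if max_level > level:
            half = size // 2
            for dx in (0, half):
                for dy in (0, half):
                    self.children.append(
                        QuadNode(x + dx, y + dy, half, level + 1, max_level))

def required_level(distance, max_level):
    # Each doubling of camera distance drops one level of texture detail.
    return max(0, max_level - int(math.log2(max(distance, 1.0))))

def select_nodes(node, cam_x, cam_y, max_level):
    """Collect nodes whose detail level is sufficient for their distance to the camera."""
    cx, cy = node.x + node.size / 2.0, node.y + node.size / 2.0
    need = required_level(math.hypot(cam_x - cx, cam_y - cy), max_level)
    if node.level >= need or not node.children:
        return [node]
    selected = []
    for child in node.children:
        selected.extend(select_nodes(child, cam_x, cam_y, max_level))
    return selected
```

      <p>A distant camera yields the coarse root texture alone, while a nearby camera
descends the tree and returns several finer nodes.</p>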
      <p>
        In [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], the image is divided into rectangular areas of the same size, each of
which can be loaded into the video memory of the graphics card. Approaches
that use a unique image provide high visual variety and avoid repetition.
However, to maintain acceptable visual quality when approaching the surface,
the texture must have a very high resolution and, as a result, a very large size,
which can easily exceed the capacity of the computer's memory. Therefore, such
approaches use complex algorithms for dynamically loading the required image
areas and are associated with high memory costs.
      </p>
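      <p>Such dynamic loading under a memory budget can be illustrated by a simple
least-recently-used tile cache (a hedged sketch with illustrative names and
capacity, not the actual algorithm of [2]): requested tiles are loaded on demand,
and the least recently used tile is evicted when the budget is exceeded.</p>

```python
from collections import OrderedDict

class TileCache:
    """LRU cache for fixed-size texture tiles that must fit in video memory (sketch)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.tiles = OrderedDict()

    def get(self, key, load):
        if key in self.tiles:
            self.tiles.move_to_end(key)       # mark the tile as recently used
            return self.tiles[key]
        tile = load(key)                      # e.g. read the tile image from disk
        self.tiles[key] = tile
        if len(self.tiles) > self.capacity:   # evict the least recently used tile
            self.tiles.popitem(last=False)
        return tile
```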
      <p>
        Scientists from Taiwan proposed a navigation system in virtual
reality that automatically generates animation for objects using algorithms
borrowed from robotics. This system is adapted to one particular building, and
the final scene image is low-polygonal, which significantly affects its quality [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        G. Snook proposes using a linear combination of the colors of four
textures, with weights contained in the components of a control
texture. This method is efficient but does not give high realism of the image,
because only four materials are used [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
      </p>
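      <p>The four-texture blending described above can be sketched with NumPy (an
illustrative sketch; the array shapes and function name are assumptions, not
Snook's implementation): each pixel's color is the weighted sum of four material
textures, with the weights read from the channels of a control texture.</p>

```python
import numpy as np

def splat(textures, control):
    """Blend four material textures using per-pixel weights from a control texture.

    textures: array of shape (4, H, W, 3); control: (H, W, 4) per-pixel weights.
    """
    # Normalize the weights so each pixel's four weights sum to one.
    control = control / np.clip(control.sum(axis=-1, keepdims=True), 1e-8, None)
    # Weighted sum over the four materials, per pixel and color channel.
    return np.einsum('thwc,hwt->hwc', textures, control)
```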
      <p>
        Some other approaches suggest calculating color based on surface
characteristics. For example, in the work of a team of scientists led by C. Dachsbacher,
the texture is generated procedurally on the GPU based on the height and slope of
the terrain at a given point. However, this approach cannot provide high
image quality when approaching the surface at close range, due to the limited texture
resolution [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
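      <p>The height-and-slope idea can be sketched as follows (a minimal CPU
illustration with purely illustrative thresholds and material indices, not the GPU
shader of [5]): the slope is estimated from the heightmap gradient, and each cell
is assigned a material from its height and slope.</p>

```python
import numpy as np

def material_map(height):
    """Choose a material per terrain cell from its height and slope (sketch).

    0 = grass, 1 = rock, 2 = snow; all thresholds are illustrative.
    """
    gy, gx = np.gradient(height.astype(float))
    slope = np.hypot(gx, gy)
    mat = np.zeros(height.shape, dtype=int)  # grass by default
    mat[height > 80] = 2                     # high cells get snow
    mat[slope > 1.5] = 1                     # steep cells become rock regardless
    return mat
```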
      <p>Therefore, the development of 3D training systems can be considered a
multi-criteria optimization problem, the solution of which should ensure the adequacy
of the virtual environment and a speed of computing processes sufficient for
the formation of images in real time.</p>
    </sec>
    <sec id="sec-2a">
      <title>The Architecture of the Virtual Learning Environment</title>
      <p>
        As a methodological basis for the creation and application of the module for
visualization of three-dimensional virtual-reality objects, integrated into the
cloud MOOC platform, a concept is proposed that uses the following parameters
of the environment [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]:
      </p>
      <p>1) the mechanism of interaction of the educational platform with students
by means of a genetic algorithm;</p>
      <p>2) ensuring the operation of the loops of self-learning, learning management,
and management of the educational platform;</p>
      <p>3) creation of the educational environment, models and format of training
that are adequate to the chosen subject area and motivation tasks.</p>
      <p>The operation of the loops of self-learning, learning management, and
management of the educational platform is shown in Fig. 1.</p>
      <p>This approach to training solves the following problems:
- to use optimal technical and technological solutions as the reference
conditions necessary for calculating the values of didactic criteria;</p>
      <p>- to gain skills in developing rational strategies for solving complex,
multi-criteria applied problems;
- to implement control loops in the learning process.</p>
      <p>A schematic representation of the educational platform is shown in Fig. 2.
The obtained results are the necessary basis for the effective use of the latest
technical and technological solutions in the educational environment.</p>
      <p>The creation of such an environment, with models and a format of training
adequate to the chosen subject area and motivation tasks, is based on the use
of 3D virtual systems. In such an environment, it is possible to achieve the
criteria of adequacy to the real world due to the presence of both functional and
spatial correspondence. For this purpose, the module of adaptive visualization
of three-dimensional objects for an innovative platform in the MOOCs virtual
educational environment is integrated into the training courses [
        <xref ref-type="bibr" rid="ref3 ref4 ref5">3–5</xref>
        ].</p>
      <p>This technology is aimed primarily at the technical sphere of education,
whose study is the most difficult for students.
To create a module for adaptive visualization of three-dimensional objects, a
mathematical model is used together with applied instrumental systems.
Systems of this level must meet the following requirements:
- interactive model description and task design;
- the possibility to change the model structure and its parameters;
- management of the process of modeling and problem solving;
- implementation of database exchange;
- presentation of data in different formats.</p>
      <p>
        Based on the analysis of the literature [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">15–17</xref>
        ], the SURF method best meets these
requirements. The SURF method makes it possible to [
        <xref ref-type="bibr" rid="ref10 ref5">5, 10</xref>
        ]:
      </p>
      <p>1) find special points of the image (points whose parameters
differ from the corresponding values of neighboring points);</p>
      <p>2) build descriptors of the points (a descriptor makes it possible to identify a
particular point);
3) change the scale value of the rendered scene.</p>
      <p>Singular points are determined using the Hesse matrix:</p>
      <p>H(x, y, z) = [ Lxx(x, y, z)  Lxy(x, y, z) ; Lxy(x, y, z)  Lyy(x, y, z) ]   (1)</p>
      <p>where H(x, y, z) is the Hessian; I(x, y) is the source image;
x, y, z are the coordinates of a three-dimensional object; Lxx, Lxy, Lyy are convolutions
of the image with the Laplacian of Gaussian.</p>
      <p>The convolution of the image with the Laplacian of Gaussian can be represented
by the following functions:</p>
      <p>Lxx(x, y, z) = I(x, y) * d²g(z)/dx²   (2)</p>
      <p>Lyy(x, y, z) = I(x, y) * d²g(z)/dy²   (3)</p>
      <p>Lxy(x, y, z) = I(x, y) * d²g(z)/dxdy   (4)</p>
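      <p>The responses Lxx, Lyy, Lxy from Eqs. (2)-(4) and the determinant of the
Hessian from Eq. (1) can be computed with Gaussian second-derivative filters; a
minimal sketch (the function name is illustrative) using SciPy's gaussian_filter,
whose order argument applies derivatives of the Gaussian per axis:</p>

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_response(image, sigma):
    """Per-pixel determinant of the Hessian; maxima mark candidate interest points."""
    img = image.astype(float)
    lxx = gaussian_filter(img, sigma, order=(0, 2))  # second derivative along x
    lyy = gaussian_filter(img, sigma, order=(2, 0))  # second derivative along y
    lxy = gaussian_filter(img, sigma, order=(1, 1))  # mixed derivative
    return lxx * lyy - lxy * lxy                     # det H at every pixel
```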
      <p>
        By combining the SURF method, which finds the areas to which textures
are applied when the three-dimensional scene is rotated, with the Shader X4 techniques [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], it is
possible to obtain a three-dimensional scene that is efficient in both performance and
quality.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Experimental Part</title>
      <p>The infrastructure developed for high-performance computing to obtain the final
scene of a three-dimensional course when working with big data is shown in Fig. 3.</p>
      <p>This approach to obtaining a photorealistic visualization of relief with a
calculated texture makes it possible to combine different surface materials, fitting the
most suitable texture to each area of the relief surface while taking into account its
local characteristics. Each material is given a unique texture, which ensures high
image quality even when viewing the surface from a low height. The addition
of procedural details and the use of noise and shaders allow for a more realistic
image (Fig. 4).</p>
      <p>The article deals with the technology of virtual reality through the formation
of a three-dimensional scene for training students in distance learning. In the
work done, three-dimensional models have been created, which are integrated
into the educational platform and allow for a 360-degree view of the scene. A
prototype of the module of adaptive visualization of three-dimensional objects
for an innovative platform in the virtual educational environment has been developed.
The dependence of the minimum number of frames per second on the resolution
of the rendered scene was revealed, and a structural model of the cloud
infrastructure for processing and visualizing data of three-dimensional objects was
built.</p>
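      <p>The adaptive choice of rendering resolution against a minimum frame-rate
floor, as discussed above, can be sketched as follows (the data and function name
are hypothetical illustrations, not measurements from this study): among the
resolutions whose measured minimum FPS stays above the floor, the highest one is
selected.</p>

```python
def pick_resolution(fps_by_resolution, min_fps):
    """Pick the highest resolution whose measured minimum FPS meets the floor (sketch).

    fps_by_resolution: dict mapping (width, height) to measured minimum FPS.
    """
    usable = [r for r, fps in fps_by_resolution.items() if fps >= min_fps]
    if not usable:
        # No resolution meets the floor: fall back to the smallest one.
        return min(fps_by_resolution, key=lambda r: r[0] * r[1])
    return max(usable, key=lambda r: r[0] * r[1])
```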
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Hua</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          , Zhang, H.,
          <string-name>
            <surname>Lu</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bao</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Peng</surname>
            <given-names>Q.</given-names>
          </string-name>
          :
          <article-title>Huge Texture Mapping for Real-Time Visualization of Large-Scale Terrain</article-title>
          . In: VRST '04:
          <article-title>Proceedings of the ACM symposium on Virtual reality software and technology</article-title>
          . N.Y.: ACM Press, pp.
          <fpage>154</fpage>
          –
          <lpage>157</lpage>
          (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Brodersen</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Real-time visualization of large textured terrains</article-title>
          .
          <source>In: Conference: Proceedings of the 3rd International Conference on Computer Graphics and Interactive Techniques in Australasia and Southeast Asia</source>
          , p.
          <volume>7</volume>
          (
          <year>2005</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Li</surname>
            ,
            <given-names>Tsai-Yen</given-names>
          </string-name>
          , Lien, Jyh-Ming, Chiu, Shih-Yen, Yu, Tzong-Hann:
          <article-title>Automatically Generating Virtual Guided Tours</article-title>
          .
          <source>In: Proceedings of Computer Animation</source>
          , p.
          <fpage>8</fpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Snook</surname>
            ,
            <given-names>G.</given-names>
          </string-name>
          :
          <article-title>Real-Time 3D Terrain Engines Using C++ and DirectX 9</article-title>
          . Charles River Media, Inc. (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Dachsbacher</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Stamminger</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          :
          <article-title>Cached procedural textures for terrain rendering</article-title>
          .
          <source>In: ShaderX4: Advanced Rendering Techniques. Charles River Media</source>
          , pp.
          <fpage>457</fpage>
          –
          <lpage>466</lpage>
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Sugihara</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Murase</surname>
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>Automatic Generation of a 3D Terrain Model by Straight Skeleton Computation</article-title>
          .
          <source>In: CGDIP '17 Proceedings, No. 4</source>
          , p.
          <volume>7</volume>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Shreiner</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>Performance OpenGL: Platform Independent Techniques</article-title>
          .
          <source>In: SIGGRAPH</source>
          <year>2001</year>
          , p.
          <volume>15</volume>
          (
          <year>2001</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Segal</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Akeley</surname>
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>The OpenGL Graphics System: A Specification</article-title>
          .
          <source>USA: Silicon Graphics, Inc.</source>
          . (
          <year>2004</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Bolodurina</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Parfenov</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shukhman</surname>
            <given-names>A.</given-names>
          </string-name>
          :
          <article-title>Approach to the effective controlling cloud computing resources in data centers for providing multimedia services</article-title>
          .
          <source>In: SIBCON 2015, Proceedings</source>
          <volume>7147170</volume>
          (
          <year>2015</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>Drummond</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rosten</surname>
          </string-name>
          , E.:
          <article-title>Machine learning for high-speed corner detection</article-title>
          .
          <source>In: ECCV</source>
          <year>2006</year>
          , pp.
          <fpage>430</fpage>
          –
          <lpage>443</lpage>
          (
          <year>2006</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Al-Kodmany</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Visualization Tools and Methods in Community Planning: From Free-hand Sketches to Virtual Reality</article-title>
          .
          pp.
          <fpage>189</fpage>
          –
          <lpage>211</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Guttentag</surname>
            ,
            <given-names>D.A.</given-names>
          </string-name>
          :
          <article-title>Virtual reality: Applications and implications for tourism</article-title>
          .
          <source>Tourism Management</source>
          <volume>31</volume>
          ,
          <fpage>637</fpage>
          –
          <lpage>651</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Dalgarno</surname>
            <given-names>B.</given-names>
          </string-name>
          :
          <article-title>The Potential of 3D Virtual Learning Environments: A Constructivist Analysis</article-title>
          .
          <source>e-Journal of Instructional Science and Technology</source>
          <volume>19</volume>
          (
          <year>2014</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Fernandez-Palacios</surname>
            ,
            <given-names>B.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Morabito</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Remondino</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Access to complex realitybased 3D models using virtual reality solutions</article-title>
          .
          <source>Journal of Cultural Heritage</source>
          <volume>23</volume>
          ,
          <fpage>40</fpage>
          –
          <lpage>48</lpage>
          (
          <year>2017</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Fang</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          :
          <article-title>An improved RANSAC homography algorithm for feature based image mosaic</article-title>
          .
          <source>In: 7th WSEAS Intern. Conf. on Signal Processing, Computational Geometry &amp; Artificial Vision</source>
          . World Scientific and Engineering Academy and Society, pp.
          <fpage>202</fpage>
          –
          <lpage>207</lpage>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Khan</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>McCane</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wyvill</surname>
          </string-name>
          , G.:
          <article-title>SIFT and SURF performance evaluation against various image deformations on benchmark dataset</article-title>
          .
          <source>In: Proc. of Intern. Conf. on Digital Image Computing: Techniques and Applications</source>
          , Queensland, Australia, pp.
          <fpage>499</fpage>
          –
          <lpage>504</lpage>
          (
          <year>2011</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Bay</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Ess</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tuytelaars</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Van Gool</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          :
          <article-title>SURF: Speeded-Up Robust Features</article-title>
          .
          <source>Computer Vision and Image Understanding</source>
          <volume>110</volume>
          (
          <issue>3</issue>
          ),
          <fpage>346</fpage>
          –
          <lpage>359</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>