<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Synthetic Data for AUV Technical Vision Systems Testing</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Aleksandr Kamaev</string-name>
          <email>kamaev_an@mail.ru</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sergey Smagin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Viacheslav Sukhenko</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmitry Karmanov</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Computing Center of FEB RAS</institution>
          ,
          <addr-line>Khabarovsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Pacific National University</institution>
          ,
          <addr-line>Khabarovsk</addr-line>
          ,
          <country country="RU">Russia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <fpage>126</fpage>
      <lpage>140</lpage>
      <abstract>
        <p>Development of autonomous underwater vehicle (AUV) technical vision systems is impossible without precise debugging and testing. Due to the factors described in the paper, such testing in most cases cannot be carried out with a real AUV. Therefore, the use of procedurally generated virtual testing areas is suggested. An algorithm for generation and visualization of a seabed 3D model suitable for testing and debugging AUV technical vision systems is described. The algorithm builds a highly detailed seabed surface in which every part is unique and contains no repeating texture patterns. The software system “AUV Vision Debugger”, consisting of a seabed generator and an AUV simulator, is also considered. The simulator provides interaction between the generated seabed, the AUV model, and the technical vision system under test.</p>
      </abstract>
      <kwd-group>
        <kwd>AUV</kwd>
        <kwd>computer vision</kwd>
        <kwd>procedural</kwd>
        <kwd>texturing</kwd>
        <kwd>height map</kwd>
        <kwd>tessellation</kwd>
        <kwd>fractal noise</kwd>
        <kwd>seabed</kwd>
        <kwd>simulator</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>Introduction</title>
      <p>The complexity of underwater navigation and the inability to organize
high-capacity data exchange between an autonomous underwater vehicle (AUV)
and its operator make the development of onboard technical vision systems
necessary to improve vehicle autonomy. High reliability and quality of operation
in various conditions are required from such systems. Their development and
further use are impossible without precise debugging and testing. Currently,
organizing the testing process and obtaining testing data is becoming a
serious problem.</p>
      <p>At present, testing areas with markers and targets located on the seabed,
as well as specially equipped pools, are used for testing and debugging. Such
methods are well suited for testing AUV devices and equipment, but they are
not applicable to AUV technical vision systems for the following reasons.
1. Obtaining data from an AUV is a time-consuming and expensive process. It
includes not only the mission itself, but also transportation to the testing
area and device launching, which is unacceptable especially at the early
stages of development.
2. It is not always possible to obtain the necessary testing data from areas with
different types of relief, due to geographic reasons.
3. It is impossible to evaluate vision system accuracy using data from real
testing areas, because there is no other way to measure the seabed with
the required accuracy (up to centimeters) over large areas.
4. It is impossible to interfere in the vision system’s work at an arbitrary moment.
5. There is a risk of losing the AUV due to technical vision system errors.
To avoid all the mentioned problems of using real data for AUV technical vision
system testing and debugging, it is suggested to replace real experiments
with tests in a virtual environment. The software system “AUV Vision Debugger”
that allows holding such tests is considered in this paper.</p>
    </sec>
    <sec id="sec-2">
      <title>Related Work</title>
      <p>
        The use of virtual simulations for AUV-related tasks is an actively
developing research area, and much modeling software for different purposes has
been created. The workbench [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] supports 3D visualization and can be applied
to mission planning. The simulator [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] is intended for developing AUV control
systems. The modeling software system [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] is designed for training AUV
operators. The system [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ] allows debugging of different AUV systems and
devices in a virtual environment. Since AUV technical vision systems
use feature points and their descriptors [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6–8</xref>
        ], such requirements as a high level
of detail (up to one pixel), absolute uniqueness of all relief parts, and the absence
of repeated texture patterns are imposed on the seabed model. These requirements
make it impossible to use existing modeling software for AUV vision system
testing and debugging.
      </p>
      <p>
        Generation and visualization of a seabed 3D model of sufficient
quality for AUV vision system testing and debugging is the most complex task.
Despite the large number of existing methods for landscape generation [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], they
cannot be applied directly to procedural seabed model synthesis. Methods based
on fractal brushes [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] allow obtaining a landscape of the needed form, but they
require long hours of artist work. Approaches based on multifractals [
        <xref ref-type="bibr" rid="ref11 ref12">11, 12</xref>
        ] do
not provide enough control over the result, which is necessary for generating a
seabed of the required form. Methods based on modeling the physical processes of
landscape formation [
        <xref ref-type="bibr" rid="ref13 ref14">13, 14</xref>
        ] do not take into account the specifics of
underwater relief formation. The methods of fractal interpolation [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] require
basic constraints, but they do not solve the problem of obtaining such constraints.
Creation of a seabed 3D model suitable for AUV technical vision
system testing and debugging requires complex approaches and combinations of
different generation methods.
      </p>
    </sec>
    <sec id="sec-3">
      <title>Program System “AUV Vision Debugger”</title>
      <p>The structural scheme of the developed program system for testing and
debugging AUV technical vision algorithms, “AUV Vision Debugger”, is shown in Fig.
1. The system consists of two programs: a seabed 3D model generator and an AUV
simulator. The generator is a GUI program that builds a seabed 3D
model from a set of user-defined parameters using the approaches described in
section 3. The simulator is a program that simulates AUV dynamics in the virtual
environment and allows the user to observe all AUV movements by controlling the
view camera. The physical model used by the simulator is described in section 4.</p>
      <p>One of the simulator’s capabilities is the selection of the controlling program:
a technical vision system combined with a control system. The procedure of vision
system testing and debugging may differ for different development stages
and for the tasks the system must solve; therefore, a specific control system is required
to test a concrete vision system task. “AUV Vision Debugger”
does not impose any limitations on the organization of the technical vision and control
systems and their interaction, and provides a universal interface to the simulator.</p>
      <p>The simulator runs the controlling program and scans its standard output
stream at a frequency of approximately 50 Hz to detect passed commands. In
answer to the commands, the simulator writes messages to the standard input
stream of the vision system and, if required, saves graphical information in the
file system.</p>
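      <p>For illustration, the controller side of this stdin/stdout protocol can be sketched as follows; the command and reply names here are hypothetical, since the paper does not specify the exact command syntax.

```python
import sys

def send_command(cmd, out=sys.stdout):
    """Write one command line to stdout, which the simulator polls at ~50 Hz."""
    out.write(cmd + "\n")
    out.flush()  # flush so the simulator sees the command on its next scan

def parse_state_reply(line):
    """Parse a hypothetical 'STATE x y z yaw pitch roll' reply arriving on stdin."""
    parts = line.split()
    if parts and parts[0] == "STATE":
        return [float(v) for v in parts[1:]]
    return None  # not a state reply (e.g. an image-ready notification)
```

A real controlling program would alternate between writing control commands and blocking on sys.stdin for the simulator’s replies.</p>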
      <p>The commands supported by the simulator can be divided into three groups:
control commands, state requests and image requests. Control commands serve to
change such states of AUV parts (see section 4.1 for the parts description) as
position, orientation, power, etc. State requests can be used to obtain
information about AUV position, orientation and velocity, and distances to objects (if the AUV
is equipped with a sonar). In answer to state requests, the simulator
passes the state information to the standard input stream of the AUV technical vision
system. Image requests are needed to obtain images with debug
information from the AUV onboard cameras. Images and debug information are not passed
directly through the standard input stream but are stored in the file system;
instead, a notification that the images are ready to use is passed to the input stream. The
debug information contains accurate camera external and internal parameters and
distances from the camera to image pixels. Using this information, the controlling
program can evaluate the results produced by the technical vision system and draw
conclusions about the system’s accuracy.</p>
      <p>Thus, “AUV Vision Debugger” provides only a physical and visual model
of the environment and the AUV. Development of the control system for testing AUV
technical vision algorithms is the user’s responsibility.</p>
    </sec>
    <sec id="sec-5">
      <title>Generation and Visualization of the Seabed 3D Model</title>
      <p>The algorithm developed for “AUV Vision Debugger” to generate and
visualize the seabed 3D model has five stages: generation of a low-frequency relief
map (section 3.1), fractal noise computation to increase the level of detail (section
3.2), 3D model mesh building (section 3.3), model refinement during
visualization (section 3.4) and texturing (section 3.5).</p>
      <sec id="sec-5-1">
        <title>Relief Map</title>
        <p>The relief map H = (h_ij), i = 1, 2, ..., h, j = 1, 2, ..., w, where w and h are the
map’s width and height, defines the relief height h_ij at the point (i·r, j·r), where r is the
map resolution specified in meters per point (m/p). Maps with r = 10 m/p and w, h ≤
1000 were used in “AUV Vision Debugger”. Such maps can describe a
seabed up to 10 km long, which is more than enough for AUV technical
vision algorithms testing and debugging.</p>
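        <p>The map convention above (height h_ij stored at the world point (i·r, j·r)) can be sketched together with the bilinear lookup that later sections rely on; this is a minimal sketch using 0-based indices instead of the paper’s 1-based ones.

```python
def sample_height(H, r, x, y):
    """Bilinearly interpolate relief map H (a list of rows, r meters per
    point) at world coordinates (x, y)."""
    fi, fj = x / r, y / r                # fractional grid coordinates
    i, j = int(fi), int(fj)
    h, w = len(H), len(H[0])
    i = max(0, min(i, h - 2))            # clamp so i+1 and j+1 stay in range
    j = max(0, min(j, w - 2))
    u, v = fi - i, fj - j                # interpolation weights
    return (H[i][j] * (1 - u) * (1 - v) + H[i + 1][j] * u * (1 - v)
            + H[i][j + 1] * (1 - u) * v + H[i + 1][j + 1] * u * v)
```
</p>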
        <p>The relief map contains such basic seabed elements as the coast, shelf, continental
slope, ocean bed, submarine canyons and mountains (Fig. 2a). These elements
are defined by the user by setting a few parameter values and do not
require specific artist knowledge.</p>
        <p>Contours of the basic elements are specified with fractal lines, and heights
are interpolated using different interpolation functions, based on distances to the
defined contour lines (Fig. 2b).</p>
      </sec>
      <sec id="sec-5-2">
        <title>Fractal Noise</title>
        <p>Relief details with frequency higher than 1/(2r) m⁻¹ cannot be
represented by the map H. Meanwhile, to provide correct work of technical vision
algorithms at the distances from the seabed reachable with onboard light
equipment, accuracy up to millimeters is required. To improve the relief level of
detail we use fractal noise consisting of summed Perlin noise functions
taken at successively doubled frequencies f_i = 2^i·f_0, where f_0 = 1/(2r) is the
limiting frequency of the map. The low-frequency octaves i = 0, 1, ..., n_l and the
high-frequency octaves i = 0, 1, ..., n_h are bounded by octave counts of the form
n = ⌊log₂(·)⌋ determined by the map resolution r and by s_f, the drawing fragment
size (a pixel of the screen or of the AUV camera) in meters. If the fragment size
cannot be computed, the desired accuracy should be assigned to s_f directly. The low
frequencies f_l will be added to the seabed 3D model and the high
frequencies f_h will be utilized during the visualization process.</p>
        <p>Amplitudes at the same frequencies may differ for different seabed
parts, depending on their slope, roughness and some random factor. Let S = (s_ij)
be the slope map (Fig. 3a) and R = (r_ij) be the roughness map (Fig. 3b), where
i = 1, 2, ..., h, j = 1, 2, ..., w,
r_ij = min(1, σ(h_ij)/σ_n),   s_ij = (2/π)·arccos(n_ij,3),
where σ(h_ij) is the standard deviation of the high-frequency part of the height map in
the 9 × 9 neighborhood of the point h_ij (weighted with a Gauss function with σ = 2),
σ_n is a normalization factor, and n_ij = (n_ij,1, n_ij,2, n_ij,3) is the normal to the low-
frequency part of the height map at the point h_ij.</p>
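        <p>Assuming the slope value is the normalized angle between the surface normal and the vertical (the exact formula is garbled in the source), a slope-map entry can be sketched as:

```python
import math

def slope_value(n):
    """Slope s in [0, 1]: normalized angle between the unit normal
    n = (nx, ny, nz) and the vertical axis. Flat ground gives 0,
    a vertical wall gives 1."""
    nz = max(-1.0, min(1.0, n[2]))   # guard the acos domain
    return 2.0 * math.acos(abs(nz)) / math.pi
```
</p>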
        <p>We divide the frequency range (f_l, f_h) into three groups, each controlled by
its own basis function b_k(f), k = 1, 2, 3 (Fig. 4a). A logarithmic scale is used on
the charts; at this scale the functions b_k(f) are piecewise linear. We also introduce
functions of roughness and slope influence on the 1st and 2nd frequency groups:
I_1^R(t) = 0.45·t² + 0.05,
I_1^S(t) = 6t² − 8t³ for t ≤ 0.5, and (8t − 2)(t − 1)² for t &gt; 0.5.
Combining these, we obtain the influence function for the k-th frequency group that
unites roughness, slope and random factors:
c_k(x) = max(0, min(1, I_k^R(R(x1, x2)) + I_k^S(S(x1, x2)) + α_k·P_k(x))),   (1)
where x = (x1, x2, x3) is a space point, k = 1, 2, R(x1, x2) and S(x1, x2) are the
values obtained from R and S by means of bilinear interpolation, α_k ∈ [0, 1] is a
user-defined coefficient of the random influence on the k-th group, and
P_k(x) = (1/4)·Σ_{m=1}^{4} P(M_m·x + T_m), k = 1, 2, 3,
where P(x) is a Perlin noise function [
          <xref ref-type="bibr" rid="ref16">16</xref>
          ], M_m is a random rotation
matrix, and T_m is a random translation vector. The noise amplitude at the point x
corresponding to the frequency f is
A(x, f) = β·( Σ_{k=1}^{2} b_k(f)·c_k(x) + b_3(f)·(sin(2π·P_3(x)) + 1)/2 ),   (2)
where β is a coefficient defining the general roughness of height differences on all
frequencies; its recommended value lies in the range β ∈ [0.5, 1]. Using the
amplitude (2) we define the noise at x:
N(x, n) = Σ_{i=0}^{n} A(x, f_i)·P(f_i·x).   (3)</p>
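        <p>The octave sum (3) can be sketched with a cheap hash-based value noise standing in for the Perlin function P; the amplitude function A is passed in as a callable, and the sketch is one-dimensional for brevity.

```python
import math

def value_noise_1d(x, seed=0):
    """Cheap smooth stand-in for a Perlin noise function (value noise)."""
    def rnd(i):
        # deterministic pseudo-random value in [0, 1) from a grid index
        return math.sin(i * 127.1 + seed * 311.7) * 43758.5453 % 1.0
    i = math.floor(x)
    t = x - i
    t = t * t * (3 - 2 * t)                   # smoothstep fade curve
    return rnd(i) * (1 - t) + rnd(i + 1) * t

def fractal_noise(x, amplitude, n_octaves):
    """Octave sum N(x, n) = sum_i A(x, f_i) * P(f_i * x) with f_i = 2**i."""
    total = 0.0
    for i in range(n_octaves + 1):
        f = 2.0 ** i
        total += amplitude(x, f) * value_noise_1d(f * x)
    return total
```
</p>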
      </sec>
      <sec id="sec-5-3">
        <title>Building of 3D Model</title>
        <p>
          For mesh construction we use the approach of [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], but unlike [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ], we build the
mesh during the generation step on the CPU, not at rendering time on the GPU. We
define the relief density at the point x = (x1, x2, x3) as
d(x) = x3 − H(x1, x2) + N(x, n_l),   (4)
where the value H(x1, x2) is obtained from H by means of bilinear interpolation.
The surface d(x) = 0 defines the seabed. To obtain the 3D model of the seabed, the
surface d(x) = 0 is approximated using the marching cubes algorithm [
          <xref ref-type="bibr" rid="ref18">18</xref>
          ]. Figure 5 depicts the result of
such approximation.
        </p>
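        <p>The density test behind the marching-cubes step can be sketched as follows: a point is below the seabed when d(x) is negative, and a sign change along a cube edge yields a surface vertex. This is only the edge-interpolation building block of marching cubes [18], not the full table-driven algorithm.

```python
def density(x, H_interp, noise, n_low):
    """Relief density (4): d(x) = x3 - H(x1, x2) + N(x, n_low);
    the zero level set d(x) = 0 is the seabed surface."""
    x1, x2, x3 = x
    return x3 - H_interp(x1, x2) + noise(x, n_low)

def edge_crossing(xa, xb, d):
    """If the density changes sign along a cube edge, return the linearly
    interpolated zero crossing, else None."""
    da, db = d(xa), d(xb)
    if da * db > 0 or da == db:
        return None
    t = da / (da - db)
    return tuple(a + t * (b - a) for a, b in zip(xa, xb))
```
</p>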
      </sec>
      <sec id="sec-5-4">
        <title>Visualization</title>
        <p>It is necessary to tessellate the mesh during visualization to keep the size of all
visible triangles close to one pixel. Most modern GPUs support hardware-accelerated
tessellation with subdivision of up to 64 parts per triangle side.
Such subdivision is not enough for mesh triangles located near the camera;
therefore, tessellation based on precalculated triangle sets (PTS) is suggested
for these triangles.</p>
        <p>Each triangle side can be divided into 2^{t_s} parts, where s = 1, 2, 3 is the index
of the triangle side (counterclockwise numeration) and t_s = 0, 1, ..., t_max.
The AUV operating distance to the seabed and the onboard camera resolution allow
choosing t_max = 9, which leads to K = (t_max + 1)³ = 1000 different variants
of tessellation. Let us consider the k-th tessellation set, k = 1, 2, ..., K, consisting
of m_k triangles. Each triangle of the k-th set is described by three vertices v_kij =
(v_kij,1, v_kij,2), i = 1, 2, ..., m_k, j = 1, 2, 3 (counterclockwise numeration). The values
v_kij,1 and v_kij,2 are the coordinates of the j-th vertex of the i-th triangle of the k-th set;
they are defined in the coordinate system whose basis vectors are the 1st and 3rd
sides of the k-th triangle, with origin at the intersection point of these sides (Fig. 6a).</p>
        <p>Using such a coordinate system we can draw the k-th tessellation set instead of
some mesh triangle with coordinates p1, p2, p3 by projecting the local coordinates
of the k-th set into global space:
x = v1·(p2 − p1) + v2·(p3 − p1) + p1.   (5)
The fact that the tessellation sets store coordinates independent of the mesh
coordinates allows us to put these sets into video memory once and then use them
through a single function call. The attempt to draw a triangle is replaced by drawing
a suitable tessellation set under control of a vertex shader that computes global
vertex coordinates using (5). The coordinates p1, p2, p3 are passed to the vertex shader
as uniform parameters. The number k of the tessellation set is chosen from the
triangle size and its distance from the camera to provide pixel accuracy in
screen space.</p>
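        <p>Equation (5) can be sketched directly; the set generator below produces a uniform subdivision as a simplification, whereas the real precalculated sets vary the split per side.

```python
def project(v, p1, p2, p3):
    """Eq. (5): map local set coordinates v = (v1, v2) into global space
    using the mesh triangle vertices p1, p2, p3."""
    v1, v2 = v
    return tuple(v1 * (b - a) + v2 * (c - a) + a
                 for a, b, c in zip(p1, p2, p3))

def uniform_set(n):
    """Local vertices of a uniformly subdivided triangle (n parts per side),
    expressed in the basis formed by the triangle's 1st and 3rd sides."""
    verts = []
    for i in range(n + 1):
        for j in range(n + 1 - i):
            verts.append((i / n, j / n))
    return verts
```
</p>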
        <p>To speed up the visualization process, the seabed model is divided into
rectangular blocks, so that the number of blocks and the number of triangles per block
allow iterating through them before each frame is rendered. Blocks located behind the
camera’s clipping planes are discarded. For rendering the triangles located in
blocks close to the camera, the PTS-based tessellation is used. If a
block is far enough away for hardware-accelerated tessellation alone, it is rendered
with a single drawing call. The process of seabed subdivision is shown in
Fig. 6b.</p>
        <p>After tessellation, the fractal noise (3) is added to each vertex in the direction
of the interpolated normal n computed at this vertex:</p>
        <p>
          x̂ = x + n·(N(x, n_h) + d(x)),
where d(x) is the density function (4). An example of the seabed model, tessellated
and amplified with high-frequency noise, is presented in Fig. 7.
        </p>
      </sec>
      <sec id="sec-5-5">
        <title>Texturing</title>
        <p>
          Texturing of landscapes developed for testing AUV technical vision systems has
its own specifics:
1. Only procedural textures can be used, because bitmap images lead to
repeated texture patterns, which makes testing of the algorithms based on feature points [
          <xref ref-type="bibr" rid="ref6 ref7 ref8">6–8</xref>
          ] impossible.
2. The complex relief makes calculation of correct 2D texture
coordinates impossible, so only 3D textures can be used.
3. Textures should be represented correctly at all scale levels used
during testing.
        </p>
        <p>The surface type and, accordingly, the texture type are defined from the functions c_1(x)
and c_2(x) (1). The sand procedural texture with weight w is mixed with the
stone procedural texture with weight 1 − w, where
w(x) = ∏_{k=1,2} (1 − smoothstep(c̄_k − δ, c̄_k + δ, c_k(x))).</p>
        <p>Smoothstep is a standard GLSL function. The constants c̄_k define the boundaries
between relief types and δ is the transition width; these constants are set during
seabed generation. Fig. 9 is obtained with c̄_1 = 0.2, c̄_2 = 0.12 and
δ = max(0.0025, 0.05·s_f), where s_f is the drawing fragment size in meters.</p>
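        <p>The mixing weight can be sketched in Python with a GLSL-style smoothstep; the boundary constants and transition width δ follow the text.

```python
def smoothstep(e0, e1, t):
    """GLSL smoothstep: cubic ramp from 0 at e0 to 1 at e1."""
    u = (t - e0) / (e1 - e0)
    u = max(0.0, min(1.0, u))
    return u * u * (3.0 - 2.0 * u)

def sand_weight(c_values, thresholds, delta):
    """Weight w of the sand texture: product over the influence values
    c_k(x), each compared with its boundary constant."""
    w = 1.0
    for ck, bound in zip(c_values, thresholds):
        w *= 1.0 - smoothstep(bound - delta, bound + delta, ck)
    return w
```
</p>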
        <p>
          Creation of a procedural texture for each type of surface requires an individual
approach. A sum of Perlin noise functions or a cellular basis [
          <xref ref-type="bibr" rid="ref11">11</xref>
          ] is used to create
different unique texture patterns at different scales. When the screen’s pixel size
becomes too large to depict a pattern, the pattern is replaced by its average
color and intensity. A single texture can have up to three different patterns
at different scales. An example of the sand and stone textures used in “AUV Vision
Debugger” is presented in Fig. 8.
        </p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>AUV Simulator</title>
      <p>The main tasks of the simulator are visualization of the seabed and AUV models,
simulation of AUV dynamics, and computation of interactions with the
environment. The simulator also provides the possibility to observe the mission process.
“AUV Vision Debugger” was created mainly for testing and debugging of technical
vision systems, not for testing AUV control systems and devices; therefore a
simplified AUV model (section 4.1), its dynamics (section 4.2) and its interactions
with the seabed (section 4.3) are used.</p>
      <sec id="sec-6-0">
        <title>AUV Model</title>
        <p>The AUV model consists of parts described in the model’s text file. Each part has
its own type: shell, engine, control surface, sonar, floodlight or camera. A
model description may contain only one shell and an arbitrary number of parts
of the other types. Let us consider the main parameters of the different types:
1. The shell is defined by its weight and a 3D model in 3DS format. The coordinate
system of the shell is the basis for all other parts.
2. An engine or a control surface is defined by its weight, 3D model, and a
transformation matrix describing its position and orientation relative to the shell.
Directions and ranges of allowable movements and rotations of engines and
control surfaces are also defined. A thrust vector and its magnitude range can be
defined for an engine.
3. A sonar, a floodlight and a camera have no weight or graphic
representation of their own. They are defined by position and orientation relative to the shell.
Directions and ranges of allowable movements and rotations are also defined.
Light power can be set for a floodlight; focus distance, resolution, radial
distortion, and lighting-dependent errors can be set for a camera.
Apart from the mentioned parameters, each part has a name that allows the vision
system to interact with it.</p>
      </sec>
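      <p>The part taxonomy above can be sketched with dataclasses; all field names are illustrative, since the paper does not specify the model-file format.

```python
from dataclasses import dataclass

@dataclass
class Part:
    name: str                          # name used by the vision system to address the part

@dataclass
class Shell(Part):
    weight: float = 0.0                # only one shell per model; its frame is the basis
    mesh_path: str = ""                # 3D model in 3DS format

@dataclass
class Engine(Part):
    weight: float = 0.0
    thrust_range: tuple = (0.0, 0.0)   # allowable thrust magnitudes
    transform: tuple = ()              # position/orientation relative to the shell

@dataclass
class Camera(Part):
    resolution: tuple = (0, 0)
    focus_distance: float = 0.0
    radial_distortion: float = 0.0
```
</p>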
      <sec id="sec-6-1">
        <title>AUV Dynamics</title>
        <p>
          From the standpoint of dynamics, the AUV with all its parts is considered as one rigid
body. Newton–Euler equations are used to calculate its motion. The details of the
motion computation based on the forces and torques acting on a rigid
body are well described in [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]. The weight of a part (if the part has weight) is
uniformly distributed between all of the part’s points, which correspond
to the vertices of the part’s 3D model. Any relative movement of AUV parts leads to
redistribution of the AUV weight and to recalculation of the inertia tensor. Forces that
lead to relative movements of parts, as well as forces induced by such movement,
are not counted.
        </p>
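        <p>A single integration step of the Newton–Euler equations can be sketched in one dimension; the real computation uses 3-D vectors and the full inertia tensor, as described in [19].

```python
def newton_euler_step(state, force, torque, mass, inertia, dt):
    """One explicit integration step of simplified Newton-Euler equations.
    state = (position, velocity, angle, angular_velocity), all scalar here."""
    pos, vel, ang, omega = state
    vel = vel + force / mass * dt           # Newton: m * dv/dt = F
    pos = pos + vel * dt
    omega = omega + torque / inertia * dt   # Euler: I * domega/dt = T
    ang = ang + omega * dt
    return (pos, vel, ang, omega)
```
</p>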
        <p>The simulator takes into account the following forces: the gravity force F_g, the
thrust force F_ti of the i-th engine, i = 1, 2, ..., n_e, where n_e is the number of
engines, the pressure force F_p and the hydrodynamic force F_h. All forces except F_g
induce torques relative to the AUV center of mass, which are accounted for in the
motion calculation. Let us consider how these forces act.</p>
        <p>The gravity force is applied to the AUV center of mass and points vertically down:
F_g = (0, 0, −m·g), where m is the sum of the weights of all parts and g is the
acceleration of gravity. The thrust force F_ti is applied to the i-th engine’s center of
mass and directed along the thrust vector described in the AUV text file. If a rotation
is applied to the i-th engine, its thrust vector also undergoes this rotation. The
magnitude of F_ti is chosen by the vision system from the range defined in the model
text file.</p>
        <p>The forces F_p and F_h are calculated for each face of the AUV 3D model that has
nonzero area a, external unit normal vector n and is located outside the shell. They
are applied to the geometric center of the face c = (c1, c2, c3):
F_p = −ρ·g·a·|c3|·n,
F_h = F_dh + F_lh,
F_dh = k_dh·(−v/|v|),   F_lh = k_lh·(n × v × v)/|n × v × v|,
k_dh = ρ·a·|n · v|·|v|/2,   k_lh = ρ·a·(n · v)·|v|·√(1 − (n · v/|v|)²)/2,
where ρ is the water density, v is the velocity of c relative to the environment, and
F_dh and F_lh are the components of F_h in the direction of v and normal to the
direction of v.</p>
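        <p>The per-face pressure and drag terms can be sketched as follows; the exact coefficients are garbled in the source, so the 1/2 dynamic-pressure factor is an assumption.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def pressure_force(rho, g, area, c, n):
    """Hydrostatic pressure on a face: pushes against the outward normal n,
    proportional to the depth |c3| of the face center c."""
    k = -rho * g * area * abs(c[2])
    return tuple(k * ni for ni in n)

def drag_force(rho, area, n, v):
    """Drag component of the hydrodynamic force, directed along -v, with
    coefficient rho * area * |n . v| * |v| / 2 (the 1/2 factor is assumed)."""
    speed = math.sqrt(dot(v, v))
    if speed == 0:
        return (0.0, 0.0, 0.0)
    k = 0.5 * rho * area * abs(dot(n, v)) * speed
    return tuple(-k * vi / speed for vi in v)
```
</p>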
        <p>The presented forces are sufficient to describe AUV dynamics accurately enough for
technical vision system testing and debugging.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Interactions with Seabed</title>
        <p>Collision of AUV parts with the seabed surface is a very dangerous situation that
must be prevented by the technical vision system during visual
navigation. Therefore, it is very important to detect such a situation and to inform the
vision system in case it occurs. Informing is implemented by passing a
message to the standard input stream of the vision system. Let us consider the process
of collision detection by the simulator and its reaction to a collision.</p>
        <p>Collisions are tested on each simulation iteration for the parts’ points. Let us
consider the collision detection process for a point x. The collision is tested against all
seabed model triangles that are closer to x than one meter. If there are no such
triangles, then the point x is above the seabed. Let p_j be the vertices of a triangle that
is closer to x than one meter, j = 1, 2, 3, and n_j be the normals to the model at
these vertices. To detect a collision we perform the following steps:</p>
        <p>Step 1: Compute the face normal: n = (p2 − p1) × (p3 − p1).</p>
        <p>Step 2: Compute the intersection points between the rays from p_j in the directions
n_j and the plane (n, −n · x):
p′_j = p_j + ((n · x − n · p_j)/(n · n_j))·n_j.</p>
        <p>Step 3: If x does not lie inside the triangle (p′1, p′2, p′3), then no collision is detected;
otherwise go to step 4.</p>
        <p>Step 4: Find the interpolated unit normal ñ at the point x of the triangle (p′1, p′2, p′3)
with vertex normals n_j, by means of bilinear interpolation.</p>
        <p>Step 5: Compute the point of the triangle (p1, p2, p3) corresponding to x and the
penetration depth d along ñ.</p>
        <p>If d &gt; 0, then the point x is located inside the seabed and a collision has happened.
If a value d &gt; 0 is computed for more than one triangle, the maximum value should be
chosen.</p>
        <p>If a collision of depth d is detected for some point x with normal ñ, then the
following force is applied at x:
F_c = (k_e·d − k_d·v · ñ)·ñ,
where k_e is an elasticity coefficient, k_d is a damping coefficient, and v is the velocity
of the point x. Application of the force F_c prevents AUV penetration into the seabed.</p>
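        <p>The collision response formula can be sketched directly; the worked example shows its spring-damper character: the elastic term scales with penetration depth, the damping term with the approach velocity.

```python
def penalty_force(depth, normal, velocity, k_e, k_d):
    """Collision response: F = (k_e * depth - k_d * dot(v, n)) * n.
    The elastic term pushes the point out of the seabed; the damping
    term removes energy from the approach velocity."""
    vn = sum(vi * ni for vi, ni in zip(velocity, normal))
    mag = k_e * depth - k_d * vn
    return tuple(mag * ni for ni in normal)
```
</p>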
      </sec>
    </sec>
    <sec id="sec-7">
      <title>Conclusion</title>
      <p>The methods and algorithms suggested in the paper, together with the developed
program system “AUV Vision Debugger”, allow testing and debugging of AUV technical
vision systems in a virtual environment, which leads to the following advantages:
1. high speed and low cost of testing data acquisition,
2. ability to obtain testing data for different surface types, from sand
valleys to rocky canyons,
3. ability to evaluate technical vision accuracy, since the investigated seabed
surface is precisely known,
4. ability to interrupt the system’s work exactly at the moment when an error occurs,
5. repeatability of testing results.</p>
      <p>Currently, “AUV Vision Debugger” allows obtaining highly detailed images of the
seabed, all parts of which are absolutely unique (Fig. 9). In further research it is
planned to add procedural modeling and visualization of marine
flora and fauna, including dynamically changing objects, as well as more
procedural textures and networks of submarine caves. It is also planned to study ways of
integrating our work with existing general-purpose modeling software systems.</p>
      <p>Evidently, the use of synthetic tests in “AUV Vision Debugger” does
not allow us to completely abandon real experiments, but it significantly
reduces their number. As a result, the time required for AUV technical vision
system development is decreased and reliability is increased.</p>
      <p>Acknowledgments. This work was supported by the Russian Foundation for Basic
Research (grant 16-31-00187 mol a).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <surname>Davis</surname>
            ,
            <given-names>D.T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Brutzman</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>The autonomous unmanned vehicle workbench: mission planning, mission rehearsal, and mission replay tool for physics-based x3d visualization</article-title>
          .
          <source>In: 14th International Symposium on Unmanned Untethered Submersible Technology (UUST)</source>
          ,
          <source>Autonomous Undersea Systems Institute (AUSI)</source>
          , pp.
          <fpage>21</fpage>
          -
          <lpage>24</lpage>
          . Durham New Hampshire (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <surname>Dantas</surname>
            ,
            <given-names>J.L.D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Barros</surname>
            ,
            <given-names>E.A.:</given-names>
          </string-name>
          <article-title>A real-time simulator for auv development</article-title>
          . ABCM Symposium Series in Mechatronics vol.
          <volume>4</volume>
          ,
          <fpage>499</fpage>
          -
          <lpage>508</lpage>
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <surname>Hanychev</surname>
            <given-names>V.V.</given-names>
          </string-name>
          <article-title>Trenazhorniy compleks dlya obuchenia operatorov teleupravlyaemykh neobitayemykh podvodnykh apparatov razlichnykh tipov In: 6-th Russian Conf</article-title>
          .
          <source>Tehnicheskie Problemi Osvoenia Mirovogo Okeana</source>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>60</lpage>
          ,
          Vladivostok
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <surname>Inzartsev</surname>
            ,
            <given-names>A.V</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sidorenko</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Senin</surname>
            ,
            <given-names>R.A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Matvienko</surname>
            ,
            <given-names>V.Y.</given-names>
          </string-name>
          :
          <article-title>Kompleksnoye testirovanie programmnogo kompleksa na base imitacionnogo modeliruyuschevo kompleksa</article-title>
          .
          <source>Podvodnie Issledovaniya i Robototehnika</source>
          vol.
          <volume>1</volume>
          (
          <issue>7</issue>
          ),
          <fpage>9</fpage>
          -
          <lpage>14</lpage>
          (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <surname>Inzartsev</surname>
            ,
            <given-names>A.V.</given-names>
          </string-name>
          et al.:
          <article-title>Integrirovannaya informatsionno-upravlyayuschaya i modeliruyushcaya sreda dlya avtonomnogo podvodnogo robota [Integrated information-control and simulation environment for an autonomous underwater robot]</article-title>
          .
          <source>In: 6th Russian Conf. Tehnicheskie Problemi Osvoenia Mirovogo Okeana</source>
          , pp.
          <fpage>129</fpage>
          -
          <lpage>133</lpage>
          ,
          Vladivostok
          (
          <year>2015</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <surname>Bay</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          et al.:
          <article-title>SURF: Speeded Up Robust Features</article-title>
          .
          <source>CVIU</source>
          , vol.
          <volume>110</volume>
          ,
          <fpage>346</fpage>
          -
          <lpage>359</lpage>
          (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <surname>Mikolajczyk</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Schmid</surname>
            <given-names>C.</given-names>
          </string-name>
          :
          <article-title>Scale &amp; affine invariant interest point detectors</article-title>
          .
          <source>IJCV</source>
          vol.
          <volume>60</volume>
          ,
          <fpage>63</fpage>
          -
          <lpage>86</lpage>
          (
          <year>2004</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <surname>Verma</surname>
            <given-names>A.</given-names>
          </string-name>
          et al.:
          <article-title>A New Color SIFT Descriptor and Methods for Image Category Classification</article-title>
          .
          <source>In: IRAST International Congress on Computer Applications and Computational Science (CACS)</source>
          , pp.
          <fpage>819</fpage>
          -
          <lpage>822</lpage>
          .
          Singapore
          (
          <year>2010</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <surname>Smelik</surname>
            ,
            <given-names>R.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>de Kraker</surname>
            ,
            <given-names>K.J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Tutenel</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          :
          <article-title>A Survey of Procedural Methods for Terrain Modelling</article-title>
          .
          <source>In: CASA Workshop on 3D Advanced Media In Gaming And Simulation (3AMIGAS)</source>
          , pp.
          <fpage>25</fpage>
          -
          <lpage>34</lpage>
          . Amsterdam (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <surname>de Carpentier</surname>
            ,
            <given-names>G.J.P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bidarra</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Interactive GPU-based procedural heightfield brushes</article-title>
          .
          <source>In: 4th International Conference on Foundations of Digital Games</source>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>62</lpage>
          . ACM (
          <year>2009</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <surname>Ebert</surname>
            ,
            <given-names>D.S.</given-names>
          </string-name>
          et al.:
          <article-title>Texturing and Modeling: A Procedural Approach</article-title>
          . Morgan Kaufmann, San Francisco (
          <year>2003</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <surname>Schneider</surname>
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Boldte</surname>
            <given-names>T.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Westermann</surname>
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Real-Time Editing, Synthesis, and Rendering of Infinite Landscapes on GPUs</article-title>
          .
          <source>In: Conf. on Vision, Modeling, and Visualization</source>
          , pp.
          <fpage>145</fpage>
          -
          <lpage>152</lpage>
          . Aachen,
          Germany
          (
          <year>2006</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <surname>Belhadj</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Audibert</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          :
          <article-title>Modeling Landscapes with Ridges and Rivers: bottom up approach</article-title>
          .
          <source>In: 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia</source>
          , pp.
          <fpage>447</fpage>
          -
          <lpage>450</lpage>
          . ACM (
          <year>2005</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <surname>Genevaux</surname>
            ,
            <given-names>J.D.</given-names>
          </string-name>
          et al.:
          <article-title>Terrain generation using procedural models based on hydrology</article-title>
          .
          <source>ACM Transactions on Graphics (TOG)</source>
          vol.
          <volume>32</volume>
          (
          <issue>4</issue>
          ), p.
          <fpage>163</fpage>
          . (
          <year>2013</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <surname>Belhadj</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          :
          <article-title>Terrain modeling: a constrained fractal model</article-title>
          .
          <source>In: 5th international conference on Computer graphics, virtual reality, visualization and interaction in Africa</source>
          , pp.
          <fpage>197</fpage>
          -
          <lpage>204</lpage>
          . ACM (
          <year>2007</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>Perlin</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          :
          <article-title>Improving noise</article-title>
          .
          <source>ACM Transactions on Graphics (TOG)</source>
          vol.
          <volume>21</volume>
          (
          <issue>3</issue>
          ),
          <fpage>681</fpage>
          -
          <lpage>682</lpage>
          (
          <year>2002</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <surname>Geiss</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          :
          <article-title>Generating Complex Procedural Terrains Using the GPU</article-title>
          .
          <source>In: GPU Gems 3</source>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>37</lpage>
          . Addison-Wesley (
          <year>2008</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <surname>Lorensen</surname>
            ,
            <given-names>W.E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cline</surname>
            ,
            <given-names>H.E.</given-names>
          </string-name>
          :
          <article-title>Marching Cubes: A High Resolution 3D Surface Construction Algorithm</article-title>
          .
          <source>Computer Graphics</source>
          vol.
          <volume>21</volume>
          (
          <issue>4</issue>
          ),
          <fpage>163</fpage>
          -
          <lpage>169</lpage>
          (
          <year>1987</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <surname>Baraff</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          :
          <article-title>An Introduction to Physically Based Modeling: Rigid Body Simulation I - Unconstrained Rigid Body Dynamics</article-title>
          .
          <source>SIGGRAPH Course Notes</source>
          (
          <year>1997</year>
          )
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>