Modeling Situations in Virtual Environment Systems
Mikhail V. Mikhaylyuk, Dmitry A. Kononov, Dmitry M. Loginov

Scientific Research Institute for System Analysis RAS, Nakhimovsky avenue 36/1, Moscow, 117218, Russia

                Abstract
                The paper considers a technology for modeling various situations in virtual environment
                systems, which are computer three-dimensional models of a real or artificial environment.
                The user can view these scenes directly on a computer screen, a wall screen, in stereo
                glasses, virtual reality glasses, etc. The user can also move inside the virtual scene and
                interact with its objects. In turn, the environment itself can also change. This makes it
                possible to model various situations (situational modeling) in the virtual environment
                system. With such modeling, some static or dynamic situation is set in the virtual
                environment system, in which the operator must perform the tasks assigned to him.
                A mechanism for setting situations by changing a virtual three-dimensional scene using
                configuration files and virtual control panels is proposed. A special language and editors
                have been developed for writing configuration files and creating virtual control panels.
                The proposed methods are demonstrated on the example of two virtual scenes: a training
                ground for mobile robots and a jet backpack of an astronaut in outer space.

                Keywords
                Virtual environment system, situational modeling, three-dimensional scene, configuration
                file, virtual control panel

1. Introduction
    Virtual environment systems are computer 3D models of real or artificial surroundings that a user
can observe directly on a computer monitor, a wall screen, or in stereo or virtual reality glasses. Besides
that, the user can move within these surroundings, interact with their objects and analyze the results of
such interaction. The environment can also change by itself or under the influence of various factors. All
this makes modeling in virtual environment systems possible.
    Several kinds of modeling can be used within virtual environment systems. In imitation (simulation)
modeling, a computer model of a real or projected system or process is created, on which the behavior
and parameters of the system or process are investigated and future results are predicted. The
visualization of all processes and their visual observation are essential here. Scene modeling allows
studying the durability and survivability of complex systems. Its main tasks are the prevention of
emergency situations, risk management, restoration of system survivability, etc. The system structure
and the interaction of its elements during operation are represented in the form of a directed graph whose
arcs and vertices are assigned parameters and functions that adequately describe the operation of the
elements of the system under study. The visualization of such a system in the virtual environment system
allows the ongoing processes to be observed visually. Simulation and training complexes are designed
for the professional training of operators through repeated performance of the necessary actions. In these
complexes, virtual environment systems are used to simulate the dynamics and visualize the
environment. Expanding the tasks of such complexes by creating different situations for operators leads
to situational modeling.
    Situational modeling [1] creates a special static or dynamic environment in the virtual environment
system in which the operator must perform the assigned tasks. In doing so, the operator must take into
account both the objective properties of the created situation and his subjective ideas about how to act in
such a situation. What is important here is not so much the training of a specific role as the ability to
cope with difficult situations. The instructor can offer both individual situations and several situations
following each other. Such


SSI-2021: Scientific Services & Internet, September 20–23, 2021, Moscow (online)
EMAIL: mix@niisi.ras.ru (M.V. Mikhaylyuk); mix@niisi.ras.ru (D.A. Kononov); mix@niisi.ras.ru (D.M. Loginov)
ORCID: 0000-0002-7793-080X (M.V. Mikhaylyuk); 0000-0002-6059-5590 (D.A. Kononov); 0000-0002-2717-6909 (D.M. Loginov)
           © 2021 Copyright for this paper by its authors.
           Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
           CEUR Workshop Proceedings (CEUR-WS.org)
modeling allows checking not only the qualifications of the operator, but also his psychological qualities:
courage, risk-taking, perseverance, learning ability, emotionality, stress resistance, adequacy,
purposefulness, capacity for work, etc. The paper [2] treats motivation in such problem solving as a state
of situational cognitive and epistemic readiness to solve a problem in order to reduce the perceived
discrepancy between the expected and actual states. In [3], a classification of situations and their
corresponding behavior algorithms is proposed using the example of controlling a small turbojet engine.
The paper [4] considers the work of an operator with a control panel in the virtual interior of the space
module “Pirs” for the implementation of the astronaut’s spacewalk. A whole range of different tasks is
presented in [5]. There, based on an analysis of astronauts’ preparation for extravehicular activity, classes
of tasks are identified that are physically difficult to model on technical means. These include the tasks of
emergency situations, management of advanced facilities, information support for astronauts, etc. Some
other tasks are presented in [6–8].
    In this paper, the problem of situational modeling in virtual environment systems is considered using
the examples of two virtual scenes: a training ground for mobile robots and an astronaut rescue jetpack in
outer space. Methods of setting different situations in one 3D virtual scene using configuration files and
virtual control panels are proposed, as well as possible modeling tasks.

2. Virtual Environment System
   The virtual environment system consists of a three-dimensional (3D) virtual scene, a control subsys-
tem, a dynamics calculation subsystem and a visualization subsystem (see Figure 1).

                [Figure 1 diagram: 3D scene, control subsystem, dynamics calculation subsystem,
                visualization subsystem, instructor, operator]

Figure 1: Structure of the virtual environment system

    A virtual 3D scene is created in a three-dimensional modeling system (for example, 3ds Max) and
defines the setting in which all actions take place. Figure 2 shows the scene of the mobile robot training
ground, and Figure 3 shows the scene of an astronaut with a rescue jetpack in outer space. The selected
scene is initially loaded into the dynamics and visualization subsystems. The control subsystem includes
control panels for dynamic objects (for example, robots or jetpack engines), functional circuits, and
software modules for calculating the control signals resulting from the operator’s actions on the control
panel elements. Both real and virtual control panels can be used.
    To create virtual control panels, a special interactive editor has been developed, containing a large set
of control element types (buttons, toggle switches, joysticks, regulators, etc.). Functional circuits are
created in another special editor, which includes a wide set of various functional blocks (arithmetic,
logical, trigonometric, dynamic, automatic control, signal generators, etc.). Each block has inputs and
outputs that can be connected in the editor with lines, thus forming a functional circuit. The control
actions of the operator, sensor readings from the virtual scene, the parameters of its settings, etc. are fed
to the inputs of the circuit.



Figure 2: Three-dimensional scene of training ground for mobile robots




Figure 3: Three-dimensional scene of a jet backpack of an astronaut in outer space

    The calculated signals are transmitted to the dynamics subsystem, which calculates new coordinates,
orientation angles and states of the controlled objects after a period Δt of simulation time. This takes into
account all dynamic parameters, collisions of objects, friction forces, gravity, etc. The calculation results
are transmitted to the visualization subsystem, which synthesizes the image of the virtual scene with the
new parameters of the dynamic objects. Various types of lighting, special objects (fire, jets of water and
foam, relief, etc.), as well as environmental conditions (time of day, rain, snow, fog) are also modeled.
Since one cycle of operation of all subsystems takes no more than 40 milliseconds, the operator gets the
impression of a continuous controlled process in the virtual environment system.
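    A simplified sketch of one such cycle is shown below (an assumed structure for illustration; the
control, dynamics and renderer objects are hypothetical and do not represent the actual VirSim
interfaces):

    # One simulation cycle: control -> dynamics -> visualization, repeated
    # at a period of about 40 ms so the process appears continuous.
    import time

    DT = 0.040  # simulation step, seconds

    def run(control, dynamics, renderer):
        while True:
            start = time.perf_counter()
            signals = control.compute_signals()   # operator actions + functional circuits
            state = dynamics.step(signals, DT)    # new coordinates, angles, object states
            renderer.draw(state)                  # synthesize the next frame of the scene
            elapsed = time.perf_counter() - start
            if elapsed < DT:                      # keep the cycle within ~40 ms
                time.sleep(DT - elapsed)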
    The Scientific Research Institute for System Analysis of the Russian Academy of Sciences has
developed an original virtual environment system named “VirSim”, which includes all the subsystems
described above. Its main purpose is to be used as a simulator for control operators of complex dynamic
systems. Therefore, the work of all subsystems should be as realistic as possible, so as not to form
so-called false skills among operators. This system has been used to train operators of mobile and
anthropomorphic robots, as well as to train cosmonauts on the ISS. Figures 2 and 3 show examples of
virtual scenes in this virtual environment system.
    The scene of the virtual environment system can be considered as a certain situation (situational
model) determined by the relative location of objects and their properties.

3. Situational modeling
    Situational modeling here means the creation of a certain situation (static or dynamic) in the virtual
environment system and the solution of a certain problem in it by an operator (or a group of operators).
The instructor sets the situation, assigns the task, and monitors and evaluates the operator’s actions. The
instructor can offer both individual situations and several consecutive situations. The purpose of the
simulation is not only to train the operator to perform known actions, but also to develop the ability to
find the right solution in difficult situations. Therefore, the operator must not only take into account the
objective properties of the given situation, but also use his abilities to act in a complex and unexpected
situation. Examples are abnormal situations in the simulated system. At the end of the training,
participants usually analyze and reflect on their actions and their results. This allows them to gain
experience in such situations without endangering life and health. The paper [5] identifies classes of
tasks in the process of preparing astronauts for extravehicular activity that are physically difficult to
simulate using conventional technical means but possible using virtual reality technologies. The first
class includes the tasks of information support for the astronaut, namely, the demonstration of the current
situation and the necessary actions of the astronaut in this situation. The second class includes modeling
the actions of an astronaut performing a number of tasks and demonstrating the results of mistakes made
(failure to secure the safety tether, an incorrect trajectory of launching small satellites, skipping an
emergency signal, etc.). The third class includes tasks of managing promising means of movement in
space (a rescue jetpack, robotic systems, mobile robots, etc.). A similar classification can be carried out
for any field of activity.

4. Methods for setting the situation in the virtual environment system
    A static situation is a situation in which all objects and the surrounding environment do not change
their parameters during the simulation. The simplest way to model a static situation is to create a new
three-dimensional scene for the virtual environment system in a 3D modeling system, for example,
3ds Max. However, in many tasks the same scene can serve as the base for different situations. In this
case, there will be numerous specific modifications of it, which take up a large amount of memory and
differ little from each other. If the base scene is changed, the changes have to be tracked in all its
modifications.
    Another way to set the situation is with the help of a configuration file. The configuration file is an
XML file in which the value of any parameter of any scene element can be set. You can set the position
and orientation of an object, make it invisible, or replace it with a damaged one; a light source can be
turned on or off; the time of day can be set; rain or snow can be turned on and their intensity set, etc.
Figure 4 shows an example of a base scene, and Figure 5 shows the same scene in which some objects
are made invisible using a configuration file.
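    For illustration, a possible form of such a configuration file and of code applying it is sketched below;
the XML element and attribute names, as well as the scene representation, are invented for the example
and do not reproduce the actual VirSim format:

    # A hypothetical situation configuration and a minimal applier.
    import xml.etree.ElementTree as ET

    CONFIG = """
    <situation id="3">
      <object name="barrel_02" visible="false"/>
      <object name="robot_01" position="12.5 0.0 -3.0"/>
      <light name="lamp_hangar" enabled="false"/>
      <environment time_of_day="night" rain="0.7"/>
    </situation>
    """

    def apply_configuration(scene, xml_text):
        root = ET.fromstring(xml_text)
        for obj in root.findall("object"):
            node = scene[obj.get("name")]                    # scene element by name
            if obj.get("visible") is not None:
                node["visible"] = obj.get("visible") == "true"
            if obj.get("position") is not None:
                node["position"] = [float(v) for v in obj.get("position").split()]
        for light in root.findall("light"):
            scene[light.get("name")]["enabled"] = light.get("enabled") == "true"
        env = root.find("environment")
        if env is not None:
            scene["environment"].update(env.attrib)          # time of day, rain, etc.

    scene = {"barrel_02": {}, "robot_01": {}, "lamp_hangar": {}, "environment": {}}
    apply_configuration(scene, CONFIG)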
    Virtual control panels can be used to manage configuration files. Such panels are created using a
specially designed editor that allows controls (buttons, toggle switches, switches, etc.) to be placed on the
panel background. During simulation, the positions of the control elements are transmitted to a functional
scheme, which is built from functional blocks of various types (arithmetic, trigonometric, control and
others). In particular, it has time-tracking blocks and startup blocks for executing configuration files.
    Using this mechanism, the situation in the virtual scene can be changed at a certain point in time.
Figure 6 shows an example of an instructor’s control panel, with which the instructor can set one of 12
situations and choose weather conditions (rain or snow).
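    A possible implementation of such a startup block is sketched below (the block interface and behavior
are assumptions made for illustration): it applies a configuration file either when the instructor presses a
panel button or when the simulation clock reaches a preset moment.

    # Hypothetical "startup" block of a functional scheme that launches a
    # configuration file at a given simulation time or on a panel command.
    class ConfigStartupBlock:
        def __init__(self, config_path, fire_at=None):
            self.config_path = config_path
            self.fire_at = fire_at     # simulation time in seconds, or None
            self.fired = False

        def update(self, sim_time, button_pressed, apply_config):
            if self.fired:
                return
            if button_pressed or (self.fire_at is not None and sim_time >= self.fire_at):
                apply_config(self.config_path)   # e.g. load and apply the XML file
                self.fired = True

    # Instructor panel: situation 7 is switched on 90 s after the simulation starts.
    block = ConfigStartupBlock("situation_07.xml", fire_at=90.0)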




Figure 4: Base scene on the training ground for mobile robots




Figure 5: Changing the situation using the configuration file

5. Possible tasks in situational modeling
    Let us consider the possible tasks of situational modeling using the example of two scenes in the
virtual environment system: a mobile robot training ground and an astronaut’s rescue jetpack (SAFER).
In the first scene (see Figure 2), the operator uses a virtual or real panel to control the robot’s movement
and can grasp objects with a manipulator. In the second scene (see Figure 3), the operator uses a virtual
or real joystick to control the jetpack, moving the astronaut in space.
    At the initial stage, the operator must master the controls and the execution of simple basic operations
(studying the control panel, moving the controlled vehicle forward and backward, turning left and right,
grabbing objects, etc.). More complex tasks consist in moving the robot (or the SAFER) to a given point.
The point can be set by coordinates (for this it is necessary to simulate a local positioning system and
display the current coordinates on the control panel) or by a description (for example, the entrance to the
hangar, the exit hatch of the International Space Station, etc.). This type of task also includes searching
for specified objects or inspecting the environment. When searching for objects, special virtual sensors
can be used. For example, a virtual gamma detector shows the level of gamma radiation of a radioactive
object falling into its cone of action (a simplified model of such a sensor is sketched below). It can be
used to search for contaminated radioactive objects for the purpose of their further transportation to
specialized containers. An object can also be specified by a description (for example, a fire source, a fuel
barrel, a solar battery, a certain space module, etc.). The tasks of inspecting the surrounding environment
may include shooting with a virtual camera (including 360-degree panoramic video), searching for
damage, checking the correctness of structural fasteners, etc. The performance of all these operations can
be limited by time and the presence of obstacles.
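    A simplified model of such a virtual gamma detector could combine a cone test with an inverse-square
falloff of the measured level, as in the sketch below (the formula and parameters are assumptions made
for illustration, not the VirSim sensor model):

    # Simplified virtual gamma detector: an object contributes to the reading
    # only if it lies inside the detector's cone of action, and its contribution
    # falls off with the square of the distance.
    import math

    def gamma_reading(detector_pos, detector_dir, half_angle_deg, sources):
        """sources: list of (position, activity) pairs."""
        reading = 0.0
        cos_half = math.cos(math.radians(half_angle_deg))
        dir_len = math.sqrt(sum(c * c for c in detector_dir))
        for pos, activity in sources:
            v = [p - d for p, d in zip(pos, detector_pos)]
            dist = math.sqrt(sum(c * c for c in v)) or 1e-9
            cos_angle = sum(a * b for a, b in zip(v, detector_dir)) / (dist * dir_len)
            if cos_angle >= cos_half:                 # inside the cone
                reading += activity / (dist * dist)   # inverse-square falloff
        return reading

    # A radioactive barrel 5 m in front of the detector, 30-degree half-angle cone.
    print(gamma_reading((0, 0, 0), (0, 0, 1), 30.0, [((0, 0, 5), 100.0)]))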




Figure 6: Situational control panel

    More complex operations include extinguishing burning objects with a jet of water or foam from a
mobile robot’s water cannon, orientation when a sensor fails, the robot getting stuck in a pit and restoring
its operability, etc. In the scene with the SAFER, you can practice an automatic return (in case of fogging
of the spacesuit visor or loss of visibility of the ISS), a return to the nearest handrail (in case of
insufficient fuel for the flight to the airlock), the failure of one engine, stabilization of the astronaut (in
case of tumbling), etc. With the underlying surface of the Earth in the scene, you can change the time of
day and set tasks for recognizing the area being flown over, detecting fires, floods, and other cataclysms.
Figure 7 shows an example of a robot extinguishing a fire, while the operator controls the process using a
virtual control panel in the lower left corner. Figure 8 shows a model of the underlying Earth’s surface
from a height of 300 km, on which it is necessary to recognize the visible area of the terrain.

6. Control in situational modeling
    An important task in situational modeling is control. The type of control is determined by the task
being solved and can be performed using a real or virtual panel (command mode), an exoskeleton, a
tracking system for the operator’s hands and head, voice or gestures, a semi-automatic or fully automatic
mode, etc. Elements of the virtual panel are switched either with a computer mouse or with the fingers if
the panel is displayed on a touch screen. This type of control is not universal; it is also inconvenient for
controlling manipulator models with many degrees of freedom (e.g., human or animal models). Control
with control panels was discussed above.
    To control the model of an anthropomorphic robot, an exoskeleton can be used, which is a frame worn
by the operator. The operator’s movements are digitized and transmitted to the dynamics subsystem to
simulate the operation of the motors in the robot’s joints. The disadvantages of this control method are its
limited accuracy and the inconvenience of wearing an exoskeleton.




Figure 7: The task of extinguishing a fire by a robot




Figure 8: The model of underlying earth’s surface

   Head and hand tracking systems allow the coordinates and orientations of parts of the operator’s body
to be obtained and used by a virtual model to repeat the operator’s movements. This control mode is
called copying.




Figure 9: An android performing the voice command “Open the valve”




Figure 10: A mobile robot executing a command to move to a given point in automatic mode
    Voice control (supervisory mode) involves the utterance of a command by the operator, followed by
its execution by the controlled object. To implement this technology, it is necessary to have a command
recognition module. The action itself can be performed according to a pre-recorded trajectory.
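    A minimal sketch of this scheme is given below (the command set, file paths and the play_trajectory
callback are hypothetical, invented for the example):

    # Supervisory voice control: a recognized command is mapped to a
    # pre-recorded trajectory that is then played back by the dynamics subsystem.
    TRAJECTORIES = {
        "open the valve":  "trajectories/open_valve.traj",
        "close the valve": "trajectories/close_valve.traj",
        "go to the hatch": "trajectories/go_to_hatch.traj",
    }

    def execute_command(recognized_text, play_trajectory):
        command = recognized_text.strip().lower()
        path = TRAJECTORIES.get(command)
        if path is None:
            return False          # unknown command, nothing is executed
        play_trajectory(path)     # hand the pre-recorded trajectory to dynamics
        return True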
    Figure 9 shows, in the “VirSim” system, an android unscrewing a valve in response to a voice or
gesture command, and Figure 10 shows a mobile robot automatically moving to a specified point.
    In the supervisory control mode, the operator sets commands using positions or movements (gestures)
of his hands or fingers; the robot recognizes the position (gesture) and executes a predefined command
corresponding to it. The disadvantages of this method are the small number of meaningful gestures and
recognition inaccuracies. The semi-automatic mode involves controlling the manipulator’s end effector
(the point at the end of the last link of the manipulator) using inverse kinematics (a standard two-link
example is sketched below). This means that the operator controls the movement of the manipulator’s
end effector (using one of the previous methods), and all other links of the manipulator automatically
adjust to this movement. An automatic mode can also be specified, in which the operator indicates the
end point and the robot moves to it independently, bypassing obstacles. At the same time, the robot’s
movement may be inefficient, take longer, or the path may not be found at all. Further development may
include intelligent control systems.

7. Conclusion
   The paper proposes methods for setting various situations, as well as possible tasks of situational
modeling in virtual environment systems, using the example of two virtual scenes: a training ground for
mobile robots and a jet backpack for rescuing an astronaut in outer space. Situational modeling was
carried out in the virtual environment system “VirSim”, developed at the Scientific Research Institute for
System Analysis of the Russian Academy of Sciences. Testing the creation of various situations using
three-dimensional virtual scenes, configuration files and virtual control panels showed that the proposed
methods are adequate to the tasks set.

8. Acknowledgements
    The publication is made within the state task of Federal State Institution “Scientific Research Institute
for System Analysis of the Russian Academy of Sciences” on “Carrying out basic scientific researches
(47 GP)” on topic No. FNEF-2021-0012 “Virtual environment systems: technologies, methods and algo-
rithms of mathematical modeling and visualization. 0580-2021-0012” (Reg. No. 121031300061-2).

9. References
[1] D. A. Pospelov, Situatsionnoe upravlenie: teoriia i praktika [Situational control: theory and practice].
    Moscow: Nauka, 1986. 288 p.
[2] Jeong-Nam Kim, James E. Grunig, Problem Solving and Communicative Action: A Situational The-
    ory of Problem Solving. Journal of Communication 61 (2011) 120–149.
[3] R. Andoga, L. Főző, L. Madarász, Digital Electronic Control of a Small Turbojet Engine MPM 20.
    Acta Polytechnica Hungarica. 4(4) (2007) 83–95.
[4] A. V. Maltsev, M. V. Mikhaylyuk, Virtual Environment System for Pirs Space Module Interior.
    CEUR Workshop Proceedings: Proc. of the 29th International Conference on Computer Graphics
    and Vision 2485 (2019) 1–3. URL: http://ceur-ws.org/Vol-2485
[5] A. A. Altunin, P. P. Dolgov, N. R. Zhamaletdinov, E. Yu. Irodov, V. S. Korennoj, Napravleniya
    primeneniya tekhnologij virtual'noj real'nosti pri podgotovke kosmonavtov k vnekorabel'noj
    deyatel'nosti. Pilotiruemye polety v kosmos 1 (38) (2021) 72–88.
[6] A. V. Maltsev, M. V. Mikhaylyuk, Visualization and virtual environment technologies in the tasks of
    cosmonaut training. Scientific Visualization. 12 (3) (2020) 16–25.
[7] M. V. Mikhailiuk, A. V. Maltsev, P. Iu. Timokhin, E. V. Strashnov, B. I. Kriuchkov, V. M. Usov,
    Sistemy virtualnogo okruzheniia dlia prototipirovaniia na modeliruiushchikh stendakh ispolzovaniia
    kosmicheskikh robotov v pilotiruemykh poletakh. Pilotiruemye polety v kosmos. 2 (35) (2020) 61–
    75.
[8] T. Tomchinskaya, M. Shaposhnikova, N. Dudakov, Training Beginners and Experienced Drivers
    using mobile-based Virtual and Augmented Reality. CEUR Workshop Proceedings: Proc. of the 30th
    International Conference on Computer Graphics and Vision 2744 (2020).



