<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>VR simulators using generative AI and photogrammetry</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sofia Chyrun</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Victoria Vysotska</string-name>
          <email>Victoria.A.Vysotska@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>S. Bandera 12, 79013 Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>The article presents an approach to developing an interactive VR/AR simulator for first aid in crisis and war situations, utilising generative artificial intelligence (Stable Diffusion, Tripo, Meshy, Trellis3D) and mobile photogrammetry. The goal of the study is to create a safe, realistic and economically optimised learning environment implemented on the Unreal Engine 5 game engine. The proposed methodology allows for a significant reduction in the cost of creating 2D/3D content due to automated model generation and the experimental "damaged realism" method, a deliberate reduction in the number of input photogrammetric frames for modelling destroyed objects. The MVP prototype implements key VR mechanics, including a combined movement system (Smooth Locomotion with teleportation), physical interaction with medical instruments (Grabbable Objects), spatial audio, and the integration of instructional videos. The experiments carried out confirm the effectiveness of combining GAI and photogrammetry for the rapid development of specialised simulation environments, capable of improving the quality and effectiveness of training in first aid skills.</p>
      </abstract>
      <kwd-group>
        <kwd>VR/AR simulation</kwd>
        <kwd>first aid</kwd>
        <kwd>generative artificial intelligence</kwd>
        <kwd>Unreal Engine 5</kwd>
        <kwd>photogrammetry</kwd>
        <kwd>"damaged realism"</kwd>
        <kwd>medical simulators</kwd>
        <kwd>immersive technologies</kwd>
        <kwd>3D modelling</kwd>
        <kwd>tactical medicine</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The relevance of providing quick and effective first aid in wartime and emergencies is critically
high. Traditional teaching methods are often limited to theory and static dummies, which do not
provide adequate psychological and practical preparation for stressful conditions accompanied by
injuries, bleeding, shock and limited time for decision-making.</p>
      <p>This research aims to develop an innovative approach to training first aid skills by creating an
interactive VR/AR simulator (Virtual/Augmented Reality) based on the Unreal Engine 5 game
engine. A key feature of the project is the use of generative artificial intelligence (Generative AI),
such as Stable Diffusion, Trellis3D, and Tripo, to quickly create realistic and unique 2D and 3D
content, including destroyed city scenes, damaged objects, and models of victims with
characteristic injuries. The project encompasses the entire development cycle, beginning with the
creation of a business model (using a Business Model Canvas) and the detailed planning of a Work
Breakdown Structure (WBS). As part of the implementation, the VR scene of a city street after a
missile strike was successfully prototyped, key VR interaction mechanics such as Smooth
Locomotion and the Grabbable Objects system
were implemented, physical collisions were
configured, and educational and atmospheric content (UMG menu, video instructions, spatial
audio) was integrated. Additionally, an experiment using photogrammetry was conducted to create
unique real-world assets.</p>
      <p>The purpose of the study is to develop an information technology to create a safe, flexible and
realistic learning environment based on virtual reality in Unreal Engine, which allows users
(military, medics, students and civilians) to practise critical first aid skills in conditions as close
as possible to combat, war and crises, using the VR/AR format and game mechanics. Achieving this
purpose requires the following WBS tasks and practical solutions:
1. Develop the concept and architecture of the VR/AR simulator, including the formation of
the target audience, value proposition, and key resources (according to the Business Model
Canvas).
2. Create visual content (2D and 3D) for the VR/AR scene using generative artificial
intelligence (GAI) (Stable Diffusion, Leonardo.Ai, Trellis3D, Meshy, Tripo) to simulate the
destroyed environment and victims.
3. Develop a VR project in Unreal Engine through the VR Template and build a basic structure
of the VR scene.
4. Organise content migration, set up collisions for 3D models, and use photogrammetry to
create unique real-world asset sets.
5. Implement key VR interaction mechanics, including Smooth Locomotion, teleport, and
Grabbable Objects settings for medical instruments.
6. Integrate training elements and user interface (UI/UX), in particular, create a VR menu, add
spatial audio accompaniment, and embed training video instructions (Triage, CPR, Bleed
Stop, etc.).
7. Test the VR project among the control group of participants in the experimental trial.</p>
      <p>The object of research is the processes of development, integration and optimisation of content
and mechanics of virtual (VR) and augmented (AR) reality for the creation of educational simulators.
The subject of the study is an interactive VR/AR simulator for first aid in crisis and war situations,
implemented using the Unreal Engine 5 game engine.</p>
      <p>The scientific novelty of the study is as follows:
1. System integration of GAI for accelerated development of VR scenes, in particular, for the
first time, a combination of GAI tools (Tripo, Meshy, Trellis3D) and the Unreal Engine 5
game engine was used to quickly create specialised and highly detailed content (casualties,
ruins, injuries), which significantly reduces the development time of MVP (Minimum Viable
Product).
2. Experimental use of incomplete data in photogrammetry, including experimental evidence
that a conscious reduction in the number of photographs (e.g., 37.5–62.5% of the
recommended number) in mobile photogrammetry (RealityScan) can be used as a creative
method for modelling "affected zone" style assets (incomplete detail, mesh distortion), which
is relevant for military simulations.
3. Development of a combined movement system for VR comfort, in particular, implemented
and tested a combined approach to navigation that combines Smooth Locomotion and
teleportation to ensure maximum immersion and minimise VR sickness.</p>
      <sec id="sec-1-1">
        <title>The practical value is as follows:</title>
        <p>1. Creation of a functional prototype of a VR simulator with an interactive scene of
assisting victims of a missile strike, which has direct applied value for the Ministry of Defence, the
Red Cross, the Ministry of Health, military academies and public organisations.
2. Realistic training that provides a safe way to practice critical skills (tourniquet, CPR, Triage)
without risk to real people, preparing users for stressful conditions (simulation of shock,
panic).
3. A ready-made methodology for quick content creation, as clear instructions and comparison
tables are provided for the use of free/paid GAI tools (2D/3D) and 3D model platforms,
which can be used by developers to quickly fill VR projects.</p>
        <p>Multiplatform and accessibility, in particular, the project provides support for both high-quality
VR headsets and mobile AR applications, providing flexibility and accessibility for learning.</p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Problem statement</title>
      <p>The challenge of the study is to develop and optimise a model of a highly realistic, interactive and
accessible simulation environment for training first aid skills, which are critical in conditions of
limited time and stressors inherent in crisis and military situations. The key task is to maximise the
effectiveness of training Enach while minimising the cost of developing a product Srr and the time to
bring it to the market TMVP, utilising the resources of the GAI to create high-quality content.</p>
      <p>Learning effectiveness is defined as the weighted sum of key indicators reflecting immersion
depth R, level of interactivity I, feedback quality F, and the correctness of critical skills Akr.</p>
      <p>
        E_{nach} = \sum_{j=1}^{N} \omega_j \cdot P_j \to \max, \qquad \sum_{j=1}^{N} \omega_j = 1, \quad (1)
where Pj is an indicator of efficiency according to the j-th criterion, in particular, P1 = R (realism
of the scene and immersion) at R ∈ [0, 1], P2 = I (VR/AR interactivity) at I ∈ [0, 1] (interaction with
objects, use of controllers), P3 = F (quality of feedback and evaluation) at F ∈ [0, 1] (reaction time,
correctness of tourniquet application, error analysis), and P4 = Akr (accuracy of the algorithm of
actions) at Akr ∈ [0, 1] (adherence to Triage protocols, CPR), and ωj is the weight factor of the
importance of the criterion (\sum_{j=1}^{4} \omega_j = 1).
      </p>
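      <p>For illustration, the criterion aggregation of Eq. (1) can be sketched in a few lines of Python; the weights and indicator scores below are hypothetical placeholders, not measured project values.</p>
      <preformat>
# Minimal sketch of Eq. (1): E_nach as a weighted sum of criterion scores.
# All numeric values are illustrative assumptions.
import math

def training_effectiveness(scores, weights):
    """Return E_nach = sum of w_j * P_j over the criteria; weights must sum to 1."""
    assert math.isclose(sum(weights.values()), 1.0), "weights must sum to 1"
    return sum(weights[c] * scores[c] for c in scores)

# P1 = R (realism), P2 = I (interactivity), P3 = F (feedback), P4 = Akr (accuracy)
scores  = {"R": 0.8, "I": 0.7, "F": 0.6, "Akr": 0.9}     # each in [0, 1]
weights = {"R": 0.25, "I": 0.25, "F": 0.25, "Akr": 0.25}
print(round(training_effectiveness(scores, weights), 3))  # 0.75
      </preformat>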
      <p>The cost of developing a product Srr and the time to market TMVP should be minimised
through the use of GAI.</p>
      <p>S_{rr} + \alpha \cdot T_{MVP} \to \min, \quad (2)
where TMVP is the total development time of the MVP (42 weeks according to the Work
Breakdown Structure, WBS), Srr is the total cost of development, and α is the weight factor
(representing the cost of time).</p>
      <p>Taking into account the contribution of the GAI:</p>
      <p>S_{rr} = C_{trad} \cdot (1 - S_{GAI}) + C_{GAI}, \quad (3)</p>
      <p>T_{MVP} = T_{trad} \cdot (1 - \delta_{GAI}), \quad (4)
where SGAI is the share of saved costs for 3D modelling/concepts due to GAI, δGAI is the share of
time saved on 3D modelling/texturing due to GAI (e.g., Trellis3D, Meshy, Tripo, Stable Diffusion),
and CGAI is the cost of licenses, server capacity, and AI tools.</p>
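      <p>A minimal sketch of the cost/time model of Eqs. (2)-(4), assuming illustrative values for the traditional cost, the GAI savings shares and the time-cost weight α (none of these numbers are reported in the study):</p>
      <preformat>
# Sketch of Eqs. (2)-(4); all inputs are assumed, not measured.

def development_cost(c_trad, s_gai, c_gai):
    """Eq. (3): S_rr = C_trad * (1 - S_GAI) + C_GAI."""
    return c_trad * (1.0 - s_gai) + c_gai

def development_time(t_trad, delta_gai):
    """Eq. (4): T_MVP = T_trad * (1 - delta_GAI)."""
    return t_trad * (1.0 - delta_gai)

t_mvp = development_time(t_trad=60, delta_gai=0.30)             # 42.0 weeks
s_rr  = development_cost(c_trad=10_000, s_gai=0.40, c_gai=500)  # 6500.0
alpha = 100.0                       # assumed monetary cost of one week
objective = s_rr + alpha * t_mvp    # Eq. (2), the quantity to minimise
print(objective)                    # 10700.0
      </preformat>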
      <p>The project must comply with the following restrictions:
1. Technological limitations: Ptech ∈ {Unity, Unreal Engine 5} ∧ Vtech ∈ {VR headsets, Mobile AR
devices}.
2. Time limits (with WBS): TMVP ≤ 42 weeks.
3. Limitations of realism and correctness (with certification): Akr ≥ Amin, where Amin is the
minimum threshold of accuracy that corresponds to the official protocols of first aid
(Ministry of Health, Red Cross).
4. Localisation restrictions: L ≥ Lmin, where L is the number of supported languages and
cultural adaptations of content (including English, Chinese, Spanish).</p>
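      <p>The constraint set lends itself to a mechanical feasibility check; a sketch follows, where the threshold values for Amin and Lmin are hypothetical:</p>
      <preformat>
# Sketch: checking the four project constraints from the problem statement.
def feasible(t_mvp, a_kr, a_min, langs, l_min, engine, device):
    ok_platform = (engine in {"Unity", "Unreal Engine 5"}
                   and device in {"VR headset", "Mobile AR device"})
    ok_time     = not t_mvp > 42    # weeks, per the WBS limit
    ok_accuracy = a_kr >= a_min     # protocol-accuracy threshold
    ok_locale   = langs >= l_min    # localisation coverage
    return ok_platform and ok_time and ok_accuracy and ok_locale

print(feasible(t_mvp=42, a_kr=0.9, a_min=0.8, langs=3, l_min=3,
               engine="Unreal Engine 5", device="VR headset"))  # True
      </preformat>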
      <p>Thus, the task of the study can be formulated as a multi-objective optimisation task: to find the
optimal combination of development parameters (choice of technologies, level of integration of
GAI and allocation of resources) that maximises the effectiveness of training Enach in compliance
with all technological, time and regulatory constraints, while minimising the overall costs and time
of MVP development.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Related works</title>
      <p>
        An analysis of the literature and related developments reveals a rapid increase in interest in the use
of immersive technologies (VR/AR) and GAI to enhance learning effectiveness, particularly in
critical areas such as medicine and military training. The research presented in this paper combines
three key scientific and practical areas, each with a substantial research base. Over the past decade,
VR simulations in tactical and emergency medicine have been proven to be a highly effective
alternative to traditional dummies, especially for training complex, high-risk, and low-frequency
events [
        <xref ref-type="bibr" rid="ref1 ref2">1–2</xref>
        ]. In particular, research highlights the ability of VR environments to recreate any
patient condition in any environment [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], a capability that is not possible with physical simulators.
Related research on immersiveness confirms that immersive technologies significantly
enhance learning effectiveness and user satisfaction in emergencies [
        <xref ref-type="bibr" rid="ref4 ref5">4–5</xref>
        ]. The study of the efficacy
of TacMedVR emphasises the importance of assessing interaction and response to stress [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], which
directly correlates with the value proposition of this project (simulation of emotional reactions of
victims, learning under pressure) [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6–12</xref>
        ].
      </p>
      <p>
        The use of VR simulators, such as the SimX platform [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], has confirmed their superiority in
developing critical thinking, effective information communication, and enhancing team dynamics
in assisting in wartime (Damage Control Resuscitation/Surgery) [11]. The context of teamwork and
critical thinking directly justifies the need for development focused on combat scenarios.
Traditional modelling of 3D content is the most resource-intensive and time-consuming stage of
simulator development. Therefore, relying on GAI and automated 3D content generation (Tripo, Meshy)
for the accelerated creation of 3D objects is part of a global trend. Modern developments,
particularly the integration of GAI, such as Ludus AI, into Unreal Engine 5.5 [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], demonstrate a
paradigm shift [
        <xref ref-type="bibr" rid="ref7">7</xref>
]. The workflow acceleration approach enables generating 3D models from
text descriptions and images in near real-time, which drastically reduces development time (TMVP
in terms of the project WBS). While traditional modelling still offers more precise detail [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], platforms
like Sloyd and Rodin AI (not indexed in the reference list) are actively developing the
ability to create high-quality, game-optimised 3D models from text or images, confirming the
viability of the chosen method for creating assets (ruins, damaged objects) for the simulator
[
        <xref ref-type="bibr" rid="ref9">13–26</xref>
        ]. The use of photogrammetry to create 3D models of real-world objects (e.g., a swing)
demonstrates a desire to enhance the photorealism of the scene, thereby improving the quality of
learning in realistic environments. It aligns with the direction of research that utilises this
technique to create learning resources [
        <xref ref-type="bibr" rid="ref10 ref11 ref12 ref13 ref14 ref15 ref16 ref17 ref18">27–35</xref>
        ].
      </p>
      <p>
        Photogrammetry is a cost-effective and accessible method [
        <xref ref-type="bibr" rid="ref8">8, 9</xref>
        ] for creating highly detailed,
realistic 3D models of anatomical preparations or real objects for medical education. Its integration
into engines like Unity [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], or evaluation in RealityCapture [10] confirms that this technology
significantly enhances immersion and realism [9] in simulation environments. A special novelty of
modern VR projects lies in a creative approach based on a deliberate reduction in the quality of
input data for photogrammetry, aiming to achieve the effect of damaged content. Although most
studies (e.g., [
        <xref ref-type="bibr" rid="ref3">3, 10</xref>
        ]) focus on maximising accuracy (60–80% overlap, noise minimisation), the
proposed approach using RealityScan to create "hit zone" objects is unique in the context of content
creation for military simulators.
      </p>
    </sec>
    <sec id="sec-4">
      <title>4. Materials and methods</title>
      <p>The development and implementation of an interactive VR/AR simulator for first aid is based on an
interdisciplinary approach that combines methodologies from game design, computer graphics,
reality modelling (photogrammetry), and generative artificial intelligence (GAI) technology. The
experimental part focuses on creating a minimum viable product (MVP). For project management
and cost control, the Work Breakdown Structure (WBS) methodology and the Gantt chart were
utilised. The project is divided into six key stages:</p>
      <p>T_{zag} = \max_{i \in \{1, \dots, 6\}} T_i . \quad (5)</p>
      <p>Therefore, Tzag = T6 = 42 weeks (taking into account parallel management). The target audience
is segmented as A = {Ast, Agr, Avik}, where Ast refers to pupils, students, and
teachers, Agr refers to citizens and volunteers, and Avik refers to military personnel, doctors, and
instructors.</p>
      <p>Key MVP scenario: city street after a missile strike/mine explosion. The scenario has four types
of victims, classified according to the Triage system: P = {Pcrit_dit, Pcrit_mat, Pser_op, Pleg}, where Pcrit_dit is a
child (7 years old) with respiratory arrest (CPR); Pcrit_mat – mother with severe bleeding (tourniquet);
Pser_op – a man with burns (shock); Pleg – minor injuries.</p>
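      <p>For clarity, the casualty set P and the skill each case is meant to train can be laid out as a small data structure (a sketch; field names are illustrative):</p>
      <preformat>
# Sketch: the MVP casualty set P from the Triage classification above.
CASUALTIES = {
    "Pcrit_dit": {"case": "child (7 y.o.), respiratory arrest", "skill": "CPR"},
    "Pcrit_mat": {"case": "mother, severe bleeding", "skill": "tourniquet"},
    "Pser_op":   {"case": "man with burns (shock)", "skill": "burn/shock care"},
    "Pleg":      {"case": "minor injuries", "skill": "triage only"},
}
critical = [k for k in CASUALTIES if k.startswith("Pcrit")]
print(critical)   # ['Pcrit_dit', 'Pcrit_mat']
      </preformat>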
      <p>To accelerate the creation of assets (3D models of buildings, vehicles, and characters), GAI tools
were utilised, specifically neural networks: G = {Tripo, Meshy, Trellis3D}. The primary methods are
Text-to-3D and Image-to-3D, and the supported export formats are OBJ, FBX, and GLB/GLTF. To
maximise the quality of 2D concepts and 3D models, a detailed Qprompt containing the object,
scenario, style, lighting and detail was used, e.g., Qprompt = {"Low-poly city street after a missile strike
with stylised lighting"}. To create assets with a high level of realism that simulate damage, mobile
photogrammetry (utilising the Samsung Galaxy A52 and RealityScan) was employed. The
experimental method of "damaged realism" involves the deliberate reduction of the number of
input frames, Nphoto, to induce non-critical distortions of the mesh and textures. The condition of the
experiment is Nphoto ∈ [30, 50] frames, while the recommended range is Nrekom ∈ [80, 100] frames.
The scanned objects are typical urban elements of a residential district, for example, models of a
children's swing. The output format is GLB. The working environment utilises the Unreal Engine 5
(UE5) game engine, featuring the basic VR Template, as shown in Figs. 1-3. The scene
components include VRPawn (virtual player character) and NavMeshBounds Volume (navigation
area for teleportation). For the locomotion system, a combined approach was used:</p>
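      <p>The capture condition is easy to express numerically; the sketch below reproduces the 37.5-62.5% figure quoted elsewhere in the paper from the two frame ranges:</p>
      <preformat>
# Sketch of the "damaged realism" capture condition from the method above.
N_RECOMMENDED = range(80, 101)   # 80-100 frames for a clean reconstruction
N_EXPERIMENT  = range(30, 51)    # deliberately reduced 30-50 frame range

def is_damaged_realism_capture(n_photo):
    """True when the frame count falls inside the reduced range."""
    return n_photo in N_EXPERIMENT

low  = min(N_EXPERIMENT) / min(N_RECOMMENDED)   # 30 / 80 = 0.375
high = max(N_EXPERIMENT) / min(N_RECOMMENDED)   # 50 / 80 = 0.625
print(f"{low:.1%} to {high:.1%} of the recommended minimum")  # 37.5% to 62.5%
print(is_damaged_realism_capture(40))                         # True
      </preformat>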
      <p>L_{teleport} \oplus L_{smooth} , \quad (6)
where Lteleport is teleportation (to avoid VR sickness) and Lsmooth is Smooth Locomotion (smooth
movement using the analogue controller sticks).</p>
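      <p>Conceptually, the combined policy reduces to a single update step. The sketch below is engine-agnostic Python rather than the project's Blueprint logic; all names and values are illustrative:</p>
      <preformat>
# Sketch of the combined policy L_teleport (+) L_smooth (illustrative only).
from dataclasses import dataclass

@dataclass
class VRInput:
    stick: tuple              # analogue stick (x, y), each in [-1, 1]
    teleport_pressed: bool

def update_locomotion(position, forward, right, inp, speed, dt, teleport_target=None):
    """Smooth stick-driven movement; teleport kept as the comfort fallback."""
    if inp.teleport_pressed and teleport_target is not None:
        return teleport_target          # instant jump avoids motion sickness
    x, y = inp.stick
    return tuple(p + (f * y + r * x) * speed * dt
                 for p, f, r in zip(position, forward, right))

pos = update_locomotion((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0),
                        VRInput(stick=(0.0, 1.0), teleport_pressed=False),
                        speed=3.0, dt=0.016)
print(pos)   # (0.048, 0.0, 0.0)
      </preformat>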
      <p>For medical instruments (tourniquet, scissors), class Grabbable_SmallCube with physics
activation is used. Capture condition:</p>
      <p>G:{Object ∈ Grabbable_Component ∧ Simulate Physics = true ∧ Collision Preset = PhysicsActor}.</p>
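      <p>As a sketch of condition G, the following UE5 editor Python snippet enables the two physics prerequisites on the selected actors. It assumes the stock unreal editor-scripting API (treat the exact names as assumptions); the GrabComponent itself is added in Blueprints:</p>
      <preformat>
# UE5 Editor Python sketch (run from the editor's Python console).
import unreal

actor_subsystem = unreal.get_editor_subsystem(unreal.EditorActorSubsystem)
for actor in actor_subsystem.get_selected_level_actors():
    comp = actor.get_component_by_class(unreal.StaticMeshComponent)
    if comp is None:
        continue
    comp.set_simulate_physics(True)                  # Simulate Physics = true
    comp.set_collision_profile_name("PhysicsActor")  # Collision Preset = PhysicsActor
    unreal.log(actor.get_name() + ": grab prerequisites enabled")
      </preformat>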
      <p>For all imported Static Meshes (including GAI models), Simple Collision was used to optimise VR
performance: Collision → Add Box Collision/Convex Decomposition → Apply → Save. Content
integration, in particular the import of generated models, was carried out in FBX format with
subsequent manual adjustment of PBR textures in the Material Editor.</p>
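      <p>The Collision → Add Box Collision → Apply → Save sequence can also be batched with editor scripting; a sketch, assuming the imported assets live under a hypothetical /Game/GeneratedAssets folder and the editor Python API names used here:</p>
      <preformat>
# UE5 Editor Python sketch: add Simple (box) collision to imported static meshes.
import unreal

for path in unreal.EditorAssetLibrary.list_assets("/Game/GeneratedAssets", recursive=True):
    asset = unreal.EditorAssetLibrary.load_asset(path)
    if isinstance(asset, unreal.StaticMesh):
        unreal.EditorStaticMeshLibrary.add_simple_collisions(
            asset, unreal.ScriptingCollisionShapeType.BOX)  # Collision -> Add Box Collision
        unreal.EditorAssetLibrary.save_asset(path)          # Apply -> Save
      </preformat>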
      <p>For the User Interface (UI), the WidgetMenu (UMG Blueprint) has been modified to add the
"Instructions" and "Settings" buttons. Multimedia integration:
1. Tutorial Video – 5 video instructions (Triage, CPR, Bleed Stop, Burn, Panic) displayed on the
Static Mesh Plane via Media Player.
2. Spatial Audio – 2 key sound effects are implemented: Siren S: Audio Actor ∧ Auto Activate =
true ∧ Looping = true; Missile hit R: Audio Actor ∧ Activate with Delay = 10.0 s ∧ Looping =
false.</p>
      <sec id="sec-4-1">
        <title>VR Immersion Z:</title>
        <p>Z =α R⋅Rviz +α I⋅I mech+α A⋅S Aud ,
where Rviz is the realism of visual content (GAI, photogrammetry); Imech – interactivity (Smooth
Locomotion, Grabbable Objects); SAud – audio quality (sirens, explosions), α – weight factors.</p>
        <p>The evaluation of the training Enach quality was based on the effectiveness of the training, as
defined in the problem statement, which is a key tool for assessing the achievement of the research
objective.</p>
        <p>
          E_{nach} = \sum_{j=1}^{N} \omega_j \cdot P_j \to \max, \qquad \sum_{j=1}^{N} \omega_j = 1, \quad (8)
where ωj is the weighting factor of the importance of the criterion (\sum_{j=1}^{4} \omega_j = 1), and Pj is the
performance indicator according to the j-th criterion, measured during alpha and beta
testing (Stage E3):
1. P1 = R (scene realism and immersion) – evaluated by feedback from specialists (alpha
testing) at R ∈ [0, 1].
2. P2 = I (VR/AR interactivity) – evaluated by the intuitiveness of control (beta testing) at
I ∈ [0, 1] (interaction with objects, use of controllers).
3. P3 = F (quality of feedback and evaluation) – evaluated by the system of automatic
evaluation of actions at F ∈ [0, 1] (reaction time, correctness of tourniquet application, error
analysis).
4. P4 = Akr (accuracy of the algorithm of actions) – evaluated on the basis of compliance with
first aid protocols at Akr ∈ [0, 1] (adherence to Triage protocols, CPR).
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Testing took place in three stages:</title>
      </sec>
      <sec id="sec-4-3">
        <title>1. Internal testing (functionality, performance). 2. Alpha testing (specialists: doctors, military instructors) – validation of the realism of scenarios and algorithms. 3. Beta testing (volunteers, students) – assessment of UX/UI and intuitiveness.</title>
        <p>The effectiveness of using GAI in content creation (i.e., minimising Srr and TMVP) was evaluated
by comparing the time spent on creating assets using GAI with traditional modelling estimates.</p>
        <p>S_{rr} + \alpha \cdot T_{MVP} \to \min . \quad (9)
This confirms the economic feasibility of the methods used.</p>
        <p>Educational content, including video instructions and interactive prompts, is integrated into the
system and is based on official protocols for first aid (Ministry of Health, Red Cross) (Figs. 4-7).
Content validation condition:</p>
        <p>L ∈ {Official protocols of the Ministry of Health, Red Cross, Military medicine}. \quad (10)</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experiments</title>
      <p>The experimental part of the study aims to implement the key functional modules of the VR First
Aid Simulator (MVP) in a practical setting and validate innovative methods of content creation,
specifically the use of GAI and mobile photogrammetry. The experiments were conducted
according to the stages of MVP Development E2 and Testing and Improvement E3 (Table 1).</p>
      <p>The purpose of the experiment "Validation of the Integration of GAI into the Workflow (Stage
E2)" was to confirm the hypothesis that the use of GAI can provide fast and cost-effective
generation of 3D models that meet the requirements of a specific scene ("affected area" – Fig. 8-10).
Generative models were used in the creation of visual concepts and 3D objects for the scene "City
Street after a Missile Attack" (Fig. 11-13). Generation tools: Stable Diffusion, DALL-E (ChatGPT),
KREA, Ideogram (for 2D concepts). Tripo, Meshy, Trellis3D (for 3D models).</p>
      <p>The primary method involves using detailed prompts to create specific characters (Fig. 14-15)
and environments (for example, "the wounded mother is standing with a bleeding hand ... stylised
low-polygonal aesthetics...").</p>
      <p>3D models for key aspects of the scene were successfully generated (Fig. 16-17):</p>
      <sec id="sec-5-1">
        <title>1. Damaged environment (destroyed houses, damaged cars). 2. Human models of victims (a child in an unconscious state, a mother with bleeding, a man with burns).</title>
        <p>Generative neural networks made it possible to quickly (within the planned time TMVP) create a
unique library of 3D assets suitable for further import into UE, which significantly expanded the
capabilities of the scene and its plausibility. Models were exported to FBX/GLB formats and
imported into UE5.</p>
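        <p>The import itself can be automated with the editor's Python asset-import task; a sketch, with hypothetical file and destination paths:</p>
        <preformat>
# UE5 Editor Python sketch: automated FBX import of GAI-generated models.
import unreal

task = unreal.AssetImportTask()
task.filename = "C:/exports/destroyed_house.fbx"   # hypothetical export from Tripo/Meshy
task.destination_path = "/Game/GeneratedAssets"    # hypothetical content folder
task.automated = True                              # suppress the interactive dialog
task.save = True

unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks([task])
        </preformat>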
        <p>Physical collisions (Simple Collision/Convex Collision) are implemented on the models to
ensure the correct interaction of the player (VR character) with the environment (Fig. 18). The
models generated by the GAI (Tripo, Meshy) turned out to be suitable for VR scenes, which
confirmed the possibility of using the GAI to minimise the cost Srr of artistic modeling.</p>
        <p>The purpose of the experiment "Validation of the Damaged Realism Method through
Photogrammetry (Stage E2)" was to determine whether a controlled reduction in the quality of the
photogrammetry input data could be used to simulate damage to objects without additional 3D
processing, such as playground objects (5 models). Tool – RealityScan (Epic Games) on a mobile
device. The main experimental condition (controlled reduction) was that the number of shots
Nphoto was deliberately limited to the range of 30–50 frames, which is approximately 37.5% to
62.5% of the recommended number (80–100 frames). The models were created with partial mesh
distortions, unfilled areas and inaccuracies in textures (Fig. 19). The number of polygons ranged
from 221,789 to 669,954. The experiment confirmed that consciously limiting the number of photos
leads to the effect of a "damaged" view (incomplete detail), which is desirable for the visual style of
the affected area and can be used as a creative method for modelling VR environments.</p>
        <p>The goal of the experiment "Implementation and Testing of Key VR Mechanics (Stages E2–E3)" is
to achieve a high level of P2 interactivity and user comfort, which is necessary for the successful
implementation of the Learning Efficiency function.</p>
        <p>The combined approach of Lteleport ⊕ Lsmooth ensures optimal immersion and comfort. Two attempts
were made:
1. Implementation of Smooth Locomotion exclusively with teleportation disabled. The result
was that the character could not move (complete failure of the function).
2. Implementation of smooth movement with teleportation enabled.</p>
        <p>The combined approach of Lteleport ⊕ Lsmooth turned out to be effective for comfortable use, which
confirmed the need to preserve teleportation as a "fallback" to minimise virtual disorientation
(Fig. 20).</p>
        <p>The gripping functionality for medical instruments (tourniquet, scissors) has been implemented
by:</p>
      </sec>
      <sec id="sec-5-2">
        <title>1. Adding a GrabComponent to an object.</title>
        <p>2. Activation of Simulate Physics = true.
3. Collision Preset: PhysicsActor settings.</p>
        <p>Created VR pickup items that can be physically captured and used in the scene (Fig. 21),
confirming the achievement of the required level of P2 interactivity to practice skills.</p>
        <p>The standard VR menu has been modified (Fig. 22); function buttons ("Instructions" and
"Settings") have been added. The functionality of displaying training videos (Triage, CPR, Bleed
Stop, etc.) on the virtual screen (Figs. 23–24) and the automatic activation of spatial audio (sirens,
explosions – Figs. 25-26) has been implemented.</p>
        <p>The integration provided the ability to receive interactive prompts and automatically evaluate
actions (Fig. 27), which lays the foundation for quantifying the P3 feedback quality score at the final
testing stage.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Results</title>
      <sec id="sec-6-1">
        <title>A functional prototype of the scene and key mechanics</title>
        <p>(including VR/AR interaction and injury simulation)
was created.</p>
        <p>Three phases of testing were conducted (internal, alpha
testing with doctors, and beta testing on volunteers).</p>
        <p>The results obtained during the planning phase correlate with studies that confirm the high
effectiveness of VR/AR in medical education. The selected value proposition – a realistic simulation
of critical situations and VR/AR interactivity – reflects an approach that, in similar randomised
controlled trials, has shown a statistically significant improvement in training effectiveness
compared to conventional methods. The use of GAI tools (Tripo, Meshy) made it possible to
quickly create specific content for the "Affected Area" scene. The GAI successfully generated the
necessary models: destroyed houses, damaged cars, and characters with characteristic injuries (for
example, a man with burns, a mother with bleeding). It confirms that the integration of GAI
minimises reliance on traditional modelling, which is a key factor in minimising the total cost of
R&amp;D and accelerating TMVP, as predicted in the 3D content automation studies. A mobile
photogrammetry experiment (RealityScan) was successfully conducted to create five models of a
baby swing. The developed models had a polygonality of 221,789 to 669,954 polygons. The deliberate
reduction of input data (30–50 frames instead of the recommended 80–100) resulted in controlled mesh
distortions and texture inaccuracies. This result confirms that the reduction in input data
(approximately 37.5% to 70% less than recommended) can be used as a creative method for modelling
the affected area, unlike most photogrammetry studies that seek to maximise accuracy.</p>
        <p>A combined approach to movement has been implemented: Lteleport ⊕ Lsmooth. An attempt to
implement smooth VR navigation (Smooth Locomotion) proved to be successful, allowing the
character to move smoothly using the controller stick. Retaining teleportation provides a fallback
option for movement, which is critical for minimising VR sickness and increasing user comfort.
This result meets the requirements for high-quality VR simulators.</p>
        <p>Interactivity and management of medical objects through the Grabbable Objects functionality
for key medical instruments (tourniquet, scissors) using physical simulation (Simulate Physics =
true) was implemented. It provided a high level of P2 interactivity, necessary for practising
technical skills (e.g., applying a tourniquet), which is the basis for the automatic evaluation of the
effectiveness of P3 actions.</p>
        <p>The standard VR menu has been modified, with function buttons ("Instructions" and "Settings")
added. The integration of 5 training videos (Triage, CPR, Burn, Bleed Stop, Panic) and spatial audio
(siren, explosion) created a comprehensive learning environment. The creation of this complex
confirmed the possibility of implementing an interactive learning environment, which, unlike
traditional simulators, combines practical skills with immediate access to theoretical material
(video instructions). The graphs visualise the time distribution into the main phases of MVP
development (WBS) and the results of the photogrammetry experiment (comparison of input data).
The WBS phases shown are: 1. Research and planning; 2. MVP development; 3. Testing and
improvement; 4. Marketing and promotion; 5. Scaling and partnerships. The photogrammetry chart
compares, per object category, the recommended number of frames with the quantity actually used.</p>
      </sec>
      <sec id="sec-6-2">
        <title>Interpretation of the charts</title>
        <p>The time-distribution schedule for the key phases of MVP development reflects the primary time
costs associated with implementing a minimum viable VR/AR simulator product, as outlined in the
WBS structure. The most extended duration falls in the phase of direct MVP development. The
graph of the photogrammetry experiment illustrates an experimental approach to creating "affected
area" content by deliberately reducing the number of input frames for mobile photogrammetry.</p>
        <p>Despite the decrease in the number of input frames, the resulting five models retained high
polygonality, confirming that the "damaged realism" method does not require additional
high-quality modelling.</p>
      </sec>
    </sec>
    <sec id="sec-7">
      <title>7. Discussion</title>
      <p>The results obtained, as shown in Figures 30-32, confirm the hypothesis that integrating GAI and
mobile modelling methods (photogrammetry) with the Unreal Engine game engine is an effective
and rational method for the rapid and cost-effective development of highly realistic VR/AR
simulators for first aid. The discussion centres on the interpretation of quantitative and qualitative
indicators, their comparison with existing research, and the justification of the project's scientific
novelty. Time Planning (WBS) defined 42 weeks to implement an MVP, which is a competitive
metric for creating an immersive simulator with a high level of detail.</p>
      <p>
        The key optimisation was achieved in the MVP Development phase (16 weeks), where a
synergy was established between the GAI and manual refinement. The traditional development of
simulators of this level of detail often requires much more time for artistic modelling. The use of
GAI (Tripo, Meshy) to generate key assets, such as destroyed buildings and damaged vehicles,
ensured a reduction in the share of manual labour and confirmed the possibility of minimising Srr
and TMVP, as envisaged in modern works on 3D content automation. The successful implementation
of the "Street after a missile strike" scenario, with support for realism and simulation of four types
of victims (from CPR to severe bleeding), lays the foundation for a high scene realism indicator, P1.
It aligns with the recommendations of research in tactical medicine, which emphasises the need to
simulate stressors and critical conditions [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The most significant innovative result is the
validation of the damaged realism method.
      </p>
      <p>The deliberate limitation of photogrammetry input data to 30–50 frames (a decrease of 37.5% to
70% of the recommended amount) resulted in controlled defects in 5 final models. While most
photogrammetry studies (e.g., [3, 10]) focused on maximising accuracy and minimising errors, the
proposed method purposefully uses the shortcomings of the process as a creative tool. It opens up a
new direction for the rapid creation of authentic "hit zone" content for military and crisis
simulators. The implementation of key VR mechanics confirmed the achievement of high P2
interactivity, necessary for practical training. The successful implementation of the combined
navigation approach (Locomotion) Lteleport ⊕ Lsmooth provides a balance between immersion (smooth
movement) and comfort (avoidance of VR sickness). An environment where users can physically
practise skills (tourniquet application) significantly improves the quality of P3 feedback compared
to non-physical simulations. Overall, the results demonstrate
that the developed VR simulator is not only technically functional but also methodologically
innovative, combining modern advances in GAI with the applied requirements of tactical medicine.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusions</title>
      <p>Based on the research and experimental implementation of the prototype VR/AR simulator for first
aid, all tasks have been completed, and the study's goal has been achieved. The developed VR/AR
simulator is an innovative and cost-effective tool capable of providing a high level of interactivity
(P2) and realism for training critical pre-medical skills in conditions as close as possible to those
found in a military environment. The integration of GAI and optimised photogrammetry
minimised the time and cost of creating specialised content, a key factor in scaling the project.
Below is a conclusion on the implementation of each of the tasks:
1. The concept and architecture were successfully formed on the basis of the Business Model
Canvas and the Work Breakdown Structure (WBS), defining the target audience
(military, medics, students) and the key value proposition: risk-free learning with realistic
simulation of critical situations.
2. The use of GAI tools (Tripo, Meshy) confirmed the possibility of rapid generation of unique
3D models (ruins, victims). It provided a high level of visual realism of the scene, necessary
to achieve the target immersion indicator (P1).
3. A VR project was developed with the Unreal Engine 5 VR Template, including creating the
basic scene structure, configuring VRPawn, and defining the navigation area (NavMeshBounds Volume). The
scene "Street after the explosion" was successfully prototyped using simple geometric
shapes.
4. Implemented a combined movement system (Lteleport ⊕ Lsmooth) to ensure comfort and
immersion. It made it possible to achieve the required level of interactivity (P2) while
avoiding VR sickness.
5. The VR interface (UMG) has been modified with the addition of function buttons
("Instructions", "Settings"). The integration of five training videos (Triage, CPR, and bleeding
stop) and spatial audio (sirens, missile hit) has been implemented, creating the basis for
providing feedback (P3) and evaluating actions.
6. The import and migration of GAI content was successfully carried out, and the correct
configuration of physical collisions was implemented. The photogrammetry experiment
confirmed that a conscious reduction in input data (up to 37.5% of the recommended
volume) is an effective creative method for modelling "affected zone" assets.</p>
      <p>The project develops a methodological framework for creating specialised VR training products,
confirming that GAI technologies, game engines, and mobile photogrammetry are effective
methods for achieving a high level of realism and applied value in emergency and military
medicine.</p>
    </sec>
    <sec id="sec-9">
      <title>Declaration on Generative AI</title>
      <sec id="sec-9-1">
        <title>The authors have not employed any Generative AI tools.</title>
        <p>[9] A. Yiğit, Y. Kaya, Augmented reality and photogrammetry based anatomical models in medical
education, SN Computer Science 6 (2025). doi:10.1007/s42979-025-04218-4.
[10] S. Berrezueta-Guzman, A. Koshelev, S. Wagner, From reality to virtual worlds: The role of
photogrammetry in game development, arXiv preprint arXiv:2505.16951 (2025). Access mode:
doi:10.48550/arXiv.2505.16951.
[11] Military-Medicine.com, Immersive technologies for sustainable, scalable military medical
simulation training, URL:
https://military-medicine.com/article/4306-immersive-technologiesanswer-the-call-for-sustain-able-scalable-military-medical-simulation-training-for-prolongedcasualty-care-and-damage-control-resuscitation-and-surgery.html.
[12] A. Berko, V. Vysotska, O. Naum, N. Borovets, S. Chyrun, V. Panasyuk, Big Data Analysis for
Startup of Supporting Ukraine Internet Tourism, in: Proceedings of the 5th International
Conference on Advanced Information and Communication Technologies, AICT '2023, IEEE,
New York, NY, 2023, pp. 164–169. doi:10.1109/AICT61584.2023.10452425.
[13] R. Sun, Y. Wang, Q. Wu, S. Wang, X. Liu, P. Wang, H. Zheng, Effectiveness of virtual and
augmented reality for cardiopulmonary resuscitation training: a systematic review and
meta-analysis, BMC Medical Education 24 (1) (2024). doi:10.1186/s12909-024-05720-8.
[14] R. Trevi, S. Chiappinotto, A. Palese, A. Galazzi, Virtual reality for cardiopulmonary
resuscitation healthcare professionals training: a systematic review, Journal of Medical
Systems 48 (1) (2024). doi:10.1007/s10916-024-02063-1.
[15] J. M. Castillo-Rodríguez, J. L. Gómez-Urquiza, S. García-Oliva, N. Suleiman-Martos,
Effectiveness of virtual and augmented reality for emergency healthcare training: A
randomized controlled trial, Healthcare 13 (9) (2025). doi:10.3390/healthcare13091034.
[16] A. Cheng, N. Fijacko, A. Lockey, R. Greif, C. Abelairas-Gomez, L. Gosak, Use of augmented
and virtual reality in resuscitation training: A systematic review, Resuscitation Plus 18 (2024).
doi:10.1016/j.resplu.2024.100643.
[17] P. L. Ingrassia, G. Mormando, E. Giudici, F. Strada, F. Carfagna, F. Lamberti, A. Bottino,
Augmented reality learning environment for basic life support and defibrillation training:
usability study, Journal of Medical Internet Research 22 (5) (2020). doi:10.2196/14910.
[18] SimX VR, Virtual reality medical simulation, URL: https://www.simxvr.com/.
[19] K. Thompson, Context key for medical trauma training, Study finds,
URL:
https://www.ntsa.org/news-and-archives/2024/7/22/context-key-for-medical-traumatraining-study-finds.
[20] J. Xiang, Z. Lv, S. Xu, Y. Deng, R. Wang, B. Zhang, D. Chen, X. Tong, J. Yang, Structured 3D
latents for scalable and versatile 3D generation, arXiv preprint arXiv:2412.01506 (2025).
doi:10.48550/arXiv.2412.01506.
[21] S. Zhu, Z. Li, Y. Sun, L. Kong, M. Yin, Q. Yong, Y. Gao, A Serious Game for Enhancing Rescue
Reasoning Skills in Tactical Combat Casualty Care: Development and Deployment Study,
JMIR Formative Research 8 (1) (2024). doi:10.2196/50817.
[22] N. Stathakarou, A. A. Kononowicz, E. Mattsson, K. Karlgren, Gamification in the design of
virtual patients for Swedish military medics to support trauma training: interaction analysis
and semistructured interview study, JMIR Serious Games 12 (1) (2024). doi:10.2196/63390.
[23] L. Hou, X. Dong, K. Li, C. Yang, Y. Yu, X. Jin, S. Shang, Effectiveness of a novel augmented
reality cardiopulmonary resuscitation self-training environment for laypeople in China: a
randomized controlled trial, Interdisciplinary Nursing Research 1 (1) (2022) 43–50.
doi:10.1097/NR9.0000000000000010.
[24] Z. Zhao, Z. Lai, Q. Lin, Y. Zhao, H. Liu, S. Yang, C. Guo, Hunyuan3D 2.0: Scaling diffusion
models for high-resolution textured 3D assets generation, arXiv preprint arXiv:2501.12202
(2025). doi:10.48550/arXiv.2501.12202.
[25] Reuters, Tencent expands AI push with open-source 3D generation tools, URL:
https://www.reuters.com/technology/artificial-intelligence/tencent-expands-ai-push-withopen-source-3d-generation-tools-2025-03-18/.</p>
      </sec>
    </sec>
    <sec id="sec-10">
      <title>Appendix</title>
      <p>
R – immersion depth;
I – level of interactivity;
F – quality of feedback;
Akr – the correctness of the performance of critical skills;
Enach – the effectiveness of learning;
Srr – total cost of development;
TMVP – time to market of the product, in particular, the total development time of the MVP (approximately 42 weeks according to the WBS);
Pj – performance indicator according to the j-th criterion, measured during alpha and beta testing of the E3 stage;
P1 = R – realism of the scene and immersion, R ∈ [0, 1], evaluated by the feedback of specialists (alpha testing);
P2 = I – VR/AR interactivity, I ∈ [0, 1] (interaction with objects, use of controllers), evaluated by intuitiveness of control (beta testing);
P3 = F – quality of feedback and evaluation, F ∈ [0, 1], evaluated by the system of automatic evaluation of actions (reaction time, correctness of tourniquet application, error analysis);
P4 = Akr – the accuracy of the algorithm of actions, Akr ∈ [0, 1] (adherence to Triage protocols, CPR), evaluated on the basis of compliance with first aid protocols;
ωj – the weighting factor of the importance of the criterion;
α – weight factor (cost of time);
SGAI – the share of saved costs for 3D modelling/concepts due to the GAI;
δGAI – the share of time saved on 3D modelling/texturing thanks to GAI (e.g., Trellis3D, Meshy, Tripo, Stable Diffusion);
CGAI – costs for licenses, server capacities and AI tools;
Amin – the minimum threshold of accuracy that corresponds to the official protocols of first aid (Ministry of Health, Red Cross);
L – the number of supported languages and cultural adaptations of the content (including English, Chinese, Spanish);
Ast – pupils, students, teachers;
Agr – citizens, volunteers;
Avik – military, doctors, instructors;
Pcrit_dit – a 7-year-old child with respiratory arrest (CPR);
Pcrit_mat – mother with severe bleeding (tourniquet);
Pser_op – a man with burns (shock);
Pleg – minor injuries;
Lteleport – teleportation;
Lsmooth – Smooth Locomotion (smooth movement using analogue controller sticks).
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Stone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guest</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mahoney</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Lamb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Gibson</surname>
          </string-name>
          ,
          <article-title>A “mixed reality” simulator concept for future Medical Emergency Response Team training</article-title>
          ,
          <source>BMJ Military Health</source>
          <volume>163</volume>
          (
          <issue>4</issue>
          ) (
          <year>2017</year>
          )
          <fpage>280</fpage>
          -
          <lpage>287</lpage>
          . doi:
          <volume>10</volume>
          .1136/jramc-2016-000726.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>SimX</surname>
            <given-names>VR</given-names>
          </string-name>
          ,
          <article-title>Virtual reality medical simulation</article-title>
          , URL: https://www.simxvr.com/.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>Laerdal</given-names>
            <surname>Medical</surname>
          </string-name>
          ,
          <article-title>3 benefits of VR simulation training for hospitals</article-title>
          , URL: https://laerdal.com/information/3
          <article-title>-benefits-of-vr-simulation-training-for-hospitals/.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>V.</given-names>
            <surname>Tretyak</surname>
          </string-name>
          , E. Gröller,
          <article-title>TacMedVR: Immersive VR training for tactical medicine - evaluating interaction and stress response</article-title>
          ,
          <source>in: Proceedings of the 11th International Conference on Virtual Reality</source>
          , ICVR '
          <year>2025</year>
          , IEEE, New York, NY,
          <year>2025</year>
          , pp.
          <fpage>345</fpage>
          -
          <lpage>350</lpage>
          . doi:
          <volume>10</volume>
          .1109/ICVR66534.
          <year>2025</year>
          .
          <volume>11172647</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Castillo-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. L.</given-names>
            <surname>Gómez-Urquiza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>García-Oliva</surname>
          </string-name>
          ,
          <string-name>
            <surname>N.</surname>
          </string-name>
          <article-title>Suleiman-Martos, Effectiveness of virtual and augmented reality for emergency healthcare training: A randomized controlled trial</article-title>
          ,
          <source>Healthcare</source>
          <volume>13</volume>
          (
          <issue>9</issue>
          ) (
          <year>2025</year>
          ). doi:
          <volume>10</volume>
          .3390/healthcare13091034.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>XR</given-names>
            <surname>Stager</surname>
          </string-name>
          ,
          <article-title>AI-powered 3D model generation in unreal engine</article-title>
          , URL: https://www.xrstager.com/en/ai-powered
          <string-name>
            <surname>-</surname>
          </string-name>
          3d
          <article-title>-model-generation-in-unreal-engine.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <issue>Alpha3D</issue>
          ,
          <article-title>Creating sellable 3D assets with generative AI: A guide for developers</article-title>
          , URL: https://www.alpha3d.
          <article-title>io/kb/creator-economy-and-community/creating-sellable-3d-assetsgenerative-ai/.</article-title>
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>K. M. Wesencraft</surname>
            ,
            <given-names>J. A.</given-names>
          </string-name>
          <string-name>
            <surname>Clancy</surname>
          </string-name>
          ,
          <article-title>Using photogrammetry to create a realistic 3D anatomy learning aid with Unity game engine</article-title>
          , in P. M.
          <string-name>
            <surname>Rea</surname>
          </string-name>
          (Ed.),
          <source>Biomedical Visualisation</source>
          , volume
          <volume>5</volume>
          , Springer, Cham, Switzerland,
          <year>2019</year>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>104</lpage>
          . doi:
          <volume>10</volume>
          .1007/978-3-
          <fpage>030</fpage>
          -31904-
          <issue>5</issue>
          _
          <fpage>7</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [26]
          <string-name>
            <given-names>E.</given-names>
            <surname>Dubreucq</surname>
          </string-name>
          , S. B.
          <string-name>
            <surname>De La Vega</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          <string-name>
            <surname>Bouaoud</surname>
            ,
            <given-names>A. L.</given-names>
          </string-name>
          <string-name>
            <surname>Philippon</surname>
            ,
            <given-names>P. C.</given-names>
          </string-name>
          <string-name>
            <surname>Thiebaud</surname>
          </string-name>
          ,
          <article-title>Impact of virtual, augmented or mixed reality in basic life support training: A scoping review</article-title>
          ,
          <source>Clinical Simulation in Nursing</source>
          <volume>99</volume>
          (
          <year>2025</year>
          ). doi:
          <volume>10</volume>
          .1016/j.ecns.
          <year>2024</year>
          .
          <volume>101672</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [27]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Smelyakov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Sharonova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Vakulik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Filipov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kotelnykov</surname>
          </string-name>
          ,
          <article-title>Fast color images clustering for real-time computer vision and AI system</article-title>
          ,
          <source>in: Proceedings of the 8th International Conference on Computational Linguistics and Intelligent Systems. Volume I: Machine Learning Workshop</source>
          , MLW-CoLInS '
          <year>2024</year>
          , CEUR Workshop Proceedings, Aachen, Germany,
          <year>2024</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [28]
          <string-name>
            <given-names>A.</given-names>
            <surname>Berko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Naum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Borovets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Panasyuk</surname>
          </string-name>
          ,
          <article-title>Big data analysis for startup of supporting Ukraine internet tourism</article-title>
          ,
          <source>in: Proceedings of the 2023 IEEE 5th International Conference on Advanced Information and Communication Technologies</source>
          , AICT '
          <year>2023</year>
          , IEEE, New York, NY,
          <year>2023</year>
          , pp.
          <fpage>164</fpage>
          -
          <lpage>169</lpage>
          . doi:
          <volume>10</volume>
          .1109/AICT61584.
          <year>2023</year>
          .
          <volume>10452425</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [29]
          <string-name>
            <given-names>L.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tchynetskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ushenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Uhryn</surname>
          </string-name>
          ,
          <article-title>Information technology for sound analysis and recognition in the Metropolis based on machine learning methods</article-title>
          ,
          <source>IJISA</source>
          <volume>16</volume>
          (
          <issue>6</issue>
          ) (
          <year>2024</year>
          )
          <fpage>40</fpage>
          -
          <lpage>72</lpage>
          . doi:10.5815/ijisa.2024.06.03.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [30]
          <string-name>
            <given-names>B.</given-names>
            <surname>Dokhnyak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <article-title>Intelligent smart home system using Amazon Alexa tools</article-title>
          ,
          <source>in: Proceedings of the Modern Machine Learning Technologies and Data Science Workshop</source>
          , MoMLeT&amp;DS '2021, CEUR Workshop Proceedings, Aachen, Germany,
          <year>2021</year>
          , pp.
          <fpage>441</fpage>
          -
          <lpage>464</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [31]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Hu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Mykytyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Nagachevska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hazdiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Uhryn</surname>
          </string-name>
          ,
          <article-title>Development and testing of voice user interfaces based on BERT models for speech recognition in distance learning and smart home systems</article-title>
          ,
          <source>IJCNIS</source>
          <volume>17</volume>
          (
          <issue>3</issue>
          )
          (
          <year>2025</year>
          )
          <fpage>109</fpage>
          -
          <lpage>143</lpage>
          . doi:10.5815/ijcnis.2025.03.07.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [32]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Mykhailyshyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Peleshchak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Peleshchak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kohut</surname>
          </string-name>
          ,
          <article-title>Intelligent system of a smart house</article-title>
          ,
          <source>in: Proceedings of the 2019 3rd International Conference on Advanced Information and Communications Technologies</source>
          , AICT '
          <year>2019</year>
          , IEEE, New York, NY,
          <year>2019</year>
          , pp.
          <fpage>282</fpage>
          -
          <lpage>287</lpage>
          . doi:10.1109/AIACT.2019.8847748.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [33]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          et al.,
          <article-title>A smart home system development</article-title>
          , in:
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. O.</given-names>
            <surname>Medykovskyy</surname>
          </string-name>
          (Eds.),
          <source>Advances in Intelligent Systems and Computing IV</source>
          , Springer, Cham, Switzerland,
          <year>2020</year>
          , pp.
          <fpage>804</fpage>
          -
          <lpage>830</lpage>
          . doi:10.1007/978-3-030-33695-0_54.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [34]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Matseliukh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bublyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <article-title>Development of intelligent system for visual passenger flows simulation of public transport in smart city based on neural network</article-title>
          ,
          <source>in: Proceedings of the 5th International Conference on Computational Linguistics and Intelligent Systems. Volume I: Main Conference</source>
          , COLINS '
          <year>2021</year>
          , CEUR Workshop Proceedings, Aachen, Germany,
          <year>2021</year>
          , pp.
          <fpage>1087</fpage>
          -
          <lpage>1138</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [35]
          <string-name>
            <given-names>I.</given-names>
            <surname>Krislata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Katrenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Burov</surname>
          </string-name>
          ,
          <article-title>Traffic flows system development for smart city</article-title>
          ,
          <source>in: Proceedings of the 1st International Workshop IT Project Management</source>
          , ITPM '
          <year>2020</year>
          , CEUR Workshop Proceedings, Aachen, Germany,
          <year>2020</year>
          , pp.
          <fpage>280</fpage>
          -
          <lpage>294</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>