<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Augmented Reality in Education, May</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Theoretical and practical aspects of using artificial intelligence technologies in the field of sound design</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Oleksandr A. Bobarchuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Svitlana M. Halchenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Serhii O. Hnidenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ivan P. Zavadetskyi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>State Non-Commercial Company “State University “Kyiv Aviation Institute”</institution>
          ,
          <addr-line>1 Liubomyra Huzara Ave., Kyiv, 03058</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>14</volume>
      <issue>2024</issue>
      <fpage>319</fpage>
      <lpage>328</lpage>
      <abstract>
        <p>The theoretical and practical aspects of using artificial intelligence technologies in the field of sound design are considered. An analysis of modern technologies, their capabilities and limitations is conducted, the advantages and risks are examined, and the prospects for development in this field are outlined. The results of the research are aimed at increasing the understanding of the potential of AI in working with sound and determining ways to effectively implement these technologies in the creative process.</p>
      </abstract>
      <kwd-group>
        <kwd>sound design</kwd>
        <kwd>artificial intelligence</kwd>
        <kwd>sound creation for music</kwd>
        <kwd>Suno AI</kwd>
        <kwd>sound plugins</kwd>
        <kwd>visual novels</kwd>
        <kwd>AudioGen</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modern artificial intelligence (AI) technologies are becoming an integral part of many spheres of human
activity, including creative industries [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. One such area where AI demonstrates significant potential
is sound design. This discipline combines art and technology to create sound compositions used in
cinema, video games, advertising, music and other media. The use of AI in sound design opens up new
possibilities for process automation, sound generation and interactive sound accompaniment, changing
traditional approaches to working with sound.
      </p>
      <p>Despite significant interest in the use of artificial intelligence in creative industries, the topic of
AI application in sound design is not yet fully explored in modern scientific literature. Most studies
focus on specific aspects such as sound synthesis, audio signal processing or adaptive sound systems
for interactive environments. However, a holistic analysis of the theoretical foundations, practical
applications, and the impact of these technologies on the industry as a whole remains fragmented.</p>
      <p>Some works highlight the technical aspects, describing the algorithms and methods used to generate
or process sound. Others focus on applied cases, such as the integration of AI in the production of
music or sound effects for cinema and games. Meanwhile, a comprehensive approach that would take
into account both creative and technical challenges, ethical aspects and development prospects is still
lacking.</p>
      <p>This indicates the need for deeper research that would create a general concept of using AI in sound
design. This article attempts to fill this gap by analysing not only existing technologies, but also their
impact on the process of sound creation, as well as outlining future prospects for this field.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Transformation of modern sound design</title>
      <p>
        In the classical sense, sound design is the process of obtaining (generating), editing, and implementing
sound elements (samples) in a multimedia composition [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. It covers a wide range of applications,
including cinema, theatre, video games, advertising, the music industry and even architectural design
of sound environments. The main principles of classical sound design include the following aspects [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]:
• Realism and authenticity. The main principle underlying the classical approach is the creation
of realistic sounds that correspond to the visual or dramatic context.
• Technical skill. Sound designers rely on traditional methods of recording sound using
microphones, field recorders, analogue and digital processing tools.
• Foley art. A special place in classical sound design is occupied by the art of creating sound effects
manually, using real objects and materials to imitate various sounds. Real sounds are recorded
and processed by appropriate means of artistic sound processing, such as reverberation, echo,
chorus, etc., to achieve the desired result.
• Composition and editing. The sound designer combines sounds into a sound composition,
using editing to achieve the desired rhythm, harmony and dramatic impact. That is, a synergistic
combination of sound and dynamic change of images is performed.
      </p>
      <p>The classical approach laid the fundamental principles of sound design, which still remain relevant.
However, changes in the digital landscape pose new challenges. Classical methods of sound design
have their limitations (for example, lack of adaptability, instrumental and technical limitations, time
and resource costs).</p>
      <p>
        The gradual development of digital technologies has also changed the principles and means of sound
design. With the advent of VST (Virtual Studio Technology) and AU (Audio Units), sound designers
gained access to thousands of digital instruments, simulators of analogue and digital synthesisers,
classical musical instruments and effects [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ]. This significantly reduced equipment costs and expanded
the possibilities for experimentation. The development of the gaming industry stimulated the emergence
of adaptive audio systems, where sound changes depending on the player’s actions or environment.
Wwise (figure 1) and FMOD technologies have become the standard in the field of interactive sound
design [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ].
      </p>
      <p>
        Digital sound libraries gradually began to appear – sets of pre-recorded samples that can be quickly
applied to one’s own projects. The use of ready-made sounds was not a new practice in itself – the
1950s and 1960s saw the creation of the first commercial libraries storing sounds of gunshots, natural phenomena,
transport, etc. [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. They were recorded on analogue media (e.g. magnetic tape) and used in cinema and
television.
      </p>
      <p>Modern sound design has reached a level where technologies allow the creation of high-quality sound
for various types of media – from cinema and video games to advertising and virtual reality. Audio
processing tools have become more powerful, and access to large sound libraries, virtual instruments
and modern technical means for recording and processing sound have greatly simplified the process of
creating sound content. But the demand for constant improvement remains unchanged. That is why
the question arises: how to adapt and use the possibilities of artificial intelligence, which continues to
develop comprehensively, in the sound design industry. And how expedient is such use?</p>
    </sec>
    <sec id="sec-3">
      <title>3. Artificial intelligence in sound design: main directions</title>
      <p>
        The use of artificial intelligence for sound generation has become one of the most promising areas in
the field of sound design. At a basic level, this process is based on the ability of algorithms to learn from
large volumes of audio data, analyse them and create new sound textures that can be used in music,
cinema, video games and virtual reality [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        In the early stages of development, artificial intelligence algorithms worked primarily with existing
sounds. They could restore audio, remove noise or imitate the sound character of specific instruments.
However, with the development of machine learning [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] and neural networks [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], AI has become
capable of creating completely new soundscapes that did not exist before. For example, generative
adversarial networks (GANs) allow systems to synthesise sounds that have a natural timbre, and
recurrent neural networks (RNNs) learn to predict the next sound segments, creating a continuous
audio stream.
      </p>
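      <p>The predictive idea behind such models can be illustrated with a deliberately tiny stand-in for an RNN (our own toy sketch, not a real neural network): a two-tap linear predictor fitted by least squares, which then extends a tone by feeding its own predictions back into the input window, producing a continuous audio stream.</p>

```python
import numpy as np

# Toy stand-in for the RNN idea: a two-tap linear predictor is fitted to a
# tone by least squares, then the model extends the signal by feeding its
# own predictions back in, producing a continuous audio stream.
sr = 16000
t = np.arange(2000)
signal = np.sin(2 * np.pi * 440 * t / sr)  # a 440 Hz tone

order = 2  # how many past samples the predictor sees
# Build (past window -> next sample) training pairs.
X = np.array([signal[i:i + order] for i in range(len(signal) - order)])
y = signal[order:]
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)  # fit the predictor

# Continue the signal: predict the next sample, append it, repeat.
window = list(signal[-order:])
generated = []
for _ in range(500):
    nxt = float(np.dot(coeffs, window))
    generated.append(nxt)
    window = window[1:] + [nxt]
generated = np.array(generated)
```

Real generative audio models replace the linear fit with deep networks and operate on much richer representations, but the feedback loop of "predict, append, repeat" is the same.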
      <p>
        One of the most striking examples of AI applications is the creation of sounds for music. Algorithms
analyse thousands of music tracks, extracting patterns and harmonies, and then generate melodies
or rhythms. Programs like AIVA (Artificial Intelligence Virtual Artist) are capable of creating entire
compositions in various genres, providing composers with a foundation for further work [
        <xref ref-type="bibr" rid="ref12">12</xref>
        ]. In the
field of electronic music, AI is often used to create unique samples or synthetic textures that can be
integrated into compositions.
      </p>
      <p>
        At the beginning of 2024, the Suno AI network gained a high level of popularity [
        <xref ref-type="bibr" rid="ref13">13</xref>
        ]. Suno AI is an
innovative platform that uses artificial intelligence to generate music based on text prompts. The user
enters a description of the desired song, specifying the style, genre or theme, and the system creates a
corresponding composition. This process takes about two minutes, after which the user receives two
versions of the track: one with vocals, the other instrumental.
      </p>
      <p>Suno AI technology is based on artificial intelligence models such as Bark and Chirp, which are
capable of generating not only instrumental music but also adding vocal parts to songs. The algorithm
analyses the entered text, determines its rhythmic and semantic features, and then synthesises a melody
and harmony that match the given description. The vocals are synthesised taking into account the
rhythm and intonation of the text, giving the song a natural sound.</p>
      <p>
        The system uses an approach similar to large language models, such as ChatGPT [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]: it splits the
text into individual segments (tokens), studies millions of usage variants, styles and structures, and
then reconstructs them on request. However, creating audio, especially music, is a more complex task,
as it requires taking into account many parameters such as melody, harmony, rhythm and timbre.
      </p>
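      <p>To make the token idea concrete, here is a hypothetical sketch using mu-law quantisation, a classic trick from WaveNet-style models (not Suno AI's actual codec, which is proprietary): audio samples are mapped into a vocabulary of 256 discrete ids that a sequence model could then predict like words.</p>

```python
import numpy as np

# Illustrative tokenisation only (not Suno AI's real codec): mu-law
# quantisation turns continuous samples into 256 discrete "tokens".
def mu_law_encode(x, channels=256):
    mu = channels - 1
    y = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    return ((y + 1) / 2 * mu).astype(np.int64)  # token ids 0..255

def mu_law_decode(tokens, channels=256):
    mu = channels - 1
    y = tokens.astype(np.float64) / mu * 2 - 1
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(mu)) / mu

t = np.linspace(0, 1, 8000)
audio = 0.5 * np.sin(2 * np.pi * 220 * t)
tokens = mu_law_encode(audio)      # the discrete sequence a model would see
restored = mu_law_decode(tokens)   # what a decoder would render back
```

Modern systems use learned neural codecs rather than mu-law, but the principle is identical: compress audio into a discrete vocabulary, model the token sequence, then decode back to sound.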
      <p>Of course, the tracks generated by both Suno AI and other platforms and models have noticeable
subjective flaws, most often manifested as distortions in vocal parts, abrupt changes
in volume, or outright misinterpretation or neglect of the prompt. Artificial intelligence works much
more accurately with small sounds. Classic elements of sound design are various sound transitions, for
example, a gradually increasing sound (rise), a sound of impact (hit), a sound of cutting air (whoosh),
etc.</p>
      <p>Artificial intelligence is able to generate and process such short sound effects with high accuracy due
to its ability to analyse thousands of samples and extract key sound characteristics. These effects have a
clear structure and predictable dynamics, making them ideal material for algorithms. AI models
can create variations of hits, rises or noises based on text descriptions or user settings, providing precise
adjustment of the duration, frequency spectrum and amplitude of each sound.</p>
      <p>In addition, thanks to machine learning technologies, artificial intelligence can automatically select
sounds for different scenes, creating smooth transitions and adapting them to the visual content. For
example, AI can generate a whoosh sound of varying intensity depending on the speed of an object in
the frame or synchronise impact effects with moments of climax.</p>
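      <p>A minimal sketch of such parameter-driven generation follows, assuming a hypothetical object speed as the control input (this is our own toy DSP, not any product's algorithm): filtered noise under an envelope, whose duration and loudness are driven by the speed value.</p>

```python
import numpy as np

# Toy "whoosh" generator: filtered noise under a swell-and-fade envelope,
# with duration and loudness driven by a hypothetical object speed.
def whoosh(speed, sr=16000):
    rng = np.random.default_rng(42)
    dur = max(0.2, 1.0 / speed)          # faster object -> shorter whoosh
    n = int(sr * dur)
    noise = rng.standard_normal(n)
    # Simple moving-average low-pass to soften the noise spectrum.
    kernel = np.ones(64) / 64
    noise = np.convolve(noise, kernel, mode="same")
    env = np.sin(np.pi * np.linspace(0, 1, n)) ** 2  # swell up, then fade
    gain = min(1.0, 0.3 + 0.1 * speed)   # faster -> louder, capped at 1.0
    return gain * env * noise / np.max(np.abs(noise))

slow = whoosh(speed=1.0)   # long, quiet pass-by
fast = whoosh(speed=8.0)   # short, loud pass-by
```

A game engine could call such a generator with the object's actual on-screen velocity, which is exactly the kind of mapping adaptive audio middleware automates.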
      <p>There are already services that provide the ability to create cinematic sounds with a text prompt. But
such sounds are only a small part of sound design. For us, sound design is primarily a complex sound
landscape, an immersive global environment. Is artificial intelligence capable of forming something like
this?</p>
      <p>Immersive environment sound design requires not only layering sounds on top of each other, but
also fine-tuning spatial acoustics, dynamics and the emotional content of each layer. In the real world,
sounds interact with each other in unpredictable ways – echoes in space, gradual fading or swelling, the
influence of textures of materials and objects that create or reflect sound. It is difficult for algorithms to
reproduce this chaos and versatility of the sound environment in the same way as human hearing and
perception.</p>
      <p>Currently, artificial intelligence does an excellent job of reconstructing real environments through
recordings and spatial analysis, but creating completely fictional sound landscapes that have no
analogues in reality requires creative intuition. A human sound designer works not only with sounds as
such, but with a concept – they create a story through sound, using audio as a tool to evoke emotions
and build atmosphere.</p>
      <p>However, there are also positive aspects. Artificial intelligence algorithms are becoming increasingly
effective in creating procedural sound landscapes. They are able to analyse visual sequences or text
descriptions and generate corresponding sound environments, automatically adding necessary elements:
the sound of wind, raindrops, city bustle or any other simple ambient.</p>
      <p>Artificial intelligence cannot fully construct a multi-layered sound environment. But it can be used as
a tool that provides a certain foundation to work with. For example, it is possible to create simple patterns
of classical instruments in a given key and rhythm, perform their gradual processing using classical
means in any sound editing environment, mix the tracks, supplement them with various generated
sounds, and further integrate the created composition into complex multimedia environments.</p>
      <p>Another equally important aspect is the integration of artificial intelligence technologies into various
plugins for working with sound. A wide variety of such tools for different tasks has appeared
recently. AI assistants are used to perform general mastering (such as the
built-in assistant in iZotope Ozone 10/11) and individual tasks such as compression, limiting, saturation,
equalisation, etc. AI plugins are trained on large volumes of audio data. Developers train the neural
network on various recordings manually processed by professional sound engineers. The model
analyses how classical tools for saturation, compression or limiting work, and learns patterns of effect
application depending on the type of sound, genre or processing style.</p>
      <p>When the user loads the plugin, the algorithm performs a multivariate analysis of the audio signal –
analyses the frequency spectrum, dynamics, harmonics and noise level. AI models are able to detect
problem areas or potentially weak zones and suggest processing parameters. Practical experience shows
that these parameters are often not optimal, but can be used as a basis for further work with sound.</p>
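      <p>The analysis step described above can be sketched as follows; the measurements (spectral centroid, crest factor) are standard audio metrics, but the suggestion heuristics are purely our illustration, not the logic of any commercial plugin.</p>

```python
import numpy as np

# Hedged sketch of the analysis stage: measure a signal's spectrum and
# dynamics, then propose starting-point settings. The thresholds below are
# illustrative only, not those of any real assistant.
def analyse(audio, sr=44100):
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), 1 / sr)
    centroid = float(np.sum(freqs * spectrum) / np.sum(spectrum))
    peak = float(np.max(np.abs(audio)))
    rms = float(np.sqrt(np.mean(audio ** 2)))
    crest_db = 20 * np.log10(peak / rms)  # dynamics: peak-to-RMS ratio
    suggestion = {
        # Hypothetical heuristic: place the threshold between RMS and peak.
        "comp_threshold_db": round(20 * np.log10(rms) + crest_db / 2, 1),
        "high_shelf_cut": centroid > 4000,  # tame overly bright material
    }
    return crest_db, suggestion

# A spiky test signal: a quiet tone with occasional loud clicks.
t = np.linspace(0, 1, 44100)
audio = 0.1 * np.sin(2 * np.pi * 200 * t)
audio[::4410] = 0.9
crest, params = analyse(audio)
```

As the text notes, such machine-suggested parameters are a starting point for the engineer, not a finished mix decision.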
      <p>
        Much more interesting from the point of view of sound design are plugins such as Synplant 2. The
Genopatch technology (figure 2) built into the plugin allows generating a variety of new sounds based
on a single loaded sample [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]. The unique capabilities and interface of Synplant 2 promote experiments
in the field of intuitive sound design, allowing to explore how non-standard methods of interaction
with technology can influence the creative process.
      </p>
      <p>Considering all of the above, we can note that artificial intelligence is already transforming the field
of sound design today, opening up new possibilities for creativity and automation. Despite significant
achievements, AI technologies in sound design face certain significant challenges: limited emotional
depth of generated sounds, complexity of creating complex immersive environments, and various
technical defects that can distort the perception of the overall picture. However, these limitations
stimulate the development of the industry and create space for improving algorithms, integrating new
approaches and synergy with human creative ideas. AI does not replace sound engineers, but becomes
a powerful tool that helps accelerate the workflow and expand creative horizons.</p>
      <p>Now, let’s demonstrate the possibilities and ways of applying artificial intelligence technologies in
sound design through practical experience.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Practical aspects of using artificial intelligence technologies in sound design</title>
      <p>In this part, as an example of the possibilities of using AI in sound design, we will form a simple sound
design for several scenes that are planned to be used in a visual novel. Visual novels (VN) are a genre of
interactive games where the main emphasis is on the plot and characters, and sound plays an important
role in creating an emotional response. The scenes themselves were created using DALL·E 3 and refined
in Adobe Photoshop (figure 3).</p>
      <p>To begin with, we break down the scenes into components and determine the overall mood and
which sounds we need (figure 4):
• Dark ambient;
• Low-frequency noise;
• Wind noise;
• Melancholic classical instruments;
• Humming of wires;
• Cracking of branches.</p>
      <sec id="sec-4-6">
        <p>Now let’s determine the necessary artificial intelligence tools. For forming the general landscape, the
aforementioned Suno AI from the previous section will work. Using the prompt “Violin and piano,
melancholic style, slow tempo, dark ambient”, we generate two compositions for download. We
transfer the downloaded result to the FL Studio environment and perform the following sequential
processing: slowing down, equalisation, and adding reverb and echo effects using the Crystallizer granular
echo from the developer SoundToys (figure 5).</p>
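        <p>The same chain can be approximated outside a DAW; the following NumPy fragment is a toy substitute for the slow-down and echo stages (our own sketch, not the FL Studio or SoundToys processing actually used in the project).</p>

```python
import numpy as np

# Minimal stand-ins for the chain described above: a slow-down by naive
# resampling, then a multi-tap echo. Toy DSP only.
def slow_down(audio, factor=1.5):
    # Stretch by linear interpolation (this also lowers the pitch).
    idx = np.arange(0, len(audio) - 1, 1 / factor)
    lo = idx.astype(int)
    frac = idx - lo
    return audio[lo] * (1 - frac) + audio[lo + 1] * frac

def echo(audio, sr=16000, delay_s=0.25, feedback=0.4, taps=3):
    out = np.copy(audio)
    d = int(sr * delay_s)
    for k in range(1, taps + 1):  # add progressively quieter repeats
        shifted = np.zeros_like(audio)
        shifted[k * d:] = audio[:len(audio) - k * d] * feedback ** k
        out += shifted
    return out

sr = 16000
t = np.arange(sr) / sr
track = np.sin(2 * np.pi * 330 * t) * np.exp(-3 * t)  # decaying test tone
processed = echo(slow_down(track), sr=sr)
```

Granular processors such as the one used in the project go further, resynthesising the sound from many small overlapping grains, but the slow-down-then-space-out principle is the same.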
        <p>It is worth noting that the recording generated in Suno AI without further processing would not fully
correspond to the general concept of sound design for this project. As already mentioned above, Suno
AI often does not interpret prompts very accurately, which leads to problems with generating music in
less well-known genres. However, artistic processing tools make it possible to significantly change and improve
the nature of the input sound and adapt it to the needs of the project.</p>
        <p>Unlike Suno AI, the AudioGen model handled its prompt more accurately, generating the distant
humming of electric wires and the cracking of branches from a rather short request (figure 6).</p>
        <p>To create variations of the sample generated using AudioGen, we will use Synplant 2 and
its Genopatch technology. We load the sample into the plugin environment, after which Synplant 2
automatically generates new sound samples based on the provided one (figure 7).</p>
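        <p>As a loose analogy to this behaviour (Genopatch's real method is proprietary and far more sophisticated), one can estimate a loaded sample's dominant pitch and then "mutate" synthesis parameters around it to breed a family of related sounds.</p>

```python
import numpy as np

# Loose analogy only, not Genopatch's actual algorithm: estimate a sample's
# dominant pitch, then mutate pitch and envelope to breed related variants.
rng = np.random.default_rng(7)
sr = 16000

t = np.arange(sr) / sr
sample = np.sin(2 * np.pi * 440 * t) * np.exp(-4 * t)  # the "loaded" sample

spectrum = np.abs(np.fft.rfft(sample))
base_freq = float(np.fft.rfftfreq(len(sample), 1 / sr)[np.argmax(spectrum)])

variants = []
for _ in range(5):
    detune = rng.uniform(0.9, 1.1)   # mutate pitch by up to 10 percent
    decay = rng.uniform(2.0, 8.0)    # mutate envelope speed
    v = np.sin(2 * np.pi * base_freq * detune * t) * np.exp(-decay * t)
    variants.append(v)
```

Each run yields a small population of sounds related to, but distinct from, the source sample, which is conceptually what makes such tools useful for quickly exploring variations.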
        <p>After combining all the generated sounds, we obtain a simple but, subjectively, quite
high-quality piece of sound design for the visual novel scenes. Thus, practical experience confirmed
the feasibility of using artificial intelligence technologies as a tool for quickly obtaining the necessary
sound samples for further processing and combination into a coherent composition.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The article provides a thorough analysis of the theoretical and practical aspects of using artificial
intelligence technologies in the field of sound design. The current state of the industry is highlighted,
considering its technical capabilities and creative challenges. A study of classical and modern tools for
forming sound environments is conducted.</p>
      <p>It has been shown that artificial intelligence significantly changes the traditional approach to sound
creation, providing process automation, time savings, and expanded possibilities for experimentation.
For the first time, a detailed analysis of the main trends and directions of the impact of artificial
intelligence on sound design is presented. Special attention is paid to tools such as Suno AI, AudioGen,
Synplant 2, which demonstrate significant potential for generating sound textures and integration into
creative projects.</p>
      <p>The practical aspect of the research is based on the example of creating a sound accompaniment
for visual novels, where artificial intelligence was used to generate musical compositions and sound
effects. These materials, after further processing, can become the basis for high-quality, full-fledged
sound design. It is important to emphasise that although AI tools provide speed and adaptability in
working with sound, their results often require refinement to match creative ideas.</p>
      <p>In this article, for the first time, an integrated approach to the use of various artificial intelligence
tools for creating sound design is proposed. This approach takes into account both technical capabilities
and creative needs. The study outlines the advantages of modern AI algorithms, such as efficiency
in creating short sound effects, as well as their limitations, including difficulties in forming complex
immersive sound landscapes.</p>
      <p>Also, for the first time, the article presents a methodology for selecting artificial intelligence tools for
specific tasks in the context of sound design for multimedia projects. For example, it is determined that
tools such as Suno AI are appropriate for creating music and musical effects, AudioGen for generating
sounds of certain environments, and Synplant 2 for editing sounds. This methodology is formed on the
basis of practical work with these tools and subjective evaluation of the generation results.</p>
      <p>The overall results and prospects for further development of this problem can be defined as follows:
• A study of the main directions of using artificial intelligence in sound creation has been conducted,
including the generation of musical compositions, short sound effects, and procedural sound
landscapes. It is shown that tools like Suno AI, AudioGen, and Synplant 2 are able to effectively
perform sound generation and processing tasks, which greatly simplifies the complex process of
creating sound design;
• The article presents an example of creating sound design for visual novels, which illustrates the
capabilities of AI for quickly obtaining basic sound textures. It is shown that artificial intelligence
can be used to automate sound creation with further processing and refinement, which allows
achieving high-quality final results;
• An integration approach is proposed, which consists in using various artificial intelligence tools
for different tasks that may include short musical compositions, simple sound landscapes, and
short sounds. Subjective evaluation of the quality of the created samples shows that they are
quite suitable for use in various multimedia projects.</p>
      <p>The practical significance of the obtained results lies in increasing the efficiency of sound design
creation processes through the integration of artificial intelligence technologies. In particular, the
proposed methods allow automating routine tasks, such as generating basic sound textures and creating
simple sound or musical effects. This reduces the time and resources required for work and allows
designers to focus on the creative aspects of projects.</p>
      <p>Prospects for further research are primarily related to improving algorithms for creating immersive
sound environments, deepening the synergy of AI and human creativity, and more active integration
of generated sounds into multimedia projects. These prospects demonstrate the potential for further
transformation of the sound design industry, expanding the capabilities of creative professionals and
stimulating the development of innovations in the use of artificial intelligence.</p>
      <p>The study confirmed the practical value of artificial intelligence in transforming sound design,
expanding the toolkit for creating sound compositions and opening up new horizons in creative
industries.</p>
      <p>Declaration on Generative AI: The authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>M. V.</given-names>
            <surname>Marienko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. O.</given-names>
            <surname>Semerikov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O. M.</given-names>
            <surname>Markova</surname>
          </string-name>
          ,
          <article-title>Artificial intelligence literacy in secondary education: methodological approaches and challenges</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>3679</volume>
          (
          <year>2024</year>
          )
          <fpage>87</fpage>
          -
          <lpage>97</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>I.</given-names>
            <surname>Mintii</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Semerikov</surname>
          </string-name>
          ,
          <article-title>Optimizing Teacher Training and Retraining for the Age of AI-Powered Personalized Learning: A Bibliometric Analysis</article-title>
          , in: E. Faure,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tryus</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vartiainen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Danchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bondarenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Bazilo</surname>
          </string-name>
          , G. Zaspa (Eds.),
          <source>Information Technology for Education, Science, and Technics</source>
          , volume
          <volume>222</volume>
          <source>of Lecture Notes on Data Engineering and Communications Technologies</source>
          , Springer Nature Switzerland, Cham,
          <year>2024</year>
          , pp.
          <fpage>339</fpage>
          -
          <lpage>357</lpage>
          . doi:10.1007/978-3-031-71804-5_23.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Zizza</surname>
          </string-name>
          , Sound Design, in: Game Audio Fundamentals:
          <article-title>An Introduction to the Theory, Planning, and Practice of Soundscape Creation for Games</article-title>
          , Focal Press, London,
          <year>2023</year>
          , pp.
          <fpage>142</fpage>
          -
          <lpage>163</lpage>
          . doi:10.4324/9781003218821-11.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>E.</given-names>
            <surname>Miranda</surname>
          </string-name>
          , Computer sound synthesis fundamentals, in:
          <source>Computer Sound Design: Synthesis techniques and programming</source>
          , 2nd ed., Routledge, New York,
          <year>2012</year>
          , pp.
          <fpage>19</fpage>
          -
          <lpage>36</lpage>
          . doi:10.4324/9780080490755-7.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rosiński</surname>
          </string-name>
          ,
          <article-title>The Use of Virtual Musical Instruments in Timbre Recognition Training</article-title>
          ,
          <source>International Journal of Learning and Teaching</source>
          <volume>9</volume>
          (
          <year>2023</year>
          )
          <fpage>256</fpage>
          -
          <lpage>260</lpage>
          . doi:10.18178/ijlt.9.3.256-260.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6] <string-name><given-names>T.</given-names> <surname>Suzuki</surname></string-name>, <string-name><given-names>A.</given-names> <surname>Nakabayashi</surname></string-name>, <article-title>Virtual Studio</article-title>, <source>The Journal of The Institute of Image Information and Television Engineers</source> <volume>61</volume> (<year>2007</year>) <fpage>657</fpage>-<lpage>659</lpage>. doi:10.3169/itej.61.657.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7] <string-name><given-names>A.</given-names> <surname>Zecevic</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Durity</surname></string-name>, <source>Handbook of Game Audio Using Wwise</source>, Taylor &amp; Francis Group, <year>2020</year>.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8] <string-name><given-names>M.</given-names> <surname>Katz</surname></string-name>, <article-title>Music in 1s and 0s: The Art and Politics of Digital Sampling</article-title>, in: <source>Capturing Sound: How Technology has Changed Music</source>, University of California Press, Berkeley, <year>2004</year>, pp. <fpage>137</fpage>-<lpage>157</lpage>. URL: https://ia600409.us.archive.org/29/items/mat-bib_201710/Capturing-sound-how-technology-has-changed-music.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9] <string-name><given-names>K.</given-names> <surname>Saraf</surname></string-name>, <string-name><given-names>M. D.</given-names> <surname>Amritphale</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Akhand</surname></string-name>, <string-name><given-names>K.</given-names> <surname>Vijayvargiya</surname></string-name>, <article-title>Music AI</article-title>, <source>International Research Journal of Modernization in Engineering Technology and Science</source> <volume>6</volume> (<year>2024</year>) <fpage>11174</fpage>-<lpage>11177</lpage>. doi:10.56726/irjmets54679.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10] <string-name><given-names>P. V.</given-names> <surname>Zahorodko</surname></string-name>, <string-name><given-names>S. O.</given-names> <surname>Semerikov</surname></string-name>, <string-name><given-names>V. N.</given-names> <surname>Soloviev</surname></string-name>, <string-name><given-names>A. M.</given-names> <surname>Striuk</surname></string-name>, <string-name><given-names>M. I.</given-names> <surname>Striuk</surname></string-name>, <string-name><given-names>H. M.</given-names> <surname>Shalatska</surname></string-name>, <article-title>Comparisons of performance between quantum-enhanced and classical machine learning algorithms on the IBM Quantum Experience</article-title>, <source>Journal of Physics: Conference Series</source> <volume>1840</volume> (<year>2021</year>) 012021. doi:10.1088/1742-6596/1840/1/012021.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11] <string-name><given-names>S.</given-names> <surname>Semerikov</surname></string-name>, <string-name><given-names>H.</given-names> <surname>Kucherova</surname></string-name>, <string-name><given-names>V.</given-names> <surname>Los</surname></string-name>, <string-name><given-names>D.</given-names> <surname>Ocheretin</surname></string-name>, <article-title>Neural network analytics and forecasting the country's business climate in conditions of the coronavirus disease (COVID-19)</article-title>, <source>CEUR Workshop Proceedings</source> <volume>2845</volume> (<year>2021</year>) <fpage>22</fpage>-<lpage>32</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12] Aiva Technologies SARL, <source>AIVA, the AI Music Generation Assistant</source>, <year>2025</year>. URL: https://www.aiva.ai/.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13] Suno, Inc., <source>Suno</source>, <year>2025</year>. URL: https://suno.com/.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14] <string-name><given-names>R.</given-names> <surname>Liashenko</surname></string-name>, <string-name><given-names>S.</given-names> <surname>Semerikov</surname></string-name>, <article-title>The Determination and Visualisation of Key Concepts Related to the Training of Chatbots</article-title>, in: <string-name><given-names>E.</given-names> <surname>Faure</surname></string-name>, <string-name><given-names>Y.</given-names> <surname>Tryus</surname></string-name>, <string-name><given-names>T.</given-names> <surname>Vartiainen</surname></string-name>, <string-name><given-names>O.</given-names> <surname>Danchenko</surname></string-name>, <string-name><given-names>M.</given-names> <surname>Bondarenko</surname></string-name>, <string-name><given-names>C.</given-names> <surname>Bazilo</surname></string-name>, <string-name><given-names>G.</given-names> <surname>Zaspa</surname></string-name> (Eds.), <source>Information Technology for Education, Science, and Technics</source>, volume <volume>222</volume> of <source>Lecture Notes on Data Engineering and Communications Technologies</source>, Springer Nature Switzerland, Cham, <year>2024</year>, pp. <fpage>111</fpage>-<lpage>126</lpage>. doi:10.1007/978-3-031-71804-5_8.
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15] NuEdge Development, <source>Sonic Charge - Synplant</source>, <year>2025</year>. URL: https://soniccharge.com/synplant.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>