<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Theoretical and practical aspects of using artificial intelligence technologies in the field of sound design</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Oleksandr</forename><forename type="middle">A</forename><surname>Bobarchuk</surname></persName>
							<email>a.bobarchuk@interactiveklass.com</email>
							<affiliation key="aff0">
								<orgName type="institution">State Non-Commercial Company &quot;State University &quot;Kyiv Aviation Institute&quot;</orgName>
								<address>
									<addrLine>1 Liubomyra Huzara Ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Svitlana</forename><forename type="middle">M</forename><surname>Halchenko</surname></persName>
							<email>smgalchenko@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">State Non-Commercial Company &quot;State University &quot;Kyiv Aviation Institute&quot;</orgName>
								<address>
									<addrLine>1 Liubomyra Huzara Ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Serhii</forename><forename type="middle">O</forename><surname>Hnidenko</surname></persName>
							<email>serhii.hnidenko@npp.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">State Non-Commercial Company &quot;State University &quot;Kyiv Aviation Institute&quot;</orgName>
								<address>
									<addrLine>1 Liubomyra Huzara Ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ivan</forename><forename type="middle">P</forename><surname>Zavadetskyi</surname></persName>
							<email>ivan.zavadetskyi@npp.nau.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">State Non-Commercial Company &quot;State University &quot;Kyiv Aviation Institute&quot;</orgName>
								<address>
									<addrLine>1 Liubomyra Huzara Ave</addrLine>
									<postCode>03058</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Theoretical and practical aspects of using artificial intelligence technologies in the field of sound design</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FC60E6E698130E5EA8AC1BB86687AD23</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:23+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>sound design, artificial intelligence, sound creation for music, Suno AI, sound plugins, visual novels, AudioGen</term>
					<term>0000-0003-3176-7231 (O. A. Bobarchuk)</term>
					<term>0000-0003-0531-1572 (S. M. Halchenko)</term>
					<term>0009-0002-3215-8577 (S. O. Hnidenko)</term>
					<term>0000-0002-6854-3971 (I. P. Zavadetskyi)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The theoretical and practical aspects of using artificial intelligence technologies in the field of sound design are considered. An analysis of modern technologies, their capabilities and limitations is conducted, the advantages and risks are examined, and the prospects for development in this field are outlined. The results of the research are aimed at increasing the understanding of the potential of AI in working with sound and determining ways to effectively implement these technologies in the creative process.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Modern artificial intelligence (AI) technologies are becoming an integral part of many spheres of human activity, including creative industries <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. One such area where AI demonstrates significant potential is sound design. This discipline combines art and technology to create sound compositions used in cinema, video games, advertising, music and other media. The use of AI in sound design opens up new possibilities for process automation, sound generation and interactive sound accompaniment, changing traditional approaches to working with sound.</p><p>Despite significant interest in the use of artificial intelligence in creative industries, the topic of AI application in sound design is not yet fully explored in modern scientific literature. Most studies focus on specific aspects such as sound synthesis, audio signal processing or adaptive sound systems for interactive environments. However, a holistic analysis of the theoretical foundations, practical applications, and the impact of these technologies on the industry as a whole remains fragmented.</p><p>Some works highlight the technical aspects, describing the algorithms and methods used to generate or process sound. Others focus on applied cases, such as the integration of AI in the production of music or sound effects for cinema and games. Meanwhile, a comprehensive approach that would take into account both creative and technical challenges, ethical aspects and development prospects is still lacking.</p><p>This indicates the need for deeper research that would create a general concept of using AI in sound design. This article attempts to fill this gap by analysing not only existing technologies, but also their impact on the process of sound creation, as well as outlining future prospects for this field.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Transformation of modern sound design</head><p>In the classical sense, sound design is the process of obtaining (generating), editing, and implementing sound elements (samples) in a multimedia composition <ref type="bibr" target="#b2">[3]</ref>. It covers a wide range of applications, including cinema, theatre, video games, advertising, the music industry and even the architectural design of sound environments. The main principles of classical sound design include the following aspects <ref type="bibr" target="#b3">[4]</ref>:</p><p>• Realism and authenticity. The main principle underlying the classical approach is the creation of realistic sounds that correspond to the visual or dramatic context. • Technical skill. Sound designers rely on traditional methods of recording sound using microphones, field recorders, and analogue and digital processing tools. • Foley art. A special place in classical sound design is occupied by the art of creating sound effects manually, using real objects and materials to imitate various sounds. The recorded sounds are then shaped with artistic processing such as reverberation, echo and chorus to achieve the desired result.</p><p>• Composition and editing. The sound designer combines sounds into a sound composition, using editing to achieve the desired rhythm, harmony and dramatic impact. In other words, the sound is combined synergistically with the dynamic changes of the image.</p><p>The classical approach laid down the fundamental principles of sound design, which remain relevant today. However, changes in the digital landscape pose new challenges, and classical methods have their limitations (for example, lack of adaptability, instrumental and technical constraints, and time and resource costs).</p><p>The gradual development of digital technologies has also changed the principles and means of sound design. 
With the advent of VST (Virtual Studio Technology) and AU (Audio Units), sound designers gained access to thousands of digital instruments, simulators of analogue and digital synthesisers, classical musical instruments and effects <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>. This significantly reduced equipment costs and expanded the possibilities for experimentation. The development of the gaming industry stimulated the emergence of adaptive audio systems, where sound changes depending on the player's actions or environment. The Wwise (figure <ref type="figure" target="#fig_0">1</ref>) and FMOD technologies have become the standard in interactive sound design <ref type="bibr" target="#b6">[7]</ref>.</p><p>Digital sound libraries gradually began to appear: sets of pre-recorded samples that can be quickly applied to one's own projects. The use of ready-made sounds was not a new practice in itself; the 1950s-60s saw the creation of the first commercial libraries storing sounds of gunshots, natural phenomena, transport, etc. <ref type="bibr" target="#b7">[8]</ref>. These were recorded on analogue media (e.g. magnetic tape) and used in cinema and television.</p><p>Modern sound design has reached a level where technology allows the creation of high-quality sound for all types of media, from cinema and video games to advertising and virtual reality. Audio processing tools have become more powerful, and access to large sound libraries, virtual instruments and modern means of recording and processing sound has greatly simplified the creation of sound content. But the demand for constant improvement remains. Hence the question: how can the capabilities of artificial intelligence, which continues to develop rapidly, be adapted and used in the sound design industry, and how expedient is such use?</p></div>
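The echo and reverberation processing referred to in this section is, at its core, a simple signal-processing operation. As a minimal, hedged illustration (not tied to any particular VST product; the function name, parameters and test signal below are our own choices), a feedback delay can be sketched in a few lines:

```python
import numpy as np

def feedback_delay(signal, sample_rate, delay_s=0.25, feedback=0.5, mix=0.5):
    """Apply a simple feedback delay (echo) to a mono float signal."""
    delay_samples = int(delay_s * sample_rate)
    out = np.copy(signal)
    # Each output sample also receives an attenuated copy of the
    # output from `delay_samples` earlier, producing repeating echoes.
    for i in range(delay_samples, len(out)):
        out[i] += feedback * out[i - delay_samples]
    return (1 - mix) * signal + mix * out

sr = 8000
t = np.linspace(0, 1.0, sr, endpoint=False)
dry = np.sin(2 * np.pi * 440 * t) * np.exp(-8 * t)  # decaying 440 Hz "pluck"
wet = feedback_delay(dry, sr, delay_s=0.2, feedback=0.6)
```

With feedback below 1.0 the echo tail decays geometrically; values at or above 1.0 would make the loop unstable.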
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Artificial intelligence in sound design: main directions</head><p>The use of artificial intelligence for sound generation has become one of the most promising areas in the field of sound design. At a basic level, this process is based on the ability of algorithms to learn from large volumes of audio data, analyse them and create new sound textures that can be used in music, cinema, video games and virtual reality <ref type="bibr" target="#b8">[9]</ref>.</p><p>In the early stages of development, artificial intelligence algorithms worked primarily with existing sounds. They could restore audio, remove noise or imitate the sound character of specific instruments. However, with the development of machine learning <ref type="bibr" target="#b9">[10]</ref> and neural networks <ref type="bibr" target="#b10">[11]</ref>, AI has become capable of creating completely new soundscapes that did not exist before. For example, generative adversarial networks (GANs) allow systems to synthesise sounds that have a natural timbre, and recurrent neural networks (RNNs) learn to predict the next sound segments, creating a continuous audio stream.</p><p>One of the most striking examples of AI applications is the creation of sounds for music. Algorithms analyse thousands of music tracks, extracting patterns and harmonies, and then generate melodies or rhythms. Programs like AIVA (Artificial Intelligence Virtual Artist) are capable of creating entire compositions in various genres, providing composers with a foundation for further work <ref type="bibr" target="#b11">[12]</ref>. In the field of electronic music, AI is often used to create unique samples or synthetic textures that can be integrated into compositions.</p><p>At the beginning of 2024, the Suno AI network gained a high level of popularity <ref type="bibr" target="#b12">[13]</ref>. 
Suno AI is a platform that uses artificial intelligence to generate music from text prompts. The user enters a description of the desired song, specifying the style, genre or theme, and the system creates a corresponding composition. The process takes about two minutes, after which the user receives two versions of the track: one with vocals, the other instrumental.</p><p>Suno AI is built on artificial intelligence models such as Bark and Chirp, which are capable not only of generating instrumental music but also of adding vocal parts to songs. The algorithm analyses the entered text, determines its rhythmic and semantic features, and then synthesises a melody and harmony that match the given description. The vocals are synthesised with the rhythm and intonation of the text taken into account, giving the song a natural sound.</p><p>The system uses an approach similar to that of large language models such as ChatGPT <ref type="bibr" target="#b13">[14]</ref>: it splits the text into individual segments (tokens), studies millions of usage variants, styles and structures, and then reconstructs them on request. However, creating audio, especially music, is a more complex task, as it must account for many parameters such as melody, harmony, rhythm and timbre.</p><p>Of course, the tracks generated by Suno AI and other platforms and models have noticeable subjective flaws, most often distortions in the vocal parts, abrupt changes in volume, or outright misinterpretation or neglect of the prompt. Artificial intelligence works far more accurately with short, small-scale sounds. 
Classic elements of sound design are various sound transitions, for example, a gradually increasing sound (rise), a sound of impact (hit), a sound of cutting air (whoosh), etc.</p><p>Artificial intelligence is able to generate and process such short sound effects with high accuracy due to its ability to analyse thousands of samples and extract key sound characteristics. These effects have a clear structure and predictable dynamics, making them ideal material for algorithm work. AI models can create variations of hits, rises or noises based on text descriptions or user settings, providing precise adjustment of the duration, frequency spectrum and amplitude of each sound.</p><p>In addition, thanks to machine learning technologies, artificial intelligence can automatically select sounds for different scenes, creating smooth transitions and adapting them to the visual content. For example, AI can generate a whoosh sound of varying intensity depending on the speed of an object in the frame or synchronise impact effects with moments of climax.</p><p>There are already services that provide the ability to create cinematic sounds with a text prompt. But such sounds are only a small part of sound design. For us, sound design is primarily a complex sound landscape, an immersive global environment. Is artificial intelligence capable of forming something like this?</p><p>Immersive environment sound design requires not only layering sounds on top of each other, but also fine-tuning spatial acoustics, dynamics and the emotional content of each layer. In the real world, sounds interact with each other in unpredictable ways -echoes in space, gradual fading or swelling, the influence of textures of materials and objects that create or reflect sound. 
It is difficult for algorithms to reproduce this chaos and versatility of the sound environment in the way human hearing and perception do.</p><p>Currently, artificial intelligence does an excellent job of reconstructing real environments from recordings and spatial analysis, but creating completely fictional sound landscapes that have no analogues in reality requires creative intuition. A human sound designer works not only with sounds as such, but with a concept: they create a story through sound, using audio as a tool to evoke emotions and build atmosphere.</p><p>However, there are also positive aspects. Artificial intelligence algorithms are becoming increasingly effective at creating procedural sound landscapes. They are able to analyse visual sequences or text descriptions and generate corresponding sound environments, automatically adding the necessary elements: the sound of wind, raindrops, city bustle or other simple ambience.</p><p>Artificial intelligence cannot fully construct a multi-layered sound environment, but it can be used as a tool that provides a foundation to work from. For example, it is possible to create simple patterns of classical instruments in a given key and rhythm, process them step by step with classical means in any sound editing environment, mix the tracks, supplement them with various generated sounds, and then integrate the resulting composition into complex multimedia environments.</p><p>Another equally important aspect is the integration of artificial intelligence technologies into plugins for working with sound. A wide variety of such tools for different tasks has appeared recently. AI assistants are used to perform general mastering (such as the built-in assistant in iZotope Ozone 10/11) and individual tasks such as compression, limiting, saturation and equalisation. These AI plugins are trained on large volumes of audio data. 
Developers train the neural network on numerous recordings manually processed by professional sound engineers. The model analyses how classical tools for saturation, compression or limiting behave, and learns patterns of effect application depending on the type of sound, genre or processing style.</p><p>When the user loads the plugin, the algorithm performs a multivariate analysis of the audio signal: it examines the frequency spectrum, dynamics, harmonics and noise level. AI models are able to detect problem areas or potentially weak zones and suggest processing parameters. Practical experience shows that these parameters are often not optimal, but they can serve as a basis for further work with the sound.</p><p>Much more interesting from the point of view of sound design are plugins such as Synplant 2. The Genopatch technology (figure <ref type="figure" target="#fig_1">2</ref>) built into the plugin makes it possible to generate a variety of new sounds from a single loaded sample <ref type="bibr" target="#b14">[15]</ref>. The unique capabilities and interface of Synplant 2 encourage experimentation in intuitive sound design, allowing one to explore how non-standard methods of interacting with technology can influence the creative process.</p><p>Considering all of the above, artificial intelligence is already transforming the field of sound design, opening up new possibilities for creativity and automation. Despite significant achievements, AI technologies in sound design face significant challenges: the limited emotional depth of generated sounds, the difficulty of creating complex immersive environments, and various technical defects that can distort the perception of the overall picture. However, these limitations stimulate the development of the industry and create space for improving algorithms, integrating new approaches, and synergy with human creative ideas. 
AI does not replace sound engineers, but becomes a powerful tool that helps accelerate the workflow and expand creative horizons. Now, let's demonstrate the possibilities and ways of applying artificial intelligence technologies in sound design through practical experience.</p></div>
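The short transition effects discussed above (rises, hits, whooshes) have a structure simple enough to generate procedurally. The sketch below is only an illustrative assumption of how such a generator might be parameterised; the envelope exponent and smoothing constant are arbitrary choices of ours, not the method of any tool named in this article:

```python
import numpy as np

def make_riser(duration_s=2.0, sample_rate=44100, curve=3.0, seed=0):
    """Synthesise a noise-based 'rise': softened noise under a swelling envelope."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * sample_rate)
    noise = rng.standard_normal(n)
    # Crude one-pole low-pass to take the harsh edge off the white noise.
    smoothed = np.empty_like(noise)
    acc = 0.0
    for i, x in enumerate(noise):
        acc += 0.05 * (x - acc)
        smoothed[i] = acc
    # Amplitude swells from 0 to 1; a higher `curve` delays the swell.
    env = np.linspace(0.0, 1.0, n) ** curve
    out = smoothed * env
    return out / np.max(np.abs(out))  # normalise to [-1, 1]

riser = make_riser(duration_s=1.0, sample_rate=8000)
```

Text-to-audio models can of course produce far richer effects, but even this toy generator exposes the duration, spectrum and envelope controls the text describes.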
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Practical aspects of using artificial intelligence technologies in sound design</head><p>In this part, as an example of the possibilities of using AI in sound design, we will build a simple sound design for several scenes planned for use in a visual novel. Visual novels (VN) are a genre of interactive games where the main emphasis is on the plot and characters, and sound plays an important role in creating an emotional response. The scenes themselves were created using DALL-E 3 and refined in Adobe Photoshop (figure <ref type="figure" target="#fig_2">3</ref>). To begin with, we break the scenes down into components and determine the overall mood and which sounds we need (figure <ref type="figure" target="#fig_3">4</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Dark ambient; low-frequency noise; wind noise; melancholic classical instruments</head><p>Now let's determine the necessary artificial intelligence tools. For forming the general landscape, the Suno AI platform discussed in the previous section will serve. With the prompt "Violin and piano, melancholic style, slow tempo, dark ambient" we generate two compositions for download. We transfer the downloaded result into the FL Studio environment and apply the following sequential processing: slowing down, equalisation, and reverb and echo effects using the Crystallizer granular echo from SoundToys (figure <ref type="figure" target="#fig_4">5</ref>).</p></div>
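For readers without a DAW, the slowing-down and equalisation steps above can be approximated offline. The following sketch is a stand-in under stated assumptions rather than the project's actual FL Studio settings: naive resampling stretches the audio (dropping the pitch, as with tape) and a one-pole low-pass darkens the tone in place of a proper equaliser:

```python
import numpy as np

def slow_down(signal, factor=2.0):
    """Stretch a signal by linear interpolation; pitch drops with speed, as on tape."""
    n_out = int(len(signal) * factor)
    src = np.linspace(0, len(signal) - 1, n_out)
    return np.interp(src, np.arange(len(signal)), signal)

def low_pass(signal, alpha=0.1):
    """One-pole low-pass filter, a crude stand-in for a darkening EQ move."""
    out = np.empty_like(signal)
    acc = 0.0
    for i, x in enumerate(signal):
        acc += alpha * (x - acc)
        out[i] = acc
    return out

sr = 8000
t = np.linspace(0, 0.5, int(0.5 * sr), endpoint=False)
tone = np.sin(2 * np.pi * 440 * t)                 # stand-in for the downloaded track
processed = low_pass(slow_down(tone, factor=2.0))  # 440 Hz becomes 220 Hz
```

A real workflow would use a proper time-stretch algorithm and parametric EQ; the point here is only that each stage of the chain is an ordinary, inspectable transformation.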
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Humming of wires; cracking of branches</head><p>It is worth noting that the recording generated in Suno AI would not, without further processing, fully correspond to the general concept of sound design for this project. As mentioned above, Suno AI often does not interpret prompts very accurately, which leads to problems when generating music in less well-known genres. However, artistic processing tools make it possible to significantly change and improve the character of the input sound and adapt it to the needs of the project.</p><p>Unlike Suno AI, the AudioGen tool handled the prompt more accurately, generating distant humming of electric wires and cracking of branches from a rather short request (figure <ref type="figure" target="#fig_5">6</ref>). To create variations of the sample generated with AudioGen, we use Synplant2 and its Genopatch technology. We load the sample into the plugin, after which Synplant2 automatically generates new sound samples based on the provided one (figure <ref type="figure" target="#fig_6">7</ref>).</p><p>After combining all the generated sounds, the result is a simple but subjectively quite high-quality piece of sound design for scenes from a visual novel. Thus, practical experience confirmed the feasibility of using artificial intelligence technologies as a tool for quickly obtaining the necessary sound samples for further processing and combination into a coherent composition.</p></div>
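Genopatch's internals are proprietary, so the sketch below only mimics the general idea of deriving a family of variants from one seed sample via random resampling and gain mutations. Every name and parameter here is a hypothetical illustration of the concept, not Synplant2's actual algorithm:

```python
import numpy as np

def mutate_sample(sample, n_variants=4, seed=0):
    """Derive variants of one source sample via random resampling and gain changes."""
    rng = np.random.default_rng(seed)
    variants = []
    for _ in range(n_variants):
        ratio = rng.uniform(0.5, 2.0)  # speed/pitch mutation factor
        gain = rng.uniform(0.5, 1.0)   # level mutation
        idx = np.linspace(0, len(sample) - 1, int(len(sample) / ratio))
        variants.append(gain * np.interp(idx, np.arange(len(sample)), sample))
    return variants

source = np.sin(np.linspace(0, 40 * np.pi, 4000))  # stand-in for the AudioGen sample
family = mutate_sample(source, n_variants=4)
```

In practice one would audition the generated variants and keep the few that fit the scene, much as the plugin workflow described in the text.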
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>The article provides a thorough analysis of the theoretical and practical aspects of using artificial intelligence technologies in the field of sound design. The current state of the industry is highlighted, considering its technical capabilities and creative challenges. A study of classical and modern tools for forming sound environments is conducted.</p><p>It has been proven that artificial intelligence significantly changes the traditional approach to sound creation, providing process automation, time savings, and expanded possibilities for experimentation. For the first time, a detailed analysis of the main trends and directions of the impact of artificial intelligence on sound design is presented. Special attention is paid to tools such as Suno AI, AudioGen, Synplant2, which demonstrate significant potential for generating sound textures and integration into creative projects.</p><p>The practical aspect of the research is based on the example of creating a sound accompaniment for visual novels, where artificial intelligence was used to generate musical compositions and sound effects. These materials, after further processing, can become the basis for high-quality full-fledged sound design. It is important to emphasise that although AI tools provide speed and adaptability in working with sound, their results often require refinement to match creative ideas.</p><p>In this article, for the first time, an integrated approach to the use of various artificial intelligence tools for creating sound design is proposed. This approach takes into account both technical capabilities and creative needs. The study outlines the advantages of modern AI algorithms, such as efficiency in creating short sound effects, as well as their limitations, including difficulties in forming complex immersive sound landscapes. 
Also, for the first time, the article presents a methodology for selecting artificial intelligence tools for specific tasks in the context of sound design for multimedia projects. For example, it is determined that tools such as Suno AI are appropriate for creating music and musical effects, AudioGen for generating sounds of certain environments, and Synplant2 for editing sounds. This methodology is formed on the basis of practical work with these tools and subjective evaluation of the generation results.</p><p>The overall results and prospects for further development of this problem can be defined as follows:</p><p>• A study of the main directions of using artificial intelligence in sound creation has been conducted, including the generation of musical compositions, short sound effects, and procedural sound landscapes. It is shown that tools like Suno AI, AudioGen, and Synplant2 are able to effectively perform sound generation and processing tasks, which greatly simplifies the complex process of creating sound design; • The article presents an example of creating sound design for visual novels, which illustrates the capabilities of AI for quickly obtaining basic sound textures. It is shown that artificial intelligence can be used to automate sound creation with further processing and refinement, which allows achieving high-quality final results; • An integration approach is proposed, which consists in using various artificial intelligence tools for different tasks that may include short musical compositions, simple sound landscapes, and short sounds. Subjective evaluation of the quality of the created samples shows that they are quite suitable for use in various multimedia projects.</p><p>The practical significance of the obtained results lies in increasing the efficiency of sound design creation processes through the integration of artificial intelligence technologies. 
In particular, the proposed methods allow automating routine tasks, such as generating basic sound textures and creating simple sound or musical effects. This reduces the time and resources required for work and allows designers to focus on the creative aspects of projects.</p><p>Prospects for further research are primarily related to improving algorithms for creating immersive sound environments, deepening the synergy of AI and human creativity, and more active integration of generated sounds into multimedia projects. These prospects demonstrate the potential for further transformation of the sound design industry, expanding the capabilities of creative professionals and stimulating the development of innovations in the use of artificial intelligence.</p><p>The study confirmed the practical value of artificial intelligence in transforming sound design, expanding the toolkit for creating sound compositions and opening up new horizons in creative industries.</p><p>Declaration on Generative AI: The authors have not employed any Generative AI tools.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Wwise Software -a tool for creating sound for interactive media and video games.</figDesc><graphic coords="2,72.00,492.87,451.30,244.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Working in the Genopatch editor.</figDesc><graphic coords="5,100.76,65.61,393.75,483.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Prepared scenes.</figDesc><graphic coords="6,72.00,116.89,216.61,153.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Determining sounds for the scene.</figDesc><graphic coords="6,220.80,346.52,297.85,185.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Work in the FL Studio environment.</figDesc><graphic coords="7,72.00,65.61,451.27,255.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Work with the AudioGen tool.</figDesc><graphic coords="7,72.00,360.78,451.28,287.41" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Generation of new sounds using Synplant2.</figDesc><graphic coords="8,72.00,65.60,451.28,391.29" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Artificial intelligence literacy in secondary education: methodological approaches and challenges</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Marienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Semerikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">M</forename><surname>Markova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CEUR Workshop Proceedings</title>
		<imprint>
			<biblScope unit="volume">3679</biblScope>
			<biblScope unit="page" from="87" to="97" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Optimizing Teacher Training and Retraining for the Age of AI-Powered Personalized Learning: A Bibliometric Analysis</title>
		<author>
			<persName><forename type="first">I</forename><surname>Mintii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Semerikov</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-71804-5_23</idno>
	</analytic>
	<monogr>
		<title level="m">Information Technology for Education, Science, and Technics</title>
		<title level="s">Lecture Notes on Data Engineering and Communications Technologies</title>
		<editor>
			<persName><forename type="first">E</forename><surname>Faure</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Tryus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Vartiainen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Danchenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Bondarenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Bazilo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Zaspa</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="volume">222</biblScope>
			<biblScope unit="page" from="339" to="357" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Sound Design</title>
		<author>
			<persName><forename type="first">K</forename><surname>Zizza</surname></persName>
		</author>
		<idno type="DOI">10.4324/9781003218821-11</idno>
	</analytic>
	<monogr>
		<title level="m">Game Audio Fundamentals: An Introduction to the Theory, Planning, and Practice of Soundscape Creation for Games</title>
		<meeting><address><addrLine>London</addrLine></address></meeting>
		<imprint>
			<publisher>Focal Press</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="142" to="163" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Computer sound synthesis fundamentals</title>
		<author>
			<persName><forename type="first">E</forename><surname>Miranda</surname></persName>
		</author>
		<idno type="DOI">10.4324/9780080490755-7</idno>
	</analytic>
	<monogr>
		<title level="m">Computer Sound Design: Synthesis techniques and programming</title>
		<meeting><address><addrLine>New York</addrLine></address></meeting>
		<imprint>
			<publisher>Routledge</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page" from="19" to="36" />
		</imprint>
	</monogr>
	<note>2nd ed.</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">The Use of Virtual Musical Instruments in Timbre Recognition Training</title>
		<author>
			<persName><forename type="first">A</forename><surname>Rosiński</surname></persName>
		</author>
		<idno type="DOI">10.18178/ijlt.9.3.256-260</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Learning and Teaching</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="256" to="260" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Virtual Studio</title>
		<author>
			<persName><forename type="first">T</forename><surname>Suzuki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nakabayashi</surname></persName>
		</author>
		<idno type="DOI">10.3169/itej.61.657</idno>
	</analytic>
	<monogr>
		<title level="j">The Journal of The Institute of Image Information and Television Engineers</title>
		<imprint>
			<biblScope unit="volume">61</biblScope>
			<biblScope unit="page" from="657" to="659" />
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Handbook of Game Audio Using Wwise</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zecevic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Durity</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Taylor &amp; Francis Group</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Music in 1s and 0s: The Art and Politics of Digital Sampling</title>
		<author>
			<persName><forename type="first">M</forename><surname>Katz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Capturing Sound: How Technology has Changed Music</title>
		<ptr target="https://ia600409.us.archive.org/29/items/mat-bib_201710/Capturing-sound-how-technology-has-changed-music.pdf" />
		<meeting><address><addrLine>Berkeley</addrLine></address></meeting>
		<imprint>
			<publisher>University of California Press</publisher>
			<date type="published" when="2004">2004</date>
			<biblScope unit="page" from="137" to="157" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Music AI</title>
		<author>
			<persName><forename type="first">K</forename><surname>Saraf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Amritphale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Akhand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Vijayvargiya</surname></persName>
		</author>
		<idno type="DOI">10.56726/irjmets54679</idno>
	</analytic>
	<monogr>
		<title level="j">International Research Journal of Modernization in Engineering Technology and Science</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="11174" to="11177" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Comparisons of performance between quantum-enhanced and classical machine learning algorithms on the IBM Quantum Experience</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">V</forename><surname>Zahorodko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Semerikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">N</forename><surname>Soloviev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Striuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">I</forename><surname>Striuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Shalatska</surname></persName>
		</author>
		<idno type="DOI">10.1088/1742-6596/1840/1/012021</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Physics: Conference Series</title>
		<imprint>
			<biblScope unit="volume">1840</biblScope>
			<biblScope unit="page">012021</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Neural network analytics and forecasting the country&apos;s business climate in conditions of the coronavirus disease (COVID-19)</title>
		<author>
			<persName><forename type="first">S</forename><surname>Semerikov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kucherova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Los</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ocheretin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CEUR Workshop Proceedings</title>
		<imprint>
			<biblScope unit="volume">2845</biblScope>
			<biblScope unit="page" from="22" to="32" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<ptr target="https://www.aiva.ai/" />
		<title level="m">AIVA, the AI Music Generation Assistant</title>
		<imprint>
			<date type="published" when="2025">2025</date>
		</imprint>
		<respStmt>
			<orgName>Aiva Technologies SARL</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<ptr target="https://suno.com/" />
		<title level="m">Suno</title>
		<imprint>
			<date type="published" when="2025">2025</date>
		</imprint>
		<respStmt>
			<orgName>Suno, Inc.</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The Determination and Visualisation of Key Concepts Related to the Training of Chatbots</title>
		<author>
			<persName><forename type="first">R</forename><surname>Liashenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Semerikov</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-71804-5_8</idno>
	</analytic>
	<monogr>
		<title level="m">Information Technology for Education, Science, and Technics</title>
		<title level="s">Lecture Notes on Data Engineering and Communications Technologies</title>
		<editor>
			<persName><forename type="first">E</forename><surname>Faure</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Tryus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Vartiainen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Danchenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Bondarenko</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Bazilo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Zaspa</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="volume">222</biblScope>
			<biblScope unit="page" from="111" to="126" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<ptr target="https://soniccharge.com/synplant" />
		<title level="m">Sonic Charge - Synplant</title>
		<imprint>
			<date type="published" when="2025">2025</date>
		</imprint>
	</monogr>
	<note>NuEdge Development</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
