<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Peculiarities of matching the text and sound components in the Ukrainian language system development</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Taras</forename><surname>Basyuk</surname></persName>
							<email>basyuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Bandera str.12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Vasyliuk</surname></persName>
							<email>andrii.s.vasyliuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Bandera str.12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Peculiarities of matching the text and sound components in the Ukrainian language system development</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">41E68DC0D977DE1E2C8D48F71CBA10F1</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:25+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>ukrainian-language content</term>
					<term>speech recognition</term>
					<term>text analysis</term>
					<term>GMM algorithm</term>
					<term>transcription</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This article analyzes the existing methods and known systems that provide tools for recognizing the Ukrainian language and describes approaches and methods for synchronizing text and audio information. The relevance of creating such a system is substantiated, and the prerequisites of scientific research in this area are described. To present the main aspects of the studied subject area, the classification of sounds in the Ukrainian language was considered, and the features of their detection and formation were given. The next stage was the study of spectral analysis and its influence on the recognition process. Namely, it was shown to influence the selection of acoustic features of speech, which subsequently made it possible to determine the sequence of phonemes that correspond to the input signal. The stage of synchronizing the audio stream with phoneme units, using the GMM algorithm, is described. The main idea was to build a model of the audio stream that can be compared with vectors of phonemic features to determine the correspondence between them. The mathematical description of the specified process is performed using the algebra of algorithms. An applied software system has been developed that implements the synchronization of text and audio information. The main stages of the system's operation are: text analysis, transcription creation, spectral analysis of the audio track, search for phoneme characteristics in the audio track, application of the GMM algorithm, and output of results. At present, the software solution works as a prototype. Further research will be directed at testing and improving the system, eliminating conflicts, and expanding functionality in accordance with the specified requirements.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The development of a system for matching the text and audio components of Ukrainian-language content is of great relevance in the modern information environment. It improves the user experience of audio content consumers, increases the efficiency of voice interfaces, and ensures content accessibility for people with disabilities. Technological progress in natural language processing and artificial intelligence makes the development of such systems more effective and promising. In general, the process consists of two stages: speech recognition and comparison of the received content with the existing textual component <ref type="bibr">[1,</ref><ref type="bibr">2]</ref>.</p><p>Speech recognition is a technology that allows a computer to identify individual words or phrases spoken by a person and convert them into text. This field draws on knowledge and research in computer science, linguistics, and electrical engineering. Speech recognition systems are gradually becoming an intermediary between humans and technological devices, providing alternative methods of information exchange. Along with software for dictation on personal computers, more advanced systems are being developed, such as voice assistants (Siri, Google Assistant, Alexa, Cortana), which, in addition to executing commands, can conduct a live dialogue and solve applied problems. However, most of them require access to the Internet, which limits their use, and their speed of operation depends on the quality of the Internet connection <ref type="bibr">[3,</ref><ref type="bibr">4]</ref>. It is important to note that most such systems do not support the Ukrainian language because of its specificities, such as high inflection and free word order in a phrase or sentence. This leads to difficulties in recognition and reduces recognition accuracy. 
Therefore, it is necessary to look for new methods and algorithms for recognizing the Ukrainian language and adapt them to solve the given task.</p><p>To date, various approaches have been tested to recognize words in continuous (fused) speech <ref type="bibr" target="#b8">[5]</ref><ref type="bibr" target="#b9">[6]</ref><ref type="bibr" target="#b10">[7]</ref>. In the first, global, approach, the word to be recognized is compared with every word in the dictionary. For the comparison, as a rule, the spectral representation of each word is used <ref type="bibr" target="#b11">[8]</ref>. Among the various methods of this type, the dynamic programming method has given satisfactory results <ref type="bibr" target="#b12">[9]</ref>. In the second, analytical, approach, each word or group of words is first segmented into smaller units. This allows recognition to be performed at the syllable or phoneme level and the parameters (duration, energy, etc.) associated with each event to be stored in memory. Segmentation can be based on finding vowels, which are often located near the maximum of the integrative energy of the spectrum. With this approach, the first criterion for segmentation is the change in energy over time <ref type="bibr" target="#b13">[10]</ref>.</p><p>In view of the above, an urgent task is to develop a system for matching the text and audio components of Ukrainian-language content, which will provide means of effective recognition and reproduction of content in continuous (fused) Ukrainian speech.</p></div>
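The global approach described above can be sketched in code: an input word, represented as a sequence of spectral feature vectors, is compared against every dictionary template by dynamic programming (dynamic time warping), and the closest template wins. This is a minimal illustration, not the authors' implementation; feature extraction is out of scope, and the feature vectors used below are hypothetical.

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two feature-vector sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j]: minimal accumulated cost aligning a[:i] with b[:j].
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Local cost: Euclidean distance between the two feature vectors.
            cost = sum((x - y) ** 2 for x, y in zip(a[i - 1], b[j - 1])) ** 0.5
            # Allowed steps: match, insertion, deletion.
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]


def recognize(features, templates):
    """Return the dictionary word whose template is closest under DTW."""
    return min(templates, key=lambda word: dtw_distance(features, templates[word]))
```

For example, with `templates = {"так": [(1.0,), (2.0,), (1.0,)], "ні": [(5.0,), (5.0,)]}`, the noisy input `[(1.1,), (1.9,), (1.2,)]` is recognized as "так". The warping step is what makes the comparison tolerant to differences in speaking rate between the input and the stored template.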
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Analysis of recent researches and publications</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.1.">Analysis of automatic speech recognition stages</head><p>The problem of automatic speech recognition can be solved step by step. At the first stage, the recognition task consists in the external search for characteristics and for only superficially characterized classes of acoustic events. For the second stage, the generalization of external criteria for the classification of internally undetected classes is crucial, which makes it possible to predict the characteristics of an unknown signal <ref type="bibr" target="#b14">[11]</ref>. In automatic speech recognition, it is first necessary to find out whether the signal is phonetic (speech) <ref type="bibr" target="#b15">[12]</ref>.</p><p>The speech flow is known to be divided into micro- and macro-segments. The boundary between two macro-segments (phrases, syntagms) is, as a rule, discrete, while the boundary between two micro-segments (subsounds, sounds, syllables) is blurred. Sounds change their suprasegmental (duration, intensity, fundamental frequency) and segmental (spectral) characteristics under the influence of other parameters. For example, an increase in the duration of a vowel component in a speech stream may indicate semantically highlighted words. Therefore, to predict, for example, the duration of a sound, several linguistic factors should be considered <ref type="bibr" target="#b16">[13]</ref>.</p><p>Here we should dwell on some segmentation problems related to the specificity of the phonetic level. The automatic recognition of nasal and smooth phonemes in fused speech can be counted among the difficulties <ref type="bibr" target="#b13">[10]</ref>. Uncertainties arising from the limitations of any language processing system, often due to poor pronunciation, are treated as sources of information for stochastic or uncertain-set grammars <ref type="bibr" target="#b17">[14,</ref><ref type="bibr" target="#b18">15]</ref>. 
Currently, available methods of micro-segmentation of speech (segmentation into subsounds, sounds, syllables) are classified as follows <ref type="bibr" target="#b19">[16,</ref><ref type="bibr" target="#b20">17]</ref>:</p><p>1. Using the degree of stability over time of some acoustic parameters of the speech signal, such as the concentration of energy in the frequency spectrum. 2. Superimposition of acoustic labels on the speech signal at regularly repeated short intervals. 3. Comparison of speech signal samples in short time windows at regular intervals with samples from phoneme prototypes.</p><p>There are context-dependent and context-independent methods of segmentation. The simplest method of context-independent marking is comparison with reference templates <ref type="bibr" target="#b21">[18]</ref>. This requires that the device store a model for each vocabulary item. Context-dependent segmentation allows a set of features and thresholds to be tied to the phonetic context. Usually, the task of speech recognition is reduced to the task of recognizing individual sounds, with the subsequent use of algorithms that consider the peculiarities of pronunciation, word formation, and phrasing of particular individuals <ref type="bibr" target="#b22">[19]</ref>.</p><p>In this case, the task of distinguishing speech sounds can be considered as a task of pattern recognition, the number of patterns being limited, although it reaches several dozen. Classifying the proposed sound samples can then be reduced to multi-alternative hypothesis testing. Moreover, the speech sound recognition system can be built using the principle of 'learning with a teacher' (supervised learning) <ref type="bibr" target="#b23">[20]</ref>, that is, a previously assembled base of classified data with which comparisons are made. The procedure for recognizing speech sounds should consider the peculiarities of their realizations. First, these realizations have a distinct appearance for each sound. 
Secondly, they have a limited duration <ref type="bibr" target="#b24">[21]</ref>.</p><p>Speech signal analysis methods can be considered using a model in which the speech signal is the response of a system with slowly changing parameters to periodic or noise-like excitation oscillations <ref type="bibr" target="#b25">[22]</ref>. A speech signal can be modeled by the response of a linear system with variable parameters (the vocal tract) to the corresponding excitation signal. With an unchanged shape of the vocal tract, the output signal is equal to the convolution of the excitation signal and the impulse response of the vocal tract. However, all the variety of sounds is obtained by changing the shape of the vocal tract. If the shape of the vocal tract changes slowly, then over short time intervals the output signal is well approximated by the convolution of the excitation signal and the impulse response of the vocal tract <ref type="bibr" target="#b26">[23,</ref><ref type="bibr" target="#b27">24]</ref>. Since the shape of the vocal tract changes when creating different sounds, the spectral envelope of the speech signal will also change over time. Similarly, when the period of the signal that excites voiced sounds changes, the frequency spacing between the harmonics of the spectrum will change.</p><p>Therefore, in the process of recognition, it is necessary to know the type of speech signal over short periods of time and the nature of its change over time.</p></div>
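The source-filter model described above can be illustrated numerically: over a short interval with a fixed vocal-tract shape, a voiced speech segment is the convolution of a periodic impulse excitation with the vocal-tract impulse response. The following sketch uses a hypothetical damped resonance near 700 Hz standing in for a single formant; real vocal-tract responses are far more complex.

```python
import numpy as np

fs = 10_000                      # sampling rate, Hz
f0 = 100                         # fundamental frequency of the excitation, Hz
n = np.arange(400)               # a 40 ms analysis frame

# Periodic impulse excitation: one impulse per pitch period (voiced source).
excitation = np.zeros(n.size)
excitation[:: fs // f0] = 1.0

# Hypothetical vocal-tract impulse response: a damped sinusoid whose
# resonance plays the role of a single formant near 700 Hz.
t = n / fs
h = np.exp(-300 * t) * np.sin(2 * np.pi * 700 * t)

# With a fixed vocal-tract shape, the output is the convolution of the two.
speech = np.convolve(excitation, h)[: n.size]
```

Taking the spectrum of `speech` shows harmonics spaced at f0 = 100 Hz whose amplitudes are shaped by the resonance envelope, exactly the structure the text describes: changing the excitation period moves the harmonic spacing, while changing the impulse response moves the spectral envelope.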
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.2.">Analysis of speech signals</head><p>Having analyzed the stages of automatic speech recognition, we can conclude that speech signal analysis systems usually try to separate the excitation function and the characteristics of the vocal tract. Then, depending on the specific method of analysis, the parameters describing each component are obtained <ref type="bibr" target="#b28">[25]</ref>. In the frequency domain, the spectrum of short segments of the speech signal can be represented as the product of the envelope characterizing the state of the vocal tract and the function characterizing the excitation signal. Since the main parameter of the signal exciting a voiced sound is the spacing of the harmonics of the fundamental tone, and the characteristics of the vocal tract are determined with sufficient completeness by the formant frequencies, it is convenient to work with the frequency-domain representation of speech during analysis. When creating different sounds, the vocal tract's shape and the excitation signal change, so the spectrum of the speech signal also changes. Therefore, the spectral representation of speech should be based on the short-time spectrum, which can be obtained from the Fourier transform <ref type="bibr" target="#b29">[26]</ref>.</p><p>Consider a discretized speech signal represented by the sequence s(n). Its short-time Fourier transform is defined as <ref type="bibr" target="#b30">[27]</ref>:</p><formula xml:id="formula_0">S_n(e^{jω}) = ∑_{k=−∞}^{∞} s(k) h(n − k) e^{−jωk}</formula><p>This expression describes the Fourier transform of a weighted segment of the speech oscillation, with the weighting function h(n) shifted in time.</p><p>Linear prediction is one of the most effective methods of speech signal analysis. 
This method has become the most common for estimating the main parameters of speech signals, such as the period of the fundamental tone, formants, and spectrum, and for compressing speech for low-rate transmission and economical storage. The importance of the method is due to the high accuracy of the obtained estimates and the relative simplicity of the calculation <ref type="bibr" target="#b31">[28]</ref>.</p><p>The basic principle of the linear prediction method is that the current sample of the speech signal can be approximated by a linear combination of previous samples. The prediction coefficients are uniquely determined by minimizing the mean square of the difference between the samples of the speech signal and its predicted values (over a finite interval). Prediction coefficients are the weights used in the linear combination. The linear prediction method can also be used to reduce the volume of a digital speech signal <ref type="bibr" target="#b32">[29]</ref>.</p><p>The main goal of processing speech signals is to obtain the most convenient and compact representation of their content. The accuracy of the representation is determined by the information that needs to be preserved or highlighted. For example, digital processing can be applied to determine whether a given oscillation is a speech signal. Most speech processing methods are based on the idea that the properties of the speech signal change slowly over time. This assumption leads to short-term analysis methods, in which segments of the speech signal are extracted and processed as if they were short segments of individual sounds with distinct properties. 
In the general case, the energy function can be determined as follows <ref type="bibr" target="#b33">[30]</ref>:</p><formula xml:id="formula_1">E_n = ∑_{m=−∞}^{∞} [x(m) ω(n − m)]²</formula><p>This expression can be rewritten in the form:</p><formula xml:id="formula_2">E_n = ∑_{m=−∞}^{∞} x²(m) h(n − m), where h(n) = ω²(n)</formula><p>The choice of the impulse response h(n), or window, forms the basis of the signal description using the energy function. To understand how the choice of time window affects the short-term energy function of the signal, suppose that h(n) is long enough and has a constant amplitude; the value of E_n will then change slowly over time. Such a window is equivalent to a low-pass filter with a narrow bandwidth. The band of the low-pass filter should not be so narrow that the output signal becomes constant. A narrow window (short impulse response) is desirable for describing rapid amplitude changes, but too small a window width can lead to insufficient averaging and, therefore, insufficient smoothing of the energy function. The influence of the time window width on the accuracy of measuring the short-term average value (average energy) is determined by the following dependence: if N (the width of the window) is small (close to the period of the fundamental tone or less), then E_n will change very quickly, following the fine structure of the speech oscillation; if N is large (several periods of the fundamental tone), then E_n will change slowly and will not adequately describe changes in the features of the speech signal <ref type="bibr" target="#b34">[31]</ref>.</p><p>This means that there is no single value of N that fully satisfies the listed requirements, since the period of the fundamental tone varies from 10 samples (at a sampling rate of 10 kHz) for high-pitched children's and female voices to 250 samples for extremely low male voices. The main purpose of E_n is that this value makes it possible to distinguish vocalized speech segments from non-vocalized ones. 
The value of the short-term average of the signal for non-vocalized segments is significantly smaller than for vocalized ones.</p><p>A characteristic feature of this speech signal analysis method is binary quantization of the input speech signal <ref type="bibr" target="#b35">[32]</ref>. The mathematical model of the speech signal used has the form: S(t) = A(t) · e^{jΨ(t)}, where A(t) is the law of change of the amplitude of the speech signal, and Ψ(t) is the full phase function of the speech signal.</p><p>The law of change of the signal amplitude is not a sufficiently informative parameter for evaluating a speech message, since it is not constant for the same word or phrase uttered with different intonation and volume. In this method, the full phase function is assumed to be the informative characteristic of the speech signal. The full phase function of the speech signal is presented in the form of a Taylor series expansion <ref type="bibr" target="#b36">[33]</ref>:</p><formula xml:id="formula_3">Ψ(t) = Ψ⁽⁰⁾(t₀)/0! + Ψ⁽¹⁾(t₀)/1! · t + Ψ⁽²⁾(t₀)/2! · t² + Ψ⁽³⁾(t₀)/3! · t³ + ...</formula><p>The specified expression can be rewritten as follows:</p><formula xml:id="formula_4">Ψ(t) = μ₀ + μ₁t + μ₂t²/2 + μ₃t³/6 + ...</formula><p>Only the first three expansion coefficients are retained. The first coefficient μ₀, which is the initial phase of the speech signal, is set equal to zero because of its low informativeness. The full phase function is then determined as: Ψ(t) = μ₁t + 0.5μ₂t², where μ₁ is the expansion coefficient corresponding to the average frequency of the speech signal, and μ₂ is the expansion coefficient corresponding to the change in the frequency of the speech signal. 
After discretization, the complete phase function has the following form:</p><p>Ψ(i · Δt) = μ₁ · (i · Δt) + 0.5 · μ₂ · (i · Δt)², where i is the index of the current sample in the discretized sequence and Δt is the discretization step.</p></div>
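The linear prediction principle stated above can be written down directly: each sample s(n) is approximated by a weighted sum of the p previous samples, with the weights chosen to minimize the mean squared prediction error over the analysis interval. The sketch below solves the resulting least-squares problem with NumPy; production analyzers typically use the Levinson-Durbin recursion on autocorrelation values instead, but the minimized criterion is the same.

```python
import numpy as np

def lpc(signal, order):
    """Prediction coefficients a[1..p] for s(n) ≈ Σ_k a_k · s(n − k),
    found by minimizing the mean squared prediction error."""
    s = np.asarray(signal, dtype=float)
    # Row i holds the `order` samples preceding s[order + i].
    X = np.column_stack(
        [s[order - k : len(s) - k] for k in range(1, order + 1)]
    )
    y = s[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict(signal, a):
    """Predict each sample from the previous len(a) samples."""
    s = np.asarray(signal, dtype=float)
    p = len(a)
    X = np.column_stack([s[p - k : len(s) - k] for k in range(1, p + 1)])
    return X @ a
```

As a sanity check, a pure sinusoid satisfies s(n) = 2cos(ω)·s(n−1) − s(n−2) exactly, so an order-2 predictor recovers it with essentially zero error; real speech frames need higher orders (commonly 8-16) to capture the formant structure.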
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.3.">Analysis of software products</head><p>Commercial programs for speech recognition appeared in the early nineties. They were usually used by people who, due to a hand injury, were unable to type large amounts of text. These programs (e.g., Dragon NaturallySpeaking, VoiceNavigator) translated the user's voice into text, thus sparing their hands. The recognition reliability of such programs was not high, but it gradually improved over the years.</p><p>The increase in the computing power of mobile devices has made it possible to create programs with speech recognition functions for them as well. Among such programs, it is worth noting the Microsoft Voice Command application, which allows the user to work with many programs by voice. For example, one can start music playback in the player or create a new document. Intelligent language solutions that allow automatic synthesis and recognition of human speech are the next step in the development of interactive voice response (IVR) systems <ref type="bibr" target="#b37">[34]</ref>. Using an interactive phone application is not a fad but a vital necessity. Reducing the load on contact center operators and secretaries, reducing labor costs, and increasing the productivity of service systems: these are just some of the advantages that prove the feasibility of such solutions.</p><p>The next step in speech recognition technologies can be considered the development of so-called Silent Speech Interfaces (SSI). These speech processing systems are based on the acquisition and processing of speech signals at the early stage of articulation <ref type="bibr" target="#b38">[35]</ref>. This stage of speech recognition development is driven by two significant shortcomings of modern recognition systems: excessive sensitivity to noise, and the need for clear and distinct speech when addressing the system. 
The approach based on SSI is to use new sensors that are not affected by noise as a supplement to the processed acoustic signals.</p><p>Today, there are two types of speech recognition systems: client-based and client-server. When using client-server technology, a voice command is entered on the user's device and transmitted via the Internet to a remote server, where it is processed and returned to the device in the form of a command (Google Voice, Vlingo, etc.). Due to the enormous number of server users, the recognition system receives a significant training base. The client-based option relies on different mathematical algorithms and is rare (Speereo Software): in this case, the command is entered on the user's device and processed on it. The advantage of processing 'on the client' is mobility and independence from network availability and the operation of remote equipment. In particular, a system working 'on the client' seems more reliable but is limited, at times, by the power of the device on the user's side <ref type="bibr" target="#b39">[36]</ref>.</p><p>Speech recognition systems can also be divided into speaker-oriented and speaker-independent. Speaker-oriented systems are aimed at recognizing and analyzing the speech of specific individuals or groups of speakers. These systems can be configured to detect the unique pronunciation, intonation, and other aspects of each speaker's speech. They are often used in situations where a person needs to be identified or authenticated by their speech pattern, such as in biometric identification systems or automatic voice authentication systems. In addition, speaker-oriented systems can be used in speech analysis to study the peculiarities of speaking style or to create personalized voice assistant interfaces that respond to commands or requests of a specific user. 
Among the common systems, we can highlight: Voice Biometrics by Verint (a speech recognition system that specializes in identifying a person by their voice; it can identify and authenticate the user based on their unique voice characteristics); Speaker Recognition by NICE (the system uses voice biometric data to identify a person; it recognizes speakers based on their voice and emphasizes the identification of specific individuals); VoicePIN by Nuance Communications (a speech recognition system offering individual voice recognition for user authentication; it allows setting a unique 'voice PIN' for each user and works with speakers regardless of their manner of speech); and VoiceKey by VoiceVault (the system is used for voice authentication of users; it can recognize the user's voice even if they use different phrases or speech patterns).</p><p>Speaker-independent speech recognition systems are designed to recognize speech without reference to specific speakers or individuals. They are designed to recognize general speech features and patterns that can be applied to many speakers. These systems are usually trained on substantial amounts of diverse speech data to become more versatile and accurate in speech recognition. Speaker-independent speech recognition systems are widely used in large companies where a large stream of voice commands or data needs to be processed without the need to train a model for each individual user. They can also be used in various applications such as voice assistants, automatic speech recognition systems in the medical or legal fields, as well as in video games and other user interaction scenarios. Among the common systems, we can highlight: Google Speech Recognition (Google offers a widely used speech recognition system that works based on neural networks. 
It is speaker-independent and capable of recognizing speech from different speakers in different language contexts); Amazon Alexa Voice Service (Amazon's Alexa voice control system is also speaker-independent, and is able to recognize the speech of users from different language areas and with different accents); Microsoft Azure Speech Recognition (Microsoft's Azure speech recognition service offers a scalable and accurate speech recognition system that can work with the speech of different speakers).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">The main tasks of the research and their significance</head><p>The purpose of this study is to create a system that can be used to align Ukrainian text with its audio reproduction. The project will serve as part of creating a system for Ukrainian speech synthesis and recognition. To achieve the goal, the following tasks must be solved: analyze the existing approaches, methods, and software tools used in the field of Ukrainian language recognition; identify the main tasks that arise in this case; analyze the methods and algorithms of speech recognition that can be adapted during system development; and implement a system prototype.</p><p>The results of the study address the relevant scientific and practical task of matching the text and sound components of Ukrainian-language content, which consists in providing means for the effective recognition and reproduction of words in continuous (fused) Ukrainian speech. Such a system would be useful for a wide range of applications, including speech recognition in audio and video content, development of voice assistants and interfaces, and support for users with disabilities. Given the rapid pace of development of deep learning and natural language processing technologies, such a system has exciting potential for improving the ways of interacting with Ukrainian-language content, ensuring more accurate and faster recognition of Ukrainian speech.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Major research results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Sounds in the Ukrainian language</head><p>In the process of research, the mechanisms of phonetics will be used. The main object of the study of phonetics is sounds: the smallest units of the speech stream, which make up words in the language. Sounds form the outer, sound shell of words and thus help to distinguish one word from another. Words are distinguished by the number of sounds from which they are made, the set of these sounds, and their sequence. The sound system of the Ukrainian language includes 38 sounds: 6 vowels and 32 consonants. Speech sounds are produced by the speech apparatus, which includes the larynx with vocal cords, oral and nasal cavities, lips, tongue, teeth, and palate <ref type="bibr" target="#b40">[37]</ref>.</p><p>According to the method of creation, sounds are divided into vowels and consonants <ref type="bibr" target="#b41">[38]</ref>. Vowels are the sounds of human speech based on the voice. Consonant sounds are the sounds of human speech based on noise with a greater or lesser part of the voice, or on noise alone. Active speech organs make certain movements when creating sounds. These are the vocal cords, the back wall of the pharynx, the uvula (palatal veil), the tongue, and the lips. Active speech organs play the key role in the process of sound formation. Passive speech organs are motionless organs that active speech organs approach or touch, causing noise. These include the hard palate, teeth, and alveoli. Passive speech organs perform an auxiliary role during sound production <ref type="bibr" target="#b42">[39]</ref>.</p><p>There are six vowel sounds in the Ukrainian language: [а], [о], [у], [е], [и], [і]. 
There are 32 consonant sounds in the Ukrainian language:</p><formula xml:id="formula_5">[б], [п], [д], [д'], [т], [т'], [ґ], [к], [ф], [ж], [з], [з'], [ш], [с], [с'], [г], [х], [дж], [дз], [дз'], [ч], [ц], [ц'], [в], [й], [м], [н], [н'], [л], [л'], [р], [р'].</formula><p>The division of consonants into sonorous and noisy, and of the latter into voiced and voiceless, is based on the participation of voice and noise in their creation. Consonant sounds can thus be sonorous, voiced, or voiceless <ref type="bibr" target="#b41">[38]</ref>.</p></div>
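The transcription-creation stage of the system can be pictured with a deliberately simplified sketch: mapping a word's letters to phoneme labels from the inventory above. This one-to-one letter map is a toy assumption; real Ukrainian grapheme-to-phoneme conversion is context-dependent (softening before the soft sign, iotated vowels such as я/ю/є/ї, voicing assimilation), which is ignored here.

```python
# Toy one-to-one letter-to-phoneme map drawn from the inventory above.
# Context-dependent rules (softening, iotated vowels, assimilation) are
# deliberately omitted in this sketch.
LETTER_TO_PHONEME = {
    "а": "[а]", "о": "[о]", "у": "[у]", "е": "[е]", "и": "[и]", "і": "[і]",
    "б": "[б]", "п": "[п]", "д": "[д]", "т": "[т]", "ґ": "[ґ]", "к": "[к]",
    "ф": "[ф]", "ж": "[ж]", "з": "[з]", "ш": "[ш]", "с": "[с]", "г": "[г]",
    "х": "[х]", "ч": "[ч]", "ц": "[ц]", "в": "[в]", "й": "[й]", "м": "[м]",
    "н": "[н]", "л": "[л]", "р": "[р]",
}

def transcribe(word):
    """Return the phoneme label sequence for a word (unmapped letters skipped)."""
    return [LETTER_TO_PHONEME[ch] for ch in word.lower() if ch in LETTER_TO_PHONEME]
```

For example, `transcribe("мама")` yields `["[м]", "[а]", "[м]", "[а]"]`; the resulting phoneme sequence is what later stages align against the audio stream.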
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Spectral analysis of an audio fragment</head><p>Spectral analysis is one of the signal processing methods that allows characterizing the frequency composition of the measured signal. The Fourier transform is the mathematical basis that connects a temporal or spatial signal (or some model of this signal) with its representation in the frequency domain. Real-time signal processing includes the analysis of audio, speech, and multimedia signals. Here, in addition to the difficulties directly related to analyzing the spectral content and then classifying the sequence of samples (as in the task of speech recognition), tracking changes in the shape of the spectrum, and filtering in the frequency domain (mainly relevant to multimedia signals), the problem of data flow management in modern computer systems arises. When processing signals, it is customary to solve two types of tasks: detection and estimation. In detection, it is necessary to answer the question of whether we are observing a signal with a priori known parameters. Estimation is the task of measuring the values of the parameters describing the signal <ref type="bibr" target="#b43">[40]</ref>.</p><p>The signal often contains a lot of noise, and interfering signals can be superimposed on it. Therefore, to simplify these tasks, the signal is usually decomposed into the basic components of the signal space. For many applications, periodic signals are of greatest interest, so it is quite natural that the functions sin and cos are used. Such a decomposition can be performed using the classical Fourier transform <ref type="bibr" target="#b30">[27]</ref>. When processing signals of finite duration, interdependent issues must be considered during harmonic analysis. 
The finite observation interval affects the search for tones in the presence of loud noise, the ability to resolve tones of varying frequency, and the accuracy of the parameter estimates of all the above-mentioned signals.</p><p>Currently, there are many algorithms and groups of algorithms that solve, in one way or another, the main task of spectral analysis: estimating the power spectral density in order to judge the nature of the processed signal from the result. However, each algorithm has its own scope of application. For example, gradient adaptive autoregressive methods cannot be applied to data whose spectrum changes rapidly in time. Classical methods have a wide scope of application but lose to eigenvalue-based autoregressive methods in estimation quality; on a real time scale, however, the latter are difficult to use because of their computational complexity. Moreover, applying each method usually requires selecting parameter values (the data window and correlation window in classical methods, the model order in autoregressive algorithms, the estimated number of eigenvectors in the noise subspace), and the correct choice requires experiments with each class of algorithms <ref type="bibr" target="#b33">[30]</ref>.</p><p>Thus, the following task arises: analyze whether existing algorithms can be applied to sequential real-time signal processing and to block processing, and evaluate the quality of the obtained results. This statement of the task implies the need for numerous experiments. 
Experimental input data are formed as follows: for the task of analyzing block processing algorithms over the entire sequence of samples, discretized samples of the test signal are formed as the sum of complex sinusoids and additive noise processes obtained by passing white noise through a filter with a raised-cosine frequency characteristic or a Hamming window. For real-time signal analysis, it is advisable to use the power spectral density. A spectral estimate obtained from a finite data record approximates the spectral function that would be obtained if a data record of infinite length were available; the accepted statistical criteria for the quality of the estimate are its bias and variance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Synchronization of the audio stream and phonemic units</head><p>Synchronization of the audio stream and phonemic units using the GMM (Gaussian mixture model) algorithm is used in speech recognition tasks. The basic idea is to build a model of the audio stream that can be compared with vectors of phonemic features to determine the correspondence between them. The GMM algorithm uses statistical methods to model the distribution of data in the feature space. In the context of synchronizing the audio stream and phonemic units, a GMM can be used to model various acoustic characteristics of phonemes, such as frequency, amplitude, and spectral shape. When the GMM algorithm is trained on a large set of audio data, it becomes able to determine the probability of each phoneme for each part of the audio stream. With the help of these probabilities, it is possible to determine the moments in time when phonemes appear in the input audio stream <ref type="bibr" target="#b44">[41]</ref>.</p><p>The basic idea behind the GMM algorithm discussed here is to assume that the parameters of the model are known and then calculate the probability that each data point belongs to one component or another. After that, each component is refitted to the entire data set, with each point weighted by the probability that it belongs to the given component. This process continues iteratively until convergence is reached. The data is 'supplemented' by calculating probability distributions for the hidden variables based on the current model. When a Gaussian mixture distribution is used, the mixture model is initialized with arbitrary parameter values, and then iterations are carried out according to the two steps described below <ref type="bibr" target="#b45">[42]</ref>. 
</p><formula xml:id="formula_6">m_i ⟵ Σ_j p_ij x_j / p_i,  s_i ⟵ Σ_j p_ij x_j x_jᵀ / p_i,  w_i ← p_i</formula><p>The E-step, or expectation step, can be considered as the calculation of the expected values of hidden indicator variables, whose value equals 1 if a data point was formed by the i-th component and 0 otherwise. At the M-step, or maximization step, a search is made for new parameter values that maximize the log-likelihood of the data, considering the expected values of the hidden indicator variables.</p><p>The final model, whose parameters are determined by training with the GMM algorithm, differs little from the primary model on which the data was generated. The log-likelihood of the model obtained in training is slightly higher than the corresponding value for the initial model on which the initial data were formed. This phenomenon may seem strange at first, but it simply reflects the fact that the data was generated randomly, so there is a chance that it does not represent the underlying model exactly. Thus, the synchronization of the audio stream and phonemic units using the GMM algorithm allows assigning each fragment of the audio stream to the corresponding phoneme, which is a key step in the speech recognition process.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.4.">Mathematical description of the process</head><p>The sequence of actions for matching the text and audio components of Ukrainian-language content was described using the algebra of algorithms <ref type="bibr" target="#b46">[43]</ref>. The first stage of applying the algebra of algorithms is the description of uniterms and the synthesis of sequences, which is given below.</p><p>The formed uniterms are: I(t) - uniterm of entering/editing text content; A(t) - uniterm of analyzing the text for the correctness of the specified characteristics; С(tr) - uniterm of creating a transcription; L(a) - uniterm of loading/reading audio content; As(a) - uniterm of spectral analysis of the audio track; F(f) - uniterm of searching for phoneme characteristics; S(a) - uniterm of synchronizing the transcription and the audio track; V(r) - uniterm of displaying the result; u1 - check whether a value has been entered for analysis; u2 - check the correctness of the result. 
As a result of using the apparatus of the algebra of algorithms, the following sequences and eliminations were synthesized: S11 - the sequence of system operation when values for analysis are available and the result is correct; S12 - the sequence of system operation when values for analysis are available and the result is incorrect; S21 - the sequence of system operation when no values for analysis are available and the result is correct; S22 - the sequence of system operation when no values for analysis are available and the result is incorrect; L1 - elimination checking whether a value has been entered for analysis; L2 - elimination checking the correctness of the result; Sm - the main sequence of the system.</p><p>The next stage is the substitution of the corresponding sequences into the eliminations.</p><p>As a result of using the properties of the algebra of algorithms <ref type="bibr" target="#b17">[14]</ref>, we factor out the common uniterms by the sign of the elimination operation and obtain the resulting formula of the algebra of algorithms.</p><p>Characteristics of the solution and practical implementation. The C++ programming language was used to implement the prototype of the software product. It is characterized by simplicity, object orientation, and cross-platform support. Its main advantages are <ref type="bibr" target="#b47">[44]</ref>:</p><p>• Scalability: programs are developed in C++ for various platforms and systems. • The ability to work at a low level with memory, addresses, and ports.</p><p>• The ability to create generalized algorithms for diverse data types, their specialization, and compile-time computation, using templates. • Support for various programming styles and technologies, including traditional directive programming, OOP, generic programming, and metaprogramming (templates, macros).</p><p>The developed system is presented as a desktop application. 
The application was created for the Windows operating system. To carry out this work, the project was divided into two parts: work with text and work with sound.</p><p>Work with text included the following tasks: reading words, applying assimilation rules to them, creating a basic transcription, and considering the effects of sounds on each other. Work with audio included: splitting the wave into frequencies, searching for sound parameters, and synchronizing the transcription with audio playback. For the last task, the GMM (Gaussian Mixture Model) algorithm was used, which helped to achieve high-quality results <ref type="bibr" target="#b44">[41]</ref>.</p><p>During prototype testing, different texts were used and read by different voices. The system was configured for fast learning and adapted to different voice timbres. The requirements for the audio recording are the absence of noise and a moderate reading pace. We illustrate the operation of the system with the results of its three main stages: creating a transcription, searching for sound characteristics throughout the audio track, and synchronizing text and audio.</p><p>As a control example, a fragment of text was used: 'По тих слідах пройшли в лісову гущавину' (Following those tracks, they went into the forest thicket). First, the fragment is transcribed (Fig. <ref type="figure" target="#fig_2">1</ref>). Using the GMM (Gaussian Mixture Model) algorithm and predefined phonetic unit characteristics, the text is synchronized with the incoming audio stream. Figure <ref type="figure" target="#fig_5">4</ref> shows the graphical results. As can be seen, the created prototype of the software system successfully matched the text and sound components of the fragment 'По тих слідах пройшли в лісову гущавину'.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>As a result of the conducted research, existing methods and known systems that provide tools for recognizing the Ukrainian language and approaches to synchronizing text and audio information have been analyzed. The stages and software tools of automatic speech recognition were analyzed, which made it possible to identify the features of existing approaches. As the analysis showed, several software systems exist today, but all of them have certain shortcomings, chief among them limited accuracy on complex language constructions, neglect of context, and inapplicability to the recognition of Ukrainian-language audio content; this makes the task of constructing a system for matching the text and audio components of Ukrainian-language content relevant. To present the main aspects of the studied subject area, the classification of sounds in the Ukrainian language was considered, and the features of their detection and formation were given. The next stage was the study of spectral analysis and its influence on the recognition process. The stage of synchronizing the audio stream and phonemic units using the GMM algorithm is described; the main idea is to build a model of the audio stream that can be compared with vectors of phonemic features to determine the correspondence between them. The mathematical description of this process is performed using the algebra of algorithms. An applied software system has been developed that implements the synchronization of text and audio information. 
At present, the software solution works as a prototype.</p><p>Further research will be directed at testing and improving the system, eliminating conflicts, and expanding functionality in accordance with the specified requirements.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>[і], [и], [е], [у], [о], [а]. They can be: • Front and back rows. According to the place of production (meaning the movement of the tongue in the horizontal plane of the oral cavity), vowel sounds are divided into front row vowels and back row vowels: front row vowels: [е], [и], [і]; back row vowels: [а], [о], [у]. • Low, medium, and high lift. Depending on the degree of raising the tongue, i.e., on its movement in the vertical plane, vowels of low, medium, and high elevation are distinguished: vowels of low elevation: [а]; middle raised vowels: [е], [о]; high rising vowels: [і], [и], [у]. • Rounded or neutral. With the participation of the lips, vowels are divided into rounded (labialized) and neutral: rounded vowels: [о], [у]; neutral vowels: [і], [и], [е], [а]. • Unstressed and stressed. Depending on the place of stress in the word, vowel sounds can be stressed or unstressed.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>1. E-step. Calculate the probabilities p_ij = P(C = i | x_j) that data point x_j was formed by component i. By the Bayes rule, p_ij = αP(x_j | C = i)P(C = i). The term P(x_j | C = i) is the probability of the data value x_j under the i-th Gaussian distribution, and the term P(C = i) is a parameter determining the weight of the i-th Gaussian distribution; by definition p_i = Σ_j p_ij. 2. М-step. Calculate the new values of the mathematical expectation, covariance, and weight of each component as follows:</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Creating a transcription</figDesc><graphic coords="14,150.48,319.44,308.64,116.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Waveform of the input audio file</figDesc><graphic coords="14,120.40,477.18,368.36,221.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Calculated wave frequencies of the input audio file</figDesc><graphic coords="15,106.10,126.43,396.78,238.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: The result of text and audio track synchronization</figDesc><graphic coords="15,112.97,435.66,383.23,230.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="13,103.05,85.05,403.09,230.55" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>


<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Improving Readability for Automatic Speech Recognition Transcription</title>
		<author>
			<persName><forename type="first">J</forename><surname>Liao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Eskimez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Shou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Qu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zeng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Asian and Low-Resource Language Information Processing</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="issue">5</biblScope>
			<biblScope unit="page" from="1" to="23" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>Article No</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explainability for Natural Language Processing</title>
		<author>
			<persName><forename type="first">M</forename><surname>Danilevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dhanorkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Popa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Qian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">KDD &apos;21: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &amp; Data Mining</title>
				<imprint>
			<date type="published" when="2021-08">August 2021</date>
			<biblScope unit="page" from="4033" to="4034" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Voice Marketing</title>
		<author>
			<persName><forename type="first">L</forename><surname>Minsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Westwater</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Westwater</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
			<publisher>Rowman &amp; Littlefield Publishers</publisher>
			<biblScope unit="page">216</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Innerlinking website pages and weight of links</title>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th International Scientific and Technical Conference «Computer Science and Information Technologies CSIT-2017</title>
				<meeting>the 12th International Scientific and Technical Conference «Computer Science and Information Technologies CSIT-2017<address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">September 12-15, 2017</date>
			<biblScope unit="page" from="12" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Gated Recurrent Fusion With Joint Training Framework for Robust End-to-End Speech Recognition</title>
		<author>
			<persName><forename type="first">C</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE/ACM Transactions on Audio, Speech and Language Processing</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="198" to="209" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Natural Language Processing Pretraining Language Model for Computer Intelligent Recognition Technology</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Asian and Low-Resource Language Information Processing</title>
		<imprint>
			<biblScope unit="page" from="937" to="943" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Peculiarities of an Information System Development for Studying Ukrainian Language and Carrying out an Emotional and Content Analysis</title>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Computational Linguistics and Intelligent Systems</title>
				<meeting>the 7th International Conference on Computational Linguistics and Intelligent Systems<address><addrLine>Kharkiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-04-20">April 20-21, 2023</date>
			<biblScope unit="volume">3396</biblScope>
			<biblScope unit="page" from="279" to="294" />
		</imprint>
	</monogr>
	<note>Computational Linguistics Workshop</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Applied Phonetics Workbook: A Systematic Approach to Phonetic Transcription</title>
		<author>
			<persName><forename type="first">H</forename><surname>Edwards</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gregg</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2003">2003</date>
			<publisher>Cengage Learning</publisher>
			<biblScope unit="page">288</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Mastering Dynamic Programming in Python</title>
		<author>
			<persName><forename type="first">E</forename><surname>Norex</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Independent Creating Platform</title>
		<imprint>
			<biblScope unit="page">219</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">Automatic Speech and Speaker Recognition: Large Margin and Kernel Methods</title>
		<author>
			<persName><forename type="first">J</forename><surname>Keshet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>Wiley</publisher>
			<biblScope unit="page">268</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Automatic Speech Recognition: A Deep Learning Approach</title>
		<author>
			<persName><forename type="first">D</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Deng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Signals and Communication Technology</title>
		<imprint>
			<biblScope unit="page">347</biblScope>
			<date type="published" when="2015">2015</date>
			<publisher>Springer</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Fundamentals of Speaker Recognition</title>
		<author>
			<persName><forename type="first">H</forename><surname>Beigi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2011">2011</date>
			<publisher>Springer</publisher>
			<biblScope unit="page">1003</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Improving text recognition by combining visual and linguistic features of text</title>
		<author>
			<persName><forename type="first">C</forename><surname>Tran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Nguyen-Trong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pham</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Tran-Anh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nguyen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th International Symposium on Information and Communication Technology</title>
				<meeting>the 11th International Symposium on Information and Communication Technology</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="329" to="335" />
		</imprint>
	</monogr>
	<note>SoICT &apos;</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Applying Phonetics: Speech Science in Everyday Life</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kulkarni</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Society Publishing</publisher>
			<biblScope unit="page">272</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Reetz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jongman</surname></persName>
		</author>
		<title level="m">Phonetics: Transcription, Production, Acoustics, and Perception (Blackwell Textbooks in Linguistics</title>
				<imprint>
			<publisher>Wiley-Blackwell</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">400</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Natural Language Processing Pretraining Language Model for Computer Intelligent Recognition Technology</title>
		<author>
			<persName><forename type="first">J</forename><surname>Dong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Asian and Low-Resource Language Information Processing</title>
		<imprint>
			<biblScope unit="page" from="56" to="78" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Natural Language Processing with Transformers</title>
		<author>
			<persName><forename type="first">L</forename><surname>Tunstall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Werra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wolf</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>O&apos;Reilly Media</publisher>
			<biblScope unit="page">406</biblScope>
		</imprint>
	</monogr>
	<note>Revised Edition</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Features of designing and implementing an information system for studying and determining the level of foreign language proficiency</title>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lytvyn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Vlasenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Modern Machine Learning Technologies and Data Science Workshop</title>
				<meeting>the Modern Machine Learning Technologies and Data Science Workshop<address><addrLine>MoMLeT&amp;DS; Leiden, The Netherlands</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-11-25">November 25-26, 2022</date>
			<biblScope unit="volume">3312</biblScope>
			<biblScope unit="page" from="212" to="225" />
		</imprint>
	</monogr>
	<note>Modern Machine Learning Technologies and Data Science Workshop</note>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Andreichuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Babeliuk</surname></persName>
		</author>
		<title level="m">Contrastive lexicology of English and Ukrainian languages: theory and practice: Textbook</title>
				<meeting><address><addrLine>Kherson</addrLine></address></meeting>
		<imprint>
			<publisher>Helvetica</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page">236</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">Deep Learning for NLP and Speech Recognition</title>
		<author>
			<persName><forename type="first">U</forename><surname>Kamath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Whitaker</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Springer</publisher>
			<biblScope unit="page">649</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<title level="m" type="main">Learning Deep Learning: Theory and Practice of Neural Networks, Computer Vision, Natural Language Processing, and Transformers Using TensorFlow</title>
		<author>
			<persName><forename type="first">M</forename><surname>Ekman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2021">2021</date>
			<publisher>Addison-Wesley Professional</publisher>
			<biblScope unit="page">752</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Machine learning applied in natural language processing</title>
		<author>
			<persName><forename type="first">A</forename><surname>Butnaru</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM SIGIR Forum</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="9" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>Article No</note>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<title level="m" type="main">Digital Signals Theory</title>
		<author>
			<persName><forename type="first">B</forename><surname>McFee</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
			<publisher>Chapman and Hall/CRC</publisher>
			<biblScope unit="page">259</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Signal Processing and Machine Learning Theory</title>
		<author>
			<persName><forename type="first">P</forename><surname>Diniz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Academic Press Library in Signal Processing</title>
		<imprint>
			<publisher>Academic Press</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page">1234</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b28">
	<monogr>
		<author>
			<persName><forename type="first">T</forename><surname>Holton</surname></persName>
		</author>
		<title level="m">Digital Signal Processing: Principles and Applications</title>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page">1058</biblScope>
		</imprint>
	</monogr>
	<note>Illustrated edition</note>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Stone</surname></persName>
		</author>
		<title level="m">The Fourier Transform: A Tutorial Introduction</title>
				<imprint>
			<publisher>Sebtel Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page">103</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Discrete Fourier And Wavelet Transforms: An Introduction Through Linear Algebra With Applications To Signal Processing</title>
		<author>
			<persName><forename type="first">R</forename><surname>Goodman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>World Scientific Publishing Company</publisher>
			<biblScope unit="page">300</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<title level="m" type="main">Linear Prediction: The Problem, its Solution and Application to Speech</title>
		<author>
			<persName><forename type="first">A</forename><surname>O'Cinneide</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dorran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gainza</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page">19</biblScope>
		</imprint>
	</monogr>
	<note type="report_type">DIT Internal Technical Report</note>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">The Linear Predictive Modeling of Speech From Higher-Lag Autocorrelation Coefficients Applied to Noise-Robust Speaker Recognition</title>
		<author>
			<persName><forename type="first">P</forename><surname>Alku</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Saeidi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE/ACM Transactions on Audio, Speech, and Language Processing</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Understanding Digital Signal Processing</title>
		<author>
			<persName><forename type="first">R</forename><surname>Lyons</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2010">2010</date>
			<publisher>Pearson</publisher>
			<biblScope unit="page">954</biblScope>
		</imprint>
	</monogr>
	<note>3rd edition</note>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jiang</surname></persName>
		</author>
		<title level="m">Digital Signal Processing: Fundamentals and Applications</title>
		<imprint>
			<publisher>Academic Press</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page">920</biblScope>
		</imprint>
	</monogr>
	<note>3rd edition</note>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Proakis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Manolakis</surname></persName>
		</author>
		<title level="m">Digital Signal Processing</title>
				<imprint>
			<publisher>Pearson</publisher>
			<date type="published" when="2007">2007</date>
			<biblScope unit="page">1004</biblScope>
		</imprint>
	</monogr>
	<note>4th edition</note>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Laurent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Staines</surname></persName>
		</author>
		<title level="m">Partial Fractions, Laurent Series, and Residues</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page">44</biblScope>
		</imprint>
	</monogr>
	<note>Independently published</note>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m" type="main">Designing Voice User Interfaces: Principles of Conversational Experiences</title>
		<author>
			<persName><forename type="first">C</forename><surname>Pearl</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>O&apos;Reilly Media</publisher>
			<biblScope unit="page">275</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Instruments of Articulation: Signal Processing in Live Performance</title>
		<author>
			<persName><forename type="first">S</forename><surname>Thorn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wei</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th International Conference on Movement and Computing</title>
				<meeting>the 6th International Conference on Movement and Computing</meeting>
		<imprint>
			<date type="published" when="2019-10">October 2019</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
	<note>MOCO &apos;19</note>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Design and Implementation of a Ukrainian-Language Educational Platform for Learning Programming Languages</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lytvyn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Modern Machine Learning Technologies and Data Science Workshop</title>
				<meeting>the Modern Machine Learning Technologies and Data Science Workshop<address><addrLine>MoMLeT&amp;DS; Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-06-03">June 3, 2023</date>
			<biblScope unit="volume">3426</biblScope>
			<biblScope unit="page" from="406" to="420" />
		</imprint>
	</monogr>
	<note>Modern Machine Learning Technologies and Data Science Workshop</note>
</biblStruct>

<biblStruct xml:id="b40">
	<monogr>
		<author>
			<persName><forename type="first">R.-A</forename><surname>Knight</surname></persName>
		</author>
		<title level="m">Phonetics: A Coursebook</title>
		<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2012">2012</date>
			<biblScope unit="page">314</biblScope>
		</imprint>
	</monogr>
	<note>Illustrated edition</note>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<title level="m" type="main">Articulatory Phonetics</title>
		<author>
			<persName><forename type="first">B</forename><surname>Gick</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Wilson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Derrick</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Wiley-Blackwell</publisher>
			<biblScope unit="page">272</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

<biblStruct xml:id="b42">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Mackay</surname></persName>
		</author>
		<title level="m">Phonetics and Speech Science</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page">458</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<monogr>
		<title level="m" type="main">Functional Analysis, Spectral Theory, and Applications</title>
		<author>
			<persName><forename type="first">M</forename><surname>Einsiedler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Ward</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2018">2018</date>
			<publisher>Springer</publisher>
			<biblScope unit="page">628</biblScope>
		</imprint>
	</monogr>
	<note>Softcover reprint of the original 1st edition</note>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Theory of the GMM Kernel</title>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-H</forename><surname>Zhang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th International Conference on World Wide Web</title>
				<meeting>the 26th International Conference on World Wide Web</meeting>
		<imprint>
			<date type="published" when="2017-04">April 2017</date>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="1053" to="1062" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Stream processing with dependency-guided synchronization</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kallas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Niksic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stanford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Alur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming</title>
				<meeting>the 27th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming</meeting>
		<imprint>
			<date type="published" when="2022-04">April 2022</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
	<note>PPoPP &apos;22</note>
</biblStruct>

<biblStruct xml:id="b46">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Ovsyak</surname></persName>
		</author>
		<title level="m">Algorithms: methods of construction, optimization, probability research</title>
				<meeting><address><addrLine>Lviv</addrLine></address></meeting>
		<imprint>
			<publisher>Svit</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page">268</biblScope>
		</imprint>
	</monogr>
	<note>In Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b47">
	<monogr>
		<title level="m" type="main">C++ Programming: An Object-Oriented Approach</title>
		<author>
			<persName><forename type="first">B</forename><surname>Forouzan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Gilberg</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
			<publisher>McGraw Hill</publisher>
			<biblScope unit="page">960</biblScope>
		</imprint>
	</monogr>
	<note>1st edition</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
