<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Using Recurrent Neural Network to Noise Absorption from Audio Files</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Nataliya</forename><surname>Boyko</surname></persName>
<email>nataliya.i.boyko@lpnu.ua</email>
							<idno type="ORCID">0000-0002-6962-9363</idno>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Profesorska Street 1</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Аnastasiia</forename><surname>Hrynyshyn</surname></persName>
<email>anastasiia.hrynyshyn.knm.2018@lpnu.ua</email>
							<idno type="ORCID">0000-0003-4289-9475</idno>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Profesorska Street 1</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Using Recurrent Neural Network to Noise Absorption from Audio Files</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">25D661A49301C4CB87982BBAE143A420</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-19T16:26+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Artificial Intelligence</term>
					<term>fourier transform</term>
					<term>fast fourier transform</term>
					<term>discrete fourier transform</term>
					<term>Convolutional Neural Network</term>
					<term>Recurrent Neural Network</term>
					<term>Short-time objective intelligibility</term>
<term>mean square error</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The study develops the idea of noise absorption: reducing noise in an input signal with minimal distortion of speech. During the study of this topic, many articles and publications were analyzed that present new approaches to noise absorption or modifications of existing ones. This paper considers noise absorption algorithms and analyzes high-performance algorithms for separating noise and human speech in an audio stream. The paper uses traditional algorithms for digital signal processing as well as classic solutions for filtering unwanted noise. Experiments were performed to compare three different methods of noise processing in audio files. Statistical methods are used to build a noise model, which is then used to recover the clean sound from the input signal with noise. Deep learning is used for comparison. STOI and PESQ scores are used to evaluate the audio recordings obtained after noise removal. The practical value of the results lies in improving the quality of video and audio calls by eliminating background noise, as well as in voice recognition.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Today there are many means of communication. A companion can be on the other side of the world, yet talking to them is not a problem. There are, however, situations when communication is impaired by ambient background noise, because it is impossible to find a quiet place to talk. In such cases, noise absorption algorithms are used.</p><p>There is traditional noise suppression: the introduction of two or more microphones <ref type="bibr" target="#b14">[16]</ref>. The first microphone is located at the lower front of the phone, closest to the user's mouth, to capture the voice directly during a conversation. The second microphone is placed as far away from the first as possible, usually at the top back of the phone.</p><p>Both microphones pick up ambient sounds. The microphone closer to the mouth captures more of the speaker's voice, while the other captures less of it. The software separates the two signals from each other, giving an almost clean "voice". This may sound easy, but there are many situations where such technology does not work: for example, when a person does not speak, so the microphones receive only noise, or when a person actively shakes or turns the phone during a conversation, as during a run. Solving these problems is a complex process.</p><p>Traditional digital signal processing (DSP) algorithms <ref type="bibr" target="#b15">[17]</ref> constantly try to find a noise pattern and adapt to it. These algorithms work well in some cases; however, they do not scale to the variety and variability of noise in our everyday environment. That is why deep learning is used to solve this problem.</p><p>The relevance of the topic: there are many definitions of noise, but in general it is background sound caused by people, music, car buzzing, and so on. These are sounds that should not be present in a conversation, video, or audio file. 
Noise distracts the audience's attention from the core material and therefore degrades the perception of information. But the main risk of noise for audio files is poor speech recognition. Many technologies work with voice commands, and because of excessive noise the voice may be poorly recognized, so the program will not perform the correct task or may not receive the command at all. Noise suppression is used to eliminate this risk.</p><p>The main idea of noise absorption is that the input is a signal with noise and the output is the same signal without the noise, with minimal speech distortion. This topic has been studied since the 1970s; one early example is the suppression of acoustic noise in speech by spectral subtraction <ref type="bibr" target="#b16">[18]</ref>. Although research on this problem began a long time ago, the topic remains relevant to this day, since there is no perfect solution.</p><p>Having received a signal with noise at the input, we strive to filter out the unwanted noise without degrading the speech. There are classic solutions to this problem. First, they use generative modeling, applying statistical methods such as Gaussian filters to build a noise model. This model can then be used to recover the clean sound from the input signal with noise. Recent work has shown, however, that deep learning outperforms these solutions, provided there is enough data.</p><p>The goal of this work is to increase noise absorption efficiency, to reduce the risk of incorrect speech recognition, and to train a recurrent network on different types of noise.</p><p>The practical value of the results obtained in this work is to help improve the quality of video and audio calls by eliminating background noise. The model will also reduce the risk of incorrect voice recognition caused by background noise.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Review of Literature Sources</head><p>During the study and research of this topic, many different articles and publications were reviewed. Each of them presents a new approach to noise absorption or a modification of existing ones. These materials are presented below with a brief analysis of the specific techniques used.</p><p>The idea of using deep neural networks was first covered in the article «A regression approach to speech enhancement based on deep neural networks» <ref type="bibr" target="#b17">[19]</ref>, authored by Yong Xu, Jun Du, Li-Rong Dai, and Chin-Hui Lee. The basic idea is to use a regression method that produces a mask of ratios for each sound frequency. The purpose of this mask is to remove extraneous noise while leaving the human voice intact. This method was far from perfect but an excellent early solution.</p><p>After the publication of the idea of using deep neural networks, various approaches were proposed, one of which is to use a recurrent neural network. This method was demonstrated in the RNNoise project. Its main idea is to combine classical signal processing with deep learning to create a real-time noise absorption algorithm. A more detailed description is given in the article «A Hybrid DSP/Deep Learning Approach to Real-Time Full-Band Speech Enhancement» <ref type="bibr" target="#b18">[20]</ref>, authored by Jean-Marc Valin.</p><p>Another interesting example of using neural networks for noise absorption was proposed in «Practical Deep Learning Audio Denoising»[21], authored by Thalles Santos Silva. This article used a convolutional neural network (CNN) to create a statistical model that can extract the clean signal and return it to the user. In most results, however, the model only manages to smooth out the noise, not to remove it. 
This is why recurrent neural networks were chosen for this work.</p><p>The next article, authored by Michael Michelashvili and Lior Wolf, proposed a noise absorption method that trains on the noisy audio signal itself and outputs the clean underlying signal <ref type="bibr" target="#b19">[22]</ref>. However, the technique is not fully supervised and is trained only on the specific audio file being denoised. The disadvantage of this implementation is that if the type of noise changes, the neural network, trained on other data, will not provide noise absorption.</p><p>Another method of training recurrent neural networks was proposed in the article «Listening to Sounds of Silence for Speech Denoising» <ref type="bibr" target="#b20">[23]</ref> by Ruilin Xu, Rundi Wu, Yuko Ishiwaka, Carl Vondrick, and Changxi Zheng. The proposed approach is based on an observation about human speech, namely the pauses between words and sentences. These intervals are used to train the model. Since this algorithm studies the noise in real time, it can learn the dynamics of the noise and absorb it. This method is, in our opinion, one of the best, because unlike the previous one it can adapt to changes in the noise.</p><p>Noise absorption is used not only for audio and video calls but also for hearing aids. The article "Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality" <ref type="bibr" target="#b21">[24]</ref>, written by Mahmoud Keshavarzi, Tobias Goehring, Justin Zakis, Richard E. Turner, and Brian C. J. Moore, demonstrated the use of an RNN to reduce wind noise, which improved sound quality. The recurrent neural network performed significantly better than high-pass filtering. These results were tested with eighteen participants, nine of whom had mild or moderate hearing impairments. 
According to them, the sound quality and intelligibility were much better when using the RNN.</p><p>Analysis of the sources described above provided more information about the use of deep neural networks and their various practical applications. In addition, multiple methods and modifications have been developed with whose help noise absorption achieves much better results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Materials and Methods</head><p>The separation of noise and human speech in an audio stream is a complex problem for which there are no high-performance algorithms.</p><p>Traditional digital signal processing (DSP) algorithms <ref type="bibr" target="#b15">[17]</ref> try to constantly find the noise pattern and adapt to it by processing the sound frame by frame.</p><p>There are two basic types of noise: stationary and nonstationary. An example is shown below in Fig. <ref type="figure" target="#fig_0">1</ref>. Digital signal processing (DSP) is a dynamically evolving field of computer technology that covers both hardware and software <ref type="bibr" target="#b22">[25]</ref>. Related areas include information theory, the theory of optimal signal reception, and pattern recognition theory. In the first case, the main task is to extract the signal from background noise and interference of different physical nature; in the second, automatic recognition, i.e., classification and identification of the signal.</p><p>Digital processing uses the representation of signals as sequences of numbers or symbols. The purpose of such processing may be to estimate the characteristic parameters of a signal or to convert the signal into a format that is in some sense more convenient. Classical numerical-analysis formulas for interpolation, integration, and differentiation are themselves digital processing algorithms. High-speed digital computers enable increasingly complex and efficient signal processing algorithms, and recent advances in integrated circuit technology promise high cost-effectiveness in building very complex digital signal processing systems.</p><p>Digital signal processing is an alternative to traditional analog processing. 
Its most critical qualitative advantages include: guaranteed accuracy, independent of destabilizing factors, when implementing arbitrarily complex (optimal) processing algorithms; programmability and functional flexibility; the possibility of adaptation to the processed signals; and manufacturability.</p><p>The development of a new perspective on digital signal processing was accelerated by the discovery, in 1965, of efficient algorithms for calculating the Fourier transform. This class of algorithms became known as the fast Fourier transform (FFT).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Fast Fourier Transform</head><p>The fast Fourier transform (FFT) is a mathematical algorithm that efficiently calculates the discrete Fourier transform (DFT) of a given sequence <ref type="bibr" target="#b18">[20]</ref>. The difference between the FT (Fourier transform) and the DFT is that the FT operates on a continuous signal, while the DFT receives a discrete signal at its input; the FFT is simply a fast way of computing the DFT. The DFT converts a sequence into its frequency components in the same way that the FT does for a continuous signal: it converts the time domain into the frequency domain.</p><p>The visualization of the process is demonstrated below (Fig. <ref type="figure">2</ref>).</p></div>
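The relationship between the DFT and the FFT can be illustrated with a short sketch (assuming NumPy; the `dft` helper below is a hypothetical name, written directly from the DFT definition). NumPy's `np.fft.fft` produces exactly the same coefficients, only in O(n log n) rather than O(n²) time:

```python
import numpy as np

def dft(x):
    """Naive O(n^2) discrete Fourier transform, straight from the definition:
    X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    n = len(x)
    k = np.arange(n)
    # Outer product of frequency and time indices gives the full twiddle matrix.
    w = np.exp(-2j * np.pi * np.outer(k, k) / n)
    return w @ x

# A short discrete test signal: 8 samples of a one-cycle sine.
x = np.sin(2 * np.pi * np.arange(8) / 8)

# The FFT computes exactly the same DFT, just faster.
assert np.allclose(dft(x), np.fft.fft(x))
```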
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 2: Geometric Fourier transform</head><p>The FFT works as follows. In the first step, a portion of the signal is scanned and stored in memory for further processing. Two parameters are relevant:</p><p>1. The sampling frequency (fs) of the measuring system (for example, 48 kHz). This is the average number of samples obtained per second.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Selected number of samples; block length (BL).</head><p>From the two main parameters fs and BL, further measurement parameters can be determined. For example, the bandwidth (fn) indicates the theoretical maximum frequency that can be determined using the FFT (Formula 1).</p><formula xml:id="formula_0">fn = fs / 2 (1)</formula><p>For example, at a sampling frequency of 48 kHz it is theoretically possible to determine frequency components up to 24 kHz. In an analog system, however, the practically realizable value is usually somewhat lower because of the analog filters, for example 20 kHz.</p><p>Measurement duration (D): the measurement duration is determined by the sampling frequency fs and the block length BL, D = BL / fs (Formula 2). The frequency resolution (df) indicates the frequency interval between two measurement results (Formula 3).</p><formula xml:id="formula_1">df = fs / BL (3)</formula><p>In practice, the sampling rate fs is usually a fixed property of the system. However, by selecting the block length BL, you can control the measurement duration and frequency resolution. The following applies:</p><p>• A short block length results in rapid repetition of measurements with coarse frequency resolution.</p><p>• A long block length results in slower repetition of measurements with fine frequency resolution.</p></div>
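A minimal sketch of these relationships, assuming NumPy and illustrative values (fs = 48 kHz, BL = 1024):

```python
import numpy as np

# Assumed example values: a 48 kHz system with a block length of 1024 samples.
fs = 48_000          # sampling frequency, Hz
BL = 1024            # block length, samples

fn = fs / 2          # theoretical bandwidth (Formula 1): 24 kHz
D = BL / fs          # measurement duration (Formula 2): ~21.3 ms
df = fs / BL         # frequency resolution (Formula 3): 46.875 Hz

# One FFT block: the rfft of BL real samples yields BL/2 + 1 frequency bins
# spaced df apart, covering 0 .. fn.
block = np.random.default_rng(0).standard_normal(BL)
spectrum = np.fft.rfft(block)
freqs = np.fft.rfftfreq(BL, d=1 / fs)

assert len(spectrum) == BL // 2 + 1
assert freqs[1] - freqs[0] == df
assert freqs[-1] == fn
```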
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Spectral subtraction</head><p>The method of spectral subtraction is widespread.</p><p>Additive stationary noise is generated by the environment, recording equipment, and so on. Stationarity means that the properties of the noise (power, spectral composition) do not change over time. Additivity means that the noise is summed with the "pure" signal y[t] and does not depend on it (Formula 4):</p><formula xml:id="formula_2">x[t] = y[t] + noise[t], (4)</formula><p>where t is the time.</p><p>A spectral subtraction algorithm is used to suppress additive stationary noise. It consists of the following stages:</p><p>1. Decomposition of the signal by the short-time (windowed) Fourier transform (STFT), which compactly localizes the signal energy. 2. Assembly of the noise footprint to be subtracted. The noise model is obtained by averaging the amplitudes of the spectrum over a pre-selected region of noise that contains no proper signal (Formula 5).</p><formula xml:id="formula_3">footprint[f] = (1/k) ∑_{t=1}^{k} noise[f, t], (5) where noise[f, t]</formula><p>is the noise spectrum, f is the Fourier transform index corresponding to the frequency, t is the number of the current STFT window, and k is the number of windows in the region with noise.</p><p>3. "Subtraction" (in a generalized sense) of the amplitude spectrum of the noise from the amplitude spectrum of the signal. 4. Inverse STFT: synthesis of the resulting signal. The subtraction of the amplitude spectra is carried out by Formula 6, where Y[f, t] is the amplitude spectrum of the resulting purified signal and k is the suppression factor. The phase spectrum of the cleaned signal is set equal to the phase spectrum of the noisy signal. The result of this method is shown in Fig. <ref type="figure" target="#fig_3">3</ref>. 
The problem with these methods is that the FFT and spectral subtraction are not suitable for nonstationary signal analysis, because nonstationary signals consist of frequency components that change over time. As is known, the Fourier transform is suitable for signals whose frequencies are fixed in time (e.g., sine waves, voiced signals). For nonstationary signals, therefore, the Fourier transform cannot give the proper spectrum, and we will not know which frequencies are present at which time. In spectral subtraction, the STFT coefficients of the noise signal are statistically random, which leads to uneven noise elimination.</p><formula xml:id="formula_4">Y[f, t] = max{X[f, t] − k · W[f, t], 0}, (6) where X[f, t] is the amplitude spectrum of the noisy signal and W[f, t] is the noise footprint.</formula><p>Nonstationary noises have complex patterns that are difficult to distinguish from the human voice. Such a signal can also be brief, coming and going very quickly (for example, keyboard input or a siren). To handle both stationary and nonstationary noise, one needs to go beyond traditional DSP.</p><p>To better eliminate noise, various neural-network methods are used, some of which were discussed briefly in the review of literature sources. Consider some of these methods in more detail.</p></div>
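The spectral subtraction stages can be sketched as follows (assuming NumPy; the frame length, suppression factor, and synthetic test signal are illustrative choices, and a real implementation would use overlapping windows and an inverse STFT for resynthesis):

```python
import numpy as np

rng = np.random.default_rng(1)
F = 256                                        # STFT frame length, samples

def frames_fft(x):
    """Non-overlapping Hann-windowed frames -> rows of rfft coefficients."""
    n = len(x) // F
    w = np.hanning(F)
    return np.fft.rfft(x[: n * F].reshape(n, F) * w, axis=1)

# Synthetic input: 10 noise-only frames, then tone + noise.
noise = 0.3 * rng.standard_normal(30 * F)
tone = np.sin(2 * np.pi * 1000 * np.arange(30 * F) / 48_000)
x = noise.copy()
x[10 * F:] += tone[10 * F:]

X = frames_fft(x)                              # shape (frames, bins)
# Noise footprint (Formula 5): average magnitude over the noise-only frames.
W = np.abs(X[:10]).mean(axis=0)

k = 1.0                                        # suppression factor
# Spectral subtraction (Formula 6): clamp negative magnitudes to zero,
# keeping the noisy phase for resynthesis.
Y = np.maximum(np.abs(X) - k * W, 0.0) * np.exp(1j * np.angle(X))

# Most of the magnitude in the noise-only frames should be removed.
assert np.abs(Y[:10]).sum() < 0.5 * np.abs(X[:10]).sum()
```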
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Method using convolutional neural networks</head><p>This method is based on "A Fully Convolutional Neural Network for Speech Enhancement" <ref type="bibr">[21]</ref>. In it, the author offers a cascaded redundant convolutional encoder-decoder network (CR-CED).</p><p>The model is based on a symmetric encoder-decoder architecture. Both components contain repeated blocks of convolution, ReLU, and batch normalization. In total, the network includes 16 such blocks, which adds up to 33K parameters.</p><p>In addition, there are skip connections between some encoder and decoder units, where the feature vectors of the two components are combined by addition. As in ResNets, the skip connections accelerate convergence and reduce the vanishing-gradient problem.</p><p>Another essential feature of the CR-CED network is that the convolution is performed in only one dimension. More specifically, given an input spectrum of shape (129 x 8), the convolution is performed only along the frequency axis (i.e., the first one). This ensures that the frequency axis remains unchanged during forward propagation.</p><p>The combination of a small number of learnable parameters and the model architecture makes this model extremely light and fast to run, especially on mobile devices.</p><p>Once the network produces its output, we optimize (minimize) the mean squared error (MSE) between the output and the target (clean) signals (Fig. <ref type="figure">4</ref>).</p></div>
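The frequency-axis-only convolution can be illustrated with a small sketch (assuming NumPy; the smoothing kernel is illustrative, not the trained CR-CED weights):

```python
import numpy as np

rng = np.random.default_rng(0)
spec = rng.standard_normal((129, 8))      # (frequency bins, time frames), as in the text

kernel = np.array([0.25, 0.5, 0.25])      # illustrative 1-D kernel

# Convolve along the frequency axis only (axis 0), one time frame at a time,
# with 'same' padding so the frequency axis keeps its 129 bins.
out = np.stack([np.convolve(spec[:, t], kernel, mode="same")
                for t in range(spec.shape[1])], axis=1)

assert out.shape == (129, 8)              # frequency axis unchanged, as CR-CED requires
```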
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: The principle of operation of the CR-CED convolutional network</head><p>The results of this method are presented in Fig. <ref type="figure">5</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 5:</head><p>The results of the method using convolutional neural networks. Figure <ref type="figure">5</ref> shows the original audio without noise, the audio to which the noise was added, and the result of the method's processing. As can be seen, given the complexity of the task, the results are acceptable but not perfect, because some noise remains in this audio file.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Method using a recurrent neural network (GRU)</head><p>This method grew out of work on noise removal using artificial intelligence <ref type="bibr" target="#b17">[19]</ref>. It does not rely on deep learning alone; it uses a hybrid approach. The central processing loop is based on 20 ms windows with 50% overlap (a 10 ms offset). Both analysis and synthesis use a Vorbis window, which satisfies the Princen-Bradley criterion. The window is defined by Formula 7:</p><formula xml:id="formula_5">w(n) = sin[(π/2) sin²(πn/N)], (7)</formula><p>where N is the length of the window.</p><p>Fig. <ref type="figure" target="#fig_5">7</ref> shows a block diagram of this method. For the transformed signal X(k), the energy in band b is calculated by Formula 8:</p><formula xml:id="formula_7">E(b) = ∑_k w_b(k) |X(k)|² (8)</formula><p>The ideal gain in band b is defined as g_b:</p><formula xml:id="formula_8">g_b = √(E_s(b) / E_x(b)), (9)</formula><p>where E_s(b) is the energy of the pure speech and E_x(b) is the energy of the input (noisy) speech.</p><p>Given the ideal gain ĝ_b, the following interpolated gain is applied to each frequency bin k (Formula 10):</p><formula xml:id="formula_11">r(k) = ∑_b w_b(k) ĝ_b (10)</formula><p>The main drawback of the lower resolution that comes from using bands is that it is not fine enough to suppress the noise between pitch harmonics. But this is not essential, and it can easily be handled with a comb filter.</p><p>Since the result we compute is based on 22 bands, there would be no point in using a higher resolution at the input, so the same 22 bands are used to supply spectral information to the neural network <ref type="bibr" 
target="#b19">[22]</ref>.</p><p>To better prepare the data for training, a DCT is applied to the log-spectrum. At the output we obtain 22 Bark-frequency cepstral coefficients (BFCC). These form a cepstrum based on the Bark scale, closely related to the MFCC coefficients often used for speech recognition.</p><p>In addition to the cepstral coefficients, the following are also added:</p><p>• The first and second derivatives of the first 6 coefficients across frames • The pitch period (1/frequency of the fundamental) • The pitch gain (voicing strength) in 6 bands • A special non-stationarity value that is useful for detecting speech (but beyond the scope of this description)</p><p>This makes a total of 42 input features for the neural network. The neural network architecture used in this method is inspired by the traditional approach to noise suppression. Most of the work is performed by three layers of GRUs. Figure <ref type="figure" target="#fig_5">7</ref> shows the layers used to calculate the bands. </p></div>
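The per-band energy, ideal gain, and interpolated gain of Formulas 8-10 can be sketched as follows (assuming NumPy; the triangular band windows and random spectra are illustrative stand-ins for the actual Bark-scale layout and real STFT frames):

```python
import numpy as np

rng = np.random.default_rng(2)
K = 480                                   # frequency bins in one frame (illustrative)
B = 22                                    # number of Bark-like bands

# Illustrative triangular band windows w_b(k); the real Bark layout is non-uniform.
centers = np.linspace(0, K - 1, B)
k = np.arange(K)
width = centers[1] - centers[0]
w = np.maximum(0.0, 1.0 - np.abs(k[None, :] - centers[:, None]) / width)  # (B, K)

X_clean = rng.standard_normal(K)          # stand-ins for the clean and noisy spectra
X_noisy = X_clean + 0.5 * rng.standard_normal(K)

def band_energy(X):
    """E(b) = sum_k w_b(k) * X(k)^2  (Formula 8)."""
    return w @ (X ** 2)

# Ideal per-band gain (Formula 9): g_b = sqrt(E_s(b) / E_x(b)).
g = np.sqrt(band_energy(X_clean) / band_energy(X_noisy))

# Interpolated per-bin gain (Formula 10): r(k) = sum_b w_b(k) * g_b.
r = g @ w

assert g.shape == (B,) and r.shape == (K,)
```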
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>We will conduct experiments comparing three different methods of noise processing in audio files. Two of them are algorithms using artificial intelligence, namely a CNN and an RNN.</p><p>The first algorithm to be used for comparison is spectral subtraction. The main steps of this algorithm:</p><p>• Calculate the FFT of an audio clip that contains noise To begin, load the data without noise (Fig. <ref type="figure" target="#fig_7">8</ref>). Then divide the data from the file by 32768, because the file is a .wav: its samples are 16-bit two's-complement values in the range [-32768, 32767], so dividing by 32768 maps them to the range [-1, 1].  The STFT algorithm consists of the following steps:</p><p>• Select a data segment from the overall signal • Multiply this segment by a half-cosine window function • Pad the end of the segment with zeros • Compute the Fourier transform of the segment, obtaining positive and negative frequencies • Combine the energy of the positive and negative frequencies and return the one-sided spectrum • Scale the resulting spectrum to dB for easy viewing • Threshold the signal to eliminate noise beyond the noise threshold.</p><p>After performing this function, we obtain a complex-valued matrix of short-time Fourier transform coefficients. We convert it to dB and proceed to the next step.</p><p>We then calculate the noise statistics, namely the mean and standard deviation. Next, we multiply the standard deviation by the parameter n_std_thresh, which specifies how many standard deviations above the mean a value must be to be considered signal rather than noise. By default, this parameter has a value of 1.5. We add the result to the mean to obtain the noise threshold.</p><p>Calculate the STFT for the non-noisy signal and also convert it to dB. 
Now we create a mask. For this, we find the minimum value of the complex matrix obtained in the previous step, and we create a smoothing filter for the mask over time and frequency. We then calculate the threshold for each frequency band.</p><p>The mask is convolved with the smoothing filter using fftconvolve. Convolution is a simple mathematical operation built on multiplying vectors, so a direct implementation has complexity O(n²). To speed up the process, the convolution is performed via the fast Fourier transform, which reduces the complexity from O(n²) to O(n log n). The algorithm is presented in Fig. <ref type="figure" target="#fig_9">11</ref>.   The following algorithm uses a convolutional neural network (CNN) to reduce noise in an audio file; its architecture consists of an encoder and a decoder with residual connections between pairs of layers.</p><p>The first step is to initialize the weights, and this step is important. If the weights are too small, the variance of the input signal shrinks as it passes through each layer of the network; the activations eventually fall to very small values and are no longer useful. On the other hand, if the weights are too large, the variance of the input data tends to grow rapidly with each subsequent layer. Initializing the network with suitable weights is therefore essential for it to work correctly, and we need to make sure the weights are within reasonable limits before training starts. That is why Xavier initialization is used.</p><p>Xavier initialization is an initialization scheme for neural networks. 
Biases are initialized to 0, and the weight w_ij at each layer is initialized as:</p><formula xml:id="formula_12">w_ij ∼ U[−1/√n, 1/√n]</formula><p>, where U is a uniform distribution and n is the size of the previous layer (the number of columns in W).</p><p>In the second step, we initialize the vector z with random values from 0 to 1. The next step is to obtain a mask of the size of the STFT of the signal, with values in the range [0, 1]; the signal Y is used as the method's input.</p><p>Once the vector z has been obtained, the method runs through iterations. The number of iterations is set by the parameter t, which is passed to the function along with the audio file. Each iteration consists of the following steps. First, the network f_{i-1} is trained for one iteration, yielding f_i. Then f_i(z) and its STFT Y_i are calculated. Next, we find the value H_i, the absolute difference between Y_i and Y_{i-1}, and normalize the resulting difference by Y_i.</p><p>The following steps check the obtained value of H_i. To get rid of extreme values, all values below 10 and above 90 are truncated. The value C is the product of the matrices; C has high values at the frequency-time coordinates where the recovery of y by the network f is least stable.</p><p>After all iterations are completed, the value of C is normalized to lie in the range from 0 to 1. A high accumulation of variability implies noise, and therefore the value is flipped (max(C) − C, not C − min(C)) before the mask M is returned.</p><p>The method using recurrent neural networks uses a recurrent network with GRUs designed to overcome the noise in the audio recording. This architecture is based on three recurrent layers, each responsible for one of the main components. It includes 215 units across 4 hidden layers, the largest of which contains 96 units. Increasing the number of layers does not significantly improve the quality of noise absorption. 
However, the loss function and the way the training data is constructed substantially influence the final result.</p><p>One of the essential parts of training is the dataset. To train the network, both noisy and clean speech are needed, so the training data is built artificially, as for the previous algorithms.</p><p>Noise is mixed in at different levels to provide a wide range of signal-to-noise ratios, including clean-speech and noise-only segments. The algorithm does not use cepstral mean normalization, and data augmentation is used to make the network resistant to changes in frequency response. This is achieved by filtering the noise and speech signals independently for each training example using second-order filters (Formula 11). In total, there are 6 hours of speech and 4 hours of noise data, from which 140 hours of noisy speech are generated using various combinations of gains and filters and by resampling the data to frequencies between 40 kHz and 54 kHz.</p><p>The RNNoise class consists of the following methods:</p><p>• read_wav(): takes the name of a .wav audio recording, converts it to a supported format (16-bit mono), and returns a pydub.AudioSegment object with the audio recording • write_wav(): accepts the name of a .wav audio recording and a pydub.AudioSegment object (or a byte string with audio data without wav headers) and saves the audio recording under the given name • filter(): accepts a pydub.AudioSegment object (or a byte string with audio data without wav headers), resamples it to 48000 Hz, splits the audio into frames (10 milliseconds long), clears them of noise, and returns a pydub.AudioSegment object (or a byte string without wav headers) while preserving the original sampling rate • filter_frame(): clears a single frame (10 ms, 16-bit mono, 48000 Hz) of noise (accessing the binary RNNoise library directly)</p><p>The input is an audio file that has some noise (Fig. 
<ref type="figure" target="#fig_13">14 (a</ref>)), and the output is an audio file with reduced noise (Fig. <ref type="figure" target="#fig_13">14 (b)</ref>). </p></div>
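The mask-accumulation procedure described for the fluctuation-based method can be sketched in a few lines; the list-based spectrogram representation, helper names, and toy values below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the variability-accumulation mask described above (assumption:
# spectrograms are plain lists of magnitude values; real code operates on
# STFT matrices). High accumulated variability is treated as noise, so the
# mask is flipped with max(C) - C before being returned.

def percentile(values, p):
    """Nearest-rank percentile of a list (p in [0, 100])."""
    s = sorted(values)
    k = min(len(s) - 1, max(0, int(round(p / 100.0 * (len(s) - 1)))))
    return s[k]

def accumulate_mask(spectrogram_history):
    """spectrogram_history: per-iteration magnitude lists |Y_i|."""
    n = len(spectrogram_history[0])
    c = [1.0] * n  # running product across iterations
    for prev, cur in zip(spectrogram_history, spectrogram_history[1:]):
        # H_i: absolute difference between successive estimates, normalized
        h = [abs(a - b) / (abs(b) + 1e-12) for a, b in zip(prev, cur)]
        lo, hi = percentile(h, 10), percentile(h, 90)
        h = [min(max(v, lo), hi) for v in h]  # truncate extreme values
        c = [ci * hv for ci, hv in zip(c, h)]
    # flip: high variability implies noise, so unstable bins get LOW mask values
    top = max(c)
    flipped = [top - v for v in c]
    rng = (max(flipped) - min(flipped)) or 1.0
    return [(v - min(flipped)) / rng for v in flipped]

# Toy history: the middle frequency bin fluctuates most between iterations.
hist = [[1.0, 1.0, 1.0], [1.1, 2.0, 1.0], [1.0, 3.0, 1.05]]
mask = accumulate_mask(hist)
```

The most unstable bin ends up with the lowest mask value, matching the flipped convention described in the text.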
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>The algorithms above were tested on four audio recordings with different noise types, stationary and non-stationary, such as music or background conversation.</p><p>STOI and PESQ scores were used to evaluate the audio recordings obtained after noise removal. STOI is a metric for predicting the intelligibility of noisy speech rather than its quality (which is usually evaluated in silence). The main subjective tests behind this metric are intelligibility tests (counting recognized words, symbols, etc.) <ref type="bibr" target="#b16">[18]</ref>.</p><p>PESQ is a family of standards comprising a testing methodology for automatically assessing the speech quality experienced by a telephone-system user. It was standardized as Recommendation ITU-T P.862 <ref type="bibr" target="#b17">[19]</ref>.</p><p>The results are presented in Table <ref type="table" target="#tab_0">1</ref>. On stationary noise, every network showed strong results. Spectral subtraction performed worst on the street-noise data, because street noise contains sudden drops and rises. The recurrent neural network showed its worst result when removing background music, which is a particularly challenging kind of noise. Diagrams of the audio with and without the added musical noise are shown in Fig. <ref type="figure" target="#fig_14">15</ref>. These graphs (Fig. <ref type="figure" target="#fig_14">15</ref>) display the sound before and after processing by the different methods. The convolutional network handled this sound best, reducing the amount of noise the most. It is also worth noting that one of the essential requirements of noise suppression is not to degrade the sound itself. 
By this measure spectral subtraction fares worst, because in addition to the noise it removes part of the speech itself, which sometimes makes the speech harder to recognize. The advantages of this algorithm are its simplicity and the absence of any training.</p><p>In this case, the CNN algorithm beat RNN overall, but as Table <ref type="table" target="#tab_0">1</ref> shows, RNN was better at removing street and stationary noise, by 0.0024297737659774 and 0.0147917149013511 STOI points, respectively. The RNN method also showed promising PESQ scores. Diagrams of the sound with added street noise, and of the sound after processing by each method, are presented in Fig. <ref type="figure" target="#fig_15">16</ref>. </p></div>
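As a quick arithmetic check, the STOI margins quoted above can be reproduced directly from the Table 1 values (a minimal sketch; the dictionary layout is just for illustration):

```python
# STOI scores copied from Table 1; verify the RNN-over-CNN margins quoted
# in the text for street and stationary noise.
stoi = {
    ("CNN", "street"): 0.8299030860960181,
    ("RNN", "street"): 0.8323328598619955,
    ("CNN", "stationary"): 0.9658304757896437,
    ("RNN", "stationary"): 0.9806221906909948,
}

street_margin = stoi[("RNN", "street")] - stoi[("CNN", "street")]
stationary_margin = stoi[("RNN", "stationary")] - stoi[("CNN", "stationary")]
```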
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>The experiments demonstrate several methods for removing noise from an audio file: spectral subtraction, recurrent neural networks, and convolutional neural networks. These methods were tested on different types of noise: stationary noise, background conversation, street sounds, and background music. The results are summarized in bar charts.</p><p>To begin with, let's analyze the STOI scores (Fig. <ref type="figure" target="#fig_5">17</ref>).</p><p>Figure <ref type="figure" target="#fig_5">17</ref>: STOI estimates for different audio files after noise suppression with spectral subtraction, convolutional neural networks, and recurrent neural networks</p><p>Fig. <ref type="figure" target="#fig_5">17</ref> shows that the worst noise-removal result came from spectral subtraction, which, unlike the other two, is not an artificial-intelligence method. Its best result was on stationary noise, since removing such noise is the primary purpose of the method, but even there it trailed CNN by 0.0695942286068124 and RNN (GRU) by 0.0843859435081635 STOI points. So when comparing simple algorithms against methods based on artificial intelligence, the latter are preferred, as they can adapt to different noises. With them the quality of speech in the audio file suffers much less, which yields a higher STOI score, since this metric is based on speech intelligibility. Because spectral subtraction damages speech the most, it is also the worse choice for preprocessing data for subsequent speech recognition, where it would degrade the final data.</p><p>The comparison of the AI methods showed that each copes with the task, but there is no clear winner. 
This likely reflects a trade-off between different factors in the two approaches. RNN processing better reduced stationary and street noise, while CNN processing performed better on background conversation and music noise according to the STOI score. It should be noted, however, that the differences between the scores are small, as can be seen in Fig. <ref type="figure" target="#fig_16">18</ref>.</p><p>Let's now analyze the second metric, PESQ, which is based on speech quality. As with the previous metric, the best results were obtained when removing stationary noise. Since this metric is based on speech quality, it is not surprising that spectral subtraction scored so low. The RNN method showed the best results in all cases except musical noise, which indicates that this algorithm does not severely damage the audio file while removing noise: the audio itself remains of good quality.</p><p>Therefore, algorithms based on artificial intelligence are more advantageous, as they can adapt to the sound and do less damage to the data itself.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusion</head><p>Noise in sound recordings is a classic problem that arose long ago but has not been fully solved to this day. Noise can damage audio files, impairing audio recognition and making speech harder to recognize. AI technologies are now widely applied to this problem, and this paper demonstrates results that show their benefits: they outperformed spectral subtraction both in noise removal and in preserved speech intelligibility, which are the main goals in this area.</p><p>The studies were performed using the CNN and RNN methods, and neither was a clear winner. Although CNN is more commonly used in image processing, the algorithm also proved itself well on noise-related problems in audio files, and RNN is not far behind. Each method performed better on different noises: RNN outperformed CNN in removing stationary and street noise, while CNN was better on background conversation and music. RNN also preserved sound quality well, in contrast to CNN, which gives that algorithm certain advantages.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Two types of noise: stationary (left) and non-stationary (right). Stationary means that the noise statistics (intensity, spectrum shape, and other factors) do not change over time; metaphorically speaking, none of the statistical parameters of the process changes its position in parameter space. Traditional DSP algorithms (adaptive filters) can be quite effective at filtering such noise. Let's take a closer look. Digital Signal Processing (DSP) is a dynamically evolving field of computer technology that covers both hardware and software <ref type="bibr" target="#b22">[25]</ref>. 
In particular, related areas for digital signal processing are information theory, optimal signal reception theory, and pattern recognition theory. In the first case, the main task is to pick out the signal against background noise and interference of various physical natures; in the second, it is automatic recognition, i.e., classification and identification of the signal. Digital processing represents signals as sequences of numbers or symbols. The purpose of such processing may be to estimate the signal's characteristic parameters or to convert the signal into a format that is in some sense more convenient. Classical numerical-analysis formulas, such as those for interpolation, integration, and differentiation, are themselves digital processing algorithms. High-speed digital computers enable increasingly complex and efficient signal processing algorithms, and recent advances in integrated circuit technology promise high cost-effectiveness in building very complex digital signal processing systems. Digital signal processing is an alternative to traditional analog processing. Its most important qualitative advantages include the ability to implement arbitrarily complex (optimal) processing algorithms; accuracy that is guaranteed and independent of destabilizing factors; programmability and functional flexibility; the possibility of adaptation to the processed signals; and manufacturability. The development of a new perspective on digital signal processing was accelerated by the discovery in 1965 of efficient algorithms for computing Fourier transforms, a class of algorithms that became known as the fast Fourier transform (FFT).</figDesc><graphic coords="4,308.25,85.35,147.08,115.33" type="bitmap" /></figure>
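The FFT discussed above computes the discrete Fourier transform; a minimal direct O(N²) sketch of the DFT definition follows (the FFT produces the same result in O(N log N), which is what made real-time DSP practical):

```python
import cmath

def dft(x):
    """Direct O(N^2) discrete Fourier transform: X[k] = sum_t x[t] e^{-2pi i kt/N}.
    An FFT computes the same sums in O(N log N)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

# A pure cosine at one cycle per 4-sample frame concentrates its energy
# in bins 1 and 3 (the positive- and negative-frequency pair).
signal = [cmath.exp(2j * cmath.pi * t / 4).real for t in range(4)]  # ~[1, 0, -1, 0]
spectrum = dft(signal)
```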
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>where fs = 48 kHz and BL = 1024, this gives 1024/48000 s ≈ 21.33 ms.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Spectrograms of noisy signal (shower) and cleared (right)</figDesc><graphic coords="6,134.15,550.95,336.30,119.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Block diagram of the method using a recurrent neural network</figDesc><graphic coords="8,172.97,504.23,272.85,151.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Scheme of neural network architecture</figDesc><graphic coords="10,231.32,80.80,156.15,153.10" type="bitmap" /></figure>
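The GRU units used in the recurrent architecture can be illustrated with a scalar cell; the weights and input sequence below are toy values (real layers are vector-valued, with trained weight matrices):

```python
import math

def gru_step(x, h, w):
    """One scalar GRU step (minimal sketch). w holds six scalar weights,
    an illustrative stand-in for the trained matrices."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    z = sig(w["wz"] * x + w["uz"] * h)                   # update gate
    r = sig(w["wr"] * x + w["ur"] * h)                   # reset gate
    h_cand = math.tanh(w["wh"] * x + w["uh"] * (r * h))  # candidate state
    return (1.0 - z) * h + z * h_cand                    # blended new state

# Toy weights and a short input sequence (e.g. per-frame features).
weights = {"wz": 0.5, "uz": 0.1, "wr": 0.5, "ur": 0.1, "wh": 1.0, "uh": 0.5}
h = 0.0
for x in [0.2, -0.4, 0.1]:
    h = gru_step(x, h, weights)
```

Because the new state is a convex blend of the old state and a tanh candidate, the hidden state stays bounded, which is one reason GRUs train stably on long audio sequences.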
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>•</head><label></label><figDesc>Statistically calculate FFT by noise • Calculate the threshold based on the statistical noise • FFT is calculated by the signal • The mask is determined by comparing the FFT signal with the threshold value • The mask is smoothed by the filter by frequency and time • The mask is applied to the FFT signal and inverted</figDesc></figure>
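The gating steps listed above can be sketched as follows; the FFT/inversion steps are omitted, and the toy magnitudes, threshold rule (noise mean plus a multiple of its standard deviation), and smoothing window length are illustrative assumptions:

```python
# Sketch of the spectral-gating mask from the steps above, applied to
# per-bin magnitudes. The FFT of the noise segment and of the signal,
# and the final inversion, are omitted; bin values are toy numbers.

def noise_gate_mask(noise_mags, signal_mags, n_std=1.5, smooth=3):
    # Threshold from the noise statistics: mean + n_std * standard deviation.
    mean = sum(noise_mags) / len(noise_mags)
    var = sum((v - mean) ** 2 for v in noise_mags) / len(noise_mags)
    threshold = mean + n_std * var ** 0.5
    # Binary mask: keep bins louder than the noise threshold.
    mask = [1.0 if v > threshold else 0.0 for v in signal_mags]
    # Smooth the mask with a short moving average over frequency.
    half = smooth // 2
    return [sum(mask[max(0, i - half):i + half + 1])
            / len(mask[max(0, i - half):i + half + 1])
            for i in range(len(mask))]

noise = [0.1, 0.12, 0.09, 0.11]
signal = [0.1, 0.9, 1.1, 0.12]   # two loud speech bins amid noise-level bins
gated = noise_gate_mask(noise, signal)
```

The smoothed mask attenuates noise-level bins while passing the loud speech bins, which is then multiplied into the FFT of the signal before inversion.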
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Noise-free signal The next step is to add noise to the audio file (Fig. 9-10). The noise file also has a wav extension.</figDesc><graphic coords="10,153.65,535.89,311.50,68.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 9 :Figure 10 :</head><label>910</label><figDesc>Figure 9: Noise signal</figDesc><graphic coords="11,143.42,80.80,331.95,77.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Schematic representation of the FFT workflowAfter creating the mask, we proceed to the final stage, removing noise from the audio file. To do this, we use the inverse Fourier transform. The inverse transformation is when each subsequent window is returned to the time domain using IFFT. Then each window is shifted by the size of the step and added to the result of the previous shift. The following diagram represents this process.</figDesc><graphic coords="12,172.70,106.10,259.20,72.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: Schematic representation of the ISTFT workflow And in the end, we return the received audio file in which noise decreased. It is presented in Fig. 13.</figDesc><graphic coords="12,248.77,280.17,121.25,159.90" type="bitmap" /></figure>
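The overlap-add step of the ISTFT described above can be sketched as follows (the per-window IFFT is omitted and the time-domain frames are given directly; the constant windows are toy values):

```python
# Overlap-add reconstruction: each time-domain window is shifted by the
# hop (step) size and summed into the output buffer, as in the ISTFT
# description above.

def overlap_add(windows, hop):
    out_len = hop * (len(windows) - 1) + len(windows[0])
    out = [0.0] * out_len
    for i, win in enumerate(windows):
        for j, sample in enumerate(win):
            out[i * hop + j] += sample
    return out

# Two half-overlapping constant windows of length 4 with hop 2:
frames = [[1.0] * 4, [1.0] * 4]
rebuilt = overlap_add(frames, hop=2)
```

The doubled values in the overlap region show why practical ISTFTs divide by the summed analysis windows to restore unit gain.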
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 13 :</head><label>13</label><figDesc>Figure 13: Audio file signal after noise cancellation</figDesc><graphic coords="12,128.75,503.44,347.10,81.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head></head><label></label><figDesc>random values uniformly distributed in the range from -3/8 to 3/8. Robustness to the signal amplitude is achieved by varying the final level of the mixed signal.</figDesc></figure>
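Mixing speech and noise at varying levels, as used for constructing the training data, can be sketched as follows; the gain range and toy sample values are illustrative assumptions:

```python
import random

# Sketch of training-data construction: mix clean speech with noise at a
# random gain so the network sees a range of signal-to-noise ratios.
# The gain range is an illustrative choice, not the paper's exact values.

def mix(speech, noise, rng):
    gain = rng.uniform(0.1, 1.0)   # random noise level for this example
    return [s + gain * n for s, n in zip(speech, noise)], gain

rng = random.Random(0)             # fixed seed for reproducibility
speech = [0.5, -0.2, 0.3]
noise = [0.05, 0.04, -0.03]
noisy, gain = mix(speech, noise, rng)
```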
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 14 :</head><label>14</label><figDesc>Audio file diagram with noise (a) and after processing (b)</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_14"><head></head><label></label><figDesc>Figure 15: (a) -sound to add music noise, (b) -sound after processing by a spectral subtraction, (c) -sound after processing by a recurrent neural network , (d) -sound after processing by a convolutional neural network</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_15"><head>Figure 16 :</head><label>16</label><figDesc>Figure 16: (а) -sound with street noise, (b) -sound after processing by a spectral subtraction, (c) -sound after processing by a recurrent neural network , (d) -sound after processing by a convolutional neural network</figDesc><graphic coords="16,337.65,290.64,164.40,82.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_16"><head>Figure 18 :</head><label>18</label><figDesc>Figure 18: PESQ estimates for different audio files using noise absorption using spectral subtraction, convolutional neural networks and recurrent neural networks</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Noise removal results</figDesc><table><row><cell>Method</cell><cell>Audio file</cell><cell>STOI</cell><cell>PESQ</cell></row><row><cell>Spectral subtraction</cell><cell>audio_statistical_noise.wav</cell><cell>0.8962362471828313</cell><cell>2.0765810012817383</cell></row><row><cell></cell><cell>audio_offise_noise.wav</cell><cell>0.686388118838794</cell><cell>1.6020467281341553</cell></row><row><cell></cell><cell>audio_street_noise.wav</cell><cell>0.6626346873426312</cell><cell>1.25617253780365</cell></row><row><cell></cell><cell>audio_music_noise.wav</cell><cell>0.6668875301659041</cell><cell>1.273597002029419</cell></row><row><cell>CNN</cell><cell>audio_statistical_noise.wav</cell><cell>0.9658304757896437</cell><cell>3.4577016830444336</cell></row><row><cell></cell><cell>audio_offise_noise.wav</cell><cell>0.8712570598849072</cell><cell>2.6612484455108643</cell></row><row><cell></cell><cell>audio_street_noise.wav</cell><cell>0.8299030860960181</cell><cell>2.374866485595703</cell></row><row><cell></cell><cell>audio_music_noise.wav</cell><cell>0.8298315520589445</cell><cell>2.6806342601776123</cell></row><row><cell>RNN</cell><cell>audio_statistical_noise.wav</cell><cell>0.9806221906909948</cell><cell>3.5431809425354004</cell></row><row><cell></cell><cell>audio_offise_noise.wav</cell><cell>0.8428522793199281</cell><cell>3.0143574367834783</cell></row><row><cell></cell><cell>audio_street_noise.wav</cell><cell>0.8323328598619955</cell><cell>3.0021986961364746</cell></row><row><cell></cell><cell>audio_music_noise.wav</cell><cell>0.7372574604375085</cell><cell>1.5429587364196777</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A Two-Microphone Noise Reduction Method in Highly Nonstationary Multiple-Noise-Source Environments</title>
		<author>
			<persName><forename type="first">L</forename><surname>Junfeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Masato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yoiti</surname></persName>
		</author>
	<idno type="DOI">10.1093/ietfec/e91-a.6.1337</idno>
	</analytic>
	<monogr>
		<title level="j">IEICE Transactions on Fundamentals of Electronics Communications and Computer Sciences</title>
		<imprint>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Edmonson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tucker</surname></persName>
		</author>
		<title level="m">Digital Signal Processing System for Active Noise Reduction</title>
				<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">49</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Suppression of acoustic noise in speech using spectral subtraction</title>
		<author>
			<persName><forename type="first">S</forename><surname>Boll</surname></persName>
		</author>
		<idno type="DOI">10.1109/TASSP.1979.1163209</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Acoustics, Speech, and Signal Processing</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="113" to="120" />
			<date type="published" when="1979-04">April 1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A Regression Approach to Speech Enhancement Based on Deep Neural Networks</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lee</surname></persName>
		</author>
		<idno type="DOI">10.1109/TASLP.2014.2364452</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE/ACM Transactions on Audio, Speech, and Language Processing</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="7" to="19" />
			<date type="published" when="2015-01">Jan. 2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A Hybrid DSP/Deep Learning Approach to Real-Time Full-Band Speech Enhancement</title>
		<author>
			<persName><forename type="first">J</forename><surname>Valin</surname></persName>
		</author>
		<idno type="DOI">10.1109/MMSP.2018.8547084</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 20th International Workshop on Multimedia Signal Processing (MMSP)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Practical Deep Learning Audio Denoising</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">Santos</forename><surname>Silva</surname></persName>
		</author>
		<ptr target="https://sthalles.github.io/practical-deep-learning-audio-denoising/" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">Speech Denoising by Accumulating Per-Frequency Modeling Fluctuations</title>
		<author>
			<persName><forename type="first">M</forename><surname>Michelashvili</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wolf</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Listening to Sounds of Silence for Speech Denoising</title>
		<author>
			<persName><forename type="first">X</forename><surname>Ruilin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Rundi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Yuko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Carl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zh</forename><surname>Changxi</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Use of a Deep Recurrent Neural Network to Reduce Wind Noise: Effects on Judged Speech Intelligibility and Sound Quality</title>
		<author>
			<persName><forename type="first">M</forename><surname>Keshavarzi</surname></persName>
		</author>
		<idno type="DOI">10.1177/2331216518770964</idno>
	</analytic>
	<monogr>
		<title level="j">Trends in Hearing</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Computer Simulation of Systems and Processes</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">N</forename><surname>Kvetny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">V</forename><surname>Bogach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">R</forename><surname>Boyko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">Y</forename><surname>Sofina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">M</forename><surname>Shushura</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Chapter</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
		</imprint>
	</monogr>
	<note>: Digital Signal Processing</note>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Reay</surname></persName>
		</author>
		<idno type="DOI">10.1002/9781119078227.ch5</idno>
		<title level="m">Fast Fourier Transform</title>
				<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">A Fully Convolutional Neural Network for Speech Enhancement</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">R</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Lee</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2017">2017</date>
			<publisher>INTERSPEECH</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Removing Noise from Speech Signals Using Different Approaches of Artificial Neural Networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Omaima</surname></persName>
		</author>
		<idno type="DOI">10.5815/ijitcs.2015.07.02</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Technology and Computer Science</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="8" to="18" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Real-Time RNN Speech Noise Suppression on a MCU, STM32</title>
		<author>
			<persName><forename type="first">J</forename><surname>Ma</surname></persName>
		</author>
		<ptr target="https://medium.com/analytics-vidhya/real-time-rnn-speech-noise-suppression-on-amicrocontroller-stm32-e17d8c3eac57" />
	</analytic>
	<monogr>
		<title level="m">Real-Time RNN Speech Noise Suppression on a MCU (STM32</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Recurrent Neural Active Noise Cancellation</title>
		<author>
			<persName><forename type="first">M</forename><surname>Baranov</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<ptr target="https://towardsdatascience.com/deep-active-noise-cancellation-e364ce4562d4" />
		<title level="m">Recurrent Neural Active Noise Cancellation</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Machine-learning nonstationary noise out of gravitational wave detectors</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vajente</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Isi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Driggers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kissel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Szczepanczyk</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Comparison of Neural Networks and Least Mean Squared Algorithms for Active Noise Canceling</title>
		<author>
			<persName><forename type="first">S</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Kyung</surname></persName>
		</author>
		<ptr target="https://tigerprints.clemson.edu/all_theses/2920" />
		<imprint>
			<date type="published" when="2018">2018. 2920</date>
		</imprint>
	</monogr>
	<note type="report_type">All Theses</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">A short-time objective intelligibility measure for time-frequency weighted noisy speech</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">H</forename><surname>Taal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Hendriks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Heusdens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jensen</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICASSP.2010.5495701</idno>
		<ptr target="https://en.wikipedia.org/wiki/Perceptual_Evaluation_of_Speech_Quality" />
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Acoustics, Speech and Signal Processing</title>
				<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="4214" to="4217" />
		</imprint>
	</monogr>
	<note>Perceptual Evaluation of Speech Quality</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Noise reduction using spectral gating in python</title>
		<author>
			<persName><forename type="first">T</forename><surname>Sainburg</surname></persName>
		</author>
		<ptr target="https://timsainburg.com/noise-reduction-python.html" />
		<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">A convolutional recurrent neural network with attention framework for speech separation in monaural recordings</title>
		<author>
			<persName><forename type="first">C</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41598-020-80713-3</idno>
		<ptr target="https://doi.org/10.1038/s41598-020-80713-3" />
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">1434</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">The issue of access sharing to data when building enterprise information model</title>
		<author>
			<persName><forename type="first">N</forename><surname>Boiko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IX International Scientific and Technical conference, Computer science and information technologies (CSIT 2014)</title>
				<meeting><address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="23" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Application of Machine Algorithms for Classification and Formation of the Optimal Plan</title>
		<author>
			<persName><forename type="first">N</forename><surname>Boyko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hlynka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS 2021)</title>
		<title level="s">Main Conference</title>
		<meeting>the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS 2021)<address><addrLine>Lviv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">April 22-23, 2021</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1853" to="1865" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
