<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Implementation of Audio Navigation for Smart Campus</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">O</forename><surname>Petrova</surname></persName>
							<email>petrovaoa353@gmail.com</email>
							<affiliation key="aff1">
								<orgName type="institution">Zaporizhzhia National Technical University</orgName>
								<address>
									<addrLine>Zhukovsky str., 64</addrLine>
									<postCode>69063</postCode>
									<settlement>Zaporizhzhia</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff2">
								<orgName type="institution">KU Leuven</orgName>
								<address>
									<addrLine>Jan De Nayerlaan 5</addrLine>
									<postCode>2860</postCode>
									<settlement>Sint Katelijne Waver</settlement>
									<country key="BE">Belgium</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Implementation of Audio Navigation for Smart Campus</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">D790792F8F77E2973CDB3573696A9E8E</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T08:30+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>audio navigation</term>
					<term>SMART-CAMPUS</term>
					<term>BLE</term>
					<term>voice navigator</term>
					<term>indoor-positioning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article deals with the task of indoor navigation for visually impaired people. The authors analyse audio navigation software such as Google Assistant, Siri and Cortana, and suggest a model of a voice navigator that helps a person conveniently find their location and build the desired route. The developed software is integrated into the smart-campus solution, which improves the infrastructure of the university.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>According to statistics, people with disabilities nowadays make up about one sixth of all citizens of working age in the European Union. In Ukraine, persons with disabilities account for 6.1% of the total population <ref type="bibr" target="#b0">[1]</ref>. A person with disabilities faces many problems that are unknown to other people. This is mainly caused by restricted access of persons with disabilities to social facilities available to the majority of the population, such as shops, pharmacies, underground stations, railway stations, hairdressers, educational establishments, et cetera <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>. The reason is that such places lack special devices to assist people with disabilities. Ukraine is trying to adapt public buildings to this reality by constructing ramps and installing buttons for disabled people. At the legislative level, the Government amends the laws that regulate the rights of persons with disabilities, namely the Laws "On the Basis of Social Protection of Persons with Disabilities in Ukraine" <ref type="bibr" target="#b3">[4]</ref>, "On Amendments to Some Laws of Ukraine on Increasing Access of the Blind, Persons with Visual Impairments and Persons with Dyslexia to Works Published in a Special Format" <ref type="bibr" target="#b4">[5]</ref>, "On Amending Certain Legislative Acts of Ukraine on the Protection of the Rights of Persons with Disabilities" <ref type="bibr" target="#b5">[6]</ref>, and "On Amendments to Certain Laws of Ukraine on Education on the Organization of Inclusive Education" <ref type="bibr" target="#b6">[7]</ref>. However, a long time must pass before our country achieves the results that can be seen in Europe today. Therefore, the development of audio navigation systems to improve the social adaptation of people with visual disabilities is a very important task.</p><p>The idea of a smart campus based on BLE 4.0, where objects could talk to students, staff and visitors, was described in a number of publications <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. The use of voice in navigation systems could allow visually impaired people to connect many objects and events, to access the information in the navigation systems, and to support new systems of interaction with users, sensors, mobile devices and applications <ref type="bibr" target="#b9">[10]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Problem definition</head><p>For correct detection of the location inside a building, it is necessary to determine the current coordinates, compare the position with the cartographic representation, update the location in real time, and check the compliance of the current position with the planned route <ref type="bibr" target="#b10">[11]</ref>. Further, we will consider a system that uses data from beacons based on BLE 4.0 to identify the current location <ref type="bibr" target="#b11">[12]</ref>:</p><formula xml:id="formula_0">S = ⟨X, B, R, Z, K⟩<label>(1)</label></formula><p>where X is the input data (x1 - data from sensors, x2 - accelerometer readings, x3 - gyroscope readings, x4 - data from beacons, x5 - voice commands), B is the cartographic representation (a map represented as an [M, N] matrix, where M is the number of points along the X-axis and N the number of points along the Y-axis), R is information about the decisions taken (r1, r2, ..., rn), Z is the set of output devices (z1 - camera, z2 - audio recording, z3 - phone), and K is the robot mode (k1 - autonomous, k2 - controlled).</p><p>Positioning methods were developed for this class of systems <ref type="bibr" target="#b12">[13]</ref>. However, the task of integrating audio navigation into Smart-Campus systems has not been solved yet. Solving this problem will allow the existing system to be adapted for people with disabilities.</p><p>The aim of the work was to develop a voice navigator and integrate it into the indoor positioning and navigation system.</p></div>
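The system tuple in (1) can be sketched as a simple data structure. This is a hypothetical illustration of the model, not the authors' code; all names and field layouts are assumptions:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class LocationSystem:
    """State S of the location-identification system, following Eq. (1):
    S = <X, B, R, Z, K>."""
    # X - input data: x1 sensors, x2 accelerometer, x3 gyroscope,
    # x4 BLE 4.0 beacon readings, x5 voice commands
    inputs: dict
    # B - cartographic representation: an M x N grid of map points
    map_grid: List[List[int]]
    # R - decisions taken so far (r1 ... rn)
    decisions: List[str] = field(default_factory=list)
    # Z - output devices: z1 camera, z2 audio recording, z3 phone
    outputs: tuple = ("camera", "audio", "phone")
    # K - robot mode: k1 autonomous or k2 controlled
    mode: str = "autonomous"

# Example: a 3x4 map grid and one beacon reading plus a voice command
s = LocationSystem(
    inputs={"beacons": [{"uuid": "b1", "rssi": -67}], "voice": "build route"},
    map_grid=[[0] * 4 for _ in range(3)],
)
print(len(s.map_grid), len(s.map_grid[0]))  # M=3, N=4
```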
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">An analysis of existing approaches to the implementation of voice navigation</head><p>The voice navigator should help a person navigate in the building using only voice.</p><p>However, for correct information exchange between the user and the application, a module that can recognize speech signals has to be developed.</p><p>Problems of voice navigation and speech recognition were investigated by D. Shpakov <ref type="bibr" target="#b13">[14]</ref>, E. A. Vereshchagina <ref type="bibr" target="#b14">[15]</ref>, Jen-Tzung Chien <ref type="bibr" target="#b15">[16]</ref>, Shinji Watanabe <ref type="bibr" target="#b16">[17]</ref>, Mohamed Afify <ref type="bibr" target="#b17">[18]</ref>, Chia-Yu <ref type="bibr" target="#b18">[19]</ref>, and Mark D. Skowronski <ref type="bibr" target="#b19">[20]</ref>.</p><p>Automated speech recognition systems can be classified according to many features: by the type of speech, by the set of speakers, and by the volume and completeness of the vocabulary that needs to be recognized. By type, speech is divided into discrete and continuous <ref type="bibr" target="#b20">[21]</ref>. Discrete speech is speech in which pauses between words are much longer than natural pauses inside words. In continuous speech, there are no significant pauses between words. The natural human mode of communication is continuous speech.</p><p>Each person has a unique voice, but from a phonetic point of view speech consists of many different sounds that have articulation differences. In general, these sounds are called phonemes. But in different words one and the same phoneme may be pronounced differently, so there is the notion of allophones, i.e. variants of phonemes <ref type="bibr" target="#b21">[22]</ref>.</p><p>For successful speech recognition, areas of the audio signal a few tens of milliseconds long, called frames, are considered <ref type="bibr" target="#b21">[22]</ref>. 
The difficulty is that some phonemes are quite similar to one another, but this problem can be solved in terms of probabilities: some phonemes are more likely for a given signal, others less. An acoustic model is built, which is a function that receives a small area of the audio signal (a frame) as input and outputs a distribution of the probabilities of different phonemes on this frame. On the basis of the acoustic model, one can say with a certain degree of confidence what was said.</p><p>The acoustic model can be built on the basis of such methods and algorithms as neural networks, Gaussian mixture models and dynamic programming <ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref><ref type="bibr" target="#b24">[25]</ref><ref type="bibr" target="#b25">[26]</ref>. In practice, hidden Markov models are widely used <ref type="bibr" target="#b26">[27]</ref>.</p><p>In the system discussed previously, incoming data can come in the form of voice messages. In this case, the task of recognizing audio events looks as follows: an audio signal arrives at the input of the audio event detector, represented by the sequence:</p><formula xml:id="formula_1">Ω = {o1, o2, ..., oM}<label>(2)</label></formula><p>where oi is the value of the sound signal parameter (one of M) taken by the detector at the i-th moment of time. The segments of time in which the detector takes these parameters are the states S = {s1, s2, ..., sN} of the model λ = (P, Φ, π). Each of these models corresponds to a different type of audio event, such as a certain word. In order for the system to be able to select the audio event that corresponds best to the initial segment of the audio signal (in other words, to recognize the word), it is necessary to find the probability of the appearance of the sequence Ω = {o1, o2, ..., oM} for each of the available models λ = (P, Φ, π). 
In this way, there is a set of observed values (the speech signal) and a probabilistic model that relates hidden states (phonemes) to the observable quantities.</p><p>Thus, the processing of a voice message occurs in a few steps:</p><p>Step 1. The input of the system for identifying the current location S is the input data X. One of the input parameters is a voice message x5.</p><p>Step 2. A voice message Ω arrives at the audio event detector; it starts with one of the keywords: start navigation, build route, cancel, stop, starting position, destination.</p><p>Step 3. The resulting sequence passes to the audio processing block, where the model λ is obtained.</p><p>Step 4. This step identifies a specific audio event in a probabilistic way. That is, the recording is divided into frames and each frame is passed through the acoustic model. The system, using machine learning, determines candidate spoken words and their context. The accuracy of the results depends on the completeness of the phonetic alphabet of the system. For each sound, a complex statistical model is first constructed that describes the pronunciation of this sound in the language. The recognition system compares the incoming speech signal with phonemes, and words are assembled from them.</p><p>Step 5. In this step, the data pass to the next level of the system as text for decision making. The main commands will be: In which building am I? What floor is this? I need room №? In which room is my class? How do I get to room №?</p><p>Step 6. After receiving the request, the commands will be mapped to the source data, which include the schedule, group lists, the placement and maps of the building and each floor, and the list of classrooms.</p><p>Step 7. Next, the integrated method determines the current position on the map of the building.</p><p>Step 8. In this step, the data are verified using a neuro-fuzzy verification method <ref type="bibr" target="#b11">[12]</ref>.</p><p>Step 9. 
After processing, the system outputs z2 audio messages and the route is built <ref type="bibr">[28]</ref>.</p></div>
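Step 4's probabilistic matching (selecting the model λ that best explains the observation sequence Ω from Eq. (2)) is typically done with the forward algorithm for hidden Markov models. A minimal sketch, with two toy two-state models standing in for real word models; all probabilities here are invented for illustration:

```python
def forward_probability(obs, pi, A, B):
    """P(obs | lambda) for an HMM lambda = (A, B, pi) via the forward algorithm.
    obs: observation symbol indices o1..oM
    pi:  initial state distribution; A: transition matrix; B: emission matrix."""
    n_states = len(pi)
    # alpha[j] = P(o1..ot, state at time t = j)
    alpha = [pi[j] * B[j][obs[0]] for j in range(n_states)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][o]
            for j in range(n_states)
        ]
    return sum(alpha)

# Two toy keyword models (pi, A, B); the recognizer picks the word whose
# model maximizes P(obs | lambda).
models = {
    "start navigation": ([0.9, 0.1], [[0.7, 0.3], [0.4, 0.6]], [[0.8, 0.2], [0.1, 0.9]]),
    "cancel":           ([0.2, 0.8], [[0.5, 0.5], [0.6, 0.4]], [[0.3, 0.7], [0.9, 0.1]]),
}
obs = [0, 0, 1]  # a short quantized observation sequence (assumed)
best = max(models, key=lambda w: forward_probability(obs, *models[w]))
print(best)
```

Real systems work at the phoneme level and in log-space to avoid underflow, but the word-level selection shown here follows the same argmax-over-models principle.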
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Realization of the subsystem of voice indoor navigation</head><p>Within the Smart-Campus application, the ability to display the current position of the user inside the building and to search for the shortest path to a specified beacon <ref type="bibr" target="#b8">[9]</ref> was implemented. The next step is to extend the Smart-Campus with a subsystem of voice navigation.</p><p>The Smart-Campus is a system with Bluetooth Low Energy devices and a backend database with a dedicated content management system (CMS). The idea is to navigate from one beacon to another, for an interactive tour around the campus or to guide visitors to their specific location of interest. To provide navigation, a map of the building should first be provided or developed; the next step is showing the appropriate path to another beacon location. This is why the newly developed solution consists of two parts: a map editor and path detection.</p><p>The map editor allows creating a map of a floor. One can use a background picture of a known area or develop the map from scratch with the easy-to-use editor. The app user is the client of information related to a certain beacon at a certain location, and our solution allows the user to get this information in an attractive way on his or her smartphone through a dedicated application. The app itself fetches the information from the server, related to the unique identifier (UUID) the beacon broadcasts on a regular basis. On this server the information is added and edited by the beacon owners through the developed CMS. The users can decide on groups of beacons which are allowed to display their information <ref type="bibr" target="#b7">[8]</ref>.</p><p>The voice navigator will help a person find the location of a classroom and the building in which it is located. 
After the classroom is found, the navigator will answer questions about the building in which the classroom is located and the floor, and will construct a route from the current position of the user to the required building. The mobile application will also provide the user with the opportunity to create and manage class schedules. The timetable will be displayed for the week and for the current day. From the schedule, the user will be able to build a route to the required building.</p><p>Let us consider software with similar functionality: Google Assistant, Siri and Cortana. For the analysis, the following characteristics were selected: dependence on the Internet, speed of the recognizer, understanding of the request, number of satisfactory answers to questions, route construction, vocabulary, and number of supported languages. The summary is presented in Table 1.</p><p>After analyzing the applications, the main characteristics that a voice navigator for integration into the Smart-Campus should have were highlighted. First, the application is trained with commands, which are stored in local databases.</p></div>
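The beacon-to-content flow described above (the app reads the UUID a beacon broadcasts and fetches the related, permission-filtered record from the CMS backend) can be sketched as a lookup. All UUIDs, field names, and group labels here are invented for illustration:

```python
# Hypothetical CMS records keyed by the UUID each BLE beacon broadcasts.
cms = {
    "f7826da6-0001": {"room": "Lecture hall 1", "group": "visitors",
                      "voice_label": "You are at lecture hall one"},
    "f7826da6-0002": {"room": "Library", "group": "students",
                      "voice_label": "You are at the library"},
}

def content_for_beacon(uuid, allowed_groups):
    """Return the CMS record for a broadcast UUID, honoring the beacon
    owner's group-based display permissions; None if unknown or not allowed."""
    record = cms.get(uuid)
    if record is None or record["group"] not in allowed_groups:
        return None
    return record

# A student's app hears beacon f7826da6-0002 and fetches its voice label.
rec = content_for_beacon("f7826da6-0002", {"students", "staff"})
print(rec["voice_label"])
```

In the real system the lookup would be an HTTP request to the backend rather than a local dictionary; the permission check mirrors the beacon-group rule described in [8].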
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Fig. 3. - Menu</head><p>Each beacon has an associated voice identification of its location. After the final location is recognized, the path is built according to the shortest-path algorithm <ref type="bibr" target="#b10">[11]</ref>. One of the options is that the user can see previous voice requests (fig. <ref type="figure">3</ref>).</p></div>
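The shortest-path step over the beacon map [11] can be sketched with Dijkstra's algorithm on an adjacency graph of beacon locations. The source does not specify which shortest-path algorithm is used; the graph and distances below are invented for illustration:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm over a beacon adjacency graph.
    graph: {node: [(neighbour, distance_in_metres), ...]} (directed)."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    if goal not in dist:
        return None
    # Reconstruct the route by walking predecessors back to the start.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return list(reversed(path))

# Hypothetical corridor graph between beacons (distances in metres)
beacons = {
    "entrance": [("hall", 5.0), ("stairs", 8.0)],
    "hall":     [("stairs", 2.0), ("room_215", 12.0)],
    "stairs":   [("room_215", 6.0)],
}
print(shortest_path(beacons, "entrance", "room_215"))
```

Each node on the returned route can then be announced through the beacon's voice identification, which is how the audio guidance follows the computed path.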
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Conclusion</head><p>A voice navigator was designed for the indoor navigation system. Integrating the audio navigator into the Smart-Campus system improves the social adaptation of visually impaired people. The use of voice in navigation systems allows users to access information in the navigation systems, to connect many objects and events, and to support new systems of interaction with users, sensors, mobile devices and applications.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Fig. 1 .</head><label>1</label><figDesc>Fig. 1. - Acoustic model <ref type="bibr" target="#b12">[13]</ref></figDesc><graphic coords="3,125.28,397.44,344.64,151.44" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Fig. 2 .</head><label>2</label><figDesc>Fig. 2. -Interactions in the voice navigator</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 .</head><label>1</label><figDesc>Mobile applications comparison</figDesc><table><row><cell cols="2">The voice navigator for the Smart-Campus must have the following features: to record a voice sentence to get the room the user is looking for;</cell></row><row><cell></cell><cell>to recognize voice sentences and convert them to text;</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>The work was partly done within the framework of Erasmus+ [BIOART]. The work was also carried out within the framework of the agreement on scientific and technical cooperation # 417/156/1.4917 dated May 4, 2017 between ZNTU and Limited Liability Company Infocom LTD.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<author>
			<persName><forename type="first">І</forename><forename type="middle">F</forename><surname>Gnibіdenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Kravchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">M</forename><surname>Koval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">F</forename><surname>Novikova</surname></persName>
		</author>
		<title level="m">Social defence of the population of Ukraine: higher possibilities, per community</title>
				<editor>
			<persName><forename type="first">K</forename></persName>
		</editor>
		<meeting><address><addrLine>Phoenix</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page">212</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Project Oriented Teaching Approaches for E-learning Environment</title>
		<author>
			<persName><forename type="first">P</forename><surname>Arras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Merode</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<idno type="DOI">10.1109/idaacs.2017.8095097</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 9th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="317" to="320" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Engineering Education for HealthCare Purposes: A Ukrainian Perspective</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Parkhomenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Morshchavka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Luengo</surname></persName>
		</author>
		<idno type="DOI">10.1109/MEMSTECH.2018.8365743</idno>
	</analytic>
	<monogr>
		<title level="m">The XIV-th International Conference on Perspective Technologies and Methods in MEMS Design (MEMSTECH)</title>
				<meeting><address><addrLine>Lviv, Polyana</addrLine></address></meeting>
		<imprint>
			<date>18-21 April</date>
			<biblScope unit="page" from="245" to="249" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">On the basis of social protection of persons with disabilities in Ukraine</title>
	</analytic>
	<monogr>
		<title level="j">Law of Ukraine</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page">2249</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">On Amendments to Some Laws of Ukraine on the Expansion of Access to the Blind, Visually Impaired, and Dyslexic Individuals for Works Published in a Special Format</title>
	</analytic>
	<monogr>
		<title level="j">Law of Ukraine</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="2015" to="2927" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">On amendments to certain legislative acts of Ukraine concerning the protection of the rights of persons with invalidity</title>
	</analytic>
	<monogr>
		<title level="j">Law of Ukraine</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page">1519</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">On amendments to some laws of Ukraine on education regarding the organization of inclusive education</title>
	</analytic>
	<monogr>
		<title level="j">Law of Ukraine</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page">1324</biblScope>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Intellectual Flexible Platform for Smart Beacons</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Merode</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-64352-6_83</idno>
	</analytic>
	<monogr>
		<title level="m">Online Engineering and Internet of Things</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Auer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Zhutin</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="895" to="900" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Smart-campus infrastructure development based on BLE4.0</title>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Merode</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Goncharov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Patrakhalko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">J. Electrotechn. Comput. Syst</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">94</biblScope>
			<biblScope unit="page" from="17" to="20" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<ptr target="http://buchuk.domen.uz.ua/index.php?id=realspeaker" />
		<title level="m">Speech Recognition</title>
				<imprint/>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Modelling of location detection for indoor navigation systems</title>
		<author>
			<persName><forename type="first">O</forename><surname>Petrova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<idno type="DOI">10.1109/IDAACS.2017.8095229</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 9th International Conference on Intelligent Data Acquisition and Advanced Computing Systems (IDAACS)</title>
				<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="961" to="964" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Method for determining the current location in positioning systems and indoor navigation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Petrova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Van Merode</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Electrotechnical and Computer Systems</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="270" to="278" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Fuzzy Verification Method for Indoor-Navigation Systems</title>
		<author>
			<persName><forename type="first">O</forename><surname>Petrova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Tabunshchyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kaplienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Kapliienko</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSET.2018.8336157</idno>
	</analytic>
	<monogr>
		<title level="m">14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET 2018 -Proceedings</title>
				<meeting><address><addrLine>Slavske</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-02-24">20-24 February 2018. 2018</date>
			<biblScope unit="page" from="65" to="68" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Voice Recognition in the Sphere of Information Technologies</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">V</forename><surname>Shpakov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Young Scientist</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="8" to="11" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">The application of modern speech recognition technologies in the creation of a linguistic simulator to enhance the level of linguistic competence in the field of intercultural communication</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Kolesnikova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Rudnichenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">A</forename><surname>Vereshchagina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">R</forename><surname>Fominova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Internet journal &quot;Naukovedenie</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">6</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Linear Regression Based Bayesian Predictive Classification for Speech Recognition</title>
		<author>
			<persName><forename type="first">J.-T</forename><surname>Chien</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Speech and Audio Processing</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2003-01">January (2003</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Variational Bayesian Estimation and Clustering for Speech Recognition</title>
		<author>
			<persName><forename type="first">Sh</forename><surname>Watanabe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Speech and Audio Processing</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A New Verification-Based Fast-Match for Large Vocabulary Continuous Speech Recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Afify</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jiang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Speech and Audio Processing</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">4</biblScope>
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Histogram-Based Quantization for Robust and/or Distributed Speech Recognition</title>
		<author>
			<persName><forename type="first">Chia-Yu</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Audio, Speech And Language Processing</title>
		<imprint>
			<biblScope unit="volume">16</biblScope>
			<date type="published" when="2008-01-01">Jan. 1, 2008. 2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Noise Robust Automatic Speech Recognition using a predictive Echo state Network</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Skowronski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Audio, Speech, and Language Processing</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="issue">5</biblScope>
			<date type="published" when="2007-06">June 2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Algorithm and Methods of Speech Recognition</title>
		<author>
			<persName><forename type="first">Zh</forename><forename type="middle">V</forename><surname>Alborova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">I</forename><surname>Rubtsov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Youth Scientific and Technical Bulletin, № FS77-51038</title>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m">Digital Processing of Speech Signals</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Rabiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Schafer</surname></persName>
		</author>
		<imprint>
			<publisher>Radio and Communication</publisher>
			<biblScope unit="page">496</biblScope>
			<date type="published" when="1981">1981</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title/>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Subbotin</surname></persName>
		</author>
		<idno type="DOI">10.3103/S1060992X10020037</idno>
		<ptr target="https://doi.org/10.3103/S1060992X10020037" />
	</analytic>
	<monogr>
		<title level="j">Opt. Mem. Neural Networks</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page">126</biblScope>
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Parallel Computer System Resource Planning for Synthesis of Neuro-Fuzzy Networks</title>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Skrupsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Subbotin</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-48923-0_12</idno>
	</analytic>
	<monogr>
		<title level="m">Recent Advances in Systems, Control and Information Technology. SCIT 2016</title>
				<editor>
			<persName><forename type="first">R</forename><surname>Szewczyk</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Kaliczyńska</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">543</biblScope>
		</imprint>
	</monogr>
	<note>Advances in Intelligent Systems and Computing</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Classification by fuzzy decision trees inducted based on Cumulative Mutual Information</title>
		<author>
			<persName><forename type="first">J</forename><surname>Rabcan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rusnak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">14th International Conference on Advanced Trends in Radioelectronics, Telecommunications and Computer Engineering, TCSET 2018, Proceedings</title>
				<meeting><address><addrLine>Slavske</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-02-24">20-24 February 2018</date>
			<biblScope unit="page" from="208" to="212" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Using Modern Architectures of Recurrent Neural Networks for Technical Diagnosis of Complex Systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Leoshchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliinyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subbotin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Zaiko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Scientific-Practical Conference on Problems of Infocommunications Science and Technology (PIC S&amp;T), Proceedings</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<ptr target="http://www.machinelearning.ru/wiki/images/8/83/GM12_3.pdf" />
		<title level="m">Hidden Markov Models</title>
				<imprint/>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
