<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Control Actions Using Voice and Gestures at the Level of the Operating System</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Serhii</forename><surname>Kulibaba</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska str</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleg</forename><surname>Kurchenko</surname></persName>
							<email>oleg.kurchenko@knu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska str</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Liudmyla</forename><surname>Zubyk</surname></persName>
							<email>zubyk.liudmyla@knu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<addrLine>60 Volodymyrska str</addrLine>
									<postCode>01601</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Control Actions Using Voice and Gestures at the Level of the Operating System</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">2B57FFF8E21DF381FC4A786F14CA004A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:52+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Voice</term>
					<term>gesture</term>
					<term>control</term>
					<term>command</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper considers the task of developing a tool that performs various user tasks at the level of the operating system by means of voice and/or gestures. Input data are processed by purpose-built modules that are connected to ready-made solutions. An analysis of the state of the art revealed physical remote-control devices capable of interacting with the operating system, but no comparable software was found. The tool can be applied in various spheres of activity, both commercial and general. Its purpose is to give a part of the community the opportunity to use most of the functions of various applications and, in particular, to use a given operating system in general.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Modern technologies are developing rapidly and are in high demand in society. A variety of sensors allow actions to be performed automatically, and software exists for carrying out a wide range of tasks <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>It has been found that a certain part of society cannot use most applications because of corresponding impairments. Software and application developers pay little attention to maintaining or developing projects that could solve these common problems.</p><p>Therefore, it was decided to develop a model that would solve most problems in the use of information technologies at the level of the operating system with a single software tool.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Analysis of publications, state of the issue and statement of the problem</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1">Analysis of research and publications</head><p>Using voice assistants has become an everyday practice. Various companies are integrating voice assistants into their applications in order to simplify the use of their products through additional technologies. In <ref type="bibr" target="#b2">[3]</ref>, the principle of using ready-made solutions for voice recognition is considered. In addition to voice assistants, gesture control is developing rapidly; car manufacturers, for example, are integrating such solutions into their vehicles to give customers additional convenience in using the product.</p><p>The work <ref type="bibr" target="#b3">[4]</ref> shows an example of gesture control adapted to another system.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2">Analysis of the issue in the applied industry</head><p>Various sensors and software products currently exist that solve a number of people's problems. However, a search for software that would solve the specific problems arising in the use of different applications from different manufacturers yielded nothing. Therefore, it was decided to develop a model that can solve different problems of different users using voice and gestures. The distinguishing feature of this model is that, by combining ready-made solutions available in a given programming language, a universal and unique solution can be created that is likely to be in demand.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3">Formulation of the problem</head><p>Different products are adapted for different tasks <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>, but no universal one was found. With innovative technologies it is possible to create a software application that solves a number of these problems without additional cost to the user.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Application development</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Application of ready solutions</head><p>Different programming languages have similar libraries, so similar problems can be solved in different languages.</p><p>Computer vision is widely in demand. It is applied in various directions: automation of actions, object tracking, processing of input data, etc. <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>.</p><p>Here, the Python programming language and the OpenCV library are used. This library allows processing both streaming video and images <ref type="bibr" target="#b8">[9]</ref>.</p><p>Another off-the-shelf solution that is needed is voice recognition, which makes it possible to bind certain actions to the application <ref type="bibr" target="#b9">[10]</ref>. Several libraries implement voice recognition; among the well-known ones are PocketSphinx and SpeechRecognition <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12]</ref>. PocketSphinx allows creating a model that recognizes only the words listed in its dictionary <ref type="bibr" target="#b12">[13]</ref>. SpeechRecognition is a model trained on a given set of words, and that set is sufficient for most tasks.</p></div>
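Once the recognizer returns text, it must be mapped to an operating-system action. A minimal sketch of such a command table follows; the command phrases, the `dispatch` helper, and the placeholder actions are illustrative assumptions, not the project's actual code:

```python
# Hypothetical sketch: mapping recognized phrases to OS-level actions.
# As described above, the command table would live in a separate file,
# so new commands can be added without touching the recognizer code.

COMMANDS = {
    "open browser": lambda: print("launching browser"),
    "volume up": lambda: print("raising volume"),
}

STOP_WORD = "stop"  # the "stop word" that pauses command processing

def dispatch(recognized_text: str) -> str:
    """Normalize recognized text and run the matching command, if any."""
    phrase = recognized_text.strip().lower()
    if phrase == STOP_WORD:
        return "paused"
    action = COMMANDS.get(phrase)
    if action is None:
        return "unknown"
    action()
    return "executed"
```

In a real module, `recognized_text` would come from a SpeechRecognition call on microphone audio; the dispatch logic itself is independent of which recognition library produced the text.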
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2">Structure of the project</head><p>The project contains various directories and executable files, as well as additional configuration files for saving user settings.</p><p>The project structure is divided so that it is easy to navigate. The SOLID design principles are applied, the first of which, the Single Responsibility Principle, governs the project structure by dividing responsibility between components <ref type="bibr" target="#b13">[14]</ref>.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> shows a diagram of the application components.</p><p>Libraries. Two libraries are used, OpenCV and SpeechRecognition. The diagram shows the connections between the libraries and the other system components.</p><p>Voice recognition. This directory contains a basic set of functions and classes that recognize and process input data, as a result of which the specified user action can be performed. Since the set of necessary words can be large, the commands themselves are placed in a separate file, so that the system can be flexibly scaled in the future.</p><p>Gesture recognition. A directory that contains the file needed to recognize gestures.</p><p>Component interfaces. This directory contains the interfaces of all project classes. The precondition for creating interfaces is the second applied SOLID principle: classes are open to extension but closed to modification (Open-Closed Principle).</p><p>Main and configuration files. The main file (main.py) contains a set of functions that work compositionally with the speech and gesture recognition modules. To save the user configuration, a separate file (.conf) is created.</p></div>
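The component interfaces described above can be sketched as abstract base classes; the class and method names here are assumptions for illustration, showing how the Open-Closed Principle lets new recognizers be added by extension rather than modification:

```python
from abc import ABC, abstractmethod

class RecognizerInterface(ABC):
    """Common interface for input-recognition components.
    New recognizers extend this class; existing classes stay unmodified."""

    @abstractmethod
    def recognize(self, data) -> str:
        """Return the recognized command phrase, or '' if nothing matched."""

class VoiceRecognizer(RecognizerInterface):
    def recognize(self, data) -> str:
        # In the real module this would call the SpeechRecognition library
        # on captured audio; here the input stands in for recognized text.
        return str(data).strip().lower()

class GestureRecognizer(RecognizerInterface):
    def recognize(self, data) -> str:
        # In the real module this would analyze an OpenCV video frame.
        return "swipe" if data else ""
```

Because main.py depends only on `RecognizerInterface`, a third input channel (for example, eye tracking) could be added as another subclass without changing the dispatch code.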
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3">Principle of operation of the application</head><p>At the beginning of working with the program, the user can either make some settings or directly start controlling actions (Fig. <ref type="figure" target="#fig_1">2</ref>). The setup includes several stages: determining the position of the eyes, the nose, and the main voice commands. When the setup process is complete, the data are saved to a separate file. The setup is required to reduce the probability of incorrect processing of user commands. The program runs until the "stop word" is triggered: the voice recognition module waits for the corresponding word when the program execution process needs to be paused; otherwise, all commands are processed.</p><p>If the application ran in a single thread, only one of the control methods could be executed at a time. Therefore, two separate threads are used, one for gesture control and one for voice control, so that the user can perform actions faster and more conveniently.</p></div>
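The two-thread scheme with a shared stop word can be sketched with Python's standard threading primitives. This is a simplified model, not the project's code: the queues stand in for the voice and gesture recognizers, and the stop word is hard-coded here although the paper makes it user-configurable:

```python
import queue
import threading

STOP_WORD = "stop"  # assumed stop word; in the application it is configurable

def control_worker(source: "queue.Queue[str]", log: list, stop: threading.Event):
    """Process commands from one input channel until the stop word arrives."""
    while not stop.is_set():
        try:
            command = source.get(timeout=0.1)
        except queue.Empty:
            continue
        if command == STOP_WORD:
            stop.set()              # pauses both channels at once
            source.task_done()
            break
        log.append(command)
        source.task_done()

# Two independent channels, as described above: one voice, one gesture.
voice_q, gesture_q = queue.Queue(), queue.Queue()
processed: list = []
stop_event = threading.Event()
threads = [
    threading.Thread(target=control_worker, args=(voice_q, processed, stop_event)),
    threading.Thread(target=control_worker, args=(gesture_q, processed, stop_event)),
]
for t in threads:
    t.start()

voice_q.put("open browser")
gesture_q.put("swipe left")
voice_q.join()                      # wait until both commands are handled
gesture_q.join()
voice_q.put(STOP_WORD)              # stop word pauses the whole application
for t in threads:
    t.join()
```

Because `stop_event` is shared, the stop word spoken on the voice channel also halts the gesture thread, matching the behaviour described in the activity diagram.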
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusions</head><p>Voice assistants and gesture control are implemented in various products and systems of different companies. Having analyzed the problems that arise in society's use of existing applications, it was found necessary to create automation that solves most of them.</p><p>This work presents a principle of project construction that allows actions to be controlled with the help of various existing means. By combining several components, a new, unique, and universal goal can be achieved.</p><p>For further scaling of the project, the SOLID design principles were applied, thanks to which the application is built in such a way that the structure of any module can be changed without problems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: Component diagram</figDesc><graphic coords="3,155.75,147.60,283.45,563.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: Activity diagram. After the setup is complete, you can start working with the application, using both voice control and gesture control.</figDesc><graphic coords="4,95.75,180.50,403.20,463.90" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Towards Implementation of Interoperable Smart Sensor Services in IEC 61499 for Process Automation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Jhunjhunwala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><forename type="middle">D</forename><surname>Atmojo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Vyatkin</surname></persName>
		</author>
		<idno type="DOI">10.1109/ETFA46521.2020.9211925</idno>
	</analytic>
	<monogr>
		<title level="m">25th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA)</title>
				<meeting><address><addrLine>Vienna, Austria</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1409" to="1412" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A System for Energy Management and Home Automation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Arunachalam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Raghuraman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Obed</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vishnupriyan</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSCAN53069.2021.9526526</idno>
	</analytic>
	<monogr>
		<title level="m">2021 International Conference on System, Computation, Automation and Networking (ICSCAN)</title>
				<meeting><address><addrLine>Puducherry, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="3" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Comparative Analysis of Smart Voice Assistants</title>
		<author>
			<persName><forename type="first">R</forename><surname>Sivapriyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Sakshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Vishnu Priya</surname></persName>
		</author>
		<idno type="DOI">10.1109/CSITSS54238.2021.9683722</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Computation System and Information Technology for Sustainable Solutions (CSITSS)</title>
				<meeting><address><addrLine>Bangalore, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Design of Smart Car Control System for Gesture Recognition Based on Arduino</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Wen</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICCECE51280.2021.9342137</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Consumer Electronics and Computer Engineering (ICCECE)</title>
				<meeting><address><addrLine>Guangzhou, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="695" to="699" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Arduino Based Voice Controlled Wheelchair For Physically Challenged Persons</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Pulleti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Inturi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">V</forename><surname>Valluru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Oc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">K</forename><surname>Putta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">V</forename><surname>Jaliparthi</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICEES57979.2023.10110267</idno>
	</analytic>
	<monogr>
		<title level="m">2023 9th International Conference on Electrical Energy Systems (ICEES)</title>
				<meeting><address><addrLine>Chennai, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="499" to="503" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Arduino mobile robot with Myo Armband gesture control</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kristof</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Ciupe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Moldovan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Maniu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Banda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A. -M</forename><surname>Stoian</surname></persName>
		</author>
		<idno type="DOI">10.1109/SACI46893.2019.9111627</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 13th International Symposium on Applied Computational Intelligence and Informatics (SACI)</title>
				<meeting><address><addrLine>Timisoara, Romania</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="294" to="297" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Air xylophone Using OpenCV</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Vijila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Shastika</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSES55317.2022.9914191</idno>
	</analytic>
	<monogr>
		<title level="m">2022 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES)</title>
				<meeting><address><addrLine>Chennai, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Design of intelligent car based on WiFi video capture and OpenCV gesture control</title>
		<author>
			<persName><forename type="first">G</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1109/CAC.2017.8243499</idno>
	</analytic>
	<monogr>
		<title level="m">2017 Chinese Automation Congress (CAC)</title>
				<meeting><address><addrLine>Jinan, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4103" to="4107" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Paint / Writing Application through WebCam using MediaPipe and OpenCV</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gulati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">K</forename><surname>Rastogi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Virmani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pradhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gupta</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICIPTM54933.2022.9753939</idno>
	</analytic>
	<monogr>
		<title level="m">2022 2nd International Conference on Innovative Practices in Technology and Management (ICIPTM)</title>
				<meeting><address><addrLine>Gautam Buddha Nagar, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="287" to="291" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Advanced Communication Model with the Voice Control and the Increased Security Level Cybersecurity Providing in Information and Telecommunication Systems</title>
		<author>
			<persName><forename type="first">Serhii</forename><surname>Kulibaba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Svitlana</forename><surname>Popereshnyak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yuri</forename><surname>Shcheblanin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Oleg</forename><surname>Kurchenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nataliia</forename><surname>Mazur</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">3288</biblScope>
			<biblScope unit="page" from="64" to="72" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Comparison of acoustical models of GMM-HMM based for speech recognition in Hindi using PocketSphinx</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Manasa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">J</forename><surname>Priya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gupta</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICCMC.2019.8819747</idno>
	</analytic>
	<monogr>
		<title level="m">2019 3rd International Conference on Computing Methodologies and Communication (ICCMC)</title>
				<meeting><address><addrLine>Erode, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="534" to="539" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Emotion Recognition Through Speech Signal Using Python</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Rohan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">S</forename><surname>Swaroop</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mounika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Renuka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nivas</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSTCEE49637.2020.9277338</idno>
	</analytic>
	<monogr>
		<title level="m">2020 International Conference on Smart Technologies in Computing, Electrical and Electronics (ICSTCEE)</title>
				<meeting><address><addrLine>Bengaluru, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="342" to="346" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Investigation on speech recognition Accuracy via Sphinx toolkits</title>
		<author>
			<persName><forename type="first">O</forename><surname>Zealouk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hamidi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Satori</surname></persName>
		</author>
		<idno type="DOI">10.1109/IRASET52964.2022.9738105</idno>
	</analytic>
	<monogr>
		<title level="m">2nd International Conference on Innovative Research in Applied Science, Engineering and Technology (IRASET)</title>
				<meeting><address><addrLine>Meknes, Morocco</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="6" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An approach to class diagrams verification according to SOLID design principles</title>
		<author>
			<persName><forename type="first">E</forename><surname>Chebanyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Markov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">4th International Conference on Model-Driven Engineering and Software Development (MODELSWARD)</title>
				<meeting><address><addrLine>Rome, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="435" to="441" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Cyber Security Risk Modeling in Distributed Information Systems</title>
		<author>
			<persName><forename type="first">D</forename><surname>Palko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Babenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bigdan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kiktev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hutsol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kuboń</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hnatiienko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Gorbovy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Borusiewicz</surname></persName>
		</author>
		<idno type="DOI">10.3390/app13042393</idno>
		<ptr target="https://doi.org/10.3390/app13042393" />
	</analytic>
	<monogr>
		<title level="j">Appl. Sci</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">2393</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The Role of Innovation in Economic Growth: Information and Analytical Aspect</title>
		<author>
			<persName><forename type="first">O</forename><surname>Kalivoshko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kraevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Burdeha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Lyutyy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kiktev</surname></persName>
		</author>
		<idno type="DOI">10.1109/PICST54195.2021.9772201</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE 8th International Conference on Problems of Infocommunications, Science and Technology (PIC S&amp;T)</title>
				<meeting><address><addrLine>Kharkiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="120" to="124" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
