<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Methodological Foundations of an Information System Construction for the Recognition of Ukrainian Sign Language</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Taras</forename><surname>Basyuk</surname></persName>
							<email>taras.m.basyuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Bandera str.12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andrii</forename><surname>Vasyliuk</surname></persName>
							<email>andrii.s.vasyliuk@lpnu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Lviv Polytechnic National University</orgName>
								<address>
									<addrLine>Bandera str.12</addrLine>
									<postCode>79013</postCode>
									<settlement>Lviv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Methodological Foundations of an Information System Construction for the Recognition of Ukrainian Sign Language</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">B3502A556836654150C75F40FCE497DF</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:15+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Ukrainian sign language</term>
					<term>pattern recognition</term>
					<term>learning process</term>
					<term>communication</term>
					<term>information system</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The article analyzes existing methods and known systems that provide means of recognizing Ukrainian sign language and describes the mechanisms of their implementation. Technologies and software tools for sign language recognition were analyzed, which made it possible to identify the main shortcomings of existing approaches and showed the relevance of the research. A diagram reflecting the main stages that must be implemented in the process of gesture recognition was finalized. The structural design of the software system was carried out, with the resulting diagrams presented in accordance with the IDEF0 standard. The article presents a context diagram and a decomposition diagram, which created the basis for studying the features and forming the methodological foundations for constructing the information system. The main stages of gesture recognition are highlighted and described, namely: transformation of the input image, its filtering, and the actual recognition. The choice of methods for contour selection and gesture recognition in the incoming information message is justified and analyzed. The constructed prototype of the system for recognizing Ukrainian sign language consists of four main modules: HandGesturesRecognitionForm, NeuralNetwork, CsvManager, and TrainingImageDataManager, which provide the basic functionality. At the current stage, it can be useful as an additional communication tool for people with special needs. Further research will be aimed at testing and improving the system, eliminating conflicts, and expanding functionality in accordance with the specified requirements.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Today, computer technologies are involved in almost all spheres of human life. With the help of various technical solutions, a person is able to solve daily tasks with greater simplicity and efficiency. If at the end of the 20th century computer technologies were primarily associated with the scientific and military spheres, then in the second decade of the 21st century they are associated with almost all areas of human activity. It is quite natural that various computer solutions are widely used in the field of communication between individuals with special needs. One such means of communication is sign language <ref type="bibr" target="#b0">[1]</ref>. Sign language is a type of speech that makes it possible to express thoughts using facial expressions, emotions, and hand gestures that correspond to letters, words, or individual phrases. Despite the large number of people who suffer from hearing or speech impairments, sign language has received little attention from linguists. Worldwide, people with hearing problems make up about 5%, or 430 million <ref type="bibr" target="#b1">[2]</ref>, of the total population. Sign languages are not universal across countries, as they arise and develop naturally in different territories and change over time with the emergence of new vocabulary. The debate about sign language has been going on for about half a century, ranging from its use in educational institutions for children with hearing impairments to ignoring the existence of the language and even banning it completely <ref type="bibr" target="#b2">[3]</ref>.</p><p>The hearing-impaired community often struggles to communicate with the rest of the world. Although sign language is used as a means of conveying their message, communication problems remain because few people are familiar with this type of language.
In addition, the number of available translators is insufficient to solve the problem. This has motivated scientists from different countries to study and work on this problem. In general, the issue can be divided into two parts: the first involves the development of automatic sign language synthesizers that allow people with hearing impairments to understand messages transmitted by people who can hear; the second, conversely, concerns the development of sign language interpreters that allow the hearing community to understand sign language <ref type="bibr" target="#b3">[4]</ref>. In view of these considerations, the urgent task is to develop an information system for recognizing Ukrainian sign language, which will provide additional means of overcoming the language barrier between communication subjects.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Analysis of recent researches and publications</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.1.">Analysis of known research</head><p>Although sign language has been a subject of study for centuries, it was only at the end of the last century that it became the focus of linguistic research. This was facilitated by the publication of William C. Stokoe's "Sign Language Structure", which marked the beginning of sign language linguistics. The proposed structure consisted of 55 symbols, which formed three groups according to the parameters (place of execution of the gesture, nature of movement, and shape of the hand) that Stokoe considered relevant in determining the structure of a gesture. Stokoe's notation formed the basis of the organizing principles of the first dictionary of American Sign Language <ref type="bibr" target="#b4">[5]</ref>.</p><p>As the analysis showed, the existing methods of gesture recognition in computer systems are divided into two types: recognition based on the creation of a 3D model, and methods built on the principle of feature selection <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. The first class of methods is based on the creation of a kinematic model. This model must take into account each of the possible degrees of freedom. When building such a model, hand gestures are evaluated by comparing hand coordinates on the image. Methods of this type make it possible to recognize a significant number of gestures, but their implementation requires a large-scale database of images. Images from the database are also used to resolve conflicts during feature selection that arise due to the various shapes and sizes of recognition objects. The second class of methods is based on the processing of details of the input data stream, which are designed to determine the coordinates of the object of recognition.
This method can be applied only if it is possible to determine characteristic anchor points or features on the images of objects. The object itself can then be defined as a combination of these points or of the planes they form. In this case, instead of creating a complete object, a subset of its characteristic points or areas is created. This approach is resistant to deformations and changes in input sequences. In the presence of characteristic features, the object can always be unambiguously classified <ref type="bibr" target="#b7">[8]</ref>. A separate approach to gesture recognition is the method based on artificial neural networks <ref type="bibr" target="#b8">[9]</ref>. Convolutional neural networks can successfully identify individual dactyls, but this applies only to static gestures; the analysis of dynamic movements based on images is too cumbersome and resource-intensive <ref type="bibr" target="#b9">[10]</ref>.</p><p>In general, scientific research in this direction can be presented in the form of the following publications:</p><p>• In "Sign language recognition using Microsoft Kinect" <ref type="bibr" target="#b10">[11]</ref>, the authors developed a method for recognizing sign language using depth images from the Kinect sensor. The depth and motion profile are calculated from the generated images and used to construct a feature matrix for each gesture. Recognition is performed by a linear classifier based on the support vector method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>In the work "Multi-sensor data fusion for sign language recognition based on dynamic Bayesian network and convolutional neural network" <ref type="bibr" target="#b11">[12]</ref>, a multi-sensor fusion structure based on a convolutional neural network and a dynamic Bayesian network for sign language recognition is proposed. In this framework, Microsoft Kinect, an RGB-D sensor, is used as a human-computer interaction tool. In the proposed approach, data is first collected using Kinect, then the features of the image sequence are extracted using a convolutional neural network. Sequences of color and depth features are input to the DBN as observation data. The maximum level of recognition of dynamic isolated sign language is calculated based on the union of the graph model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>In the work "A Real-time Hand Gesture Recognition System for Human-Computer and Human-Robot Interaction" <ref type="bibr" target="#b12">[13]</ref>, the proposed gesture recognition system is designed to improve human-computer and human-robot interaction. As the authors of the study assert, such interaction ensures natural and intuitive communication between people and technology using gestures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>The work "3D Dynamic Hand Gesture Recognition with Fused RGB and Depth Images" <ref type="bibr" target="#b13">[14]</ref> offers dynamic gesture recognition technology. To address the problems of existing technologies, the authors propose a network model of three-dimensional dynamic gesture recognition, which uses CNN and LSTM networks and can combine RGB and image depth information.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>In the work "Hand gesture recognition using convolutional neural network and histogram of oriented gradients features" <ref type="bibr" target="#b14">[15]</ref>, the authors emphasize that gesture recognition is the main part of creating a sign language recognition system for people with hearing impairments and is widely used in human-computer interaction. The dataset selected for building the gesture recognition system model is based on American Sign Language, using a pre-trained AlexNet convolutional neural network and the histogram of oriented gradients.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>In the work "Mid-air Gesture Recognition by Ultra-Wide Band Radar Echoes" <ref type="bibr" target="#b15">[16]</ref>, the authors propose the technology of using microwave radar sensors for human-computer interaction. The peculiarity is that the raw signals generated by such radars have high dimensionality and are very difficult to process and interpret for gesture recognition. For these reasons, machine learning techniques are mainly used for gesture recognition, but they require numerous gesture patterns for training and calibration, which are specific to each radar <ref type="bibr" target="#b16">[17]</ref>. The given list of studies used in the process of sign language recognition is not exhaustive, but the conducted analysis shows that an ideal method does not exist and is unlikely to appear. It can therefore be concluded that the mentioned approaches can be applied after further adaptation to Ukrainian-language content.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.2.">Analysis of the Ukrainian sign language development</head><p>In many countries of the world, the possibility of creating and popularizing translators from audio language to sign language and vice versa is being investigated. However, the problem of translating sign language into Ukrainian-language audio content still remains unresolved. It is worth noting that Ukrainian sign language, like any other sign language, has its own rules and grammar, which in turn does not allow the use of existing dictionaries of foreign sign languages. On the territory of modern Ukraine, sign language began to develop in the 19th century, the time of the founding of the first communities; that is, Ukrainians have been creating their own sign language for about two centuries. In 1830, the Lviv school for hearing-impaired children was opened, and in 1843 a school in Odesa; these are the approximate dates of the beginning of the development of Ukrainian sign language <ref type="bibr" target="#b17">[18]</ref>.</p><p>It was only recently that sign language was recognized and equated with verbal language. UN General Assembly Resolution 48/96 of December 20, 1993, "Standard Rules on the Equalization of Opportunities for Persons with Disabilities," stated that care should be taken to ensure that sign language is used in the education of deaf children, in their families and communities, and it was also recommended to provide sign language interpretation services to facilitate the communication of sign language users with other people. Subsequently, the issue of using sign language became more prominent in Ukraine, but the use of Ukrainian sign language in education in independent Ukraine was not introduced until 2006 <ref type="bibr" target="#b18">[19]</ref>.</p><p>The study of sign language linguistics in Ukraine was started by R. Kraevskyi.
This speech-language pathologist studied sign language, carried out its linguistic description on the basis of Ukrainian material, and created a unique sign dictionary in the form of the manual "Sign Language of the Deaf" <ref type="bibr" target="#b19">[20]</ref>. For each gesture, the spatial position and manner of hand movement are described. In the 21st century in Ukraine, N. Adamyuk, O. Drobot, S. Kulbida, O. Lozynska, and M. Davydov are engaged in studying the peculiarities of the syntax of Ukrainian sign language.</p><p>Most of N. Adamyuk's scientific works are aimed at studying the peculiarities and linguo-didactic technologies of teaching Ukrainian sign language to deaf and hard-of-hearing children, studying the linguistic features of Ukrainian sign language, as well as studying the basic requirements for teachers of sign language in higher educational institutions and an innovative model of their training and retraining <ref type="bibr" target="#b20">[21]</ref>. The works of O. Drobot are devoted to the formation of communication skills and comprehensive development of preschool children with hearing impairment <ref type="bibr" target="#b21">[22]</ref>. S. Kulbida's research is related to deaf pedagogy of the socio-cultural direction and the conceptual foundations of the development of Ukrainian sign language as a means of learning and a subject of study <ref type="bibr" target="#b22">[23]</ref>. The research of O. Lozynska and M. Davydov focuses mainly on the translation of Ukrainian sign language based on ontology <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25]</ref>.</p><p>The analysis of the completed work shows significant progress in popularizing the study of Ukrainian sign language, but the lack of problem-oriented software solutions makes its further research an urgent task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.2.">The main tasks of the research and their significance</head><p>The purpose of the research is to develop an information system for the recognition of Ukrainian sign language. The conducted research will provide means for creating, on its basis, software for managing information and reference content, generating/transforming elements of sign language, and forming an individual learning environment for people with special needs. To achieve the goal, the following tasks must be solved: analyze the existing approaches, methods, and software tools used in the field of Ukrainian sign language recognition; identify the main tasks that arise in the process; analyze the methods and algorithms of sign language recognition that can be adapted during system development; and implement a prototype system for recognizing Ukrainian sign language.</p><p>The results of the study address the actual scientific and practical problem of recognizing Ukrainian sign language and will provide the means to open up additional opportunities for individualizing the educational process for people with special needs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Major research results</head><p>People are constantly faced with the task of object recognition. The human brain processes the information received from the senses, on the basis of which an appropriate decision is made. After that, thanks to the transmission of electrochemical impulses, certain organs carry out the decision made. This process occurs every time there is a change in the environment. A key stage in this process is the recognition and classification of the surrounding environment, which helps to make the right decision. Given the current development of computer technology, pattern recognition has become an independent field, with a multitude of tasks that can be solved using gesture recognition.</p><p>In order to present the main aspects of the studied subject area, a scheme was finalized that reflects the main stages that must be implemented in a gesture recognition system (Fig. <ref type="figure">1</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1: The sequence of stages in gesture recognition</head><p>As can be seen from Figure <ref type="figure">1</ref>, the main tasks in the process of gesture recognition are:</p><p>• Obtaining an image -usually, this process is implemented using two or more synchronized infrared cameras or smartphone cameras, which continuously transmit a video stream to the system in real time (25-30 frames/sec.); • Localization of the hand area in the image -on each frame (or series of frames) obtained from the video stream of the camera, the area on which the hand is located is determined. This procedure mainly consists of two stages. The first stage is segmentation (selection) and analysis of the hand area in the received data. This process is performed to remove artifacts from the image and separate the hand region in the image from the background region. As a result of this stage, a selected image of the hand suitable for further processing is formed in the system.</p></div>
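<div xmlns="http://www.tei-c.org/ns/1.0"><p>The localization step just described can be sketched in a few lines of Python. This is a minimal illustration rather than the system's implementation: the function names, the representation of a grayscale frame as a list of pixel rows, and the fixed brightness threshold are all assumptions made for the example.</p><p>
```python
def localize_hand(frame, threshold=128):
    # Collect coordinates of foreground pixels, i.e. pixels brighter
    # than the segmentation threshold (an assumed simple criterion).
    coords = [(y, x) for y, row in enumerate(frame)
              for x, p in enumerate(row) if p > threshold]
    if not coords:
        return None  # no hand-like region found on this frame
    ys = [y for y, x in coords]
    xs = [x for y, x in coords]
    # The hand area is approximated by the bounding box of the foreground.
    return (min(ys), min(xs), max(ys), max(xs))

def crop(frame, box):
    # Cut the localized hand region out of the frame for further processing.
    top, left, bottom, right = box
    return [row[left:right + 1] for row in frame[top:bottom + 1]]
```
</p><p>In a real system this would run on every frame of the 25-30 frames/sec. video stream, with the simple brightness criterion replaced by a proper background-separation method.</p></div>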
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>Gesture recognition -at this stage, the contours of the hand and its characteristics are determined on the image obtained as a result of localization of the hand region. Based on the received data, the gesture is classified. Let us consider in more detail the main tasks and stages of gesture recognition. The first step is to obtain images that are processed to separate the hand region from the background <ref type="bibr" target="#b25">[26]</ref>. This phase is called object localization in the image. After collecting the information, it becomes possible to apply the primary information about the hand in order to filter the data and remove noise from the image. Noise can appear, for example, due to changes in lighting. Artifacts (such as the presence of tattoos, jewelry, etc. on the hand) are also removed. This procedure is very important, considering the set of gestures that need to be distinguished. At the phase of recognition of hand gestures, feature selection is performed <ref type="bibr" target="#b26">[27]</ref>. This stage is an important part of the recognition process because hand movements have a significant number of shapes and textures. To recognize a static image of a hand, geometric features are used, among them the location of the fingertips and their direction. The problem is that these features are not always available due to self-shading and lighting conditions. Next comes the stage of identifying specific gestures using methods of analyzing the filtered data that carry information about hand movements. For this, the classification procedure is used. Before this stage, it is necessary to train the system so that it can respond to gestures, and to adapt it for the correct detection of movements <ref type="bibr" target="#b27">[28]</ref>.
To create a comfortable environment for the user, all processes of capturing, classifying, and transforming gestures into text instructions must be performed in real time with an update rate of 25-30 frames per second.</p><p>Further work was aimed at conducting a systematic analysis of the subject area using the methodology of functional modeling and graphic description of processes. For these purposes, a structural approach and the IDEF0 standard, which is intended for the formalization and description of business processes, were used. A context diagram showing the process of recognizing Ukrainian sign language is presented in Fig. <ref type="figure" target="#fig_1">2</ref>. In the specified model, the input receives information about the gesture, the value of which must be displayed on the screen. The gesture can be transferred in the form of a photo or a video sequence. The output data of the system are the recognized gesture and its text value. The control influences are: image capture methods (methods and algorithms needed to capture an image and localize a gesture on the captured image; these methods are based on the analysis of external features of the gesture); image processing methods (methods and algorithms needed to process the image and extract the outline of the gesture for further analysis. A contour is a curve of a function of two variables along which the function has a constant value. Contours are straight or curved lines that describe sharp changes in brightness in the image <ref type="bibr" target="#b28">[29]</ref>. There is a high probability of obtaining more than one contour, since extra contours form in the image due to the presence of noise in the background. Image processing methods are necessary in order to remove excess noise from the image and select a clean contour of the gesture for further analysis); the rules of the Ukrainian sign language (information about the gestures of the Ukrainian language.
As is known <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31]</ref>, Ukrainian sign language differs from other sign languages. The rules of the Ukrainian sign language are necessary to highlight the specific features of each gesture. This information is used during image analysis and the formation of the final result). Smartphone cameras or computer web cameras act as mechanisms.</p><p>For a more detailed understanding of the logic of the processes taking place in the gesture recognition system, the developed context diagram was decomposed into several sub-processes. The decomposition diagram is presented in Figure <ref type="figure" target="#fig_2">3</ref>. As can be seen from the decomposition diagram, the entire process of gesture recognition has been broken down into several sub-processes for greater detail and understanding. Each of the sub-processes has its own input data, output data, control influences, and mechanisms necessary for the operation of the process. The entire gesture recognition system is divided into the following three sub-processes: the image capture process; the image processing process; and the image analysis process. Image capture is the first sub-process of the entire system. At this stage, the input device converts the gesture into digital form and transfers it to the image processing unit. Image processing is the second sub-process in the entire system. At this stage, the image is processed in such a way that the outline of the hand is clearly visible. The result of the recognition of a Ukrainian sign language gesture depends on the result of this process. An incorrectly selected processing algorithm or an incorrectly set parameter of the selected algorithm (for example, the binarization threshold for the image binarization algorithm) will lead to poor-quality selection of the hand contour, which in turn will not allow accurate identification of the gesture.
After successful image processing, the final stage of gesture analysis follows. It is here that the image of the gesture is translated into text.</p><p>Among the described stages, the most important are contour selection and the actual recognition of gestures.</p></div>
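<div xmlns="http://www.tei-c.org/ns/1.0"><p>The three sub-processes of the decomposition diagram can be sketched as composed functions. This is an illustrative sketch only: all names are invented for the example, a single global binarization threshold is assumed in the processing stage, and the classifier is deliberately left as a stub supplied by the caller.</p><p>
```python
def capture(source):
    # Sub-process 1: the input device turns the gesture into digital form.
    return source()

def process(frame, threshold=128):
    # Sub-process 2: prepare the frame so the hand contour stands out.
    # A single global binarization threshold is assumed here; an
    # incorrectly chosen threshold degrades the extracted contour.
    return [[1 if p > threshold else 0 for p in row] for row in frame]

def analyze(binary_frame, classify):
    # Sub-process 3: translate the processed image into a text value.
    return classify(binary_frame)

def recognize(source, classify, threshold=128):
    # The whole decomposition: capture -> process -> analyze.
    return analyze(process(capture(source), threshold), classify)
```
</p><p>Each stage mirrors one block of the decomposition diagram, with its input, output, and a parameter playing the role of a control influence.</p></div>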
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Contour selection methods</head><p>To highlight the contour, a number of methods can be used: image binarization, wavelet transformation, and the Canny edge detector algorithm. In order to choose the optimal one for this task, we will analyze them. Binarization is the conversion of a color or grayscale image into a two-color, black-and-white one. The main parameter of this transformation is the threshold, against which the brightness of each pixel is compared. After the comparison, each image pixel is assigned one of two possible values: 0 ("object boundary") or 1 ("another area") <ref type="bibr" target="#b31">[32]</ref>. The main goal of binarization is to reduce the amount of information to be processed. Successful binarization greatly simplifies further work with the image. There are various methods of binarization, which can be conditionally divided into two groups: global (threshold) and local (adaptive). Global binarization methods work with the entire image at once. Threshold methods of binarization include: binarization by the lower threshold; upper-threshold binarization; double-constrained binarization; incomplete threshold processing; and multilevel boundary transformation <ref type="bibr" target="#b32">[33]</ref>.</p><formula xml:id="formula_1">F'(m, n) = 0, if F(m, n) ≥ t; 1, if F(m, n) &lt; t (1)</formula><p>If the first condition in formula (1) is fulfilled for an image point, then such a point is an object point; if the second condition is fulfilled, then the point is a background point. In some cases, a variant of the lower-threshold binarization method can be used <ref type="bibr" target="#b33">[34]</ref>, which results in a negative of the original image.
This method is called binarization with an upper threshold and is represented by the formula:</p><formula xml:id="formula_2">F'(m, n) = 0, if F(m, n) ≤ t; 1, if F(m, n) &gt; t (2)</formula><p>If it is necessary to highlight certain areas in which the brightness values of pixels can vary within a certain range, then the binarization method with a double constraint is used <ref type="bibr" target="#b34">[35]</ref>. This method is represented by the formula:</p><formula xml:id="formula_4">F'(m, n) = 0, if F(m, n) ≤ t1; 1, if t1 &lt; F(m, n) ≤ t2; 0, if F(m, n) &gt; t2 (3)</formula><p>If it is necessary to obtain the simplest image for further analysis, then it is worth applying the incomplete threshold processing algorithm, during which the image is deprived of the background with all its details that were in the original photo. Incomplete threshold binarization is represented by the formula:</p><formula xml:id="formula_6">F'(m, n) = F(m, n), if F(m, n) &gt; t; 0, if F(m, n) ≤ t (4)</formula><p>If an image containing segments with different brightness is needed, the method of multilevel threshold transformation can be applied. However, the image obtained during this transformation will no longer be binary <ref type="bibr" target="#b35">[36]</ref>.</p><p>The formula for this transformation is presented below:</p><formula xml:id="formula_7">F'(m, n) = 1, if F(m, n) ∈ D1; 2, if F(m, n) ∈ D2; …; n, if F(m, n) ∈ Dn; 0, in all other cases (5)</formula><p>The conducted analysis showed that, taking into account the peculiarities of the input information, it is advisable to use a single binarization threshold, which is used to divide the image into black and white. The result of the threshold binarization method is shown in Figure <ref type="figure">4</ref>.</p></div>
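<div xmlns="http://www.tei-c.org/ns/1.0"><p>Threshold rules (1)-(4) are easy to state directly in code. The sketch below is illustrative only: a grayscale image is assumed to be a list of pixel rows, and the function names are not taken from the described system.</p><p>
```python
def lower_threshold(img, t):
    # Formula (1): 0 where F(m,n) >= t, 1 otherwise.
    return [[0 if p >= t else 1 for p in row] for row in img]

def upper_threshold(img, t):
    # Formula (2): the negative of (1).
    return [[1 if p > t else 0 for p in row] for row in img]

def double_constraint(img, t1, t2):
    # Formula (3): select only the brightness band t1 .. t2.
    return [[1 if t2 >= p > t1 else 0 for p in row] for row in img]

def incomplete_threshold(img, t):
    # Formula (4): keep the original brightness above t, drop the background.
    return [[p if p > t else 0 for p in row] for row in img]
```
</p><p>The conclusion that a single binarization threshold suffices corresponds to the first of these rules, which is also the cheapest to compute per pixel.</p></div>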
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 4: An example of an image after conversion by the method of threshold binarization</head><p>Wavelet transforms are effectively used in signal compression and spectrum analysis <ref type="bibr" target="#b36">[37]</ref>. Virtually all wavelets are traditionally defined as functions of a single real variable. Depending on the mathematical model (the structure of the domain of definition, the structure of the domain of possible values, and the type of transformations), discrete and continuous wavelets are distinguished. Since the decomposition into wavelets is carried out using floating-point arithmetic, inaccuracies may occur, whose magnitude depends on the degree of approximation of the signal. Taking into account the specifics of the subject area, it is possible to use the Haar wavelet <ref type="bibr" target="#b37">[38]</ref>. Its technical drawback is that it is not continuous, and therefore not differentiable. However, this property is an advantage when analyzing signals with sudden transitions (discrete signals) that are inherent in this area. In the traditional setting, the wavelet transformation in the Haar basis consists in the linear transformation of a vector of even dimension into another vector of the same dimension. Each pixel of the image can be represented in the binary number system. This decomposition determines the number of bits (N, usually N = 1, 8, 24) and their specific values for storing each pixel <ref type="bibr" target="#b38">[39]</ref>.</p><formula xml:id="formula_8">J = ∑ (k=0 to N-1) J_k · 2^k (6)</formula><p>To apply the wavelet transformation over the field GF(p), each pixel of the image must be represented in some number system. This decomposition determines the number of digits of the number system and their specific values that are used in the wavelet transform.
The algorithm of the row-wise wavelet transformation of the image is carried out in the following stages: each pixel of the image is decomposed according to (6) into the digits of a number system with base p; the transformation is applied to all digits occupying the same position; the digits of the result are then recomposed into one number according to (6). Fig. <ref type="figure" target="#fig_3">5</ref> presents variants of the initial image and the image after row-wise wavelet transformation in the Haar basis over the fields GF(3) and GF(13). The Canny edge detector algorithm was developed taking into account such criteria as fast detection and good contour localization. Based on these criteria, an objective error-cost function was constructed, whose minimization yields the "optimal" linear operator for image convolution <ref type="bibr" target="#b39">[40]</ref>. In general, the Canny algorithm consists of five stages.</p><p>1. Smoothing. At this stage, the image is blurred using a Gaussian filter for localization and noise removal <ref type="bibr" target="#b40">[41]</ref>.</p><formula xml:id="formula_9">f(x, y) = 1/(2·π·σ²) · exp(−(x² + y²)/(2·σ²)) (7)</formula><p>2. Search for gradients. Boundaries are found where the gradient reaches its maximum value:</p><formula xml:id="formula_10">G = √(G_x² + G_y²) (8)</formula><formula xml:id="formula_11">θ = arctan(G_y/G_x) (<label>9</label>)</formula><p>The angle of the gradient vector direction is rounded and can take the values 0, 45, 90 or 135. If the angle is between 1 and 20, it is assigned the value 0; if it is greater than 20, the value 45, and so on.</p><p>3. Non-maximum suppression. Only local maxima are marked as edges. 4. Double threshold filtering. Potential edges are determined by thresholds. 5. 
Tracing the area of ambiguity. Final boundaries are determined by suppressing all edges not connected to certain (strong) boundaries.</p><p>Before applying the detector, the image is usually converted to shades of gray to reduce computational cost. The contour detector algorithm is not limited to calculating the gradient of the smoothed image. Only the points of maximum gradient are kept in the boundary contour, and all others lying next to the boundary are removed. The inclusion of noise suppression in the Canny algorithm increases the stability of the results on the one hand, but on the other hand increases the computational cost and can lead to distortion and even loss of contour detail. The result of the Canny algorithm is shown in Figure <ref type="figure" target="#fig_4">6</ref>. Comparing the results of image processing by the mentioned algorithms showed that the image binarization method works faster than the wavelet transformation and the Canny algorithm. It should be noted, however, that the clearest object boundaries are obtained with the Canny algorithm. Nevertheless, the image binarization method is quite sufficient for detecting a high-quality contour of the palm. In view of that, the image binarization method will be applied in further work.</p></div>
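The first two stages of the detector, smoothing per formula (7) and gradient computation per formulas (8)-(9), can be sketched in plain NumPy. This is an illustrative approximation, not the system's implementation: central differences stand in for the usual convolution masks, and the function names are chosen for the example.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # Formula (7): f(x, y) = exp(-(x^2 + y^2) / (2*sigma^2)) / (2*pi*sigma^2)
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return k / k.sum()  # normalized so overall brightness is preserved

def gradients(img):
    # Central differences as a stand-in for the gradient convolution masks
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:-1] = (img[:, 2:] - img[:, :-2]) / 2.0
    gy[1:-1, :] = (img[2:, :] - img[:-2, :]) / 2.0
    magnitude = np.hypot(gx, gy)                 # formula (8)
    direction = np.degrees(np.arctan2(gy, gx))   # formula (9)
    # Round the direction to the nearest of 0, 45, 90, 135 degrees
    direction = (np.round(direction / 45.0) * 45.0) % 180
    return magnitude, direction

img = np.zeros((4, 4))
img[:, 2:] = 1.0                  # a vertical step edge
mag, ang = gradients(img)
```

For the vertical step edge above, the magnitude peaks along the edge column and the rounded direction there is 0 degrees, which is exactly what the non-maximum suppression stage consumes.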
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Gesture recognition</head><p>There are many methods that can be used to recognize gestures; among the most common are methods based on the hidden Markov model and neural networks. The hidden Markov model <ref type="bibr" target="#b41">[42]</ref> is a statistical model in which the system for which it is created is represented as a Markov process with invisible states. The model can also be represented as the simplest Bayesian network. The main application of hidden Markov models has been in the recognition of images (gestures), speech and writing, and in bioinformatics. In addition, they are used in cryptanalysis and machine translation. The simplified structure of the hidden Markov model is represented by the following elements: ovals (variables with random values, namely, the random variable x(t) is the value of the hidden variable at time t, and the random variable y(t) is the value of the observed variable at time t) and arrows (which indicate conditional dependencies).</p><p>The probability of observing a sequence Y = y(0), y(1), …, y(L−1) of length L is determined by the dependence:</p><formula xml:id="formula_13">P(Y) = ∑_X P(Y|X)·P(X) (10)</formula><p>This modeling technology gained considerable popularity as a result of its successful application and further development in the field of automatic recognition of speech and gestures. In this field, hidden Markov models have outperformed competing approaches and remain the dominant processing paradigm. Their ability to describe processes or signals has been successfully studied for a long time. One reason for this, in particular, is that the technology of building artificial neural networks is rarely used for gesture recognition and similar segmentation problems. 
However, there are a number of hybrid systems that combine hidden Markov models and artificial neural networks and exploit the advantages of both modeling methods <ref type="bibr" target="#b42">[43]</ref>.</p><p>In general, hidden Markov models describe a two-stage stochastic process. The first stage is a discrete stochastic process that is stationary, causal, and simple. The state space is considered finite. Thus, the process probabilistically describes transitions within a discrete, finite state space. It can be visualized as a finite automaton with transitions between any pairs of states, each annotated with a transition probability. The behavior of the process at the current moment of time t depends only on the immediately preceding state and can be determined by the dependence:</p><formula xml:id="formula_14">P(S_t | S_1, S_2, …, S_(t−1)) = P(S_t | S_(t−1))<label>(11)</label></formula><p>At the second stage, for each moment of time t, an output O_t is additionally generated, by derivation or on the basis of output data. The associated probability distribution depends only on the current state S_t, not on any previous states or outputs.</p><formula xml:id="formula_15">P(O_t | O_1 … O_(t−1), S_1 … S_t) = P(O_t | S_t) (12)</formula><p>The specified sequence of output data is the only thing that can be observed in the behavior of the model. The state sequence assumed during data generation, by contrast, cannot be examined. This is the "hiddenness" from which the name of hidden Markov models is derived. When the model is viewed from the outside, that is, when its behavior is observed, one usually refers to the sequence of outputs O1, O2 … Ot as the observation sequence. 
Individual elements of this sequence are called observation results <ref type="bibr" target="#b43">[44]</ref>.</p><p>In the literature, the behavior of a hidden Markov model is always considered over a certain time interval t. To initialize the model at the beginning of this period, additional probabilities are used to describe the probability distribution of states at time t = 1. An equivalent final-state criterion is generally absent. Thus, the model terminates as soon as an arbitrary state is reached at time t. As for gesture recognition, in order to reliably determine the semantics of a movement, it must be assigned to one of the gesture classes. Next comes the stage of calculating the probability that the read gesture was produced by each of the models of available gestures. The gesture is then classified using a Bayesian classifier. Based on this classification, the gesture can be recognized as one of the available options.</p><p>The task of determining the end of a gesture is also not easy. For this, edge cases are considered <ref type="bibr" target="#b42">[43]</ref>. When using this classification algorithm, it is highly undesirable to obtain ambiguous values (data about movements that cannot be clearly attributed to a certain class of gestures). To reduce the number of errors in the algorithm described above, when situations arise that cannot be unambiguously attributed to a certain class, either a weighted sum of the consequences of performing all the gestures classified by the algorithm should be used, or the gesture classified with the highest probability should be selected.</p><p>As for neural networks, the main research and scientific results obtained in the field of their application to gesture recognition include various methods and architectures that make it possible to perform this task effectively. 
In general, an artificial neural network usually learns in a supervised manner, which implies the presence of a training set (dataset). Ideally, this set contains examples with true values: tags, classes, metrics. An artificial neural network consists of three components: an input layer, hidden (computing) layers, and an output layer <ref type="bibr" target="#b44">[45]</ref>. Neural network training takes place in two stages: forward propagation and error backpropagation. During forward propagation, a response prediction is made. During backpropagation, the error between the actual response and the predicted one is minimized. Initial weights are assigned randomly. Next, the input data are multiplied by the weights to form the hidden layer <ref type="bibr" target="#b45">[46]</ref>:</p><formula xml:id="formula_16">h_1 = (x_1·w_1) + (x_2·w_1) (13) h_2 = (x_1·w_2) + (x_2·w_2) (14) h_3 = (x_1·w_3) + (x_2·w_3) (15)</formula><p>The output data from the hidden layer is passed through a nonlinear function (activation function) to obtain the output of the network: y = f(h_1, h_2, h_3) (16). During the backpropagation of the error, the total error is calculated as the difference between the expected value from the training set and the value obtained at the forward-propagation stage, passed through the loss function. The derivative of the error is calculated for each weight (these differentials reflect the contribution of each weight to the total error). These differentials are then multiplied by the learning rate. The obtained result is subtracted from the corresponding weights. 
As a result, the following updated weights will be obtained:</p><formula xml:id="formula_17">w_1 = w_1 − (η · ∂(err)/∂(w_1)) (<label>17</label>)</formula><formula xml:id="formula_19">w_2 = w_2 − (η · ∂(err)/∂(w_2)) (<label>18</label>)</formula><formula xml:id="formula_21">w_3 = w_3 − (η · ∂(err)/∂(w_3)) (19)</formula><p>Summarizing the conducted analysis, it can be determined that among the priority areas of neural network application in the process of gesture recognition are:</p><p>• Deep neural networks (DNN). They are used to interact with complex and variable gestures. The application of deep learning allows important features to be identified automatically from a large amount of data <ref type="bibr" target="#b46">[47]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>Recurrent Neural Networks (RNN). Can be used to analyze time dependencies in gestures. This is especially useful when interacting with a sequence of gestures, for example, in the case of sign language recognition <ref type="bibr" target="#b47">[48]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>Convolutional Neural Networks (CNN). Effective in image processing and can be used to recognize spatial features in gestures, such as hand position or finger movement <ref type="bibr" target="#b48">[49]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>Transfer learning. There are examples of using the transfer learning technique for gesture recognition, especially when the amount of annotated data is limited <ref type="bibr" target="#b49">[50]</ref>. In summary, neural networks have been successfully used for gesture recognition in various contexts, including virtual reality, medical applications, and gaming industry. Considering that, the mechanism of neural networks will be used to implement the Ukrainian sign language recognition system.</p></div>
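Formulas (13)-(19) can be condensed into a small numeric sketch. Two assumptions go beyond the text and are made here only to make the gradient concrete: a squared-error loss and a sigmoid as the combining activation f. The shared weight per hidden node follows formulas (13)-(15) literally.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x1, x2, w):
    # Formulas (13)-(15): hidden node i uses a single weight w[i]
    # for both inputs, so h_i = (x1 * w_i) + (x2 * w_i)
    h = w * (x1 + x2)
    # Formula (16): a nonlinear activation over the hidden values
    # (f is assumed here to be a sigmoid of their sum)
    return h, sigmoid(h.sum())

def backprop_step(x1, x2, w, target, eta=0.5):
    # Formulas (17)-(19): w_i <- w_i - eta * d(err)/d(w_i),
    # with err = 0.5 * (y - target)^2 (assumed loss)
    h, y = forward(x1, x2, w)
    grad = (y - target) * y * (1.0 - y) * (x1 + x2)  # identical for each w_i
    return w - eta * grad

w = np.array([0.1, 0.2, 0.3])
w_new = backprop_step(0.5, 0.25, w, target=1.0)
# here y is below the target, so every weight increases slightly
```

Because each hidden node applies one weight to both inputs, the error derivative is the same for every weight in this simplified model; richer per-connection weight matrices behave analogously.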
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">System design</head><p>The next stage was the construction of the system using modern software tools. To implement the software product, it was decided to use the C# programming language and the cross-platform .NET technology. Visual Studio is used as the development environment. To work with a single-camera system and process the image for further analysis, the OpenCV library is used, namely its C# wrapper EmguCV. The Math.NET library is used to perform matrix operations. The constructed prototype system for recognizing Ukrainian sign language can be conditionally divided into several main independent parts: HandGesturesRecognitionForm, NeuralNetwork, CsvManager, TrainingImageDataManager.</p><p>HandGesturesRecognitionForm is the main class of the program; it contains methods for working with the form and for analyzing and processing images. The constructor of the HandGesturesRecognitionForm class initializes all components located on the form: fields, buttons, menus, switches. Next, an object of the VideoCapture class is created, a class from the EmguCV library designed to capture an image from the device's camera. The Rectangle object is used to position and size the red rectangle in the video from which the gesture will be read for recognition. The result of the recognition zone reproduction is presented in Fig. <ref type="figure" target="#fig_5">7</ref>. NeuralNetwork is a class that represents a neural network; it contains information about the number of nodes of the input, output and hidden layers, the learning rate, the matrix of weights between the input and hidden layers, and the matrix of weights between the hidden and output layers. To create it, the following mandatory parameters must be set:</p><p>• inputLayerNodesCount -the number of input layer nodes. 
In this case, a value of 4096 is passed, which corresponds to the value of each pixel of the 64 by 64 binary image.</p><p>• hiddenLayerNodesCount -the number of hidden layer nodes. • outputLayerNodesCount -the number of output layer nodes. • learningRate is a learning rate, a parameter of gradient learning methods of neural networks, which allows you to control the amount of weight correction at each iteration.</p><p>• epochs -the number of steps (epochs) required to find the optimal value. CsvManager is a class responsible for saving the training set to a csv file. Contains private file name and save path information.</p><p>TrainingImageDataManager is a class that is responsible for saving pictures for neural network training. It contains private information about how photos are saved.</p><p>Let's take a closer look at the implementation of some key methods of the class. The EmguCV Threshold library method is used for binarization. The method accepts a binarization threshold that is obtained from a control of the form binaryImageThresholdTrackBar. The result of image binarization is shown in Figure <ref type="figure" target="#fig_6">8</ref>. The DrawContours library method is used to select the contour. When the method is called, the contour color and its thickness are set. The result of the method is shown in Figure <ref type="figure" target="#fig_7">9</ref>.   Therefore, the weights are selected from a normal distribution centered at zero and with a standard deviation whose value is inversely proportional to the square root of the number of input nodes. The Query method accepts the input data of the neural network as an argument and returns its output data. To do this, signals from the input layer nodes must be passed through the hidden layer to the output layer nodes to receive the output data. 
At the same time, as the signals propagate, they must be weighted by the coefficients of the connections between the relevant nodes and passed through the sigmoid activation function. As a result of the work, a prototype of the application was developed, which is able to recognize gestures of the alphabet of Ukrainian sign language. For clarity, the program outputs the result at each iteration, starting with the raw video and ending with the recognition result in the form of a gesture value. It all starts with video capture. An example of a frame from the original, unprocessed video stream and its binarized version is shown in Fig. <ref type="figure" target="#fig_11">12</ref>. After selecting the working surface, it is necessary to reduce it to a square image, since the selected largest contour is not always a square, as shown in Figure <ref type="figure" target="#fig_12">13</ref>. Since the neural network contains 4096 input nodes, the final image is reduced to a size of 64 by 64 pixels. After capturing the image and clicking the "Recognize the gesture" button, the recognition settings panel looks as shown in Fig. 14. The text value of the gesture is displayed in the "Recognition result" text field. The program successfully recognized the demonstrated gesture and displayed its explanation on the screen.</p></div>
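The preprocessing chain just described (binarize, crop the detected region, square it, scale it to 64 by 64, and flatten it into the 4096-element input vector) can be sketched in NumPy. The prototype itself does this with EmguCV in C#; the helper below is a simplified stand-in that crops to the bounding box of the bright pixels instead of a detected contour, and its name and threshold are illustrative.

```python
import numpy as np

def preprocess(frame, t=127, size=64):
    """Binarize, crop to the gesture's bounding box, pad to a square,
    and resize (nearest neighbour) to size x size = 4096 inputs."""
    binary = (frame > t).astype(np.uint8)
    ys, xs = np.nonzero(binary)
    if ys.size == 0:                      # empty frame: nothing to crop
        crop = binary
    else:
        crop = binary[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    h, w = crop.shape
    side = max(h, w)                      # pad the shorter side to a square
    square = np.zeros((side, side), dtype=np.uint8)
    square[:h, :w] = crop
    idx = (np.arange(size) * side) // size  # nearest-neighbour sampling grid
    resized = square[np.ix_(idx, idx)]
    return resized.flatten()              # 4096-element input vector

frame = np.zeros((120, 160), dtype=np.uint8)
frame[30:90, 40:100] = 255                # a bright square standing in for the palm
vec = preprocess(frame)
print(vec.shape)                          # (4096,)
```

The flattened vector is exactly the shape expected by a network with inputLayerNodesCount = 4096.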
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conclusion</head><p>As a result of the conducted research, the existing methods and known systems that provide means of recognizing Ukrainian sign language were analyzed, together with the mechanisms of their implementation. Technologies and software tools for sign language recognition were analyzed, which made it possible to identify the features of existing approaches. As the analysis showed, many software systems exist today, but all of them have certain shortcomings, ranging from commercial licensing to the inability to recognize Ukrainian-language content, which makes the task of constructing an information system for the recognition of Ukrainian sign language urgent. In order to present the main aspects of the studied subject area, a scheme was finalized that reflects the main stages that must be implemented in a gesture recognition system. The next stage was the design of the software system using a structural approach, with the created diagrams presented in accordance with the IDEF0 standard. The study presents a context diagram and its decomposition, which created the basis for studying the features and forming the methodological foundations for constructing the information system. The choice of methods for contour selection and gesture recognition in the incoming information message was analyzed and justified. The developed prototype is characterized by modular construction and the ability to recognize gestures of the Ukrainian alphabet, and can be useful as an additional communication tool. 
The conducted research provides methodological and algorithmic foundations for building a communication environment for people with special needs.</p><p>Further research will be directed to testing and improving the system, eliminating conflicts and expanding functionality in accordance with the specified requirements.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Until recently, the attitude towards it in different countries ranged from introducing it for learning in</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Context diagram of the designed system</figDesc><graphic coords="6,133.08,72.00,328.84,228.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Decomposition diagram of the system</figDesc><graphic coords="7,121.35,97.79,352.29,245.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Wavelet transform by rows in the Haar basis over the fields GF(3) and GF(13)</figDesc><graphic coords="9,84.77,110.69,199.78,143.15" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Contour selection using the Canny algorithm</figDesc><graphic coords="10,130.15,72.00,334.70,146.19" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Selection of the handGestureArea outline</figDesc><graphic coords="12,186.20,490.42,206.08,202.14" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Image binarization</figDesc><graphic coords="13,158.60,268.47,277.80,138.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: Selection of the contour of the palm</figDesc><graphic coords="13,158.60,458.61,277.80,138.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: Selection of the largest contour</figDesc><graphic coords="14,158.60,97.79,277.80,138.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Structure of the NeuralNetwork class</figDesc><graphic coords="14,215.50,313.72,164.00,108.20" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head></head><label></label><figDesc>to apply the sigmoid to reduce the output signals of the nodes. To obtain the output signals of the hidden layer, it is necessary to apply them to each sigmoid value. Training includes two phases: the first is the calculation of the output signal, which is what the Query function does, and the second is the backpropagation of errors, which informs what the corrections to the weighting factors should be. The first part is the calculation of output signals for a given training example. The second part is a comparison of the calculated output signals with the desired response and updating the weighting coefficients of connections between nodes based on the differences found.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: The original frame and its binarized version</figDesc><graphic coords="15,102.20,72.00,164.95,164.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 13 :</head><label>13</label><figDesc>Figure 13: Contour of the palm and selection of the largest contour</figDesc><graphic coords="15,103.70,301.48,161.45,161.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_13"><head>Figure 14 :</head><label>14</label><figDesc>Figure 14: A fragment of the recognition panel</figDesc><graphic coords="15,119.95,553.75,355.06,190.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="5,81.00,72.00,432.52,160.10" type="bitmap" /></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Machine translation from signed to spoken languages: state of the art and challenges</title>
		<author>
			<persName><forename type="first">M</forename><surname>Coster</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Shterionov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Herreweghe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dambre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Universal Access in the Information Society</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="1" to="27" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<ptr target="https://www.who.int/news-room/fact-sheets/detail/deafness-and-hearing-loss" />
		<title level="m">World Health Organization</title>
				<imprint/>
	</monogr>
	<note>Deafness and hearing loss</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">A survey on Sign Language machine translation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Núñez-Marcos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Perez-De-Viñaspre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Labaka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">213</biblScope>
			<biblScope unit="page" from="1" to="28" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<author>
			<persName><forename type="first">N</forename><surname>Adaloglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chatzis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Papastratis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Stergioulas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zacharopoulou</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2007.12530</idno>
		<title level="m">A comprehensive study on sign language recognition methods</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Sign Language: History of Research</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mcburney</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Encyclopedia of Language &amp; Linguistics</title>
				<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="310" to="318" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Multimodal Human Discourse: Gesture and Speech</title>
		<author>
			<persName><forename type="first">F</forename><surname>Quek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mcneill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bryll</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Duncan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kirbas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Mccullough</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ansari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Computer-Human Interaction</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="171" to="193" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Skeleton aware multi-modal sign language recognition</title>
		<author>
			<persName><forename type="first">S</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition</title>
				<meeting>the IEEE/CVF Conference on Computer Vision and Pattern Recognition</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="3413" to="3423" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">A Extensive Survey on Sign Language Recognition Methods</title>
		<author>
			<persName><forename type="first">R</forename><surname>Minu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 7th International Conference on Computing Methodologies and Communication (ICCMC)</title>
				<meeting>the 2023 7th International Conference on Computing Methodologies and Communication (ICCMC)<address><addrLine>Erode, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-02">February 2023</date>
			<biblScope unit="page" from="613" to="619" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Specifics of Designing and Construction of the System for Deep Neural Networks Generation</title>
		<author>
			<persName><forename type="first">O</forename><surname>Mediakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Computational Linguistics and Intelligent Systems 2022 : Proceedings of the 6th International conference on computational linguistics and intelligent systems (COLINS 2022)</title>
				<meeting><address><addrLine>Gliwice, Poland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-05-12">2022. May 12-13, 2022</date>
			<biblScope unit="volume">3171</biblScope>
			<biblScope unit="page" from="1282" to="1296" />
		</imprint>
	</monogr>
	<note>Main conference</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Neural Machine Translation Methods for Translating Text to Sign Language Glosses</title>
		<author>
			<persName><forename type="first">D</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Czehmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Avramidis</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="12523" to="12541" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Sign language recognition using Microsoft Kinect</title>
		<author>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Thakur</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the sixth International Conference on Contemporary Computing (IC3)</title>
				<meeting>the sixth International Conference on Contemporary Computing (IC3)<address><addrLine>Noida, India</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="181" to="185" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Multi-sensor data fusion for sign language recognition based on dynamic Bayesian network and convolutional neural network</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Huan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimed Tools Appl</title>
		<imprint>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="15335" to="15352" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A Real-time Hand Gesture Recognition System for Human-Computer and Human-Robot Interaction</title>
		<author>
			<persName><forename type="first">V</forename><surname>Ponzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Iacobelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Napoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Starczewski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference of Yearly Reports on Informatics, Mathematics, and Engineering</title>
				<meeting>the International Conference of Yearly Reports on Informatics, Mathematics, and Engineering<address><addrLine>Catania, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">August 26-29, 2022</date>
			<biblScope unit="page" from="52" to="58" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">3D Dynamic Hand Gesture Recognition with Fused RGB and Depth Images</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Qingshan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wenjie</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 3rd International Conference on Big Data &amp; Artificial Intelligence &amp; Software Engineering, Virtual Event</title>
				<meeting>the 2022 3rd International Conference on Big Data &amp; Artificial Intelligence &amp; Software Engineering, Virtual Event<address><addrLine>Guangzhou, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">October 21-23, 2022</date>
			<biblScope unit="page" from="38" to="44" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Hand gesture recognition using convolutional neural network and histogram of oriented gradients features</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kika</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 3rd International Conference on Recent Trends and Applications in Computer Science and Information Technology</title>
				<meeting>the 3rd International Conference on Recent Trends and Applications in Computer Science and Information Technology<address><addrLine>Tirana, Albania</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-11-23">November 23-24, 2018</date>
			<biblScope unit="page" from="75" to="79" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Mid-air Gesture Recognition by Ultra-Wide Band Radar Echoes</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sluÿters</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Workshops on Engineering Interactive Computing Systems (EICS-WS 2022) co-located with the 14th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (SIGCHI 2022)</title>
				<meeting>the Workshops on Engineering Interactive Computing Systems (EICS-WS 2022) co-located with the 14th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (SIGCHI 2022)<address><addrLine>Sophia Antipolis, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-06-21">June 21, 2022</date>
			<biblScope unit="page" from="28" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Specialized interactive methods for using data on radar application models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Lytvyn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd International Workshop on Modern Machine Learning Technologies and Data Science</title>
				<meeting>the 2nd International workshop on modern machine learning technologies and data science<address><addrLine>MoMLeT+DS; Lviv-Shatsk, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020-06-02">June 2-3, 2020</date>
			<biblScope unit="volume">I</biblScope>
			<biblScope unit="page" from="1" to="11" />
		</imprint>
	</monogr>
	<note>Main conference</note>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<ptr target="https://onp.ucoz.ua/news/navchalni_zaklady_dlja_gluxyx_v_dorevoljuciyniy_period/2013-06-13-51" />
		<title level="m">Association of deaf teachers</title>
				<imprint/>
	</monogr>
	<note>pre-revolutionary period</note>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Kulbida</surname></persName>
		</author>
		<title level="m">Gesture bilingual approach in the practice of special institutions of Ukraine. Special child: training and education</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="7" to="18" />
		</imprint>
	</monogr>
	<note type="report_type">N 3</note>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Sign Language of the Deaf</title>
		<author>
			<persName><forename type="first">R</forename><surname>Kraevskyi</surname></persName>
		</author>
		<editor>P.</editor>
		<imprint>
			<date type="published" when="1964">1964</date>
			<biblScope unit="page">220</biblScope>
		</imprint>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Features of socio-cultural communication of sign language people in the educational process</title>
		<author>
			<persName><forename type="first">N</forename><surname>Adamyuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Abstracts of XXX International Scientific and Practical Conference Interaction Of Society And Science: Problems And Prospects</title>
				<meeting><address><addrLine>London, England</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">June 15-18, 2021</date>
			<biblScope unit="page" from="307" to="311" />
		</imprint>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Psychophysiological features of the formation of the lexical competence of national verbal languages among students with hearing impairments</title>
		<author>
			<persName><forename type="first">O</forename><surname>Drobot</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Education of persons with special needs: ways of development</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="67" to="77" />
		</imprint>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">A competent approach in the training of deaf-pedagogical personnel</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kulbida</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Modern technologies for the development of professional skills of future teachers: Proceedings of the First International Internet Conference</title>
				<meeting><address><addrLine>Uman</addrLine></address></meeting>
		<imprint>
			<publisher>FOP</publisher>
			<date type="published" when="2017-10-26">October 26, 2017</date>
			<biblScope unit="page" from="122" to="124" />
		</imprint>
	</monogr>
	<note>in Ukrainian</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Information technology for Ukrainian Sign Language translation based on ontologies</title>
		<author>
			<persName><forename type="first">O</forename><surname>Lozynska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Davydov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Econtechmod. An International Quarterly Journal</title>
		<imprint>
			<biblScope unit="volume">04</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="13" to="18" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Approach to a Subject Area Ontology Visualization System Creating</title>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS-2021). Volume I: Main Conference</title>
				<meeting>the 5th International Conference on Computational Linguistics and Intelligent Systems (COLINS-2021). Volume I: Main Conference<address><addrLine>Kharkiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">April 22-23, 2021</date>
			<biblScope unit="volume">2870</biblScope>
			<biblScope unit="page" from="528" to="540" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Natural language processing: state of the art, current trends and challenges</title>
		<author>
			<persName><forename type="first">D</forename><surname>Khurana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Koli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Khatter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimed Tools Appl</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<biblScope unit="page" from="3713" to="3744" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Bragg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Koller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bellard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Berke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Boudrealt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Braffort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Caselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Huenerfauth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kacorri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Verhoef</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1908.08597</idno>
		<title level="m">Sign language recognition, generation, and translation: An interdisciplinary perspective</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="16" to="31" />
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">A Review of Pattern Recognition and Machine Learning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Adugna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Haldorai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine and Computing</title>
		<imprint>
			<biblScope unit="page" from="210" to="220" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Artificial Intelligence Models in Pattern Recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Solanki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Handbook of Artificial Intelligence Applications for Industrial Sustainability</title>
				<imprint>
			<publisher>CRC Press</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="18" to="36" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">The category of quantity in signs of Ukrainian Sign Language</title>
		<author>
			<persName><forename type="first">A</forename><surname>Zamsha</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International scientific conference &quot;Current trends and fields of philological studies in the challenging reality&quot;</title>
				<meeting>the International scientific conference &quot;Current trends and fields of philological studies in the challenging reality&quot;<address><addrLine>Riga, the Republic of Latvia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">July 29-30, 2022</date>
			<biblScope unit="page" from="268" to="270" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Peculiarities of an Information System Development for Studying Ukrainian Language and Carrying out an Emotional and Content Analysis</title>
		<author>
			<persName><forename type="first">T</forename><surname>Basyuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vasyliuk</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th International Conference on Computational Linguistics and Intelligent Systems</title>
				<meeting>the 7th International Conference on Computational Linguistics and Intelligent Systems<address><addrLine>Kharkiv, Ukraine</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-04-20">April 20-21, 2023</date>
			<biblScope unit="volume">3396</biblScope>
			<biblScope unit="page" from="279" to="294" />
		</imprint>
	</monogr>
	<note>Computational Linguistics Workshop</note>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Document Image Binarization Process</title>
		<author>
			<persName><forename type="first">M</forename><surname>Prodan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-A</forename><surname>Boiangiu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">BRAIN. Broad Research in Artificial Intelligence and Neuroscience</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="93" to="114" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Ensemble of Steerable Local Neighbourhood Greylevel Information for Binarization</title>
		<author>
			<persName><forename type="first">F</forename><surname>Kasmin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Abdullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Prabuwono</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">98</biblScope>
			<biblScope unit="page" from="8" to="15" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Novel Adaptive Binarization Method for Degraded Document Images</title>
		<author>
			<persName><forename type="first">S</forename><surname>Abdullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ismail</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hasan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shivakumara</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers, Materials &amp; Continua</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="3815" to="3832" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Double-Constraint Inpainting Model of a Single-Depth Image</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data, Signal and Image Processing and Applications in Sensors</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">6</biblScope>
			<biblScope unit="page" from="345" to="364" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Multilevel Thresholding for Image Segmentation Using Mean Gradient</title>
		<author>
			<persName><forename type="first">A</forename><surname>Abubakar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Electrical and Computer Engineering</title>
		<imprint>
			<biblScope unit="volume">2022</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="1" to="9" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Signal Processing Algorithm Based on Discrete Wavelet Transform</title>
		<author>
			<persName><forename type="first">A</forename><surname>Osadchiy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kamenev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Saharov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chernyi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Designs</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="issue">41</biblScope>
			<biblScope unit="page" from="1" to="13" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Haar Wavelet Downsampling: A Simple but Effective Downsampling Module for Semantic Segmentation</title>
		<author>
			<persName><forename type="first">X</forename><surname>Guoping</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wentao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Xinwei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Xinglong</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">143</biblScope>
			<biblScope unit="page" from="678" to="689" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">The Haar Wavelet Transformation</title>
		<author>
			<persName><forename type="first">P</forename><surname>Fleet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computer Engineering</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="125" to="181" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">A modified Canny edge detector based on weighted least squares</title>
		<author>
			<persName><forename type="first">X</forename><surname>Qin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Statistics</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="641" to="659" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Fuzzy inference based edge detection system using Sobel and Laplacian of Gaussian operators</title>
		<author>
			<persName><forename type="first">J</forename><surname>Patel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Patwardhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sankhe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Kumbhare</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference &amp; Workshop on Emerging Trends in Technology</title>
				<meeting>the International Conference &amp; Workshop on Emerging Trends in Technology</meeting>
		<imprint>
			<date type="published" when="2011-02">February 2011</date>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="694" to="697" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Hidden Markov Models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Franzese</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Iuliano</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Encyclopedia of Bioinformatics and Computational Biology</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="753" to="762" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Recurrent Neural Hidden Markov Model for High-order Transition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Hiraoka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Takase</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Uchiumi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Keyaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Asian and Low-Resource Language Information Processing</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1" to="15" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Evaluation of hidden Markov models using deep CNN features in isolated sign recognition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Tur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Keles</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimed Tools Appl</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="19137" to="19155" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">Hand Gesture Recognition for Doors with Neural Network</title>
		<author>
			<persName><forename type="first">H</forename><surname>Ahn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Research in Adaptive and Convergent Systems (RACS &apos;17)</title>
				<meeting>the International Conference on Research in Adaptive and Convergent Systems (RACS &apos;17)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="15" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">Hand Gesture Recognition using Neural Networks</title>
		<author>
			<persName><forename type="first">G</forename><surname>Murthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jadon</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 IEEE 2nd International Advance Computing Conference (IACC)</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="134" to="138" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">Radar micro moving gesture recognition method based on multi-scale fusion deep network</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Bao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 5th International Conference on Artificial Intelligence and Pattern Recognition (AIPR &apos;22)</title>
				<meeting>the 2022 5th International Conference on Artificial Intelligence and Pattern Recognition (AIPR &apos;22)</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="657" to="663" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Deep Recurrent Neural Network Approach with LSTM Structure for Hand Movement Recognition Using EMG Signals</title>
		<author>
			<persName><forename type="first">H</forename><surname>Alimam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Mohamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Selmy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 12th International Conference on Software and Information Engineering (ICSIE &apos;23)</title>
				<meeting>the 2023 12th International Conference on Software and Information Engineering (ICSIE &apos;23)</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="58" to="65" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">Gesture Recognition with Complex Background Based on Improved Convolutional Neural Network</title>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Jiang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 5th International Conference on Electronic Information Technology and Computer Engineering (EITCE &apos;21)</title>
				<meeting>the 2021 5th International Conference on Electronic Information Technology and Computer Engineering (EITCE &apos;21)</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1345" to="1349" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Multi-View Fusion for Sign Language Recognition through Knowledge Transfer Learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Xue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Feng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry</title>
				<meeting>the 18th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1" to="9" />
		</imprint>
	</monogr>
	<note>VRCAI &apos;22</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
