<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>CITI'</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Method of Commands Identification to Voice Control of the Electric Wheelchair</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vasil Dozorskyi</string-name>
          <email>vasildozorskij1985@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Iryna Dediv</string-name>
          <email>iradediv@gmail.com</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Sofiia Sverstiuk</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vyacheslav</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrii</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ternopil Ivan Puluj National Technical University</institution>
          ,
          <addr-line>Ruska str., 56, Ternopil, 46001</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Ternopil National Pedagogical University</institution>
          ,
          <addr-line>2 Maxyma Kryvonosa St., Ternopil, 46027</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <fpage>14</fpage>
      <lpage>16</lpage>
      <abstract>
        <p>The paper shows the relevance of improving electric wheelchair control systems for patients with lost or functionally limited upper limbs. A method of indirect control using voice commands is proposed. It relies on a biometric principle: during voice signal analysis, features that are biometric parameters of the patient's speech are extracted, so that the control system does not respond to similar commands registered from other people. The main such feature is the value of the main tone frequency. To process voice signals and identify the four main control commands (forward, backward, left and right), the sliding window method is used, and within each window the presence of signs of the main tone is evaluated. For a specific person this value is individual and is present only in those parts of the voice signal that correspond to vowels and voiced consonants. In this way the voice signal can be segmented into such areas, and individual voice commands can be identified from the preset durations of these areas. A threshold function is also used: it takes a fixed value when the main tone frequency is present in the voice signal within a given position of the sliding window and equals zero when there are no signs of this frequency. As a result, individual commands can be identified with higher accuracy.</p>
        <p>Keywords: electric wheelchair, voice control, sliding window, main tone frequency.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>The development of voice control systems is an urgent task. Such systems are useful for working with computerized systems: voice typing, control of electronic devices in the home, etc. This technology is especially relevant for people with disabilities, in particular with disorders of the musculoskeletal system.</p>
      <p>
        Thus, according to the World Health Organization [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], as of 2020, more than 15% of the world's population had some form of disability. According to official statistics in Ukraine [
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ], as of January 1, 2021, there were 2,703,000 people with disabilities in Ukraine (according to the State Statistics Service), including 163,900 children. Official figures also indicate 222.3 thousand people with disabilities of the first group, 900.8 thousand of the second group, and 1 million 416 thousand of the third group. Because of the war in Ukraine, the number of people with disabilities has increased significantly over the past year, even though no official data are available under the conditions of martial law. At the same time, it is very important for people with limited physical abilities to feel like equal members of society and to be able to move freely.
      </p>
      <p>
        Today, the industry of goods for people with disabilities offers special means of transportation: wheelchairs [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. There are several types of wheelchairs, and the choice depends on the degree of the patient's impaired mobility. In the past, people with partial mobility could only use mechanical devices, but today designers try to do everything possible to make life easier for patients with disabilities, and preference is given to wheelchairs with an electric drive. Such devices are more comfortable to use: to activate them, it is enough to move the joystick. A small movement suffices, and a person sitting in such a wheelchair can start moving independently without waiting for help.
      </p>
      <p>Using an electric wheelchair does not require much effort. This makes such means of transportation popular among people with serious illnesses or after injuries: weakness of the lower or upper limbs or of the body as a whole; cardiovascular diseases; spinal injuries resulting in leg problems; paralysis or paresis of the legs or the whole body; limb amputation. At the same time, limb tremors, partial paralysis or amputations may limit the use of such wheelchairs, making it difficult or even impossible for the patient to control the wheelchair with a joystick.</p>
      <p>
        However, with the development of information technologies and innovative developments, attempts are being made to create alternative methods of wheelchair control. For example, Intel, together with the Brazilian startup Hoobox Robotics, presented Wheelie 7, a smart wheelchair whose movements can be controlled by changing facial expressions [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. The wheelchair uses a software system with a camera that recognizes ten facial expressions, each of which the chair owner can match with a specific control command.
      </p>
      <p>
        Wheelchairs controlled by the mind are also under active development [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], but they require a long time for each individual user to learn and adapt.
      </p>
      <p>Another example is controlling an electric wheelchair using signals from sensors that register active muscle contractions (for example, Stephen Hawking's wheelchair, also developed by Intel, whose attached on-board computer was controlled almost entirely by cheek muscle contractions) or from cameras recording eye movements.</p>
      <p>
        Similar technologies include the "Vocalize" technology described in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Here, a method of voice control of a smartphone or mobile phone is proposed for disabled people using electric wheelchairs. With voice recognition, Vocalize constantly listens for keywords, eliminating the need to press any buttons before or while using the mobile phone. It offers new and improved voice control features with improved recognition quality. The device is attached to a wheelchair and communicates with a mobile phone wirelessly over Bluetooth.
      </p>
      <p>
        In this research, an attempt is made to develop a prototype voice control system for an electric wheelchair: in particular, to develop the operating algorithms of such a system and appropriate methods of processing voice signals to form the necessary command signals, taking into account non-standard and dangerous situations. Some such situations are described in [
        <xref ref-type="bibr" rid="ref8 ref9">8,9</xref>
        ].
      </p>
      <p>In similar systems used today in the field of computer control, voice signals are processed and features are extracted from their structure, which are then used to identify or distinguish individual commands, i.e. certain keywords. For this, features of the phonemic composition of the speech, amplitude and time parameters of the voice signal, its composition, and transitions between phonemes, syllables or words can be used.</p>
      <p>Creating such a system involves a number of problems. First, there is no mathematical model of the semantics of a speech signal: only probabilistic and heuristic methods can be used to determine the semantics, and these do not give an exact result, with an accuracy inversely proportional to the number of semantic units. Second, the speaker's individual characteristics: specifics of pronunciation, accents, etc. Third, working with spontaneous speech and the need to detect the presence of a keyword. Fourth, differences in the acoustic environment and noise.</p>
      <p>Thus, the proposed system must take all possible risk factors into account. In particular, it must recognize the patient's voice commands reliably, remain insensitive to similar voice signals registered from other people, and identify only those voice signals that are actually commands for the system, without reacting to command-like words uttered by the patient in conversation that do not relate to driving the wheelchair. Taking these and similar factors into account excludes the possibility of a spontaneous start or, conversely, a spontaneous stop of a control command, either of which could create dangerous situations for both the patient and others. For example, if the wheelchair has been commanded to move forward, it should not stop in the middle of the road when the patient, in the course of a conversation, utters a word that happens to mean the stop command.</p>
      <p>
        Thus, the information system we are developing should be tunable exclusively to the acoustic parameters and characteristics of the patient's speech, ignoring the voice signals of people who may be talking nearby. Systems of this type belong to the class of biometric systems [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ].
      </p>
      <p>Usually, the recognition process in a biometric system goes through two stages:
• Learning: a physical or behavioral sample is stored by the system in a database, unique information is extracted, and a "print" of the voice is created;</p>
      <p>• Comparison: the presented sample is compared with the original, and an answer is formed based on the result.</p>
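      <p>As a rough illustration of these two stages, the sketch below stores a minimal voice "print" at the learning stage and checks a new utterance against it at the comparison stage. The function names, the use of the main tone frequency as the only enrolled feature, and the tolerance rule are illustrative assumptions, not the implementation described in this paper.</p>

```python
import numpy as np

def enroll(f0_samples):
    """Learning stage: build a minimal voice "print" from main tone
    frequency estimates collected while the user repeats the commands."""
    samples = np.asarray(f0_samples, dtype=float)
    return {"f0_mean": float(samples.mean()), "f0_std": float(samples.std())}

def matches(voice_print, f0, n_sigma=3.0):
    """Comparison stage: accept an utterance only if its main tone
    frequency falls inside the enrolled speaker's range."""
    tol = max(5.0, n_sigma * voice_print["f0_std"])
    return abs(f0 - voice_print["f0_mean"]) <= tol
```

      <p>A real system would enroll several features and a distance measure over them; a single frequency with a fixed tolerance is only the smallest possible example of the learn-then-compare pattern.</p>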
      <p>
        The principles of operation of information systems and complex cyber-physical systems in medical applications were also analyzed [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11,12,13</xref>
        ]. Accordingly, the operation of the proposed system should be based on the analysis of biometric parameters of the human voice [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. For this purpose, a method of processing voice signals and algorithms for identifying individual commands were developed.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Materials and Methods</title>
      <p>The essence of the voice signal processing method for selecting command words and forming control signals from them is as follows. Four commands need to be recognized: "left", "right", "back" and "forward". The system was designed to work with the Ukrainian language, so the following four commands (in transliteration) were recognized: [vlivo], [vpravo], [vzad], [vpered].</p>
      <p>At the first stage, the user speaks these words into the microphone; the information system processes them and extracts informative features, which will be used to identify these commands at the next stage.</p>
      <p>First, the signal is recorded, then it is filtered using adaptive methods. Next, the length of each command word is determined. At the next stage, each word is segmented into phonemes, and at the last step the areas with signs of vowels and voiced consonants are determined. All four commands differ in their phonemic composition in Ukrainian. The first command contains 5 phonemes, of which 2 are vowels and 3 are voiced consonants. The second command contains 6 phonemes, of which 2 are vowels, 3 are voiced consonants, and one is voiceless. The third command contains 4 phonemes, with one vowel and three voiced consonants. The fourth command contains 6 phonemes, with two vowels, one voiceless consonant and three voiced consonants. The alternation of phonemes in the commands also differs. Therefore, for phoneme segmentation, the value of the main tone frequency, which is also the main biometric characteristic of each person, is used as an informative feature.</p>
      <p>
        At the first stage, the time limits of each command are identified. For this, a sliding window is used, within which the average value of the voice signal is calculated [
        <xref ref-type="bibr" rid="ref15 ref16 ref17">15,16,17</xref>
        ]. This value is then compared with a threshold: if it is equal to or exceeds the threshold, the value of the threshold function h is taken as 1, otherwise as 0. The algorithm of this processing is shown in Figure 1, a.
      </p>
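      <p>The sliding-window averaging and the threshold function h can be sketched as follows. This is a minimal illustration: the 5 ms window is taken from Section 3, while the amplitude threshold, sampling rate and function names are assumptions rather than this paper's calibrated settings.</p>

```python
import numpy as np

def threshold_function(signal, fs, win_s=0.005, threshold=0.02):
    """Slide a non-overlapping window over the signal and compare the mean
    absolute amplitude inside each window with a threshold: h = 1 where the
    mean reaches the threshold, h = 0 elsewhere (one value per window)."""
    win = max(1, int(win_s * fs))
    n_windows = len(signal) // win
    h = np.zeros(n_windows, dtype=int)
    for i in range(n_windows):
        frame = signal[i * win:(i + 1) * win]
        if np.mean(np.abs(frame)) >= threshold:
            h[i] = 1
    return h
```

      <p>The first and last windows with h = 1 then give the time limits of the command.</p>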
      <p>At the next stage, the main tone frequency is determined; it is an individual characteristic and is present in vowels and voiced consonant phonemes. For this, the method of formant analysis was used, in which the main tone frequency corresponds to the frequency of the first maximum in the power spectrum. This processing was also carried out within a sliding window. The algorithm is shown in Figure 1, b.</p>
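      <p>A minimal sketch of this spectral estimate is given below: each window is Hann-weighted, its power spectrum is computed, and the frequency of the dominant maximum inside a plausible voice-pitch band is taken as the main tone frequency. The search band, the window weighting and the weak-peak rule for rejecting unvoiced frames are our assumptions, not the paper's parameters.</p>

```python
import numpy as np

def main_tone_frequency(frame, fs, fmin=70.0, fmax=350.0):
    """Estimate the main tone frequency of one analysis window as the
    location of the dominant maximum of its power spectrum, searched inside
    a plausible pitch band; return 0.0 for frames with no clear peak."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame)))) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    band = (freqs >= fmin) & (freqs <= fmax)
    if spectrum[band].max() < 1e-6 * len(frame):
        return 0.0  # too weak a peak: treat the window as unvoiced
    return float(freqs[band][np.argmax(spectrum[band])])
```

      <p>Note that the window must be long enough for the FFT bin spacing fs/N to resolve the pitch, so a longer analysis window than the 5 ms used for amplitude detection is assumed here.</p>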
      <p>At the last stage, the command is identified. For this, the unknown command is processed within the sliding window as follows. The presence of the main tone frequency is assessed; it will be present only in those sections of the command that correspond to vowels and voiced consonants. Intervals in which this frequency is present are then formed, and the command is identified from their sequence and size, since the sequences and durations of these sections differ for the four considered commands. Once the commands are identified, generating control signals is not a difficult task. The command recognition algorithm is shown in Figure 1, c.</p>
      <p>Figure 1: The algorithm for finding the duration of the command (a), the algorithm for estimating the value of the main tone frequency (b), and the command recognition algorithm (c)</p>
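      <p>The identification step can be illustrated as follows: the per-window voiced/unvoiced decisions are collapsed into runs of voiced windows, and the observed number of runs and total voiced duration are matched against per-command templates. The template values and tolerances below are hypothetical placeholders, not the measured patterns of the four Ukrainian commands.</p>

```python
def voiced_runs(h):
    """Collapse a binary voiced/unvoiced sequence (one value per window)
    into a list of (start, length) runs of consecutive voiced windows."""
    runs, start = [], None
    for i, v in enumerate(h):
        if v and start is None:
            start = i
        elif not v and start is not None:
            runs.append((start, i - start))
            start = None
    if start is not None:
        runs.append((start, len(h) - start))
    return runs

def identify_command(h, templates):
    """Pick the template whose voiced-run count matches the observation and
    whose total voiced length is closest; return None if nothing is close.
    templates: name -> (number of runs, voiced length, length tolerance)."""
    runs = voiced_runs(h)
    total = sum(length for _, length in runs)
    best, best_cost = None, float("inf")
    for name, (n_runs, voiced_len, tol) in templates.items():
        if len(runs) != n_runs:
            continue
        cost = abs(total - voiced_len)
        if cost <= tol and cost < best_cost:
            best, best_cost = name, cost
    return best
```

      <p>For example, [vlivo] (fully voiced) would be described by a single run, while [vpravo] would have two runs separated by the gap at the voiceless [p].</p>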
    </sec>
    <sec id="sec-3">
      <title>3. Experiment and Results</title>
      <p>A standard computer headset and the Adobe Audition program were used to record voice signals with the spoken commands. In the same environment, adaptive filtering was carried out by the method of spectral subtraction. The signal after such filtering is shown in Figure 2.</p>
      <p>Next, the filtered signal was loaded into the Matlab environment, where further processing was carried out. This signal is shown in Figure 3.</p>
      <p>According to the algorithm in Figure 1, a, the duration of each command is determined. For this, a sliding window with a duration of 5 ms was used; this duration was chosen a priori and was more than 100 times shorter than the duration of a command. The threshold was also chosen a priori. The resulting threshold function is shown in Figure 4.</p>
      <p>As can be seen, the duration of voice commands can indeed be determined from the threshold function.</p>
      <p>At the next stage, the value of the main tone frequency was determined (according to the algorithm in Figure 1, b); in this case it was 172 Hz.</p>
      <p>At the last stage, phonemic segmentation of each individual command was carried out. Figure 5, a shows the voice signal of the first command [vlivo].</p>
      <p>Next, according to the algorithm described above, spectrum estimates were constructed and averaged around the value of the main tone frequency. Processing was carried out within the sliding window: if the main tone frequency was present within the window, it manifested itself in the signal spectrum. The averaged spectrum values around the main tone frequency were plotted on the same time axis according to the position of the sliding window. The resulting graph is shown in Figure 5, b.</p>
      <p>Next, a threshold function was constructed with a threshold of 0.1, chosen a priori; it is shown in Figure 5, c. It can be concluded that the main tone frequency is present in the signal for the full duration of the command, which is expected, since all its phonemes are either vowels or voiced consonants.</p>
      <p>Figure 6 shows the signal of the second command [vpravo], the graph of the presence of the main tone frequency in the signal, and the threshold function. In this case, there is a gap that corresponds to the voiceless phoneme [p].</p>
      <p>Figure 7 shows the signal of the third command [vzad], the graph of the presence of the main tone frequency in the signal, and the threshold function. An interphonemic gap can be seen in the figure.</p>
      <p>Figure 8 shows the signal of the fourth command [vpered], the graph of the presence of the main tone frequency in the signal, and the threshold function.</p>
      <p>Figure 8 also clearly shows the gap due to the presence of a voiceless phoneme. Analyzing the given graphs for the four commands, we can conclude that, based on the values of the threshold function and the durations of the commands, the commands can be identified and the corresponding control signals can then be formed.</p>
      <p>So that the system is not sensitive to similar words that the patient may utter in the course of a conversation, it is suggested to use a certain keyword, after which a command word must unambiguously follow. A second keyword will mean the end of the command. These keywords can be selected individually by the system user, but should have the lowest possible frequency of occurrence in the user's own speech; ideally, they can be a meaningless sequence of sounds or numbers. In this way, the dangerous situations mentioned above can be avoided.</p>
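      <p>One possible reading of this keyword gating, applied to an already-recognized token stream, is sketched below. The keywords "alfa" and "omega" and the rule that exactly one command word may follow each start keyword are illustrative assumptions, not choices made in this paper.</p>

```python
def command_gate(tokens, start_word="alfa", stop_word="omega"):
    """Keep only the command words that immediately follow the start
    keyword; the stop keyword cancels a pending command. Everything said
    outside these brackets is ignored."""
    armed = False
    accepted = []
    for tok in tokens:
        if tok == start_word:
            armed = True
        elif tok == stop_word:
            armed = False
        elif armed:
            accepted.append(tok)
            armed = False  # one command word per start keyword
    return accepted
```

      <p>With this gate, command-like words uttered mid-conversation are discarded unless the user deliberately arms the system first.</p>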
    </sec>
    <sec id="sec-4">
      <title>4. Conclusion</title>
      <p>A method of voice signal processing has been developed to recognize four commands in the voice signal. First, the user speaks these words into the microphone; the information system processes them and extracts informative features, which are then used to identify these commands.</p>
      <p>First, the signal is recorded, then it is filtered using adaptive methods. Next, the length of each command word is determined. At the next stage, each word is segmented into phonemes, and at the last step the areas with signs of vowels and voiced consonants are selected.</p>
      <p>The sliding window method and the threshold function were used for the actual processing. The energy characteristics of the signal were evaluated against a predetermined threshold value. During further processing, the signals were divided into sections corresponding to individual phonemes, in particular vowels and voiced consonants; a sliding window and a threshold function were also used here. Analyzing the processing results for the four commands, it was concluded that, based on the values of the threshold function and the durations of the commands, the commands can be identified and the corresponding control signals can then be formed.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] World Health Organization. URL: https://www.who.int/</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] State Statistics Service of Ukraine. URL: https://www.ukrstat.gov.ua/</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <article-title>[3] National Assembly of People with Disabilities of Ukraine</article-title>
          . URL: https://www.facebook.com/vgonaiu/posts/1709214279269040/?locale=hi_IN
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <article-title>[4] Electric wheelchairs</article-title>
          . URL: https://techno-med.com.ua/ua/elektrokoljaski
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <article-title>[5] Wheelie 7</article-title>
          . URL: https://theindexproject.org/award/nominees/3326
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <article-title>[6] Yes, mind-controlled wheelchairs are a thing</article-title>
          . URL: https://psmag.com/environment/mind-controlled-wheelchairs-are-here
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <article-title>[7] Voice control bluetooth mobile phone control system for wheelchair</article-title>
          . URL: https://inclusiveinc.org/uk-ua/products/vocalize-wheelchair
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Hevko</surname>
            ,
            <given-names>B.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hevko</surname>
            ,
            <given-names>R.B.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klendii</surname>
            ,
            <given-names>O.M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Buriak</surname>
            <given-names>M.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dzyadykevych</surname>
            ,
            <given-names>Y.V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rozum</surname>
            ,
            <given-names>R.I.</given-names>
          </string-name>
          <article-title>Improvement of machine safety devices</article-title>
          .
          <source>Acta Polytechnica</source>
          ,
          <volume>58</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>17</fpage>
          -
          <lpage>25</lpage>
          (
          <year>2018</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Buketov</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Maruschak</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sapronov</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zinchenko</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yatsyuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Panin</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Enhancing performance characteristics of equipment of sea and river transport by using epoxy composites</article-title>
          .
          <source>Transport</source>
          ,
          <volume>31</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>333</fpage>
          -
          <lpage>342</lpage>
          (
          <year>2016</year>
          )
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Nicolls</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <article-title>What is Biometric Authentication?</article-title>
          ,
          <year>2019</year>
          . URL: https://www.jumio.com/whatis-biometric-authentication/
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>Martsenyuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klos-Witkowska</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sverstiuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bernas</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Witos</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <article-title>Intelligent big data system based on scientific machine learning of cyber-physical systems of medical and biological processes</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2021</year>
          , Vol.
          <volume>2864</volume>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>48</lpage>
          . https://ceur-ws.org/Vol2864/paper4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Lypak</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rzheuskyi</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kunanets</surname>
            ,
            <given-names>N.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Pasichnyk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <article-title>Formation of a Consolidated Information Resource by Means of Cloud Technologies</article-title>
          . 2018
          <source>International Scientific-Practical Conference on Problems of Infocommunications Science and Technology, PIC S and T 2018 - Proceedings 8632106</source>
          , pp.
          <fpage>157</fpage>
          -
          <lpage>160</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <surname>Martsenyuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sverstiuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bahrii-Zaiats</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rudyak</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Shelestovskyi</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          <article-title>Software complex in the study of the mathematical model of cyber-physical systems</article-title>
          .
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2020</year>
          , Vol.
          <volume>2762</volume>
          , pp.
          <fpage>87</fpage>
          -
          <lpage>97</lpage>
          . https://ceur-ws.org/Vol-2762/paper5.pdf
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <surname>Meltzner</surname>
            ,
            <given-names>G.S.</given-names>
          </string-name>
          , James T. Heaton, Yunbin Deng, Gianluca De Luca, Serge H. Roy, and Joshua C. Kline:
          <article-title>Silent Speech Recognition as an Alternative Communication Device for Persons with Laryngectomy</article-title>
          ,
          <source>IEEE/ACM Trans Audio Speech Lang Process</source>
          , pp.
          <fpage>2386</fpage>
          -
          <lpage>2398</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <surname>Dozorskyi</surname>
            <given-names>V.G.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dozorska</surname>
            <given-names>O.F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yavorska</surname>
            <given-names>E.B.</given-names>
          </string-name>
          :
          <article-title>Selection and processing of biosignals for the task of Human Communicative Function Restoration</article-title>
          , Kremenchug National University,
          <volume>4</volume>
          (
          <issue>105</issue>
          ), pp.
          <fpage>9</fpage>
          -
          <lpage>14</lpage>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <surname>Nykytyuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          , Vasyl Dozorskyi, Oksana Dozorska:
          <article-title>Detection of biomedical signals disruption using a sliding window</article-title>
          ,
          <source>Scientific Journal of TNTU</source>
          , Ternopil,
          <volume>91</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>125</fpage>
          -
          <lpage>133</lpage>
          (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <surname>Martsenyuk</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sverstiuk</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Klos-Witkowska</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Horkunenko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Rajba</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          <article-title>Vector of diagnostic features in the form of decomposition coefficients of statistical estimates using a cyclic random process model of cardiosignal</article-title>
          .
          <source>Proceedings of the 2019 10th IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, IDAACS 2019</source>
          , Vol.
          <volume>1</volume>
          , pp.
          <fpage>298</fpage>
          -
          <lpage>303</lpage>
          . DOI: 10.1109/IDAACS.2019.8924398.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>