<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>International Workshop on Modern Machine Learning Technologies and Data Science, June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Diana Koshtura</string-name>
          <email>Diana.Koshtura.sa.2017@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vasyl Andrunyk</string-name>
          <email>Vasyl.A.Andrunyk@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tetiana Shestakevych</string-name>
          <email>Tetiana.V.Shestakevych@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Stepana Bandery Street, 12, Lviv, 79000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2021</year>
      </pub-date>
      <volume>5</volume>
      <issue>2021</issue>
      <abstract>
        <p>The process of inclusion of a person with special needs into society can be improved with various information technologies. For people with hearing impairments, such a technology should be based on a speech-to-text converter, whose main functions should allow the user to see the spoken text on the screen of the device. The main functions of the application were chosen based on analyzed analogs, so the designed application meets all the necessary demands. UML diagrams were used to model the designed application, called CLON. The software was tested on single words, phrases, and sentences.</p>
      </abstract>
      <kwd-group>
        <kwd>speech-to-text</kwd>
        <kwd>hearing impairment</kwd>
        <kwd>risk low</kwd>
        <kwd>speech recognition</kwd>
        <kwd>quick answer</kwd>
        <kwd>convert speech</kwd>
        <kwd>voice message</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>-</title>
      <p>speech, voice message</p>
    </sec>
    <sec id="sec-2">
      <title>Introduction</title>
      <p>The effective development of society can be determined by how government measures are aimed at human well-being. This development can be considered to be at a high level if the state directs its efforts to help the least protected and most vulnerable members of society. Issues of social assistance and protection of the rights of persons with disabilities are considered relevant throughout the civilized world. Hearing impairments cause trouble not only for persons with such a disability but also for their families, friends, and all other members of society, so creating a means to improve such communication will also be of great help to the society of hearing people. Taking into account how widespread electronic devices are, such a helpful means should be designed and developed as an information technology. The object of research is the process of communication between people with hearing impairments. The subject of the research is the development of an application as a means of interaction between people. The practical value of the results lies in developing and testing new tools and methods for converting speech into text, saving human resources and money, and building convenient, useful software and systems for people with hearing impairments. The extracted data will be directed to the development of speech recognition as a science and will be very useful to other systems that convert the human voice to text, improving those systems.</p>
      <p>A search for the speech-to-text keyword in the Scopus database returns 953 documents, 470 of which were published in 2017-2020. The word cloud built from the abstracts of these documents is shown in Fig. 1 (numbers and common words were removed).</p>
      <p>
        The results of the implementation of speech-to-text methodology can be used in various fields, such as advertising, automatic keyword extraction, education, linguistics, the military, data analysis, etc. [2-12]. The peculiarities of translating spoken language into sign language were investigated in [
        <xref ref-type="bibr" rid="ref13 ref14">13,
14</xref>
        ]. Silent speech recognition is relevant in noisy places and can be a part of a wider system of human body image recognition [
        <xref ref-type="bibr" rid="ref15 ref16 ref17 ref18 ref19 ref20 ref21 ref22 ref23 ref24">15-24</xref>
        ].
      </p>
      <p>
        The speech-to-text method was used to assess the risk of Alzheimer's disease, when patients' texts were analyzed to find special markers [
        <xref ref-type="bibr" rid="ref25 ref26 ref27 ref28">25-28</xref>
        ]. Also, the speech-to-text method is used to treat a motor
speech disorder resulting from neurological injuries [
        <xref ref-type="bibr" rid="ref29 ref30">29-30</xref>
        ].
      </p>
      <p>
        In [
        <xref ref-type="bibr" rid="ref31">31</xref>
        ], scientists examining the language and literacy needs of students in youth detention declared the need for consistent text-level language assessment. To better identify functional difficulties within their language, the authors used speech-to-text methods [
        <xref ref-type="bibr" rid="ref25 ref26 ref27 ref28">25-28</xref>
        ]. To control electrical appliances and doors, the authors [
        <xref ref-type="bibr" rid="ref32">32</xref>
        ] constructed a prototype with a speech recognition system.
      </p>
      <p>
        The effects of assistive technology for students with severe disabilities (reading and writing) were
investigated in [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ]. The researchers concluded such technology to be supportive and motivating, though not without obstacles in implementation [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ].
      </p>
      <p>
        The peculiarities of stemming as a part of converting non-formal conversation into text were investigated by authors who concluded that non-formal sentences should be normalized to formal ones to be used for text classification [
        <xref ref-type="bibr" rid="ref35 ref36 ref37 ref38 ref39 ref40 ref41 ref42 ref43">35-47</xref>
        ], hate speech detection, content analysis, etc. [48-61].
      </p>
      <p>Let us analyze the capabilities of the developed software product in comparison with modern analogs. Similar programs present on the market are products of foreign origin: RogerVoice, Live Transcribe, MyEar, AVA, and Voxsci.</p>
    </sec>
    <sec id="sec-2-1">
      <title>2.1 RogerVoice</title>
      <p>RogerVoice (https://rogervoice.com/) is the world's first mobile application for creating subtitles for telephone conversations, i.e., it converts mobile voice calls into a much more accessible text format. It was developed by the deaf engineer Olivier Jeannel. The application uses speech recognition technology to convert voice to text, so deaf and hard-of-hearing people can read what the other person is telling them.</p>
      <p>The application was created thanks to a successful campaign on Kickstarter, a beta version of the
application is currently available, which can be downloaded from the RogerVoice website (Fig. 2a).</p>
    </sec>
    <sec id="sec-3">
      <title>2.2 Live Transcribe</title>
      <p>The application listens to human speech and transcribes it into text. To do this, it uses the smartphone microphone and Google's speech API (Fig. 2b), which supports more than 70 languages. Machine learning technologies are responsible for recognition, keeping the transcription as error-free as possible. Live Transcribe can not only transcribe the voice but also notify the user that someone wants to talk to him, and it lets the user take part in the dialogue directly by giving answers with the built-in keyboard. The spoken text is captured by the phone's microphone and delivered to the screen of the Android phone via Wi-Fi or another network connection. This can be useful, for example, for people who do not hear and attend conferences or lectures: the spoken words appear on the phone of the person who has the application.</p>
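      <p>To make the general approach concrete, below is a minimal sketch of microphone-based speech-to-text, assuming Python with the third-party SpeechRecognition and PyAudio packages; it is not the Live Transcribe implementation, only the same idea in miniature, with recognize_google() calling Google's web speech API.</p>
      <preformat>
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate to background noise
    audio = recognizer.listen(source)            # capture one utterance

try:
    # recognize_google() sends the audio to Google's web speech API
    print(recognizer.recognize_google(audio, language="en-US"))
except sr.UnknownValueError:
    print("Speech was not recognized.")
      </preformat>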
    </sec>
    <sec id="sec-4">
      <title>2.3 MyEar app</title>
      <p>The MyEar app was developed by Gerald Isobe, a deaf golfer and his son Brandon.</p>
      <p>Gerald grew up reading lips but was disappointed to understand only 30% of what was said, and he was tired of asking people around him, "What are you talking about?"</p>
      <p>This program was created out of these frustrations, and it is a program that he and others use to communicate with their hearing colleagues, friends, and family.</p>
      <p>Direct pricing. The app costs a one-time $9.99, with free updates whenever a new software update is launched.</p>
      <p>For emergencies. If a police officer stopped you, would you be able to understand effectively what he was saying, for example, at night? Most likely not. Here the MyEar application can help: it will record what the officer is saying, so that you do not have to guess, at night or at any other time. For last-minute meetings or presentations with no time to call an interpreter: such meetings can happen, and then you can simply open the MyEar program and use it even for long presentations, and you will understand what they are about.</p>
      <p>New words and topics. TV subtitles and ASL translators do not translate 100% word for word what people say on TV or on the phone; however, you may not realize this until you try the MyEar app. The MyEar app will translate every word a person says to you.</p>
    </sec>
    <sec id="sec-4a">
      <title>2.4 AVA</title>
      <p>Lip reading can be more difficult in a group of people, and this is one of the main reasons AVA was created. If a person who is deaf or has hearing problems is with a group of friends, he or she can invite those friends to join the program; then the person(s) with hearing impairments will see a live transcript of the group conversation. The speech is captured by the phone's microphone, and the name of the speaker is displayed on the screen before what that person says.</p>
      <p>AVA works with employers, teachers, event organizers and other accessibility professionals who
seek to fully engage their deaf and hard of hearing members.</p>
      <p>How can Ava fit into your daily life? Ava gives you a whole new level of autonomy in many situations of your daily life.</p>
    </sec>
    <sec id="sec-5">
      <title>2.5 Voxsci</title>
      <p>Voxsci (https://www.voxsci.com/) is a speech-to-text service that converts voicemail messages into texts and emails that you can store, search, and share. Pricing starts at £5 a month for 30 voicemails or emails.</p>
      <p>Listening to voice messages can be very inconvenient. VoxSciences provides a paradigm shift by transcribing voice messages into text messages. This puts voice messages on the same footing as email, SMS, and instant messaging, with all the inherent benefits, such as text search. The VERBS engine (virtual engine for basic speech recognition) converts voice messages into text messages and delivers them as e-mail, SMS, or via the API. Voicemail-to-text (SMS) is ideal for personal or corporate voicemail systems.</p>
      <p>Voice messages, transcribed and delivered by e-mail, are mainly used by call centers, comment or
contest lines, and corporate voicemail systems.</p>
      <p>Voice of the Customer is a market research technique that produces a detailed set of customer wants and needs. It includes the analysis of feedback from various sources, such as e-mail surveys, the Internet, and IVR. VoxSciences provides a key component that facilitates the analysis of audio feedback, delivering near real-time, business-grade transcription through its API.</p>
      <p>From this comparison, we can conclude that support for different operating systems does not depend on the price of the application: free applications also offer use on different operating systems. From the ratings, we can conclude that the applications are not difficult to use, are popular among users, and are often used. The applications were evaluated according to reviews on official websites, Play Market, and the App Store (Fig. 6 presents the feedback about MyEar, with an overall rating of 4.1 and positive text reviews; Fig. 7 presents a review of Live Transcribe, where reviews are mostly positive and often mention the usefulness of the application). In general, there are no negative reviews, only lower ratings, where people are dissatisfied with the quality of speech-to-text conversion or the pricing policy, or have additional wishes, such as quick answers. It can also be concluded that the complexity of development depends on the technologies used to develop the application and on whether the application uses a network connection.</p>
      <p>Thus, we can conclude that all analogs are competitive and have many advantages, but they also have disadvantages. The disadvantages that emerged during the comparison of analogs are:
 Difficulty of use - a complex interface.</p>
      <p> High fee for the application - people like free applications more.
 A small number of settings - users would like to be able to give short answers and adjust call filters.
 Some operating systems are not supported.
 Complexity of development - many companies create their own technologies for speech recognition, which makes the application more difficult to use and more expensive.</p>
      <p>Compared to its analogs, the developed application will have many advantages and will correct the disadvantages the analogs have. The main advantages of the application will be that it will allow the user to configure call filters, will not charge for first-time use of the services, and will allow the user to give quick answers.</p>
    </sec>
    <sec id="sec-6">
      <title>3 Materials and methods</title>
      <p>Basic success scenario: this section of the specification describes a "success scenario", that is, the actions that lead to the successful completion of events in the main process (a sketch of this flow follows the list below), for example:
 The user installs the CLON application.
 The user logs in / registers (or skips this step) in the application.
 The user gets acquainted with the settings and interface of the application.
 The user selects a live broadcast recording point.
 The application shows the user the text that has just been spoken.
 The application analyzes the text and offers quick answers.
 The user selects one of the suggested answers or selects a typing item independently.</p>
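      <p>As a rough illustration of this success scenario, the following Python sketch walks through the same steps with hypothetical stand-in functions; none of these names come from the actual CLON code base.</p>
      <preformat>
def capture_speech() -> str:
    # Stand-in for live microphone capture plus speech-to-text.
    return input("Speaker says: ")

def suggest_replies(text: str) -> list:
    # Stand-in for the quick-answer analysis step.
    if "?" in text:
        return ["Yes", "No", "Could you repeat that?"]
    return ["OK", "Thank you", "Nice to hear"]

def run_session() -> None:
    text = capture_speech()                  # record the live broadcast
    print("On screen:", text)                # show the spoken text to the user
    replies = suggest_replies(text)          # analyze the text, offer quick answers
    print("Quick answers:", replies)
    answer = input("Pick or type a reply: ") or replies[0]
    print("Reply:", answer)

run_session()
      </preformat>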
      <p>Baseline extensions or alternative flows: this section lists all other possible scenarios that lead to the successful completion of the main scenario, or alternative scenarios that lead to incorrect completion of the use case. After processing all possible extensions of the use case, the software must ensure the user's return to the main scenario, unless the software provides an alternative course of events. For example, a baseline extension:
 The user has logged in to the application.
 The user makes settings for the application - language and color settings.</p>
      <p> The user makes conversation filters.</p>
    </sec>
    <sec id="sec-7">
      <title>4 Terms of reference</title>
    </sec>
    <sec id="sec-8">
      <title>4.1 The structure and functionality of the application</title>
      <p>The class operations (public) are: grant access to application functions, provide access to filters, and provide access to application settings.</p>
      <p>Add class objects from the class diagram with the following names: System User, Application, Convert Speech to Text, User Interface, Use Microphone.</p>
      <p>Add links that connect the class objects named System User, Application, Convert Speech to Text, User Interface, Use Microphone. Add messages 1-14:
 Log in / register in the system.
 Configure the application.
 Exit the application.
 Convert recorded speech into text, provide access to a microphone.
 Record speech.</p>
      <p>The activity diagram shows how the program will work at the stage of perceiving speech from the speaker. The diagram assumes that after the user selects the text item, the system records the speech in the device memory. After this step, the program uses an API that was specially developed for text recognition; the API performs the speech recognition. The program then creates text based on the speech and passes it to the program interface, where the user can see the text.</p>
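      <p>A minimal sketch of this pipeline, assuming Python with the SpeechRecognition package: "speech.wav" is a hypothetical recording already saved in device memory, and recognize_google() stands in for the recognition API mentioned above.</p>
      <preformat>
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("speech.wav") as source:
    audio = recognizer.record(source)          # load the stored recording

try:
    text = recognizer.recognize_google(audio)  # the API performs speech recognition
except sr.UnknownValueError:
    text = ""                                  # nothing recognizable was found

print(text)                                    # hand the text to the interface
      </preformat>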
      <p>The sequence diagram is constructed from the classes of the class diagram in chronological order.</p>
      <p>The deployment diagram, which visualizes the hardware and software of the designed system, is shown in Fig. 14. The nodes are the CLON application, the API and text-analysis servers, and a connector between the server and the host.</p>
    </sec>
    <sec id="sec-9">
      <title>5 Experiment</title>
      <p>Type of application: mockup/prototype. The application provides login or registration so that the user's settings are saved, statistics are processed, and improvements are provided.</p>
      <p>If the user is not yet registered in the system, one can register.</p>
      <p>You can also sign in with a social network, such as Twitter or Facebook.</p>
      <p>The following prototype shows how the user will interact with another person. When a person says something, it will automatically be written as text in the user's application.</p>
      <p>The following mockup shows how a person will interact with the application. To get the text from the person speaking, the user needs to click the button to start writing the text. The user can also go back and select options via radio buttons, such as adding text manually or using quick answers.</p>
      <p>Next, the program must analyze the text so that the user can give a quick response from the proposed options, which will be voiced. If the user does not find a suitable answer, he can type the text, and it will be sounded through the phone speaker. The first version will only receive speech from the user and translate it into text.</p>
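      <p>A sketch of this quick-answer and voicing step, assuming Python with the pyttsx3 text-to-speech package; the list of quick answers is illustrative only.</p>
      <preformat>
import pyttsx3

engine = pyttsx3.init()
quick_answers = ["Yes", "No", "Please repeat", "Thank you"]

print("Quick answers:", ", ".join(quick_answers))
reply = input("Choose one or type your own: ")

engine.say(reply)       # the reply is sounded through the phone speaker
engine.runAndWait()
      </preformat>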
      <p>The first example will look like this. The user will be asked to say something.</p>
      <p>After that, there will be a pause of 3 seconds, after which listening will stop. Then the user of the application will see the text that was said by the other person.</p>
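      <p>With the SpeechRecognition package, this 3-second silence rule can be sketched by raising the recognizer's pause_threshold, assuming a Python prototype with PyAudio for microphone access.</p>
      <preformat>
import speech_recognition as sr

recognizer = sr.Recognizer()
recognizer.pause_threshold = 3.0        # stop listening after 3 seconds of silence

with sr.Microphone() as source:
    audio = recognizer.listen(source)   # returns once the speaker pauses

print(recognizer.recognize_google(audio))
      </preformat>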
      <p>Text to be spoken by the user: Hello it's my first program speech recognition app.</p>
    </sec>
    <sec id="sec-10">
      <title>6 Results</title>
      <p>You can see that the recognition was almost accurate. One word did not match: CLON interpreted program as problem. The other results are satisfactory.</p>
      <p>The following versions are needed in order to improve the results and implement all these features
for the mobile application.</p>
      <p>Testing will follow these steps.</p>
      <p>1. Single-word recognition. At this stage, a vocabulary of selected words will be checked, to see whether there are difficulties with a certain category of words or whether the recognition error does not depend on the word category.</p>
      <p>2. Phrase recognition. At this stage, phrases consisting of two or more words will be checked. Here one can see how recognition behaves when several words flow together.</p>
      <p>3. Sentence recognition. Whole sentences will be checked, including more than two sentences at a time.</p>
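      <p>The three stages can be organized as a simple test harness; in this hedged Python sketch, recognize() is a hypothetical stand-in for the application's speech-to-text call, and the samples are the ones used in the tests below.</p>
      <preformat>
def recognize(spoken: str) -> str:
    # Stand-in: a real test would capture and transcribe live audio.
    return spoken

test_cases = {
    "single words": ["I", "you", "hello", "bye", "kernel", "linguistics"],
    "phrases": ["my name", "help me", "nice day", "a computer program"],
    "sentences": ["Today is Monday", "How tall are you"],
}

for stage, samples in test_cases.items():
    hits = sum(recognize(s).lower() == s.lower() for s in samples)
    print(stage, ":", hits, "/", len(samples), "recognized correctly")
      </preformat>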
    </sec>
    <sec id="sec-11">
      <title>6.1 Single words recognition</title>
      <p>The first check covers individual words, pronounced one at a time. Fig. 20 shows the results of recognizing the personal pronoun I.</p>
      <p>The result (bye) is not satisfactory. The next pronoun is you.</p>
      <p>The result is just as disappointing: the recognized word is yo. The next pronoun is he.</p>
      <p>This time, the application did not recognize the speech at all. The next pronoun is she.</p>
      <p>This result is also unsuccessful: instead of the pronoun, skip was recognized. Next, we can try to recognize longer words, starting with greetings. The first greeting is hello.</p>
      <sec id="sec-11-1">
        <title>The result is positive. The next word is bye.</title>
      </sec>
      <sec id="sec-11-2">
        <title>The result is positive. The next word is goodnight.</title>
        <p>Then other words were checked, both complex and easy to pronounce and to perceive.</p>
        <p>The first word is kernel.</p>
        <p>Then the word is linguistics.</p>
      </sec>
      <sec id="sec-11-3">
        <title>The next word is help.</title>
      </sec>
      <sec id="sec-11-4">
        <title>The last word is name.</title>
      </sec>
      <sec id="sec-11-5">
        <title>The first phrase is my name.</title>
      </sec>
      <sec id="sec-11-6">
        <title>Another phrase is help me.</title>
      </sec>
      <sec id="sec-11-7">
        <title>The third phrase is nice day.</title>
      </sec>
      <sec id="sec-11-8">
        <title>The fourth phrase is a computer program. It can be concluded that very short single words are not very well received by the application. And words that are much more commonly used, or that are longer than 4-5 letters are better recognized.</title>
        <p>Thus, we can conclude that the more words, the better the recognition. The text is self-correcting
and it is possible that the artificial intelligence itself can correct or add the right words.</p>
      </sec>
    </sec>
    <sec id="sec-12">
      <title>6.3 Sentences recognition</title>
      <sec id="sec-12-1">
        <title>The first sentence is Today is Monday.</title>
        <p>The next sentence is How tall are you.</p>
        <p>The third sentence is Computational linguistics is the scientific and engineering discipline
concerned with understanding written and spoken language from a computational perspective, and
building artifacts that usefully process and produce language, either in bulk or in a dialogue setting.</p>
      </sec>
    </sec>
    <sec id="sec-13">
      <title>7 Conclusions</title>
      <p>Many speech recognition problems are constantly being studied, improved, and better understood. One such problem is modeling the process of translating speech into text, one of the problems that the science of speech recognition is trying to solve.</p>
      <p>Speech recognition is the process of converting a speech signal into a text stream. This technology is used to solve such tasks as computer control, voice information services, dictation, and phonogram transcription. It is also used for speech recognition in Google translation tools, as well as in applications that help people with hearing impairments communicate.</p>
      <p>In the last few years, the demand for communication applications has been growing, because they offer a fast way to communicate. Modern messengers are used for messaging, and their idea prompted the creation of a similar application for people with hearing impairments. This is one of the first and defining steps toward solving a global problem, because people with disabilities live among us and also want to communicate. Not everyone understands sign language, and not everyone can immediately tell that a person has hearing problems, so such an application is very necessary.</p>
      <p>A system was developed that recognizes the speaker's speech and translates it into text; thus, people with hearing impairments will be able to "hear" their interlocutor and understand him even if he does not know sign language. Therefore, the developed application can and should be improved by expanding its functionality for further use by commercial companies seeking to help people with hearing impairments, as well as by entrepreneurs who intend to monetize such applications to increase their profits. The following options can be provided:
 adding filters for speech perception - the conversion of speech into text will be more accurate if the topic of conversation is specified;
 integration with modern platforms;
 adding advertising banners of an interactive format, namely mini-games based on the canvas model of the user's Internet browser.</p>
      <p>Sciences and Information Technologies, CSIT, 2016, pp. 190-192. DOI: 10.1109/STC-CSIT.2016.7589903</p>
      <p>44. V. Vysotska, Linguistic Analysis of Textual Commercial Content for Information Resources Processing, in: Modern Problems of Radio Engineering, Telecommunications and Computer Science, TCSET, 2016, pp. 709-713. DOI: 10.1109/TCSET.2016.7452160</p>
      <p>45. V. Vysotska, V. Lytvyn, Y. Burov, P. Berezin, M. Emmerich, V. B. Fernandes, Development of Information System for Textual Content Categorizing Based on Ontology, CEUR Workshop Proceedings, 2019, Vol-2362, pp. 53-70.</p>
      <p>46. R. Bekesh, L. Chyrun, P. Kravets, A. Demchuk, Y. Matseliukh, T. Batiuk, I. Peleshchak, R. Bigun, I. Maiba, Structural modeling of technical text analysis and synthesis processes, CEUR Workshop Proceedings, 2020, Vol-2604, pp. 562-589.</p>
      <p>47. A. Yarovyi, D. Kudriavtsev, Method of Multi-Purpose Text Analysis Based on a Combination of Knowledge Bases for Intelligent Chatbot, CEUR Workshop Proceedings, 2021, Vol-2870, pp. 1238-1248.</p>
      <p>48. Y. Bodnia, M. Kozulia, Web Application System of Handwritten Text Recognition, CEUR Workshop Proceedings, 2021, Vol-2870, pp. 1323-1337.</p>
      <p>49. Rianto, A. B. Mutiara, E. P. Wibowo, P. I. Santosa, Improving the accuracy of text classification using stemming method, a case of non-formal Indonesian conversation, Journal of Big Data, 2021, 8(1), art. no. 26.</p>
      <p>50. O. Kuropiatnyk, V. Shynkarenko, Text Borrowings Detection System for Natural Language Structured Digital Documents, CEUR Workshop Proceedings, 2020, Vol-2604, pp. 294-305.</p>
      <p>51. O. Cherednichenko, N. Babkova, O. Kanishcheva, Complex Term Identification for Ukrainian Medical Texts, CEUR Workshop Proceedings, 2018, Vol-2255, pp. 146-154.</p>
      <p>52. A. Berko, V. Andrunyk, L. Chyrun, M. Sorokovskyy, O. Oborska, O. Oryshchyn, M. Luchkevych, O. Brodovska, The Content Analysis Method for the Information Resources Formation in Electronic Content Commerce Systems, CEUR Workshop Proceedings, 2021, Vol-2870, pp. 1632-1651.</p>
      <p>53. V. Kuchkovskiy, V. Andrunyk, M. Krylyshyn, L. Chyrun, A. Vysotskyi, S. Chyrun, N. Sokulska, I. Brodovska, Application of Online Marketing Methods and SEO Technologies for Web Resources Analysis within the Region, CEUR Workshop Proceedings, 2021, Vol-2870, pp. 1652-1693.</p>
      <p>54. N. Antonyuk, L. Chyrun, V. Andrunyk, A. Vasevych, S. Chyrun, A. Gozhyj, I. Kalinina, Y. Borzov, Medical news aggregation and ranking taking into account the user needs, CEUR Workshop Proceedings, 2019, Vol-2488, pp. 369-382.</p>
      <p>55. V. Andrunyk, A. Vasevych, L. Chyrun, N. Chernovol, N. Antonyuk, A. Gozhyj, V. Gozhyj, I. Kalinina, M. Korobchynskyi, Development of information system for aggregation and ranking of news taking into account the user needs, CEUR Workshop Proceedings, 2020, Vol-2604, pp. 1127-1171.</p>
      <p>56. V. Andrunyk, L. Chyrun, V. Vysotska, Electronic content commerce system development, in: Proceedings of the 13th International Conference on the Experience of Designing and Application of CAD Systems in Microelectronics, CADSM, 2015.</p>
      <p>57. A. Demchuk, B. Rusyn, L. Pohreliuk, A. Gozhyj, I. Kalinina, L. Chyrun, N. Antonyuk, Commercial content distribution system based on neural network and machine learning, CEUR Workshop Proceedings, 2019, Vol-2516, pp. 40-57.</p>
      <p>58. L. Chyrun, V. Andrunyk, L. Chyrun, A. Gozhyj, A. Vysotskyi, O. Tereshchuk, N. Shykh, V. Schuchmann, The Electronic Digests Formation and Categorization for Textual Commercial Content, CEUR Workshop Proceedings, 2021, Vol-2870, pp. 1816-1831.</p>
      <p>59. A. Berko, I. Pelekh, L. Chyrun, M. Bublyk, I. Bobyk, Y. Matseliukh, L. Chyrun, Application of ontologies and meta-models for dynamic integration of weakly structured data, in: Proceedings of the IEEE 3rd International Conference on Data Stream Mining and Processing, DSMP, 2020, pp. 432-437. DOI: 10.1109/DSMP47368.2020.9204321</p>
      <p>60. A. Berko, I. Pelekh, L. Chyrun, I. Dyyak, Information resources analysis system of dynamic integration of semi-structured data in a web environment, in: Proceedings of the IEEE 3rd International Conference on Data Stream Mining and Processing, DSMP, 2020, pp. 414-419. DOI: 10.1109/DSMP47368.2020.9204101</p>
      <p>61. I. Pelekh, A. Berko, V. Andrunyk, L. Chyrun, I. Dyyak, Design of a system for dynamic integration of weakly structured data based on mash-up technology, in: Proceedings of the IEEE 3rd International Conference on Data Stream Mining and Processing, DSMP, 2020, pp. 420-425. DOI: 10.1109/DSMP47368.2020.9204160</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <string-name>
            <given-names>E. T.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <article-title>Will typewriters ever take dictation?</article-title>
          ,
          <source>Speech Technology</source>
          ,
          <year>1982</year>
          ,
          <volume>1</volume>
          (
          <issue>3</issue>
          ), pp.
          <fpage>35</fpage>
          -
          <lpage>42</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <string-name>
            <given-names>G.</given-names>
            <surname>Szymanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Lipinski</surname>
          </string-name>
          ,
          <article-title>Model of the effectiveness of Google Adwords advertising activities</article-title>
          ,
          <source>2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2018 - Proceedings</source>
          , 2, art. no.
          <issue>8526633</issue>
          ,
          <year>2019</year>
          , pp.
          <fpage>98</fpage>
          -
          <lpage>101</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <string-name>
            <given-names>S.</given-names>
            <surname>Popova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Skitalinskaya</surname>
          </string-name>
          ,
          <article-title>Extended list of stop words: Does it work for keyphrase extraction from short texts?</article-title>
          <source>Proceedings of the 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies</source>
          ,
          CSIT
          <year>2017</year>
          ,
          <volume>1</volume>
          ,
          <year>2017</year>
          ,
          <volume>8098815</volume>
          , pp.
          <fpage>401</fpage>
          -
          <lpage>404</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>T.</given-names>
            <surname>Shestakevych</surname>
          </string-name>
          ,
          <article-title>Modeling the process of analysis of statistical characteristics of student digital text</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2021</year>
          ,
          <volume>2870</volume>
          , pp.
          <fpage>657</fpage>
          -
          <lpage>669</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>I.</given-names>
            <surname>Dilai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Dilai</surname>
          </string-name>
          ,
          <article-title>Automatic Extraction of Keywords in Political Speeches</article-title>
          ,
          <source>2020 IEEE 15th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2020 - Proceedings</source>
          ,
          <year>2020</year>
          , 1, art. no.
          <issue>9322011</issue>
          , pp.
          <fpage>291</fpage>
          -
          <lpage>294</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6.
          <string-name>
            <given-names>N.</given-names>
            <surname>Kunanets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Levchenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hadzalo</surname>
          </string-name>
          ,
          <article-title>The application of AntConc concordancer in linguistic researches</article-title>
          ,
          <source>2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2018 - Proceedings, 2</source>
          ,
          <year>2018</year>
          ,
          <volume>8526591</volume>
          , pp.
          <fpage>144</fpage>
          -
          <lpage>147</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kanishcheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hlavcheva</surname>
          </string-name>
          ,
          <article-title>Authorship Identification of the Scientific Text in Ukrainian with Using the Lingvometry Methods</article-title>
          ,
          <source>2018 IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2018 - Proceedings, 2</source>
          ,
          <year>2018</year>
          , art. no.
          <issue>8526735</issue>
          , pp.
          <fpage>34</fpage>
          -
          <lpage>38</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>O.</given-names>
            <surname>Boreiko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kryvinska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Logoyda</surname>
          </string-name>
          ,
          <article-title>Structure model and means of a smart public transport system</article-title>
          ,
          <source>Procedia Computer Science</source>
          ,
          <year>2019</year>
          ,
          <volume>155</volume>
          , pp.
          <fpage>75</fpage>
          -
          <lpage>82</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>N.</given-names>
            <surname>Hrytsiv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shestakevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shyyka</surname>
          </string-name>
          ,
          <article-title>Quantitative parameters of Lucy Montgomery's literary style</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2021</year>
          ,
          <volume>2870</volume>
          , pp.
          <fpage>670</fpage>
          -
          <lpage>684</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Cherna</surname>
          </string-name>
          ,
          <article-title>The method of automatic summarization from different sources</article-title>
          ,
          <source>Econtechmod</source>
          ,
          <year>2016</year>
          , Vol.
          <volume>5</volume>
          , No.
          <volume>1</volume>
          ,
          <fpage>103</fpage>
          -
          <lpage>109</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>V.</given-names>
            <surname>Larin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Chichikalo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.M.</given-names>
            <surname>Kardo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Larina</surname>
          </string-name>
          ,
          <article-title>Integrated Intellectual Approach to the Diagnostics of Defects of Operations of Induction Motors</article-title>
          ,
          <source>2020 IEEE 15th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT 2020 - Proceedings</source>
          ,
          <year>2020</year>
          , 1, art. no.
          <issue>9322004</issue>
          , pp.
          <fpage>352</fpage>
          -
          <lpage>356</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kryvenchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Helzynskyy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Helzhynska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Boyko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Danel</surname>
          </string-name>
          ,
          <article-title>Synthesis control system of the physiological state of a soldier on the battlefield</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2019</year>
          ,
          <volume>2488</volume>
          , pp.
          <fpage>297</fpage>
          -
          <lpage>306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>R.A.</given-names>
            <surname>Ramadani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.K.G.D.</given-names>
            <surname>Putra</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sudarma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.A.D.</given-names>
            <surname>Giriantari</surname>
          </string-name>
          ,
          <article-title>A new technology on translating Indonesian spoken language into Indonesian sign language system</article-title>
          ,
          <source>International Journal of Electrical and Computer Engineering</source>
          ,
          <year>2021</year>
          ,
          <volume>11</volume>
          (
          <issue>4</issue>
          ), pp.
          <fpage>3338</fpage>
          -
          <lpage>3346</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>D.D.</given-names>
            <surname>Chakladar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Mandal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.P.</given-names>
            <surname>Roy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Iwamura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.-G.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <article-title>3D avatar approach for continuous sign movement using speech/text</article-title>
          ,
          <source>Applied Sciences (Switzerland)</source>
          ,
          <year>2021</year>
          ,
          <volume>11</volume>
          (
          <issue>8</issue>
          ), art. no.
          <issue>3439</issue>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>L.</given-names>
            <surname>Pandey</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.S.</given-names>
            <surname>Arif</surname>
          </string-name>
          ,
          <article-title>LipType: A silent speech recognizer augmented with an independent repair model</article-title>
          ,
          <source>Conference on Human Factors in Computing Systems - Proceedings</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <given-names>K.</given-names>
            <surname>Fornalczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wojciechowski</surname>
          </string-name>
          ,
          <article-title>Robust face model based approach to head pose estimation</article-title>
          ,
          <source>Proceedings of the 2017 Federated Conference on Computer Science and Information Systems, FedCSIS</source>
          <year>2017</year>
          ,
          <year>2017</year>
          , art. no.
          <issue>8104720</issue>
          , pp.
          <fpage>1291</fpage>
          -
          <lpage>1295</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>G.</given-names>
            <surname>Glonek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Wojciechowski</surname>
          </string-name>
          ,
          <article-title>Hybrid orientation based human limbs motion tracking method</article-title>
          ,
          <source>Sensors (Switzerland)</source>
          ,
          <year>2017</year>
          ,
          <volume>17</volume>
          (
          <issue>12</issue>
          ), art. no.
          <issue>2857</issue>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>A.</given-names>
            <surname>Wojciechowski</surname>
          </string-name>
          ,
          <article-title>Hand's poses recognition as a mean of communication within natural user interfaces</article-title>
          ,
          <source>Bulletin of the Polish Academy of Sciences: Technical Sciences</source>
          ,
          <year>2012</year>
          ,
          <volume>60</volume>
          , pp.
          <fpage>331</fpage>
          -
          <lpage>336</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>P.</given-names>
            <surname>Napieralski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kowalczyk</surname>
          </string-name>
          ,
          <article-title>Detection of vertical disparity in three-dimensional visualizations</article-title>
          ,
          <source>Open Physics</source>
          ,
          <year>2017</year>
          ,
          <volume>15</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>1028</fpage>
          -
          <lpage>1033</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>V.</given-names>
            <surname>Khavalko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Tsmots</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kostyniuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Strauss</surname>
          </string-name>
          ,
          <article-title>Classification and recognition of medical images based on the SGTM neuroparadigm</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2019</year>
          ,
          <volume>2488</volume>
          , pp.
          <fpage>234</fpage>
          -
          <lpage>245</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Logoyda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Dmytruk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Voznyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Smotr</surname>
          </string-name>
          ,
          <article-title>Identification of biometric images using latent elements</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2019</year>
          ,
          <volume>2488</volume>
          , pp.
          <fpage>99</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <given-names>D.</given-names>
            <surname>Uchkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Korotyeyeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shestakevych</surname>
          </string-name>
          ,
          <article-title>Bitmap Image Recognition with Neural Networks</article-title>
          ,
          <source>Econtechmod</source>
          ,
          <year>2020</year>
          , Vol.
          <volume>09</volume>
          , No.
          <volume>1</volume>
          ,
          <fpage>30</fpage>
          -
          <lpage>35</lpage>
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>S.</given-names>
            <surname>Kudubayeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Ryumin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sndetbayeva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Krak</surname>
          </string-name>
          ,
          <article-title>Computing of hands gestures' informative video features</article-title>
          ,
          <source>Computer Sciences and Information Technologies - Proceedings of the 11th International Scientific and Technical Conference</source>
          ,
          CSIT
          <year>2016</year>
          ,
          <year>2016</year>
          , art. no.
          <issue>7589867</issue>
          , pp.
          <fpage>55</fpage>
          -
          <lpage>58</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24.
          <string-name>
            <given-names>N.</given-names>
            <surname>Jaworski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Farmaha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Marikutsa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Farmaha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Savchyn</surname>
          </string-name>
          ,
          <article-title>Implementation features of wounds visual comparison subsystem</article-title>
          ,
          <source>2018 14th International Conference on Perspective Technologies and Methods in MEMS Design, MEMSTECH - Proceedings</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>114</fpage>
          -
          <lpage>117</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>A.</given-names>
            <surname>Roshanzamir</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Aghajan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Soleymani Baghshah</surname>
          </string-name>
          ,
          <article-title>Transformer-based deep neural network language models for Alzheimer's disease risk assessment from targeted speech</article-title>
          ,
          <source>BMC Medical Informatics and Decision Making</source>
          ,
          <year>2021</year>
          ,
          <volume>21</volume>
          (
          <issue>1</issue>
          ), art. no.
          <issue>92</issue>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <surname>M. Sazhok</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Robeiko</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          <string-name>
            <surname>Seliukh</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          <string-name>
            <surname>Fedoryn</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          <string-name>
            <surname>Yukhymenko</surname>
          </string-name>
          ,
          <article-title>Written Form Extraction of Spoken Numeric Sequences in Speech-to-Text Conversion for Ukrainian</article-title>
          ,
          <source>CEUR workshop proceedings</source>
          ,
          <year>2020</year>
          ,
          <volume>2604</volume>
          ,
          <fpage>442</fpage>
          -
          <lpage>451</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Basystiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <article-title>Development of the Speech-to-Text Chatbot Interface Based on Google API</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2019</year>
          , Vol-
          <volume>2386</volume>
          ,
          <fpage>212</fpage>
          -
          <lpage>221</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <given-names>K.</given-names>
            <surname>Tymoshenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kovtun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Holoshchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Holoshchuk</surname>
          </string-name>
          ,
          <article-title>Real-time Ukrainian text recognition and voicing</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          .
          <year>2021</year>
          . Vol-
          <volume>2870</volume>
          , pp.
          <fpage>357</fpage>
          -
          <lpage>387</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <surname>M.S. Börjesson</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Hartelius</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          <string-name>
            <surname>Laakso</surname>
          </string-name>
          ,
          <article-title>Communicative Participation in People with Amyotrophic Lateral Sclerosis</article-title>
          ,
          <source>Folia Phoniatrica et Logopaedica</source>
          ,
          <year>2021</year>
          ,
          <volume>73</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>101</fpage>
          -
          <lpage>108</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <given-names>N.</given-names>
            <surname>Sharonova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Cherednichenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kupriianov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kanishcheva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Hamon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Grabar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kowalska-Styczen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Jonek-Kowalska</surname>
          </string-name>
          ,
          <article-title>Preface: Computational Linguistics and Intelligent Systems</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          ,
          <year>2021</year>
          , Vol-
          <volume>2870</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <given-names>N.R.</given-names>
            <surname>Kippin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Leitao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Finlay-Jones</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Baker</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Watkins</surname>
          </string-name>
          ,
          <article-title>The oral and written narrative language skills of adolescent students in youth detention and the impact of language disorder</article-title>
          ,
          <source>Journal of Communication Disorders</source>
          ,
          <year>2021</year>
          ,
          <volume>90</volume>
          , art. no.
          <elocation-id>106088</elocation-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <given-names>A.</given-names>
            <surname>Abdulkareem</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.E.</given-names>
            <surname>Somefun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.K.</given-names>
            <surname>Chinedum</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Agbetuyi</surname>
          </string-name>
          ,
          <article-title>Design and implementation of speech recognition system integrated with internet of things</article-title>
          ,
          <source>International Journal of Electrical and Computer Engineering</source>
          ,
          <year>2021</year>
          ,
          <volume>11</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>1796</fpage>
          -
          <lpage>1803</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <given-names>I.</given-names>
            <surname>Svensson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Nordström</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Lindeblad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gustafson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Björn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Sand</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Almgren Bäck</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Nilsson</surname>
          </string-name>
          ,
          <article-title>Effects of assistive technology for students with reading and writing disabilities</article-title>
          ,
          <source>Disability and Rehabilitation: Assistive Technology</source>
          ,
          <year>2021</year>
          ,
          <volume>16</volume>
          (
          <issue>2</issue>
          ), pp.
          <fpage>196</fpage>
          -
          <lpage>208</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <given-names>S.</given-names>
            <surname>Ahmad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Asmai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zaid</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kama</surname>
          </string-name>
          ,
          <article-title>Shopping Assistant App For People With Visual Impairment: An Acceptance Evaluation</article-title>
          ,
          <source>International Journal of Computing</source>
          ,
          <year>2019</year>
          ,
          <volume>18</volume>
          , pp.
          <fpage>285</fpage>
          -
          <lpage>292</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <given-names>I.</given-names>
            <surname>Khomytska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <article-title>Modelling of phonostatistical structures of the colloquial and newspaper styles in English sonorant phoneme group</article-title>
          ,
          <source>in: Proceedings of the 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>67</fpage>
          -
          <lpage>70</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <given-names>I.</given-names>
            <surname>Khomytska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <article-title>Modelling of phonostatistical structures of English backlingual phoneme group in style system</article-title>
          ,
          <source>in: 14th International Conference The Experience of Designing and Application of CAD Systems in Microelectronics, CADSM - Proceedings</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>324</fpage>
          -
          <lpage>327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <given-names>I.</given-names>
            <surname>Khomytska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <article-title>Authorship Attribution by Differentiation of Phonostatistical Structures of Styles</article-title>
          ,
          <source>in: IEEE 13th International Scientific and Technical Conference on Computer Sciences and Information Technologies, CSIT - Proceedings</source>
          ,
          <year>2018</year>
          , Vol.
          <volume>2</volume>
          , pp.
          <fpage>5</fpage>
          -
          <lpage>8</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <given-names>I.</given-names>
            <surname>Khomytska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holovatyy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Morushko</surname>
          </string-name>
          ,
          <article-title>Development of methods, models, and means for the author attribution of a text</article-title>
          ,
          <source>Eastern-European Journal of Enterprise Technologies</source>
          ,
          <year>2018</year>
          ,
          <volume>3</volume>
          (
          <issue>2(93)</issue>
          )
          , pp.
          <fpage>41</fpage>
          -
          <lpage>46</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39. I.
          <string-name>
            <surname>Khomytska</surname>
            ,
            <given-names>V.</given-names>
          </string-name>
          <string-name>
            <surname>Teslyuk</surname>
          </string-name>
          ,
          <article-title>Authorship and Style Attribution by Statistical Methods of Style Differentiation on the Phonological Level</article-title>
          ,
          <source>Advances in Intelligent Systems and Computing</source>
          , Vol.
          <volume>871</volume>
          ,
          <year>2019</year>
          , pp.
          <fpage>105</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <given-names>B.</given-names>
            <surname>Rusyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Pohreliuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rzheuskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Kubik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ryshkovets</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vysotskyi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.B.</given-names>
            <surname>Fernandes</surname>
          </string-name>
          ,
          <article-title>The mobile application development based on online music library for socializing in the world of bard songs and scouts' bonfires</article-title>
          ,
          <source>Advances in Intelligent Systems and Computing</source>
          , Vol.
          <volume>1080</volume>
          ,
          <year>2020</year>
          , pp.
          <fpage>734</fpage>
          -
          <lpage>756</lpage>
          . DOI: 10.1007/978-3-030-33695-0_49
        </mixed-citation>
      </ref>
      <ref id="ref41">
        <mixed-citation>
          41.
          <string-name>
            <given-names>A.</given-names>
            <surname>Batyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Voityshyn</surname>
          </string-name>
          ,
          <article-title>Apache storm based on topology for real-time processing of streaming data from social networks</article-title>
          ,
          <source>in: Proceedings of the 2016 IEEE 1st International Conference on Data Stream Mining and Processing</source>
          , DSMP,
          <year>2016</year>
          , pp.
          <fpage>345</fpage>
          -
          <lpage>349</lpage>
          . DOI: 10.1109/DSMP.2016.7583573
        </mixed-citation>
      </ref>
      <ref id="ref42">
        <mixed-citation>
          42.
          <string-name>
            <given-names>J.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Sachenko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Burov</surname>
          </string-name>
          ,
          <article-title>Information resources processing using linguistic analysis of textual content</article-title>
          ,
          <source>in: Proceedings of the International Conference on Intelligent Data Acquisition and Advanced Computing Systems Technology and Applications</source>
          , Romania,
          <year>2017</year>
          , pp.
          <fpage>573</fpage>
          -
          <lpage>578</lpage>
          . DOI: 10.1109/IDAACS.2017.8095038
        </mixed-citation>
      </ref>
      <ref id="ref43">
        <mixed-citation>
          43.
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Veres</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Rishnyak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Rishnyak</surname>
          </string-name>
          ,
          <article-title>Content linguistic analysis methods for textual documents classification</article-title>
          ,
          <source>in: Proceedings of the International Conference on Computer Sciences and Information Technologies</source>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>