<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>Lviv, Ukraine</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <article-id pub-id-type="doi">10.15587/1729-4061.2018.132052</article-id>
      <title-group>
        <article-title>Computer linguistic system architecture for Ukrainian language content processing based on machine learning</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Victoria Vysotska</string-name>
          <email>victoria.a.vysotska@lpnu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>Stepan Bandera 12, 79013 Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>MoDaST-2024: 6th International Workshop on Modern Data Science Technologies</institution>
          ,
          <addr-line>May 31 - June 1, 2024, Lviv-Shatsk</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <volume>1</volume>
      <issue>16</issue>
      <fpage>9</fpage>
      <lpage>21</lpage>
      <abstract>
        <p>The general architecture of computer linguistic systems (CLS) is developed on the basis of the main processes of information resource processing (integration, maintenance and content management) and of methods of intellectual and linguistic analysis of the text flow using machine learning. The information technology of intellectual analysis of the text flow based on information resource processing has been improved, which made it possible to adapt the typical structure of the content integration, management and support modules to various natural language processing (NLP) problems and to increase the efficiency of CLS functioning by 6-9%. The main NLP methods based on matching regular expressions (RE) with patterns in the grapheme and morphological analysis of Ukrainian-language texts are described. These pattern-matching methods have been improved, which made it possible to adapt text tokenization and normalization to cascades of simple regular-expression substitutions and finite state machines. The main valid RE operations are defined: union and disjunction of symbols/strings/expressions, counting and precedence operators, and anchors as special symbols identifying the presence/absence of symbols in an RE. The main stages of tokenization and normalization of Ukrainian text by cascades of simple regular-expression substitutions and finite state machines are defined. The morphological analysis (MA) method for Ukrainian-language text, based on word segmentation and normalization, sentence segmentation and a modified Porter stemming algorithm, was improved as an effective means of identifying lemma affixes for marking the analyzed word, which made it possible to increase the accuracy of keyword search by 9%.
Algorithms for word segmentation and normalization, sentence segmentation, and the modified Porter stemming are implemented and described. Unlike the classic Porter algorithm, which is not highly accurate even for English-language texts, the modified one is adapted specifically for the Ukrainian language and gives an accurate result in 85-93% of cases, depending on the quality, style and genre of the text and, accordingly, on the content of the CLS dictionaries. The minimum edit distance between strings of Ukrainian text is described as the minimum number of operations necessary to transform one string into another.</p>
      </abstract>
      <kwd-group>
        <kwd>natural language processing</kwd>
        <kwd>Ukrainian text</kwd>
        <kwd>NLP</kwd>
        <kwd>computer linguistics</kwd>
        <kwd>machine learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Let's consider the architectural patterns of CLS design based on supporting the life cycle of
the ML model for monitoring/managing the pipeline (information flow) of content (Fig. 1)
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The standard content processing pipeline implements an iterative process consisting of
the stages of creating and deploying the machine learning (ML) process [
        <xref ref-type="bibr" rid="ref2 ref3 ref4 ref5">2-5</xref>
        ]. The process
of monitoring/managing the content pipeline should additionally include stages that
improve the quality and efficiency of NLP problem solving [
        <xref ref-type="bibr" rid="ref6 ref7 ref8 ref9">6-9</xref>
        ]. At the construction
stage, raw integrated content is filtered from noise/duplicates and formatted into a suitable
form for further processing/management, conducting experiments on it, transferring it to
ML models for classification/clustering/prediction/evaluation, etc. [
        <xref ref-type="bibr" rid="ref10 ref11 ref12">10-12</xref>
        ]. At the stage of
content analysis and support, the content is deployed to determine the best ML model for
making assessments/forecasts that directly affect the regular user and target audience.
      </p>
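      <p>The construction stage described above (integration of raw content, filtering of noise and duplicates, formatting for transfer to ML models) can be sketched as a minimal pipeline. The function and field names below are illustrative assumptions, not part of the CLS described here.</p>
      <preformat>
```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str

def integrate(sources):
    """Collect raw content from several sources into one stream."""
    return [Document(text) for texts in sources for text in texts]

def filter_noise(docs):
    """Drop empty items and exact duplicates (construction-stage filtering)."""
    seen, clean = set(), []
    for d in docs:
        t = d.text.strip()
        if t and t not in seen:
            seen.add(t)
            clean.append(Document(t))
    return clean

def format_for_ml(docs):
    """Normalize content to a form suitable for ML models (lowercase tokens)."""
    return [d.text.lower().split() for d in docs]

sources = [["Контент один", "Контент один"], ["", "Контент два"]]
pipeline_output = format_for_ml(filter_noise(integrate(sources)))
print(pipeline_output)  # [['контент', 'один'], ['контент', 'два']]
```
      </preformat>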
      <p>Fig. 1 depicts the blocks of this architecture: input content, relevant content, user requests, interaction, integration, presentation, feedback, the CLS website, formatting/filtering, transformation, interpretation, API, NLP, normalization, prognostication, assessment, machine learning, classification, deployment, modeling, accumulation of content/analysis of features and data storage, grouped into the computer linguistic system, the processes of monitoring, development and management of content, and the content analysis and support processes.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related works</title>
      <p>
        Based on Feedback and model output, the target audience interacts with CLS, which
facilitates the adaptation of the selected learning model. Five stages of related processes
define the basic architectural principles for building a typical CLS. The processes of
monitoring, processing and managing content are interaction, formatting/filtering, NLP,
machine learning [
        <xref ref-type="bibr" rid="ref13">13-15</xref>
        ] and data accumulation in DS. For content analysis and support
processes, respectively, these are feature analysis, deployment, prediction, interpretation,
and content/result presentation. At the interaction stage, a set of rules for integrating
content from multiple reliable sources at certain time intervals is necessary. Also, in
parallel, a set of rules for checking the data entered by the CLS user is required as a
preliminary stage for the formatting/filtering stage according to a collection of rules
preset by the moderator and content from DS [16-21]. The next stage of NLP is a preparatory
intermediate stage for machine learning and data accumulation. The machine learning stage
can take various forms from SQL queries to various software modules. The support process
is easier to implement than the management stage, provided that the latter is implemented
correctly, especially during NLP analysis, in which additional lexical resources and artefacts
(dictionaries, translators, regular expressions, etc.) are created, on which the effectiveness
of CLS functioning directly depends (Fig. 2) [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ].
      </p>
      <p>
        The process of transition from raw text to a developed machine-learning model consists
of a sequence of additional content transformations. First, the input textual content is
transformed into an input corpus as a collection of texts, accumulated and stored in the DS.
The incoming content is further grouped, filtered, formatted, linguistically processed,
marked, normalized and converted into vectors for further processing. In the final
transformation, the model/models (Fig. 3) are trained on the vector corpus, and a
generalized presentation of the original content is created for further use in solving a
specific NLP problem [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1-6</xref>
        ]. An ML-based CLS architecture with accelerated or even
automatic model generation should support and optimize content transformation with ease
of testing and tuning. The process of generating an optimal ML model is a complex cyclic
algorithm, the main stages of which are the formation of a collection of features, model
selection, and hyperparameter adjustment. After each iteration, the results are evaluated to
determine the optimal collection of features, models, and parameters for solving a specific
NLP problem with the appropriate input data [
        <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1-6</xref>
        ].
      </p>
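      <p>The cyclic generation of an optimal ML model (forming a collection of features, selecting a model, adjusting hyperparameters, then evaluating each iteration) can be illustrated as an exhaustive search over candidates. The candidate feature sets, models, parameters and the scoring function below are toy stand-ins, not the paper's actual components.</p>
      <preformat>
```python
from itertools import product

# Candidate feature sets, models and hyperparameters (illustrative stand-ins)
feature_sets = {"bow": [1, 0, 1], "ngrams": [1, 1, 0]}
models = {"linear": lambda x, p: sum(x) * p, "const": lambda x, p: p}
params = [1, 2]

def evaluate(prediction, target=4):
    """Toy evaluation: negative absolute error against a fixed target."""
    return -abs(prediction - target)

# One iteration per (features, model, parameters) combination; keep the best
best, best_score = None, float("-inf")
for (fname, feats), (mname, model), p in product(
        feature_sets.items(), models.items(), params):
    score = evaluate(model(feats, p))
    if score > best_score:
        best, best_score = (fname, mname, p), score

print(best, best_score)  # ('bow', 'linear', 2) 0
```
      </preformat>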
      <sec id="sec-2-9">
        <title>CLS cloud storage</title>
        <p>Fig. 2, 3 depict the blocks of the process of generating an optimal machine learning model: monitoring, data collection, processing, transformation, the content archive, data filtering, content selection, optimization and generation of the ML model (forming the feature set, choice of the ML model, adjustment of parameters), model control, data management, analysis of features and parameters, the model repository, choosing the optimal model, learning and testing the ML model, the content repository, model settings and the CLS cloud storage.</p>
        <p>
          According to [
          <xref ref-type="bibr" rid="ref1 ref2 ref3 ref4 ref5 ref6">1-6</xref>
          ], there are 3 main notions in statistical ML: a class of models, a model form, and a
trained model. The class of models defines the relationship between the variables and the
target (for example, a linear model, a recurrent neural network, etc.). A model form is a
specific instantiation of a class: a collection of features, an algorithm, and a collection of
hyperparameters. A trained model is a model form that has been fitted on a specific data
set and adapted to make predictions. CLSs rely on many trained models built during model
selection, which creates and evaluates model forms.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Materials and Methods</title>
      <p>Any natural-language text initially reaches the CLS as a collection of non-random
unstructured data. Usually, however, the text is formed according to linguistic rules that
make these data understandable. The purpose of the integration module is to transform this
collection of non-random unstructured data into structured/semi-structured fields (records)
or markup that the CLS modules can interpret conveniently. ML methods (for example,
supervised learning) make it possible to train (and retrain) statistical models as the language
changes during NLP processes. By generating ML models on context-sensitive corpora, CLSs
can apply narrow semantic values to improve accuracy without the need for additional
interpretation.</p>
      <p>Formally, the ML model of the Ukrainian language has to supplement the input
incomplete phrase with missing words/phrases that are most likely to complete the content
of the statement according to the previous text (context analysis for further
guessing/predicting the meaning). Usually, a competently and correctly constructed text is
predictable based on its coherence. Calculation of the entropy (degree of
uncertainty/unpredictability) of the probability distribution of the model of the Ukrainian
language measures the degree of predictability of the text. Thus, unfinished phrases Київ
столиця... [Kyyiv - stolytsya...] (Kyiv - the capital...) and сонце сходить на... [sontse
skhodytʹ na...] (the sun rises on...) have low entropy, and statistical language models are highly
likely to guess the continuations України [Ukrayiny] (Ukraine) and сході [skhodi] (the
east), respectively. Expressions with high entropy such as ми йдемо в гості до... [my
ydemo v hosti do...] (we go to visit...) and я зустрів сьогодні... [ya zustriv sʹohodni...] (I met
today...) admit many continuations (parents, friends, neighbours and colleagues are
equally likely without analyzing the previous context). Language models can make inferences
and identify connections between lexemes. Formally, the model uses context to narrow the
decision space to a small number of options. Applying statistical
ML methods (supervised and unsupervised) allows the generation of language models for
extracting meaning from texts to support their predictability. First, the characteristic features
of the content are identified to predict the goal. Textual data provides many opportunities
to extract surface features based on parsing and breaking up sentences/utterances/phrases
(e.g. bag of words), as well as based on extracted morphological/syntactic/semantic
features. Special attention is paid to linguistic/ contextual/ structural features.</p>
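      <p>The entropy measure of predictability described above can be computed directly from a next-word probability distribution. The two toy distributions below (for the low-entropy and high-entropy phrases from the text) are illustrative probabilities, not measured ones.</p>
      <preformat>
```python
from math import log2

def entropy(dist):
    """Shannon entropy (bits) of a next-word probability distribution."""
    return -sum(p * log2(p) for p in dist.values() if p > 0)

# Low entropy: "Київ - столиця..." almost certainly continues with "України"
low = {"України": 0.95, "держави": 0.05}
# High entropy: "ми йдемо в гості до..." has many equally likely continuations
high = {"батьків": 0.25, "друзів": 0.25, "сусідів": 0.25, "колег": 0.25}

print(round(entropy(low), 3))  # 0.286
print(entropy(high))           # 2.0
```
      </preformat>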
      <p>
        1. An example of the analysis of a linguistic feature is the identification of the
predominant gender in a fragment of a news text (the role of gender) in different contexts
[
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] to identify gender biases regarding the subject of publications. In the gender analysis of
a text T, the share of each gender sign is p_g = N_g / N, where N is the number of words in the
analysed text T; K is the number of gender classifications (in this particular case, 4); N_g is
the number of words in sentences with gender sign g; S_g is the set of sentences of gender
sign g in the analyzed text; p_g is the percentage of the publication text belonging to gender
sign g; g is a specific gender sign; |S_g| is the number of sentences with gender sign g in the
analyzed text; S is the set of sentences identified by parsing the analysed text T; W_s is the
collection of sets of words identified by parsing each sentence s of the analyzed text T; W is
the set of all words of the text T; and N = |W| is the number of all words in the analysed
text T. Such a deterministic mechanism demonstrates how the content and frequency of use
of words/phrases (especially stereotypical ones) affect the predictability of the content
according to the previous context (the gender sign is built directly into the Ukrainian
language: every noun has a gender). But linguistic signs are not always decisive; for example,
plural and tense are used to analyse language/processes/actions/events in time.
      </p>
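      <p>The deterministic gender-share computation p_g = N_g / N can be sketched in a few lines. The tiny gender lexicon below is a hypothetical stand-in: a real CLS would obtain the gender sign from morphological analysis, since every Ukrainian noun carries grammatical gender.</p>
      <preformat>
```python
# Hypothetical gender lexicon standing in for morphological analysis
GENDER = {"журналіст": "masc", "журналістка": "fem", "видання": "neut"}

def gender_shares(sentences):
    """Share p_g = N_g / N of words lying in sentences of each gender sign."""
    total = sum(len(s) for s in sentences)
    counts = {}
    for words in sentences:
        genders = {GENDER[w] for w in words if w in GENDER}
        for g in genders:
            counts[g] = counts.get(g, 0) + len(words)
    return {g: n / total for g, n in counts.items()}

text = [["журналіст", "пише"], ["журналістка", "відповіла", "швидко"]]
print(gender_shares(text))  # {'masc': 0.4, 'fem': 0.6}
```
      </preformat>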
      <p>2. An example of the analysis of a contextual feature is mood analysis, or
sentiment analysis of a text (the emotional colouring of a specific topic as discussed by a
relevant group of people). It is usually applied to complex analysis of user feedback, for
example in e-commerce, or to the polarity of messages and reactions to events/phenomena
in social networks or political/economic discussions and forums. In superficial sentiment
analysis, a dictionary classification mechanism (positive/negative/neutral colouring of a
word) is usually used. For example, positive: чудовий [chudovyy] (wonderful), прекрасний
[prekrasnyy] (beautiful), правдивий [pravdyvyy] (true); negative: лінивий [linyvyy] (lazy),
поганий [pohanyy] (bad), дратівливий [drativlyvyy] (annoying); and neutral: білий
[bilyy] (white), сонячний [sonyachnyy] (sunny), космічний [kosmichnyy] (cosmic). But
mood is not a feature of the language itself and depends on the meaning of words/phrases
in the surrounding context of the text. For example, the word кумедний [kumednyy]
(funny) conveys several moods: positive in смішний клоун [smishnyy kloun] (funny
clown), negative in кумедний одяг [kumednyy odyah] (funny clothes), and neutral in
кумедний кіт [kumednyy kit] (funny cat) or кумедна іграшка [kumedna ihrashka] (funny
toy). The word гострий [hostryy] (sharp) next to перець [peretsʹ] (pepper) or ніж [nizh]
(knife) has a positive meaning in a purchase, but next to біль [bilʹ] (pain) or ніж [nizh]
(knife) in a criminal case it has a negative meaning. Negation also turns a positive text with
positive words into a negative one and vice versa, for example, ми дуже багато очікували
від відпочинку на морі сонячними гарними днями, але обіцяна курортна база
відпочинку все спаскудила [my duzhe bahato ochikuvaly vid vidpochynku na mori
sonyachnymy harnymy dnyamy, ale obitsyana kurortna baza vidpochynku vse spaskudyla]
(we expected a lot from a vacation at the sea on sunny, beautiful days, but the promised
holiday resort spoiled everything), where one negative word спаскудила [spaskudyla]
(spoiled) outweighs all the previous positive ones, or дощ, прохолода та вітер не стали
перепонами гарно відпочити в чудовій компанії [doshch, prokholoda ta viter ne staly
pereponamy harno vidpochyty v chudoviy kompaniyi] (rain, coolness and wind did not
become an obstacle to a good rest in wonderful company). Only thanks to machine learning
is it possible in such cases to obtain the predictability of the text and reveal the emotional
colouring according to the context. An a priori deterministic/structural approach loses the
flexibility of context and meaning, so most language models take into account the location
of words in context, using ML methods for prediction. The main method of developing
simple language models is the bag of words, i.e. the frequency of co-occurrence of words in
a narrow, limited context (Fig. 4).
1) інтелектуальна інформаційна система → інтелект інформ систем
2) інтелектуальний інформаційний пошук → інтелект інформ пошук
3) опрацювання інформаційних ресурсів → опрацюв інформ ресурс
4) система електронної комерції → систем електр комерц
5) комп’ютерна лінгвістична система → комп’ютер лінгвіст систем
6) аналіз природної мови → аналіз природ мов
7) опрацювання природної мови → опрацюв природ мов
8) опрацювання текстового контенту → опрацюв текст контент
9) аналіз текстового контенту → аналіз текст контент
10) пошук текстового контенту → пошук текст контент
11) лінгвістичний аналіз контенту → лінгвіст аналіз контент
12) лінгвістичний аналіз тексту → лінгвіст аналіз текст
Fig. 4 presents the resulting term-document matrix over the 15 stems аналіз, електр,
інтелект, інформ, комерц, комп’ютер, контент, лінгвіст, мов, опрацюв, пошук,
природ, ресурс, систем, текст; one of its columns, for instance, contains контент 2,
лінгвіст 2, мов 1, природ 1, текст 2, and 0 for the remaining stems.</p>
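      <p>Building a bag-of-stems term-document matrix like the one in Fig. 4 is a few lines of code. The phrases below are three of the already-stemmed 3-grams from the list above; the vocabulary is sorted for a stable column order.</p>
      <preformat>
```python
# Bag-of-stems: count stem occurrences per phrase (stems as in Fig. 4)
phrases = [
    "лінгвіст аналіз контент",  # 11) лінгвістичний аналіз контенту
    "лінгвіст аналіз текст",    # 12) лінгвістичний аналіз тексту
    "пошук текст контент",      # 10) пошук текстового контенту
]

# Column order: all distinct stems, sorted
vocab = sorted({stem for p in phrases for stem in p.split()})
# One row per phrase, one count per stem
matrix = [[p.split().count(stem) for stem in vocab] for p in phrases]

print(vocab)   # ['аналіз', 'контент', 'лінгвіст', 'пошук', 'текст']
for row in matrix:
    print(row)
```
      </preformat>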
      <p>Such an evaluation helps to determine the probabilistic neighbourhood of words and
their meaning from small fragments of text. Next, using statistical inference methods, word
order can be predicted. This is quite simple for English texts, where words are only lightly
inflected. For Ukrainian-language texts, it is better to use not a bag of words but a bag of
word stems. For example, for the 12 word combinations above taken as 3-grams (36 word
tokens), without taking declension into account we get a matrix of size 20×20, while with
declension, gender and person accounted for (analysing only word stems) it shrinks to
15×15. Moreover, for the Ukrainian language, the position of stems in a 3-gram is usually
not important and often has an unambiguous probability of compatibility in terms of
content, for example, інформаційний
ресурс (інформ ресурс) [informatsiynyy resurs (inform resurs)] (information resource
(inform resource)) and ресурс інформації (ресурс інформ) [resurs informatsiyi (resurs
inform)] (information resource (inform resource)). The bag-of-words/stems model is also
extended by analyzing the co-occurrence of stable phrases and fragments of expressions
that are of great importance for identifying the meaning of the text. The expressions
зелений край скатертини (межа) [zelenyy kray skatertyny (mezha)] (green edge of the
tablecloth (border)) and зелений край батьківщини (місцевість) [zelenyy kray
batʹkivshchyny (mistsevistʹ)] (green edge of the homeland (locality)) in the form of a
3-gram carry different meanings. That is, there are several interpretations of the single word
край (edge: the boundary of an object, a piece, the end of an action/state, a special area, a
place of residence, an administrative-territorial unit). Statistical analysis of n-grams makes it
possible to distinguish patterns of context. Language models based on the analysis of n-gram
contexts require the ability to explore the relationship of text to some target variable.
application of the analysis of linguistic and contextual features contributes to the formation
of the general predictability of the text. However, their identification and further use require
the ability to parse/identify the linguistic units of the language.</p>
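      <p>The observation that stem order inside a Ukrainian 3-gram usually does not change its meaning suggests an order-insensitive representation; a frozen set of stems, as sketched below, makes інформ ресурс and ресурс інформ compare equal. This representation is an illustration, not the paper's stated data structure.</p>
      <preformat>
```python
def stem_bag(phrase):
    """Order-insensitive representation of an n-gram of stems."""
    return frozenset(phrase.split())

# інформаційний ресурс vs. ресурс інформації: same stems, same meaning
a = stem_bag("інформ ресурс")
b = stem_bag("ресурс інформ")
print(a == b)  # True
```
      </preformat>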
      <p>3. An example of the analysis of a structural feature is the construction of an
ontology for the implementation of an IIS. Along with linguistic and contextual features, it
is then necessary to identify and process high-level language units to define a vocabulary of
operations for the text corpus. Different units of language are processed at different levels,
and the correct implementation of NLP methods based on ML is important for the fast and
correct identification of the linguistic context (the structure of semantic relationships).
Based on the typical pattern of an utterance (statement or simple phrase) in the form
subject → verb → object → object attribute (subject → predicate → complement), ontologies
are constructed that define specific relationships between entities. They make it possible to
overcome the lack of a mandatory word order in a Ukrainian sentence when identifying its
semantics. Ontology construction is advisable for tasks where large volumes of text data
must be processed constantly and where the project has long-term resource support. Semantic
analysis consists not only in identifying the content of the text but also in generating data
structures to which logical reasoning can be applied. Thematic Meaning Representations
(TMR) are used to encode sentences in the form of predicate structures based on first-order
logic or lambda calculus (λ-calculus). Network/graph structures are used to encode
interactions of predicates of relevant text features. A traversal is then implemented to
analyze the centrality of terms or subjects and the reasons for the relationships between
elements. Graph analysis is usually not a complete semantic analysis, but it helps to form
important logical decisions or conclusions. Semantics, syntax and morphology make it
possible to enrich simple text strings with linguistic meaning and to generate new
meaningful text content. Nowadays, natural language is one of the most commonly used
forms of content, and its analysis increases the usefulness of data applications, making
them an integral part of everyday life. Scalable analysis and machine learning of text
primarily require up-to-date knowledge and text corpora of the relevant subject area (SA).
For example, in the field of finance, a CLS needs to identify financial terms, stock
abbreviations and company names, so the documents in the SA corpus must contain these
entities. That is, the development of any CLS begins with obtaining textual data of the
appropriate type and forming a corpus with the structural and contextual features of the SA.</p>
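      <p>The subject → predicate → object pattern maps directly onto a triple store. The minimal sketch below uses illustrative entity and relation names; a production ontology would use a dedicated store and typed relations.</p>
      <preformat>
```python
# Ontology fragment as subject → predicate → object triples
# (entity and relation names are illustrative)
triples = [
    ("Київ", "є_столицею", "Україна"),
    ("Україна", "має_місто", "Львів"),
]

def objects_of(subject, predicate):
    """All objects linked to a subject by a given predicate."""
    return [o for s, p, o in triples if s == subject and p == predicate]

print(objects_of("Київ", "є_столицею"))  # ['Україна']
```
      </preformat>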
    </sec>
    <sec id="sec-4">
      <title>4. Experiments, results and discussions</title>
      <p>4.1. Method of grapheme analysis of the Ukrainian language
For the grapheme analysis (GA) of text strings, it is best to use regular expressions (RE) as
algebraic notations for describing sets of character strings. They are commonly used in the
development and maintenance of every type of computer language (programming,
communication protocols, data markup, specification, and design), in text editors, and in
word-processing software, especially with IIS templates or SA text corpus collections.
Identification/search of a fragment/string by pattern in a sequence of character strings is
implemented to find either all matches or the first one. Templates use the special characters
[, ], ^, \, -, ?, *, +, ., $, |, (, ), _, {, }, etc. The / character is not part of an RE but marks its
boundaries. The simplest RE is a tuple of plain characters (Table 1) that recognizes the first
or all pattern-like occurrences of character sequences.</p>
      <p>Table 1. RE rules over character classes and their meaning:
/[онві]/ — о, н, в, or і;
/[0123456789]/ — any digit in the string sequence;
/[0123]/ — 0, 1, 2, or 3;
/[0-9]/ — any digit in the string sequence (range form);
/[а-я]/ — any lowercase letter of the Ukrainian alphabet;
/[А-Я]/ — any uppercase letter of the Ukrainian alphabet;
/[А-Яа-я]/ — any letter of the Ukrainian alphabet, regardless of case;
/[A-Z]/ — any uppercase letter of the English alphabet;
/[^А-Я]/ — any character other than an uppercase letter of the Ukrainian alphabet;
/[^Кк]/ — any character except the letters К and к;
/[^\.]/ — any character except the dot character;
/[к^]/ — к or ^;
/x^y/ — the literal string x^y;
/^[А-Я]/ — any uppercase letter of the Ukrainian alphabet at the beginning of a line;
/^а/ — the letter а at the beginning of a line.
The preceding rows of the table cover the exact, case-sensitive recognition of substrings,
single characters and special characters. Each rule is applied to example strings such as
Структурна схема лінгвістичного аналізу текстового контенту and Контент-аналіз
застосовують для аналізу потоків контенту.</p>
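      <p>The character-class rules of Table 1 behave the same way in any RE engine; a short Python demonstration follows. Note one caveat the table glosses over, flagged in the comments: the Unicode ranges а-я and А-Я omit the Ukrainian letters і, ї, є, ґ, which sit outside the basic Cyrillic run.</p>
      <preformat>
```python
import re

text = "RE чутливі до регістру – правила 1, 2 та 4 дають різні результати"

# /[0-9]/ — every digit in the string
print(re.findall(r"[0-9]", text))        # ['1', '2', '4']

# /[^Кк]/ applied to a word: every character except К/к
print(re.findall(r"[^Кк]", "Київ"))      # ['и', 'ї', 'в']

# /^[А-Я]/ — an uppercase Cyrillic letter at the start of the line
print(bool(re.match(r"[А-Я]", "Контент-аналіз")))  # True

# Caveat: а-я does not include і, ї, є, ґ (they lie outside the basic
# Cyrillic block), so a fuller Ukrainian lowercase class must add them
print(re.findall(r"[а-я]", "мої"))       # ['м', 'о'] — ї is missed
```
      </preformat>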
      <p>
        RE matching is case-sensitive, so rules 1, 2 and 4 give different results. The special characters
[ and ] solve this case-sensitivity problem: a string of characters inside [] is matched as a
disjunction of its values. RE-rule 6 recognizes any digit in a sequence of string characters. The dash
special character - inside [] for RE-rules 8-12 avoids listing all characters explicitly and instead
matches any character in the corresponding range. For example, the pattern /[3-6]/ matches any of
the characters 3, 4, 5 or 6, and /[в-ж]/ matches one of the characters в, г, д, е or ж in the grapheme
analysis of the input text. The caret (circumflex) character ^ inside [] for RE-rules 13-18 carries a
different meaning depending on its position. Placed at the beginning, immediately after [, it means
that all characters after it are rejected in the parsed character string (RE 13-15). The caret ^ thus
has three purposes: to mark the beginning of a line (outside [], RE 18-19); to express negation
inside [] (RE 13-15); and, escaped, to denote a literal ^ (RE 16-17). The question mark special
character ? for RE-rules 20-21 marks optional characters in the searched string. This is useful when
a character in a certain position may be either present or absent, which [] alone cannot express:
inside [] one can exclude specific symbols from a range, but cannot describe the complete absence
of a symbol, which is what ? does. The dot special character . for RE-rules 22-23 matches any single
symbol at the given position of the analyzed string. While ? expresses the absence or presence of
one symbol, the special symbol * (RE 26-29) expresses repetition: the symbol or RE before * may be
absent or may occur an arbitrary number of consecutive times in the recognized line, so the result
can also be a line without this symbol at all. Therefore, to find at least one symbol from a possible
sequence of two identical ones, see RE 29, and for two different ones, RE 30. The + special character
for RE-rules 30-31 marks one or more occurrences of the immediately preceding symbol or RE. {}
(RE 32) indicates an exact quantity (for example, exactly 2 times). The dot special character . is
often used together with the special character * to match any string of characters (RE 33).
      </p>
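The character classes and quantifiers above can be exercised directly with Python's re module; the sample strings below are illustrative, not taken from the article's rule tables:

```python
import re

# Character class with a range: /[в-ж]/ matches one character in в..ж
assert re.search(r"[в-ж]", "дим") is not None
assert re.search(r"[3-6]", "x5y") is not None

# Negation inside []: [^0-9] rejects digits
assert re.fullmatch(r"[^0-9]+", "текст") is not None

# ? optional, * zero or more, + one or more, {n} exactly n
assert re.fullmatch(r"colou?r", "color") is not None
assert re.fullmatch(r"аб*в", "ав") is not None        # * allows complete absence
assert re.fullmatch(r"аб+в", "абббв") is not None     # + requires at least one
assert re.fullmatch(r"а{2}", "аа") is not None

# . matches any single character
assert re.fullmatch(r"д.м", "дим") is not None
```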
      <p>An anchor is a special symbol (for example, the caret ^ or the dollar sign $) specifying
the location of the RE in the character string. The caret ^ marks the beginning
of a line (RE 34). The dollar sign $ recognizes the end of a line (RE 35-36). The backslash \
allows special characters in the character string of the input text to be matched literally (RE
37-38). The anchors \b and \B identify the presence and absence of word boundaries,
respectively (RE 39-42). A word is any tuple of numbers, underscores or letters (without
special characters).</p>
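A short sketch of these anchors in Python's re module (Cyrillic letters count as word characters by default, so \b works for Ukrainian text; the sample strings are illustrative):

```python
import re

# ^ and $ anchor a match to the start and end of a line
assert re.search(r"^Контент", "Контент-аналіз") is not None
assert re.search(r"аналіз$", "Контент-аналіз") is not None

# Backslash escapes a special character so it is matched literally
assert re.search(r"\$", "ціна $5") is not None

# \b marks a word boundary, \B its absence
assert re.search(r"\bтекст\b", "новий текст тут") is not None
assert re.search(r"\Bекст\b", "текст") is not None
```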
      <p>To organize the selection of alternatives, for example between synonyms, the
disjunction operation based on the special symbol | is used (RE 43-46). Combining the special
character | inside () allows disjunction recognition to be arranged only for a specific part of the
pattern, taking into account different inflexions/prefixes (RE 44). The special characters () are
also used to organize counters of the type * over groups (RE 46); the difference is that a bare *
applies to one character, while a group followed by * applies to a whole sequence.</p>
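Disjunction restricted to part of a pattern, and repetition of a whole group, can be sketched as follows (the stem and inflexions are illustrative):

```python
import re

# | selects between alternatives; () limits the disjunction to one stem
# with different inflexions
pattern = re.compile(r"текст(ова|ові|и)?")
assert pattern.fullmatch("текстова") is not None
assert pattern.fullmatch("текст") is not None

# () lets * or + repeat a whole sequence, not a single character
assert re.fullmatch(r"(на)+", "нанана") is not None
assert re.fullmatch(r"на*", "нааа") is not None  # here * repeats only 'а'
```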
      <p>For complex disjunctive RE operators, when grouping different special symbols,
the concept of priority is used (Table 2): () &gt; *, +, ?, {} &gt; sequence of characters, ^, $ &gt; |
from the highest to the lowest. Greedy RE patterns of the type /[а-я]*/
recognize zero or more letters, including the empty match, expanding the identification to cover as
many strings as they can. Non-greedy REs based on *? and +? find the smallest possible text.
An RE of the type /˽*/ is used to indicate the absence or presence of a certain number of spaces,
since there can always be additional spaces around. There are aliases for general ranges
that can be used primarily to preserve grapheme type (Table 3). Correctly constructed REs
avoid errors of false recognition (over-recognition) and false rejection (accidental misses).
Reducing the overall error rate for GA implies two antagonistic conditions for generating a
collection of REs: increasing recall (minimizing false ignores) and increasing precision
(minimizing false recognitions).</p>
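The greedy vs. non-greedy distinction can be seen directly (the tagged sample string is illustrative):

```python
import re

text = "<b>перший</b> і <b>другий</b>"

# Greedy .* expands as far as it can: one match spanning both tags
greedy = re.findall(r"<b>.*</b>", text)
assert greedy == ["<b>перший</b> і <b>другий</b>"]

# Non-greedy .*? finds the smallest possible text: two separate matches
lazy = re.findall(r"<b>.*?</b>", text)
assert lazy == ["<b>перший</b>", "<b>другий</b>"]
```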
      <p>RE /{9}/ recognizes exactly 9 occurrences of the previous symbol/expression; RE
/а.{3}я/ recognizes sequences that start with а, end with я and contain exactly 3 arbitrary
characters between them; RE /{3,12}/ recognizes from 3 to 12 occurrences of the previous
symbol/expression; RE /{5,}/ is at least 5 occurrences of the preceding character/expression;
and RE /{,13}/ is up to 13 occurrences of the preceding character/expression. The substitution
operator s before an RE allows the matched expression to be replaced according to a pattern.
The special character \k indicates the location of a character/phrase/expression as a duplicate
of the element captured in the k-th capture group, i.e. the pattern in (), where k is the number
of the brackets or capture group. Thus, the special characters () have a double function in RE:
to group conditions and to determine the order of application of operators. For grouping
without storing the matched text in a register, an RE of the form (?:pattern) is used as a group
that does not capture the expression. When applying REs, the rank of use in the queue is
determined. An RE of the form (?=pattern) is a positive assertion (RE 23).</p>
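Bounded repetition, substitution, backreferences and non-capturing groups can be sketched in Python's re module (all sample strings are illustrative):

```python
import re

# {n} and {m,n} bound the number of repetitions
assert re.fullmatch(r"а.{3}я", "алмія") is not None   # exactly 3 arbitrary characters
assert re.fullmatch(r"о{2,4}", "ооо") is not None

# re.sub implements the substitution operator s
assert re.sub(r"\s+", " ", "багато   пробілів") == "багато пробілів"

# \1 is a backreference to capture group 1: here it finds a doubled word
assert re.search(r"\b(\w+) \1\b", "дуже дуже добре") is not None

# (?:...) groups without capturing
m = re.fullmatch(r"(?:кіло)?бод", "кілобод")
assert m is not None and m.groups() == ()
```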
      <p>The (?=pattern) operator is a positive assertion identifying a zero-width pattern, i.e. the
match pointer is not advanced. The (?!pattern) operator is a negative assertion: it succeeds if
the pattern does not match, is zero-width, and the cursor does not advance. Negative assertions
are usually used in the analysis of a complex content model when a special case needs to be
excluded (RE 24).</p>
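Both zero-width assertions can be demonstrated with an illustrative stem and inflexion:

```python
import re

# (?=...) positive lookahead: match 'текст' only when followed by 'ова',
# without consuming it
m = re.search(r"текст(?=ова)", "текстова")
assert m is not None and m.group() == "текст"

# (?!...) negative lookahead: match 'текст' only when NOT followed by 'ова'
assert re.search(r"текст(?!ова)", "текстова") is None
assert re.search(r"текст(?!ова)", "текстовий") is not None
```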
      <p>Grapheme analysis is the preliminary processing and transformation of the text into a
certain marked and compressed format for the following NLP processes (Fig. 5):
extracting content → extracting paragraphs → extracting sentences within a paragraph →
extracting tokens within a sentence → marking tokens with tags for MA as part-of-speech
marking.</p>
      <p>(Fig. 5: a repository of text corpora and information resources (HTML) feeds grapheme
segmentation and labelling: tags and paragraphs → sentences → lexemes → grapheme analysis,
after which the marked content is saved.)</p>
      <p>At the first stages of the integration of content from various sources, it is necessary to
implement the processes of filtering, access and calculation of text sizes based on the
application of the standard API of pre-grapheme processing of the division of documents
through the execution of the following sequence of NLTK methods, composed as a nested call
chain (6):
 () is the organization of access to previously unprocessed text;
 () is the elimination of non-text content, scripts and style tags;
 () is the identification of individual paragraphs from the content text;
 () is the identification of individual sentences from the content text;
 () is the identification of individual tokens from the content text;
 () is the grapheme labelling of identified tokens based on RE;
and, if necessary, additional methods, such as adding tags or parsing sentences, converting
annotated text into tree-like data structures, or extracting individual XML elements. To
identify and extract the main content from an information resource with an undefined
structure and high variability of documents from different sources, a () method based on the
Python readability-lxml library is used, which removes all anomalous artefacts, leaving only
the text.</p>
      <p>When processing HTML text, this method uses a collection of formal REs to identify and
remove navigation menus, declarations, script tags, and CSS, then creates a new content
object model tree, extracts the text from the source tree, and embeds it into the newly
created tree.</p>
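The stripping of script and style subtrees can be sketched with the standard library alone; this is a minimal stand-in for the idea (readability-lxml additionally scores nodes to isolate the main content), and the class name and sample HTML are illustrative:

```python
from html.parser import HTMLParser

class MainTextExtractor(HTMLParser):
    """Drop <script>/<style> subtrees and keep the remaining text."""
    SKIP = {"script", "style"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # keep text only outside skipped subtrees
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

html = ("<html><head><style>p{}</style></head>"
        "<body><p>Інформаційний ресурс</p><script>x=1</script></body></html>")
parser = MainTextExtractor()
parser.feed(html)
assert parser.chunks == ["Інформаційний ресурс"]
```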
      <p>Vectorization, feature extraction, and ML tasks rely heavily on CLS's ability to efficiently
break down textual content into its constituent components while preserving the original
structure. The accuracy and sensitivity of ML models depend on the efficiency of identifying
the connections of tokens with the corresponding context in the text. Paragraphs contain
complete ideas of context and are the structural unit of content. Based on NLTK, the ()
operator is implemented as a paragraph generator, which is defined as blocks of text
separated by two newline characters. The () operator scans all files and passes
each HTML text to the RE constructor, indicating that parsing of the HTML markup should
be done through the lxml HTMLParser. The resulting object maintains a tree structure that
can be navigated using native HTML tags and elements.</p>
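The paragraph generator described above (blocks separated by two or more newline characters) can be sketched as follows; the function name and sample text are illustrative:

```python
import re

def paras(text):
    """Yield paragraphs: blocks of text separated by blank lines."""
    for block in re.split(r"\n{2,}", text):
        block = block.strip()
        if block:
            yield block

doc = "Перший абзац.\n\nДругий абзац.\n\n\nТретій."
assert list(paras(doc)) == ["Перший абзац.", "Другий абзац.", "Третій."]
```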
      <p>If paragraphs are structural units of content, then sentences are semantic units. As a
paragraph expresses a single idea, a sentence contains a complete thought that the author
has formulated and expressed in words. Grapheme segmentation is the division of
text into sentences for further processing by marking words with parts of speech in MA. The
() operator, calling () and returning an iterator (generator), sorts all sentences
from all paragraphs.</p>
      <p>The () operator bypasses all paragraphs selected by the () operator and uses the
() operator to perform the actual grapheme segmentation. Internally, the
() operator uses</p>
      <p>(), a model pre-trained with RE recognition/identification
rules for various kinds of tokens: punctuation marks, abbreviations, geographical names,
and other marks that serve as sentence start/end or tab marks. Punctuation
marks do not always have an unambiguous interpretation: for example, the full stop is a sign of
the end of a sentence, but it is also present in dates, abbreviations, ellipses,
etc. Determining sentence boundaries is not always an easy task. Punctuation is crucial for
identifying word boundaries (commas, spaces, colons) and for identifying certain aspects of
meaning (question marks, exclamation marks, quotation marks). For some tasks, such as
tagging parts of speech, and analyzing or synthesizing speech, it is sometimes necessary to
treat punctuation marks as if they were separate words. When analyzing speech,
punctuation marks stand in for pauses, accents, and changes in intonation dynamics.
Lexemization is the process of obtaining lexemes (syntactically encoded strings of symbols);
to implement it, the () operator based on RE is used, which is selected
through () markers for spaces and punctuation marks and returns a list of alphabetic
and non-alphabetic characters. Like delimiting sentences, lexeme recognition is not always
an easy task: the presence of punctuation marks in a lexeme, punctuation marks as
independent lexemes, lexemes with and without hyphens, and lexemes as shortened forms
of words (one or more words). Different marker selection tools are chosen for these cases.
Any statement is a speech correlate of a sentence. The presence of lexemes of the dysfluency
type (loss of speech speed, for example, a longer pause when thinking) carries not so much
a semantic load as an emotional one. Exclamations such as мммм, ох, ах [mmmm, okh, akh],
etc. are fillers or filled pauses and are also emotionally coloured, but not semantically
coloured. An unfinished word with further repetition of its ending, or simply with
repetition, is a fragment that does not carry a semantic load, but only an emotional one.
Therefore, when conducting PHA, depending on the goal of solving a specific problem
through CLS, it is important to take into account (mark accordingly) or ignore some types
of punctuation (ellipsis, exclamation points, etc.), dysfluencies, double fragments, and
exclamations. If CLS is just a transcription of speech, then such phenomena should be
ignored to avoid loss of speech rate. But they make it possible to determine the
psychological and emotional state of the speaker, to identify the peculiarity of the
speaker's authorial speech when the tone of the voice changes, and they are relevant in
predicting the next word, because they signal that the speaker is restarting the
statement/idea; therefore, for speech recognition, such tokens are considered alongside
ordinary phonemes. Marking a lexeme as a lemma (a set of lexical forms having the same base,
the same main part of speech and the same word content) or as a word form (a fully inflected
or derived form of a word) is a significant difference for conducting the next stage of MA as
lemmatization or stemming, i.e. identification of word bases. For many NLP tasks in the
English language, it is enough to mark the corresponding lexemes as word forms, but for
the Ukrainian language it is not: it is still necessary to identify the bases of the words (for
example, based on the analysis of inflexion according to the tree of endings).</p>
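A minimal rule-based sketch of sentence segmentation and tokenization, guarding the ambiguous full stop with an abbreviation list (the abbreviation set, function names and sample strings are illustrative; NLTK's pre-trained model handles far more cases):

```python
import re

ABBR = {"т.д", "т.п", "напр"}  # illustrative abbreviation list

def sentences(text):
    """Split on sentence-final punctuation, re-joining after abbreviations."""
    parts = re.split(r"(?<=[.!?])\s+", text)
    out = []
    for p in parts:
        if out and out[-1].rstrip(".").lower().rsplit(" ", 1)[-1] in ABBR:
            out[-1] += " " + p       # the period belonged to an abbreviation
        else:
            out.append(p)
    return out

def tokens(sentence):
    # words (letters/apostrophe/hyphen) or single non-space symbols
    return re.findall(r"[\w'-]+|[^\w\s]", sentence)

assert sentences("Перше. Друге.") == ["Перше.", "Друге."]
assert len(sentences("Мова і т.д. Далі текст.")) == 1
```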
      <p>There are two ways to count words with punctuation ignored: token recognition as
types (the number of different words |V| in the set of words of the corpus, i.e. the cardinality
of the alphabet of the corpus, where an element of the alphabet/dictionary is a unique word)
and as tokens (the total number N of words of the analyzed corpus), i.e. |V| ≤ N. The largest
Google N-grams corpus contains 13 million types, counting only those that appear at least 40
times, so the true number is much larger.</p>
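The type/token distinction can be computed directly (the sample corpus line is illustrative):

```python
import re

# Types |V| (unique words) vs tokens N (all running words): |V| <= N
text = "контент аналіз контент моніторинг аналіз контент"
toks = re.findall(r"\w+", text.lower())
N = len(toks)          # number of tokens
V = len(set(toks))     # number of types
assert (N, V) == (6, 3)
assert V <= N
```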
      <p>The ratio between the number of types |V| and the number of tokens N is called Herdan's
law (Herdan, 1960) or Heaps' law (Heaps, 1978): |V| = kN^β, where k and β are positive
constants with 0 &lt; β &lt; 1. The value of β depends on the size of the corpus and the genre; for
large corpora β varies within [0.67; 0.75], so the size of the dictionary for a text grows
much faster than the square root of its length in words. Another measure of the number
of words in a language is the number of lemmas rather than word types (for example, the
Oxford English Dictionary has over 615,000 entries).
4.2. Method of morphological analysis of the Ukrainian language
Morphology identifies the shape of things, and in textual analysis, the shape of individual
words/tokens. Lexemes are both words and punctuation marks, which allows the next stage,
SYA (syntactic analysis), to be conducted more clearly. Word structure helps determine plural,
gender, tense, person, declension, etc. MA is a difficult task, as most languages have many
exceptions to the rules and special cases. The main task of MA is to identify parts of words
to assign them to certain classes (tags) of parts of speech. For example, sometimes it is
important to understand whether a noun is singular or plural, or is a proper name. It is also
often necessary to know whether a verb is in the infinitive or the past tense, or acts as an
adjective. The resulting parts of speech are then used to generate larger structures
(fragments/phrases), or whole word trees, which are then used to build semantic reasoning
data structures. After GA (grapheme analysis), we have access to tokens in sentences in
paragraphs of integrated content texts, which makes it possible to apply MA to mark words
from the collection of tokens with parts of speech (e.g., verbs, nouns, prepositions,
adjectives) that indicate the role of the word in the context of the sentence. In the Ukrainian
language, the same word can usually take on different roles, depending on the inflexions.
Part-of-speech tagging based on MA rules consists of adding a corresponding tag to each
word from a collection of tokens that contains information about the definition of the word
and its role in the current context. MA rules are used for the development of
modules/subsystems for keyword identification, text classification (Fig. 4.6), machine
translation, and error correction, as well as for human psychological analysis, semantic
analysis, etc. When identifying words for further classification, the rub_id attribute
describes the rubric to which a specific keyword belongs (Table 4).
code/17,2,23,10,12,18,9
code compatible
compatible/17,5
compatibleness/13
compatibility/5,13,17</p>
      <p>compatibly/5
code/17,2,23,10,12,18,9
generators/1
кілобод/efg
копілефт/e
хакер/efg</p>
      <p>хеш/e
таймер/efg
стек/efgo
спам/e
смайл/ef
сайт/ef
рестарт/ef
рекурсія/ab
процесор/efg</p>
      <p>проксі
принтер/efg
подкаст/e
плотер/efg
піксель/efg
опція/ab
оффлайн/e
онлайн/e
модем/efg
сплайн/efg</p>
      <p>The flag attribute defines the properties of a keyword (the part of speech
to which it belongs). In thematic dictionaries, each word has its property, for example, a b c
d o for different types of nouns, A for verbs, V for adjectives (Fig. 7). To compare the
complexity: in thematic dictionaries (23 rules in total), each English word also has a property,
for example, the numbers 1-23 are the numbers of rules of the PFX type (prefixes, rules 1-7)
and SFX type (suffixes and endings, rules 8-23) and describe some nouns for English words (Fig.
8). For example, PFX-type rules describe the modification of some nouns for English words
with prefixes: re- (rule PFX 1), de- (rule PFX 2), dis- (rule PFX 3), con- (rule PFX 4), in- (PFX
rule 5), pro- (PFX rule 6) and un- (PFX rule 7).</p>
      <p>SFX-type rules describe modifications of some nouns for English words with suffixes or
endings (Fig. 8):
 -able [^aeiou], -able ee, -able [^aeiou]e (rule SFX 8),
 -d e, -ied [^aeiou]y, -ed [^ey], -ed [aeiou]y (rule SFX 9),
 -ing e, -ing [^e] (rule SFX 10) and -ieth y, -th [^y] (rule SFX 11),
 -ment (rule SFX 14) and -ion e, -ication y, -en [^ey] (rule SFX 15),
 -ings e, -ings [^e] (rule SFX 12) and -'s (rule SFX 13),
 -iness [^aeiou]y, -ness [aeiou]y, -ness [^y] (rule SFX 16),
 -ies [^aeiou]y, -s [aeiou]y, -es [sxzh], -s [^sxzhy] (rule SFX 17),
 -r e, -ier [^aeiou]y, -er [aeiou]y, -er [^ey] (rule SFX 18),
 -st e, -iest [^aeiou]y, -est [aeiou]y, -est [^ey] (rule SFX 19),
 -ive e, -ive [^e] (rule SFX 20) and -ly (rule SFX 21),
 -ions e, -ications y, -ens [^ey] (rule SFX 22),
 -rs e, -iers [^aeiou]y, -ers [aeiou]y, -ers [^ey] (rule SFX 23).
The letters e and y near the suffixes are decision markers.</p>
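Rule SFX 17 (plural formation) can be sketched as a small conditional function; the function name is illustrative and the conditions follow the rule as listed above:

```python
import re

def sfx17(word):
    """Apply rule SFX 17: -ies after consonant+y, -s after vowel+y,
    -es after s/x/z/h, otherwise -s."""
    if re.search(r"[^aeiou]y$", word):
        return word[:-1] + "ies"
    if re.search(r"[aeiou]y$", word):
        return word + "s"
    if re.search(r"[sxzh]$", word):
        return word + "es"
    return word + "s"

assert sfx17("party") == "parties"
assert sfx17("day") == "days"
assert sfx17("box") == "boxes"
assert sfx17("code") == "codes"
```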
      <p>A file of affixes (parts of words that attach to the root and carry grammatical or
word-forming meaning, elements of word formation, for example, prefix, suffix, postfix,
inflexion) has the *.aff file type and may contain additional attributes: the rules of reduction
to the base of the word (Fig. 9). The SET notation is usually used to identify the character set
of the affix file and dictionaries. REP forms a lookup table to correct multiple characters in
words. TRY identifies candidate character sequences for replacement. SFX and PFX identify
the types of suffixes and prefixes that are marked by word affix flags.</p>
      <p>The flag attribute determines the type of word, the mask attribute
shows the ending identification rule, the value of the find attribute is the ending of the word
in the nominative case, and the value of the repl attribute is the ending of the word in a
non-nominative case. Exceptions to the rules are given in square brackets. For example, the first
line (ordering 26) describes a specific example of recognizing nouns of group a with the
alternation -і/-о and the inflexion -ін of the nominative case in the instrumental case
(inflexion -оном), and the next entry (ordering 27) covers the same nouns, but in the locative
case (inflexion -оні), while not recognizing other rules of that group or other groups in the
dative case with inflexions -онові and -ону (Fig. 10). The third record (ordering 28) already
recognizes nouns with the alternation -і/-о and the inflexion -іг of the nominative case in the
dative case (inflexion -огу), but does not recognize other rules of the same group (rules 29-31
do this, respectively): -огові (Д.М.), -огом (О.), -озі (М.).</p>
      <p>The ninth entry (ordering 34) already recognizes nouns with the inflexion -[^л]ід of the
nominative case (not after -л) in the instrumental case with the inflexion -одом, but does not
recognize other rules of the same group (according to rules 32-33 and 35): -[^л]оду
to their total number; the average number of paragraphs in the content; the average number of
sentences per paragraph; and the total processing time.</p>
      <p>Since the corpus grows as new data is collected, pre-processed and compressed, the MA
method will allow us to calculate these features and analyze the dynamics of their change. It is
an important content-monitoring tool for identifying possible problems in CLS; for example, in
an ML model, a significant change in lexical diversity and in the number of paragraphs per
content item affects the quality of the model. That is, the MA and GA methods, in addition
to the identification of tokens and the direct marking of words by parts of speech, are used to
collect additional information when determining the volume of changes in the corpus, so as to
start further vectorization and restructuring of the ML model in a timely manner. The main
stage of the MA method is the identification of the bases of words (stemming) without taking
into account inflexions (suffixes and endings) and, in some cases, prefixes. According to the
content of the inflexions, the part of speech of a word is identified (Fig. 16).</p>
      <p>For the next SYA, it is not enough to mark the word only as a part of speech; it is still
necessary to determine, for example, gender/case, etc., for a noun/adjective. The
classic Porter stemmer algorithm works by sequentially cutting off endings and suffixes. For
English-language texts, this is not a problem, as there are very few inflexions. For Ukrainian
words, a modified (extended) Porter stemmer algorithm should be applied, with a check
of both additional inflexions depending on the part of speech (according to the tree of
endings) and of the obtained word bases against a dictionary of bases to identify an
existing word (Fig. 17).</p>
      <p>Algorithm 4.1. Modified Porter stemmer algorithm
Stage 1. Identify the next token t_i as the word w_i (w_i = t_i).</p>
      <p>Stage 2. Check with the dictionary of stop words whether w_i is a service word. If yes, then
i = i + 1 and go to stage 1, otherwise go to stage 3.</p>
      <p>Stage 3. Go to the end of the word w_i. Recognize the inflexion f1 in w_i from all possible ones
(Fig. 4.16; the longest one is chosen, for example, in w_i = текстова we choose the ending
f1 = ова, not f1 = а) from the RE word type as a noun, adjective or verb and, if it is present,
remove the inflexion f1 (Fig. 18).</p>
      <p>Stage 4. Preserve the inflexion f1 in the tag of the word w_i.</p>
      <p>Stage 5. Mark w_i as the type noun, adjective or verb respectively.</p>
      <p>Stage 6. Find the deleted inflexion f1 in the tree of inflexions T (the longest one is chosen).</p>
Check the contents of the subtree T_f1 against an existing word ending f2 (f = f2 + f1). If w_i
ends in f2 and has a counterpart in T_f1, then we store f in the tag and delete f2 in w_i.
Stage 7. We check the obtained base s_i of the initial word w_i against the content of the dictionary
of bases SB of Ukrainian words. If there is no match, we save &lt;w_i, s_i&gt; in the additional
temporary intermediate dictionary for the moderator and proceed to stage 1, otherwise we
proceed to stage 8.</p>
      <p>Stage 8. Analysis of the inflexion and of the presence/absence of letter alternation in the
base/inflexion of the word &lt;w_i, s_i&gt; and the analogue of the word base in SB according
to the relevant MA RE-rule, to identify additional features of the analyzed word w_i.
Stage 9. Addition of the identified linguistic features of the recognized part of speech to the tag
of the word w_i of type noun, adjective or verb respectively. Saving the results in the
corresponding dictionary of the analyzed text.</p>
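The core of the modified stemmer (longest-match inflexion stripping checked against a dictionary of bases, with stop words skipped and unknown words deferred to a moderator cache) can be sketched as follows; the inflexion list, base dictionary and stop-word set here are tiny illustrative stand-ins:

```python
# Toy data standing in for the tree of endings and the dictionary of bases
INFLEXIONS = ["ова", "ові", "ою", "а", "и", "і", "у"]
BASES = {"текст", "систем"}
STOPWORDS = {"і", "та", "в"}

def stem(word):
    """Return (base, inflexion) for a recognized word, None for stop words,
    and (word, None) for unknown words (moderator cache)."""
    if word in STOPWORDS:
        return None
    for inf in sorted(INFLEXIONS, key=len, reverse=True):  # longest first
        if word.endswith(inf) and word[: -len(inf)] in BASES:
            return word[: -len(inf)], inf
    if word in BASES:
        return word, ""
    return word, None

assert stem("текстова") == ("текст", "ова")  # longest ending wins, not just 'а'
assert stem("і") is None
```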
      <p>(Fig. 17, flowchart of the modified stemmer: word identification → stop-word check (stop
words are saved in the stop-word dictionary) → inflexion recognition (noun, adjective or verb
inflexion) → saving of the inflexion and features in the tag → search for the inflexion in the
ending tree → combination of suffix+inflexion as a new inflexion → cutting the inflexion off
from the word → search for the word base in the dictionary → saving in the stem/inflexion tag;
words without a recognized inflexion, prefix or base are marked as unknown and saved to the
cache dictionary for moderation; the loop i = i + 1 repeats until the end of the text, after which
the tag with features, stem and inflexion is retained.)</p>
      <p>The growth in the volume of MA RE-rules increases the load on
CLS geometrically, solely due to the recognition of inflexions and the bases of word forms. For
English-language texts, the complexity is lower by several parameters: for example, nouns
have 2 cases and 2 inflexions in the plural (s|es). For the German language, the complexity
increases: 4 cases (but inflexions almost do not change, only articles change), phrases of ≥ 2
words are written together, etc. In the Ukrainian language, there are 7 cases of nouns, each of
which changes its inflexion depending on the gender and plural/singular, and some words
have different endings in some cases (for example, for втручання [vtruchannya]
(intervention) in the locative case, there are two options, втручанню and втручанні); in
addition, there is often alternation of letters.</p>
      <p>Therefore, for Ukrainian words, Porter's simple classic stemming algorithm (reducing the
word to the base root by cutting off inflexions) is not suitable. It is better to combine
such an algorithm with a search/check of the obtained intermediate results against a tree of
inflexions (so as not to go through all possible inflexions) and against the content of thematic
dictionaries of bases with a set of RE-rules for the identification of features (classification
by parts of speech). For text rubrication based on word identification alone, it is enough to
conduct MA only for some noun groups (adjectives with nouns and nouns with nouns)
without analyzing words of other parts of speech (if recognition by the tree of inflexions
yields neither an adjective nor a noun, the word is ignored); in addition, a keyword can
sometimes include one preposition, and only between nouns. It is enough to identify the bases
of nouns/adjectives/abbreviations in the text and analyze the probability of their clustering in
different parts of the content relative to the total volume.</p>
      <p>The classic stemming algorithm, Porter's stemmer, does not use dictionaries of word
bases but only applies a set of RE-rules for cutting off inflexions in sequence according to
the specifics of a specific language. The algorithm works with individual words without
analyzing and taking into account the context. Linguistic features such as features of word
formation (prefix, suffix, etc.) and parts of speech (noun, verb, etc.) are not taken into
account. The basis is the following techniques for words:
 cutting off the inflexion from the analyzed word (for Ukrainian words, it can be
implemented with a check of the obtained bases and inflexions against analogues in the DB);
 the word has an invariable inflexion (the condition is impossible for most Ukrainian
words, but it is possible to identify particles, conjunctions, prepositions, some nouns
of foreign origin, abbreviations, etc.);
 the word changes its inflexion in declension due to dropping/alternating letters;
 the change of word inflexion and word formation corresponds to a specific RE-rule,
for example, when forming words from some verb groups:
(ов)*ува(ти|нню|нням|нні|ння|ли|ло|ла|вшись|вши|в|вся|всь|лися|лись|тися|тись)
[(ov)*uva(ty|nnyu|nnyam|nni|nnya|ly|lo|la|vshysʹ|vshy|v|vsya|vsʹ|lysya|lysʹ|tysya|tysʹ)];
 the inflexion of the word changes as an exception to the RE-rules;
 the ending of the word coincides with the enveloping RE-rule for identification of an
inflexion, but the word itself has no inflexion: вітер [viter] (wind), but відер [vider]
(buckets);
 most short words are invariable (a stop-word dictionary is sufficient).</p>
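The verb-group word-formation rule quoted above can be compiled and tested directly as a Python RE (anchored to the word end for illustration; the sample words are illustrative):

```python
import re

# The verb-group RE-rule from the list above, anchored at the end of a word
verb_re = re.compile(
    r"(ов)*ува(ти|нню|нням|нні|ння|ли|ло|ла|вшись|вши|в|вся|всь|"
    r"лися|лись|тися|тись)$"
)

assert verb_re.search("будувати") is not None   # буд + ува + ти
assert verb_re.search("керували") is not None   # кер + ува + ли
assert verb_re.search("вітер") is None
```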
      <p>Such techniques significantly complicate the stemming algorithm for Ukrainian words.
Therefore, widespread inflexions are analyzed first, for example, single-letter ones: ц (34), щ
(110), ф (214), б (281), п (341), ж (353), з (581), г (636), л (754), с (914), ч (959), д (1038),
н (2531), р (2709), or those of 1-4 letters (Table 2.2). Inflexions of length ≥ 5 (for example,
max(йтесь)=6837, max(ванням)=4656) are significantly rarer among keywords; therefore, for
the speed/efficiency of the solution in some CLS NLP tasks they are ignored, but SYA/SEM
will not allow this. Many NLP tasks do not require full implementation of all NLP processes
from grapheme to pragmatic analyses. For example, to identify keywords, it is enough to
conduct grapheme and morphological analysis (Algorithm 4.2). But before almost any NLP
process, the text must be normalized.</p>
      <p>Algorithm 4.2. Abbreviated naive processing of textual content</p>
      <p>Stage 1. Rough tokenization (or grapheme analysis) of special characters of the input text.
Step 1.1. Reading the text and removing repeated consecutive spaces and tags if they are present
(if the text is integrated from a Web resource), sequentially marking the service characters of
the beginning/end of the paragraph/heading/text, etc.</p>
      <p>Step 1.2. Grapheme parsing and segmentation between service characters or tags of the input
text, sequentially marking each sequence of non-alphabetic characters as tokens and
recognizing alphabetic sequences between spaces and other special characters (e.g. numbers
and punctuation) according to RE-rules as token words, to form a list of identified alphabetic
tokens as words.</p>
      <p>Step 1.3. Sorting the list of identified tokens alphabetically, counting occurrences of identical
chains and forming an alphabetic-frequency dictionary whose records have the form
number of occurrences – word.</p>
      <p>Step 1.4. Transferring all uppercase letters to lowercase and recalculating the
occurrences of word-tokens in the alphabetic-frequency dictionary.</p>
      <p>Step 1.5. Sorting and saving the dictionary of identified words by decreasing frequency of
appearance (in Germanic languages, the top will be articles, pronouns, adjectives and
conjunctions, while in Slavic languages, most words with the same base and different inflexions
will occupy different lines of the list, which significantly distorts the picture of the real
distribution of words in texts).</p>
      <p>Stage 2. Segmentation/tokenization of words of the analyzed text content.</p>
      <p>Step 2.1. Word segmentation based on dictionaries, metrics such as the probability of an error in a
word, and statistical sequence models pre-trained from segmented text corpora (between
spaces, punctuation, etc.).</p>
      <p>Step 2.2. Tokenization, based on RE-rules, of marked tokens: sequences of non-alphabetic
characters as tokens (dates, prices, URLs, hashtags, e-mail addresses, etc.), punctuation (as the
end of a sentence or the boundary of a subordinate clause), mixed tokens of
alphabetic and non-alphabetic characters (abbreviations, complex hyphenated words, words
with an apostrophe, etc.), and lines with uppercase characters (such as the beginning of a
sentence, geographical names, proper names, abbreviations), with their normalization if
necessary (for example, к.т.н. → ктн (PhD) as a separate word-token, or ML as машинне
навчання [mashynne navchannya] (machine learning)).</p>
      <p>Step 2.3. Analysis of tokens with uppercase characters (except when only the first letters are
capitalized) for labelling, based on the RE-rules of finite automata, as an abbreviation or
emotion transfer.</p>
      <p>Step 2.4. Marking of unidentified tokens and ambiguities (e.g. an apostrophe as part of a word,
etc.).</p>
      <p>Stage 3. Lemmatization of the set of recognized and labelled alphabetic tokens of the text as
lemmas, identified as words of the analyzed text.</p>
      <p>Step 3.1. Normalization of tokens based on the identification of affixes from the tree of endings
as stems of the marked token-words (reducing the word to its initial form based on the MA
RE-rules for the identification of roots and affixes through Algorithm 4.1, the modified Porter
stemmer), i.e. determining whether the analyzed tokens have the same root and differ only in
inflexion, with sequential identification of the part of speech of the analyzed words and
subsequent marking of them as lemmas with all accompanying linguistic features.
Step 3.2. Regrouping and recalculation of word frequencies in the alphabetic-frequency
dictionary, taking into account the words normalized in step 3.1.</p>
      <p>Stage 4. Additional analysis of unidentified tokens by iteratively combining frequent
character/string pairs within token words (for example, whether tokens between spaces or
other punctuation marks, such as контент-аналіз [kontent-analiz] (content analysis), Web-сайт
[Web-sayt] (Web site), контент-моніторинг [kontent-monitorynh] (content monitoring)
or Web-ресурс [Web-resurs] (Web resource), are one word or two) through byte-pair
encoding (BPE) based on text compression, for further possible identification of words, their
labelling and normalization.</p>
      <p>Step 4.1. Formation of a set of symbols equal to the character vocabulary of the collection K. We present
each word as a sequence of characters plus a special character at the end of the word or a
special character, such as a dash, within a token (for example, контент-, Web-, контент- or
Web-). We set the iteration counter k = 0.</p>
      <p>Step 4.2. Calculation of the number of occurrences of each pair of characters/strings (x, y) in
the word stems of the input text, where x and y stand next to
each other or are separated by a special character: a dash (compound words), a period (a date), a
comma (a real number) and/or a space, or their combination, but not punctuation marks,
numbers and other special characters.
Step 4.3. Formation of the alphabetic-frequency dictionary D′ based on the pairs (x, y). Determination of
the number of occurrences of unique lexemes in D′: h = |D′|.</p>
      <p>Step 4.4. Finding the most frequent pair p = (x, y) in D′, where (x, y) ∈ D′,
and from it the value of the new merged symbol xy.</p>
      <p>Step 4.7. Calculation of the number of occurrences in the input text of x and of y
when they are used separately (not next to each other).</p>
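      <p>Stage 4 as a whole can be sketched as classic byte-pair encoding. The function names and the end-of-word marker "_" are our own, and the merge loop stands in for steps 4.2-4.4:</p>

```python
from collections import Counter

def bpe_merges(words, num_merges):
    """BPE sketch: iteratively merge the most frequent adjacent symbol pair.
    `words` maps a word to its frequency; '_' marks the end of a word,
    as the special end symbol does in Step 4.1."""
    vocab = {tuple(w) + ("_",): n for w, n in words.items()}
    merges = []
    for _ in range(num_merges):
        pairs = Counter()                          # Step 4.2: count adjacent pairs
        for sym, n in vocab.items():
            for a, b in zip(sym, sym[1:]):
                pairs[(a, b)] += n
        if not pairs:
            break
        best = max(pairs, key=pairs.get)           # Step 4.4: most frequent pair
        merges.append(best)
        new_vocab = {}
        for sym, n in vocab.items():               # replace the pair everywhere
            out, i = [], 0
            while i < len(sym):
                if i + 1 < len(sym) and (sym[i], sym[i + 1]) == best:
                    out.append(sym[i] + sym[i + 1]); i += 2
                else:
                    out.append(sym[i]); i += 1
            new_vocab[tuple(out)] = n
        vocab = new_vocab
    return merges, vocab
```

      <p>On the toy collection {контент: 3, конкурс: 1}, the first learned merge is (н, т), because нт occurs twice in each of the three occurrences of контент.</p>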
      <p>4.3. Method of lexical analysis of the Ukrainian language
The process of lexical analysis of the Ukrainian-language text  ′ consists in parsing,
segmentation and tokenization of each sentence separately, which is characterized not by a
strict order of words, but at the same time by a constant arrangement of individual linguistic
units. In a complete simple Ukrainian sentence with direct word order, the structural
scheme is conditionally fixed. The main lexical categories of the corresponding sentence are
noun and verb groups. Type 0 grammar according to N. Chomsky's classification is not
appropriate for such sentences due to the complexity of implementation. With
context-dependent grammar, specific restrictions are applied, in particular, to the structure of a
Ukrainian-language sentence with some set of variations. Based on the syntactic rules of
generating Ukrainian-language sentences with partial word order (for example, there is no
strict order for the subject and predicate in the sentence, but the adjective is usually before
the noun or another adjective, if it is not a poetic passage, also the lexical units of the noun
group are placed around the subject, etc.), we derive the lexical scheme for the noun
group Ñ based on regular expressions:</p>
      <p>Ñ = ([A]{0,k}[N]{1,m}|[P]), (7)
where A = a1a2a3…an−1an is a sequence of adjectives, and the entry [A]{0,k} is a
selection of 0 to k adjectives from a1a2a3…an−1an, with k ≤ n; N = n1n2n3…nm−1nm is a
sequence of nouns, and the entry [N]{1,m} is a selection of 1 to m nouns from
n1n2n3…nm−1nm; P = p1p2p3…pl−1pl is a sequence of pronouns, and the entry
[P] is the choice of one pronoun from p1p2p3…pl−1pl; the record (N|P) is a choice of either N or
P; the values of A and N agree in gender, number and case. Accordingly, for the verb group,
the lexical scheme based on RE-expressions:</p>
      <p>Ṽ = ([V]{1,k}[Ñ′]{0,m}|[Ñ′]{0,m}[V]{1,k}), (8)
where V = v1v2v3…vn−1vn is a sequence of verbs, and the entry [V]{1,k} is a choice of
1 to k verbs from v1v2v3…vn−1vn, with k ≤ n; Ñ′ = Ñ1Ñ2Ñ3…Ñm−1Ñm is a sequence of noun
groups, and the entry [Ñ′]{0,m} is a choice of 0 to m noun groups from Ñ1Ñ2Ñ3…Ñm−1Ñm,
with m ≤ n; the record ( | ) is the choice of one of the two orders; agreement between V and Ñ is carried out
by person, gender and number. The lexical scheme of a Ukrainian sentence based on
RE-expressions:</p>
      <p>S̃ = ([Ñ′]{0,1}[Ṽ′]{0,1}|[Ṽ′]{0,1}[Ñ′]{0,1}), (9)
where Ṽ′ = Ṽ1Ṽ2Ṽ3…Ṽn−1Ṽn is a sequence of verb groups, and the entry [Ṽ′]{0,1} is a
selection of 0 to 1 verb groups from Ṽ1Ṽ2Ṽ3…Ṽn−1Ṽn with the presence of a predicate;
Ñ′ = Ñ1Ñ2Ñ3…Ñm−1Ñm is a sequence of noun groups, and the entry [Ñ′]{0,1} is a selection
of 0 to 1 noun groups from Ñ1Ñ2Ñ3…Ñm−1Ñm with the presence of a subject; the record ( | )
is the choice of one of the two orders; agreement between Ñ and Ṽ is carried out by person, gender and
number.</p>
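      <p>Schemes of the kind given in (7)-(9) can be applied over a string of part-of-speech tags. A sketch, assuming a pre-tagged input where A marks an adjective, N a noun, P a pronoun and V a verb; the tag encoding and the bounds {0,4} and {1,3} are our own illustrative choices:</p>

```python
import re

# Chunking sketch over a POS-tag string: one letter per word.
NOUN_GROUP = re.compile(r"(A{0,4}N{1,3}|P)")   # in the spirit of scheme (7)
VERB_GROUP = re.compile(r"(V{1,2})")           # simplified verb-group scheme

def chunk(tags):
    """Return spans (start, end, label) of noun/verb groups in a tag string."""
    spans = []
    i = 0
    while i < len(tags):
        m = VERB_GROUP.match(tags, i)
        if m:
            spans.append((m.start(), m.end(), "VG")); i = m.end(); continue
        m = NOUN_GROUP.match(tags, i)
        if m:
            spans.append((m.start(), m.end(), "NG")); i = m.end(); continue
        i += 1                                  # skip tags outside any group
    return spans
```

      <p>For example, the tag string "AANVN" (adjective adjective noun verb noun) splits into a noun group, a verb group and a second noun group.</p>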
      <p>The main lexical features of the verb group are tense, number and person. For comparison,
the lexical scheme of the noun group based on the RE-expression for an English-language
sentence:
Ñ = (…). (10)</p>
      <p>The lexical scheme of the English verb group based on the RE-expression:
Ṽ = [V][Ñ′]{0,k}. (11)
Lexical scheme for an English-language sentence based on the RE-expression:
S̃ = [Ñ′][Ṽ′]. (12)</p>
      <p>The agreement of cases between the lexical units of a Ukrainian-language sentence
affects the further syntactic and semantic analysis of the content:
1. S → Xi X′j, (13)
2. X′j → Yk X′j, i, j, k = 1, 2, 3,
where Xi, X′j, Yk are the main lexical units; the remaining symbols are auxiliary lexical units; S is the initial
symbol as an indicator of the type of sentence-chain generation.</p>
      <p>Stages of lexical formation of a chain of tokens x2 x1 x1 x3 x2′ x1′ x1′ x3′.
An example of lexical generation of the type {X – X′}: Саша, Софія, Катя, Данило, … –
спортсмен, співачка, художниця, поет, … (athlete, singer, artist, poet) respectively, where
(x1 x2 x3 …) is a sequence of proper names, X′ (x1′ x2′ x3′ …) is a sequence of professions
agreed with the proper names, and the auxiliary unit is a dash. Any verb can act as a
complement: моя дитина вподобала книгочитання [moya dytyna vpodobala
knyhochytannya] (my child liked book-reading).
This process can theoretically be repeated an unlimited number of times: він
книгочитанняцікаво думає про книгочитанняцікавість
[vin knyhochytannyatsikavo dumaye pro knyhochytannyatsikavistʹ]
(he book-reading-curiously thinks about book-reading-curiosity), i.e.</p>
      <p>A language consisting of strings of the form x1 x2 x3 … x1′ x2′ x3′ (composed of the symbols x1,
x2, x3, x1′, x2′, x3′) is generated by a grammar of 6 rules, for example:
Він книго читання цікавість − думає про − книго читання цікавість.</p>
      <p>Such grammars do not provide, for example, a natural description for so-called
non-projective constructions:</p>
      <sec id="sec-4-1">
        <title>Ukrainian. Наша мова, як і будь-яка інша, посідає унікальне місце. (Our language, like any other, occupies a unique place.)</title>
        <p>English. A theorem is stated which describes the properties of this function.</p>
        <p>German. ... die Tatsache, daß die Menschen die Fähigkeit besitzen, Verhältnisse der objektiven Realität in Aussagen wiederzuspiegeln. (... the fact that people possess the ability to reflect relations of objective reality in statements.)</p>
        <p>French. ... la guerre, dont la France portait encore les blessures... (... the war, whose wounds France still bore...)</p>
        <p>Hungarian. Azt hisszem, hogy késedelmemmel sikerült bebizonyítani. (I believe that with my delay I managed to prove it.)</p>
        <p>Serbo-Croatian. Regulacija procesa jedan je od najstarijih oblika regulacije. (Process regulation is one of the oldest forms of regulation.)</p>
        <p>To describe such constructions of sentences, the following are used:
1. Right subordination: назва курсу (course title), лист бумаги (sheet of paper), une règle stricte, cette règle, give him, good advice.
2. Left subordination: основний курс (basic course), белый лист (white sheet).
3. Sequential subordination (Fig. 20): досить повільно рухлива черепаха (a rather slowly moving turtle)
or очень быстро бегущий олень (a very fast running deer);
витяг з протоколу звітування з наукової діяльності заступника завідувача кафедри
ІСМ інституту ІКНІ Національного університету "Львівська політехніка"
міста Львова країни Україна
(an extract from the reporting protocol on the scientific activity of the deputy head of the ISM department of the IKNI institute of Lviv Polytechnic National University of the city of Lviv of the country Ukraine);
жена сына заместителя председателя второй секции эклектики совета по
прикладной мистике при президиуме Академии наук королевства Мурак
(the wife of the son of the deputy chairman of the second eclectics section of the council on applied mysticism at the presidium of the Academy of Sciences of the kingdom of Murak).</p>
        <p>Only with the correct identification and recognition of non-projective constructions can a
grammatical and syntactic analysis of Ukrainian sentences be carried out to build
dependency trees of the components of these sentences.
4.4. The method of syntactic analysis of the Ukrainian language
The syntax is a set of relational rules for the formation of sentences/phrases, usually defined
by the grammar. Sentences are linguistic units of language for generating meaning and
encoding information. The purpose of SYA is to demonstrate meaningful relationships
between words based on the division of a sentence into parts, or between tokens in a
treelike structure  ′. Syntax is a necessary basis for reasoning about a system of concepts or
semantics because it is an important tool for determining the degree to which words
influence each other in the generation of phrases. For example, SYA identifies the
prepositional phrase в потяг [v potyah] (on the train) and the noun phrase чемодан в
потяг [chemodan v potyah] (the suitcase on the train) as constituents of the verb phrase
заніс чемодан в потяг [zanis chemodan v potyah] (carried the suitcase on the train). For
any derivable terminal chain (Fig. 21-22), such a derivation in each sentence
occupies the n last positions from the right. It is necessary to fulfil a set of requirements that
lead to a sequential derivation of the type x1 x2 … xn … or a nested one x1 … xn … x1, for example:
Nж,од,н → школа (school), Nч,од,н → сміх (laughter), школяр (schoolboy), Львів (Lviv), …,
where the subscripts mark gender (ж feminine, ч masculine, с neuter), number (од singular) and case (н nominative, р genitive).</p>
        <p>There are cases in the textual content when not only the right but also the left sequential
subordination has an unlimited depth of derivation, for example, due to subordinate clauses
with the operative word which, what, when, etc. (тваринка, яку врятувала Софія
[tvarynka, yaku vryatuvala Sofiya] - the animal that Sofia saved). Fig. 23 illustrates a phrase
with a depth of 22 and is completely grammatically correct (as is its Ukrainian version).
Moreover, nothing prevents you from continuing the phrase to the left на волю в обійми
зеленої пахучої трави [na volyu v obiymy zelenoyi pakhuchoyi travy] (freely into the
embrace of green, fragrant grass). The Ukrainian language allows generating phrases
with an unlimited number of constructions sequentially subordinated from left to right, of
the type C1 C2 … Cn … (unlimited right subordination), while at the same time unlimited left
subordination is possible within each construction Ci as a sequence of chains
… an … a3 a2 a1; however, within such a sequence further unlimited expansion is
impossible. According to the rules of the Ukrainian language, the Ci are interpreted as simple
sentences, each of which is an additional determiner to the previous one, and the ai are
interpreted as prepositive adjective inflexions.</p>
        <p>The grammar G′ = ⟨V′, V1′, S′, P′⟩ has a basic dictionary V′ = {a1, a2, …, an} of symbols and
rules of the form P′ = {S → ai Ai, Ai → ai}, where ai ∈ V′ and Ai ∈ V1′. Each Ai corresponds to an
auxiliary dictionary, with V1′ = {A1, …, An}; S is the initial symbol; the scheme contains rules
of the form P = {A → aB, A → a} (capital Latin characters are non-terminal, and lowercase
characters are terminal). The non-terminal dictionaries of the grammars Gi′ are pairwise
disjoint. The union:</p>
        <p>V = V′ ∪ V1′ ∪ V2′ ∪ … ∪ Vn′, (15)
where the main dictionary V′ is common to all Gi′, and the auxiliary additional dictionary and scheme:
V1 = V′ ∪ V1′ ∪ V11 ∪ V12 ∪ … ∪ V1n, (16)
P = P′ ∪ P1 ∪ P2 ∪ … ∪ Pn. (17)
The grammar G is special and equivalent to an automaton grammar, for example:
S → a A4, A4 → a A2, A1 → a A1, A1 → a, A3 → a A3, A3 → a, …</p>
        <p>Algorithm 4.3. Algorithm of sentence syntactic analysis.</p>
        <p>Stage 1. An unconstrained sequence is generated to the right by S as a syntactic group or
sentence based on the rules of P′.</p>
        <p>Stage 2. Any of the Ai, based on Pi, is expanded indefinitely in the form of a tree (Fig. 24) from right to
left into a chain of terminal symbols as words.</p>
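        <p>The two stages above can be sketched with a toy right-linear grammar. The rule set below is purely illustrative; its words reproduce the sequential-subordination example досить повільно рухлива черепаха given earlier:</p>

```python
# Toy right-linear grammar in the shape of the rules above
# (P' = {Ai -> a Aj, Ai -> a}); symbols and words are illustrative.
RULES = {
    "S": [("досить", "S"), ("повільно", "A")],  # the adverb may repeat
    "A": [("рухлива", "B")],
    "B": [("черепаха", None)],                  # terminal-only rule ends the chain
}

def derive(choices, start="S"):
    """Apply the production indices in order, expanding left to right
    into a chain of terminal words (Stages 1-2 of Algorithm 4.3)."""
    symbol, words = start, []
    for i in choices:
        terminal, symbol = RULES[symbol][i]
        words.append(terminal)
        if symbol is None:
            break
    return " ".join(words)
```

        <p>Choosing production 0 for S repeatedly, e.g. derive([0, 0, 1, 0, 0]), yields досить досить повільно рухлива черепаха, illustrating the unbounded expansion of Stage 2.</p>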
        <p>To analyze the syntactic structure of a sentence is to identify the order of words
depending on the syntactic structure and relationships, determined by analysing each
word's neighbours and what is derived/secondary from what. It is advisable to
modify the grammar so that both parts of the predicate (Fig. 24) are trees of syntactic
relations. Lines with subscripts describe syntactic relations of various types; the symbols
A, B, C, … are syntactic categories.</p>
        <p>A  B
x
C
або</p>
        <p>A  B
C
x</p>
        <p>B  B
z</p>
        <p>анаzлог або x
y коAнтекстнCо-вільнBих</p>
        <p>праwвил
D</p>
        <p>D</p>
        <p>A
y</p>
        <p>D</p>
        <p>z
x y
C B C E</p>
        <p>аналог правил
граматики типу 0</p>
        <p>As a result, the syntactic structures (rather than phrases) of the language are obtained
as part of the generative grammar. The other part of this grammar is the calculus for the
Ukrainian language, with mandatory consideration of the logical derivation of linear
sequences of words, solving the problem of discontinuous constituents.
4.5. The method of semantic analysis of the Ukrainian language
Semantic analysis consists not only in identifying the content of the text but also in
generating data structures to which logical reasoning can be applied. Thematic Meaning
Representations (TMR) are used to encode sentences in the form of predicate structures
based on first-order logic or lambda calculus (λ-calculus). Network/graph structures are
used to encode interactions of predicates of relevant text features. Then a traversal is
implemented to analyze the centrality of terms or subjects and the reasons for the
relationships between elements.</p>
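        <p>A minimal sketch of such a network structure, assuming sentence-level co-occurrence as a crude stand-in for the predicate relations produced by SYA, with degree centrality substituting for a full traversal analysis (all names are our own):</p>

```python
from collections import defaultdict
from itertools import combinations

def build_cooccurrence_graph(sentences):
    """Semantic-graph sketch: nodes are word forms, edge weights count
    co-occurrence of two words within one sentence."""
    weight = defaultdict(int)
    for sent in sentences:
        for a, b in combinations(sorted(set(sent.split())), 2):
            weight[(a, b)] += 1
    return weight

def degree_centrality(weight):
    """Term centrality as the summed weight of incident edges."""
    deg = defaultdict(int)
    for (a, b), w in weight.items():
        deg[a] += w
        deg[b] += w
    return dict(deg)
```

        <p>On two sentences sharing the terms CLS and контент, those two nodes receive the highest centrality, i.e. they are the most central subjects of the mini-text.</p>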
        <p>Analysis of graphs, including the ontology O, is usually not a complete SEM, but helps to form
part of important logical decisions/conclusions based on the taxonomy of concepts C:</p>
        <p>O: ⟨C, R⟩ → T′. (20)</p>
        <p>The optimal definition of the tuple of relations between these concepts and the tuple of
the rules of the Ukrainian language, formalized by the description logic DL, will allow
effective processing of Ukrainian texts:</p>
        <p>C = ⟨CM, CP, CSt, CSy, CSe⟩, (21)
where CM, CP, CSt, CSy and CSe are the tuples of concepts of morphology, punctuation, structure, syntax
(Fig. 25) and semantics, respectively.</p>
      <p>In SEM, to identify the set of semes of the corresponding text and their relationships, first,
based on the results of SYA, a semantic graph of the relations of linguistic units is built,
taking into account the parts of speech of the words:</p>
      <p>G′ = (V, E, C, R), C = ⟨CW, CG⟩, (22)
where CW is a tuple of word-formation concepts and CG is a tuple of sentence
generation concepts in the Ukrainian language; the tuple CSy of syntax concepts follows
the rules of Ukrainian syntax (Fig. 26).</p>
        <p>(Fig. 26 classifies sentences by eight signs: uncommon/common, uncomplicated/complicated,
simple, noun/verb type, with uncomplicated parts of the sentence, with separated parts of the
sentence, with appeals, and with built-in or embedded components.)</p>
        <p>The process of extracting data from the Ukrainian-language text based on the syntax
ontology allows you to supplement the conceptual weighting graphs of the content.
4.6. The method of pragmatic analysis of the Ukrainian language
Pragmatics examines the dependence of meaning on the context of the author's textual content
and takes into account the author's prior knowledge, intentions, purpose, etc., in contrast
to semantics, which analyzes the meaning itself depending on the results of GA, MA, LA and
SYA within a particular text. Pragmatics is a continuation of SEM that takes into account the
peculiarities of the context of the analysed text and the ambiguity of its
statements, based on the analysis of the features of the author's
statements in previous similar texts and on the time, place, method, purpose and other
circumstances of the conversation.</p>
        <p>In PA, when resolving the ambiguity of the author's speech in a specific analyzed text,
taking into account the features of the author's speech in previous similar utterances, it is
best to use word prediction models, for example, N-gram Language Models (LM).
Each speaker, as a person with unique life experience, has not only their own dictionary of
thematic words but also a unique handwriting of the use of these words and their sequence
in a certain context of the relevant thematic direction. In the expression «лінгвістична
система опрацьовує …» [linhvistychna systema opratsʹovuye …] (the linguistic system
processes ...) the next word depends not only on the context but also on the so-called speech
handwriting of the author of the text: текст, контент, текстовий контент, вхідні дані,
вхідну інформацію, інтегровані дані, авторський контент, публікації [tekst, kontent,
tekstovyy kontent, vkhidni dani, vkhidnu informatsiyu, intehrovani dani, avtorsʹkyy
kontent, publikatsiyi] (text, content, text content, input data, input information, integrated
data, author content, publications), etc. The phrase «включіть свою виконану
лабораторну роботу ...» [vklyuchitʹ svoyu vykonanu laboratornu robotu ...] (include your
completed lab work...) as opposed to «додайте свою виконану лабораторну роботу ...»
[dodayte svoyu vykonanu laboratornu robotu ...] (add your completed lab work...) has a
broader meaning and depends significantly not only on the context but also on the speaker
(include can mean like download the developed software on the computer or in the sense
of adding it as an item to some list, etc.). Dialogue participants intuitively understand the
content based on their experience of communicating with the author of the phrase.
Pragmatic analysis requires the introduction of models that determine the probability for
each subsequent word. They are also intended for assigning the probability of the target
utterance for correct machine translation, identification/correction of grammatical and
stylistic errors, and handwriting or language recognition. Each language has special
statistical parameters, and the analysis of the probability of the appearance of only letters
and their combinations as N-grams of the corresponding language makes it possible to
identify the language itself or the style of the author (Fig. 31 - with greater probability, the
author of the benchmark wrote Excerpt 1).
(Fig. 31 ranks the letters of the Ukrainian alphabet by frequency: о а н и в т е р с м к л д у п я з б ч г ю х ц ж й ш щ ф …, and compares, for the letters а, н, и, в, т, е, р and с, their relative frequencies in the benchmark text with those in Excerpts 1 and 2; the profile of Excerpt 1 is closer to the benchmark.)</p>
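        <p>The letter-frequency comparison of Fig. 31 can be sketched as follows; the L1 distance between profiles is our own choice of closeness measure, not the paper's:</p>

```python
from collections import Counter

def letter_profile(text, alphabet):
    """Relative frequency of each letter of the alphabet in the text."""
    counts = Counter(c for c in text.lower() if c in alphabet)
    total = sum(counts.values()) or 1
    return {c: counts[c] / total for c in alphabet}

def distance(p, q):
    """L1 distance between two letter profiles; smaller means closer style."""
    return sum(abs(p[c] - q[c]) for c in p)
```

        <p>Given a benchmark profile and profiles for two excerpts, the excerpt with the smaller distance is attributed to the benchmark's author or language.</p>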
        <p>For Ukrainian texts, the statistical parameters of styles are the probabilities of vowels,
consonants and gaps between words, as well as of soft and sonorous groups of consonants.
Probability is also important for enhancing communication: the physicist Stephen Hawking
used simple movements to select words from a menu for speech synthesis. For such systems, it is
appropriate to use word prediction to generate a list of likely words for the
menu. One of the most widespread LMs, and the easiest to implement for English-language texts, is
the N-gram, which assigns probabilities to sentences or sequences of words. For
Ukrainian-language texts, it is better to apply such LMs to sequences of word bases
without taking into account inflexions (otherwise incorrect PA results will be obtained), to
calculate P(w|h), the probability of the appearance of the base of a word w after the
sequence of bases h. Taking words into account in N-grams of LMs in Ukrainian-language
texts is appropriate for identifying grammatical errors:</p>
        <p>(систем|комп′ютер лінгвіст),  (системи|комп′ютерні лінгвістичні),
 (систему|комп′ютерну лінгвістичну).</p>
        <p>One of the best ways to calculate such a probability is to conduct a statistical analysis of
large corpora of texts of the relevant author, or of the relevant thematic direction, from reliable
Internet sources:
P(систем | комп′ютер лінгвіст) = C(комп′ютер лінгвіст систем) / C(комп′ютер лінгвіст).</p>
        <p>This gives a probabilistic result only for a certain period, because language is creative, not
homogeneous, and the vocabulary is updated and develops constantly, both in general and
for a specific speaker, the author of the text. To analyze the corresponding random
linguistic event w = комп′ют, P(w) is found to calculate the probability of the appearance
of a certain sequence of linguistic events based on the chain rule (the general product rule
of probability):
P(w1 w2 … wn) = P(w1) P(w2|w1) P(w3|w1:2) … P(wn|w1:n−1), (38)
P(w1 w2 … wn) = ∏k=1..n P(wk|w1:k−1). (39)</p>
        <p>The chain rule reflects the relationship between the overall probability of the
appearance of a specific sequence of bases and the conditional probability of the appearance
of a word base by specific previous word bases in this sequence. Taking into account the
entire dynamics of the occurrence of all word bases in the text to sequences of other word
bases is a redundant/inefficient process due to the variability of language/speech over time.
Prediction of the 2-gram model consists of approximating the dynamics of the appearance
of only the last few bases of words in a given sequence
w1 = b1, w2 = b2, w3 = b3, …, wn = bn.</p>
        <p>We find the MLE estimate for the parameters of the N-gram model by statistically
analyzing the corresponding text corpus and normalizing the frequency of occurrences of
word bases and their sequences within [0;1], for example:
P(систем | лінгвіст) = C(лінгвіст систем) / C(лінгвіст).</p>
        <p>To forecast the conditional probability of the next base of the word, we use the
Markov assumption (the probability of a word depends only on the previous one):
P(wn|w1:n−1) ≈ P(wn|wn−1).</p>
        <p>To predict the conditional probability of the next base of the word in the N-gram based
on the metric of Maximum Likelihood Estimation (MLE), we calculate
P(wn|w1:n−1) ≈ P(wn|wn−N+1:n−1).
Based on this, we calculate the probability of a complete sequence of word stems:
P(wn|wn−1) = C(wn−1 wn) / ∑w C(wn−1 w) = C(wn−1 wn) / C(wn−1).</p>
        <p>For example, for three sentences of the mini-corpus (conditionally, the &lt;p&gt; &lt;/p&gt; tags are
the boundaries of one sentence), we will calculate the Markov assumption of the 2-gram
occurrence of word bases:</p>
        <p>&lt;p&gt; CLS опрацьовує текстовий контент на основі NLP-процесів &lt;/p&gt;
&lt;p&gt; Інтеграція текстового контенту є одним із основних процесів CLS &lt;/p&gt;
&lt;p&gt; CLS розв’язує конкретну NLP-задачу для відповідного контенту &lt;/p&gt;</p>
        <p>P(CLS | &lt;p&gt;) = 2/3; P(інтегр | &lt;p&gt;) = 1/3; P(опрац | CLS) = 1/3;
P(&lt;/p&gt; | контент) = 1/3; P(контент | текст) = 2/2; P(задач | NLP) = 1/2.</p>
        <p>Estimation of the MLE parameter for the N-gram model as a relative frequency:
P(wn | wn−N+1:n−1) = C(wn−N+1:n−1 wn) / C(wn−N+1:n−1).</p>
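        <p>These estimates can be reproduced on the three-sentence mini-corpus above. A sketch with stemming omitted for brevity (surface forms rather than bases are counted); the &lt;p&gt;/&lt;/p&gt; boundary markers follow the example:</p>

```python
from collections import Counter

def bigram_mle(sentences):
    """MLE bigram estimates P(w|v) = C(v w) / C(v) over a mini-corpus,
    with <p> and </p> as sentence boundary markers."""
    uni, bi = Counter(), Counter()
    for s in sentences:
        toks = ["<p>"] + s.split() + ["</p>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return lambda w, v: bi[(v, w)] / uni[v]

corpus = [
    "CLS опрацьовує текстовий контент на основі NLP-процесів",
    "Інтеграція текстового контенту є одним із основних процесів CLS",
    "CLS розв’язує конкретну NLP-задачу для відповідного контенту",
]
p = bigram_mle(corpus)
# p("CLS", "<p>") == 2/3, since two of the three sentences start with CLS
```

        <p>Likewise p("опрацьовує", "CLS") = 1/3: CLS occurs three times in the corpus and is followed by опрацьовує only once.</p>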
        <p>Algorithm 4.4. Algorithm for the analysis of MLE-parameter estimates for the N-gram
model.</p>
        <p>Stage 1. Parse the input text and break it into separate phrases (sentences) 1 2 …   , marking each
start-end with a corresponding &lt;p&gt; &lt;/p&gt; tag. Eliminate all non-alphabetic characters. Convert
uppercase letters to lowercase. Remove service words if necessary (for certain NLP tasks).
Stage 2. Apply Porter's stemming to obtain the sequence of word bases   1  2 …     of word bases
  taking into account word normalization.</p>
        <p>Stage 3. Receive the input query q1 q2 … qn as a sequence of words of the searched data. Find the
basis bi of each word by stemming. For example, for the search phrase:
метод та засіб опрац інформ ресурс систем електрон контент комерц.</p>
        <p>Stage 4. Conduct a statistical analysis of the occurrence of word bases and sequences of query word
bases in the analyzed text.</p>
        <p>(Diagram: the query bases b1 метод, b2 та, b3 засіб, b4 опрац, b5 інформ, b6 ресурс,
b7 систем, b8 електрон, b9 контент and b10 комерц are matched against the bases of the
words of the analyzed text.)</p>
        <p>With each subsequent multiplication, the probability decreases. Applying the logarithm
of probabilities (log probabilities) allows operating with not-so-small values while
preserving calculation accuracy:
log ∏k=1..n pk = ∑k=1..n log pk. (48)</p>
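        <p>A quick numeric illustration of why equation (48) matters in practice:</p>

```python
import math

# Multiplying many small probabilities underflows; summing their logs does not.
probs = [0.01] * 200
product = 1.0
for pk in probs:
    product *= pk              # collapses to 0.0 well before 200 factors
log_sum = sum(math.log(pk) for pk in probs)  # about -921.0, fully representable
```

        <p>The product underflows to exactly 0.0 (the true value 1e-400 is below the smallest representable double), while the log-sum remains an ordinary floating-point number.</p>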
        <p>The resulting matrices will in most cases be sparse. For the phrase and its different variations
(plural/singular and cases) система електронної контент-комерції [systema
elektronnoyi kontent-komertsiyi] (electronic content commerce system):
P(систем електрон контент комерц) =
= P(електрон | систем) P(контент | електрон) P(комерц | контент) = …</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>The general architecture of computer linguistic systems is developed based on the main
processes of processing information resources such as integration, maintenance and
content management, as well as using methods of intellectual and linguistic analysis of text
flow using machine learning technology. The IT of intellectual analysis of the text flow based
on the processing of information resources has been improved, which made it possible to
adapt the generally typical structure of content integration, management and support
modules to solve various NLP problems and increase the efficiency of CLS functioning by
6-9%. This became possible thanks to the combination of linguistic analysis methods adapted
to the Ukrainian language, improved IT processing of information resources, ML and a set
of metrics for evaluating the effectiveness of CLS functioning. The main principle of building
such CLS is modularity, which facilitates their construction according to the requirements
for the availability of appropriate processes for solving a specific NLP problem. The main
NLP methods based on regular expression matching with patterns in grapheme and
morphological analyses of Ukrainian-language texts are described. NLP methods based on
pattern-matching regular expressions have been improved, which made it possible to adapt
methods of text tokenization and normalization by cascades of simple substitutions of
regular expressions and finite state machines. The main valid operations of regular
expressions are defined as union and disjunction of symbols/strings/expressions, number
and precedence operators, as well as anchors as special symbols for identifying the
presence/absence of symbols in RE. The main stages of tokenization and normalization of
the Ukrainian text by cascades of simple substitutions of regular expressions and finite state
machines are defined. The MA method of the Ukrainian-language text based on word
segmentation and normalization, sentence segmentation and modified Porter's stemming
algorithm was improved as an effective means of identifying lemma affixes for the possibility
of marking the analysed word, which made it possible to increase the accuracy of keyword
searches by 9%. Algorithms for word segmentation and normalization, sentence
segmentation, and Porter's modified stemming are implemented and described as an
effective way of identifying lemma affixes for the possibility of marking the analysed word.
Unlike the classic Porter algorithm (which does not have high accuracy even for
English-language texts), the modified one is adapted specifically for the Ukrainian language and
gives an accurate result in 85-93% of cases, depending on the quality, style and genre of the text
and, accordingly, the content of CLS dictionaries. The algorithm for the minimum editorial
distance of lines of Ukrainian texts is described as the minimum number of operations
necessary to transform one string into another.</p>
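      <p>The minimum editorial (Levenshtein) distance mentioned above can be sketched with the standard dynamic-programming recurrence; this is a generic implementation, not the paper's specific algorithm:</p>

```python
def edit_distance(a, b):
    """Minimum number of insertions, deletions and substitutions
    needed to turn string a into string b (Wagner-Fischer DP)."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]                           # distance from a[:i] to ""
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution / match
        prev = cur
    return prev[-1]
```

      <p>For example, кіт and кит differ by one substitution, while контент and контекст are two operations apart.</p>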
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Bengfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bilbro</surname>
          </string-name>
          , T. Ojeda,
          <article-title>Applied text analysis with Python: Enabling language-aware data products with machine learning</article-title>
          .
          <source>O'Reilly Media</source>
          , Inc. (
          <year>2018</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Deep Learning Architectures for Sequence Processing</article-title>
          . URL: https://web.stanford.edu/~jurafsky/slp3/9.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Naive Bayes and Sentiment Classification</article-title>
          . URL: https://web.stanford.edu/~jurafsky/slp3/4.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          , Logistic Regression. URL: https://web.stanford.edu/~jurafsky/slp3/5.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D.</given-names>
            <surname>Jurafsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. H.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <source>Neural Networks and Neural Language Models</source>
          . https://web.stanford.edu/~jurafsky/slp3/7.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <source>Modern State and Prospects of Information Technologies Development for Natural Language Content Processing, CEUR Workshop Proceedings</source>
          <volume>3368</volume>
          (
          <year>2024</year>
          )
          <fpage>198</fpage>
          -
          <lpage>234</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>A.</given-names>
            <surname>Berko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Matseliukh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ivaniv</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Schuchmann</surname>
          </string-name>
          ,
          <article-title>The text classification based on Big Data analysis for keyword definition using stemming</article-title>
          ,
          <source>in: Proceedings of the IEEE 16th International Conference on Computer Science and Information Technologies, CSIT-2021</source>
          , Lviv, Ukraine, 22-25 September
          <year>2021</year>
          , pp.
          <fpage>184</fpage>
          -
          <lpage>188</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Shvorob</surname>
          </string-name>
          ,
          <article-title>The method for detecting plagiarism in a collection of documents</article-title>
          ,
          <source>in: Proceedings of the International Conference on Computer Sciences and Information Technologies</source>
          , CSIT,
          <year>2015</year>
          , pp.
          <fpage>142</fpage>
          -
          <lpage>145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>R.</given-names>
            <surname>Romanchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Andrunyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Brodyak</surname>
          </string-name>
          ,
          <article-title>Intellectual Analysis System Project for Ukrainian-language Artistic Works to Determine the Text Authorship Attribution Probability</article-title>
          ,
          <source>in: Proceedings of the 18th IEEE International Conference on Computer Science and Information Technologies</source>
          , CSIT 2023, Lviv, Ukraine, October 19-21,
          <year>2023</year>
          . IEEE.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Pukach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vovk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kholodna</surname>
          </string-name>
          ,
          <article-title>Identification and Correction of Grammatical Errors in Ukrainian Texts Based on Machine Learning Technology</article-title>
          ,
          <source>Mathematics</source>
          <volume>11</volume>
          (
          <issue>4</issue>
          ) (
          <year>2023</year>
          )
          <elocation-id>904</elocation-id>
          . URL: https://doi.org/10.3390/math11040904.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>K.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          , et al.,
          <article-title>An approach for a next-word prediction for Ukrainian language</article-title>
          .
          <source>Wireless Communications and Mobile Computing</source>
          <volume>2021</volume>
          (
          <year>2021</year>
          )
          <fpage>1</fpage>
          -
          <lpage>9</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Kubinska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Holoshchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Holoshchuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chyrun</surname>
          </string-name>
          ,
          <article-title>Ukrainian Language Chatbot for Sentiment Analysis and User Interests Recognition based on Data Mining</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>3171</volume>
          (
          <year>2022</year>
          )
          <fpage>315</fpage>
          -
          <lpage>327</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>N.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Basystiuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Shakhovska</surname>
          </string-name>
          ,
          <article-title>Development of the Speech-to-Text Chatbot Interface Based on Google API</article-title>
          ,
          <source>CEUR Workshop Proceedings</source>
          <volume>2386</volume>
          (
          <year>2019</year>
          )
          <fpage>212</fpage>
          -
          <lpage>221</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>