<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>June</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
<article-title>Information Technology for Referencing Ukrainian-Language News at Detecting Disinformation in Cybersecurity</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Victoria Vysotska</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mariia Nazarkevych</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Dmytro Shamota</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>12 S. Bandery St., Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Lviv Polytechnic National University</institution>
          ,
          <addr-line>12 S. Bandery St., Lviv, Ukraine</addr-line>
          ,
          <institution>Ivan Franko University of Lviv</institution>
          ,
<addr-line>1 Universytetska St., Lviv</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>11</lpage>
      <abstract>
<p>The paper analyses existing approaches to the abstracting of English-language texts. The research results also include the development and implementation of software for a machine-learning-based abstracting method for Ukrainian-language news, aimed at detecting disinformation. The object of study is the process of automatic referencing of Ukrainian-language news texts. Methods and algorithms for the automatic abstracting of Ukrainian-language news texts are studied; they automatically abbreviate natural-language texts and provide the user with a secondary document containing the main content of the original. The scientific novelty of this work consists in applying byte-pair encoding in the preliminary processing of Ukrainian texts and in the proposed abstracting algorithm. The practical value of the work lies in using the developed methods in a system for referencing Ukrainian-language news texts. Currently, the accuracy of the model based on the Naive Bayes classifier is 83%, which is a good indicator, but it needs to be increased in the future.</p>
      </abstract>
      <kwd-group>
<kwd>referencing texts</kwd>
        <kwd>Ukrainian text analysis</kwd>
        <kwd>text rubrication</kwd>
        <kwd>text abstracting</kwd>
        <kwd>abstract method</kwd>
        <kwd>machine learning</kwd>
        <kwd>Naive Bayes classifier</kwd>
        <kwd>NLP</kwd>
        <kwd>TensorFlow</kwd>
        <kwd>PyTorch</kwd>
<kwd>Gensim</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Modern people live in conditions of constant information load. With the development of information
technologies, more and more people use the Internet, which in turn gives them unlimited access to
the distribution and consumption of information [
        <xref ref-type="bibr" rid="ref1 ref2 ref3">1-3</xref>
        ]. A person is unable to comprehend the entire
amount of available information without directly studying it. In such cases, an automatic news
abstracting program could become a helpful assistant, help overcome information overload and
quickly make a decision about which information is worth further consideration [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4-7</xref>
]. Referencing is the reduction of a text's volume by highlighting its main theses. The main goal is to take raw natural-language data and, using linguistics and algorithms to transform or enrich the text, process it so that it provides more value. There are two general approaches to this problem: extractive and abstractive [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8-10</xref>
]. There is also a hybrid approach that combines the two. In the extractive approach, the most critical parts of the input document (mostly sentences) are selected based on a preliminary assessment of their informativeness and then combined to form a summary. In the abstractive approach, an internal semantic representation of the original content is built and then used to create a short representation closer to what a human would write. This method can transform the extracted content by paraphrasing it, compressing the text more than extraction alone and making it possible to use words that were not in the input text; the annotations it generates should therefore be much closer to what people write. The basis of this research is the process of information dissemination and consumption in global media spaces - a vast and multifaceted canvas on which
dynamic and often contradictory information flows unfold. In the era of digital technologies, this
process has become especially significant due to its impact on the formation of public opinion, political
sentiments, and socio-cultural trends. The speed and volume of information dissemination create
unique challenges for information verification and analysis. The focus of our research is on the
methods and tools used to identify, analyse and neutralize disinformation, fake news and propaganda
messages in the media space. We investigate modern technological approaches, such as machine
learning and natural language processing algorithms, to determine their effectiveness in detecting
distorted content. We also analyse how these techniques can be integrated into everyday media
consumption, providing users with powerful tools for self-assessment of the veracity of the
information they consume. This two-dimensional approach allows us to delve deeply into the
mechanisms of information influence and identify strategies for developing practical tools that could
resist manipulation and distortion in the media, thereby ensuring a higher level of information
transparency and trust in society.
      </p>
      <p>This project opens up new horizons in the application of machine learning by adapting advanced
algorithms to the specifics of the Ukrainian language. The uniqueness lies in the creation of new
methods of deep semantic analysis, which allow us to approach the structural and contextual features
of Ukrainian vocabulary and syntax with understanding. These developments provide more accurate
and effective detection of information distortions, offering algorithmic innovations that can become
the foundation for future research in the field of natural language processing (NLP). The project has
a significant practical impact, as it provides Ukrainian society with reliable and accessible tools for
detecting and analysing disinformation. These tools allow users not only to identify false content but
also to understand its sources and potential targets, thereby strengthening society's information
immunity. As a result, the project contributes to enhancing the information independence and
sovereignty of Ukraine, increasing the level of information transparency and trust, which is key to
the stability of democratic institutions and national security.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <sec id="sec-2-1">
        <title>2.1. Approaches to referencing Ukrainian-language news</title>
        <p>
Several products on the market offer automatic referencing of Ukrainian news, each with its own features and advantages. Summarizer.ua uses machine learning (ML) algorithms to generate short descriptions of news articles [
          <xref ref-type="bibr" rid="ref8">8</xref>
]. It offers a free plan with limited features and paid plans with advanced features, including adjusting summary length, defining keywords, and translating summaries into other languages. QuickText is an automated text-referencing platform that offers automatic news abstracting [
          <xref ref-type="bibr" rid="ref9">9</xref>
]. This product uses proprietary machine learning algorithms and provides integration with other platforms, personalization, and analytics that help users track summary performance.
NewsBreak is a mobile app that offers short descriptions of news articles [
          <xref ref-type="bibr" rid="ref10">10</xref>
]. The app uses machine learning algorithms, offers personalized news feeds and offline access, and allows users to share summaries with friends and followers. Abstracting Ukrainian-language news, for example to detect disinformation, can be a valuable tool for saving time and increasing productivity [
          <xref ref-type="bibr" rid="ref11 ref12">11-12</xref>
          ]. However, it is essential to remember that these products are not
always accurate and reliable [
          <xref ref-type="bibr" rid="ref13 ref14 ref15">13-15</xref>
          ]. Further research and development in this area are needed to
improve the accuracy and reliability of automatic referencing and to make it more accessible to a
broader range of users. There are several ways to solve the problem of accuracy and reliability of
automatic referencing of Ukrainian-language news [
          <xref ref-type="bibr" rid="ref16 ref17 ref18 ref19 ref20 ref21">16-21</xref>
          ]:
        </p>
        <p>1. Improving ML algorithms. Research on ML-based NLP methods that better understand the Ukrainian language and generate more accurate and reliable summaries should continue.</p>
        <p>2. Human review. Automatically generated abstracts may be reviewed and edited by humans to ensure accuracy and objectivity.</p>
        <p>3. More context. Users can be given more context in abstracts, such as links to the full texts of news articles or additional information about the article's topic.</p>
        <p>4. Accessibility. More affordable products for automatic referencing should be developed to make them available to a broader range of users.</p>
        <p>5. Simpler implementation. Deployment of automatic referencing products should be simplified to make them more accessible to smaller organizations and companies with limited budgets.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Main Part</title>
      <sec id="sec-3-1">
        <title>3.1. Methods for automatic referencing</title>
        <p>Various machine learning algorithms can be used to generate abstracts of Ukrainian news articles.
Keyword extraction methods identify keywords and phrases that describe the content of a news article
and use them to create an abstract. Classification-based methods classify the sentences of a news
article as important or unimportant and then generate an abstract from the critical sentences. Neural
network-based methods use artificial neural networks to train on large data sets of news articles and
abstracts and then generate abstracts for new articles.</p>
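<p>As an illustration of the extractive idea described above, the following sketch scores sentences by the average corpus frequency of their words and keeps the top-scoring ones in their original order. The sentence-splitting rule and the scoring heuristic are simplifying assumptions for the example, not the method used in this work:</p>

```python
from collections import Counter

def extractive_summary(text, n_sentences=2):
    # naive sentence split on '.'; a real system would use a proper tokenizer
    sentences = [s.strip() for s in text.split('.') if s.strip()]
    if len(sentences) <= n_sentences:
        return sentences
    # corpus-wide word frequencies over all sentences
    freq = Counter(w.lower() for s in sentences for w in s.split())
    # score each sentence by the average frequency of its words
    scores = [sum(freq[w.lower()] for w in s.split()) / len(s.split())
              for s in sentences]
    # pick the top-scoring sentences, then restore original order
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i],
                    reverse=True)[:n_sentences]
    return [sentences[i] for i in sorted(ranked)]
```

Classification-based and neural methods replace the frequency score with a learned importance estimate, but the select-and-concatenate skeleton stays the same.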
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Tools for automatic referencing</title>
        <p>Many machine learning tools can be used to develop an automatic referencing system. TensorFlow is
an open-source machine-learning platform that offers extensive capabilities for developing and
training machine-learning models. PyTorch is another open-source machine learning platform that is
similar to TensorFlow but offers some advantages, such as flexibility and ease of use. Gensim is a
Python library for NLP that offers tools for tasks such as topic modelling, sentiment analysis, and
keyword extraction.</p>
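<p>The keyword-extraction task such libraries support can be sketched without any dependencies. The function below ranks the words of one document by a hand-rolled TF-IDF score against a small corpus; the name top_keywords and the scoring details are illustrative assumptions, not Gensim's actual API:</p>

```python
import math
from collections import Counter

def top_keywords(docs, doc_index, k=3):
    """Rank the words of docs[doc_index] by TF-IDF against the corpus."""
    tokenized = [d.lower().split() for d in docs]
    n = len(docs)
    # document frequency: in how many documents each word appears
    df = Counter(w for doc in tokenized for w in set(doc))
    # term frequency within the chosen document
    tf = Counter(tokenized[doc_index])
    scores = {w: (c / len(tokenized[doc_index])) * math.log(n / df[w])
              for w, c in tf.items()}
    return [w for w, _ in sorted(scores.items(), key=lambda x: -x[1])[:k]]
```

Words that appear in every document (like "the") get a score of zero, so distinctive terms rise to the top.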
        <sec id="sec-3-2-1">
          <title>Process modelling of news abstract generating</title>
          <p>The goal is to develop a product for Ukrainian-language news abstracting that generates short descriptions of news articles containing the most critical information. The product should be accurate, reliable, easy to use and accessible to a wide range of users. It is necessary to define the development goals, the target audience, and the functional and non-functional requirements. The development goals are saving users' time, improving productivity, improving understanding of news articles' content, making news accessible to people with reading disabilities, and personalizing news content. The target audience includes users who need to familiarize themselves with the news quickly, people with reading disabilities, organizations that need to process large volumes of text data, and software developers who want to integrate automatic referencing into their products. The functional requirements are generating short descriptions of news articles, defining keywords and phrases, selecting the most critical information, adjusting summary length, translating summaries into other languages, integration with other platforms, personalization of summaries, and providing analytics. The non-functional requirements are accuracy and reliability, ease of use, availability, scalability, security, and speed. Consider the sequence diagram for the process of generating an abstract of a news article (Fig. 1). The actors are the User, the System and the ML Algorithm. The user is the person who uses the system to generate abstracts of news articles. The system is the software that produces abstracts of news articles. The ML algorithm creates abstracts of news articles based on NLP and a Naive Bayes classifier.</p>
          <p>This flow chart describes the process of generating an abstract for a news article. The user provides the system with a link to the news article they want referenced. The system downloads the news article from the Internet and sends it to the ML algorithm, which analyses the article and generates an abstract and a text rubric (classification). The system then sends the abstract to the user.</p>
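<p>The sequence above can be sketched as a small pipeline. Here fetch_article, summarize and classify_rubric are placeholders standing in for the real download, abstracting and rubrication steps, not part of the described system's actual API:</p>

```python
def fetch_article(url):
    # placeholder: a real system would download the page and strip HTML
    return "Текст новини ..." if url else ""

def summarize(text):
    # placeholder for the ML abstracting step
    return text[:60]

def classify_rubric(text):
    # placeholder for the Naive Bayes rubrication step
    return "News"

def process_request(url):
    """User -> System -> ML Algorithm -> System -> User, as in Fig. 1."""
    article = fetch_article(url)
    return {"abstract": summarize(article), "rubric": classify_rubric(article)}
```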
        </sec>
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Testing and evaluation</title>
        <p>For the research, ua_datasets is used: a collection of Ukrainian-language datasets with articles to be classified. The library is provided by FIdo.ai (the ML research department of the FIdo non-profit student organization of the National University of Kyiv-Mohyla Academy) for research purposes in the field of data analysis (classification, clustering, keyword extraction, etc.). The Ukrainian News dataset was selected from the collection; it contains more than 150,000 news articles collected from more than 20 news resources. The samples are divided into five categories: Politics, Sports, News, Business and Technology. The training sample contains 120417 records and the test sample 30105:
from ua_datasets import NewsClassificationDataset
train_data = NewsClassificationDataset(root = 'data/', split = 'train', return_tags = True)
test_data = NewsClassificationDataset(root = 'data/', split = 'test', return_tags = True)
1. Import the Pandas library and write the data into variables:
import pandas as pd
train_data = pd.read_csv('data/train.csv')
test_data = pd.read_csv('data/test.csv')</p>
        <p>2. Convert the texts into numeric feature vectors based on a bag of words, using CountVectorizer from sklearn.feature_extraction.text:</p>
        <p>from sklearn.feature_extraction.text import CountVectorizer
count_vector = CountVectorizer()
X_train_counts = count_vector.fit_transform(train_data.text)
X_train_counts.shape</p>
        <sec id="sec-3-3-1">
          <title>Training and evaluating the classifier</title>
          <p>3. TF-IDF weighting reduces the weight of common words that appear in all documents:
from sklearn.feature_extraction.text import TfidfTransformer
tfidf_transformer = TfidfTransformer()
X_train_tfidf = tfidf_transformer.fit_transform(X_train_counts)
X_train_tfidf.shape</p>
          <p>4. Different algorithms can be used to classify the text, for example, the Naive Bayes classifier:
from sklearn.naive_bayes import MultinomialNB
clf = MultinomialNB().fit(X_train_tfidf, train_data.target)</p>
          <p>5. A text analysis pipeline is constructed:
from sklearn.pipeline import Pipeline
text_clf = Pipeline([('vect', CountVectorizer()), ('tfidf', TfidfTransformer()), ('clf', MultinomialNB())])
text_clf = text_clf.fit(train_data.text, train_data.target)</p>
          <p>6. The system has learned to determine the rubric of a text (Fig. 2). The dimension of the training matrix, X_train_tfidf.shape, is (120417, 494590). Several records (elements 172, 2324 and 3) of the test sample (Fig. 3-5) are taken to determine their rubrics:
test_data.text[A]
text_clf.predict([test_data.text[A]])</p>
          <p>Next, an arbitrary text from the Internet is taken to determine its rubric (Fig. 6): text_clf.predict([any_text]).</p>
        </sec>
        <sec id="sec-3-3-2">
          <title>The first five records as the head() method request result</title>
        </sec>
        <sec id="sec-3-3-3">
          <title>Text from the test sample (element 172) and text analysis result: Technology</title>
        </sec>
        <sec id="sec-3-3-4">
          <title>Text from the test sample (element 2324) and text analysis result: News</title>
          <p>Text from the test sample (element 3) and analysis result: Business. Free text from the Internet and text analysis result: Sport.</p>
        </sec>
        <sec id="sec-3-3-5">
          <title>Determining the system accuracy</title>
          <p>The accuracy is determined after executing the code:
import numpy as np
predicted = text_clf.predict(test_data.text)
np.mean(predicted == test_data.target)</p>
          <p>The execution result is 0.8301278857332669. All of the above results turned out to be correct, which means the accuracy is 83%.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Building a system to detect fake messages from chat users</title>
      <p>We load a dataset for determining whether news is fake or true.</p>
      <p>The dataset is preprocessed through the removal of redundant lines, unnecessary characters, and
other irrelevant elements. Following this, we visualize the data to gain insights into its structure.</p>
      <p>We train a model to classify whether the news is fake or true. For this purpose, the dataset is
divided into training and test sets.</p>
      <p>from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X_combined, y, test_size=0.2, random_state=42)</p>
      <sec id="sec-4-1">
        <title>Next, we create and train classifiers</title>
        <p>models = { "Logistic Regression": LogisticRegression(max_iter=1000), …}</p>
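<p>A minimal sketch of this train-and-compare loop on synthetic data (the feature matrix and labels here are random stand-ins, not the news dataset, and only three of the paper's models are shown) might look as follows:</p>

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.integers(0, 5, size=(200, 10))   # stand-in for count/TF-IDF features
y = (X[:, 0] + X[:, 1] > 4).astype(int)  # stand-in for fake/true labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

models = {
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "Naive Bayes": MultinomialNB(),
    "Random Forest": RandomForestClassifier(random_state=42),
}
# fit each classifier and record its test accuracy
scores = {name: m.fit(X_train, y_train).score(X_test, y_test)
          for name, m in models.items()}
```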
        <p>The proposed method involves the automated processing of news content to estimate the
probability of it being fake. This is achieved using a pre-trained artificial intelligence model that
analyzes a combination of linguistic, structural, and meta-informational features.</p>
        <p>Existing methods for fake news detection typically rely on manual content moderation, basic keyword filtering, or URL blocking. These approaches suffer from significant limitations, including low accuracy, slow processing time, and poor adaptability to evolving disinformation tactics.</p>
        <p>The most similar known method uses machine learning for fake news detection based on a limited
set of input features. However, it lacks a comprehensive evaluation of the news structure, the
credibility of the source, and the presence of external fact confirmations.</p>
        <p>Low Accuracy of Keyword-Based Filtering: Many existing solutions rely on analyzing the
frequency of specific keywords or phrases (e.g., “shock!”, “sensation!”, “never seen before”). However,
such filters are easily bypassed through changes in wording or stylistic obfuscation, resulting in low
effectiveness in identifying deliberately manipulative content.</p>
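<p>The weakness of keyword-based filtering can be shown with a toy filter; the keyword list and the function below are illustrative only:</p>

```python
# a naive sensational-keyword filter of the kind criticized above
SENSATIONAL = {"shock", "sensation", "never seen before"}

def keyword_flag(text):
    """Flag text if it contains any sensational keyword (case-insensitive)."""
    t = text.lower()
    return any(k in t for k in SENSATIONAL)

print(keyword_flag("Shock! You will not believe this."))        # True: flagged
print(keyword_flag("Astonishing! You will not believe this."))  # False: same intent, rephrased
```

A single synonym substitution defeats the filter, which is exactly why contextual and structural analysis is needed.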
        <p>Lack of Contextual and Structural Analysis: These methods often ignore the deeper structure of
the text, such as sentence logic, coherence, stylistic consistency, and genre appropriateness. As a
result, sophisticated fake news articles that mimic the style of legitimate journalism frequently go
undetected.</p>
        <p>No Verification of the Information Source: Most systems do not assess the credibility of the source,
such as whether the website is verified, blacklisted, or exhibits suspicious domain characteristics. This
allows the creation of fake websites with convincing names that can mislead both human readers and
automated detection tools.</p>
        <p>Current approaches rarely verify news by cross-referencing with other authoritative sources such
as reputable news agencies, scientific portals, or fact-checking organizations. Consequently, even
blatant fake news can sometimes be mistakenly accepted as reliable.</p>
        <p>Many solutions lack self-learning or adaptive capabilities. They are vulnerable to evolving styles
of fake news and new deception techniques, quickly becoming obsolete without continuous manual
updates.</p>
        <p>Manual moderation or systems relying on non-automated analysis are time-consuming,
preventing prompt responses—especially critical during information attacks or crisis situations.</p>
        <p>Some solutions operate only with English-language content and do not consider local context
specifics. Additionally, they often support only a limited range of platforms.</p>
        <p>Two modern algorithms for community detection in networks have been developed with
additional constraints such as spatial connectivity and boundary size. These enhanced methods, called
ScLouvain and ScLeiden, are extensions of the traditional Louvain and Leiden algorithms,
respectively. Both optimize the network structure by maximizing intra-community flows while
minimizing inter-community flows.</p>
        <p>In many complex networks, nodes tend to cluster into relatively dense groups, often referred to as
communities. This modular structure is typically unknown beforehand, making community
detection a fundamental problem in network analysis. One of the most widely used approaches for
this task is modularity optimization, which aims to maximize the difference between the actual
number of edges within communities and the expected number of such edges in a random network.</p>
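<p>The modularity measure described here can be computed directly from its definition: the difference between the actual and the expected number of within-community edges, normalized by the edge count. The toy graph and helper below are purely illustrative, not the ScLouvain/ScLeiden implementation:</p>

```python
def modularity(adj, communities):
    """Modularity Q of an undirected graph given as an adjacency matrix."""
    n = len(adj)
    m2 = sum(sum(row) for row in adj)   # 2m: each edge counted twice
    deg = [sum(row) for row in adj]
    q = 0.0
    for i in range(n):
        for j in range(n):
            if communities[i] == communities[j]:
                # actual edge minus expected edge under the random null model
                q += adj[i][j] - deg[i] * deg[j] / m2
    return q / m2

# two triangles joined by one edge: a clear two-community structure
adj = [
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
]
```

Splitting the graph into its two triangles gives Q = 5/14 ≈ 0.357, while putting all nodes in one community gives Q = 0, matching the intuition that the split captures real structure.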
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental research</title>
      <p>As a result of the experiments, the following results were obtained by the classifiers. On the collected dataset, fake news was investigated using the k-nearest Neighbors method (Fig. 5, Fig. 6), the Support Vector Machine method (Fig. 7, Fig. 8), the Decision Tree classifier (Fig. 9, Fig. 10), the Random Forest method (Fig. 11, Fig. 12), the Naive Bayes method (Fig. 13, Fig. 14), and Logistic Regression (Fig. 15, Fig. 16).</p>
      <sec id="sec-5-1">
        <title>5.1. Method k-nearest Neighbors</title>
      </sec>
      <sec id="sec-5-2">
        <title>5.3. Decision Tree Classifier</title>
      </sec>
      <sec id="sec-5-3">
        <title>5.6. Logistic Regression</title>
        <p>
          The developed information system for detecting fake news in chatbots uses approaches to building
information systems with information visualization, which are described in [
          <xref ref-type="bibr" rid="ref20">20</xref>
]. When constructing the linear regression, functional studies on curve construction in [
          <xref ref-type="bibr" rid="ref22">22</xref>
          ] were taken. When developing the
information system for detecting fake news, system protection was used, which is based on the
approaches described in [
          <xref ref-type="bibr" rid="ref23">23</xref>
]. The overall approach to building the system itself was taken from [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
        </p>
        <sec id="sec-5-3-1">
          <title>Comparison of the proposed methods</title>
          <p>A comparison of all proposed methods is shown in Fig. 17. The best results in fake news detection were achieved by the Random Forest, Logistic Regression, and Support Vector Machine classifiers, with accuracies ranging from 93% to 95%.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>In the digital age, information security is one of the key challenges for many societies, especially for
countries undergoing political change or conflict. Ukraine, as a country significantly affected by
information operations, faces the need to combat disinformation, fake news and propaganda.
Accordingly, the development of tools for identifying and analysing such information threats is an
urgent task that is of great importance for ensuring the country's information security. The relevance
of this project cannot be overestimated in the conditions of the modern information space, where the
fight for the truth has become almost synonymous with preserving national security. Information
wars, in which the truth becomes the first victim, mercilessly bombard the public consciousness of
millions of people, distorting reality and forming an artificial reality that serves the interests of
external and internal antagonists. In Ukraine, on the front lines of the fight against hybrid threats, the
lack of reliable tools for identifying disinformation can lead to systemic failures in public trust, the
erosion of fundamental democratic values, and the stability of state institutions. It is not just a matter
of media literacy; it is a matter of strategic defence of national security. The flywheel of disinformation
can have a devastating effect not only on domestic political stability but also on Ukraine's
international reputation, affecting the investment climate and bilateral relations with other states. A
qualitatively new level of aggression in the information sphere requires an adequate response in the
form of the development and implementation of advanced technological solutions. The project, which
aims to create comprehensive tools for identifying fakes and disinformation, not only meets the
critical need of Ukrainian society for reliable means of information verification but also improves the
general culture of information consumption, strengthening the information resilience of the nation.
This project will become a buffer that will protect Ukrainian society from false narratives and hostile
information interference, ensuring the stable development of democratic institutions and values in
the country. In this study, a text analysis system is developed using a Naive Bayes classifier. The main prospects for the developed system are increasing the number of potential categories for classification, adding other languages and increasing the model accuracy. Currently, the accuracy of the model is 83%, which is a good indicator, but it needs to be improved in the future. The limited accuracy of the text analysis results is due to the complexity of processing Ukrainian-language texts. It is necessary to analyse word endings, number (plural/singular), gender and case in order to process only nouns and reduce them to the nominative case. The Ukrainian language has 7 cases with different endings for words of different genders (feminine, neuter and masculine). Dictionaries of all endings and regular rules are needed for reducing nouns to the nominative case (about 1000 rules), adjectives (about 100 rules) and verbs (several hundred rules). Future
research will be aimed at increasing the functionality of the system, including adding other types of
analysis and creating user interfaces.</p>
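<p>The ending-dictionary idea described in the conclusion can be sketched as a small lookup table of case endings. The four rules below cover only a few feminine singular endings of one declension pattern and are purely illustrative of the approach, not the full rule sets the text calls for:</p>

```python
# toy ending table: (inflected ending, nominative ending)
ENDING_RULES = [
    ("ини", "ина"),   # genitive singular, e.g. новини -> новина
    ("ині", "ина"),   # dative/locative singular
    ("ину", "ина"),   # accusative singular
    ("иною", "ина"),  # instrumental singular
]

def to_nominative(word):
    """Reduce an inflected noun to the nominative via the ending table."""
    for ending, nom in ENDING_RULES:
        if word.endswith(ending):
            return word[: -len(ending)] + nom
    return word  # unknown form: leave unchanged
```

A production system would need the full dictionaries of endings and ambiguity resolution (e.g. новини is also the nominative plural), which is exactly why the rule counts above run into the hundreds and thousands.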
      <p>News classification into fake and true categories was performed using various algorithms,
including k-Nearest Neighbors, Support Vector Machines, Decision Trees, Random Forests, Naive
Bayes, Linear Discriminant Analysis, and Logistic Regression. The effectiveness of these models was
evaluated and their results compared. Support Vector Machines and Logistic Regression demonstrated
the highest accuracy, making them particularly effective tools for fake news detection.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>The research was carried out with the grant support of the National Research Fund of Ukraine, "Information system development for automatic detection of misinformation sources and inauthentic behaviour of chat users", project registration No 33/0012 from 3/03/2025 (2023.04/0012).</p>
      <p>Declaration on Generative AI: the authors have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khairova</surname>
          </string-name>
          , et al.,
          <article-title>"Using BERT model to Identify Sentences Paraphrase in the News Corpus,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3171</volume>
          , pp.
          <fpage>38</fpage>
          -
          <lpage>48</lpage>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>N.</given-names>
            <surname>Khairova</surname>
          </string-name>
          , et al.,
          <article-title>"Topic Modelling of Ukraine War-Related News Using Latent Dirichlet Allocation with Collapsed Gibbs Sampling,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3688</volume>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Dar</surname>
          </string-name>
          ,
and
          <string-name>
            <given-names>Dr. R.</given-names>
            <surname>Hashmy</surname>
          </string-name>
          ,
          <article-title>"A Survey on COVID-19 related Fake News Detection using Machine Learning Models,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3426</volume>
          , pp.
          <fpage>36</fpage>
          -
          <lpage>46</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>T.</given-names>
            <surname>Batura</surname>
          </string-name>
          , et al.,
          <article-title>"A method for automatic text summarisation based on rhetorical analysis and topic modeling,"</article-title>
          <source>International Journal of Computing</source>
          , vol.
          <volume>19</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>118</fpage>
          -
          <lpage>127</lpage>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          ,
          <article-title>"The similarity metric of scientific papers summaries on the basis of adaptive ontologies,"</article-title>
          <source>International Conference on Perspective Technologies and Methods in MEMS Design</source>
          , p.
          <fpage>162</fpage>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>B.</given-names>
            <surname>Korostynskyi</surname>
          </string-name>
          , et al.,
          <article-title>"Text Content Summarization Technology Based on Machine Learning Methods,"</article-title>
          <source>Computer Sciences and Information Technologies (CSIT)</source>
          , Lviv, pp.
          <fpage>19</fpage>
          -
          <lpage>21</lpage>
          ,
          October
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>V.</given-names>
            <surname>Lytvyn</surname>
          </string-name>
          , et al.,
          <article-title>"Abstracting Text Content Based on Weighing the TF-IDF Measure by the Subject Area Ontology,"</article-title>
          <source>International Conference on Smart Information Systems and Technologies</source>
          , Kazakhstan,
          <year>2021</year>
          . https://ieeexplore.ieee.org/document/9465978
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <source>Summarizer.ua</source>
          .
          [Online]. Available: https://www.summarizer.org/
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <source>QuickText</source>
          .
          [Online]. Available: https://www.quicktext.im/
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] NewsBreak. [Online]. Available: https://play.google.com/store/apps/details?id=com.particlenews.newsbreak&amp;hl=en&amp;gl=US</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          , et al.,
          <article-title>"Information technology for identifying disinformation sources and inauthentic chat users' behaviours based on machine learning,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3723</volume>
          , pp.
          <fpage>427</fpage>
          -
          <lpage>465</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>V.</given-names>
            <surname>Vysotska</surname>
          </string-name>
          , et al.,
          <article-title>"NLP Tool for Extracting Relevant Information from Criminal Reports or Fakes/Propaganda Content,"</article-title>
          <source>CSIT</source>
          , pp.
          <fpage>93</fpage>
          -
          <lpage>98</lpage>
          ,
          <year>2022</year>
          . doi: 10.1109/CSIT56902.2022.10000563.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>I.</given-names>
            <surname>Afanasieva</surname>
          </string-name>
          , et al.,
          <article-title>"Application of Neural Networks to Identify of Fake News,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3396</volume>
          , pp.
          <fpage>346</fpage>
          -
          <lpage>358</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wierzbicki</surname>
          </string-name>
          , et al.,
          <article-title>"Synthesis of model features for fake news detection using large language models,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3722</volume>
          , pp.
          <fpage>50</fpage>
          -
          <lpage>65</lpage>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>A.</given-names>
            <surname>Mykytiuk</surname>
          </string-name>
          , et al.,
          <article-title>"Technology of Fake News Recognition Based on Machine Learning Methods,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3387</volume>
          , pp.
          <fpage>311</fpage>
          -
          <lpage>330</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Shupta</surname>
          </string-name>
          , et al.,
          <article-title>"An Adaptive Approach to Detecting Fake News Based on Generalized Text Features,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3387</volume>
          , pp.
          <fpage>300</fpage>
          -
          <lpage>310</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>O.</given-names>
            <surname>Oborska</surname>
          </string-name>
          , et al.,
          <article-title>"An Intelligent System Based on Ontologies for Determining the Similarity of User Preferences,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>3403</volume>
          , pp.
          <fpage>283</fpage>
          -
          <lpage>292</lpage>
          ,
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>S.</given-names>
            <surname>Chalyi</surname>
          </string-name>
          , and
          <string-name>
            <given-names>V.</given-names>
            <surname>Leshchynskyi</surname>
          </string-name>
          ,
          <article-title>"Construction of patterns of user preferences dynamics for explanations in the recommender system,"</article-title>
          <source>Advanced Information Systems</source>
          , vol.
          <volume>5</volume>
          (
          <issue>1</issue>
          ), pp.
          <fpage>107</fpage>
          -
          <lpage>112</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>M.</given-names>
            <surname>Shvedova</surname>
          </string-name>
          , and
          <string-name>
            <given-names>R.</given-names>
            <surname>Waldenfels</surname>
          </string-name>
          ,
          <article-title>"Regional Annotation within GRAC, a Large Reference Corpus of Ukrainian: Issues and Challenges,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2870</volume>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>45</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>V.</given-names>
            <surname>Korolov</surname>
          </string-name>
          , et al.,
          <article-title>"Information-Reference System Creation Prerequisites for the Ground Forces Identification on the Battlefield According to NATO Standards,"</article-title>
          <source>CEUR Workshop Proceedings</source>
          , vol.
          <volume>2870</volume>
          , pp.
          <fpage>1152</fpage>
          -
          <lpage>1172</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>M.</given-names>
            <surname>Haigh</surname>
          </string-name>
          , et al.,
          <article-title>"Beyond fake news: learning from information literacy programs in Ukraine,"</article-title>
          <source>Libraries and the Global Retreat of Democracy: Confronting Polarisation, Misinformation, and Suppression</source>
          , Emerald Publishing Limited, pp.
          <fpage>163</fpage>
          -
          <lpage>182</lpage>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>M.</given-names>
            <surname>Medykovsky</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Droniuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevich</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Fedevych</surname>
          </string-name>
          ,
          <article-title>"Modelling the perturbation of traffic based on Ateb-functions,"</article-title>
          <source>Computer Networks: 20th International Conference, CN 2013, Proceedings</source>
          , Lwówek Śląski, Poland, June 17-21, Springer Berlin Heidelberg, pp.
          <fpage>38</fpage>
          -
          <lpage>44</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>I.</given-names>
            <surname>Dronyuk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          , and
          <string-name>
            <given-names>O.</given-names>
            <surname>Fedevych</surname>
          </string-name>
          ,
          <article-title>"Synthesis of Noise-Like Signal Based on Ateb-Functions,"</article-title>
          <source>International Conference on Distributed Computer and Communication Networks</source>
          , Cham: Springer International Publishing, pp.
          <fpage>132</fpage>
          -
          <lpage>140</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>M.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Oliiarnyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Nazarkevych</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Kramarenko</surname>
          </string-name>
          , and
          <string-name>
            <given-names>I.</given-names>
            <surname>Onyshschenko</surname>
          </string-name>
          ,
          <article-title>"The method of encryption based on Ateb-functions,"</article-title>
          <source>2016 IEEE First International Conference on Data Stream Mining &amp; Processing (DSMP)</source>
          , IEEE, pp.
          <fpage>129</fpage>
          -
          <lpage>133</lpage>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>