<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Method for Political Propaganda Detection in Internet Content Using Recurrent Neural Network Models Ensemble</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Iurii</forename><surname>Krak</surname></persName>
							<email>yuri.krak@gmail.com</email>
							<affiliation key="aff0">
								<orgName type="institution">Taras Shevchenko National University of Kyiv</orgName>
								<address>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Glushkov Institute of Cybernetics of NAS of Ukraine</orgName>
								<address>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Volodymyr</forename><surname>Didur</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Maryna</forename><surname>Molchanova</surname></persName>
							<email>m.o.molchanova@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olexander</forename><surname>Mazurets</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olena</forename><surname>Sobko</surname></persName>
							<email>olenasobko.ua@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olha</forename><surname>Zalutska</surname></persName>
							<email>zalutska.olha@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Olexander</forename><surname>Barmak</surname></persName>
							<email>alexander.barmak@gmail.com</email>
							<affiliation key="aff2">
								<orgName type="institution">Khmelnytskyi National University</orgName>
								<address>
									<settlement>Khmelnytskyi</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Method for Political Propaganda Detection in Internet Content Using Recurrent Neural Network Models Ensemble</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">0240013AECD58CDA95EBFDEDC31C9FD4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>propaganda detection</term>
					<term>recurrent neural networks</term>
					<term>ensemble of neural networks</term>
					<term>natural language processing</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The automation of propaganda detection processes in Internet content using natural language processing is extremely relevant in modern conditions and can provide fast and well-timed targeted detection of hostile manipulative influence in large-scale amounts of Internet content. The paper proposes a method of automated propaganda detection that operates in the Ukrainian language. The method for detecting political propaganda in Internet content using an ensemble of recurrent neural network models is intended to identify and analyze potentially propagandistic or manipulative content spread on the Internet. The input data of the method are an ensemble of trained recurrent neural network models with tokenizers and a text message for analysis. The output data are the level and percentage of propaganda presence for each neural network model of the ensemble and in general.</p><p>To examine the effectiveness of the developed method for detecting political propaganda in Internet content, which includes the ensemble use of recurrent neural network models of the BiLSTM and GRU architectures, a software implementation of the method was created. The software implementation allows training neural network models and using them to detect political propaganda in textual Internet content.</p><p>A training data set in Ukrainian was prepared. An applied study of the efficiency of propaganda detection by an ensemble of classifiers based on the BiLSTM and GRU recurrent neural network architectures was conducted. The proposed approach is capable of detecting political propaganda by an ensemble of RNN models with Accuracy 0.97, Precision 0.973, Recall 0.981, and F1 0.976 in the bagging mode, and Accuracy 0.95, Precision 0.977, Recall 0.987, and F1 0.981 in the stacking mode. The developed method has a limitation: it works with text posts from 200 to 6300 symbols long. For shorter and longer texts, performance degradation is observed.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Propaganda is an integral component of information manipulation and includes various forms, methods and means of influencing people in order to change their psychological attitudes in the desired direction, so its timely detection is an urgent task for information technology. Such manipulations are often used to change the psychological climate in society, mobilize support or discredit opponents <ref type="bibr" target="#b0">[1]</ref>.</p><p>14th International Scientific and Practical Conference from Programming UkrPROG'2024, May <ref type="bibr">14-15, 2024, Kyiv, Ukraine</ref>. With the growth of the consumption of textual Internet content, the threat of destructive manipulative propagandistic influence through textual political media is growing. Propaganda distributed on the Internet represents a large-scale threat to the national security of the country <ref type="bibr" target="#b1">[2]</ref>; failure to counter it in time can lead to devastating consequences <ref type="bibr" target="#b2">[3]</ref>. Therefore, the automation of the processes of detecting propaganda in textual Internet content by means of natural language processing is extremely relevant in modern conditions <ref type="bibr" target="#b3">[4]</ref>, and is capable of providing quick and timely targeted detection of hostile manipulative influence in large-scale volumes of Internet content.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Works</head><p>Modern scientific publications highlight the relevance of the problem of automated detection of propaganda in textual Internet content. Especially relevant at the moment are research areas dedicated to the intellectualization of propaganda detection processes, which makes it possible to avoid a number of technological problems in monitoring media sources <ref type="bibr" target="#b4">[5]</ref>, and to the problem of separating manifestations of propaganda techniques from other manipulative influences <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. It is noted that the elements of the propaganda model include the subject, content, forms and methods <ref type="bibr" target="#b7">[8]</ref>, as well as means or channels of information transmission <ref type="bibr" target="#b8">[9]</ref>.</p><p>The subject of propaganda is a social group that seeks to influence the audience. The content of propaganda is determined by the subject's social interests and their relation to the interests of society in general. Forms and methods of propaganda are chosen depending on the goals and the audience to be influenced. Media include print media, radio, television, etc. The object of propaganda is the audience or the social groups that are the target of influence. The social interests of the subject of propaganda influence its content and the choice of forms, methods and means of information transmission <ref type="bibr" target="#b9">[10]</ref>.</p><p>Detecting propaganda in text using NLP is a challenging task due to propaganda's use of subtle manipulation techniques and context dependencies. To solve this problem, the authors of <ref type="bibr" target="#b10">[11]</ref> investigated the effectiveness of modern large language models, such as GPT-3 and GPT-4, for detecting propaganda. 
Experiments were performed using the SemEval-2020 Task 11 dataset, which contains news articles tagged with 14 propaganda techniques. The performance of the models was determined by evaluating metrics such as F1 score, precision, and recall, comparing the results to the current state-of-the-art approach using RoBERTa. The obtained results show that GPT-4 achieves results comparable to the current state-of-the-art technology <ref type="bibr" target="#b11">[12]</ref>.</p><p>Statistical analysis of texts <ref type="bibr" target="#b13">[14]</ref> and multimodal visual-textual object graph attention networks <ref type="bibr" target="#b14">[15]</ref> are noted as sufficiently promising and effective means of semantic analysis of textual content <ref type="bibr" target="#b12">[13]</ref> that can be used to detect propaganda. Also, at the present stage, the use of transformer-based neural networks <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>, neural network models of complex architecture, such as RoBERTa <ref type="bibr" target="#b17">[18]</ref> and GPT <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref>, and recurrent neural networks <ref type="bibr" target="#b20">[21]</ref> is a relevant direction for automated detection of propaganda, with some works in turn focusing on the detection of individual components or propaganda techniques, such as racial propaganda <ref type="bibr" target="#b21">[22]</ref> and fake news <ref type="bibr" target="#b22">[23]</ref>.</p><p>At the same time, the authors of <ref type="bibr" target="#b23">[24]</ref> note that the existing methods of identifying propaganda are primarily focused on identifying the linguistic features of its content. However, these methods usually miss the information presented in the external news environment from which the propaganda news originated and spread. 
It is noted that methods for detecting propaganda in different languages may differ, depending on the type of language inflection <ref type="bibr" target="#b24">[25]</ref>. The authors of <ref type="bibr" target="#b25">[26]</ref> analyze how mass media influenced and reflected public opinion during the first month of the Russian invasion, using articles and Telegram news channels in Ukrainian, Russian, Romanian, French, and English. Two methods of multilingual automated identification of pro-Kremlin propaganda, based on transformers (BERT) and linguistic features (SVM), were proposed and compared.</p><p>The purpose of the article is to create a method for political propaganda detection in Internet content using a recurrent neural network models ensemble, which will work with the Ukrainian language, as well as to carry out its approbation.</p><p>As part of the research, the following tasks were also completed: preparation of a training Ukrainian-language data set; development of software that implements the created method; training of an ensemble of neural network classifiers; and a study of the effectiveness of the method using the developed software.</p><p>The main contribution of the article is the development of a workable method of automated detection of political propaganda in Ukrainian-language texts.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Method for Political Propaganda Detection</head><p>Considering the insufficient amount of Ukrainian-language data, there is a need to create our own labeled data set that will be used for training the neural networks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Dataset Preparation</head><p>For training the recurrent neural network models, a data set of more than 25,000 posts was formed and marked according to belonging to the categories "Propaganda" and "Non-propaganda". The lists of propaganda and verified sources were compiled from the official channels of the President and the Verkhovna Rada of Ukraine, as well as from authoritative international analytical studies and summaries.</p><p>To normalize the input data, entries shorter than 200 characters and longer than 6300 characters were discarded. As a result of data filtering, a set consisting of 21,222 items was obtained, where 10,737 records belong to the "propaganda post" class and 10,485 records belong to the "non-propaganda post" class. The graph of data distribution by length in characters is shown in Figure <ref type="figure" target="#fig_0">1</ref>. As can be seen from Figure <ref type="figure" target="#fig_0">1</ref>, the records that do not contain propaganda and fall in the length range of 200 to 800 characters make up more than half of the set, which may negatively affect the quality of classification in the future. At the same time, the set of propaganda texts is more evenly distributed. All multilingual fragments were automatically translated into Ukrainian.</p><p>The described data set will be used to train neural network models within the framework of the developed method of detecting political propaganda.</p></div>
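The length-normalization step described above can be sketched as follows. This is a minimal illustration; the function name and sample data are assumptions, not taken from the authors' implementation.

```python
# Sketch of the dataset length-normalization step: posts shorter than
# 200 characters or longer than 6300 characters are discarded.
# Names and sample data are illustrative, not from the paper's code.

MIN_LEN, MAX_LEN = 200, 6300

def filter_by_length(posts):
    """Keep only posts whose character count lies within [MIN_LEN, MAX_LEN]."""
    return [p for p in posts if MIN_LEN <= len(p) <= MAX_LEN]

sample = ["x" * 100, "y" * 500, "z" * 7000]
kept = filter_by_length(sample)  # only the 500-character post survives
```

The same filter applied to the raw 25,000-post collection is what reduces it to the 21,222 items used for training.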
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Scheme of Method for Political Propaganda Detection</head><p>The scheme of the method of detecting political propaganda is shown in Figure <ref type="figure" target="#fig_1">2</ref>. The input data of the method are an ensemble of trained recurrent neural network models with tokenizers, and a text post for analysis. In step 1, the ensemble of RNN models and their tokenizers are selected and loaded.</p><p>Next, step 2 pre-processes the user post for analysis, which includes converting the text to lowercase, removing stop words and punctuation, etc. In step 3, the pre-processed text is converted into numerical sequences that will be fed to the neural networks for binary classification. Step 4 is the analysis of the post for the presence of propaganda, which includes obtaining the percentage indicators of the presence of propaganda in the post as assessed by each RNN model.</p><p>At step 5, a conclusion is formed regarding the presence of propaganda. It is proposed to use two ensemble approaches: binary (stacking) and discrete (bagging). In the binary approach to determining the level of propaganda, binary scores are obtained from the ensemble's neural networks, where a score of 0 means no propaganda and 1 means propaganda. 
In the discrete approach, the evaluation of each neural network is taken as a discrete value from 0 to 1, where 1 is the maximum manifestation of propaganda, and 0 is its absence.</p><p>In the case of stacking, a binary score is obtained, and a conclusion about the class of the post is formed according to the rules: "propaganda post", if more than 50% of the models produced a binary score of 1; "post without propaganda", if more than 50% of the models produced a binary score of 0; "suspicious post", if the neural network models have parity voting results (about half with scores of 0 and half with scores of 1).</p><p>To determine the level of propaganda in the case of a discrete evaluation, the limits of the three classes are determined by experts: the upper limit of the "post without propaganda" class and the lower limit of the "propaganda post" class.</p><p>After that, the total discrete assessment of the post's belonging to the specified classes is calculated (1):</p><formula xml:id="formula_0">Eval = k_1 ⋅ RNN_1 + k_2 ⋅ RNN_2 + .. + k_n ⋅ RNN_n, <label>(1)</label></formula><p>where RNN_1, RNN_2, .., RNN_n are the discrete evaluations of propaganda obtained by the individual neural network models, and the influence coefficients k_1, k_2, .., k_n are chosen empirically, depending on the focus of the process on detecting propaganda of the relevant kind.</p><p>Accordingly, the result of the proposed method is the level and percentage estimate of the presence of propaganda for each RNN model of the ensemble, as well as the generalized level and percentage estimate of the presence of propaganda in the researched post.</p></div>
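The two ensemble decision rules described above can be sketched in Python. This is a hedged illustration of formula (1) and the voting rules; the function names are assumptions, and the default class limits of 0.45 and 0.55 follow the values used later in the experiments.

```python
# Sketch of the two ensemble decision rules (stacking-style vote and
# bagging-style weighted sum). Names and default limits are illustrative.

def vote_binary(binary_scores):
    """Stacking rule: majority vote over per-model scores of 0 or 1;
    near-parity votes yield the "suspicious post" class."""
    ones = sum(binary_scores)
    zeros = len(binary_scores) - ones
    if ones * 2 > len(binary_scores):
        return "propaganda post"
    if zeros * 2 > len(binary_scores):
        return "post without propaganda"
    return "suspicious post"

def eval_discrete(scores, weights, lower=0.45, upper=0.55):
    """Bagging rule: weighted sum Eval = k1*RNN1 + .. + kn*RNNn (formula (1)),
    mapped to three classes via expert-chosen limits."""
    eval_score = sum(k * r for k, r in zip(weights, scores))
    if eval_score < lower:
        return eval_score, "post without propaganda"
    if eval_score > upper:
        return eval_score, "propaganda post"
    return eval_score, "suspicious post"

# Two models disagreeing binary-wise yield a "suspicious post":
label = vote_binary([1, 0])
# Discrete scores 0.9 and 0.8 with equal weights give Eval = 0.85:
score, cls = eval_discrete([0.9, 0.8], [0.5, 0.5])
```

Scores falling between the two limits land in the "suspicious post" class, mirroring the parity case of the binary vote.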
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>An ensemble of two neural network models was formed to conduct an experiment on the effectiveness of the developed method of detecting propaganda in Internet content. In particular, recurrent neural networks of the BiLSTM and GRU architectures were used <ref type="bibr" target="#b26">[27]</ref>. The selection of different neural network models is due to their specific capabilities for analyzing text sequences <ref type="bibr" target="#b27">[28]</ref>.</p><p>The architectures of the BiLSTM and GRU neural networks for detecting propaganda in Internet content are shown in Figure <ref type="figure" target="#fig_3">3</ref>. BiLSTM, through its hidden states, allows text sequences to be analyzed in both forward and reverse directions, which helps to overcome the limitations of traditional RNNs in detecting propaganda <ref type="bibr" target="#b28">[29]</ref>. GRU has gate mechanisms that allow more efficient management of gradients over time, making it more resistant to the vanishing gradient problem than classical RNNs and likewise able to effectively detect propaganda <ref type="bibr" target="#b29">[30]</ref>.</p><p>In the case of using the BiLSTM and GRU architectures to conduct the experiment, formula (1) takes the form:</p><formula xml:id="formula_3">Eval = k_1 ⋅ BiLSTM_r + k_2 ⋅ GRU_r, <label>(2)</label></formula><p>where k_1 is the influence coefficient of the discrete estimate obtained by the BiLSTM neural network, k_2 is the influence coefficient of the discrete estimate obtained by the GRU neural network, and BiLSTM_r and GRU_r are the discrete evaluations of propaganda detection by the BiLSTM and GRU neural networks, respectively. As can be seen from Table <ref type="table">1</ref>, GRU has higher accuracy than BiLSTM under the same parameters. 
Figure <ref type="figure" target="#fig_4">4</ref> shows the distribution of correctly classified texts by the GRU neural network (a) and the distribution of incorrectly classified texts (b).</p><p>3573 records were used as validation data, of which 1951 belonged to the "propaganda post" class, and 1622 belonged to the "post without propaganda" class. Of them, 1912 texts of the "propaganda post" class and 1565 texts of the "post without propaganda" class were correctly classified. 57 texts of the "post without propaganda" class were falsely classified as propaganda by the neural network, and 39 texts of the "propaganda post" class were falsely classified as non-propaganda. The overall accuracy on the validation data is 0.97. As can be seen from the numerical data, the class "post without propaganda" is classified somewhat worse than the class "propaganda post". As can be seen from Figure <ref type="figure" target="#fig_4">4a</ref> and Figure <ref type="figure">5a</ref>, the texts have a sufficiently high level of interclass separability, while as can be seen in Figure <ref type="figure" target="#fig_4">4b</ref> and Figure <ref type="figure">5b</ref>, the incorrectly classified data are concentrated closer to the central part of the graphs, which indicates the expediency of partitioning into 3 classes: "propaganda post", "post without propaganda", "suspicious post".</p></div>
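The gate mechanism that the text above credits with mitigating vanishing gradients can be illustrated with the standard GRU update equations. The NumPy sketch below shows one time step; it is a generic illustration, not the authors' trained Keras models, and the weight names are assumptions.

```python
import numpy as np

# One step of a standard GRU cell: update gate z, reset gate r,
# candidate state, then gated interpolation with the previous state.

def gru_step(x, h_prev, Wz, Uz, Wr, Ur, Wh, Uh):
    """Compute the next hidden state for input x and previous state h_prev."""
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(Wz @ x + Uz @ h_prev)              # update gate
    r = sigmoid(Wr @ x + Ur @ h_prev)              # reset gate
    h_cand = np.tanh(Wh @ x + Uh @ (r * h_prev))   # candidate state
    return (1.0 - z) * h_prev + z * h_cand         # gated interpolation

# With all-zero weights, z = r = 0.5 and the candidate is 0, so the
# new state is simply half of the previous one:
hidden, inputs = 2, 3
zeros_h, zeros_x = np.zeros((hidden, hidden)), np.zeros((hidden, inputs))
h = gru_step(np.ones(inputs), np.ones(hidden),
             zeros_x, zeros_h, zeros_x, zeros_h, zeros_x, zeros_h)
# h == [0.5, 0.5]
```

Because the update gate interpolates between the old and candidate states rather than overwriting, gradients can flow through many steps; BiLSTM achieves a related effect while additionally reading the sequence in both directions.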
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Practical Implementation</head><p>The software implementation of the method of detecting political propaganda is shown in Figure <ref type="figure" target="#fig_6">6</ref>. To study the effectiveness of the developed method of detecting political propaganda in Internet content, which includes the ensemble use of RNN models of the BiLSTM and GRU architectures, a software implementation of the method was created using the Python language. The interface of the software component responsible for the learning module of neural network models is shown in Figure <ref type="figure" target="#fig_6">6a</ref>. The interface of the software part responsible for the process of detecting propaganda by the developed method is shown in Figure <ref type="figure" target="#fig_6">6b</ref>.</p><p>With the introduction of the "suspicious post" category, the percentage of errors of the first and second kind decreased. When using the binary approach, 178 samples out of 3573 turned out to be incorrectly classified. However, of these 178 samples, only 71 are truly false; the remaining 107 were classified as "suspicious post". Of the 71 false samples, only 26 texts containing signs of political propaganda were falsely assigned to the "non-propaganda post" class, and 45 texts were falsely assigned to the "propaganda post" class. As for the results of the discrete approach, 130 samples turned out to be incorrectly classified, which in general did not worsen the statistics of the GRU neural network indicators; of these 130 samples, 37 texts containing signs of political propaganda were wrongly assigned to the "post without propaganda" class, and 52 texts were wrongly assigned to the "propaganda post" class.</p></div>
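As a sanity check, the headline metrics can be recomputed from the GRU validation counts reported earlier (1912 true positives, 1565 true negatives, 57 false positives, 39 false negatives, with "propaganda post" as the positive class). The helper below uses the standard metric definitions; it is an illustration, not the authors' evaluation code.

```python
# Recomputing classification metrics from the GRU validation counts
# reported above: TP=1912, TN=1565, FP=57, FN=39 (propaganda = positive).

def metrics(tp, tn, fp, fn):
    """Standard Accuracy, Precision, Recall and F1 from confusion counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = metrics(1912, 1565, 57, 39)
# acc ≈ 0.973, consistent with the 0.97 accuracy reported for GRU
```

The same function applied to the ensemble confusion counts reproduces the Table 2 figures for the bagging and stacking modes.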
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Results and Discussion</head><p>Accuracy, Precision, Recall and F1 metrics were used to study the effectiveness of detecting political propaganda in textual Internet content using the developed method <ref type="bibr" target="#b10">[11]</ref>. The values of the metrics for the discrete and binary variations of the method are shown in Table <ref type="table" target="#tab_2">2</ref>. Although the binary approach gave worse results for the Accuracy metric, it gave better results for the Precision, Recall and F1 metrics, while the Accuracy of the discrete approach practically did not deteriorate, though its Precision, Recall and F1 are somewhat inferior. For the experiment, the parameters of the discrete approach were as follows: k1 = 0.5, k2 = 0.5, l2 = 0.45, l4 = 0.55. The chart of metric values is shown in Figure <ref type="figure" target="#fig_7">7</ref>. However, the advantages of the discrete method are its flexibility and the ability to be customized depending on the task. Further experiments in this direction are promising.</p><p>As for the texts that were identified as suspicious, specific signs were found in them. For example: "let the l-t community calm down. They seem to be the same people (including nationalists), which means they can be joked about like everyone else. 2) Managers are satire, therefore fire. 3) Fortunately, Ukraine is not the USA, so you can joke about real graduates of the Ternopil Medka. P.S. The best joke where the musorina stops Best, takes a bribe. And then he comes back and takes it off in the heat of the moment. P.S.S. Where is the review of the sketch with Khmelnitsky and the Moscow ambassador? 
There's only one American subject worth anything!" (in original Ukrainian: «хай л-т спільнота успокояться . Вони ж начеб то такі самі люди (туди ж і націоналістів ) , а значить над ними можна жартувати як і над усіма іншими ..2) Менеджери то сатира тому агонь .3) Україн на щастя не США , тому можна жартувати і над реальними випускниками тернопольської медки.П. С. Найкращий жарт де мусоріна зупиняє Беста , бере хабар . А той потім повертається і знімає його на гарячому.П. С. С. Де огляд скетчу з Хмельницьким та послом московським ? Там один фак американський чтого тільки вартий !»). The text contains a number of trigger words, such as "nationalists" (in original Ukrainian: «націоналістів»), "Moscow" (in original Ukrainian: «московським»), and the context is similar to propaganda.</p><p>There were also erroneously assigned texts in the data set. For example, the following text in the dataset was marked as a "post without propaganda", but its content: "Russia does not claim the territory of Ukraine, if Ukraine did not attack Russia, there would be no military action. Rather, Russia was attacked by NATO countries on the territory of Ukraine, because the authorities of Ukraine sold the country and betrayed their people and agreed to fight for the interests of Biden, Macron, Sunak and Scholtz until the last living Ukrainian." (in original Ukrainian: «Росія не претендує на територію України, якби Україна не напала на Росію, жодних військових дій не було б. Вірніше на Росію напали країни НАТО на території України, тому що влада України продала країну і зрадила свій народ і погодилася воювати за інтереси Байдена, Макрона, Сунака та Шольца до останнього живого українця.») is outright propaganda, and both neural network models rated it highly for political propaganda, giving scores of 0.92 (GRU) and 0.97 (BiLSTM), for a total score of 0.944.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>A method for political propaganda detection in Internet content using a recurrent neural network models ensemble, which works with the Ukrainian language, has been proposed, and its approbation has been carried out. The method for detecting political propaganda in Internet content using an ensemble of recurrent neural network models is intended to identify and analyze potentially propagandistic or manipulative content spread on the Internet. The input data of the method are an ensemble of trained recurrent neural network models with tokenizers and a text message for analysis. The output data are the level and percentage of propaganda presence for each neural network model of the ensemble and in general. As part of the research: a training Ukrainian-language data set was prepared; test training of an ensemble of classifiers based on the BiLSTM and GRU neural network architectures was performed; software was developed that implements the created method for political propaganda detection in Internet content using a recurrent neural network models ensemble, and a study of its effectiveness was conducted.</p><p>The applied efficiency research of propaganda detection by an ensemble of classifiers based on the BiLSTM and GRU recurrent neural network architectures was conducted. The proposed approach is capable of detecting political propaganda by an ensemble of RNN models with Accuracy 0.97, Precision 0.973, Recall 0.981, and F1 0.976 in the bagging mode, and Accuracy 0.95, Precision 0.977, Recall 0.987, and F1 0.981 in the stacking mode. The developed method has a limitation: it works with text posts from 200 to 6300 symbols long. For shorter and longer texts, performance degradation is observed. 
Further research will be aimed at analyzing the dependence of the considered performance indicators of the proposed method on the features and parameters of the analyzed post, such as size, genre, and subject matter. A promising direction for further research is also an increase in the number of RNN models in the ensemble to improve performance indicators, and the specialization of models for certain types of propaganda.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Distribution of dataset elements by the number of characters.</figDesc><graphic coords="3,143.72,486.03,307.63,213.10" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Scheme of method for political propaganda detection in internet content.</figDesc><graphic coords="4,93.54,255.82,407.50,375.77" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: BiLSTM and GRU Neural Network Architectures for Detecting Propaganda.</figDesc><graphic coords="6,333.14,72.04,159.70,281.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4:</head><label>4</label><figDesc>Figure 4: Distribution of correctly classified texts (a) and incorrectly classified texts (b) by the GRU neural network.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 5:</head><label>5</label><figDesc>Figure 5 shows the distribution of correctly classified texts by the BiLSTM neural network (a) and the distribution of incorrectly classified texts (b). Out of 3573 validation posts, 1883 posts of the "propaganda post" class and 1572 texts of the "post without propaganda" class were correctly classified. 86 texts of the "post without propaganda" class were falsely classified as propaganda by the neural network, and 32 texts of the "propaganda post" class were falsely classified as non-propaganda. The overall accuracy on the validation data is 0.967.</figDesc><graphic coords="7,72.04,304.43,427.70,201.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 6:</head><label>6</label><figDesc>Figure 6: The main interface forms of the applied software implementation: (a) neural network learning module, (b) political propaganda detection module.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: The value of metrics for binary and discrete approaches.</figDesc><graphic coords="9,173.79,534.48,247.44,184.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The influence coefficients k_1, k_2, ..., k_n of the discrete estimates obtained by the neural networks RNN_1, RNN_2, ..., RNN_n, respectively.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1</head><label>1</label><figDesc>Dependence of metrics on neural network parameters. During the experiments, the neural networks were trained with different parameters (batch size, number of epochs); a comparison of the best models is shown in Table 1.</figDesc><table><row><cell>Parameters:</cell><cell cols="2">GRU</cell><cell cols="2">BiLSTM</cell></row><row><cell>Batch</cell><cell>32</cell><cell>64</cell><cell>32</cell><cell>64</cell></row><row><cell>Epoch</cell><cell>20</cell><cell>20</cell><cell>20</cell><cell>20</cell></row><row><cell>Metrics:</cell><cell></cell><cell></cell><cell></cell><cell></cell></row><row><cell>Accuracy</cell><cell>0.97</cell><cell>0.96</cell><cell>0.96</cell><cell>0.95</cell></row><row><cell>Loss</cell><cell>0.04</cell><cell>0.06</cell><cell>0.04</cell><cell>0.07</cell></row></table><note>r_BiLSTM and r_GRU denote the discrete evaluations of propaganda detection by the BiLSTM and GRU neural networks, respectively.</note></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Values of metrics for bagging and stacking</figDesc><table><row><cell>Approach</cell><cell>Accuracy</cell><cell>Precision</cell><cell>Recall</cell><cell>F1</cell></row><row><cell>Bagging</cell><cell>0.97</cell><cell>0.973</cell><cell>0.981</cell><cell>0.976</cell></row><row><cell>Stacking</cell><cell>0.95</cell><cell>0.977</cell><cell>0.987</cell><cell>0.981</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Online Propaganda Detection</title>
		<author>
			<persName><forename type="first">M</forename><surname>Last</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-24628-9_31</idno>
	</analytic>
	<monogr>
		<title level="m">Data Mining and Knowledge Discovery Handbook</title>
		<imprint>
			<date type="published" when="2023">2023</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="703" to="719" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Detecting Propaganda in News Articles Using Large Language Models</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">G</forename><surname>Jones</surname></persName>
		</author>
		<idno type="DOI">10.13140/RG.2.2.34115.17446</idno>
	</analytic>
	<monogr>
		<title level="j">Eng. Open Access</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1" to="12" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Propaganda Detection And Challenges Managing Smart Cities Information On Social Media</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">N</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Khan</surname></persName>
		</author>
		<idno type="DOI">10.4108/eetsc.v7i2.2925</idno>
	</analytic>
	<monogr>
		<title level="j">EAI Endorsed Transactions on Smart Cities</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="e2" to="e2" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Propaganda Detection Robustness Through Adversarial Attacks Driven by eXplainable AI</title>
		<author>
			<persName><forename type="first">D</forename><surname>Cavaliere</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gallo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stanzione</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-44067-0_21</idno>
	</analytic>
	<monogr>
		<title level="m">World Conference on Explainable Artificial Intelligence</title>
				<meeting><address><addrLine>Cham; Nature Switzerland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="405" to="419" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">How persuasive is AI-generated propaganda?</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Goldstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Grossman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Stamos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tomz</surname></persName>
		</author>
		<idno type="DOI">10.1093/pnasnexus/pgae034</idno>
	</analytic>
	<monogr>
		<title level="j">PNAS Nexus</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">2</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Exposing Propaganda: an Analysis of Stylistic Cues Comparing Human Annotations and Machine Classification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Faye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Icard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Casanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chanson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Maine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bancilhon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gadek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Gravier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Egre</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Understanding Implicit and Underspecified Language, Association for Computational Linguistics</title>
				<meeting>the Third Workshop on Understanding Implicit and Underspecified Language, Association for Computational Linguistics<address><addrLine>Malta</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="62" to="72" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Detecting Propaganda Techniques in English News Articles using Pre-trained Transformers</title>
		<author>
			<persName><forename type="first">M</forename><surname>Abdullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Altiti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Obiedat</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICICS55353.2022.9811117</idno>
	</analytic>
	<monogr>
		<title level="m">13th International Conference on Information and Communication Systems (ICICS)</title>
				<meeting><address><addrLine>Irbid, Jordan</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="301" to="308" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Recognition of Propaganda Techniques in Newspaper Texts: Fusion of Content and Style Analysis</title>
		<author>
			<persName><forename type="first">A</forename><surname>Horak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Sabol</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Herman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Baisa</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2024.124085</idno>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">251</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Fine-Grained Analysis of Propaganda in News Article</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D S</forename><surname>Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barron-Cedeno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Petrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/D19-1565</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing</title>
				<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="5640" to="5650" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Paper bullets: Modeling propaganda with the help of metaphor</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">B</forename><surname>Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dankers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Shutova</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.findings-eacl.35</idno>
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: EACL</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="472" to="489" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">SemEval-2020 Task 11: Detection of Propaganda Techniques in News Articles</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D S</forename><surname>Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barron-Cedeno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wachsmuth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Petrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourteenth Workshop on Semantic Evaluation, International Committee for Computational Linguistics</title>
				<meeting>the Fourteenth Workshop on Semantic Evaluation, International Committee for Computational Linguistics<address><addrLine>Barcelona</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1377" to="1414" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">The Imitation Game: Detecting Human and AI-Generated Texts in the Era of ChatGPT and BARD</title>
		<author>
			<persName><forename type="first">K</forename><surname>Hayawi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shahriar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Mathew</surname></persName>
		</author>
		<idno type="DOI">10.1177/01655515241227531</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Information Science</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Information technology for creation of semantic structure of educational materials</title>
		<author>
			<persName><forename type="first">O</forename><surname>Barmak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mazurets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Krak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kulias</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Smolarz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Azarova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gromaszek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Smailova</surname></persName>
		</author>
		<idno type="DOI">10.1117/12.2537064</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of SPIE -The International Society for Optical Engineering</title>
		<imprint>
			<biblScope unit="page">1117623</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">The Practice Investigation of the Information Technology Efficiency for Automated Definition of Terms in the Semantic Content of Educational Materials</title>
		<author>
			<persName><forename type="first">I</forename><surname>Krak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Barmak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mazurets</surname></persName>
		</author>
		<idno type="DOI">10.15407/pp2016.02-03.237</idno>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">1631</biblScope>
			<biblScope unit="page" from="237" to="245" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Multimodal Visual-Textual Object Graph Attention Network for Propaganda Detection in Memes</title>
		<author>
			<persName><forename type="first">P</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Piao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Cui</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-023-15272-6</idno>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="36629" to="36644" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Empowering Propaganda Detection in Resource-Restraint Languages: A Transformer-Based Framework for Classifying Hindi News Articles</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chaudhari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">V</forename><surname>Pawar</surname></persName>
		</author>
		<idno type="DOI">10.3390/bdcc7040175</idno>
	</analytic>
	<monogr>
		<title level="j">Big Data and Cognitive Computing</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">175</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Combating Propaganda Texts Using Transfer Learning</title>
		<author>
			<persName><forename type="first">A</forename><surname>Malak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Abujaber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Al-Qarqaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Abbott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hadzikadic</surname></persName>
		</author>
		<idno type="DOI">10.11591/ijai.v12.i2.pp956-965</idno>
	</analytic>
	<monogr>
		<title level="j">IAES International Journal of Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="956" to="965" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Method for Sentiment Analysis of Ukrainian-Language Reviews in E-Commerce Using RoBERTa Neural Network</title>
		<author>
			<persName><forename type="first">O</forename><surname>Zalutska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Molchanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sobko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mazurets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pasichnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Barmak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Krak</surname></persName>
		</author>
		<idno type="DOI">10.15407/jai2024.02.085</idno>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3387</biblScope>
			<biblScope unit="page" from="344" to="356" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bhattacharjee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1145/3655103.3655106</idno>
	</analytic>
	<monogr>
		<title level="j">SIGKDD Explor. Newsl.</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="14" to="21" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">A Survey of Bayesian Network Structure Learning</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Kitson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Constantinou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Guo</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10462-022-10351-w</idno>
	</analytic>
	<monogr>
		<title level="j">Artif Intell Rev</title>
		<imprint>
			<biblScope unit="volume">56</biblScope>
			<biblScope unit="page" from="8721" to="8814" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Abusive Speech Detection Method for Ukrainian Language Used Recurrent Neural Network</title>
		<author>
			<persName><forename type="first">I</forename><surname>Krak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Zalutska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Molchanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Mazurets</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bahrii</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sobko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Barmak</surname></persName>
		</author>
		<idno type="DOI">10.31891/2307-5732-2024-331-17</idno>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3387</biblScope>
			<biblScope unit="page" from="16" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Identification of Racial Propaganda in Tweets Using Sentimental Analysis Models: A Comparative Study</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Rathee</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-99-3716-5_28</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of Fourth Doctoral Symposium on Computational Intelligence</title>
				<meeting>Fourth Doctoral Symposium on Computational Intelligence</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">726</biblScope>
			<biblScope unit="page" from="327" to="341" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Hybrid Weakly Supervised Learning with Deep Learning Technique for Detection of Fake News from Cyber Propaganda</title>
		<author>
			<persName><forename type="first">L</forename><surname>Syed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alsaeedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Alhuri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">R</forename><surname>Aljohani</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.array.2023.100309</idno>
	</analytic>
	<monogr>
		<title level="j">Array</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="page">100309</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Graph-Based Multi-Information Integration Network with External News Environment Perception for Propaganda Detection</title>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zh</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yang</surname></persName>
		</author>
		<idno type="DOI">10.1108/IJWIS-12-2023-0242</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Web Information Systems</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="195" to="212" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Comparative Analysis of Various Data Balancing Techniques for Propaganda Detection in Lithuanian News Articles</title>
		<author>
			<persName><forename type="first">I</forename><surname>Rizgelienė</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Korvel</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-63543-4_15</idno>
	</analytic>
	<monogr>
		<title level="m">International Baltic Conference on Digital Business and Intelligent Systems</title>
				<meeting><address><addrLine>Cham; Nature Switzerland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="227" to="236" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Automated Multilingual Detection of Pro-Kremlin Propaganda in Newspapers and Telegram Posts</title>
		<author>
			<persName><forename type="first">V</forename><surname>Solopova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Popescu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Benzmüller</surname></persName>
		</author>
		<idno type="DOI">10.1007/s13222-023-00437-2</idno>
	</analytic>
	<monogr>
		<title level="j">Datenbank Spektrum</title>
		<imprint>
			<biblScope unit="page" from="5" to="14" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">An Improved Sap Flow Prediction Model Based on CNN-GRU-BiLSTM and Factor Analysis of Historical Environmental Variables</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wen</surname></persName>
		</author>
		<idno type="DOI">10.3390/f14071310</idno>
	</analytic>
	<monogr>
		<title level="j">Forests</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">7</biblScope>
			<biblScope unit="page">1310</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Sentiment Analysis using a CNN-BiLSTM Deep Model Based on Attention Classification</title>
		<author>
			<persName><forename type="first">W</forename><surname>Yue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Li</surname></persName>
		</author>
		<idno type="DOI">10.47880/inf2603-02</idno>
	</analytic>
	<monogr>
		<title level="j">Information</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="issue">3</biblScope>
			<biblScope unit="page" from="117" to="162" />
			<date type="published" when="2023">2023</date>
		</imprint>
		<respStmt>
			<orgName>International Information Institute</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">An Attribute-Wise Attention Model with BiLSTM for an Efficient Fake News Detection</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">R</forename><surname>Merryton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Augasta</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-023-16824-6</idno>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">83</biblScope>
			<biblScope unit="issue">13</biblScope>
			<biblScope unit="page" from="38109" to="38126" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Analysis and Detection of Political Fake News Using Deep Learning with High-Performance Hybrid Model</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Alsaedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A K</forename><surname>Aladhami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Alwhelat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">L</forename><surname>Alshami</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-981-99-8976-8_23</idno>
	</analytic>
	<monogr>
		<title level="m">International Conference on Intelligence Science</title>
				<meeting><address><addrLine>Singapore; Nature Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="261" to="271" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
