<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Leveraging Artificial Intelligence and Large Language Models for Fake Content Detection in Digital Media</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andriy</forename><surname>Matviychuk</surname></persName>
							<email>matviychuk@kneu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National Economic University named after Vadym Hetman</orgName>
								<address>
									<addrLine>Beresteysky Ave. 54/1</addrLine>
									<postCode>03057</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vasyl</forename><surname>Derbentsev</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National Economic University named after Vadym Hetman</orgName>
								<address>
									<addrLine>Beresteysky Ave. 54/1</addrLine>
									<postCode>03057</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Vitalii</forename><surname>Bezkorovainyi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National Economic University named after Vadym Hetman</orgName>
								<address>
									<addrLine>Beresteysky Ave. 54/1</addrLine>
									<postCode>03057</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tetiana</forename><surname>Kmytiuk</surname></persName>
							<email>kmytiuk.tetiana@kneu.edu.ua</email>
							<affiliation key="aff0">
								<orgName type="institution">Kyiv National Economic University named after Vadym Hetman</orgName>
								<address>
									<addrLine>Beresteysky Ave. 54/1</addrLine>
									<postCode>03057</postCode>
									<settlement>Kyiv</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oleksii</forename><surname>Hostryk</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Odesa National Economic University</orgName>
								<address>
									<addrLine>Preobrazhenskaya Str. 8</addrLine>
									<postCode>65082</postCode>
									<settlement>Odesa</settlement>
									<country key="UA">Ukraine</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Leveraging Artificial Intelligence and Large Language Models for Fake Content Detection in Digital Media</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">C5CB9F99C4A66D36566BDBC1750D530A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:49+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>fake news detection</term>
					<term>text classification</term>
					<term>risk information</term>
					<term>artificial intelligence</term>
					<term>deep learning</term>
					<term>natural language processing</term>
					<term>BERT-like models</term>
					<term>0000-0002-8911-5677 (A. Matviychuk)</term>
					<term>0000-0002-8988-2526 (V. Derbentsev)</term>
					<term>0000-0002-4998-8385 (V. Bezkorovainyi)</term>
					<term>0000-0001-5262-856X (T. Kmytiuk)</term>
					<term>0000-0001-6143-6797 (O. Hostryk)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The rapid proliferation of misinformation and fake news across online platforms has become a significant challenge, necessitating the development of advanced detection methods. This study explores the application of BERT-based models, including RoBERTa, DistilBERT, and XLM-RoBERTa, for the identification of fake news. Using diverse datasets (WELFake and PolitiFact), our approach involves fine-tuning these pre-trained models with minimal text preprocessing to preserve linguistic nuances. The models were evaluated based on their accuracy, F1-score, and computational efficiency, with experiments conducted on Google Colab using NVIDIA GPUs for acceleration. RoBERTa demonstrated the highest accuracy on the WELFake dataset, while DistilBERT achieved the best performance on the more concise PolitiFact dataset, highlighting the importance of matching models to dataset characteristics. XLM-RoBERTa, with its multilingual capabilities, showed strong generalization on diverse data but faced challenges with domain-specific tasks. The results underscore that model selection should be tailored to the specifics of the dataset and available computational resources, offering valuable insights for deploying effective fake news detection systems.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The proliferation of fake content in electronic media has become a critical challenge in our increasingly digitized world. From misinformation and disinformation to sophisticated deepfakes, the spread of false or misleading content poses significant threats to social cohesion, democratic processes, and individual decision-making. As the volume and complexity of digital content continue to grow exponentially, traditional methods of fact-checking and content verification struggle to keep pace, necessitating the development of more advanced, automated approaches to identifying fake content <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>In recent years, the field of Natural Language Processing (NLP) has witnessed remarkable advancements, particularly in the domain of Large Language Models (LLMs). These sophisticated Artificial Intelligence (AI) systems, trained on vast corpora of text data, have demonstrated an unprecedented ability to understand and generate human-like text, making them promising candidates for tackling the fake content detection challenge. Models like CNN (Convolutional Neural Network), BERT (Bidirectional Encoder Representations from Transformers), GPT (Generative Pre-trained Transformer), and their variants have set new benchmarks in various NLP tasks, including text classification, sentiment analysis, and question answering <ref type="bibr" target="#b2">[3]</ref><ref type="bibr" target="#b3">[4]</ref><ref type="bibr" target="#b4">[5]</ref><ref type="bibr" target="#b5">[6]</ref>.</p><p>The potential of LLMs in identifying fake content lies in their capacity to capture subtle linguistic patterns, contextual nuances, and semantic relationships that may be indicative of fabricated or misleading information. 
By leveraging pre-trained models and fine-tuning them on specific datasets related to fake content, researchers and developers can create powerful tools for automated content authenticity verification <ref type="bibr" target="#b6">[7]</ref>. This article explores the application of large language models, with a focus on BERT and its variations, in the detection of fake content across various electronic media platforms. We will delve into the process of adapting these pre-trained models to the specific task of fake content identification, discussing the methodology of retraining several last layers on custom datasets chosen for this purpose.</p><p>The approach of fine-tuning pre-trained models offers several advantages in the context of fake content detection. Firstly, it allows us to benefit from the rich language understanding already encoded in these models, which have been trained on diverse and extensive datasets. Secondly, it provides a more efficient and resource-effective method compared to training models from scratch, which is especially valuable when working with limited labelled data specific to fake content <ref type="bibr" target="#b7">[8]</ref>.</p><p>However, the application of LLMs in this domain is not without challenges. Issues such as model bias, the need for continual updating to keep pace with evolving disinformation tactics, and the ethical implications of automated content analysis must be carefully considered <ref type="bibr" target="#b8">[9]</ref>. Moreover, the effectiveness of these models can vary depending on the type and source of fake content, necessitating a nuanced approach to model selection and fine-tuning.</p><p>Throughout this article, we will examine the architecture of the chosen pre-trained models, detail the process of dataset preparation and model fine-tuning, and present a comprehensive analysis of the results obtained. 
We will also discuss real-world applications, limitations of the current approach, and potential future developments in this rapidly evolving field.</p><p>By exploring the use of LLMs for the detection of fake content, this study contributes to broader efforts to combat disinformation in the digital age. In an increasingly complex information environment, developing advanced AI-based tools for content verification is not only a technical challenge, but also an important measure to ensure the reliability and credibility of information.</p><p>It should be noted that the ongoing war in Ukraine provides a stark illustration of the power and danger of fake content in modern warfare and international relations. The onset of the full-scale phase of the hostilities has been marked by unprecedented levels of information warfare, with a flow of fakes and disinformation flooding social media platforms, news feeds, and messaging apps.</p><p>The dissemination of fake content in this context ranges from fabricated stories about Ukrainian aggression to doctored videos purporting to show military actions that never occurred. This barrage of false information has not only complicated the international situation, but has also affected public opinion, potentially influencing policy decisions and humanitarian aid efforts.</p><p>The situation highlights the critical need for reliable, rapid detection systems for fake content, as the consequences of unverified disinformation in such high-stakes geopolitical scenarios can be severe and far-reaching.</p><p>The objective of our study is to develop a set of fake news identification models based on pre-trained BERT models by fine-tuning the last few layers, and compare their performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Literature review</head><p>With advancements in Machine Learning (ML), Deep Learning (DL), and LLMs, researchers have developed various methods to identify false information accurately. Recent studies in this domain have explored different approaches, ranging from traditional machine learning models to advanced neural networks, hybrid models, and even explainable AI. Several overviews of ML and DL approaches to identifying fake content in digital media have been published recently <ref type="bibr" target="#b9">[10]</ref><ref type="bibr" target="#b10">[11]</ref><ref type="bibr" target="#b11">[12]</ref><ref type="bibr" target="#b12">[13]</ref>.</p><p>Harris et al. <ref type="bibr" target="#b9">[10]</ref> explore the emergence of information pollution and the infodemic resulting from the widespread use of digital technologies on online social networks, blogs, and websites. They highlight the negative consequences of the malicious broadcast of misleading content, including social unrest, economic impacts, and threats to national security and user safety. The authors critically evaluate existing fake news detection (FND) methods, emphasizing the lack of multidisciplinary approaches and theoretical considerations in current research. They argue for a more comprehensive analysis of FND through various fields such as linguistics, healthcare, and communication, while also examining the potential of pre-trained transformer models for multilingual, multidomain, and multimodal FND. The authors suggest future research directions that focus on large, diverse datasets and the integration of human cognitive abilities with AI to combat fake news and AI-generated content.</p><p>Hu et al. 
<ref type="bibr" target="#b10">[11]</ref> provide a comprehensive overview of fake news detection by analyzing its diffusion process through three intrinsic characteristics: intentional creation, heteromorphic transmission, and controversial reception. The authors classify existing detection approaches based on these characteristics and discuss the technological trends that are shaping this research field. They highlight the importance of designing effective and explainable detection mechanisms and offer insights into future research directions, helping to advance the understanding and development of fake news detection strategies.</p><p>Alghamdi et al. <ref type="bibr" target="#b11">[12]</ref> present a comparative study of different approaches to fake news detection. The authors evaluate the performance of traditional ML methods such as Support Vector Machines (SVMs) and Random Forests (RFs) alongside more advanced DL models like CNNs and Long Short-Term Memory (LSTM) networks. Their research highlights the superiority of deep learning models, particularly LSTM, in capturing the sequential nature of text data and achieving higher accuracy in fake news detection. The study also emphasizes the importance of feature selection and engineering in improving model performance, suggesting that a combination of content-based and metadata features can lead to more robust detection systems.</p><p>Hamed et al. <ref type="bibr" target="#b12">[13]</ref> offer a comprehensive review of fake news detection approaches, focusing on the challenges associated with datasets, feature representation, and data fusion. The authors critically analyze existing studies, highlighting the limitations of current datasets, which often lack diversity and real-world applicability. They discuss various feature representation techniques, from traditional bag-of-words models to more sophisticated word embeddings and contextual representations. 
The paper also explores the potential of multi-modal approaches that combine textual, visual, and social context information for more accurate fake news detection. The authors conclude by identifying key research gaps and suggesting future directions, including the need for more robust and diverse datasets, improved feature extraction methods, and the integration of explainable AI techniques to enhance the interpretability of FND models.</p><p>For example, in the article <ref type="bibr" target="#b13">[14]</ref>, the explainability of decision-making in the field of text analysis is ensured by the use of semiotic AI tools, namely fuzzy logic. In the article <ref type="bibr" target="#b14">[15]</ref>, explainable AI was implemented based on an artificial neural network, which provided the rationale for the formation of logical inference. Additional advantages in the interpretability of artificial intelligence can be provided by combining both such approaches, based on semiotic and biological principles of constructing AI systems and implemented in neuro-fuzzy hybrid systems, as shown in <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17]</ref>.</p><p>It is also worth noting the studies that have focused on comparing traditional ML and DL approaches for fake news detection. Thus, authors of the review <ref type="bibr" target="#b17">[18]</ref> compare the performance of such ML algorithms as Naïve Bayes, Logistic Regression, SVM, and RNNs. They noted that SVM and Naïve Bayes outperform the other models in terms of classification efficiency. This approach addresses the growing issue of misinformation on social media, where users often perceive content as reliable without verification.</p><p>In contrast, DL techniques have gained attention for their ability to automatically extract features from text data. Nasir et al. 
<ref type="bibr" target="#b18">[19]</ref> employed CNNs and RNNs for fake news detection, showing that CNNs excel at capturing local patterns in text, while RNNs, particularly LSTM models, are better at understanding sequential information. The combination of these two models led to superior results.</p><p>Tipper et al. <ref type="bibr" target="#b19">[20]</ref> provide a comprehensive review of video deepfake detection techniques using hybrid CNN-LSTM models. The paper systematically investigates feature extraction approaches and widely used datasets, while evaluating model performance across various datasets and identifying factors influencing detection accuracy. The authors here also compare CNN-LSTM models with non-LSTM approaches, discuss implementation challenges, and propose future research directions for improving deepfake detection.</p><p>Paka et al. <ref type="bibr" target="#b20">[21]</ref> introduced Cross-SEAN, a semi-supervised neural attention model for detecting COVID-19 fake news on Twitter, leveraging both labelled and unlabelled data. Their approach, which incorporates external knowledge from trusted sources, achieved significant performance improvement over seven state-of-the-art models. Despite some limitations, such as potential biases in external knowledge, the model shows promising results, particularly with its real-time application in the Chrome-SEAN extension, designed to label fake tweets and collect user feedback for continuous improvement.</p><p>Recent studies show that the use of LLMs such as GPT and BERT has led to more refined approaches in fake news detection <ref type="bibr" target="#b21">[22]</ref><ref type="bibr" target="#b22">[23]</ref><ref type="bibr" target="#b23">[24]</ref><ref type="bibr" target="#b24">[25]</ref>. These models enhance the ability to understand the context, semantics, and intricate relationships within news articles, which are essential for distinguishing between truthful and deceptive content. 
By leveraging deep learning techniques, LLMs have significantly improved the accuracy and effectiveness of fake news detection systems, making them more robust in combating misinformation in the digital landscape.</p><p>For instance, Radhi et al. <ref type="bibr" target="#b21">[22]</ref> examine the application of DL methods, including transformer-based models like BERT, to detect fake news. Their research highlights the growing impact of misleading content on social media platforms such as Facebook, Twitter, Instagram, and WhatsApp, and emphasizes the urgency of addressing the problem of fake news, particularly in the context of psychological warfare and revenue-driven clickbait.</p><p>Kaliyar et al. <ref type="bibr" target="#b22">[23]</ref> propose FakeBERT, a BERT-based model that combines BERT with a CNN to handle ambiguity in news content. This model achieves a remarkable accuracy of 98.90%, outperforming existing models by using bidirectional training to capture semantic and long-distance dependencies, thus improving classification performance.</p><p>Similarly, Alnabhan and Branco <ref type="bibr" target="#b23">[24]</ref> present BERTGuard, a multi-domain fake news detection system that employs a two-tiered approach for domain classification and domain-specific news validity verification. This system demonstrates its effectiveness through rigorous testing on various datasets and incorporates strategies to mitigate class imbalance, enhancing its reliability and generalizability.</p><p>Dhiman et al. <ref type="bibr" target="#b24">[25]</ref> propose a novel framework called GBERT, combining GPT and BERT to tackle the problem of fake news detection. The model's high performance, achieving 95.30% accuracy and a 96.23% F1 score, underscores its potential to address the challenges posed by fake news in the digital era.</p><p>Overall, these diverse approaches underline the evolving nature of research in fake news detection. 
While traditional ML models still provide a foundation, the rapid advancements in deep learning, LLMs, and hybrid methods have expanded the capabilities to combat disinformation. The integration of explainable AI and adversarial training techniques ensures that these models remain both transparent and robust, helping to build trust in automated fake news detection systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">BERT-like models</head><p>In our study, the methodology focuses on leveraging a pre-trained BERT model (introduced by Devlin et al. in 2018 <ref type="bibr" target="#b5">[6]</ref>) and its modifications for fake news detection. BERT is a powerful transformer-based model known for its deep bidirectional nature, which allows it to understand the context of words in a text by looking both to the left and right of a given token.</p><p>Due to its ability to encode rich semantic information from large text corpora, BERT has been a popular choice for various NLP tasks, including text classification, sentiment analysis, and fake news detection. Here, the goal is to fine-tune BERT for classifying news articles into "real" or "fake" categories, aiming for accurate detection of misleading information.</p><p>At its core, BERT utilises a multi-layer bidirectional Transformer encoder. This bidirectional approach enables the model to consider context from both directions simultaneously, which is in stark contrast to traditional left-to-right language models. The standard model (BERT base) comprises 12 transformer layers (encoders), 12 attention heads, and about 110 million parameters. The larger variant (BERT large) has 24 layers, 16 attention heads, and 340 million parameters (Fig. <ref type="figure" target="#fig_0">1</ref>). BERT consists of the following main components:</p><p>1. Tokenizer. BERT uses a WordPiece tokenizer that splits text into tokens, including subwords, to effectively handle rare words. This helps the model manage vocabulary more efficiently and capture the meaning of morphologically complex words.</p></div>
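<div xmlns="http://www.tei-c.org/ns/1.0"><p>The greedy longest-match-first rule behind WordPiece can be sketched as follows (a toy vocabulary and a single-word split only; the real BERT tokenizer additionally normalizes text and uses a vocabulary of roughly 30,000 units):</p><p>
```python
def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first WordPiece split of a single word.
    Non-initial subwords carry the '##' continuation prefix, as in BERT."""
    tokens, start = [], 0
    while start != len(word):
        end, match = len(word), None
        while end > start:
            piece = word[start:end] if start == 0 else "##" + word[start:end]
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:
            return ["[UNK]"]  # no known subword covers this span
        tokens.append(match)
        start = end
    return tokens

# Toy vocabulary; a real BERT vocabulary holds about 30,000 WordPiece units.
vocab = {"un", "##believ", "##able", "play", "##ing"}
tokens = wordpiece_tokenize("unbelievable", vocab)  # ['un', '##believ', '##able']
```
</p></div>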
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Embeddings. BERT embeddings include:</head><p>• Token embeddings: vectors that represent individual tokens.</p><p>• Positional embeddings: encodes the position of each token in the sequence to capture word order.</p><p>• Segment embeddings: differentiate between segments (sentences) within the input sequence, enabling the model to distinguish sentences in tasks like question answering.</p></div>
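<div xmlns="http://www.tei-c.org/ns/1.0"><p>The element-wise summation of the three embedding types can be illustrated with a minimal PyTorch sketch (tiny, illustrative dimensions; BERT base uses a hidden size of 768):</p><p>
```python
import torch
import torch.nn as nn

class ToyBertEmbeddings(nn.Module):
    """Toy version of BERT's input layer: token, positional, and segment
    embeddings are summed element-wise (tiny sizes; BERT base uses 768)."""
    def __init__(self, vocab_size=100, max_len=16, hidden=8):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, hidden)
        self.pos = nn.Embedding(max_len, hidden)
        self.seg = nn.Embedding(2, hidden)

    def forward(self, token_ids, segment_ids):
        positions = torch.arange(token_ids.size(1)).unsqueeze(0)
        return self.tok(token_ids) + self.pos(positions) + self.seg(segment_ids)

emb = ToyBertEmbeddings()
ids = torch.tensor([[1, 5, 7, 2]])   # one 4-token sequence
segs = torch.zeros_like(ids)         # every token belongs to segment A
out = emb(ids, segs)                 # shape: (1, 4, 8)
```
</p></div>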
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Encoders. BERT consists of multiple layers of encoders, each containing:</head><p>• Multi-head Self-Attention: this mechanism allows the model to focus on different parts of the input sequence, capturing dependencies between all tokens regardless of their distance from each other.</p><p>• Feed-Forward Networks (FFN): each attention layer is followed by an FFN that applies nonlinear transformations, enhancing the model's capacity to learn complex patterns.</p><p>• Residual connections and layer normalization: these are used to stabilize training and improve gradient flow through the network. The input representation in BERT is a combination of three embeddings: token embeddings, segment embeddings, and position embeddings. Token embeddings represent individual words or subwords, segment embeddings differentiate between pairs of sentences, and position embeddings provide information about the token's position in the sequence. These embeddings are summed to produce the final input representation.</p><p>BERT's transformer layers consist of multi-head self-attention mechanisms and feed-forward neural networks. The self-attention mechanism allows the model to assign varying importance to different words in the input when processing each word, capturing complex relationships within the text. The feed-forward networks then refine this processed information, applying non-linear transformations that enhance the model's capacity to recognize and learn complex patterns.</p><p>BERT's pre-training process involves two novel unsupervised tasks. The first is Masked Language Modeling (MLM), where the model attempts to predict randomly masked tokens in the input sequence (Fig. <ref type="figure" target="#fig_1">2</ref>). This task forces the model to consider context from both directions, enhancing its bidirectional understanding. 
The second task is Next Sentence Prediction (NSP), where the model learns to predict whether two sentences naturally follow each other, fostering a grasp of the relationships between sentences.</p><p>One of BERT's key strengths is its ability to generate contextualised word embeddings. Unlike static word embeddings, BERT's representations for a given word can vary depending on the surrounding context, capturing nuanced word usage and polysemy effectively.</p><p>The fine-tuning process allows BERT to be adapted to a wide range of downstream tasks. By adding task-specific layers to the pre-trained BERT model and fine-tuning on task-specific data, researchers can achieve state-of-the-art results on various NLP tasks, including question answering, sentiment analysis, text classification, summarisation, and named entity recognition.</p><p>BERT's impact extends beyond its architecture. It has sparked a new paradigm in NLP, demonstrating the power of unsupervised pre-training on large corpora followed by supervised finetuning. This approach has led to the development of numerous BERT variants and inspired new research directions in contextual language modeling.</p><p>Since its release, various modifications and improvements have been introduced to address specific limitations and further enhance the model's performance on a range of NLP tasks. Some of the notable modifications include RoBERTa, DistilBERT, and XLM-RoBERTa, each designed with unique features to optimize BERT's efficiency, scalability, and multilingual capabilities.</p><p>RoBERTa (a Robustly Optimized BERT pretraining Approach by Liu et al. <ref type="bibr" target="#b25">[26]</ref>) was developed to address some of the original training challenges in BERT. 
RoBERTa builds upon the BERT architecture by using more training data and a larger number of training steps, along with other optimizations like removing the Next Sentence Prediction objective.</p><p>Instead of focusing on the relationships between sentence pairs, RoBERTa concentrates purely on the Masked Language Modeling objective, which has been shown to be more effective for a wide range of downstream NLP tasks. Additionally, RoBERTa utilizes dynamic masking, which allows for different masked tokens during each epoch, offering a more diverse learning experience. As a result, RoBERTa has consistently outperformed BERT on various benchmarks, making it a preferred choice for tasks like text classification and sentiment analysis.</p><p>DistilBERT (by Sanh et al. <ref type="bibr" target="#b26">[27]</ref>) is another significant modification aimed at making BERT lighter and faster while retaining most of its performance capabilities. Developed using a technique called knowledge distillation, DistilBERT is approximately 60% of the size of BERT, making it faster during both training and inference. In knowledge distillation, a smaller model (the student model) is trained to reproduce the behavior of a larger pre-trained model (the teacher model).</p><p>This process enables the student model, DistilBERT in this case, to learn a more compact representation of the language while preserving 97% of BERT's language understanding abilities. DistilBERT's smaller size makes it particularly suitable for scenarios where computational resources are limited or where real-time performance is critical, such as in mobile or edge computing applications.</p><p>XLM-RoBERTa (Cross-lingual Language Model) is an extension of the BERT architecture designed for multilingual tasks, building on the success of both RoBERTa and the earlier XLM. 
XLM-RoBERTa is pre-trained on a large-scale multilingual corpus covering over 100 languages, making it capable of handling cross-lingual understanding and translation tasks more effectively.</p><p>The model learns representations that are common across languages, which allows it to perform well on tasks involving low-resource languages by transferring knowledge from high-resource languages.</p><p>Proposed by Lan et al. <ref type="bibr" target="#b27">[28]</ref>, ALBERT (A Lite BERT) addresses BERT's limitations of model size and training time. It introduces parameter-reduction techniques like factorized embedding parameterization and cross-layer parameter sharing. Despite having fewer parameters, ALBERT achieves state-of-the-art results on several benchmarks while being more efficient.</p><p>Developed by Clark et al. <ref type="bibr" target="#b28">[29]</ref>, ELECTRA (Efficiently Learning an Encoder that Classifies Token Replacements Accurately) introduces a new pre-training task where the model learns to distinguish between real input tokens and fake tokens generated by a small masked language model. This approach is more sample-efficient than BERT's masked language modeling, allowing ELECTRA to achieve strong performance with less computation.</p><p>Each of these modifications brings unique strengths to the BERT family of models. RoBERTa's focus on robust training has made it highly accurate, but it comes with increased computational requirements due to the larger dataset and training time. DistilBERT addresses the issue of computational expense by providing a smaller, faster alternative, making it a practical option for deployment in environments where resources are constrained. XLM-RoBERTa, meanwhile, opens the door to advanced multilingual applications, offering a model that can understand and process a variety of languages effectively.</p><p>In this paper we used both BERT base model and its modifications (RoBERTa, DistilBERT, and XLM-RoBERTa).</p></div>
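<div xmlns="http://www.tei-c.org/ns/1.0"><p>All of the models compared here are available as pre-trained checkpoints in the Hugging Face transformers library; a minimal loading sketch might look as follows (the checkpoint names are the standard public ones and are our assumption, not the authors' stated configuration):</p><p>
```python
# The four model families compared in this study, mapped to standard
# public Hugging Face checkpoints (an assumption for illustration).
CHECKPOINTS = {
    "BERT": "bert-base-uncased",
    "RoBERTa": "roberta-base",
    "DistilBERT": "distilbert-base-uncased",
    "XLM-RoBERTa": "xlm-roberta-base",
}

def load_classifier(name):
    """Return (tokenizer, model) with a fresh two-class fake/real head.
    The transformers import is deferred so the mapping is usable without it."""
    from transformers import AutoTokenizer, AutoModelForSequenceClassification
    ckpt = CHECKPOINTS[name]
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    model = AutoModelForSequenceClassification.from_pretrained(ckpt, num_labels=2)
    return tokenizer, model
```
</p></div>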
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">BERT-based classification pipeline</head><p>The pipeline starts with data collection and pre-processing, where text data is cleaned and tokenized. This involves removing any special characters, URLs, and unnecessary whitespace. Tokenization is done using the BERT tokenizer, which converts the text into a format that the BERT model can handle, specifically by converting words into tokens, adding special tokens like [CLS] and [SEP], and creating attention masks that help the model focus on relevant parts of the input data.</p><p>The data is then split into training, validation, and test sets to ensure that the model can be properly evaluated. The pre-trained BERT-like model is fine-tuned on the training dataset. Fine-tuning involves using the general knowledge gained during the model's initial pre-training on a large corpus and tailoring it to the specific task of detecting fake news. In this study, we freeze all layers of the model except the last few encoders and the softmax classifier, which are retrained on our dataset (Fig. <ref type="figure">3</ref>).</p></div>
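<div xmlns="http://www.tei-c.org/ns/1.0"><p>The freezing step can be sketched generically as follows (toy linear layers stand in for the 12 transformer encoders; the principle of keeping only the last few encoders and the classifier head trainable is the same):</p><p>
```python
import torch.nn as nn

def freeze_for_finetuning(encoder_layers, classifier, n_unfrozen=2):
    """Freeze every encoder layer except the last n_unfrozen and keep the
    classifier head trainable, mirroring the partial fine-tuning described."""
    for layer in encoder_layers:
        for p in layer.parameters():
            p.requires_grad = False
    for layer in encoder_layers[-n_unfrozen:]:
        for p in layer.parameters():
            p.requires_grad = True
    for p in classifier.parameters():
        p.requires_grad = True

# Toy stand-in: a 12-layer "encoder" stack plus a 2-class classifier head.
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(12)])
head = nn.Linear(8, 2)
freeze_for_finetuning(layers, head, n_unfrozen=2)
trainable = sum(p.numel() for p in list(layers.parameters()) + list(head.parameters())
                if p.requires_grad)  # only the last 2 layers and the head count
```
</p></div>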
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 3: RoBERTa fine-tuning model</head><p>The model is trained on the labelled dataset, adjusting its weights using the cross-entropy loss and the AdamW optimizer. During training, the learning rate is carefully controlled using a scheduler, starting from a small value to ensure stable updates and avoid overfitting.</p><p>Our study applies early stopping based on the validation loss to avoid training for too many epochs, which can lead to overfitting. This way, the model is more likely to generalize well on unseen data. The evaluation of the fine-tuned model is performed using metrics such as accuracy and F1-score.</p><p>These metrics provide a comprehensive understanding of the model's performance in detecting fake news. The F1-score, as the harmonic mean of precision and recall, provides a balanced measure of performance and insight into the types of errors the model may make.</p><p>Since BERT was pre-trained on a large corpus, it can leverage the linguistic patterns learned during this general training, thereby requiring less labelled data for the specific task of fake news detection. This is especially advantageous given the scarcity of high-quality labelled fake news data. Fine-tuning allows the model to adapt to the nuances of fake news language without starting from scratch, making this approach efficient and effective.</p></div>
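The early-stopping rule on the validation loss can be sketched as a small helper. This is a generic patience-based criterion, not the authors' exact implementation, and the per-epoch validation losses are hypothetical:

```python
class EarlyStopping:
    """Stop training when the validation loss stops improving."""
    def __init__(self, patience=2, min_delta=0.0):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        # Returns True when training should stop.
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience

stopper = EarlyStopping(patience=2)
losses = [0.60, 0.45, 0.44, 0.46, 0.47]  # hypothetical validation losses
stopped_at = next(i for i, l in enumerate(losses) if stopper.step(l))
# training stops at epoch index 4: two epochs with no improvement over 0.44
```

In a real loop, `step` would be called once per epoch after evaluating on the validation split, and the best checkpoint would be restored before final testing.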
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Datasets and software implementation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Datasets</head><p>To effectively train and test BERT-type models for fake news detection, we utilized a variety of datasets that are widely recognized in the field. These datasets (FakeNewsNet, in particular its PolitiFact subset, and the WELFake dataset) offer diverse contexts and sources of fake news, allowing for robust model evaluation. Each dataset has unique characteristics that contribute to a comprehensive training and testing process, helping to ensure that the models generalize well across different types of misinformation.</p><p>FakeNewsNet (PolitiFact) <ref type="bibr" target="#b29">[30]</ref> is a well-established dataset that combines news content with social context to facilitate the study of fake news detection. It includes news articles verified by the PolitiFact fact-checking website, where each article is classified as either "fake" or "real" based on professional verification. This dataset not only includes the text of the news articles but also metadata such as user engagement and social media activity around each news item.</p><p>The social context allows models to capture the diffusion patterns of fake news, which is crucial for understanding how misinformation spreads online. By training BERT-type models on the textual content supplemented by this social metadata, detection can take into account both linguistic features and social spread patterns.</p><p>We used the PolitiFact subset, which contains around 1,200 records with an average text length of about 15 words, offering a moderate level of detail for each news item. This shorter length allows the BERT models to focus on concise stylistic and contextual indicators of fake news.</p><p>WELFake (Web Evaluated Fake News) <ref type="bibr" target="#b30">[31]</ref> is significantly larger, with over 70,000 news articles. Its structure is simpler, focusing primarily on the text and title of the articles and a label field that classifies each article as "fake" or "real". 
This minimal structure makes WELFake an ideal dataset for large-scale training, enabling models to learn from a vast variety of textual examples. Despite its large size, the dataset is not entirely balanced, with a higher number of fake news records compared to real news. This imbalance requires careful consideration during model training, such as using class weighting or oversampling techniques to prevent the model from overfitting to the majority class. The average length of texts in WELFake is around 540 words, which provides enough data for models to learn linguistic patterns while ensuring efficient training time due to relatively short text sequences.</p><p>By using these datasets, we were able to train BERT-type models with a diverse range of textual inputs and associated features. This diversity ensures that the models are not only capable of recognizing the typical writing styles and topics of fake news but also understand how false information is often framed within a broader social context. Moreover, combining datasets with large-scale examples like WELFake and more specific examples like those in PolitiFact ensures a balance between the volume of data and the richness of context. This approach helps improve the generalization abilities of the models, making them better suited to real-world applications where misinformation can take on many different forms and reach audiences through various channels.</p><p>Thus, after removing duplicates, the PolitiFact dataset contains about 1,000 short records with an average length of 15 tokens, while the WELFake dataset contains about 50,000 records with an average length of 540 tokens. Such diversification allows us to test the performance of BERT-like models in fundamentally different conditions.</p></div>
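The class-weighting technique mentioned above for the imbalanced WELFake labels can be sketched with a generic inverse-frequency heuristic (analogous to scikit-learn's "balanced" option). The label counts are hypothetical; in a PyTorch pipeline the resulting weights would typically be passed to a weighted cross-entropy loss:

```python
from collections import Counter

def balanced_class_weights(labels):
    """Inverse-frequency weights: n_samples / (n_classes * count_c).
    Passing these to a weighted loss (e.g. CrossEntropyLoss(weight=...))
    counteracts class imbalance by up-weighting the minority class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Hypothetical imbalanced split: three fake articles (1) per real one (0)
weights = balanced_class_weights([1, 1, 1, 0])
# weights -> {1: 0.666..., 0: 2.0}  (minority class weighted up)
```

Oversampling the minority class until the counts match is the alternative mentioned in the text; both aim to keep the gradient signal from being dominated by the majority class.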
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Software</head><p>In our study, we utilized a variety of software tools and libraries to train, test, and analyze the BERT-type models in identifying fake news. These tools facilitated the entire pipeline from data preprocessing and model training to evaluation and visualization of results. The combination of these software solutions allowed us to leverage state-of-the-art techniques and streamline our workflow.</p><p>PyTorch served as the core library for building, training, and fine-tuning our deep learning models. As an open-source deep learning framework, PyTorch offers dynamic computational graphs and an intuitive API, making it suitable for implementing complex BERT-like models. It also provided robust support for GPU acceleration, which was crucial for training large language models efficiently on the substantial datasets we used. The ease of integrating PyTorch with pre-trained models through the Hugging Face Transformers library allowed us to fine-tune these models specifically for the task of fake news detection.</p><p>Pandas played a critical role in managing and pre-processing our datasets. Given the size and complexity of the datasets, Pandas' capabilities for data manipulation and analysis were invaluable. We used it to load, filter, and clean data, ensuring that the text fields were formatted properly for input into the models. Pandas also allowed us to explore dataset characteristics, such as class distribution and text length, which helped guide our approach to model training and evaluation. Its versatility in handling various data formats, including CSV and JSON, streamlined the process of preparing our data.</p><p>Python 3.8 served as the primary programming language for this project, owing to its simplicity, versatility, and extensive ecosystem of libraries. Python's flexibility enabled us to integrate diverse tools seamlessly, from data pre-processing with Pandas to model training with PyTorch. 
Additionally, Python's wide range of libraries for data visualization, like Matplotlib and Seaborn, made it easier to conduct exploratory analysis, while its strong community support and documentation further facilitated smooth implementation of cutting-edge methods.</p><p>Scikit-Learn was used for a range of pre-processing and evaluation tasks. This included splitting datasets into training and testing samples, calculating various performance metrics like accuracy, precision, recall, and F1-score, and generating confusion matrices for deeper insights into model predictions. Scikit-Learn's easy-to-use API allowed us to quickly compare different models and preprocessing strategies, ensuring that we could iteratively refine our approach to achieve the best results.</p><p>Seaborn and Matplotlib were essential for data visualization throughout the project. Seaborn, with its high-level interface, was used to create aesthetically pleasing and informative plots, such as histograms of text lengths and confusion matrices, which helped us understand the distribution of data and model performance at a glance. Matplotlib provided additional customization capabilities, allowing us to tailor visualizations to our specific needs, such as adjusting axis scales or highlighting specific data points.</p><p>Additionally, we leveraged the Hugging Face Transformers library to access pre-trained BERT-type models and adapt them for our task. This library enabled us to import and fine-tune models with minimal effort, allowing us to focus on the nuances of the fake news detection problem rather than the complexities of implementing models from scratch. The ease of integrating these models with PyTorch through the Transformers library made it possible to quickly experiment with different architectures and configurations.</p><p>We leveraged Google Colab as the primary development environment for training and evaluating our models. 
Google Colab is a cloud-based platform that offers a Jupyter notebook interface, providing a powerful and convenient setting for executing Python code and running deep learning experiments. One of the key advantages of Google Colab is its access to free GPUs and TPUs, which significantly accelerated the training process for our BERT-type models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental setup</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Final hyperparameter settings</head><p>The final hyperparameter settings are presented in Table <ref type="table" target="#tab_0">1</ref>. These settings describe our setup for fine-tuning pre-trained BERT-type models. A batch size of 16 provides a balance between memory usage and training speed that is suitable for most GPUs. Input sequences are limited to 128 (64 for PolitiFact) tokens, which is sufficient for our datasets. The model is trained for 5 epochs, allowing it to train on the data multiple times without overfitting.</p><p>The learning rate is small, allowing careful adjustment of the pre-trained weights. For optimization, the AdamW optimizer is used, an improved version of Adam that properly implements weight decay. CrossEntropyLoss serves as a loss function that is standard for many classification tasks in natural language processing. To prevent overfitting, a dropout rate of 0.3 is applied, randomly deactivating 30% of neurons during training. The optimization process takes advantage of GPU acceleration, in particular NVIDIA L4, which significantly speeds up the computation compared to CPU. Google Colab serves as a training environment, offering free access to GPUs for model development.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Text preprocessing</head><p>Text preprocessing for our fake content identification task was intentionally minimal, leveraging the robust capabilities of BERT-type models. These models are pre-trained on vast corpora of unrefined text, allowing them to handle raw input effectively. This approach preserves the natural structure and nuances of the text, which can be crucial for detecting subtle indicators of fake content.</p><p>The primary preprocessing step was performed by the tokenizer specific to each BERT model variant. These tokenizers are designed to break down text into subword units, handling out-of-vocabulary words and maintaining semantic relationships. The tokenization effectively translates raw text into a format that BERT models can process, without losing important linguistic information.</p><p>Our preprocessing pipeline focused mainly on preparing the data structure for input into the model. This included binary encoding of labels, transforming the classification targets into a format suitable for machine learning. In addition, duplicates and records with missing values were removed.</p><p>We also concatenated various fields related to each piece of content, such as the author's name, the main text of the article, its title, and the URL. This concatenation allows the model to consider all relevant information simultaneously, potentially capturing relationships between different aspects of the content that might indicate its authenticity or lack thereof.</p><p>By keeping preprocessing minimal, we aimed to fully leverage the sophisticated language understanding capabilities of BERT models. This approach allows the models to work with text that closely resembles what they encountered during the pre-training phase, potentially improving their ability to detect nuanced signals of fake content across various writing styles and formats.</p></div>
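The preparation steps described above (deduplication, removal of records with gaps, binary label encoding, and field concatenation) might be sketched in Pandas roughly as follows; the column names and records are hypothetical, not the datasets' actual schema:

```python
import pandas as pd

# Hypothetical records mirroring the described preprocessing:
# a verbatim duplicate and a row with a missing field are included on purpose.
df = pd.DataFrame({
    "author": ["A. Smith", "A. Smith", None],
    "title":  ["Headline", "Headline", "Other"],
    "text":   ["Body", "Body", "Body 2"],
    "url":    ["http://ex.com/1", "http://ex.com/1", "http://ex.com/2"],
    "label":  ["fake", "fake", "real"],
})

df = df.drop_duplicates().dropna()                  # remove duplicates and gaps
df["label"] = (df["label"] == "fake").astype(int)   # fake = positive class (1)
# Concatenate all content fields into a single input string per record
df["input"] = df[["author", "title", "text", "url"]].agg(" ".join, axis=1)
# one row survives: the duplicate and the row with a missing author are gone
```

The resulting `input` column is what would be fed to the tokenizer, letting the model see author, title, body, and URL jointly.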
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Evaluation metrics</head><p>To compare the classification performance of the proposed models we used the Accuracy metric and the F1-score. Accuracy characterizes the share of correct answers of the classifier and can be calculated as</p><formula xml:id="formula_0">Accuracy = (TP + TN) / (P + N) · 100%,</formula><p>where TP and TN are the number of correctly estimated positive (articles with fake news) and negative (articles without fakes) classes, respectively; P and N are the actual number of representatives of each class, respectively.</p><p>The F1-score provides a balanced measure of a model's performance, particularly when the dataset is imbalanced, i.e. when the number of positive and negative instances is significantly different. F1-score is calculated as:</p><formula xml:id="formula_1">F1 = 2 · Precision · Recall / (Precision + Recall), where Precision = TP / (TP + FP), Recall = TP / (TP + FN),</formula><p>and FP, FN are false positive (predicting fake news when there is none) and false negative (assessing news as real when it is fake) classes, respectively. Note that since we are primarily interested in the correct identification of fakes, we have chosen them as a positive class (label 1).</p><p>We also calculated the Confusion Matrix, which provides a comprehensive view of the model's performance, allowing for the calculation of various metrics and providing insights into the types of errors the model is making. It is particularly useful for understanding the trade-offs between types of misclassifications and for fine-tuning the model to meet specific performance criteria.</p></div>
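These metrics can be computed with Scikit-Learn as in the following sketch. The label vectors are hypothetical; fake news is treated as the positive class (label 1), as in the text:

```python
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score

# Hypothetical labels: 1 = fake (positive class), 0 = real
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]

acc = accuracy_score(y_true, y_pred)        # (TP + TN) / (P + N)
f1 = f1_score(y_true, y_pred, pos_label=1)  # 2*Precision*Recall / (Precision + Recall)
# For binary labels [0, 1], ravel() unpacks the matrix as tn, fp, fn, tp
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
# here acc = 0.75, f1 = 0.75, with tp = 3, tn = 3, fp = 1, fn = 1
```

Reading the four confusion-matrix cells directly is what makes the trade-off between false positives (real news flagged as fake) and false negatives (fakes slipping through) visible.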
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4.">Empirical results</head><p>The results presented in Tables <ref type="table" target="#tab_2">2 and 3</ref> highlight the performance of various BERT-type models on the WELFake and PolitiFact datasets, providing insights into their strengths and weaknesses in the context of fake news detection. The metrics of accuracy and F1-score, along with training time and the number of trainable parameters, provide a basis for a comprehensive comparison of these models. RoBERTa emerges as the best classifier with an accuracy of 0.998 and an F1-score of 0.994 on the WELFake dataset. This suggests that RoBERTa is particularly adept at capturing the nuances of the language used in fake news, allowing it to make very accurate predictions. The relatively short training time of 2025 seconds further highlights the effectiveness of RoBERTa, indicating that it can quickly process and adapt to the dataset, making it a strong candidate for real-world applications where both accuracy and speed are important.</p><p>XLM-RoBERTa also demonstrated strong performance with an accuracy of 0.994 and an F1-score of 0.991. XLM-RoBERTa's multilingual pre-training allows it to effectively handle the diverse linguistic features present in the WELFake dataset. However, despite the high accuracy, it took slightly longer to train (2048 seconds) compared to RoBERTa.</p><p>BERT base and DistilBERT each achieved an accuracy of 0.985 with a corresponding F1-score. While they performed well, their accuracy did not reach that of RoBERTa or XLM-RoBERTa. This suggests that while the basic BERT architecture can effectively classify fake news, the additional fine-tuning and optimization present in RoBERTa and XLM-RoBERTa provide a noticeable advantage. Moreover, these two models took almost twice as long to train. 
It should be noted that the lighter DistilBERT architecture did not contribute to a significant reduction in training time (it took 4410 seconds compared to 4733 for BERT base), which does not make it a more efficient model in terms of computing resources.</p><p>On the PolitiFact dataset, the performance landscape shifts, as shown in Table <ref type="table" target="#tab_2">3</ref>. Here, DistilBERT outperforms the other models, achieving an accuracy of 0.917 and an F1-score of 0.931. This result is particularly noteworthy because it demonstrates that DistilBERT, despite being a lighter and more compact version of BERT, can achieve higher accuracy on smaller datasets. Its reduced number of trainable parameters makes it easier to train and adapt, especially when computational resources are a constraint. The relatively short training time of 9.8 seconds further underscores its efficiency. BERT base followed DistilBERT with an accuracy of 0.901 and an F1-score of 0.912. This performance suggests that the original BERT architecture remains highly effective for fake news detection, particularly after fine-tuning; its longer training time of 12.6 seconds compared to DistilBERT reflects the additional computational demands of its more complex architecture.</p><p>RoBERTa, which excelled on the WELFake dataset, achieved an accuracy of 0.891 and an F1-score of 0.891 on PolitiFact. This indicates that while RoBERTa is highly effective with larger datasets like WELFake, it may not generalize as well to smaller datasets like PolitiFact without further fine-tuning. Its training time was slightly lower than BERT, at 12.2 seconds, suggesting some computational efficiency, but it also had lower accuracy.</p><p>XLM-RoBERTa achieved the lowest accuracy on the PolitiFact dataset at 0.872, with an F1-score of 0.883. This could be due to its design, which is optimized for multilingual tasks rather than domain-specific datasets like PolitiFact. 
Although it is highly versatile across different languages and contexts, this versatility may result in decreased performance when applied to a narrower task. The training time for XLM-RoBERTa was also comparatively high at 11.1 seconds, indicating that it is not the most efficient choice for this particular dataset.</p><p>Figures <ref type="figure" target="#fig_3">4, 5</ref> show the loss and accuracy graphs for the best models for the datasets we used, and Figures <ref type="figure" target="#fig_6">6, 7</ref> present the confusion matrices for these models. Figure <ref type="figure" target="#fig_4">6</ref> displays the confusion matrix for the RoBERTa model's performance on the WELFake dataset. The confusion matrix provides a detailed view of the model's classification accuracy, including the TP, TN, FP, and FN. RoBERTa demonstrates a strong ability to correctly classify both real and fake news instances, with high counts in the TP and TN cells. The minimal number of misclassifications suggests that RoBERTa's understanding of linguistic features is effective in discerning deceptive content. This detailed insight into the types of errors made by the model is crucial for understanding the model's strengths in dealing with a large and diverse dataset like WELFake.</p><p>Figure <ref type="figure" target="#fig_6">7</ref> provides the confusion matrix for the DistilBERT model on the PolitiFact dataset. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion and conclusion</head><p>The results of our study illustrate the strengths and trade-offs of different BERT-based models in the context of fake news detection. RoBERTa demonstrated exceptional effectiveness on the WELFake dataset, achieving an accuracy of 0.998, highlighting its ability to handle complex and diverse text data. Its robust training process, which focuses heavily on the masked language modeling task, allows RoBERTa to capture subtle linguistic cues and contextual relationships that are often indicative of fake news. This makes RoBERTa a suitable model for applications where high accuracy is paramount, even if it comes at the cost of increased computational requirements. DistilBERT, on the other hand, excelled on the smaller PolitiFact dataset, where it achieved an accuracy of 0.917. Its lightweight architecture, derived from the knowledge distillation process, enables it to learn efficiently from fewer data points while maintaining a high level of accuracy. This makes DistilBERT an ideal choice in scenarios where computational resources are limited, such as real-time fake news detection on edge devices. The model's rapid convergence and lower training time also make it more practical for applications that require quick deployment and frequent retraining.</p><p>However, the study also highlights certain limitations associated with each model. While RoBERTa offers superior accuracy on larger datasets like WELFake, its performance on the smaller PolitiFact dataset was relatively low, with an accuracy of 0.891. This suggests that the model's complexity might require further fine-tuning to adapt to datasets with shorter text lengths and less diverse content.</p><p>XLM-RoBERTa's results provide additional insights into the role of multilingual models in fake news detection. 
Its high accuracy on the WELFake dataset (0.994) suggests that cross-lingual training can enhance a model's ability to generalize across diverse linguistic styles. However, its relatively lower performance on the domain-specific dataset (accuracy of 0.872) indicates that models optimized for multilingual capabilities may not always perform best on specific, monolingual datasets without additional fine-tuning. This points to a potential trade-off between multilingual versatility and domain-specific accuracy that researchers must consider when selecting models for fake news detection.</p><p>Overall, the comparison between these BERT-based models suggests that there is no one-size-fits-all solution for fake news detection. The choice of model depends largely on the characteristics of the dataset, the computational resources available, and the specific requirements of the application. For large-scale fake news detection tasks where accuracy is critical, RoBERTa is likely the most effective choice. For environments where speed and resource efficiency are priorities, such as mobile platforms or real-time applications, DistilBERT provides a viable alternative with its compact structure and faster training time.</p><p>Another key finding of our study is the importance of dataset diversity and structure in influencing model performance. The WELFake dataset, with its large volume and longer average text length, allowed RoBERTa and XLM-RoBERTa to excel by leveraging their deeper architectures and advanced contextual understanding. Meanwhile, the PolitiFact dataset, characterized by shorter text samples, contributed to the effectiveness of DistilBERT in learning from more concise linguistic patterns. These differences emphasize the need for tailored approaches in selecting models for fake news detection, depending on the dataset's nature.</p><p>The study also underscores the role of minimal preprocessing in leveraging the strengths of BERT-based models. 
By allowing the models to handle raw text inputs, we preserved linguistic nuances that are critical for distinguishing fake news. This approach highlights the power of pre-trained models in adapting to specific tasks without extensive preprocessing, making them versatile tools for a wide range of applications in the field of NLP.</p><p>In conclusion, it should be noted that the proposed approach to detecting fake content in digital media, based on fine-tuned models such as BERT, focuses on understanding linguistic nuances and contextual relationships in text. However, these models do not directly check the factual content of claims against external databases or sources. Instead, they work by detecting hidden linguistic features and patterns commonly associated with fake or misleading information.</p><p>This approach is advantageous in situations where external fact-checking is either impossible or time-consuming, but it also introduces certain limitations as the models rely heavily on linguistic cues rather than external verification.</p><p>Thus, the findings of this study contribute to the broader effort of developing reliable AI-driven tools for combating misinformation. Future research could explore further fine-tuning techniques and hybrid approaches that combine the strengths of multiple models to create even more robust solutions for fake news detection.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Simplified architecture of the BERT model (designed by the authors based on [6])</figDesc><graphic coords="5,115.65,303.07,369.34,181.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: BERT masked language model predictions (designed by the authors based on [6])</figDesc><graphic coords="6,130.68,284.28,339.28,106.85" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: RoBERTa Loss and Accuracy graphs for WELFake dataset</figDesc><graphic coords="13,132.70,570.30,335.16,152.70" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: DistilBERT Loss and Accuracy graphs for PolitiFact dataset</figDesc><graphic coords="14,79.83,158.91,440.44,202.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: RoBERTa Confusion Matrix for WELFake dataset</figDesc><graphic coords="14,168.10,501.96,264.43,223.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure</head><label></label><figDesc>Confusion matrix for the DistilBERT model on the PolitiFact dataset</figDesc><graphic coords="15,166.82,197.78,266.41,225.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: DistilBERT Confusion Matrix for PolitiFact dataset</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Final hyperparameter settings</figDesc><table><row><cell>Hyperparameters</cell><cell>Description</cell><cell>Value</cell></row><row><cell>Batch Size</cell><cell>Number of samples processed per batch</cell><cell>16</cell></row><row><cell>Max Sequence Length</cell><cell>Maximum length of the input sequence</cell><cell>128 (64 for PolitiFact)</cell></row><row><cell>Number of Epochs</cell><cell>Number of complete passes through the training dataset</cell><cell>5</cell></row><row><cell>Learning Rate</cell><cell>Learning rate used by the optimizer</cell><cell>2×10⁻⁵</cell></row><row><cell>Optimizer</cell><cell>Optimization algorithm</cell><cell>AdamW</cell></row><row><cell>Loss Function</cell><cell>Loss function used for training</cell><cell>CrossEntropyLoss</cell></row><row><cell>Device</cell><cell>Computational device used</cell><cell>GPU (NVIDIA L4)</cell></row><row><cell>Training Environment</cell><cell>Platform used for model training</cell><cell>Google Colab</cell></row><row><cell>Validation Split</cell><cell>Proportion of data used for validation</cell><cell>20%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Classification performance for WELFake dataset</figDesc><table><row><cell>Model</cell><cell>Number of trainable parameters</cell><cell>Training time, sec</cell><cell>Accuracy</cell><cell>F1-score</cell></row><row><cell>BERT base</cell><cell>7,088,641</cell><cell>4733</cell><cell>0.985</cell><cell>0.985</cell></row><row><cell>RoBERTa</cell><cell>7,679,233</cell><cell>2025</cell><cell>0.998</cell><cell>0.994</cell></row><row><cell>DistilBERT</cell><cell>7,680,002</cell><cell>4410</cell><cell>0.985</cell><cell>0.985</cell></row><row><cell>XLM-RoBERTa</cell><cell>7,680,002</cell><cell>2048</cell><cell>0.994</cell><cell>0.991</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Classification performance for PolitiFact dataset</figDesc><table><row><cell>Model</cell><cell>Number of trainable parameters</cell><cell>Training time, sec</cell><cell>Accuracy</cell><cell>F1-score</cell></row><row><cell>BERT base</cell><cell>7,088,641</cell><cell>12.6</cell><cell>0.901</cell><cell>0.912</cell></row><row><cell>RoBERTa</cell><cell>7,679,233</cell><cell>12.2</cell><cell>0.891</cell><cell>0.891</cell></row><row><cell>DistilBERT</cell><cell>7,680,002</cell><cell>9.8</cell><cell>0.917</cell><cell>0.931</cell></row><row><cell>XLM-RoBERTa</cell><cell>7,680,002</cell><cell>11.1</cell><cell>0.872</cell><cell>0.883</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This paper is part of a project supported by the Swedish Institute within the Baltic Sea Neighbourhood Programme (project No. 00152/2024).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Declaration on Generative AI</head><p>The authors have not employed any Generative AI tools.</p></div>			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">The science of fake news</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Lazer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Baum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Benkler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Berinsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Greenhill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Menczer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">.</forename><forename type="middle">. J L</forename><surname>Zittrain</surname></persName>
		</author>
		<idno type="DOI">10.1126/science.aao2998</idno>
	</analytic>
	<monogr>
		<title level="j">Science</title>
		<imprint>
			<biblScope unit="volume">359</biblScope>
			<biblScope unit="issue">6380</biblScope>
			<biblScope unit="page" from="1094" to="1096" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Fake News Detection on Social Media: A Data Mining Perspective</title>
		<author>
			<persName><forename type="first">K</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sliva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1145/3137597.3137600</idno>
	</analytic>
	<monogr>
		<title level="j">ACM SIGKDD Explorations Newsletter</title>
		<imprint>
			<biblScope unit="volume">19</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page" from="22" to="36" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Machine learning approach of analysis of emotional polarity of electronic social media</title>
		<author>
			<persName><forename type="first">V</forename><surname>Derbentsev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Bezkorovainyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Akhmedov</surname></persName>
		</author>
		<idno type="DOI">10.33111/nfmte.2020.095</idno>
	</analytic>
	<monogr>
		<title level="j">Neuro-Fuzzy Modeling Techniques in Economics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="95" to="137" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A comparative study of deep learning models for sentiment analysis of social media texts</title>
		<author>
			<persName><forename type="first">V</forename><surname>Derbentsev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Bezkorovainyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Matviychuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Pomazun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hrabariev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hostryk</surname></persName>
		</author>
		<ptr target="https://ceur-ws.org/Vol-3465/paper18.pdf" />
	</analytic>
	<monogr>
		<title level="j">CEUR Workshop Proceedings</title>
		<imprint>
			<biblScope unit="volume">3465</biblScope>
			<biblScope unit="page" from="168" to="188" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Language Models are Few-Shot Learners</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">B</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/2005.14165" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/1810.04805" />
		<title level="m">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Zellers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holtzman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Rashkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bisk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Farhadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Roesner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Choi</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/1905.12616" />
		<title level="m">Defending Against Neural Fake News</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Howard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruder</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/1801.06146" />
		<title level="m">Universal Language Model Fine-tuning for Text Classification</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Bender</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gebru</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mcmillan-Major</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shmitchell</surname></persName>
		</author>
		<idno type="DOI">10.1145/3442188.3445922</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</title>
				<meeting>the 2021 ACM Conference on Fairness, Accountability, and Transparency<address><addrLine>Canada</addrLine></address></meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="610" to="623" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Fake news detection revisited: An extensive review of theoretical frameworks, dataset assessments, model constraints, and forward-looking research agendas</title>
		<author>
			<persName><forename type="first">S</forename><surname>Harris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">J</forename><surname>Hadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ahmad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Alshara</surname></persName>
		</author>
		<idno type="DOI">10.3390/technologies12110222</idno>
	</analytic>
	<monogr>
		<title level="j">Technologies</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="issue">11</biblScope>
			<biblScope unit="page">222</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">An overview of fake news detection: From a new perspective</title>
		<author>
			<persName><forename type="first">B</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.fmre.2024.01.017</idno>
	</analytic>
	<monogr>
		<title level="j">Fundamental Research</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A comparative study of machine learning and deep learning techniques for fake news detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Alghamdi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Luo</surname></persName>
		</author>
		<idno type="DOI">10.3390/info13120576</idno>
	</analytic>
	<monogr>
		<title level="j">Information</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page">576</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">A review of fake news detection approaches: A critical analysis of relevant studies and highlighting key challenges associated with the dataset, feature representation, and data fusion</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K</forename><surname>Hamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Aziz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Yaakub</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.heliyon.2023.e20382</idno>
	</analytic>
	<monogr>
		<title level="j">Heliyon</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="issue">10</biblScope>
			<biblScope unit="page">e20382</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Enhancing Mood Detection in Textual Analysis through Fuzzy Logic Integration</title>
		<author>
			<persName><forename type="first">H</forename><surname>Melnyk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Melnyk</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACIT62333.2024.10712628</idno>
	</analytic>
	<monogr>
		<title level="m">2024 14th International Conference on Advanced Computer Information Technologies, ACIT, IEEE</title>
				<meeting><address><addrLine>Ceske Budejovice, Czech Republic</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="23" to="26" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Artificial neural-like network as a basis for forming logical conclusions in systems of exceptional complexity</title>
		<author>
			<persName><forename type="first">V</forename><surname>Hraniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Mazur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Matvijchuk</surname></persName>
		</author>
		<idno type="DOI">10.33111/nfmte.2020.065</idno>
	</analytic>
	<monogr>
		<title level="j">Neuro-Fuzzy Modeling Techniques in Economics</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="65" to="94" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Intellectual capital management of the business community based on the neuro-fuzzy hybrid system</title>
		<author>
			<persName><forename type="first">S</forename><surname>Kozlovskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Syniehub</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kozlovskyi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Lavrov</surname></persName>
		</author>
		<idno type="DOI">10.33111/nfmte.2022.025</idno>
	</analytic>
	<monogr>
		<title level="j">Neuro-Fuzzy Modeling Techniques in Economics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="25" to="47" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Neuro-fuzzy model of country&apos;s investment potential assessment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Matviychuk</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Lukianenko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Miroshnychenko</surname></persName>
		</author>
		<idno type="DOI">10.25102/fer.2019.02.04</idno>
	</analytic>
	<monogr>
		<title level="j">Fuzzy economic review</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="65" to="88" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Detecting fake news using machine learning and deep learning algorithms</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Tanvir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">M</forename><surname>Mahir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Akhter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Huq</surname></persName>
		</author>
		<idno type="DOI">10.1109/ICSCC.2019.8843612</idno>
	</analytic>
	<monogr>
		<title level="m">2019 7th International Conference on Smart Computing &amp; Communications, ICSCC, IEEE</title>
				<meeting><address><addrLine>Sarawak, Malaysia</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="5" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Fake news detection: A hybrid CNN-RNN-based deep learning approach</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">A</forename><surname>Nasir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">S</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Varlamis</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.jjimei.2020.100007</idno>
	</analytic>
	<monogr>
		<title level="j">International Journal of Information Management Data Insights</title>
		<imprint>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page">100007</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">An investigation into the utilisation of CNN with LSTM for video deepfake detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Tipper</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">F</forename><surname>Atlam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">S</forename><surname>Lallie</surname></persName>
		</author>
		<idno type="DOI">10.3390/app14219754</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="issue">21</biblScope>
			<biblScope unit="page">9754</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Cross-SEAN: A cross-stitch semi-supervised neural attention model for COVID-19 fake news detection</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">S</forename><surname>Paka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kaushik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sengupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.asoc.2021.107393</idno>
	</analytic>
	<monogr>
		<title level="j">Applied Soft Computing</title>
		<imprint>
			<biblScope unit="volume">107</biblScope>
			<biblScope unit="page">107393</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A comprehensive review of machine learning-based models for fake news detection</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">D</forename><surname>Radhi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A H</forename><surname>Al Naffakh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-I</forename><surname>Fuqdan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Hakim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Al-Attar</surname></persName>
		</author>
		<idno type="DOI">10.1051/bioconf/20249700123</idno>
	</analytic>
	<monogr>
		<title level="j">BIO Web of Conferences</title>
		<imprint>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page">123</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">FakeBERT: Fake news detection in social media with a BERT-based deep learning approach</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Kaliyar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Goswami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Narang</surname></persName>
		</author>
		<idno type="DOI">10.1007/s11042-020-10183-2</idno>
	</analytic>
	<monogr>
		<title level="j">Multimedia Tools and Applications</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page" from="11765" to="11788" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">BERTGuard: Two-tiered multi-domain fake news detection with class imbalance mitigation</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">Q</forename><surname>Alnabhan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Branco</surname></persName>
		</author>
		<idno type="DOI">10.3390/bdcc8080093</idno>
	</analytic>
	<monogr>
		<title level="j">Big Data and Cognitive Computing</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="issue">8</biblScope>
			<biblScope unit="page">93</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">GBERT: A hybrid deep learning model based on GPT-BERT for fake news detection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Dhiman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kaur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Juneja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nauman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Muhammad</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.heliyon.2024.e35865</idno>
	</analytic>
	<monogr>
		<title level="j">Heliyon</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="issue">16</biblScope>
			<biblScope unit="page">e35865</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/1907.11692" />
		<title level="m">RoBERTa: A robustly optimized BERT pretraining approach</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Sanh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Debut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chaumond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wolf</surname></persName>
		</author>
		<idno>arXiv</idno>
		<ptr target="https://arxiv.org/abs/1910.01108" />
		<title level="m">DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">ALBERT: A lite BERT for self-supervised learning of language representations</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Lan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goodman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gimpel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Soricut</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=H1eA7AEtvS" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations, ICLR 2020</title>
				<meeting><address><addrLine>Addis Ababa, Ethiopia</addrLine></address></meeting>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">ELECTRA: Pre-training text encoders as discriminators rather than generators</title>
		<author>
			<persName><forename type="first">K</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-T</forename><surname>Luong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=r1xMH1BtvB" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations, ICLR 2020</title>
				<meeting><address><addrLine>Addis Ababa, Ethiopia</addrLine></address></meeting>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<monogr>
		<ptr target="https://github.com/KaiDMML/FakeNewsNet" />
		<title level="m">FakeNewsNet Dataset</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">WELFake dataset for fake news detection in text data</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">K</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Agrawal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Prodan</surname></persName>
		</author>
		<idno type="DOI">10.1109/TCSS.2021.3068519</idno>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
