<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Detecting fake news using Twitter social information</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Jesús</forename><forename type="middle">M</forename><surname>Fraile-Hernández</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">NLP &amp; IR Group</orgName>
								<orgName type="institution">UNED</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Álvaro</forename><surname>Rodrigo</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">NLP &amp; IR Group</orgName>
								<orgName type="institution">UNED</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Roberto</forename><surname>Centeno</surname></persName>
							<affiliation key="aff0">
								<orgName type="laboratory">NLP &amp; IR Group</orgName>
								<orgName type="institution">UNED</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Detecting fake news using Twitter social information</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">7F49DF01ABDB688AAA474BFAA0D03C6B</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:21+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Social information</term>
					<term>Classifying news</term>
					<term>Classifier model</term>
					<term>Social features</term>
					<term>Fake news detection</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In this paper, the aim is to study whether social information can provide useful information when classifying news. For this purpose, a set of news items in Spanish has been extended with social information. Subsequently, a classifier model has been proposed to carry out this task, mixing the social information previously extracted with the textual information of the news item. Finally, we have studied which social features are the most relevant in this task.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Due to the increase in communication channels in recent decades, users have almost instantaneous access to an immense amount of information. However, it is also relatively easy to fall for hoaxes or misinformation on social media.</p><p>Traditional fake news detection models focus on the linguistic characteristics of the news. Subsequently, in <ref type="bibr" target="#b0">[1]</ref>, pre-trained embeddings were used together with LSTM networks. Finally, with the emergence of contextual models, <ref type="bibr" target="#b1">[2]</ref> leveraged a pre-trained BERT model to perform transfer learning and identify the veracity of news.</p><p>However, since even humans find it difficult to discern between true and false news, the textual information in the news is sometimes not enough. In <ref type="bibr" target="#b2">[3]</ref>, a hybrid approach is proposed at a theoretical level that incorporates the linguistic characteristics of the news and an analysis of the networks that form around it. In <ref type="bibr" target="#b3">[4]</ref>, the authors use different features to identify fake news in popular Twitter threads. In <ref type="bibr" target="#b4">[5]</ref>, fake news is detected using only the extracted textual information. Regarding hybrid models, the CSI model proposed in <ref type="bibr" target="#b5">[6]</ref> performs a characterisation in three modules: capturing, scoring and integrating. In <ref type="bibr" target="#b6">[7]</ref>, a news detection model is proposed that considers the association of user interactions, the editor's bias and the users' stance towards the news.</p><p>The aim of this work is to study whether social information can help detect fake news. 
To this end, social information has been collected from Twitter to extend FakeDeS, a relevant corpus of news in Spanish, and a model has been designed to include textual and social information. Furthermore, we intend to study which social features are the most relevant for news classification.</p><p>The rest of this paper is structured as follows: Section 2 describes the datasets to be used along with the task to be solved. Section 3 describes the methodology followed including the extraction of social information from Twitter along with the models proposed based on the data they use. Section 4 includes the evaluation metrics used. Section 5 then presents the results, which will be discussed in Section 6. Finally, conclusions and future work are given in Section 7. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Dataset and task</head><p>The dataset we will work with is the Spanish Fake News Corpus (FakeDeS) <ref type="bibr" target="#b7">[8]</ref>, which contains publications in Spanish about different events, collected from November 2020 to March 2021. Each publication is labelled as true or false. Newspaper websites and fact-checking websites were mainly used to collect the information.</p><p>The dataset is divided into three files with a total of 1543 news items. Because of the methodology used, we have decided to merge the training and development files into what we will call the training set. Each news item contains information such as the topic, the name of the source, the headline, the text and the link to the news item.</p><p>The training set has a total of 971 news items, of which 480 are false and 491 are true. The test set consists of 572 news items, half true and half false. We are therefore dealing with balanced data sets.</p><p>The topics covered in the training corpus are: politics, entertainment, sport, society, science, health, economy, security and education.</p><p>It should be noted that the test set contains news related to Covid-19, while the training set contains no news on this topic (the closest are the health news, but in no case do they mention Covid-19). Therefore, the proposed models will have to classify this topic correctly without having seen it in training.</p><p>In IberLEF 2021, a shared task was proposed whose objective was to classify a series of news items as true or false, using the FakeDeS corpus described above. A report was published in <ref type="bibr" target="#b8">[9]</ref>, which collected the most important characteristics of the best-performing models. 
The results of this task by the different participants can be seen in Figure <ref type="figure" target="#fig_0">1</ref>. Among the approaches used, the GDUFS team, which achieved the best accuracy, used a BERT model and a sample memory with an attention mechanism. The method consisted of taking the first and last segments of each text and feeding them into a BERT model, obtaining two embeddings (head and tail). In addition, a matrix called 'sample memory', obtained by taking a random sample of the head and tail embeddings, is used in an attention mechanism with the rest of the texts. In contrast to the GDUFS_DM approach, team Haha, the second-placed team, employed feature selection with a weighted tf-idf and a multilayer perceptron. This model not only analysed the content of the news item, but also incorporated metadata such as its publisher and topic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>This section describes the methodology used to extract social information from Twitter users. In addition, the models trained according to the type of data they use are presented.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Social information extraction</head><p>The main objective of this work is to study the information provided by social data when detecting fake news and, as mentioned in Section 1, there is no corpus in Spanish that contains this information. For this reason, we decided to extract it from the social network Twitter, using the API provided by the platform.</p><p>For each news item, we searched for tweets that contained either the headline of the news item or the link to it. To deal with the maximum query length, special characters were removed from the news headlines.</p><p>According to <ref type="bibr" target="#b3">[4]</ref> and <ref type="bibr" target="#b4">[5]</ref>, certain tweet metadata make it possible to infer whether a user may be prone to propagating fake news or whether a tweet may contain untruthful information. Therefore, we decided to extract the following metadata from each tweet.</p><p>• Tweet. Text of the tweet, id of the author, id of the tweet, number of retweets, number of replies to the tweet, number of likes, number of quotes of the tweet. • User. Username (str), user creation date (ISO 8601), verified user (bool), number of followers (int), number of accounts followed (int), number of tweets (int), number of times listed (int).</p><p>We have managed to extract posts for 41.67% of the news items. The distribution of the number of tweets collected per news item shows a high concentration in the (0, 200) interval, which covers 86% of the news items. Within this interval, true news tends to receive more interaction. However, as the number of tweets about a news item grows, fake news receives a greater number of interactions. 
This trend can be seen in the violin diagram in Figure <ref type="figure" target="#fig_1">2</ref>.</p><p>It is worth noting that, although the news is written in Spanish, there are tweets in English or French that discuss it. This is especially true for news related to Covid-19.</p></div>
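The collection step described above can be sketched as follows. The helper names `build_query` and `flatten_tweet` are hypothetical, and the actual retrieval (not shown) would call the Twitter API's recent-search endpoint with the resulting query; the code only illustrates the headline cleaning and the metadata fields listed in the bullets.

```python
import re

# Hypothetical helpers illustrating the collection step; the actual tweet
# retrieval would call the Twitter API with the query built here.

def build_query(headline: str, max_len: int = 512) -> str:
    """Strip special characters from a headline and truncate it so the
    query fits within the API's maximum query length (assumed limit)."""
    cleaned = re.sub(r"[^\w\s]", " ", headline)     # drop punctuation, keep accented letters
    cleaned = re.sub(r"\s+", " ", cleaned).strip()  # collapse whitespace
    return cleaned[:max_len]

def flatten_tweet(tweet: dict, user: dict) -> dict:
    """Collect the per-tweet and per-author metadata used as social features,
    assuming API-v2-style 'public_metrics' dictionaries."""
    tm, um = tweet["public_metrics"], user["public_metrics"]
    return {
        "retweet_count": tm["retweet_count"],
        "reply_count": tm["reply_count"],
        "like_count": tm["like_count"],
        "quote_count": tm["quote_count"],
        "verified": int(user["verified"]),
        "followers_count": um["followers_count"],
        "following_count": um["following_count"],
        "tweet_count": um["tweet_count"],
        "listed_count": um["listed_count"],
    }
```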
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Textual models</head><p>This section presents the textual methods used for the binary classification of the news items. The full text of each news item has been used, so it had to be preprocessed. For the non-contextual models, URLs, emoticons and other non-textual expressions, and stopwords have been removed; the text has been converted to lowercase; and lemmatisation and stemming have been applied. For the contextual models, only the URLs have been removed.</p><p>Subsequently, 5 different approaches have been used.</p><p>1. Vector space model based on bags of words (BoW).</p><p>2. Vector space model using a weighted tf-idf. 3. Bigram counting. 4. Neural networks and deep learning. 5. Contextual models. For approaches 1, 2 and 3, Naive Bayes, SVM, Logistic Regression, Decision Tree and Random Forest models have been trained. For approach 4, the following have been trained: multilayer perceptrons taking the tf-idf weight vector as input; multilayer perceptrons and convolutional networks with a trainable embedding layer; and multilayer perceptrons, convolutional networks, LSTM, GRU and bidirectional networks with a pre-trained embedding layer. Finally, for approach 5, the BETO model (Spanish BERT <ref type="bibr" target="#b9">[10]</ref>) has been selected, with a final classification layer of two neurons. BETO is a BERT model trained with the whole-word masking technique on a large corpus of more than three billion Spanish words.</p></div>
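The best-performing textual approach reported later (a weighted tf-idf with a Random Forest) can be sketched with scikit-learn; the toy corpus and labels below are invented for illustration only:

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import Pipeline

# Toy stand-in for the preprocessed FakeDeS news texts (invented examples).
texts = [
    "el gobierno aprueba la nueva ley de educacion",
    "cientificos confirman el hallazgo en la revista",
    "remedio milagroso cura todas las enfermedades en un dia",
    "famoso revela el secreto que los medicos ocultan",
]
labels = [1, 1, 0, 0]  # 1 = true, 0 = fake

# Approach 2: tf-idf weighting followed by a Random Forest classifier.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
])
model.fit(texts, labels)
preds = model.predict(texts)
```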
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Models with social information</head><p>The methods that use only the social information collected for the news rely on the following metadata for each published tweet: number of retweets, number of replies, number of likes, number of quotes, verified user, number of followers, number of accounts followed, number of tweets of the author and number of times the author has been listed. Then, in order to capture the impact of the news item on social networks, the number of tweets collected for that news item is added.</p><p>To represent all the tweets that discuss a given news item, the average of the above characteristics over its tweets has been calculated, and the standard deviation of each characteristic has been added. In this way, a data matrix with 20 columns is obtained (where the column for the deviation of the number of tweets of the news item is always 0).</p><p>Once the feature matrix has been obtained, different learning models have been trained with different hyperparameter explorations, such as Decision Trees, Random Forest, SVM, Gradient Boosting, Adaptive Boosting and MLP.</p></div>
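The construction of one row of the 20-column matrix (per-feature mean and standard deviation over the tweets of a news item, including the tweet count) can be sketched with NumPy; the tweet values are illustrative:

```python
import numpy as np

def news_social_vector(tweet_rows: np.ndarray) -> np.ndarray:
    """Given a (n_tweets, 9) matrix of per-tweet metadata, append the
    number of tweets as a tenth feature and return the concatenation of
    the per-column mean and standard deviation (20 values).  The std of
    the tweet count is always 0, since it is constant per news item."""
    n = tweet_rows.shape[0]
    counts = np.full((n, 1), float(n))
    feats = np.hstack([tweet_rows, counts])  # shape (n, 10)
    return np.concatenate([feats.mean(axis=0), feats.std(axis=0)])

# Two invented tweets for one news item: 9 metadata values each
# (retweets, replies, likes, quotes, verified, followers, followed,
#  author tweets, times listed).
tweets = np.array([
    [2, 1, 5, 0, 1, 1000, 300, 5000, 10],
    [4, 0, 9, 1, 0,  200, 150, 2000,  2],
], dtype=float)
vec = news_social_vector(tweets)  # shape (20,); the last entry (std of count) is 0
```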
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Hybrid model</head><p>A hybrid model has been developed that seeks to take advantage of both the textual information provided by the text of the news item and the social information extracted from Twitter (both the non-textual information of the previous section and the text of the collected tweets).</p><p>In this model, for each news item, a specialised model classifies the news using social information. For this purpose, the best model from the previous subsection (Random Forest) is selected. With this model, for each news item, the probabilities of being true or false are extracted, using as input the corresponding row of the matrix of social characteristics with standard deviations described in that section. If no tweets could be extracted for a news item, the output is a vector of two zeros.</p><p>In parallel, the text of the news item is processed using the BETO: Spanish BERT model <ref type="bibr" target="#b10">[11]</ref>. The output is a vector of dimension 768.</p><p>Also in parallel, for each news item with collected tweets, the text of each tweet is preprocessed (removing URLs and tokenising) and then processed using the pre-trained XLM-roBERTa-base model <ref type="bibr" target="#b11">[12]</ref>. This transformer model has been trained on a corpus of about 198 million tweets in 8 different languages (Spanish, Arabic, English, French, German, Hindi, Portuguese and Italian) and is specialised in sentiment classification (positive, negative or neutral). In our case, the last layer of the model is removed, obtaining as output a vector of length 768 that represents the most relevant features of the text of the tweet.</p><p>This process has been carried out for each available tweet, obtaining a vector of length 768. 
Finally, the vectors of all the tweets of a news item are averaged to obtain a single vector that represents its tweets. If the news item had no social information, a vector of zeros is used.</p><p>Then, the three vectors are concatenated to obtain a vector of dimensionality 1538 (2 + 768 + 768). This workflow can be seen in Figure <ref type="figure" target="#fig_3">3</ref>. Once all the news has been processed following this diagram, several models have been trained, such as Decision Trees, Random Forest, SVM, Gradient Boosting, Adaptive Boosting and MLP.</p></div>
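The fusion step can be sketched as follows, assuming the three components (Random Forest class probabilities, BETO news embedding, XLM-roBERTa tweet embeddings) have already been computed; all inputs below are placeholders:

```python
import numpy as np

def hybrid_vector(rf_probs, news_emb, tweet_embs):
    """Concatenate the social-model probabilities (2), the BETO embedding
    of the news text (768) and the average of the tweet embeddings (768)
    into one 1538-dimensional vector.  When no tweets were collected,
    rf_probs and tweet_embs are None and zero vectors are used instead."""
    social = np.zeros(2) if rf_probs is None else np.asarray(rf_probs)
    tweets = (np.zeros(768) if not tweet_embs
              else np.mean(np.asarray(tweet_embs), axis=0))
    return np.concatenate([social, np.asarray(news_emb), tweets])

# Placeholder inputs standing in for the real model outputs.
v = hybrid_vector([0.3, 0.7], np.ones(768), [np.ones(768), np.zeros(768)])
# v has shape (1538,)
```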
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Evaluation</head><p>Two different methodologies have been used to evaluate the models, a cross-validation and an evaluation on the test set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">𝑘-fold cross-validation</head><p>Cross-validation is one of the most widely used methods to estimate the prediction error of a model with a given set of hyperparameters. A 𝑘-fold cross-validation has been used. This method divides the data set, in our case the training set together with the development set, into 𝑘 equal parts 𝑃 1 , . . . , 𝑃 𝑘 . For each 𝑃 𝑛 , the model is trained on the other 𝑘 − 1 parts and the error in predicting the 𝑃 𝑛 data (data never seen by this model) is calculated. Doing this for the 𝑘 parts yields 𝑘 errors, whose mean and variance provide a measure of the average error of that model with those hyperparameters.</p><p>It should be noted that this method has a fairly large computational cost, since a 𝑘-fold cross-validation requires training 𝑘 models. As a general rule, a value of 5 or 10 is usually chosen as a good compromise between bias and variance. In our case, a 5-fold cross-validation has been used.</p></div>
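The procedure can be sketched with scikit-learn's `cross_val_score`; the synthetic data below stands in for the real feature matrix and labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))    # stand-in for the 20-column feature matrix
y = rng.integers(0, 2, size=100)  # stand-in for true/fake labels

# 5-fold cross-validation: 5 models are trained, yielding 5 error estimates
# whose mean and standard deviation summarise the model's performance.
scores = cross_val_score(RandomForestClassifier(random_state=0), X, y,
                         cv=5, scoring="f1_macro")
mean, std = scores.mean(), scores.std()
```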
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Test set evaluation</head><p>Finally, for the model that has performed best in the previous cross-validations, the test set will be evaluated. This set will never be seen by the model and will provide a representation of the generalisability of the model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Evaluation metrics</head><p>To evaluate the performance of our classification models, we use the F1 metric. The F1 value is calculated for both the true and the fake class. From these, the 𝑀 𝑎𝑐𝑟𝑜 -𝐹 1 value, or simply 𝐹 1, is calculated as the average of the two previous values.</p></div>
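The metric can be computed with scikit-learn; the labels below are illustrative:

```python
from sklearn.metrics import f1_score

y_true = [1, 1, 0, 0, 1, 0]  # 1 = true news, 0 = fake news (illustrative)
y_pred = [1, 0, 0, 0, 1, 1]

# Per-class F1 for the fake (0) and true (1) classes...
f1_fake, f1_true = f1_score(y_true, y_pred, average=None)
# ...and Macro-F1, the unweighted average of the two per-class values.
macro_f1 = f1_score(y_true, y_pred, average="macro")
```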
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>In this section the results of the various trained models will be presented. For each approach in the section 3 the following results will be shown:</p><p>• Within the training of a particular approach, the 𝑀 𝑎𝑐𝑟𝑜 -𝐹 1 value of the best algorithms used will be shown. The average of the 𝑀 𝑎𝑐𝑟𝑜 -𝐹 1 values will be reflected using 5-fold cross-validation. • For each approach, the model with the best 𝑀 𝑎𝑐𝑟𝑜 -𝐹 1 will be selected during training. Subsequently, it will be retrained with all data and evaluated on the test set. The 𝐹 1 𝐹 𝑎𝑘𝑒 , 𝐹 1 𝑇 𝑟𝑢𝑒 , 𝑀 𝑎𝑐𝑟𝑜 -𝐹 1 and the Accuracy of the model will be exposed.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Textual models</head><p>The training results of the methods described in section 3.2 are listed in Table <ref type="table" target="#tab_0">1</ref>.</p><p>It can be seen that the non-neural models outperform those using neural networks. This could be because the neural models have a large number of parameters to optimise while we have a rather limited data set. It is worth noting that the use of pre-trained embeddings resulted in lower performance than training the embeddings from scratch. Also noteworthy is the poor performance obtained with recurrent networks, models that required a large amount of training time and are commonly used for language processing problems. The best-performing approach was a weighted tf-idf together with a Random Forest model.</p><p>The results of the evaluation of this model on the test set and the results of the teams participating in IberLEF 2021 are shown in Table <ref type="table">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions and Future Work</head><p>Throughout this work, it has been observed that introducing social information alongside textual information helps improve the performance of news classification models. This suggests that, when tackling such a problem, it is useful to add social information to the dataset, although obtaining this information is quite costly both economically and in terms of time.</p><p>Additionally, the importance of social features in classifier models has been studied, concluding that author-related features are more important than tweet-related features. A model that combines all textual and social features achieves similar or better results than models that use only textual information.</p><p>However, it is crucial to acknowledge several important limitations:</p><p>• Impractical Approach: Many of the social signals being harvested are post-facto. While disinformation might actually be spreading, many features (such as the number of reposts) would not have stabilized. Thus, while the current approach of augmenting these signals might work post-facto, it is unlikely to work with live data. Even post-facto, it is unclear whether the approach will scale. • Flawed Methodology: The use of balanced training data, and a small set of data at that, is not meaningful. In particular, it is unclear how learning from such a small corpus would generalize when new kinds of disinformation arise. In practice, the distribution of disinformation-carrying articles compared to genuine ones is far from balanced. Therefore, any realistic methodology needs to incorporate the ability to handle imbalance and transferability from the learning phase. 
Moreover, adversary behavior might change to emulate the features of good articles or at least stray away from its current behavior, rendering the specific features used for classification obsolete. • Too Static and Small Dataset: The dataset used is too static and small, and lacks adequate diversity to consider any results conclusive. A variety of distinct datasets ought to be used to determine whether the ideas actually work in a more general setting.</p><p>As a line of future work, a good approach would be to study not only the individual social metadata of each user, but also a social graph of followers and followed users, to see the social relationships that exist between them. Additionally, the dataset should be expanded and diversified, and methods should be developed to handle imbalanced data and adapt to changing adversary behavior.</p><p>We acknowledge that this work, while preliminary, can trigger useful discussions and provides a foundation upon which more robust and scalable approaches can be built in the future.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Results IberLEF 2021 on the test set.</figDesc><graphic coords="2,136.31,65.61,322.65,271.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Violin diagram of the number of tweets collected.</figDesc><graphic coords="3,182.44,65.61,230.40,127.80" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Workflow of the hybrid model.</figDesc><graphic coords="5,107.14,65.60,381.00,258.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Cross-validation results of textual model training.</figDesc><table><row><cell>Textual models</cell><cell>𝐹 1</cell></row><row><cell>TF-IDF (RF)</cell><cell>0.849</cell></row><row><cell>BoW (RF)</cell><cell>0.825</cell></row><row><cell>Bigrams (RF)</cell><cell>0.822</cell></row><row><cell>MLP (Embedding)</cell><cell>0.786</cell></row><row><cell>MLP (TF-IDF)</cell><cell>0.751</cell></row><row><cell>CNN</cell><cell>0.740</cell></row><row><cell>BETO</cell><cell>0.727</cell></row><row><cell>GRU</cell><cell>0.678</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the HAMiSoN project grant CHIST-ERA-21-OSNEM-002, AEI PCI2022-135026-2 (MCIN/AEI/10.13039/501100011033 and EU "NextGenerationEU"/PRTR).</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Social information models</head><p>Cross-validation results of social models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Social information models</head><p>The training results of the methods described in section 3.3 are collected in Table <ref type="table">2</ref>.</p><p>We can see that the 𝐹 1 of the models is quite high. Tree-based models occupy the top 5 positions in the ranking, and ensembles of trees outperform individual decision trees. The best-performing approach was a Random Forest model. It should be remembered that this model has only been trained and evaluated on those news items for which social information could be extracted, so the training and test sets are smaller than in the other cases.</p><p>Given these results, the Random Forest classifier has been chosen as the social information component of the hybrid model, as indicated in section 3.4.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Hybrid model</head><p>The training results of the methods described in section 3.4 are listed in Table <ref type="table">3</ref>.</p><p>In view of the training results, either of the first two models would be a valid choice, and the remaining models achieve very similar accuracy. Logistic regression has been selected over decision trees since it is a simpler algorithm, with fewer hyperparameters and a lower computational cost.</p><p>The results of the evaluation of this model on the test set and the results of the teams participating in IberLEF 2021 are shown in Table <ref type="table">4</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Discussion</head><p>This section presents a discussion of the results obtained.</p><p>In view of the results shown in Tables <ref type="table">1 and 3</ref>, the approach that obtains the best 𝐹 1 in training is a model that uses only textual information, specifically a Random Forest with a weighted tf-idf. This approach obtains a higher 𝐹 1 than models that include social information, so a priori one might think that social information does not provide relevant signals.</p><p>However, Table <ref type="table">4</ref> shows that on the test set the model that uses only textual information obtains worse results than the hybrid model. This is because, with tf-idf weighting, words over which the weights are computed (those in the training news corpus) may not appear in the test set. This is why models such as transformer networks pre-trained on large corpora have more generalisation capacity and can therefore obtain better results. Once social information is introduced into the model, a significant increase in results can be seen: on the one hand, the text is processed using transformer models with a very high generalisation capacity and, on the other, the non-textual social information extracted from Twitter is independent of the subject matter.</p><p>Comparing the models with the best systems in IberLEF 2021 (Figure <ref type="figure">1</ref>), the hybrid model is the one that best classifies fake news, and it obtains the same accuracy as the first-ranked team.</p><p>In addition, a study has been carried out on which social features are the most relevant for the model. For this purpose, the permutation importance method set out in <ref type="bibr" target="#b12">[13]</ref> has been used. 
It can be seen that 8 of the 9 most relevant features depend only on the author's information and not on the content or metadata of the tweet. These 9 features are, in order of importance: listed_count, following_count_std, followers_count, tweet_count_std, followers_count_std, quote_count_std, verified, verified_std, tweet_count. Within these features, those obtained from the standard deviation over the set of tweets collected for each news item stand out.</p><p>The percentage importance of the most relevant features used in the logistic regression of the hybrid model has also been calculated. To compute the importance of each feature 𝑓 𝑖 , the regression coefficients 𝑤 𝑖 have been extracted and the operation 𝑓 𝑖 = exp(𝑤 𝑖 ) has been applied; finally, the percentage of each has been calculated. With this, the most relevant feature for the model, with 10 times more importance than the rest, was the variable corresponding to the probability, returned by the Random Forest, that a news item is true given its social information.</p></div>			</div>
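Both analyses can be sketched with scikit-learn: permutation importance (shuffling one feature at a time and measuring the score drop) and the exponentiated logistic regression coefficients normalised to percentages. The data below is synthetic, with feature 0 driving the label by construction:

```python
import numpy as np
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)  # feature 0 is decisive

clf = LogisticRegression().fit(X, y)

# Permutation importance: drop in score when a feature's values are shuffled.
perm = permutation_importance(clf, X, y, n_repeats=10, random_state=0)
ranking = np.argsort(perm.importances_mean)[::-1]  # most relevant feature first

# Coefficient-based importance f_i = exp(w_i), expressed as percentages.
f = np.exp(clf.coef_[0])
pct = 100 * f / f.sum()
```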
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fake news detection with semantic features and text mining</title>
		<author>
			<persName><forename type="first">P</forename><surname>Bharadwaj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Shao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal on Natural Language Computing (IJNLC)</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">FakeBERT: Fake news detection in social media with a BERT-based deep learning approach</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Kaliyar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Goswami</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Narang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Multimedia tools and applications</title>
		<imprint>
			<biblScope unit="volume">80</biblScope>
			<biblScope unit="page" from="11765" to="11788" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Automatic deception detection: Methods for finding fake news</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Conroy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">L</forename><surname>Rubin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the association for information science and technology</title>
		<imprint>
			<biblScope unit="volume">52</biblScope>
			<biblScope unit="page" from="1" to="4" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Automatically identifying fake news in popular twitter threads</title>
		<author>
			<persName><forename type="first">C</forename><surname>Buntain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Golbeck</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2017 IEEE International Conference on Smart Cloud (SmartCloud)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="208" to="215" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A hybrid model for fake news detection: Leveraging news content and user comments in fake news</title>
		<author>
			<persName><forename type="first">M</forename><surname>Albahar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IET Information Security</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="169" to="177" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">CSI: A hybrid deep model for fake news detection</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ruchansky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Seo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</title>
				<meeting>the 2017 ACM on Conference on Information and Knowledge Management</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="797" to="806" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Liu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1712.07709</idno>
		<title level="m">Exploiting tri-relationship for fake news detection</title>
				<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Detection of fake news in a new corpus for the Spanish language</title>
		<author>
			<persName><forename type="first">J.-P</forename><surname>Posadas-Durán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gómez-Adorno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J M</forename><surname>Escobar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent &amp; Fuzzy Systems</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="4869" to="4876" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Overview of FakeDeS at IberLEF 2021: Fake news detection in Spanish shared task</title>
		<author>
			<persName><forename type="first">H</forename><surname>Gómez-Adorno</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Posadas-Durán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">B</forename><surname>Enguix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">P</forename><surname>Capetillo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">67</biblScope>
			<biblScope unit="page" from="223" to="231" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Spanish pre-trained BERT model and evaluation data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cañete</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chaperon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fuentes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pérez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PML4DC at ICLR</title>
		<imprint>
			<biblScope unit="page" from="1" to="10" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Spanish pre-trained BERT model and evaluation data</title>
		<author>
			<persName><forename type="first">J</forename><surname>Cañete</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chaperon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fuentes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pérez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">PML4DC at ICLR</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">XLM-T: Multilingual language models in Twitter for sentiment analysis and beyond</title>
		<author>
			<persName><forename type="first">F</forename><surname>Barbieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Espinosa-Anke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Camacho-Collados</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the LREC</title>
				<meeting>the LREC<address><addrLine>Marseille, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="20" to="25" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Permutation importance: a corrected feature importance measure</title>
		<author>
			<persName><forename type="first">A</forename><surname>Altmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Toloşi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sander</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lengauer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Bioinformatics</title>
		<imprint>
			<biblScope unit="volume">26</biblScope>
			<biblScope unit="page" from="1340" to="1347" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
