<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Mela at CheckThat! 2024: Transferring Persuasion Detection from English to Arabic - A Multilingual BERT Approach. Notebook for the CheckThat! Lab at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Sara</forename><surname>Nabhani</surname></persName>
							<email>sara.nabhani.23@um.edu.mt</email>
							<affiliation key="aff0">
								<orgName type="department">Department of Artificial Intelligence</orgName>
								<orgName type="institution">University of Malta</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Abdur</forename><forename type="middle">Razzaq</forename><surname>Riyadh</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Artificial Intelligence</orgName>
								<orgName type="institution">University of Malta</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Mela at CheckThat! 2024: Transferring Persuasion Detection from English to Arabic - A Multilingual BERT Approach. Notebook for the CheckThat! Lab at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">EC95920D4BCDDB14F49D094AA74C8F17</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>arabic</term>
					<term>propaganda</term>
					<term>persuasion</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents our system's participation in CheckThat! Lab Task 3, which focuses on identifying persuasion techniques in Arabic text. We focused solely on Arabic, a low-resource language for this task. The task required identifying any persuasion technique applied to individual tokens within the text. Only the test set was provided for Arabic, without any corresponding development or training sets. Our research aimed to investigate how a resource-rich language like English could benefit the low-resource Arabic language in the context of persuasion detection. To that end, we utilized a multilingual BERT model that incorporated English and Arabic knowledge during its pre-training stage. Our system achieved first place on the Arabic leaderboard in the shared task. This result, achieved without training on any Arabic data, highlights the effectiveness of multilingual BERT models and demonstrates the potential of using resource-rich languages like English to enhance performance in low-resource languages such as Arabic for persuasion detection tasks.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Throughout history, propaganda has played a significant role in shaping public opinion. Propaganda uses various persuasive techniques to influence the way people think and act. With the advent of the digital age, the impact of propaganda has grown even stronger, and persuasive techniques are now widely used as tools for spreading propaganda through digital platforms. The increasing use of these techniques highlights the need for advanced methods to identify and critically evaluate them. This need has become urgent as the volume of digital content continues to rise, making it easier for propaganda to spread rapidly.</p><p>This paper describes our approach to CheckThat! task 3 <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>, which focuses on the identification of persuasion techniques within textual spans of Arabic articles. The goal of this task is to detect the various techniques used to persuade readers of Arabic texts. A significant challenge we faced was the lack of training data for Arabic: while the task provided training data for several languages, including English, French, Italian, German, Russian, and Polish, no training set was available for Arabic, making it difficult to develop a model trained specifically on Arabic texts. To overcome this challenge, we used the English training data to fine-tune a multilingual BERT model <ref type="bibr" target="#b2">[3]</ref> and then evaluated it on the Arabic test set. Thus, our study investigates the effectiveness of using a high-resource language, such as English, to enhance the performance of a model for a low-resource language like Arabic. In the context of the persuasion technique identification task, we aimed to demonstrate that a model trained on English data could still perform effectively when applied to Arabic texts. 
This approach is based on the idea of cross-lingual transfer learning, where knowledge gained in one language is transferred to another.</p><p>The paper is structured as follows: Section 2 reviews previous work in this area. Section 3 outlines our proposed system in detail. Section 4 presents the results and discusses our findings and their implications. Finally, Section 5 concludes the paper and suggests directions for future research.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Persuasion detection has traditionally focused on analyzing entire documents or paragraphs. However, a fairly recent study introduced the task of identifying persuasion techniques at the token level <ref type="bibr" target="#b3">[4]</ref>. Their work is significant because it provides one of the earlier datasets annotated with propaganda techniques at the character level, allowing researchers to employ multi-label, multi-class classification techniques for persuasion detection with finer granularity <ref type="bibr" target="#b3">[4]</ref>. The authors utilized BERT <ref type="bibr" target="#b2">[3]</ref> for this downstream task and evaluated it using a modified F1 score that accounts for partial matching.</p><p>Several recent studies have explored persuasion detection via shared tasks like SemEval and ArAIEval <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>. BERT-based classifiers are a popular choice for these tasks due to their effectiveness in text classification <ref type="bibr" target="#b4">[5]</ref>. Label distribution poses a challenge, as some persuasion techniques appear much less frequently than others; moreover, most tokens in the data carry no persuasion label at all. This is commonly addressed by techniques like class weighting during loss calculation <ref type="bibr" target="#b7">[8]</ref>. Additionally, multi-task architectures utilizing shared representations from pre-trained models like BERT have shown good results <ref type="bibr" target="#b8">[9]</ref>. For persuasion detection in Arabic, previous works are commonly based on AraBERT <ref type="bibr" target="#b9">[10]</ref>, <ref type="bibr" target="#b6">[7]</ref>. 
Propaganda detection in Arabic also benefits from preprocessing steps such as reversing code-switching and emoji conversion <ref type="bibr" target="#b10">[11]</ref>.</p><p>Pre-trained multilingual models are integral to NLP for low-resource languages. BERT <ref type="bibr" target="#b2">[3]</ref> itself offers two multilingual versions, cased and uncased, trained on over 100 languages. The training process leverages masked language modelling and next sentence prediction objectives, allowing the model to learn generalizable representations across languages. XLM <ref type="bibr" target="#b11">[12]</ref> is another family of multilingual models, which adds a translation language modelling objective alongside causal and masked language modelling during pre-training. Similarly, mBART <ref type="bibr" target="#b12">[13]</ref> builds upon the BART model <ref type="bibr" target="#b13">[14]</ref> with a multilingual pre-training objective: reconstructing the original text from a corrupted version in multiple languages, which gives mBART robust denoising capabilities.</p><p>The growing popularity of cross-lingual transfer learning offers a promising approach to improving performance on Arabic NLP tasks. This has been demonstrated by task-specific fine-tuning on English and French data to improve Arabic NLU performance <ref type="bibr" target="#b14">[15]</ref>. Similarly, for abstractive summarization of Arabic text, fine-tuning multilingual models (mBERT and mBART) on Hungarian or English before fine-tuning again on Arabic data yielded performance gains <ref type="bibr" target="#b15">[16]</ref>. These findings highlight the effectiveness of cross-lingual transfer learning in improving Arabic language processing.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>In this section, we describe the methodology employed for detecting persuasion techniques in Arabic articles using a multilingual BERT model fine-tuned on English data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Data Preparation</head><p>The data for this task was provided in the form of article files, with the corresponding labels given in a separate file. The label file contained information about the persuasion techniques used and the offsets indicating the span of text within the articles where these techniques were applied. There are 23 labels representing different persuasion techniques. These techniques are identified within the text at the token level, allowing for multi-label classification where each token can be associated with one or more techniques. This detailed annotation allows the model to recognize and classify multiple techniques within a single span of text.</p><p>For preprocessing, we first split the articles into paragraphs. This was done based on empty lines, effectively treating each paragraph as a separate instance. Once the articles were divided into paragraphs, we calculated the offsets for the persuasive spans within each paragraph. This allowed us to align the provided labels with the appropriate paragraphs.</p></div>
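The paragraph splitting and offset realignment described above can be sketched as follows. This is a minimal illustration, not the shared task's exact file format: the function names and the (technique, start, end) character-span tuples are our own assumptions.

```python
def split_into_paragraphs(article_text):
    """Split an article on empty lines, keeping each paragraph's
    start offset in the original article text (hypothetical helper)."""
    paragraphs = []
    offset = 0
    for block in article_text.split("\n\n"):
        if block.strip():
            paragraphs.append((offset, block))
        # advance past the block plus the two newlines consumed by split
        offset += len(block) + 2
    return paragraphs


def align_labels(paragraphs, labels):
    """Re-anchor article-level (technique, start, end) spans to
    paragraph-local character offsets, keeping only overlapping spans."""
    aligned = []
    for para_start, text in paragraphs:
        para_end = para_start + len(text)
        local = [
            (tech, max(s, para_start) - para_start, min(e, para_end) - para_start)
            for tech, s, e in labels
            if e > para_start and para_end > s
        ]
        aligned.append((text, local))
    return aligned
```

Each paragraph then becomes one training instance whose labels are expressed relative to the paragraph's own character positions.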
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Task Formulation</head><p>We formulated the task as a multi-class, multi-label token classification problem. This means that each token (or word) in the input text could be classified into one or more persuasion technique categories. This approach enables the model to recognize multiple techniques that may be present in a single span of text. After predicting labels for each of the tokens, consecutive tokens with the same labels define a span. Table <ref type="table">1</ref> demonstrates an example.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>An input sequence of length 256, where each token can be assigned one or more persuasion techniques. Consecutive tokens with the same labels form a span. In this example, {𝑥 3 , 𝑥 4 , 𝑥 5 } form a span for 𝑡 1 ; {𝑥 1 , 𝑥 2 } form a span for 𝑡 2 ; {𝑥 5 , 𝑥 6 } form a span for 𝑡 2 ; {𝑥 4 } is a span for 𝑡 3 ; and {𝑥 6 } is a span for 𝑡 3 . </p><formula xml:id="formula_0">𝑥 1 𝑥 2 𝑥 3 𝑥 4 𝑥 5 𝑥 6 ... 𝑥 256 𝑡 1 0 0 1 1 1 0 ... 0 𝑡 2 1 1 0 0 1 1 ... 0 𝑡 3 0 0 0 1 0 1 ... 0</formula></div>
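The decoding direction, from per-token label rows back to spans, can be sketched with a hypothetical helper like the one below. Token indices are 0-based here, whereas Table 1 numbers tokens from 1.

```python
def labels_to_spans(label_matrix, techniques):
    """Turn a techniques-by-tokens 0/1 matrix into (technique,
    start_token, end_token) spans: each maximal run of 1s in a
    technique's row becomes one span (inclusive token indices)."""
    spans = []
    for row, tech in zip(label_matrix, techniques):
        start = None
        for i, flag in enumerate(row):
            if flag and start is None:
                start = i          # a run of 1s begins
            elif not flag and start is not None:
                spans.append((tech, start, i - 1))  # the run ended at i-1
                start = None
        if start is not None:      # run extends to the final token
            spans.append((tech, start, len(row) - 1))
    return spans
```

Applied to the rows of Table 1 (truncated to six tokens), this recovers exactly the five spans listed in the caption.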
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Model and Training</head><p>We employed a multilingual BERT model for this task. Multilingual BERT (mBERT) is pre-trained on multiple languages, including Arabic and English, making it suitable for cross-lingual transfer learning.</p><p>For the loss calculation, we used binary cross-entropy, which is well-suited for multi-label classification tasks.</p><p>Given the lack of Arabic training data and the zero-shot nature of the task for Arabic, we used the provided English training data to fine-tune the mBERT model. Since no Arabic data was provided for validation, we utilized the Arabic validation dataset from the 2024 ArAIEval shared task on propaganda detection.<ref type="foot" target="#foot_0">1</ref> This validation dataset consists of 921 documents, with an average of 30.25 tokens per document, and follows the same labelling and annotation guidelines. The following hyperparameters were used during training:</p><p>• Learning Rate: 5e-5 • Number of Epochs: 75 • Maximum Input Length: 256 tokens</p><p>Additionally, we used pos_weights to adjust the loss calculation. This helps handle the class imbalance, ensuring that the model does not become biased towards the more frequent classes.</p></div>
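The loss described above corresponds to binary cross-entropy with a per-technique positive weight, as provided by framework implementations such as PyTorch's BCEWithLogitsLoss with its pos_weight argument. The dependency-free sketch below shows only the underlying computation, for illustration; it is not the training code itself.

```python
import math


def weighted_bce(logits, targets, pos_weight):
    """Mean binary cross-entropy over a batch of per-token logit rows,
    where positive (label = 1) terms for technique j are scaled by
    pos_weight[j] to counteract class imbalance."""
    total, count = 0.0, 0
    for logit_row, target_row in zip(logits, targets):
        for z, y, w in zip(logit_row, target_row, pos_weight):
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid of the logit
            total += -(w * y * math.log(p) + (1 - y) * math.log(1 - p))
            count += 1
    return total / count
```

Setting a technique's weight above 1 increases the penalty for missing its (rare) positive tokens, which is what keeps the model from collapsing onto the dominant "no technique" prediction.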
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results and Discussion</head><p>For evaluation, we used the modified F1-micro score, which accounts for partial matching of spans. All the scores reported in this paper use that modified F1. Our model, fine-tuned on the English training data and validated on the Arabic dev dataset, achieved an F1-micro score of 0.0998 on the dev set. When evaluated on the test set, the model's performance improved significantly, reaching an F1-micro score of 0.3009. The difference in performance between the dev and test sets could be attributed to domain-specific nuances and potential distributional differences in the test set.</p><p>Table <ref type="table" target="#tab_0">2</ref> gives a detailed breakdown of the F1-micro scores per technique on the validation set. The results reveal a significant variation in the model's performance across different persuasion techniques. Techniques such as Appeal to Time, Consequential Oversimplification, and Appeal to Values were detected more reliably, indicating that the model can effectively identify these patterns.</p><p>In contrast, techniques like Loaded Language, Straw Man, and Whataboutism showed moderate performance, while Questioning the Reputation, Repetition, False Dilemma-No Choice, and Appeal to Hypocrisy posed significant difficulties for the model. These techniques may be underrepresented in the training data, further complicating their detection.</p><p>The variation in performance can also be attributed to the nature and categorization of the techniques. Techniques that belong to the same category, such as different types of logical fallacies or emotional appeals, may share linguistic features that the model struggles to distinguish. For example, both Straw Man and Whataboutism involve misrepresentation or diversion tactics, which could confuse the model. 
On the other hand, techniques like Appeal to Values and Appeal to Popularity, which are more explicit and direct, tend to be easier for the model to identify.</p><p>It is important to note that no Arabic data was available for training; we relied on the English training data to fine-tune the multilingual BERT model. This cross-lingual transfer approach introduces additional challenges due to differences in linguistic structure and contextual usage between English and Arabic.</p></div>
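The modified F1 used above gives a prediction partial credit for overlapping a same-technique gold span. One way to sketch that idea is shown below; this is an approximation of the overlap-credit scheme for illustration only, and the official task scorer defines the exact normalization and matching rules.

```python
def overlap(a, b):
    """Character overlap between two (start, end) half-open intervals."""
    return max(0, min(a[1], b[1]) - max(a[0], b[0]))


def partial_match_f1(pred, gold):
    """Span-level F1 where each span earns credit proportional to its
    character overlap with same-technique spans on the other side.
    Spans are (technique, start, end) tuples with half-open offsets."""
    def credit(spans_a, spans_b):
        # each span in spans_a collects overlap normalized by its own length
        total = 0.0
        for ta, sa, ea in spans_a:
            for tb, sb, eb in spans_b:
                if ta == tb:
                    total += overlap((sa, ea), (sb, eb)) / (ea - sa)
        return total

    p = credit(pred, gold) / len(pred) if pred else 0.0
    r = credit(gold, pred) / len(gold) if gold else 0.0
    return 2 * p * r / (p + r) if (p + r) else 0.0
```

For example, predicting only the first half of a gold span yields precision 1.0 but recall 0.5, so the span scores 2/3 rather than 0, which is what makes the measure forgiving of boundary errors.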
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion and Future Work</head><p>With the increasing sophistication of persuasion techniques, particularly in Arabic-language content, it is crucial to focus research efforts on this area. This study investigated the effectiveness of a multilingual BERT model fine-tuned on English data for the task of Arabic persuasion detection. English was selected as the training language due to its extensive resources for Natural Language Processing (NLP) tasks, including propaganda detection. Our aim was to evaluate how these abundant resources could be leveraged to benefit languages with fewer resources, such as Arabic. This work achieved first place for Arabic on the test-set leaderboard, demonstrating the potential of cross-lingual transfer learning <ref type="bibr" target="#b16">[17]</ref>. However, there is still room for improvement.</p><p>Future work can explore how other high-resource languages affect performance on Arabic. Several strategies might further enhance the model. Increasing the diversity and quantity of training data, particularly for techniques where performance was low, through data augmentation or the collection of additional labelled data, can help balance the dataset. Advanced fine-tuning techniques like focal loss can adjust the loss function to focus more on hard-to-classify examples, while dynamic sampling strategies can address class imbalance.</p><p>Additionally, incorporating more sophisticated features such as syntactic and semantic information, part-of-speech tags, or dependency parsing can provide the model with greater context and improve classification accuracy. Exploring alternative hidden layer representations within BERT may also yield better classification performance. 
By addressing these areas, future research can further improve the accuracy and robustness of models in detecting a wide range of persuasion techniques, ultimately enhancing their utility in real-world applications.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 2</head><label>2</label><figDesc>F1-micro scores per technique on the validation set.</figDesc><table><row><cell>Technique</cell><cell cols="2">F1-micro Technique</cell><cell>F1-micro</cell></row><row><cell>Appeal to Values</cell><cell>0.6207</cell><cell>Questioning the Reputation</cell><cell>0.0000</cell></row><row><cell>Loaded Language</cell><cell>0.1855</cell><cell>Straw Man</cell><cell>0.2138</cell></row><row><cell>Consequential Oversimplification</cell><cell>0.6897</cell><cell>Repetition</cell><cell>0.0292</cell></row><row><cell>Causal Oversimplification</cell><cell>0.0542</cell><cell>Guilt by Association</cell><cell>0.0443</cell></row><row><cell>Appeal to Hypocrisy</cell><cell>0.0114</cell><cell>Conversation Killer</cell><cell>0.1724</cell></row><row><cell>False Dilemma-No Choice</cell><cell>0.0172</cell><cell>Whataboutism</cell><cell>0.2759</cell></row><row><cell>Slogans</cell><cell>0.0661</cell><cell>Obfuscation-Vagueness-Confusion</cell><cell>0.1034</cell></row><row><cell>Name Calling-Labeling</cell><cell>0.1257</cell><cell>Flag Waving</cell><cell>0.0709</cell></row><row><cell>Doubt</cell><cell>0.0483</cell><cell>Appeal to Fear-Prejudice</cell><cell>0.0472</cell></row><row><cell>Exaggeration-Minimisation</cell><cell>0.0983</cell><cell>Red Herring</cell><cell>0.5862</cell></row><row><cell>Appeal to Popularity</cell><cell>0.5517</cell><cell>Appeal to Authority</cell><cell>0.0949</cell></row><row><cell>Appeal to Time</cell><cell>0.8276</cell><cell></cell><cell></cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://araieval.gitlab.io/task1/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://mundus-web.coli.uni-saarland.de/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We acknowledge the assistance of the LT-Bridge Project (GA 952194) and DFKI for the use of their Virtual Laboratory. The authors have also been supported financially by the EMLCT<ref type="foot" target="#foot_1">2</ref> programme throughout this work.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m">Working Notes of CLEF 2024 -Conference and Labs of the Evaluation Forum, CLEF 2024</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<meeting><address><addrLine>Grenoble, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Piskorski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Stefanovitch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Alam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Campos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dimitrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jorge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pollak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ribin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Fijavž</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hasanain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Guimarães</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">F</forename><surname>Pacheco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sartori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Silvano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zwitter Vitez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Koychev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</author>
		<title level="m">Overview of the CLEF-2024 CheckThat! lab task 3 on persuasion techniques</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<title level="m">Bert: Pre-training of deep bidirectional transformers for language understanding</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Experiments in Detecting Persuasion Techniques in the News</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D S</forename><surname>Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1911.06815</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">SemEval-2021 task 6: Detection of persuasion techniques in texts and images</title>
		<author>
			<persName><forename type="first">D</forename><surname>Dimitrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bin Ali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shaar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Alam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Silvestri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Firooz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.semeval-1.7</idno>
		<ptr target="https://aclanthology.org/2021.semeval-1.7" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Palmer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Schneider</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Schluter</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Emerson</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Herbelot</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">X</forename><surname>Zhu</surname></persName>
		</editor>
		<meeting>the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="70" to="98" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">SemEval-2023 Task 3: Detecting the Category, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup</title>
		<author>
			<persName><forename type="first">J</forename><surname>Piskorski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Stefanovitch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.semeval-1.317</idno>
		<ptr target="https://aclanthology.org/2023.semeval-1.317" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the The 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics</title>
				<meeting>the The 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="2343" to="2361" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><surname>Hasanain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Alam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Mubarak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Abdaljalil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zaghouani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D S</forename><surname>Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Freihat</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2311.03179</idno>
		<title level="m">Araieval shared task: Persuasion techniques and disinformation detection in arabic text</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Gautam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Mamidi</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2106.00240</idno>
		<title level="m">Volta at semeval-2021 task 6: Towards detecting persuasive texts and images using textual and multimodal ensemble</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Homados at semeval-2021 task 6: Multi-task learning for propaganda detection</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kaczyński</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Przybyła</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.semeval-1.141</idno>
		<ptr target="https://aclanthology.org/2021.semeval-1.141" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</title>
				<meeting>the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1027" to="1031" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">AraBERT: Transformer-based model for Arabic language understanding</title>
		<author>
			<persName><forename type="first">W</forename><surname>Antoun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Baly</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajj</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/2020.osact-1.2" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, European Language Resource Association</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Al-Khalifa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Magdy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Darwish</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Elsayed</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Mubarak</surname></persName>
		</editor>
		<meeting>the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, European Language Resource Association<address><addrLine>Marseille, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="9" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">DetectiveRedasers at ArAIEval shared task: Leveraging transformer ensembles for Arabic deception detection</title>
		<author>
			<persName><forename type="first">B</forename><surname>Tuck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Qachfar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Boumber</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Verma</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2023.arabicnlp-1.45</idno>
		<ptr target="https://aclanthology.org/2023.arabicnlp-1.45" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of ArabicNLP 2023, Association for Computational Linguistics</title>
				<meeting>ArabicNLP 2023, Association for Computational Linguistics<address><addrLine>Singapore (Hybrid)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="494" to="501" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Conneau</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1901.07291</idno>
		<title level="m">Cross-lingual language model pretraining</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Multilingual denoising pre-training for neural machine translation</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Edunov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazvininejad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Transactions of the Association for Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="726" to="742" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<title level="m" type="main">BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension</title>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ghazvininejad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mohamed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1910.13461</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Cross-lingual transfer for low-resource Arabic language understanding</title>
		<author>
			<persName><forename type="first">K</forename><surname>Abboud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Golovneva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dipersio</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.wanlp-1.21</idno>
		<ptr target="https://aclanthology.org/2022.wanlp-1.21" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Bouamor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Al-Khalifa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Darwish</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">O</forename><surname>Rambow</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Bougares</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Abdelali</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Tomeh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Khalifa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Zaghouani</surname></persName>
		</editor>
		<meeting>the Seventh Arabic Natural Language Processing Workshop (WANLP), Association for Computational Linguistics<address><addrLine>Abu Dhabi, United Arab Emirates (Hybrid)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="225" to="237" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Cross-lingual fine-tuning for abstractive Arabic text summarization</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kahla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">G</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Novák</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)</title>
				<meeting>the International Conference on Recent Advances in Natural Language Processing (RANLP 2021)</meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="655" to="663" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The CLEF-2024 CheckThat! Lab: Check-worthiness, subjectivity, persuasion, roles, authorities, and adversarial robustness</title>
		<author>
			<persName><forename type="first">A</forename><surname>Barrón-Cedeño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Alam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Elsayed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Przybyła</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Struß</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Haouari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hasanain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ruggeri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Suwaileh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Information Retrieval</title>
				<editor>
			<persName><forename type="first">N</forename><surname>Goharian</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Tonellotto</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Lipani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Mcdonald</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Macdonald</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Ounis</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature Switzerland</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="449" to="458" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
