<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Mela at CheckThat! 2024: Transferring Persuasion Detection from English to Arabic - A Multilingual BERT Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Sara Nabhani</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Md Abdur Razzaq Riyadh</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Artificial Intelligence, University of Malta</institution>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <abstract>
<p>This paper presents our system for the CheckThat! Lab Task 3, which focuses on identifying persuasion techniques in Arabic text. We focused solely on Arabic, a low-resource language for this task. The task required identifying any persuasion technique applied to individual tokens within the text. Only a test set was provided for Arabic, without any corresponding development or training sets. Our research aimed to investigate how a resource-rich language like English could benefit the low-resource Arabic language in the context of persuasion detection. To that end, we utilized a multilingual BERT model that incorporated English and Arabic knowledge during its pre-training stage. Our system achieved first place on the Arabic leaderboard in the shared task. This result, achieved without training on any Arabic data, highlights the effectiveness of multilingual BERT models and demonstrates the potential of using resource-rich languages like English to enhance performance in low-resource languages such as Arabic for persuasion detection tasks.</p>
      </abstract>
      <kwd-group>
        <kwd>arabic</kwd>
        <kwd>propaganda</kwd>
        <kwd>persuasion</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Throughout history, propaganda has played a significant role in shaping public opinion. Propaganda
uses various persuasive techniques to influence the way people think and act. With the advent of the
digital age, the impact of propaganda has grown even stronger. Nowadays, persuasive techniques are
widely used as tools for spreading propaganda through digital platforms. The increasing use of these
persuasion techniques highlights the need for advanced methods to identify and critically evaluate
them. This need has become urgent as the volume of digital content continues to rise, making it easier
for propaganda to spread rapidly.</p>
      <p>
        This paper describes our approach to CheckThat! task 3 [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ], which focuses on the identification of
persuasion techniques within the textual spans of Arabic articles. The goal of this task is to detect
various techniques used to persuade readers within Arabic texts. However, a significant challenge
we faced was the lack of training data for Arabic. While the task provided training data for several
languages, including English, French, Italian, German, Russian, and Polish, there was no training set
available for Arabic. This absence of training data made it difficult to develop a model specifically
trained on Arabic texts. To overcome this challenge, we used the training data from the English set
to fine-tune a multilingual BERT model [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] and then evaluated it on the Arabic test set. Thus, our
study investigates the effectiveness of using a high-resource language, such as English, to enhance
the performance of a model for a low-resource language like Arabic. In the context of the persuasion
technique identification task, we aimed to demonstrate that a model trained on English data could still
perform effectively when applied to Arabic texts. This approach is based on the idea of cross-lingual
transfer learning, where knowledge gained from one language can be transferred to another language.
      </p>
      <p>The paper is structured as follows: Section 2 reviews previous work in this area. Section 3 outlines
our proposed system in detail. Section 4 presents the results and discusses our findings and their
implications. Finally, Section 5 concludes the paper and suggests directions for future research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Persuasion detection has traditionally focused on analyzing entire documents or paragraphs. However,
a fairly recent study introduced the task of identifying persuasion techniques at the token level [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ].
Their work is significant because it provides one of the earlier datasets annotated with propaganda
techniques at the character level. This allows researchers to employ multi-label, multi-class classification
techniques for persuasion detection with finer granularity [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The authors utilized BERT [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] for this
downstream task and evaluated it using a modified F1 score that accounts for partial matching.
      </p>
      <p>
        Several recent studies have explored persuasion detection via shared tasks like SemEval and ArAIEval
[
        <xref ref-type="bibr" rid="ref5 ref6 ref7">5, 6, 7</xref>
        ]. BERT-based classifiers are a popular choice for these tasks due to their effectiveness in text
classification [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. There is a challenge with label distribution, as some persuasion techniques appear
much less frequently than others. Moreover, most tokens within the data lack any persuasion labels.
This is addressed by employing techniques like class weighting during loss calculation [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Additionally,
multi-task architectures utilizing shared representations from pre-trained models like BERT have shown
good results [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. For persuasion detection in Arabic, previous works are commonly based on AraBERT
[10] [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Propaganda detection in Arabic also benefits from preprocessing steps such as reversing
code-switching and emoji conversion [11].
      </p>
      <p>
        Pre-trained multilingual models are integral to NLP tasks for low-resource languages. BERT
[
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] itself offers two multilingual versions: cased and uncased. These models are impressive in their
scope, being trained on over 100 languages. The training process leverages masked language modelling
and next sentence prediction objectives, allowing the model to learn generalizable representations across
languages. XLM [12] is another set of multilingual models that uses a translation objective alongside
causal and masked language modeling for pre-training. Similarly, mBART [13] builds upon the BART
model [14] by using a multilingual pre-training objective. The objective is reconstructing the original
text from a corrupted version in multiple languages, allowing mBART to develop robust denoising
capabilities.
      </p>
      <p>The growing popularity of cross-lingual transfer learning offers a promising approach to improving
performance on Arabic NLP tasks. This is demonstrated by employing task-specific fine-tuning on
English and French data to improve Arabic NLU performance [15]. Similarly, for abstractive
summarization of Arabic text, fine-tuning multilingual models (mBERT and mBART) on Hungarian or English
before fine-tuning again on Arabic data demonstrated performance gains [16]. These findings highlight
the effectiveness of cross-lingual transfer learning in improving the performance of Arabic language
processing tasks.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Methodology</title>
      <p>In this section, we describe the methodology employed for detecting persuasion techniques in Arabic
articles using a multilingual BERT model fine-tuned on English data.</p>
      <sec id="sec-3-1">
        <title>3.1. Data Preparation</title>
        <p>The data for this task was provided in the form of article files, with the corresponding labels given in a
separate file. The label file contained information about the persuasion techniques used and the offsets
indicating the span of text within the articles where these techniques were applied. There are 23 labels
representing different persuasion techniques. These techniques are identified within the text at the
token level, allowing for multi-label classification where each token can be associated with one or more
techniques. This detailed annotation allows the model to recognize and classify multiple techniques
within a single span of text.</p>
        <p>For preprocessing, we first split the articles into paragraphs. This was done based on empty lines,
effectively treating each paragraph as a separate instance. Once the articles were divided into paragraphs,
we calculated the offsets for the persuasive spans within each paragraph. This allowed us to align the
provided labels with the appropriate paragraphs.</p>
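<p>The paragraph splitting and offset realignment described above can be sketched as follows. This is a simplified illustration with our own function names; the system's actual preprocessing code may differ:</p>

```python
def split_into_paragraphs(article_text):
    """Split an article on empty lines, keeping each paragraph's
    start offset within the original article."""
    paragraphs = []
    offset = 0
    for block in article_text.split("\n\n"):
        if block.strip():
            paragraphs.append((offset, block))
        offset += len(block) + 2  # skip past the "\n\n" separator
    return paragraphs


def align_labels(paragraphs, labels):
    """Re-express article-level (start, end, technique) spans as
    paragraph-level offsets."""
    aligned = []
    for para_start, text in paragraphs:
        para_end = para_start + len(text)
        for start, end, technique in labels:
            if para_start <= start and end <= para_end:
                aligned.append((text, start - para_start, end - para_start, technique))
    return aligned
```

<p>For instance, a label starting at article offset 23 inside the second paragraph of a two-paragraph article is re-anchored to offset 0 within that paragraph.</p>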
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Task Formulation</title>
        <p>We formulated the task as a multi-class, multi-label token classification problem. This means that each
token (or word) in the input text could be classified into one or more persuasion technique categories.
This approach enables the model to recognize multiple techniques that may be present in a single span
of text. After predicting labels for each of the tokens, consecutive tokens with the same labels define a
span. Table 1 demonstrates an example.</p>
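<p>The span-formation step can be sketched as follows: each token carries a set of predicted labels, and consecutive tokens sharing a label are merged into one labelled span (illustrative code with our own names, not the system's actual implementation):</p>

```python
def tokens_to_spans(tokens, token_labels):
    """Merge consecutive tokens that share a label into labelled spans.

    tokens: list of (start, end) character offsets, in order
    token_labels: one set of technique labels per token
    Returns (start, end, label) triples.
    """
    spans = []
    open_spans = {}  # label -> start offset of the span being built
    prev_end = 0
    for (start, end), labels in zip(tokens, token_labels):
        # close any span whose label does not continue on this token
        for label in list(open_spans):
            if label not in labels:
                spans.append((open_spans.pop(label), prev_end, label))
        # open a span for every label that just appeared
        for label in labels:
            if label not in open_spans:
                open_spans[label] = start
        prev_end = end
    # close spans that run to the end of the text
    for label, span_start in open_spans.items():
        spans.append((span_start, prev_end, label))
    return spans
```

<p>Because labels are tracked per technique, overlapping spans of different techniques fall out naturally from the multi-label formulation.</p>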
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Model and Training</title>
        <p>We employed a multilingual BERT model for this task. Multilingual BERT (mBERT) is pre-trained on
multiple languages, including Arabic and English, making it suitable for cross-lingual transfer learning.
For the loss calculation, we used binary cross-entropy, which is well-suited for multi-label classification
tasks.</p>
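<p>Treating each of the 23 techniques as an independent binary decision per token is what makes binary cross-entropy the natural loss: a sigmoid is applied per label and the per-label losses are combined. A minimal pure-Python sketch of the per-token loss (in practice this is typically torch.nn.BCEWithLogitsLoss; averaging over labels is one of several reduction choices):</p>

```python
import math

def binary_cross_entropy(logits, targets):
    """Multi-label BCE for a single token: one independent sigmoid
    per technique, averaged over the label dimension."""
    total = 0.0
    for z, y in zip(logits, targets):
        p = 1.0 / (1.0 + math.exp(-z))  # sigmoid
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(logits)
```

<p>With an uninformative logit of 0.0 the per-label loss is ln 2 ≈ 0.693 regardless of the target; confident correct predictions drive it toward zero.</p>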
        <p>Given the lack of Arabic training data and the zero-shot nature of the task for Arabic, we used the
provided English training data to fine-tune the mBERT model. Since there was no Arabic data provided
for validation, we utilized the Arabic validation dataset from the ArAIEval shared task on propaganda
detection 2024. This validation dataset consists of 921 documents, with an average of 30.25 tokens per
document, and follows the same labelling and annotation guidelines. The following hyperparameters
were used during training:
• Learning Rate: 5e-5
• Number of Epochs: 75
• Maximum Input Length: 256 tokens</p>
        <p>Additionally, we utilized pos_weights to adjust the loss calculation. This helps in handling the class
imbalance, ensuring that the model does not become biased towards the more frequent classes.</p>
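<p>The paper does not spell out how the pos_weight values were derived; a common convention, assumed here, is the per-label negative-to-positive ratio, matching the semantics of the pos_weight argument of torch.nn.BCEWithLogitsLoss (the positive term for a label is scaled by its weight, so rare techniques are penalized more heavily when missed):</p>

```python
def compute_pos_weights(label_matrix):
    """One weight per technique: (#negative tokens) / (#positive tokens).

    label_matrix: list of per-token 0/1 label vectors.
    Rare labels get large weights, counteracting class imbalance.
    """
    n_tokens = len(label_matrix)
    n_labels = len(label_matrix[0])
    weights = []
    for j in range(n_labels):
        positives = sum(row[j] for row in label_matrix)
        negatives = n_tokens - positives
        weights.append(negatives / positives if positives else 1.0)
    return weights
```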
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results and Discussion</title>
      <p>For evaluation, we used the modified F1-micro score, which accounts for partial matching of the spans.
All the scores reported in this paper use that modified F1. Our model, fine-tuned using the English
training data and validated on the Arabic dev dataset, achieved an F1-micro score of 0.0998 on the
dev set. When evaluated on the test set, the model’s performance improved significantly, achieving
an F1-micro score of 0.3009. The difference in performance between the dev and test sets could be
attributed to domain-specific nuances and potential distributional differences in the test set.</p>
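<p>The official scorer follows the partial-matching scheme of [4] and is more involved than what fits here; the toy version below is our own simplification, only meant to illustrate how partial span overlap earns fractional credit toward precision and recall:</p>

```python
def partial_match_f1(pred_spans, gold_spans):
    """Simplified partial-matching F1 over (start, end, label) spans.

    Each span is credited with its best character overlap against a
    same-label span on the other side, normalized by its own length.
    """
    def overlap(a, b):
        return max(0, min(a[1], b[1]) - max(a[0], b[0]))

    def credit(spans, others):
        total = 0.0
        for s in spans:
            best = max((overlap(s, o) for o in others if s[2] == o[2]), default=0)
            total += best / (s[1] - s[0])
        return total

    precision = credit(pred_spans, gold_spans) / len(pred_spans) if pred_spans else 0.0
    recall = credit(gold_spans, pred_spans) / len(gold_spans) if gold_spans else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0
```

<p>Predicting only the first half of a gold span, for instance, yields precision 1.0 but recall 0.5, for an F1 of 2/3 rather than the 0 that an exact-match metric would assign.</p>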
      <p>Below is a detailed breakdown of the F1-micro scores per technique on the validation set, as shown
in Table 2.</p>
      <p>The results reveal significant variation in the model’s performance across different persuasion
techniques. Techniques such as Appeal to Time, Consequential Oversimplification, and Appeal to
Values were detected more reliably, indicating that the model can effectively identify these patterns.</p>
      <p>In contrast, techniques like Loaded Language, Straw Man, and Whataboutism showed moderate
performance. Techniques like Questioning the Reputation, Repetition, False Dilemma-No Choice, and
Appeal to Hypocrisy posed significant difficulties for the model. These techniques may be
underrepresented in the training data, further complicating their detection.</p>
      <p>The variation in performance can also be attributed to the nature and categorization of the techniques.
Techniques that belong to the same category, such as different types of logical fallacies or emotional
appeals, may share linguistic features that the model struggles to distinguish. For example, both Straw
Man and Whataboutism involve misrepresentation or diversion tactics, which could confuse the model.
On the other hand, techniques like Appeal to Values and Appeal to Popularity, which are more explicit
and direct, tend to be easier for the model to identify.</p>
      <p>It’s important to note that no Arabic data was available for training. We relied on the English training
data to fine-tune the multilingual BERT model. This cross-lingual transfer learning approach introduces
additional challenges due to differences in linguistic structures and contextual usage between English
and Arabic.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion and Future Work</title>
      <p>With the increasing sophistication of persuasion techniques, particularly in Arabic-language content, it
is crucial to focus research efforts on this area. This study investigated the effectiveness of a multilingual
BERT model fine-tuned on English data for the task of Arabic persuasion detection. English was selected
as the training language due to its extensive resources in Natural Language Processing (NLP) tasks,
including propaganda detection. Our aim was to evaluate how these abundant resources could be
leveraged to benefit languages with fewer resources, such as Arabic. This work achieved first place for
Arabic on the leaderboard for the test set, demonstrating the potential of cross-lingual transfer learning
[17]. However, there is still room for improvement.</p>
      <p>Future work can explore how other high-resource languages impact performance on Arabic. There
might be various strategies to enhance the model’s performance. Increasing the diversity and quantity
of training data, particularly for techniques where performance was low, through data augmentation or
the collection of additional labelled data, can help balance the dataset. Advanced fine-tuning techniques
like focal loss can adjust the loss function to focus more on hard-to-classify examples, while dynamic
sampling strategies can address class imbalance.</p>
      <p>Additionally, incorporating more sophisticated features such as syntactic and semantic information,
part-of-speech tags, or dependency parsing can provide the model with greater context and improve
classification accuracy. Exploring alternative hidden layer representations within BERT may also yield
better classification performance. By addressing these areas, future research can further improve the
accuracy and robustness of models in detecting a wide range of persuasion techniques, ultimately
enhancing their utility in real-world applications.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>We acknowledge the assistance of the LT-Bridge Project (GA 952194) and DFKI for the use of their
Virtual Laboratory. The authors have also been supported financially by the EMLCT programme during
this work.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>[10] W. Antoun, F. Baly, H. Hajj, AraBERT: Transformer-based model for Arabic language understanding, in: H. Al-Khalifa, W. Magdy, K. Darwish, T. Elsayed, H. Mubarak (Eds.), Proceedings of the 4th Workshop on Open-Source Arabic Corpora and Processing Tools, with a Shared Task on Offensive Language Detection, European Language Resources Association, Marseille, France, 2020, pp. 9–15. URL: https://aclanthology.org/2020.osact-1.2.</p>
      <p>[11] B. Tuck, F. Qachfar, D. Boumber, R. Verma, DetectiveRedasers at ArAIEval shared task: Leveraging transformer ensembles for Arabic deception detection, in: Proceedings of ArabicNLP 2023, Association for Computational Linguistics, Singapore (Hybrid), 2023, pp. 494–501. URL: https://aclanthology.org/2023.arabicnlp-1.45. doi:10.18653/v1/2023.arabicnlp-1.45.</p>
      <p>[12] G. Lample, A. Conneau, Cross-lingual language model pretraining, arXiv preprint arXiv:1901.07291 (2019).</p>
      <p>[13] Y. Liu, J. Gu, N. Goyal, X. Li, S. Edunov, M. Ghazvininejad, M. Lewis, L. Zettlemoyer, Multilingual denoising pre-training for neural machine translation, Transactions of the Association for Computational Linguistics 8 (2020) 726–742.</p>
      <p>[14] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, L. Zettlemoyer, BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension, arXiv preprint arXiv:1910.13461 (2019).</p>
      <p>[15] K. Abboud, O. Golovneva, C. DiPersio, Cross-lingual transfer for low-resource Arabic language understanding, in: H. Bouamor, H. Al-Khalifa, K. Darwish, O. Rambow, F. Bougares, A. Abdelali, N. Tomeh, S. Khalifa, W. Zaghouani (Eds.), Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), Association for Computational Linguistics, Abu Dhabi, United Arab Emirates (Hybrid), 2022, pp. 225–237. URL: https://aclanthology.org/2022.wanlp-1.21. doi:10.18653/v1/2022.wanlp-1.21.</p>
      <p>[16] M. Kahla, Z. G. Yang, A. Novák, Cross-lingual fine-tuning for abstractive Arabic text summarization, in: Proceedings of the International Conference on Recent Advances in Natural Language Processing (RANLP 2021), 2021, pp. 655–663.</p>
      <p>[17] A. Barrón-Cedeño, F. Alam, T. Chakraborty, T. Elsayed, P. Nakov, P. Przybyła, J. M. Struß, F. Haouari, M. Hasanain, F. Ruggeri, X. Song, R. Suwaileh, The CLEF-2024 CheckThat! Lab: Check-worthiness, subjectivity, persuasion, roles, authorities, and adversarial robustness, in: N. Goharian, N. Tonellotto, Y. He, A. Lipani, G. McDonald, C. Macdonald, I. Ounis (Eds.), Advances in Information Retrieval, Springer Nature Switzerland, Cham, 2024, pp. 449–458.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] G. Faggioli, N. Ferro, P. Galuščáková, A. García Seco de Herrera (Eds.), Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum, CLEF 2024, Grenoble, France, 2024.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] J. Piskorski, N. Stefanovitch, F. Alam, R. Campos, D. Dimitrov, A. Jorge, S. Pollak, N. Ribin, Z. Fijavž, M. Hasanain, N. Guimarães, A. F. Pacheco, E. Sartori, P. Silvano, A. V. Zwitter, I. Koychev, N. Yu, P. Nakov, G. Da San Martino, Overview of the CLEF-2024 CheckThat! lab task 3 on persuasion techniques, in: [1], 2024.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] S. Yu, G. D. S. Martino, P. Nakov, Experiments in Detecting Persuasion Techniques in the News, 2019. URL: http://arxiv.org/abs/1911.06815, arXiv:1911.06815 [cs].</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] D. Dimitrov, B. Bin Ali, S. Shaar, F. Alam, F. Silvestri, H. Firooz, P. Nakov, G. Da San Martino, SemEval-2021 task 6: Detection of persuasion techniques in texts and images, in: A. Palmer, N. Schneider, N. Schluter, G. Emerson, A. Herbelot, X. Zhu (Eds.), Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics, Online, 2021, pp. 70–98. URL: https://aclanthology.org/2021.semeval-1.7. doi:10.18653/v1/2021.semeval-1.7.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] J. Piskorski, N. Stefanovitch, G. Da San Martino, P. Nakov, SemEval-2023 Task 3: Detecting the Category, the Framing, and the Persuasion Techniques in Online News in a Multi-lingual Setup, in: Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), Association for Computational Linguistics, Toronto, Canada, 2023, pp. 2343–2361. URL: https://aclanthology.org/2023.semeval-1.317. doi:10.18653/v1/2023.semeval-1.317.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>[7] M. Hasanain, F. Alam, H. Mubarak, S. Abdaljalil, W. Zaghouani, P. Nakov, G. D. S. Martino, A. A. Freihat, ArAIEval shared task: Persuasion techniques and disinformation detection in Arabic text, arXiv preprint arXiv:2311.03179 (2023).</mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] K. Gupta, D. Gautam, R. Mamidi, Volta at SemEval-2021 task 6: Towards detecting persuasive texts and images using textual and multimodal ensemble (2021). URL: http://arxiv.org/abs/2106.00240, arXiv:2106.00240 [cs].</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] K. Kaczyński, P. Przybyła, Homados at SemEval-2021 task 6: Multi-task learning for propaganda detection, in: Proceedings of the 15th International Workshop on Semantic Evaluation (SemEval-2021), Association for Computational Linguistics, Online, 2021, pp. 1027–1031. URL: https://aclanthology.org/2021.semeval-1.141. doi:10.18653/v1/2021.semeval-1.141.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>