<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Cross-lingual Transfer Learning for Detecting Negative Campaign in Israeli Municipal Elections: a Case Study</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Natalia Vanetik</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Marina Litvak</string-name>
          <email>litvak.marina@gmail.com</email>
          <xref ref-type="aff" rid="aff1">1</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Lin Miao</string-name>
          <email>linmiao@bistu.edu.cn</email>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computer Science, Beijing Information Science and Technology University</institution>
          ,
          <addr-line>Beijing</addr-line>
          ,
          <country country="CN">China</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Software Engineering, Shamoon College of Engineering (SCE)</institution>
          ,
          <addr-line>Beer-Sheva</addr-line>
          ,
          <country country="IL">Israel</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>In: R. Campos, A. Jorge, A. Jatowt, S. Bhatia, M. Litvak (eds.): Proceedings of the Text2Story'23 Workshop</institution>
          ,
          <addr-line>Dublin</addr-line>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>Political competitions are complex settings where candidates use campaigns to promote their chances of being elected. As we can observe recently, some candidates choose to focus on a negative campaign that emphasizes the negative aspects of the competing person and is aimed at offending opponents or the opponents' supporters. The big challenge in this area is the lack of annotated datasets for training efficient classifiers. Therefore, transfer learning from other relevant domains and other languages could be very useful for this task. Considering the recent success of meta-learning in domain adaptation, we apply it to our task of utilizing available datasets from different domains and languages. This work explores the negative campaign detection task from multiple perspectives: the efficiency of different text representations and classification models, and the effect of transfer learning from offensive language detection in different languages on negative campaign detection in Hebrew. We demonstrate that the lack of training data for negative campaign detection in a low-resourced language such as Hebrew can be compensated, to some extent, by available datasets for offensive language detection in the same and other languages. We report an empirical case study of political campaigns in Israeli municipal elections. Our dataset is freely available for researchers at https://github.com/NataliaVanetik1/TONIC.</p>
      </abstract>
      <kwd-group>
        <kwd>negative campaign</kwd>
        <kwd>text classification</kwd>
        <kwd>Hebrew</kwd>
        <kwd>BERT</kwd>
        <kwd>meta-learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Political competitions aim at promoting the candidates' chances of being elected. The main decision in such competitions concerns the nature of the campaign, that is, whether a candidate should run a positive campaign that highlights the candidate's achievements, leadership skills, and future programs, or focus on a negative campaign that emphasizes the negative sides of the opponent. In recent years, we have witnessed the intensive use of negative campaigns by political candidates, which target the weaknesses and failures of the opponents while promising to do the opposite [2, 3, 4].</p>
      <p>The application of language technologies in the political sciences has recently been in high demand [5]. However, despite some works dedicated to the analysis of election-related materials [6, 7, 8], we were unable to find any work on automated negative campaign analysis and detection.</p>
      <p>
        Our work reports the results of extensive experiments aimed at answering multiple research questions: (1) Which supervised model and representation are most effective at automatically detecting negative campaigns in Hebrew? (2) Can we effectively detect negative campaigns with a model trained to identify offensive language? (3) Can meta-learning with different domains and languages boost negative campaign detection in Hebrew?
      </p>
      <p>We adopt and extend the representation models applied in [9, 10, 11], where the gain of semantic vectors and sentiment knowledge for offensive language and negative campaign detection was empirically shown. In order to increase classification accuracy in a mono-domain setting, we use knowledge about cities, country districts (regions), and politicians. We use this information in a meta-learning setting as well. In [10], we have also shown the efficiency of transfer learning for cross-lingual training of offensive language classifiers with Semitic languages. We adopt and explore this idea in this study. In contrast to [11], the lack of Hebrew datasets is addressed in this study by using cross-domain and cross-lingual transfer learning.</p>
      <p>
        Our contribution is multi-fold: (1) we experimented with different representations and classifiers for efficient encoding and classification of Hebrew texts for negative campaign detection; (2) we explored the efficiency of meta-learning in mono-domain experiments; (3) we explored the efficiency of transfer learning from offensive language detection in different languages to negative campaign detection; and (4) we explored the gain of meta-learning vs. conventional fine-tuning of language models in transfer learning for cross-domain experiments.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. TONIC dataset</title>
      <p>The data was collected from the Facebook accounts of local politicians from several big Israeli cities who were running for mayors' offices. In total, there were 12 cities and 27 mayoral candidates who ran in the elections that took place in 2018. Data statistics appear in Table 1. The data is freely available for download from GitHub at https://github.com/NataliaVanetik1/TONIC. Collected posts were annotated as either negative or not by two independent annotators; in case of a disagreement between them, a third annotator decided on the final label. The annotators were instructed to label a post as a “negative campaign” only if it contained negative (but not necessarily offensive) content about the opponent of the post's owner or the opponent's supporters. The kappa agreement between the annotators was 0.862. The majority rule, i.e., the portion of the bigger class in our data, is 0.78 (the distribution between the two classes is 78%–22%, with the majority class being benign texts and the minority class containing negative campaign texts).</p>
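      <p>For reference, both the annotator agreement and the majority-rule baseline can be reproduced with standard tooling. The sketch below is ours, not part of the dataset release; the label vectors are hypothetical toy values, whereas the real data yields kappa = 0.862 and a 0.78 baseline:</p>
      <preformat>
import numpy as np
from sklearn.metrics import cohen_kappa_score

# hypothetical annotator decisions (1 = negative campaign, 0 = benign)
annotator1 = np.array([1, 0, 0, 1, 0, 0, 0, 1])
annotator2 = np.array([1, 0, 0, 0, 0, 0, 0, 1])
print(cohen_kappa_score(annotator1, annotator2))

# majority-rule baseline: the share of the bigger class (0.78 in TONIC)
labels = np.array([0] * 78 + [1] * 22)
print(np.bincount(labels).max() / len(labels))   # 0.78
      </preformat>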
    </sec>
    <sec id="sec-3">
      <title>3. Proposed method for Negative Campaign classification</title>
      <p>Our approach follows a standard flow of supervised learning, including text representation, model training, and the model's application on a test set for evaluation.
The following techniques were employed for post representation (a minimal feature-extraction sketch follows the list):
• Term frequency-inverse document frequency (tf-idf), where every post is treated as a separate document and the whole dataset as a corpus.
• N-grams of n consecutive words seen in the text, with n = 1, 2, 3.
• BERT sentence embeddings using one of the pre-trained BERT models: a multilingual model [12] and a Hebrew model [13]. We use BERT embeddings to represent post text, region, and city.
• Sentiment weights generated by the HeBERT model [14], producing a probability distribution over positive, negative, and neutral sentiments for every post.</p>
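      <p>To make the representations concrete, here is a minimal feature-extraction sketch in Python; it is our illustration under assumed tooling (scikit-learn and Hugging Face transformers) with placeholder posts, not the authors' exact code:</p>
      <preformat>
import torch
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from transformers import AutoTokenizer, AutoModel

posts = ["first example post", "second example post"]   # placeholder texts

# tf-idf: each post is a document, the whole dataset is the corpus
tfidf_vectors = TfidfVectorizer().fit_transform(posts)

# word n-grams with n = 1, 2, 3
ngram_vectors = CountVectorizer(ngram_range=(1, 3)).fit_transform(posts)

# BERT sentence embeddings via mean pooling over token vectors;
# the paper uses a multilingual model and AlephBERT [13]
tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
bert = AutoModel.from_pretrained("bert-base-multilingual-cased")
with torch.no_grad():
    enc = tok(posts, padding=True, truncation=True, return_tensors="pt")
    embeddings = bert(**enc).last_hidden_state.mean(dim=1).numpy()

# sentiment weights come from the HeBERT model [14] (omitted here)
      </preformat>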
      <sec id="sec-3-1">
        <title>Classification models</title>
        <p>
          For classification, we experimented with three different types of classifiers:
• Traditional classifiers, including Random Forest (RF) [15], Logistic Regression (LR) [16], and Extreme Gradient Boosting (XGB) [17].
• Fine-tuned BERT, including a multilingual model called bert-base-multilingual-cased (denoted as mBERT) [18] and AlephBERT [13], a large pre-trained language model for Modern Hebrew. Both models were fine-tuned on the train portion of our data.
• Meta-learning, where we create a meta-model for detecting unfavorable campaigns when training data for this particular task and language is missing (or not sufficient). To adapt quickly to new target cases, Model-Agnostic Meta-Learning (MAML) [19], a general optimization framework, uses the gradient descent process to create a strong initial model. Therefore, in this study, we used MAML for meta-learning, with a pre-trained BERT language model as the base model. The goal of meta-learning is to train a model on a variety of learning tasks such that it can solve new learning tasks using only a small number of training samples. We use three different criteria to split our data into training tasks: (1) a politician's account, where one training task aims at identifying posts with negative campaigns published by the same politician; (2) a city, where a training task focuses on the data generated by politicians from the same city; and (3) a region of the country, where we train our model on the annotated posts generated by politicians from the same region of the country. A simplified sketch of the MAML update follows this list.
        </p>
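        <p>The following is a simplified first-order MAML sketch, offered as our illustration only: it assumes a PyTorch-style callable model and task batches of (support, query) tensor pairs, and is not the authors' exact implementation:</p>
        <preformat>
import copy
import torch

def maml_step(model, tasks, loss_fn, opt, inner_lr=1e-3):
    """One first-order MAML meta-update over a batch of tasks."""
    opt.zero_grad()
    for (xs, ys), (xq, yq) in tasks:          # each task: (support set, query set)
        fast = copy.deepcopy(model)           # task-specific clone of the base model
        loss_fn(fast(xs), ys).backward()      # inner-loop gradients on the clone
        with torch.no_grad():
            for p in fast.parameters():       # one inner SGD step
                if p.grad is not None:
                    p -= inner_lr * p.grad
                    p.grad = None
        loss_fn(fast(xq), yq).backward()      # query loss of the adapted clone
        with torch.no_grad():                 # accumulate meta-gradients on the base model
            for p, fp in zip(model.parameters(), fast.parameters()):
                if fp.grad is not None:
                    p.grad = fp.grad.clone() if p.grad is None else p.grad + fp.grad
    opt.step()                                # outer (meta) update
        </preformat>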
      </sec>
      <sec id="sec-3-2">
        <title>Pipeline</title>
        <p>A full pipeline of our approach is depicted in Figure 1: posts are tokenized and encoded as tf-idf vectors, n-gram vectors, BERT sentence vectors, and sentiment-analysis scores, which together feed the prediction model.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Experiments</title>
      <p>
        Our experiments aim to evaluate (1) different models and representations of Hebrew data in the negative campaign domain; (2) transfer learning from the hate speech domain, in Hebrew and other languages; and (3) the meta-learning approach in mono-domain and cross-domain learning.
Data and Software Setup
For the monolingual experiments on the TONIC dataset, RF, LR, and XGB are trained on 80% of the dataset and evaluated on the remaining 20%. For the cross-domain monolingual experiments, the models are trained on 100% of the other domain's data and tested on 20% of the TONIC dataset. For the cross-domain cross-lingual experiments, we train our models on 100% of the data in another language and test on the 20% of the TONIC dataset. In all cases, the test portion of the TONIC dataset is the same, which allows us to conduct proper statistical significance analysis. Fine-tuned BERT was trained on 75% of the data, with a validation set containing 5% of the data, and it was tested on the remaining 20%. Fine-tuning was run for 10 epochs with batch size 16. For the cross-domain experiments, we used the Hebrew offensive language dataset [20] called OLaH. Traditional models were implemented in sklearn [21] and neural models were implemented in Keras [22] with the TensorFlow backend [23]. Experiments were performed on Google Colab [24] with Pro settings.
      </p>
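      <p>As an illustration of the monolingual protocol, the following sketch is our code, with X and y as placeholders for the extracted feature matrix and labels; the fixed random seed keeps the 20% test portion identical across experiments:</p>
      <preformat>
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report
from xgboost import XGBClassifier

# placeholder features/labels standing in for the TONIC representations
X = np.random.rand(1000, 768)
y = np.random.randint(0, 2, 1000)

# fixed 80/20 split so the same test portion is reused across all experiments
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

clf = XGBClassifier().fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
      </preformat>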
      <p>Mono-domain Evaluation Results
Here we report the results (precision, recall, F1-measure, and accuracy) of the evaluation and comparison of various models and text representations for detecting negative campaigns in political posts written in Hebrew. In particular, we explore whether or not BERT sentence embeddings perform better than traditional text representations such as tf-idf and n-grams. We also compare two pre-trained BERT models to determine whether a model specifically trained in Hebrew is preferable.</p>
      <p>Table 2 (left) summarizes the results for the conventional models and representations without sentence embeddings. All models were trained and tested on the TONIC training and test sets, respectively. The text representations use either tf-idf or n-grams (ngX denotes n-grams for n = 1, 2, 3), or their combinations (tfidf-ngX denotes a concatenation of tf-idf vectors with n-grams of size n = 1, 2, 3); each representation is evaluated with the RF, LR, and XGB classifiers. All the systems are significantly better than the majority rule. Also, the XGB classifier with tf-idf, unigrams, and sentiment labels outperforms the other classifiers.</p>
      <p>
        The confusion matrix of the top-performing model (XGB with tf-idf, unigrams, and sentiment labels) contains TP = 75, TN = 391, FP = 22, and FN = 39, with precision P = 0.77 and recall R = 0.66. These results show that the model does a good job of identifying and eliminating negative samples (non-negative campaigns), but it misses positive samples (negative campaigns). As a result, TN is the largest component of the accuracy, while FN accounts for the biggest share of errors. In a sample of 10 misclassified cases that we manually examined, more than half of the errors (6), including four samples incorrectly identified as negative campaigns when we actually found them to be neutral and two samples incorrectly labeled as neutral, were the result of incorrect labeling by our annotators.
      </p>
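      <p>The reported precision and recall follow directly from this confusion matrix; a quick arithmetic check in Python:</p>
      <preformat>
# confusion-matrix counts reported for the best model
TP, TN, FP, FN = 75, 391, 22, 39

precision = TP / (TP + FP)                   # 75 / 97  = 0.77
recall = TP / (TP + FN)                      # 75 / 114 = 0.66
f1 = 2 * precision * recall / (precision + recall)
accuracy = (TP + TN) / (TP + TN + FP + FN)   # 466 / 527 = 0.88
print(precision, recall, f1, accuracy)
      </preformat>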
      <p>Table 3 shows the scores for the same models over sentence embeddings produced by two different BERT models: multilingual BERT [25] and the Hebrew-language AlephBERT [13]. We can see that sentence embeddings enriched with city and region names boost the classification performance. XGB outperforms the other classifiers, as in the previous experiment. We cannot recommend one particular BERT model, because both models seem to provide sentence embeddings of similar quality. However, when we compare these BERT models fine-tuned on the classification task on TONIC (see Table 4), AlephBERT, which is trained solely on Hebrew, significantly outperforms multilingual BERT, whose accuracy falls below the majority rule. Nonetheless, both models are outperformed by the best traditional models, probably due to less information encoded in the text representation. While both BERT classifiers use only self-produced embeddings, traditional models also utilize sentiment labels and embeddings representing the cities and regions of the candidates.</p>
      <p>Table 4 contains the results of meta-learning, where tasks are specified by the three different criteria.</p>
      <p>We can see that multilingual BERT achieves the best accuracy score; however, for all the task-division options, the meta-learning scores are very close to the majority rule, which is evidence that there is not much information that can be efficiently learned and transferred between tasks. We can also see that, for fine-tuned BERT, AlephBERT has a clear advantage over the multilingual BERT model in all parameters.</p>
      <p>According to the scores in Tables 2 and 3 (we omitted the meta-learning models because of their low performance), the top-performing model is XGB applied to BERT embeddings enriched by region and location embeddings. In general, the XGB classifier outperforms the other classifiers in most cases.</p>
      <sec id="sec-4-1">
        <title>Cross-domain Mono-lingual Evaluation Results</title>
        <p>
          Cross-domain mono-lingual experiments (all models were trained and tested on Hebrew data) in Table 2 (right) show that using an offensive language dataset as a training set decreases classification accuracy for all the models, indicating that the task of detecting negative campaigns is different from the task of offensive language detection. Only a few models trained on offensive language data achieved accuracy that is slightly higher than or equal to the majority rule. Additionally, we can see that the F1 scores are really low, meaning that these models simply 'guess' the majority rule. Table 5 shows the effect of text representation for transfer learning from offensive language detection in Hebrew. From Table 2 (right) and Table 5, we can conclude that (1) the XGB classifier mostly performs better than the other classifiers and (2) its performance is slightly higher with BERT embeddings than with tf-idf vectors and n-grams.
        </p>
        <p>Table 6 shows the results of meta-learning on the TONIC dataset. Two BERT models are initialized with the weights generated by meta-learning. The table also contains the scores of fine-tuned BERT without meta-learning.</p>
        <p>
          We can see that (1) the best traditional models perform better than both fine-tuned language models and meta-models when trained on foreign languages; the only exceptions are the recall and F1 scores of meta-learning, which is evidence of its better ability to recognize the positive samples (negative political campaigns) while failing to filter out neutral posts (also confirmed by its lower precision); (2) AlephBERT performs better with meta-learning than multilingual BERT; and (3) meta-learning outperforms the fine-tuned language models in terms of both precision and recall.
        </p>
      </sec>
      <sec id="sec-4-2">
        <title>Cross-domain Cross-lingual Evaluation Results</title>
        <p>Table 7 shows the evaluation of the traditional models in the cross-domain cross-lingual scenario. In this setting, we train our models on hate speech datasets in other languages: English and Arabic. The only text representation that we can use here is multilingual BERT sentence embeddings generated by the pre-trained model bert-base-multilingual-cased [18].</p>
        <p>Table 8 shows the results of meta-learning trained on hate speech data in other languages (Arabic and English) and tested on the TONIC dataset. The English-language dataset is the Offensive Language Identification Dataset (OLID) [26], a collection of 14,100 tweets (we used 13,240 annotated tweets from its training set). For Arabic, we used the OLaA dataset, which we collected and introduced previously in [9]. OLaA is a collection of 9,000 comments from Twitter annotated for hate speech. We used a multilingual BERT model [18] for these experiments. For comparison, we also show the scores of this BERT model fine-tuned on Arabic and English hate-speech data and tested on TONIC.</p>
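        <p>A minimal sketch of this cross-domain cross-lingual protocol follows; it is our illustration, with tiny placeholder lists standing in for the OLID training data and the fixed TONIC test portion:</p>
        <preformat>
import torch
from transformers import AutoTokenizer, AutoModel
from xgboost import XGBClassifier

tok = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
encoder = AutoModel.from_pretrained("bert-base-multilingual-cased")

def embed(texts):
    """Mean-pooled mBERT sentence vectors, shared across languages."""
    with torch.no_grad():
        batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
        return encoder(**batch).last_hidden_state.mean(dim=1).numpy()

# placeholders: 100% of OLID (English) for training,
# the fixed 20% TONIC portion (Hebrew) for testing
olid_texts, olid_labels = ["offensive example tweet", "neutral example tweet"], [1, 0]
tonic_test_texts = ["example Hebrew post"]

clf = XGBClassifier().fit(embed(olid_texts), olid_labels)
predictions = clf.predict(embed(tonic_test_texts))
        </preformat>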
        <p>
          Both experiments show that meta-learning adapts pre-trained models to new domains much better than traditional fine-tuning, and that it can be efficiently applied for transfer learning from other domains and even other languages. In particular, we can observe the following: (1) fine-tuned language models and meta-learning perform better than the best traditional models when trained on foreign languages; (2) meta-learning outperforms the fine-tuned language models.
        </p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Future Work and Conclusions</title>
      <p>
        Based on the results of extensive experiments aimed at answering various research questions (see Section 1), we can conclude that (1) the best combination of text representation and classification model for negative campaign detection in Hebrew texts is XGB with sentence embeddings enriched with region and location information; (2) transfer learning with models trained to detect offensive content is inefficient for the detection of a negative campaign, meaning that there is no strong relation between offensive language and negative campaigns; (3) transfer learning from different languages can be applied to Hebrew in the negative campaign detection task, while training on a large set in a foreign language can be even more efficient than training on Hebrew; and (4) meta-learning outperforms traditionally fine-tuned language models in cross-domain and cross-lingual scenarios, but not in a mono-lingual setting. We also observe that in a monolingual setting that employs either a fine-tuned BERT or BERT sentence embeddings, the AlephBERT model trained on Hebrew is preferable to a multilingual BERT model. In the future, we plan to apply our analysis to elections for the Israeli government, to explore the common characteristics and differences between political campaigns in different countries, and to study possible relations between a candidate's gender, perceived strength, initial support, etc., and their engagement in a negative campaign.
      </p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D.</given-names>
            <surname>Bernhardt</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Ghosh</surname>
          </string-name>
          ,
          <article-title>Positive and negative campaigning in primary and general elections</article-title>
          ,
          <source>Games and Economic Behavior</source>
          <volume>119</volume>
          (
          <year>2020</year>
          )
          <fpage>98</fpage>
          -
          <lpage>104</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G. M.</given-names>
            <surname>Invernizzi</surname>
          </string-name>
          ,
          <article-title>Electoral competition and factional sabotage</article-title>
          ,
          <source>Available at SSRN</source>
          <volume>3329622</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Martin</surname>
          </string-name>
          ,
          <article-title>Inside the black box of negative campaign efects: Three reasons why negative campaigns mobilize</article-title>
          ,
          <source>Political psychology 25</source>
          (
          <year>2004</year>
          )
          <fpage>545</fpage>
          -
          <lpage>562</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Skaperdas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Grofman</surname>
          </string-name>
          , Modeling negative campaigning,
          <source>American Political Science Review</source>
          <volume>89</volume>
          (
          <year>1995</year>
          )
          <fpage>49</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>H.</given-names>
            <surname>Afli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Alam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bouamor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. B.</given-names>
            <surname>Casagran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Boland</surname>
          </string-name>
          , S. Ghannay (Eds.),
          <source>Proceedings of The LREC 2022 workshop on Natural Language Processing for Political Sciences, European Language Resources Association</source>
          , Marseille, France,
          <year>2022</year>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .politicalnlp-1.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>M.</given-names>
            <surname>Baran</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wójcik</surname>
          </string-name>
          , P. Kolebski,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bernaczyk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Rajda</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Augustyniak</surname>
          </string-name>
          , T. Kajdanowicz,
          <article-title>Electoral agitation dataset: The use case of the polish election</article-title>
          ,
          <source>in: Proceedings of The LREC 2022 workshop on Natural Language Processing for Political Sciences, European Language Resources Association</source>
          , Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>32</fpage>
          -
          <lpage>36</lpage>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .politicalnlp-
          <volume>1</volume>
          .5.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>H.</given-names>
            <surname>Abdine</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Guo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Rennard</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Vazirgiannis</surname>
          </string-name>
          ,
          <article-title>Political communities on twitter: Case study of the 2022 french presidential election</article-title>
          ,
          <source>in: Proceedings of The LREC 2022 workshop on Natural Language Processing for Political Sciences, European Language Resources Association</source>
          , Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>71</lpage>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .politicalnlp-
          <volume>1</volume>
          .9.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Sanders</surname>
          </string-name>
          , A. van den Bosch,
          <article-title>Correlating political party names in tweets, newspapers and election results</article-title>
          ,
          <source>in: Proceedings of The LREC 2022 workshop on Natural Language Processing for Political Sciences, European Language Resources Association</source>
          , Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>8</fpage>
          -
          <lpage>15</lpage>
          . URL: https://aclanthology.org/
          <year>2022</year>
          .politicalnlp-
          <volume>1</volume>
          .2.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Litvak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vanetik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Nimer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Skout</surname>
          </string-name>
          ,
          <article-title>Ofensive language detection in semitic languages</article-title>
          ,
          <source>in: 1st Multimodal and Multilingual Hate Speech Detection Workshop at KONVENS</source>
          <year>2021</year>
          ,
          <year>2021</year>
          , pp.
          <fpage>7</fpage>
          -
          <lpage>13</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M.</given-names>
            <surname>Litvak</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Vanetik</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Liebeskind</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Hmdia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. A.</given-names>
            <surname>Madeghem</surname>
          </string-name>
          ,
          <article-title>Ofensive language detection in hebrew: can other languages help?</article-title>
          ,
          <source>in: Proceedings of the Language Resources and Evaluation Conference, European Language Resources Association</source>
          , Marseille, France,
          <year>2022</year>
          , pp.
          <fpage>3715</fpage>
          -
          <lpage>3723</lpage>
          . URL: https://aclanthology.org/2022.lrec-1.396.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] M. Litvak, N. Vanetik, S. Talker, O. Machlouf, Detection of negative campaign in Israeli municipal elections, in: Proceedings of the Third Workshop on Threat, Aggression and Cyberbullying (TRAC 2022), 2022, pp. 68–74.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] V. Sanh, L. Debut, J. Chaumond, T. Wolf, DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, arXiv preprint arXiv:1910.01108 (2019).</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Seker, E. Bandel, D. Bareket, I. Brusilovsky, R. S. Greenfeld, R. Tsarfaty, AlephBERT: a Hebrew large pre-trained language model to start-off your Hebrew NLP application with, arXiv preprint arXiv:2104.04052 (2021).</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] A. Chriqui, I. Yahav, HeBERT &amp; HebEMO: a Hebrew BERT model and a tool for polarity analysis and emotion recognition, arXiv preprint arXiv:2102.01909 (2021).</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] M. Pal, Random forest classifier for remote sensing classification, International Journal of Remote Sensing 26 (2005) 217–222.</mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>[16] R. E. Wright, Logistic regression, in: L. G. Grimm, P. R. Yarnold (Eds.), Reading and Understanding Multivariate Statistics, American Psychological Association, 1995, pp. 217–244.</mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>[17] T. Chen, T. He, M. Benesty, V. Khotilovich, Y. Tang, H. Cho, K. Chen, et al., XGBoost: extreme gradient boosting, R package version 0.4-2 1 (2015) 1–4.</mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>[18] J. Devlin, M. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, CoRR abs/1810.04805 (2018). URL: http://arxiv.org/abs/1810.04805.</mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>[19] C. Finn, P. Abbeel, S. Levine, Model-agnostic meta-learning for fast adaptation of deep networks, in: International Conference on Machine Learning, PMLR, 2017, pp. 1126–1135.</mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>[20] M. Litvak, N. Vanetik, Y. Nimer, A. Skout, Offensive language detection in Semitic languages, in: Multimodal Hate Speech Workshop 2021, 2021, pp. 7–12.</mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>[21] F. Pedregosa, G. Varoquaux, A. Gramfort, V. Michel, B. Thirion, O. Grisel, M. Blondel, P. Prettenhofer, R. Weiss, V. Dubourg, J. Vanderplas, A. Passos, D. Cournapeau, M. Brucher, M. Perrot, E. Duchesnay, Scikit-learn: machine learning in Python, Journal of Machine Learning Research 12 (2011) 2825–2830.</mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>[22] F. Chollet, et al., Keras, https://github.com/fchollet/keras, 2015.</mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>[23] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mané, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viégas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: large-scale machine learning on heterogeneous systems, 2015. URL: https://www.tensorflow.org/, software available from tensorflow.org.</mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>[24] E. Bisong, Building Machine Learning and Deep Learning Models on Google Cloud Platform: A Comprehensive Guide for Beginners, Apress, 2019.</mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>[25] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: pre-training of deep bidirectional transformers for language understanding, arXiv preprint arXiv:1810.04805 (2018).</mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>[26] M. Zampieri, S. Malmasi, P. Nakov, S. Rosenthal, N. Farra, R. Kumar, Predicting the type and target of offensive posts in social media, in: Proceedings of NAACL, 2019, pp. 1415–1420.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>