<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Cross-lingual Transfer Learning for Detecting Negative Campaign in Israeli Municipal Elections: a Case Study</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Natalia</forename><surname>Vanetik</surname></persName>
							<email>natalyav@sce.ac.il</email>
							<idno type="ORCID">0000-0002-4939-1415</idno>
							<affiliation key="aff0">
								<orgName type="department">Department of Software Engineering</orgName>
								<orgName type="institution">Shamoon College of Engineering (SCE)</orgName>
								<address>
									<settlement>Beer-Sheva</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marina</forename><surname>Litvak</surname></persName>
							<email>litvak.marina@gmail.com</email>
							<idno type="ORCID">0000-0003-3044-3681</idno>
							<affiliation key="aff0">
								<orgName type="department">Department of Software Engineering</orgName>
								<orgName type="institution">Shamoon College of Engineering (SCE)</orgName>
								<address>
									<settlement>Beer-Sheva</settlement>
									<country key="IL">Israel</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Lin</forename><surname>Miao</surname></persName>
							<email>linmiao@bistu.edu.cn</email>
							<idno type="ORCID">0000-0002-9421-8566</idno>
							<affiliation key="aff1">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">Beijing Information Science and Technology University</orgName>
								<address>
									<settlement>Beijing</settlement>
									<country key="CN">China</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Cross-lingual Transfer Learning for Detecting Negative Campaign in Israeli Municipal Elections: a Case Study</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">974BF08A5465BD82AF05F0395A3B3BEA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-04-29T06:29+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>negative campaign</term>
					<term>text classification</term>
					<term>Hebrew</term>
					<term>BERT</term>
					<term>meta-learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Political competitions are complex settings in which candidates run campaigns to improve their chances of being elected. In recent years, some candidates have chosen to focus on a negative campaign that emphasizes the negative aspects of a competitor and aims to offend opponents or their supporters. A major challenge in this area is the lack of annotated datasets for training efficient classifiers. Transfer learning from relevant domains and other languages can therefore be very useful for this task. Motivated by the recent success of meta-learning in domain adaptation, we apply it to our task, exploiting available datasets from different domains and languages. This work explores the negative campaign detection task from multiple perspectives: the efficiency of different text representations and classification models, and the effect of transfer learning from offensive language detection in different languages on negative campaign detection in Hebrew. We demonstrate that the lack of training data for negative campaign detection in a low-resource language such as Hebrew can be compensated, to some extent, by available datasets for offensive language detection in the same and other languages. We report an empirical case study of political campaigns in Israeli municipal elections.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Political competitions aim to promote candidates' chances of being elected. The main decision in such competitions concerns the nature of the campaign: whether a candidate should run a positive campaign that highlights the candidate's achievements, leadership skills, and future programs, or focus on a negative campaign that emphasizes the negative sides of the competitors <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>In recent years, we have witnessed the intensive use of negative campaigns by political candidates, which target the weaknesses and failures of opponents while promising to do the opposite <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>.</p><p>The application of language technologies in the political sciences has recently been in high demand <ref type="bibr" target="#b4">[5]</ref>. However, despite some works dedicated to the analysis of election-related materials <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>, we were unable to find any work on automated negative campaign analysis and detection.</p><p>Our work reports the results of extensive experiments aimed at answering multiple research questions: (1) Which supervised model and representation are more effective at automatically detecting negative campaigns in Hebrew? (2) Can we effectively detect negative campaigns with a model trained to identify offensive language? (3) Can meta-learning with different domains and languages boost negative campaign detection in Hebrew?</p><p>We adopt and extend the representation models applied in <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>, where the gain of semantic vectors and sentiment knowledge for offensive language and negative campaign detection was empirically shown. To increase classification accuracy in a mono-domain setting, we use knowledge about cities, country districts (regions), and politicians; we use this information in a meta-learning setting as well. In <ref type="bibr" target="#b9">[10]</ref>, we also showed the efficiency of transfer learning for cross-lingual training of offensive language classifiers with Semitic languages, and we adopt and explore this idea in the present study. In contrast to <ref type="bibr" target="#b10">[11]</ref>, the lack of Hebrew datasets is addressed here by cross-domain and cross-lingual transfer learning.</p><p>Our contribution is manifold: (1) we experimented with different representations and classifiers for efficient encoding and classification of Hebrew texts for negative campaign detection; (2) we explored the efficiency of meta-learning in mono-domain experiments; (3) we explored the efficiency of transfer learning from offensive language detection in different languages to negative campaign detection; (4) we explored the gain of meta-learning vs. conventional fine-tuning of language models in transfer learning for cross-domain experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">TONIC dataset</head><p>The data was collected from the Facebook accounts of local politicians from several big Israeli cities running for mayor. In total, there were 12 cities and 27 mayoral candidates who ran in the elections that took place in 2018. Data statistics appear in Table <ref type="table" target="#tab_0">1</ref>. The data is freely available for download from GitHub at https://github.com/NataliaVanetik1/TONIC. Collected posts were annotated as either negative or not by two independent annotators; in case of a disagreement between them, a third annotator decided on the final label. The annotators were instructed to label a post as a "negative campaign" only if it contained negative (but not necessarily offensive) content about an opponent of the post's owner or the opponent's supporters. Kappa agreement between the annotators was 0.862. The majority rule, i.e., the proportion of the larger class in our data, is 0.78 (the distribution between the two classes is 78% − 22%, with the majority class being benign texts and the minority class containing negative campaign texts).</p></div>
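The inter-annotator agreement reported above (Cohen's kappa of 0.862) follows the standard kappa formula: observed agreement corrected for chance agreement derived from each annotator's label frequencies. A minimal illustrative sketch (not the authors' annotation tooling):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two annotators' parallel label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled identically.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Chance agreement from each annotator's marginal label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[lbl] * freq_b[lbl] for lbl in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy example with labels "neg" (negative campaign) and "ok" (benign).
kappa = cohens_kappa(["neg", "ok", "neg", "ok"], ["neg", "neg", "neg", "ok"])
```

On the toy sequences above, observed agreement is 0.75 and chance agreement is 0.5, giving kappa = 0.5; the same computation over the full TONIC annotations yields the reported 0.862.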
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Proposed method for Negative Campaign classification</head><p>Our approach follows a standard supervised learning flow: text representation, model training, and application to a test set for the model's evaluation. The following techniques were employed for post representation:</p><p>• Term frequency-inverse document frequency (tf-idf), where every post is treated as a separate document and the whole dataset as a corpus. • N-grams of 𝑛 consecutive words seen in the text, with 𝑛 = 1, 2, 3.</p><p>• BERT sentence embeddings using one of the pre-trained BERT models: a multilingual model <ref type="bibr" target="#b11">[12]</ref> and a Hebrew model <ref type="bibr" target="#b12">[13]</ref>. We use BERT embeddings to represent the post text, region, and city. • Sentiment weights generated by the HeBERT model <ref type="bibr" target="#b13">[14]</ref>, producing a probability distribution over positive, negative, and neutral sentiments for every post.</p><p>For classification, we experimented with three different types of classifiers:</p><p>• Traditional classifiers, including Random Forest (RF) <ref type="bibr" target="#b14">[15]</ref>, Logistic Regression (LR) <ref type="bibr" target="#b15">[16]</ref>, and Extreme Gradient Boosting (XGB) <ref type="bibr" target="#b16">[17]</ref>. • Fine-tuned BERT, including a multilingual model called bert-base-multilingual-cased (denoted as mBERT) <ref type="bibr" target="#b17">[18]</ref> and AlephBERT <ref type="bibr" target="#b12">[13]</ref>, a large pre-trained language model for Modern Hebrew. Both models were fine-tuned on the training portion of our data. • Meta-learning, where we create a meta-model for detecting unfavorable campaigns when training data for this particular task and language is missing or insufficient. Model-Agnostic Meta-Learning (MAML) <ref type="bibr" target="#b18">[19]</ref> is a general optimization framework that uses gradient descent to create a strong initial model that can quickly adapt to new target tasks. We therefore used MAML for meta-learning in this study, with a pre-trained BERT language model as the base model. The goal of meta-learning is to train a model on a variety of learning tasks such that it can solve new learning tasks using only a small number of training samples. We use three different criteria to split our data into training tasks: (1) a politician's account, where one training task aims at identifying negative-campaign posts published by the same politician; (2) a city, where a training task focuses on the data generated by politicians from the same city; and (3) a region of the country, where we train our model on the annotated posts generated by politicians from the same region.</p><p>The full pipeline of our approach is depicted in Figure <ref type="figure" target="#fig_0">1</ref>.</p></div>
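To make the MAML scheme concrete, here is a deliberately tiny first-order sketch on toy one-parameter regression tasks. This is our illustration of the optimization loop only, not the paper's BERT-based implementation; the task construction, learning rates, and step counts are invented for the example:

```python
import random

def loss_grad(w, data):
    """Gradient of mean squared error for the model y = w * x over (x, y) pairs."""
    return sum(2 * (w * x - y) * x for x, y in data) / len(data)

def maml(tasks, w0=0.0, inner_lr=0.01, meta_lr=0.001, steps=2000):
    """First-order MAML: meta-learn an initialization that adapts in one step."""
    for _ in range(steps):
        task = random.choice(tasks)              # sample a training task
        support, query = task[:5], task[5:]      # few-shot support/query split
        w_adapted = w0 - inner_lr * loss_grad(w0, support)   # inner adaptation step
        w0 -= meta_lr * loss_grad(w_adapted, query)          # meta-update on query loss
    return w0

random.seed(0)
# Toy tasks: y = (3 + dw) * x, so a good shared initialization sits near w = 3.
tasks = [[(x, (3 + dw) * x) for x in range((1), 11)] for dw in (-0.5, 0.0, 0.5)]
w_init = maml(tasks)
```

In the paper's setting, the scalar `w` is replaced by the parameters of a pre-trained BERT model, and the training tasks are defined per politician account, city, or region as described above.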
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>Our experiments aim to evaluate (1) different models and representations of Hebrew data in the negative campaign domain; (2) transfer learning from the hate speech domain, in Hebrew and other languages; and (3) meta-learning approach in mono-domain and cross-domain learning.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Data and Software Setup</head><p>For the monolingual experiments on the TONIC dataset, RF, LR, and XGB are trained on 80% of the dataset and evaluated on the remaining 20%. For the cross-domain monolingual experiments, the models are trained on 100% of the other domain's data and tested on 20% of the TONIC dataset. For the cross-domain cross-lingual experiments, we train our models on 100% of the data in another language and test on the same 20% of the TONIC dataset. In all cases, the test portion of the TONIC dataset is the same, which allows us to conduct proper statistical significance analysis. Fine-tuned BERT was trained on 75% of the data, with a validation set containing 5% of the data, and tested on the remaining 20%. Fine-tuning was run for 10 epochs with batch size 16. For the cross-domain experiments, we used the Hebrew offensive language dataset <ref type="bibr" target="#b19">[20]</ref> called OLaH. Traditional models were implemented in sklearn <ref type="bibr" target="#b20">[21]</ref>, and neural models were implemented in Keras <ref type="bibr" target="#b21">[22]</ref> with the TensorFlow backend <ref type="bibr" target="#b22">[23]</ref>. Experiments were performed on Google Colab <ref type="bibr" target="#b23">[24]</ref> with Pro settings.</p></div>
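The shared-split design described above (a fixed 20% test set reused in every experiment, with 75%/5% train/validation for BERT fine-tuning) can be sketched as follows. This is our reconstruction for illustration; the seed and exact indexing are assumptions, not taken from the released code:

```python
import random

def make_splits(n_samples, seed=42):
    """Fixed test split shared by all experiments; 75%/5% train/val for BERT."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)
    n_test = n_samples // 5        # 20% held-out test set, identical in every setting
    n_val = n_samples // 20        # 5% validation set (used only for BERT fine-tuning)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    train = idx[n_test + n_val:]   # remaining ~75%; traditional models add val back in
    return train, val, test

train, val, test = make_splits(2632)   # TONIC has 2632 posts in total (Table 1)
```

Because the test indices depend only on the seed, every model (traditional, fine-tuned, or meta-learned) is scored on the identical held-out posts, which is what makes the significance analysis valid.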
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Mono-domain Evaluation Results</head><p>Here we report the results (precision, recall, F1-measure, and accuracy scores) of the evaluation and comparison of various models and text representations for detecting negative campaigns in political posts written in Hebrew. In particular, we explore whether BERT sentence embeddings perform better than traditional text representations such as tf-idf and n-grams. We also compare two pre-trained BERT models to determine whether a model specifically trained on Hebrew is preferable.</p><p>Table <ref type="table" target="#tab_1">2</ref> (left) summarizes the results for the conventional models and representations without sentence embeddings. All models were trained and tested on the TONIC training and test sets, respectively. The text representations use either tf-idf or n-grams (ngX denotes n-grams for 𝑋 = 1, 2, 3) or their combinations (tfidf-ngX denotes a concatenation of tf-idf vectors with n-grams of size 𝑋 = 1, 2, 3). All the systems are significantly better than the majority rule. Also, the XGB classifier with tf-idf, unigrams, and sentiment labels outperforms the other classifiers. The confusion matrix of the top-performing model (XGB 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛+𝑙𝑜𝑐 ) contains TP = 75, TN = 391, FP = 22, and FN = 39, with 𝑝𝑟𝑒𝑐𝑖𝑠𝑖𝑜𝑛 = 0.77 and 𝑟𝑒𝑐𝑎𝑙𝑙 = 0.66. These results show that the model does a good job of identifying and eliminating negative samples (non-negative campaigns) but misses positive samples (negative campaigns). As a result, TN is the largest component of the accuracy, while FN accounts for most of the errors. In a sample of 10 misclassified cases that we examined manually, more than half of the errors (6) resulted from incorrect labeling by our annotators: four samples were incorrectly identified as negative campaigns when we actually found them to be neutral, and two were incorrectly labeled as neutral.</p><p>Table <ref type="table" target="#tab_2">3</ref> shows the scores for the same models over sentence embeddings produced by two different BERT models: multilingual BERT <ref type="bibr" target="#b24">[25]</ref> and the Hebrew-language AlephBERT <ref type="bibr" target="#b12">[13]</ref>. We can see that sentence embeddings enriched with city and region names boost classification performance. XGB outperforms the other classifiers, as in the previous experiment. We cannot recommend one particular BERT model, because both models seem to provide sentence embeddings of similar quality. However, when we compare these BERT models fine-tuned on the classification task on TONIC (see Table <ref type="table">4</ref>), AlephBERT, which is trained solely on Hebrew, significantly outperforms multilingual BERT, whose accuracy falls below the majority rule. Nonetheless, both models are outperformed by the best traditional models, probably because less information is encoded in their text representation: while both BERT classifiers use only self-produced embeddings, the traditional models also utilize sentiment labels and embeddings representing the candidates' cities and regions.</p><p>Table <ref type="table">4</ref> contains the results of meta-learning, where tasks are specified by three different criteria.</p><p>We also observe that in a monolingual setting, whether using fine-tuned BERT or BERT sentence embeddings, the AlephBERT model trained on Hebrew is preferable to a multilingual BERT model. In the future, we plan to apply our analysis to elections for the Israeli government, to explore the common characteristics and differences between political campaigns in different countries, and to study possible relations between candidates' gender, perceived strength, initial support, etc., and their engagement in a negative campaign.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Political posts classification pipeline.</figDesc></figure>
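The precision and recall quoted for the confusion matrix above follow directly from the standard definitions, precision = TP / (TP + FP) and recall = TP / (TP + FN); a quick arithmetic check:

```python
def precision_recall(tp, fp, fn):
    """precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

# Counts reported for the top-performing model (XGB with bert+region+loc features).
p, r = precision_recall(tp=75, fp=22, fn=39)
print(round(p, 2), round(r, 2))  # → 0.77 0.66
```

The low recall relative to precision confirms the discussion above: false negatives (missed negative-campaign posts) are the dominant error type.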
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Collected data by city.</figDesc><table><row><cell cols="2">region city</cell><cell cols="3">candidates posts pos</cell><cell cols="3">neg avg words avg characters</cell></row><row><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell></cell><cell>in post</cell><cell>in post</cell></row><row><cell>center</cell><cell>Herzliya</cell><cell>2</cell><cell>218</cell><cell>91</cell><cell>127</cell><cell>108.482</cell><cell>645.468</cell></row><row><cell>center</cell><cell>Jerusalem</cell><cell>3</cell><cell>412</cell><cell>32</cell><cell>380</cell><cell>72.471</cell><cell>428.964</cell></row><row><cell>center</cell><cell>Rishon LeZion</cell><cell>1</cell><cell>183</cell><cell>23</cell><cell>160</cell><cell>103.448</cell><cell>619.989</cell></row><row><cell>center</cell><cell>Tel Aviv</cell><cell>1</cell><cell>36</cell><cell>8</cell><cell>28</cell><cell>95.611</cell><cell>545.806</cell></row><row><cell>center</cell><cell>Petah Tikva</cell><cell>4</cell><cell>364</cell><cell>68</cell><cell>296</cell><cell>80.184</cell><cell>466.626</cell></row><row><cell>center</cell><cell>Hod Hasharon</cell><cell>2</cell><cell>266</cell><cell>45</cell><cell>221</cell><cell>85.128</cell><cell>498.432</cell></row><row><cell>south</cell><cell>Ashdod</cell><cell>4</cell><cell>363</cell><cell>139</cell><cell>224</cell><cell>92.377</cell><cell>528.044</cell></row><row><cell>south</cell><cell>Ashkelon</cell><cell>3</cell><cell>363</cell><cell>61</cell><cell>302</cell><cell>82.157</cell><cell>482.876</cell></row><row><cell>south</cell><cell>Dimona</cell><cell>1</cell><cell>50</cell><cell>7</cell><cell>43</cell><cell>92.280</cell><cell>542.240</cell></row><row><cell>south</cell><cell>Beer 
Sheva</cell><cell>1</cell><cell>14</cell><cell>9</cell><cell>5</cell><cell>192.500</cell><cell>1075.643</cell></row><row><cell>north</cell><cell>Netanya</cell><cell>4</cell><cell>316</cell><cell>81</cell><cell>235</cell><cell>72.215</cell><cell>427.886</cell></row><row><cell>north</cell><cell>Haifa</cell><cell>1</cell><cell>47</cell><cell>4</cell><cell>43</cell><cell>75.234</cell><cell>440.319</cell></row><row><cell></cell><cell>Total</cell><cell>27</cell><cell>2632</cell><cell cols="2">568 2064</cell><cell>85.384</cell><cell>500.771</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Evaluation of traditional models and representations on TONIC: mono-domain (left) and cross-domain monolingual (right).</figDesc><table><row><cell></cell><cell></cell><cell cols="2">mono-domain</cell><cell></cell><cell cols="3">cross-domain monolingual</cell><cell></cell></row><row><cell>model</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>acc</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>acc</cell></row><row><cell>RF 𝑡𝑓 𝑖𝑑𝑓 +𝑆𝐴 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑆𝐴 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑆𝐴 RF 𝑛𝑔1+𝑆𝐴 LR 𝑛𝑔1+𝑆𝐴 XGB 𝑛𝑔1+𝑆𝐴 RF 𝑛𝑔2+𝑆𝐴 LR 𝑛𝑔2+𝑆𝐴 XGB 𝑛𝑔2+𝑆𝐴 RF 𝑛𝑔3+𝑆𝐴 LR 𝑛𝑔3+𝑆𝐴 XGB 𝑛𝑔3+𝑆𝐴 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔1+𝑆𝐴 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔1+𝑆𝐴 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔1+𝑆𝐴 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔2+𝑆𝐴 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔2+𝑆𝐴 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔2+𝑆𝐴 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔3+𝑆𝐴 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔3+𝑆𝐴 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑛𝑔3+𝑆𝐴</cell><cell>0.8908 0.8341 0.8656 0.8460 0.8390 0.8220 0.8633 0.7633 0.7972 0.8978 0.7633 0.7972 0.8460 0.8390 0.8567 0.8935 0.7581 0.8317 0.9097 0.7581 0.8485</cell><cell cols="2">0.6467 0.7243 0.7662 0.6626 0.7601 0.7445 0.6399 0.7276 0.7417 0.6260 0.7276 0.7417 0.6299 0.7601 0.7681 0.8002 0.6813 0.7586 0.8010 0.6979 0.7892 0.7726 0.6715 0.7424 0.7633 0.6541 0.7424 0.7633 0.6580 0.7892 0.6128 0.6357 0.7156 0.7325 0.7545 0.7829 0.6009 0.6183 0.7156 0.7325 0.7701 0.7994</cell><cell>0.8444 0.8615 0.8824 0.8444 0.8729 0.8634 0.8387 0.8368 0.8539 0.8368 0.8368 0.8539 0.8330 0.8729 0.8805 0.8311 0.8330 0.8691 0.8273 0.8330 0.8786</cell><cell cols="4">0.6457 0.8926 0.8933 0.5171 0.5181 0.6956 0.5215 0.5222 0.5044 0.5088 0.5015 0.5068 0.4819 0.4885 0.5091 0.5098 0.4990 0.4998 0.4989 0.4994 0.5091 0.5098 0.5098 0.5018 0.6090 0.5166 0.5330 0.5124 0.8933 0.5088 0.4607 0.4856 0.5147 0.5161 0.5147 0.4931 0.4485 0.4575 0.7875 0.7837 0.7856 0.4531 0.7761 0.4870 0.7495 0.4882 0.7875 0.4785 0.7059 0.5090 0.6546 0.4576 0.7685 0.4880 0.7230 0.5090 0.6546 0.4639 0.7666 0.4842 0.7799 0.4948 0.7533 0.4575 0.7875 0.4568 0.7362 0.6546 0.3916 0.4988 
0.4388 0.7818 0.5366 0.5109 0.4873 0.7609 0.5147 0.5161 0.5147 0.6546 0.3916 0.4988 0.4388 0.7818</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Evaluation of mono-domain training on TONIC with BERT sentence embeddings.</figDesc><table><row><cell></cell><cell></cell><cell cols="2">mBERT</cell><cell></cell><cell></cell><cell>AlephBERT</cell><cell></cell><cell></cell></row><row><cell>model</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>acc</cell><cell>P</cell><cell>R</cell><cell>F1</cell><cell>acc</cell></row><row><cell>RF 𝑏𝑒𝑟𝑡 LR 𝑏𝑒𝑟𝑡 XGB 𝑏𝑒𝑟𝑡 RF 𝑏𝑒𝑟𝑡+𝑙𝑜𝑐 LR 𝑏𝑒𝑟𝑡+𝑙𝑜𝑐 XGB 𝑏𝑒𝑟𝑡+𝑙𝑜𝑐 RF 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛 LR 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛 XGB 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛 RF 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛+𝑙𝑜𝑐 LR 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛+𝑙𝑜𝑐 XGB 𝑏𝑒𝑟𝑡+𝑟𝑒𝑔𝑖𝑜𝑛+𝑙𝑜𝑐 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔1 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔1 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔1 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔2 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔2 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔2 RF 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔3 LR 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔3 XGB 𝑡𝑓 𝑖𝑑𝑓 +𝑏𝑒𝑟𝑡+𝑛𝑔3</cell><cell cols="2">0.8607 0.8072 0.8059 0.8796 0.8251 0.8523 0.7864 0.7052 0.7699 0.7731 0.6957 0.7716 0.8461 0.6909 0.8097 0.7743 0.7782 0.7324 0.8718 0.6782 0.8228 0.7672 0.8702 0.7869 0.8792 0.5777 0.8340 0.7740 0.8418 0.7569 0.9057 0.5789 0.8316 0.7621 0.8221 0.7521 0.9130 0.6184 0.7619 0.7169 0.8320 0.7470 0.9130 0.6184 0.7619 0.7169 0.8174 0.7509</cell><cell>0.7452 0.7859 0.7874 0.7377 0.7933 0.8125 0.7287 0.7896 0.7508 0.7178 0.7895 0.8182 0.5827 0.7979 0.7875 0.5843 0.7886 0.7784 0.6438 0.7346 0.7771 0.6438 0.7346 0.7761</cell><cell>0.8615 0.8634 0.8634 0.8615 0.8710 0.8843 0.8539 0.8653 0.8444 0.8539 0.8691 0.8899 0.8159 0.8748 0.8729 0.8178 0.8710 0.8653 0.8349 0.8349 0.8672 0.8349 0.8349 0.8634</cell><cell cols="3">0.8283 0.8145 0.8160 0.8725 0.7990 0.8518 0.8504 0.8205 0.8160 0.8705 0.7974 0.8562 0.8611 0.8194 0.8432 0.8891 0.8400 0.8130 0.7231 0.7938 0.7799 0.7152 0.7814 0.8016 0.7235 0.7994 0.7799 0.7108 0.7878 0.8028 0.8250 0.7564 0.8034 0.7956 0.7568 0.7896 0.8227 0.7611 0.8092 0.7956 0.7522 0.7924 0.6562 0.6915 0.7919 0.8043 0.7765 0.8025 0.6423 0.6756 0.8253 
0.8432 0.7765 0.8025 0.8816 0.6543 0.6903 0.7881 0.7532 0.7681 0.8408 0.7872 0.8092 0.8677 0.6694 0.7074 0.7881 0.7532 0.7681 0.8385 0.7752 0.8002</cell><cell>0.8596 0.8710 0.8691 0.8672 0.8615 0.8880 0.8653 0.8748 0.8691 0.8653 0.8615 0.8899 0.8444 0.8729 0.8786 0.8425 0.8861 0.8786 0.8463 0.8520 0.8805 0.8501 0.8520 0.8767</cell></row></table></figure>
		</body>
		<back>
			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We can see that multilingual BERT achieves the best accuracy score; however, for all task-division options, the meta-learning scores are very close to the majority rule, which suggests that there is not much information that can be efficiently learned and transferred between tasks. We can also see that for fine-tuned BERT, AlephBERT has a clear advantage over the multilingual BERT model on all measures.</p><p>According to the scores in Tables <ref type="table">2 and 3</ref> (we omitted meta-learning models because of their low performance), the top-performing model is XGB applied to BERT embeddings enriched with region and location embeddings. In general, the XGB classifier outperforms the other classifiers in most cases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Cross-domain Mono-lingual Evaluation Results</head><p>The cross-domain mono-lingual experiments (all models were trained and tested on Hebrew data) in Table <ref type="table">2</ref> (right) show that using an offensive language dataset as a training set decreases classification accuracy for all the models, indicating that the task of detecting negative campaigns differs from the task of offensive language detection. Only a few models trained on offensive language data achieved accuracy slightly higher than or equal to the majority rule. Additionally, the F1 scores are very low, meaning that the models essentially 'guess' the majority class.</p><p>Table <ref type="table">5</ref> shows the results of the traditional models with BERT embeddings as the text representation for transfer learning from offensive language detection in Hebrew. From Table <ref type="table">2</ref> (right) and Table <ref type="table">5</ref>, we can conclude that (1) the XGB classifier mostly performs better than the other classifiers and (2) its performance is slightly higher with BERT embeddings than with tf-idf vectors and n-grams. Table <ref type="table">6</ref> shows the results of meta-learning trained on hate speech data and tested on the TONIC dataset. Two BERT models are initialized with the weights generated by meta-learning. The table also contains the scores of fine-tuned BERT without meta-learning.</p><p>We can see that (1) the best traditional models perform better than both fine-tuned language models and meta-models when trained on out-of-domain data; the only exception is the recall and F1 scores of meta-learning, which evidence its better ability to recognize the positive samples (negative political campaigns) while failing to filter out neutral posts (also confirmed by its lower precision); (2) AlephBERT performs better with meta-learning than multilingual BERT; (3) meta-learning outperforms fine-tuned language models in terms of both precision and recall.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Cross-domain Cross-lingual Evaluation Results</head><p>Table <ref type="table">7</ref> shows the evaluation of traditional models in the cross-domain cross-lingual scenario.</p><p>In this setting, we train our models on hate speech datasets in other languages, English and Arabic. The only text representation we can use here is multilingual BERT sentence embeddings generated by the pre-trained BERT model bert-base-multilingual-cased <ref type="bibr" target="#b17">[18]</ref>. Table <ref type="table">8</ref> shows the results of meta-learning trained on hate speech data in other languages (Arabic and English) and tested on the TONIC dataset. The English-language dataset is the Offensive Language Identification Dataset (OLID) <ref type="bibr" target="#b25">[26]</ref>, a collection of 14,100 tweets (we used 13,240 annotated tweets from its training set). The Arabic dataset is OLaA, a collection of 9,000 comments from Twitter annotated for hate speech, which we previously collected and introduced in <ref type="bibr" target="#b8">[9]</ref>. We used a multilingual BERT model <ref type="bibr" target="#b17">[18]</ref> for these experiments. For comparison, we also show the scores of this BERT model fine-tuned on Arabic and English hate-speech data and tested on TONIC.</p><p>Both experiments show that meta-learning adapts pre-trained models to new domains much better than traditional fine-tuning and can be efficiently applied to transfer learning from other domains and even other languages. In particular, we can observe the following:</p><p>(1) fine-tuned language models and meta-learning perform better than the best traditional models when trained on foreign languages; (2) meta-learning outperforms fine-tuned language models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Future Work and Conclusions</head><p>Based on the results of extensive experiments aimed at answering various research questions (see Section 1), we can conclude that (1) the best combination of text representation and classification model for negative campaign detection in Hebrew texts is XGB with sentence embeddings enriched with region and location information; (2) transfer learning with models trained to detect offensive content is inefficient for negative campaign detection, meaning that there is no strong relation between offensive language and negative campaigns; (3) transfer learning from different languages can be applied to Hebrew in the negative campaign detection task, while training on a large set in a foreign language can be even more efficient than training on Hebrew; and (4) meta-learning outperforms traditionally fine-tuned language models in cross-domain and cross-lingual scenarios, but not in a monolingual setting.</p></div>			</div>
		</back>
	</text>
</TEI>
