<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Error Analysis in a Hate Speech Detection Task: the Case of HaSpeeDe-TW at EVALITA 2018</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Chiara</forename><surname>Francesconi</surname></persName>
							<email>chiara.francesconi@edu.unito.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Lingue e Letterature Straniere e Culture Moderne</orgName>
								<orgName type="institution">University of Turin</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
							<email>bosco@di.unito.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">University of Turin</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fabio</forename><surname>Poletto</surname></persName>
							<email>poletto@di.unito.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">University of Turin</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">University of Turin</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Error Analysis in a Hate Speech Detection Task: the Case of HaSpeeDe-TW at EVALITA 2018</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6BBFDB6D23C3C631769A8E96DF2AB02A</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T22:34+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Taking as a case study the Hate Speech Detection task at EVALITA 2018, the paper discusses the distribution and typology of the errors made by the five best-scoring systems. The focus is on the subtask where Twitter data was used both for training and testing (HaSpeeDe-TW). In order to highlight the complexity of hate speech and the reasons behind the failures in its automatic detection, the annotation provided for the task is enriched with orthogonal categories annotated in the original reference corpus, such as aggressiveness, offensiveness, irony and the presence of stereotypes.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The field of Natural Language Processing witnesses an ever-growing number of automated systems trained on annotated data and built to solve, with remarkable results, the most diverse tasks. As performances increase, the resources, settings and features that contributed to the improvement are (understandably) emphasized, but sometimes little or no room is given to an analysis of the factors that caused a system to misclassify some items.</p><p>This paper aims to draw attention to the importance of a thorough error analysis of the performance of supervised systems, as a means to produce advancement in the field. Errors made by a system may reveal not only weaknesses of the system itself but also sparseness of the training data, the failure of the annotation scheme to describe the observed phenomena, or inherent ambiguity in the data. The presence of the same errors in the results of several systems involved in a shared task may also yield more interesting hints about the directions to be followed in improving both data and systems. (Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).)</p><p>As a case study for error analysis, this paper uses data from a shared task. Shared tasks offer clean, high-quality annotated datasets on which different systems are trained and tested. 
Although researchers often omit to reflect on what caused a system to fail <ref type="bibr" target="#b11">(Nissim et al., 2017)</ref>, shared tasks are an ideal ground for sharing negative results and encouraging reflection on "what did not work": an excellent opportunity to carry out a comparative error analysis and search for patterns that may, in turn, suggest improvements in both the dataset and the systems.</p><p>Here we analyze the case of the Hate Speech Detection (HaSpeeDe) task <ref type="bibr" target="#b4">(Bosco et al., 2018)</ref> presented at EVALITA 2018, the Evaluation Campaign for NLP and Speech Tools for Italian <ref type="bibr" target="#b5">(Caselli et al., 2018)</ref>. HS detection is a highly complex task, starting from the very definition of the notion on which it is centered. Considering the growing attention it is gaining (see, e.g., the variety of resources and tasks for HS developed in the last few years), we believe that error analysis could be especially interesting and useful for this case, as well as for other tasks where the outcome of systems meaningfully depends on the resources exploited for training and testing.</p><p>The paper outlines the background and motivations behind this research (Section 2), describes the sub-task on which the study is based (Section 3), reports on the error analysis process (Section 4), discusses its results (Section 5), and presents some conclusive remarks (Section 6).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Background and Motivations</head><p>There are several issues connected to the identification of HS: its juridical definition, the subjectivity of its perception, the need to remove potentially illegal content from the web without unjustly removing legal content, and a list of linguistic phenomena that partly overlap with HS but need to be kept apart.</p><p>Many works have recently contributed to the field by releasing novel annotated resources or presenting automated classifiers. Two reviews on HS detection were recently published by <ref type="bibr" target="#b14">Schmidt and Wiegand (2017)</ref> and <ref type="bibr" target="#b9">Fortuna and Nunes (2018)</ref>. Since 2016, shared tasks on the detection of HS or related phenomena (such as abusive language or misogyny) have been organized, effectively fostering advancements in resource building and system development. These include HatEval at SemEval 2019 <ref type="bibr" target="#b2">(Basile et al., 2019)</ref>, AMI at IberEval 2018 <ref type="bibr" target="#b8">(Fersini et al., 2018)</ref>, HaSpeeDe at EVALITA 2018 <ref type="bibr" target="#b4">(Bosco et al., 2018)</ref> and more. Nevertheless, the growing interest in HS detection suggests that the task is far from solved: improving the quality and interoperability of resources, designing suitable annotation schemes and reducing biases in the annotation are still as needed as work on system engineering. Establishing standards and good practices in error analysis can enhance these processes and push towards the development of effective classifiers for HS.</p><p>While the academic literature is rich in works on human annotation and evaluation metrics, it is not as easy to find works dedicated to the error analysis of automated classification systems. Such analysis is more often found as a section of papers describing a system (see, e.g., <ref type="bibr" target="#b10">(Mohammad et al., 2018)</ref>). 
This section, however, is not always present. Examining the errors made by a system, classifying them and searching for linguistic patterns appears to be a somewhat undervalued job, especially when the system had an overall good performance. Yet, it is crucial to understand why a system proved to be a weak solution to certain instances of a problem, even while being excellent for other instances.</p><p>In the context of COLING 2018, error analysis emerged as one of the most relevant features to be addressed in NLP research<ref type="foot" target="#foot_0">1</ref>. This attention to error analysis encouraged authors to submit papers with a dedicated section, with <ref type="bibr" target="#b16">Yang et al. (2018)</ref> winning the award for the best error analysis, and is a step towards establishing good practices in the NLP community.</p><p>In the wake of this awareness, we apply linguistic insights to one of the annotated corpora used within the HaSpeeDe shared task, namely the HaSpeeDe-TW sub-task dataset (described in Section 3). The characteristics of this dataset make it ideal for our purpose: each tweet is connected to a target and is annotated not only for the presence of HS but also for four other parameters. While a comparative analysis of two corpora representing different textual genres (HaSpeeDe-TW and HaSpeeDe-FB) might have offered interesting perspectives, the lack of such annotations in the FB dataset prevents a thorough comparison. Furthermore, among the in-domain HaSpeeDe sub-tasks, HaSpeeDe-TW is the one where systems achieved the lowest F1-scores, thus providing more material for our analysis.</p></div> <div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">HaSpeeDe-TW at EVALITA 2018: A Brief Overview</head><p>While a description of the HaSpeeDe task as a whole has been provided in the organizers' overview <ref type="bibr" target="#b4">(Bosco et al., 2018)</ref>, here we focus on HaSpeeDe-TW, one of the three sub-tasks into which the competition was structured<ref type="foot" target="#foot_1">2</ref>. 
The subtask consisted in the binary classification of hateful vs non-hateful tweets. The training and test sets contain 3,000 and 1,000 tweets respectively, labeled with 1 or 0 for the presence of HS, with a distribution, in both sets, of around 1/3 hateful against 2/3 non-hateful tweets. The data are drawn from an already existing HS corpus <ref type="bibr" target="#b12">(Poletto et al., 2017)</ref>, whose original annotation scheme was simplified for the purposes of the task (see Section 4). Nine teams participated in the task, submitting fifteen runs. The five best scores, submitted by the teams ItaliaNLP (whose runs ranked 1st and 2nd) <ref type="bibr" target="#b6">(Cimino and De Mattei, 2018)</ref>, RuG <ref type="bibr" target="#b0">(Bai et al., 2018)</ref>, InriaFBK <ref type="bibr" target="#b7">(Corazza et al., 2018)</ref> and sbMMP <ref type="bibr" target="#b15">(von Grünigen et al., 2018)</ref>, ranged from 0.7993 to 0.7809 in terms of macro-averaged F1-score<ref type="foot" target="#foot_2">3</ref>. They applied both classical machine learning approaches, Linear Support Vector Machines in particular (ItaliaNLP, RuG), and more recent deep learning algorithms, such as Convolutional Neural Networks (sbMMP) or Bi-LSTMs (ItaliaNLP, who adopted a multi-task learning approach exploiting the SENTIPOLC 2016 <ref type="bibr" target="#b1">(Barbieri et al., 2016)</ref> dataset as well). The learning architectures resorted both to surface features, such as word and character n-grams (RuG), and to linguistic information, such as Part of Speech (ItaliaNLP).</p><p>In the next section, we describe the errors collected from these five best runs in relation to the specific factors we chose to analyze in this study, encompassing and merging qualitative and quantitative observations. Our analysis is strictly based on the results provided by those systems. 
An analysis focused on the features of the systems that determined the errors is unfortunately beyond the scope of this work, since HaSpeeDe participants were only required to submit their results after training their systems.</p></div>
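The macro-averaged F1-score used to rank the runs averages the per-class F1 of the hateful and non-hateful classes, so that the minority (hateful) class weighs as much as the majority one. A minimal, self-contained sketch of this metric; the label lists are illustrative, not actual task data:

```python
# Macro-averaged F1 for binary hate-speech labels (1 = hateful,
# 0 = non-hateful): per-class F1 scores are computed independently
# and then averaged, so the rarer hateful class is not drowned out.

def f1(tp, fp, fn):
    """Harmonic mean of precision and recall for a single class."""
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def macro_f1(gold, pred):
    """Average of the per-class F1 scores over both labels."""
    scores = []
    for cls in (0, 1):
        tp = sum(1 for g, p in zip(gold, pred) if g == cls and p == cls)
        fp = sum(1 for g, p in zip(gold, pred) if g != cls and p == cls)
        fn = sum(1 for g, p in zip(gold, pred) if g == cls and p != cls)
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

gold = [1, 0, 0, 1, 0, 1, 0, 0]  # illustrative gold labels
pred = [1, 0, 1, 0, 0, 1, 0, 0]  # illustrative system output
print(round(macro_f1(gold, pred), 4))
```

An unbalanced test set such as HaSpeeDe-TW's (roughly 1/3 hateful) is exactly the situation where macro averaging departs most from plain accuracy, which is why the task organizers used it for ranking.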
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Error Analysis</head><p>Error analysis can be used between runs to improve results or to test different feature settings. With the aim of developing a broader reflection on the especially hard linguistic patterns within a HS detection task, here it is performed a posteriori, on the aggregated results of five systems on the HaSpeeDe-TW test set (1,000 tweets). We focus on the answers given by the majority of the five best systems because we believe they provide a faithful representation of the errors without the noise introduced by the worst runs.</p><p>The test set was composed of 32.4% hateful and 67.6% non-hateful tweets. As the first step of our analysis, we compared the gold label assigned to each tweet in the test set with the one attributed by the majority of the five runs considered for the task. An error was considered to occur when the label assigned by the majority of the systems was different from the gold label. Extending the analysis to all fifteen submitted runs, 156 out of 1,000 tweets were misclassified by the majority of them; this number increases to 172 when only the five best runs are taken into account.</p><p>Regardless of the correct label, agreement among the five best runs is higher than agreement among all runs and among any other set of runs: the systems which best modeled the phenomenon on the data provided appear to have made similar mistakes. This supports our hypothesis that errors mostly depend on features of the data rather than on the systems, which all differ in approach and feature setting.</p><p>Even though only the annotation concerning the presence of HS was distributed to the teams, the corpus from which the training and test sets of HaSpeeDe-TW were extracted was provided with additional labels <ref type="bibr" target="#b12">(Poletto et al., 2017;</ref><ref type="bibr" target="#b13">Sanguinetti et al., 2018)</ref>. 
These labels (see Table <ref type="table" target="#tab_0">1</ref>) were meant to mark the user's intention to be aggressive (aggressiveness), the potentially hurtful effect of a tweet (offensiveness), the use of ironic devices to possibly mitigate a hateful message (irony), and whether the tweet contains any implicit or explicit reference to negative beliefs about the targeted group (stereotype).</p><p>These labels were conceived with the aim of identifying some particular aspects that may intersect HS but occur independently. As a matter of fact, hateful content towards a given target might be expressed using aggressive tones or offensive/stereotypical slurs, but also in much subtler forms. At the same time, aggressive or offensive content, though addressed to a potential HS target, does not necessarily imply the presence of HS. Our assumption while carrying out this study was that this close, but at times misleading, relation between HS on one side and these phenomena on the other could be a source of error for the automatic systems.</p><p>In addition, other aspects of both a linguistic and an extra-linguistic nature were taken into account, so as to complement the analysis. We thus considered the tweets' targets, i.e. Roma, immigrants and Muslims (information also available from the original HS corpus). Finally, we selected three features that are typical of computer-mediated communication and of social platforms such as Twitter: the presence of links, multi-word hashtags, and the use of capitalized words.</p><p>As for the method adopted, the percentage of errors for the gold positives and the gold negatives in the whole test set was calculated. First, the rates were calculated considering the two labels, hateful and non-hateful, separately, in order to balance their different distribution in the test set; then the results were halved so as to represent the whole corpus in percentage terms and to maintain the proportion between the results for the two tags. 
All the percentages relating two different tags were calculated in this way, so that the results could be easily compared. The error percentage for each label of each category was then determined and compared to the overall result, to understand whether the label influenced it positively or negatively. Table <ref type="table" target="#tab_1">2</ref> summarizes the results for each label, showing the distribution of false negatives (FN), false positives (FP), true positives (TP) and true negatives (TN). Error percentages higher than the overall result are in bold.</p></div>
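The error-identification step just described, counting a tweet as misclassified when the label assigned by the majority of the runs differs from the gold one, can be sketched as follows (the labels are illustrative, not drawn from the task data):

```python
from collections import Counter

def majority_label(votes):
    """Most frequent label among the runs' predictions for one tweet."""
    return Counter(votes).most_common(1)[0][0]

def majority_errors(gold, runs):
    """Return indices of tweets whose majority label differs from gold.

    `runs` holds one prediction list per run, aligned with `gold`.
    With an odd number of runs and binary labels, no ties can occur.
    """
    errors = []
    for i, g in enumerate(gold):
        votes = [run[i] for run in runs]
        if majority_label(votes) != g:
            errors.append(i)
    return errors

gold = [1, 0, 1, 0]          # illustrative gold labels
runs = [[1, 0, 0, 1],        # run 1
        [1, 1, 0, 0],        # run 2
        [0, 0, 0, 0]]        # run 3 (the paper aggregates five runs)
print(majority_errors(gold, runs))
```

Comparing the error sets produced by different groups of runs (all fifteen vs. the five best, as in the counts above) then amounts to calling `majority_errors` with different `runs` lists.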
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Results and Discussion</head><p>In order to find some answers to our research questions and evidence of the influence of the annotated features on the systems' results, we provide in this section an analysis driven by the categories described in the previous section.</p><p>Aggressiveness and Offensiveness. The different degrees of aggressiveness did not affect the systems' recall, but we measured more FPs when weak or strong aggressiveness is involved (more than thrice as many as in the overall results when strong aggressiveness is present). Offensiveness seems to exert a similar but heavier influence on performance, yielding better recall but worse precision: FPs are more than doubled when strong offensiveness is present.</p><p>The presence of offensiveness is often associated with slurs or vulgar terms: these are not a consistent presence in the dataset (the most vulgar tweets are probably quickly removed by the platform), and mostly appear in tweets classified as HS. However, about half of the non-hateful tweets containing offensive words were wrongly classified as hateful, showing that offensiveness can be misleading for systems. In these cases, a lexicon-based approach can fail, while attention to the context could be crucial: in the most common instances of false positives, in fact, the offensive words did not refer to the targets.</p><p>HS Targets. Analyzing the three targets of HS allowed us to understand how the systems reacted to different ways of expressing hate.</p><p>Most of the errors were caused by the target Roma: few hateful tweets were recognized, and FNs are more than 30%. Results for the target Immigrants are similar to the overall performance, with only a slightly higher number of FPs. 
The target Muslims caused a low number of FNs but almost twice as many FPs as in the general performance.</p><p>The systems seem to struggle to recognize hateful content against Roma: this may be caused by an imbalance in the test set (only 6.3% of the test set consists of hateful tweets targeting Roma, against 12.6% and 13.4% for the targets Immigrants and Muslims respectively) or by biases in the annotation.</p><p>The poor results achieved in classifying messages with target Roma can also be explained by the subtler ways of expressing HS when this target is involved, more heavily based on stereotypes than with the other targets. Hate against the other two targets, in particular Muslims, was instead very explicit. See the following examples extracted from the test set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>2235.</head><p>Roma, colpisce una pecora con il pallone: bambino rom accecato da un pastore https://t.co/KsSAS3fUx9 @ilmessaggeroit HA DIFESO I SUOI AVERI!<ref type="foot" target="#foot_3">4</ref> [FN, strong aggressiveness, target: Roma] 4749. @Corriere Uccidere gli islamici, prima di tutto.<ref type="foot" target="#foot_4">5</ref> [TP, strong aggressiveness, target: religion]</p><p>Other features. Some other features were considered in our analysis. Stereotypes were more frequent in hateful tweets, and their presence caused a slight increase in FPs; conversely, cases of HS without stereotypes posed no issues to the systems. Moreover, as expected, the presence of irony slightly increased the error rate in both hateful and non-hateful tweets.</p><p>The presence of Twitter's linguistic devices also negatively influenced the results, probably because of the difficulty encountered by systems when some semantic content assumes non-standard forms, e.g. links, multi-word hashtags and capitalized words.</p><p>URLs frequently occur in the data, but mostly in non-hateful tweets (although this may be a peculiarity of this dataset). The systems appear to have trouble recognizing hateful tweets that contain URLs (errors increased by 14%). Conversely, the absence of URLs caused an increase in FPs. This feature is unlikely to be directly connected to hateful language: we rather believe that it could somehow affect predictions regardless of the actual content. Multi-word hashtags also influenced the results, especially for hateful content: their presence increased FNs by 8%. The reason for this kind of error might lie in the fact that our dataset contains cases where the crucial element in a hateful tweet is precisely the hashtag, as in the example below:</p><p>2149. Quando vedremo lo stessa tema portato in piazza con la stessa forza e determinazione? Mai credo. 
#stopislam https://t.co/dDYLZB1BlJ [multi-word hashtag, FN] ("When will we see people fighting for the same issue with the same strength and determination? Never, I believe.") The text in this tweet is not hateful, but an element of hatred is conveyed by the hashtag "#stopislam". The ability to separate multi-word hashtags into their component words would improve the performance of the systems: tweets with a multi-word hashtag clarifying the text would have a better chance of being correctly identified.</p><p>Finally, some capitalized words were found in the dataset, mostly in hateful tweets, which again caused an increase in FPs. Despite their small number, we noticed that, in non-hateful tweets, a higher percentage of capitalized words are named entities (names of places, people, newspapers, etc.), while in hateful tweets capitalized words are more often used to intensify opinions or feelings. Among all the features taken into account, offensiveness seems to have affected performance in the most varied ways: its absence led systems to classify as non-hateful tweets that are indeed hateful, while its presence caused the inverse error. A possible explanation is that, as shown in <ref type="bibr" target="#b13">Sanguinetti et al. (2018)</ref>, offensiveness does not correlate with HS even though it can be one of its features. The systems might have taken offensive terms as indicators of HS, as humans also tend to do (see for example <ref type="bibr" target="#b3">Bohra et al. (2018)</ref>), but this is a false assumption that systems should be trained to avoid. Aggressiveness also caused a certain number of errors, though only affecting precision.</p></div> <div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusion</head><p>This paper presents a detailed error analysis of the results obtained within the context of a shared task for HS detection. 
In our study, we took into account two types of data: content information, provided by the gold standard labels assigned to each tweet; and metadata information, namely the presence of URLs, hashtags and capitalized words. The results prove the importance of also considering categories other than the one on which the task was centered.</p><p>The analysis of performance in relation to URLs yields a controversial result. There are two reasons why tweets collected via Twitter's API may contain a URL: the tweet may have been cut off, with a URL automatically generated as a link to the complete tweet, or the URL may be part of the original tweet and lead to an external page. In both cases, unless the URL is followed, the tweet is likely to be harder to understand than a tweet that contains no URL. This may cause lower agreement among human judges, and it is a very complicated issue for automated systems to deal with, especially when the meaning of the tweet is unintelligible without first opening the URL. Tweets containing URLs are, for the time being, less reliable as training data and pose a tougher challenge for Sentiment Analysis tasks at large; we encourage an effort towards solving this issue.</p><p>As for capitalized words, future work may include investigating how they affect human annotation, as some judges may show a bias towards associating capitalized words with HS or other categories. Furthermore, improvements may come from considering the PoS tags of such words, or the number of consecutive capitalized words.</p><p>Multi-word hashtags also need to be treated with care, as they may affect and even overturn the meaning of the whole tweet. 
Moreover, a hashtag may require syntactic, semantic and world-knowledge processing in order to be fully understood: for example, by comparing the phrase "stop Islam" with, e.g., "stop harassment", we can see that the word "stop" is not negative in itself, and becomes so only because it is followed by the name of a religion whose members are, nowadays and in Western society, particularly subject to discrimination.</p><p>Overall, our analysis suggests that the systems' failures are due to the difficulty of dealing with cases where HS is less directly expressed, and paves the way for future work on, e.g., the development of tools that perform a more careful analysis of the text.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>The original annotation scheme of the HS corpus that was (partially) used in HaSpeeDe-TW.</figDesc><table><row><cell>label</cell><cell>values</cell></row><row><cell>aggressiveness</cell><cell>no, weak, strong</cell></row><row><cell>offensiveness</cell><cell>no, weak, strong</cell></row><row><cell>irony</cell><cell>yes, no</cell></row><row><cell>stereotype</cell><cell>yes, no</cell></row></table></figure>
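The hashtag segmentation discussed above can be approximated with a greedy longest-match split against a word list. The sketch below is purely illustrative (the tiny lexicon is a stand-in for a real vocabulary, and this is not the approach of any of the participating systems):

```python
# Greedy longest-match segmentation of a multi-word hashtag such as
# "#stopislam" into its component words. The toy LEXICON is an
# illustrative stand-in for a real Italian/English vocabulary.

LEXICON = {"stop", "islam", "harassment", "no", "invasion"}

def segment_hashtag(tag, lexicon=LEXICON):
    """Split the hashtag body into lexicon words, longest match first.

    Returns the list of words, or None when no full segmentation exists.
    """
    body = tag.lstrip("#").lower()
    words, i = [], 0
    while i < len(body):
        for j in range(len(body), i, -1):  # try the longest prefix first
            if body[i:j] in lexicon:
                words.append(body[i:j])
                i = j
                break
        else:
            return None  # some stretch of characters is not in the lexicon
    return words

print(segment_hashtag("#stopislam"))
```

Feeding the recovered words ("stop", "islam") to a classifier in place of the opaque token would give examples like tweet 2149 above a better chance of being labeled correctly.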
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Percentage of correct (TPs and TNs) and erroneous (FPs and FNs) results in relation to the features considered in the analysis, along with the actual distribution of these features in the test set.</figDesc><table><row><cell></cell><cell>FN</cell><cell>FP</cell><cell>TP</cell><cell>TN</cell><cell>Gold HS</cell><cell>Gold Not-HS</cell></row><row><cell>general</cell><cell>15%</cell><cell>6%</cell><cell>35%</cell><cell>44%</cell><cell>32.3%</cell><cell>67.7%</cell></row><row><cell>no aggressiveness</cell><cell>15%</cell><cell>4%</cell><cell>35%</cell><cell>46%</cell><cell>13.5%</cell><cell>56.8%</cell></row><row><cell>weak aggressiveness</cell><cell>15%</cell><cell>10%</cell><cell>35%</cell><cell>40%</cell><cell>11.2%</cell><cell>10.1%</cell></row><row><cell>strong aggressiveness</cell><cell>15%</cell><cell>19%</cell><cell>35%</cell><cell>31%</cell><cell>7.6%</cell><cell>0.8%</cell></row><row><cell>no offensiveness</cell><cell>20%</cell><cell>5%</cell><cell>30%</cell><cell>45%</cell><cell>10.9%</cell><cell>60%</cell></row><row><cell>weak offensiveness</cell><cell>13%</cell><cell>11%</cell><cell>37%</cell><cell>39%</cell><cell>14.6%</cell><cell>4.9%</cell></row><row><cell>strong offensiveness</cell><cell>12%</cell><cell>16%</cell><cell>38%</cell><cell>34%</cell><cell>6.8%</cell><cell>2.8%</cell></row><row><cell>no irony</cell><cell>15%</cell><cell>5%</cell><cell>35%</cell><cell>45%</cell><cell>27.8%</cell><cell>59%</cell></row><row><cell>yes irony</cell><cell>18%</cell><cell>9%</cell><cell>32%</cell><cell>41%</cell><cell>4.5%</cell><cell>8.7%</cell></row><row><cell>no stereotype</cell><cell>15%</cell><cell>5%</cell><cell>35%</cell><cell>45%</cell><cell>11.6%</cell><cell>49.7%</cell></row><row><cell>yes stereotype</cell><cell>15%</cell><cell>8%</cell><cell>35%</cell><cell>42%</cell><cell>20.7%</cell><cell>18%</cell></row><row><cell>Immigrants</cell><cell>15%</cell><cell>9%</cell><cell>35%</cell><cell>41%</cell><cell>12.6%</cell><cell>22.4%</cell></row><row><cell>Muslims</cell><cell>8%</cell><cell>11%</cell><cell>42%</cell><cell>39%</cell><cell>13.4%</cell><cell>12.2%</cell></row><row><cell>Roma</cell><cell>31%</cell><cell>1%</cell><cell>19%</cell><cell>49%</cell><cell>6.3%</cell><cell>33.1%</cell></row><row><cell>no link</cell><cell>11%</cell><cell>13%</cell><cell>37%</cell><cell>39%</cell><cell>25.4%</cell><cell>24.4%</cell></row><row><cell>yes link</cell><cell>29%</cell><cell>1%</cell><cell>21%</cell><cell>49%</cell><cell>7%</cell><cell>43.2%</cell></row><row><cell>multi hashtags</cell><cell>23%</cell><cell>8%</cell><cell>27%</cell><cell>42%</cell><cell>3%</cell><cell>1.9%</cell></row><row><cell>no capitalized words</cell><cell>15%</cell><cell>5%</cell><cell>35%</cell><cell>45%</cell><cell>29.1%</cell><cell>64.1%</cell></row><row><cell>yes capitalized words</cell><cell>14%</cell><cell>9%</cell><cell>36%</cell><cell>41%</cell><cell>3.3%</cell><cell>3.5%</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://coling2018.org/error-analysis-in-research-and-writing/.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">The other two being HaSpeeDe-FB, where Facebook data were used both for training and testing the systems, and Cross-HaSpeeDe, further subdivided into Cross-HaSpeeDe-FB and Cross-HaSpeeDe-TW, where systems were trained using Facebook data and tested against Twitter data in the former, and the opposite in the latter.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">All official ranks are available here: https://goo.gl/xPyPRW.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">"Rome, Roma child hits a sheep with a ball: blinded by a shepherd https://t.co/KsSAS3fUx9 @ilmessaggeroit HE DEFENDED HIS PROPERTY!"</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">"@Corriere Kill the Muslims, first of all."</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The work of C. Bosco and M. Sanguinetti is partially funded by Progetto di Ateneo/CSP 2016 (Immigrants, Hate and Prejudice in Social Media, S1618 L2 BOSC 01), while that of F. Poletto is funded by Fondazione Giovanni Goria and Fondazione CRT (Talenti della Società Civile 2018).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">RuG @ EVALITA 2018: Hate Speech Detection In Italian Social Media</title>
		<author>
			<persName><forename type="first">Xiaoyu</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Flavio</forename><surname>Merenda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Claudia</forename><surname>Zaghi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Malvina</forename><surname>Nissim</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop</title>
				<meeting>Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018. 2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of the Evalita 2016 SENTIment POLarity Classification Task</title>
		<author>
			<persName><forename type="first">Francesco</forename><surname>Barbieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Danilo</forename><surname>Croce</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Malvina</forename><surname>Nissim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nicole</forename><surname>Novielli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop</title>
				<meeting>the Fifth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Semeval-2019 task 5: Multilingual detection of hate speech against immigrants and women in Twitter</title>
		<author>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elisabetta</forename><surname>Fersini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Debora</forename><surname>Nozza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francisco Manuel</forename><surname>Rangel Pardo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International Workshop on Semantic Evaluation</title>
				<meeting>the 13th International Workshop on Semantic Evaluation</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="54" to="63" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A dataset of Hindi-English code-mixed social media text for hate speech detection</title>
		<author>
			<persName><forename type="first">Aditya</forename><surname>Bohra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Deepanshu</forename><surname>Vijay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vinay</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Syed Sarfaraz</forename><surname>Akhtar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manish</forename><surname>Shrivastava</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second Workshop on Computational Modeling of Peoples Opinions, Personality, and Emotions in Social Media</title>
				<meeting>the Second Workshop on Computational Modeling of Peoples Opinions, Personality, and Emotions in Social Media</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="36" to="41" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Overview of the EVALITA 2018 hate speech detection task</title>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Felice</forename><surname>Dell'Orletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabio</forename><surname>Poletto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maurizio</forename><surname>Tesconi</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop</title>
				<meeting>Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">EVALITA 2018: Overview of the 6th Evaluation Campaign of Natural Language Processing and Speech Tools for Italian</title>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Nicole</forename><surname>Novielli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian</title>
				<meeting>Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Final Workshop</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Multitask Learning in Deep Neural Networks for Hate Speech Detection in Facebook and Twitter</title>
		<author>
			<persName><forename type="first">Andrea</forename><surname>Cimino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lorenzo</forename><surname>De Mattei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Felice</forename><surname>Dell'Orletta</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian</title>
				<meeting>Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Final Workshop</note>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Comparing Different Supervised Approaches to Hate Speech Detection</title>
		<author>
			<persName><forename type="first">Michele</forename><surname>Corazza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefano</forename><surname>Menini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pinar</forename><surname>Arslan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rachele</forename><surname>Sprugnoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elena</forename><surname>Cabrio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sara</forename><surname>Tonelli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Serena</forename><surname>Villata</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian</title>
				<meeting>Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note>Final Workshop</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Overview of the Task on Automatic Misogyny Identification at IberEval</title>
		<author>
			<persName><forename type="first">Elisabetta</forename><surname>Fersini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maria</forename><surname>Anzovino</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018), co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2018)</title>
				<meeting>the Third Workshop on Evaluation of Human Language Technologies for Iberian Languages (IberEval 2018), co-located with 34th Conference of the Spanish Society for Natural Language Processing (SEPLN 2018)</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="214" to="228" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A survey on automatic detection of hate speech in text</title>
		<author>
			<persName><forename type="first">Paula</forename><surname>Fortuna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sérgio</forename><surname>Nunes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys (CSUR)</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page">85</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Semeval-2018 task 1: Affect in tweets</title>
		<author>
			<persName><forename type="first">Saif</forename><surname>Mohammad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Felipe</forename><surname>Bravo-Marquez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mohammad</forename><surname>Salameh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Svetlana</forename><surname>Kiritchenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The 12th International Workshop on Semantic Evaluation</title>
				<meeting>The 12th International Workshop on Semantic Evaluation</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1" to="17" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Sharing is caring: The future of shared tasks</title>
		<author>
			<persName><forename type="first">Malvina</forename><surname>Nissim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lasha</forename><surname>Abzianidze</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kilian</forename><surname>Evang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rob</forename><surname>van der Goot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hessel</forename><surname>Haagsma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Barbara</forename><surname>Plank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martijn</forename><surname>Wieling</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computational Linguistics</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="issue">4</biblScope>
			<biblScope unit="page" from="897" to="904" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Hate Speech Annotation: Analysis of an Italian Twitter Corpus</title>
		<author>
			<persName><forename type="first">Fabio</forename><surname>Poletto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Stranisci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fourth Italian Conference on Computational Linguistics (CLiC-it)</title>
				<meeting>the Fourth Italian Conference on Computational Linguistics (CLiC-it)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">An Italian Twitter Corpus of Hate Speech against Immigrants</title>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabio</forename><surname>Poletto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Stranisci</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th Language Resources and Evaluation Conference</title>
				<meeting>the 11th Language Resources and Evaluation Conference</meeting>
		<imprint>
			<publisher>LREC</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A Survey on Hate Speech Detection using Natural Language Processing</title>
		<author>
			<persName><forename type="first">Anna</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Michael</forename><surname>Wiegand</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth International Workshop on Natural Language Processing for Social Media. Association for Computational Linguistics</title>
				<meeting>the Fifth International Workshop on Natural Language Processing for Social Media. Association for Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">spMMMP at GermEval 2018 Shared Task: Classification of Offensive Content in Tweets using Convolutional Neural Networks and Gated Recurrent Units</title>
		<author>
			<persName><forename type="first">Dirk</forename><surname>von Grünigen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralf</forename><surname>Grubenmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fernando</forename><surname>Benites</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pius</forename><surname>von Däniken</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mark</forename><surname>Cieliebak</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of GermEval 2018, 14th Conference on Natural Language Processing</title>
				<meeting>GermEval 2018, 14th Conference on Natural Language Processing</meeting>
		<imprint>
			<publisher>KONVENS</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">SGM: Sequence Generation Model for Multi-label Classification</title>
		<author>
			<persName><forename type="first">Pengcheng</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xu</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Shuming</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Wei</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Houfeng</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th International Conference on Computational Linguistics</title>
				<meeting>the 27th International Conference on Computational Linguistics</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="3915" to="3926" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
