<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Fontana-Unipi @ HaSpeeDe2: Ensemble of Transformers for the Hate Speech Task at Evalita</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Michele</forename><surname>Fontana</surname></persName>
							<email>m.fontana12@studenti.unipi.it</email>
							<affiliation key="aff0">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giuseppe</forename><surname>Attardi</surname></persName>
							<email>attardi@di.unipi.it</email>
							<affiliation key="aff1">
								<orgName type="department">Dipartimento di Informatica</orgName>
								<orgName type="institution">Università di Pisa</orgName>
							</affiliation>
						</author>
						<title level="a" type="main">Fontana-Unipi @ HaSpeeDe2: Ensemble of Transformers for the Hate Speech Task at Evalita</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">42ADB9A1F4139EB1A3219D9E495FF88C</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T01:06+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We describe our approach and experiments for Task A of the second edition of HaSpeeDe, within the Evalita 2020 evaluation campaign. The proposed model consists of an ensemble of classifiers built from three variants of a common neural architecture. Each classifier uses contextual representations from transformers pretrained on Italian texts and fine-tuned on the training set of the challenge. We tested the proposed model on the two official test sets: the in-domain test set, containing just tweets, and the out-of-domain one, which also includes news headlines. Our submissions ranked 4th on the tweets test set and 17th on the second test set.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>The spread of hateful messages on social media has become a serious issue; techniques for hate speech detection have therefore become quite relevant. The goal of the Hate Speech Detection task <ref type="bibr" target="#b11">(Sanguinetti et al., 2020)</ref> at Evalita 2020 <ref type="bibr" target="#b1">(Basile et al., 2020)</ref> is to improve the automatic detection of hate messages in Italian tweets. The organizers provided participants with the HaSpeeDe2 dataset, which consists of 6,837 Italian tweets containing, besides the raw text, hashtags and emojis. Task A can be cast as a binary classification task: the model has to predict whether a given message contains hate speech or not.</p><p>Approaches based on transformer models have recently become quite popular and have proved effective in reaching state-of-the-art scores on major NLP tasks such as those of the GLUE benchmark <ref type="bibr" target="#b13">(Wang et al., 2018)</ref>. With our experiments we try to assess the effectiveness of transformers trained on Italian documents in a task involving Italian texts from different sources. We experimented with both a transformer model trained specifically on Italian tweets and one trained on generic web documents.</p><p>We combine several instances of classifiers based on these transformers in order to address the problem of over-fitting due to the small size of the training set.</p><p>For this edition of the Evalita HaSpeeDe task, the organizers released two test sets: an in-domain one consisting of tweets and an out-of-domain one containing also news headlines.</p><p>The ensemble model of our official submission achieved a competitive score of 78.03 Macro-F1 on the in-domain test set but did not perform as well on the second test set.</p><p>We make the source code for our experiments available as Open Source at https://github.com/mikelefonty/Haspeede2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Related Work</head><p>The first edition of HaSpeeDe was held in 2018, and the results produced during that contest were the starting point of our research. As described in <ref type="bibr" target="#b2">(Bosco et al., 2018)</ref>, most of the systems were based on neural networks and used word embeddings, such as FastText <ref type="bibr" target="#b5">(Grave et al., 2018)</ref> or word2vec <ref type="bibr" target="#b8">(Polignano and Basile, 2018)</ref>, in the first layer of their architecture. The embedding layer was usually followed by a Recurrent Network or a Convolutional Neural Network to obtain an internal representation of the input text. This hidden representation was fed to a series of dense layers to produce the final classification result.</p><p>Over the last couple of years, the trend in approaches to language analysis has changed considerably, as can be seen by examining the models used in competitions like SemEval 2020 OffensEval 2 <ref type="bibr" target="#b14">(Zampieri et al., 2020)</ref>. In these newer models, to get a better text representation, the embedding layer is often replaced by a Transformer <ref type="bibr" target="#b12">(Vaswani et al., 2017)</ref> such as BERT <ref type="bibr" target="#b4">(Devlin et al., 2019)</ref>, RoBERTa <ref type="bibr" target="#b6">(Liu et al., 2019)</ref>, or Multilingual BERT <ref type="bibr" target="#b4">(Devlin et al., 2019)</ref>.</p><p>We followed this trend but also focused our attention on the problem raised by the small size of the dataset. As <ref type="bibr" target="#b10">Risch and Krestel (2020)</ref> note, transformer models tend to have high variance with respect to the input dataset, which often leads to overfitting. 
The authors therefore suggest implementing an ensemble of classifiers to reduce the variance and consequently improve the generalization capabilities of the trained model.</p><p>In the following, we describe a similar approach based on the Bagging technique <ref type="bibr" target="#b3">(Breiman, 1996)</ref>, in which we use three different transformer-based classifiers to populate the ensemble and obtain the final prediction.</p></div>
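The bagging idea referenced above (bootstrap replicates of the training set, one model trained per replicate) can be sketched minimally in Python; the function name and the toy data below are illustrative, not taken from the paper's released code:

```python
import numpy as np

def bootstrap_sample(X, y, rng):
    """Draw one bootstrap replicate: len(X) examples sampled with replacement."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

# Toy stand-in data: 10 examples with binary labels
rng = np.random.default_rng(0)
X = np.arange(10).reshape(10, 1)
y = np.array([0, 1] * 5)

# Each ensemble member would be trained on its own replicate
Xb, yb = bootstrap_sample(X, y, rng)
```

Because sampling is done with replacement, each replicate typically repeats some examples and omits others, which is what decorrelates the ensemble members.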
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">System Architecture</head><p>During the design phase of our classifier, we looked for a transformer trained directly on a significantly large collection of Italian texts, and particularly on Italian tweets, in order to compensate for the small size of the training data. We found two suitable models based on BERT: AlBERTo <ref type="bibr" target="#b9">(Polignano et al., 2019)</ref> <ref type="foot" target="#foot_0">1</ref> and DBMDZ<ref type="foot" target="#foot_1">2</ref> . The former is trained on TWITA <ref type="bibr">(Basile et al., 2018)</ref>, a 191 GB collection of Italian tweets gathered by the authors, and was tested on the SENTIPOLC task of the EVALITA 2016 campaign, where it achieved state-of-the-art accuracy in subjectivity, polarity, and irony detection on Italian tweets. We considered this model suitable for hate speech detection, since its source is Italian tweets and SENTIPOLC is a classification task similar to ours. DBMDZ, instead, is trained on a more general domain, from a 13 GB dataset that includes a dump of the Italian Wikipedia and texts from web pages selected from the Opus Corpora. <ref type="foot" target="#foot_2">3</ref> We decided to test both transformer models, assessing their performance through a validation phase on a development set.</p><p>These transformers were used in the input stage of all our architectures, providing contextual sentence embeddings that were fine-tuned during training. 
We designed three architecture variants, which were employed as the basic building blocks to construct the ensembles:</p><p>• ALB-SINGLE: a first layer provided by the AlBERTo transformer, followed by a single neuron with a sigmoid activation function.</p><p>• DB-SINGLE: the same structure as ALB-SINGLE, with DBMDZ replacing AlBERTo in the first layer.</p><p>• DB-MLP: compared to DB-SINGLE, it adds a dense layer with a ReLU activation function between the transformer and the output neuron.</p><p>The final model is an ensemble consisting of a number of instances of each of the above architectures. For each architecture, e.g. ALB-SINGLE, we construct instances in the following way. After initializing the weights randomly within a given interval and generating the training data by applying the bootstrap technique to the original dataset, we train the model; when that phase is over, we insert the resulting model into the ensemble. We repeat this process several times with different random weight initializations. Note that, due to the random initialization, no two classifiers in the ensemble are identical. More formally, the model consists of N elements,</p><formula xml:id="formula_0">N = N_AL + N_DB + N_MLP</formula><p>where N_AL, N_DB, N_MLP represent, respectively, the number of instances of ALB-SINGLE, DB-SINGLE and DB-MLP classifiers.</p><p>In retrospect, it might have been worthwhile to consider instances obtained by varying more than just the initial weights, for example by changing the hyper-parameters or the number of layers.</p><p>Our classification algorithm is a slight generalization of the classical one, which collects results from each member of the ensemble and outputs the class that receives the majority of predictions over all iterations. The process, described by Algorithm 1, performs n_run iterations. 
During the i-th iteration, the algorithm starts by randomly sampling from the ensemble a given number of instances of each type of classifier (lines 3-5) and initializing to 0 the variable class1, which accumulates the votes that the hate class receives during the iteration (line 7); the remaining steps are detailed in Algorithm 1.</p><p>A simpler variant of the algorithm would just add up the counts of each class from all classifiers in all iterations and return the class with the highest count. We plan to compare these two approaches in future work.</p></div>
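As a rough illustration, the sampled majority vote of Algorithm 1 could be sketched as follows. This is a hedged sketch, not the released implementation: the function and argument names are illustrative, and each classifier is a stand-in callable returning 0 (not hateful) or 1 (hateful):

```python
import random

def ensemble_predict(t, pools, counts, n_run, rng):
    """Sampled majority vote in the spirit of Algorithm 1.

    pools:  dict mapping architecture name -> list of trained classifiers.
    counts: dict mapping architecture name -> how many instances to sample.
    """
    iter_votes = 0  # iterations whose majority vote was the hate class
    for _ in range(n_run):
        sampled = []
        for arch, pool in pools.items():
            sampled += rng.sample(pool, counts[arch])  # sample without replacement
        class1 = sum(cl(t) for cl in sampled)          # votes for the hate class
        iter_votes += int(class1 >= len(sampled) / 2)  # majority in this iteration
    return int(iter_votes > n_run / 2)                 # majority over all iterations
```

The per-iteration threshold mirrors line 11 of Algorithm 1, since `len(sampled)` equals n_AL + n_DB + n_MLP.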
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">Experiments</head><p>In this section we describe the experiments we performed to tune the hyper-parameters of our model. We focus on the search for the best values of n_DB, n_AL, n_MLP, that is, how many instances to select at each iteration in the classification algorithm.</p><p>Before starting the experiments, we divided the dataset into two disjoint subsets, a development set and an internal test set, in the proportion of 80% and 20%, respectively. The split was done by means of Stratified Sampling, according to the distribution of the target variable hs. We applied the Stratified 3-fold-CV technique to validate our model. Since we are solving a binary classification problem, we picked Binary Cross Entropy as our loss. We chose AdamW as our optimizer and set the first 10% of the total steps as warmup steps. We conducted the experiments on a GPU offered by Google Colab<ref type="foot" target="#foot_3">4</ref> . Our models are implemented in PyTorch <ref type="bibr" target="#b7">(Paszke et al., 2019)</ref>. To extract as much information as possible from the input texts, we preprocessed them through hashtag segmentation by means of Tweet Preprocessor. <ref type="foot" target="#foot_4">5</ref> We also converted emojis into their Italian description by using the emoji<ref type="foot" target="#foot_5">6</ref> and Google Translate<ref type="foot" target="#foot_6">7</ref> libraries.</p><p>We analyzed the behaviour of the three baseline architectures we planned to include in the ensemble.</p><p>We trained each model for a maximum of 4 epochs, using a batch size of 16 and setting the maximum text length to 100. A grid search revealed that the optimal learning rate is 5 · 10^-5 for DB-MLP and 6 · 10^-5 for the remaining models. 
The optimal number of neurons in the hidden layer of DB-MLP is 50.</p><p>Table <ref type="table" target="#tab_0">1</ref> highlights a notable aspect: DB-SINGLE achieves better performance than ALB-SINGLE, even though the dataset used to train AlBERTo was composed of a large collection of tweets. The macro-F1 values obtained are the baselines of our work.</p><p>We then turn to the results obtained with the ensemble model. To build the classifier, we trained 30 instances of each architecture, keeping the same hyper-parameters obtained from the previous grid search. We thus set:</p><formula xml:id="formula_2">N_AL = N_DB = N_MLP = 30</formula><p>We noted that the generalization capability of the ensemble is strictly related to the triple (n_DB, n_MLP, n_AL), so we performed another grid search, looking for the optimal combination of the three parameters. Table <ref type="table" target="#tab_1">2</ref> shows the five best configurations found by this search. The optimal values for the triple, (20, 25, 30), allow the ensemble to achieve an F1-score of 80.0%, a gain of about 1.5 points with respect to the score of a single DB-MLP (see Table <ref type="table" target="#tab_0">1</ref>).</p><p>We analyzed the contribution of each architecture individually to the ensemble combination. As shown in Table <ref type="table">3</ref>, which reports the scores of each architecture both individually and together in the ensemble (average and standard deviation of the F1 score over the 3 validation folds), the best results are obtained with instances of all three architectures. 
Nevertheless, the results presented in Table <ref type="table" target="#tab_1">2</ref> show that a more balanced combination achieves better accuracy. We picked the first configuration from Table 2 for our final model and tested it on the internal test set, obtaining the results shown in Table <ref type="table" target="#tab_2">4</ref>.</p></div>
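The stratified 80/20 hold-out split and the Stratified 3-fold CV described above can be sketched with scikit-learn (an illustrative sketch on stand-in data; the paper's actual code may differ):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, train_test_split

# Stand-in data: 100 examples with a 60/40 label distribution
X = np.arange(100).reshape(-1, 1)
y = np.array([0] * 60 + [1] * 40)

# 80/20 development / internal-test split, stratified on the target label
X_dev, X_test, y_dev, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Stratified 3-fold cross-validation on the development set
skf = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)
folds = list(skf.split(X_dev, y_dev))  # fine-tune one model per fold
```

Stratifying both the hold-out split and the folds keeps the hate/not-hate ratio stable across every subset, which matters when the positive class is the minority.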
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Results and Discussion</head><p>The results of our final model on the two official test sets of the competition are shown in Table <ref type="table" target="#tab_3">5</ref>. The model performs quite well on the in-domain dataset, reaching the 4th position in the rankings. However, it did not rank as well in detecting hate speech on the out-of-domain dataset, obtaining an F1-score of just 65.46. The low recall for the hate class highlights that the model too often fails to identify news headlines containing some form of hate speech. In comparison with the official top rankings, listed in Table <ref type="table" target="#tab_4">6</ref>, our model scored about 12 points below the top score of 77.44% F1.</p><p>Surprised by this fact, we investigated more deeply, looking for an explanation for such a poor result on the out-of-domain dataset.</p><p>We randomly sampled from the test set some hateful headlines missed by the model, some of which are shown in Table <ref type="table" target="#tab_5">7</ref>.</p><p>In these headlines, the qualification as hate is implicit and harder to recognize, since it seems due more to the presence of stereotypes (nomads, asylum seekers, Muslims, foreigners) than to the presence of explicit hate expressions.</p><p>Broadly speaking, we identified some possible reasons for the difference in performance across the two test sets:</p><p>• Linguistic register: Tweets often exhibit a more informal and colloquial language, while headlines employ a more formal lexicon and a more objective tone. This is a crucial difference when identifying hateful messages: while in tweets the feeling of hatred comes across clearly and directly, in headlines the message is conveyed in a more subtle way, often alluding to concepts from political propaganda or common stereotypes. Prior knowledge about the subject and inference might be necessary to decipher the presence of hate; examining the entire body of the article might have been helpful.</p><p>• Length of text: Tweets are usually longer than news headlines. Thus, the model has fewer elements to exploit when classifying a piece of news.</p><p>These difficulties seem to be shared with the other submissions, which all obtained lower scores on the out-of-domain dataset. We expected pretrained contextual embeddings to be more effective in addressing the domain adaptation issue. Further experiments would be needed to improve the resilience of our model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Conclusions</head><p>We described an ensemble of neural classifiers, relying on contextual embeddings from transformers, for automated detection of hateful content in Italian texts. We presented the general architecture of our base classification models and how they were combined into an ensemble through a bagging technique. We performed extensive experiments to tune our models and the ensemble on a validation set. The results achieved by our ensemble model on the in-domain test set confirm its ability to detect hateful tweets; however, the same model performed poorly on the out-of-domain dataset, showing in particular an inability to adapt to news headlines. We plan to investigate this issue in future research.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Algorithm 1</head><label>1</label><figDesc>Classification algorithm. Input: t: the tweet to classify. Input: (n_AL, n_DB, n_MLP): number of classifiers of each type to be sampled. Input: (N_AL, N_DB, N_MLP): number of classifiers of each type in the ensemble. Input: n_run: number of desired iterations. Output: c_final: predicted class.
1: preds = []
2: for run = 1 to n_run do
3:   albs = sample_al(n_AL, N_AL)
4:   dbs = sample_db(n_DB, N_DB)
5:   mlps = sample_ml(n_MLP, N_MLP)
6:   sampled_classif = albs ∪ dbs ∪ mlps
7:   class1 = 0 // votes for class 1
8:   for cl in sampled_classif do
9:     class1 += cl(t) // cl's classification
10:  end for
11:  preds[run] = class1 ≥ (n_AL + n_DB + n_MLP)/2
12: end for
13: c_final = majority(preds)
14: return c_final
class1 accumulates the votes that the hate class receives during the iteration (line 7). The algorithm then collects the predictions of the selected models on the tweet t (lines 8-10); cl(t) ∈ {0, 1} represents the prediction of classifier cl for the tweet t, and cl(t) = 1 if and only if cl classifies t as hateful. The output of iteration i is the most predicted class (line 11). 
The final result of the algorithm is the class c_final ∈ {0, 1} that obtained the most votes over all n_run iterations (lines 13-14). If c_final = 1, the tweet t has been classified as hateful.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1 :</head><label>1</label><figDesc>Results of the experiments comparing the baseline architectures.</figDesc><table><row><cell>Classifier</cell><cell>Macro-F1</cell><cell>Std</cell></row><row><cell>ALB-SINGLE</cell><cell>76.896</cell><cell>0.7266</cell></row><row><cell>DB-SINGLE</cell><cell>77.613</cell><cell>0.3251</cell></row><row><cell>DB-MLP</cell><cell>78.562</cell><cell>0.521</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2 :</head><label>2</label><figDesc>Ranking of the 5 best configurations we found, varying the number of instances selected from the ensemble. n_DB stands for the number of instances of the DB-SINGLE model, and similarly for n_MLP and n_AL. We report the average value and the standard deviation of the F1 score computed with respect to the 3 validation folds.</figDesc><table><row><cell>n_DB</cell><cell>n_MLP</cell><cell>n_AL</cell><cell>Macro-F1</cell><cell>Std</cell></row><row><cell>20</cell><cell>25</cell><cell>30</cell><cell>80.057</cell><cell>0.534</cell></row><row><cell>15</cell><cell>20</cell><cell>25</cell><cell>80.038</cell><cell>0.580</cell></row><row><cell>15</cell><cell>30</cell><cell>30</cell><cell>80.036</cell><cell>0.585</cell></row><row><cell>15</cell><cell>25</cell><cell>30</cell><cell>80.026</cell><cell>0.563</cell></row><row><cell>15</cell><cell>30</cell><cell>15</cell><cell>80.020</cell><cell>0.481</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1b"><head>Table 3 :</head><label>3</label><figDesc>Scores by each architecture, both individually and together in the ensemble. We report the average value and the standard deviation of the F1 score computed with respect to the 3 validation folds.</figDesc><table><row><cell>n_DB</cell><cell>n_MLP</cell><cell>n_AL</cell><cell>Macro-F1</cell><cell>Std</cell></row><row><cell>30</cell><cell>0</cell><cell>0</cell><cell>79.074</cell><cell>0.300</cell></row><row><cell>0</cell><cell>30</cell><cell>0</cell><cell>79.581</cell><cell>0.3787</cell></row><row><cell>0</cell><cell>0</cell><cell>30</cell><cell>79.482</cell><cell>0.596</cell></row><row><cell>30</cell><cell>30</cell><cell>30</cell><cell>79.832</cell><cell>0.525</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 4 :</head><label>4</label><figDesc>Results of the final model on the internal test set.</figDesc><table><row><cell>Accuracy</cell><cell>Precision</cell><cell>Recall</cell><cell>F1</cell></row><row><cell>79.313</cell><cell>78.510</cell><cell>78.685</cell><cell>78.592</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 5 :</head><label>5</label><figDesc>Results of the submitted model on the official blind test sets.</figDesc><table><row><cell></cell><cell cols="3">NOT HATE</cell><cell cols="3">HATE</cell><cell></cell><cell></cell></row><row><cell></cell><cell>Precision</cell><cell>Recall</cell><cell>F1</cell><cell>Precision</cell><cell>Recall</cell><cell>F1</cell><cell>Macro-F1</cell><cell>Position</cell></row><row><cell>Tweets</cell><cell>81.93</cell><cell>72.85</cell><cell>77.12</cell><cell>74.89</cell><cell>83.44</cell><cell>78.94</cell><cell>78.03</cell><cell>4</cell></row><row><cell>News</cell><cell>71.88</cell><cell>99.37</cell><cell>83.42</cell><cell>96.61</cell><cell>31.49</cell><cell>47.50</cell><cell>65.46</cell><cell>17</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 6 :</head><label>6</label><figDesc>Comparison between our final results and the top-5 F1-scores. The values are taken from the official rankings.</figDesc><table><row><cell cols="2">Tweets</cell><cell cols="2">News</cell></row><row><cell>Position</cell><cell>F1 score</cell><cell>Position</cell><cell>F1 score</cell></row><row><cell>1</cell><cell>80.88</cell><cell>1</cell><cell>77.44</cell></row><row><cell>2</cell><cell>78.97</cell><cell>2</cell><cell>73.14</cell></row><row><cell>3</cell><cell>78.93</cell><cell>3</cell><cell>72.56</cell></row><row><cell>4</cell><cell>78.03 (ours)</cell><cell>4</cell><cell>71.83</cell></row><row><cell>5</cell><cell>77.82</cell><cell>5</cell><cell>70.2</cell></row><row><cell>6</cell><cell>77.66</cell><cell>17</cell><cell>65.46 (ours)</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 7 :</head><label>7</label><figDesc>Examples of hateful headlines, randomly picked from the out-of-domain test set, that are misclassified by our model.</figDesc><table><row><cell>Hateful News Headlines</cell></row><row><cell>anziana rapinata sull'autobus, i due nomadi in fuga si rifugiano al campo di via Candoni (elderly woman robbed on the bus, the two fleeing nomads take refuge at the camp on via Candoni)</cell></row><row><cell>Expo: Bordonali, richiedenti asilo in campo base simbolo fallimento governo. (Expo: Bordonali, asylum seekers in base camp symbol of government failure.)</cell></row><row><cell>Il cardinale Müller: "non possiamo pregare come o con i musulmani" (Cardinal Müller: "we cannot pray like or with Muslims")</cell></row><row><cell>Salvini: "Il calcio? Rimpiango i tre stranieri in campo" (Salvini: "Soccer? I regret the three foreigners on the field")</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/marcopoli/AlBERTo-it</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://huggingface.co/dbmdz/bert-base-italian-uncase</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">http://opus.nlpl.eu/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://colab.research.google.com/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://pypi.org/project/tweet-preprocessor/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">https://pypi.org/project/emoji</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">https://pypi.org/project/googletrans/</note>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Long-term social media data collection at the university of turin</title>
		<author>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mirko</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">Elena</forename><surname>Cabrio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Alessandro</forename><surname>Mazzei</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Fabio</forename><surname>Tamburini</surname></persName>
		</editor>
		<meeting>the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)<address><addrLine>Torino, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-12-10">2018. December 10-12, 2018. 2253</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Evalita 2020: Overview of the 7th evaluation campaign of natural language processing and speech tools for italian</title>
		<author>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Danilo</forename><surname>Croce</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maria</forename><surname>Di Maro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lucia</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop</title>
				<editor>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Danilo</forename><surname>Croce</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Maria</forename><surname>Di Maro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Lucia</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</editor>
		<meeting>Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop<address><addrLine>EVALITA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020. 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Overview of the EVALITA 2018 hate speech detection task</title>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Felice</forename><surname>Dell'Orletta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fabio</forename><surname>Poletto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Maurizio</forename><surname>Tesconi</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)</title>
		<title level="s">CEUR Workshop Proceed</title>
		<editor>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Nicole</forename><surname>Novielli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</editor>
		<meeting>the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)<address><addrLine>Turin, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-12-12">2018. December 12-13, 2018</date>
			<biblScope unit="volume">2263</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Bagging predictors</title>
		<author>
			<persName><forename type="first">L</forename><surname>Breiman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning</title>
				<imprint>
			<date type="published" when="1996">1996</date>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="123" to="140" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">BERT: pre-training of deep bidirectional transformers for language understanding</title>
		<author>
			<persName><forename type="first">Jacob</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ming-Wei</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenton</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kristina</forename><surname>Toutanova</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019</title>
				<editor>
			<persName><forename type="first">Jill</forename><surname>Burstein</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Christy</forename><surname>Doran</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Thamar</forename><surname>Solorio</surname></persName>
		</editor>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019<address><addrLine>Minneapolis, MN, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-06-02">June 2-7, 2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Learning word vectors for 157 languages</title>
		<author>
			<persName><forename type="first">Edouard</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Piotr</forename><surname>Bojanowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Prakhar</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Armand</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tomas</forename><surname>Mikolov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the International Conference on Language Resources and Evaluation</title>
				<meeting>the International Conference on Language Resources and Evaluation</meeting>
		<imprint>
			<publisher>LREC</publisher>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<title level="m" type="main">RoBERTa: A robustly optimized BERT pretraining approach</title>
		<author>
			<persName><forename type="first">Yinhan</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Myle</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Naman</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jingfei</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mandar</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Danqi</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Omer</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mike</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luke</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Veselin</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno>CoRR, abs/1907.11692</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Pytorch: An imperative style, high-performance deep learning library</title>
		<author>
			<persName><forename type="first">Adam</forename><surname>Paszke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sam</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Francisco</forename><surname>Massa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Adam</forename><surname>Lerer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">James</forename><surname>Bradbury</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gregory</forename><surname>Chanan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Trevor</forename><surname>Killeen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeming</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Natalia</forename><surname>Gimelshein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luca</forename><surname>Antiga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alban</forename><surname>Desmaison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Andreas</forename><surname>Kopf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Edward</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zachary</forename><surname>Devito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Martin</forename><surname>Raison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alykhan</forename><surname>Tejani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sasank</forename><surname>Chilamkurthy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Benoit</forename><surname>Steiner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lu</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Junjie</forename><surname>Bai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Soumith</forename><surname>Chintala</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 32</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Beygelzimer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Alché-Buc</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Fox</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="8024" to="8035" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Hansel: Italian hate speech detection through ensemble learning and deep neural networks</title>
		<author>
			<persName><forename type="first">Marco</forename><surname>Polignano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pierpaolo</forename><surname>Basile</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Nicole</forename><surname>Novielli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
		</editor>
		<meeting>the Sixth Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2018) co-located with the Fifth Italian Conference on Computational Linguistics (CLiC-it 2018)<address><addrLine>Turin, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018-12-12">December 12-13, 2018</date>
			<biblScope unit="volume">2263</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Alberto: Italian BERT language understanding model for NLP challenging tasks based on tweets</title>
		<author>
			<persName><forename type="first">Marco</forename><surname>Polignano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pierpaolo</forename><surname>Basile</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>De Gemmis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</author>
		<ptr target="CEUR-WS.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixth Italian Conference on Computational Linguistics</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">Raffaella</forename><surname>Bernardi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Roberto</forename><surname>Navigli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Giovanni</forename><surname>Semeraro</surname></persName>
		</editor>
		<meeting>the Sixth Italian Conference on Computational Linguistics<address><addrLine>Bari, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019-11-13">November 13-15, 2019</date>
			<biblScope unit="volume">2481</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Bagging BERT models for robust aggression identification</title>
		<author>
			<persName><forename type="first">Julian</forename><surname>Risch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ralf</forename><surname>Krestel</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, TRAC@LREC 2020</title>
				<editor>
			<persName><forename type="first">Ritesh</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Atul</forename><forename type="middle">Kr</forename><surname>Ojha</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Bornini</forename><surname>Lahiri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Marcos</forename><surname>Zampieri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Shervin</forename><surname>Malmasi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Vanessa</forename><surname>Murdock</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Daniel</forename><surname>Kadar</surname></persName>
		</editor>
		<meeting>the Second Workshop on Trolling, Aggression and Cyberbullying, TRAC@LREC 2020<address><addrLine>Marseille, France</addrLine></address></meeting>
		<imprint>
			<publisher>ELRA</publisher>
			<date type="published" when="2020-05">May 2020</date>
			<biblScope unit="page" from="55" to="61" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">HaSpeeDe 2@EVALITA2020: Overview of the EVALITA 2020 Hate Speech Detection Task</title>
		<author>
			<persName><forename type="first">Manuela</forename><surname>Sanguinetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gloria</forename><surname>Comandini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Elisa</forename><surname>Di Nuovo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simona</forename><surname>Frenda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Stranisci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Cristina</forename><surname>Bosco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tommaso</forename><surname>Caselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Viviana</forename><surname>Patti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Irene</forename><surname>Russo</surname></persName>
		</author>
		<ptr target="CEUR.org" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop</title>
				<editor>
			<persName><forename type="first">Valerio</forename><surname>Basile</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Danilo</forename><surname>Croce</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Maria</forename><surname>Di Maro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Lucia</forename><forename type="middle">C</forename><surname>Passaro</surname></persName>
		</editor>
		<meeting>Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian (EVALITA 2020). Final Workshop</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">Ashish</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Noam</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Niki</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jakob</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Llion</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aidan</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lukasz</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Illia</forename><surname>Polosukhin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017</title>
				<editor>
			<persName><forename type="first">Isabelle</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Ulrike</forename><surname>von Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Samy</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Hanna</forename><forename type="middle">M</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Rob</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><forename type="middle">V N</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Roman</forename><surname>Garnett</surname></persName>
		</editor>
		<meeting><address><addrLine>Long Beach, CA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017-12">December 2017</date>
			<biblScope unit="page" from="5998" to="6008" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">GLUE: A multi-task benchmark and analysis platform for natural language understanding</title>
		<author>
			<persName><forename type="first">Alex</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Amanpreet</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julian</forename><surname>Michael</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Felix</forename><surname>Hill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Omer</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Samuel</forename><surname>Bowman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP</title>
				<meeting>the 2018 EMNLP Workshop Black-boxNLP: Analyzing and Interpreting Neural Networks for NLP<address><addrLine>Brussels, Belgium</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2018-11">November 2018</date>
			<biblScope unit="page" from="353" to="355" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<author>
			<persName><forename type="first">Marcos</forename><surname>Zampieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Preslav</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sara</forename><surname>Rosenthal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pepa</forename><surname>Atanasova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Georgi</forename><surname>Karadzhov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hamdy</forename><surname>Mubarak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Leon</forename><surname>Derczynski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Zeses</forename><surname>Pitenis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Çağrı</forename><surname>Çöltekin</surname></persName>
		</author>
		<idno>CoRR, abs/2006.07235</idno>
		<title level="m">SemEval-2020 task 12: Multilingual offensive language identification in social media (OffensEval 2020)</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
