<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">MeVer Team Tackling Corona Virus and 5G Conspiracy Using Ensemble Classification Based on BERT</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Olga</forename><surname>Papadopoulou</surname></persName>
							<email>olgapapa@iti.gr</email>
							<affiliation key="aff0">
								<orgName type="department">Information Technologies Institute</orgName>
								<orgName type="institution">CERTH</orgName>
								<address>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giorgos</forename><surname>Kordopatis-Zilos</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Information Technologies Institute</orgName>
								<orgName type="institution">CERTH</orgName>
								<address>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Symeon</forename><surname>Papadopoulos</surname></persName>
							<affiliation key="aff2">
								<orgName type="department">Information Technologies Institute</orgName>
								<orgName type="institution">CERTH</orgName>
								<address>
									<settlement>Thessaloniki</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">MeVer Team Tackling Corona Virus and 5G Conspiracy Using Ensemble Classification Based on BERT</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">15FAB36B1D9E4CDD3522A42D7EBB9010</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T07:12+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents the approach developed by the Media Verification (MeVer) team to tackle the FakeNews: Coronavirus and 5G conspiracy task at the MediaEval 2020 Challenge. We build a two-stage classification approach based on ensemble learning over multiple classification networks. Due to the imbalanced and relatively small dataset, our ensemble method leads to improved performance compared to a single classification model. We fine-tune pre-trained Bidirectional Encoder Representations from Transformers (BERT), one of the most popular transformer models, on the problem of Coronavirus and 5G conspiracy detection. Our approach achieved a score of 0.413 in terms of the Matthews Correlation Coefficient (MCC), the official evaluation metric of the task.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>COVID-19 emerged as a health crisis (pandemic) and soon evolved into an infodemic, i.e., an overabundance of information. COVID-19 conspiracy theories, and 5G disinformation in particular, already have harmful impacts on society. The incident of British 5G towers being set on fire because of coronavirus conspiracy theories <ref type="bibr" target="#b13">[14]</ref> is a representative example of how important it is to detect and prevent the dissemination of such theories. The FakeNews: Coronavirus and 5G conspiracy task is a challenge of MediaEval 2020 that focuses on the analysis of tweets around Coronavirus and 5G conspiracy theories in order to detect misinformation spreaders. For further details on the subtasks and the respective dataset, the reader is referred to <ref type="bibr" target="#b8">[9]</ref>.</p><p>Our approach focuses on ensemble classification in order to overcome the relatively small training dataset and predict Coronavirus and 5G conspiracy tweets more accurately. In short, a first-level classification is applied using majority voting over nine classifiers to detect conspiracy and non-conspiracy tweets. A second-level classification is then applied to distinguish the conspiracy tweets related to 5G from the other conspiracy ones. For the training process, we leverage the pre-trained BERT <ref type="bibr" target="#b0">[1]</ref> model and the implementation provided by the HuggingFace library <ref type="bibr" target="#b14">[15]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">RELATED WORK</head><p>In the case of a pandemic such as the Coronavirus one, the intentional or unintentional dissemination of manipulated content, conspiracy theories, and propaganda is critical <ref type="bibr" target="#b11">[12]</ref>. Several works have recently been published dealing with the detection and verification of COVID-19-related misinformation <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>. Misinformation can be spread in the form of text, images, and videos, and natural language processing (NLP) is a means of dealing with many types of content. For example, the authors of <ref type="bibr" target="#b7">[8]</ref> collected a database of debunked and verified user-generated videos and developed a method to detect them using the contextual information surrounding them rather than the video content itself. The emergence of BERT (Bidirectional Encoder Representations from Transformers) has led many researchers to use it for text classification and thus for the detection of fake news <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b6">7]</ref>. A key limitation when building models dedicated to emerging topics is the lack of sufficient training samples. To this end, researchers are leaning towards solutions based on ensemble methods, unsupervised learning, and data augmentation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">PROPOSED APPROACH</head><p>Figure <ref type="figure" target="#fig_1">1</ref> illustrates the pipeline of the proposed approach. We follow a two-step classification approach:</p><p>• The first step applies ensemble learning to provide a first-level classification into Conspiracy and Non-conspiracy tweets. • The second step produces the final prediction by classifying the detected Conspiracy tweets as 5G-conspiracy or Other-conspiracy.</p><p>The provided dataset consists of 1,135 samples of the 5G-conspiracy class, 712 samples of the Other-conspiracy class and 4,198 samples of the Non-conspiracy class. As described in <ref type="bibr" target="#b3">[4]</ref>, imbalanced datasets pose the risk of biasing machine learning and deep learning models towards the majority class. To this end, we sub-sample training tweets of the majority classes in order to balance the training sets used to build the proposed classifiers. Specifically, Table <ref type="table" target="#tab_0">1</ref> presents the number of training samples considered per classifier. In 𝐶𝐿 𝑖 , the training samples of 5G-conspiracy and Other-conspiracy are concatenated into an overall Conspiracy class (1,847 tweets) and an equal number of tweets is randomly sampled from the Non-conspiracy tweets.</p><p>In the first step of our approach, we train 𝑁 classifiers 𝐶𝐿 𝑖 , which are used to predict Conspiracy and Non-conspiracy tweets. 𝑁 is empirically set to nine; an odd number of classifiers makes it possible to apply majority voting without ties. Each classifier 𝐶𝐿 𝑖 predicts a label of 1 for Conspiracy or 0 for Non-conspiracy tweets. 
Majority voting is applied: a tweet is predicted as Conspiracy if more than half of the classifiers vote for it, i.e., if Σ𝑖 𝐶𝐿 𝑖 > 𝑁 /2 with 𝑁 = 9, and as Non-conspiracy otherwise. For each model, a different sample of Non-conspiracy tweets is selected.</p><p>In the second step, the Non-conspiracy predictions are considered final without further processing, while the Conspiracy tweets are further processed to distinguish 5G-conspiracy from Other-conspiracy. In this step, two additional models are trained focusing on the detection of 5G-conspiracy tweets. The first, 𝐶𝐿 𝑓 1 , is a three-class model (1: 5G-conspiracy, 2: Other-conspiracy and 3: Non-conspiracy) trained using random samples from the majority classes and the total number of minority class samples (Other-conspiracy). The other model, 𝐶𝐿 𝑓 2 , is a binary classifier trained on the two Conspiracy classes. A tweet receives the final label 5G-conspiracy only if both models predict it, i.e., 𝐶𝐿 𝑓 1 = 𝐶𝐿 𝑓 2 = 1. In any other case, the tweet is labeled as Other-conspiracy.</p></div>
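The two-step decision rule described above can be sketched in plain Python; the classifiers here are stand-in callables (not the actual fine-tuned BERT models), and the function name is illustrative:

```python
def predict(tweet, first_level, cl_f1, cl_f2):
    """Two-step prediction: majority voting, then 5G vs. Other conspiracy.

    first_level: N binary classifiers (1 = Conspiracy, 0 = Non-conspiracy).
    cl_f1: three-class model (1 = 5G-conspiracy, 2 = Other, 3 = Non-conspiracy).
    cl_f2: binary classifier over the two conspiracy classes (1 = 5G-conspiracy).
    """
    n = len(first_level)                     # N = 9 in the paper; odd, so no ties
    votes = sum(clf(tweet) for clf in first_level)
    if votes <= n / 2:                       # majority voted Non-conspiracy
        return "Non-conspiracy"
    # Both second-step models must agree on 5G for a 5G-conspiracy label.
    if cl_f1(tweet) == 1 and cl_f2(tweet) == 1:
        return "5G-conspiracy"
    return "Other-conspiracy"
```

With stub classifiers, five out of nine Conspiracy votes plus agreement of both second-step models yields a 5G-conspiracy label, while any disagreement in the second step falls back to Other-conspiracy.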
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1">Implementation details</head><p>For tokenization, we employ the bert-base-uncased BertTokenizer applied to the text of the tweets. The text is limited to 160 tokens as input to the network. Considering that the maximum tweet length is 280 characters, it is most likely that the entire text is processed to calculate the prediction. As a backbone network, we employ the bert-base-uncased version of BERT <ref type="bibr" target="#b12">[13]</ref>, which is a compact transformer model trained on lower-cased English text. The network architecture consists of 12 layers (i.e., Transformer blocks), with 768 hidden units and 12 heads in the multi-head attention layers, resulting in a total of 109M parameters.</p><p>We fine-tune our networks using the Adam optimizer <ref type="bibr" target="#b5">[6]</ref> with a learning rate of 2 × 10⁻⁵. The models are trained for 10 epochs with batch size 32 and categorical cross-entropy as the loss function. During training, we apply dropout with a 0.3 drop rate after the backbone network to prevent overfitting. Our models are evaluated against a validation set, and we select the versions that achieve the best performance in terms of accuracy as our final models.</p></div>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2:</head><label>2</label><figDesc>Evaluation results in terms of MCC, the official metric proposed for the task</figDesc><table><row><cell>Method</cell><cell>MCC</cell></row><row><cell>three-class BERT</cell><cell>0.42</cell></row><row><cell>Proposed approach</cell><cell>0.81</cell></row></table></figure>
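The 109M parameter figure follows from the stated architecture. The sketch below counts the parameters of bert-base-uncased; the vocabulary size (30,522), maximum position embeddings (512), and feed-forward intermediate size (3,072) are standard values for this model, assumed here rather than stated in the text:

```python
# Parameter count of bert-base-uncased from its architecture:
# 12 layers, hidden size 768, 12 attention heads.
hidden, layers, inter, vocab = 768, 12, 3072, 30522

embeddings = (vocab + 512 + 2) * hidden + 2 * hidden  # token/pos/type + LayerNorm
per_layer = (
    4 * (hidden * hidden + hidden)   # Q, K, V and attention output projections
    + 2 * hidden                     # attention LayerNorm
    + (hidden * inter + inter)       # feed-forward up-projection
    + (inter * hidden + hidden)      # feed-forward down-projection
    + 2 * hidden                     # output LayerNorm
)
pooler = hidden * hidden + hidden
total = embeddings + layers * per_layer + pooler
print(total)  # 109482240, i.e. ~109M, matching the text
```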
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">RESULTS AND ANALYSIS</head><p>Initially, we trained a three-class model using the implementation details presented in subsection 3.1. From the annotated dataset, we randomly selected 100 samples per class as a testing set and discarded them from the training phase in all runs. The performance of this model is 0.42 in terms of MCC. In order to improve the performance, we implemented the presented two-step classification approach, which increased the MCC to 0.81, as presented in Table <ref type="table">2</ref>.</p><p>Our proposed approach achieved a score of 0.413 in terms of MCC on the provided testing set of unseen tweets.</p></div>
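For reference, MCC is computed from the confusion counts. The hypothetical helper below implements the two-class case (the task's official metric uses the multiclass generalization of the same statistic):

```python
from math import sqrt

def mcc(y_true, y_pred):
    """Matthews Correlation Coefficient for binary labels (0/1).

    Ranges from -1 (total disagreement) to +1 (perfect prediction);
    0 corresponds to random guessing, which makes MCC informative
    even on imbalanced datasets such as this task's.
    """
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```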
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">DISCUSSION AND OUTLOOK</head><p>The proposed method achieves fairly accurate results in the FakeNews: Coronavirus and 5G conspiracy task. In future experiments, we will explore additional deep learning models, including BERT variants and other architectures, in pursuit of better performance. To tackle the limitation of insufficient training samples, we also intend to experiment with data augmentation approaches in order to create more samples of the minority classes and build more robust classifiers.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Majority voting: a tweet is labeled Conspiracy if Σ𝑖 𝐶𝐿 𝑖 > 𝑁 /2, with 𝑁 = 9, and Non-conspiracy otherwise. For each model, a different sample of Non-conspiracy tweets is selected. MediaEval'20, December 14-15 2020, Online.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Our proposed pipeline for tackling the challenge of Corona virus and 5G conspiracy</figDesc><graphic coords="2,53.80,83.68,504.40,178.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1:</head><label>1</label><figDesc>Summary of the training samples used to build the respective models</figDesc><table><row><cell>Label</cell><cell>𝐶𝐿 𝑖</cell><cell>𝐶𝐿 𝑚𝑢𝑙𝑡𝑖</cell><cell>𝐶𝐿 𝑐𝑜𝑛𝑠𝑝</cell></row><row><cell>5G conspiracy</cell><cell rows="2">1847</cell><cell>712</cell><cell>712</cell></row><row><cell>Other conspiracy</cell><cell>712</cell><cell>712</cell></row><row><cell>Non-conspiracy</cell><cell>1847</cell><cell>712</cell><cell>-</cell></row></table></figure>
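The balanced counts in Table 1 come from subsampling the majority classes down to the minority-class size. A minimal sketch, using the dataset sizes from Section 3 (the helper name is hypothetical and placeholder items stand in for real tweets):

```python
import random

def balanced_subsample(groups, seed=0):
    """Subsample every class down to the size of the smallest one."""
    rng = random.Random(seed)
    n = min(len(g) for g in groups.values())
    return {label: rng.sample(g, n) for label, g in groups.items()}

# Dataset sizes from the task: 1,135 5G-conspiracy, 712 Other-conspiracy,
# and 4,198 Non-conspiracy tweets.
data = {
    "5G-conspiracy": list(range(1135)),
    "Other-conspiracy": list(range(712)),
    "Non-conspiracy": list(range(4198)),
}

# CL_multi / CL_consp style balancing: 712 samples per class, as in Table 1.
balanced = balanced_subsample(data)

# CL_i style balancing: merge the two conspiracy classes (1,847 tweets) and
# sample an equal number of Non-conspiracy tweets.
conspiracy = data["5G-conspiracy"] + data["Other-conspiracy"]
non_consp = random.Random(0).sample(data["Non-conspiracy"], len(conspiracy))
```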
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>ACKNOWLEDGMENTS</head><p>This work is supported by the WeVerify project, which is funded by the European Commission under contract number 825297.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Bert: Pre-training of deep bidirectional transformers for language understanding</title>
		<author>
			<persName><forename type="first">Jacob</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ming-Wei</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenton</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kristina</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1810.04805</idno>
		<imprint>
<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Detecting Misleading Information on COVID-19</title>
		<author>
			<persName><forename type="first">Mohamed</forename><forename type="middle">K</forename><surname>Elhadad</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kin</forename><forename type="middle">Fun</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fayez</forename><surname>Gebali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="165201" to="165215" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">Tamanna</forename><surname>Hossain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Robert</forename><forename type="middle">L</forename><surname>Logan</surname><genName>IV</genName></persName>
		</author>
		<author>
			<persName><forename type="first">Arjuna</forename><surname>Ugarte</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yoshitomo</forename><surname>Matsubara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sameer</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sean</forename><surname>Young</surname></persName>
		</author>
		<title level="m">Detecting covid-19 misinformation on social media</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Survey on deep learning with class imbalance</title>
		<author>
			<persName><forename type="first">Justin</forename><forename type="middle">M</forename><surname>Johnson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Taghi</forename><forename type="middle">M</forename><surname>Khoshgoftaar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Big Data</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="issue">1</biblScope>
			<biblScope unit="page">27</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">exBAKE: Automatic Fake News Detection Model Based on Bidirectional Encoder Representations from Transformers (BERT)</title>
		<author>
			<persName><forename type="first">Heejung</forename><surname>Jwa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Dongsuk</forename><surname>Oh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kinam</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jang Mook</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Heuiseok</forename><surname>Lim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Applied Sciences</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">4062</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Adam: A method for stochastic optimization</title>
		<author>
			<persName><forename type="first">Diederik</forename><forename type="middle">P</forename><surname>Kingma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jimmy</forename><surname>Ba</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1412.6980</idno>
		<imprint>
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A Two-Stage Model Based on BERT for Short Fake News Detection</title>
		<author>
			<persName><forename type="first">Chao</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xinghua</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Min</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Gang</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jianguo</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Weiqing</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Xiang</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Knowledge Science, Engineering and Management</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="172" to="183" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">A corpus of debunked and verified usergenerated videos</title>
		<author>
			<persName><forename type="first">Olga</forename><surname>Papadopoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Markos</forename><surname>Zampoglou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Symeon</forename><surname>Papadopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ioannis</forename><surname>Kompatsiaris</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">Online information review</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">FakeNews: Corona Virus and 5G Conspiracy Task at MediaEval</title>
		<author>
			<persName><forename type="first">Konstantin</forename><surname>Pogorelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Daniel</forename><forename type="middle">Thilo</forename><surname>Schroeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Luk</forename><surname>Burchard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johannes</forename><surname>Moe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Stefan</forename><surname>Brenner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Petra</forename><surname>Filkukova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Johannes</forename><surname>Langguth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">MediaEval 2020 Workshop</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">NLP-based Feature Extraction for the Detection of COVID-19 Misinformation Videos on YouTube</title>
		<author>
			<persName><forename type="first">Juan</forename><forename type="middle">Carlos</forename><surname>Medina Serrano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Orestis</forename><surname>Papakyriakopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Simon</forename><surname>Hegelich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st Workshop on NLP for COVID-19 at ACL 2020</title>
				<meeting>the 1st Workshop on NLP for COVID-19 at ACL 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">Karishma</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sungyong</forename><surname>Seo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Chuizheng</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sirisha</forename><surname>Rambhatla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Aastha</forename><surname>Dua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yan</forename><surname>Liu</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2003.12309</idno>
		<title level="m">Coronavirus on social media: Analyzing misinformation in Twitter conversations</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">Samia</forename><surname>Tasnim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Md</forename><surname>Mahbub Hossain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Hoimonty</forename><surname>Mazumder</surname></persName>
		</author>
		<title level="m">Impact of rumors or misinformation on coronavirus disease (COVID-19)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>in social media</note>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">Iulia</forename><surname>Turc</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ming-Wei</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kenton</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Kristina</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1908.08962</idno>
		<title level="m">Well-read students learn better: On the importance of pre-training compact models</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">Tom</forename><surname>Warren</surname></persName>
		</author>
		<ptr target="https://www.theverge.com/2020/4/4/21207927/5g-towers-burning-uk-coronavirus-conspiracy-theory-link" />
		<title level="m">British 5G towers are being set on fire because of coronavirus conspiracy theories</title>
				<imprint>
			<date type="published" when="2020-04">Apr 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Transformers: State-of-the-Art Natural Language Processing</title>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Wolf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lysandre</forename><surname>Debut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Victor</forename><surname>Sanh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julien</forename><surname>Chaumond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Clement</forename><surname>Delangue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Anthony</forename><surname>Moi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Pierric</forename><surname>Cistac</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Tim</forename><surname>Rault</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Rémi</forename><surname>Louf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Morgan</forename><surname>Funtowicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joe</forename><surname>Davison</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sam</forename><surname>Shleifer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><surname>von Platen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Clara</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Yacine</forename><surname>Jernite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Julien</forename><surname>Plu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Canwen</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Teven</forename><surname>Le Scao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sylvain</forename><surname>Gugger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mariama</forename><surname>Drame</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Quentin</forename><surname>Lhoest</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><forename type="middle">M</forename><surname>Rush</surname></persName>
		</author>
		<ptr target="https://www.aclweb.org/anthology/2020.emnlp-demos.6" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations</title>
				<meeting>the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="38" to="45" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
