<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Overview of the Oppositional Thinking Analysis PAN Task at CLEF 2024</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Damir</forename><surname>Korenčić</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Ruđer Bošković Institute</orgName>
								<address>
									<country key="HR">Croatia</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Berta</forename><surname>Chulvi</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Universitat de València</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
							<affiliation key="aff4">
								<orgName type="institution">Symanto Research</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Xavier</forename><surname>Bonet-Casals</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Universitat de Barcelona</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mariona</forename><surname>Taulé</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Universitat de Barcelona</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author role="corresp">
							<persName><forename type="first">Paolo</forename><surname>Rosso</surname></persName>
							<email>prosso@dsic.upv.es</email>
							<affiliation key="aff3">
								<orgName type="institution">Universitat Politècnica de València</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
							<affiliation key="aff5">
								<orgName type="department">ValgrAI Valencian Graduate School and Research Network of Artificial Intelligence</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Francisco</forename><surname>Rangel</surname></persName>
							<affiliation key="aff4">
								<orgName type="institution">Symanto Research</orgName>
								<address>
									<country key="ES">Spain</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Overview of the Oppositional Thinking Analysis PAN Task at CLEF 2024</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">CC56737BAC11AB492B93BCE384837775</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:03+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Conspiracy Theories, Oppositional Thinking, Computational Social Science, Natural Language Processing, Text Classification, Sequence Labeling</term>
					<term>0000-0003-4645-2937 (D. Korenčić)</term>
					<term>0000-0003-1169-0978 (B. Chulvi)</term>
					<term>0009-0003-8827-0215 (X. Bonet-Casals)</term>
					<term>0000-0003-0089-940X (M. Taulé)</term>
					<term>0000-0002-8922-1242 (P. Rosso)</term>
					<term>0000-0002-6583-3682 (F. Rangel)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper describes the Oppositional Thinking Analysis task at CLEF 2024. The task focuses on analyzing conspiracy theories and critical thinking narratives, and comprises two subtasks. Subtask 1 is a binary classification task aimed at distinguishing between critical and conspiracy texts. Subtask 2 is a token classification task aimed at detecting text spans corresponding to the key elements of oppositional (critical and conspiracy) narratives. The subtasks are based on a dataset of English and Spanish COVID-19-related texts obtained from oppositional Telegram channels and labeled using a topic-agnostic annotation scheme <ref type="bibr" target="#b0">[1]</ref>. A total of 82 teams participated in the challenge, and 17 teams published working notes papers with system descriptions. The participants employed a range of NLP methods and pushed the state-of-the-art performance on both subtasks beyond that of the strong baseline systems <ref type="bibr" target="#b0">[1]</ref> that were provided.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The first edition of the Oppositional Thinking Analysis task, held at CLEF 2024, focused on distinguishing automatically between conspiratorial narratives and critical narratives that do not convey a conspiratorial mentality. Conspiracy Theories (CTs) are causal explanations of significant events that present them as the result of covert plots orchestrated by secret, powerful, and malicious groups <ref type="bibr" target="#b1">[2]</ref>. Since conspiracy narratives tend to convey a critical vision of mainstream policies, a common mistake, especially in the middle of a global crisis such as a pandemic or a war, is to categorize every critical narrative against the official discourse as conspiratorial. Criticism and free discussion are key values in democratic societies; however, conspiracy narratives severely weaken democratic systems because they place the ultimate agent of the crisis outside the control of our systems of governance. As a result, it is important not to confuse critical and conspiracy narratives.</p><p>The interest in the automation of the critical-conspiracy distinction was recently highlighted by Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>, who argued that, if models monitoring social media messages do not differentiate between critical and conspiratorial thinking, there is a high risk of pushing people toward conspiracy communities. The sociopsychological basis of this process lies in Social Identity Theory (SIT), which has been a cornerstone in understanding group processes and intergroup relations since its inception in the early 1970s <ref type="bibr" target="#b2">[3]</ref>. 
This theory posits that individuals derive a part of their self-concept from their membership in social groups, which influences their behavior and attitudes towards in-group and out-group members <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. As a result, being considered a conspiracist when you are not could be a threat to your social identity. Once a person becomes the target of this accusation, one way to repair the stigmatization is to join conspiracist groups that provide the social support needed to recover a positive social identity. This process is not unusual. As several authors from the field of social sciences suggest, a fully-fledged conspiratorial worldview is the final step in a progressive "spiritual journey" that sets out by questioning social and political orthodoxies <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>. Accordingly, the distinction between conspiratorial and critical thinking is crucial for automated content moderation: without it, there is a significant risk of driving individuals towards conspiracy communities. Specifically, mislabeling a text as conspiratorial when it merely challenges mainstream perspectives could inadvertently steer individuals who are simply questioning into the arms of conspiracy groups.</p><p>Furthermore, in the area of computational linguistics, Korenčić et al. <ref type="bibr" target="#b0">[1]</ref> have shown that conspiracist narratives and critical thinking differ in their potential social effect on public opinion discourse, with the former being significantly more associated with violent words and expressions of anger. 
In their corpus, the authors have also labelled key elements in oppositional narratives (goals, effects, agents, and the two groups in conflict, facilitators of government decisions and campaigners against them), demonstrating that a greater level of intergroup conflict between facilitators and campaigners is associated especially with conspiracy narratives and correlates with a greater use of violent words and the emotional manifestation of anger.</p><p>Based on this recent research <ref type="bibr" target="#b0">[1]</ref>, the present task addresses two new challenges for the NLP research community: (1) to distinguish the conspiracy narrative from other oppositional narratives that do not express a conspiracy mentality (i.e., critical thinking); and (2) to identify the key elements of the oppositional narrative in online messages. As demonstrated in <ref type="bibr" target="#b0">[1]</ref>, predictive NLP systems for these two tasks are valuable for computational social scientists interested in analyzing oppositional narratives. It is therefore of interest to push the performance on these tasks beyond the previously proposed NLP approaches <ref type="bibr" target="#b0">[1]</ref>. This PAN task has attempted to achieve this goal.</p><p>For the two tasks described above, we provide the XAI-DisInfodemic corpus <ref type="bibr" target="#b0">[1]</ref>, a multilingual (English and Spanish) corpus consisting of 10,000 annotated Telegram messages that focus on oppositional narratives related to the COVID-19 pandemic. For each language, a training set of 4,000 messages was provided to the participants, while the outputs of the systems were computed and evaluated on a test set of 1,000 messages. These messages contain oppositional non-mainstream views on the COVID-19 pandemic, classified into two categories: critical and conspiratorial messages. 
Messages have been annotated at the span level with a topic-agnostic schema that distinguishes the key elements of an oppositional narrative: objectives, negative effects, agents, victims, and facilitators and campaigners (the two groups in conflict). We also provide strong baseline solutions <ref type="bibr" target="#b0">[1]</ref>. The train and test splits of the dataset, as well as the code of the baseline systems, are freely available <ref type="foot" target="#foot_0">1</ref> .</p><p>The following sections of this paper describe the key aspects of this task. Section 2 summarizes the related work on the classification of conspiratorial narratives in NLP and on the span detection of different elements of these narratives. Section 3 presents the dataset used in this task. Section 4 describes the two subtasks proposed above, as well as evaluation measures and baseline solutions. Section 5 presents the systems used by the participants. Section 6 analyzes the results and the systems of the participants. Finally, Section 7 contains conclusions and directions for future work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>A recent literature review by Mahl et al. <ref type="bibr" target="#b8">[9]</ref> indicates a rising interest in conspiracy theories within online environments, particularly within the Social Sciences. Approximately 80% of the research focuses on written content, with about a third using automated content analysis methods. In this section, we review research from the NLP area that is relevant to the present tasks.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Conspiracy detection in NLP</head><p>The COVID-19 pandemic has been one of the topics that has garnered the most attention in the study of conspiracy narratives since 2020, as the pandemic has been fertile ground for the expansion of conspiracy theories. Among the works oriented in this direction, Uscinski et al. <ref type="bibr" target="#b9">[10]</ref> collected a dataset of letters sent to a mainstream US publication, and labeled them as either containing a conspiracy or not. Another available corpus dedicated to conspiracy theories is the LOCO corpus <ref type="bibr" target="#b10">[11]</ref>, containing 96,743 texts from a diverse collection of mainstream and conspiracy outlets; the texts are enriched with website metadata and auto-generated topics. Offering more detail about the content of conspiracy theories, COCO <ref type="bibr" target="#b11">[12]</ref> is a corpus of 3,495 texts promoting COVID-19 conspiracies, manually annotated with a fine-grained classification scheme encompassing conspiracy sub-topics.</p><p>The problem has often been approached as a binary classification task with the goal of distinguishing conspiratorial from non-conspiratorial text. Good examples are the two recent MediaEval challenges. 
Focusing on the classification of conspiracy texts <ref type="bibr" target="#b12">[13,</ref><ref type="bibr" target="#b13">14]</ref>, these challenges led to a number of approaches demonstrating that the state-of-the-art architecture is a multi-task classifier <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17]</ref> based on CT-BERT <ref type="bibr" target="#b17">[18]</ref>.</p><p>More nuanced methodologies using fine-grained approaches, like multi-label or multi-class classification, have provided a detailed understanding <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b12">13,</ref><ref type="bibr" target="#b13">14]</ref> of the diffusion of conspiracies. For example, Moffitt et al. <ref type="bibr" target="#b19">[20]</ref> developed a classifier of COVID-19 origin conspiracy theory tweets, used it for propagation analysis, and then applied social cybersecurity methods to analyze communities, spreaders, and characteristics of the different origin-related conspiracy theory narratives. This research found that conspiracy theory tweets were supported by news sites with low fact-checking scores and amplified by bots that were more likely to link to prominent Twitter users than bots in non-conspiracy tweets.</p><p>Other research in computational linguistics has dealt with different aspects related to the characteristics of the disseminators of conspiracy narratives or has focused on the characteristics of the messages. Bessi <ref type="bibr" target="#b20">[21]</ref> employed a text scaling method to map conspiratorial texts to personality traits and to analyze these conspiracies. Giachanou et al. <ref type="bibr" target="#b18">[19]</ref> used psychological and linguistic features to classify and analyze the social media users who spread conspiracies. 
Topic modeling techniques were used by other authors <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23]</ref> to extract and examine common themes within conspiracy texts. Levy et al. <ref type="bibr" target="#b23">[24]</ref>, departing from the problem of classifying human-written texts, analyze the capacity of large language models to generate conspiracies.</p><p>However, existing research fails to differentiate between critical thinking and conspiratorial thinking, which is the main goal of this task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Span detection in conspiracy theories</head><p>In the field of conspiracy theories, several papers have addressed the challenge of span detection. Samory and Mitra <ref type="bibr" target="#b22">[23]</ref> utilized syntactic parsing to identify "motifs" (agent-action-target triplets) and analyze the patterns of their occurrence. Introne et al. <ref type="bibr" target="#b24">[25]</ref> proposed a span-level scheme of six categories (event, actor, goal, action, consequence, target), and used it to analyze 236 messages from anti-vaccination forums. They distinguish between conspiracy theories and conspiratorial thinking, a category that implies only passive support for a conspiracy. This distinction is not based on theoretically grounded annotations but on the requirement that all the categories be present in a given text; in practice, however, fewer elements can convey a conspiracy theory in a very strong manner. Although this research identifies different elements of discourse, it also fails to consider the role played by intergroup conflict in the conspiracy narrative, which is addressed in the XAI-DisInfodemic corpus <ref type="bibr" target="#b0">[1]</ref>.</p><p>Holur et al. <ref type="bibr" target="#b25">[26]</ref> focus on oppositional elements in the conspiratorial narrative, detecting the so-called insider and outsider entities within conspiracy texts by automatically labelling noun phrases. This insider and outsider schema is based on the positive or negative sentiment that each user conveys for each entity. 
Although this research opens a path toward recognizing the important role of intergroup conflict in conspiratorial narratives, it falls short of properly identifying this conflict, because objects and other inanimate entities that clearly lie outside the social framework are also labeled as insiders or outsiders.</p><p>The importance of detecting intergroup conflict, as proposed by Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>, stems from the growing and potentially violent participation of conspiratorial groups in political activities. This connection implies that CTs aim to strengthen group cohesion and facilitate coordinated actions <ref type="bibr" target="#b26">[27]</ref>. Consequently, detecting crucial aspects of the narrative at the span level, such as intergroup conflict, can provide significant insights for content moderation.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Dataset</head><p>This task uses the XAI-DisInfodemic corpus <ref type="bibr" target="#b0">[1]</ref>, which consists of 10,000 annotated Telegram messages, 5,000 in English and 5,000 in Spanish. These messages contain oppositional, non-mainstream views on the COVID-19 pandemic, and were obtained from public Telegram channels in which users tend to post messages opposing the mainstream discourse about the pandemic. They are classified into two categories: critical messages and conspiratorial messages. For the creation of this corpus, the authors developed an annotation scheme to differentiate between texts hinting at the existence of a conspiracy and those criticizing mainstream views on COVID-19 without suggesting the existence of a conspiracy. Statistics of text length (average, standard deviation, minimum, first quartile, median, third quartile, and maximum), measured in number of words (whitespace-separated tokens), are reported for both the English and Spanish corpora.</p><p>In addition to the annotation into the two classes, the XAI-DisInfodemic corpus offers a second annotation that marks the key elements in oppositional narratives. The tagset includes six labels which can be applied both to messages containing a conspiracy theory and messages containing critical thinking: objectives, negative effects, agents, victims, facilitators (the group that collaborates with the mainstream authorities) and campaigners (the group that conveys the oppositional message). Korenčić et al. <ref type="bibr" target="#b0">[1]</ref> identified the following six categories of narrative elements (see Figure <ref type="figure" target="#fig_0">1</ref> for an example annotation of a Conspiracy message, and Figure <ref type="figure" target="#fig_1">2</ref> for an example annotation of a Critical message):</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Conspiracy Theory</head><p>1. Agents (A): Those responsible for the actions and/or negative effects described in the comment. In Conspiracy, it could be the hidden power that pulls the strings (in Figure <ref type="figure" target="#fig_0">1</ref>, "Private owned WHO", "investors like Bill Gates", "pharma companies" and "very evil beings"). In Critical, it could be the actors that design the mainstream public health policies (in Figure <ref type="figure" target="#fig_1">2</ref>, "White House chief medical advisor Dr. Anthony Fauci" and "the lead of CDC director Rochelle Walensky, who questioned natural immunity"). 2. Facilitators (F): Those who collaborate with the agents and contribute to the execution of their goals. In Conspiracy, they could be governments or institutions which, either intentionally or unwittingly, collaborate with the conspirators and help the conspiracy move forward (in Figure <ref type="figure" target="#fig_0">1</ref>, "the world governments ruled by their puppets", "their media", "the media" and "governments"). In Critical, the facilitators could be healthcare workers, mass media or authority figures who abide by governmental instructions (in Figure <ref type="figure" target="#fig_1">2</ref>, "university hospitals" and "the vaccinated work-from-home hospital administrators who are firing her for not being vaccinated"). 3. Campaigners (C): Those who oppose the mainstream narrative. In Conspiracy, those who know the truth and expose it to society at large (in Figure <ref type="figure" target="#fig_0">1</ref>, "those awake already"). In Critical, those who oppose the enforcement of laws and/or refuse to follow health-related instructions from the authorities (in Figure <ref type="figure" target="#fig_1">2</ref>, "Dr Martin Kulldorff"). 4. Victims (V): Those who suffer the consequences of the actions and decisions of the agents and/or the facilitators. In Conspiracy, the people who are deceived by those in power, and suffer, become ill, lose their freedom, or die as a result of a hidden plan (in Figure <ref type="figure" target="#fig_0">1</ref>, "people", "most people" and "regular people"). In Critical, the people who receive the negative consequences of the actions and the decisions made by those in power, and also suffer, lose their freedom, become ill, or die as a result of incorrect decisions (in Figure <ref type="figure" target="#fig_1">2</ref>, "all nurses, doctors and other health care providers"). 5. Objectives (O): The intentions and purposes that the agents are trying to achieve. In Conspiracy, the goals of the conspirators (in Figure <ref type="figure" target="#fig_0">1</ref>, "agenda" and "destroying us"). In Critical, the goals of public authorities, pharmaceutical companies, organizations, etc. (in Figure <ref type="figure" target="#fig_1">2</ref>, "pushing vaccine mandates"). 6. Negative Effects (E): The negative consequences suffered by the victims as a result of the actions and decisions of those in power and/or their collaborators (in Figure <ref type="figure" target="#fig_0">1</ref>, "the constant fear mongering" and "pay a hefty price, often with their health, lives, the loss of their loved ones"; in Figure <ref type="figure" target="#fig_1">2</ref>, "will be fired if they do not get a Covid vaccine").</p><p>Table <ref type="table">2</ref> shows the amount and the percentages of spans in the gold standard (GS) that have been annotated with each label for each category (Conspiracy or Critical).</p><note type="other">Critical Thinking</note></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 2</head><p>Statistics for the gold span-level annotations of the narrative elements. Absolute number and percentage of spans are given for each of the binary text classes and for all texts, and for each of the six narrative categories:</p><formula xml:id="formula_0">Agents (A), Facilitators (F), Campaigners (C), Victims (V), Objectives (O), Negative Effects (E).</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Task Setup</head><p>For each language, the corresponding dataset of 5,000 texts was divided into train and test sets using stratified sampling. The train set consisted of 4,000 messages, while the test set consisted of 1,000 messages. The participants had access to the train set from the start of the task, and prior to the evaluation deadline they were provided with the unlabeled test set and asked to submit their predictions. Each team was allowed to submit up to two predictions for each combination of subtask and language.</p><p>The dataset, the code for building and applying the baseline systems, as well as the evaluation code and task instructions, are made available <ref type="foot" target="#foot_1">2</ref>.</p><p>Distinguishing Between Critical and Conspiratorial Messages (Subtask 1) This is a binary classification task differentiating between (1) critical messages, i.e., those that question major decisions in the public health domain, but do not promote a conspiracist mentality <ref type="bibr" target="#b0">[1]</ref>; and (2) conspiratorial messages, i.e., those that view the pandemic or public health decisions as a result of a malevolent conspiracy by secret, influential groups <ref type="bibr" target="#b0">[1]</ref>. Input data consists of a set of messages, each of which is associated with one of two categories: either CONSPIRACY or CRITICAL. The evaluation metric used for this subtask is the Matthews Correlation Coefficient (MCC) <ref type="bibr" target="#b27">[28]</ref>.</p><p>Detecting Elements of Oppositional Narratives (Subtask 2) This is a token-level classification task aimed at recognizing text spans corresponding to the key elements of oppositional narratives <ref type="bibr" target="#b0">[1]</ref>. Input data consists of a set of messages, each of which is accompanied by a (possibly empty) list of span annotations. 
Each annotation corresponds to a narrative element, and is described by its borders (start and end characters), as well as its category. There are six distinct span categories: AGENTS, FACILITATORS, VICTIMS, CAMPAIGNERS, OBJECTIVES, NEGATIVE_EFFECTS. The evaluation metric used for this subtask is macro-averaged span-F1 <ref type="bibr" target="#b28">[29]</ref>.</p></div>
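To make the span-annotation format described above concrete, the sketch below parses a hypothetical Subtask 2 record; the JSON field names (`id`, `text`, `annotations`, `category`, `start_char`, `end_char`) are illustrative assumptions, not the task's actual schema.

```python
import json

# Hypothetical example of a Subtask 2 record: a message accompanied by a
# (possibly empty) list of span annotations, each described by its category
# and its borders (start and end characters). Field names are assumptions.
record_json = """
{
  "id": "msg-001",
  "text": "The government hid the data from ordinary citizens.",
  "annotations": [
    {"category": "AGENTS",  "start_char": 4,  "end_char": 14},
    {"category": "VICTIMS", "start_char": 33, "end_char": 50}
  ]
}
"""

def extract_spans(record):
    """Return (category, surface text) pairs for each annotated span."""
    text = record["text"]
    return [(a["category"], text[a["start_char"]:a["end_char"]])
            for a in record["annotations"]]

record = json.loads(record_json)
spans = extract_spans(record)
# spans == [("AGENTS", "government"), ("VICTIMS", "ordinary citizens")]
```

Character borders index directly into the message text, so overlapping spans of different categories pose no representational problem.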
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Evaluation Measures</head><p>As the main criterion for evaluation in Subtask 1, we used the MCC <ref type="bibr" target="#b27">[28]</ref>. MCC serves the same purpose as the macro-averaged F1 measure: it aggregates performance across both classes. We opted for the MCC measure since it works well on imbalanced datasets, is reliable and less optimistic than the macro-averaged F1 <ref type="bibr" target="#b29">[30]</ref>, and compares favorably to other alternatives <ref type="bibr" target="#b27">[28]</ref>.</p><p>For evaluation in Subtask 2, we used the span-F1 measure <ref type="bibr" target="#b28">[29]</ref>, an adaptation of the F1 measure that accounts for partially correct predictions by looking at span overlap. Specifically, a predicted span is not required to exactly match a gold standard span in terms of start and end characters. Instead, the proportion of overlapping characters is used to calculate precision and recall <ref type="bibr" target="#b28">[29]</ref>. This approach offers a fairer evaluation in tasks with long spans and inherently subjective span boundaries. In contrast, for tasks like traditional, non-nested Named Entity Recognition (NER), where named entities are shorter and are expected to have well-defined boundaries, exact matching is a reasonable method of evaluation.</p><p>As the main criterion for evaluation, we used the macro-averaged span-F1, i.e., span-F1 averaged over all six span labels corresponding to the six elements of oppositional narratives described in Section 3.</p></div>
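As a rough illustration of the two measures, the sketch below implements MCC from its confusion-matrix definition, together with a character-overlap precision/recall in the spirit of the span-F1 described above; the official scorer's exact matching and aggregation rules may differ.

```python
import math

def mcc(gold, pred):
    """Matthews Correlation Coefficient for binary labels in {0, 1}."""
    tp = sum(g == 1 and p == 1 for g, p in zip(gold, pred))
    tn = sum(g == 0 and p == 0 for g, p in zip(gold, pred))
    fp = sum(g == 0 and p == 1 for g, p in zip(gold, pred))
    fn = sum(g == 1 and p == 0 for g, p in zip(gold, pred))
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

def overlap_precision_recall(gold_spans, pred_spans):
    """Character-overlap precision and recall for (start, end) spans.

    Precision: fraction of predicted characters that fall inside some gold
    span; recall: fraction of gold characters covered by some prediction.
    A simplified reading of the overlap-based span-F1 idea.
    """
    gold_chars = {c for s, e in gold_spans for c in range(s, e)}
    pred_chars = {c for s, e in pred_spans for c in range(s, e)}
    overlap = len(gold_chars & pred_chars)
    precision = overlap / len(pred_chars) if pred_chars else 0.0
    recall = overlap / len(gold_chars) if gold_chars else 0.0
    return precision, recall
```

For example, a prediction shifted by half its length against a single gold span of the same length yields precision = recall = 0.5, whereas exact matching would score it zero; this is the leniency toward partially correct predictions described above.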
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Baseline Solutions</head><p>Baselines for both subtasks are based on the approaches from Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>, where more details can be found. For each subtask, we took as a baseline the variant based on the transformer model that achieved the lowest performance in Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>. Hyperparameters were not changed; the models were trained on the entire train set and then applied to the test set.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Distinguishing Critical and Conspiratorial Messages (Subtask 1)</head><p>The approach for this binary classification task is based on fine-tuning the BERT transformer model <ref type="bibr" target="#b30">[31]</ref> from the Hugging Face<ref type="foot" target="#foot_2">3</ref> repository, using the case-sensitive "base" version. The BETO <ref type="bibr" target="#b31">[32]</ref> version of BERT was used for the Spanish dataset. The maximum input length was set to 256 tokens. We fine-tuned the models for three epochs using the AdamW optimizer, a learning rate of 2e-5, a slanted triangular LR scheduler with a 10% warm-up period, a batch size of 16, and a weight decay of 0.01. All the layers of the transformers were fine-tuned. The dropout rate for the classification head was 0.1.</p></div>
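For reference, the baseline's reported training setup can be collected into a single configuration dictionary; the key names loosely follow Hugging Face TrainingArguments conventions and are an assumption for illustration, not the baseline's actual code.

```python
# Subtask 1 baseline hyperparameters as reported in the text above.
# Key names are illustrative (TrainingArguments-style), not the real code.
BASELINE_CONFIG = {
    "model_name_en": "bert-base-cased",   # case-sensitive BERT "base"
    "model_name_es": "BETO",              # Spanish BERT variant
    "max_length": 256,                    # maximum input length in tokens
    "num_train_epochs": 3,
    "optimizer": "AdamW",
    "learning_rate": 2e-5,
    "lr_scheduler": "slanted_triangular", # with 10% warm-up period
    "warmup_ratio": 0.10,
    "per_device_train_batch_size": 16,
    "weight_decay": 0.01,
    "classifier_dropout": 0.1,            # dropout of the classification head
    "frozen_layers": 0,                   # all transformer layers fine-tuned
}
```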
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Detecting Elements of Oppositional Narratives (Subtask 2)</head><p>The baseline for this sequence labeling task is based on fine-tuning a transformer model with added token classification heads. To account for the possibility of overlapping spans with different categories, we used six separate per-category heads that performed BIO sequence tagging. We employed multi-task learning <ref type="bibr" target="#b32">[33]</ref> by connecting the per-category taggers to the same transformer backbone. Multi-task learning has several advantages, such as improved regularization and implicit data augmentation <ref type="bibr" target="#b32">[33]</ref>, and the described approach was successfully deployed for a similar task of span-level skill extraction <ref type="bibr" target="#b33">[34]</ref>. We used the same configuration and hyperparameters as in the case of Subtask 1. The exception was the number of epochs, which we increased to 10 to accommodate the increased task complexity. The BERT model <ref type="bibr" target="#b30">[31]</ref> was used as the base transformer for the English dataset, while for the Spanish dataset the BETO version of BERT <ref type="bibr" target="#b31">[32]</ref> was used.</p></div>
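The per-category BIO tagging behind this design can be sketched as follows; whitespace tokenization and the helper names are illustrative assumptions, not the baseline's actual code.

```python
def char_spans_to_bio(text, spans):
    """BIO tags over whitespace tokens for the character spans of ONE
    category; running one such tagger per category (as the six heads of
    the baseline do) lets spans of different categories overlap freely."""
    # Character offsets (start, end) of each whitespace-separated token.
    offsets, pos = [], 0
    for tok in text.split():
        start = text.index(tok, pos)
        offsets.append((start, start + len(tok)))
        pos = start + len(tok)

    def covering(lo, hi):
        # Index of the first span overlapping the token, or None.
        return next((i for i, (s, e) in enumerate(spans)
                     if s < hi and lo < e), None)

    tags, prev = [], None
    for lo, hi in offsets:
        cur = covering(lo, hi)
        if cur is None:
            tags.append("O")
        else:
            # "B" opens a span; "I" continues the span of the previous token.
            tags.append("I" if cur == prev else "B")
        prev = cur
    return tags
```

With the span (4, 14) over "The government hid the data", only the token "government" is tagged "B"; extending the span to character 18 also pulls in "hid" as "I".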
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Participating Systems</head><p>A total of 82 teams submitted their solutions for at least one of the tasks. The approaches included pre-neural NLP models, small transformers such as BERT <ref type="bibr" target="#b30">[31]</ref>, and Large Language Models <ref type="bibr" target="#b34">[35]</ref>. Techniques such as Ensemble Methods <ref type="bibr" target="#b35">[36]</ref> and Data Augmentation <ref type="bibr" target="#b36">[37]</ref> were also used to improve performance. Another important factor was the data on which the chosen transformer models were pretrained: participants experimented with both domain-specific models such as CT-BERT <ref type="bibr" target="#b17">[18]</ref> and multilingual models such as mBERT <ref type="bibr" target="#b37">[38]</ref>.</p><p>Most of the approaches relied on fine-tuning BERT-like transformers <ref type="bibr" target="#b30">[31]</ref>. This is not surprising, since these models yield strong results for both classification and sequence labeling <ref type="bibr" target="#b30">[31]</ref>, and since baselines based on this approach were provided to the participants.</p><p>To describe the approaches based on transformer models <ref type="bibr" target="#b38">[39]</ref>, we use the abbreviation SLM ("Small" Language Models) for transformers with fewer than one billion parameters. For transformers with more than one billion parameters, we use the standard abbreviation LLM (Large Language Models).</p><p>Working Notes Submissions A total of 17 participating systems had their working notes papers accepted. Huertas-García et al. <ref type="bibr" target="#b39">[40]</ref> tackled Subtask 1, experimenting with a range of SLMs and with the commercial LLM Claude<ref type="foot" target="#foot_3">4</ref>. Vallecillo-Rodríguez et al. 
<ref type="bibr" target="#b40">[41]</ref> experimented with the fine-tuning of two LLMs: LLaMA3-8B-instruct <ref type="bibr" target="#b41">[42]</ref> and GPT-3.5 <ref type="bibr" target="#b42">[43]</ref>. Hu et al. <ref type="bibr" target="#b43">[44]</ref> used SLMs with an added BiGRU LSTM layer <ref type="bibr" target="#b44">[45]</ref> to tackle both tasks. Damian et al. <ref type="bibr" target="#b45">[46]</ref> approached both tasks using ensembles of mono- and multilingual SLMs. Sánchez-Hermosilla et al. <ref type="bibr" target="#b46">[47]</ref> focused on Subtask 1 using a range of SLMs, data augmentation, and ensembling techniques. Zrnić <ref type="bibr" target="#b47">[48]</ref> experimented with mono- and multilingual SLMs in order to tackle both tasks. Sahitaj et al. <ref type="bibr" target="#b48">[49]</ref> approached Subtask 1 using SLMs and an LLM-based data augmentation technique. Gómez-Romero et al. <ref type="bibr" target="#b49">[50]</ref> used an approach based on OpenAI Embeddings and a deep feedforward network for Subtask 1 and, in addition, performed entity masking in order to increase the models' generality. Mahesh et al. <ref type="bibr" target="#b50">[51]</ref> experimented with SLMs and non-neural approaches on Subtask 1. Zeng et al. <ref type="bibr" target="#b51">[52]</ref> employed mono- and multilingual SLMs for both Subtask 1 and Subtask 2. Huang et al. <ref type="bibr" target="#b52">[53]</ref> used SLMs for both tasks, and employed ensembling for Subtask 1. Tulbure and Coll Ardanuy <ref type="bibr" target="#b53">[54]</ref> experimented with SLMs boosted by data augmentation and ensembling, and for Subtask 2 split the input texts into sentences. Liu et al. <ref type="bibr" target="#b54">[55]</ref> experimented with a range of LLMs using zero-shot chain-of-thought prompts to tackle Subtask 1, and used an SLM approach for Subtask 2. Mhalgi et al. 
<ref type="bibr" target="#b55">[56]</ref> approached Subtask 1 using data augmentation, non-neural classifiers, SLMs and LLMs, as well as model ensembles.</p><p>Several participants essentially replicated the baseline solution, i.e., fine-tuned and applied one or several SLMs <ref type="bibr" target="#b56">[57,</ref><ref type="bibr" target="#b57">58,</ref><ref type="bibr" target="#b58">59]</ref>.</p><p>Teams that did not submit working notes accounted for 65 submissions and provided a short description of their approaches. Many of these submissions were minor modifications of the provided baseline, i.e., a change of the SLM to be fine-tuned. However, a number of these teams achieved competitive results or provided useful data points using, for example, ensembling techniques, data and feature augmentation techniques, and non-neural NLP approaches.</p></div>
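As an illustration of the ensembling idea used by several teams, a minimal sketch of logit averaging follows. The arrays below are toy placeholders, not outputs of any participant's models.

```python
import numpy as np

def ensemble_by_logit_averaging(per_model_logits):
    """Average class logits across models and take the argmax per example.

    per_model_logits: list of (n_examples, n_classes) arrays, one per model.
    """
    mean_logits = np.mean(np.stack(per_model_logits, axis=0), axis=0)
    return mean_logits.argmax(axis=1)

# Toy logits from two hypothetical fine-tuned models over two examples;
# the models disagree on the second example, and the average decides.
model_a = np.array([[2.0, -1.0], [0.5, 1.5]])
model_b = np.array([[1.0, 0.0], [1.0, 0.2]])
predictions = ensemble_by_logit_averaging([model_a, model_b])  # class indices
```

Averaging logits (rather than hard labels) lets confident models outvote uncertain ones, which is one reason this simple scheme often improves over any single model.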
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Results and Analysis</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Distinguishing Critical and Conspiracy Texts (Subtask 1)</head><p>Table <ref type="table">6</ref>.1 displays the results of the most successful teams on Subtask 1: the teams with performance equal to or greater than that of the provided baseline.</p><figure type="table"><table><row><cell>English</cell><cell></cell><cell>Spanish</cell><cell></cell></row><row><cell>TEAM</cell><cell>MCC</cell><cell>TEAM</cell><cell>MCC</cell></row><row><cell>IUCL <ref type="bibr" target="#b55">[56]</ref></cell><cell>0.8388</cell><cell>SINAI <ref type="bibr" target="#b40">[41]</ref></cell><cell>0.7429</cell></row><row><cell>AI_Fusion</cell><cell>0.8303</cell><cell>auxR</cell><cell>0.7205</cell></row><row><cell>SINAI <ref type="bibr" target="#b40">[41]</ref></cell><cell>0.8297</cell><cell>RD-IA-FUN <ref type="bibr" target="#b39">[40]</ref></cell><cell>0.7028</cell></row><row><cell>ezio <ref type="bibr" target="#b43">[44]</ref></cell><cell>0.8212</cell><cell>Elias&amp;Sergio</cell><cell>0.6971</cell></row><row><cell>hinlole <ref type="bibr" target="#b52">[53]</ref></cell><cell>0.8198</cell><cell>AI_Fusion</cell><cell>0.6872</cell></row><row><cell>Zleon <ref type="bibr" target="#b47">[48]</ref></cell><cell>0.8195</cell><cell>zhengqiaozeng <ref type="bibr" target="#b51">[52]</ref></cell><cell></cell></row></table></figure></div>
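For reference, the MCC metric used to evaluate Subtask 1 can be computed with scikit-learn; the labels below are toy data, not actual task outputs.

```python
from sklearn.metrics import matthews_corrcoef

# Toy gold labels and predictions for the binary conspiracy-vs-critical task.
y_true = ["CONSPIRACY", "CRITICAL", "CRITICAL", "CONSPIRACY", "CRITICAL"]
y_pred = ["CONSPIRACY", "CRITICAL", "CONSPIRACY", "CONSPIRACY", "CRITICAL"]

# MCC ranges from -1 to 1, with 0 corresponding to chance-level prediction,
# which makes it a robust choice under class imbalance.
mcc = matthews_corrcoef(y_true, y_pred)
```

With one false positive out of five examples, the toy setup above yields an MCC of 2/3, illustrating how the metric penalizes errors on both classes symmetrically.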
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Results for English</head><p>The top IUCL team <ref type="bibr" target="#b55">[56]</ref> employed the DeBERTa model <ref type="bibr" target="#b59">[60]</ref> fine-tuned on an augmented dataset comprising the Subtask 1 dataset and the conspiracy-labeled examples from the LOCO corpus <ref type="bibr" target="#b10">[11]</ref> (ca. 16,000 examples were selected). The AI_Fusion team came a close second, simply by relying on the fine-tuned ELECTRA model <ref type="bibr" target="#b60">[61]</ref>. A close third was the SINAI team <ref type="bibr" target="#b40">[41]</ref>, which used the fine-tuned LLaMA3-8B-instruct LLM <ref type="bibr" target="#b41">[42]</ref> as its solution. Additionally, their experiments demonstrated that fine-tuned LLMs outperform the LLM-based zero-shot approaches by a large margin <ref type="bibr" target="#b40">[41]</ref>.</p><p>The rest of the top-performing teams on English based their approaches on SLMs, with several teams using techniques such as ensembling and data augmentation. Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref>, used by the teams ezio <ref type="bibr" target="#b43">[44]</ref>, hinlole <ref type="bibr" target="#b52">[53]</ref>, Zleon <ref type="bibr" target="#b47">[48]</ref>, and inaki <ref type="bibr" target="#b46">[47]</ref>, seems to be a successful transformer model for this use-case. Some teams with competitive results used standard transformer models: the theateam, trustno1, and ojo-bes teams used standard RoBERTa <ref type="bibr" target="#b61">[62]</ref>, while the virmel team used BERT <ref type="bibr" target="#b30">[31]</ref> and the yeste team relied on the ELECTRA model <ref type="bibr" target="#b60">[61]</ref>.</p><p>Two fully multilingual approaches performed competitively, those of the auxR and RD-IA-FUN <ref type="bibr" target="#b39">[40]</ref> teams. 
Both approaches were based on a multilingual transformer trained on joint English and Spanish data. The auxR team employed the Twitter-XLM-RoBERTa-large model, a derivative of the XLM-RoBERTa model <ref type="bibr" target="#b62">[63]</ref> domain-adapted using Twitter data, while the RD-IA-FUN <ref type="bibr" target="#b39">[40]</ref> team used the multilingual-e5-large model <ref type="bibr" target="#b63">[64]</ref>, a derivative of XLM-RoBERTa. The Elias&amp;Sergio team used monolingual RoBERTa, but fine-tuned the model using the Spanish dataset translated into English (in addition to the English dataset).</p><p>Notably different was the approach of the sail team <ref type="bibr" target="#b49">[50]</ref>, who used OpenAI Embeddings<ref type="foot" target="#foot_4">5</ref> in combination with a deep feed-forward neural network for fine-tuning. Additionally, they pre-processed the texts by replacing named entities with entity classes such as 'PERSON', in order to "enhance the model's generalization capabilities" <ref type="bibr" target="#b49">[50]</ref>. They showed that, for Subtask 1, the masked model performs better than the non-masked one.</p><p>Results for Spanish Many of the teams that did well on Spanish also achieved top results on English. For these teams, we will briefly describe the differences between the two approaches, and we refer the reader to the English section of Subtask 1 for details.</p><p>Top performance was obtained by the SINAI team <ref type="bibr" target="#b40">[41]</ref>, which relied on LLMs. 
In contrast to the English results, the fine-tuned GPT-3.5 model <ref type="bibr" target="#b42">[43]</ref> outperformed LLaMA3-8B-instruct <ref type="bibr" target="#b41">[42]</ref> by a large margin, yielding the best overall solution.</p><p>The second and third positions are held by the two fully multilingual approaches of the auxR and RD-IA-FUN teams <ref type="bibr" target="#b39">[40]</ref>, which also performed well on English.</p><p>Interestingly, five of the six following teams (Elias&amp;Sergio, AI_Fusion, zhengqiaozeng, virmel, trustno1, Zleon) employed standard SLM fine-tuning with PlanTL-GOB-ES/roberta-base-bne <ref type="bibr" target="#b64">[65]</ref> as the base model. The exception is the zhengqiaozeng team <ref type="bibr" target="#b51">[52]</ref>, which relied on the multilingual XLM-RoBERTa model. The tulbure team <ref type="bibr" target="#b53">[54]</ref> relied on an ensemble of three Spanish SLMs.</p><p>The sail team <ref type="bibr" target="#b49">[50]</ref> used the same approach as for English, based on multilingual OpenAI Embeddings. The nlpln team <ref type="bibr" target="#b54">[55]</ref> surpassed the baseline using an approach unconventional in the context of this challenge: zero-shot prompting of LLMs with the chain-of-thought prompting technique <ref type="bibr" target="#b65">[66]</ref>. We note that the same approach scored competitively on the English classification subtask, achieving an MCC of 0.7844 (see Table <ref type="table">A</ref>). The nlpln team <ref type="bibr" target="#b54">[55]</ref> tested a number of LLMs, including GPT, Claude, and Gemini, on the full training set. The DeepSeek V2 model <ref type="bibr" target="#b66">[67]</ref>, a large mixture-of-experts LLM, achieved the best results. Surprisingly, the results on the test data showed this model to be relatively competitive with fine-tuned LLMs.</p></div>
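The zero-shot chain-of-thought setup can be sketched as prompt construction plus output parsing. The prompt wording and the parsing heuristic below are hypothetical illustrations of the technique, not the nlpln team's actual prompt or code.

```python
def build_cot_prompt(text: str) -> str:
    """Build an illustrative zero-shot chain-of-thought classification prompt.

    The wording is a hypothetical reconstruction, not the prompt actually
    used by any participating team.
    """
    return (
        "You will read a message that opposes the mainstream view of the "
        "COVID-19 pandemic.\n"
        "First reason step by step: does the message merely criticize "
        "official decisions, or does it allege a secret plot by hidden actors?\n"
        "Then answer with exactly one label: CONSPIRACY or CRITICAL.\n\n"
        f"Message: {text}\n"
        "Reasoning:"
    )

def parse_label(llm_output: str) -> str:
    """Take the last label mentioned in the model's reasoning chain."""
    tokens = [w.strip(".,:;!?\"'") for w in llm_output.upper().split()]
    hits = [w for w in tokens if w in ("CONSPIRACY", "CRITICAL")]
    return hits[-1] if hits else "CRITICAL"  # arbitrary fallback label

prompt = build_cot_prompt("They are hiding the real numbers.")
```

The prompt elicits the reasoning chain before the label, and the parser keeps only the final label, since chain-of-thought outputs often mention both candidate labels while reasoning.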
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Analysis</head><p>The results of the top teams suggest that the most successful English transformer-based models are the DeBERTa model <ref type="bibr" target="#b59">[60]</ref>, the ELECTRA model <ref type="bibr" target="#b60">[61]</ref>, and the large LLaMA3-8B-instruct LLM <ref type="bibr" target="#b41">[42]</ref>. The Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref> model was used by a number of high-performing teams, suggesting that pre-training on social media data probably contributes to performance. However, both BERT <ref type="bibr" target="#b30">[31]</ref> and RoBERTa <ref type="bibr" target="#b61">[62]</ref> were shown to perform competitively. The performance edge obtained by the IUCL team <ref type="bibr" target="#b55">[56]</ref> suggests that the LOCO conspiracy corpus <ref type="bibr" target="#b10">[11]</ref> is a useful resource for boosting conspiracy-related classifiers in other use-cases.</p><p>In Spanish, the choice of model seems to be more important: many of the best teams used the Spanish 'Maria' RoBERTa model <ref type="bibr" target="#b64">[65]</ref>, trained exclusively on data crawled from the web, while none of the top teams employed either the BETO <ref type="bibr" target="#b31">[32]</ref> or BERTIN <ref type="bibr" target="#b67">[68]</ref> models. Moreover, the top three teams employed either fine-tuned LLMs <ref type="bibr" target="#b40">[41]</ref> (GPT-3.5 <ref type="bibr" target="#b42">[43]</ref>) or multilingual models <ref type="bibr" target="#b39">[40,</ref><ref type="bibr" target="#b62">63]</ref>. These teams, especially the top one based on LLMs, outperformed the others by a significant margin. 
Interestingly, none of the participants used RoBERTuito <ref type="bibr" target="#b68">[69]</ref>, a model pretrained on Spanish social media text.</p><p>It would be interesting to perform ablation studies in both languages in order to measure the influence on performance of both architectural improvements and the choice of the pretraining dataset.</p><p>As for the application of the LLMs <ref type="bibr" target="#b34">[35]</ref>, the English results show no large difference between fine-tuned LLMs and fine-tuned SLMs. Therefore, we hypothesize that the superiority of fine-tuned GPT-3.5 <ref type="bibr" target="#b42">[43]</ref> on Spanish is due to the pre-training data (GPT-3.5 has probably "seen" far more social media text than the Spanish SLMs). The results of the nlpln team <ref type="bibr" target="#b54">[55]</ref> demonstrate the competitiveness, in both languages, of the DeepSeek V2 model <ref type="bibr" target="#b66">[67]</ref> in combination with chain-of-thought prompting <ref type="bibr" target="#b65">[66]</ref>. Therefore, this approach seems to be a good way to quickly bootstrap a conspiracy vs. critical classifier for other use-cases and other supported languages. The approach of Sahitaj et al. <ref type="bibr" target="#b48">[49]</ref>, based on using an LLM-generated elaboration of a text's context and argumentation as additional input for classification, might prove beneficial for improving LLM-based zero-shot prompting.</p><p>A number of teams opted to use non-neural text classifiers, such as LinearSVM <ref type="bibr" target="#b69">[70]</ref> or Random Forest <ref type="bibr" target="#b70">[71]</ref>, in combination with tf-idf- or n-gram-based features. 
The average score of these approaches is 0.7080 MCC for English, and 0.5814 MCC for Spanish.</p><p>The baseline systems <ref type="bibr" target="#b0">[1]</ref> were based on BERT <ref type="bibr" target="#b30">[31]</ref> and BETO <ref type="bibr" target="#b31">[32]</ref>, respectively, for the English and Spanish datasets. These models were chosen as the baseline because they yielded the weakest performance in Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>. The best performance, corresponding to the state-of-the-art before this challenge, was obtained with the DeBERTaV3 <ref type="bibr" target="#b71">[72]</ref> and 'BERTIN' RoBERTa <ref type="bibr" target="#b67">[68]</ref> models. When these models were applied to the challenge's train-test split, they obtained MCC scores of 0.8259 and 0.6681, respectively, for English and Spanish. The DeBERTaV3 score represents an improvement over BERT. Even with this improvement, the participants managed to surpass the state-of-the-art performance.</p></div>
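The non-neural pipelines used by these teams can be sketched with scikit-learn; the texts and labels below are toy stand-ins for the Subtask 1 corpus.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the Subtask 1 training data.
texts = [
    "they are hiding the real numbers from us",
    "the new restrictions were announced too late",
    "a secret group controls the vaccine rollout",
    "the government mismanaged the lockdown policy",
]
labels = ["CONSPIRACY", "CRITICAL", "CONSPIRACY", "CRITICAL"]

# Word uni- and bigram tf-idf features feeding a linear SVM.
classifier = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), sublinear_tf=True),
    LinearSVC(),
)
classifier.fit(texts, labels)
prediction = classifier.predict(["officials concealed the data on purpose"])[0]
```

Such pipelines train in seconds and need no GPU, which helps explain why they remain a reasonable baseline despite trailing the transformer-based systems by a wide margin.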
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Detecting Elements of the Oppositional Narratives (Subtask 2)</head><p>Table <ref type="table">6</ref>.2 contains the results of the most successful teams on Subtask 2: the teams with performance equal to or greater than that of the provided baseline.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Results for English</head><p>The most successful team, tulbure <ref type="bibr" target="#b53">[54]</ref>, relied on a combination of preprocessing techniques and data augmentation. While the provided baseline used multi-task learning to account for overlapping spans of different categories <ref type="bibr" target="#b0">[1]</ref>, Tulbure and Coll Ardanuy <ref type="bibr" target="#b53">[54]</ref> opted to use a single model for all the span categories and modified the data accordingly. Additionally, each Telegram text was segmented into sentences, which were used as training examples. This solved the problem of texts longer than the maximum input length supported by a transformer. Data augmentation was performed by "replacing words in the texts by synonyms or semantically-related words", and the RoBERTa model was used <ref type="bibr" target="#b61">[62]</ref>.</p><p>As the remaining teams mostly relied on modifying the multi-task sequence labeling approach of the baseline <ref type="bibr" target="#b0">[1]</ref>, this will be the assumed default approach; differences are described only where a team deviated from it.</p><p>The second-placed team, Zleon <ref type="bibr" target="#b47">[48]</ref>, used a large variant of RoBERTa <ref type="bibr" target="#b61">[62]</ref> and increased the model's maximum sequence length to 512. The third-placed team, hinlole <ref type="bibr" target="#b52">[53]</ref>, used Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref> as the base model. The oppositional_opposition team used the DistilBERT model <ref type="bibr" target="#b72">[73]</ref> in combination with Conditional Random Fields <ref type="bibr" target="#b73">[74]</ref>. Interestingly, the same type of model was used for Subtask 2 in Spanish, but achieved a very low result (see Table <ref type="table" target="#tab_0">10</ref> in Appendix A), suggesting overfitting or a failure to converge. 
The AI_Fusion team used the RoBERTa model <ref type="bibr" target="#b61">[62]</ref> and chose the best model over the 50 fine-tuning epochs. The virmel team used the RoBERTa model with the maximum sequence length set to 512. The zhengqiaozeng team <ref type="bibr" target="#b51">[52]</ref> employed the RoBERTa model, while the ALC_UPV_JD_2 team relied on the small ALBERT model <ref type="bibr" target="#b74">[75]</ref>.</p><p>The miqarn team used the multilingual mBERT model <ref type="bibr" target="#b37">[38]</ref>, trained on datasets in both languages. This approach also performed well on the Spanish dataset.</p><p>The TargaMarhuenda team used the RoBERTa model, and added pre-computed POS tags as input by concatenating them to the model's token embeddings to construct input to the initial layer of the transformer. The Elias&amp;Sergio team used a similar approach, but concatenated one-hot POS vectors with the token representations of the final layer of the transformer to construct input to the token classification head.</p><p>The ezio team <ref type="bibr" target="#b43">[44]</ref> modified the multi-tasking approach using "BiGRU LSTM", a bidirectional LSTM network based on gated recurrent units <ref type="bibr" target="#b44">[45]</ref>. Instead of using simple per-task classification heads, each task was assigned both a task-specific LSTM network and a task-specific classification head. Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref> was used as the base model.</p><p>The DSVS <ref type="bibr" target="#b45">[46]</ref> team created an ensemble of token classifiers based on different SLMs such as BERT, RoBERTa and ELECTRA, and performed "logit averaging" to obtain their final predictions.</p><p>The CHEEXIST team used the Fake-News-Bert-Detect model, a domain-adapted version of RoBERTa. 
Additionally, they replaced the final classification layer with a shallow neural network.</p><p>The rfenthusiasts team used the DeBERTaV3 model <ref type="bibr" target="#b71">[72]</ref> and performed data augmentation by replacing characters in the texts. The same approach, when used in combination with the XLM-RoBERTa model <ref type="bibr" target="#b62">[63]</ref>, did not work well on the Spanish dataset.</p><p>Results for Spanish All of the teams that achieved top results on the Spanish dataset did the same on the English dataset. Therefore, here we will only briefly describe the differences, which mostly pertain to a different choice of transformer model. As for English, the majority of the approaches relied on the multi-task sequence labeling approach of the baseline <ref type="bibr" target="#b0">[1]</ref>.</p><p>The same two teams, tulbure and Zleon, took first and second place, as on the English dataset. Both relied on the same approaches they used for English, differing only in the use of the Spanish 'Maria' RoBERTa model <ref type="bibr" target="#b64">[65]</ref>.</p><p>The AI_Fusion team, placed third, relied on the XLM-RoBERTa model <ref type="bibr" target="#b62">[63]</ref>, while the virmel team relied on the Spanish 'BERTIN' RoBERTa model <ref type="bibr" target="#b67">[68]</ref>. The CHEEXIST team used the 'Maria' RoBERTa model <ref type="bibr" target="#b64">[65]</ref>.</p><p>The miqarn team used a single mBERT <ref type="bibr" target="#b37">[38]</ref> model fine-tuned on both datasets, and achieved good results on Spanish. The DSVS <ref type="bibr" target="#b45">[46]</ref> team's ensemble approach also achieved good results in the case of the Spanish dataset. 
The ensemble consisted of a number of Spanish and multilingual models <ref type="bibr" target="#b45">[46]</ref>.</p><p>The two approaches based on using POS tags as additional model input, those of the Targa-Marhuenda and Elias&amp;Sergio teams, relied on the Spanish RoBERTa model. The hinlole team <ref type="bibr" target="#b52">[53]</ref> relied on the Spanish BETO model <ref type="bibr" target="#b31">[32]</ref>.</p></div>
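The sentence-level preprocessing used by the top Subtask 2 system can be sketched as follows. The naive regex splitter and the offset bookkeeping are our illustrative simplification of the idea (a real implementation would use a proper sentence segmenter), not the team's code.

```python
import re

def split_into_sentence_examples(text, spans):
    """Turn one annotated text into sentence-level training examples.

    `spans` holds (start, end, category) character offsets into `text`;
    each span is re-based onto the sentence that contains it. Assumes
    sentences are separated by exactly one whitespace character.
    """
    examples = []
    offset = 0
    for sentence in re.split(r"(?<=[.!?])\s", text):
        start, end = offset, offset + len(sentence)
        # Keep only spans fully inside this sentence, with local offsets.
        local_spans = [
            (s - start, e - start, category)
            for (s, e, category) in spans
            if start <= s and e <= end
        ]
        examples.append((sentence, local_spans))
        offset = end + 1  # skip the single separator consumed by the split
    return examples

examples = split_into_sentence_examples(
    "The elites planned it. We suffer the effects.",
    [(4, 10, "AGENT"), (23, 25, "VICTIM")],
)
```

Besides sidestepping the transformer's maximum input length, this multiplies the number of training examples, which is consistent with the data-augmentation flavor of the winning approach.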
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Analysis</head><p>The system that clearly outperformed the others in both languages was that of the tulbure team <ref type="bibr" target="#b53">[54]</ref>. Its sentence-level processing of texts suggests that the signals for inferring the elements of oppositional narratives are largely sentence-local. It would be interesting to perform ablation studies to determine how much data augmentation influences performance relative to sentence segmentation. Further improvements might be achieved by using multi-task learning and transformers other than RoBERTa, as well as other data augmentation techniques, possibly based on LLMs.</p><p>The competitive results of the Zleon team <ref type="bibr" target="#b47">[48]</ref> and several other teams relying on the multi-task baseline approach show its effectiveness in combination with an improved choice of the backbone SLM and an increased maximum sequence length. Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref>, used by the second- and third-placed teams, seems to be a successful choice for English.</p><p>Performance on Subtask 2 seems to be less influenced by the choice of transformer model, especially in the case of Spanish. 
Concretely, a larger variety of models appears among the top teams and, in the case of Spanish, all three families of models (BETO <ref type="bibr" target="#b31">[32]</ref>, BERTIN <ref type="bibr" target="#b67">[68]</ref>, and 'Maria' <ref type="bibr" target="#b64">[65]</ref>) are represented.</p><p>The approach of the miqarn team, based on the multilingual mBERT model <ref type="bibr" target="#b37">[38]</ref>, worked well for both languages and could be a good approach for the task of inferring the elements of oppositional narrative in other languages, especially under-resourced ones.</p><p>The baseline systems <ref type="bibr" target="#b0">[1]</ref> were based on the BERT <ref type="bibr" target="#b30">[31]</ref> and BETO <ref type="bibr" target="#b31">[32]</ref> models, respectively, for the English and Spanish datasets. They were chosen since they yielded the weakest performance in Korenčić et al. <ref type="bibr" target="#b0">[1]</ref>. Top performance, corresponding to the state-of-the-art before this challenge, was obtained with the DeBERTaV3 <ref type="bibr" target="#b71">[72]</ref> and BERTIN <ref type="bibr" target="#b67">[68]</ref> models. When these models were applied to the challenge's train-test split, they obtained span-F1 scores of 0.5786 and 0.5369, respectively, for English and Spanish. These scores represent an improvement over the baseline, but even so the participants managed to significantly raise the state-of-the-art performance on the task.</p></div>
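For reference, the exact-match span-F1 metric used in this subtask can be sketched in a few lines. This is a simplified reconstruction (e.g., orphan I- tags are ignored and label changes inside a span are not handled), not the official evaluation code.

```python
def bio_to_spans(tags):
    """Extract (start, end, label) spans from a BIO tag sequence.

    Simplified: an I- tag only continues the span opened by the last B- tag,
    and orphan I- tags are ignored.
    """
    spans, start = [], None
    for i, tag in enumerate(tags + ["O"]):  # sentinel flushes a trailing span
        if start is not None and not tag.startswith("I-"):
            spans.append((start, i, tags[start][2:]))
            start = None
        if tag.startswith("B-"):
            start = i
    return spans

def macro_span_f1(gold_seqs, pred_seqs, labels):
    """Exact-match span F1, macro-averaged over the span labels."""
    scores = []
    for label in labels:
        gold = {(i, s, e) for i, seq in enumerate(gold_seqs)
                for (s, e, l) in bio_to_spans(seq) if l == label}
        pred = {(i, s, e) for i, seq in enumerate(pred_seqs)
                for (s, e, l) in bio_to_spans(seq) if l == label}
        tp = len(gold & pred)
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        scores.append(2 * precision * recall / (precision + recall)
                      if precision + recall else 0.0)
    return sum(scores) / len(scores)

# Toy example: the AGENT span is found exactly, the VICTIM span is missed.
gold = [["B-AGENT", "I-AGENT", "O", "B-VICTIM"]]
pred = [["B-AGENT", "I-AGENT", "O", "O"]]
score = macro_span_f1(gold, pred, ["AGENT", "VICTIM"])
```

Because the average is taken per label, missing all spans of one rare category pulls the score down as much as failing on a frequent one, which is why macro span-F1 rewards balanced coverage of all six narrative elements.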
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7.">Conclusions</head><p>The Oppositional Thinking Analysis PAN Task presented the NLP community with two subtasks: distinguishing between critical and conspiratorial messages, and detecting elements of oppositional narratives. These subtasks are of interest to computational social scientists interested in text-based analysis of oppositional thinking <ref type="bibr" target="#b0">[1]</ref>.</p><p>A total of 82 teams participated in the challenge, and 17 teams provided working notes papers. The teams devised a range of solutions, the most successful of which exceeded the previous state-of-the-art <ref type="bibr" target="#b0">[1]</ref> for both subtasks. The new solutions have the potential to help researchers apply the domain-agnostic annotation schemes proposed in Korenčić et al. <ref type="bibr" target="#b0">[1]</ref> to new corpora.</p><p>For Subtask 1, the most successful submitted English system <ref type="bibr" target="#b55">[56]</ref> relied on augmentation using the large news conspiracy corpus LOCO <ref type="bibr" target="#b10">[11]</ref>. The best result for Spanish was achieved using a fine-tuned GPT-3.5 <ref type="bibr" target="#b40">[41]</ref>. The multilingual approach of Huertas-García et al. <ref type="bibr" target="#b39">[40]</ref> also proved competitive. The LLM-based zero-shot approach of Liu et al. <ref type="bibr" target="#b54">[55]</ref> achieved results competitive with supervised baselines on Subtask 1 and demonstrated a cost-effective way to bootstrap conspiracy vs. critical classifiers for new use-cases. 
The experiments also point to the need to create better small-scale transformer models for Spanish, as the solutions that work best on the Spanish dataset rely either on LLMs or on multilingual SLMs.</p><p>For Subtask 2, the top system in both languages relied on a combination of data augmentation by word replacement and sentence-level processing <ref type="bibr" target="#b53">[54]</ref>. Most of the other systems relied on improving the provided baseline solution by changing the underlying transformer model, or by modifying the training procedure.</p><p>There are many possible directions for creating even better-performing systems. Crafting new domain-specific SLMs would probably be beneficial, as demonstrated by the effectiveness of Covid-twitter-BERT <ref type="bibr" target="#b17">[18]</ref> on both subtasks. Bearing in mind the difficulty of creating high-quality annotated data, further work on LLM-based zero- and few-shot approaches would be beneficial for practitioners. Similarly, multilingual approaches adaptable to new languages with few annotated examples <ref type="bibr" target="#b75">[76]</ref> would also be an interesting and potentially effective direction to pursue. If the topic-agnostic annotation scheme <ref type="bibr" target="#b0">[1]</ref> used for this task is applied to create new labeled corpora, it would be interesting to use these corpora for benchmarking the approach of Gómez-Romero et al. <ref type="bibr" target="#b49">[50]</ref>, which focuses on the generalization capabilities of the models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 10</head><p>Results and rankings of the teams participating on Task 2 -token classification of span-level narrative elements, for Spanish texts. Performance metrics are: span-F1 (macro-averaged over span labels), span-precision, span-recall, and micro-averaged span-F1 <ref type="bibr" target="#b28">[29]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A Conspiracy message annotated with elements of oppositional narrative: Agents (A), Facilitators (F), Campaigners (C), Victims (V), Objectives (O), Negative Effects (E).</figDesc><graphic coords="4,77.98,522.60,451.28,69.82" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: A Critical message annotated with elements of oppositional narrative: Agents (A), Facilitators (F), Campaigners (C), Victims (V), Objectives (O), Negative Effects (E).</figDesc><graphic coords="5,77.98,77.56,451.26,108.53" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc></figDesc><table><row><cell>Language</cell><cell>Avg.</cell><cell>Std. dev</cell><cell>Min.</cell><cell>Q1</cell><cell>median</cell><cell>Q3</cell><cell>Max.</cell></row><row><cell>Spanish</cell><cell>128</cell><cell>123</cell><cell>23</cell><cell>49</cell><cell>98</cell><cell>148</cell><cell>766</cell></row><row><cell>English</cell><cell>265</cell><cell>528</cell><cell>12</cell><cell>32</cell><cell>65</cell><cell>266</cell><cell>4,108</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>Performance of top teams, in terms of the span-F1 metric <ref type="bibr" target="#b28">[29]</ref> (macro-averaged over span labels), on Subtask 2: token classification of span-level narrative elements.</figDesc><table><row><cell>English</cell><cell></cell><cell>Spanish</cell><cell></cell></row><row><cell>TEAM</cell><cell>span-F1</cell><cell>TEAM</cell><cell>span-F1</cell></row><row><cell>tulbure [54]</cell><cell>0.6279</cell><cell>tulbure [54]</cell><cell>0.6129</cell></row><row><cell>Zleon [48]</cell><cell>0.6089</cell><cell>Zleon [48]</cell><cell>0.5875</cell></row><row><cell>hinlole [53]</cell><cell>0.5886</cell><cell>AI_Fusion</cell><cell>0.5777</cell></row><row><cell>oppositional_opposition</cell><cell>0.5866</cell><cell>virmel</cell><cell>0.5616</cell></row><row><cell>AI_Fusion</cell><cell>0.5805</cell><cell>CHEEXIST</cell><cell>0.5621</cell></row><row><cell>virmel</cell><cell>0.5742</cell><cell>miqarn</cell><cell>0.5603</cell></row><row><cell>miqarn</cell><cell>0.5739</cell><cell>DSVS [46]</cell><cell>0.5529</cell></row><row><cell>TargaMarhuenda</cell><cell>0.5701</cell><cell>TargaMarhuenda</cell><cell>0.5364</cell></row><row><cell>ezio [44]</cell><cell>0.5694</cell><cell>Elias&amp;Sergio</cell><cell>0.5151</cell></row><row><cell>zhengqiaozeng [52]</cell><cell>0.5666</cell><cell>hinlole [53]</cell><cell>0.4994</cell></row><row><cell>Elias&amp;Sergio</cell><cell>0.5627</cell><cell>baseline-BETO</cell><cell>0.4934</cell></row><row><cell>DSVS [46]</cell><cell>0.5598</cell><cell></cell><cell></cell></row><row><cell>CHEEXIST</cell><cell>0.5524</cell><cell></cell><cell></cell></row><row><cell>rfenthusiasts</cell><cell>0.5479</cell><cell></cell><cell></cell></row><row><cell>ALC-UPV-JD-2</cell><cell>0.5377</cell><cell></cell><cell></cell></row><row><cell>baseline-BERT</cell><cell>0.5323</cell><cell></cell><cell></cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 9</head><label>9</label><figDesc>Results and rankings of the teams participating on Task 2 -token classification of span-level narrative elements, for English texts. Performance metrics are: span-F1 (macro-averaged over span labels), span-precision, span-recall, and micro-averaged span-F1<ref type="bibr" target="#b28">[29]</ref>.</figDesc><table><row><cell>1</cell><cell>tulbure [54]</cell><cell>0.6279</cell><cell>0.5859 0.6790</cell><cell>0.6120</cell></row><row><cell>2</cell><cell>Zleon [48]</cell><cell>0.6089</cell><cell>0.5537 0.6881</cell><cell>0.5856</cell></row><row><cell>3</cell><cell>hinlole [53]</cell><cell>0.5886</cell><cell>0.5243 0.6834</cell><cell>0.5571</cell></row><row><cell>4</cell><cell cols="2">oppositional_opposition 0.5866</cell><cell>0.5347 0.6586</cell><cell>0.5344</cell></row><row><cell>5</cell><cell>AI_Fusion</cell><cell>0.5805</cell><cell>0.5585 0.6082</cell><cell>0.5437</cell></row><row><cell>6</cell><cell>virmel</cell><cell>0.5742</cell><cell>0.5235 0.6477</cell><cell>0.5540</cell></row><row><cell>7</cell><cell>miqarn</cell><cell>0.5739</cell><cell>0.5184 0.6462</cell><cell>0.5325</cell></row><row><cell>8</cell><cell>TargaMarhuenda</cell><cell>0.5701</cell><cell>0.5161 0.6477</cell><cell>0.5437</cell></row><row><cell>9</cell><cell>ezio [44]</cell><cell>0.5694</cell><cell>0.5229 0.6340</cell><cell>0.5389</cell></row><row><cell>10</cell><cell>zhengqiaozeng [52]</cell><cell>0.5666</cell><cell>0.5122 0.6485</cell><cell>0.5421</cell></row><row><cell>11</cell><cell>Elias&amp;Sergio</cell><cell>0.5627</cell><cell>0.5149 0.6364</cell><cell>0.5248</cell></row><row><cell>12</cell><cell>DSVS [46]</cell><cell>0.5598</cell><cell>0.5332 0.6012</cell><cell>0.5287</cell></row><row><cell>13</cell><cell>CHEEXIST</cell><cell>0.5524</cell><cell>0.4767 0.6845</cell><cell>0.5299</cell></row><row><cell>14</cell><cell>rfenthusiasts</cell><cell>0.5479</cell><cell>0.5381 
0.5666</cell><cell>0.5408</cell></row><row><cell>15</cell><cell>ALC-UPV-JD-2</cell><cell>0.5377</cell><cell>0.4643 0.6562</cell><cell>0.4956</cell></row><row><cell></cell><cell>baseline-BERT</cell><cell>0.5323</cell><cell>0.4684 0.6334</cell><cell>0.4998</cell></row><row><cell>16</cell><cell>Dap_upv</cell><cell>0.5272</cell><cell>0.4617 0.6297</cell><cell>0.4973</cell></row><row><cell>17</cell><cell>aish_team [58]</cell><cell>0.5213</cell><cell>0.4181 0.7456</cell><cell>0.2571</cell></row><row><cell>18</cell><cell>SINAI [41]</cell><cell>0.4582</cell><cell>0.5553 0.4279</cell><cell>0.4571</cell></row><row><cell>19</cell><cell>Trainers</cell><cell>0.3382</cell><cell>0.5124 0.2609</cell><cell>0.2858</cell></row><row><cell>20</cell><cell>nlpln [55]</cell><cell>0.3339</cell><cell>0.5286 0.3303</cell><cell>0.2710</cell></row><row><cell>21</cell><cell>ROCurve</cell><cell>0.2996</cell><cell>0.3154 0.3031</cell><cell>0.3425</cell></row><row><cell>22</cell><cell>TokoAI</cell><cell>0.2760</cell><cell>0.1870 0.6119</cell><cell>0.2677</cell></row><row><cell>23</cell><cell>DiTana</cell><cell>0.2756</cell><cell>0.5259 0.1947</cell><cell>0.2599</cell></row><row><cell>24</cell><cell>TheGymNerds</cell><cell>0.2070</cell><cell>0.2076 0.2127</cell><cell>0.2329</cell></row><row><cell>25</cell><cell>epistemologos</cell><cell>0.1709</cell><cell>0.1286 0.3244</cell><cell>0.1201</cell></row><row><cell>26</cell><cell>theateam</cell><cell>0.1503</cell><cell>0.1401 0.1652</cell><cell>0.0387</cell></row><row><cell>27</cell><cell>LaDolceVita</cell><cell>0.0726</cell><cell>0.2040 0.0453</cell><cell>0.0630</cell></row><row><cell>28</cell><cell>kaprov [57]</cell><cell>0.0150</cell><cell>0.0261 0.0165</cell><cell>0.0600</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/dkorenci/pan-clef-2024-oppositional</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://github.com/dkorenci/pan-clef-2024-oppositional</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">https://huggingface.co/models</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">https://www.anthropic.com/claude</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://platform.openai.com/docs/guides/embeddings</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>The shared task on Oppositional Thinking Analysis was organised in the framework of the XAI-DisInfodemics: eXplainable AI for disinformation and conspiracy detection during infodemics (MICIN PLEC2021-007681) project, funded by MCIN/AEI/10.13039/501100011033 and by the European Union NextGenerationEU/PRTR. The work of Damir Korenčić and Berta Chulvi was conducted while at Universitat Politècnica de València.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0" />			</div>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">What distinguishes conspiracy from critical narratives? A computational analysis of oppositional discourse</title>
		<author>
			<persName><forename type="first">D</forename><surname>Korenčić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Bonet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Taulé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Toselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<idno type="DOI">10.1111/exsy.13671</idno>
	</analytic>
	<monogr>
		<title level="j">Expert Systems</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">What are conspiracy theories? A definitional approach to their correlates, consequences, and communication</title>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Douglas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Sutton</surname></persName>
		</author>
		<idno type="DOI">10.1146/annurev-psych-032420-031329</idno>
		<ptr target="https://doi.org/10.1146/annurev-psych-032420-031329" />
	</analytic>
	<monogr>
		<title level="j">Annual Review of Psychology</title>
		<imprint>
			<biblScope unit="volume">74</biblScope>
			<biblScope unit="page" from="271" to="298" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">An integrative theory of intergroup relations</title>
		<author>
			<persName><forename type="first">H</forename><surname>Tajfel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Turner</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Psychology of intergroup relations</title>
		<imprint>
			<biblScope unit="page" from="33" to="47" />
			<date type="published" when="1979">1979</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Social identity theory: past achievements, current problems and future challenges</title>
		<author>
			<persName><forename type="first">R</forename><surname>Brown</surname></persName>
		</author>
		<idno type="DOI">10.1002/1099-0992(200011/12)30:6&lt;745::AID-EJSP24&gt;3.0.CO;2-O</idno>
	</analytic>
	<monogr>
		<title level="j">European Journal of Social Psychology</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="page" from="745" to="778" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Hogg</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-29869-6_1</idno>
		<title level="m">Social identity theory</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Rabbit hole syndrome: Inadvertent, accelerating, and entrenched commitment to conspiracy beliefs</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Sutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Douglas</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.copsyc.2022.101462</idno>
		<ptr target="https://doi.org/10.1016/j.copsyc.2022.101462" />
	</analytic>
	<monogr>
		<title level="j">Current Opinion in Psychology</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page">101462</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A tribal mind: Beliefs that signal group identity or commitment</title>
		<author>
			<persName><forename type="first">E</forename><surname>Funkhouser</surname></persName>
		</author>
		<idno type="DOI">10.1111/mila.12326</idno>
		<ptr target="https://onlinelibrary.wiley.com/doi/pdf/10.1111/mila.12326" />
	</analytic>
	<monogr>
		<title level="j">Mind &amp; Language</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="page" from="444" to="464" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Beyond &quot;monologicality&quot;? exploring conspiracist worldviews</title>
		<author>
			<persName><forename type="first">B</forename><surname>Franks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bangerter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Noort</surname></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2017.00861</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/fpsyg.2017.00861" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Conspiracy theories in online environments: An interdisciplinary literature review and agenda for future research</title>
		<author>
			<persName><forename type="first">D</forename><surname>Mahl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Schäfer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zeng</surname></persName>
		</author>
		<idno type="DOI">10.1177/14614448221075759</idno>
		<ptr target="https://doi.org/10.1177/14614448221075759" />
	</analytic>
	<monogr>
		<title level="j">New Media &amp; Society</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<title level="m" type="main">Conspiracy Theories are for Losers</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">E</forename><surname>Uscinski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Parent</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Torres</surname></persName>
		</author>
		<ptr target="https://papers.ssrn.com/abstract=1901755" />
		<imprint>
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
	<note>APSA 2011 Annual Meeting Paper</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Loco: The 88-million-word language of conspiracy corpus</title>
		<author>
			<persName><forename type="first">A</forename><surname>Miani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Hills</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bangerter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behavior Research Methods</title>
		<imprint>
			<biblScope unit="page" from="1" to="24" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Coco: an annotated twitter dataset of covid-19 conspiracy theories</title>
		<author>
			<persName><forename type="first">J</forename><surname>Langguth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Schroeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Filkuková</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brenner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Phillips</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Pogorelov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Computational Social Science</title>
		<imprint>
			<biblScope unit="page" from="1" to="42" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">FakeNews: Corona Virus and Conspiracies Multimedia Analysis Task at MediaEval</title>
		<author>
			<persName><forename type="first">K</forename><surname>Pogorelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Schroeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brenner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Langguth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes Proceedings of the MediaEval 2021 Workshop</title>
				<meeting><address><addrLine>Bergen, Norway and Online</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Combining tweets and connections graph for fakenews detection at mediaeval</title>
		<author>
			<persName><forename type="first">K</forename><surname>Pogorelov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">T</forename><surname>Schroeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brenner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Maulana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Langguth</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the MediaEval 2022 Workshop</title>
				<meeting>the MediaEval 2022 Workshop<address><addrLine>Bergen, Norway and Online</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-01-12">12-13 January 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Detecting covid-19-related conspiracy theories in tweets</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Peskine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Alfarano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Harrando</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Papotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">MediaEval 2021, MediaEval Benchmarking Initiative for Multimedia Evaluation Workshop</title>
				<imprint>
			<date type="published" when="2021-12-13">13-15 December 2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Detection of COVID-19-Related Conspiracy Theories in Tweets using Transformer-Based Models and Node Embedding Techniques</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Peskine</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Papotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Troncy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes Proceedings of the MediaEval 2022 Workshop</title>
				<meeting><address><addrLine>Bergen, Norway and Online</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Tackling Covid-19 Conspiracies on Twitter using BERT Ensembles, GPT-3 Augmentation, and Graph NNs</title>
		<author>
			<persName><forename type="first">D</forename><surname>Korenčić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Grubišić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">H</forename><surname>Toselli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chulvi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<ptr target="https://2022.multimediaeval.com/paper8969.pdf" />
	</analytic>
	<monogr>
		<title level="m">Working Notes Proceedings of the MediaEval 2022 Workshop</title>
				<meeting><address><addrLine>Bergen, Norway and Online</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Covid-twitter-bert: A natural language processing model to analyse covid-19 content on twitter</title>
		<author>
			<persName><forename type="first">M</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Salathé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">E</forename><surname>Kummervold</surname></persName>
		</author>
		<idno type="DOI">10.3389/frai.2023.1023281</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/frai.2023.1023281" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Detection of conspiracy propagators using psycho-linguistic characteristics</title>
		<author>
			<persName><forename type="first">A</forename><surname>Giachanou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ghanem</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Rosso</surname></persName>
		</author>
		<idno type="DOI">10.1177/0165551520985486</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Information Science</title>
		<imprint>
			<biblScope unit="volume">49</biblScope>
			<biblScope unit="page" from="3" to="17" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Hunting conspiracy theories during the covid-19 pandemic</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Moffitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">M</forename><surname>Carley</surname></persName>
		</author>
		<idno type="DOI">10.1177/20563051211043212</idno>
	</analytic>
	<monogr>
		<title level="j">Social Media + Society</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Personality traits and echo chambers on facebook</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bessi</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.chb.2016.08.016</idno>
		<ptr target="https://www.sciencedirect.com/science/article/pii/S0747563216305817" />
	</analytic>
	<monogr>
		<title level="j">Computers in Human Behavior</title>
		<imprint>
			<biblScope unit="volume">65</biblScope>
			<biblScope unit="page" from="319" to="324" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Topic Modeling Reveals Distinct Interests within an Online Conspiracy Forum</title>
		<author>
			<persName><forename type="first">C</forename><surname>Klein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Clutton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Polito</surname></persName>
		</author>
		<idno type="DOI">10.3389/fpsyg.2018.00189</idno>
		<ptr target="https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00189" />
	</analytic>
	<monogr>
		<title level="j">Frontiers in Psychology</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The Government Spies Using Our Webcams&apos;: The Language of Conspiracy Theories in Online Discussions</title>
		<author>
			<persName><forename type="first">M</forename><surname>Samory</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mitra</surname></persName>
		</author>
		<idno type="DOI">10.1145/3274421</idno>
		<ptr target="https://dl.acm.org/doi/10.1145/3274421" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the ACM on Human-Computer Interaction</title>
				<meeting>the ACM on Human-Computer Interaction</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="1" to="24" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Investigating Memorization of Conspiracy Theories in Text Generation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Saxon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">Y</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2021.findings-acl.416</idno>
		<ptr target="https://aclanthology.org/2021.findings-acl.416" />
	</analytic>
	<monogr>
		<title level="m">Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, Association for Computational Linguistics</title>
				<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="4718" to="4729" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Mapping the Narrative Ecosystem of Conspiracy Theories in Online Anti-vaccination Discussions</title>
		<author>
			<persName><forename type="first">J</forename><surname>Introne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Korsunska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Krsova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1145/3400806.3400828</idno>
		<ptr target="https://doi.org/10.1145/3400806.3400828" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Social Media and Society</title>
				<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="184" to="192" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Which side are you on? Insider-Outsider classification in conspiracy-theoretic social media</title>
		<author>
			<persName><forename type="first">P</forename><surname>Holur</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Shahsavari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tangherlini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Roychowdhury</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.acl-long.341</idno>
		<ptr target="https://aclanthology.org/2022.acl-long.341" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics</title>
		<title level="s">Long Papers</title>
		<meeting>the 60th Annual Meeting of the Association for Computational Linguistics<address><addrLine>Dublin, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="4975" to="4987" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Awake together: Sociopsychological processes of engagement in conspiracist communities</title>
		<author>
			<persName><forename type="first">P</forename><surname>Wagner-Egger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bangerter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Delouvée</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Dieguez</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.copsyc.2022.101417</idno>
		<ptr target="https://doi.org/10.1016/j.copsyc.2022.101417" />
	</analytic>
	<monogr>
		<title level="j">Current Opinion in Psychology</title>
		<imprint>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page">101417</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">The Matthews correlation coefficient (MCC) is more reliable than balanced accuracy, bookmaker informedness, and markedness in two-class confusion matrix evaluation</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chicco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tötsch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jurman</surname></persName>
		</author>
		<idno type="DOI">10.1186/s13040-021-00244-z</idno>
		<ptr target="https://doi.org/10.1186/s13040-021-00244-z" />
	</analytic>
	<monogr>
		<title level="j">BioData Mining</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">13</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Fine-Grained Analysis of Propaganda in News Articles</title>
		<author>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barrón-Cedeño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Petrov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/D19-1565</idno>
		<ptr target="https://aclanthology.org/D19-1565" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics</title>
				<meeting>the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), Association for Computational Linguistics<address><addrLine>Hong Kong, China</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="5636" to="5646" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation</title>
		<author>
			<persName><forename type="first">D</forename><surname>Chicco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Jurman</surname></persName>
		</author>
		<idno type="DOI">10.1186/s12864-019-6413-7</idno>
		<ptr target="https://doi.org/10.1186/s12864-019-6413-7" />
	</analytic>
	<monogr>
		<title level="j">BMC Genomics</title>
		<imprint>
			<biblScope unit="volume">21</biblScope>
			<biblScope unit="page">6</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding</title>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-W</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/N19-1423</idno>
		<ptr target="https://aclanthology.org/N19-1423" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies</title>
		<title level="s">Long and Short Papers</title>
		<meeting>the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies<address><addrLine>Minneapolis, Minnesota</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="4171" to="4186" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b31">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Cañete</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chaperon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Fuentes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-H</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pérez</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2308.02976</idno>
		<ptr target="http://arxiv.org/abs/2308.02976" />
		<title level="m">Spanish Pre-trained BERT Model and Evaluation Data</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<monogr>
		<title level="m" type="main">An Overview of Multi-Task Learning in Deep Neural Networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ruder</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1706.05098</idno>
		<ptr target="http://arxiv.org/abs/1706.05098" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">SkillSpan: Hard and soft skill extraction from English job postings</title>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Sonniks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Plank</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.naacl-main.366</idno>
		<ptr target="https://aclanthology.org/2022.naacl-main.366" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics</title>
				<meeting>the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Association for Computational Linguistics<address><addrLine>Seattle, United States</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="4962" to="4984" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<monogr>
		<title level="m" type="main">A survey of large language models</title>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">X</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Min</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-Y</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J.-R</forename><surname>Wen</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2303.18223" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Ensemble methods in machine learning</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">G</forename><surname>Dietterich</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Multiple Classifier Systems</title>
				<meeting><address><addrLine>Berlin Heidelberg; Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="1" to="15" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Text data augmentation for deep learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Shorten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">M</forename><surname>Khoshgoftaar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Furht</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Big Data</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page">101</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">How multilingual is multilingual BERT?</title>
		<author>
			<persName><forename type="first">T</forename><surname>Pires</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Schlinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Garrette</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/P19-1493</idno>
		<ptr target="https://aclanthology.org/P19-1493" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Korhonen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Traum</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Màrquez</surname></persName>
		</editor>
		<meeting>the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics<address><addrLine>Florence, Italy</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4996" to="5001" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Attention is all you need</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vaswani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Shazeer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Parmar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">N</forename><surname>Gomez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">U</forename><surname>Kaiser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><forename type="middle">V</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="volume">30</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">Small Language Models and Large Language Models in Oppositional thinking analysis: Capabilities and Biases and Challenges</title>
		<author>
			<persName><forename type="first">Á</forename><surname>Huertas-García</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Martí-González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Muñoz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ambite</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">SINAI at PAN 2024 Oppositional Thinking Analysis: Exploring the fine-tuning performance of LLMs</title>
		<author>
			<persName><forename type="first">M</forename><surname>Vallecillo-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Martín-Valdivia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montejo-Ráez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Izacard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-A</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Rozière</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hambro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Azhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2302.13971" />
		<title level="m">Llama: Open and efficient foundation language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">Language models are few-shot learners</title>
		<author>
			<persName><forename type="first">T</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">33</biblScope>
			<biblScope unit="page" from="1877" to="1901" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">An Oppositional Thinking Analysis Method Using BERT-based Model with BiGRU</title>
		<author>
			<persName><forename type="first">Q</forename><surname>Hu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Peng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">On the properties of neural machine translation: Encoder-decoder approaches</title>
		<author>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Van Merriënboer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Bahdanau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bengio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation</title>
				<meeting>SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation</meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="103" to="111" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">2024: Ensemble Approach of Large Language Models for Analyzing Conspiracy Theories Against Critical Thinking Narratives</title>
		<author>
			<persName><forename type="first">S</forename><surname>Damian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Herrera-Gonzalez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Vazquez-Santana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Calvo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Felipe-Riverón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yáñez-Márquez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">A Study on NLP Model Ensembles and Data Augmentation Techniques for Separating Critical Thinking from Conspiracy Theories in English Texts</title>
		<author>
			<persName><forename type="first">I</forename><surname>Sánchez-Hermosilla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Panizo Lledot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Camacho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Conspiracy theory detection using transformers with multi-task and multilingual approaches</title>
		<author>
			<persName><forename type="first">L</forename><surname>Zrnić</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b48">
	<analytic>
		<title level="a" type="main">Towards a Computational Framework for Distinguishing Critical and Conspiratorial Texts by Elaborating on the Context and Argumentation with LLMs</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sahitaj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sahitaj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mohtaj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Möller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Schmitt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b49">
	<analytic>
		<title level="a" type="main">Detection of conspiracy-related messages in Telegram with anonymized named entities</title>
		<author>
			<persName><forename type="first">J</forename><surname>Gómez-Romero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>González-Silot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Montoro-Montarroso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Molina-Solana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Martínez Cámara</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b50">
	<analytic>
		<title level="a" type="main">Binary Battle: Leveraging ML and TL Models to Distinguish between Conspiracy Theories and Critical Thinking</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mahesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Divakaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Girish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lakshmaiah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b51">
	<analytic>
		<title level="a" type="main">A Conspiracy Theory Text Detection Method based on RoBERTa and XLM-RoBERTa Models</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Huang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b52">
	<analytic>
		<title level="a" type="main">Conspiracy Theory Text Classification Based on CT-BERT and BETO Models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Sun</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b53">
	<analytic>
		<title level="a" type="main">Conspiracy vs critical thinking using an ensemble of transformers with data augmentation techniques</title>
		<author>
			<persName><forename type="first">A</forename><surname>Tulbure</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Coll Ardanuy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">G S</forename><surname>Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b54">
	<analytic>
		<title level="a" type="main">An Approach to Classifying Conspiratorial and Critical Public Health Narratives</title>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Cao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b55">
	<analytic>
		<title level="a" type="main">IUCL at PAN 2024: Using Data Augmentation for Conspiracy Theory Detection</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mhalgi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pulipaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kübler</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b56">
	<analytic>
		<title level="a" type="main">Oppositional Thinking Analysis: Conspiracy Theories vs Critical Thinking Narratives</title>
		<author>
			<persName><forename type="first">P</forename><surname>Balasundaram</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Swaminathan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Sampath</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Km</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b57">
	<analytic>
		<title level="a" type="main">Detection of Conspiracy vs. Critical Narratives and Their Elements using NLP</title>
		<author>
			<persName><forename type="first">A</forename><surname>Albladi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seals</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b58">
	<analytic>
		<title level="a" type="main">Using BERT to Identify Conspiracy Theories</title>
		<author>
			<persName><forename type="first">D</forename><surname>Espinosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ricárdez-Vázquez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b59">
	<analytic>
		<title level="a" type="main">DeBERTa: Decoding-enhanced BERT with disentangled attention</title>
		<author>
			<persName><forename type="first">P</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=XPZIaotutsD" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations</title>
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b60">
	<monogr>
		<title level="m" type="main">ELECTRA: Pre-training text encoders as discriminators rather than generators</title>
		<author>
			<persName><forename type="first">K</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M.-T</forename><surname>Luong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2003.10555</idno>
		<ptr target="https://arxiv.org/abs/2003.10555" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b61">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1907.11692</idno>
		<title level="m">RoBERTa: A robustly optimized BERT pretraining approach</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b62">
	<monogr>
		<title level="m" type="main">Unsupervised cross-lingual representation learning at scale</title>
		<author>
			<persName><forename type="first">A</forename><surname>Conneau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Khandelwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Chaudhary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Wenzek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Guzmán</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1911.02116</idno>
		<ptr target="https://arxiv.org/abs/1911.02116" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b63">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Majumder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Wei</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2402.05672</idno>
		<ptr target="https://arxiv.org/abs/2402.05672" />
		<title level="m">Multilingual E5 text embeddings: A technical report</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b64">
	<analytic>
		<title level="a" type="main">MarIA: Spanish language models</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gutiérrez-Fandiño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Armengol-Estapé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pàmies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Llop-Palao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Silveira-Ocampo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">P</forename><surname>Carrino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Armentano-Oller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rodriguez-Penagos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gonzalez-Agirre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Villegas</surname></persName>
		</author>
		<idno type="DOI">10.26342/2022-68-3</idno>
		<ptr target="https://doi.org/10.26342/2022-68-3" />
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="39" to="60" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b65">
	<monogr>
		<title level="m" type="main">Chain-of-thought prompting elicits reasoning in large language models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuurmans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bosma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ichter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2201.11903</idno>
		<ptr target="https://arxiv.org/abs/2201.11903" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b66">
	<monogr>
		<author>
			<persName><surname>DeepSeek-AI</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Dengr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ruan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ding</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Xin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Qu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qiu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Guan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">L</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Pei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yuan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">L</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>An</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><forename type="middle">Q</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Bi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Nie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Song</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">K</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">X</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">X</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Piao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>You</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><forename type="middle">Z</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sha</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Hao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Shao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Xie</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2405.04434</idno>
		<ptr target="https://arxiv.org/abs/2405.04434" />
		<title level="m">DeepSeek-V2: A strong, economical, and efficient mixture-of-experts language model</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b67">
	<analytic>
		<title level="a" type="main">Efficient Pre-Training of a Spanish Language Model using Perplexity Sampling</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Rosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">G</forename><surname>Ponferrada</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Romero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Villegas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">G D P</forename><surname>Salas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Grandury</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Procesamiento del Lenguaje Natural</title>
		<imprint>
			<biblScope unit="volume">68</biblScope>
			<biblScope unit="page" from="13" to="23" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b68">
	<monogr>
		<title level="m" type="main">RoBERTuito: a pre-trained language model for social media text in Spanish</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Pérez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Furman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Alemany</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Luque</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2111.09453" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b69">
	<analytic>
		<title level="a" type="main">Text categorization with support vector machines: Learning with many relevant features</title>
		<author>
			<persName><forename type="first">T</forename><surname>Joachims</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning: ECML-98</title>
				<editor>
			<persName><forename type="first">C</forename><surname>Nédellec</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Rouveirol</surname></persName>
		</editor>
		<meeting><address><addrLine>Berlin, Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1998">1998</date>
			<biblScope unit="page" from="137" to="142" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b70">
	<analytic>
		<title level="a" type="main">Random forests</title>
		<author>
			<persName><forename type="first">L</forename><surname>Breiman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine learning</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="5" to="32" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b71">
	<analytic>
		<title level="a" type="main">DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing</title>
		<author>
			<persName><forename type="first">P</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chen</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=sE7-XhLxHA" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Learning Representations</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b72">
	<monogr>
		<title level="m" type="main">DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter</title>
		<author>
			<persName><forename type="first">V</forename><surname>Sanh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Debut</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chaumond</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wolf</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1910.01108" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b73">
	<analytic>
		<title level="a" type="main">Conditional random fields: Probabilistic models for segmenting and labeling sequence data</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Lafferty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mccallum</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">C N</forename><surname>Pereira</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Eighteenth International Conference on Machine Learning, ICML &apos;01</title>
				<meeting>the Eighteenth International Conference on Machine Learning, ICML &apos;01<address><addrLine>San Francisco, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers Inc</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="282" to="289" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b74">
	<monogr>
		<title level="m" type="main">ALBERT: A lite BERT for self-supervised learning of language representations</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Lan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Goodman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Gimpel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Soricut</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/1909.11942" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b75">
	<analytic>
		<title level="a" type="main">Don&apos;t stop fine-tuning: On training regimes for few-shot crosslingual transfer with multilingual language models</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">D</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Vulić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Glavaš</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/2022.emnlp-main.736</idno>
		<ptr target="https://aclanthology.org/2022.emnlp-main.736" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics</title>
				<editor>
			<persName><forename type="first">Y</forename><surname>Goldberg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Z</forename><surname>Kozareva</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</editor>
		<meeting>the 2022 Conference on Empirical Methods in Natural Language Processing, Association for Computational Linguistics<address><addrLine>Abu Dhabi, United Arab Emirates</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="10725" to="10742" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
