<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Detailed Overview of LeQua@CLEF 2022: Learning to Quantify</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Esuli</surname></persName>
							<email>andrea.esuli@isti.cnr.it</email>
						</author>
						<author>
							<persName><forename type="first">Alejandro</forename><surname>Moreo</surname></persName>
							<email>alejandro.moreo@isti.cnr.it</email>
						</author>
						<author>
							<persName><forename type="first">Fabrizio</forename><surname>Sebastiani</surname></persName>
							<email>fabrizio.sebastiani@isti.cnr.it</email>
						</author>
						<author>
							<persName><forename type="first">Gianluca</forename><surname>Sperduti</surname></persName>
							<email>gianluca.sperduti@isti.cnr.it</email>
						</author>
						<author>
							<affiliation key="aff0">
								<orgName type="department">Istituto di Scienza e Tecnologie dell&apos;Informazione Consiglio Nazionale delle Ricerche</orgName>
								<address>
									<postCode>56124</postCode>
									<settlement>Pisa</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff1">
								<orgName type="department">Evaluation Forum</orgName>
								<address>
									<addrLine>September 5-8</addrLine>
									<postCode>2022</postCode>
									<settlement>Bologna</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Detailed Overview of LeQua@CLEF 2022: Learning to Quantify</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">74334A6FF4880CC471828CEEB1E23FDA</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T03:32+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Quantification</term>
					<term>Learning to quantify</term>
					<term>Supervised class prevalence estimation</term>
					<term>Prior estimation</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>LeQua 2022 is a new lab for the evaluation of methods for "learning to quantify" in textual datasets, i.e., for training predictors of the relative frequencies of the classes of interest 𝒴 = {𝑦1, ..., 𝑦𝑛} in sets of unlabelled textual documents. While these predictions could be easily achieved by first classifying all documents via a text classifier and then counting the numbers of documents assigned to the classes, a growing body of literature has shown this approach to be suboptimal, and has proposed better methods. The goal of this lab is to provide a setting for the comparative evaluation of methods for learning to quantify, both in the binary setting and in the single-label multiclass setting; this is the first time that an evaluation exercise solely dedicated to quantification is organized. For both the binary setting and the single-label multiclass setting, data were provided to participants both in ready-made vector form and in raw document form. In this overview article we describe the structure of the lab, we report the results obtained by the participants in the four proposed tasks and subtasks, and we comment on the lessons that can be learned from these results.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In a number of applications involving classification, the final goal is not determining which class (or classes) individual unlabelled items (e.g., textual documents, images, or other) belong to, but estimating the prevalence (or "relative frequency", or "prior probability", or "prior") of each class 𝑦 ∈ 𝒴 = {𝑦 1 , ..., 𝑦 𝑛 } in the unlabelled data. Estimating class prevalence values for unlabelled data via supervised learning is known as learning to quantify (LQ) (or quantification, or supervised prevalence estimation) <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>.</p><p>LQ has several applications in fields (such as the social sciences, political science, market research, epidemiology, and ecological modelling) which are inherently interested in characterising aggregations of individuals, rather than the individuals themselves; disciplines like the ones above are usually not interested in finding the needle in the haystack, but in characterising the haystack. For instance, in most applications of tweet sentiment classification we are not concerned with estimating the true class (e.g., Positive, or Negative, or Neutral) of individual tweets. 
Rather, we are concerned with estimating the relative frequency of these classes in the set of unlabelled tweets under study; or, put in another way, we are interested in estimating as accurately as possible the true distribution of tweets across the classes.</p><p>It is by now well known that performing quantification by classifying each unlabelled instance and then counting the instances that have been attributed to each class (the "classify and count" method) usually leads to suboptimal quantification accuracy (see e.g., <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>); this may be seen as a direct consequence of "Vapnik's principle" <ref type="bibr" target="#b10">[11]</ref>, which states: "If you possess a restricted amount of information for solving some problem, try to solve the problem directly and never solve a more general problem as an intermediate step. 
It is possible that the available information is sufficient for a direct solution but is insufficient for solving a more general intermediate problem."</p><p>In our case, the problem to be solved directly is quantification, while the more general intermediate problem is classification.</p><p>One reason why "classify and count" is suboptimal is that many application scenarios suffer from distribution shift, the phenomenon according to which the distribution across the classes 𝑦 1 , ..., 𝑦 𝑛 in the sample (i.e., set) 𝜎 of unlabelled documents may substantially differ from the distribution across the classes in the labelled training set 𝐿; distribution shift is one example of dataset shift <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>, the phenomenon according to which the joint distributions 𝑝 𝐿 (x, 𝑦) and 𝑝 𝜎 (x, 𝑦) differ. The presence of distribution shift means that the well-known IID assumption, on which most learning algorithms for training classifiers hinge, does not hold. In turn, this means that "classify and count" will perform suboptimally on sets of unlabelled items that exhibit distribution shift with respect to the training set, and that the higher the amount of shift, the worse we can expect "classify and count" to perform.</p><p>As a result of the suboptimality of the "classify and count" method, LQ has slowly evolved as a task in its own right, different (in goals, methods, techniques, and evaluation measures) from classification <ref type="bibr" target="#b1">[2]</ref>. 
The research community has investigated methods to correct the biased prevalence estimates of general-purpose classifiers <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5]</ref>, supervised learning methods specially tailored to quantification <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10]</ref>, evaluation measures for quantification <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>, and protocols for carrying out this evaluation. Specific applications of LQ have also been investigated, such as sentiment quantification <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b16">17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>, quantification in networked environments <ref type="bibr" target="#b19">[20]</ref>, or quantification for data streams <ref type="bibr" target="#b20">[21]</ref>. For the near future it is easy to foresee that the interest in LQ will increase, due (a) to the increased awareness that "classify and count" is a suboptimal solution when it comes to prevalence estimation, and (b) to the fact that, with larger and larger quantities of data becoming available and requiring interpretation, in more and more scenarios we will only be able to afford to analyse these data at the aggregate level rather than individually.</p></div>
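The contrast between naive "classify and count" and a shift-corrected estimate can be sketched as follows. This is an illustrative sketch only (the function names and the tpr/fpr values are our own, not code from the lab), showing the classic adjusted-count correction in the binary case:

```python
import numpy as np

def classify_and_count(predicted_labels, classes):
    """Naive "classify and count" (CC): the estimated prevalence of each
    class is simply the fraction of items the classifier assigns to it."""
    predicted_labels = np.asarray(predicted_labels)
    return np.array([np.mean(predicted_labels == c) for c in classes])

def adjusted_classify_and_count(cc_positive, tpr, fpr):
    """Binary adjusted count: correct the CC estimate by inverting the
    classifier's true/false positive rates (measured on held-out data),
    then clip to [0, 1].  Since cc = tpr*p + fpr*(1-p), we solve for p."""
    p = (cc_positive - fpr) / (tpr - fpr)
    return float(np.clip(p, 0.0, 1.0))

# toy run: 6 predicted binary labels
preds = [1, 0, 1, 1, 0, 1]
cc = classify_and_count(preds, classes=[0, 1])     # [1/3, 2/3]
p_adj = adjusted_classify_and_count(cc[1], tpr=0.9, fpr=0.2)
```

Under distribution shift, the raw CC estimate inherits the classifier's bias; the correction step removes it to the extent that tpr and fpr are estimated reliably.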
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The rationale for LeQua 2022</head><p>The LeQua 2022 lab (https://lequa2022.github.io/) at CLEF 2022 has a "shared task" format; it is a new lab, in two important senses:</p><p>• No labs on LQ have been organized before at CLEF conferences.</p><p>• Even outside the CLEF conference series, quantification has surfaced only episodically in previous shared tasks. The first such shared task was SemEval 2016 Task 4 "Sentiment Analysis in Twitter" <ref type="bibr" target="#b21">[22]</ref>, which comprised a binary quantification subtask and an ordinal quantification subtask (these two subtasks were offered again in the 2017 edition). Quantification also featured in the Dialogue Breakdown Detection Challenge <ref type="bibr" target="#b22">[23]</ref>, in the Dialogue Quality subtasks of the NTCIR-14 Short Text Conversation task <ref type="bibr" target="#b23">[24]</ref>, and in the NTCIR-15 Dialogue Evaluation task <ref type="bibr" target="#b24">[25]</ref>. However, quantification was never the real focus of these tasks. For instance, the real focus of the tasks described by Nakov et al. <ref type="bibr" target="#b21">[22]</ref> was sentiment analysis on Twitter data, to the point that almost all participants in the quantification subtasks used the trivial "classify and count" method, and focused, instead of optimising the quantification component, on optimising the sentiment analysis component, or on picking the best-performing learner for training the classifiers used by "classify and count". Similar considerations hold for the tasks discussed in <ref type="bibr" target="#b22">[23,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b24">25]</ref>. This is the first time that a shared task whose explicit focus is quantification is organized. 
A lab on this topic was thus sorely needed, because the topic has great applicative potential, and because a lot of research on this topic has been carried out without the benefit of the systematic experimental comparisons that only shared tasks allow.</p><p>We expect the quantification community to benefit significantly from this lab. One of the reasons is that this community is spread across different fields, as also witnessed by the fact that work on LQ has been published in a scattered way across different areas, e.g., information retrieval <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b15">16]</ref>, data mining <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b7">8]</ref>, machine learning <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27]</ref>, statistics <ref type="bibr" target="#b27">[28]</ref>, or in the areas to which these techniques get applied <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref>. In their papers, authors often use as baselines only the algorithms from their own fields; one of the goals of this lab was thus to pull together people from different walks of life, and to generate cross-fertilisation among the respective sub-communities.</p><p>While quantification is a general-purpose machine learning / data mining task that can be applied to any type of data, in this lab we focus on its application to data consisting of textual documents.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Setting up LeQua 2022</head><p>In quantification, a data item (usually represented as x) is the individual unit of information; for instance, a textual document, an image, a video, are examples of data items. In LeQua 2022, as data items we use textual documents (and, more specifically, product reviews). A document x has a label, i.e., it belongs to a certain class 𝑦 ∈ 𝒴 = {𝑦 1 , ..., 𝑦 𝑛 }; in this case we say that 𝑦 is the label of x. In LeQua 2022, classes are either merchandise classes for products, or sentiment classes for reviews (see Section 3.4 for more).</p><p>Some documents are such that their label is known to the quantification algorithm, and are thus called labelled items; we typically use them as training examples for the quantifier-training algorithm. Some other documents are such that their label is unknown to the quantifier-training algorithm and to the trained quantifier, and are thus called unlabelled items; for testing purposes we use documents whose label we hide to the quantifier. Unlike a classifier, a quantifier must not predict labels for individual documents, but must predict prevalence values for samples (i.e., sets) of unlabelled documents; a prevalence value for a class 𝑦 and a sample 𝜎 is a number in [0,1] such that the prevalence values for the classes in 𝒴 = {𝑦 1 , ..., 𝑦 𝑛 } sum up to 1. Note that when, in the following, we use the term "label", we always refer to the label of an individual document (and not of a sample of documents; samples do not have labels, but prevalence values for classes).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Tasks</head><p>Two tasks (T1 and T2) were offered within LeQua 2022, each admitting two subtasks (A and B).</p><p>In Task T1 (the vector task) participant teams were provided with vectorial representations of the (training / development / test) documents. This task was offered so as to appeal to those participants who are not into text learning, since participants in this task did not need to deal with text preprocessing issues. Additionally, this task allowed the participants to concentrate on optimising their quantification methods, rather than spending time on optimising the process for producing vectorial representations of the documents.</p><p>In Task T2 (the raw documents task), participant teams were provided with the raw (training / development / test) documents. This task was offered so as to appeal to those participants who wanted to deploy end-to-end systems, or to those who wanted to also optimise the process for producing vectorial representations of the documents (possibly tailored to the quantification task).</p><p>The two subtasks of both tasks were the binary quantification subtask (T1A and T2A) and the single-label multiclass quantification subtask (T1B and T2B); in both subtasks each document belongs to only one of the classes of interest 𝑦 1 , ..., 𝑦 𝑛 , with 𝑛 = 2 in T1A and T2A and 𝑛 &gt; 2 in T1B and T2B.</p><p>The four subtasks conceptually form a 2×2 grid, as illustrated in the following table.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Binary Multiclass (by sentiment)</head><p>(by topic)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Vector</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>T1A T1B</head><p>Raw Documents T2A T2B</p><p>For each subtask in { T1A, T1B, T2A, T2B }, participant teams were required not to use (training / development / test) documents other than those provided for that subtask. In particular, participants were explicitly advised against using any document from either T2A or T2B in order to solve either T1A or T1B.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">The evaluation protocol</head><p>As the protocol for generating the test samples on which the quantifiers will be tested we adopt the so-called artificial prevalence protocol (APP), which is by now a standard protocol for generating the datasets to be used in the evaluation of quantifiers. </p><formula xml:id="formula_0">(𝑦 𝑛 )) such that 𝑝 𝜎 (𝑦 𝑖 ) ∈ [0, 1] for all 𝑦 𝑖 ∈ 𝒴 and ∑︀ 𝑦 𝑖 ∈𝒴 𝑝 𝜎 (𝑦 𝑖 ) = 1.</formula><p>For this we use the Kraemer algorithm <ref type="bibr" target="#b30">[31]</ref>, whose goal is that of sampling in such a way that all legitimate class distributions are picked with equal probability. For each vector thus picked we randomly generate a test sample. We use this method for both the binary case and the multiclass case.</p><p>Note that this method is sharply different from traditional instantiations of the APP (as used, say, in <ref type="bibr" target="#b15">[16,</ref><ref type="bibr" target="#b31">32,</ref><ref type="bibr" target="#b32">33,</ref><ref type="bibr" target="#b18">19]</ref>), in which one 1. Chooses an integer 𝑃 ; this determines a "grid" 𝑔 1 of (𝑃 + 1) class prevalence values 𝑥/𝑃 , for 𝑥 ∈ {0, ..., 𝑃 }. For instance, given 𝑃 = 20, this determines the grid 𝑔 1 = {0.00, 0.05, ..., 0.95, 1.00} of 21 class prevalence values;</p><p>2. Generates the grid 𝑔 2 of the 𝐾(𝑃, 𝑛) probability distributions (𝑝 𝜎 (𝑦 1 ), ..., 𝑝 𝜎 (𝑦 𝑛 )) such that all the class prevalence values 𝑝 𝜎 (𝑦 𝑖 ) are in 𝑔 1 ; 3. For each distribution 𝑝 in the 𝐾(𝑃, 𝑛) probability distributions above, extracts 𝑚 random samples of 𝑞 data items each from 𝑈 , in such a way that each extracted sample exhibits probability distribution 𝑝. 4. Use the extracted random samples for the evaluation of the quantifiers. 
These traditional instantiations of the APP are suitable for small values of 𝑛, but quickly become unmanageable as 𝑛 grows; for instance, in the binary case (𝑛 = 2) we need to extract 𝑚 • 𝐾(20, 2) = 𝑚 • 21 samples, but this number already grows to 𝑚 • 𝐾(20, 3) = 𝑚 • 231 in the ternary case (𝑛 = 3).<ref type="foot" target="#foot_1">2</ref> </p></div>
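A minimal sketch of the two approaches just described: a uniform draw from the unit simplex (the idea behind the Kraemer algorithm: sort uniform variates and take the gaps), and the combinatorial size 𝐾(𝑃, 𝑛) of the traditional prevalence grid. Function names are ours:

```python
import numpy as np
from math import comb

def uniform_prevalence_sample(n_classes, rng):
    """Draw one vector of class prevalence values uniformly at random from
    the unit simplex: sort n-1 uniform variates in [0, 1] and take the gaps
    between consecutive values (including the endpoints 0 and 1)."""
    cuts = np.sort(rng.uniform(size=n_classes - 1))
    return np.diff(np.concatenate(([0.0], cuts, [1.0])))

def grid_size(P, n):
    """K(P, n): how many prevalence vectors lie on a grid of step 1/P,
    i.e. the number of ways of writing P as n ordered non-negative parts."""
    return comb(P + n - 1, n - 1)

rng = np.random.default_rng(42)
p = uniform_prevalence_sample(28, rng)   # one multiclass test distribution
k2, k3 = grid_size(20, 2), grid_size(20, 3)   # the 21 and 231 from the text
```

Each drawn vector sums to 1 by construction, while `grid_size` shows why grid-based APP instantiations explode combinatorially as the number of classes grows.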
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">The evaluation measures</head><p>In a recent theoretical study on the adequacy of evaluation measures for the quantification task <ref type="bibr" target="#b14">[15]</ref>, relative absolute error (RAE) and absolute error (AE) have been found to be the most satisfactory, and are thus the only measures used in LeQua 2022. In particular, as a measure we do not use the once widely used Kullback-Leibler Divergence (KLD), since the same study has found it to be unsuitable for evaluating quantification systems. <ref type="foot" target="#foot_2">3</ref> RAE and AE are defined as</p><formula xml:id="formula_1">RAE(𝑝 𝜎 , 𝑝 ^𝜎) = 1 𝑛 ∑︁ 𝑦∈𝒴 |𝑝 ^𝜎(𝑦) − 𝑝 𝜎 (𝑦)| 𝑝 𝜎 (𝑦)<label>(1)</label></formula><formula xml:id="formula_2">AE(𝑝 𝜎 , 𝑝 ^𝜎) = 1 𝑛 ∑︁ 𝑦∈𝒴 |𝑝 ^𝜎(𝑦) − 𝑝 𝜎 (𝑦)|<label>(2)</label></formula><p>where 𝑝 𝜎 is the true distribution on sample 𝜎, 𝑝 ^𝜎 is the predicted distribution, 𝒴 is the set of classes of interest, and 𝑛 = |𝒴|. Note that RAE is undefined when at least one of the classes 𝑦 ∈ 𝒴 is such that its prevalence in the sample 𝜎 of unlabelled items is 0. To solve this problem, in computing RAE we smooth all 𝑝 𝜎 (𝑦)'s and 𝑝 ^𝜎(𝑦)'s via additive smoothing, i.e., we take</p><formula xml:id="formula_3">𝑝 𝜎 (𝑦) = (𝜖 + 𝑝 𝜎 (𝑦))/(𝜖 • 𝑛 + ∑︀ 𝑦∈𝒴 𝑝 𝜎 (𝑦))</formula><p>, where 𝑝 𝜎 (𝑦) denotes the smoothed version of 𝑝 𝜎 (𝑦) and the denominator is just a normalising factor (same for the 𝑝 𝜎 ^(𝑦)'s); following Forman <ref type="bibr" target="#b3">[4]</ref>, we use the quantity 𝜖 = 1/(2|𝜎|) as the smoothing factor. In Equation <ref type="formula" target="#formula_1">1</ref>we then use the smoothed versions of 𝑝 𝜎 (𝑦) and 𝑝 ^𝜎(𝑦) in place of their original non-smoothed versions; as a result, RAE is now always defined.</p><p>As the official measure according to which systems are ranked, we use RAE; we also compute AE results, but we do not use them for ranking the systems. 
The official score obtained by a given quantifier is the average value of the official evaluation measure (RAE) across all test samples; for each system we also compute and report the value of AE. For each subtask in { T1A, T1B, T2A, T2B } we use a two-tailed t-test on related samples at different confidence levels (𝛼 = 0.05 and 𝛼 = 0.001) to identify all participant runs that are not statistically significantly different from the best run, in terms of RAE and in terms of AE. We also compare all pairs of methods by means of critical difference diagrams (CD-diagrams - <ref type="bibr" target="#b33">[34]</ref>). We adopt the Nemenyi test and set the confidence level to 𝛼 = 0.05. The test compares the average ranks in terms of RAE and takes into account the sample size |𝜎|.</p></div>
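Equations (1) and (2), together with the additive smoothing used for RAE, can be sketched as a direct transcription of the formulas (this is our own illustration, not the official evaluation script):

```python
import numpy as np

def smoothed(p, eps):
    """Additive smoothing of a prevalence vector; following Forman,
    eps = 1 / (2 * |sigma|), where |sigma| is the sample size."""
    return (p + eps) / (eps * len(p) + p.sum())

def absolute_error(p_true, p_hat):
    """AE, Equation (2): mean absolute difference over the classes."""
    return float(np.mean(np.abs(p_hat - p_true)))

def relative_absolute_error(p_true, p_hat, sample_size):
    """RAE, Equation (1), computed on smoothed prevalences so that it
    is defined even when some true prevalence is zero."""
    eps = 1.0 / (2 * sample_size)
    pt, ph = smoothed(p_true, eps), smoothed(p_hat, eps)
    return float(np.mean(np.abs(ph - pt) / pt))

p_true = np.array([0.0, 0.3, 0.7])   # a zero prevalence: RAE still defined
p_hat  = np.array([0.1, 0.2, 0.7])
ae  = absolute_error(p_true, p_hat)
rae = relative_absolute_error(p_true, p_hat, sample_size=250)
```

Note how heavily RAE penalizes estimation errors on classes whose true prevalence is close to zero, which is precisely why the smoothing factor is needed.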
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Data</head><p>The data we have used are Amazon product reviews from a large crawl of such reviews. From the result of this crawl we have removed (a) all reviews shorter than 200 characters and (b) all reviews that have not been recognised as "useful" by any users; this has yielded the dataset Ω that we have used for our experimentation. As for the class labels, (i) for the two binary tasks (T1A and T2A) we have used two sentiment labels, i.e., Positive (which encompasses 4-stars and 5-stars reviews) and Negative (which encompasses 1-star and 2-stars reviews), while for the two multiclass tasks (T1B and T2B) we have used 28 topic labels, representing the merchandise class the product belongs to (e.g., Automotive, Baby, Beauty). <ref type="foot" target="#foot_3">4</ref>We have used the same data (training / development / test sets) for the binary vector task (T1A) and for the binary raw document task (T2A); i.e., the former are the vectorized (and shuffled) versions of the latter. Same for T1B and T2B. In order to generate the document vectors, we compute the average of the GloVe vectors <ref type="bibr" target="#b34">[35]</ref> for the words contained in each document, thus producing 300-dimensional document embeddings. Each of the 300 dimensions of the document embeddings is then (independently) standardized, so that it has zero mean and unit variance.</p><p>The 𝐿 𝐵 (binary) training set and the 𝐿 𝑀 (multiclass) training set consist of 5,000 documents and 20,000 documents, respectively, sampled from the dataset Ω via stratified sampling so as to have "natural" prevalence values for all the class labels. 
(When doing stratified sampling for the binary "sentiment-based" task, we ignore the "topic" dimension; and when doing stratified sampling for the multiclass "topic-based" task, we ignore the "sentiment" dimension).</p><p>The development (validation) sets 𝐷 𝐵 (binary) and 𝐷 𝑀 (multiclass) consist of 1,000 development samples of 250 documents each (𝐷 𝐵 ) and 1,000 development samples of 1,000 documents each (𝐷 𝑀 ) generated from Ω ∖ 𝐿 𝐵 and Ω ∖ 𝐿 𝑀 via the Kraemer algorithm.</p><p>The test sets 𝑈 𝐵 and 𝑈 𝑀 consist of 5,000 test samples of 250 documents each (𝑈 𝐵 ) and 5,000 test samples of 1,000 documents each (𝑈 𝑀 ), generated from Ω ∖ (𝐿 𝐵 ∪ 𝐷 𝐵 ) and Ω ∖ (𝐿 𝑀 ∪ 𝐷 𝑀 ) via the Kraemer algorithm. A submission ("run") for a given subtask consists of prevalence estimations for the relevant classes (the two sentiment classes for the binary subtasks and the 28 topic classes for the multiclass subtasks) for each sample in the test set of that subtask.</p></div>
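The vectorization described above (averaging GloVe vectors, then standardizing each dimension) can be sketched as follows; the toy lookup table stands in for the real pre-trained GloVe embeddings, which are not distributed with the lab data:

```python
import numpy as np

# toy stand-in for the GloVe lookup table (the lab used 300-dim vectors)
rng = np.random.default_rng(0)
glove = {w: rng.normal(size=300) for w in ["good", "bad", "camera", "battery"]}

def embed(doc):
    """Average the GloVe vectors of the in-vocabulary words of a document,
    yielding one 300-dimensional document embedding."""
    vecs = [glove[w] for w in doc.lower().split() if w in glove]
    return np.mean(vecs, axis=0)

docs = ["good camera", "bad battery", "good battery camera"]
X = np.stack([embed(d) for d in docs])

# standardize each of the 300 dimensions independently, over the corpus,
# so that every dimension has zero mean and unit variance
X = (X - X.mean(axis=0)) / X.std(axis=0)
```

Standardizing per dimension (rather than per document) keeps the features on comparable scales for the downstream classifiers, which is the stated goal in the text.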
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.5.">Baselines</head><p>In order to set a sufficiently high bar for the participants to overcome, we made them aware of the availability of QuaPy <ref type="bibr" target="#b35">[36]</ref>, a library of quantification methods that contains, among others, implementations of a number of methods that have performed well in recent comparative evaluations. <ref type="foot" target="#foot_4">5</ref> QuaPy is a publicly available, open-source, Python-based framework that we have recently developed, and that implements not only learning methods, but also evaluation measures, parameter optimisation routines, and evaluation protocols, for LQ.</p><p>We used a number of quantification methods, as implemented in QuaPy, as baselines for the participants to overcome. <ref type="foot" target="#foot_5">6</ref> These methods were:</p><p>• Maximum Likelihood Prevalence Estimation (MLPE): Rather than a true quantification method, this a (more than) trivial baseline, consisting in assuming that the prevalence 𝑝 𝜎 (𝑦 𝑖 ) of a class 𝑦 𝑖 in the test sample 𝜎 is the same as the prevalence 𝑝 𝐿 (𝑦 𝑖 ) that was observed for that class in the training set 𝐿.  <ref type="bibr" target="#b31">[32]</ref> (see also <ref type="bibr" target="#b36">[37]</ref>): This is a method based on Expectation Maximization, whereby the posterior probabilities returned by a soft classifier 𝑠 for data items in an unlabelled set 𝑈 , and the class prevalence values for 𝑈 , are iteratively updated in a mutually recursive fashion. For SLD we calibrate the classifier since, for reasons discussed in <ref type="bibr" target="#b36">[37]</ref>, this yields an advantage for this method. 
<ref type="foot" target="#foot_6">7</ref>• QuaNet <ref type="bibr" target="#b15">[16]</ref>: This is a deep learning architecture for quantification that predicts class prevalence values by taking as input (i) the class prevalence values as estimated by CC, ACC, PCC, PACC, SLD; (ii) the posterior probabilities Pr(𝑦|x) for the positive class (since QuaNet is a binary method) for each document x, and (iii) embedded representations of the documents. For task T1A, we directly use the vectorial representations that we have provided to the participants as the document embeddings, while for task T2A we use the RoBERTa embeddings (described below). For training QuaNet, we use the training set 𝐿 for training the classifier. We then use the validation set for training the network parameters, using 10% of the validation samples for monitoring the validation loss (we apply early stop after 10 epochs that have shown no improvement). Since we devote the validation set to train part of the model, we did not carry out model selection for QuaNet, which was used with default hyperparameters (a learning rate of 1𝑒 −4 , 64 dimensions in the LSTM hidden layer, and a drop-out probability of 0.5).</p><p>All the above methods (with the exception of MLPE) are described in more detail in <ref type="bibr">[19, §3.3 and §3.4]</ref>, to which we refer the interested reader; all these methods are well-established, the most recent one (QuaNet) having been published in 2018. For all methods, we have trained the underlying classifiers via logistic regression, as implemented in the scikit-learn framework (https://scikit-learn.org/stable/index.html). Note that we have used HDy and QuaNet as baselines only in T1A and T2A, since they are binary-only methods. All other methods are natively multiclass, so we have used them in all four subtasks. We optimize two hyperparameters of the logistic regression learner by exploring 𝐶 (the inverse of the regularization strength) in the range {10 −3 , 10 −2 , . . 
., 10 +3 } and class_weight (indicating the relative importance of each class) in {"balanced", "not-balanced"}. For each quantification method, model selection is carried out by choosing the combination of hyperparameters yielding the lowest average RAE across all validation samples.</p><p>For the raw documents subtasks (T2A and T2B), for each baseline quantification method we have actually generated two quantifiers, using two different methods for turning documents into vectors. (The only two baseline methods for which we do not do this are MLPE, which does not use vectors, and QuaNet, that internally generates its own vectors.) The two methods are • The standard tfidf term weighting method, expressed as</p><formula xml:id="formula_4">tfidf(𝑓, x) = log #(𝑓, x) × log |𝐿| |x ′ ∈ 𝐿 : #(𝑓, x ′ ) &gt; 0|<label>(3)</label></formula><p>where #(𝑓, x) is the raw number of occurrences of term 𝑓 in document x; weights are then normalized via cosine normalization, as</p><formula xml:id="formula_5">𝑤(𝑓, x) = tfidf(𝑓, x) √︁ ∑︀ 𝑓 ′ ∈𝐹 tfidf(𝑓 ′ , x) 2<label>(4)</label></formula><p>where 𝐹 is the set of all unigrams and bigrams that occur at least 5 times in 𝐿. • The RoBERTa transformer <ref type="bibr" target="#b37">[38]</ref>, from the Hugging Face hub. <ref type="foot" target="#foot_7">8</ref> In order to use RoBERTa, we truncate the documents to the first 256 tokens, and fine-tune RoBERTa for the task of classification via prompt learning for a maximum of 10 epochs on our training data, thus taking the model parameters from the epoch which yields the best macro 𝐹 1 as monitored on a held-out validation set consisting of 10% of the training documents randomly sampled in a stratified way. For training, we set the learning rate to 1𝑒 −5 , the weight decay to 0.01, and the batch size to 16, leaving the other hyperparameters at their default values. 
For each document, we generate features by first applying a forward pass over the fine-tuned network, and then averaging the embeddings produced for the special token [CLS] across all the 12 layers of RoBERTa. (In experiments that we carried out for another project, this latter approach yielded slightly better results than using the [CLS] embedding of the last layer alone.) The embedding size of RoBERTa, and hence the number of dimensions of our vectors, amounts to 768.</p></div>
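The tfidf weighting of Equations (3) and (4) can be transcribed as follows (unigrams only and no minimum-frequency filtering, for brevity; the helper names are ours):

```python
import numpy as np

def tfidf_vectors(docs):
    """Cosine-normalized tfidf following Equations (3)-(4):
    weight(f, x) = log(#(f, x)) * log(|L| / df(f)), after which each
    document vector is divided by its Euclidean norm."""
    vocab = sorted({w for d in docs for w in d.split()})
    index = {w: i for i, w in enumerate(vocab)}
    tf = np.zeros((len(docs), len(vocab)))
    for r, d in enumerate(docs):
        for w in d.split():
            tf[r, index[w]] += 1
    df = (tf > 0).sum(axis=0)            # number of documents containing f
    logtf = np.zeros_like(tf)
    mask = tf > 0
    logtf[mask] = np.log(tf[mask])       # log of the raw occurrence count
    weights = logtf * np.log(len(docs) / df)
    norms = np.linalg.norm(weights, axis=1, keepdims=True)
    # cosine normalization, guarding against all-zero rows
    return np.divide(weights, norms, out=np.zeros_like(weights),
                     where=norms > 0), vocab

W, vocab = tfidf_vectors(["a a b", "b c", "a c c"])
```

The lab's actual feature space also included bigrams and kept only features occurring at least 5 times in 𝐿, as stated in the text.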
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>The teams who participated in LeQua 2022 and the tasks for which they submitted runs.</p><p>T1A T1B T2A T2B DortmundAI</p><formula xml:id="formula_6">x x KULeuven x x UniLeiden x UniOviedo(Team1) x x x x UniOviedo(Team2) x x UniPadova x</formula></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">The participating systems</head><p>Six teams submitted runs to LeQua 2022. As shown in in Table <ref type="table">1</ref>, the most popular subtask was, unsurprisingly, T1A (5 teams), while the subtask with the smallest participation was T2B (1 team). We here list the teams in alphabetical order:</p><p>• DortmundAI <ref type="bibr" target="#b38">[39]</ref> submitted a run each for T1A and T1B. Their original goal was to use a modified version of the SLD algorithm described in Section 3.5. The modification introduced by DortmundAI consists of the use of a regularization technique meant to smooth the estimates that expectation maximization computes for the class prevalence values at each iteration. After extensively applying model selection, though, the team realized that the best configurations of hyperparameters often reduce the strength of such regularization, so as to make the runs produced by their regularized version of SLD almost identical to a version produced by using the "traditional" SLD algorithm. They also found that a thorough optimization of the hyperparameters of the base classifier was instead the key to producing good results. • KULeuven <ref type="bibr" target="#b39">[40]</ref> submitted a run each for T1A and T1B. Their system consisted of a robust calibration of the SLD <ref type="bibr" target="#b31">[32]</ref> method based on the observations of Molinari et al. <ref type="bibr" target="#b40">[41]</ref>. While the authors explored trainable calibration strategies (i.e., regularization constraints that modify the training objective of a classifier in favour of better calibrated solutions), the team finally contributed a solution based on the Platt rescaling <ref type="bibr" target="#b41">[42]</ref> of the SVM outputs (i.e., a post-hoc calibration method that is applied after training the classifier) which they found to perform better in validation. 
Their solution differs from the version of SLD provided as a baseline mainly in the choice of the underlying classifier (the authors chose SVMs while the provided baseline is based on logistic regression) and in the amount of effort devoted to the optimization of the hyperparameters (which was higher in the authors' case). • UniLeiden <ref type="bibr" target="#b42">[43]</ref> submitted a run for T1A only. The authors' system is a variant of the Median Sweep (MS) method proposed by Forman <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b43">44]</ref>, called Simplified Continuous Sweep, which consists of a smooth adaptation of the original method. The main modifications come down to computing the mean (instead of the median) of the class prevalence estimates by integrating over continuous functions (instead of summing across discrete functions) that represent the classification counts and misclassification rates. Since the underlying distributions of these counts and rates are unknown, kernel density estimation is used to approximate them. Although the system did not yield improved results with respect to MS, it paves the way toward a better understanding of the theoretical implications of MS. • UniOviedo(Team1) <ref type="bibr" target="#b44">[45]</ref> submitted a run for each of the four subtasks. Their system consists of a deep neural network architecture explicitly devised for the quantification task. The learning method is non-aggregative and does not need to know the labels of the training items composing a sample. To train the quantifiers that produced the submissions, the system used as training examples the samples with known prevalence from the development sets 𝐷 𝐵 and 𝐷 𝑀 (each set is used for its respective task). A generator of additional samples that produces mixtures of pairs of samples of known prevalence is used to increase the number of training examples. 
Data from training sets 𝐿 𝐵 and 𝐿 𝑀 are used only to generate additional training samples when over-fitting is observed. Every sample is represented as a set of histograms, each one representing the distribution of values of an input feature. For tasks T1A and T1B, histograms are directly computed on the input vectors. For tasks T2A and T2B, the input texts are first converted into dense vectors using a BERT model, and the histograms are computed on these dense vectors. The network uses RAE as the loss function, modified with a smoothing parameter to avoid undefined values when a true prevalence is zero, thus directly optimizing the official evaluation measure. • UniOviedo(Team2) <ref type="bibr" target="#b45">[46]</ref> submitted a run each for T1A and T1B. For T1A, this team used a highly optimized version of the HDy system (which was also one of the baseline systems), obtained by optimizing three different parameters (the similarity measure used, the number of bins used, and the method used for binning the posteriors returned by the classifier). For T1B, this team used a version of HDy (called EDy) different from the previous one; EDy uses, for the purpose of measuring the distance between two histograms, the "energy distance" in place of the Hellinger Distance.</p><p>• UniPadova <ref type="bibr" target="#b46">[47]</ref> submitted a run for T2A only. Their system consisted of a classify-and-count method in which the underlying classifier is a probabilistic "BM25" classifier. The power of this method thus derives only from the term weighting component, since nothing in the method makes explicit provisions for distribution shift.</p></div>
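Several of the strongest runs described above are built on the SLD algorithm, which iteratively re-estimates class prevalence values via expectation maximization. The following is a minimal sketch of the core loop (our own illustration, not any team's code; the function name, tolerance, and iteration cap are arbitrary choices):

```python
import numpy as np

def sld(posteriors, train_prevalence, epsilon=1e-6, max_iter=1000):
    """EM-based prior adjustment (Saerens-Latinne-Decaestecker).

    posteriors: (m, n) array of p(y|x) for the m items of a sample,
    output by a classifier trained under class priors `train_prevalence`.
    Returns an estimate of the class prevalence vector of the sample.
    """
    train_prev = np.asarray(train_prevalence, dtype=float)
    prev = train_prev.copy()
    for _ in range(max_iter):
        # E-step: rescale posteriors by the ratio of current to training priors
        rescaled = posteriors * (prev / train_prev)
        rescaled /= rescaled.sum(axis=1, keepdims=True)
        # M-step: the new prevalence estimate is the mean rescaled posterior
        new_prev = rescaled.mean(axis=0)
        converged = np.abs(new_prev - prev).sum() < epsilon
        prev = new_prev
        if converged:
            break
    return prev
```

In essence, the E-step rescales each posterior by the ratio between the current prevalence estimate and the training prevalence, and the M-step averages the rescaled posteriors; this is the loop that DortmundAI regularized and that KULeuven fed with Platt-calibrated SVM outputs.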
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Results</head><p>In this section we discuss the results obtained by the participating teams in the four subtasks we have proposed. The evaluation campaign started on Dec 1, 2021, with the release of the training sets (𝐿 𝐵 and 𝐿 𝑀 ) and of the development sets (𝐷 𝐵 and 𝐷 𝑀 ); alongside them, the participating teams were provided with a dummy submission, a format checker, and the official evaluation script. The unlabelled test sets (𝑈 𝐵 and 𝑈 𝑀 ) were released on Apr 22, 2022, and runs had to be submitted by <ref type="bibr">May 11, 2022</ref>. Each team could submit up to two runs per subtask, provided each such run used a truly different method (and not, say, the same method with different parameter values); however, no team decided to take advantage of this, and each team submitted at most one run per subtask. An instantiation of Codalab (https://codalab.org/) was set up to allow the teams to submit their runs. The true labels of the unlabelled test sets were released on May 13, 2022, after the submission period was over and the official results had been announced to the participants. In the rest of this section we discuss the results that the participants' systems and the baseline systems obtained in the vector subtasks (T1A and T1B -Section 5.1), in the raw document subtasks (T2A and T2B -Section 5.2), in the binary subtasks (T1A and T2A -Section 5.3), and in the multiclass subtasks (T1B and T2B -Section 5.4). We report the results of the participants' systems and the baseline systems in Figure <ref type="figure" target="#fig_1">1</ref> (for subtask T1A), Figure <ref type="figure" target="#fig_2">2</ref> (T1B), Figure <ref type="figure" target="#fig_3">3</ref> (T2A), and Figure <ref type="figure" target="#fig_4">4 (T2B)</ref>. 
In each such figure we also display critical-distance diagrams illustrating how the systems rank in terms of RAE and whether the differences between systems are statistically significant. Interestingly enough, no system (whether a participant's system or a baseline system) was the best performer in more than one subtask, with four different systems (the KULeuven system for T1A, the DortmundAI system for T1B, the QuaNet baseline system for T2A, and the UniOviedo(Team1) system for T2B) claiming the top spot in the four subtasks. Overall, the performance of UniOviedo(Team1) was especially noteworthy since, aside from topping the ranking in T2B, it also obtained results in T1A and T1B that are not statistically significantly different (0.05 ≤ 𝑝-value) from those of the top-performing team.</p><p>The results allow us to make a number of observations. We organize the discussion of these results in four sections (Section 5.1 to Section 5.4), one for each of the four dimensions (vectors vs. raw documents, binary vs. multiclass) according to which the four subtasks are structured. However, before doing that, we discuss some conclusions that may be drawn from the results and that affect all four dimensions. 1. MLPE is the worst predictor. This is true in all four subtasks, and was expected, given that the test data are generated by means of the APP, which implies that the test data contain a very high number of samples characterized by substantial distribution shift, and that on these samples MLPE obviously performs badly. 2. CC and PCC obtain very low quantification accuracy; this is the case in all four subtasks, where these two methods are always near the bottom of the ranking. 
This confirms the fact (already recorded in previous work -see e.g., <ref type="bibr" target="#b35">[36,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b47">48]</ref>) that they are not good performers when the APP is used for generating the dataset, i.e., they are not good performers when there is substantial distribution shift. Interestingly enough, CC always outperforms PCC, which was somewhat unexpected. 3. ACC and PACC are mid-level performers; this holds in all four subtasks, in which both methods are always in the middle portion of the ranking. Interestingly enough, PACC always outperforms ACC, somewhat contradicting the impression (see Bullet 2) that "hard" counts are better than expected counts and/or that the calibration routine has not done a good job. 4. SLD is the strongest baseline; this is true in all four subtasks, in which SLD, while never being the best performer, is always in the top ranks. This confirms the fact (already recorded in previous work -see e.g., <ref type="bibr" target="#b35">[36,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b47">48]</ref>) that SLD is a very strong performer when the APP is used for generating the dataset, i.e., when the test data contain many samples characterized by substantial distribution shift. 5. Overall, the ranking MLPE &lt; PCC &lt; CC &lt; ACC &lt; PACC &lt; SLD (where "&lt;" means "performs worse than") clearly emerges from all four tasks.</p><p>As might be expected, a good performance according to RAE (our official measure) does not always correspond to a good performance according to AE (our other measure). 
Only in 2 subtasks out of 4 (T1B, with the DortmundAI system, and T2A, with the QuaNet baseline system) does the system that scores best according to RAE also score best according to AE; in the other 2 subtasks this is not the case, and in one case (T2B) the system that performs best according to RAE (the UniOviedo(Team1) system) has a very low performance according to AE. This suggests that for some systems, including the UniOviedo(Team1) system, parameter optimization (which, quite naturally, is performed by trying to optimize the official measure) may have played an especially important role.</p></div>
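The divergence between the two measures discussed above is easier to see with their definitions in hand. Below is a minimal sketch of AE and smoothed RAE (the function names are ours; we assume the usual additive-smoothing convention with ε = 1/(2|σ|), where |σ| is the sample size, which keeps RAE defined when a true prevalence is zero):

```python
import numpy as np

def absolute_error(true_prev, est_prev):
    """AE: mean absolute difference between prevalence vectors."""
    return float(np.abs(est_prev - true_prev).mean())

def relative_absolute_error(true_prev, est_prev, sample_size):
    """RAE with additive smoothing (eps = 1/(2*sample_size)), which
    avoids division by zero when a true prevalence value is zero."""
    eps = 1.0 / (2.0 * sample_size)
    n = len(true_prev)
    t = (true_prev + eps) / (1.0 + eps * n)  # smoothed true prevalences
    e = (est_prev + eps) / (1.0 + eps * n)   # smoothed estimates
    return float((np.abs(e - t) / t).mean())
```

Because RAE divides each error by the (smoothed) true prevalence, errors on rare classes dominate it, which is why a system can rank high on RAE and low on AE at the same time.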
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">T1A and T1B: The vector subtasks</head><p>In the vector subtasks the top-performing systems, KULeuven for T1A and DortmundAI for T1B, both consist of carefully optimized instances of SLD. The KULeuven system outperformed all the baseline systems in both tasks, while the DortmundAI system ranked 5th in T1A, one position below the SLD baseline. The runs from UniOviedo(Team1) and UniOviedo(Team2) obtained the 2nd and 3rd ranks, respectively, in both T1A and T1B. The UniOviedo(Team1) system performed very well in both cases, obtaining RAE scores that, according to the test of statistical significance, are not significantly different from the best result obtained in each of these subtasks. Things are different if we instead look at the AE scores, for which UniOviedo(Team1) obtained the best result in T1A but the second-worst result in T1B.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">T2A and T2B: The raw documents subtasks</head><p>In both raw document tasks (T2A and T2B) the best-performing method is always one based on deep learning (the QuaNet baseline for T2A and the UniOviedo(Team1) system for T2B).</p><p>A direct comparison between the UniOviedo(Team1) system and QuaNet in the multiclass case (T2B) is not possible because QuaNet is a binary-only method (see Section 3.5) and was thus not used in T2B. A characteristic common to these two methods is that both use (part of the) samples from the validation data not for tuning hyperparameters but for training the model.</p><p>Concerning the baseline systems, the results do not give a definitive answer on which of tfidf and RoBERTa is the better method for mapping raw documents into vectors. In fact, out of the 9 cases (5 for T2A, 4 for T2B) in which we have generated both variants of the same baseline, the tfidf variant outperforms the RoBERTa variant in 4 cases and is outperformed by it in 5 cases. This was unexpected, since RoBERTa is a far more sophisticated and modern method than the time-worn tfidf. Interestingly (and mysteriously) enough, the tfidf variant is almost always the better performer in the binary case (T2A -4 cases out of 5), while the RoBERTa variant always outperforms the tfidf variant in the multiclass case (T2B -4 cases out of 4).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">T1A and T2A: The binary subtasks</head><p>Concerning T1A and T2A (the binary subtasks), we should first observe that we here use two further baseline systems, namely, HDy and QuaNet; we only use them in the binary subtasks since they are not natively multiclass. HDy performs fairly well in both T1A and T2A, outperforming MLPE, PCC, CC, ACC, and PACC (but not SLD) in both cases. Instead, QuaNet performs less consistently, since it places in the mid-lower ranks of the table in T1A but is the best performer in T2A.</p><p>The inconsistent results obtained by QuaNet on the binary tasks contrast with those obtained by the UniOviedo(Team1) system, the other method based on deep learning, which ranks among the top positions in both T1A and T2A. This is somewhat surprising, given that in T1A (unlike in T2A), the source vectors used by the UniOviedo(Team1) and QuaNet methods were exactly the same.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4.">T1B and T2B: The multiclass subtasks</head><p>Regarding the multiclass subtasks, the UniOviedo(Team1) system stands out, since it consistently obtained results that either outperformed all other methods (T2B) or were not different, in a statistically significant sense, from those of the best-performing method (T1B). UniOviedo(Team1) was the only team participating in the raw-document multiclass subtask T2B. Although UniOviedo(Team1) beat all the baselines in terms of RAE, it performed worse in terms of AE than most of the baselines (actually, worse than all baselines but MLPE).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Final remarks</head><p>Overall, something that we learn from this shared task is that SLD is very hard to beat (thereby confirming recent results reported in <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b35">36,</ref><ref type="bibr" target="#b47">48]</ref>), and that it tends to fare very well across different settings, including binary and multiclass quantification problems, and including different ways of processing text. This observation is reinforced by the fact that two of the best-performing systems (KULeuven and DortmundAI, which placed 1st in T1A and T1B, respectively) actually consist of carefully-tuned instances of SLD. Another "classic" method that has also proven to behave well is HDy, a method on which one of the best-performing systems (UniOviedo(Team2)) is built. However, the system that has delivered the most consistently competitive results across all tasks (UniOviedo(Team1)) is a "non-classical" one, since it is based on deep-learning technology.</p><p>To conclude, we think that LeQua 2022 has proven very useful for the quantification community, since it has confirmed, in a controlled setting, some intuitions about "classic" quantification systems (e.g., SLD) that had already surfaced in the recent literature, but has also shown that there are margins of improvement over them, especially when using "deep" learning approaches (such as QuaNet and the system used by UniOviedo(Team1)).</p><p>We plan to propose a LeQua edition for CLEF 2023, both to allow the LeQua 2022 participants to build on their 2022 experience, consolidate their systems, and improve on their 2022 performance, and to allow prospective participants who could not make it in 2022 to join. 
The experimental setting that we have used for LeQua 2022 will be the starting point, but we might want to incorporate suggestions that arise during the LeQua session at the CLEF 2022 conference. This session will host (a) a keynote talk by George Forman (Amazon Research), (b) a detailed presentation by the organisers, overviewing the lab and the results of the participants, (c) oral presentations by the participating teams, and (d) a final discussion on the takeaway messages that LeQua 2022 gives us.</p></div>
Equivalently, it is an "adjusted" variant of PCC in which the prevalence values predicted by PCC are corrected by considering the (probabilistic versions of the) misclassification rates of the soft classifier 𝑠, as estimated on a held-out validation set. For our experiments, this held-out set consists of 40% of the training set. • HDy<ref type="bibr" target="#b8">[9]</ref>: This is a probabilistic binary quantification method that views quantification as the problem of minimising the divergence (measured in terms of the Hellinger Distance, HD) between two distributions of posterior probabilities returned by the classifier, one coming from the unlabelled examples and the other coming from a validation set consisting of 40% of the training documents. HDy seeks the mixture parameter 𝛼 ∈ [0, 1] that minimizes the HD between (a) the mixture distribution of posteriors from the positive class (weighted by 𝛼) and from the negative class (weighted by (1 − 𝛼)), and (b) the unlabelled distribution. • The Saerens-Latinne-Decaestecker algorithm (SLD)</figDesc></figure>
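The "adjustment" that ACC applies to the CC estimate can be made concrete in the binary case: the prevalence that CC observes satisfies p_CC = tpr·p + fpr·(1 − p), so ACC inverts this relation using the tpr and fpr estimated on the held-out 40% split. A minimal sketch (the function names are our own):

```python
def classify_and_count(predictions):
    """CC in the binary case: fraction of items predicted positive
    (predictions is a sequence of 0/1 labels for one sample)."""
    return sum(predictions) / len(predictions)

def acc_adjust(cc_estimate, tpr, fpr):
    """ACC correction: CC observes p_cc = tpr*p + fpr*(1 - p), where
    tpr and fpr are estimated on a held-out split; invert and clip
    the result to the legal range [0, 1]."""
    if tpr == fpr:  # degenerate classifier: the adjustment is undefined
        return cc_estimate
    return min(1.0, max(0.0, (cc_estimate - fpr) / (tpr - fpr)))
```

For instance, with tpr = 0.9 and fpr = 0.2, a true prevalence of 0.3 yields an observed CC of 0.41, and the adjustment recovers 0.3 exactly; PACC applies the same inversion to expected counts.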
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Results of Task T1A. Table (a) reports the results of participant teams in terms of RAE (official measure for ranking) and AE, averaged across the 5,000 test samples. Boldface indicates the best method for a given evaluation measure. Superscripts † and ‡ denote the methods (if any) whose scores are not statistically significantly different from the best one according to a paired sample, two-tailed t-test at different confidence levels: symbol † indicates 0.001 &lt; 𝑝-value &lt; 0.05 while symbol ‡ indicates 0.05 ≤ 𝑝-value. The absence of any such symbol indicates 𝑝-value ≤ 0.001 (i.e., that the difference in performance between the method and the best one is statistically significant at a high confidence level). Baseline methods are typeset in italic. Subfigure (b) reports the CD-diagram for Task T1A for the averaged ranks in terms of RAE.</figDesc><graphic coords="12,89.29,295.77,416.69,76.39" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: As in Figure 1, but for T1B in place of T1A.</figDesc><graphic coords="13,89.29,259.54,416.69,59.72" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: As in Figure 1, but for T2A in place of T1A.</figDesc><graphic coords="14,89.29,331.81,416.69,84.73" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: As in Figure 1, but for T2B in place of T1A.</figDesc><graphic coords="15,89.29,283.72,416.69,70.84" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>Using the APP consists of taking the test set 𝑈 of unlabelled data items, and extracting from it a number of subsets (the test samples), each characterised by a predetermined vector (𝑝 𝜎 (𝑦 1 ), ..., 𝑝 𝜎 (𝑦 𝑛 )) of prevalence values, where 𝑦 1 , ..., 𝑦 𝑛 are the classes of interest. In other words, for extracting a test sample 𝜎, we generate a vector of prevalence values, and randomly select documents from 𝑈 accordingly (i.e., by class-conditional random selection of documents until the desired class prevalence values are obtained).1  The goal of the APP is to generate samples characterised by widely different vectors of prevalence values; this is meant to test the robustness of a quantifier (i.e., of an estimator of class prevalence values) in confronting class prevalence values possibly different (or very different) from the ones of the set it has been trained on. For doing this we draw the vectors of class prevalence values uniformly at random from the set of all legitimate such vectors, i.e., from the unit (𝑛 − 1)-simplex of all vectors (𝑝 𝜎 (𝑦 1 ), ..., 𝑝 𝜎</figDesc><table /></figure>
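Drawing prevalence vectors uniformly at random from the unit (𝑛 − 1)-simplex, as the APP described above requires, can be done by taking the gaps between sorted uniform draws (the Kraemer method); a minimal sketch, with a helper name of our own choosing:

```python
import numpy as np

def uniform_prevalence(n_classes, rng=None):
    """Draw a prevalence vector uniformly at random from the unit
    (n-1)-simplex: sort n-1 uniform draws in [0, 1] and take the
    gaps between consecutive values (including the 0 and 1 endpoints)."""
    rng = rng or np.random.default_rng()
    u = np.sort(rng.uniform(size=n_classes - 1))
    return np.diff(np.concatenate(([0.0], u, [1.0])))
```

Each test (or development) sample is then built by class-conditional random selection of documents until the drawn prevalence vector is matched.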
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Everything we say here on how we generate the test samples also applies to how we generate the development samples.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">More precisely, there are 𝐾(𝑃, 𝑛) = (𝑃 + 𝑛 − 1 choose 𝑛 − 1) probability distributions (𝑝𝜎(𝑦1), ..., 𝑝𝜎(𝑦𝑛)) such that all the class prevalence values 𝑝𝜎(𝑦𝑖) are in 𝑔1. To exemplify, for 𝑛 = 5 classes we already reach 𝐾(20, 5) = 10,626 valid combinations, while for 𝑛 = 10 classes the number of combinations rises to 𝐾(20, 10) = 10,015,005.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_2">One reason why KLD is undesirable is that it penalizes underestimation and overestimation differently, and it does so opaquely, i.e., in a way that is not explicit from its mathematical form and that cannot be controlled via an explicit parameter; another is that it is not very robust to outliers. See [15, §4.7 and §5.2] for a detailed discussion of these and other reasons.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_3">The set of 28 topic classes is flat, i.e., there is no hierarchy defined upon it.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_4">https://github.com/HLT-ISTI/QuaPy</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_5">Check the branch https://github.com/HLT-ISTI/QuaPy/tree/lequa2022</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_6">Calibration does not yield similar improvements for other methods such as PCC, PACC, and QuaNet, though. For this reason, we only calibrate the classifier for SLD.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_7">https://huggingface.co/docs/transformers/model_doc/roberta</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work has been supported by the SoBigData++ project, funded by the European Commission (Grant 871042) under the H2020 Programme INFRAIA-2019-1, and by the AI4Media project, funded by the European Commission (Grant 951911) under the H2020 Programme ICT-48-2020. The authors' opinions do not necessarily reflect those of the European Commission. We thank Alberto Barron Cedeño, Juan José del Coz, Preslav Nakov, and Paolo Rosso, for advice on how to best set up this lab.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Learning to quantify: Methods and applications (LQ</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Del Coz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/3459637.3482040</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 30th ACM International Conference on Knowledge Management (CIKM 2021)</title>
				<meeting>the 30th ACM International Conference on Knowledge Management (CIKM 2021)<address><addrLine>Gold Coast, AU</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021. 2021</date>
			<biblScope unit="page" from="4874" to="4875" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A review on quantification learning</title>
		<author>
			<persName><forename type="first">P</forename><surname>González</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Castaño</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">V</forename><surname>Chawla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Del Coz</surname></persName>
		</author>
		<idno type="DOI">10.1145/3117807</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Computing Surveys</title>
		<imprint>
			<biblScope unit="volume">50</biblScope>
			<biblScope unit="page">40</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Quantification via probability estimators</title>
		<author>
			<persName><forename type="first">A</forename><surname>Bella</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ferri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Hernández-Orallo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Ramírez-Quintana</surname></persName>
		</author>
		<idno type="DOI">10.1109/icdm.2010.75</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 11th IEEE International Conference on Data Mining (ICDM 2010)</title>
				<meeting>the 11th IEEE International Conference on Data Mining (ICDM 2010)<address><addrLine>Sydney, AU</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2010">2010</date>
			<biblScope unit="page" from="737" to="742" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Quantifying counts and costs via classification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Forman</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10618-008-0097-y</idno>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="164" to="206" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Enhanced probabilistic classify and count methods for multilabel text quantification</title>
		<author>
			<persName><forename type="first">R</forename><surname>Levin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Roitman</surname></persName>
		</author>
		<idno type="DOI">10.1145/3121050.3121083</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 7th ACM International Conference on the Theory of Information Retrieval (ICTIR 2017)</title>
				<meeting>the 7th ACM International Conference on the Theory of Information Retrieval (ICTIR 2017)<address><addrLine>Amsterdam, NL</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="229" to="232" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Quantification-oriented learning based on reliable classifiers</title>
		<author>
			<persName><forename type="first">J</forename><surname>Barranquero</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Díez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Del Coz</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patcog.2014.07.032</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="591" to="604" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Ordinal text quantification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Da San Martino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/2911451.2914749</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 39th ACM Conference on Research and Development in Information Retrieval (SIGIR 2016)</title>
				<meeting>the 39th ACM Conference on Research and Development in Information Retrieval (SIGIR 2016)<address><addrLine>Pisa, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="937" to="940" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Optimizing text quantifiers for multivariate loss functions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/2700406</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Knowledge Discovery and Data</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
	<note>Article 27</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Class distribution estimation based on the Hellinger distance</title>
		<author>
			<persName><forename type="first">V</forename><surname>González-Castro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Alaiz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Alegre</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ins.2012.05.028</idno>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<biblScope unit="volume">218</biblScope>
			<biblScope unit="page" from="146" to="164" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Quantification trees</title>
		<author>
			<persName><forename type="first">L</forename><surname>Milli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rossetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1109/icdm.2013.122</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th IEEE International Conference on Data Mining (ICDM 2013)</title>
				<meeting>the 13th IEEE International Conference on Data Mining (ICDM 2013)<address><addrLine>Dallas, US</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="528" to="536" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">V</forename><surname>Vapnik</surname></persName>
		</author>
		<title level="m">Statistical learning theory</title>
				<meeting><address><addrLine>New York, US</addrLine></address></meeting>
		<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="1998">1998</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A unifying view on dataset shift in classification</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Moreno-Torres</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Raeder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Alaíz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">V</forename><surname>Chawla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patcog.2011.06.019</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="521" to="530" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">Dataset shift in machine learning</title>
		<idno type="DOI">10.7551/mitpress/9780262170055.001.0001</idno>
		<editor>J. Quiñonero-Candela, M. Sugiyama, A. Schwaighofer, N. D. Lawrence</editor>
		<imprint>
			<date type="published" when="2009">2009</date>
			<publisher>The MIT Press</publisher>
			<pubPlace>Cambridge, US</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Sentiment quantification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="72" to="75" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Evaluation measures for quantification: An axiomatic approach</title>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10791-019-09363-y</idno>
	</analytic>
	<monogr>
		<title level="j">Information Retrieval Journal</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="255" to="288" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">A recurrent neural network for sentiment quantification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/3269206.3269287</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018)</title>
				<meeting>the 27th ACM International Conference on Information and Knowledge Management (CIKM 2018)<address><addrLine>Torino, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1775" to="1778" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">From classification to quantification in tweet sentiment analysis</title>
		<author>
			<persName><forename type="first">W</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1007/s13278-016-0327-z</idno>
	</analytic>
	<monogr>
		<title level="j">Social Network Analysis and Mining</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1" to="22" />
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Cross-lingual sentiment quantification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1109/MIS.2020.2979203</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Intelligent Systems</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="106" to="114" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Tweet sentiment quantification: An experimental re-evaluation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PLoS ONE</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
	<note>Forthcoming</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Quantification in social networks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Milli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Rossetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1109/dsaa.2015.7344845</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd IEEE International Conference on Data Science and Advanced Analytics (DSAA 2015)</title>
				<meeting>the 2nd IEEE International Conference on Data Science and Advanced Analytics (DSAA 2015)<address><addrLine>Paris, FR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Combining instance selection and self-training to improve data stream quantification</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">G</forename><surname>Maletzke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Moreira Dos Reis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">E</forename><surname>Batista</surname></persName>
		</author>
		<idno type="DOI">10.1186/s13173-018-0076-0</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of the Brazilian Computer Society</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="43" to="48" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">SemEval-2016 Task 4: Sentiment analysis in Twitter</title>
		<author>
			<persName><forename type="first">P</forename><surname>Nakov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ritter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Rosenthal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/s16-1001</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016)</title>
				<meeting>the 10th International Workshop on Semantic Evaluation (SemEval 2016)<address><addrLine>San Diego, US</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Overview of the 3rd Dialogue Breakdown Detection challenge</title>
		<author>
			<persName><forename type="first">R</forename><surname>Higashinaka</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Funakoshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Inaba</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Tsunomori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Takahashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Kaji</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 6th Dialog System Technology Challenge</title>
				<meeting>the 6th Dialog System Technology Challenge<address><addrLine>Long Beach, US</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Overview of the NTCIR-14 Short Text Conversation task: Dialogue Quality and Nugget Detection subtasks</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sakai</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 14th Workshop on NII Testbeds and Community for Information access Research (NTCIR 2019)</title>
				<meeting>the 14th Workshop on NII Testbeds and Community for Information access Research (NTCIR 2019)<address><addrLine>Tokyo, JP</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="289" to="315" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Overview of the NTCIR-15 Dialogue Evaluation task (DialEval-1)</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Zeng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Sakai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 15th Workshop on NII Testbeds and Community for Information access Research (NTCIR 2020)</title>
				<meeting>the 15th Workshop on NII Testbeds and Community for Information access Research (NTCIR 2020)<address><addrLine>Tokyo, JP</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="13" to="34" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Class and subclass probability re-estimation to adapt a classifier in the presence of concept drift</title>
		<author>
			<persName><forename type="first">R</forename><surname>Alaíz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Guerrero-Curieses</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Cid-Sueiro</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2011.03.019</idno>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">74</biblScope>
			<biblScope unit="page" from="2614" to="2623" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Class-prior estimation for learning from positive and unlabeled data</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">C</forename><surname>Du Plessis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Niu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sugiyama</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10994-016-5604-6</idno>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page" from="463" to="492" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Verbal autopsy methods with multiple causes of death</title>
		<author>
			<persName><forename type="first">G</forename><surname>King</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Lu</surname></persName>
		</author>
		<idno type="DOI">10.1214/07-sts247</idno>
	</analytic>
	<monogr>
		<title level="j">Statistical Science</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="78" to="91" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">The importance of calibration for estimating proportions from annotations</title>
		<author>
			<persName><forename type="first">D</forename><surname>Card</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
		<idno type="DOI">10.18653/v1/n18-1148</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2018)</title>
				<meeting>the 2018 Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2018)<address><addrLine>New Orleans, US</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="1636" to="1646" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">A method of automated nonparametric content analysis for social science</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">J</forename><surname>Hopkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>King</surname></persName>
		</author>
		<idno type="DOI">10.1111/j.1540-5907.2009.00428.x</idno>
	</analytic>
	<monogr>
		<title level="j">American Journal of Political Science</title>
		<imprint>
			<biblScope unit="volume">54</biblScope>
			<biblScope unit="page" from="229" to="247" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<monogr>
		<title level="m" type="main">Sampling uniformly from the unit simplex</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">W</forename><surname>Tromble</surname></persName>
		</author>
		<ptr target="https://www.cs.cmu.edu/~nasmith/papers/smith+tromble.tr04.pdf" />
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
		<respStmt>
			<orgName>Johns Hopkins University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report</note>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Adjusting the outputs of a classifier to new a priori probabilities: A simple procedure</title>
		<author>
			<persName><forename type="first">M</forename><surname>Saerens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Latinne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Decaestecker</surname></persName>
		</author>
		<idno type="DOI">10.1162/089976602753284446</idno>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page" from="21" to="41" />
			<date type="published" when="2002">2002</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Counting positives accurately despite inaccurate classification</title>
		<author>
			<persName><forename type="first">G</forename><surname>Forman</surname></persName>
		</author>
		<idno type="DOI">10.1007/11564096_55</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 16th European Conference on Machine Learning (ECML 2005)</title>
				<meeting>the 16th European Conference on Machine Learning (ECML 2005)<address><addrLine>Porto, PT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2005">2005</date>
			<biblScope unit="page" from="564" to="575" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Statistical comparisons of classifiers over multiple data sets</title>
		<author>
			<persName><forename type="first">J</forename><surname>Demšar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="1" to="30" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">Glove: Global vectors for word representation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Pennington</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Socher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">D</forename><surname>Manning</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)</title>
				<meeting>the 12th Conference on Empirical Methods in Natural Language Processing (EMNLP 2014)<address><addrLine>Doha, QA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="1532" to="1543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">QuaPy: A Python-based framework for quantification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/3459637.3482015</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 30th ACM International Conference on Knowledge Management (CIKM 2021)</title>
				<meeting>the 30th ACM International Conference on Knowledge Management (CIKM 2021)<address><addrLine>Gold Coast, AU</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="4534" to="4543" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">A critical reassessment of the Saerens-Latinne-Decaestecker algorithm for posterior probability adjustment</title>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Molinari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
		<idno type="DOI">10.1145/3433164</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Information Systems</title>
		<imprint>
			<biblScope unit="volume">39</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note>Article 19</note>
</biblStruct>

<biblStruct xml:id="b37">
	<monogr>
		<title level="m" type="main">RoBERTa: A robustly optimized BERT pretraining approach</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ott</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Du</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Levy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Stoyanov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1907.11692</idno>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">DortmundAI at LeQua 2022: Regularized SLD</title>
		<author>
			<persName><forename type="first">M</forename><surname>Senz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bunse</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">KULeuven at LeQua 2022: Model calibration in quantification learning</title>
		<author>
			<persName><forename type="first">T</forename><surname>Popordanoska</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">B</forename><surname>Blaschko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b40">
	<analytic>
		<title level="a" type="main">Active learning and the Saerens-Latinne-Decaestecker algorithm: An evaluation</title>
		<author>
			<persName><forename type="first">A</forename><surname>Molinari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Esuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2nd Joint Conference of the Information Retrieval Communities in Europe (CIRCLE 2022)</title>
				<meeting>the 2nd Joint Conference of the Information Retrieval Communities in Europe (CIRCLE 2022)<address><addrLine>Samatan, FR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b41">
	<analytic>
		<title level="a" type="main">Probabilistic outputs for support vector machines and comparison to regularized likelihood methods</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Platt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Large Margin Classifiers</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Smola</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Bartlett</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Schölkopf</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Schuurmans</surname></persName>
		</editor>
		<meeting><address><addrLine>Cambridge, MA</addrLine></address></meeting>
		<imprint>
			<publisher>The MIT Press</publisher>
			<date type="published" when="2000">2000</date>
			<biblScope unit="page" from="61" to="74" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b42">
	<analytic>
		<title level="a" type="main">UniLeiden at LeQua 2022: The first step in understanding the behaviour of the median sweep quantifier using continuous sweep</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kloos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">A</forename><surname>Meertens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Karch</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b43">
	<analytic>
		<title level="a" type="main">Quantifying trends accurately despite classifier error and class imbalance</title>
		<author>
			<persName><forename type="first">G</forename><surname>Forman</surname></persName>
		</author>
		<idno type="DOI">10.1145/1150402.1150423</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2006)</title>
				<meeting>the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2006)<address><addrLine>Philadelphia, US</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2006">2006</date>
			<biblScope unit="page" from="157" to="166" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b44">
	<analytic>
		<title level="a" type="main">UniOviedo(Team1) at LeQua 2022: Sample-based quantification using deep learning</title>
		<author>
			<persName><forename type="first">P</forename><surname>González</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b45">
	<analytic>
		<title level="a" type="main">(Team2) at LeQua 2022: Comparison of traditional quantifiers and a new method based on energy distance</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">J</forename><surname>Del Coz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Unioviedo</forename></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b46">
	<analytic>
		<title level="a" type="main">UniPadova at LeQua 2022: A preliminary study of a Tidyverse approach to quantification</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Di Nunzio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the 2022 Conference and Labs of the Evaluation Forum (CLEF 2022)</title>
				<meeting><address><addrLine>Bologna, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b47">
	<analytic>
		<title level="a" type="main">Re-assessing the &quot;classify and count&quot; quantification method</title>
		<author>
			<persName><forename type="first">A</forename><surname>Moreo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 43rd European Conference on Information Retrieval (ECIR 2021)</title>
				<meeting>the 43rd European Conference on Information Retrieval (ECIR 2021)<address><addrLine>Lucca, IT</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2021">2021</date>
			<biblScope unit="volume">II</biblScope>
			<biblScope unit="page" from="75" to="91" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
