<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Overview of the CLEF 2024 JOKER Task 1: Humour-aware Information Retrieval</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Liana</forename><surname>Ermakova</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Université de Bretagne Occidentale</orgName>
								<address>
									<settlement>HCTI</settlement>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Anne-Gwenn</forename><surname>Bosser</surname></persName>
							<affiliation key="aff1">
								<orgName type="laboratory">Lab-STICC CNRS UMR 6285</orgName>
								<orgName type="institution">École Nationale d&apos;Ingénieurs de Brest</orgName>
								<address>
									<country key="FR">France</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Tristan</forename><surname>Miller</surname></persName>
							<affiliation key="aff2">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Manitoba</orgName>
								<address>
									<settlement>Winnipeg</settlement>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
							<affiliation key="aff3">
								<orgName type="department">Austrian Research Institute for Artificial Intelligence (OFAI)</orgName>
								<address>
									<settlement>Vienna</settlement>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Adam</forename><surname>Jatowt</surname></persName>
							<affiliation key="aff4">
								<orgName type="institution">University of Innsbruck</orgName>
								<address>
									<country key="AT">Austria</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Overview of the CLEF 2024 JOKER Task 1: Humour-aware Information Retrieval</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">3471E0CD5234BB5357E08B8984380536</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>information retrieval</term>
					<term>wordplay</term>
					<term>puns</term>
					<term>computational humour</term>
					<term>wordplay detection</term>
					<term>test collection</term>
					<term>0000-0002-7598-7474 (L. Ermakova)</term>
					<term>0000-0002-0442-2660 (A. Bosser)</term>
					<term>0000-0002-0749-1100 (T. Miller)</term>
					<term>0000-0001-7235-0665 (A. Jatowt)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper presents the details of Task 1 of the JOKER-2024 Track, where the aim is to retrieve short humorous texts from an underlying document collection. The intended use case for this task is to search for a joke on a specific topic. This can be useful for humour researchers in the humanities, for second-language learners as a learning aid, for professional comedians as a writing aid, and for translators who might need to adapt certain jokes to other cultures. For this task, we provided a collection consisting of 61,268 documents, where 4,492 texts were humorous. Ten teams submitted 26 runs in total for this task.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>This paper presents details of Task 1 of the JOKER-2024 Track 1 , which was held as part of the 15th Conference and Labs of the Evaluation Forum (CLEF 2024) 2 . The overall objective of the JOKER track series <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3]</ref>, which began in 2022, is to facilitate collaboration among linguists, translators, and computer scientists to advance the development of automatic humour analysis. In each edition of the JOKER track, we construct and publish reusable, quality-controlled datasets to serve as training and test data for various humor processing tasks. In Task 1, participants build systems aiming to retrieve short humorous texts from a document collection based on a given query. For details on JOKER-2024's other two tasks, we refer the reader to their respective overviews <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b4">5]</ref>. Further information and insights are also presented in the Track's overview paper <ref type="bibr" target="#b0">[1]</ref>.</p><p>Search engines generally do not account for humour, ambiguity, or subversion of linguistic rules as features for selecting relevant documents to be returned. However, humour-aware retrieval, such as retrieval of wordplay-containing passages, can be useful for certain use cases or for user groups who appreciate or are interested in humorous qualities of text <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>To foster research in humour-aware information retrieval, in JOKER 2024 we have introduced a novel task that consists of retrieving short humorous texts from a document collection. The intended use case is to search for a humorous text on a particular topic. 
Besides users who especially like humour, this could be useful for writers, for humour researchers, for second-language learners as a learning aid, for advertisement copywriters, for professional comedians as a writing aid, or even for translators who might need to adapt certain jokes to other cultures. Formally, the objective of Task 1 is to retrieve short humorous texts from a document collection based on a given query. The retrieved texts should fulfill two criteria: to be relevant to the query, and to be humorous, which in our task means to be instances of wordplay. The intended use case is to search for a joke on a specific topic. For example, a search query of "math" would mean that the goal is to find math jokes, while the query "Tom" would mean that the goal is to find jokes about some person or entity named Tom. The test collection was built from the English corpora constructed within the previous edition of the CLEF JOKER track:</p><p>• JOKER 2023 Task 1 -pun detection <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9]</ref>;</p><p>• JOKER 2023 Task 2 -pun location and interpretation <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b8">9]</ref>;</p><p>• JOKER 2023 Task 3 -pun translation <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b10">11,</ref><ref type="bibr" target="#b8">9]</ref>.</p><p>This year, ten of the 22 active JOKER teams submitted 26 runs for Task 1, out of the 103 runs submitted to the track overall (see run statistics in Table <ref type="table" target="#tab_0">1</ref>).</p><p>This paper presents an overview of our data preparation process in Section 2. In Section 3, we describe the participants' runs, and we present the analysis of their results in Section 4. 
We provide some concluding remarks in Section 5.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Dataset</head><p>The data for this task extends that originally used for JOKER-2023's tasks on wordplay detection in English <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b8">9]</ref>. Those texts were annotated according to whether they are humorous; we supplemented this data with texts from Task 3 of JOKER-2023 <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b8">9]</ref>, used for humour translation, and with some new wordplay instances. We further extended the data with text passages collected from non-humorous sources, as well as with data automatically generated in relation to the queries. Specifically, the non-humorous data related to the queries was obtained from the following sources:</p><p>• Negative examples from the JOKER corpus.</p><p>• Wikipedia extracts returned for the queries. We used the Wikipedia Python package <ref type="foot" target="#foot_0">3</ref> for this and then collected sentences to form non-humorous text instances.</p><p>• Descriptions of queries generated by Meta's Llama 2 with 7B parameters <ref type="bibr" target="#b17">[18]</ref>.</p><p>In total, we provided our participants with a collection consisting of 61,268 documents, of which 4,492 texts are humorous. The latter encompass 3,507 texts from JOKER 2023 and 985 new wordplay instances. The remaining 56,776 texts are non-humorous: 4,954 negative examples taken from the JOKER 2023 wordplay detection corpus, 12,523 texts generated using Llama 2, and 39,299 sentences from Wikipedia extracts. The texts were typically one or two sentences long and were released in the form of JSON files. 
For creating the set of queries, we harnessed data from CLEF 2023 JOKER Task 2 on pun location and interpretation <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b8">9]</ref>, and in particular the locations of wordplay in texts, i.e. the words or phrases carrying multiple meanings. In CLEF 2023 JOKER Task 2, puns were either homographic (identical spelling, as in I used to be a banker but I lost interest) or heterographic (i.e. exploiting paronymy, as propane/profane in When the church bought gas for their annual barbecue, proceeds went from the sacred to the propane). To expand the queries, we used the semantic annotations of pun locations (pun interpretation), i.e. pairs of lemmatized word sets containing the synonyms (or, if absent, hypernyms) of the two words involved in the pun, excluding any that share the same spelling as the pun. The lists of query expansions were checked manually. A document was deemed humorous and relevant to a query if it came from the positive examples of the JOKER corpus and included the query term or one of its expansions.</p><p>Twelve queries with their relevance judgments (qrels) were created for training or validating participants' systems. Another 45 queries were created as a test set. <ref type="foot" target="#foot_1">4</ref> For all 57 queries (training and test combined), 11,831 documents were deemed topically relevant. We considered a document topically relevant to a given query if it contained the query term, its synonyms, or its hypernyms. Among the topically relevant documents, 1,730 were considered humorous. Descriptive statistics of relevant humorous texts per query are given in Table <ref type="table" target="#tab_1">2</ref>, while Figure <ref type="figure">1</ref> presents a histogram of the number of relevant humorous texts per query. The average number of relevant humorous texts per query is 30, while the median is 18.</p></div>
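The relevance rule described above (a document counts as topically relevant if it contains the query term or one of its expansions) can be sketched in a few lines of Python. The expansion sets below are hypothetical stand-ins for the manually checked synonym/hypernym lists:

```python
# Hypothetical expansion lists standing in for the real, manually checked ones.
expansions = {
    "math": {"math", "mathematics", "arithmetic"},
    "bank": {"bank", "banker", "interest"},
}

def topically_relevant(query: str, text: str) -> bool:
    """A document is topically relevant if it contains the query term
    or one of its expansions (case-insensitive, punctuation stripped)."""
    terms = expansions.get(query, {query})
    tokens = {tok.strip(".,!?\"'").lower() for tok in text.split()}
    return any(term in tokens for term in terms)

print(topically_relevant("bank", "I used to be a banker but I lost interest"))  # True
```

Note that this only captures topical relevance; whether the document is additionally humorous is decided by its provenance (positive examples of the JOKER corpus).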
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Evaluation Measures</head><p>We used a set of standard information retrieval metrics:</p><p>map: mean average precision, i.e. the mean of the average precision scores over queries</p><p>ndcg: normalised discounted cumulative gain, where the gain of each document, based on its relevance, is discounted logarithmically by its position in the ranking and normalised over the ideal ranking</p><p>P1, P5, P10: precision, i.e. the ability of a system to present only relevant items, at different numbers of top-ranked results</p><p>R5, R10, R100, R1000: recall, measuring the ability of a system to find all or many relevant items, at different numbers of top-ranked results</p><p>bpref: binary preference, a sum-based metric showing how many relevant documents are ranked before irrelevant ones</p><p>MRR: mean reciprocal rank, the average over queries of the multiplicative inverse of the rank of the first correct answer</p><p>We used the pyterrier platform <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20]</ref> implementation of these metrics.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Input format</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.1.">Documents</head><p>The document collection consists of JSON files with the following fields:</p><p>docid: a unique document identifier</p><p>text: the text of the document</p><p>Input example:</p><p>[
  { "docid": "3", "text": "The organic compound primarily responsible for the characteristic odor of musk is muscone." },
  { "docid": "51135", "text": "I've inherited a fortune, said Tom, willfully" },
  { "docid": "591", "text": "My name is Will, I'm a lawyer." }
]</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.2.">Queries</head><p>The train and test queries are also JSON files, this time with the following fields:</p><p>qid: a unique query identifier from the input file</p><p>query: the search query</p><p>Input example:</p><p>[
  {"qid": "qid_train_1", "query": "steps"},
  {"qid": "qid_train_3", "query": "math"},
  {"qid": "qid_train_4", "query": "Tom"}
]</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.3.">Qrels</head><p>Finally, we provide training/validation data in the form of JSON qrels files with the following fields:</p><p>qid: a unique query identifier from the query input file</p><p>docid: a unique document identifier from the corpus</p><p>qrel: an indication (1 or 0) of whether the document docid is relevant to the query qid and is a wordplay instance</p><p>Example of a qrels file:</p><p>[
  { "qid": "qid_train_0", "docid": "27260", "qrel": 0 },
  { "qid": "qid_train_0", "docid": "591", "qrel": 1 },
  { "qid": "qid_train_0", "docid": "51135", "qrel": 1 }
]</p></div>
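Given a ranked run and qrels like the example above, per-query metrics such as P@k, R@k, and reciprocal rank (Section 2.1) can be computed in a few lines of Python. This is a minimal sketch for illustration, not the official pyterrier-based evaluation:

```python
def precision_at_k(ranking, relevant, k):
    """Fraction of the top-k retrieved docids that are relevant."""
    return sum(1 for d in ranking[:k] if d in relevant) / k

def recall_at_k(ranking, relevant, k):
    """Fraction of all relevant docids found in the top k."""
    return sum(1 for d in ranking[:k] if d in relevant) / len(relevant)

def reciprocal_rank(ranking, relevant):
    """1 / rank of the first relevant docid, or 0 if none is retrieved."""
    for i, d in enumerate(ranking, start=1):
        if d in relevant:
            return 1 / i
    return 0.0

# Docids from the qrels example above: 591 and 51135 are relevant wordplay.
ranking = ["27260", "591", "51135", "3"]
relevant = {"591", "51135"}
print(precision_at_k(ranking, relevant, 2))  # 0.5
print(reciprocal_rank(ranking, relevant))    # 0.5
```

Averaging these per-query values over all queries yields the reported P@k, R@k, and MRR figures.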
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Output format</head><p>We required results to be provided in a JSON format with the following fields:</p><p>run_id: the run ID, starting with &lt;team_id&gt;_&lt;task_id&gt;_&lt;method_used&gt;, e.g. UBO_task_1_TFIDF</p><p>manual: a flag (0 or 1) indicating whether the run is manual</p><p>qid: a unique query identifier from the input file</p><p>docid: an identifier of the document retrieved from the corpus for the query qid</p><p>rank: the rank of the retrieved document</p><p>score: the normalised document relevance score (on a [0, 1] scale)</p><p>For each query, the maximum allowed number of distinct documents (docid field) is 1000. A sample output file is as follows:</p><p>[
  { "run_id": "team1_task_1_TFIDF", "manual": 0, "qid": "qid_train_0", "docid": "27260", "rank": 1, "score": 0.97 },
  { "run_id": "team1_task_1_TFIDF", "manual": 0, "qid": "qid_train_0", "docid": "591", "rank": 2, "score": 0.8 },
  { "run_id": "team1_task_1_TFIDF", "manual": 0, "qid": "qid_train_1", "docid": "27261", "rank": 1, "score": 0.7 }
]</p></div>
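The format constraints above (field names, the 0/1 manual flag, the [0, 1] score range, and the limit of 1000 distinct documents per query) can be checked mechanically before submission. The validator below is a hypothetical helper written for this overview, not an official tool:

```python
import json

def validate_run(entries, max_docs_per_query=1000):
    """Check run entries against the Task 1 output-format constraints."""
    docs_per_query = {}
    for e in entries:
        assert set(e) == {"run_id", "manual", "qid", "docid", "rank", "score"}
        assert e["manual"] in (0, 1)          # manual flag is 0 or 1
        assert 0.0 <= e["score"] <= 1.0       # normalised relevance score
        docs_per_query.setdefault(e["qid"], set()).add(e["docid"])
    assert all(len(d) <= max_docs_per_query for d in docs_per_query.values())
    return True

run = json.loads("""[
  {"run_id": "team1_task_1_TFIDF", "manual": 0, "qid": "qid_train_0",
   "docid": "27260", "rank": 1, "score": 0.97},
  {"run_id": "team1_task_1_TFIDF", "manual": 0, "qid": "qid_train_0",
   "docid": "591", "rank": 2, "score": 0.8}
]""")
print(validate_run(run))  # True
```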
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Participants' Approaches</head><p>In total, ten teams submitted 26 runs (see run statistics in Table <ref type="table" target="#tab_0">1</ref>). The approaches used by the participating teams are as follows:</p><p>• The jokester team <ref type="bibr" target="#b11">[12]</ref> provided a single run based on an approach that uses TF-IDF for feature weighting and a logistic regression classifier.</p><p>• The Arampatzis team 5 provided ten runs, testing a range of diverse models: TF-IDF, LSTM, Random Forest, XGBoost, LightGBM (Light Gradient-Boosting Machine), SVM, Decision Tree, Gaussian Naive Bayes, kNN, and neural networks.</p><p>• A run submitted by the LIS team <ref type="bibr" target="#b12">[13]</ref> was based on the T5 transformer model, query processing, expanding terms with synonyms collected from WordNet, choosing the optimal tokenisation method for queries and documents, and then selecting the best threshold for the similarity score. Finally, a pre-trained model was applied to filter texts containing puns.</p><p>• The Frane team submitted one run, using fine-tuned BERT models to estimate humorousness together with well-known retrieval models such as BM25.</p><p>• The Dajana&amp;Kathy team processed the texts using stemming, lemmatisation, and stop word removal, and employed TF-IDF and BM25, together with a fine-tuned BERT model, for their submitted run.</p><p>• The AB&amp;DPV team <ref type="bibr" target="#b13">[14]</ref> used TF-IDF to rank humorous texts within the collection for their run.</p><p>• The RubyAiYoungTeam's run was submitted without any description of the employed method.</p><p>• The Petra&amp;Regina team <ref type="bibr" target="#b14">[15]</ref> submitted a single run employing logistic regression with TF-IDF-vectorised documents and queries and iterative relevance scoring. 
• The Tomislav&amp;Rowan team <ref type="bibr" target="#b15">[16]</ref> employed logistic regression with TF-IDF-vectorised documents to create a single run.</p><p>• The UAms team <ref type="bibr" target="#b16">[17]</ref> provided two runs based on BM25 and BM25+RM3 with default settings. Two further runs employed neural cross-encoder reranking of the latter runs, based on zero-shot application of an MSMARCO-trained ranker. The last four runs were based on two trained versions of the SimpleT5 model, one with a batch size of 6 and the other with a batch size of 8, and on a BERT model trained using LoRA.</p><p>Note that we do not detail the zero-scored runs, nor the runs with problems that we could not resolve. </p></div>
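Several of the approaches above rely on TF-IDF ranking. As a self-contained illustration of the idea (the actual systems used libraries such as scikit-learn or pyterrier), a toy cosine-similarity TF-IDF ranker over three simplified documents from the corpus example might look like this:

```python
import math
from collections import Counter

# Three lower-cased, simplified documents from the corpus example.
docs = {
    "3": "the organic compound primarily responsible for the odor of musk is muscone",
    "51135": "i have inherited a fortune said tom willfully",
    "591": "my name is will i am a lawyer",
}

tokenised = {d: text.split() for d, text in docs.items()}
n_docs = len(docs)
df = Counter(t for toks in tokenised.values() for t in set(toks))
idf = {t: math.log(n_docs / df[t]) for t in df}  # plain (unsmoothed) idf

def tfidf_vector(tokens):
    tf = Counter(tokens)
    return {t: tf[t] * idf.get(t, 0.0) for t in tf}

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

vectors = {d: tfidf_vector(toks) for d, toks in tokenised.items()}

def rank(query):
    q = tfidf_vector(query.lower().split())
    return sorted(docs, key=lambda d: cosine(q, vectors[d]), reverse=True)

print(rank("tom")[0])  # "51135", the only document mentioning Tom
```

Library implementations add smoothing and normalisation variants; this sketch only conveys the core weighting scheme, which by itself is entirely humour-agnostic.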
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Evaluation on Test Data</head><p>The majority of submitted runs had some issues: for example, some runs covered only part of the data, and some runs were for the training data only. We tried to solve these problems whenever possible.</p><p>In Table <ref type="table" target="#tab_3">3</ref> we show the main results for participants' runs on the test data. We make the following observations based on the results:</p><p>• First, in general, both precision and recall are extremely low. The low precision is due to the presence of query terms in non-humorous texts, which retrieval systems treat as topically relevant. The low recall is probably related to the shortness of the texts and to the fact that in many texts, both humorous and topically relevant, the query terms do not appear.</p><p>• The runs based on RM3 pseudo-relevance-feedback query expansion outperform the BM25 baselines.</p><p>• Cross-encoder rerankers do not exhibit better performance than the baseline models.</p><p>• Filtering trained on the wordplay detection task considerably improved systems' results.</p><p>• Simple solutions, such as TF-IDF with logistic regression, remain quite competitive.</p><p>• Using the T5 and BERT language models with RM3 is one of the best approaches in terms of both precision and recall.</p><p>To evaluate the errors produced by the rankers, we compared the results with those obtained using topical relevance alone, disregarding the humorousness of the texts. Topical relevance results on the test data are given in Table <ref type="table" target="#tab_4">4</ref>. Traditional models without filtering, such as RM3, TF-IDF, and BM25, showed high performance on topical relevance alone, with MAP &gt; 0.35 and NDCG &gt; 0.55, but the official results, which take the humorousness of the texts into account, drop to MAP &lt; 0.1 and NDCG &lt; 0.3. Post-filtering applied with different ranking models improved MAP by up to 50% (cf. UAms_rm3_T5_Filter2 and UAms_Anserini_rm3) according to the official results but lowered topical relevance. UAms_bm25_BERT_Filter demonstrated high scores according to both the official results and topical relevance alone.</p></div>
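The "up to 50%" figure can be recomputed directly from the MAP values in Table 3 (UAms_Anserini_rm3 at 0.08 without filtering versus UAms_rm3_T5_Filter2 at 0.12 with filtering):

```python
# Relative MAP improvement from post-filtering, using Table 3 values.
map_unfiltered = 0.08  # UAms_Anserini_rm3
map_filtered = 0.12    # UAms_rm3_T5_Filter2
improvement = (map_filtered - map_unfiltered) / map_unfiltered
print(f"{improvement:.0%}")  # 50%
```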
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Evaluation on Training Data</head><p>Here we also report the results on the training data in order to provide additional insights into the performance and characteristics of the different approaches. Table <ref type="table" target="#tab_5">5</ref> shows the performance based on the submitted runs.</p><p>Looking at the results, we can make the following observations:</p><p>• While precision is quite high, recall still poses many challenges, even on the training data. This may further support the hypothesis that low recall arises from the absence of query terms in the relatively short texts.</p><p>• The approaches using decision trees, and relatively standard approaches like SVM and kNN, achieve the best results. However, these results are from a team (Arampatzis) that did not submit a system description paper, so they should be treated with caution.</p><p>• Considering the remaining results, the ordering of the best runs is similar to that on the test data.</p><p>• Considering topical relevance alone on the training data (see Table <ref type="table" target="#tab_6">6</ref>), we in general observe trends similar to those on the test set, where unfiltered runs tend to score higher on topical relevance alone but drop significantly in the official ranking.</p><p>• The filtered runs exhibited identical scores in the official ranking and for topical relevance alone, indicating that they retrieved (almost) exclusively humorous documents. However, the relative ranking between filtered and unfiltered runs differs, except for the Arampatzis runs, which are at the top of both tables.</p><p>• The topical relevance scores on the test and training data are similar, but the ranking that considers both topical relevance and humour is nearly twice as low on the test data, indicating potential overfitting in humour classification (cf. the UAms runs).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>This paper has given an overview and discussed the results of Task 1 of the JOKER-2024 challenge on the retrieval of humorous texts. Based on the data for wordplay detection and interpretation previously constructed within the CLEF JOKER track [8, 10, 2, 9], we constructed a unique reusable test collection for wordplay retrieval in English.</p><p>Ten participating teams submitted 26 runs in total for Task 1. The teams applied diverse methods, ranging from traditional rankers such as TF-IDF, BM25, and RM3, to cross-encoders with and without post-filtering based on classical machine learning methods (logistic regression and SVMs), to more modern models, including SimpleT5 and BERT.</p><p>The participants' results confirm that humour-oriented information retrieval remains a rather challenging task, with both precision and recall being extremely low. Filtering trained on the wordplay detection task significantly improved the systems' results. However, while topical relevance scores on the test and training data are similar, the ranking that considers both topical relevance and humour is nearly twice as low on the test data, suggesting potential overfitting in humour classification.</p><p>In general, our results confirm that retrieval models are humour-agnostic and that humour detection is still a challenge for machine learning models and LLMs. Developing new test collections, including ones for non-English languages, could help address this issue.</p><p>Additional information on the track is available on the JOKER website: https://www.joker-project.com/</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Statistics on the runs submitted to the CLEF JOKER 2024 Task 1</figDesc><table><row><cell>Team</cell><cell># of runs</cell></row><row><cell>jokester [12]</cell><cell>1</cell></row><row><cell>LIS [13]</cell><cell>1</cell></row><row><cell>Arampatzis</cell><cell>10</cell></row><row><cell>Frane</cell><cell>1</cell></row><row><cell>Dajana&amp;Kathy</cell><cell>1</cell></row><row><cell>AB&amp;DPV [14]</cell><cell>1</cell></row><row><cell>RubyAiYoungTeam</cell><cell>1</cell></row><row><cell>Petra&amp;Regina [15]</cell><cell>1</cell></row><row><cell>Tomislav&amp;Rowan [16]</cell><cell>1</cell></row><row><cell>UAms [17]</cell><cell>8</cell></row><row><cell>Total</cell><cell>26</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2:</head><label>2</label><figDesc>Statistics of relevant humorous texts per query</figDesc><table><row><cell>count</cell><cell>57</cell></row><row><cell>mean</cell><cell>30</cell></row><row><cell>std</cell><cell>43</cell></row><row><cell>min</cell><cell>1</cell></row><row><cell>25%</cell><cell>8</cell></row><row><cell>50%</cell><cell>18</cell></row><row><cell>75%</cell><cell>38</cell></row><row><cell>max</cell><cell>281</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><label>1</label><figDesc>Histogram of the number of relevant humorous texts per query</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 3</head><label>3</label><figDesc>Results on the test data. (Boldface indicates the best result per metric.)</figDesc><table><row><cell>run ID</cell><cell cols="2">map ndcg</cell><cell>R5</cell><cell cols="5">R10 R100 R1000 bpref MRR</cell><cell>P1</cell><cell>P5</cell><cell>P10</cell></row><row><cell>UAms_rm3_T5_Filter2</cell><cell cols="4">0.12 0.28 0.09 0.15</cell><cell>0.36</cell><cell>0.43</cell><cell>0.18</cell><cell>0.26</cell><cell>0.13</cell><cell>0.11</cell><cell>0.13</cell></row><row><cell>UAms_rm3_BERT_Filter</cell><cell>0.12</cell><cell>0.27</cell><cell cols="2">0.09 0.14</cell><cell>0.35</cell><cell>0.42</cell><cell>0.16</cell><cell>0.27</cell><cell>0.16</cell><cell>0.11</cell><cell>0.12</cell></row><row><cell>UAms_rm3_T5_Filter1</cell><cell>0.11</cell><cell>0.27</cell><cell cols="2">0.09 0.15</cell><cell>0.36</cell><cell>0.42</cell><cell>0.16</cell><cell>0.23</cell><cell>0.11</cell><cell>0.09</cell><cell>0.11</cell></row><row><cell cols="2">UAms_bm25_BERT_Filter 0.09</cell><cell>0.24</cell><cell>0.06</cell><cell>0.12</cell><cell>0.37</cell><cell>0.40</cell><cell>0.12</cell><cell>0.19</cell><cell>0.09</cell><cell>0.05</cell><cell>0.08</cell></row><row><cell>AB&amp;DPV_TFIDF</cell><cell>0.09</cell><cell>0.24</cell><cell>0.07</cell><cell>0.13</cell><cell>0.33</cell><cell>0.37</cell><cell>0.10</cell><cell>0.25</cell><cell>0.13</cell><cell cols="2">0.12 0.14</cell></row><row><cell>UAms_Anserini_rm3</cell><cell>0.08</cell><cell>0.27</cell><cell>0.06</cell><cell>0.08</cell><cell>0.38</cell><cell>0.50</cell><cell>0.09</cell><cell>0.20</cell><cell>0.11</cell><cell>0.06</cell><cell>0.06</cell></row><row><cell>jokester_TFIDF_LogRegr</cell><cell>0.08</cell><cell>0.19</cell><cell cols="2">0.09 0.09</cell><cell>0.10</cell><cell>0.16</cell><cell>0.21</cell><cell cols="4">0.51 0.44 0.23 
0.14</cell></row><row><cell>UAms_Anserini_bm25</cell><cell>0.08</cell><cell>0.24</cell><cell>0.06</cell><cell>0.08</cell><cell>0.37</cell><cell>0.42</cell><cell>0.09</cell><cell>0.19</cell><cell>0.11</cell><cell>0.05</cell><cell>0.06</cell></row><row><cell>UAms_bm25_CE100</cell><cell>0.04</cell><cell>0.17</cell><cell>0.03</cell><cell>0.04</cell><cell>0.37</cell><cell>0.37</cell><cell>0.06</cell><cell>0.08</cell><cell>0.00</cell><cell>0.04</cell><cell>0.03</cell></row><row><cell>UAms_rm3_CE100</cell><cell>0.04</cell><cell>0.18</cell><cell>0.03</cell><cell>0.04</cell><cell>0.38</cell><cell>0.38</cell><cell>0.06</cell><cell>0.07</cell><cell>0.00</cell><cell>0.04</cell><cell>0.03</cell></row><row><cell>LIS_MiniLM-T5</cell><cell>0.02</cell><cell>0.05</cell><cell>0.03</cell><cell>0.04</cell><cell>0.05</cell><cell>0.05</cell><cell>0.05</cell><cell>0.13</cell><cell>0.04</cell><cell>0.06</cell><cell>0.04</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>Topical relevance results on the test data.</figDesc><table><row><cell>run ID</cell><cell cols="2">map ndcg</cell><cell>R5</cell><cell cols="5">R10 R100 R1000 bpref MRR</cell><cell>P1</cell><cell>P5</cell><cell>P10</cell></row><row><cell>UAms_Anserini_rm3</cell><cell>0.37</cell><cell>0.60</cell><cell cols="2">0.06 0.10</cell><cell>0.39</cell><cell>0.64</cell><cell>0.64</cell><cell>0.82</cell><cell>0.73 0.61 0.61</cell></row><row><cell>AB&amp;DPV_TFIDF</cell><cell>0.36</cell><cell>0.53</cell><cell cols="2">0.07 0.12</cell><cell>0.36</cell><cell>0.50</cell><cell>0.50</cell><cell>0.83</cell><cell>0.73 0.69 0.69</cell></row><row><cell>UAms_Anserini_bm25</cell><cell>0.35</cell><cell>0.55</cell><cell cols="2">0.07 0.11</cell><cell>0.38</cell><cell>0.56</cell><cell>0.56</cell><cell>0.79</cell><cell>0.64 0.61 0.60</cell></row><row><cell cols="2">UAms_bm25_BERT_Filter 0.30</cell><cell>0.48</cell><cell cols="2">0.07 0.11</cell><cell>0.35</cell><cell>0.46</cell><cell>0.46</cell><cell>0.77</cell><cell>0.62 0.62 0.60</cell></row><row><cell>UAms_rm3_T5_Filter1</cell><cell>0.25</cell><cell>0.44</cell><cell cols="2">0.06 0.10</cell><cell>0.30</cell><cell>0.40</cell><cell>0.40</cell><cell>0.86</cell><cell>0.78 0.69 0.63</cell></row><row><cell>UAms_rm3_CE100</cell><cell>0.22</cell><cell>0.40</cell><cell cols="2">0.05 0.10</cell><cell>0.39</cell><cell>0.39</cell><cell>0.39</cell><cell>0.79</cell><cell>0.64 0.56 0.55</cell></row><row><cell>UAms_rm3_BERT_Filter</cell><cell>0.22</cell><cell>0.39</cell><cell cols="2">0.06 0.09</cell><cell>0.27</cell><cell>0.34</cell><cell>0.34</cell><cell>0.84</cell><cell>0.76 0.68 0.61</cell></row><row><cell>UAms_bm25_CE100</cell><cell>0.22</cell><cell>0.39</cell><cell cols="2">0.05 0.10</cell><cell>0.38</cell><cell>0.38</cell><cell>0.38</cell><cell>0.78</cell><cell>0.62 0.56 
0.55</cell></row><row><cell>UAms_rm3_T5_Filter2</cell><cell>0.22</cell><cell>0.38</cell><cell cols="2">0.06 0.10</cell><cell>0.27</cell><cell>0.34</cell><cell>0.34</cell><cell>0.80</cell><cell>0.64 0.71 0.63</cell></row><row><cell>jokester_TFIDF_LogRegr</cell><cell>0.03</cell><cell>0.09</cell><cell cols="2">0.03 0.03</cell><cell>0.04</cell><cell>0.05</cell><cell>0.07</cell><cell>0.63</cell><cell>0.62 0.39 0.24</cell></row><row><cell>LIS_MiniLM-T5</cell><cell>0.01</cell><cell>0.05</cell><cell cols="2">0.02 0.02</cell><cell>0.03</cell><cell>0.03</cell><cell>0.03</cell><cell>0.33</cell><cell>0.18 0.20 0.15</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_5"><head>Table 5</head><label>5</label><figDesc>Results on the training data. (Boldface indicates the best result per metric.)</figDesc><table><row><cell>run_id</cell><cell cols="2">map ndcg</cell><cell>R5</cell><cell cols="5">R10 R100 R1000 bpref MRR</cell><cell>P1</cell><cell>P5</cell><cell>P10</cell></row><row><cell>Arampatzis_DecisionTree</cell><cell cols="2">0.40 0.55</cell><cell cols="2">0.24 0.30</cell><cell>0.44</cell><cell>0.45</cell><cell>0.42</cell><cell cols="4">0.92 0.92 0.68 0.53</cell></row><row><cell>Arampatzis_SVM</cell><cell>0.36</cell><cell>0.52</cell><cell cols="2">0.25 0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.39</cell><cell>0.83</cell><cell cols="3">0.75 0.68 0.52</cell></row><row><cell>Arampatzis_kNN</cell><cell>0.36</cell><cell>0.50</cell><cell>0.23</cell><cell>0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.38</cell><cell>0.71</cell><cell>0.50</cell><cell>0.60</cell><cell>0.51</cell></row><row><cell>Arampatzis_GaussianNB</cell><cell>0.35</cell><cell>0.50</cell><cell>0.24</cell><cell>0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.38</cell><cell>0.72</cell><cell>0.58</cell><cell>0.63</cell><cell>0.51</cell></row><row><cell>UAms_rm3_T5_Filter2</cell><cell>0.23</cell><cell>0.39</cell><cell>0.14</cell><cell>0.25</cell><cell>0.44</cell><cell>0.52</cell><cell>0.35</cell><cell>0.34</cell><cell>0.17</cell><cell>0.28</cell><cell>0.28</cell></row><row><cell>UAms_rm3_BERT_Filter</cell><cell>0.23</cell><cell>0.42</cell><cell>0.12</cell><cell>0.23</cell><cell>0.50</cell><cell>0.60</cell><cell>0.36</cell><cell>0.37</cell><cell>0.17</cell><cell>0.23</cell><cell>0.23</cell></row><row><cell>UAms_rm3_T5_Filter1</cell><cell>0.21</cell><cell>0.37</cell><cell>0.13</cell><cell>0.24</cell><cell>0.40</cell><cell>0.49</cell><cell>0.29</cell><cell>0.38</cell><cell>0.25</cell><cell>0.25</cell><cell>0.27</cell></row><row><cell>UAms_bm25_BERT_Filter</cell><cell>0.19</cell><cell>0.37</cell><cell>0.07</cell><cell>0.19</cell><cell>0.49</cell><cell>0.59</cell><cell>0.27</cell><cell>0.22</cell><cell>0.08</cell><cell>0.12</cell><cell>0.18</cell></row><row><cell>UAms_Anserini_rm3</cell><cell>0.17</cell><cell>0.37</cell><cell>0.09</cell><cell>0.18</cell><cell>0.45</cell><cell>0.63</cell><cell>0.30</cell><cell>0.24</cell><cell>0.08</cell><cell>0.17</cell><cell>0.18</cell></row><row><cell cols="2">Arampatzis_NeuralNetwork 0.17</cell><cell>0.34</cell><cell>0.09</cell><cell>0.17</cell><cell>0.43</cell><cell>0.45</cell><cell>0.14</cell><cell>0.41</cell><cell>0.33</cell><cell>0.28</cell><cell>0.25</cell></row><row><cell>Arampatzis_LSTM</cell><cell>0.17</cell><cell>0.33</cell><cell>0.09</cell><cell>0.19</cell><cell>0.44</cell><cell>0.45</cell><cell>0.11</cell><cell>0.20</cell><cell>0.08</cell><cell>0.18</cell><cell>0.19</cell></row><row><cell>ABDPV_TFIDF</cell><cell>0.17</cell><cell>0.34</cell><cell>0.07</cell><cell>0.14</cell><cell>0.39</cell><cell>0.50</cell><cell>0.21</cell><cell>0.26</cell><cell>0.17</cell><cell>0.15</cell><cell>0.16</cell></row><row><cell>UAms_Anserini_bm25</cell><cell>0.16</cell><cell>0.35</cell><cell>0.07</cell><cell>0.17</cell><cell>0.46</cell><cell>0.60</cell><cell>0.24</cell><cell>0.19</cell><cell>0.08</cell><cell>0.12</cell><cell>0.16</cell></row><row><cell>jokester_TFIDF_LogRegr</cell><cell>0.16</cell><cell>0.34</cell><cell>0.11</cell><cell>0.12</cell><cell>0.14</cell><cell>0.36</cell><cell>0.49</cell><cell>0.59</cell><cell>0.58</cell><cell>0.30</cell><cell>0.20</cell></row><row><cell>UAms_rm3_CE100</cell><cell>0.07</cell><cell>0.22</cell><cell>0.01</cell><cell>0.03</cell><cell>0.45</cell><cell>0.45</cell><cell>0.09</cell><cell>0.12</cell><cell>0.00</cell><cell>0.08</cell><cell>0.09</cell></row><row><cell>UAms_bm25_CE100</cell><cell>0.07</cell><cell>0.22</cell><cell>0.01</cell><cell>0.03</cell><cell>0.46</cell><cell>0.46</cell><cell>0.09</cell><cell>0.12</cell><cell>0.00</cell><cell>0.08</cell><cell>0.08</cell></row><row><cell>L
IS_MiniLM-T5</cell><cell>0.00</cell><cell>0.01</cell><cell>0.00</cell><cell>0.00</cell><cell>0.01</cell><cell>0.01</cell><cell>0.01</cell><cell>0.01</cell><cell>0.00</cell><cell>0.00</cell><cell>0.00</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_6"><head>Table 6</head><label>6</label><figDesc>Topical relevance results on the training data.</figDesc><table>
<row><cell>run_id</cell><cell>map</cell><cell>ndcg</cell><cell>R5</cell><cell>R10</cell><cell>R100</cell><cell>R1000</cell><cell>bpref</cell><cell>MRR</cell><cell>P1</cell><cell>P5</cell><cell>P10</cell></row>
<row><cell>Arampatzis_DecisionTree</cell><cell>0.40</cell><cell>0.55</cell><cell>0.24</cell><cell>0.30</cell><cell>0.44</cell><cell>0.45</cell><cell>0.42</cell><cell>0.92</cell><cell>0.92</cell><cell>0.68</cell><cell>0.53</cell></row>
<row><cell>AB&amp;DPV_TFIDF</cell><cell>0.38</cell><cell>0.56</cell><cell>0.08</cell><cell>0.13</cell><cell>0.36</cell><cell>0.58</cell><cell>0.58</cell><cell>0.72</cell><cell>0.50</cell><cell>0.67</cell><cell>0.65</cell></row>
<row><cell>Arampatzis_SVM</cell><cell>0.36</cell><cell>0.52</cell><cell>0.25</cell><cell>0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.39</cell><cell>0.83</cell><cell>0.75</cell><cell>0.68</cell><cell>0.52</cell></row>
<row><cell>Arampatzis_kNN</cell><cell>0.36</cell><cell>0.50</cell><cell>0.23</cell><cell>0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.38</cell><cell>0.71</cell><cell>0.50</cell><cell>0.60</cell><cell>0.51</cell></row>
<row><cell>UAms_Anserini_rm3</cell><cell>0.35</cell><cell>0.58</cell><cell>0.05</cell><cell>0.09</cell><cell>0.37</cell><cell>0.67</cell><cell>0.67</cell><cell>0.73</cell><cell>0.58</cell><cell>0.58</cell><cell>0.52</cell></row>
<row><cell>UAms_Anserini_bm25</cell><cell>0.35</cell><cell>0.57</cell><cell>0.06</cell><cell>0.11</cell><cell>0.37</cell><cell>0.65</cell><cell>0.65</cell><cell>0.66</cell><cell>0.50</cell><cell>0.55</cell><cell>0.53</cell></row>
<row><cell>Arampatzis_GaussianNB</cell><cell>0.35</cell><cell>0.50</cell><cell>0.24</cell><cell>0.28</cell><cell>0.44</cell><cell>0.45</cell><cell>0.38</cell><cell>0.72</cell><cell>0.58</cell><cell>0.63</cell><cell>0.51</cell></row>
<row><cell>UAms_bm25_BERT_Filter</cell><cell>0.30</cell><cell>0.50</cell><cell>0.06</cell><cell>0.12</cell><cell>0.34</cell><cell>0.52</cell><cell>0.52</cell><cell>0.66</cell><cell>0.50</cell><cell>0.57</cell><cell>0.58</cell></row>
<row><cell>UAms_rm3_T5_Filter1</cell><cell>0.25</cell><cell>0.42</cell><cell>0.06</cell><cell>0.11</cell><cell>0.28</cell><cell>0.39</cell><cell>0.39</cell><cell>0.73</cell><cell>0.67</cell><cell>0.58</cell><cell>0.62</cell></row>
<row><cell>UAms_rm3_T5_Filter2</cell><cell>0.23</cell><cell>0.39</cell><cell>0.14</cell><cell>0.25</cell><cell>0.44</cell><cell>0.52</cell><cell>0.35</cell><cell>0.34</cell><cell>0.17</cell><cell>0.28</cell><cell>0.28</cell></row>
<row><cell>UAms_rm3_BERT_Filter</cell><cell>0.23</cell><cell>0.42</cell><cell>0.12</cell><cell>0.23</cell><cell>0.50</cell><cell>0.60</cell><cell>0.36</cell><cell>0.37</cell><cell>0.17</cell><cell>0.23</cell><cell>0.23</cell></row>
<row><cell>UAms_rm3_CE100</cell><cell>0.20</cell><cell>0.37</cell><cell>0.05</cell><cell>0.08</cell><cell>0.37</cell><cell>0.37</cell><cell>0.37</cell><cell>0.81</cell><cell>0.67</cell><cell>0.52</cell><cell>0.52</cell></row>
<row><cell>UAms_bm25_CE100</cell><cell>0.20</cell><cell>0.37</cell><cell>0.05</cell><cell>0.08</cell><cell>0.37</cell><cell>0.37</cell><cell>0.37</cell><cell>0.81</cell><cell>0.67</cell><cell>0.52</cell><cell>0.50</cell></row>
<row><cell>Arampatzis_NeuralNetwork</cell><cell>0.17</cell><cell>0.34</cell><cell>0.09</cell><cell>0.17</cell><cell>0.43</cell><cell>0.45</cell><cell>0.14</cell><cell>0.41</cell><cell>0.33</cell><cell>0.28</cell><cell>0.25</cell></row>
<row><cell>Arampatzis_LSTM</cell><cell>0.17</cell><cell>0.33</cell><cell>0.09</cell><cell>0.19</cell><cell>0.44</cell><cell>0.45</cell><cell>0.11</cell><cell>0.20</cell><cell>0.08</cell><cell>0.18</cell><cell>0.19</cell></row>
<row><cell>jokester_TFIDF_LogRegr</cell><cell>0.06</cell><cell>0.17</cell><cell>0.03</cell><cell>0.03</cell><cell>0.04</cell><cell>0.17</cell><cell>0.22</cell><cell>0.59</cell><cell>0.58</cell><cell>0.30</cell><cell>0.21</cell></row>
<row><cell>LIS_MiniLM-T5</cell><cell>0.00</cell><cell>0.02</cell><cell>0.01</cell><cell>0.01</cell><cell>0.01</cell><cell>0.01</cell><cell>0.01</cell><cell>0.23</cell><cell>0.08</cell><cell>0.08</cell><cell>0.09</cell></row>
</table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">https://pypi.org/project/wikipedia/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">Note that we also included all the training-set queries in the test input file; however, they are excluded from the resulting scores.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_2">https://joker-project.com/</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This project has received a government grant managed by the French National Research Agency under the "Investissements d'avenir" programme integrated into France 2030, reference ANR-19-GURE-0001. This track would not have been possible without the support of numerous individuals. We thank in particular the colleagues and students who participated in data construction and evaluation, especially the students of the Université de Bretagne Occidentale. Please visit the JOKER website<ref type="foot" target="#foot_2">6</ref> for more details on the track.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Overview of CLEF 2024 JOKER track on automatic humor analysis</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma-Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Fifteenth International Conference of the CLEF Association (CLEF 2024)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">L</forename><surname>Goeuriot</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Mulhem</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Quénot</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Schwab</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Soulier</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">M</forename><surname>Di Nunzio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuščáková</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Overview of JOKER – CLEF-2023 Track on Automatic Wordplay Analysis</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CLEF&apos;23: Proceedings of the Fourteenth International Conference of the CLEF Association</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Arampatzis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Kanoulas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Tsikrika</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vrochidis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Giachanou</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Aliannejadi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vlachos</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Overview of JOKER@CLEF 2022: Automatic wordplay and humour translation workshop</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Regattin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Borg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">É</forename><surname>Mathurin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Le Corre</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Araújo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Hannachi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Boccou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Digue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Damoy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Jeanjean</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-13643-6_27</idno>
	</analytic>
	<monogr>
		<title level="m">Experimental IR Meets Multilinguality, Multimodality, and Interaction: Proceedings of the Thirteenth International Conference of the CLEF Association (CLEF 2022)</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Barrón-Cedeño</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><forename type="middle">D S</forename><surname>Martino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">D</forename><surname>Esposti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Sebastiani</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Macdonald</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Pasi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Hanbury</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Potthast</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">13390</biblScope>
			<biblScope unit="page" from="447" to="469" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Overview of the CLEF 2024 JOKER Task 2: Humour classification according to genre and technique</title>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Overview of the CLEF 2024 JOKER Task 3: Translate puns from English to French</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of the Conference and Labs of the Evaluation Forum (CLEF 2024)</title>
		<title level="s">CEUR Workshop Proceedings</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Jester 2.0 (demonstration abstract): Collaborative filtering to retrieve jokes</title>
		<author>
			<persName><forename type="first">D</forename><surname>Gupta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Digiovanni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Narita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Goldberg</surname></persName>
		</author>
		<idno type="DOI">10.1145/312624.312770</idno>
		<ptr target="https://doi.org/10.1145/312624.312770" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;99</title>
				<meeting>the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR &apos;99<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="1999">1999</date>
			<biblScope unit="page">333</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Joke retrieval: Recognizing the same joke told differently</title>
		<author>
			<persName><forename type="first">L</forename><surname>Friedland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Allan</surname></persName>
		</author>
		<idno type="DOI">10.1145/1458082.1458199</idno>
		<ptr target="https://doi.org/10.1145/1458082.1458199" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 17th ACM Conference on Information and Knowledge Management, CIKM &apos;08</title>
				<meeting>the 17th ACM Conference on Information and Knowledge Management, CIKM &apos;08<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2008">2008</date>
			<biblScope unit="page" from="883" to="892" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Overview of JOKER 2023 Automatic Wordplay Analysis Task 1 – pun detection</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2023 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Aliannejadi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vlachos</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3497</biblScope>
			<biblScope unit="page" from="1785" to="1803" />
		</imprint>
	</monogr>
	<note>CEUR Workshop Proceedings</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">The JOKER Corpus: English-French parallel data for multilingual wordplay recognition</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<idno type="DOI">10.1145/3539618.3591885</idno>
	</analytic>
	<monogr>
		<title level="m">SIGIR &apos;23: Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval</title>
				<meeting><address><addrLine>New York, NY</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note>to appear</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Overview of JOKER 2023 Automatic Wordplay Analysis Task 2 – pun location and interpretation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2023 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Aliannejadi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vlachos</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3497</biblScope>
			<biblScope unit="page" from="1804" to="1817" />
		</imprint>
	</monogr>
	<note>CEUR Workshop Proceedings</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Overview of JOKER 2023 Automatic Wordplay Analysis Task 3 – pun translation</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ermakova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Bosser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">M</forename><surname>Palma Preciado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sidorov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jatowt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2023 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">M</forename><surname>Aliannejadi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Vlachos</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3497</biblScope>
			<biblScope unit="page" from="1818" to="1827" />
		</imprint>
	</monogr>
	<note>CEUR Workshop Proceedings</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">JOKER Track @ CLEF 2024: The Jokesters&apos; approaches for retrieving, classifying, and translating wordplay</title>
		<author>
			<persName><forename type="first">H</forename><surname>Baguian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">N</forename><surname>Ashley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">CLEF 2024 JOKER Task 1: Exploring pun detection using the T5 transformer model</title>
		<author>
			<persName><forename type="first">A</forename><surname>Gepalova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A.-G</forename><surname>Chifu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fournier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">JOKER 2024 by AB&amp;DPV: From &apos;LOL&apos; to &apos;MDR&apos; using AI models to retrieve and translate puns</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">P</forename><surname>Varadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bartulović</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Convergential approach in machine learning for effective humour analysis and translation</title>
		<author>
			<persName><forename type="first">R</forename><surname>Elagina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Vučić</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">CLEF 2024 JOKER Tasks 1-3: Humour identification and classification</title>
		<author>
			<persName><forename type="first">R</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mikulandric</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 – Conference and Labs of the Evaluation Forum</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco De Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">University of Amsterdam at the CLEF 2024 Joker Track</title>
		<author>
			<persName><forename type="first">L</forename><surname>Buijs</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cazemier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Schuurman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kamps</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Working Notes of CLEF 2024 - Conference and Labs of the Evaluation Forum</title>
		<editor>
			<persName><forename type="first">G</forename><surname>Faggioli</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Ferro</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Galuscakova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>García Seco de Herrera</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Martin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Stone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Albert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Almahairi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Babaei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bashlykov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Batra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Bhargava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Bhosale</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2307.09288</idno>
		<title level="m">Llama 2: Open foundation and fine-tuned chat models</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Declarative experimentation in information retrieval using PyTerrier</title>
		<author>
			<persName><forename type="first">C</forename><surname>Macdonald</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Tonellotto</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of ICTIR 2020</title>
				<meeting>ICTIR 2020</meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Pytrec_eval: An extremely fast python interface to trec_eval</title>
		<author>
			<persName><forename type="first">C</forename><surname>Van Gysel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>De Rijke</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">SIGIR, ACM</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
