<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>EXAM++: LLM-based Answerability Metrics for IR Evaluation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Naghmeh Farzi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Laura Dietz</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of New Hampshire</institution>
          ,
          <addr-line>33 Academic Way, Durham, NH</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Large language models provide an opportunity for reliable and efficient information retrieval evaluation methods. However, current evaluation metrics fall short in accurately assessing the information content of systems’ responses without resorting to expensive human judgments. In contrast, the EXAM++ Answerability Metric leverages a bank of query-related exam questions to quantify the relevant information content covered in systems’ responses. The process involves (1) decomposing the query into detailed questions, (2) checking each for answerability with passages in the system response, and (3) devising evaluation metrics based on this information. Using the TREC Complex Answer Retrieval benchmark, we demonstrate that our LLM-based EXAM++ approach works successfully, outperforming several established baselines. In particular, we take a deep dive into different approaches to determine the answerability of questions in a given passage, including the use of question answering systems with answer verification and self-rated answerability determination.</p>
      </abstract>
      <kwd-group>
        <kwd>Information Retrieval Evaluation</kwd>
        <kwd>Large Language Models</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Large Language Models (LLMs) can generate and/or retrieve responses for search queries, resulting in many systems that combine traditional retrieval with neural ranking and natural language generation. Ideally, a system’s response covers relevant information content while being concise and complete. However, there is a need for convincing evaluation metrics to assess the accuracy and completeness of the information content in responses. This should be accomplished in a repeatable and reusable manner, without resorting to expensive human judgments. To address this scenario, Sander and Dietz [1] proposed the EXAM Answerability Metric, which evaluates retrieval/generation systems based on whether they retrieve passages that answer a set of query-specific exam questions. Given a test bank of exam questions, they automate the work-intensive part of scanning each passage for answers using an automated question answering system.</p>
      <sec id="sec-1-1">
        <title>With EXAM++, we significantly expand on Sander’s idea by</title>
        <p>
          • supporting the development of exam question banks with prompt-based generation,
• modernizing the question answering system with the recently released FLAN-T5 family [<xref ref-type="bibr" rid="ref2">2</xref>],
• exploiting abilities of modern LLMs to determine the answerability of questions,
• offering relevance labels that are inter-operable with commonly used evaluation tools (e.g. trec_eval).
        </p>
        <p>A strength of EXAM++ is that, in contrast to other work on LLM-based relevance grading,
we can readily integrate humans into the evaluation by having them manage the design of the
test bank of exam questions. The test questions should be designed to cover all relevant facets
of a query, so that the more questions are addressed, the more relevant a passage is. Based on
the long history of classroom education and exam design, we argue it is more natural for human
judges to control the design of exam questions than to directly provide relevance judgments.</p>
        <p>By virtue of automating the grading of system responses, human judges are never required
to perform passage-level relevance assessments. At the same time, humans are fully in control
of defining which information content is relevant via the exam question bank. 1</p>
        <p>The evaluation approach yields reusable test collections that can be expanded by modifying
the question bank at any point in the evaluation process, as the remaining pipeline is fully
automated. The impact of a question bank modification can be directly observed by listing
passages whose relevance grade would change.</p>
        <p>
          Contributions. In this paper, we provide an in-depth study analyzing different choices of the EXAM++ approach: automatic vs. manual test banks, predicted relevance labels with traditional evaluation metrics vs. coverage-based measures, impacts of fine-tuning vs. prompt engineering, and grading via self-ratings vs. via question answering systems with different answer verification approaches. While EXAM++ is identical to the question-based RUBRIC evaluation method [<xref ref-type="bibr" rid="ref4">4</xref>], in this paper we provide an in-depth comparison of different question-answering approaches. Additionally, we compare to the original EXAM method [<xref ref-type="bibr" rid="ref1">1</xref>] and several direct grading prompts [<xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>].
        </p>
      </sec>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>We focus on an approach that does not require passage-level relevance judgments or source
texts. Our work is unique in this regard, but aspects relate to many active branches of research,
which we detail below.</p>
      <sec id="sec-2-1">
        <title>2.1. LLM-based Relevance Label Predictors</title>
        <p>In contrast to our approach, several LLM-based evaluation approaches attempt to directly
imitate the relevance judgment process.</p>
        <p>
          Sun et al. [<xref ref-type="bibr" rid="ref5">5</xref>] rerank passages using a simple LLM prompt “does the passage answer the query?” Faggioli et al. [<xref ref-type="bibr" rid="ref6">6</xref>] conduct an early evaluation experiment by asking an LLM to judge the relevance of a passage. They design a simple prompt and a more elaborate multi-relevance few-shot prompt developed for the TREC Deep Learning track. Thomas et al. [<xref ref-type="bibr" rid="ref8">8</xref>] compare the ability of LLMs to perform document-level relevance judgments in comparison to different groups of human annotators. In their study they use a detailed prompt that instructs the LLM to respond with a multi-level relevance grade. We include several of these prompts in our empirical evaluation.2
1We recently released a resource to support human judges in supervising this process [<xref ref-type="bibr" rid="ref3">3</xref>] https://github.com/TREMA-UNH/rubric-grading-workbench.
        </p>
        <p>
          With one-shot labelers (1SL), MacAvaney and Soldaini [<xref ref-type="bibr" rid="ref9">9</xref>] focus on evaluating passages with a DuoPrompt, which instructs an LLM to indicate which of two passages is more relevant for a query.
        </p>
        <p>
          However, several critiques have been raised about using LLMs for producing relevance labels in general. Faggioli et al. [<xref ref-type="bibr" rid="ref10 ref6">6, 10</xref>] elaborate a wide range of theoretical concerns, centered on questions of trustworthiness and reliability of LLMs now and in the future. Liu et al. [<xref ref-type="bibr" rid="ref11">11</xref>] demonstrate that evaluator-LLMs assign higher scores to systems that use the same LLM model. Wang et al. [<xref ref-type="bibr" rid="ref12">12</xref>] empirically demonstrate that LLMs exhibit unfair positional bias towards candidates displayed for evaluation. Fok and Weld [<xref ref-type="bibr" rid="ref13">13</xref>] study general issues of human over-reliance and under-reliance on LLMs. They elaborate why rationales produced by LLMs for human verification do not generally lead to improvements.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Evaluation with Test Questions</title>
        <p>
          The idea of anchoring an evaluation on a bank of test questions has been widely discussed in the literature on summarization [<xref ref-type="bibr" rid="ref14">14</xref>], recently with automated question answering methods. Eyal et al. [<xref ref-type="bibr" rid="ref15">15</xref>] suggest a system evaluation score that is based on the number of questions that a question answering system can correctly answer using the system response—a principle that both the original EXAM method and our approach follow.
        </p>
        <p>
          Many approaches use a Cloze-style method to generate questions from a given gold summary or source text. Questions can be in the form of multiple-choice questions [<xref ref-type="bibr" rid="ref16">16</xref>], free-text questions with exact-match answer verification [<xref ref-type="bibr" rid="ref17">17</xref>], or be derived from extracted entities and relations [<xref ref-type="bibr" rid="ref15 ref18">15, 18</xref>].
        </p>
        <p>As it pertains to information retrieval evaluation, the problem with generating questions
from a given source text or gold summary is that (1) such a gold standard is usually not available
and (2) it is unclear which of these questions relate to relevant information in the gold summary
(or source text).</p>
        <p>
          The original EXAM method avoids this problem altogether by asking a human to design
questions that address the search query. In contrast, we propose to automatically generate
questions directly from the query, building on the world-knowledge of GPT [
          <xref ref-type="bibr" rid="ref19">19</xref>
          ]—with the
intention of employing manual labor to verify or weight the question set.
        </p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Background: Original EXAM</title>
      <p>
        The original EXAM method [<xref ref-type="bibr" rid="ref1">1</xref>] uses a query-specific test bank of exam questions harvested from school textbooks in the Textbook Question Answering (TQA) dataset [<xref ref-type="bibr" rid="ref20">20</xref>]—a dataset from which topics for the TREC CAR Y3 evaluation were derived. Furthermore, they use a custom question answering system that is optimized to answer multiple-choice questions in the style of TQA questions.
2All baseline prompts are provided in our online appendix.
      </p>
      <p>Their approach considers each passage retrieved by a system submitted to TREC CAR Y3 and uses the automated question answering system to extract answers for all test questions. Each of these answers is verified against the answer key for each exam question, tracking correctly answered questions. The system’s evaluation score is based on the set of questions that is correctly addressed with any of the top 20 passages, averaged across all queries. The more questions can be answered, the higher the EXAM evaluation score for that system.</p>
      <p>The original EXAM method relies solely on humans to design the exam, with the intent that only a human could identify the core questions that would need to be addressed in a relevant answer. This is in contrast to approaches that generate questions from a gold summary (detailed in Section 2.2), which might lead to questions derived from non-relevant aspects mentioned in relevant text.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Approach</title>
      <p>In this work we explore a modernized version of Sander’s EXAM Answerability metric, which
we call EXAM++.3 Akin to Sander’s method, we use a bank of exam questions to grade
systems based on the set of questions that can be correctly answered with information in the
system’s response. The more questions can be answered with the system’s response, and the
more passages answer questions well, the higher the EXAM evaluation score of the system.
By automating the component that determines the answerability of passages, the evaluation
paradigm becomes repeatable and reusable at a reasonable cost. As a result, it can be applied
to systems that retrieve passages from a corpus as well as systems that generate content with
LLMs.</p>
      <p>Our EXAM++ evaluation system assumes the following inputs:
1. A set of queries, optionally with query subtopics.
2. A set of system responses, which can come in the form of a passage ranking or a set of
generated passages.</p>
      <p>Note that the exam questions are intended to be kept secret from the retrieval/generation
system, only to be used for evaluation.</p>
      <p>
        The EXAM++ evaluation approach is structured into the following three phases that we detail in the remainder of the section and depict in Figure 1.
1. Obtaining an exam question bank: A process of creating a test bank of query-specific exam questions.
2. Grading system responses: All passages in system responses are graded using an automated LLM-based system to determine which questions are answerable with the passage content. For each passage, the set of answerable questions is tracked along with grades that represent how relevant, complete, and accurate the provided answer is.
3. EXAM evaluation scoring: We derive multiple evaluation scores. The more exam questions can be answered well with passages of a system’s response, the higher the system’s EXAM-Cover score. The more passages address any of the exam questions well, the higher the system’s precision-oriented EXAM score. By exporting passage-level relevance labels, any traditional evaluation metric can be incorporated (we refer to this evaluation score as EXAM-Qrels).
3An implementation of EXAM++ is available in the Autograding Workbench [<xref ref-type="bibr" rid="ref3">3</xref>].
      </p>
      <p>Our contribution differs from the original EXAM method in several important ways:</p>
      <sec id="sec-4-1">
        <title>A. Obtaining an exam question bank: To obtain exam questions,</title>
        <p>• The original EXAM method is based on manually created multiple-choice exam questions.</p>
        <p>• We propose to semi-automatically generate free-text questions for each query, as described in Section 4.1.1.</p>
        <p>TREC CAR Y3 Question Bank Prompt (Table 1):
Explore the connection between ‘{query_title}’ with a specific focus on the subtopic ’{query_subtopic}’. Generate insightful questions that delve into advanced aspects of ‘{query_subtopic}’, showcasing a deep understanding of the subject matter. Avoid basic or introductory-level inquiries. Give the question set in the following JSON format:
```json
{"questions":[question_text_1, question_text_2,...]}
```</p>
        <p>B. Grading system responses: To grade each passage via the answerability of exam questions,
• The original EXAM method uses a pre-neural multiple-choice question answering system with answer verification.
• First, we modernize the question answering system with an LLM-based approach (Section 4.2.1).
• Second, we explore the ability of LLMs to self-rate the answerability of a question with given context, without directly verifying the correctness of the answer (Section 4.2.2).
C. EXAM-Cover evaluation: To evaluate each IR system,
• With EXAM-Cover, we follow the original EXAM method by evaluating systems according to the number of answerable exam questions (Section 4.3.1).
• To improve adoption, we add a variant “EXAM-Qrels” that implements a related idea so that it is inter-operable with the popular evaluation tool trec_eval (Section 4.3.2).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.1. Phase 1: EXAM++ Question Banks</title>
        <p>4.1.1. Generating Question Banks
We use a generative LLM, specifically ChatGPT, to automate the creation of free-text questions 4
that are directly tailored to the needs of information retrieval (IR) tasks and specific domain
requirements. This approach allows a larger information to be broken down need into insightful
and relevant questions that probe deeply into the nuances of each query, enhancing the depth
and quality of the question banks. With application to TREC CAR Y3, a set of open-ended
questions   are generated for each subtopic, via a zero-shot prompt as detailed in Table 1.</p>
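        <p>The prompt in Table 1 instructs the model to return its questions as fenced JSON. A minimal Python sketch of parsing such a reply (the helper name parse_question_bank is ours, not part of the EXAM++ implementation):
```python
import json
import re

def parse_question_bank(llm_response):
    """Extract the {"questions": [...]} payload from an LLM reply that may
    wrap the JSON in a fenced code block or surround it with extra prose."""
    match = re.search(r"\{.*\}", llm_response, re.DOTALL)
    if match is None:
        return []
    try:
        payload = json.loads(match.group(0))
    except json.JSONDecodeError:
        return []
    # keep only well-formed string questions
    return [q for q in payload.get("questions", []) if isinstance(q, str)]
```
In practice the raw reply would come from the ChatGPT API; tolerant parsing matters because generative models do not always emit clean JSON.</p>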
        <p>
          The goal during topic development is to have a human judge ensure that essential
information about the query is covered by the question bank, and (if necessary) modify the questions
accordingly.
4This step is identical to generating question-based RUBRICs in Farzi and Dietz [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>Self-rating Answerability Prompt (Table 2, bottom):
Can the question be answered based on the available context? choose one:
- 5: The answer is highly relevant, complete, and accurate.
- 4: The answer is mostly relevant and complete but may have minor gaps or inaccuracies.
- 3: The answer is partially relevant and complete, with noticeable gaps or inaccuracies.
- 2: The answer has limited relevance and completeness, with significant gaps or inaccuracies.
- 1: The answer is minimally relevant or complete, with substantial shortcomings.
- 0: The answer is not relevant or complete at all.
Question: {question} Context: {context}</p>
        <p>4.1.2. Manual Question Banks
Alternatively, query-specific question banks can be manually constructed from scratch. Optionally this can include a gold answer key for verification, as described in the original EXAM method, which uses such a test bank from the TQA dataset.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.2. Phase 2: Automated EXAM++ Grading</title>
        <p>
          The grading process leverages a state-of-the-art LLM, such as the FLAN-T5-large [
          <xref ref-type="bibr" rid="ref2">2</xref>
          ] model,
chosen to trade off processing speed against the ability to understand complex queries and context. The
Prompts in Table 2 have been designed for reliable exam grading—especially so that the LLM
focuses solely on the provided context rather than relying on its pre-trained knowledge. The
LLM is queried separately for each passage to prevent positional biases, ensuring that each
answer is contextually derived from the passage to which it corresponds.
        </p>
        <p>Pre-processing system responses. Before grading, a judgment pool of all retrieved passages is created for efficient processing. Longer system responses are segmented into paragraph-sized passages. Each passage is given a unique identifier (passage_id) to ensure that every part of the response can be individually traced throughout the grading process.
4.2.1. LLM-based Question Answering with Answer Checking
For every passage-question pair (p, q), we ask the LLM to extract a best-effort answer from the passage. We use the prompt in Table 2 (top) with a text-to-text generation pipeline.</p>
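        <p>The per-pair querying can be sketched as follows, where ask_llm stands in for the Table 2 (top) prompt call and is a hypothetical callable, not part of the described implementation:
```python
def grade_responses(passages, questions, ask_llm):
    """Query the LLM once per (passage, question) pair, so each extracted
    answer is derived only from that single passage's text."""
    answers = {}
    for passage_id, passage_text in passages.items():
        for question_id, question_text in questions.items():
            answers[(passage_id, question_id)] = ask_llm(
                question=question_text, context=passage_text
            )
    return answers
```
Querying the LLM separately for each passage, as described above, is what prevents positional bias across passages in the pool.</p>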
        <p>Once answers are extracted, they are verified for correctness against the answer key. The verification process normalizes the correct and predicted answers through lower-casing, stopword removal, and stemming. We then apply a heuristic matching function where a match is considered valid if the edit distance between the normalized answers is less than 20% of the length of the longer string.</p>
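        <p>A runnable sketch of this matching heuristic, with a toy stopword list and a naive suffix stemmer standing in for full NLP preprocessing (both are simplifications, not the actual components used):
```python
import re

STOPWORDS = {"the", "a", "an", "of", "to", "in", "is", "are", "will", "be"}

def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + cost))
        prev = cur
    return prev[len(b)]

def normalize(text):
    """Lower-case, drop punctuation and stopwords, and crudely stem."""
    words = [w for w in re.findall(r"[a-z0-9]+", text.lower()) if w not in STOPWORDS]
    stemmed = []
    for w in words:
        for suffix in ("ing", "ed", "es", "s"):
            if w.endswith(suffix) and len(w) > len(suffix) + 2:
                w = w[: -len(suffix)]
                break
        stemmed.append(w)
    return " ".join(stemmed)

def answers_match(gold, predicted):
    """Valid match if the edit distance between normalized answers is
    under 20% of the longer normalized string's length."""
    na, nb = normalize(gold), normalize(predicted)
    return max(len(na), len(nb)) > 5 * edit_distance(na, nb)
```
Note that surface variants such as “rise” vs. “rising” can still fail this threshold, which motivates the LLM-based verification variant below.</p>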
        <p>Occasionally the LLM will respond with an expression indicating that the question is
unanswerable with the provided context. We count an answer as incorrectly answered (grade 0)
when we encounter an ill-formed answer (such as “a.” or “(iii)”) or one of the following
expressions: “unanswerable”, “no”, “no answer”, “not enough information”, “unknown”, “it is not
possible to tell”, “it does not say”, or “no relevant information”.</p>
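        <p>A sketch of this grade-0 filter; the regex for ill-formed option-marker answers is our own guess at a reasonable check, not the exact rule used:
```python
import re

UNANSWERABLE = {
    "unanswerable", "no", "no answer", "not enough information", "unknown",
    "it is not possible to tell", "it does not say", "no relevant information",
}

def is_incorrect_answer(answer):
    """True when the reply signals unanswerability or is ill-formed,
    e.g. a bare option marker such as 'a.' or '(iii)'."""
    text = answer.strip().lower().rstrip(".")
    if text in UNANSWERABLE:
        return True
    # bare option letters or roman/arabic numerals, optionally parenthesized
    return re.fullmatch(r"\(?([a-e]|[ivx]{1,4}|\d{1,2})\)?", text) is not None
```</p>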
        <p>
          Variation: SQuAD2 fine-tuning. We study the impacts of fine-tuning the question
answering system using the SQuAD2 dataset [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. SQuAD2 is comprised of questions in a similar
style to TQA, to be answered in the context of a provided passage. SQuAD2 also includes many
training examples where questions are unanswerable with the given context, which is essential
to determine the answerability of questions for EXAM++.
        </p>
      </sec>
      <sec id="sec-4-4">
        <title>Variation: Answer verification with LLMs</title>
        <p>The implementation of the answer verification remains a technical challenge. Noticing that many correct answers are missed because they are phrased differently, we additionally explore asking the LLM to verify the answer match. We verify with the prompt in Table 2 (middle) by providing the extracted answer, the gold answer, and the question.</p>
        <p>
          We manually analyzed the accuracy of this verification step, based on extracted and correct answers. To give an example from TQA exam question L_0016/NDQ_000615 “During very wet times, the water table will...” for which the correct answer is “rise”, this LLM-based process identifies additional answers including “increase”, “rising”, “be higher”, “increase substantially”, as well as answers that restate the question such as “During very wet times, the water table will rise.”
4.2.2. Grading by Self-rating Answerability
Given the technical challenges of answer verification, we explore an easier alternative. We use an answerability system introduced as RUBRIC in Farzi and Dietz [<xref ref-type="bibr" rid="ref4">4</xref>], which self-rates whether the passage p answers the question q ∈ Q, without first extracting the answer.
        </p>
        <p>Given each passage-question pair, the LLM rates the answerability on a scale from 0 (worst)
to 5 (best) using the prompt provided in Table 2, bottom. In cases where the LLM does not
provide a numerical rating, we default to a rating of 1 for answered questions—with the exception
of answers that denote unanswerability (as in Section 4.2.1) for which we assign a grade of 0.</p>
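        <p>Mapping the LLM’s free-form reply to a numeric grade with these fallback rules can be sketched as follows (parse_self_rating is a hypothetical helper name):
```python
import re

UNANSWERABLE = {"unanswerable", "no", "no answer", "unknown",
                "not enough information", "it is not possible to tell"}

def parse_self_rating(reply):
    """Return a 0-5 answerability grade: the first digit found in the
    reply; otherwise 0 for replies denoting unanswerability, and 1 as
    the default for answered questions without a numeric rating."""
    match = re.search(r"[0-5]", reply)
    if match:
        return int(match.group(0))
    if reply.strip().lower().rstrip(".") in UNANSWERABLE:
        return 0
    return 1
```</p>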
        <p>This method enables an autonomous assessment of answerability and relevance, avoiding technical issues of answer verification when there are different ways to phrase a correct answer or when there are different answers that are equally correct. Moreover, this supports the use of open-ended questions for evaluation.</p>
        <p>The output of the grading phase is, for each passage-question pair (p, q), a grade that represents the relevance, completeness, and accuracy with which the question is addressed. The grade is 0 if the passage does not address the question. For question answering with answer verification, the grade is either 1 (if correct) or 0 otherwise. In addition, we track the extracted answer to support manual verification via human judges.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.3. Phase 3: EXAM++ Evaluation</title>
        <p>4.3.1. EXAM-Cover Evaluation
We incorporate a coverage-style evaluation metric as suggested by Sander et al. It quantifies
the set of exam questions  ∈   for the query  that are covered in retrieved passages  ∈ 
with a minimum grade level  , as defined by:</p>
        <p>EXAM-Cover(P) = (1/|Q|) · |⋃_{p ∈ P} {q ∈ Q | grade(p, q) ≥ t}|  (1)</p>
        <p>To avoid gaming the evaluation metric with a very long system response, the size of the passage set P is limited to a fixed budget, e.g. |P| = 20 passages.
4.3.2. EXAM-Qrels Evaluation
Alternatively, we provide relevance labels for each passage, facilitating compatibility with traditional IR evaluation metrics, such as those implemented in the trec_eval tool. Passage-level relevance labels are obtained by mapping grades to a binary or multi-graded relevance label:
EXAM-Label(p) = max_{q ∈ Q} grade(p, q)  (2)</p>
        <p>The EXAM-Label allows the use of established IR evaluation metrics that incorporate multi-graded relevance labels (such as NDCG), or, by choosing a minimum grade t indicating relevance, to control the leniency of the evaluation.</p>
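        <p>The two evaluation scores and a trec_eval-style qrels export can be sketched as follows; all function names are ours, and grades are assumed to be keyed by (passage_id, question_id):
```python
def exam_cover(grades, question_bank, passages, min_grade=1):
    """Fraction of the question bank answered with grade >= min_grade
    by at least one passage in the (budget-limited) passage set."""
    covered = {q for (p, q), g in grades.items()
               if p in passages and g >= min_grade}
    return len(covered.intersection(question_bank)) / len(question_bank)

def exam_label(grades, passage_id):
    """Passage-level relevance label: the best grade over all questions."""
    return max((g for (p, q), g in grades.items() if p == passage_id),
               default=0)

def qrels_lines(query_id, grades, passage_ids):
    """Emit trec_eval qrels lines of the form 'query_id 0 passage_id label'."""
    return [f"{query_id} 0 {p} {exam_label(grades, p)}" for p in passage_ids]
```
Raising min_grade (or the relevance threshold applied to the labels) controls the leniency of the evaluation, as described above.</p>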
        <p>Like all relevance-label based approaches, the pool of graded passages may impact the
evaluation results—therefore, as systems reveal unjudged passages, these should be graded to update
the qrel files.</p>
        <p>
          The downside of this EXAM-Qrels approach is that once a relevance label is determined, the evaluation metric is unaware of which exam questions were covered. To preserve this information, future work should explore integrating EXAM++ with intent-aware evaluation measures such as α-NDCG [<xref ref-type="bibr" rid="ref22 ref23">22, 23</xref>].
        </p>
        <p>Whether EXAM-Cover or EXAM-Qrels is a more appropriate evaluation measure depends on the goals of the information retrieval application. When users are expected to stop after the first relevant passage, we suggest evaluating with EXAM-Qrels with mean reciprocal rank. When recall is a priority, we suggest using EXAM-Qrels with R-precision or (mean) average precision (MAP). When the emphasis is on covering diverse facets of relevance, we suggest using EXAM-Cover.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Experimental Evaluation</title>
      <sec id="sec-5-1">
        <title>5.1. Experimental Setup</title>
        <p>
          We experimentally compare variations of our EXAM++ system to the original EXAM method
[
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. The evaluation uses queries, manual TREC judgments, and submitted systems from the
third year of the TREC Complex Answer Retrieval track (TREC CAR Y3) [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ],5 as these align
with manual test questions and results from Sander and Dietz [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]. Empirical results on other
datasets are available in Farzi and Dietz [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ].
        </p>
        <p>In experiments with generated question banks, we follow “Phase 1” to obtain ten questions for each of the 721 query-subtopics across 131 queries in CAR Y3. In experiments that use the manual TQA question bank, we use all non-diagram questions with gold answer keys. In preparation for grading (Phase 2), we build a judgment pool of all passages in official judgments and the top 20 of all run submissions—a total of 85,329 passages.</p>
        <p>For question generation we use gpt-3.5-turbo-instruct; for question verification and self-rating we use the FLAN-T5-large model with the text2text-generation pipeline from HuggingFace.6 We also explore fine-tuning FLAN-T5-large on the SQuAD2 dataset. The fine-tuned model is available on HuggingFace as sjrhuschlee/flan-t5-large-squad2, to be used with the extractive question-answering pipeline.</p>
        <p>We compare the following variations of our approach.</p>
        <p>EXAM++: Using our generated question banks and grading with self-ratings (Sections 4.1.1
and 4.2.2).</p>
        <p>
          Manual EXAM++: As previous but using manual question banks from the TQA dataset [
          <xref ref-type="bibr" rid="ref20">20</xref>
          ]
(Sections 4.1.2 and 4.2.2).
        </p>
        <p>Manual-EXAM-QA: As previous but grading via question answering using prompts from</p>
        <p>Table 2 (top), with word-based answer checking (Sections 4.1.2 and 4.2.1).</p>
        <p>Manual-EXAM-Squad2: As previous but fine-tuning the grading LLM on SQuAD2 and using the extractive question-answering pipeline instead of a prompt.</p>
      </sec>
      <sec id="sec-5-2">
        <title>LLM-verified Manual-EXAM-QA &amp; Manual-EXAM-Squad2</title>
        <p>Like the two previous variants, but the extracted answers are verified with the FLAN-T5-large LLM using the answer verification prompt from Table 2 (middle).</p>
        <p>For all these methods we compare both the EXAM-Qrels and the EXAM-Cover evaluation approach. For EXAM-Qrels, we export passage-level EXAM++ relevance labels to be used with trec_eval on traditional evaluation measures. In this experiment we use the measures used in the official TREC CAR Y3 evaluation, such as mean average precision (MAP), normalized discounted cumulative gain (NDCG@20), and R-precision (Rprec).</p>
        <sec id="sec-5-2-1">
          <title>We compare to the following reference baselines.</title>
          <p>
            Original EXAM: Using the results provided by Sander and Dietz [<xref ref-type="bibr" rid="ref1">1</xref>].
5The TREC CAR Y3 test set benchmarkY3test is available at http://trec-car.cs.unh.edu/datareleases/
6https://huggingface.co/google/flan-t5-large
          </p>
          <p>Table 3 (caption): Rank correlations of each evaluation method with different minimum grades t against the official TREC CAR Y3 leaderboard. S: Spearman’s rank correlation. K: Kendall’s Tau correlation. Best evaluation method in bold-italics; equally good methods (within ±0.05) marked in bold; poor methods (obtaining less than 0.5) marked in grey.</p>
          <p>Direct grading prompts: Obtaining relevance labels by directly asking whether a passage is relevant for a query. We use a set of established prompts [<xref ref-type="bibr" rid="ref5 ref6 ref7 ref8">5, 6, 7, 8</xref>], listed in Appendix A.</p>
          <p>FaggioliB_few, Sun_few: As previous but using few-shot prompts suggested for the TREC</p>
          <p>
            Deep Learning track [
            <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
            ] to test their generalizability.
          </p>
        </sec>
        <sec id="sec-5-2-2">
          <title>We measure the quality of our evaluation paradigm in two ways:</title>
          <p>Leaderboard rank-correlation: The leaderboard of systems under the EXAM-Cover and EXAM-Qrels metrics should be similar to the official TREC CAR Y3 leaderboard. This similarity is evaluated with two rank correlation measures: Spearman’s rank correlation coefficient, which measures differences in a system’s rank on the leaderboard, and Kendall’s τ rank correlation, which penalizes swaps of two systems on the leaderboard.</p>
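          <p>As a self-contained illustration of this leaderboard comparison, Kendall’s τ (without tie correction) can be computed as:
```python
from itertools import combinations

def kendall_tau(scores_a, scores_b):
    """Kendall's tau between two leaderboards, given as equal-length
    score lists for the same systems; ties are not specially handled."""
    pairs = list(combinations(range(len(scores_a)), 2))
    concordant = sum(1 for i, j in pairs
                     if (scores_a[i] - scores_a[j]) * (scores_b[i] - scores_b[j]) > 0)
    discordant = sum(1 for i, j in pairs
                     if (scores_a[i] - scores_a[j]) * (scores_b[j] - scores_b[i]) > 0)
    return (concordant - discordant) / len(pairs)
```
In practice, tie-aware implementations such as scipy.stats.kendalltau and scipy.stats.spearmanr would typically be used.</p>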
          <p>Inter-annotator agreement: High passage-level agreement between official judgments and our predicted relevance labels. We provide count statistics and Cohen’s κ inter-annotator agreement statistics.</p>
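          <p>Cohen’s κ on two label lists can be sketched as follows (undefined when expected agreement is 1, a case this sketch does not handle):
```python
def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for the agreement two
    annotators would reach by chance given their label distributions."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    categories = set(labels_a).union(labels_b)
    expected = sum((labels_a.count(c) / n) * (labels_b.count(c) / n)
                   for c in categories)
    return (observed - expected) / (1 - expected)
```</p>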
          <p>Since Sander’s work demonstrated that ROUGE metrics are uncorrelated with leaderboard
rankings, we omit the comparison here.</p>
          <p>Significance testing. We perform a standard-error bar overlap test for Figure 2 and only describe significant differences in the text. For leaderboard correlation results in Table 3, we consider results within ±0.05 as equally good.</p>
        </sec>
      </sec>
      <sec id="sec-5-3">
        <title>5.2. Overall Results</title>
        <p>EXAM++. Each evaluation method gives rise to a leaderboard of systems. Table 3 compares how well each leaderboard correlates with the official TREC leaderboard. Our proposed EXAM++ with minimum grade t = 5 obtains the overall best results for EXAM-Qrels. In many cases this approach obtains near-perfect rank correlations above 0.9. For reference, rank correlation statistics range from -1 to +1, with 0 indicating no correlation.</p>
        <p>Table 4 presents the inter-annotator agreement between manual TREC judgments and
predicted relevance labels. We see that especially high self-rating grades obtain good
agreement (Cohen’s κ of 0.38).</p>
        <p>For leaderboards based on the EXAM-Cover metric, we also obtain strong results with
EXAM++, but observe even better results using the manually created question bank
(Manual EXAM++). We believe this is because manual control in question bank design only selects
vetted questions that represent relevance. We find that some of the generated questions are
too broad, promoting systems that provide information that is not sufficiently specific. Future
work should focus on adjusting the question bank generation prompt (Table 1) to obtain more
focused questions.</p>
        <p>QA + answer verification. Next, we turn to EXAM++ approaches that determine relevance
by verifying extracted answers from passages against gold answer keys. We find that verifying
extracted answers (Manual-EXAM-QA) obtains comparable results to the self-rated
answerability approach (Manual EXAM++) when used to obtain relevance labels. However, it is slightly
worse when used with coverage-based metrics.</p>
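As an illustration of answer verification, here is a minimal lexical matcher in the style of SQuAD exact match; the paper’s actual matching rules may differ, and `normalize` and the example answers are assumptions:

```python
# Sketch: an extracted answer counts as correct when its normalized
# form matches one of the gold answer keys.
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def verify(extracted, gold_answers):
    return any(normalize(extracted) == normalize(g) for g in gold_answers)

print(verify("The epidermis.", ["epidermis"]))   # True
print(verify("keeps fluids in", ["epidermis"]))  # False
```

The second call illustrates the weakness discussed later: a correct but differently phrased extraction is rejected by purely lexical matching.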
        <p>
          In either case, all our proposed approaches outperform the original EXAM method [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] by
using a strong LLM-based question answering method as opposed to a pre-neural question
answering system.
LLM-based relevance label predictors. None of the direct grading prompts described in
Section 2.1 work well on the TREC CAR Y3 dataset, in many cases obtaining weak rank
correlations below 0.5 (marked in grey). This is in contrast to findings on the TREC DL test
collections [
          <xref ref-type="bibr" rid="ref5 ref6 ref8">5, 6, 8</xref>
          ], where these direct grading prompts perform extremely well (both when
using GPT [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] and FLAN-T5-large [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ]). We suspect that these exact prompts are designed for the
unambiguous, narrowly specified question-style queries found in the TREC DL collection
(“When did rock’n’roll begin?”) but struggle with the broad information needs of the TREC
CAR Y3 collection (e.g., “the integumentary system”).
        </p>
        <p>Furthermore, the few-shot examples designed for the TREC DL domain (used in
FaggioliB_few and Sun_few) do not generalize to the broad information needs of the TREC CAR
Y3 domain. We hope that future research will analyze which of the findings on the DL collection
generalize to other information retrieval use cases.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.3. Obtained System Leaderboards</title>
        <p>Figure 2 presents the impact of different evaluation methods on how systems are ranked on
the leaderboard. We choose three of the best-performing evaluation methods, spanning
our different options (marked in blue in Table 3), namely:
EXAM++ MAP (grade&gt;=5): Generated question bank, self-rated answerability EXAM++
with EXAM-Qrels, trec_eval using (mean) average precision, relevant grade ≥ 5.
Manual EXAM++ Cover (grade&gt;=1): Manual question bank, self-rated answerability
EXAM-Cover, relevant grade ≥ 1.</p>
        <p>Manual-EXAM-Squad2: Manual question bank, using the question answering approach
with answer verification on a fine-tuned LLM, EXAM-Qrels, trec_eval using
R-precision.</p>
        <p>
          Orig EXAM [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ]: As reported in Sander and Dietz [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ] as “unnormalized”, which is akin to
EXAM-Cover.
        </p>
        <p>
          Official leaderboard (MAP): Manual TREC judgments, (mean) average precision, relevant
grade ≥ 1, as reported in the TREC CAR Y3 overview paper [
          <xref ref-type="bibr" rid="ref24">24</xref>
          ].
        </p>
        <p>To make the system ranking behavior more visible, all systems’ evaluation scores are
renormalized so that the highest score maps to 1.0 and the lowest to 0.0. Several systems use a
similar approach, leading to near-identical scores on all leaderboards (including the official
CAR leaderboard).</p>
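This renormalization is a simple min-max rescaling; a sketch with illustrative scores:

```python
# Sketch: rescale each leaderboard so that the best system scores 1.0
# and the worst 0.0, making ranking shapes comparable across metrics.
def renormalize(scores):
    lo, hi = min(scores.values()), max(scores.values())
    return {system: (v - lo) / (hi - lo) for system, v in scores.items()}

scores = {"sysA": 0.58, "sysB": 0.44, "sysC": 0.20}  # illustrative
print(renormalize(scores))  # sysA -> 1.0, sysC -> 0.0
```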
        <p>We find that all evaluation methods track the official leaderboard. Self-rating-based EXAM++
follows its shape best. However, the higher grade cutoff of 5 leads to much larger error
bars than a cutoff of 4. We find that the question answering-based method
Manual-EXAM-Squad2 is too unspecific, assigning the same high score to two-thirds of all systems.</p>
        <p>We find that coverage-based evaluation with Manual EXAM++ and original EXAM promotes
some of the low-ranking systems. With our experiment it is impossible to say whether this is
due to a bias in the official leaderboard (which does not acknowledge coverage) or an issue with
the coverage-based evaluation metric. However, the fact that two independent coverage-based
implementations agree on assigning ICT-DRMMTKS a higher score suggests that this system
might indeed provide good coverage (albeit at lower precision).
[Figure 2: Renormalized System Evaluation Score (y-axis, 0.0–1.0) per system, for the selected evaluation methods.]</p>
      </sec>
      <sec id="sec-5-5">
        <title>5.4. Impact of Grade Cutoffs</title>
        <p>While for generated banks of open-ended questions, a higher grade cutoff of 5 obtains
stronger results, we observe the opposite for manual question banks taken from the TQA
dataset, where a grade cutoff of 1 produces the best results.</p>
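Applying a grade cutoff amounts to binarizing the self-rating grades; a sketch with hypothetical (question, passage) grades:

```python
# Sketch: turn self-rated answerability grades (0-5) into binary
# relevance under a cutoff; the graded pairs below are illustrative.
def binarize(grades, cutoff):
    return {pair: int(g >= cutoff) for pair, g in grades.items()}

grades = {("q1", "p1"): 5, ("q1", "p2"): 4, ("q2", "p1"): 1}
print(binarize(grades, cutoff=5))  # only the grade-5 pair stays relevant
print(binarize(grades, cutoff=1))  # every graded pair counts as relevant
```

A high cutoff on a difficult question bank leaves too few relevant pairs to discriminate between systems, which is the effect discussed in this section.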
        <p>In general, we remark that the appropriate self-rating levels depend on the difficulty of the
question bank. Sander et al. remark that questions of the TQA collection are often phrased in
an obtuse way, as they are designed to encourage (human) students to closely read the text.
As a result, too few passages obtain a high grade for most questions, which then results in
evaluation scores that do not distinguish between systems.</p>
        <p>For the open-ended questions from our generated question bank, it is generally easier to obtain a
high self-rating grade, especially since multiple answers can be considered reasonably relevant.
Nevertheless, while for EXAM++ the grading cutoff of 5 obtains slightly better correlations
than a cutoff of 4, the large error bars for cutoff 5 (cf. Figure 2) suggest that a lower
cutoff might yield a more useful evaluation measure.</p>
      </sec>
      <sec id="sec-5-6">
        <title>5.5. Self-rating vs. Answer Verification</title>
        <p>We analyze the set of evaluation approaches that use the manual benchmark, i.e., Manual
EXAM++ and the methods under QA + Answer Verification. The best correlation is achieved with
self-rating methods on EXAM-Cover, obtaining a Spearman’s rank correlation coefficient of 0.959.
However, when it is desired to integrate the evaluation into trec_eval, we find that answer
verification approaches are strong contenders. In particular, fine-tuning the FLAN-T5-large model
on SQuAD2 obtains slightly better results than other methods.</p>
        <p>Given that many correct answers are missed due to different phrasing, we further explore
LLM-based answer verification. However, this adaptation has strong negative effects on
leaderboard correlation, in several cases obtaining a rank correlation of less than 0.5. We suspect that
it assigns a relevant grade to too many non-relevant passages, resulting in a degradation of
the leaderboard.</p>
      </sec>
      <sec id="sec-5-7">
        <title>5.6. A Worked Example</title>
        <p>We illustrate our EXAM++ method on an example from the TREC CAR Y3 dataset for query
tqa2:L_0384. The passage presented below was retrieved at rank 1 by the dangnt-nlp system
and was assessed by TREC judges as ’MUST be mentioned’.</p>
        <sec id="sec-5-7-1">
          <title>Query title: The Integumentary System Query subtopic: Structure of the Skin</title>
          <p>Passage:
ID: b95bf325b7fdacac183b1daf7c118be407f52a3a
The skin is the largest organ in the human body. Skin is made up of three layers,
the epidermis, dermis and the fat layer, also called the hypodermis. The epidermis
is the outer layer of skin that keeps vital fluids in and harmful bacteria out of the
body. The dermis is the inner layer of skin that contains blood vessels, nerves,
hair follicles, oil, and sweat glands. Severe damage to large areas of skin exposes
the human organism to dehydration and infections that can result in death.</p>
          <p>TREC judgment: 3 (MUST be mentioned)</p>
          <p>The TQA question NDQ_007535 “Outer layer of the skin?” was correctly answered as
“epidermis” by this passage (highlighted in text). Under the self-rating prompt, FLAN-T5 indicates that
this question can be answered in a mostly relevant way but may have minor gaps (self-rated
answerability grade of 4).</p>
          <p>A generated exam question, “What are the main components of the epidermis and how do
they contribute to the structure of the skin?”, was also graded with a self-rating of 4. The
corresponding extracted answer is “keeps vital fluids in and harmful bacteria out of the body”
(highlighted in text).
Other generated questions for this query are:
1. What are the different layers of the skin and their respective functions?
2. How does the structure of the skin contribute to its various functions?
3. What is the role of dermal papillae in the structure of the skin?
4. How does the structure of the hypodermis differ from the other layers of the skin?
5. What structural changes occur in the skin due to aging?
6. How does the skin’s structure contribute to its role in temperature regulation?
7. What role does the extracellular matrix play in the structure and function of the skin?
8. How does the structure of the skin influence its ability to prevent water loss and maintain
hydration?
9. What structural adaptations exist in the skin of different animals and how do they serve their
specific needs?</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion</title>
      <p>With EXAM++ we are proposing an alternative evaluation approach that does not merely
outsource passage-level relevance determination to LLMs (or human judges). Instead, an exam
question bank is created as part of topic development, envisioning that each question addresses
an essential piece of information content for the query. As a result, whenever such questions
are answerable with responses from a retrieval/generation system, we conclude that the system
provides relevant information.</p>
      <p>
        Using the TREC Complex Answer Retrieval data set, we demonstrate that (1) our proposed approach
can reproduce official TREC leaderboards nearly perfectly; and (2) we outperform several
strong LLM-based relevance label predictors [
        <xref ref-type="bibr" rid="ref5 ref6 ref8">5, 6, 8</xref>
        ] that were developed in the context of
other retrieval benchmarks. In contrast, EXAM++ offers a clear path towards integrating a
human-in-the-loop by supporting the refinement of exam question banks as a means for
humans to define relevance.
      </p>
      <p>
        We believe that more research will improve the question bank generation and LLM-based
grading. Future work should study effects on the quality, cost, and satisfaction of human judges
working with the EXAM++ approach in our Autograde software [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>We hope that by integrating the EXAM++ evaluation metric with trec_eval, we offer a system
that can be easily adopted by future IR evaluation tracks, offering organizers an avenue to
reduce assessment costs and obtain reusable test collections for generative information systems.</p>
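A sketch of what such an integration could look like: exporting predicted labels in the standard qrels format that trec_eval consumes. The second passage ID, the file names, and the label values are hypothetical:

```python
# Sketch: write EXAM-Qrels labels as a TREC qrels file so that
# trec_eval can compute MAP, Rprec, etc. over system runs.
labels = {
    ("tqa2:L_0384", "b95bf325b7fdacac183b1daf7c118be407f52a3a"): 5,
    ("tqa2:L_0384", "hypothetical-passage-id"): 0,
}

with open("exam.qrels", "w") as f:
    for (query, passage), grade in sorted(labels.items()):
        # qrels format: query-id  iteration  doc-id  grade
        f.write(f"{query} 0 {passage} {grade}\n")

# Then, on the command line:
#   trec_eval -m map -m Rprec exam.qrels system_run.txt
```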
    </sec>
    <sec id="sec-7">
      <title>A. Appendix: Relevance Label Predictor Prompts</title>
      <p>
        Thomas [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]: As the full prompt exceeds the token limit, we use the following abridged
prompt used in citing work:
      </p>
      <p>Instruction: You are a search quality rater evaluating the relevance of passages.
Given a query and a passage, you must provide a score on an integer scale of 0
to 2 with the following meanings:
2 = highly relevant, very helpful for this query
1 = relevant, may be partly helpful but might contain other irrelevant content
0 = not relevant, should never be shown for this query
Question: {query_title}
Passage: {context}</p>
      <p>Answer:</p>
      <p>
        FaggioliB [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]: Prompt designed for TREC DL:
      </p>
      <p>Instruction: Indicate if the passage is relevant for the question. Respond with
’Yes’ or ’No’.</p>
      <p>Question: {query_title}
Passage: {context}
Answer:</p>
      <sec id="sec-7-1">
        <title>Further Prompts</title>
        <p>
          HELM [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ]: Prompt designed for evaluating LLMs on information retrieval:
Instruction: Does the passage answer the query?
Respond with ’Yes’ or ’No’.
        </p>
        <p>Question: {query_title}
Passage: {context}</p>
        <p>Answer:</p>
        <p>
          Sun [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ]: Prompt designed for question-style queries:
        </p>
        <p>Instruction: Given a passage and a query, predict whether the passage includes
an answer to the query by producing either “Yes” or “No”.</p>
        <p>Question: {query_title}
Passage: {context}</p>
        <p>Answer:</p>
        <p>
          FaggioliB_few [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]: Prompt FaggioliB with additional few-shot examples from the TREC DL
collection:
        </p>
        <p>Instruction: Indicate if the passage is relevant for the question. Respond with
’Yes’ or ’No’.</p>
        <p>Passage: Its 25 drops per ml, you guys are all wrong. If it is water, the
standard was changed 15 - 20 years ago to make 20 drops = 1mL. The viscosity
of most things is temperature dependent, so this would be at room temperature.
Hope this helps.</p>
        <p>Question: how many eye drops per ml
Answer: Yes
Passage: RE: How many eyedrops are there in a 10 ml bottle of Cosopt?
My Kaiser pharmacy insists that 2 bottles should last me 100 days but I run
out way before that time when I am using 4 drops per day. In the past other
pharmacies have given me 3 10-ml bottles for 100 days. E: How many eyedrops
are there in a 10 ml bottle of Cosopt? My Kaiser pharmacy insists that 2 bottles
should last me 100 days but I run out way before that time when I am using 4
drops per day.</p>
        <p>Question: how many eye drops per ml
Answer: No
Passage: You can transfer money to your checking account from other
Wells Fargo. accounts through Wells Fargo Mobile Banking with the mobile app,
online, at any. Wells Fargo ATM, or at a Wells Fargo branch. 1 Money in —
deposits.</p>
        <p>Question: can you open a wells fargo account online
Answer: No
Passage: You can open a Wells Fargo banking account from your home or
even online. It is really easy to do, provided you have all of the appropriate
documentation. Wells Fargo has so many bank account options that you will be
sure to find one that works for you. They offer free checking accounts with free
online banking.</p>
        <p>Question: can you open a wells fargo account online
Answer: Yes</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>D. P.</given-names>
            <surname>Sander</surname>
          </string-name>
          , L. Dietz, EXAM:
          <article-title>How to evaluate retrieve-and-generate systems for users who do not (yet) know what they want</article-title>
          .,
          <source>in: DESIRES</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>136</fpage>
          -
          <lpage>146</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>S.</given-names>
            <surname>Longpre</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Hou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Vu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Webson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. W.</given-names>
            <surname>Chung</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Tay</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q. V.</given-names>
            <surname>Le</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Zoph</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Wei</surname>
          </string-name>
          , et al.,
          <article-title>The flan collection: Designing data and methods for efective instruction tuning</article-title>
          ,
          <source>arXiv preprint arXiv:2301.13688</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dietz</surname>
          </string-name>
          ,
          <article-title>A workbench for autograding retrieve/generate systems</article-title>
          ,
          <source>in: Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '24) - Resource and Reproducibility Papers</source>
          ,
          <year>2024</year>
          . doi:10.1145/3626772.3657871.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>N.</given-names>
            <surname>Farzi</surname>
          </string-name>
          , L. Dietz,
          <article-title>Pencils down! Automatic rubric-based evaluation of retrieve/generate systems</article-title>
          ,
          <source>in: Proceedings of the International Conference on the Theory of Information Retrieval</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>W.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Yin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <article-title>Is chatgpt good at search? investigating large language models as re-ranking agent</article-title>
          , arXiv e-prints (
          <year>2023</year>
          ) arXiv-
          <fpage>2304</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>G.</given-names>
            <surname>Faggioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Dietz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Clarke</surname>
          </string-name>
          , G. Demartini,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hauf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kando</surname>
          </string-name>
          , E. Kanoulas,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          , et al.,
          <article-title>Perspectives on large language models for relevance judgment</article-title>
          ,
          <source>in: Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>39</fpage>
          -
          <lpage>50</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>P.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Bommasani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tsipras</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Soylu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Yasunaga</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Narayanan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Kumar</surname>
          </string-name>
          , et al.,
          <article-title>Holistic evaluation of language models</article-title>
          ,
          <source>arXiv preprint arXiv:2211.09110</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>P.</given-names>
            <surname>Thomas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Spielman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Craswell</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mitra</surname>
          </string-name>
          ,
          <article-title>Large language models can accurately predict searcher preferences</article-title>
          ,
          <year>2023</year>
          . arXiv:2309.10621.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>S.</given-names>
            <surname>MacAvaney</surname>
          </string-name>
          , L. Soldaini,
          <article-title>One-shot labeling for automatic relevance estimation</article-title>
          ,
          <source>arXiv preprint arXiv:2302.11266</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Faggioli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Dietz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Clarke</surname>
          </string-name>
          , G. Demartini,
          <string-name>
            <given-names>M.</given-names>
            <surname>Hagen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Hauf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Kando</surname>
          </string-name>
          , E. Kanoulas,
          <string-name>
            <given-names>M.</given-names>
            <surname>Potthast</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Stein</surname>
          </string-name>
          , et al.,
          <article-title>Who determines what is relevant? humans or ai? why not both? a spectrum of human-ai collaboration in assessing relevance</article-title>
          .,
          <source>Communications of the ACM</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N. S.</given-names>
            <surname>Moosavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <article-title>Llms as narcissistic evaluators: When ego inflates evaluation scores</article-title>
          ,
          <source>arXiv preprint arXiv:2311.09766</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>P.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Lin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Cao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Q.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Sui</surname>
          </string-name>
          ,
          <article-title>Large language models are not fair evaluators</article-title>
          ,
          <source>arXiv preprint arXiv:2305.17926</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>R.</given-names>
            <surname>Fok</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. S.</given-names>
            <surname>Weld</surname>
          </string-name>
          ,
          <article-title>In search of verifiability: Explanations rarely enable complementary performance in ai-advised decision making</article-title>
          ,
          <source>arXiv preprint arXiv:2305.07722</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>J.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lapata</surname>
          </string-name>
          ,
          <article-title>Discourse constraints for document compression</article-title>
          ,
          <source>Computational Linguistics</source>
          <volume>36</volume>
          (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>M.</given-names>
            <surname>Eyal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Baumel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Elhadad</surname>
          </string-name>
          ,
          <article-title>Question answering as an automatic evaluation metric for news article summarization, in: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), Association for Computational Linguistics</article-title>
          , Minneapolis, Minnesota,
          <year>2019</year>
          , pp.
          <fpage>3938</fpage>
          -
          <lpage>3948</lpage>
          . URL: https://www.aclweb.org/anthology/N19-1395. doi:10.18653/v1/N19-1395.
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>L.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Knowledge graph-augmented abstractive summarization with semantic-driven cloze reward</article-title>
          ,
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          (
          <year>2020</year>
          ). URL: http://dx.doi.org/10.18653/v1/2020.acl-main.457. doi:10.18653/v1/2020.acl-main.457.
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Deutsch</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Bedrax-Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Roth</surname>
          </string-name>
          ,
          <article-title>Towards question-answering as an automatic metric for evaluating the content quality of a summary</article-title>
          , arXiv preprint arXiv:2010.00490 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>A.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Cho</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lewis</surname>
          </string-name>
          ,
          <article-title>Asking and answering questions to evaluate the factual consistency of summaries</article-title>
          , in:
          <source>Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics</source>
          , Association for Computational Linguistics, Online,
          <year>2020</year>
          , pp.
          <fpage>5008</fpage>
          -
          <lpage>5020</lpage>
          . URL: https://www.aclweb.org/anthology/2020.acl-main.450. doi:10.18653/v1/2020.acl-main.450.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>T. B.</given-names>
            <surname>Brown</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Mann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Ryder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Subbiah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kaplan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dhariwal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Neelakantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Shyam</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Sastry</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Askell</surname>
          </string-name>
          , et al.,
          <article-title>Language models are few-shot learners</article-title>
          , arXiv preprint arXiv:2005.14165 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>A.</given-names>
            <surname>Kembhavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Seo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Schwenk</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Farhadi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Hajishirzi</surname>
          </string-name>
          ,
          <article-title>Are you smarter than a sixth grader? Textbook question answering for multimodal machine comprehension</article-title>
          ,
          <source>2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)</source>
          (
          <year>2017</year>
          )
          <fpage>5376</fpage>
          -
          <lpage>5384</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>P.</given-names>
            <surname>Rajpurkar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Lopyrev</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Liang</surname>
          </string-name>
          ,
          <article-title>SQuAD: 100,000+ questions for machine comprehension of text</article-title>
          ,
          <source>CoRR abs/1606.05250</source>
          (
          <year>2016</year>
          ). URL: http://arxiv.org/abs/1606.05250. arXiv:1606.05250.
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>C. L.</given-names>
            <surname>Clarke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kolla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. V.</given-names>
            <surname>Cormack</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Vechtomova</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ashkan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Büttcher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>MacKinnon</surname>
          </string-name>
          ,
          <article-title>Novelty and diversity in information retrieval evaluation</article-title>
          , in:
          <source>Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>659</fpage>
          -
          <lpage>666</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>T.</given-names>
            <surname>Sakai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. P.</given-names>
            <surname>Kato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.-I.</given-names>
            <surname>Song</surname>
          </string-name>
          ,
          <article-title>Overview of NTCIR-9</article-title>
          , in:
          <source>Proceedings of the 9th NTCIR Workshop Meeting</source>
          ,
          <year>2011</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Dietz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Foley</surname>
          </string-name>
          ,
          <article-title>TREC CAR Y3: Complex Answer Retrieval overview</article-title>
          , in:
          <source>Proceedings of the Text REtrieval Conference (TREC)</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>