=Paper=
{{Paper
|id=Vol-3752/paper6
|storemode=property
|title=Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework
|pdfUrl=https://ceur-ws.org/Vol-3752/paper6.pdf
|volume=Vol-3752
|authors=Zackary Rackauckas,Arthur Câmara,Jakub Zavrel
|dblpUrl=https://dblp.org/rec/conf/llm4eval/RackauckasCZ24
}}
==Evaluating RAG-Fusion with RAGElo: an Automated Elo-based Framework==
Zackary Rackauckas¹, Arthur Câmara², Jakub Zavrel²

¹ Columbia University, New York, NY, USA
² Zeta Alpha, Amsterdam, The Netherlands
Abstract
Challenges in the automated evaluation of Retrieval-Augmented Generation (RAG) question-answering (QA) systems include hallucination problems in domain-specific knowledge and the lack of gold-standard benchmarks for company-internal tasks. This results in difficulties in evaluating RAG variations, like RAG-Fusion (RAGF), in the context of a product QA task at Infineon Technologies. To solve these problems, we propose a comprehensive evaluation framework, which leverages Large Language Models (LLMs) to generate large datasets of synthetic queries based on real user queries and in-domain documents, uses LLM-as-a-judge to rate retrieved documents and answers, evaluates the quality of answers, and ranks different variants of RAG agents with RAGElo's automated Elo-based competition. LLM-as-a-judge rating of a random sample of synthetic queries shows a moderate, positive correlation with domain-expert scoring in relevance, accuracy, completeness, and precision. While RAGF outperformed RAG in Elo score, a significance analysis against expert annotations also shows that RAGF significantly outperforms RAG in completeness but underperforms in precision. In addition, Infineon's RAGF assistant demonstrated slightly higher performance in document relevance based on MRR@5 scores. We find that RAGElo positively aligns with the preferences of human annotators, though due caution is still required. Finally, RAGF's approach leads to more complete answers based on expert annotations and better answers overall based on RAGElo's evaluation criteria.
Keywords
Retrieval-augmented generation, Elo-based evaluation, LLM-as-a-judge, RAG-Fusion
1. Introduction
The text-generating capabilities of LLMs, together with their text understanding abilities,
have allowed conversational Question-Answering (QA) systems to experience a consider-
able leap in performance, with near-human text quality and reasoning capabilities [1].
However, these systems can be prone to hallucinations [2, 3], as they sometimes produce
seemingly plausible but factually incorrect answers.
The general inability of such models to identify unanswerable questions [4, 5] can
exacerbate hallucinations, especially in enterprise settings. In such scenarios, user
questions may require specific domain knowledge to be answered properly. This knowledge
LLM4Eval 2024: The First Workshop on Large Language Models for Evaluation in Information Retrieval, 18 July 2024, Washington, DC
zcr2105@columbia.edu (Z. Rackauckas); camara@zeta-alpha.com (A. Câmara); zavrel@zeta-alpha.com (J. Zavrel)
© 2024 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).
CEUR Workshop Proceedings (ceur-ws.org), ISSN 1613-0073
is usually out-of-domain for most LLMs, but is present in private and confidential internal
documents from the company.
One such company is Infineon, a leading manufacturer of semiconductors. Given its
wide range of equipment, information about its products is spread across multiple, highly
technical documents, including datasheets and selection guides of hundreds of pages.
Therefore, an internal retrieval augmented conversational QA system was developed by
Infineon for internal users such as account managers, field application engineers, and sales
operations specialists. This system allows professionals to ask questions about products
from the whole catalog while in the field.
One of the features of Infineon's conversational agent is the usage of RAG-Fusion (RAGF), a technique for increasing the quality of the generated answers by generating variations of the user question and combining the rankings produced by these variations using rank-fusion methods (i.e., reciprocal rank fusion (RRF) [6]) into a ranking that contains both more diverse and higher-quality answers.
However, evaluating these systems brings complications common to retrieval augmented
agents, especially in enterprise settings, stemming from the lack of comprehensive test
datasets. Ideally, such a test set would comprise a large set of real user questions from
a query log, paired with “golden answers” provided by experts. The lack of such a
test set leads to two main issues. First, evaluation of answers generated by LLMs by
traditional n-gram evaluation metrics such as ROUGE [7], BLEU [8], and METEOR [9]
is not possible, given the lack of ground truth answers. Second, and as a consequence,
evaluating the quality of the answers generated by the LLM systems would require
in-domain experts (potentially from within the company) in a process that is both slow
and costly [10].
One approach for tackling the lack of an extensive test set is to use synthetic queries
generated by LLMs as a proxy of user queries [11]. However, the lack of in-domain
knowledge of LLMs makes queries naively generated by these models unreliable and
prone to hallucinations, especially when generating queries about specific products and
their specifications (cf. Table 1 for examples of real users' questions submitted to the
system).
To solve this, we propose to use a process similar to InPars [12] to create a set of
synthetic evaluation queries. We ask LLMs to generate queries based on portions of
existing documentation injected into the prompt. To increase similarity to real user
queries, we include existing user questions as few-shot examples to the prompt. With
this process, we are able to generate a large set of high-quality synthetic queries for
evaluating our systems. Figure 2 describes the process of generating synthetic queries
and the output of a search agent. Table 2 shows a sample of these queries.
To tackle the second issue, a lack of ground truth “golden answers,” we leverage an
LLM-as-a-judge process, where a strong LLM is used to evaluate the quality of the
answers generated by the RAG agent’s LLM [13]. We then follow the practice of judging
generated answers in a pairwise fashion [14], prompting the judge LLM to select the
better answer between two candidates generated by different RAG pipelines (cf. Section 6 for details of our pipelines).
Finally, to mitigate the lack of in-domain knowledge of the judging LLM, we also
annotate the relevance of the documents retrieved by the pipelines being evaluated and
inject the relevant documents in the context used by the judging LLM. This allows the
judging LLM to better assess for hallucinations and completeness and better align the
quality of the evaluations to those conducted by experts.
This process is mediated by RAGElo¹, a toolkit for evaluating RAG systems inspired
by the Elo ranking system. RAGElo provides an easy-to-use CLI and Python library
for using LLMs to evaluate retrieval results and answers produced by RAG pipelines.
By combining a retrieval evaluator, a pairwise answer annotator, and an Elo-inspired
tournament, RAGElo leverages powerful LLMs to agnostically annotate and rank different
RAG pipelines. We notice that, although noisy, the LLM annotations generated by
RAGElo are generally well aligned with experts’ judgments of relative system quality,
allowing for fast experimentation and comparisons between different RAG implementations
without the frequent intervention of experts as annotators.
This paper evaluates multiple implementations of Infineon’s retrieval augmented con-
versational agent using RAGElo : a traditional Retrieval-Augmented Generation and a
RAG -Fusion implementation. RAG -Fusion generates multiple variations of the user question
and combines the rankings produced by these queries into a more diverse set of documents.
The documents are then fed into the LLM. We also analyze these same agents under
a keyword-based retrieval regimen (i.e., the retriever uses BM25 to retrieve and rank
documents), a dense retriever, and a hybrid retriever that combines the ranking generated
by the BM25 and the dense retrievers using RRF. Our goal is to answer the following
questions:
• Does the evaluation framework proposed by RAGElo align with the preferences of
human annotators for answers generated by RAG -based conversational agents?
• Does the RAGF approach of submitting multiple variations of the user question and
combining their rankings lead to better answers?
Table 1
Sample of questions submitted by users to the Infineon RAG-Fusion system
User-submitted queries
What is the country of origin of IM72D128, and how does geopolitical exposure affect the market and
my SAM for the microphone?
What is the IP rating of mounted IM72D128?
Tell me microphones that have been released since January 2023 based on the datasheet revision
history.
We need to confirm whether the IFX waterproof MIC has a sleeping mode and wake-up functions.
Table 2
Sample of synthetic queries for evaluating Infineon's RAG assistant. GPT4 refers to OpenAI's gpt-4-turbo-2024-04-09 model. Opus, Sonnet and Haiku refer to Anthropic's Claude 3 models opus-20240229, sonnet-20240229 and haiku-20240307, respectively.
model Query
GPT4 What are some typical consumer applications for TLV496x-xTA/B sensors?
GPT4 What specific ISO 26262 readiness is available for the KP253 sensor?
Opus How small of a form factor can I achieve for a battery-powered air quality device using
Infineon’s PAS CO2 sensor?
Sonnet Can Infineon’s sensors support bus configurations or daisy-chaining for simplified wiring and
reduced complexity in IoT systems?
Haiku Which TLE4971 current sensor models are available in the TISON-8-6 package?
Figure 1: A traditional Retrieval-Augmented Generation pipeline compared to a RAG-Fusion pipeline. While a traditional RAG agent submits only the original query to the search system, a RAGF agent first generates variations of the user query and combines the rankings induced by these queries into a final ranking using RRF. The resulting top-k passages are fed into the LLM for generating the answer to the user's query.
2. Related Work
Several evaluation systems for RAG have been proposed to address flaws in current
evaluation methods. For instance, Facts as a Function (FaaF) [15] is an end-to-end
factual evaluation algorithm specially created for RAG pipelines. By creating functions
from ground truth facts, FaaF focuses on the quality of generation and retrieval by
calling LLMs. FaaF has substantially increased efficiency and cost-effectiveness, achieving
reduced error rates compared to traditional evaluation methods. The reliance on a set of
ground truths does not meet our goal of applying an automated evaluation toolkit to
our pipelines. Recently, researchers have moved to eliminate the need for ground truths.
This is especially important when automatically evaluating agents that retrieve highly
technical documents from a large database, such as the Infineon RAGF conversational
agent. RAGElo eliminates this reliance by using an LLM-as-a-judge, a method studied in numerous recent works.

¹ https://github.com/zetaalphavector/ragelo
SelfCheckGPT demonstrates the ability to leverage LLMs to detect and rank factuality
with zero resources [16]. In addition, it has been demonstrated that GPT-3.5 Turbo outperforms ground-truth baselines in fact-checking with a "1/2-shot" method [17]. A
model built to classify statements as true or false based on the activations of an LLM’s
hidden layers had up to 83% classification accuracy [18]. This evidence supports RAGElo ’s
usage of LLM-as-a-judge.
Automated evaluation metrics can also be applied to RAG-based agents. BARTScore,
an automated metric based on the BART architecture, has also outperformed most
metrics on categories including factuality [19, 20]. Besides automated evaluation metrics,
several automated evaluation frameworks have been created with a similar goal to RAGElo .
Focusing on faithfulness, answer relevance, and content relevance, RAGAS leverages
LLM prompting to focus on situations where ground truths and human annotations are
not present in a dataset [21]. Prediction-powered inference aims to decrease the number
of human annotations needed for machine learning prediction on a dataset of images
of galaxies with approximately 300,000 annotations [22]. The ARES toolkit leverages
prediction-powered inference to evaluate RAG systems with fewer human annotations.
Like RAGElo , ARES automatically evaluates RAG systems using synthetically generated
data [23].
ARAGOG highlights Hypothetical Document Embedding (HyDE) and LLM reranking
as effective methods for enhancing retrieval precision while also exploring the effectiveness
of Sentence Window Retrieval and the potential of the Document Summary Index in
improving RAG systems [24].
While the aforementioned frameworks evaluate answers on relevance, faithfulness, and
correctness metrics, RAG can also be evaluated on noise and counterfactual robustness,
negative rejection, and information integration [25].
In addition to answers, frameworks have also been created to evaluate documents.
Corrective Retrieval Augmented Generation (CRAG) builds on RAG by employing a
retrieval evaluator to ensure that only the optimal documents are fed into the LLM
prompt prior to the answer generation phase [26].
Due to its Elo-based ranking system for answers, its use of LLM-as-a-judge, and its
relevance evaluation of the intermediate retrieval steps in a RAG pipeline, RAGElo is a
unique evaluation toolkit. In this study, we use it to compare a simple RAG versus a more
sophisticated RAGF system on a knowledge-intensive industry-specific domain.
3. Retrieval Augmented QA with rank fusion
While answers generated by traditional retrieval augmented systems are based on a
number of documents retrieved from a single query, RAGF introduces additional variation
into the retrieval process. Upon receiving a query from the user, a RAGF agent leverages
a large language model to generate a set of queries based on the original [27]. Table 3
shows examples of queries generated by the agent based on the query, “How to cross-sell
a MEMS microphone and a XENSIV sensor to customers?”.
Table 3
Queries Generated from “How to cross-sell a MEMS microphone and a XENSIV sensor to customers?”
LLM-Generated Query
What are the key features of Infineon’s MEMS microphones and XENSIV sensors that can be
highlighted while cross-selling?
How can Infineon’s MEMS microphones and XENSIV sensors be integrated for enhanced audio and
motion sensing capabilities in various applications?
What are the most suitable applications and industries for Infineon’s MEMS microphones and XENSIV
sensors to maximize cross-selling potential?
After generating the variations for the user query, the RAGF agent submits the original
and the generated queries to a retrieval system [28] that returns the top-𝑘 relevant
documents 𝑑1, 𝑑2, …, 𝑑𝑘 from the set of all documents 𝐷 for each query. The rankings induced by these queries are then combined using reciprocal rank fusion (RRF) [6] into
a final, higher-quality set of passages. The intuition behind RAGF is that submitting
variations of the same query and combining the final rankings increases the likelihood of
relevant passages being injected into the LLM prompt. In contrast, non-relevant passages
retrieved by a single query are discarded. Figure 1 describes how RAG and RAGF differ.
$$\mathrm{RRFScore}(d \in D) = \sum_{r \in R} \frac{1}{r(d) + k} \qquad (1)$$
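The fused score in Eq. (1) can be sketched in a few lines of Python. This is an illustrative implementation, not the agent's actual code; the smoothing constant k = 60 is the value commonly used in the RRF literature [6], and the function name is our own.

```python
# Illustrative sketch of reciprocal rank fusion (Eq. 1), not Infineon's actual code.
# `rankings` maps each query variation to its ranked list of document IDs.
from collections import defaultdict

def rrf_fuse(rankings, k=60, top_k=5):
    """Fuse several rankings into one; k is the RRF smoothing constant."""
    scores = defaultdict(float)
    for ranked_docs in rankings.values():
        for rank, doc_id in enumerate(ranked_docs, start=1):
            scores[doc_id] += 1.0 / (rank + k)  # one term of Eq. (1) per ranking
    # Highest fused score first, truncated to the final top-k
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```

A document ranked highly by several query variations accumulates more score than one retrieved by a single query, which is exactly the diversity effect described above.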
4. Development of a synthetic test set
Figure 2: Process for creating synthetic queries. We prompt multiple LLMs (gpt-4-turbo-2024-04-09, claude-3-opus-20240229, claude-3-sonnet-20240229, and claude-3-haiku-20240307) to generate queries based on existing documents. We include some existing user queries in the prompt as few-shot examples, and pool the generated queries.
As previously discussed, one of the main issues when evaluating the quality of a QA
system in an enterprise setting is that, frequently, companies do not have a large enough
existing collection of queries to evaluate such systems’ quality. Therefore, in this work,
we propose to adopt a strategy previously used by methods for generating synthetic
queries for training retrieval systems, such as InPars [12] and Promptagator [29].
Similar to these approaches, we randomly sample passages from documents within
our collection and prompt an LLM to generate questions that users may ask about
these portions. However, one difference in our approach to generating training queries
is the size of these passages. When generating queries for training a retrieval system,
we ideally want to keep the passages short to fit in the dense encoder’s relatively short
context windows. However, when generating queries for evaluating QA systems (including
retrieval augmented), we are not bound to the limit of the embedding model used for
retrieval. Rather, a longer passage may yield questions that require multiple shorter
passages to be answered. Therefore, we submit relatively long passages to the LLMs.
Specifically, each passage is extracted from up to ten pages of PDF documents (about 2,000 tokens²).
To keep the questions generated as diverse as possible, we prompt four different LLMs
to generate up to ten questions based on the same documents. Our test set collection
contains a mix of queries generated by OpenAI’s GPT-4 turbo [30] and Anthropic’s
Claude-3 [31] Opus, Sonnet, and Haiku models³. From a set of 𝑁 = 840 queries, we
sampled 200 queries across all four models. Half of the queries are selected from GPT-4
generated queries, and the other half from Claude 3 queries. Among the Claude 3 queries,
to ensure the quality of the queries and their diversity, we again sample according to
each model size. Ultimately, our test set contains 100 queries from GPT-4-turbo, 50 from
Claude 3 Opus, 30 from Sonnet, and 20 from Haiku.
Finally, to increase the quality of the generated queries, we asked an account manager, a sales operations specialist, a marketing representative, and a business development manager to create queries that they would submit to the conversational agent from the perspective of their role. They were instructed to produce queries regarding products from the XENSIV sensor product line, consisting of MEMS microphones, radar, current, magnetic, pressure, and environmental sensors. We compiled a list of 23 of these
queries to use as a base for experimentation and used them as few-shot examples in
the query generation prompt. Figure 2 illustrates our method for generating synthetic
queries based on existing user queries and document passages.
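The few-shot prompt construction described above can be sketched as follows. The wording, function name, and parameters are illustrative assumptions of ours, not the exact prompt used in the paper.

```python
# Hypothetical sketch of the synthetic-query generation prompt: a long document
# passage plus real user questions as few-shot examples. The prompt text is an
# assumption for illustration, not the paper's actual prompt.
def build_query_gen_prompt(passage, example_queries, n_questions=10):
    examples = "\n".join(f"- {q}" for q in example_queries)
    return (
        "You are a user of a product QA assistant. Based on the document "
        f"passage below, write up to {n_questions} questions that such a "
        "user might ask. Follow the style of these real user questions:\n"
        f"{examples}\n\n"
        f"Passage:\n{passage}\n\nQuestions:"
    )
```

The same prompt would be sent to each of the four query-generator LLMs, and the resulting questions pooled as in Figure 2.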
5. LLM-as-a-Judge for RAG pipelines
Even with a suitable set of synthetic questions for evaluating our RAG conversational
agent, assessing whether a given answer properly answers a question is not trivially done.
If a ground-truth “golden answer” is available, one can use traditional syntactic-based
² All LLMs used in our experiments had long context windows of 128k or 200k tokens.
³ We did not use GPT-3.5 or open-source models due to their shorter context windows at the time of writing.
Figure 3: The RAGElo evaluation pipeline. First, documents retrieved by the agents are evaluated pointwise according to their relevance to the user's question. Then, the agents' answers are evaluated pairwise, using the retrieved relevant documents from both agents as reference.
metrics such as BLEU, METEOR or ROUGE [8, 9, 7]. Without such reference answers,
one would require human annotators with a considerable understanding of the question’s
topic to manually assess the quality of the answers produced by each system. However,
this is a costly process.
Alternatively, several LLM-as-a-Judge methods have been proposed, where another
LLM is asked to evaluate the quality of answers generated by other LLMs. Nevertheless,
in an enterprise setting, the answers usually require the LLM to access knowledge not
present in their training datasets but rather contained in documents internal to the
company. This is usually accomplished using a RAG pipeline like the one described above.
Therefore, the judging LLM also needs access to similar knowledge to accurately evaluate
the agent’s answers’ quality.
In this work, we therefore rely on RAGElo, an open-source RAG evaluation toolkit that evaluates both the answers generated by each agent and the documents retrieved by them. By
injecting the annotation of retrieved documents, pooled by the agents being evaluated,
on the answer evaluation step, this method allows for the judging LLM to evaluate if
the generated answer was able to use all the information available about the question
properly and to check for any hallucinations. As the documents used for generating the
answers are included in the answer evaluating prompt, an agent that incorrectly cites
information from a source or refers to information not present in these documents is
likely hallucinating and should have its evaluation adjusted accordingly. As we explore
in Section 8, this two-step process results in a high correlation between human expert
annotators and the judging LLM, enabling higher reliability and trust when evaluating
different RAG pipelines. This process is also illustrated in Figure 3.
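The two-step idea of injecting pooled relevant documents into a pairwise judging prompt can be sketched as below. The prompt wording is a hypothetical stand-in, not RAGElo's actual prompt; only the `[[A]]`/`[[B]]` verdict convention follows the pairwise-judging practice cited above [14].

```python
# Illustrative sketch of a pairwise LLM-as-a-judge prompt with retrieved
# documents injected as grounding context. The wording is an assumption,
# not RAGElo's real prompt; a `judge_llm` callable would consume the result.
def build_pairwise_prompt(question, relevant_docs, answer_a, answer_b):
    docs = "\n".join(f"[{i}] {d}" for i, d in enumerate(relevant_docs))
    return (
        "You are an impartial judge. Using ONLY the reference documents "
        "below, decide which assistant answers the question better. "
        "Penalize claims not supported by the documents (hallucinations).\n"
        f"Question: {question}\n\nReference documents:\n{docs}\n\n"
        f"Assistant A:\n{answer_a}\n\nAssistant B:\n{answer_b}\n\n"
        'End with a verdict: "[[A]]", "[[B]]", or "[[C]]" for a tie.'
    )
```

Because the documents used by both agents are in the prompt, an answer citing information absent from them can be flagged as a likely hallucination, as described above.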
5.1. Evaluation aspects
While our main evaluation focuses on the pairwise comparison between the two agents,
RAGElo also allows us to evaluate answers pointwise. In this setting, similar to other
works [32], we prompt the judging LLM to evaluate the answers according to multiple
criteria:
• Relevance: Does the answer address the user’s question?
• Accuracy: Is the answer factually correct, based on the documents provided?
• Completeness: Does the answer provide all the information needed to answer the
user’s question?
• Precision: If the user’s question is about a specific product, does the answer provide
the answer for that specific product?
6. Retrieval pipelines
We experiment not only with different search agents (i.e., RAG and RAGF) but are also interested in how different retrieval methods may impact the quality of the final answers generated by these agents.
6.1. Retrieval methods
Our corpus consists of passages extracted from the Infineon XENSIV Product Selection
Guide, a 117-page document with detailed information on every product in the XENSIV
family. This document included technical information about all Infineon XENSIV sensors,
consumer and automotive sensor applications, guidance in selecting the correct sensor,
and other comprehensive and detailed information about the product line.
The passages are embedded using multilingual-e5-base [33]⁴ and indexed using OpenSearch, allowing us to perform KNN-based vector search, keyword-based search with BM25 [34], and RRF-based hybrids thereof.
6.2. QA Systems Implementation
We mainly evaluate two agents: a naive RAG pipeline, where the agent first retrieves top-𝑘
passages that are then templated into a prompt, and the Infineon RAG -Fusion (RAGF )
agent. Upon receiving a query, a naive RAG agent takes the following actions:
1. Retrieve the top k most relevant passages from the search system.
2. Perform a Chat Completions API call, prompting the LLM with instructions for
generating an answer based on the five relevant passages.
3. Process and output the Chat Completions response.
Meanwhile, the Infineon RAGF conversational assistant uses a similar framework and
performs the following steps upon receiving a query:
1. Perform a Chat Completions API call to generate four new queries based on the
original query using a prompt tailored to the agent’s original goal.
2. Retrieve the top k most relevant passages for each query.
3. Using RRF, combine the top-𝑘 passages induced by all queries into a final ranking.
4. Perform a Chat Completions API call prompting the LLM with carefully worded instructions for generating an answer based on the top-𝑘 fused passages.
5. Process and output the Chat Completions response.
⁴ https://huggingface.co/intfloat/multilingual-e5-base
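The two control flows above can be summarized in a short sketch. Here `retrieve`, `generate_answer`, and `generate_query_variations` are placeholders standing in for the OpenSearch backend and the Chat Completions calls; this is not Infineon's implementation.

```python
# Sketch of the two agents' control flow with placeholder dependencies.
from collections import defaultdict

def naive_rag(query, retrieve, generate_answer, k=5):
    passages = retrieve(query, k)             # step 1: top-k retrieval
    return generate_answer(query, passages)   # steps 2-3: prompt the LLM

def rag_fusion(query, retrieve, generate_answer, generate_query_variations,
               k=5, rrf_k=60):
    # Step 1: original query plus LLM-generated variations
    queries = [query] + generate_query_variations(query)
    scores = defaultdict(float)
    for q in queries:                         # steps 2-3: retrieve and RRF-fuse
        for rank, doc in enumerate(retrieve(q, k), start=1):
            scores[doc] += 1.0 / (rank + rrf_k)
    fused = sorted(scores, key=scores.get, reverse=True)[:k]
    return generate_answer(query, fused)      # steps 4-5: answer from fused passages
```

The only structural difference is the extra query-generation call and the RRF fusion step; both agents end with the same answer-generation prompt.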
7. Experiments
7.1. Comparing LLM-as-a-judge to expert annotators
While LLM-as-a-judge is a theoretically viable algorithm for rating RAG and RAGF answers,
we must establish whether the results agree with the annotations of domain experts.
Figure 4 provides a Bland-Altman plot to visually represent the LLM and human
judgments’ agreement.
Figure 4: Bland-Altman plot to visualize the comparison between LLM-as-a-judge and expert answers.
The bias of approximately 0.12 indicates that, on average, LLM scores were slightly higher than human scores. The limits of agreement ranged from approximately -1.17 to 1.41, demonstrating substantial variability in the differences between LLM and human evaluators.
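For reference, the Bland-Altman quantities reported above are the mean of the paired (LLM minus human) score differences (the bias) and that mean plus or minus 1.96 sample standard deviations (the limits of agreement). A minimal sketch on made-up scores:

```python
# Minimal sketch of Bland-Altman bias and limits of agreement; the input
# scores in the test are invented, not the paper's data.
from statistics import mean, stdev

def bland_altman(llm_scores, human_scores):
    diffs = [l - h for l, h in zip(llm_scores, human_scores)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample standard deviation of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```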
Next, we compared LLM-as-a-judge to expert annotators with Kendall's 𝜏. Kendall's 𝜏 is a nonparametric measure that quantifies the degree of association between two monotonic continuous or ordinal variables by calculating the proportion of concordance and discordance among pairwise ranks, offering valuable insight into their rank correlation [35, 36]. We used SciPy's kendalltau function to calculate a tau-b score and a p-value for the combined ratings of all columns, flattened into a 1-D array with RAG and RAGF ratings combined [37]. The tau-b value, a nonparametric measure of association, is
calculated using the following formula [38]:
$$\tau_b = \frac{P - Q}{\sqrt{(P + Q + T)\,(P + Q + U)}} \qquad (2)$$
P represents the number of concordant pairs, Q represents the number of discordant
pairs, T represents the number of ties exclusive to x, and U represents the number of ties
exclusive to y.
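While our experiments use SciPy's kendalltau, the tau-b formula above is simple enough to implement directly; the following standard-library sketch follows the P, Q, T, U definitions just given.

```python
# Direct implementation of the tau-b formula (Eq. 2) for illustration;
# SciPy's kendalltau computes the same quantity.
from itertools import combinations
from math import sqrt

def kendall_tau_b(x, y):
    p = q = t = u = 0
    for (x1, y1), (x2, y2) in combinations(zip(x, y), 2):
        dx, dy = x1 - x2, y1 - y2
        if dx == 0 and dy == 0:
            continue            # pairs tied in both variables enter neither term
        elif dx == 0:
            t += 1              # T: ties exclusive to x
        elif dy == 0:
            u += 1              # U: ties exclusive to y
        elif dx * dy > 0:
            p += 1              # P: concordant pairs
        else:
            q += 1              # Q: discordant pairs
    return (p - q) / sqrt((p + q + t) * (p + q + u))
```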
This test returned 𝜏 ≈ 0.56, indicating a moderate, positive correlation [39], with 𝑝 < 0.01 against the null hypothesis of no association.
For comparison, in similar experiments comparing human and LLM judgments, Faggioli et al. found values of 𝜏 = 0.76 and 𝜏 = 0.86 [40].
Following the same methodology, we also calculated Spearman’s 𝜌, a similar nonpara-
metric correlation measure. This resulted in 𝜌 ≈ 0.59 with 𝑝 < 0.01, demonstrating a
statistically significant, moderate positive correlation [36].
7.2. RAG vs RAGF
7.2.1. Quality of retrieved documents
We assessed document retrieval quality using Mean Reciprocal Rank@5 (MRR@5), which
averages the inverse ranks of the first relevant result within the top five positions across
all queries. The formula is given by
$$\mathrm{MRR@5} = \frac{1}{|Q|}\sum_{i=1}^{|Q|}\frac{1}{\mathrm{rank}_i}, \qquad (3)$$
where |𝑄| is the total number of queries and the reciprocal rank 1/rank𝑖 is counted only if the first relevant document appears within the top five positions; otherwise the query contributes zero [41].
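A minimal sketch of Eq. (3), treating queries whose first relevant document falls outside the top five as contributing zero:

```python
# Illustrative MRR@5 implementation (Eq. 3); the test data is invented.
def mrr_at_5(rankings, relevant):
    """rankings: list of ranked doc-ID lists; relevant: list of relevant-ID sets."""
    total = 0.0
    for ranked, rel in zip(rankings, relevant):
        for rank, doc_id in enumerate(ranked[:5], start=1):
            if doc_id in rel:
                total += 1.0 / rank   # reciprocal rank of first relevant hit
                break                 # queries with no hit in the top 5 add zero
    return total / len(rankings)
```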
MRR@5 scores were calculated for each agent and each retrieval method considering
two categories:
1. MRR@5 score for documents deemed “somewhat relevant” or “very relevant.”
2. MRR@5 score for documents deemed “very relevant.”
The results can be seen below in Table 4.
Table 4
Mean MRR@5 scores for RAG vs RAG-F. The retrieval method columns indicate if the retrieval component
used was vector search only (KNN), keywords only (BM25) or hybrid (KNN and BM25, combined with
RRF).
Agent Retrieval Method Very Relevant Somewhat Relevant
RAG KNN 0.407 0.828
RAG BM25 0.821 0.955
RAG Hybrid 0.746 0.949
RAG-F KNN 0.396 0.810
RAG-F BM25 0.855 0.970
RAG-F Hybrid 0.758 0.961
7.2.2. Pairwise evaluation of answers
We then ran RAGElo games to evaluate the end-to-end answer quality of RAG vs RAGF with different base retriever configurations, a task that cannot rely on standard Information Retrieval metrics. These RAGElo results show more victories for RAGF than for RAG; for example, when using BM25 as a base retriever, RAGF won 49% of the games, RAG won 14.5%, and the two were tied 36.5% of the time. The resulting Elo scores for all six variants are shown in Table 6, which gives a robust ranking of the systems without reliance on a gold standard. It is interesting to see that, for both RAGF and RAG, BM25 is a strong baseline that is not surpassed by generic embeddings in these experiments.
Next, we compared the RAGElo outcome to the preference of our Infineon human
annotator. We performed two-tailed paired t-tests to compare RAG against RAGF on
each category from the Infineon representatives’ human evaluations with 𝛼 = .05. As
expected, due to its larger variety of retrieved results, RAGF significantly outperforms RAG in completeness at the 95% confidence level with 𝑝 ≈ 0.01. However, on the precision of answers, RAG significantly outperformed RAGF at the 95% confidence level with 𝑝 ≈ 0.04.
Table 5
RAG vs RAGF win percentage between pairwise comparisons of the agents' answers using GPT-4o as a judge with RAGElo. Each cell gives the row agent's win rate against the column agent.

Agent (retriever)   RAG/BM25  RAGF/BM25  RAG/KNN  RAGF/KNN  RAG/Hybrid  RAGF/Hybrid   AVG
RAG (BM25)              —       14.5%     49.5%    52.5%      29.0%       28.5%      34.8%
RAGF (BM25)           49.0%       —       58.5%    51.5%      53.5%       30.5%      48.6%
RAG (KNN)             33.0%     27.0%       —      20.0%      26.0%       31.0%      27.4%
RAGF (KNN)            34.5%     30.0%     37.0%      —        30.5%       32.0%      32.8%
RAG (Hybrid)          41.5%     21.0%     51.5%    48.0%        —         20.5%      36.0%
RAGF (Hybrid)         46.0%     35.0%     49.0%    45.5%      43.5%         —        44.3%
Table 6
Elo Ranking for all agents averaged over 500 tournaments.
Agent Retrieval Elo score
RAGF BM25 571.0
RAGF Hybrid 550.0
RAG Hybrid 497.0
RAG BM25 487.0
RAGF KNN 470.0
RAG KNN 436.0
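RAGElo's exact tournament mechanics are internal to the toolkit, but the underlying Elo update over pairwise game outcomes can be illustrated as follows. The initial rating and K-factor here are illustrative choices of ours, not RAGElo's settings.

```python
# Illustrative Elo update over a sequence of pairwise game results.
# Constants (initial rating 400, K-factor 32) are assumptions, not RAGElo's.
def play_elo_tournament(games, initial=400, k_factor=32):
    """games: list of (agent_a, agent_b, score_a) with score_a in {1, 0.5, 0}."""
    ratings = {}
    for a, b, score_a in games:
        ra = ratings.setdefault(a, initial)
        rb = ratings.setdefault(b, initial)
        # Expected score of A from the logistic Elo curve
        expected_a = 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))
        ratings[a] = ra + k_factor * (score_a - expected_a)
        ratings[b] = rb + k_factor * ((1.0 - score_a) - (1.0 - expected_a))
    return ratings
```

Because Elo updates depend on game order, averaging ratings over many tournaments with shuffled game sequences, as done for the 500 tournaments behind Table 6, stabilizes the final ranking.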
8. Discussion
As observed above, we found statistically significant, moderate positive correlations
between LLM ratings and human annotations. This indicates a consistent association
between the ratings from LLM-as-a-judge and those by Infineon experts. We find that
on average, LLM scores are slightly higher than those of human annotators. This means
that while relevance judgements on individual queries should not be treated as fully reliable, and
IR metrics derived from LLM-as-a-judge should not be equated with regular relevance
scores without further calibration, we can still make good use of this approach to rank-
order systems. These findings collectively support the validity of our LLM evaluation
method, which assesses conversational system outputs based on a combination of relevance,
accuracy, completeness, and recall.
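Moderate positive correlations between LLM-as-a-judge ratings and human annotations can be quantified with a rank correlation coefficient such as Kendall's tau-b, which handles the many ties that arise on a 0-2 rating scale. Below is a quadratic-time reference implementation for illustration, not the study's actual analysis code:

```python
from math import sqrt

def kendall_tau_b(x, y):
    """Kendall's tau-b between two paired rating lists, tie-aware.
    Assumes neither list is entirely tied (non-zero denominator)."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[i] - x[j]
            dy = y[i] - y[j]
            if dx == 0:
                ties_x += 1          # pair tied on the first variable
            if dy == 0:
                ties_y += 1          # pair tied on the second variable
            if dx != 0 and dy != 0:
                if dx * dy > 0:
                    concordant += 1
                else:
                    discordant += 1
    n0 = n * (n - 1) / 2
    return (concordant - discordant) / sqrt((n0 - ties_x) * (n0 - ties_y))
```

For production use, `scipy.stats.kendalltau` computes the same tau-b variant far more efficiently.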
The style of evaluation and the different dimensions it takes into account are specified
in the prompts given to the LLM in the RAGElo evaluation, which are provided in
Appendix A. Specifically, while the initial LLM-as-a-judge is given specific criteria
focusing on only four categories, we instructed RAGElo's impartial judge LLM to consider
more than those four:
Your evaluation should consider factors such as comprehensiveness, correct-
ness, helpfulness, completeness, accuracy, depth, and level of detail of their
responses.
Since RAGF significantly outperformed RAG in the completeness category, the RAGElo
judge LLM likely weighed completeness higher than precision. In addition, based
on manual observation of a small random sample of answers, RAGF produced more
comprehensive answers and featured higher depth and level of detail due to the multiple
query generation. However, games where RAG won were most likely influenced by a
significantly more precise answer than that of RAGF . While RAGF values comprehensive
answers that offer multiple perspectives to the user, RAG produces shorter answers
that answer the original query only. Since completeness is defined as the extent to
which a user’s question was answered, it can be presumed that RAGF ’s longer and more
comprehensive answers may tend to be more complete. And since precision relates to the
agent mentioning the correct product or product family, it can be presumed that RAGF ’s
longer answers have more room to consider other products or product families, leading
to reduced answer precision. While the human annotation was done by Infineon experts,
different humans may rate answers differently, even if following the same set of criteria.
A larger number of documents or a database of non-technical documents may have led
to a different outcome. RAGF can be applied not only to Infineon documents but to any
document database. This includes not only enterprise uses but also educational ones,
such as mathematics and language learning. The algorithm can be tuned to
different use cases by tweaking the internal LLM prompt. For example, the Infineon RAGF
bot was prompted to "think like an engineer." However, an educator RAGF bot could be
prompted to "think like a teacher." Future work includes exploring other applications of
RAGF , especially in education. In addition, we will experiment with different prompts for
both LLM-as-a-judge and RAGElo while using different quantities and types of documents
with the same retrieval algorithms.
Based on the calculated MRR@5 scores, we found that the RAGF agent mostly outper-
forms the RAG agent in ranking both highly relevant and somewhat relevant retrieved
documents. This suggests that searching over multiple query variants produced, on average,
slightly more highly ranked relevant documents than using only the original user query. We also
see that using vector search with embeddings is not a silver bullet, as for our test queries,
BM25 clearly outperforms it. Since retrieval quality is highly dependent on the quality
of the embeddings and their fit to the domain, this outcome would likely change with
fine-tuned embeddings and additional intelligent re-rankers, which we leave
for future work, as the evaluation framework would remain the same.
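MRR@5 rewards placing the first relevant document as high as possible among the top 5 results. A minimal sketch of the metric, with hypothetical relevance labels:

```python
def mrr_at_k(ranked_relevances, k=5):
    """Mean Reciprocal Rank@k: for each query, take the reciprocal rank of
    the first relevant document among the top k (0 if none), then average
    over all queries."""
    total = 0.0
    for labels in ranked_relevances:
        for rank, relevant in enumerate(labels[:k], start=1):
            if relevant:
                total += 1.0 / rank
                break
    return total / len(ranked_relevances)

# Three hypothetical queries: first relevant hit at ranks 2, 1, and never.
print(mrr_at_k([[0, 1, 0], [1, 0], [0, 0, 0]]))  # → 0.5
```

With graded labels, as in this study, the threshold for "relevant" (e.g., counting only "very relevant" documents) changes the scores, so it should be fixed before comparing agents.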
9. Conclusion
Overall, we found that the evaluation framework proposed by RAGElo positively aligns
with the preferences of human annotators for RAG and RAGF, though due caution is
required given the moderate correlation and the variability of the scoring. According to
the RAGElo evaluation, the RAGF approach leads to better answers most of the time.
According to expert scoring, RAGF significantly outperforms RAG in completeness
but significantly underperforms it in precision. Based on these results,
we cannot confidently assert that RAGF's approach leads to better answers generally.
However, the results do support that it leads to more complete answers
and a higher proportion of better answers under evaluation by RAGElo.
Since RAGElo is generally applicable to all retrieval-augmented algorithms, in future
work we also intend to test agents other than RAG and RAGF, including ones
with different reranking algorithms, embedding models, and LLMs. In
addition, given RAGF's underperformance in answer precision, we may also leverage
CRAG to reduce this gap. We will also investigate how human sensitivity is reflected in
expert ratings, especially whether LLMs should or can reflect such sensitivities.
Acknowledgments
We thank Brooks Felton from Infineon for his support during this work. We also thank
the Infineon sales team for providing valuable feedback.
A. RAGElo ’s prompts and configurations
A.1. Retrieval Evaluator
We used RAGElo's default ReasonerEvaluator, which has the following system prompt:
You are an expert document annotator. Your job is to evaluate
whether a document contains relevant information to answer a
user's question.
Please act as an impartial relevance annotator for a search
engine. Your goal is to evaluate the relevancy of the
documents given a user question.
You should write one sentence explaining why the document is
relevant or not for the user question. A document can be:
- Not relevant: The document is not on topic.
- Somewhat relevant: The document is on topic but does not
fully answer the user question.
- Very relevant: The document is on topic and answers the user's
question.

[user question]
{query}

[document content]
{document}
A.2. Answer evaluators
For the pointwise evaluator used in Section 5.1, we used the following prompt with
RAGElo ’s CustomPromptAnswerEvaluator :
You are an impartial judge for evaluating the quality of the
responses provided by an AI assistant tasked to answer users'
questions about the catalogue of IoT sensors produced by
Infineon.
You will be given the user's question and the answer produced
by the assistant.
The agent's answer was generated based on a set of documents
retrieved by a search engine.
You will be provided with the relevant documents retrieved by
the search engine.
Your task is to evaluate the answer's quality based on the
response's relevance, accuracy, and completeness.

## Rules for evaluating an answer:
- **Relevance**: Does the answer address the user's question?
- **Accuracy**: Is the answer factually correct, based on the
documents provided?
- **Completeness**: Does the answer provide all the information
needed to answer the user's question?
- **Precision**: If the user's question is about a specific
product, does the answer provide the answer for that
specific product?

## Steps to evaluate an answer:
1. **Understand the user's intent**: Explain in your own words
what the user's intent is, given the question.
2. **Check if the answer is correct**: Think step-by-step
whether the answer correctly answers the user's question.
3. **Evaluate the quality of the answer**: Evaluate the quality
of the answer based on its relevance, accuracy, and
completeness.
4. **Assign a score**: Produce a single-line JSON object with
the following keys, each with a single score between 0 and
2, where 2 is the highest score on that aspect:
  - "relevance"
    - 0: The answer is not relevant to the user's question.
    - 1: The answer is partially relevant to the user's question.
    - 2: The answer is fully relevant to the user's question.
  - "accuracy"
    - 0: The answer is factually incorrect.
    - 1: The answer is partially correct.
    - 2: The answer is fully correct.
  - "completeness"
    - 0: The answer does not provide enough information to
answer the user's question.
    - 1: The answer only answers some aspects of the user's
question.
    - 2: The answer fully answers the user's question.
  - "precision"
    - 0: The answer does not mention the same product or
product line as the user's question.
    - 1: The answer mentions a similar product or product line,
but not the same as the user's question.
    - 2: The answer mentions the exact same product or product
line as the user's question.

The last line of your answer must be a SINGLE LINE JSON object
with the keys "relevance", "accuracy", "completeness", and
"precision", each with a single score between 0 and 2.

[DOCUMENTS RETRIEVED]
{documents}

[User Query]
{query}

[Agent answer]
{answer}
For the pairwise evaluation between agents used for the results in Tables 5 and 6, we
used RAGElo ’s PairwiseAnswerEvaluator with the following parameters:
pairwise_evaluator_config = PairwiseEvaluatorConfig(
    n_games_per_query=15,
    has_citations=False,
    include_raw_documents=True,
    include_annotations=True,
    document_relevance_threshold=2,
    factors="the comprehensiveness, correctness, helpfulness, "
            "completeness, accuracy, depth, and level of detail of "
            "their responses. Answers are comprehensive if they show "
            "the user multiple perspectives in addition to but still "
            "relevant to the intent of the original question.",
)
This generates 15 random games between two agents per query (i.e., all possible unique
games for 6 agents) and tells the evaluator that:
• The answers do not include specific citations to any passage (has_citations=False )
• Include the full text of the retrieved passages in the evaluation prompt
(include_raw_documents=True )
• Inject the output of the retrieval evaluator into the prompt
(include_annotations=True )
• Ignore any passage with a relevance score below 2
(document_relevance_threshold=2 )
• Consider these factors when selecting the best answer
(factors=…)
These parameters produce the following final prompt used for evaluating the answers:
Please act as an impartial judge and evaluate the quality of
the responses provided by two AI assistants tasked to answer
the question below based on a set of documents retrieved by
a search engine.
You should choose the assistant that best answers the user
question based on a set of reference documents that may or
may not be relevant.
For each reference document, you will be provided with the text
of the document as well as reasons why the document is or
is not relevant.
Your evaluation should consider factors such as
comprehensiveness, correctness, helpfulness, completeness,
accuracy, depth, and level of detail of their responses.
Answers are comprehensive if they show the user multiple
perspectives in addition to but still relevant to the intent
of the original question.
Details are only useful if they answer the user's question. If
an answer contains non-relevant details, it should not be
preferred over one that only uses relevant information.
Begin your evaluation by explaining why each answer correctly
answers the user's question. Then, you should compare the
two responses and provide a short explanation of their
differences. Avoid any position biases and ensure that the
order in which the responses were presented does not
influence your decision. Do not allow the length of the
responses to influence your evaluation. Be as objective as
possible.
After providing your explanation, output your final verdict by
strictly following this format: "[[A]]" if assistant A is
better, "[[B]]" if assistant B is better, and "[[C]]" for a
tie.

[User Question]
{query}

[Reference Documents]
{documents}

[The Start of Assistant A's Answer]
{answer_a}
[The End of Assistant A's Answer]

[The Start of Assistant B's Answer]
{answer_b}
[The End of Assistant B's Answer]