<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">PersonaRAG: Enhancing Retrieval-Augmented Generation Systems with User-Centric Agents</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Saber</forename><surname>Zerhoudi</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Passau</orgName>
								<address>
									<settlement>Passau</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michael</forename><surname>Granitzer</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Passau</orgName>
								<address>
									<settlement>Passau</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">PersonaRAG: Enhancing Retrieval-Augmented Generation Systems with User-Centric Agents</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">040C946D53B3E4A76E1CD8327A882E91</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T17:09+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>User interactions</term>
					<term>Retrieval-Augmented Generation (RAG)</term>
					<term>Personalized Information Retrieval</term>
					<term>Multi-Agent RAG</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Large Language Models (LLMs) struggle with generating reliable outputs due to outdated knowledge and hallucinations. Retrieval-Augmented Generation (RAG) models address this by enhancing LLMs with external knowledge, but often fail to personalize the retrieval process. This paper introduces PersonaRAG, a novel framework incorporating user-centric agents to adapt retrieval and generation based on real-time user data and interactions. Evaluated across various question answering datasets, PersonaRAG demonstrates superiority over baseline models, providing tailored answers to user needs. The results suggest promising directions for user-adapted information retrieval systems. Findings and resources are available at https://github.com/padas-lab-de/ir-rag-sigir24-persona-rag.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Large Language Models (LLMs) such as GPT-4 <ref type="bibr" target="#b1">[2]</ref> and LLaMA 3 <ref type="bibr" target="#b2">[3]</ref> have significantly advanced the field of natural language processing (NLP) by demonstrating impressive performance across various tasks and exhibiting emergent abilities that push the boundaries of artificial intelligence <ref type="bibr" target="#b3">[4]</ref>. However, these models face challenges such as generating unreliable outputs due to issues like hallucination and outdated parametric memories <ref type="bibr" target="#b4">[5]</ref>.</p><p>Retrieval-Augmented Generation (RAG) models have shown promise in addressing these issues by integrating externally retrieved information to support more effective performance on complex, knowledge-intensive tasks <ref type="bibr" target="#b5">[6]</ref>. Despite these advancements, the deployment of RAG systems within broader AI frameworks continues to face significant challenges, particularly in handling noise and irrelevance in retrieved data <ref type="bibr" target="#b6">[7]</ref>.</p><p>A key limitation of existing RAG systems is their inability to adapt outputs to users' specific informational and contextual needs. Personalized techniques in information retrieval, such as adaptive retrieval based on user interaction data and context-aware strategies, are increasingly recognized as essential for enhancing user interaction and satisfaction <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. These methods aim to refine the retrieval process dynamically, tailoring it more closely to individual user profiles and situational contexts <ref type="bibr" target="#b9">[10]</ref>.</p><p>The integration of agent-based systems with personalized RAG architectures presents a compelling avenue for research. 
Such systems utilize a multi-agent framework to simulate complex, adaptive interactions tailored to user-specific requirements <ref type="bibr" target="#b10">[11]</ref>. By embedding intelligent, user-oriented agents within the RAG framework, these systems can evolve into more sophisticated tools that not only retrieve relevant information but also align it closely with the user's specific preferences and contexts in real-time. Importantly, the personalization strategy employed in these systems is fully transparent to the user, ensuring that the user is aware of how their information is being used to tailor the results.</p><p>Information Retrieval's Role in RAG Systems (IR-RAG) workshop at SIGIR 2024, Washington D.C., USA. Contact: saber.zerhoudi@uni-passau.de (S. Zerhoudi); michael.granitzer@uni-passau.de (M. Granitzer). ORCID: 0000-0003-2259-0462 (S. Zerhoudi); 0000-0003-3566-5507 (M. Granitzer).</p><p>In this study, we present PersonaRAG, an innovative methodology that extends traditional RAG frameworks by incorporating user-centric agents into the retrieval process. This approach addresses the previously mentioned limitations by promoting active engagement with retrieved content and utilizing dynamic, real-time user data to continuously refine and personalize interactions. PersonaRAG aims to enhance the precision and relevance of LLM outputs, adapting dynamically to user-specific needs while maintaining full transparency regarding the personalization process.</p><p>Our experiments develop the PersonaRAG model using GPT-3.5 and evaluate its performance across various question answering datasets. The results indicate that PersonaRAG achieves an improvement of over 5% in accuracy compared to baseline models. Furthermore, PersonaRAG demonstrates an ability to adapt responses based on user profiles and information needs, enhancing the personalization of results. 
Additional analysis shows that the principles underlying PersonaRAG can be generalized to different LLM architectures, such as Llama 3 70b and Mixture of Experts (MoE) 8x7b <ref type="bibr" target="#b11">[12]</ref>. These architectures benefit from the integration of external knowledge facilitated by PersonaRAG, with improvements exceeding 10% in some cases. This evidence indicates that PersonaRAG not only contributes to the progress of RAG systems but also provides notable advantages for various LLM applications, signifying a meaningful step forward in the development of more intelligent and user-adapted information retrieval systems.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Retrieved passages frequently contain noisy or irrelevant content; [13] addressed this issue by employing natural language inference models to select pertinent sentences, thereby enhancing the RAG's robustness. Additionally, advancements have been made in adaptively retrieving information, with systems like those proposed by Jiang et al. <ref type="bibr" target="#b13">[14]</ref> dynamically fetching passages that are most likely to improve generation accuracy.</p><p>Despite these improvements, RAG systems still face limitations, particularly in adapting their output to the user's specific profile, such as their information needs or intellectual knowledge. This limitation stems from the current design of most RAG systems, which do not typically incorporate user context or personalized information retrieval strategies <ref type="bibr" target="#b14">[15]</ref>. Consequently, there exists a gap between the general effectiveness of RAG systems and their applicability in personalized user experiences, where context and individual user preferences play a crucial role.</p><p>Personalization in information retrieval is increasingly recognized as essential for enhancing user interaction and satisfaction <ref type="bibr" target="#b15">[16]</ref>. Techniques such as user profiling, context-aware retrieval, and adaptive feedback mechanisms are commonly employed to tailor search results to individual users' needs. 
For instance, Jeong et al. <ref type="bibr" target="#b16">[17]</ref> proposed adaptive retrieval strategies that dynamically adjust the retrieval process based on the complexity of the query and the user's historical interaction data. These personalized approaches not only improve user satisfaction but also increase the efficiency of information retrieval by reducing the time users spend sifting through irrelevant information.</p><p>The integration of personalized techniques with agent-based systems provides a promising pathway to augment the capabilities of RAG systems. Agent-based systems, particularly in the form of LLM-Based Multi-Agent Frameworks <ref type="bibr" target="#b17">[18]</ref>, enable the simulation of complex interactions that can lead to more nuanced and contextually appropriate outputs. By incorporating multi-agent systems into RAG frameworks, there is potential for developing more robust and adaptive retrieval mechanisms that can handle a broader range of queries and generate more accurate responses, closely tailored to the specific needs and contexts of individual users.</p><p>In conclusion, while significant progress has been made in enhancing the effectiveness and personalization of RAG systems, ongoing research is crucial to address their existing limitations and expand their applications. The integration of personalized information retrieval and agent-based enhancements represents a promising avenue for further enhancing the adaptability and accuracy of RAG systems, potentially leading to intelligent information retrieval tailored to the specific needs of users.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology</head><p>In this section, we present the methodology underlying our PersonaRAG approach, which aims to enhance the ability of Large Language Models (LLMs) to actively engage with, understand, and leverage user profile information for personalized content generation. We begin by discussing the fundamental concepts of Retrieval-Augmented Generation (RAG) models (Section 3.1) and then introduce our PersonaRAG technique, which encourages LLMs to actively assimilate knowledge from live search sessions (Section 3.2).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Fundamentals of Retrieval-Augmented Generation (RAG) Models</head><p>State-of-the-art RAG models, as described in previous studies <ref type="bibr" target="#b18">[19,</ref><ref type="bibr" target="#b19">20,</ref><ref type="bibr" target="#b20">21]</ref>, employ retrieval systems to identify a set of passages 𝐷 = {𝑑₁, …, 𝑑ₙ} when given a query 𝑞. These passages are intended to enhance the generative capabilities of LLMs by providing them with contextually relevant information.</p><p>Early versions of RAG models typically employ a traditional retrieval-generation framework, in which the retrieved data set 𝐷 = {𝑑₁, …, 𝑑ₙ} is directly fed into LLMs to generate responses to the query 𝑞. However, these passages often contain irrelevant information, and the direct utilization approach in RAG has been shown to restrict the potential benefits of the RAG framework <ref type="bibr" target="#b21">[22]</ref>. This limitation has sparked further discussion on how to improve LLMs by integrating retrieval results and outputs generated by the models themselves <ref type="bibr" target="#b22">[23]</ref>.</p></div>
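The retrieve-then-generate loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the term-overlap scorer is a stand-in for a real retriever such as BM25, and the prompt format is an assumption.

```python
import re

def terms(text: str) -> set:
    """Lowercased word tokens, used as a crude bag-of-words representation."""
    return set(re.findall(r"\w+", text.lower()))

def retrieve(query: str, corpus: list[str], n: int = 3) -> list[str]:
    """Return the top-n passages D = {d_1, ..., d_n} by term overlap with q."""
    q = terms(query)
    return sorted(corpus, key=lambda d: -len(q & terms(d)))[:n]

def build_prompt(query: str, passages: list[str]) -> str:
    """Feed the retrieved set D directly to the LLM alongside the query q."""
    context = "\n".join(f"Passage {i + 1}: {p}" for i, p in enumerate(passages))
    return f"{context}\nQuestion: {query}\nAnswer:"

corpus = [
    "Passau is a city in Lower Bavaria, Germany.",
    "The Danube flows through Passau.",
    "BM25 is a sparse term-based retrieval model.",
]
top = retrieve("Where is Passau located?", corpus, n=2)
prompt = build_prompt("Where is Passau located?", top)
```

As the section notes, feeding 𝐷 to the LLM unfiltered is exactly the step where irrelevant passages leak into generation, which motivates the user-centric agents introduced next.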
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">PersonaRAG: RAG with User-Centric Agents</head><p>Drawing from the principles of adaptive learning and user-centered design, we develop a new PersonaRAG architecture to enable IR systems to dynamically learn from and adapt to user behavior in real-time. As shown in Figure <ref type="figure" target="#fig_1">2</ref>, PersonaRAG introduces a three-step pipeline: retrieval, user interaction analysis, and cognitive dynamic adaptation. Unlike traditional IR models that statically respond to queries, PersonaRAG focuses on leveraging live user data to continually refine its understanding and responses without the need for manual retraining.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.1.">User Interaction Analysis</head><p>To understand user behavior from live interactions, PersonaRAG treats the IR system as a cognitive structure capable of receiving, interpreting, and acting upon user feedback <ref type="bibr" target="#b23">[24]</ref>. Mimicking human learning behaviors, we establish four distinct agents within the system dedicated to analyzing user interactions from different perspectives: engagement tracking, preference analysis, context understanding, and feedback integration. These agents' roles are detailed in Section 3.2.2.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.2.">Cognitive Dynamic Adaptation</head><p>Following adaptive learning principles, we employ a dynamic adaptation mechanism to assist the IR system in utilizing real-time user data for continuous improvement. This mechanism facilitates the integration of insights gained from User Interaction Analysis into the system's retrieval processes. Specifically, we prompt the system to adjust its query responses based on an initial understanding of the user's needs and refine these responses as more user data becomes available. This approach not only personalizes the search results but also helps in correcting any misalignments or errors in real-time. PersonaRAG employs a highly specialized agent architecture, with each agent focusing on a specific aspect of the information retrieval process. All agents utilize in-context learning, i.e., prompting, to perform their designated tasks. This role specialization allows for the efficient decomposition of complex user queries into manageable tasks <ref type="bibr" target="#b24">[25]</ref>.</p><p>To foster this, we engage the IR system as five specialized agents to analyze user interactions based on retrieved data. At present, the focus is on the functionality and interaction of these agents rather than their individual performance metrics.</p><p>User Profile Agent This component manages and updates user profile data, incorporating historical user interactions and preferences <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27]</ref>. It monitors how users interact with search results, such as click-through rates and navigation paths. The User Profile Agent helps the system understand what captures user interest and leads to deeper engagement, enabling personalized search experiences.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Contextual Retrieval Agent</head><p>This agent is responsible for the initial retrieval of documents based on the user's current query. It accesses both a traditional search index and a more dynamic context-aware system that can consider broader aspects of the query environment. It utilizes user profile data to modify and refine search queries or to prioritize search results. For instance, if a user consistently engages more with certain types of documents or topics, the retrieval agent can boost those document types in the search results, ensuring that the most relevant information is presented to the user.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Live Session Agent</head><p>This agent analyzes the current session in real-time, observing user actions such as clicks, time spent on documents, modifications to the query, and any feedback provided. It creates a session-specific context model that captures the user's immediate needs and interests. The real-time data collected by this agent is used to adjust the ongoing session, potentially re-ranking search results or suggesting new queries based on the user's behavior and preferences. Additionally, the Live Session Agent updates the user profile with new insights gleaned from the session, allowing for a more personalized and efficient search experience in future interactions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Document Ranking Agent</head><p>This agent is responsible for re-ranking the documents retrieved by the Contextual Retrieval Agent. It integrates insights from both the User Profile Agent and the Live Session Agent to score and order the documents more effectively. By considering the user's historical preferences and their current session behavior, the Document Ranking Agent ensures that the most relevant and valuable documents are presented to the user in a prioritized manner. This agent continuously adapts its ranking algorithms based on the feedback received from the user and the insights provided by the other agents in the system.</p><p>Feedback Agent This agent gathers implicit and explicit feedback during and after user interactions. Implicit feedback includes behavioral data like time spent on documents, click counts, and navigation patterns. Explicit feedback involves direct user input on document relevance and quality, collected through ratings, surveys, or comments. The agent uses this information to train and refine models for other agents, particularly the Document Ranking Agent. This process enhances the system's ability to anticipate user needs and deliver relevant documents based on accumulated feedback and insights.</p><p>By dynamically integrating insights from the User Profile Agent, Contextual Retrieval Agent, Live Session Agent, Document Ranking Agent, and Feedback Agent into the IR processes, PersonaRAG not only adapts to immediate user needs but also evolves over time to better anticipate and meet user expectations. This multi-agent approach enables PersonaRAG to embody a truly adaptive and user-focused information retrieval system, leveraging specialized agents to analyze user interactions from different behavioral perspectives and deliver highly personalized and contextually relevant search experiences. 
The inclusion of the Document Ranking Agent ensures that the most pertinent documents are identified and presented to users, further enhancing the system's ability to effectively satisfy user information needs.</p></div>
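The Document Ranking Agent's combination of historical and session signals can be illustrated with a small sketch. The field names, topic-based matching, and weights below are assumptions for illustration; the paper does not specify the scoring function.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    topic: str
    base_score: float  # relevance score from the Contextual Retrieval Agent

@dataclass
class UserProfile:
    preferred_topics: set = field(default_factory=set)  # from historical interactions

@dataclass
class LiveSession:
    dwelled_topics: set = field(default_factory=set)  # topics dwelled on this session

def rerank(docs, profile, session, w_profile=0.3, w_session=0.5):
    """Boost documents matching the user's profile and current session, then sort."""
    def score(d):
        s = d.base_score
        if d.topic in profile.preferred_topics:
            s += w_profile  # User Profile Agent signal
        if d.topic in session.dwelled_topics:
            s += w_session  # Live Session Agent signal
        return s
    return sorted(docs, key=score, reverse=True)

docs = [Document("d1", "sports", 0.9), Document("d2", "science", 0.7)]
ranked = rerank(docs, UserProfile({"science"}), LiveSession({"science"}))
```

Here the lower-scored science document overtakes the sports one once both personalization signals agree, mirroring how the agent re-orders the Contextual Retrieval Agent's output.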
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">PersonaRAG Operational Workflow</head><p>The PersonaRAG framework employs a structured workflow that allows for sequential and parallel processing of tasks, ensuring clarity and consistency in communication between agents through well-defined data structures and protocols <ref type="bibr" target="#b27">[28]</ref>. The process involves the User Profile Agent, Contextual Retrieval Agent, Live Session Agent, Document Ranking Agent, and Feedback Agent working together to refine search queries, prioritize relevant results, and improve document scoring and re-ranking based on user profile, session-specific contexts, and feedback.</p><p>PersonaRAG's modular design allows for flexibility in the system setup, enabling researchers to focus on the most relevant aspects of the user's profile, session, and feedback data. Agents work collaboratively by utilizing content from the Global Message Pool, which serves as a central hub for inter-agent communication <ref type="bibr" target="#b27">[28]</ref>, eliminating inefficiencies and enabling agents to access or update information as required.</p><p>The Feedback Agent collects and analyzes implicit and explicit user feedback to generate insights into the effectiveness of retrieval strategies and document relevance. This feedback is used to make dynamic adjustments to the system, refining retrieval methods and altering the weighting of user profile factors. Through this iterative process, PersonaRAG continuously adapts and improves its performance, enhancing the accuracy and user satisfaction of the retrieval results <ref type="bibr" target="#b28">[29]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental Setups</head><p>In this section, we present the experimental setup employed in our study, including the datasets, baseline models, evaluation metrics, and implementation details. We also provide an overview of the prompts used in our experiments.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Datasets</head><p>Our experiments are conducted on three widely used single-hop benchmark datasets in the field of Information Retrieval (IR): NaturalQuestions (NQ) <ref type="bibr" target="#b29">[30]</ref>, TriviaQA <ref type="bibr" target="#b30">[31]</ref>, and WebQuestions (WebQ) <ref type="bibr" target="#b31">[32]</ref>. NQ is a well-known dataset in Natural Language Understanding (NLU), consisting of structured questions and corresponding Wikipedia pages annotated with long and short answers. TriviaQA comprises question-answer pairs collected from trivia and quiz-league websites, while WebQ consists of questions selected using the Google Suggest API, with answers being entities in Freebase.</p><p>Table <ref type="table" target="#tab_0">1</ref> summarizes the datasets used in our initial study. Due to the high cost of using language models and the large number of API calls required, we randomly sampled 500 questions from each raw dataset to create more manageable subsets for our experiments. While this sampling approach limits the scope of our study, it allows us to conduct an initial investigation into the performance of different RAG systems on these datasets. We acknowledge that future work with larger sample sizes and more comprehensive experiments will be necessary to draw definitive conclusions. Nonetheless, we believe this preliminary study provides valuable insights into the relative strengths and weaknesses of the tested RAG approaches.</p></div>
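The 500-question subsampling step might look like the following; the fixed seed is an assumption added for reproducibility, as the paper states only that sampling was random.

```python
import random

def subsample(dataset, k=500, seed=42):
    """Draw k questions without replacement; seed is fixed so the same
    subset is reused across all compared models (an assumption here)."""
    if len(dataset) <= k:
        return list(dataset)
    return random.Random(seed).sample(dataset, k)

questions = [f"q{i}" for i in range(3610)]  # stand-in for a raw dataset
subset = subsample(questions)
```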
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Models</head><p>We compare PersonaRAG with several baseline models, including prompt learning and RAG models. The prompt templates used in user interaction analysis and dynamic adaptation are presented in Section 4.5. VanillaGPT serves as the vanilla answer generation model. Following the work of Wei et al. <ref type="bibr" target="#b32">[33]</ref>, the Chain-of-Thought model is implemented, which generates a rationale for the question before producing the final answer. Additionally, the Guideline model serves as a baseline, generating problem-solving steps and guiding Language Models (LLMs) to generate the answer.</p><p>For the RAG-based baselines, vanilla RAG and Chain-of-Thought variants are implemented; the latter either utilize the raw retrieved passages (CoT with Passage) or refine the passages into notes (CoT with Note). The vanilla RAG model directly feeds the top-ranked passages to the LLM. The Chain-of-Note model <ref type="bibr" target="#b0">[1]</ref> is also implemented, which refines and summarizes the retrieved passages for generation. Inspired by Self-RAG of Asai et al. <ref type="bibr" target="#b33">[34]</ref>, the Self-Rerank model is implemented, which filters out unrelated content without fine-tuning LLMs.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Evaluation Metrics</head><p>When evaluating adaptive models, it is crucial to consider both task performance and user-centric adaptability simultaneously, along with their trade-offs. Therefore, the results are reported using different metrics, some of which measure effectiveness and others measure efficiency.</p><p>For effectiveness, accuracy is used, following the standard evaluation protocol in the field of Information Retrieval (IR) <ref type="bibr" target="#b34">[35,</ref><ref type="bibr" target="#b35">36,</ref><ref type="bibr" target="#b33">34]</ref>. Accuracy assesses whether the predicted answer contains the ground-truth answer. Both the outputs of the Large Language Model (LLM) and golden answers are converted to lowercase, and string matching (StringEM) is performed between each golden answer and the model prediction to calculate accuracy.</p><p>To evaluate user-centric adaptability, the BLEU-2 score is measured to assess the text similarity between different RAG and baseline setups and how well the generated answers resemble each other. This metric provides insights into the system's ability to generate consistent and coherent responses across various configurations. Additionally, the average sentence length and the average number of syllables of the answers from different RAG setups are reported as a post-hoc analysis. These measures validate whether the RAG system effectively adjusts its responses based on user knowledge levels, ensuring that the generated answers are tailored to the user's understanding and expertise.</p><p>Combining these evaluation strategies provides a comprehensive view of both the effectiveness and user-centric adaptability of the RAG system. The accuracy metric ensures that the system generates correct answers, while the BLEU-2 score and post-hoc analysis of sentence length and syllable count confirm the system's ability to adapt to user knowledge levels. 
As the understanding of user needs and system capabilities evolves, it is essential to continuously refine these metrics to maintain the RAG system's effectiveness in delivering personalized, context-aware responses that cater to the diverse requirements of users in the field of IR.</p></div>
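The StringEM accuracy metric described above reduces to a lowercase substring check per golden answer, which can be sketched directly:

```python
def string_em(prediction: str, golden_answers: list[str]) -> bool:
    """True if any golden answer appears, case-insensitively, in the prediction."""
    pred = prediction.lower()
    return any(gold.lower() in pred for gold in golden_answers)

def accuracy(predictions: list[str], golden_lists: list[list[str]]) -> float:
    """Fraction of predictions that contain one of their golden answers."""
    hits = sum(string_em(p, g) for p, g in zip(predictions, golden_lists))
    return hits / len(predictions)

preds = ["The capital of France is Paris.", "It was in 1989."]
golds = [["Paris"], ["1990"]]
acc = accuracy(preds, golds)  # first prediction hits, second misses
```

Note that substring matching is deliberately lenient: it credits verbose but correct answers, which suits systems like PersonaRAG whose responses are rephrased per user profile rather than emitted as bare spans.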
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Implementation Details</head><p>For a fair comparison and following the work of Mallen et al. <ref type="bibr" target="#b34">[35]</ref> and Trivedi et al. <ref type="bibr" target="#b36">[37]</ref>, the same retriever, a term-based sparse retrieval model known as BM25 <ref type="bibr" target="#b37">[38]</ref>, is used across all different models. The retrieval model is implemented using the OpenMatch toolkit <ref type="bibr" target="#b38">[39]</ref>. For the external document corpus, the KILT-Wikipedia corpus preprocessed by Petroni et al. <ref type="bibr" target="#b39">[40]</ref> is used, and the top-k relevant documents are retrieved.</p><p>Regarding the LLMs used to generate answers, the Llama 3 70b Instruct model (ref), the Mixture of Experts (MoE) 8x7b model (ref), and the GPT-3.5 model (gpt-3.5-turbo-0125) are employed. For the retrieval-augmented LLM design, the implementation details from Trivedi et al. <ref type="bibr" target="#b36">[37]</ref> are followed, which include input prompts, instructions, and the number of test samples for evaluation (e.g., 500 samples per dataset).</p></div>
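For reference, the BM25 scoring used by the shared retriever can be written out compactly. This is a self-contained sketch, not the OpenMatch implementation: k1 and b use common defaults, the tokenizer is a simplification, and the idf variant shown is the smoothed Okapi form.

```python
import math
import re

def tokenize(text: str) -> list[str]:
    return re.findall(r"\w+", text.lower())

def bm25_scores(query: str, docs: list[str], k1: float = 1.5, b: float = 0.75):
    """Score each document against the query with Okapi BM25."""
    toks = [tokenize(d) for d in docs]
    N = len(docs)
    avgdl = sum(len(t) for t in toks) / N  # average document length
    scores = []
    for dt in toks:
        s = 0.0
        for term in tokenize(query):
            tf = dt.count(term)
            df = sum(1 for other in toks if term in other)
            idf = math.log((N - df + 0.5) / (df + 0.5) + 1)  # smoothed, non-negative
            s += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(dt) / avgdl))
        scores.append(s)
    return scores

docs = [
    "the danube flows through passau",
    "bm25 ranks documents by term frequency",
    "passau lies at the confluence of three rivers",
]
scores = bm25_scores("passau rivers", docs)
best = max(range(len(docs)), key=scores.__getitem__)
```

The document-length normalization term (controlled by b) is what keeps long documents from accumulating score on term frequency alone, which matters when retrieving from a corpus as heterogeneous as KILT-Wikipedia.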
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.">Prompts Used in PersonaRAG</head><p>This subsection presents the prompt templates employed in the construction of the PersonaRAG model. The prompts utilized in the User Interaction Analysis and Cognitive Dynamic Adaptation components are detailed below. The prompt templates used by the baseline models are available in the project repository<ref type="foot" target="#foot_0">1</ref>. In the templates, {question} represents the input question, {global_memory} denotes the Global Message Pool, and {passages} denotes the retrieved passages. Additionally, {cot_answer} is populated with the output generated by the Chain-of-Thought model.</p><p>The placeholder {user_profile_answer} is filled with the response produced by the User Profile agent model. Similarly, {contextual_answer} corresponds to the Contextual Retrieval agent model, {live_session_answer} to the Live Session agent model, {document_ranking_answer} to the Document Ranking agent model, and {feedback_answer} to the Feedback agent model.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.1.">Prompts Used in User Interaction Analysis</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>User Profile Agent</head><p>Your task is to help the User Profile Agent improve its understanding of user preferences based on ranked document lists and the shared global memory pool.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Question: {question} Passages: {passages} Global Memory: {global_memory}</head><p>Task Description: From the provided passages and global memory pool, analyze clues about the user's search preferences. Look for themes, types of documents, and navigation behaviors that reveal user interest. Use these insights to recommend how the User Profile Agent can refine and expand the user profile to deliver better-personalized results. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Contextual Retrieval Agent</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Global Message Pool</head><p>You are responsible for maintaining and enriching the Global Message Pool, serving as a central hub for inter-agent communication.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Question: {question} Agent Responses: {agent_responses} Existing Global Memory: {global_memory}</head><p>Task Description: Using the responses from individual agents and the existing global memory, consolidate key insights into a shared repository. Your goal is to organize a comprehensive message pool that includes agent-specific findings, historical user preferences, sessionspecific behaviors, search queries, and user feedback. This structure should provide all agents with meaningful data points and strategic recommendations, reducing redundant communication and improving the system's overall efficiency.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.5.2.">Prompts Used in Cognitive Dynamic Adaptation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Chain-of-Thought</head><p>To solve the problem, please think and reason step by step, then answer.</p><p>Question: {question} Passages: {passages} Reasoning process: 1. Read the given question and passages to gather relevant information. 2. Write reading notes summarizing the key points from these passages. 3. Discuss the relevance of the given question and passages. 4. If some passages are relevant to the given question, provide a brief answer based on the passages. 5. If no passage is relevant, directly provide the answer without considering the passages.</p><p>Answer:</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Cognitive Agent</head><p>Your task is to help the Cognitive Agent enhance its understanding of user insights to continuously improve the system's responses. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Experimental Results and Analyses</head><p>In this section, we show the overall experimental results and offer in-depth analyses of our method.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Main Results</head><p>Table <ref type="table" target="#tab_2">2</ref> summarizes the primary findings for PersonaRAG across various single-hop question answering datasets. The approach was evaluated against multiple baseline models, including large language models (LLMs) without retrieval-augmented generation (RAG), the conventional RAG model, and self-refined variants, such as utilizing raw retrieved passages (CoT with Passage) or refining passages into notes (CoT with Note). PersonaRAG demonstrated superior performance compared to most of the baseline models, achieving significant improvements over the conventional RAG (i.e., vanillaRAG) of over 10%, particularly on the WebQ dataset. It also consistently outperformed the ChatGPT-3.5 model, except on TriviaQA, which we suspect is part of the model's training dataset. These results suggest PersonaRAG's capability to guide LLMs in extracting relevant information through active learning techniques.</p><p>Specifically, the performance of RAG models was assessed using the top 3 and 5 ranked passages. While other RAG models generally benefited from more passages, PersonaRAG maintained consistent performance with either 3 or 5 passages, suggesting that 3 passages were adequate for generating accurate answers. PersonaRAG agents played a crucial role in efficiently extracting the necessary information regarding the user's information need to achieve these improvements.</p><p>Furthermore, on the WebQ dataset, PersonaRAG achieved accuracy scores of 63.46% and 67.50% using Top-3 and Top-5 passages, respectively, surpassing the vanillaRAG model by 25% and 17.36%, and nearly all other baseline models (except for Chain-of-Thought using Top-5, which performed equally). On the NQ dataset, PersonaRAG maintained similarly robust performance with scores of 49.02% and 48.78%, outperforming all baselines (except for Chain-of-Thought and Self-Rerank (SR) using Top-5). 
This pattern was further validated by experiments on other datasets, with results showing that PersonaRAG consistently outperforms conventional RAG models with the capability of providing an answer tailored to the user's interaction and information need. The comprehensive understanding it provides contributes to the generation of accurate and user-centric answers across various question complexities.</p></div>
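The accuracy protocol described above can be sketched as follows. This is our reading of the standard top-k evaluation for NQ/TriviaQA/WebQ, not the authors' released code; `stub_rag` is a hypothetical stand-in for the PersonaRAG generator.

```python
# Exact-match accuracy over top-k RAG outputs (sketch, not the paper's code).
import re
import string

def normalize(text: str) -> str:
    """Lowercase, strip punctuation and articles, collapse whitespace
    (SQuAD-style answer normalization)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in set(string.punctuation))
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match_accuracy(predictions, gold_answers):
    """Fraction of predictions matching any acceptable gold answer."""
    hits = sum(
        any(normalize(p) == normalize(g) for g in golds)
        for p, golds in zip(predictions, gold_answers)
    )
    return hits / len(predictions)

def stub_rag(question, passages, k=3):
    # A real system would prompt an LLM with the top-k passages here.
    return "Vincenzo Peruggia"

preds = [stub_rag("Who stole the Mona Lisa in 1911?", [], k=3)]
golds = [["Vincenzo Peruggia"]]
print(exact_match_accuracy(preds, golds))  # 1.0
```

Normalization matters here: without it, trivial differences in casing or articles would be scored as errors.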
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Comparative Analysis of RAG Configurations</head><p>Further experiments explored PersonaRAG's adaptive capabilities (Figure <ref type="figure" target="#fig_3">3</ref>). BLEU-2 scores compared outputs from Chain-of-Note (consistently best outside PersonaRAG) with other methods. PersonaRAG showed higher similarity scores, indicating its ability to generate responses that address user needs rather than just summarizing input. Additionally, PersonaRAG provides personalized answers tailored to user profiles, extending beyond mere information provision.</p><p>The Chain-of-Note approach demonstrated comparable performance to the Chain-of-Thought approach, implying that both techniques effectively extract pertinent information from the retrieved passages and adapt it to align with the user's information need.</p><p>In contrast, vanillaGPT and vanillaRAG outputs differed significantly from the Chain-of-Note approach, indicating that counterfactual cognition often leads to diverse outcomes rather than focusing solely on query-relevant content. This suggests LLMs can construct knowledge from multiple perspectives and customize responses based on user understanding.</p><p>Post-hoc analyses of average sentence length and syllable count across RAG configurations provided insights into the system's ability to adapt responses to user comprehension levels. These observations highlight PersonaRAG's capacity to synthesize knowledge from various perspectives and tailor responses to different levels of user expertise.   </p></div>
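The metrics behind Figure 3 can be reproduced in miniature. The sketch below is a plain single-reference BLEU-2 (brevity penalty times the geometric mean of clipped unigram/bigram precision) plus crude readability proxies; the vowel-group syllable counter is a heuristic of ours, not the paper's exact tooling.

```python
# Minimal BLEU-2 and readability proxies (sketch; not the authors' scripts).
import math
import re
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu2(candidate: str, reference: str) -> float:
    """Brevity penalty times geometric mean of clipped 1-/2-gram precision."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in (1, 2):
        c_counts, r_counts = Counter(ngrams(cand, n)), Counter(ngrams(ref, n))
        overlap = sum(min(c, r_counts[g]) for g, c in c_counts.items())
        precisions.append(overlap / max(1, len(cand) - n + 1))
    if min(precisions) == 0:
        return 0.0
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / len(cand))
    return bp * math.exp(sum(math.log(p) for p in precisions) / 2)

def avg_sentence_length(text: str) -> float:
    """Mean words per sentence, splitting on terminal punctuation."""
    sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return sum(len(s.split()) for s in sents) / len(sents)

def syllables(word: str) -> int:
    """Crude syllable estimate: contiguous vowel groups (minimum 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))
```

Higher BLEU-2 against the Chain-of-Note outputs indicates lexical overlap with the strongest baseline, while sentence length and syllable count proxy how far a response is adapted to a reader's level.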
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.3.">Analysis on Generalization Ability</head><p>This experiment evaluates the quality of knowledge construction using different large language models (LLMs). As illustrated in Table <ref type="table">3</ref>, the PersonaRAG outcomes are used to prompt open-source LLMs, specifically LLaMA3-70B and MoE-8x7b, to generate accurate answers. Compared to LLMs without retrieval-augmented generation (w/o RAG), vanilla RAG and Chain-of-Note often exhibit lower performance. This result suggests that retrieved passages can act as noise, adversely affecting model performance even after refinement through note generation. One primary reason for this behavior is that both LLaMA3-70B and MoE-8x7b struggle to efficiently analyze and identify relevant knowledge due to limitations in their processing capacities.</p><p>In contrast, the PersonaRAG method provides notable performance improvements: over 8% for LLaMA3-70B and more than 10% for MoE-8x7b across all datasets, underscoring its effectiveness. The PersonaRAG methodology distinguishes itself from the Chain-of-Note approach by offering a cognitive framework that connects retrieved passages with prior knowledge. This framework models the instructor's (GPT-3.5) reasoning process, guiding the student models (LLaMA3-70B and MoE-8x7b) to better understand knowledge retrieved from passages. The results demonstrate that the LLMs are capable of selecting appropriate passages to build more accurate responses, highlighting the benefits of the PersonaRAG approach for improving generalization.  </p></div>
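The instructor-student transfer above amounts to prompt composition: the instructor's (GPT-3.5) PersonaRAG analysis is serialized into the student model's prompt so it inherits the reasoning scaffold without fine-tuning. The function and wording below are illustrative assumptions, not the paper's exact templates.

```python
# Sketch: carry the instructor's PersonaRAG analysis into a student prompt.
def build_student_prompt(question, passages, persona_analysis):
    """Compose a prompt for an open-source student model (e.g. LLaMA3-70B)."""
    passage_block = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"Question: {question}\n\n"
        f"Retrieved passages:\n{passage_block}\n\n"
        f"Instructor analysis (user needs, relevant evidence):\n"
        f"{persona_analysis}\n\n"
        "Using the analysis to select relevant passages, answer concisely."
    )

prompt = build_student_prompt(
    "Who stole the Mona Lisa in 1911?",
    ["Vincenzo Peruggia, a Louvre employee, stole the Mona Lisa in 1911."],
    "User favors detailed accounts; passage [1] directly names the thief.",
)
print(prompt.splitlines()[0])  # Question: Who stole the Mona Lisa in 1911?
```

The point of the design is that noisy passages arrive pre-filtered and contextualized, which is why the student models improve over vanilla RAG.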
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.4.">Case Study</head><p>Finally, we randomly sample one case in Table <ref type="table" target="#tab_4">4</ref> to demonstrate the effectiveness of PersonaRAG.</p><p>The user interaction analysis mechanism effectively generates comprehensive results by integrating foundational and advanced insights from user data. Retrieved passages provide critical clues for answering questions, while agent analyses summarize and illustrate the applicability of external information to user queries. The cognitive dynamic adaptation module refines initial chain-of-thought responses using these insights, generating accurate answers. For example, including knowledge about the "theft of the Mona Lisa in 1911," "Vincenzo Peruggia," and "Florence" enhances the reasoning process's precision and detail. This demonstrates PersonaRAG's effectiveness in helping IR agents combine external knowledge with intrinsic user data to produce well-informed responses.</p></div>
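The three-stage flow in the case study can be summarized as function composition over an LLM callable. This is a minimal sketch under our own naming; the prompt strings are illustrative, and `llm` stands in for the actual API calls.

```python
# Sketch of the PersonaRAG flow: CoT draft -> agent insights -> adaptation.
def persona_rag(question, passages, user_profile, llm):
    # Stage 1: initial Chain-of-Thought answer over retrieved passages.
    cot = llm(f"Think step by step.\nQuestion: {question}\n"
              f"Passages: {passages}")
    # Stage 2: user interaction agents (profile, session, ranking, feedback)
    # distill insights from user data and the passages.
    insights = llm(f"Summarize this user's needs and interests: {user_profile}")
    # Stage 3: cognitive dynamic adaptation refines the draft with insights.
    return llm(f"Refine the draft answer for this user.\n"
               f"Draft: {cot}\nUser insights: {insights}")

# Trivial stand-in LLM that echoes the first line of its prompt:
echo = lambda prompt: prompt.split("\n", 1)[0]
print(persona_rag("Who stole the Mona Lisa?", [], "likes art history", echo))
```

Each stage is a separate model call, which is the source of the latency and cost overhead discussed in the conclusion.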
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>This paper proposes PersonaRAG, a retrieval-augmented generation architecture that incorporates user interaction analysis and cognitive dynamic adaptation. PersonaRAG builds user interaction agents and dynamic cognitive mechanisms to facilitate the understanding of user needs and interests and to enhance the system's ability to deliver personalized, context-aware responses using the intrinsic cognition of LLMs.</p><p>Furthermore, PersonaRAG demonstrates effectiveness in leveraging external knowledge and adapting responses based on user profiles, knowledge levels, and information needs to support LLMs in generation tasks without fine-tuning. However, this approach requires multiple calls to the LLM's API, which can introduce additional latency and increase API costs when answering questions. The process involves constructing the initial Chain-of-Thought, processing the User Interaction Agents' results, and executing the Cognitive Dynamic Adaptation to generate the final answer. In addition, the inputs to the LLMs tend to be lengthy because they include extensive retrieved passages along with the constructed user needs, interests, and profiles. These factors can affect the efficiency and cost-effectiveness of the PersonaRAG approach in practical applications of Information Retrieval (IR) systems.</p><p>Future research will aim to optimize the process by reducing API calls and developing concise representations of user profiles and retrieved information without compromising response quality. We also plan to explore more user-centric agents to better capture the writing styles and characteristics of RAG users/searchers. 
This will enhance the system's ability to understand and adapt to individual preferences, improving personalization and relevance in IR tasks.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Illustrations of Various RAG Models. Vanilla RAG and Chain-of-Thought [1] use passive learning, while PersonaRAG involves user-centric knowledge acquisition.</figDesc><graphic coords="2,93.37,65.60,170.95,396.70" type="bitmap" /></figure>
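The API overhead described in the conclusion lends itself to a back-of-envelope model: one call for the initial Chain-of-Thought, one per user-interaction agent (five agents in Figure 2), and one for cognitive dynamic adaptation. The per-call latency figure below is an illustrative assumption, not a measurement from the paper.

```python
# Back-of-envelope call count and latency for one question (assumed figures).
def calls_per_question(n_agents: int = 5) -> int:
    """CoT draft + one call per agent + cognitive dynamic adaptation."""
    return 1 + n_agents + 1

def est_latency(n_agents=5, secs_per_call=2.0, agents_parallel=True):
    """Seconds per question; independent agents could run concurrently."""
    agent_cost = secs_per_call if agents_parallel else n_agents * secs_per_call
    return secs_per_call + agent_cost + secs_per_call

print(calls_per_question())  # 7
print(est_latency())         # 6.0 (vs 14.0 if agents run sequentially)
```

Running the agents concurrently reduces wall-clock latency but not the number of billed calls, which motivates the future work on reducing API calls.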
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Overview of Our PersonaRAG Model showcasing the dynamic interaction among specialized agents within the system, facilitated by a global message pool for structured communication. The diagram illustrates the flow from user query input through various agents, including User Profile, Context Retrieval, Session Analysis, Document Ranking, and Feedback Agents, highlighting their contributions to real-time adaptation and personalized content generation by integrating live user data and feedback for continuous improvement and contextually relevant search experiences.</figDesc><graphic coords="3,72.00,65.60,451.28,267.29" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>(a) Text Similarity for Top-3 Passages (b) Text Similarity for Top-5 Passages</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Text Similarity between Chain-of-Note (CoN) and Other Methods Using BLEU-2 Score for Evaluation, with Normalized Average Sentence Length and Average Syllable Count.</figDesc><graphic coords="8,72.00,210.61,213.68,128.21" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Question:</head><label></label><figDesc>Who stole the Mona Lisa from the Louvre in 1911?
Answer: "Vincenzo Peruggia"
Chain-of-Thought: 1. Read the given question and passages to gather relevant information. 2. Write reading notes summarizing the key points. 3. Discuss the relevance of the given question and passages. 4. If some passages are relevant to the given question, provide a brief answer based on the passages. 5. If no passage is relevant, directly provide the answer without considering the passages.
Step 1: Passage Retrieval
1. The Mona Lisa was stolen from the Louvre Museum while the museum was closed for cleaning. Witnesses reported that a tall, stout individual was carrying what appeared to be a large panel covered with a horse blanket. Two years later, Vincenzo Peruggia was arrested in Florence.
2. On August 22, 1911, the Mona Lisa was stolen from the Louvre. Vincenzo Peruggia claimed he stole the painting to return it to Italy.
3. In 1911, Vincenzo Peruggia, a Louvre employee, stole the Mona Lisa.
Step 2: User Interaction Analysis
User Insights from Interaction Analysis:
- User Profile Agent: The user is interested in art heists and historical mysteries.
- Contextual Retrieval Agent: The user frequently searches for art history topics.
- Live Session Agent: The user reads about unsolved mysteries and historic crimes.
- Document Ranking Agent: The user favors detailed, chronological accounts.
- Feedback Agent: The user interacts with articles about famous art heists.
Step 3: Cognitive Dynamic Adaptation
The user has a consistent interest in famous art heists and historical mysteries. They are drawn to unsolved mysteries and historic crimes. They favor detailed, chronological accounts, so the final response should highlight the theft motives, the method, and the investigation leading to the arrest, providing comprehensive and authoritative sources to satisfy their engagement with articles on art heists.
Answer: Vincenzo Peruggia, a Louvre employee, stole the Mona Lisa from the Louvre Museum on August 21, 1911. He claimed he stole the painting to return it to Italy, and the theft remained unsolved for two years before he was arrested in Florence in December 1913. The painting was later recovered and returned to the Louvre.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Summary of datasets. Each dataset consists of 500 questions randomly sampled from the raw dataset.</figDesc><table><row><cell>Dataset</cell><cell cols="3">#Query #Corpus Sampling Rate</cell></row><row><cell>NQ</cell><cell>8,757</cell><cell>79,168</cell><cell>5.7%</cell></row><row><cell>TriviaQA</cell><cell>8,837</cell><cell>78,785</cell><cell>5.7%</cell></row><row><cell>WebQ</cell><cell>2,032</cell><cell>3,417</cell><cell>24.6%</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head></head><label></label><figDesc>Prompt templates for the user-centric agents. Each prompt receives Question: {question}, Passages: {passages}, and Global Memory: {global_memory}.</figDesc><table><row><cell>Contextual Retrieval Agent</cell><cell>You are a search technology expert guiding the Contextual Retrieval Agent to deliver context-aware document retrieval. Task Description: Using the global memory pool and the retrieved passages, identify strategies to refine document retrieval. Highlight how user preferences, immediate needs, and global insights can be leveraged to adjust search queries and prioritize results that align with the user's interests. Ensure the Contextual Retrieval Agent uses this shared information to deliver more relevant and valuable results.</cell></row><row><cell>Live Session Agent</cell><cell>Your expertise in session analysis is required to assist the Live Session Agent in dynamically adjusting results. Task Description: Examine the retrieved passages and information in the global memory pool. Determine how the Live Session Agent can use this data to refine its understanding of the user's immediate needs. Suggest ways to dynamically adjust search results or recommend new queries in real time, ensuring that session adjustments align with user preferences and goals.</cell></row><row><cell>Document Ranking Agent</cell><cell>Your task is to help the Document Ranking Agent prioritize documents for better ranking. Task Description: Analyze the retrieved passages and global memory pool to identify ways to rank documents effectively. Focus on combining historical user preferences, immediate needs, and session behavior to refine ranking algorithms. Your insights should ensure that documents presented by the Document Ranking Agent are prioritized to match user interests and search context.</cell></row><row><cell>Feedback Agent</cell><cell>You are an expert in feedback collection and analysis, guiding the Feedback Agent to gather and utilize user insights. Task Description: Using the retrieved passages and global memory pool, identify methods for collecting implicit and explicit user feedback. Suggest ways to refine feedback mechanisms to align with user preferences, such as ratings, surveys, or behavioral data. Your recommendations should guide the Feedback Agent in updating other agents' models for more personalized and relevant results.</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 2</head><label>2</label><figDesc>Overall Accuracy Performance Comparison Using Top-3 and Top-5 Passages. PersonaRAG results are reported in bold.</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_4"><head>Table 4</head><label>4</label><figDesc>PersonaRAG Case Study.</figDesc><table /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://github.com/padas-lab-de/ir-rag-sigir24-persona-rag</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work has received funding from the European Union's Horizon Europe research and innovation program under grant agreement No 101070014 (OpenWebSearch.EU, https://doi.org/10.3030/101070014).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Chain-of-note: Enhancing robustness in retrieval-augmented language models</title>
		<author>
			<persName><forename type="first">W</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Yu</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2311.09210</idno>
		<idno>CoRR abs/2311.09210</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2311.09210" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title/>
		<author>
			<persName><surname>OpenAI</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2303.08774</idno>
		<idno type="arXiv">arXiv:2303.08774</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2303.08774" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">GPT-4 technical report</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Touvron</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Izacard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Martinet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Rozière</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hambro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Azhar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rodriguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joulin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Grave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2302.13971</idno>
		<idno type="arXiv">arXiv:2302.13971</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2302.13971" />
		<title level="m">Llama: Open and efficient foundation language models</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Language models are few-shot learners</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">B</forename><surname>Brown</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ryder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Subbiah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kaplan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dhariwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Neelakantan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Shyam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Sastry</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Askell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Agarwal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Herbert-Voss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Krueger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Henighan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Child</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Ramesh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M</forename><surname>Ziegler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Winter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Hesse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Sigler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Litwin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Chess</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Clark</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Berner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mccandlish</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Radford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Sutskever</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Amodei</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2020/hash/1457c0d6bfcb4967418bfb8ac142f64a-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Ranzato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Hadsell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Balcan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</editor>
		<meeting><address><addrLine>NeurIPS; virtual</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020-12-06">December 6-12, 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Bang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cahyawijaya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Su</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wilie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lovenia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ji</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Chung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Do</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fung</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.IJCNLP-MAIN.45</idno>
		<ptr target="https://doi.org/10.18653/v1/2023.ijcnlp-main.45" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, IJCNLP 2023 -Volume 1: Long Papers</title>
		<title level="s">Association for Computational Linguistics</title>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Park</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Arase</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Hu</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><surname>Lu</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Wijaya</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Purwarianti</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">A</forename><surname>Krisnadhi</surname></persName>
		</editor>
		<meeting>the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, IJCNLP 2023 -Volume 1: Long Papers<address><addrLine>Nusa Dua, Bali</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">November 1-4, 2023</date>
			<biblScope unit="page" from="675" to="718" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Retrieval-augmented generation for knowledge-intensive NLP tasks</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S H</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Perez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piktus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Petroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Karpukhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Küttler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Yih</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Riedel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Kiela</surname></persName>
		</author>
		<ptr target="https://proceedings.neurips.cc/paper/2020/hash/6b493230205f780e1bc26945df7481e5-Abstract.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Larochelle</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Ranzato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Hadsell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Balcan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</editor>
		<meeting><address><addrLine>NeurIPS; virtual</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020-12-06">December 6-12, 2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Benchmarking large language models in retrieval-augmented generation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.1609/AAAI.V38I16.29728</idno>
		<ptr target="https://doi.org/10.1609/aaai.v38i16.29728" />
	</analytic>
	<monogr>
		<title level="m">Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Wooldridge</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Natarajan</surname></persName>
		</editor>
		<meeting><address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2024">February 20-27, 2024</date>
			<biblScope unit="page" from="17754" to="17762" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Personalizing search via automated analysis of interests and activities</title>
		<author>
			<persName><forename type="first">J</forename><surname>Teevan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">T</forename><surname>Dumais</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Horvitz</surname></persName>
		</author>
		<idno type="DOI">10.1145/3190580.3190582</idno>
		<ptr target="https://doi.org/10.1145/3190580.3190582" />
	</analytic>
	<monogr>
		<title level="j">SIGIR Forum</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page" from="10" to="17" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Adaptive web search based on user profile constructed without any effort from users</title>
		<author>
			<persName><forename type="first">K</forename><surname>Sugiyama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Hatano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yoshikawa</surname></persName>
		</author>
		<idno type="DOI">10.1145/988672.988764</idno>
		<ptr target="https://doi.org/10.1145/988672.988764" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 13th international conference on World Wide Web, WWW 2004</title>
				<editor>
			<persName><forename type="first">S</forename><forename type="middle">I</forename><surname>Feldman</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Uretsky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Najork</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Wills</surname></persName>
		</editor>
		<meeting>the 13th international conference on World Wide Web, WWW 2004<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2004">May 17-20, 2004</date>
			<biblScope unit="page" from="675" to="684" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Context-aware recommender systems</title>
		<author>
			<persName><forename type="first">G</forename><surname>Adomavicius</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mobasher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tuzhilin</surname></persName>
		</author>
		<idno type="DOI">10.1609/AIMAG.V32I3.2364</idno>
		<ptr target="https://doi.org/10.1609/aimag.v32i3.2364" />
	</analytic>
	<monogr>
		<title level="j">AI Mag</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="67" to="80" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Wooldridge</surname></persName>
		</author>
		<title level="m">An Introduction to MultiAgent Systems</title>
				<imprint>
			<publisher>Wiley</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note>Second Edition</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Mixtral of experts</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">Q</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sablayrolles</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Roux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Mensch</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Savary</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bamford</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Chaplot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>De Las Casas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">B</forename><surname>Hanna</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Bressand</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lengyel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Bour</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Lample</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">R</forename><surname>Lavaud</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Saulnier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lachaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Stock</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Subramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Antoniak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">L</forename><surname>Scao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Gervet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lavril</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Lacroix</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">E</forename><surname>Sayed</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2401.04088</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2401.04088" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<title level="m" type="main">RECOMP: improving retrieval-augmented LMs with compression and selective augmentation</title>
		<author>
			<persName><forename type="first">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Choi</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2310.04408</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2310.04408" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Active retrieval augmented generation</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">F</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dwivedi-Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Callan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Neubig</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.EMNLP-MAIN.495</idno>
		<ptr target="https://doi.org/10.18653/v1/2023.emnlp-main.495" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Bouamor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Pino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Bali</surname></persName>
		</editor>
		<meeting>the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023<address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2023">December 6-10, 2023</date>
			<biblScope unit="page" from="7969" to="7992" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Embedding-based query language models</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zamani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">B</forename><surname>Croft</surname></persName>
		</author>
		<idno type="DOI">10.1145/2970398.2970405</idno>
		<ptr target="https://doi.org/10.1145/2970398.2970405" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2016 ACM on International Conference on the Theory of Information Retrieval, ICTIR 2016</title>
				<editor>
			<persName><forename type="first">B</forename><surname>Carterette</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Fang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Lalmas</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Nie</surname></persName>
		</editor>
		<meeting>the 2016 ACM on International Conference on the Theory of Information Retrieval, ICTIR 2016<address><addrLine>Newark, DE, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2016">September 12-16, 2016</date>
			<biblScope unit="page" from="147" to="156" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Personalised information retrieval: survey and classification</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">R</forename><surname>Ghorab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>O'connor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Wade</surname></persName>
		</author>
		<idno type="DOI">10.1007/S11257-012-9124-1</idno>
		<ptr target="https://doi.org/10.1007/s11257-012-9124-1" />
	</analytic>
	<monogr>
		<title level="j">User Model. User-Adapt. Interact</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<biblScope unit="page" from="381" to="443" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Adaptive-rag: Learning to adapt retrieval-augmented large language models through question complexity</title>
		<author>
			<persName><forename type="first">S</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Baek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Hwang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Park</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2403.14403</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2403.14403" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">Metaagents: Simulating interactions of human behaviors for llm-based taskoriented coordination via collaborative generative agents</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2310.06500</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2310.06500" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Gao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Jia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Pan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Bi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2312.10997</idno>
		<idno>CoRR abs/2312.10997</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2312.10997" />
		<title level="m">Retrieval-augmented generation for large language models: A survey</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">Y</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Huang</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.10981</idno>
		<title level="m">A survey on retrieval-augmented text generation for large language models</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Improving the domain adaptation of retrieval augmented generation (RAG) models for open domain question answering</title>
		<author>
			<persName><forename type="first">S</forename><surname>Siriwardhana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weerasekera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kaluarachchi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Wen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Rana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Nanayakkara</surname></persName>
		</author>
		<ptr target="https://transacl.org/ojs/index.php/tacl/article/view/4029" />
	</analytic>
	<monogr>
		<title level="j">Trans. Assoc. Comput. Linguistics</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Benchmarking large language models in retrieval-augmented generation</title>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Sun</surname></persName>
		</author>
		<idno type="DOI">10.1609/AAAI.V38I16.29728</idno>
		<ptr target="https://doi.org/10.1609/aaai.v38i16.29728" />
	</analytic>
	<monogr>
		<title level="m">Thirty-Eighth AAAI Conference on Artificial Intelligence, AAAI 2024, Thirty-Sixth Conference on Innovative Applications of Artificial Intelligence, IAAI 2024, Fourteenth Symposium on Educational Advances in Artificial Intelligence, EAAI 2024</title>
				<editor>
			<persName><forename type="first">M</forename><forename type="middle">J</forename><surname>Wooldridge</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Natarajan</surname></persName>
		</editor>
		<meeting><address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2024">February 20-27, 2024</date>
			<biblScope unit="page" from="17754" to="17762" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zou</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2404.10198</idno>
		<title level="m">How faithful are RAG models? Quantifying the tug-of-war between RAG and LLMs&apos; internal prior</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Human memory: A proposed system and its control processes</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Atkinson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Shiffrin</surname></persName>
		</author>
		<idno type="DOI">10.1016/S0079-7421(08)60422-3</idno>
		<ptr target="https://doi.org/10.1016/S0079-7421(08)60422-3" />
	</analytic>
	<monogr>
		<title level="m">Psychology of Learning and Motivation, volume 2 of Psychology of Learning and Motivation</title>
				<editor>
			<persName><forename type="first">K</forename><forename type="middle">W</forename><surname>Spence</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">T</forename><surname>Spence</surname></persName>
		</editor>
		<imprint>
			<publisher>Elsevier</publisher>
			<date type="published" when="1968">1968</date>
			<biblScope unit="page" from="89" to="195" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Semantic web-based information retrieval models: a systematic survey</title>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Kumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Data Science and Analytics: 5th International Conference on Recent Developments in Science, Engineering and Technology, REDSET 2019</title>
		<title level="s">Revised Selected Papers, Part II</title>
		<meeting><address><addrLine>Gurugram, India</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">November 15-16, 2019</date>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="204" to="222" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Personalized Information Retrieval based on Time-Sensitive User Profile</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kacem</surname></persName>
		</author>
		<ptr target="https://tel.archives-ouvertes.fr/tel-01707423" />
	</analytic>
	<monogr>
		<title level="m">Recherche d&apos;Information Personalisée basée sur un Profil Utilisateur Sensible au Temps)</title>
				<meeting><address><addrLine>Toulouse, France</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
		<respStmt>
			<orgName>Paul Sabatier University</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Ph.D. thesis</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">A multi-agent framework for context-aware dynamic user profiling for web personalization</title>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sharma</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Software Engineering: Proceedings of CSI 2015</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Hong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Cheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">K S</forename><surname>Yau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2308.00352</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2308.00352" />
		<title level="m">Metagpt: Meta programming for multiagent collaborative framework</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Contextual relevance feedback in web information retrieval</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">K</forename><surname>Limbu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Connor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Pears</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">G</forename><surname>Macdonell</surname></persName>
		</author>
		<idno type="DOI">10.1145/1164820.1164848</idno>
		<ptr target="https://doi.org/10.1145/1164820.1164848" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st International Conference on Information Interaction in Context, IIiX 2006</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Ruthven</surname></persName>
		</editor>
		<meeting>the 1st International Conference on Information Interaction in Context, IIiX 2006<address><addrLine>Copenhagen, Denmark</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2006">October 18-20, 2006</date>
			<biblScope unit="page" from="138" to="143" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Natural questions: a benchmark for question answering research</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kwiatkowski</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Palomaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Redfield</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Collins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Alberti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Epstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Polosukhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Devlin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kelcey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Dai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Uszkoreit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Petrov</surname></persName>
		</author>
		<idno type="DOI">10.1162/TACL_A_00276</idno>
		<ptr target="https://doi.org/10.1162/tacl_a_00276" />
	</analytic>
	<monogr>
		<title level="j">Trans. Assoc. Comput. Linguistics</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="452" to="466" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Triviaqa: A large scale distantly supervised challenge dataset for reading comprehension</title>
		<author>
			<persName><forename type="first">M</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">S</forename><surname>Weld</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/P17-1147</idno>
		<ptr target="https://doi.org/10.18653/v1/P17-1147" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017</title>
		<title level="s">Long Papers</title>
		<editor>
			<persName><forename type="first">R</forename><surname>Barzilay</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><surname>Kan</surname></persName>
		</editor>
		<meeting>the 55th Annual Meeting of the Association for Computational Linguistics, ACL 2017<address><addrLine>Vancouver, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2017-08-04">July 30 - August 4, 2017</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="1601" to="1611" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Semantic parsing on freebase from question-answer pairs</title>
		<author>
			<persName><forename type="first">J</forename><surname>Berant</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Chou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Frostig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Liang</surname></persName>
		</author>
		<ptr target="https://aclanthology.org/D13-1160/" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013</title>
				<meeting>the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013<address><addrLine>Grand Hyatt Seattle, Seattle, Washington, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2013-10-21">18-21 October 2013</date>
			<biblScope unit="page" from="1533" to="1544" />
		</imprint>
	</monogr>
	<note>A meeting of SIGDAT, a Special Interest Group of the ACL</note>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Chain-of-thought prompting elicits reasoning in large language models</title>
		<author>
			<persName><forename type="first">J</forename><surname>Wei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Schuurmans</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Bosma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Ichter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">H</forename><surname>Chi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><forename type="middle">V</forename><surname>Le</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhou</surname></persName>
		</author>
		<ptr target="http://papers.nips.cc/paper_files/paper/2022/hash/9d5609613524ecf4f15af0f7b31abca4-Abstract-Conference.html" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 35: Annual Conference on Neural Information Processing Systems 2022, NeurIPS 2022</title>
		<editor>
			<persName><forename type="first">S</forename><surname>Koyejo</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Mohamed</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Belgrave</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Cho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Oh</surname></persName>
		</editor>
		<meeting><address><addrLine>New Orleans, LA, USA</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-12-09">November 28 - December 9, 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">Self-rag: Learning to retrieve, generate, and critique through self-reflection</title>
		<author>
			<persName><forename type="first">A</forename><surname>Asai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sil</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajishirzi</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2310.11511</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2310.11511" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">When not to trust language models: Investigating effectiveness of parametric and nonparametric memories</title>
		<author>
			<persName><forename type="first">A</forename><surname>Mallen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Asai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Zhong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Khashabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Hajishirzi</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.ACL-LONG.546</idno>
		<ptr target="https://doi.org/10.18653/v1/2023.acl-long.546" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Boyd-Graber</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</editor>
		<meeting>the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2023<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computational Linguistics</publisher>
			<date type="published" when="2023">July 9-14, 2023</date>
			<biblScope unit="page" from="9802" to="9822" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<analytic>
		<title level="a" type="main">Knowledge-augmented language model verification</title>
		<author>
			<persName><forename type="first">J</forename><surname>Baek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jeong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">J</forename><surname>Hwang</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.EMNLP-MAIN.107</idno>
		<ptr target="https://doi.org/10.18653/v1/2023.emnlp-main.107" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023</title>
		<editor>
			<persName><forename type="first">H</forename><surname>Bouamor</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Pino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">K</forename><surname>Bali</surname></persName>
		</editor>
		<meeting>the 2023 Conference on Empirical Methods in Natural Language Processing, EMNLP 2023<address><addrLine>Singapore</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">December 6-10, 2023</date>
			<biblScope unit="page" from="1720" to="1736" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b36">
	<analytic>
		<title level="a" type="main">Interleaving retrieval with chain-of-thought reasoning for knowledge-intensive multi-step questions</title>
		<author>
			<persName><forename type="first">H</forename><surname>Trivedi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Balasubramanian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Khot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Sabharwal</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2023.ACL-LONG.557</idno>
		<ptr target="https://doi.org/10.18653/v1/2023.acl-long.557" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics</title>
		<editor>
			<persName><forename type="first">A</forename><surname>Rogers</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><forename type="middle">L</forename><surname>Boyd-Graber</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">N</forename><surname>Okazaki</surname></persName>
		</editor>
		<meeting>the 61st Annual Meeting of the Association for Computational Linguistics<address><addrLine>Toronto, Canada</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">July 9-14, 2023</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="10014" to="10037" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title level="a" type="main">Okapi at TREC-3</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">E</forename><surname>Robertson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Walker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Jones</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hancock-Beaulieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gatford</surname></persName>
		</author>
		<ptr target="http://trec.nist.gov/pubs/trec3/papers/city.ps.gz" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of The Third Text REtrieval Conference, TREC 1994</title>
		<editor>
			<persName><forename type="first">D</forename><forename type="middle">K</forename><surname>Harman</surname></persName>
		</editor>
		<meeting>The Third Text REtrieval Conference, TREC 1994<address><addrLine>Gaithersburg, Maryland, USA</addrLine></address></meeting>
		<imprint>
			<publisher>NIST Special Publication</publisher>
			<date type="published" when="1994">November 2-4, 1994</date>
			<biblScope unit="page" from="109" to="126" />
		</imprint>
		<respStmt>
			<orgName>National Institute of Standards and Technology (NIST)</orgName>
		</respStmt>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">Openmatch-v2: An all-in-one multi-modality plm-based information retrieval toolkit</title>
		<author>
			<persName><forename type="first">S</forename><surname>Yu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Xiong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Liu</surname></persName>
		</author>
		<idno type="DOI">10.1145/3539618.3591813</idno>
		<ptr target="https://doi.org/10.1145/3539618.3591813" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023</title>
		<editor>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">W</forename><forename type="middle">E</forename><surname>Duh</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Huang</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">P</forename><surname>Kato</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">J</forename><surname>Mothe</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">B</forename><surname>Poblete</surname></persName>
		</editor>
		<meeting>the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR 2023<address><addrLine>Taipei, Taiwan</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2023">July 23-27, 2023</date>
			<biblScope unit="page" from="3160" to="3164" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b39">
	<analytic>
		<title level="a" type="main">KILT: a benchmark for knowledge intensive language tasks</title>
		<author>
			<persName><forename type="first">F</forename><surname>Petroni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Piktus</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Fan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">S H</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Yazdani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">D</forename><surname>Cao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Thorne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jernite</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Karpukhin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Maillard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Plachouras</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Rocktäschel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Riedel</surname></persName>
		</author>
		<idno type="DOI">10.18653/V1/2021.NAACL-MAIN.200</idno>
		<ptr target="https://doi.org/10.18653/v1/2021.naacl-main.200" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</title>
		<editor>
			<persName><forename type="first">K</forename><surname>Toutanova</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Rumshisky</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">L</forename><surname>Zettlemoyer</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Hakkani-Tür</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Beltagy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bethard</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Cotterell</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">T</forename><surname>Chakraborty</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Zhou</surname></persName>
		</editor>
		<meeting>the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online</meeting>
		<imprint>
			<date type="published" when="2021">June 6-11, 2021</date>
			<biblScope unit="page" from="2523" to="2544" />
		</imprint>
	</monogr>
	<note>Association for Computational Linguistics</note>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
