<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Next-Gen Sponsored Search: Crafting the Perfect Query with Inventory-Aware RAG (InvAwr-RAG)-Based GenAI</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Md Omar Faruk Rokon</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Weizhi Du</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Zhaodong Wang</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Musen Wen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Walmart AdTech</institution>
          ,
          <addr-line>Sunnyvale, CA</addr-line>
          ,
          <country country="US">USA</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Sponsored search plays a crucial role in e-commerce revenue generation, where advertisers strategically bid on keywords to capture the attention of users through relevant search queries. However, the process of identifying pertinent keywords for a given query presents significant challenges because of a vast and evolving keyword landscape, ambiguous intentions, and topic diversity. This paper highlights an opportunity to earn considerable ad revenue and improve user engagement: a significant proportion of queries fail to retrieve any sponsored ads. To capture this opportunity, we introduce the Inventory-Aware RAG-based Generative AI model (InvAwr-RAG), which integrates advanced semantic retrieval and real-time inventory data. This model combines dynamically generated and historically successful queries to align with available inventory and ad campaigns while diversifying rewritten queries to enhance relevance and user engagement. Preliminary results show a significant 68% increase in fill rate and balanced relevance metrics, indicating a strong potential for increased ad revenue. The InvAwr-RAG model sets a new standard in dynamic query optimization, significantly improving ad relevancy, advertiser ROI, and user experience on Walmart's digital platform.</p>
      </abstract>
      <kwd-group>
        <kwd>Dynamic Query Rewriting</kwd>
        <kwd>Generative AI in Advertising</kwd>
        <kwd>Sponsored Search</kwd>
        <kwd>E-commerce Advertising</kwd>
        <kwd>RAG</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Sponsored search is a cornerstone of revenue generation in e-commerce, where advertisers
bid on keywords to display their ads in response to user queries. This system, however, faces
significant challenges, including the alignment of user queries with relevant ads—a process
complicated by the vast, dynamic keyword landscape and diverse user intents. In the competitive
landscape of digital advertising, the efficiency of sponsored search systems is paramount for
driving revenue and enhancing user experience on e-commerce platforms like Walmart. A
significant challenge that Walmart faces is the presence of search queries that fail to retrieve
any sponsored product ads—accounting for approximately 13% of all searches. This issue
represents a substantial revenue loss and a missed opportunity to engage potential customers.
The inability to show relevant ads not only impacts Walmart’s bottom line but also diminishes
the effectiveness of the platform for advertisers seeking visibility and for customers who may
miss out on discovering products of interest. Hence, there is a compelling business need for a
solution that can dynamically align search queries with available inventory and advertising
goals, ensuring that every search can result in meaningful ad placements.</p>
      <p>The core problem this research addresses is the high rate of search queries that yield no ad
results due to mismatches between user queries and the current inventory or the specificities
of real-time bidding budgets. The challenge is twofold: firstly, to enhance the relevance of ad
placements to ensure they correspond with available inventory and meet advertiser bidding
strategies; and secondly, to maintain or even improve user experience by presenting ads that
are perceived as relevant and potentially interesting. This problem is crucial because it affects
Walmart’s ability to maximize ad revenue, utilize advertising space efficiently, and ensure
customer satisfaction.</p>
      <p>
        Current solutions in sponsored search fall into two main categories: information retrieval
(IR) and generative or Natural Language Generation (NLG) based retrieval. IR methods, such
as Dense Retrieval (DR) approaches including ANCE [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], RocketQA [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], and NGAME [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
employ advanced deep learning models to create dense representations of queries and keywords,
achieving state-of-the-art performance by utilizing effective negative mining strategies [
        <xref ref-type="bibr" rid="ref4 ref5 ref6 ref7">4, 5, 6, 7</xref>
        ].
Conversely, NLG-based methods like CLOVER [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ] and ProphetNet-Ads [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] use generative
models to transform user queries into more effective keywords, aiming to synthesize query
forms that better match available ads. Despite these advances, both IR and NLG methods often
overlook real-time inventory data, leading to a disconnect between the generated queries and the
ads available, resulting in many queries failing to retrieve any ads. This lack of synchronization
with the dynamic inventory and ad campaign specifics renders these methods less effective and
unable to leverage potential insights from immediate market conditions.
      </p>
      <p>To bridge this gap, we introduce an innovative Inventory-Aware RAG-based Generative AI
model (InvAwr-RAG) at Walmart. Our system leverages state-of-the-art technologies, including
two-tower BERT embeddings for deep understanding of query and product semantics, combined
with advanced indexing techniques for rapid and efficient retrieval of inventory data. By
integrating large language models (LLMs) with Retrieval-Augmented Generation (RAG), our
approach rewrites incoming user queries in real-time to align them with the most relevant and
available sponsored products. Furthermore, our system enhances the query pool by blending
dynamically generated queries with proven successful queries from historical search logs that
consistently result in ad displays. This hybrid approach ensures that rewritten queries are
immediately applicable and effective, reflecting live updates in inventory and adhering to the
nuances of real-time ad bidding strategies.</p>
      <p>This research introduces several significant innovations in the field of sponsored search:
• Introducing a robust model for Dynamic Query Rewriting: Our model dynamically adjusts
user queries to improve alignment with real-time inventory and advertiser bids, effectively
turning previously unfulfillable searches into opportunities for ad placement.
• Enhancing Real-Time Data Integration: By integrating real-time inventory and bidding
data, our system ensures that every search query can result in relevant and effective ad
displays.
• Hybrid Query Generation: Combining AI-generated queries with historically successful
queries allows for a rich mix of freshness and reliability in ad placements.
• Improving User Experience and Advertiser ROI: Our model not only enhances the user
experience by providing relevant ad suggestions but also increases Return on Investment
(ROI) for advertisers by maximizing the visibility of their products in relevant searches.
• Scalability and Adaptability: The use of advanced data handling techniques and the
adaptability of LLMs to learn from vast amounts of data ensure that our solution is
scalable and can continuously evolve with changing market dynamics.</p>
      <p>Our model demonstrated a +68% improvement in fill rate for queries that previously failed to
retrieve ads, thereby reducing the number of no-result queries substantially. These findings not
only show a marked improvement in fill rates but also underscore the potential to increase
ad-related revenue by up to $1 billion over the next five years and to enhance the overall shopping
experience on Walmart’s e-commerce platform.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Methodology</title>
      <p>In this section, we describe the development and implementation of the Inventory-Aware
RAG-based Generative AI model (InvAwr-RAG). Our objective is to dynamically rewrite user
queries to enhance ad relevance and search efficiency on Walmart’s e-commerce platform. The
methodology encompasses several key stages: data preparation, model architecture design,
training, and real-time query processing.</p>
      <sec id="sec-2-1">
        <title>2.1. Data Preparation</title>
        <p>Efficient data preparation is crucial for our model’s success. It involves meticulous collection
and annotation of data to capture the complex relationships between search queries, product
titles, and user interactions.</p>
        <p>Data for Two-Tower BERT Model: We compiled a dataset for the Two-Tower BERT model,
pairing search queries with product titles and associated relevance scores, enhanced through
human annotations to refine accuracy.</p>
        <p>Search Log Analysis for Popular Queries: Our search log analysis identified popular
queries based on search frequency and ad impressions, guiding the RAG component to align ad
content effectively with user preferences.</p>
        <p>Collection and Annotation of Rewritten Queries: Furthermore, we compiled search
queries from our logs to identify potential alternative or revised queries. We selected queries
that led to at least 500 clicks on the same items over a six-month period, as these reflect a high
level of user engagement and intent. To ensure the relevance and uniqueness of these queries,
we subjected them to a rigorous human evaluation process. We filtered out any queries that
were deemed too similar to existing ones, retaining only those that provided distinct alternatives.
The finalized set of revised queries was then utilized to train our Query Rewrite LLM, ensuring
that the model learns from real-world, effective search patterns.</p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Model Architecture and Training</title>
        <p>The InvAwr-RAG model combines supervised learning with advanced fine-tuning, tailored for
the dynamic e-commerce environment.</p>
        <p>Supervised Learning with Two-Tower BERT: The Two-Tower BERT model processes user
queries and product titles separately, using embeddings to capture deep semantic meanings
essential for effective matching.</p>
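        <p>As an illustrative sketch (not Walmart’s production model), the two-tower scoring idea can be shown with a toy encoder: each tower maps its text to a vector independently, and relevance is the cosine similarity between the two embeddings. The hashing encoder below is a deliberately simple stand-in for a BERT tower.</p>

```python
import numpy as np

def encode(text, dim=8):
    # Toy stand-in for one BERT tower: a hashed bag-of-words embedding,
    # mean-pooled over tokens, then L2-normalized. Deterministic within
    # a single process run; a real tower would run a BERT forward pass.
    vecs = []
    for tok in text.lower().split():
        rng = np.random.default_rng(abs(hash(tok)) % (2**32))
        vecs.append(rng.standard_normal(dim))
    v = np.mean(vecs, axis=0)
    return v / np.linalg.norm(v)

def two_tower_score(query, title):
    # Query and product title are encoded separately (in a trained model
    # the two towers have different weights); relevance is the cosine
    # similarity of the resulting unit vectors.
    return float(encode(query) @ encode(title))
```

        <p>In production the product-title embeddings would typically be precomputed and indexed, so only the query tower needs to run at request time.</p>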
        <p>Fine-Tuning for Query Rewrite LLM with Low-Rank Adaptation (LoRA): Our Query
Rewrite LLM is based on the Llama2 7B model, which we have fine-tuned using Low-Rank
Adaptation (LoRA) to enhance its performance in the context of sponsored search query
rewriting. The fine-tuning process involves optimizing the model with a specialized dataset to ensure
it generates contextually relevant and inventory-aware rewritten queries. This approach allows
us to leverage the extensive pre-training of Llama2 7B while adapting it specifically for our use
case. The introduction of low-rank matrices A and B into the weight matrices optimizes the
adaptation, minimizing the number of trainable parameters.</p>
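        <p>The LoRA update can be sketched numerically: the pretrained weight W0 stays frozen, and only the low-rank factors A and B are trained, shrinking the trainable parameter count from d×k to r×(d+k). The dimensions below are illustrative; Llama2 7B layers are far larger.</p>

```python
import numpy as np

# LoRA sketch: the adapted weight is W0 + B @ A, where B (d x r) and
# A (r x k) are the only trainable matrices. Illustrative dimensions.
d, k, r = 64, 64, 4
rng = np.random.default_rng(0)

W0 = rng.standard_normal((d, k))        # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01  # trainable, small random init
B = np.zeros((d, r))                    # trainable, zero init: no change at start

def forward(x):
    # Adapted forward pass: W0 x plus the low-rank correction B (A x).
    return W0 @ x + B @ (A @ x)

full_params = d * k          # 4096 if W0 were fine-tuned directly
lora_params = d * r + r * k  # 512 trainable parameters with rank 4
```

        <p>Because B starts at zero, the adapted model initially matches the pretrained one exactly, and fine-tuning proceeds only through the small factors.</p>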
        <p>This integrated approach ensures that our model adapts dynamically to user inputs and
inventory data, providing an effective bridge between user expectations and available products.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Retrieval-Augmented Generation (RAG)</title>
        <p>Figure 3 provides a comprehensive overview of our end-to-end RAG-based query rewriting
system. This system utilizes embeddings effectively to match user queries with relevant
inventory items, considering both product relevance and budget constraints to ensure practical ad
placements.</p>
        <p>Step 1 - Query Classifier: Our rule-based classifier assesses each user’s search query to
identify if it underperforms based on metrics like click-through rates. We redirect low-performing
queries to a specialized route that enhances their retrieval effectiveness, optimizing our system’s
resource usage and improving outcomes for complex queries.</p>
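        <p>A minimal sketch of such a rule-based router follows; the threshold names and values are hypothetical, as the production rules and cutoffs are not specified here.</p>

```python
# Hypothetical rule-based router: a query whose historical metrics fall
# below these (illustrative) thresholds is sent to the rewrite path.
CTR_FLOOR = 0.01       # assumed click-through-rate threshold
MIN_AD_RESULTS = 1     # queries retrieving no ads always qualify

def needs_rewrite(stats):
    """stats: dict with historical 'ctr' and 'ad_results' for the query."""
    if stats["ad_results"] < MIN_AD_RESULTS:
        return True
    return stats["ctr"] < CTR_FLOOR
```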
        <p>Step 2 - Dynamic Retrieval of Inventory Items: We actively retrieve the top N inventory
items from our vector database, selecting items based on their cosine similarity to the user’s
query while ensuring budget constraints are met.</p>
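        <p>A minimal NumPy stand-in for this step is shown below; a production system would query an approximate-nearest-neighbor index rather than scan every vector, and the per-item campaign budget field is an assumption of this sketch.</p>

```python
import numpy as np

def retrieve_top_n(query_vec, item_vecs, budgets, n=20, min_budget=0.0):
    # Rank inventory items by cosine similarity to the query embedding,
    # keeping only items whose campaign budget is still positive.
    q = query_vec / np.linalg.norm(query_vec)
    items = item_vecs / np.linalg.norm(item_vecs, axis=1, keepdims=True)
    sims = items @ q
    eligible = np.flatnonzero(budgets > min_budget)
    ranked = eligible[np.argsort(-sims[eligible])]
    return ranked[:n]
```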
        <p>Step 3 - Preparation of Query Rewrite Prompts: After identifying relevant items, we
prepare prompts using the original query combined with descriptions of these items. Our goal
is to generate diverse yet relevant rewritten queries that cater to a broad range of customer
interests without compromising relevancy, as shown in Figure 4.</p>
        <p>Step 4 - Rewrite Query by LLM: We use a generative model trained on query suggestion
synthesis to produce rewritten queries that are not only relevant but also effectively linked to
the available inventory.</p>
        <p>Step 5 - Get Popular Queries: We identify and integrate popular queries from our logs that
resemble the rewritten queries generated by the LLM, enriching our query suggestions with
common search terms and patterns.</p>
        <p>Step 6 - Merge K Queries and Retrieval: We merge the rewritten queries generated by the
LLM with popular queries from our history logs. We then use these queries to dynamically
retrieve relevant ad items. Our cross-encoder based BERT model evaluates the relevancy of
each item, ensuring that only those meeting our predefined relevancy thresholds are displayed.
This approach not only tailors ad displays to current search intents but also boosts customer
satisfaction by aligning ads closely with user expectations.</p>
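        <p>The merge-and-filter logic of this step can be sketched as follows; here score_fn stands in for the cross-encoder BERT relevance model, and the threshold value is illustrative rather than the production setting.</p>

```python
def merge_and_filter(llm_queries, popular_queries, score_fn, k=5, threshold=0.5):
    # Merge LLM rewrites with historically popular queries, dedupe while
    # preserving order, score each with the (stand-in) cross-encoder, and
    # keep only the top-k rewrites that clear the relevance threshold.
    merged, seen = [], set()
    for q in llm_queries + popular_queries:
        key = q.strip().lower()
        if key not in seen:
            seen.add(key)
            merged.append(q)
    scored = [(q, score_fn(q)) for q in merged]
    kept = [q for q, s in sorted(scored, key=lambda t: -t[1]) if s >= threshold]
    return kept[:k]
```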
        <p>Our comprehensive RAG system effectively bridges the gap between user queries and
suitable products. By leveraging advanced algorithms for query processing and item retrieval,
and ensuring that all interactions are guided by relevance and user intent, it both optimizes
ad displays and significantly enhances the shopping experience. The careful integration of
dynamic retrieval processes with a robust relevance framework ensures that our e-commerce
platform can cater to a wide array of customer needs while maintaining high efficiency and
scalability, ultimately leading to a more engaging and successful search experience for all users.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Experiment and Results</title>
      <p>This section presents results for the InvAwr-RAG model, aimed at enhancing the effectiveness
of Walmart’s sponsored search system.</p>
      <sec id="sec-3-1">
        <title>3.1. Selection Criteria for N Items and K Queries</title>
        <p>An essential aspect of our methodology in the InvAwr-RAG model involves the selection of N=20
items and K=5 rewritten queries. These parameters were carefully determined based on both
empirical evidence and operational efficiency, ensuring optimal performance and relevance.</p>
        <p>Determining N=20 Items: The choice of N=20 items for retrieval from our vector database
is grounded in the following considerations:</p>
        <p>1. Diversity and Coverage: Retrieving 20 items allows the system to achieve a balance between
diversity and specificity. This number is large enough to cover various aspects of user queries,
yet manageable enough to maintain high relevance and avoid overwhelming the user with
options.</p>
        <p>2. User Experience: Based on user interaction data, we observed that presenting up to 20
items maximizes engagement without causing decision fatigue. Users are more likely to browse
through and interact with a set of 20 well-curated product suggestions.</p>
        <p>3. Computational Efficiency: From a technical perspective, retrieving 20 items strikes an optimal
balance between computational load and response time, ensuring that the system remains
responsive even under high traffic conditions.</p>
        <p>Choosing K=5 Rewritten Queries: The decision to generate K=5 rewritten queries for
each original query was based on several factors:</p>
        <p>1. Query Variation: Five rewritten queries provide sufficient variation to explore different
linguistic formulations and product matches, increasing the chances of hitting upon the most
effective phrasing that captures the user’s intent.</p>
        <p>2. Precision and Focus: Limiting the number of rewritten queries to five helps maintain focus
and precision in query suggestions, ensuring each query is highly targeted and likely to yield
relevant results.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Offline Results</title>
        <p>We conducted an initial offline evaluation on a set of 10,000 queries historically known to have
a 0% fill rate—queries that consistently failed to retrieve any ad items. Our fine-tuned Llama2
7B model, integrated into the InvAwr-RAG system, achieved a 68% fill rate on this challenging
set, demonstrating substantial improvement in query coverage and relevance.</p>
        <p>For comparison, we also evaluated the performance of queries rewritten by GPT-4, a
state-of-the-art LLM, under similar conditions. GPT-4 achieved a fill rate of 53%, which, while
impressive, underscores the additional benefits gained from our fine-tuning approach that
specifically addresses the retail context of Walmart.</p>
        <table-wrap>
          <table>
            <thead>
              <tr>
                <th>Model</th>
                <th>Fill Rate</th>
                <th>NDCG@8</th>
              </tr>
            </thead>
            <tbody>
              <tr>
                <td>Baseline (0% Fill Rate)</td>
                <td>0%</td>
                <td>0</td>
              </tr>
              <tr>
                <td>GPT-4</td>
                <td>53%</td>
                <td>0.6458</td>
              </tr>
              <tr>
                <td>InvAwr-RAG</td>
                <td>68%</td>
                <td>0.6847</td>
              </tr>
            </tbody>
          </table>
        </table-wrap>
        <p>To further assess the relevancy of the returned ad items to the original queries, we utilized
the NDCG metric at a cutoff of 8 (NDCG@8). This metric evaluates the quality of the ranking
by measuring the usefulness, or gain, of the ad items based on their positions in the result list.
Higher scores indicate better relevancy. The evaluations were conducted by third-party human
evaluators who assessed the top 8 items returned by each model. As shown in the table, our
InvAwr-RAG model not only improved fill rates but also demonstrated superior relevance as
indicated by its higher NDCG score compared to both the baseline and GPT-4.</p>
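        <p>For reference, NDCG@k as used here can be computed as follows; the graded relevance scale assigned by the human evaluators is an assumption of this sketch.</p>

```python
import math

def ndcg_at_k(relevances, k=8):
    # relevances: human-judged gains for the returned items, listed in
    # the order the model ranked them. DCG discounts gains by log2 of
    # position; NDCG normalizes by the DCG of the ideal ordering.
    rel = relevances[:k]
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(rel))
    ideal = sorted(relevances, reverse=True)[:k]
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0
```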
        <p>These preliminary findings suggest that the InvAwr-RAG model has the potential to
significantly enhance the sponsored search system, making a strong case for further investigation
through planned A/B testing.</p>
        <p>We will conduct A/B testing to measure the InvAwr-RAG model’s impact in a live setting,
focusing on Fill Rate, Click-Through Rate, Conversion Rate, and Revenue Impact. These
metrics will help validate the model’s potential to transform ineffective queries into profitable
engagement opportunities, with preliminary data already indicating significant improvements.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion</title>
      <p>This section clarifies the essential role of query rewriting in enhancing item retrieval from the
vector database and addresses potential queries about the confidence in rewritten queries and
the integration of popular searches.</p>
      <p>The Complementary Role of Query Rewriting: Query rewriting is vital even with a
capable vector database because about 13% of queries yield no results. It bridges the gap by
transforming queries into formats more likely to match the inventory, improving relevance and
user intent alignment. Rewriting also adjusts to dynamic inventory changes and diverse user
expressions, ensuring robust and responsive search capabilities.</p>
      <p>Confidence in Rewritten Queries: Confidence in our rewritten queries is founded on
a robust combination of contextual knowledge, historical data, and human annotation. Our
LLM’s training, enriched with extensive contextual and industry-specific data, enables the
generation of pertinent queries even when initial retrievals are unsuccessful. Historical data
anchors the model’s suggestions in proven past interactions, while human annotation refines
our dataset with alternative query expressions that have historically led to successful outcomes.
This methodical approach ensures the efficacy of the model across various scenarios.</p>
      <p>Incorporating Popular Queries: Integrating popular queries plays a crucial role by reflecting
collective user behavior, which ensures that rewritten queries resonate with broad user search
patterns. This strategy not only captures current trends, providing timely relevance, but also
combines the real-time adaptability of LLM-generated queries with the solid foundation of user
preferences. This hybrid approach is particularly valuable during peak shopping periods and
market shifts, effectively guiding the LLM to produce queries that align with the latest user
intents and market trends. This strategic integration enhances the relevance and effectiveness
of our search system, benefiting both users and the platform.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions</title>
      <p>This research has demonstrated the effectiveness of the Inventory-Aware RAG-based Generative
AI model (InvAwr-RAG) in addressing significant inefficiencies in sponsored search systems on
e-commerce platforms like Walmart. By dynamically rewriting queries to align with real-time
inventory and ad campaigns, the InvAwr-RAG model significantly reduces the occurrence of
no-result queries and has the potential to increase ad-related revenue substantially.</p>
      <p>Our preliminary results are promising. These findings highlight the
potential of integrating advanced AI technologies to enhance the relevance and effectiveness of
ad placements, thereby improving both user experience and advertiser ROI.</p>
      <p>The forthcoming A/B test will provide a more definitive analysis of the InvAwr-RAG model’s
performance. Beyond this, future work will focus on refining the model’s understanding of
user intent and expanding its applicability across more diverse product categories and bidding
strategies. Continual improvements in scalability and efficiency will also be critical as Walmart’s
inventory and user base expand.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Xiong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-F.</given-names>
            <surname>Tang</surname>
          </string-name>
          , J. Liu,
          <string-name>
            <given-names>P.</given-names>
            <surname>Bennett</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ahmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Overwijk</surname>
          </string-name>
          ,
          <article-title>Approximate nearest neighbor negative contrastive learning for dense text retrieval</article-title>
          , arXiv preprint arXiv:2007.00808 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Qu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ding</surname>
          </string-name>
          , J. Liu, K. Liu,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W. X.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Dong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>RocketQA: An optimized training approach to dense passage retrieval for open-domain question answering</article-title>
          , arXiv preprint arXiv:2010.08191 (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>K.</given-names>
            <surname>Dahiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Gupta</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Saini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Soni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dave</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiao</surname>
          </string-name>
          , G. K, P. Dey,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          , et al.,
          <article-title>Ngame: Negative mining-aware mini-batching for extreme classification</article-title>
          ,
          <source>in: Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>258</fpage>
          -
          <lpage>266</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>X.</given-names>
            <surname>Bai</surname>
          </string-name>
          , E. Ordentlich,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Feng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Ratnaparkhi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Somvanshi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Tjahjadi</surname>
          </string-name>
          ,
          <article-title>Scalable query n-gram embedding for improving matching and relevance in sponsored search</article-title>
          ,
          <source>in: Proceedings of the 24th ACM SIGKDD international conference on knowledge discovery &amp; data mining</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>52</fpage>
          -
          <lpage>61</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>A. Z.</given-names>
            <surname>Broder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ciccolo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fontoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Gabrilovich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Josifovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Riedel</surname>
          </string-name>
          ,
          <article-title>Search advertising using web relevance feedback</article-title>
          ,
          <source>in: Proceedings of the 17th ACM conference on information and knowledge management</source>
          ,
          <year>2008</year>
          , pp.
          <fpage>1013</fpage>
          -
          <lpage>1022</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Broder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Fontoura</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Josifovski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Riedel</surname>
          </string-name>
          ,
          <article-title>A semantic approach to contextual advertising</article-title>
          ,
          <source>in: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval</source>
          ,
          <year>2007</year>
          , pp.
          <fpage>559</fpage>
          -
          <lpage>566</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bhatia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Dahiya</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Jain</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mittal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Prabhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Varma</surname>
          </string-name>
          ,
          <article-title>The extreme classification repository: Multi-label datasets and code</article-title>
          , URL http://manikvarma.org/downloads/XC/XMLRepository.html (
          <year>2016</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Mohankumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Dodla</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>K</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Unified generative &amp; dense retrieval for query rewriting in sponsored search</article-title>
          ,
          <source>in: Proceedings of the 32nd ACM International Conference on Information and Knowledge Management</source>
          ,
          <year>2023</year>
          , pp.
          <fpage>4745</fpage>
          -
          <lpage>4751</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A. K.</given-names>
            <surname>Mohankumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Begwani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <article-title>Diversity driven query rewriting in search advertising</article-title>
          ,
          <source>in: Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery &amp; Data Mining</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>3423</fpage>
          -
          <lpage>3431</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>W.</given-names>
            <surname>Qi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Jiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Shao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Duan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Prophetnet-ads: A looking ahead strategy for generative retrieval models in sponsored search engine</article-title>
          ,
          <source>in: Natural Language Processing and Chinese Computing: 9th CCF International Conference, NLPCC 2020, Zhengzhou, China, October 14-18, 2020, Proceedings, Part II 9</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>305</fpage>
          -
          <lpage>317</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>