<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>A Method to Incorporate Temporal Seasonality into Search Ranking for Improved Relevance and User Engagement</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Shreya Mahapatra</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Anandita Chopra</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Liping Zhang</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shabhareesh Komirishetty</string-name>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Tracy Holloway King</string-name>
        </contrib>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <abstract>
        <p>Search behavior is often influenced by seasonal trends such as Christmas, Halloween, and other recurring cultural or temporal events. During these periods, users tend to search for specific topics or content aligned with the seasonal moment. However, traditional ranking systems typically overlook these signals. In this paper, we devise a framework for identifying and incorporating seasonality signals into search ranking models. Seasonality signals capture recurring patterns in user behavior over time derived from historical interaction data. Our approach detects these recurring patterns using a forecasting model and integrates them into the retrieval and ranking stages of the search pipeline. Using seasonality, the system can predict and surface content that aligns with emerging user interests based on the current temporal and regional context. Furthermore, our method accounts for cultural variations across regions, allowing more localized and relevant seasonal experiences.</p>
      </abstract>
      <kwd-group>
        <kwd>multi-modal search</kwd>
        <kwd>seasonality and temporality</kwd>
        <kwd>search ranking</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>In search systems for images, templates, and other multi-modal content, ranking approaches primarily
rely on embedding-based retrieval and user behavioral signals. These methods focus on measuring
the semantic similarity between the user query and candidate content such as images. In addition,
multi-modal search also incorporates engagement metrics such as clicks, downloads or exports to
promote popular or trending results. Although these methods work well for capturing contextual
matching and content relevance, they are essentially reactive. These methods model what users have
interacted with in the past, rather than anticipating what users are likely to look for in the near future.
As a result, such systems often fail to capture temporal intent or predict future user interests, especially
those driven by recurring seasonal trends.</p>
      <p>This paper focuses on a predictive approach: forecasting future user behavior by leveraging seasonality
signals. Seasonality signals are crucial for understanding and predicting seasonal fluctuations in various contexts,
such as user behavior or content relevance. By analyzing seasonal trends and patterns, we can tailor
search results and recommendations to align with current and upcoming events, such as holidays
or special occasions, ensuring that users receive suggestions that match their immediate needs.
Our work is in the context of Adobe Express template search, which provides users with multi-modal
templates that they can edit and remix for use on social media and in print (see Figure 1 for an example);
all examples in this paper are in that context. For instance, during the winter holidays, users searching
for “holiday invitations” or “party flyers” will receive tailored recommendations featuring Christmas,
New Year’s, or winter themes. Similarly, as summer approaches, users looking for “event banners” or “vacation
photo templates” will be presented with vibrant, summer-themed options. For these predictable
seasonal events, we can dynamically adjust search results and recommendations to feature relevant
templates, ensuring that users find the most appropriate and timely content for their needs. Furthermore,
our work considers the user’s cultural context (i.e. their country). For example, in mid-June, US results
show content related to US Independence Day (July 4th), while India results show content linked to
Yoga Day (June 21st). Our work is divided into two parts:
• Predicting the seasonality signal
• Incorporating the seasonality signal into search retrieval and ranking</p>
      <p>
        This work was conducted to improve search ranking of templates in Adobe Express. Adobe Express is
a content creation tool developed by Adobe, where templates are rich objects which contain many visual
layers and text boxes [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. The platform follows a freemium model: users with free accounts have
access to free content plus premium content that must be paid for individually, while paid users have unlimited
access to both free and premium content. This tiered structure affects user engagement patterns,
highlighting the need to analyze and incorporate seasonal trends across different user segments. The
proposed algorithm is generic and can be applied to other search ranking systems.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Prior Art</title>
      <p>
        There has been recent research on enhancing search relevance by incorporating seasonality signals.
Chen et al. [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ] address the challenge of incorporating seasonality into online product search by
utilizing a season-enhanced BERT model to capture seasonal semantic relevance between queries and
products. However, that work uses only a six-season classification system to define seasons for relevance
modeling; it does not incorporate seasonal moments such as holidays or festivals into its model. Verma
et al. [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] use a feed-forward neural network model to predict seasonality scores based on the query text
and month. They then use an L2 ranker to integrate the predicted seasonality scores and re-rank
autocomplete suggestions. This model works at a monthly granularity.
      </p>
      <p>
        In contrast, our work uses a time-series model like TBATS [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] to model seasonal trends (section
3.4). Moreover, our work operates at a daily granularity, thereby capturing finer seasonal patterns.
Additionally, our intent taxonomy includes a wider diversity of seasonal intents, such as major festivals
and holidays. Unlike prior work that focuses on global seasonal patterns, our model effectively captures
seasonality trends at the country level.
      </p>
    </sec>
    <sec id="sec-3">
      <title>3. Model and Data</title>
      <sec id="sec-3-1">
        <title>3.1. Template Data</title>
        <p>Adobe Express has a rich corpus of templates. Templates are editorially curated, customizable layouts
present within Adobe Express. A template can be thought of as a multi-modal asset that includes shapes,
icons, text, and images. In addition to the structured visual content, each template is associated with a
curated title. Insights regarding the use of templates are also provided via user engagement metrics like
clicks, exports, edits, and so forth.</p>
        <p>
          To predict the user behavior, we first devise an algorithm to classify the templates into relevant
seasonal intents. Our taxonomy [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ] includes ∼300 intents, of which 135 are categorized as seasonal,
covering both global events (e.g. New Year’s Eve; summer, which takes into account the northern and
southern hemispheres) and culturally specific local events (e.g. Diwali, Chinese New Year). While our
current taxonomy is manually curated, our framework is designed to be easily extensible to new intents.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Template Intent Classification</title>
        <p>We classify the templates into intents using the intent taxonomy described above. An unsupervised
learning technique is employed for this classification, leveraging textual information and metadata
such as title. Sentence transformers generate embeddings for both the template and the predefined
intent taxonomy, where each intent is treated as a textual description. To determine the most relevant
intent for each template, we compute the cosine similarity between the template embedding and each
intent embedding. The template is assigned the intent corresponding to the closest match. Finally, the
assigned intent is encoded as a 512-dimensional one-hot vector, where the non-zero entry holds the
confidence of the most likely intent; the additional dimensions provide capacity for future seasonal intents.</p>
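        <p>The closest-match assignment described above can be sketched as follows. This is a minimal illustration: plain numpy arrays stand in for the sentence-transformer embeddings, and the function names are ours, not the production implementation.

```python
import numpy as np

def assign_intent(template_emb, intent_embs, intent_names, dim=512):
    """Assign a template the intent whose embedding is closest by cosine
    similarity, then encode the result as a 512-dimensional one-hot-style
    vector holding the confidence of the chosen intent."""
    t = template_emb / np.linalg.norm(template_emb)
    mats = intent_embs / np.linalg.norm(intent_embs, axis=1, keepdims=True)
    sims = mats @ t                    # cosine similarity to each intent
    best = int(np.argmax(sims))
    vec = np.zeros(dim)                # extra dimensions reserved for future intents
    vec[best] = float(sims[best])      # the non-zero entry is the confidence
    return intent_names[best], vec
```
</p>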
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Few Shot Learning with Sentence Transformer</title>
        <p>Each intent described in section 3.2 is associated with a set of templates. We select ∼24000 templates
across different intents. Templates with high confidence scores are chosen as the base dataset for
training. Each template is associated with its textual description and an intent, which acts as its label.
This dataset comprises templates from varying geographical regions and languages.</p>
        <p>
          SetFit [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] is an efficient few-shot tuning algorithm for sentence transformers. It achieves high accuracy
with relatively little labeled data and so is appropriate for our use case, which requires a model that
can accurately classify user behavior into different intents with little labeled data. We trained a
SetFit model on the Adobe-licensed templates’ textual data utilizing the dataset described in section
3.1. The dataset was divided into two parts based on the locale of the content — English and
non-English (multi-lingual) training data. We utilized the ‘sentence-transformers/all-MiniLM-L6-v2’
model as our base model and trained it on Adobe-licensed templates’ English textual data. Similarly,
‘sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2’ was utilized for non-English textual
data. The CoSENT (Contrastive Sentence Embedding) loss [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] was employed for training.
        </p>
        <p>The model achieved an accuracy of 0.85 on the multi-lingual test set and an accuracy of 0.95 on the
English test set. This fine-tuned model understands the Adobe Express template data and can accurately
predict user behavior, since users typically remix these templates to create their projects.</p>
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Seasonal Forecast Prediction</title>
        <p>
          Seasonal forecasting is performed using the TBATS algorithm [
          <xref ref-type="bibr" rid="ref4">4</xref>
          ], which is particularly effective for
time series data with complex seasonal patterns. TBATS takes as input a time series of intents from
the past four years (four years was selected based on the availability of accurate user behavioral data
in Express template search). We extract the sequence of project-level intents created by users, aggregate
it over time, and convert it into time-series data. For each intent, we construct a time series representing
the count of occurrences of that intent over the four-year window. The TBATS model learns patterns
from historical trends and predicts the seasonal intent for a time period. This modeling is performed
for each geographical region (e.g. larger regions like the United States, Europe, and Asia-Pacific and/or
specific countries). Thus, our model predicts the relevant seasonal moment and intent for a given date
and region. The implementation steps are outlined below:
1. For each pre-defined region, perform the following steps.
2. Select the last four years of data for prediction.
3. Predict the intent of the user behavior data using the model defined in section 3.3.
4. Filter out intents with very low confidence values to reduce noise; only user behavior data with
high confidence values are retained.
5. Aggregate the predicted intent data on a weekly basis, using Monday as the first day of each
week, as depicted in table 1.
6. Compute the z-score for each intent’s occurrences via formula (1).
        </p>
      <p>z-score = (count − mean) / standard deviation (1)
7. Use the TBATS model with a seasonal period of 52.18 weeks. The TBATS model predicts
forecasts for the next 104 weeks (2 years). Forecast scores are generated for each intent across
weekly intervals.
8. Apply threshold filtering to the predicted forecast scores to focus on statistically significant
seasonal patterns. Specifically, only the intents with scores exceeding the 90th percentile are
chosen.
9. Select the most likely forecasted intent for each week.</p>
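      <p>The z-score normalization (formula (1)) and the 90th-percentile filter of step 8 can be sketched as follows; the TBATS forecasting itself (step 7) is assumed to come from a time-series library and is not reproduced here, and the function names are illustrative.

```python
import numpy as np

def zscore(counts):
    # Formula (1): (count - mean) / standard deviation, applied per intent
    # time series of weekly occurrence counts
    counts = np.asarray(counts, dtype=float)
    return (counts - counts.mean()) / counts.std()

def significant_intents(forecast_scores):
    # Step 8: keep only intents whose forecast score exceeds the 90th percentile
    scores = np.asarray(forecast_scores, dtype=float)
    cutoff = np.percentile(scores, 90)
    return [i for i, s in enumerate(scores) if s > cutoff]
```
</p>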
        <p>The prediction also covers seasons (i.e. spring, summer, fall (autumn), and winter) across the world.
That is, it understands seasons based on geo-location and date. We derive the value of the current
ongoing season for all top-tier countries based on the hemisphere and date. For instance, if the date is
March 2nd and the country is Japan, we would categorize the season as ‘spring,’ since Japan is in the
northern hemisphere. In contrast, a user from a country in the southern hemisphere on the same date
would experience ‘fall’, reflecting the opposite seasonal pattern. These seasons are added to the list of
forecasted intents from the model.</p>
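        <p>The hemisphere-based season derivation can be sketched as below. The meteorological month boundaries are an assumption on our part, since the exact boundary convention is not specified here.

```python
from datetime import date

# Season by month in the northern hemisphere (meteorological boundaries assumed)
NORTHERN = {12: "winter", 1: "winter", 2: "winter",
            3: "spring", 4: "spring", 5: "spring",
            6: "summer", 7: "summer", 8: "summer",
            9: "fall", 10: "fall", 11: "fall"}
OPPOSITE = {"winter": "summer", "spring": "fall",
            "summer": "winter", "fall": "spring"}

def season_for(d, hemisphere):
    # A country in the southern hemisphere experiences the opposite season
    s = NORTHERN[d.month]
    return s if hemisphere == "north" else OPPOSITE[s]
```

For example, March 2nd yields ‘spring’ for Japan (northern hemisphere) and ‘fall’ for a southern-hemisphere country.</p>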
        <p>The algorithm takes into account the cultural aspects of a country. There are certain festivals which
have greater significance in a particular country. For instance, both International Women’s Day and
Holi occur in March. International Women’s Day is a global event, whereas Holi has more cultural
significance in India. Therefore, Holi should be ranked higher than International Women’s Day in
India. The system ranks culturally significant intents higher than global events. This is achieved by
manually tagging certain intents in the taxonomy as country-specific, along with the countries they apply
to. Moreover, we leverage Large Language Models (LLMs) to predict the cultural significance of an
intent for a given country. As a result, the cultural significance of an intent is derived from a combination
of manual curation and the LLM’s knowledge.</p>
        <p>Some events exhibit limited historical behavior but demonstrate strong relevance because of recent
user behavior signals (i.e. trending signals). The system incorporates these signals by manual addition
of new intents based on region and date. For intents not captured by the TBATS algorithm due to
limited historical data, we leverage LLMs to estimate their relevance scores. The LLM is provided with
an input of existing intents along with their relevance score for the specified region and date, i.e. the
ones forecasted by the TBATS algorithm for the given date and region. Based on the provided context,
the LLM generates a relevance score for the new intent. If a new trending signal with limited
historical data emerges, we simply need to include the new intent in the taxonomy (section 3.1), re-classify
the templates, and apply the same LLM-based technique to forecast its seasonality score.</p>
        <p>The overall seasonal forecasting system described above is shown in Figure 2.</p>
        <p>Figure 3 shows the distribution of Independence Day and New Year in the US over the past two
years. As expected, the intents exhibit a cyclical pattern, becoming more prevalent as their actual date
approaches.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Seasonality in Search</title>
      <p>
        Adobe Express template search leverages a RankNet-based learning-to-rank model [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] to determine the
relevance of templates. This model is trained on user engagement metrics such as clicks and exports,
the content embedding and the manually curated tags and title (see [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] on how the embedding-based
features are combined with user behavior and metadata features for Express matching and ranking).
These features help the model learn user preferences and rank templates accordingly. However,
template ranking is based on users’ past interactions and cannot anticipate future usage
of the templates, including seasonally predictable preferences (see Figure 1 where a broad query like
instagram should include seasonally relevant templates, not just historically popular ones).
      </p>
      <p>To further enhance the search experience, this work incorporates a seasonality signal into the current
ranking framework. This signal, derived from the forecasting algorithm described in section 3.4,
identifies the relevant seasonal moment at a particular time of the year. Integrating the seasonal signal
into the search ranking not only reflects historical usage preferences but also anticipates the seasonal
usage, leading to a more context-aware search experience.</p>
      <sec id="sec-4-1">
        <title>4.1. Dataset and Training</title>
        <p>Each row of the training dataset consists of a unique combination of query, country and date. The
columns consist of features such as user engagement metrics, template embeddings, and metadata-related
fields. The dataset is enhanced by including the seasonality signal. The seasonal signal is incorporated
through the following steps:
1. For a given date and country, predict the seasonal intents using the forecasting model (section
3.4). This results in a multi-hot vector, where each non-zero value corresponds to a specific intent
and represents the model’s confidence score for that intent.</p>
        <p>Example: Given a user searching for a template on December 24th in the US, the forecast model
may identify high-relevance seasonal intents such as Christmas (0.8), Winter (0.5), and New Year
(0.7). These are encoded as a seasonal intent vector with several intents with confidence scores
but many zeros representing extremely unlikely intents for that date:</p>
        <p>
          [0, 0.8, 0, 0, 0.5, 0, … 0, 0.7, 0, … 0]
2. Each template is associated with a seasonal embedding, represented as a one-hot vector (section
3.2). The non-zero value in this vector represents the confidence of the intent.
3. To compute the seasonal relevance of a template for a given query, we take the dot product
between the template’s seasonal embedding vector and the forecasted intent vector generated for
the query’s date and country. The dot product serves as the scalar score, signifying how well
a template aligns with the query’s forecasted intent. A higher score signifies higher alignment
with the forecasted intent.
4. Seasonal events are more relevant for broad, high-level queries such as flyer, invitation, sale,
or poster, where users are looking to create event-driven content. To effectively capture this
seasonal relevance, the existing training dataset is enhanced by including more broad, high-level
queries. This is achieved by filtering the behavioral dataset to include more broad queries, thereby
increasing the model’s exposure to seasonally influenced search behavior.
5. This dataset is further enriched by combining it with other user engagement signals. This data,
combined with seasonality, provides a good balance between showing templates that are seasonal,
relevant to the user’s context and recently created. The user engagement signals include:
• Recency: Captures how recently the template was published.
• Clicked Template Id and BM25 Scores: Captures the textual relevance between the user’s
query and the clicked templates’ metadata (e.g. title, content) by leveraging BM25 scores.
• Template Embeddings: Encoding each template into a vector space to capture the semantic
relationship between the user’s search query and template [
          <xref ref-type="bibr" rid="ref1">1</xref>
          ].
• Template Metadata: This includes key attributes which provide contextual information about
the template, such as user segments (Enterprise, Education K-12), task type
(instagramstory, poster), and region (US, India, France, etc.). These attributes are associated with a template
and signify its intended audience and purpose.
        </p>
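        <p>The seasonal-relevance computation of steps 1–3 can be sketched as a dot product between the forecasted multi-hot intent vector and the template’s one-hot seasonal embedding. This is a minimal illustration with vectors truncated for readability; the production vectors are 512-dimensional.

```python
import numpy as np

def seasonal_relevance(forecast_vec, template_vec):
    # Step 3: dot product between the forecasted intent vector for the query's
    # date and country and the template's seasonal embedding; a higher score
    # signifies better alignment with the forecasted intent.
    return float(np.dot(forecast_vec, template_vec))

# December 24th, US: Christmas (0.8), Winter (0.5), New Year (0.7) forecast
forecast = np.array([0.0, 0.8, 0.0, 0.0, 0.5, 0.0, 0.7, 0.0])
# A Christmas template classified with confidence 0.9
template = np.array([0.0, 0.9, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
score = seasonal_relevance(forecast, template)   # 0.8 * 0.9 = 0.72
```
</p>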
        <p>We trained a linear SVM model using the RankNet loss function in PyTorch. The clicked template
ids correspond to the positive samples, while negative samples are drawn from templates in the search
query’s recall set that were not clicked. The resulting model produces a weight score for each
feature, including seasonality, based on its contribution to ranking relevance.</p>
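        <p>A minimal numpy sketch of the RankNet pairwise loss used for this training; the production model is trained in PyTorch, and the linear scorer and names below are illustrative.

```python
import numpy as np

def ranknet_loss(w, x_pos, x_neg):
    """RankNet pairwise loss for a linear scoring model.
    w: learned feature weights (one of which is the seasonality feature);
    x_pos: features of clicked (positive) templates;
    x_neg: features of paired non-clicked templates from the recall set."""
    s_pos = x_pos @ w
    s_neg = x_neg @ w
    # Negative log-likelihood of ranking each positive above its paired negative
    return float(np.mean(np.log1p(np.exp(-(s_pos - s_neg)))))
```

Minimizing this loss pushes the score of clicked templates above that of non-clicked ones, yielding the per-feature weights used at ranking time.</p>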
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Applying Seasonal Boost at Query Time</title>
        <p>At query time, for a given query’s date and country, the forecast model returns the forecasted intent.
To incorporate this seasonal information, we compute a weighted boost as follows:
Seasonal_Boost = Forecast_Confidence × SVM-learned_Weight (2)
The weighted score reflects both the seasonal alignment and the learned impact on user behavior.
During template re-ranking, this boost is applied to rescore the template. A dot product is
computed between the weighted forecast intent vector and the template’s seasonal embedding to derive a
seasonal relevance score. A higher dot product indicates a greater match with the current date’s forecasted
intent, and hence boosts the template’s ranking in search results.</p>
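        <p>Formula (2) and the rescoring step can be sketched as follows. The additive combination with the base ranking score is our assumption for illustration; the text specifies only that the boost rescores the template.

```python
import numpy as np

def apply_seasonal_boost(base_score, forecast_vec, template_vec, svm_weight):
    # Formula (2): Seasonal_Boost = Forecast_Confidence x SVM-learned_Weight,
    # where the confidence is the dot product of the forecast intent vector
    # and the template's seasonal embedding.
    seasonal_boost = float(np.dot(forecast_vec, template_vec)) * svm_weight
    return base_score + seasonal_boost
```
</p>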
        <p>The seasonality boost is applied along with other factors, which strike a balance between seasonal
relevance, contextual relevance, and recency. The adjustments help provide a diverse user experience
by promoting newly created templates along with the seasonal and contextually relevant templates.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Evaluation</title>
      <sec id="sec-5-1">
        <title>5.1. Offline Large-scale Evaluation</title>
        <p>Our offline large-scale evaluation uses historical query logs to determine the effect of the new
ranking on the user experience. We computed Mean Reciprocal Rank (MRR) over 15000 randomly
sampled search queries. Table 3 shows the impact of adding the seasonality signal on MRR in the US
region. For all query frequencies (head, torso, tail) the MRR increases, suggesting that adding seasonality
would reduce the amount of scrolling that users would have to do to find relevant results. Both with
and without seasonality, MRR is higher for more frequent queries. This at first seems counter-intuitive
since broad head queries have many relevant results that users have to inspect to select from. However,
the relatively limited selection of templates for tail queries combined with the fact that templates are
editable and hence can be easily modified by users means that users have to scroll to find the ideal
template for niche intents, while for head queries they can select from the top results.</p>
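        <p>The MRR metric used in this evaluation can be sketched as follows (a standard definition; variable names are ours):

```python
def mean_reciprocal_rank(ranked_results, relevant):
    """MRR over queries: average reciprocal rank of the first relevant
    (e.g. clicked) result per query.
    ranked_results: one ranked list of result ids per query;
    relevant: one set of relevant ids per query."""
    total = 0.0
    for results, rel in zip(ranked_results, relevant):
        for rank, rid in enumerate(results, start=1):
            if rid in rel:
                total += 1.0 / rank
                break
    return total / len(ranked_results)
```

A higher MRR means the first relevant result sits nearer the top, i.e. less scrolling for the user.</p>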
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Offline Manual Evaluation</title>
        <p>We conducted an internal side-by-side evaluation, where users, including Adobe product managers,
were asked to label which search experience they preferred based on the top 20 templates for a randomly
selected set of queries. Table 4 shows the statistics of the evaluation. Incorporating seasonality into
search improves the search experience: for 56% of the search queries, evaluators indicated that the
seasonality-enhanced search experience was much better.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusion and Future work</title>
      <p>By integrating seasonality signals into the search ranking pipeline, we aim to enhance the search
experience by providing more seasonal content, thereby diversifying the content in search results. This
method not only enriches the search experience, but also provides a scalable and robust way to
extract seasonality signals and incorporate them into various systems. Many Adobe Express users
leverage templates to create social media content for their e-commerce businesses. In the context of
e-commerce, seasonally relevant search results are particularly helpful because seasonal moments such
as back to school, holidays, and regional festivals greatly influence user purchasing behavior and intent.
By enhancing search results with seasonality signals, we can anticipate user needs and surface relevant
results based on the seasonal pattern.</p>
      <p>Our next steps are to run an A/B test for Express template search to determine whether user behavior
on Adobe Express reflects the offline predictions (section 5). In addition, we are integrating the
seasonality signal into template collection browse. These template collections are similar to broad
queries (e.g. collections for Instagram Square Posts and for Posters) and should benefit from having
seasonally-relevant templates as opposed to ranking based on popularity and recency.</p>
      <p>As future work, this model can be extended to other Adobe products such as Adobe Stock, which is
an image and video marketplace. Incorporating seasonality into Adobe Stock could help to surface more
contextually relevant seasonal images and videos. Moreover, the current model only leverages text
for intent classification; we can leverage multi-modal models that incorporate visual features to
classify templates and predict user behavior. Furthermore, the seasonality signal can be incorporated
into many other workflows, such as autocomplete and email campaigns.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT for grammar and spelling checking.
After using this tool/service, the author(s) reviewed and edited the content as needed and take full
responsibility for the publication’s content.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Aroraa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Srikantan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Uvalle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Valls-Vargas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Vardhan</surname>
          </string-name>
          ,
          <article-title>Smart multi-modal search: Contextual sparse and dense embedding integration in adobe express</article-title>
          ,
          <source>in: Proceedings of The 1st Workshop on Multimodal Search and Recommendations (CIKM MMSR '24)</source>
          , CEUR,
          <year>2024</year>
          . URL: https://ceur-ws.org/Vol-
          <volume>3859</volume>
          /paper3.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Meng</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Jiao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Ni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Gao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Momma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <article-title>Improving product search with season-aware query-product semantic similarity</article-title>
          ,
          <source>in: Companion Proceedings of the ACM Web Conference</source>
          <year>2023</year>
          ,
          <year>2023</year>
          , pp.
          <fpage>864</fpage>
          -
          <lpage>868</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>P.</given-names>
            <surname>Verma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Zhong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Rajan</surname>
          </string-name>
          ,
          <article-title>Seasonality based reranking of e-commerce autocomplete using natural language queries</article-title>
          ,
          <source>arXiv preprint arXiv:2308.02055</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>A. M.</given-names>
            <surname>De Livera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. J.</given-names>
            <surname>Hyndman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. D.</given-names>
            <surname>Snyder</surname>
          </string-name>
          ,
          <article-title>Forecasting time series with complex seasonal patterns using exponential smoothing</article-title>
          ,
          <source>Journal of the American statistical association 106</source>
          (
          <year>2011</year>
          )
          <fpage>1513</fpage>
          -
          <lpage>1527</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Sharma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. H.</given-names>
            <surname>King</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Poddar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Blank</surname>
          </string-name>
          ,
          <article-title>Augmenting knowledge graph hierarchies using neural transformers</article-title>
          ,
          <source>in: Proceedings of ECIR24 Industry Track</source>
          ,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>L.</given-names>
            <surname>Tunstall</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Reimers</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U. E. S.</given-names>
            <surname>Jo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Bates</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Korat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Wasserblat</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Pereg</surname>
          </string-name>
          ,
          <article-title>Efficient few-shot learning without prompts</article-title>
          ,
          <source>arXiv preprint arXiv:2209.11055</source>
          (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>J.</given-names>
            <surname>Su</surname>
          </string-name>
          ,
          <article-title>CoSENT: A more efficient sentence vector scheme than Sentence-BERT</article-title>
          ,
          <year>2022</year>
          . URL: https://kexue.fm/archives/8847.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>C.</given-names>
            <surname>Burges</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Shaked</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Renshaw</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Lazier</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Deeds</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hamilton</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Hullender</surname>
          </string-name>
          ,
          <article-title>Learning to rank using gradient descent</article-title>
          ,
          <source>in: Proceedings of the 22nd International Conference on Machine Learning (ICML)</source>
          , ACM, New York, NY, USA,
          <year>2005</year>
          , pp.
          <fpage>89</fpage>
          -
          <lpage>96</lpage>
          . URL: https://doi.org/10.1145/1102351.1102363. doi:10.1145/1102351.1102363.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>