<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <issn pub-type="ppub">1613-0073</issn>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Adapting Sequential Recommender Models to Content Recommendation in Chat Data using Non-Item Page-Models</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Albin Zehe</string-name>
          <email>zehe@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Elisabeth Fischer</string-name>
          <email>elisabeth.fischer@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Jonas Kaiser</string-name>
          <email>jonas.kaiser@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Toni Wagner</string-name>
          <email>toni.wagner@vaudience.ai</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andreas Hotho</string-name>
          <email>hotho@informatik.uni-wuerzburg.de</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Workshop</string-name>
        </contrib>
        <contrib contrib-type="editor">
          <string-name>Sequential Recommendation, Non-Item Pages, Content Enrichment</string-name>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Data Science Chair, CAIDAS, University of Würzburg</institution>
          ,
          <addr-line>Am Hubland, 97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>vAudience</institution>
          ,
          <addr-line>John-Skilton-Straße 22, 97074 Würzburg</addr-line>
          ,
          <country country="DE">Germany</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Most research in sequential recommender models has focused on sequences that are purely made of items (e.g., movies, page clicks), excluding additional elements in the sequence that may provide more information for finding the next relevant item. Recently, it has been proposed to include non-item pages (e.g., list pages or blog posts) in order to represent the users' intent more clearly. In this paper, we transfer the same modelling principle to sequences made of items and text messages. This enables us to adapt arbitrary sequential recommender methods to a new application area: We can use any sequential recommendation model to recommend links to relevant content in a chat setting, using the history of previous messages and mentioned items as context. We evaluate our models on four different datasets and show that we can identify content relevant to the conversations well when using pre-trained embeddings for the messages in the conversations.</p>
      </abstract>
      <kwd-group>
        <kwd>Sequential Recommendation</kwd>
        <kwd>Non-Item Pages</kwd>
        <kwd>Content Enrichment</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
or sports events. In all of these datasets, our goal is to detect which items (e.g., players, movies) are
currently discussed in the conversation and to recommend relevant links to the users. Figure 1 gives a
visualization of our task and approach.</p>
      <p>Our contributions in this paper are as follows: (a) We adapt sequential recommender models to
our task of providing additional content relevant to the conversation. (b) We introduce four datasets
for this task, partially derived from existing datasets, enabling further research. (c) We compare our
approach with baseline recommender models, showing that it is effective in finding relevant content
for conversations.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        As neural networks have gained widespread adoption, they have become a popular choice for modeling
and learning user behavior sequences. Sequential recommender models have been developed based
on recurrent neural networks [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] or convolutional neural networks [
        <xref ref-type="bibr" rid="ref2">2, 9, 10</xref>
        ]. Numerous architectures
have since adapted the attention mechanism [11] for sequential recommendation [
        <xref ref-type="bibr" rid="ref3">3, 12, 13, 14, 15</xref>
        ].
      </p>
      <p>
        All of these works utilize only the item sequence itself, but the inclusion of additional item or user
information has also been investigated by several studies. Item features have been modeled in RNNs
[
        <xref ref-type="bibr" rid="ref4">4, 16</xref>
        ], CNNs [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], and transformer models [
        <xref ref-type="bibr" rid="ref6">6, 17, 18, 19, 20</xref>
        ]. Some approaches merge additional item
information directly in the attention layer [21, 22]. There are also works that study the inclusion of
user representations in a similar way for different architectures [
        <xref ref-type="bibr" rid="ref2">2, 23, 24, 25, 26</xref>
        ]. Information that is
not directly related to a user or an item, yet still part of the sequence, has not been investigated as
extensively. Some works for the Coveo challenge [27] are an exception, as they include interactions
that are not tied to specific items. These non-item interactions are included in sequential recommender
models by [28, 29, 30], but they are represented the same way as item interactions. An explicit modeling
and a formal definition of non-item pages was first introduced for transformer-based models by Fischer
et al. [7]. A more extensive study [8] expands the setup to a wider set of sequential recommender models
and allows the integration of non-items with generic embedding representations (see Section 3.1).
      </p>
      <p>Providing relevant content items based on textual information can also be viewed as a form of Entity
Linking (EL). Usually, the task involves the recognition of entities explicitly mentioned in texts and
their disambiguation [31]. Since many conversations feature indirect references to entities, where
context clues must be leveraged to identify the target, many EL approaches face difficulties recognizing
and resolving such references. This special setting – Implicit Entity Linking – has been investigated
primarily in the setting of Twitter posts [32, 33, 34]. These approaches, however, rely heavily on manual
feature selection rather than current deep learning models. Additionally, due to the setting, they often
only consider limited conversational context directly. In this work, we investigate how sequential
recommendation models can be adapted to this task. Applying these models to implicit entity linking is
beneficial, since it enables us to directly transfer any progress in sequential recommendation to implicit
entity linking.</p>
      <p>Our setting is also related to conversational recommendation [35]. However, while the goal there
is to lead a conversation with a user, ask questions designed to narrow down the space of possible
items, and finally provide a recommendation, our goal is to detect the items that are mentioned in a
conversation to recommend pages providing additional content relevant to the conversation.</p>
      <p>Our model can be used as one component of a conversational recommender system, for example
to identify the movies/items that the user mentions and provide them to the recommender model in
charge of determining new suggestions.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Background</title>
      <sec id="sec-3-1">
        <title>3.1. Sequential Item Recommendation with Non-Item Pages</title>
        <p>Non-item pages have been proposed recently as a way to incorporate sequence elements other than
the items that are potential targets for recommendation into sequential recommender systems [7, 8].
In essence, they can be of arbitrary nature, representing additional content related to the items, for
example, a page listing items of a specific category, a search query, or a blog post about a set of items.
Fischer et al. [8] propose different ways of representing these non-item pages, depending on their
nature. The two representations relevant for our setting are: (1) Unique Page-ID (UPID): a unique
identifier can always be assigned to any non-item page and integrated into the sequence as an item;
possible drawbacks are high sparsity and a growing vocabulary. (2) Page Embedding (PE): a single
placeholder id is used to represent all non-items. The information about the non-item is included by
adding an embedding representation of the non-item to the id in the embedding layer.</p>
        <p>In this work, we use this non-item page modeling to include chat messages as information about the
current topic in a conversation.</p>
      </sec>
      <sec id="sec-3-1">
        <title>3.2. Task Description</title>
        <p>Our task is to enrich conversations in chat systems with content that is relevant to the current
conversation. We model this task as a sequential recommendation problem, treating chat messages as non-item
pages and relevant content as items, for example, in the form of links to Wiki pages.</p>
        <p>More formally, we closely follow the setup of [8] and introduce our notation as follows: Our work
aims to solve the task of recommending items (i.e., relevant content) in a conversation (modeled as a
session) given a sequence of chat messages and previously mentioned items. We define the item set
as $I = \{i_1, i_2, \dots, i_{|I|}\}$ and the set of non-items as $\mathcal{M} = \{m_1, m_2, \dots, m_{|\mathcal{M}|}\}$. The set of conversations is
defined as $C = \{c_1, c_2, \dots, c_{|C|}\}$. For each conversation $c$, we denote the sequence of interactions as
$s_c = [s_1, s_2, \dots, s_n] \subseteq \{I \cup \mathcal{M}\}^n$. Using these notations, our goal is to solve the task of predicting the next
item $s_{t+1} \in I$ for every interaction $s_t$ in each sequence $s_c \in C$.</p>
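        <p>To make the notation concrete, the following minimal Python sketch (with illustrative toy data, not taken from our datasets) shows how a conversation mixing messages and items maps to next-item prediction targets:</p>
        <preformat>
# Toy sketch of the task setup; all names and data are illustrative only.
items = {"The Godfather", "The Matrix"}   # item set I (recommendation targets)
conversation = [                          # sequence s_c mixing non-items and items
    "I like",                             # message m_1
    "The Godfather",                      # item i_1
    "Do you like",                        # message m_2
    "The Matrix",                         # item i_2
]
# For every prefix ending right before an item, the model should predict that item.
for t, s in enumerate(conversation):
    if s in items:
        print(f"context={conversation[:t]} -> target={s}")
        </preformat>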
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Methodology</title>
      <p>We model our task as a sequential recommendation problem with sequences consisting of items and
possibly chat content. This enables us, in principle, to use any sequential recommendation architecture.
In this section, we describe our model architectures, our different ways of representing conversations
as sequences, and our subsequence sampling.</p>
      <sec id="sec-4-1">
        <title>4.1. Representing Conversations as Sequences</title>
        <p>Here, we discuss our representation of conversations with mentioned items as sequences for a
recommender model. Assume this short conversation as an example, where items are marked in bold:</p>
        <p>User 1: I like <bold>The Godfather</bold></p>
        <p>User 2: Do you like <bold>The Matrix</bold>?</p>
        <p>We compare different variants of modeling this as a sequence of items and possibly non-items:</p>
        <p><bold>Items Only</bold> As a baseline, we apply standard sequential recommendation models directly to the
sequence of items mentioned in the conversations, ignoring the chat messages. In this model, the
conversation above becomes $s_{items}$ = (<bold>The Godfather</bold>, <bold>The Matrix</bold>). This can work on some datasets
that are close to the traditional sequential recommender setting (e.g., in the ReDial-Mention dataset, cf.
Section 5.2). However, we expect that this setting does not carry enough information in other datasets
and that the content of the conversation is required to adequately identify the items that are referred to
in the conversation.</p>
        <p><bold>Unique Token ID (UTID)</bold> Therefore, we propose to switch to a non-item modeling setting to
include chat content. As a first step towards this, we split the messages into tokens
corresponding to words and introduce a new "word-item" $w$ to the set of non-items $\mathcal{M}$ for each unique
token occurring in the dataset. We then map each occurrence of a token to the corresponding
word-item and include these word-items in the sequence. Now, our sequence becomes $s_{UTID}$ =
(I, like, The, Godfather, <bold>The Godfather</bold>, Do, you, like, The, Matrix, ?, <bold>The Matrix</bold>). This means that
the model only has to identify the target items that have been referred to in the conversation rather
than predict which item could be brought up next. Note that the modelling here is still very close to a
traditional sequential recommender, with the only difference being that our models are only trained to
predict actual items from $I$, that is, word-items from $\mathcal{M}$ are never the target for recommendation, as
we always build our sequences to end with actual items. This setting is identical to the Unique Page-ID
(UPID) setting in [8].</p>
        <p><bold>Token Embedding (TE)</bold> Since the previous setting introduces a very high number of items to the
vocabulary (each word in the texts is mapped to a separate item), making it more difficult to train the
models, we explore the Page Embedding setting from [8] next: Here, we map all words to a single
placeholder token T, leading to a set of non-items with only one entry, $\mathcal{M} = \{T\}$. Each occurrence of
this token is then associated with a pre-computed embedding $\hat{e}$ for the word it corresponds to. These
embeddings can, for example, be extracted from language models or simple word embeddings. With this
approach, our example conversation is represented by the id sequence $s_{TE}$ = (T, T, T, T, <bold>The Godfather</bold>,
T, T, T, T, T, T, <bold>The Matrix</bold>). The actual content of the tokens will only be represented by the
corresponding embeddings.</p>
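        <p>As a minimal illustration of the representations introduced so far (Items Only, UTID, TE), the following Python sketch converts a toy conversation into the three id sequences; all names are illustrative and not taken from our code base:</p>
        <preformat>
# Sketch: converting a conversation into the three sequence representations.
conversation = [("msg", "I like"), ("item", "The Godfather"),
                ("msg", "Do you like"), ("item", "The Matrix")]

# Items Only: keep the item ids, drop all messages.
s_items = [x for kind, x in conversation if kind == "item"]

# UTID: one unique "word-item" id per token occurring in the dataset.
s_utid = []
for kind, x in conversation:
    s_utid += x.split() if kind == "msg" else [x]

# TE: every token becomes the shared placeholder T; its content is carried
# by a pre-computed embedding aligned with the same sequence position.
s_te, te_texts = [], []
for kind, x in conversation:
    if kind == "msg":
        for token in x.split():
            s_te.append("T")
            te_texts.append(token)  # token text to be embedded separately
    else:
        s_te.append(x)
        te_texts.append(None)       # actual items need no content text here

print(s_items)  # ['The Godfather', 'The Matrix']
print(s_utid)   # ['I', 'like', 'The Godfather', 'Do', 'you', 'like', 'The Matrix']
print(s_te)     # ['T', 'T', 'The Godfather', 'T', 'T', 'T', 'The Matrix']
        </preformat>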
        <p><bold>Message Embedding (ME)</bold> While using Token Embeddings solves the problem of producing a very
large vocabulary, it still leads to a high sequence length, since each word is included in the conversation
as a separate element. However, the non-item page modeling also allows us to treat entire messages as
non-items. Therefore, in our final setting, we map each message to a placeholder token M, again leading
to a singleton set of non-items $\mathcal{M} = \{M\}$. As in the previous setting, this placeholder token is shared
by all messages, and the content of the message is represented by a precomputed embedding $\hat{e}$, for
example from a language model. In this setting, our sequence becomes $s_{ME}$ = (M, <bold>The Godfather</bold>, M,
<bold>The Matrix</bold>).</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Sequential Recommender Models</title>
        <p>We utilize a set of popular recommender models which all leverage the sequence of item ids. While
UTID representations can be used without model changes, we need to adjust the embedding layer to
include additional embedding representations for non-items. The common setup for the embedding
layer, with embedding size $d$, looks as follows:
(i) an id embedding $E_{id} \in \mathbb{R}^{|I| \times d}$, and
(ii) an optional positional embedding $E_{pos} \in \mathbb{R}^{n \times d}$ to encode the position of the items in the sequence,
with $n$ as the maximum input sequence length.</p>
        <p>To include any additional precomputed embedding representation $\hat{e} \in \mathbb{R}^{|\hat{e}|}$ for an interaction $s_t$, we add
(iii) a linear layer $f(\hat{e}) = W \hat{e} + b$, with weight matrix $W \in \mathbb{R}^{|\hat{e}| \times d}$ and bias $b \in \mathbb{R}^{d}$, which scales the
representation to the embedding size $d$.</p>
        <p>The final output of the embedding layer is created by summing up all components. For each sequence
step $t$, we embed the id of interaction $s_t$ with $e_t = E_{id}(s_t)$, the position with $p_t = E_{pos}(t)$, and the scaled
precomputed representation $\hat{e}_t$ with $\tilde{e}_t = f(\hat{e}_t)$. We compute $h_t^0 = e_t + p_t + \tilde{e}_t$ as the input for the
following layers of the sequential recommender. This allows us to add additional latent representations
for any interaction, for non-items as well as items. Specifically, it also enables the use of the Token
Embedding and Message Embedding strategies (cf. Section 4.1), which use only one shared non-item id
and represent the content of the non-items using pre-computed embeddings.</p>
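        <p>To make the extended embedding layer concrete, here is a minimal PyTorch sketch of the summation described above; it is a simplified illustration under our notation, not the exact implementation of [8], and all layer and variable names are our own:</p>
        <preformat>
import torch
import torch.nn as nn

class NonItemEmbedding(nn.Module):
    """Sums id, positional, and projected content embeddings: h0 = e + p + f(e_hat)."""
    def __init__(self, num_ids: int, max_len: int, d: int = 64, content_dim: int = 384):
        super().__init__()
        self.id_emb = nn.Embedding(num_ids, d)    # (i) id embedding
        self.pos_emb = nn.Embedding(max_len, d)   # (ii) positional embedding
        self.proj = nn.Linear(content_dim, d)     # (iii) scales e_hat to size d

    def forward(self, ids: torch.Tensor, content: torch.Tensor) -> torch.Tensor:
        # ids: (batch, seq_len) interaction ids; content: (batch, seq_len, content_dim)
        # pre-computed embeddings (e.g., zeros for items without content features).
        positions = torch.arange(ids.size(1), device=ids.device)
        return self.id_emb(ids) + self.pos_emb(positions) + self.proj(content)

# Toy usage: a batch of 2 sequences of length 4 with 384-dim message embeddings.
layer = NonItemEmbedding(num_ids=100, max_len=50)
h0 = layer(torch.randint(0, 100, (2, 4)), torch.randn(2, 4, 384))
print(h0.shape)  # torch.Size([2, 4, 64])
        </preformat>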
        <sec id="sec-4-1-1">
          <title>4.3. Subsequence Sampling</title>
        <p>In our setting, not only predicting the last item in a sequence is of relevance, but rather predicting all
items given the chat context up to this point. Therefore, we generate a set $S_c$ of subsequences from
each sequence $s_c$ in our datasets in the following way: We select all items $s_t \in I$ in $s_c$ and, for each of
these items, generate a subsequence $s_{c,t} = s_c[1:t]$ containing the sequence up to item $s_t$. Should a
subsequence exceed the maximum sequence length set for the model, we truncate it from the left side.
We evaluate our models on each of these subsequences to get an accurate estimate of their performance.</p>
        <p>Since some of our datasets are small, we optionally also employ the same subsequence generation
scheme for the training data, training on each subsequence and therefore increasing the number of
training samples.</p>
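        <p>A minimal Python sketch of this sampling scheme (with illustrative names, not our actual pipeline code):</p>
        <preformat>
# Sketch of the subsequence sampling scheme described above.
def sample_subsequences(sequence, items, max_len=50):
    """Yield one prefix per item position, ending at that item and left-truncated."""
    for t, s in enumerate(sequence, start=1):
        if s in items:                  # only actual items are prediction targets
            prefix = sequence[:t]       # context up to and including item s_t
            yield prefix[-max_len:]     # truncate from the left if too long

seq = ["I", "like", "The Godfather", "Do", "you", "like", "The Matrix"]
for sub in sample_subsequences(seq, items={"The Godfather", "The Matrix"}):
    print(sub)
# ['I', 'like', 'The Godfather']
# ['I', 'like', 'The Godfather', 'Do', 'you', 'like', 'The Matrix']
        </preformat>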
        </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Datasets</title>
      <p>We use four datasets in our experiments:
1. ReDial-Mention: conversations between two users, where the task is to recommend links to the
movies mentioned in the conversation,
2. HandballSynth: synthetic conversations in streams of handball games from the European
Championship, where the goal is to link the profiles of mentioned players and teams,
3. Bestatter: conversations between users and a chatbot, with the task of recommending pages
with additional information about the discussed topic, and
4. Twitch: chat sessions in e-sports streams, aiming to recommend wiki pages with information
about the players, teams, in-game characters, and other entities mentioned by the users.</p>
      <p>Descriptive statistics for all annotated datasets are provided in Table 1.1
1All datasets are available from our repository at https://github.com/LSX-UniWue/non-items-recbole/tree/kars-workshop-24.</p>
      <sec id="sec-5-1">
        <title>5.1. Dataset Construction</title>
        <p>Generally, we construct samples for our task from conversations in the following way: Each conversation
is converted into a sequence $s$ consisting of items $i \in I$ and potentially text messages $m \in \mathcal{M}$. For Items
Only, we build a sequence from every item $i$ mentioned in the conversation, discarding text messages.
In all other variants, we build a sequence by representing each message $m$ either as the sequence of
its words or as a single token M, as described in Section 4. If one or multiple items $i$ are relevant to a
message $m$, we append the items directly after $m$ as targets for our recommender, in the order in which
they were discussed in $m$.</p>
      </sec>
      <sec id="sec-5-2">
        <title>5.2. ReDial-Mention</title>
        <p>We adapt the ReDial dataset for conversational recommendation [36] to our task. ReDial consists of
conversations between two users, one "seeker", who is looking for movie recommendations, and one
"recommender", who tries to find fitting movies to suggest to the seeker. This dataset is suitable for
our task because it contains conversations annotated with movies that are mentioned in the messages.
We construct samples for our task from ReDial as described above. ReDial contains movie IDs in
the messages, which we replace by the full movie title to which they refer. Additionally, we crawled
information such as the plot summary and director from the OMDb API (https://www.omdbapi.com/),
from which we build our item embeddings.</p>
        <p><bold>ReDial-Mention-Noise</bold> Since the first version of the ReDial dataset always mentions the exact movie
title in the messages, potentially making the task too easy, we construct an additional noisy version of
the dataset. To this end, we employ a pre-trained Large Language Model, specifically Llama 3.1 8B
(https://ollama.com/library/llama3.1:8b), to obtain modified versions of the movie titles. We query the
model for 20 alternative titles for each movie with this prompt template:</p>
        <p>Return a name for this movie as it would likely be used in a conversation. For example, if the
movie is called “The Matrix”, you might return “Matrix”. For “The Lord of the Rings: The
Fellowship of the Ring”, you might return “The first Lord of the Rings movie”. Return only
one possible name, nothing else:
{ Movie Title }</p>
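        <p>A minimal sketch of this querying step, assuming the Ollama Python client for a locally served Llama 3.1 8B (function and variable names are ours; the exact pipeline may differ):</p>
        <preformat>
# Sketch: collect 20 noisy alternative titles per movie from a local LLM.
import ollama  # assumes a running Ollama server with llama3.1:8b pulled

PROMPT = (
    'Return a name for this movie as it would likely be used in a conversation. '
    'For example, if the movie is called "The Matrix", you might return "Matrix". '
    'For "The Lord of the Rings: The Fellowship of the Ring", you might return '
    '"The first Lord of the Rings movie". Return only one possible name, '
    'nothing else:\n{title}'
)

def noisy_titles(title, n=20):
    """Query the model n times and keep all answers, duplicates included."""
    answers = []
    for _ in range(n):
        reply = ollama.chat(
            model="llama3.1:8b",
            messages=[{"role": "user", "content": PROMPT.format(title=title)}],
        )
        answers.append(reply["message"]["content"].strip())
    return answers
        </preformat>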
        <p>We keep the list of all 20 alternative titles, including duplicates. Since this sometimes yields titles
that are too noisy or even wrong, we apply an additional filtering step, where we query the same model
with this prompt template:</p>
        <p>You will be passed a pair of original movie title and a noisy version of it. Your task is to
determine if the noisy version is a valid reference that could be used in a conversation. This
can be either the original title, a slightly modified version or a description.</p>
        <p>If you think the noisy version is a valid alternative, return “yes”. If you think the noisy version
is not a valid alternative, return “no”. Do not return anything else.</p>
        <p>The original title is: { Movie Title }. The noisy title is: { Noisy Title }.</p>
        <p>We reconstruct the dataset by sampling an alternative title from this list for each mention of a movie
when building the message representation.</p>
      </sec>
      <sec id="sec-5-3">
        <title>5.3. HandballSynth</title>
        <p>We generate a synthetic dataset for a controlled scenario with accurate information on all intended
references. It simulates a chat with live messages from ongoing handball matches, where entities (e.g.,
players and coaches) are discussed and we want to recommend pages with additional information about
the entities that are mentioned by the users. We used the pre-trained Large Language Model Claude 3.5
Sonnet to generate conversations based on play-by-play information from live-ticker data.</p>
        <p>The LLM was prompted to create artificial chat messages as reactions to the events in the ticker. In
order to keep the generated messages realistic, we instructed the model to generate emotional, short, and
informal messages, as well as occasional off-topic comments. We included the relevant team and player
data in the prompt and instructed the model to discuss specific entities in roughly every third message.
This information is also used to build the embeddings for the items. Additionally, the model includes
the link to the website containing the entity's information – obtained from the sources included in the
prompt – with every generated reference. We remove these links from the messages and use them as
target items as before.</p>
      </sec>
      <sec id="sec-5-4">
        <title>5.4. Bestatter</title>
        <p>We use a dataset of user conversations from a service chatbot on a German website for information
about funerals. The bot provides users with information for funeral-related inquiries. In addition to
general information on the topic, the bot also recommends links to information on the website, external
web sources, as well as relevant contact information.</p>
        <p>From these conversations, we construct a dataset for our task as before. As target items, we extract
links mentioned by the bot, as well as contact information such as email addresses and phone numbers,
and replace their occurrences in the dialogues with a corresponding placeholder string.</p>
        <p>We generate item representations by embedding the text contents of the linked websites. For contact
information, we construct the representation for the item by embedding the local context of
the reply message containing the email address or phone number, including up to 250 characters before
and after the mentioned contact information.</p>
      </sec>
      <sec id="sec-5-5">
        <title>5.5. Twitch</title>
        <p>We collected a dataset of messages from Twitch.tv, a popular streaming website, primarily used for
e-sports streams. Along with each stream, Twitch provides a chat where users can comment on the
content of the stream. The purpose of this dataset is to recommend links that provide background
information on entities discussed in the chat. Messages on Twitch.tv generally use extremely
noisy language with a high amount of spelling errors and slang [37], making this dataset especially
challenging.</p>
        <p><bold>Dataset Generation</bold> We downloaded chat messages from several streams of the popular e-sports
game Dota 2 using TwitchDownloader (https://github.com/lay295/TwitchDownloader) and manually
annotated them with relevant links from Liquipedia (https://liquipedia.net/) and Wikipedia. Liquipedia
provides information about both in-game characters and mechanics as well as real-world entities
(players, teams) and tournaments in the context of Dota 2.</p>
        <p>In total, we selected seven broadcasts from between September 2023 and June 2024, featuring both
official streams from tournaments of different sizes as well as personal streams by individual, well-known
streamers. From these chat logs, we randomly sampled segments of 100 consecutive messages
for annotation. The messages were annotated independently by two annotators, with 1900 messages
judged by both and an additional 3000 messages annotated by either one alone. Using the messages
annotated by both participants, we observe a moderate inter-annotator agreement of Cohen's kappa
$\kappa = 0.47$. To further ensure the quality of the annotations, an additional curator merged the two sets of
annotations, resolving any discrepancies in the process.</p>
        <p>Annotators were instructed to go through the chat log segments one message at a time, providing
links to any entity directly mentioned or implied within that message. These links will serve as targets
for our recommendation task.</p>
        <p>In case of a direct mention of an entity, we can map them directly to their wiki page. However, many
elements of the game do not have their own article. In this case, we link to the parent wiki page: For
example, if a named ability is mentioned, we link the wiki page of the character with this ability. Apart
from direct mentions, memes and in-jokes are common within the community. These often serve as a
reference to a particular player or incident, from which we again derive appropriate link targets.</p>
        <p>Again, we use the content of the linked pages to compute embeddings for the target items. As with
the other datasets introduced above, sequences are constructed from the annotated messages.</p>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Experiments</title>
      <p><bold>Evaluation Metrics</bold> For all sequences, we ensure that the last interaction in a sequence is an actual
item $i \in I$ for training and testing. We calculate the Hit Rate@k (HR@k) for $k \in \{1, 5\}$ and the Normalized
Discounted Cumulative Gain@5 (NDCG@5), as we are only interested in recommending a small number
of items at once. We calculate our metrics only on the top k recommended items, ignoring non-items.
Metrics are computed for all subsequences, as described in Section 4.3.</p>
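      <p>For reference, a small Python sketch of these metrics for a single target item, with non-items filtered from the ranking (an illustrative helper, not our evaluation code):</p>
      <preformat>
import math

def hr_ndcg_at_k(ranked, target, non_items, k=5):
    """HR@k and NDCG@k for one target; non-items are ignored in the ranking."""
    ranked = [x for x in ranked if x not in non_items][:k]
    if target not in ranked:
        return 0.0, 0.0
    rank = ranked.index(target)             # 0-based position of the hit
    return 1.0, 1.0 / math.log2(rank + 2)   # one target, so the ideal DCG is 1

hr, ndcg = hr_ndcg_at_k(["T", "The Matrix", "The Godfather"], "The Godfather", {"T"})
print(hr, round(ndcg, 3))  # 1.0 0.631
      </preformat>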
      <p>
        <bold>Models</bold> For each dataset, we evaluate our setup on the following popular sequential recommender
architectures:
• GRU4Rec [38]: a simple recurrent neural network with Gated Recurrent Units.
• CASER [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ]: a convolutional network with horizontal and vertical convolution filters.
• NARM [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]: a neural network combining recurrent neural networks with attention.
• NextITNet [10]: a model built upon CASER, using dilated convolutions.
• SASRec [12]: a popular baseline utilizing self-attention for sequential recommendation. Here, it
is used with a cross entropy loss.
• BERT4Rec [13]: an adaptation of BERT [39] with bidirectional transformer layers and masked
training [40].
• CORE [15]: an attention-based model, which couples the representation space for both encoding
and decoding by representing the session as a linear combination of the items.
• LightSANs [14]: an attention-based model with a low-rank decomposed self-attention and a
decoupled positional encoding.
      </p>
      <p>
        <bold>Model Hyper-Parameters</bold> For the configuration of our experiments, we follow the setup in [8]. All
of our models are trained with the Adam optimizer, and all models have a hidden and/or embedding
size of 64 and an inner size of 256. The number of heads and layers is set to 2 for transformer layers
and the dropout to 0.2. GRU4Rec also has 2 layers. The mask ratio for BERT4Rec is set to 0.2, while the
fine-tuning ratio is set to 0.1. For CASER, we set dropout = 0.4, Markov chain length = 5, vertical filters
= 4 and horizontal filters = 8. For NextItNet, we set the kernel size = 3, the convolutional filter width =
5 and the dilations = [1, 4]. We use [212, 10, 42, 404, 6] as fixed seeds and report the average and standard
deviation over 5 runs. We set the maximum sequence length to the ≈90%-quantile; see Table 1. We
train all models for 50 epochs with a batch size of 64 on a cluster of L40 GPUs. We utilize the code from
[8] and provide our own code and configurations in our repository
(https://github.com/LSX-UniWue/non-items-recbole/tree/kars-workshop-24).
      </p>
      <p><bold>Embedding Models</bold> We compute non-item embeddings from the text of the messages and item
embeddings from the content of the linked websites to provide the models with additional information
about the targets. All content is embedded using Sentence Transformers [41, 42] models. For the English
datasets (ReDial-Mention, ReDial-Mention-Noise, Twitch), we use a monolingual English model
(https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). For the German datasets (HandballSynth,
Bestatter), we use a multilingual model
(https://huggingface.co/sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2).
Both models provide embeddings of size 384.</p>
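      <p>Computing these embeddings is a standard Sentence Transformers call; a minimal sketch (model names as listed above):</p>
      <preformat>
# Sketch: 384-dimensional message embeddings via Sentence Transformers.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
messages = ["I like The Godfather", "Do you like The Matrix?"]
embeddings = model.encode(messages)  # numpy array of shape (2, 384)
print(embeddings.shape)
      </preformat>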
    </sec>
    <sec id="sec-7">
      <title>7. Results</title>
      <p>The main results for all datasets are provided in Tables 2 to 5. We selected the two best-performing
models, SASRec and LightSANs, and refer to the appendix for results of the other models. All results
in this section are for models trained and evaluated on subsequences (cf. Section 4.3). We include
results for models that are not trained, but only tested on subsequences, in the appendix. These models
consistently perform worse, which is expected due to the lower amount of training data available in
this setting. We compare our different sequence representation strategies (Items Only, UTID, TE, ME; cf.
Section 4.1), as well as training and evaluating the models with or without item embeddings (non-items
are always represented with embeddings). We can summarize our findings as follows:</p>
      <p><bold>Overall Performance</bold> Overall, our models yield reasonable to very good performance on all datasets.
Message Embeddings (ME) consistently perform best. This is expected, since the message embeddings
provided by the Sentence Transformers models should capture the content of the messages well, while
also keeping the sequence length manageable. This makes it relatively easy for the model to determine
which entities are referred to in the messages. The models trained on word-level sequences perform
worse, but still manage to provide good recommendations in most cases. Interestingly, using item
embeddings only has a small influence on the results. This may be caused by the rather long content of
the linked items: Sentence Transformer models are primarily trained to represent sentences and short
paragraphs well, while we use them to represent entire websites. We leave this investigation, as well as
the development of better representations for the item content, for future work.</p>
      <p><bold>ReDial-Mention</bold> The Message Embedding models are able to identify the movies mentioned in the
conversations very well. Word-level models (UTID, TE) still perform somewhat well, but worse than the
ME models. Items Only models are not able to solve the task well, since they are missing the content of
the conversations.</p>
      <p><bold>ReDial-Mention-Noise</bold> Results on the noisy version of the dataset follow exactly the same trends as
on ReDial-Mention. Since the titles of the movies are no longer contained exactly in the messages, the
task becomes harder and, consequently, the performance drops slightly. However, the models are still
able to identify the mentioned movies well if given access to the content of the conversation.</p>
      <p><bold>HandballSynth</bold> The general trends also hold on this dataset. Here, the Items Only strategy yields
acceptable results for HR@5, which is likely caused by the somewhat low number of entities in the
dataset. However, for both HR@1 and HR@5, including the conversation's content into the models still
leads to a clear improvement.</p>
      <p><bold>Bestatter</bold> We were unable to train word-level models on the Bestatter dataset, as word-level
modeling leads to very long sequences on this dataset (cf. Table 1). Again, the Items Only strategy
works relatively well here. In addition to the low number of entities in the dataset, as in HandballSynth,
the conversations in this dataset are also somewhat schematic: Many of the conversations follow the
same structure, meaning that items mentioned in the future can often be inferred from past items
without including information about the conversation.</p>
      <p><bold>Twitch</bold> This is the most challenging dataset due to the very noisy nature of messages on Twitch.tv.
Consequently, the scores are lower than for the other datasets, but the models still yield reasonable
performance when using Message Embeddings. Otherwise, the results follow the same trends as on the
other datasets.</p>
    </sec>
    <sec id="sec-8">
      <title>8. Conclusion</title>
      <p>We have shown that we can adapt arbitrary sequential recommender models to the task of recommending
relevant content in chat conversations between users. To this end, we have adapted the recently proposed
non-item page modeling to represent the content of chat messages in multiple ways. We have evaluated
these models on several datasets, showing that our modeling is successful. We find that the models
perform best when given access to pre-trained message embeddings to represent the content of the
chat messages.</p>
      <p>Our modeling can be applied in a variety of settings, ranging from the direct application of
showing relevant content in chat rooms to being used as an intermediate component in a conversational
recommender system.</p>
      <p>Our findings imply that any improvements in sequential recommender models can be directly
transferred to our task of recommending relevant content to chat conversations. In particular, we
see the opportunity to adapt recommender models with access to knowledge graphs to our task.
There is potential to improve our results further by providing better content representations for both
recommended items and messages (especially in the case of Twitch.tv, where standard embedding
models may struggle with the noisy language), which we see as a promising direction for future work.</p>
    </sec>
    <sec id="sec-9">
      <title>Acknowledgments</title>
      <p>This work is partially funded by the German Federal Ministry of Education and Research (BMBF) under
grant number 01IS22051B in the KILiMod project.</p>
    </sec>
    <sec id="sec-refs">
      <title>References</title>
      <p>
[7] E. Fischer, D. Schlör, A. Zehe, A. Hotho, Enhancing sequential next-item prediction through
modelling non-item pages, in: 2023 IEEE International Conference on Data Mining Workshops
(ICDMW), 2023, pp. 128–136. doi:10.1109/ICDMW60847.2023.00024.
[8] E. Fischer, D. Schlör, A. Zehe, A. Hotho, Modeling and analyzing the influence of non-item pages on
sequential next-item prediction, 2024. URL: https://arxiv.org/abs/2408.15953. arXiv:2408.15953.
[9] C. Xu, P. Zhao, Y. Liu, J. Xu, V. S. Sheng, Z. Cui, X. Zhou, H. Xiong, Recurrent convolutional neural
network for sequential recommendation, in: The World Wide Web Conference - WWW '19,
ACM Press, 2019. doi:10.1145/3308558.3313408.
[10] F. Yuan, A. Karatzoglou, I. Arapakis, J. M. Jose, X. He, A simple convolutional generative network
for next item recommendation, 2018. arXiv:1808.05163.
[11] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, I. Polosukhin,
Attention is all you need, in: Advances in neural information processing systems, 2017, pp.
5998–6008.
[12] W.-C. Kang, J. McAuley, Self-attentive sequential recommendation, in: 2018 IEEE International</p>
      <p>Conference on Data Mining (ICDM), IEEE, 2018, pp. 197–206.
[13] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, P. Jiang, BERT4Rec: Sequential recommendation
with bidirectional encoder representations from transformer, in: Proceedings of the 28th ACM
International Conference on Information and Knowledge Management - CIKM '19, ACM Press,
2019. doi:10.1145/3357384.3357895.
[14] X. Fan, Z. Liu, J. Lian, W. Zhao, X. Xie, J.-R. Wen, Lighter and better: Low-rank decomposed
self-attention networks for next-item recommendation, 2021, pp. 1733–1737. doi:10.1145/3404835.3462978.
[15] Y. Hou, B. Hu, Z. Zhang, W. X. Zhao, CORE: Simple and effective session-based recommendation
within consistent representation space, in: Proceedings of the 45th International ACM SIGIR
Conference on Research and Development in Information Retrieval, SIGIR ’22, Association for
Computing Machinery, New York, NY, USA, 2022, p. 1796–1801. URL: https://doi.org/10.1145/
3477495.3531955. doi:10.1145/3477495.3531955.
[16] Q. Liu, S. Wu, D. Wang, Z. Li, L. Wang, Context-aware sequential recommendation, 2016.</p>
      <p>arXiv:1609.05787.
[17] G. de Souza Pereira Moreira, S. Rabhi, J. M. Lee, R. Ak, E. Oldridge, Transformers4rec: Bridging
the gap between nlp and sequential / session-based recommendation, in: Proceedings of the 15th
ACM Conference on Recommender Systems, RecSys ’21, Association for Computing Machinery,
New York, NY, USA, 2021, p. 143–153. URL: https://doi.org/10.1145/3460231.3474255. doi:10.1145/
3460231.3474255.
[18] Z. Yang, Z. Dai, Y. Yang, J. Carbonell, R. Salakhutdinov, Q. V. Le, Xlnet: generalized autoregressive
pretraining for language understanding (2019).
[19] K. Clark, M. Luong, Q. V. Le, C. D. Manning, ELECTRA: pre-training text encoders as
discriminators rather than generators, CoRR abs/2003.10555 (2020). URL: https://arxiv.org/abs/2003.10555.
arXiv:2003.10555.
[20] A. Jagatap, N. Gupta, S. Farfade, P. M. Comar, Attribert - session-based product attribute
recommendation with bert, in: Proceedings of the 46th International ACM SIGIR Conference on
Research and Development in Information Retrieval, SIGIR ’23, Association for Computing
Machinery, New York, NY, USA, 2023, p. 3421–3425. URL: https://doi.org/10.1145/3539618.3594714.
doi:10.1145/3539618.3594714.
[21] C. Liu, X. Li, G. Cai, Z. Dong, H. Zhu, L. Shang, Non-invasive self-attention for side information
fusion in sequential recommendation, arXiv preprint arXiv:2103.03578 (2021).
[22] Y. Xie, P. Zhou, S. Kim, Decoupled side information fusion for sequential recommendation, in:
Proceedings of the 45th International ACM SIGIR Conference on Research and Development in
Information Retrieval, SIGIR ’22, Association for Computing Machinery, New York, NY, USA, 2022,
p. 1611–1621. URL: https://doi.org/10.1145/3477495.3531963. doi:10.1145/3477495.3531963.
[23] L. Wu, S. Li, C.-J. Hsieh, J. Sharpnack, Sse-pt: Sequential recommendation via personalized
transformer, in: Proceedings of the 14th ACM Conference on Recommender Systems, RecSys
’20, Association for Computing Machinery, New York, NY, USA, 2020, p. 328–337. URL: https:
//doi.org/10.1145/3383313.3412258. doi:10.1145/3383313.3412258.
[24] Q. Chen, H. Zhao, W. Li, P. Huang, W. Ou, Behavior sequence transformer for e-commerce
recommendation in alibaba, in: Proceedings of the 1st International Workshop on Deep Learning
Practice for High-Dimensional Sparse Data, ACM, 2019. doi:10.1145/3326937.3341261.
[25] Q. Zhang, L. Cao, C. Shi, Z. Niu, Neural time-aware sequential recommendation by jointly modeling
preference dynamics and explicit feature couplings, IEEE Transactions on Neural Networks and
Learning Systems 33 (2022) 5125–5137. doi:10.1109/TNNLS.2021.3069058.
[26] E. Fischer, A. Dallmann, A. Hotho, Personalization through user attributes for
transformer-based sequential recommendation, in: H. J. Corona Pampín, R. Shirvany (Eds.), Recommender
Systems in Fashion and Retail, Springer Nature Switzerland, Cham, 2023, pp. 25–43.
doi:10.1007/978-3-031-22192-7_2.
[27] J. Tagliabue, C. Greco, J.-F. Roy, B. Yu, P. J. Chia, F. Bianchi, G. Cassani, Sigir 2021 e-commerce
workshop data challenge, 2021. arXiv:2104.09423.
[28] G. de Souza P. Moreira, S. Rabhi, R. Ak, M. Y. Kabir, E. Oldridge, Transformers with
multimodal features and post-fusion context for e-commerce session-based recommendation, 2021.
arXiv:2107.05124.
[29] S. Ishihara, S. Goda, H. Arai, Adversarial validation to select validation data for evaluating
performance in e-commerce purchase intent prediction (2021).
[30] E. Fischer, D. Zoller, A. Hotho, Comparison of transformer-based sequential product
recommendation models for the coveo data challenge, SIGIR Workshop On eCommerce (2021).
[31] I. Guellil, A. Garcia-Dominguez, P. R. Lewis, S. Hussain, G. Smith, Entity linking for english and
other languages: a survey, Knowledge and Information Systems (2024) 1–52.
[32] S. Perera, P. N. Mendes, A. Alex, A. P. Sheth, K. Thirunarayan, Implicit entity linking in tweets, in:
H. Sack, E. Blomqvist, M. d’Aquin, C. Ghidini, S. P. Ponzetto, C. Lange (Eds.), The Semantic Web.</p>
      <p>Latest Advances and New Domains, Springer International Publishing, Cham, 2016, pp. 118–132.
[33] H. Hosseini, T. T. Nguyen, J. Wu, E. Bagheri, Implicit entity linking in tweets: An ad-hoc retrieval
approach, Applied Ontology 14 (2019) 451–477.
[34] H. Hosseini, E. Bagheri, Learning to rank implicit entities on twitter, Information Processing &amp;</p>
      <p>Management 58 (2021) 102503.
[35] D. Jannach, A. Manzoor, W. Cai, L. Chen, A survey on conversational recommender systems, ACM</p>
      <p>Computing Surveys 54 (2021) 1–36. URL: http://dx.doi.org/10.1145/3453154. doi:10.1145/3453154.
[36] R. Li, S. E. Kahou, H. Schulz, V. Michalski, L. Charlin, C. Pal, Towards deep conversational
recommendations, in: Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018.
[37] K. Kobs, A. Zehe, A. Bernstetter, J. Chibane, J. Pfister, J. Tritscher, A. Hotho, Emote-controlled:
Obtaining implicit viewer feedback through emote based sentiment analysis on comments of
popular twitch.tv channels, ACM Transactions on Social Computing 3 (2020) 1–34. doi:10.1145/3365523.
[38] Y. K. Tan, X. Xu, Y. Liu, Improved recurrent neural networks for session-based recommendations,
in: Proceedings of the 1st workshop on deep learning for recommender systems, 2016, pp. 17–22.
[39] J. Devlin, M.-W. Chang, K. Lee, K. Toutanova, BERT: Pre-training of deep bidirectional transformers
for language understanding, in: Proceedings of the 2019 Conference of the North American Chapter
of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long
and Short Papers), Association for Computational Linguistics, Minneapolis, Minnesota, 2019, pp.
4171–4186. doi:10.18653/v1/N19-1423.
[40] W. L. Taylor, "Cloze procedure": a new tool for measuring readability, Journalism &amp; Mass</p>
      <p>Communication Quarterly 30 (1953) 415–433.
[41] N. Reimers, I. Gurevych, Sentence-bert: Sentence embeddings using siamese bert-networks,
in: Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing,
Association for Computational Linguistics, 2019. URL: https://arxiv.org/abs/1908.10084.
[42] N. Reimers, I. Gurevych, Making monolingual sentence embeddings multilingual using knowledge
distillation, in: Proceedings of the 2020 Conference on Empirical Methods in Natural Language
Processing, Association for Computational Linguistics, 2020. URL: https://arxiv.org/abs/2004.09813.</p>
    </sec>
    <sec id="sec-10">
      <title>Appendix</title>
      <p>Tables with full results for all models and datasets, reported with and without subsequence training (SubSeq) and item embeddings (ItemEmb).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hidasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Karatzoglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Baltrunas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tikk</surname>
          </string-name>
          ,
          <article-title>Session-based recommendations with recurrent neural networks</article-title>
          , in: Y. Bengio, Y. LeCun (Eds.),
          <source>ICLR (Poster)</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J.</given-names>
            <surname>Tang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>Personalized top-n sequential recommendation via convolutional sequence embedding</article-title>
          ,
          <source>in: Proceedings of the eleventh ACM international conference on web search and data mining</source>
          ,
          <year>2018</year>
          , pp.
          <fpage>565</fpage>
          -
          <lpage>573</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Lian</surname>
          </string-name>
          , J. Ma,
          <article-title>Neural attentive session-based recommendation</article-title>
          ,
          <source>in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management</source>
          ,
          <year>2017</year>
          , pp.
          <fpage>1419</fpage>
          -
          <lpage>1428</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Hidasi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Quadrana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Karatzoglou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Tikk</surname>
          </string-name>
          ,
          <article-title>Parallel recurrent neural network architectures for feature-rich session-based recommendations</article-title>
          ,
          <source>in: Proceedings of the 10th ACM Conference on Recommender Systems, RecSys '16</source>
          , ACM, New York, NY, USA,
          <year>2016</year>
          , pp.
          <fpage>241</fpage>
          -
          <lpage>248</lpage>
          . doi:10.1145/2959100.2959167.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>T. X.</given-names>
            <surname>Tuan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Phuong</surname>
          </string-name>
          ,
          <article-title>3D convolutional networks for session-based recommendation with content features</article-title>
          ,
          <source>in: Proceedings of the Eleventh ACM Conference on Recommender Systems, RecSys '17</source>
          , ACM, New York, NY, USA,
          <year>2017</year>
          , pp.
          <fpage>138</fpage>
          -
          <lpage>146</lpage>
          . doi:10.1145/3109859.3109900.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>E.</given-names>
            <surname>Fischer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Zoller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Dallmann</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Hotho</surname>
          </string-name>
          ,
          <article-title>Integrating keywords into bert4rec for sequential recommendation</article-title>
          ,
          <source>in: KI 2020: Advances in Artificial Intelligence: 43rd German Conference on AI, Bamberg, Germany, September 21-25, 2020, Proceedings</source>
          , Springer-Verlag, Berlin, Heidelberg,
          <year>2020</year>
          , pp. 275-282. doi:10.1007/978-3-030-58285-2_23.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>