A Dynamic Session-Based Recommendation System with Graph Neural Networks

Vrushali Mahajan1,∗,†, Ermiyas Birihanu1,† and Tsegaye Misikir Tashu2,†
1 ELTE Eötvös Loránd University, Department of Data Science and Engineering, Budapest, Hungary
2 Department of Artificial Intelligence, Bernoulli Institute of Mathematics, Computer Science and Artificial Intelligence, University of Groningen, Groningen, The Netherlands

ITAT'24: Information Technologies – Applications and Theory, September 20–24, 2024, Drienica, Slovakia
∗ Corresponding author. † These authors contributed equally.
y2hse8@inf.elte.hu (V. Mahajan); ermiyasbirihanu@inf.elte.hu (E. Birihanu); t.m.tashu@rug.nl (T. M. Tashu)

Abstract
Session-based recommendation systems provide personalized recommendations to users based on their session activities. Traditional recommendation algorithms often overlook temporal dependencies within user sessions, leading to suboptimal recommendations. To address this limitation, we propose a Temporal Graph Neural Network (TemporalGNN) approach that leverages the temporal relationships between items in sessions to enhance recommendations. The method consists of two temporal GNN layers designed to capture temporal dependencies within user sessions. A directed graph is constructed from session data, where each unique item in the dataset is a node, and directed edges are created between consecutive items within a session, with edge weights representing the time differences between interactions. This graph structure allows the model to capture the sequential nature of user interactions. The model generates recommendations by computing similarity scores from the learned embeddings and selecting the top N items with the highest scores. Experimental results on two real-world datasets show the effectiveness of the proposed method in improving recommendation performance compared to the baseline approaches. The source code implementation is available on the GitHub repository at https://github.com/VrushaliM/SB-Recommendation-GNN.

Keywords
Session, User, Graph Neural Networks, Recommendation Systems

1. Introduction

In today's online world, personalized suggestions appear frequently while shopping, watching videos, or browsing the Internet. These suggestions are made by recommendation systems, which predict our preferences based on past behaviour. However, because our tastes can change quickly during short online sessions, session-based recommendation systems have emerged. These systems focus on our current online activities to provide accurate suggestions for what we might like in real time [1] [2]. Various types of recommendation systems exist in the research landscape, such as collaborative filtering, content-based filtering, and hybrid approaches [3] [4] [5]. Collaborative filtering generates recommendations based on similarities in user behaviour, while content-based filtering uses item characteristics. Hybrid approaches combine both methods for better accuracy. These studies consider collaborative filtering within sessions, capturing user-item interactions to handle sparse data and the cold-start problem effectively.

Researchers have been advancing these recommendation systems by exploring various techniques. These efforts include analyzing the sequential order of user interactions online and employing sophisticated models, such as neural networks [1] [6]. These studies investigated online sessions' duration, temporal dynamics, and underlying user intents. However, this approach may overlook other relevant information, potentially limiting recommendation diversity. Session-based recommendation focuses on predicting the next click of an anonymous user based on their current session activity [7]. Two popular models for session-based recommendation are Markov Chains (MC) and Recurrent Neural Networks (RNN) [8]. Graph Neural Networks (GNNs) have emerged as a promising approach. GNNs improve node representations by incorporating adjacent information through weighted or unweighted edges, making it easier to identify item transitions [8]. To effectively explore long-range vertex relationships in a graph, a GNN typically requires three layers to propagate information. However, more than three layers can result in over-smoothing, where the distinctions between items are lost [9].

Despite progress, challenges remain, such as quickly identifying preferences with limited information and adapting to rapidly changing user behaviours. One of the most critical issues in session-based recommendation is how to accurately and efficiently capture and learn complex transitions of items from limited information.
To address these challenges, we propose a novel approach for session-based recommendation systems that can quickly understand user preferences, adapt to online behaviour, and efficiently handle large data volumes. This study focuses on time-driven session-based recommendation systems, integrating temporal order and time interval information to enhance accuracy and relevance. Our proposed method aims to improve time-driven session-based recommendations by considering the order of interactions and the specific duration between them. This approach is designed to uncover significant correlations between interacted items. Our goal is to enhance the accuracy and relevance of online recommendations, making them more helpful and enjoyable for users, by exploring the impact of time intervals between interactions alongside the chronological order.

2. Related work

Session-based recommendation systems have garnered significant attention in recent years due to their ability to deliver personalized recommendations in real time, reflecting users' immediate interests and preferences. This section provides an overview of the latest developments, methodologies, and findings in the field of session-based recommendation systems, highlighting key trends and areas of research focus. Tan et al. [5] extend basic Recurrent Neural Network (RNN) models for session-based recommendation by incorporating data augmentation techniques and addressing shifts in the input data distribution, leading to significant performance improvements over traditional approaches. Wu et al. [10] introduced Session-based Recommendation with Graph Neural Networks (SR-GNN), utilizing graph-structured data to represent session sequences and capture item transitions, enhancing user representations. Zhang et al. [4] proposed A-PGNN (Attention-enhanced PGNN), which utilizes a graph neural network to capture complex item relationships in user behaviour graphs, integrating the DotAttention mechanism to model the impact of historical sessions on current sessions and facilitating long-term user preference capture.

The study in [11] developed a graph hierarchical dwell-time attention network to better capture user preferences and improve recommendation accuracy by incorporating a modified graph neural network and a hierarchical dwell-time attention module. The study in [12] proposed Attention-enhanced Graph Neural Networks with Global Context, combining session-aware attention mechanisms and graph convolutional networks to learn and merge item transitions from all sessions. Guo et al. [13] proposed a Time-Aware Graph Neural Network (TA-GNN), considering both long-term historical behaviours and collaborative filtering information from neighbouring users by constructing user behaviour and neighbourhood graphs and incorporating time interval information. An et al. [14] proposed a method that uses graph structures for hierarchical feature learning in item embeddings and automatically selects focus area lengths for session embeddings. Wang et al. [15] proposed a method that divides sessions into time slices and constructs temporal graphs and hypergraphs to capture item transitions over time in session-based recommendation. However, there is a computational overhead associated with modeling item transitions across multiple time slices, which may affect scalability as the number of sessions and time slices increases. Zhu et al. [19] proposed the DGS-MGNN method, which dynamically constructs local, global, and consensus graphs to capture item representations from multiple perspectives, enhancing session-based recommendation accuracy. Sheng et al. [16] modeled interaction sequences using a Weighted Global Item Graph and current sessions with a Local Session Graph, integrating multiple interaction patterns. While significant advancements have been made in session-based recommendation systems, challenges remain, such as computational complexity, scalability, and the need for more comprehensive user preference modeling. This study proposes methods that address these challenges using dynamic session-based graphs, aiming to improve the accuracy and efficiency of recommendations in diverse and large-scale datasets.

Session-based recommendation systems aim to deliver personalized recommendations in real time based on users' immediate interests and preferences. Despite significant progress, challenges persist, including high computational complexity, scalability issues, and the need for more comprehensive modeling of user preferences. Hence, this study aims to improve time-driven session-based recommendations by considering the order of interactions and their specific duration, with the goal of enhancing the model's performance.
3. Proposed method

The proposed method, TemporalGNN, uses user session data to construct a graph where each session represents a sequence of interactions with items over time. Nodes in the graph are items, and edges reflect the order of interactions, with time differences as edge features. The Temporal Graph Neural Network (TemporalGNN) consists of stacked TemporalGNNLayers that capture temporal dependencies and learn item representations. During training, the model updates item embeddings based on the temporal context of interactions, learning patterns and relationships between items. After training, recommendations are generated by calculating similarity scores from the learned embeddings and selecting the top-N items with the highest scores.

A temporal graph neural network models relationships and interactions between nodes while considering the timing of these interactions. The input is a graph with nodes representing items and edges representing sequential interactions with time differences. The output is a set of learned item embeddings that capture structural and temporal relationships, enabling personalized and context-aware recommendations. First, a directed temporal graph is constructed from the training data, with nodes representing unique items and edges encoding the temporal order of session interactions. The TemporalGNN model, with two TemporalGNNLayers, analyzes the temporal structure of the graph data. During training, the model updates node features by considering the temporal context and optimizes its parameters so that items occurring in the same session are embedded closer together than items from different sessions. The trained model makes recommendations by computing representations for the items within a session and ranking candidate items by their similarity to the session items. The learned node features enhance the model's ability to make personalized recommendations by capturing temporal variations in item interactions. Algorithm 1 and Figure 1 show how the proposed method works.

Figure 1: Architecture of the proposed TemporalGNN model, showing the flow of data from session-based graph construction to item recommendation.

Algorithm 1 Top-N Recommendation System
1: Input: D – input data, G – graph structure, d_in – input dimension, d_hidden – hidden layer dimension, d_out – output dimension
2: Output: Top-N recommended items
3: Construct Graph
4:   Create a directed graph G = (V, E) from the input data D.
5: Define TemporalGNN Model
6:   Initialize the TemporalGNN model with:
7:     Input layer of dimension d_in
8:     Hidden layer of dimension d_hidden
9:     Output layer of dimension d_out
10: Message Passing Through Layers
11:   for each layer in the GNN model do
12:     Refine node embeddings using the attention mechanism
13:     Capture fine-grained dependencies between items
14:   end for
15: Train Model
16:   Train the TemporalGNN model by minimizing the loss function using the training data.
17: Compute Similarity Scores
18:   Compute similarity scores between items using embeddings obtained from the trained model.
19: Recommend Items
20:   Recommend the top-N items based on the computed similarity scores.
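Algorithm 1 assumes the raw interaction records D have already been grouped into per-session (item, timestamp) sequences and that each item has been mapped to a node ID (the mapping M of Section 3.1). The following is a minimal sketch of that preprocessing step, not the released implementation; the column names session_id, item_id, and timestamp are illustrative assumptions.

```python
# Hypothetical preprocessing sketch: group a click log into sessions and build the
# item-to-node mapping M. Column names are assumptions, not the datasets' real schema.
import pandas as pd

def build_sessions_and_mapping(df: pd.DataFrame):
    """Return (sessions, item_to_node) from a click-log DataFrame."""
    df = df.sort_values(["session_id", "timestamp"])
    # M: each unique item i -> a unique node id n_i
    item_to_node = {item: idx for idx, item in enumerate(df["item_id"].unique())}
    # Each session becomes an ordered list of (item, timestamp) pairs
    sessions = {
        sid: list(zip(group["item_id"], group["timestamp"]))
        for sid, group in df.groupby("session_id")
    }
    return sessions, item_to_node

if __name__ == "__main__":
    toy = pd.DataFrame({
        "session_id": [1, 1, 1, 2, 2],
        "item_id":    ["a", "b", "c", "b", "a"],
        "timestamp":  [10, 15, 30, 100, 140],
    })
    sessions, item_to_node = build_sessions_and_mapping(toy)
    print(sessions)      # e.g. {1: [('a', 10), ('b', 15), ('c', 30)], 2: [('b', 100), ('a', 140)]}
    print(item_to_node)  # e.g. {'a': 0, 'b': 1, 'c': 2}
```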
3.1. Graph construction

The interaction records D contain information about user sessions, items, and timestamps. Each session records a sequence of items interacted with by a user, along with the corresponding timestamps. A mapping M is created to associate each unique item i with a unique node ID n_i. This mapping translates item interactions into node interactions in the graph:

M : i → n_i

The graph G = (V, E) is constructed where:

• V is the set of nodes, each corresponding to a unique item.
• E is the set of directed edges representing temporal relationships between items within a session.

We identify the unique items and their corresponding node indices for each session using the mapping M. Directed edges are added between consecutive items in each session. The edges carry weights representing the time difference between consecutive interactions. If a session involves items {item1, item2, item3} with corresponding timestamps {t1, t2, t3}, the directed edges and their weights, that is, the time differences, are as follows:

e_01 = (n_item1, n_item2), weight = t2 − t1
e_12 = (n_item2, n_item3), weight = t3 − t2

For a session S = [(i_1, t_1), (i_2, t_2), …, (i_k, t_k)]:

V = {M(i_1), M(i_2), …, M(i_k)}
E = {(M(i_j), M(i_{j+1})) | ∀j ∈ [1, k − 1]}
weight(M(i_j), M(i_{j+1})) = t_{j+1} − t_j

The graph is then transformed into matrix representations:

• Adjacency matrix A: the adjacency matrix represents the connections between nodes. If there is a directed edge from node i to node j with a weight (time difference), A_ij is that weight; otherwise, A_ij = 0:

A_ij = t_j − t_i if there is an edge from node i to node j, and A_ij = 0 otherwise.

• Feature matrix X: each node v_i has associated features. We used an identity matrix, where each node's feature vector is a one-hot encoded vector corresponding to the item:

X = I_num_items

After constructing the graph G and obtaining the adjacency matrix A and feature matrix X, the data is fed into the Graph Neural Network. The adjacency matrix A captures the structure of the graph and the temporal relationships between items, while the feature matrix X provides the initial feature representation for each node. GNNs outperform RNNs when dealing with graph-structured data, as they are adept at capturing intricate relationships and processing non-Euclidean data. GNNs are capable of handling inputs of varying sizes and structures, making them particularly advantageous for tasks involving complex relationships and graph-structured data [18]. In contrast, while RNNs excel with sequential data, GNNs provide significant benefits for these more complex scenarios.
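As an illustration of this construction (a sketch under the same toy assumptions as the preprocessing example, not the authors' released code), the directed temporal edges, the weighted adjacency matrix A, and the one-hot feature matrix X can be built as follows:

```python
# Illustrative sketch of Section 3.1: directed edges between consecutive items,
# edge weights equal to time differences, adjacency matrix A and feature matrix X = I.
import numpy as np

def build_temporal_graph(sessions, item_to_node):
    num_items = len(item_to_node)
    A = np.zeros((num_items, num_items), dtype=float)   # A[i, j] = time difference, 0 if no edge
    edges = []                                           # (source_node, target_node, weight)
    for interactions in sessions.values():               # interactions: [(item, timestamp), ...]
        for (it_a, t_a), (it_b, t_b) in zip(interactions, interactions[1:]):
            src, dst = item_to_node[it_a], item_to_node[it_b]
            weight = t_b - t_a                            # time between consecutive clicks
            edges.append((src, dst, weight))
            A[src, dst] = weight                          # simplification: last observed weight wins
    X = np.eye(num_items)                                 # one-hot node features
    return edges, A, X

# Example with the toy sessions from the preprocessing sketch:
sessions = {1: [("a", 10), ("b", 15), ("c", 30)], 2: [("b", 100), ("a", 140)]}
item_to_node = {"a": 0, "b": 1, "c": 2}
edges, A, X = build_temporal_graph(sessions, item_to_node)
print(edges)   # [(0, 1, 5), (1, 2, 15), (1, 0, 40)]
print(A)
```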
3.2. Temporal Graph Neural Network

The TemporalGNN model processes the graph data and the item-to-node mapping to extract meaningful representations of items within the graph. It consists of TemporalGNNLayers that update node features based on their temporal context within the graph. The TemporalGNN model is initialized with parameters where N is the number of nodes, H is the number of hidden units, and O is the number of output features.

The forward pass of the TemporalGNN model is performed through TemporalGNNLayers, which update node features iteratively. Let F^(0) represent the initial node features. The forward pass of the TemporalGNN model can be expressed as:

F^(l+1) = TemporalGNNLayer(G, F^(l))

where l represents the layer index, F^(l) is the node features at layer l, and F^(l+1) is the updated node features after passing through layer l.

3.2.1. TemporalGNNLayer

The TemporalGNNLayer updates node features based on their temporal context within the graph. Let N_i denote the set of neighbouring nodes of node i. Given the graph G and the node features F^(l) at layer l, the forward pass of the TemporalGNNLayer is:

F_i^(l+1) = Update(F_i^(l), {F_j^(l) | j ∈ N_i})

where F_i^(l) represents the feature vector of node i at layer l, and {F_j^(l) | j ∈ N_i} denotes the set of feature vectors of the neighbouring nodes of i.

The TemporalGNN model consists of two TemporalGNNLayers. During the forward pass, each TemporalGNNLayer updates node features based on their temporal context within the graph. The first layer uses an attention mechanism to consider the importance of neighbouring nodes, while the second layer aggregates information from neighbouring nodes using mean pooling. The final node features generated by the TemporalGNN model encode meaningful representations of items within the temporal graph. These representations capture the temporal context of item interactions within sessions and can be seen as embeddings of items.
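A minimal PyTorch sketch of the two layers just described is shown below. The exact update functions are assumptions rather than the paper's specification: the first layer scores each neighbour with a learned attention function of the two node features and the time-difference edge weight, and the second layer mean-pools neighbour features.

```python
# Hedged sketch of a two-layer TemporalGNN: attention-based first layer, mean-pooling second layer.
# The internal formulas are assumptions; only the overall structure follows the paper.
import torch
import torch.nn as nn

class AttentionTemporalLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)
        self.att = nn.Linear(2 * out_dim + 1, 1)        # [h_i, h_j, time_diff] -> attention logit

    def forward(self, A, F):
        H = self.lin(F)                                 # (N, out_dim)
        rows = []
        for i in range(H.size(0)):
            neigh = torch.nonzero(A[i], as_tuple=False).flatten()
            if neigh.numel() == 0:                      # no out-neighbours: keep own features
                rows.append(H[i])
                continue
            dt = A[i, neigh].unsqueeze(1)               # time differences as edge features
            pairs = torch.cat([H[i].expand(neigh.numel(), -1), H[neigh], dt], dim=1)
            alpha = torch.softmax(self.att(pairs).squeeze(1), dim=0)
            rows.append(H[i] + (alpha.unsqueeze(1) * H[neigh]).sum(dim=0))
        return torch.relu(torch.stack(rows))

class MeanPoolTemporalLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, A, F):
        H = self.lin(F)
        mask = (A > 0).float()
        deg = mask.sum(dim=1, keepdim=True).clamp(min=1)
        return torch.relu(H + (mask @ H) / deg)          # mean over out-neighbours

class TemporalGNN(nn.Module):
    def __init__(self, num_nodes, hidden_dim, out_dim):
        super().__init__()
        self.layer1 = AttentionTemporalLayer(num_nodes, hidden_dim)
        self.layer2 = MeanPoolTemporalLayer(hidden_dim, out_dim)

    def forward(self, A, X):
        return self.layer2(A, self.layer1(A, X))         # F^(2): item embeddings

if __name__ == "__main__":
    A = torch.tensor([[0., 5., 0.], [40., 0., 15.], [0., 0., 0.]])   # weighted adjacency (time diffs)
    X = torch.eye(3)                                                  # one-hot features
    model = TemporalGNN(num_nodes=3, hidden_dim=64, out_dim=32)
    print(model(A, X).shape)                                          # torch.Size([3, 32])
```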
The TemporalGNN model is trained using a training dataset D_train, consisting of session data and corresponding item interactions. During training, the model learns to predict item sequences within sessions and refines its node representations. The training process involves optimizing the model parameters to minimize a loss function L defined over the training data:

min_Θ Σ_{(S_i, S_neg)} L(S_i, S_neg)

where Θ represents the model parameters, S_i denotes positive samples, i.e. items within the same session, and S_neg denotes negative samples, i.e. items from different sessions. The model is trained using a pairwise hinge loss to optimize the parameters so that the embeddings of items interacted with in the same session are closer to each other than to the embeddings of items not interacted with.
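A sketch of such a pairwise hinge loss is given below. The margin value and the use of cosine similarity as the similarity surrogate are assumptions; the intent is only to show how same-session (positive) pairs are pushed to score higher than cross-session (negative) pairs.

```python
# Hedged sketch of a pairwise hinge loss over (anchor, positive, negative) item triples.
import torch
import torch.nn.functional as F

def pairwise_hinge_loss(embeddings, anchors, positives, negatives, margin=0.5):
    """anchors/positives/negatives are 1-D LongTensors of node ids of equal length."""
    pos_sim = F.cosine_similarity(embeddings[anchors], embeddings[positives], dim=1)
    neg_sim = F.cosine_similarity(embeddings[anchors], embeddings[negatives], dim=1)
    # Same-session pairs should be more similar than cross-session pairs by at least the margin
    return torch.clamp(margin - pos_sim + neg_sim, min=0.0).mean()

# Toy example: items 0 and 1 co-occur in one session, items 3 and 4 come from other sessions.
emb = torch.randn(5, 8, requires_grad=True)
anchors, positives, negatives = torch.tensor([0, 1]), torch.tensor([1, 0]), torch.tensor([3, 4])
loss = pairwise_hinge_loss(emb, anchors, positives, negatives)
loss.backward()   # gradients flow back into the embeddings / the TemporalGNN that produced them
print(float(loss))
```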
3.3. Recommendation process

The recommendation process involves the selection of the top-N items for users based on their activities in the current session. The value of N is set to five (5) based on the average session length in our datasets. The recommendation process uses the trained TemporalGNN model to generate item recommendations for sessions. The final node features obtained from the TemporalGNN model serve as input features for the recommendation task. When recommending items for a specific session, the method first identifies the items in the current session and retrieves their embeddings. It then computes the similarity scores between these session item embeddings and all other item embeddings. The method ranks all items based on these similarity scores and selects the top 5 items with the highest scores as recommendations. These items are considered the most similar or relevant to the current session's context.

3.3.1. Recommendation

Given a session S_q, the model computes representations for the items within the session using its learned parameters. It then recommends items based on their similarity to the items in the session, aiming to maximize the relevance of the recommendations. This process can be expressed as:

Recommend(S_q) = argmax_{i ∉ S_q} Sim(F_i, F_q)

where F_i represents the node features of item i, F_q represents the aggregated features of the items in session S_q, and Sim(·) denotes a similarity measure between node features.

Similarity score: the similarity between the node features F_i and F_q is computed using cosine similarity, calculated as:

Cosine Similarity(F_i, F_q) = (F_i · F_q) / (‖F_i‖ ‖F_q‖)

where F_i · F_q represents the dot product of the node feature vectors, and ‖F_i‖ and ‖F_q‖ represent their respective Euclidean norms. The embeddings of items within a session are obtained from the output of the TemporalGNN model, which provides updated node features representing each item's embedding. Similarity scores between items are computed based on cosine similarity, which measures the cosine of the angle between two vectors, representing the similarity between their directions in the embedding space. This calculation is carried out by taking the dot product of the embeddings of items within a session and then dividing it by the product of their magnitudes, resulting in cosine similarity scores. Finally, items are ranked based on their similarity scores, determining the order in which they are recommended to the user. This process enables the model to recommend items that are most similar to those already interacted with by the user. The method recommends the top 5 items by computing and ranking similarity scores for items within a specific session. The recommendation process is applied to each session individually to provide personalized recommendations to users.
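A compact sketch of this recommendation step follows. The session representation F_q is taken here as the mean of the session's item embeddings (an assumption; the paper only speaks of "aggregated features"), items already in the session are excluded as in the argmax above, and the top-N most cosine-similar items are returned.

```python
# Hedged sketch of top-N recommendation from learned item embeddings via cosine similarity.
import torch
import torch.nn.functional as F

def recommend_top_n(embeddings, session_items, n=5):
    """embeddings: (num_items, d) from the trained model; session_items: node ids in the current session."""
    idx = torch.tensor(session_items)
    session_vec = embeddings[idx].mean(dim=0, keepdim=True)         # aggregated session features F_q
    scores = F.cosine_similarity(session_vec, embeddings, dim=1)    # Sim(F_i, F_q) for every item i
    scores[idx] = float("-inf")                                     # exclude items already in S_q
    top = torch.topk(scores, k=min(n, embeddings.size(0) - len(session_items)))
    return top.indices.tolist()

emb = torch.randn(10, 16)                  # stand-in for the learned item embeddings
print(recommend_top_n(emb, [2, 5], n=5))   # node ids of the 5 recommended items
```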
4. Experiments and evaluation

4.1. Dataset and performance metrics

We evaluated our model on the Yoochoose and Diginetica datasets [15] [19] (available at https://darel13712.github.io/rs_datasets/Datasets/yoochoose/ and https://darel13712.github.io/rs_datasets/Datasets/diginetica/, respectively). The Yoochoose dataset is obtained from the RecSys'15 Challenge. It captures user-item interactions collected from an online retailer over a period of several months and contains anonymized information about user sessions, including the items users viewed and purchased during each session, as well as timestamps indicating when these interactions occurred. Diginetica is a competition dataset used in the CIKM Cup 2016. It contains user sessions extracted from e-commerce search engine logs. The dataset includes columns such as session-id, user-id, item-id, eventdate and timeframe. Table 1 summarizes the statistics of both datasets.

Table 1
Statistics of the Yoochoose and Diginetica datasets

Statistics     Yoochoose    Diginetica
# clicks       557,248      982,961
# train        369,859      719,470
# test         55,898       60,858
# items        16,766       43,097
avg. length    6.16         5.12

We use the MRR@k and Recall@k metrics to evaluate the performance of our model and set k to 5, which is suitable for session-based recommendation. Mean Reciprocal Rank measures how well a list of ranked items matches a set of true items. It is the average of the reciprocal ranks of the true items in the ranked lists:

MRR@k = (1/N) Σ_{i=1}^{N} 1/rank_i

where N is the number of sessions, and rank_i is the rank position of the true item in the i-th session's recommendation list. Recall measures the proportion of true items successfully recommended in the top-k items. It is the ratio of the number of true items found in the top-k recommendations to the total number of true items. It is mathematically defined as:

Recall@k = (1/N) Σ_{i=1}^{N} 1{true_item_i ∈ top_k_recommended_i}

where 1{·} is an indicator function that is one if the condition inside is true and 0 otherwise.
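The two formulas translate directly into code; the sketch below (a straightforward reading of the definitions above, with items not found in the top-k contributing a reciprocal rank of zero) shows how MRR@5 and Recall@5 can be computed from ranked recommendation lists.

```python
# Direct sketch of MRR@k and Recall@k: recommended[i] is the ranked list for session i,
# true_items[i] is the held-out ground-truth item for that session.
def mrr_at_k(recommended, true_items, k=5):
    total = 0.0
    for recs, true_item in zip(recommended, true_items):
        if true_item in recs[:k]:
            total += 1.0 / (recs[:k].index(true_item) + 1)   # reciprocal rank, 1-based
    return total / len(true_items)

def recall_at_k(recommended, true_items, k=5):
    hits = sum(1 for recs, true_item in zip(recommended, true_items) if true_item in recs[:k])
    return hits / len(true_items)

recommended = [[3, 7, 1, 9, 4], [2, 8, 5, 0, 6]]   # top-5 item ids per test session
true_items  = [1, 4]                               # ground-truth next item per session
print(mrr_at_k(recommended, true_items, k=5))      # (1/3 + 0) / 2 ≈ 0.167
print(recall_at_k(recommended, true_items, k=5))   # 1 hit out of 2 sessions = 0.5
```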
4.2. Baselines

To demonstrate the performance of the proposed method, we compared it with the contextual K-Nearest Neighbours approach (CKNN) [20] and the graph-based Node2Vec approach [21] [22]. The CKNN algorithm is a straightforward yet effective method for recommendation [20]. In the context of session-based recommendation, CKNN recommends the items most similar to the items in the current session based on historical co-occurrence data. The Node2Vec approach integrates network-based representation learning, clustering, and personalized recommendation techniques to provide practical and personalized recommendations for users in session-based scenarios.

4.3. Experimental settings

We used hyperparameter tuning and cross-validation to optimize the performance of the TemporalGNN model. The hyperparameters tuned include the number of hidden units in the first GNN layer, the number of output features in the second GNN layer, the learning rate for the Adam optimizer, and the number of training epochs. The search space for these parameters includes 32, 64 and 128 for the number of hidden units; 16, 32 and 64 for the number of output features; 0.001, 0.005 and 0.01 for the learning rate; and 10, 20 and 30 for the number of epochs. We used a random search strategy to explore this search space by sampling ten different sets of hyperparameter combinations. For each sampled combination, 5-fold cross-validation is conducted to evaluate the model's performance, which involves splitting the training data into five folds and iterating five times, each time using a different fold as the validation set while the remaining four folds are used for training. During each fold, the model is trained with the specified hyperparameters, and its performance is evaluated using the Mean Reciprocal Rank metric, which measures the quality of the ranking of the recommended items. The MRR scores from all five folds are averaged to obtain a single performance score for the hyperparameter set. The combination of hyperparameters that gives the highest average MRR across the five folds is considered the best. This optimal set of hyperparameters is then used to train the final model on the entire training dataset.
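The tuning loop can be summarized as in the following sketch (an assumption-level illustration, not the released implementation): random search over the stated grid with 5-fold cross-validation, keeping the configuration with the highest mean MRR. The evaluate_fn argument stands in for "train TemporalGNN with the sampled configuration on the training folds and return MRR@5 on the validation fold"; the dummy evaluator at the end is only for demonstration.

```python
# Hedged sketch of random search with 5-fold cross-validation over the search space of Section 4.3.
import random
from sklearn.model_selection import KFold

SEARCH_SPACE = {
    "hidden_units":  [32, 64, 128],
    "out_features":  [16, 32, 64],
    "learning_rate": [0.001, 0.005, 0.01],
    "epochs":        [10, 20, 30],
}

def random_search(sessions, evaluate_fn, n_trials=10, n_folds=5, seed=0):
    rng = random.Random(seed)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=seed)
    best_cfg, best_mrr = None, float("-inf")
    for _ in range(n_trials):
        cfg = {name: rng.choice(values) for name, values in SEARCH_SPACE.items()}
        scores = [evaluate_fn(cfg, train_idx, val_idx) for train_idx, val_idx in kf.split(sessions)]
        mean_mrr = sum(scores) / len(scores)
        if mean_mrr > best_mrr:                      # keep the configuration with the best mean MRR
            best_cfg, best_mrr = cfg, mean_mrr
    return best_cfg, best_mrr

# Demonstration only: a dummy evaluator in place of actually training TemporalGNN.
dummy_sessions = list(range(100))
dummy_eval = lambda cfg, train_idx, val_idx: random.random()
print(random_search(dummy_sessions, dummy_eval))
```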
4.4. Experimental results

Figure 2 shows the performance of the TemporalGNN model on both datasets in terms of MRR@5 and Recall@5, and Table 2 compares it with the baseline models.

Figure 2: Performance of the proposed model.

Table 2
Performance of the proposed model and the baseline models

               Yoochoose          Diginetica
Model          MRR     Recall     MRR     Recall
TemporalGNN    0.52    0.54       0.29    0.30
CKNN           0.06    0.07       0.02    0.02
Node2vec       0.13    0.30       0.14    0.50

On the Yoochoose dataset, TemporalGNN significantly outperforms the other models with an MRR@5 of 0.52 and a Recall@5 of 0.54. This indicates that TemporalGNN is highly effective in ranking the correct items near the top and retrieving relevant items within the top 5 recommendations. In contrast, CKNN shows very poor performance with an MRR@5 of 0.069 and a Recall@5 of 0.07, suggesting that it struggles to provide useful recommendations. Node2vec performs better than CKNN with an MRR@5 of 0.13 and a Recall@5 of 0.3, indicating moderate effectiveness. However, it still falls short of the performance achieved by TemporalGNN, highlighting the superiority of TemporalGNN's ability to utilize temporal and graph-based features for recommendation tasks.

On the Diginetica dataset, TemporalGNN's performance again stands out, surpassing the other models with an MRR of 0.29 and a Recall of 0.30. In comparison, CKNN's performance is notably lower, with an MRR of 0.022 and a Recall of 0.02, and Node2vec, while better than CKNN, still falls behind with an MRR of 0.14 and a Recall of 0.5. These results underscore the superior performance of TemporalGNN in capturing the temporal dynamics of user interactions, leading to more accurate and relevant recommendations. Although the Diginetica scores are lower than those on Yoochoose, TemporalGNN remains the best model, indicating its robust capability across different datasets. CKNN, on the other hand, performs very poorly, showing that it is not well-suited for this dataset. Node2vec shows a notable improvement in Recall@5 with a score of 0.5 but has an MRR@5 of 0.14, indicating that while it can retrieve relevant items better, it does not rank them as highly as TemporalGNN. This comparison further highlights the effectiveness of TemporalGNN in leveraging complex temporal and graph-based interactions for better recommendation performance.

5. Conclusion and future work

We introduced and evaluated a Temporal Graph Neural Network method for session-based recommendation tasks and showed its effectiveness on the Yoochoose and Diginetica datasets. We implemented the TemporalGNN model to encode temporal dynamics within the graph, which extracted meaningful representations of items based on their temporal context to recommend top-N items to users. TemporalGNN outperformed traditional methods such as CKNN and graph-based Node2Vec, achieving higher Mean Reciprocal Rank and Recall scores.

Future work could incorporate contextual information, such as the device used, location, or time of day, to provide more contextually relevant recommendations. There are exciting possibilities for future research in the field of session-based recommendation. Enhancements to the TemporalGNN architecture, such as incorporating additional attention mechanisms and exploring alternative graph structures, could further improve its performance and scalability. We will also include a comparative analysis of further approaches alongside TemporalGNN, which we expect to encourage continued research and development in the field.

References

[1] B. Hidasi, M. Quadrana, A. Karatzoglou, D. Tikk, Parallel recurrent neural network architectures for feature-rich session-based recommendations, in: Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 241–248.
[2] C. Xu, P. Zhao, Y. Liu, V. S. Sheng, J. Xu, F. Zhuang, J. Fang, X. Zhou, Graph contextualized self-attention network for session-based recommendation, in: IJCAI, volume 19, 2019, pp. 3940–3946.
[3] F. Yu, Y. Zhu, Q. Liu, S. Wu, L. Wang, T. Tan, TAGNN: Target attentive graph neural networks for session-based recommendation, in: Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 2020, pp. 1921–1924.
[4] M. Zhang, S. Wu, M. Gao, X. Jiang, K. Xu, L. Wang, Personalized graph neural networks with attention mechanism for session-aware recommendation, IEEE Transactions on Knowledge and Data Engineering 34 (2020) 3946–3957.
[5] Y. K. Tan, X. Xu, Y. Liu, Improved recurrent neural networks for session-based recommendations, in: Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, 2016, pp. 17–22.
[6] B. Hidasi, A. Karatzoglou, L. Baltrunas, D. Tikk, Session-based recommendations with recurrent neural networks, arXiv preprint arXiv:1511.06939 (2015).
[7] C. Ding, Z. Zhao, C. Li, Y. Yu, Q. Zeng, Session-based recommendation with hypergraph convolutional networks and sequential information embeddings, Expert Systems with Applications 223 (2023) 119875.
[8] J. Wang, H. Xie, F. L. Wang, L.-K. Lee, M. Wei, Jointly modeling intra- and inter-session dependencies with graph neural networks for session-based recommendations, Information Processing & Management 60 (2023) 103209.
[9] R. Qiu, J. Li, Z. Huang, H. Yin, Rethinking the item order in session-based recommendation with graph neural networks, in: Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 579–588.
[10] S. Wu, Y. Tang, Y. Zhu, L. Wang, X. Xie, T. Tan, Session-based recommendation with graph neural networks, in: Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 2019, pp. 346–353.
[11] H. Rong, W. Zhu, C. Zhu, Graph hierarchical dwell-time attention network for session-based recommendation, in: ITM Web of Conferences, volume 47, EDP Sciences, 2022, p. 02032.
[12] Y. Chen, Y. Tang, Y. Yuan, Attention-enhanced graph neural networks with global context for session-based recommendation, IEEE Access 11 (2023) 26237–26246.
[13] Y. Guo, Y. Ling, H. Chen, A time-aware graph neural network for session-based recommendation, IEEE Access 8 (2020) 167371–167382.
[14] G. An, J. Sun, Y. Yang, F. Sun, Enhancing collaborative information with contrastive learning for session-based recommendation, Information Processing & Management 61 (2024) 103738.
[15] H. Wang, S. Yan, C. Wu, L. Han, L. Zhou, Cross-view temporal graph contrastive learning for session-based recommendation, Knowledge-Based Systems 264 (2023) 110304.
[16] Z. Sheng, T. Zhang, Y. Zhang, S. Gao, Enhanced graph neural network for session-based recommendation, Expert Systems with Applications 213 (2023) 118887.
[17] F. Wang, X. Gao, Z. Chen, L. Lyu, Contrastive multi-level graph neural networks for session-based recommendation, IEEE Transactions on Multimedia 25 (2023) 9278–9289.
[18] I. R. Ward, J. Joyner, C. Lickfold, Y. Guo, M. Bennamoun, A practical tutorial on graph neural networks, ACM Computing Surveys (CSUR) 54 (2022) 1–35.
[19] X. Zhu, G. Tang, P. Wang, C. Li, J. Guo, S. Dietze, Dynamic global structure enhanced multi-channel graph neural network for session-based recommendation, Information Sciences 624 (2023) 324–343.
[20] H. Guo, R. Tang, Y. Ye, F. Liu, Y. Zhang, A novel kNN approach for session-based recommendation, in: Advances in Knowledge Discovery and Data Mining: 23rd Pacific-Asia Conference, PAKDD 2019, Macau, China, April 14–17, 2019, Proceedings, Part II, Springer, 2019, pp. 381–393.
[21] A. Grover, J. Leskovec, node2vec: Scalable feature learning for networks, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 855–864.
[22] S. Okura, Y. Tagami, S. Ono, A. Tajima, Embedding-based news recommendation for millions of users, in: Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2017, pp. 1933–1942.