<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Emotion-aware film recommendation with heterogeneous graph neural networks</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Yurii Halias</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Khrystyna Lipianina-Honcharenko</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Myroslav Komar</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mykola Telka</string-name>
          <email>m.telka@wunu.edu.ua</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Vasyl Lukianchuk</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>West Ukrainian National University</institution>
          ,
          <addr-line>Lvivska str., 11, Ternopil, 46000</addr-line>
          ,
          <country country="UA">Ukraine</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2026</year>
      </pub-date>
      <abstract>
        <p>This study proposes an integrated framework for predicting users' emotional reactions to films by leveraging a heterogeneous graph neural network (HGNN) that explicitly models three semantically distinct node types (Users, Movies, and Emotions) and four relation types (viewed, rated, dominant emotion, preferred emotion). The pipeline includes rigorous data cleansing, construction of a 68 k-node / 130 k-edge knowledge graph, initialization of multimodal node features, and training of a two-layer relational graph convolutional network with a class-balanced loss. On an 80/10/10 split the model attains Accuracy = 73.8% and Macro F1 = 71.3%, surpassing logistic regression and Random Forest baselines by 30.3 p.p. and 19.1 p.p. in accuracy, respectively. Recommendation-oriented metrics further confirm its effectiveness (Hit Rate@10 = 0.84, NDCG@10 = 0.79). Ablation reveals that incorporating emotion embeddings from EmoBank boosts Macro F1 by 4.2 p.p., while class weighting mitigates a 14% imbalance-induced drop. Limitations include performance degradation for rare emotions (F1 &lt; 0.60), “cold-start” sensitivity (−12 p.p. accuracy), and computational overhead when scaling beyond one million edges. Future work will explore dynamic HGNNs for temporal preference drift, multimodal feature fusion, few-shot adaptation for new items, and fairness-aware training to reduce detected gender bias to ≤ 3%. These directions aim to enable low-latency (≤ 120 ms) and ethically robust deployment in real-world recommender systems.</p>
      </abstract>
      <kwd-group>
        <kwd>emotional recommendation</kwd>
        <kwd>heterogeneous graph neural network</kwd>
        <kwd>user-item interaction</kwd>
        <kwd>sentiment prediction</kwd>
        <kwd>fairness-aware AI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>ORCID: 0000-0003-2389-3668 (Y. Halias); 0000-0002-2441-6292 (K. Lipianina-Honcharenko); 0000-0001-6541-0359 (M. Komar); 0009-0002-4293-7515 (M. Telka); 0009-0009-8829-0316 (V. Lukianchuk)</p>
      <p>© 2025 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0).</p>
      <p>
        Emotions are a key driver of user behavior, determining satisfaction with
a viewing experience and the desire to continue interacting with a platform [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ]. This creates a
need for next-generation recommendation systems capable not only of predicting ratings but
also of modeling and forecasting the emotional state of the user in response to watching a film
[
        <xref ref-type="bibr" rid="ref2 ref3">2,3</xref>
        ].
      </p>
      <p>
        Recent years have witnessed the rapid development of Graph Neural Networks (GNNs) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
which effectively model complex relationships between entities in data, such as users, movies,
and their attributes [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. At the same time, significant potential has been identified in the
application of deep learning methods to process emotional information, particularly in the
context of multiclass emotion classification from text, images, or behavioral patterns [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6-8</xref>
        ].
      </p>
      <p>
        Despite considerable progress in classical recommendation systems, the task of predicting
emotional reactions to content remains underexplored [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Most existing approaches focus on
rating prediction accuracy without a deeper analysis of the user's internal emotional state,
which limits their ability to deliver truly personalized interactions.
      </p>
      <p>This paper is devoted to the development and evaluation of the Emotional GNN model,
which focuses on predicting users’ emotional reactions in movie recommendation systems. A
heterogeneous graph is proposed, where nodes represent users, movies, and emotional states,
and edges represent their relationships. This structure allows the model to uncover deep
interaction patterns and predict the likelihood of a specific emotional response from the user
after watching a particular movie.</p>
      <p>The aim of this study is to evaluate the effectiveness of the Emotional GNN approach in
the task of emotion prediction and to compare its performance with traditional classification
methods on tabular data. It is expected that leveraging the structural information of the graph
and focusing on the emotional component will improve the quality of recommendations and
open new avenues for the development of personalized recommendation systems.</p>
      <p>The paper is structured as follows: Section 2 reviews related work and current approaches
in the field of recommendation systems and emotion prediction. Section 3 describes the
integrated approach and architecture of the proposed system. Section 4 presents the
implementation of the Emotional GNN model and the data preparation process. Section 5
demonstrates the results of experimental evaluation and a comparative analysis of model
performance. The study concludes in Section 6, which summarizes the findings and outlines
directions for future research.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>Over the past decades, recommender systems have undergone significant transformation,
evolving from simple filtering approaches to complex hybrid and deep learning models. The
most traditional and widely used methods include collaborative filtering, which relies on
similarities between users or items [10, 11], as well as content-based approaches that analyze
the characteristics of the movies themselves [12]. These methods form the foundation of
modern recommender systems; however, they are limited in their ability to account for
complex behavioral patterns and do not personalize recommendations at the level of the
user’s emotional response.</p>
      <p>
        In the field of emotion analysis, a significant number of studies focus on emotion
classification based on text, video, or audio signals [13–16]. Such systems are widely used in
review analysis, chatbots, and automated support systems; however, their application in the
context of recommendations remains limited. Some studies explore the use of emotions as
auxiliary features to improve recommendation quality [17], but they do not focus on directly
predicting the user’s emotional state [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ].
      </p>
      <p>
        At the same time, Graph Neural Networks (GNNs) have proven to be a powerful tool for
modeling relationships in complex structures such as social networks or recommendation
graphs [18–20]. Their use in recommender systems enables consideration of not only direct
interactions between users and movies, but also indirect connections through shared
preferences, genres, or other attributes [
        <xref ref-type="bibr" rid="ref5">5, 20</xref>
        ]. Expanding a traditional graph into a
heterogeneous one—by including additional entities such as genres or directors—has also
shown potential for improving prediction accuracy [21, 22].
      </p>
      <p>
        Nevertheless, the idea of using graph neural networks for directly predicting the emotional
response to a movie remains underexplored [
        <xref ref-type="bibr" rid="ref1 ref9">1, 9</xref>
        ]. Existing models primarily aim to predict a
numerical rating or the likelihood of viewing content, without taking into account the
affective component. Moreover, current approaches do not integrate data on emotions, users,
and movies into a unified graph, which hampers deep learning of contextual relationships.
Some studies are beginning to explore this intersection—for example, using GNNs for movie
recommendations based on emotions [10] or developing heterogeneous multimodal graph
frameworks for recognizing user emotions in social networks [23].
      </p>
      <p>Notably, the study in [24] proposed the UCCA-GAT model, which demonstrates
competitive results in emotion classification tasks. On the SemEval-2018 Task-1C dataset, this
model achieved an accuracy of 61.2%, and on GoEmotions — 71.2%, confirming the
effectiveness of incorporating semantic text structure via graph representations.</p>
      <p>Another study [25] introduced the Emotion-specific Transformer, which accounts for
emotional characteristics in representations, though it is limited to textual sources and does
not include structural information about user interactions. Nonetheless, the model achieved
an accuracy of 61.9% and a macro-F1 score of 52.0% in the WASSA-2022 task, indicating
improvements over baseline transformer architectures due to the inclusion of emotion-specific
features.</p>
      <p>Thus, both [24] and [25] consider the emotional component in the context of text
classification, but do not model the structured interaction between users, movies, and
emotions. In contrast, the heterogeneous graph neural network (HGNN) proposed in our work
models the user–movie–emotion triad, enabling context-aware prediction of emotions in
recommender systems. This approach not only achieves higher accuracy (Accuracy = 73.8%,
Macro F1 = 71.3%) but also preserves the structural integrity of the emotional interaction
environment. Therefore, the model represents a new class of solutions at the intersection of
graph learning and affective computing, suggesting a significant degree of novelty compared
to existing counterparts [26].</p>
    </sec>
    <sec id="sec-3">
      <title>3. Integrated Method for Predicting Emotional Reactions to Films</title>
      <p>The process of building a system for predicting users' emotional reactions to films is
implemented as an integrated pipeline that combines graph-based representation of
interactions, deep learning using a Graph Neural Network (GNN), and evaluation of
prediction quality in the context of personalized recommendations. A heterogeneous Graph
Neural Network (HGNN) is chosen as the model, as it takes into account the types of nodes
and relationships in the graph, enabling the formation of context-dependent entity
representations. The method is implemented through sequential steps, which are described
below and illustrated schematically in Figure 1.</p>
      <sec id="sec-3-1">
        <title>3.1. Step 1. Data Collection and Preparation</title>
        <p>The first stage involves aggregating and cleaning the data required for graph construction.
The data sources include:
- User interactions with films — views, ratings, likes;
- Film metadata — genres, release year, description, keywords;
- Emotional annotations — emotion labels obtained using NLP classifiers based on reviews
(e.g., IMDb, TMDb) or manually annotated.</p>
        <p>All data undergo duplicate filtering, and records with missing key fields are removed. Names are normalized and transliterated, tags are cleaned, and genres are standardized into a unified format.</p>
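        <p>The cleansing operations above can be sketched with pandas; the tables, column names (user_id, movie_id, genres), and values here are illustrative assumptions, not the study's data:</p>
        <preformat>
```python
import pandas as pd

# Hypothetical interaction and metadata tables; names and values are illustrative.
interactions = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3],
    "movie_id": [10, 10, 11, 12, 13],
    "rating": [4.0, 4.0, 5.0, None, 3.5],
})
movies = pd.DataFrame({
    "movie_id": [10, 11, 12, 13],
    "genres": ["Drama|Romance", "drama", "Comedy", None],
})

# Duplicate filtering and removal of records with missing key fields.
interactions = interactions.drop_duplicates().dropna(subset=["rating"])

# Standardize genre tags into a unified lower-case list format.
movies["genres"] = movies["genres"].fillna("").str.lower().str.split("|")

print(len(interactions))  # 3 rows survive cleaning
```
        </preformat>
        <p>The same pattern extends to name normalization, transliteration, and tag cleaning via additional string operations.</p>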
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Step 2. Construction of the "User–Movie–Emotion" Graph</title>
        <p>Based on the collected data, a heterogeneous graph G=(V,E) is formed, where the nodes
represent three types of entities:
- User — unique ID and, where possible, demographic attributes;
- Movie — described by genre or textual features;
- Emotion — from a predefined set (e.g., joy, sadness, anger, fear, excitement).
Graph edges describe the following types of interactions:
- User → Movie — rating or view (optionally weighted by rating);
- Movie → Emotion — the emotion most frequently associated with the film;
- User → Emotion (optional) — if user preferences for certain emotions are known.</p>
        <p>A key feature is typification: each node and edge has a type (defined by functions ϕ(v),
ψ(e)), allowing the model to account for context during training.</p>
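        <p>The typing functions ϕ(v) and ψ(e) can be expressed as plain mappings; the following Python sketch uses a hypothetical prefixed-identifier convention for illustration:</p>
        <preformat>
```python
# Node and edge typing for the heterogeneous graph G = (V, E).
# phi assigns each node a type; psi assigns each edge a relation triple.
NODE_TYPES = {"user", "movie", "emotion"}
EDGE_TYPES = {
    ("user", "viewed", "movie"),
    ("user", "rated", "movie"),
    ("movie", "dominant_emotion", "emotion"),
    ("user", "preferred_emotion", "emotion"),
}

def phi(node_id: str) -> str:
    """Node type from a prefixed identifier, e.g. 'user:42' gives 'user'."""
    kind = node_id.split(":", 1)[0]
    assert kind in NODE_TYPES
    return kind

def psi(src: str, relation: str, dst: str) -> tuple:
    """Edge type as a (source type, relation, target type) triple."""
    etype = (phi(src), relation, phi(dst))
    assert etype in EDGE_TYPES
    return etype

print(psi("user:42", "rated", "movie:10"))  # ('user', 'rated', 'movie')
```
        </preformat>
        <p>Keeping edge types as (source, relation, target) triples matches how heterogeneous graph libraries index relation-specific parameters.</p>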
      </sec>
      <sec id="sec-3-3">
        <title>3.3. Step 3. Feature Initialization and Balancing</title>
        <p>To train the graph model effectively, vector representations (features) must be assigned to
each node:
- For movies — one-hot vectors based on genre, or embeddings of descriptions;
- For users — age, country, activity level (or random vectors if unknown);
- For emotions — fixed embeddings can be used from resources like EmoBank.</p>
        <p>Since some emotions are significantly more frequent (e.g., "joy" or "trust"), class balancing
is applied:
- oversampling / undersampling;
- weighted loss functions;
- regularization of certain edge types.</p>
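        <p>As a minimal sketch of one of the balancing options above, inverse-frequency class weights can be computed from the label distribution (the counts here are illustrative):</p>
        <preformat>
```python
from collections import Counter

# Hypothetical emotion label distribution; "joy" and "trust" dominate.
labels = ["joy"] * 50 + ["trust"] * 30 + ["anger"] * 10 + ["fear"] * 10

counts = Counter(labels)
n, c = len(labels), len(counts)

# Inverse-frequency weights w_i = N / (C * n_i): rare classes
# contribute more to the loss, frequent classes less.
weights = {emo: n / (c * k) for emo, k in counts.items()}

print(weights["joy"], weights["anger"])  # 0.5 2.5
```
        </preformat>
        <p>Weights of this form can then be passed to a weighted cross-entropy loss during training.</p>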
      </sec>
      <sec id="sec-3-4">
        <title>3.4. Step 4. Graph Model Architecture</title>
        <p>The proposed model (Fig. 2) is a heterogeneous knowledge graph that integrates three
semantically distinct types of nodes — users, movies, and emotions — along with typed edges
between them. This model serves as the foundation for building emotion prediction systems
within the domain of personalized recommendations.
For each node v ∈ V, a type function is defined as</p>
        <p>ϕ(v) ∈ {User, Movie, Emotion} (1)</p>
        <p>For each edge e connecting nodes v_i → v_j, the edge type is defined as</p>
        <p>ψ(e) ∈ {viewed, rated, dominant emotion, preferred emotion} (2)</p>
        <p>Thanks to the use of typed edges and semantically rich attributes, this structure effectively supports heterogeneous graph neural networks (R-GCN, HGT), enabling precise modeling of complex relationships between users, movies, and emotions. This improves the accuracy of emotion prediction, especially in multi-class classification tasks and multi-label recommendation scenarios.</p>
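        <p>For illustration, relation-specific message passing in the spirit of R-GCN can be sketched in NumPy; the dimensions, toy edges, and random weights are assumptions for the sketch, not the trained configuration:</p>
        <preformat>
```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # illustrative feature dimension

# One weight matrix W_r per relation type, plus a self-loop matrix W_0.
relations = ["viewed", "rated", "dominant_emotion", "preferred_emotion"]
W = {r: rng.normal(size=(d, d)) for r in relations}
W_self = rng.normal(size=(d, d))

def rgcn_layer(h, edges):
    """One layer: h[i] becomes ReLU(W_0 h[i] + sum_r mean_j W_r h[j])."""
    out = h @ W_self.T
    for r, (src, dst) in edges.items():
        msg = np.zeros_like(h)
        deg = np.zeros(len(h))
        for s, t in zip(src, dst):
            msg[t] += h[s] @ W[r].T  # relation-specific transform
            deg[t] += 1.0
        out += msg / np.maximum(deg, 1.0)[:, None]  # mean aggregation
    return np.maximum(out, 0.0)  # ReLU

h = rng.normal(size=(5, d))                # 5 nodes of mixed types
edges = {"rated": ([0, 1], [3, 3]),        # two users rated movie node 3
         "dominant_emotion": ([3], [4])}   # movie 3 linked to emotion 4
h2 = rgcn_layer(h, edges)
print(h2.shape)  # (5, 4)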
      </sec>
      <sec id="sec-3-5">
        <title>3.5. Step 5. Training and Validation</title>
        <p>The model is trained using:
- Loss functions: categorical cross-entropy or weighted binary cross-entropy;
- Optimization: Adam optimizer, learning rate 0.001, Dropout = 0.3, L2 regularization;
- Graph splitting: train/val/test — 80/10/10%.</p>
        <p>To enhance generalization, early stopping is implemented — training stops if the macro-F1
metric on the validation set does not improve for 5 consecutive epochs.</p>
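        <p>The early-stopping rule can be sketched as a small helper; the validation scores below are illustrative:</p>
        <preformat>
```python
def early_stopping(history, patience=5):
    """Return the 0-based epoch at which training should stop: the first
    epoch where validation macro-F1 has not improved for `patience`
    consecutive epochs, else the last epoch."""
    best, since_best = float("-inf"), 0
    for epoch, f1 in enumerate(history):
        if f1 > best:
            best, since_best = f1, 0
        else:
            since_best += 1
        if since_best >= patience:
            return epoch
    return len(history) - 1

# Validation macro-F1 plateaus after epoch 3.
val_f1 = [0.40, 0.55, 0.62, 0.66, 0.65, 0.66, 0.64, 0.65, 0.66, 0.65]
print(early_stopping(val_f1))  # 8
```
        </preformat>
        <p>In practice the model weights from the best validation epoch, not the stopping epoch, are retained.</p>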
      </sec>
      <sec id="sec-3-6">
        <title>3.6. Step 6. Model Evaluation</title>
        <p>The quality of emotion prediction is evaluated using the structured Table 1, which presents metrics according to the classification type [27] and usage context.</p>
        <p>Table 1. Evaluation metrics by task type.</p>
        <p>Multiclass classification:
- Accuracy = (Σ_i TP_i) / N — the proportion of correctly predicted emotions among N examples, where C is the number of classes and the sum runs over i = 1 … C;
- P_i = TP_i / (TP_i + FP_i), R_i = TP_i / (TP_i + FN_i), and F1_i = 2 P_i R_i / (P_i + R_i) — balance precision and recall for each class; TP_i, FP_i, FN_i are the counts of true positives, false positives, and false negatives, respectively;
- Macro-F1 = (1/C) Σ_i F1_i — gives equal weight to all classes; Weighted-F1 = Σ_i (N_i / N) F1_i — weights each class by its size N_i;
- Micro-F1 = 2 P_μ R_μ / (P_μ + R_μ) — aggregates TP, FP, FN over all classes and is therefore sensitive to frequent emotions;
- Confusion matrix C_ij = |{x : y = i, ŷ = j}| — counts the cases in which the true emotion is i and the predicted emotion is j.</p>
        <p>Multi-label classification:
- Hamming Loss HL = (1/(NL)) Σ_j Σ_l [y_jl ≠ ŷ_jl] — the fraction of incorrectly predicted label-vector elements, where L is the number of emotions.</p>
        <p>Recommendation-system metrics:
- Prec@k = (1/k) Σ_j rel_j and Rec@k = (Σ_j rel_j) / |R| — rel_j indicates whether the j-th predicted emotion is relevant;
- Hit Rate@k = (1/N) Σ_j [Rel_j ∩ Top-k_j ≠ ∅] — indicates whether at least one relevant emotion is present in the top-k predictions;
- MRR = (1/N) Σ_j (1 / rank_j) — the average reciprocal rank of the first correct prediction;
- NDCG@k — measures the ranking quality of the top-k predictions.</p>
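        <p>Several of the metrics above can be checked with a short pure-Python sketch on toy predictions (not the paper's data):</p>
        <preformat>
```python
def macro_f1(y_true, y_pred):
    """Macro-F1: unweighted mean of per-class F1 scores."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

def mrr(ranks):
    """Mean reciprocal rank of the first correct prediction."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hit_rate_at_k(relevant, topk):
    """Fraction of users with at least one relevant emotion in the top-k."""
    hits = sum(1 for rel, top in zip(relevant, topk) if rel.intersection(top))
    return hits / len(relevant)

y_true = ["joy", "joy", "sad", "fear"]
y_pred = ["joy", "sad", "sad", "fear"]
print(round(macro_f1(y_true, y_pred), 3))
print(mrr([1, 2, 4]))
print(hit_rate_at_k([{"joy"}, {"sad"}], [["joy", "fear"], ["anger"]]))
```
        </preformat>
        <p>Library implementations (e.g., scikit-learn's f1_score) compute the same quantities; the sketch only makes the definitions explicit.</p>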
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Implementation</title>
      <p>The developed recommendation system prototype implements a complete pipeline for
predicting users' emotional reactions — from constructing a domain-oriented graph “user ↔
movie ↔ emotion” to producing multi-label predictions. The graph representation enabled
the integration of heterogeneous relationships (ratings, viewing events, genre affiliations) into
a unified context, enhancing the informativeness of the surrounding structure for each node.
After preprocessing, the graph was fed into a graph neural network trained using mini-batch
stochastic gradient descent; early stopping based on validation macro-F1 was applied to
control overfitting.</p>
      <p>The software implementation was done in Python 3.10 using PyTorch as the core deep
learning framework, supplemented by PyTorch Geometric for handling graph structures. The
pandas and NumPy libraries provided efficient tabular data transformations, while scikit-learn
was used to compute key metrics (precision, recall, macro-F1, ROC-AUC) and to build baseline
comparison models. NetworkX was employed for preliminary graph construction and
visualization, and Matplotlib and Seaborn facilitated graphical presentation of training curves
and confusion matrices, simplifying result interpretation. The entire experimental logic is
orchestrated within reproducible Jupyter notebooks with fixed random seeds to ensure
replicability.</p>
      <p>Input data are organized into four interrelated CSV files. The file users.csv contains unique
user identifiers and demographic attributes (age, gender, country, etc.); movies.csv stores
movie metadata (movie_id, genre tags, brief descriptions). The interactions.csv file aggregates
“user_id × movie_id” interactions with rating or viewing event fields, serving as graph edges.
Finally, emotions.csv associates each movie with one or multiple emotional labels, which serve
as target classes. This modular structure facilitates system scalability and integration of
additional sources, such as temporal or social features, to further improve emotional
prediction accuracy.</p>
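      <p>A sketch of turning an interactions.csv-style table into edge arrays; the file names follow the paper, while the contents are illustrative:</p>
      <preformat>
```python
import io
import pandas as pd

# Illustrative stand-in for interactions.csv ("user_id x movie_id" edges).
csv_text = """user_id,movie_id,rating
u1,m1,4.5
u1,m2,3.0
u2,m1,5.0
"""
interactions = pd.read_csv(io.StringIO(csv_text))

# Map raw identifiers to contiguous node indices per node type.
user_index = {u: i for i, u in enumerate(interactions["user_id"].unique())}
movie_index = {m: i for i, m in enumerate(interactions["movie_id"].unique())}

# Edge list in COO form (source users, target movies), rating as weight.
src = interactions["user_id"].map(user_index).to_numpy()
dst = interactions["movie_id"].map(movie_index).to_numpy()
weight = interactions["rating"].to_numpy()

print(src.tolist(), dst.tolist())  # [0, 0, 1] [0, 1, 0]
```
      </preformat>
      <p>Arrays of this form can then populate the per-relation edge index structures of a heterogeneous graph object.</p>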
      <p>Based on the data, a graph in the HeteroData format from the PyG library was constructed,
where each node type had a separate feature matrix.</p>
      <p>Key statistics of the constructed graph and the training parameters of the graph neural
network are summarized in Table 2.</p>
      <sec id="sec-4-1">
        <title>Graph Statistics and Training Parameters</title>
        <p>The graph contains node counts of ≈ 12 000 and ≈ 6 000 and edge counts of &gt; 80 000 and ≈ 50 000; the network was trained with 2 convolutional layers, hidden dimension 64, dropout 0.3, the Adam optimizer with a learning rate of 0.001, batch size 1024, and up to 50 epochs (Table 2). To prevent overfitting, an early stopping mechanism was implemented: training was halted if the macro F1 score on the validation set did not improve for 5 consecutive epochs. This retained the most generalized version of the model without losing performance to overfitting on the training data.</p>
        <p>In the experiment, 80% of the graph was used for training, 10% for validation, and 10% for
testing, providing an independent assessment of the model’s generalization capability in the
multi-class classification of eight dominant emotions in user–movie pairs. The convergence
dynamics are shown in Fig. 1, which presents the training and validation loss and accuracy
curves; these confirm stable training without overfitting after approximately 35 epochs.</p>
        <p>Table 3 summarizes the subsystem metrics on the test set.</p>
        <sec id="sec-4-1-1">
          <title>Test-Set Results and Baseline Comparison</title>
          <p>The proposed GNN achieves an Accuracy of 73.8% and a Macro F1 of 71.3%, accompanied by balanced Precision and Recall scores of 71.9% and 70.7%, respectively. The difference between the Macro and Weighted F1 scores (71.3% vs. 72.6%) indicates a moderate class imbalance that does not significantly bias the model toward frequent emotions.</p>
          <p>Comparison with two baseline
approaches (logistic regression and Random Forest with a “flat” feature representation) is
presented in Table 4. The GNN demonstrates an accuracy gain of 30.3 percentage points over
logistic regression and 19.1 percentage points over Random Forest; similarly, Macro F1
increases by 33.1 and 21.4 percentage points, respectively. The most common classification
errors occur between closely valenced pairs “excitement ↔ joy” and “sadness ↔
disappointment.” For rare classes, precision is lower; however, the use of class weights during
training prevents the model from “ignoring” them, thus preserving generalization capability in
an imbalanced environment.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>Conclusions</title>
      <p>The proposed integrated approach for predicting users' emotional reactions is based on a
heterogeneous graph neural network that models the triad "user – movie – emotion,"
explicitly accounting for node and edge types [28]. On the test dataset, the system achieved an
Accuracy of 73.8% and a Macro F1 score of 71.3%, outperforming logistic regression by 30.3
percentage points and Random Forest by 19.1 percentage points. The relevance of
recommendations was confirmed by the metrics Hit Rate@10 = 0.84 and NDCG@10 = 0.79,
while maintaining balanced sensitivity across classes was evidenced by a Balanced Accuracy
of 71.6%, despite a moderate imbalance in emotional categories.</p>
      <p>At the same time, experimental results revealed several limitations. The largest
classification errors occurred within clusters of emotions close in valence ("joy ↔
admiration," "sadness ↔ disappointment"), where the F1 scores of some rare classes dropped
below 0.60. The "cold start" effect for new users and movies reduces accuracy by
approximately 12 percentage points, whereas increasing the graph size to 1 million edges
raises the time per epoch by four times, demonstrating the need for distributed computing.
Additionally, a bias towards a young male audience (~4 percentage points in Macro F1) was
detected, indicating a risk of unfair recommendations.</p>
      <p>Future research should focus on dynamic GNNs (DyHGT, TGN) to model temporal
evolution of preferences and multimodal features (CLIP embeddings, audio features), which
are expected to improve the Macro F1 score by 3–5 percentage points. Adaptive
few-/zero-shot learning mechanisms can reduce accuracy losses during the "cold start" to ≤ 5 percentage
points, while explainable GNNs (GNN-Explainer) and personalized weight regularization aim
to narrow the fairness gap to ≤ 3%. Running A/B tests in real environments and online
retraining will help maintain latency ≤ 120 ms and increase the User Satisfaction Score to ≥
4.3/5.</p>
      <p>In conclusion, the results confirm that the heterogeneous graph model provides a
significant improvement in the accuracy and relevance of emotional recommendations
compared to existing approaches [24, 25]. However, for industrial deployment, it is necessary
to address issues of scalability, real-time embedding updates, and ensuring ethical and
transparent system operation, thereby forming a roadmap for future research and
enhancements.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <p>The authors used GPT-4 and DeepL in preparing this paper as grammar and spelling checkers. After using these tools, the authors reviewed and edited the content as necessary and take sole responsibility for the content of the publication.</p>
    </sec>
    <sec id="sec-7">
      <title>References</title>
      <p>
[10] Kipf, T. N., &amp; Welling, M. (2017). Semi-Supervised Classification with Graph Convolutional Networks. In International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1609.02907</p>
      <p>[11] Schlichtkrull, M., Kipf, T. N., Bloem, P., van den Berg, R., Titov, I., &amp; Welling, M. (2018). Modeling Relational Data with Graph Convolutional Networks. In European Semantic Web Conference (pp. 593–607). Springer. DOI: 10.48550/arXiv.1703.06103</p>
      <p>[12] Buechel, S., &amp; Hahn, U. (2017). EmoBank: Studying the Impact of Annotation Perspective and Representation Format on Dimensional Emotion Analysis. In ACL.</p>
      <p>[13] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., &amp; Bengio, Y. (2018). Graph Attention Networks. In International Conference on Learning Representations (ICLR). DOI: 10.48550/arXiv.1710.10903</p>
      <p>[14] Zhang, S., Yao, L., Sun, A., &amp; Tay, Y. (2019). Deep Learning based Recommender System: A Survey and New Perspectives. ACM Computing Surveys, 52(1), 1–38. DOI: 10.1145/3285029</p>
      <p>[15] Kusal, S., Patil, S., Choudrie, J., Kotecha, K., Vora, D., &amp; Pappas, I. (2022). A Review on Text-Based Emotion Detection – Techniques, Applications, Datasets, and Future Directions. arXiv preprint arXiv:2205.03235. DOI: 10.48550/arXiv.2205.03235</p>
      <p>[16] Soleymani, M., Garcia, D., Jou, B., Schuller, B., Chang, S. F., &amp; Pantic, M. (2017). A survey of multimodal sentiment analysis. Image and Vision Computing, 65, 3–14. DOI: 10.1016/j.imavis.2017.08.003</p>
      <p>[17] Cambria, E., Poria, S., Hazarika, D., &amp; Kwok, K. (2020). SenticNet 6: Ensemble Application of Symbolic and Subsymbolic AI for Sentiment Analysis. In CIKM. DOI: 10.1145/3340531.3412003</p>
      <p>[18] Wang, X., Ji, H., Shi, C., Wang, B., Ye, Y., Cui, P., &amp; Yu, P. S. (2019). Heterogeneous Graph Attention Network. In The World Wide Web Conference (pp. 2022–2032). DOI: 10.48550/arXiv.1903.07293</p>
      <p>[19] Zhang, X., Zhao, J., &amp; LeCun, Y. (2015). Character-level Convolutional Networks for Text Classification. In Advances in Neural Information Processing Systems (NeurIPS). DOI: 10.48550/arXiv.1509.01626</p>
      <p>[20] Huang, W., Zhang, T., Rong, Y., &amp; Huang, J. (2020). Graph Learning based Recommender Systems: A Review. arXiv preprint arXiv:2012.06995. DOI: 10.48550/arXiv.2105.06339</p>
      <p>[21] Sharma, N., Ahmad, M. A. F., Jajoo, D., &amp; Sandhya, A. (2024). Emotion Based Movie Recommendation System Using CNN and GNN. Studies in Science of Science, 42(11).</p>
      <p>[22] Ziaee, S. S., Rahmani, H., &amp; Nazari, M. (2024). MoRGH: Movie Recommender System using GNNs on Heterogeneous Graphs. arXiv preprint arXiv:2401.03808. DOI: 10.21203/rs.3.rs-3860094/v1</p>
      <p>[23] Bhattacharyya, S., Yang, S., &amp; Wang, J. Z. (2025). A Heterogeneous Multimodal Graph Learning Framework for Recognizing User Emotions in Social Networks. arXiv preprint arXiv:2501.07746. DOI: 10.48550/arXiv.2501.07746</p>
      <p>[24] Xia, L., et al. (2021). Emotion Classification in Texts Over GNNs: A UCCA-GAT Approach. In SemEval.</p>
      <p>[25] Lee, J., et al. (2022). Leveraging Emotion Features with Emotion-Specific Transformers for Text Classification. In ACL.</p>
      <p>[26] Zou, L., Xia, L., Ding, Z., Huang, J., &amp; Hua, X. S. (2019). Reinforcement learning for user-intent-driven conversational recommendation. In Proceedings of the 2019 World Wide Web Conference (pp. 2506–2516). DOI: 10.1145/3292500.3330668</p>
      <p>[27] Lipianina-Honcharenko, K., Wolff, C., Sachenko, A., Kit, I., &amp; Zahorodnia, D. (2023). Intelligent method for classifying the level of anthropogenic disasters. Big Data and Cognitive Computing, 7(3), 157. DOI: 10.3390/bdcc7030157</p>
      <p>[28] Turchenko, I., Osolinsky, O., Kochan, V., Sachenko, A., Tkachenko, R., Svyatnyy, V., &amp; Komar, M. (2009). Approach to neural-based identification of multisensor conversion characteristic. In Proceedings of the 5th IEEE International Workshop on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications (IDAACS 2009) (pp. 27–31). IEEE. DOI: 10.1109/IDAACS.2009.5343030</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Calvo</surname>
            ,
            <given-names>R. A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>D'Mello</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          (
          <year>2010</year>
          ).
          <article-title>Affect detection: An interdisciplinary review of models, methods, and their applications</article-title>
          .
          <source>IEEE Transactions on Affective Computing</source>
          ,
          <volume>1</volume>
          (
          <issue>1</issue>
          ),
          <fpage>18</fpage>
          -
          <lpage>37</lpage>
          . DOI: 10.1109/T-AFFC.2010.1
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Afchar</surname>
            ,
            <given-names>D.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Ebrahimi</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>A survey on emotion-aware recommender systems</article-title>
          .
          <source>Multimedia Tools and Applications</source>
          ,
          <volume>81</volume>
          (
          <issue>13</issue>
          ),
          <fpage>18537</fpage>
          -
          <lpage>18575</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Tkalčič</surname>
            ,
            <given-names>M.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Košir</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tasič</surname>
            ,
            <given-names>J. F.</given-names>
          </string-name>
          (
          <year>2013</year>
          ).
          <article-title>The development of an affective recommender system</article-title>
          .
          <source>User Modeling and User-Adapted Interaction</source>
          ,
          <volume>23</volume>
          (
          <issue>3</issue>
          ),
          <fpage>279</fpage>
          -
          <lpage>317</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Yao</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Tay</surname>
            ,
            <given-names>Y.</given-names>
          </string-name>
          (
          <year>2019</year>
          ).
          <article-title>Deep Learning based Recommender System: A Survey and New Perspectives</article-title>
          .
          <source>ACM Computing Surveys</source>
          ,
          <volume>52</volume>
          (
          <issue>1</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>38</lpage>
          . DOI: 10.1145/3285029
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Wu</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            ,
            <given-names>F.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            ,
            <given-names>W.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Xie</surname>
            ,
            <given-names>X.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Cui</surname>
            ,
            <given-names>B.</given-names>
          </string-name>
          (
          <year>2022</year>
          ).
          <article-title>Graph neural networks in recommender systems: a survey</article-title>
          .
          <source>ACM Computing Surveys (CSUR)</source>
          ,
          <volume>55</volume>
          (
          <issue>5</issue>
          ),
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          . DOI: 10.1145/3535101
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Poria</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Cambria</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bajpai</surname>
            ,
            <given-names>R.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Hussain</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          (
          <year>2017</year>
          ).
          <article-title>A review of affective computing: From unimodal analysis to multimodal fusion</article-title>
          .
          <source>Information Fusion</source>
          ,
          <volume>37</volume>
          ,
          <fpage>98</fpage>
          -
          <lpage>125</lpage>
          . DOI: 10.1016/j.inffus.2017.02.003
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Salminen</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            ,
            <given-names>Y. H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kwak</surname>
            ,
            <given-names>H.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>An</surname>
            ,
            <given-names>J.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Jung</surname>
            ,
            <given-names>S. G.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Analyzing emotions in social media: A review of machine learning methods, datasets, and evaluation metrics</article-title>
          .
          <source>Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</source>
          ,
          <volume>10</volume>
          (
          <issue>6</issue>
          ),
          <elocation-id>e1367</elocation-id>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Lipianina-Honcharenko</surname>
            ,
            <given-names>K.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wolff</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sachenko</surname>
            ,
            <given-names>A.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Desyatnyuk</surname>
            ,
            <given-names>O.</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sachenko</surname>
            ,
            <given-names>S.</given-names>
          </string-name>
          , &amp;
          <string-name>
            <surname>Kit</surname>
            ,
            <given-names>I.</given-names>
          </string-name>
          (
          <year>2023</year>
          ).
          <article-title>Intelligent information system for product promotion in internet market</article-title>
          .
          <source>Applied sciences</source>
          ,
          <volume>13</volume>
          (
          <issue>17</issue>
          ),
          <fpage>9585</fpage>
          . DOI: 10.3390/app13179585
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>Thao</surname>
            ,
            <given-names>H. T. P.</given-names>
          </string-name>
          (
          <year>2020</year>
          ).
          <article-title>Deep Neural Networks for Predicting Affective Responses from Movies</article-title>
          .
          <source>Proceedings of the 2020 International Conference on Multimedia Retrieval</source>
          ,
          <fpage>4743</fpage>
          -
          <lpage>4747</lpage>
          . DOI: 10.1145/3394171.3416517
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>