<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Context-Based Higher-Order Relation-Aware Denoising GNN for Recommendations*</article-title>
      </title-group>
      <contrib-group>
<contrib contrib-type="author">
          <string-name>Haijun Liu</string-name>
          <aff>Fiberhome Telecommunication Technologies Co., Wuhan, China</aff>
        </contrib>
      </contrib-group>
      <abstract>
<p>Recommender systems serve as intelligent tools to alleviate information overload and provide personalised services to users. Existing recommender systems that rely on user behavior data often face the problem of data sparsity. Moreover, graph neural network-based recommendation algorithms model user feature representations by treating neighbours equally, ignoring the inconsistency of neighbour preferences in a given context. In this paper, we propose CRDG, a GNN model based on context-aware denoising. Specifically, we first construct user similarity graphs and item relevance graphs from historical interaction data and capture useful information from implicit neighbours with similar preferences through a higher-order relation model, alleviating the data sparsity problem that is exacerbated in existing denoising-based contextual recommendation. Then, to address the context inconsistency problem, we propose a denoising GNN model that aggregates information from contextually consistent neighbours. In addition, to refine the influence of different types of neighbours, we propose a dual-attention model that assigns different influence weights to different neighbours. Experimental results on several real datasets demonstrate the superiority and effectiveness of the proposed model.</p>
      </abstract>
      <kwd-group>
<kwd>graph neural networks</kwd>
        <kwd>recommendation system</kwd>
        <kwd>graph attention</kwd>
        <kwd>denoising</kwd>
        <kwd>context consistency</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        The massive amount of information generated due to the rapid development of the Internet has
brought about the problem of information overload. Recommender systems are one of the main
solutions for addressing this problem. Among the many recommendation algorithms, collaborative
filtering (CF) has received extensive attention from researchers due to its simplicity and efficiency
[
        <xref ref-type="bibr" rid="ref1 ref2">1,2</xref>
        ]. However, CF-based recommendation algorithms often face the problem of data sparsity [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
        In recent years, graph neural networks (GNNs) have shown great advantages in recommendation
systems by virtue of their powerful modeling capabilities on non-Euclidean data [
        <xref ref-type="bibr" rid="ref4">4</xref>
]. The GC-MC
model proposed by Berg et al. applies GCNs to a matrix completion task with edge information and
converts the matrix completion task into a link prediction problem, which is modelled using an
end-to-end graph auto-encoder [
        <xref ref-type="bibr" rid="ref5">5</xref>
]. Wang et al. proposed NGCF, which improves
recommendation by stacking multiple embedding propagation layers to capture higher-order
connectivity in the user-item graph [
        <xref ref-type="bibr" rid="ref6">6</xref>
]. While all of the above recommendation methods
demonstrate strong performance, most current algorithms treat neighbors' information equally and
do not consider it within a specific recommendation context, leading to the problem of contextual
inconsistency. For example, considering user u1's preference for digital products, we argue that
aggregating information from sports items is potentially noisy, because users' requirements differ
across different types of items. Some previous studies have reduced the impact of contextually
inconsistent connections by filtering out some first-order neighbors [
        <xref ref-type="bibr" rid="ref7">7</xref>
], but this strategy exacerbates data sparsity. To
address this problem, we propose to leverage higher-order relations to capture useful information for
target users from implicit neighbors with similar preferences. For example, in Fig. 1, based on
historical interaction information, both user u2 and user u3 can provide effective information for
predicting user u1's preferences. However, in the recommendation scenario of digital products, user
u3 can provide more effective information for modelling u1's preferences, while aggregating user u2
may introduce noise: the common interactions of u2 and u1 lie mainly in the sports domain, whereas
the two users exhibit dissimilar preferences in the digital domain. In addition, exploiting item
correlation is also helpful, because an item and its related items are likely to be purchased together
by a specific group of people; a well-known example is beer and diapers. Most traditional CF-based
algorithms utilize these findings, but GNN-based models ignore them.
      </p>
<p>The above analysis shows the drawbacks of ignoring specific contexts (i.e., a single user may have
different context-consistent neighbors for different items). To this end, this paper proposes CRDG, a
context-aware GNN recommendation model that aggregates useful information from contextually
consistent neighbors. First, CRDG constructs user similarity collaboration graphs and item relevance
collaboration graphs from user-item historical interactions, and models the impact of implicit
neighbors in the user similarity graph and the role of item associations in the item relevance graph
through a higher-order relationship-aware module. Next, to mitigate the effect of context-inconsistent
neighbors, we construct a context-aware denoising module, which removes context-inconsistent
neighbors by sampling and aggregates information only from context-consistent neighbors. Then, to
refine the influence of neighbors, we propose a dual-attention model to assign weights to contextually
consistent neighbors. Finally, we concatenate the initial feature representations from the higher-order
relation-aware module with the final features from consistent-neighbor aggregation for
recommendation prediction. Extensive experiments on real datasets demonstrate the effectiveness of
our model.</p>
    </sec>
    <sec id="sec-2">
<title>2. Problem formulation</title>
<p>Let the user-item interaction bipartite graph G = (U, V, E_UV) consist of two different types of
node sets (the user set U = {u_1, u_2, ..., u_m} and the item set V = {v_1, v_2, ..., v_n}) and the edge
set E_UV, where m and n denote the numbers of users and items, respectively. We denote the
interaction matrix as A ∈ {0, 1}^(m×n), where a_{u,v} = 1 if and only if user u has interacted with
item v, i.e., (u, v) ∈ E_UV; otherwise a_{u,v} = 0.</p>
      <p>Formally, the recommendation algorithm aims to construct an interaction prediction matrix
R ∈ R^(m×n) between users and items based on the user-item interaction data: it first learns the
latent feature representations of users and items, and then predicts user-item interactions based on
the product of these feature representations:</p>
      <p>r_uv = e_u · e_v   (1)</p>
      <p>where e_u ∈ R^d and e_v ∈ R^d denote the final embeddings of the user and the item,
respectively, and d is the embedding dimension.</p>
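The inner-product prediction of Eq. (1) amounts to a matrix product of the user and item embedding tables. A minimal NumPy sketch (the function name and toy dimensions are our own illustration, not from the paper):

```python
import numpy as np

def predict_scores(user_emb: np.ndarray, item_emb: np.ndarray) -> np.ndarray:
    """Predicted interaction matrix R = E_U @ E_V^T (Eq. 1: r_uv = e_u . e_v)."""
    return user_emb @ item_emb.T

# Toy example: m=2 users, n=3 items, embedding dimension d=4.
rng = np.random.default_rng(0)
E_u = rng.normal(size=(2, 4))
E_v = rng.normal(size=(3, 4))
R = predict_scores(E_u, E_v)
assert R.shape == (2, 3)
```
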
    </sec>
    <sec id="sec-3">
      <title>3. CRDG model</title>
<p>This section first outlines the overall framework of the proposed model; the overall architecture
is shown in Fig. 2. The four main components of the model are then described in detail.</p>
      <sec id="sec-3-1">
<title>Model overview</title>
        <p>Higher-order relationship awareness module: In this module, we construct user similarity
graphs and item relevance graphs based on historical user-item interactions, and then generate
preliminary feature representations of users and items by aggregating implicit similarity information
within the neighborhood via a higher-order relation-aware graph neural network (RGNN).</p>
        <p>Context-aware denoising module: In this module, we first construct embedded representations
of context pairs through the query layer, and then filter and denoise the candidate neighbors to
obtain context-consistent neighbors.</p>
<p>Prediction module: In this part, we concatenate the preliminary representations of users and
items obtained from the RGNN with the feature representations based on context-consistent neighbor
aggregation as the final representations, and then output the prediction results based on the inner
product of the feature representations.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.1. Higher-order relationship awareness module</title>
      </sec>
      <sec id="sec-3-3">
        <title>3.1.1. Collaboration graph construction</title>
<p>In order to capture the influence of implicit neighbors during feature representation, based on the
original user-item interaction bipartite graph G = (U, V, E_UV), we construct a user similarity graph
G_U^s = (U, E_U^s) and an item correlation graph G_V^r = (V, E_V^r), where E_U^s and E_V^r are
the edge sets of the two collaboration graphs, respectively:</p>
        <p>sim(u_i, u_j) = |N_G^V(u_i) ∩ N_G^V(u_j)| / sqrt(|N_G^V(u_i)| · |N_G^V(u_j)|)   (2)</p>
        <p>where N_G^V(u_i) and N_G^V(u_j) denote the neighbors of u_i and u_j on the user-item
bipartite graph G, i.e., their sets of interacted items. If sim(u_i, u_j) &gt; τ, we add an edge between
u_i and u_j, where τ is a hyperparameter. Similarly, we compute the correlation between items as
follows:</p>
        <p>rel(v_i, v_j) = |N_G^U(v_i) ∩ N_G^U(v_j)| / sqrt(|N_G^U(v_i)| · |N_G^U(v_j)|)   (3)</p>
        <p>where N_G^U(v_i) and N_G^U(v_j) denote the neighbors of v_i and v_j on the user-item
bipartite graph G, i.e., the sets of users who have interacted with v_i and v_j, respectively. Similarly,
if rel(v_i, v_j) &gt; ξ, we add an edge between v_i and v_j, where ξ is a hyperparameter.</p>
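The user-graph construction can be sketched as follows, reading Eq. (2) as a cosine-style overlap whose denominator is the geometric mean of the two neighborhood sizes (the item graph of Eq. (3) is built symmetrically); the function name and toy interaction matrix are our own illustration:

```python
import numpy as np
from itertools import combinations

def build_user_similarity_edges(A: np.ndarray, tau: float) -> set:
    """Add an edge (ui, uj) when the cosine-style overlap of their item
    neighborhoods exceeds tau; A is the m x n 0/1 interaction matrix."""
    edges = set()
    for i, j in combinations(range(A.shape[0]), 2):
        inter = np.sum(A[i] * A[j])                # |N(ui) ∩ N(uj)|
        denom = np.sqrt(A[i].sum() * A[j].sum())   # sqrt(|N(ui)| * |N(uj)|)
        if denom > 0 and inter / denom > tau:
            edges.add((i, j))
    return edges

# Toy matrix: users 0 and 1 share two items, user 2 shares none.
A = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 0, 1]])
edges = build_user_similarity_edges(A, tau=0.5)
assert edges == {(0, 1)}
```
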
      </sec>
      <sec id="sec-3-4">
<title>3.1.2. Higher-order relationship-aware graph neural network</title>
        <p>We construct a higher-order relationship-aware graph neural network (RGNN) that can effectively
aggregate the relevant information in the neighboring nodes in the two collaborative graphs, and the
detailed process is as follows.</p>
<p>Aggregating similar user neighbours: For each user u_i, given its layer-l feature representation
x_{u_i}^l, we update the user feature representation at layer l+1 as follows:</p>
        <p>x_{u_i}^{l+1} = Σ_{u_j ∈ N_U^s(u_i)} σ(W_1^{s,l} x_{u_i}^l + W_2^{s,l} (x_{u_i}^l ⊙ x_{u_j}^l))   (4)</p>
        <p>where W_1^{s,l} and W_2^{s,l} are the learnable parameters of layer l, σ is the LeakyReLU
activation function, ⊙ denotes the element-wise product, and N_U^s(u_i) denotes the neighbourhood
of u_i on G_U^s, i.e., the similar user neighbourhood of user u_i.</p>
        <p>Aggregating related item neighbours: Similarly, for each item v_i, given its layer-l feature
representation x_{v_i}^l, we update the item feature representation at layer l+1 as follows:</p>
        <p>x_{v_i}^{l+1} = Σ_{v_j ∈ N_V^r(v_i)} σ(W_1^{r,l} x_{v_i}^l + W_2^{r,l} (x_{v_i}^l ⊙ x_{v_j}^l))   (5)</p>
        <p>where W_1^{r,l} and W_2^{r,l} are the learnable parameters of layer l, and N_V^r(v_i)
denotes the neighbourhood of v_i on G_V^r.</p>
        <p>Finally, after L iterative propagation steps, we obtain the sets of user and item feature
representations x_u^l and x_v^l, where l = [0, 1, 2, ..., L]. Concatenating the feature representations
at each layer yields the preliminary similarity-based user representation
x_u^s = [x_u^0 || x_u^1 || ... || x_u^L] and the preliminary correlation-based item representation
x_v^r = [x_v^0 || x_v^1 || ... || x_v^L], where x_u^0 and x_v^0 are the original input feature
representations.</p>
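One propagation layer of the RGNN update in Eq. (4) can be sketched as follows, assuming the nonlinearity is applied to each neighbor's message before summation; the function and variable names are our own illustration:

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def rgnn_layer(X: np.ndarray, neighbors: dict, W1: np.ndarray, W2: np.ndarray) -> np.ndarray:
    """For each node i, sum over neighbors j of
    sigma(W1 x_i + W2 (x_i * x_j)), sigma = LeakyReLU (Eq. 4 sketch)."""
    X_next = np.zeros_like(X)
    for i, neigh in neighbors.items():
        for j in neigh:
            X_next[i] += leaky_relu(W1 @ X[i] + W2 @ (X[i] * X[j]))
    return X_next

rng = np.random.default_rng(1)
d = 4
X = rng.normal(size=(3, d))              # 3 users, dimension d
neighbors = {0: [1, 2], 1: [0], 2: [0]}  # edges of the similarity graph
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
X1 = rgnn_layer(X, neighbors, W1, W2)
assert X1.shape == X.shape
```
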
      </sec>
      <sec id="sec-3-5">
<title>3.2. Context-aware denoising module</title>
        <p>To solve the context-inconsistency problem, we design a context-aware denoising model,
which consists of a query layer and a context-consistency-based denoising module.</p>
      </sec>
      <sec id="sec-3-6">
        <title>3.2.1. Query layer</title>
        <p>eu00
 ev
=WU xus , u ∈U
=WV xvr , v ∈V</p>
        <p>A recommendation context refers to a user-item pair of a user's preference for a specific
item , and in order to capture the representation of a specific context, CRDG builds the query layer to
(u, v)
select context-consistent neighbors specifically for a specific recommendation context .
Specifically, it generates context embeddings by mapping the connection between the initial
embeddings of users and items:
qu,v
=(Wq σ ( xus ⊕ xvr ))</p>
        <p>s r
where qu,v is the context embedding, xu and xv are the preliminary feature representations of u</p>
        <p>W
and v , respectively. q is the learnable parameter, and ⊕ denotes the connection operation. Based on
the query layer, we can dynamically sample neighbors according to different contexts.</p>
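The query-layer mapping q_{u,v} = W_q σ(x_u^s ⊕ x_v^r) can be sketched as follows, assuming a LeakyReLU nonlinearity as elsewhere in the model; names and toy shapes are our own illustration:

```python
import numpy as np

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def context_embedding(x_u: np.ndarray, x_v: np.ndarray, Wq: np.ndarray) -> np.ndarray:
    """Query layer: q_{u,v} = W_q * sigma(x_u ⊕ x_v), with ⊕ = concatenation."""
    return Wq @ leaky_relu(np.concatenate([x_u, x_v]))

rng = np.random.default_rng(3)
d = 4
x_u, x_v = rng.normal(size=d), rng.normal(size=d)
Wq = rng.normal(size=(d, 2 * d))  # maps the 2d-dim concatenation back to d dims
q = context_embedding(x_u, x_v, Wq)
assert q.shape == (d,)
```
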
      </sec>
      <sec id="sec-3-7">
<title>3.2.2. Context-consistency-based denoising</title>
<p>In the higher-order relationship-aware module above, we ignored the interaction information
between users and items. Therefore, in this phase we introduce the user-item interaction graph G.
After introducing G, user u has two different types of neighbor nodes, i.e., item neighbors N_G^V(u)
in G and user neighbors N_U^s(u) in G_U^s. Similarly, item v has user neighbors N_G^U(v) in G and
item neighbors N_V^r(v) in G_V^r. To enable information transfer between heterogeneous nodes, we
map the preliminary user and item feature representations into the same embedding space as
follows:</p>
        <p>e_u^0 = W_U x_u^s, u ∈ U;   e_v^0 = W_V x_v^r, v ∈ V   (7)</p>
        <p>where W_U and W_V are learnable parameters. Given a context (u, v), we obtain its
embedding q_{u,v} through the query layer, and then compute the context consistency score of user
u's neighbor n_u ∈ N_U^s(u) ∪ N_G^V(u) as follows:</p>
        <p>p(n_u; q_{u,v}) = exp(−||q_{u,v} − e_{n_u}^0||_2^2) / Σ_{n_u′ ∈ N_U^s(u) ∪ N_G^V(u)} exp(−||q_{u,v} − e_{n_u′}^0||_2^2)   (8)</p>
        <p>Similarly, the context consistency score of item v's neighbor n_v ∈ N_V^r(v) ∪ N_G^U(v)
is calculated as follows:</p>
        <p>p(n_v; q_{u,v}) = exp(−||q_{u,v} − e_{n_v}^0||_2^2) / Σ_{n_v′ ∈ N_V^r(v) ∪ N_G^U(v)} exp(−||q_{u,v} − e_{n_v′}^0||_2^2)   (9)</p>
        <p>where n_u and n_v denote the neighbors of user u and item v, respectively; note that they
can be either user nodes or item nodes. Based on the above definitions, we can compute the
consistency scores of all neighbors of u and v for the given context (u, v).</p>
        <p>We then select the neighbors whose consistency scores fall in the top γ percent, where
0 ≤ γ ≤ 1 is a hyperparameter. In this way, we filter out most of the context-inconsistent neighbors
and eliminate their negative impact. After the denoising process, user u retains two types of
contextually consistent neighbors:</p>
        <p>N̂_U^s(u) = {n_u | p(n_u; q_{u,v}) ∈ top_γ(N_U^s(u))}   (10)</p>
        <p>N̂_G^V(u) = {n_u | p(n_u; q_{u,v}) ∈ top_γ(N_G^V(u))}   (11)</p>
        <p>where N̂_U^s(u) denotes the set of context-(u, v)-consistent user neighbors of user u on
G_U^s, N̂_G^V(u) denotes the set of context-(u, v)-consistent item neighbors of user u on G, and
top_γ(·) denotes selecting the top γ percent of elements of the set. Similarly, for item v we retain two
types of contextually consistent neighbors:</p>
        <p>N̂_V^r(v) = {n_v | p(n_v; q_{u,v}) ∈ top_γ(N_V^r(v))}   (12)</p>
        <p>N̂_G^U(v) = {n_v | p(n_v; q_{u,v}) ∈ top_γ(N_G^U(v))}   (13)</p>
        <p>where N̂_V^r(v) denotes the set of context-consistent item neighbors of item v on G_V^r
and N̂_G^U(v) denotes the set of context-consistent user neighbors of item v on G. Next, to aggregate
the different types of contextually consistent neighbor information, we design a dual-attention based
consistent neighbor aggregation module.</p>
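The consistency scoring and top-γ sampling can be sketched as follows; the softmax over negative squared distances follows the score definition above, while the dictionary of neighbor embeddings and the choice to always keep at least one neighbor are our own illustration:

```python
import numpy as np

def top_gamma_consistent_neighbors(q: np.ndarray, neighbor_emb: dict, gamma: float) -> set:
    """Score each neighbor n by a softmax over exp(-||q - e_n||^2) and keep
    the top gamma fraction (the top-percent-gamma sampling step)."""
    names = list(neighbor_emb)
    logits = np.array([-np.sum((q - neighbor_emb[n]) ** 2) for n in names])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # context consistency scores p(n; q)
    keep = max(1, int(np.ceil(gamma * len(names))))
    order = np.argsort(-probs)[:keep]
    return {names[i] for i in order}

q = np.array([1.0, 0.0])
neighbor_emb = {"n1": np.array([1.0, 0.1]),   # close to the context -> kept
                "n2": np.array([-3.0, 4.0])}  # far from the context -> dropped
kept = top_gamma_consistent_neighbors(q, neighbor_emb, gamma=0.5)
assert kept == {"n1"}
```
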
      </sec>
      <sec id="sec-3-8">
        <title>3.3. Dual Attention Based Consistent Neighbor Aggregation Module</title>
<p>Impact of different neighbors of the same type: Given user u and its contextually consistent user
neighbors N̂_U^s(u) in G_U^s, we aggregate the features of these consistent user neighbors as
follows:</p>
        <p>e_{N_U^s(u)}^l = Σ_{u′ ∈ N̂_U^s(u)} α_{u,u′}^l e_{u′}^{l−1}   (14)</p>
        <p>where e_{u′}^{l−1} denotes the feature representation of u's neighbor u′ at layer l−1, and
α_{u,u′}^l is the corresponding attention weight, calculated as follows:</p>
        <p>α_{u,u′}^l = exp(σ(W^l (e_u^{l−1} ⊕ e_{u′}^{l−1}))) / Σ_{u′ ∈ N̂_U^s(u)} exp(σ(W^l (e_u^{l−1} ⊕ e_{u′}^{l−1})))   (15)</p>
        <p>where W^l is the learnable parameter of layer l. Similarly, for user u's contextually
consistent item neighbors v′ ∈ N̂_G^V(u) on G, we take the same aggregation approach:</p>
        <p>e_{N_V^G(u)}^l = Σ_{v′ ∈ N̂_G^V(u)} α_{u,v′}^l e_{v′}^{l−1}   (16)</p>
        <p>α_{u,v′}^l = exp(σ(W^l (e_u^{l−1} ⊕ e_{v′}^{l−1}))) / Σ_{v′ ∈ N̂_G^V(u)} exp(σ(W^l (e_u^{l−1} ⊕ e_{v′}^{l−1})))   (17)</p>
        <p>where e_{v′}^{l−1} denotes the embedding of item neighbor v′ at layer l−1, and α_{u,v′}^l
is the corresponding weight.</p>
        <p>Impact of different types of neighbors: We propose a second-level attention to aggregate the
information from consistent user neighbors and consistent item neighbors; the aggregated embedding
of user u is:</p>
        <p>AGG_u^l = β_{N_U^s(u)}^l e_{N_U^s(u)}^l + β_{N_V^G(u)}^l e_{N_V^G(u)}^l   (18)</p>
        <p>where e_{N_U^s(u)}^l and e_{N_V^G(u)}^l are the aggregated embeddings of user u's
consistent user neighbors and consistent item neighbors, respectively, and β_{N_U^s(u)}^l and
β_{N_V^G(u)}^l are the corresponding attention weights:</p>
        <p>β_{N_U^s(u)}^l = exp(σ(W^l (e_u^{l−1} ⊕ e_{N_U^s(u)}^{l−1}))) / Σ_{g ∈ N̂_U^s(u) ∪ N̂_G^V(u)} exp(σ(W^l (e_u^{l−1} ⊕ e_g^{l−1})))   (19)</p>
        <p>β_{N_V^G(u)}^l = exp(σ(W^l (e_u^{l−1} ⊕ e_{N_V^G(u)}^{l−1}))) / Σ_{g ∈ N̂_U^s(u) ∪ N̂_G^V(u)} exp(σ(W^l (e_u^{l−1} ⊕ e_g^{l−1})))   (20)</p>
        <p>Finally, the embedding of user u at layer l is updated as follows:</p>
        <p>e_u^l = W_u^l σ(e_u^{l−1} ⊕ AGG_u^l)   (21)</p>
        <p>where W_u^l is the layer-l learnable parameter. Similarly, given item v, we follow the same
procedure to obtain its updated representation:</p>
        <p>e_v^l = W_v^l σ(e_v^{l−1} ⊕ AGG_v^l)   (22)</p>
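The two attention levels can be sketched as follows. For brevity a single score vector w is shared across both levels and both neighbor types; in the model each layer has its own parameters. Names and toy data are our own illustration:

```python
import numpy as np

def leaky_relu(z, a=0.01):
    return np.where(z > 0, z, a * z)

def softmax(z):
    e = np.exp(z - np.max(z))
    return e / e.sum()

def attend(e_u, neigh, w):
    """First level: alpha_j ∝ exp(sigma(w·(e_u ⊕ e_j))); returns sum_j alpha_j e_j."""
    scores = np.array([leaky_relu(w @ np.concatenate([e_u, e_j])) for e_j in neigh])
    return softmax(scores) @ neigh

rng = np.random.default_rng(2)
d = 3
e_u = rng.normal(size=d)
user_neigh = rng.normal(size=(2, d))   # contextually consistent user neighbors
item_neigh = rng.normal(size=(3, d))   # contextually consistent item neighbors
w = rng.normal(size=2 * d)             # shared score vector (simplification)
e_s = attend(e_u, user_neigh, w)       # aggregate same-type user neighbors
e_g = attend(e_u, item_neigh, w)       # aggregate same-type item neighbors
# Second level: weigh the two neighbor types against e_u and combine.
beta = softmax(np.array([leaky_relu(w @ np.concatenate([e_u, e_s])),
                         leaky_relu(w @ np.concatenate([e_u, e_g]))]))
agg = beta[0] * e_s + beta[1] * e_g
assert agg.shape == (d,)
```
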
<p>where W_v^l is the layer-l learnable parameter, and AGG_v^l is calculated analogously to
AGG_u^l.</p>
      </sec>
      <sec id="sec-3-9">
        <title>3.4. Prediction module</title>
        <p>After completing the consistent-neighbor information aggregation, we obtain the layer-wise
embeddings of user and item features, i.e., e_u^l and e_v^l, where l = [0, 1, 2, ..., L]. We select the
embeddings of the first and the last layer, so the final representations of users and items are as
follows:</p>
        <p>e_u = W_u σ(e_u^0 ⊕ e_u^L)   (23)</p>
        <p>e_v = W_v σ(e_v^0 ⊕ e_v^L)   (24)</p>
        <p>where W_u and W_v are the learnable weight matrices. Then we adopt the following loss
function to measure the deviation between the predicted and true values:</p>
        <p>L = (1 / (2|E_UV|)) Σ_{(u,v) ∈ E_UV} (r_uv − a_{u,v})^2   (25)</p>
        <p>where E_UV is the edge set of user-item interactions and r_uv is the predicted value of the
user-item interaction.</p>
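Since the loss sums over observed edges, a_{u,v} = 1 for every pair in E_UV; a minimal sketch of the objective (names and toy embeddings are our own illustration):

```python
import numpy as np

def mse_loss(E_u: np.ndarray, E_v: np.ndarray, edges: list) -> float:
    """L = (1 / (2|E|)) * sum over (u, v) in E of (r_uv - a_uv)^2,
    with r_uv = e_u . e_v and a_uv = 1 on observed edges."""
    sq = [(E_u[u] @ E_v[v] - 1.0) ** 2 for u, v in edges]
    return float(sum(sq) / (2 * len(edges)))

# Toy embeddings: edge (0, 0) is predicted perfectly, edge (1, 1) is not.
E_u = np.array([[1.0, 0.0], [0.0, 1.0]])
E_v = np.array([[1.0, 0.0], [0.0, 0.5]])
loss = mse_loss(E_u, E_v, [(0, 0), (1, 1)])
assert np.isclose(loss, 0.0625)  # (0 + 0.25) / 4
```
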
      </sec>
    </sec>
    <sec id="sec-4">
<title>4. Experiment</title>
<p>In this section, we introduce the research content of the experiments, and then describe the
datasets, evaluation metrics, experimental settings, and experimental results used in this work.</p>
      <sec id="sec-4-1">
        <title>4.1. Experimental dataset</title>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Baseline algorithm</title>
        <p>The proposed model CRDG is compared with the following baselines.</p>
        <p>
          FM [
          <xref ref-type="bibr" rid="ref8">8</xref>
          ]: A second-order cross term is added to the traditional linear model to represent the
interaction between features by learning the auxiliary vectors of the features.
        </p>
        <p>
          NCF [
          <xref ref-type="bibr" rid="ref9">9</xref>
]: Introduces deep learning to model non-linear interactions between users and items.
        </p>
        <p>
          GCN [
          <xref ref-type="bibr" rid="ref10">10</xref>
]: Learns complex relationships between users and items with spectral
convolution operators on the user-item interaction graph, improving the performance and accuracy
of recommender systems.
        </p>
        <p>
          NGCF [
          <xref ref-type="bibr" rid="ref6">6</xref>
]: Learns user and item representations by propagating embeddings on the
user-item interaction graph, explicitly encoding collaborative signals in higher-order connectivity.
        </p>
        <p>
          LightGCN [
          <xref ref-type="bibr" rid="ref11">11</xref>
]: Simplifies the NGCF model by retaining only the neighborhood
aggregation operation, improving recommendation quality and computational efficiency.
        </p>
        <p>
          DiffNet++ [
          <xref ref-type="bibr" rid="ref12">12</xref>
]: Achieves better performance in recommendation tasks by modeling the
user's interest and influence diffusion processes.
        </p>
<p>GraphDA [13]: Generates a denoised and augmented user-item matrix by capturing user-user
and item-item correlations and by top-K sampling.</p>
      </sec>
      <sec id="sec-4-3">
        <title>4.3. Analysis of results</title>
      </sec>
      <sec id="sec-4-4">
        <title>4.3.1. Contrast to the baseline algorithm</title>
        <p>The performance comparison results are shown in Table 2. From the results, the following
observations can be made:</p>
<p>First, FM and NCF perform poorly on both datasets. This is because traditional collaborative
filtering-based methods find it difficult to comprehensively model the interactions between users and
items compared with graph-based recommendation algorithms. Secondly, among the graph-based
learning methods, DiffNet++ outperforms the traditional graph learning methods NGCF and
LightGCN because its diffusion design can effectively cross the limitation of one-hop neighboring
nodes and thus capture richer graph attributes. GraphDA achieves better results than the above
methods, likely because its denoised and augmented interaction matrix mitigates the effect of noise
in the existing interaction matrix, which in turn allows it to model the feature representations of
users and items more effectively.</p>
<p>The proposed CRDG model outperforms the baseline algorithms for the following reasons: 1. The
higher-order relationship-aware module effectively models the implicit similarity and relevance of
users and items, serving as a complement that alleviates the data sparsity problem in traditional
recommendation. 2. The context-based denoising module effectively alleviates the noise introduced
when aggregating information from contextually inconsistent neighbors. 3. Compared with GraphDA
and other denoising methods, our dual-attention based consistent neighbor aggregation module
refines the influence of different neighbors more effectively.</p>
      </sec>
      <sec id="sec-4-5">
        <title>4.3.2. Ablation analysis</title>
        <p>In order to study the impact of each component, we designed three CRDG variants as follows.</p>
        <p>CRDG-RGNN, removes the higher-order relation-aware module from CRDG, i.e., the initial
feature representation is directly passed into the subsequent denoising and attention modules.</p>
        <p>CRDG-Denoising, removes the context-based denoising step from CRDG, i.e., the dual attention
module directly aggregates the representation information of the whole neighbors.</p>
        <p>CRDG-Datt, replaces the dual-attention model with the traditional GNN model.</p>
<p>The experimental results are shown in Fig. 3. We observe that CRDG consistently achieves the
best performance compared to the other variants, suggesting that all components are necessary to
obtain the best results. CRDG-RGNN exhibits the poorest performance, reflecting the importance of
the higher-order relationship-aware module for modeling user similarity and item relevance.
CRDG-Denoising performs sub-optimally, reflecting the value of context-based denoising, and the
gap to CRDG-Datt shows that the dual-attention mechanism can effectively refine the influence of
neighbors.</p>
      </sec>
      <sec id="sec-4-6">
        <title>4.3.3. Parametric sensitivity analysis</title>
<p>In this subsection, we investigate how the performance of the proposed model varies with several
hyperparameters, including the embedding dimension d, the threshold τ of the user similarity graph,
and the threshold ξ of the item relevance graph. The experimental results are shown below.</p>
<p>The results in Figure 4 show that model performance tends to increase and then decrease as d
grows. However, the optimal dimension varies across datasets: the best performance is achieved with
d = 32 on the Yelp dataset, while d = 64 gives better results on the Amazon dataset. The explanation
is that, when d is small, increasing d strengthens the model's representational capacity, but an overly
large d leads to overfitting.</p>
<p>Figure 5 shows the effect of different user similarity thresholds τ on model performance on both
datasets. Specifically, CRDG performs best on Yelp when τ = 0.5 and best on Amazon when τ = 0.3.
As τ gradually increases from 0.1 to 0.9, model performance first increases and then decreases. The
reason is that too small a τ weakens the denoising ability of the model, while too large a τ removes
much effective information during denoising, reducing the amount of information the model can
obtain and thus degrading performance. Therefore, a suitable τ needs to be set to ensure the model's
performance. For the item relevance threshold, CRDG achieves the best performance on both
datasets when ξ = 0.3. Similar to the user similarity threshold, the overall performance first increases
and then decreases as the threshold grows from 0.1 to 0.9, because too small a ξ weakens the
denoising ability, while too large a ξ reduces the amount of information available to the model.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusion</title>
<p>In this paper, we propose a new context-based denoising recommendation model. The model
constructs user similarity graphs and item relevance graphs to model the implicit similarity and
relevance of users and items from their historical interactions. Then, a higher-order relation-aware
graph neural network is used to learn user similarity features and item relevance features.
Considering the issue of inconsistent neighbors in context-based recommendation, we design a
context-aware denoising method that effectively filters out contextually inconsistent neighbors,
improving the effectiveness of information aggregation. Finally, to refine the impact of different
neighbors in the information aggregation process, we propose a dual-attention based consistent
neighbor aggregation module to achieve adaptive propagation of information from different
neighbors. Extensive experiments show that the proposed model outperforms existing
state-of-the-art methods and verify the effectiveness of the proposed scheme.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
<p>The author(s) have not employed any Generative AI tools.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <surname>Aljunid</surname>
            <given-names>M F</given-names>
          </string-name>
          ,
<string-name>
            <surname>Manjaiah</surname>
            <given-names>D H</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Hooshmand</surname>
            <given-names>M K</given-names>
          </string-name>
          , et al.
          <article-title>A collaborative filtering recommender systems: Survey</article-title>
          [J].
          <source>Neurocomputing</source>
          ,
          <year>2025</year>
          ,
          <volume>617</volume>
          :
          <fpage>128718</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <surname>Koren</surname>
            <given-names>Y</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Bell</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Volinsky</surname>
            <given-names>C</given-names>
          </string-name>
          .
          <article-title>Matrix factorization techniques for recommender systems</article-title>
          [J].
          <source>Computer</source>
          ,
          <year>2009</year>
          ,
          <volume>42</volume>
          (
          <issue>8</issue>
          ):
          <fpage>30</fpage>
          -
          <lpage>37</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <surname>Vassøy</surname>
            <given-names>B</given-names>
          </string-name>
          ,
<string-name>
            <surname>Langseth</surname>
            <given-names>H</given-names>
          </string-name>
          .
          <article-title>Consumer-side fairness in recommender systems: a systematic survey of methods and evaluation[J]</article-title>
          .
          <source>Artificial Intelligence Review</source>
          ,
          <year>2024</year>
          ,
          <volume>57</volume>
          (
          <issue>4</issue>
          ):
          <fpage>101</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <surname>Anand</surname>
            <given-names>V</given-names>
          </string-name>
,
          <string-name>
            <surname>Maurya</surname>
            <given-names>A K</given-names>
          </string-name>
          .
          <article-title>A survey on recommender systems using graph neural network[J]</article-title>
          .
          <source>ACM Transactions on Information Systems</source>
          ,
          <year>2025</year>
          ,
          <volume>43</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>49</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <surname>Van Den Berg</surname>
            <given-names>R</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Kipf</surname>
            <given-names>T N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welling</surname>
            <given-names>M</given-names>
          </string-name>
          .
          <article-title>Graph convolutional matrix completion</article-title>
          [J].
          <source>arXiv preprint arXiv:1706.02263</source>
          ,
          <year>2017</year>
          ,
          <volume>2</volume>
          (
          <issue>8</issue>
          ):
          <fpage>9</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <surname>Wang</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>He</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>M</given-names>
          </string-name>
          , et al.
          <article-title>Neural graph collaborative filtering[C]//</article-title>
          <source>Proceedings of the 42nd International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          .
          <year>2019</year>
          :
          <fpage>165</fpage>
          -
          <lpage>174</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <surname>Yang</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liu</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Dou</surname>
            <given-names>Y</given-names>
          </string-name>
          , et al.
          <article-title>ConsisRec: Enhancing GNN for social recommendation via consistent neighbor aggregation[C]//</article-title>
          <source>Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          .
          <year>2021</year>
          :
          <fpage>2141</fpage>
          -
          <lpage>2145</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <surname>Rendle</surname>
            <given-names>S</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Gantner</surname>
            <given-names>Z</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Freudenthaler</surname>
            <given-names>C</given-names>
          </string-name>
          , et al.
          <article-title>Fast context-aware recommendations with factorization machines[C]//</article-title>
          <source>Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          .
          <year>2011</year>
          :
          <fpage>635</fpage>
          -
          <lpage>644</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <surname>He</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Liao</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Zhang</surname>
            <given-names>H</given-names>
          </string-name>
          , et al.
          <article-title>Neural collaborative filtering[C]//</article-title>
          <source>Proceedings of the 26th International Conference on World Wide Web</source>
          .
          <year>2017</year>
          :
          <fpage>173</fpage>
          -
          <lpage>182</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <surname>Kipf</surname>
            <given-names>T N</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Welling</surname>
            <given-names>M</given-names>
          </string-name>
          .
          <article-title>Semi-supervised classification with graph convolutional networks[J]</article-title>
          .
          <source>arXiv preprint arXiv:1609.02907</source>
          ,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <surname>He</surname>
            <given-names>X</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Deng</surname>
            <given-names>K</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Wang</surname>
            <given-names>X</given-names>
          </string-name>
          , et al.
          <article-title>LightGCN: Simplifying and powering graph convolution network for recommendation[C]//</article-title>
          <source>Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval</source>
          .
          <year>2020</year>
          :
          <fpage>639</fpage>
          -
          <lpage>648</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <surname>Wu</surname>
            <given-names>L</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Li</surname>
            <given-names>J</given-names>
          </string-name>
          ,
          <string-name>
            <surname>Sun</surname>
            <given-names>P</given-names>
          </string-name>
          , et al.
          <article-title>DiffNet++: A neural influence and interest diffusion network for social recommendation[J]</article-title>
          .
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          ,
          <year>2020</year>
          ,
          <volume>34</volume>
          (
          <issue>10</issue>
          ):
          <fpage>4753</fpage>
          -
          <lpage>4766</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>