<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
<article-title>Geographical Cellular Traffic Prediction with Multivariate Spatio-Temporal Modeling</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>ChungYi Lin</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shen-Lung Tung</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Winston H. Hsu</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
<institution>Chunghwa Telecom Laboratories</institution>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>National Taiwan University</institution>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents a novel approach for evaluating road traffic usage using multi-type Geographical Cellular Traffic (GCT). Working with a major telecom company, we propose a new prediction task for transportation traffic using GCT data. To accurately tackle this task, we propose a model that effectively integrates multivariate relation exploration and spatio-temporal modeling across multiple regions. Furthermore, we develop a new core as the foundation of each modeling component, efficiently improving the incorporation of attention mechanisms in the CNN-based architecture. Extensive experiments demonstrate the superior performance of our model in successfully handling the prediction task and reveal the influence of various GCT combinations. It is worth noting that our proposed data and model can pave a new path for intelligent transportation systems and urban planning.</p>
      </abstract>
      <kwd-group>
        <kwd>Geographical Cellular Traffic</kwd>
        <kwd>Multivariate Spatio-Temporal Modeling</kwd>
        <kwd>Graph Neural Network</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Recently, traffic prediction has become increasingly important for intelligent transportation systems [1, 2]. Accurate traffic prediction can help alleviate traffic congestion [3] and optimize traffic signal control [4]. However, traditional traffic prediction approaches rely on dedicated sensors, which require costly maintenance and development, have limited deployment coverage, and are susceptible to insufficient usable information.</p>
      <p>To tackle the limitations of traditional traffic prediction, we leverage large-scale and widely-distributed mobile user data integrated with road network information to analyze traffic usage conditions. Collaborating with Taiwan’s major telecom provider, Chunghwa Telecom, we utilize geolocated cellular traffic, named Geographical Cellular Traffic (GCT), which is further classified into vehicle, pedestrian, and stationary types. Accumulating GCT at fixed intervals offers insights into human activity patterns and road network usage, defined as GCT flow. Consequently, we propose a new task of forecasting specific vehicular GCT (V-GCT) flow in various regions, which is highly related to road traffic conditions and differs from predicting cellular traffic usage for mobile networks in previous studies [5, 6, 7, 8]. Hence, our proposed task and dataset using mobile user data offer new insights into road network usage and traffic conditions.</p>
      <p>To address this new task, we propose a model with Multivariate, Temporal, and Spatial View Modelings that integrates multi-type GCT and utilizes spatio-temporal correlations to predict V-GCT flow accurately. Additionally, we present a novel core for each View Modeling, designed to enhance the efficiency of the attention mechanism when processing convolution-encoded representations. Our experiments demonstrate the superior performance of our model compared to representative and state-of-the-art baselines, underscoring the importance of incorporating multi-type GCT data. Overall, this work makes the following key contributions:</p>
      <p>• Novel data: We collected over 30 million GCT records from diverse road segments, analyzing spatial correlations, relationships among GCT types, and their evolution over time.</p>
      <p>• Prediction task and model: Our novel task predicts V-GCT, which provides valuable transportation insights for city authorities and is being employed in a proof-of-concept area. Meanwhile, our multivariate spatio-temporal model effectively captures dependencies and relationships between GCT types for accurate predictions.</p>
      <p>• Experiments and analysis: Extensive evaluations demonstrate our model’s superior performance against baselines for diverse prediction intervals. Ablation and sensitivity analyses highlight the importance of model components and GCT flow combinations for adaptability and real-world potential.</p>
      <sec id="sec-1-1">
        <title>2. Data Processing</title>
        <p>This section describes the definitions of geographical cellular traffic (GCT), data preprocessing, analysis, and potential applications.</p>
        <sec id="sec-1-1-1">
          <title>2.1. Definitions</title>
          <p>Multi-Type Geographical Cellular Traffic (GCT). GCT is cellular traffic with estimated GPS coordinates obtained from triangulation, indicating where the traffic was generated. Each GCT is classified¹ into three categories: vehicle, pedestrian, and stationary.</p>
          <p>GCT Flow. We define GCT flow as the total quantity of GCT within a fixed interval (e.g., 5 min), as in previous vehicular flow studies [9]. With multi-type GCT, there are various GCT flows, including vehicle (V-GCT), pedestrian (P-GCT), and stationary (S-GCT) flows.</p>
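<p>GCT flow, as defined above, is a count of GCT records of each type within a fixed interval. A minimal sketch of this binning (the record layout is illustrative; the actual fields follow Table 1):</p>

```python
from collections import Counter
from datetime import datetime

# Each record: (hashed IMEI, lat, lon, timestamp, GCT type), as in Table 1.
records = [
    ("a1", 24.80, 120.97, "2022-08-28 08:01:30", "vehicle"),
    ("b2", 24.80, 120.97, "2022-08-28 08:03:10", "pedestrian"),
    ("a1", 24.80, 120.97, "2022-08-28 08:07:45", "vehicle"),
]

def gct_flow(records, interval_min=5):
    """Count GCT records per (interval start, GCT type) bin."""
    flow = Counter()
    for imei, lat, lon, ts, gct_type in records:
        t = datetime.fromisoformat(ts)
        # Snap the timestamp down to the start of its 5-minute bin.
        bin_start = t.replace(minute=t.minute - t.minute % interval_min,
                              second=0, microsecond=0)
        flow[(bin_start.isoformat(), gct_type)] += 1
    return flow

flows = gct_flow(records)
# The two vehicle records fall into different 5-minute bins (08:00 and 08:05),
# so the V-GCT flow of each bin is 1.
```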
          <p>Road Segments. Road segments are defined as 20m x 20m areas, based on the average road size in our proof-of-concept (POC) area in Hsinchu, Taiwan. These segments geographically interconnect, forming a road network. To maintain consistency, we assigned unique V-GCT, P-GCT, and S-GCT flows to each road segment.</p>
          <p>Data Privacy Protection. In compliance with strict personal data protection laws, we established a collaboration agreement with the company to outline data-sharing terms and ensure adherence to privacy regulations. We processed GCT data in a secure intranet environment and hashed IMEI numbers to protect the privacy of users.</p>
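<p>The IMEI hashing step can be sketched as a keyed one-way digest. The key handling below is purely illustrative, not the company’s actual procedure:</p>

```python
import hashlib
import hmac

# Hypothetical secret key; in practice it would never leave the secure intranet.
SECRET_SALT = b"replace-with-secret-key"

def pseudonymize_imei(imei: str) -> str:
    """Keyed one-way hash: records of one device stay linkable without
    exposing the raw IMEI."""
    return hmac.new(SECRET_SALT, imei.encode("utf-8"), hashlib.sha256).hexdigest()

h1 = pseudonymize_imei("490154203237518")
h2 = pseudonymize_imei("490154203237518")
assert h1 == h2  # deterministic: the same device maps to the same pseudonym
```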
        </sec>
        <sec id="sec-1-1-2">
          <title>2.2. Data Collection and Preprocessing</title>
          <p>Data sourcing. We extracted essential data fields from the telecom company’s Geographical Cellular Traffic Storage Database to reduce storage and computational requirements, as shown in Table 1. Each row represents one GCT, comprising five data fields: International Mobile Station Equipment Identity (IMEI, a unique mobile phone identifier), latitude and longitude coordinates, recording time, and GCT type.</p>
          <p>Road Segment Selection for GCT Collection. We selected geographically connected road segments as the scope for GCT data collection to capture the mobility of mobile users across different areas, as shown in Figure 1. We collaborated with transportation authorities to identify 21 road segments for analysis, each with unique functional locations nearby (e.g., universities, science parks, and shopping areas). To ensure relevance to road usage conditions, we extracted GCTs located within these road segments and included them in Table 1.</p>
          <p>Preprocessing for GCT Flow. We preprocessed the data to ensure accuracy by eliminating inconsistencies, outliers, and missing values. Next, we removed duplicate GCT records with identical IMEI and timestamps to prevent distortion and ensure reliability. Finally, we calculated the total GCTs for every 5-minute interval.</p>
          <p>¹ The algorithm is the telecom company’s confidential trade secret.</p>
        </sec>
        <sec id="sec-1-1-3">
          <title>2.3. Data Analysis</title>
          <p>2.3.1. Time Evolving Spatial Correlations</p>
          <p>Spatial Correlation. Building on the approach used in [10, 11], we utilized the Pearson correlation coefficient to assess the spatial correlation between road segments in the POC area. Specifically, when using the historical one-hour V-GCT as the series variable for each segment in Figure 2, a Pearson correlation coefficient is assigned to each road segment pair, ranging from -1 to 1. Road segments in closer proximity tend to exhibit similar V-GCT flow patterns, leading to higher Pearson correlation coefficients. This highlights the spatial correlation between road segments based on V-GCT.</p>
          <p>Time Evolving Correlation. We use the Pearson correlation coefficient to explore spatial correlations over time, as shown in Figure 2. Notable observations include: - At 18:00, road segments near the Hsinchu Science Park (ID: 57, 59, 62) exhibit high Pearson coefficients, indicating similar movement patterns as commuting users leave the workspace. - At 19:00, highly correlated regions emerge along the route (ID: 44, 35, 6, 30, 43, 45) from the work area to residential areas. - By 20:00, users gradually return home or dine out, resulting in high Pearson coefficients for road segments near residential-commercial mixed areas (ID: 1, 16, 41, 43), reflecting similar V-GCT patterns. Road segments near the Hsinchu Science Park (ID: 54, 56, 57, 59, 62) also exhibit high Pearson coefficients again, as users who finish work later leave the area.</p>
          <p>Overall, the time-evolving Pearson correlation analysis not only reveals spatial correlations between road segments by V-GCT flow but also indicates changes in population activity patterns during different periods. Concentrated regions with high Pearson coefficients may shift over time, providing new insight for understanding user flow and identifying congestion points in traffic management.</p>
        </sec>
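<p>The segment-pair Pearson analysis of Section 2.3.1 can be sketched with NumPy on synthetic one-hour V-GCT series (real segment IDs and values come from the POC data):</p>

```python
import numpy as np

# Synthetic one-hour V-GCT series (12 five-minute bins) for three road segments.
rng = np.random.default_rng(0)
base = rng.poisson(30, size=12).astype(float)
v_gct = np.stack([
    base,                                    # segment A
    base + rng.normal(0, 1, 12),             # segment B: nearby, similar pattern
    rng.poisson(30, size=12).astype(float),  # segment C: unrelated pattern
])

# Pairwise Pearson coefficients in [-1, 1]; entry (i, j) compares segments i, j.
corr = np.corrcoef(v_gct)
# Segments A and B share the same underlying pattern, so corr[0, 1] is close to 1.
```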
      </sec>
      <sec id="sec-1-2">
        <title>2.3.2. Understanding Regional Functionality through Multi-Type GCT Flows</title>
        <p>Recognizing regional functionality can aid in prediction tasks [12], but traditionally involves time-consuming manual labeling. By analyzing variations between the three types of GCT flows, we can uncover hidden interactions of user groups in different urban areas.</p>
        <p>Implicit Interactions among Multi-Type GCT Flows. Figure 3(a) and 3(b) reveal distinct patterns in different urban areas, with commuting areas displaying dominant V-GCT flow and residential-commercial areas showing significant P-GCT and S-GCT flows. Inspired by [13], we subtracted P-GCT and S-GCT from V-GCT to obtain (V-P)-GCT and (V-S)-GCT, respectively, using one day of data, and calculated the Pearson correlation coefficient between V-GCT and these flows. The subtraction types, capturing relative differences between the GCT flows, exhibit higher correlations with V-GCT, highlighting distinct patterns with unique characteristics in different areas. For instance, a high (V-P)-GCT value may indicate a region with more vehicular traffic. These relative differences better capture GCT flow interactions.</p>
        <p>Deriving Insights into Model Design. The higher correlations for subtraction types in multi-type GCT flows provide valuable insights for our model. Incorporating these relative differences improves the capturing of interactions and the distinguishing of area functionality.</p>
        <p>2.4. Potential Applications</p>
        <p>Multi-type GCT flows provide new insights for urban planning, with possible future applications including:</p>
        <p>Transportation Management. GCT flow assists authorities in developing effective traffic strategies, improving flow and reducing travel times.</p>
        <p>Public Safety. Real-time monitoring systems using GCT flow aid in understanding crowd density and mobility, ensuring safety during critical incidents.</p>
        <p>Urban Planning. Analyzing GCT flow helps planners identify infrastructure needs and optimize city layouts to accommodate growing populations.</p>
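<p>The subtraction types above can be computed directly from the three flow series; a small synthetic sketch (values are illustrative, not POC data):</p>

```python
import numpy as np

# One day of synthetic flows at 5-minute resolution (288 bins) for one segment.
rng = np.random.default_rng(1)
v = rng.poisson(40, 288).astype(float)  # V-GCT
p = rng.poisson(25, 288).astype(float)  # P-GCT
s = rng.poisson(60, 288).astype(float)  # S-GCT

# Subtraction types: relative differences against the vehicle flow.
vp = v - p  # (V-P)-GCT: high values suggest vehicle-dominated usage
vs = v - s  # (V-S)-GCT

def pearson(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# The paper reports (Figure 3) that such difference series correlate more
# strongly with V-GCT than the raw P-/S-GCT series do; here the effect appears
# because vp shares the v component while p is independent of v.
r_diff, r_raw = pearson(v, vp), pearson(v, p)
```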
      </sec>
    </sec>
    <sec id="sec-2">
      <title>3. Multi-View Modeling for V-GCT Prediction</title>
      <sec id="sec-3-1">
        <title>3.1. Definition of Prediction Task</title>
        <sec id="sec-3-1-1">
          <p>Given N road segments, we collect multi-type GCT flows at fixed intervals. Each historical GCT flow is represented by X^c = {x^c_1, x^c_2, ..., x^c_T}, where c corresponds to V-GCT, S-GCT, or P-GCT, and x^c_t ∈ R^N denotes the values at time step t. Our objective is to predict V-GCT for all segments over the next T′ steps using one or multiple GCT flows from the past T steps. We denote the predicted values as {x̂_{T+1}, x̂_{T+2}, ..., x̂_{T+T′}}, where x̂_{T+i} ∈ R^N.</p>
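<p>The formulation above (past T = 12 steps in, next T′ = 12 steps out) can be sketched as a sliding-window split; the function name is illustrative:</p>

```python
import numpy as np

def make_windows(series, t_in=12, t_out=12):
    """series: [T_total, N] flow matrix over N segments; returns (X, Y) pairs."""
    X, Y = [], []
    for i in range(len(series) - t_in - t_out + 1):
        X.append(series[i : i + t_in])                 # historical input
        Y.append(series[i + t_in : i + t_in + t_out])  # ground-truth future
    return np.stack(X), np.stack(Y)

flow = np.arange(48, dtype=float).reshape(48, 1)  # toy: 48 steps, 1 segment
X, Y = make_windows(flow)
assert X.shape == (25, 12, 1) and Y.shape == (25, 12, 1)
```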
        </sec>
        <sec id="sec-3-1-2">
          <p>We present a novel core, GCSAT, as the fundamental basis for the multivariate, spatial, and temporal modeling components. This core aims to improve the efficiency of the attention mechanism in handling the multi-channel representation after CNN encoding.</p>
          <p>
            Preliminary. Graph Convolutional Network (GCN)-based models [
            <xref ref-type="bibr" rid="ref2">15, 14, 16, 17</xref>
            ] use a 1D CNN to encode input into latent representations in the form of [C, N, T], where C denotes channels, N represents spatial nodes, and T signifies historical observation time steps. While GCN-based methods yield promising spatio-temporal prediction results, they assign equal weights to neighboring nodes, causing suboptimal performance [18]. Integrating a Graph Attention Network (GAT) [19] into the GCN-based architecture, each node representation is aggregated by multi-head attention:

            ĥ_i = ‖_{k=1}^{K} σ( Σ_{j ∈ N(i)} α^k_{ij} W^k h_j ), (1)

            where k ∈ {1, 2, ..., K} indexes the attention heads, σ(·) is a nonlinear function, and α(·) is the attention coefficient. The final output is the concatenation of the aggregated representations ĥ with the addition of a residual connection. We denote the function of GCSAT as:

            GCSAT(H, G) := {ĥ_1, ĥ_2, ..., ĥ_N} + H, (2)

            where adding the multichannel representation H is considered a residual connection.
          </p>
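<p>Equations 1 and 2 combine multi-head attention over graph nodes with a residual connection. A minimal NumPy sketch of that pattern, using scaled dot-product attention as a stand-in for the paper’s attention coefficients (weights are random; this only illustrates the shapes and the concat-plus-residual structure):</p>

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def gcsat(H, num_heads=4, seed=0):
    """H: [N, C] node features. Per head: attention over all nodes (complete
    graph), nonlinearity sigma(.); heads are concatenated ('||') and a residual
    H is added, mirroring Equations 1-2."""
    rng = np.random.default_rng(seed)
    N, C = H.shape
    d = C // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.normal(0.0, 0.1, (C, d)) for _ in range(3))
        Q, K, V = H @ Wq, H @ Wk, H @ Wv
        alpha = softmax(Q @ K.T / np.sqrt(d))  # attention coefficients alpha_ij
        heads.append(np.tanh(alpha @ V))       # sigma(.) per head
    return np.concatenate(heads, axis=-1) + H  # concat heads + residual H

H = np.random.default_rng(1).normal(size=(21, 32))  # 21 road segments, 32 channels
out = gcsat(H)
assert out.shape == (21, 32)
```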
        </sec>
      </sec>
      <sec id="sec-3-2">
        <title>3.4. Multivariate View Modeling</title>
        <p>The goal of Multivariate View Modeling is to investigate the relationships among multi-type GCT flows before the TS-Module, and to extract implicit relations that enhance V-GCT prediction. Although existing GAT-based models have shown improved task accuracy by exploring feature relationships [20, 21], attention mechanisms may not fully utilize the potential differences within various features during a limited number of epochs [13].</p>
        <p>Deriving Insights from Multi-type GCT Flows. We have verified that there is a more robust correlation between V-GCT and the relative differences of P-GCT and S-GCT, as shown by the Pearson coefficients displayed in Figure 3. Inspired by anomaly detection [20, 13, 22], we propose utilizing the magnitude differences between the V-GCT flow and the P-GCT and S-GCT flows to gain deeper insights into inherent regional functionality and user mobility patterns. By learning from these differences, we can effectively extract hidden relationships within multi-type GCT and address the inefficiency of directly modeling features.</p>
        <p>Difference Representation. All multi-type GCT features (V-GCT, P-GCT, and S-GCT) are encoded by a 1D CNN into shape [C, N, T]. To prevent aggregation of unrelated information that might impact later training results, we process the features at each time step individually. At time step t, the difference between V-GCT and the other GCT flows is calculated as Δx^c_t = x^V_t − x^c_t, where c represents P-GCT or S-GCT. Thus, the difference representation at time t is constituted as Δ_t = {x^V_t, Δx^P_t, Δx^S_t} ∈ R^{N×3×C}, where x^V_t can be represented as Δ_t[:, 0, :].</p>
        <p>GCSAT for Multivariate Difference Modeling. After establishing the difference representation, Δ_t is applied to GCSAT in Equation 2, as given in Equations 3 and 4.</p>
        <p>3.5. Temporal View Modeling</p>
        <p>Temporal View Modeling, as shown in the TS-Module of Figure 4, converts the multi-channel representation into the shape [C, T, N] for GCSAT processing. GCSAT treats the input as T nodes with N dimensions, and G in Equation 2 represents a complete graph.</p>
        <p>Time series data in practical scenarios often exhibit both short-term and long-term dependencies. Simultaneously capturing these patterns using attention among time nodes is challenging due to entangled dependencies, making it difficult to identify valuable signals [23, 17].</p>
        <p>
          Extracting Different Time Scales. To address the above issue, we adopt two kernels with different sizes, inspired by [
          <xref ref-type="bibr" rid="ref2">14, 17</xref>
          ], to extract short-term and long-term temporal patterns. Applying two 2D CNNs with kernel sizes (2,1) and (5,1) to the multi-channel representation H with shape [C, T, N] produces outputs H_2 with shape [C, T−1, N] and H_5 with shape [C, T−4, N], respectively. The (2,1) kernel uncovers local temporal relationships, revealing short-term patterns and dependencies crucial for understanding rapid changes. In contrast, the (5,1) kernel captures longer-range temporal relationships, exposing hidden longer-term trends and dependencies by encompassing a broader context of time steps.
        </p>
        <p>GCSAT for Temporal Modeling. After extracting the different scale representations H_2 and H_5, we use GCSAT to explore temporal dependencies among them, feeding them into Equation 2 as follows:

        Z = GCSAT(H_2, G) + GCSAT(H_5, G), (5)

        where the outputs for the different time scale representations in Equation 5 are truncated to match the temporal nodes of the representation with the largest kernel size and then concatenated accordingly.</p>
        <p>Gating Mechanism. Leveraging the gating mechanism’s benefits in [15, 17], which controls the amount of information passed to the next module, we process the two outputs of Equation 5 separately with distinct activation functions and perform element-wise multiplication:

        Z_G = σ(Z_1) ⊙ tanh(Z_2), (6)

        where Z_1 and Z_2 are separately generated in Equation 5, σ denotes the sigmoid function, tanh denotes the hyperbolic tangent function, and ⊙ represents the Hadamard product. Z_G is the output of the Gating Mechanism and will be fed into the next modeling component.</p>
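<p>The two-kernel temporal extraction and the gating of Equations 5 and 6 can be sketched as follows, with a simple moving average standing in for the learned 2D CNNs:</p>

```python
import numpy as np

def temporal_conv(H, k):
    """Average over a length-k window along the time axis: [C, T, N] ->
    [C, T-k+1, N]. A stand-in for a learned 2D CNN with kernel size (k, 1)."""
    C, T, N = H.shape
    return np.stack([H[:, t : t + k].mean(axis=1) for t in range(T - k + 1)],
                    axis=1)

rng = np.random.default_rng(0)
H = rng.normal(size=(32, 12, 21))  # [channels, time steps, road segments]
H2 = temporal_conv(H, 2)           # short-term patterns:  [32, 11, 21]
H5 = temporal_conv(H, 5)           # long-term patterns:   [32, 8, 21]

# Truncate to the temporal length left by the largest kernel before combining.
Z1 = H2[:, -H5.shape[1]:]
Z2 = H5

# Gating (Eq. 6): sigmoid gate, element-wise multiplied with a tanh signal.
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
Z = sigmoid(Z1) * np.tanh(Z2)
assert Z.shape == (32, 8, 21)
```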
      </sec>
      <sec id="sec-3-3">
        <title>3.6. Spatial View Modeling</title>
        <p>The difference representation from Multivariate View Modeling is aggregated as

        Δ̂_t = GCSAT(Δ_t, G), (3)

        where each element in Δ_t is considered a feature node, G is a complete graph, and GCSAT is stacked with two layers for more detailed extraction. For the output of GCSAT at each time step, we extract only the first node of Δ̂_t, i.e., Δ̂_t[:, 0, :], which is the aggregated V-GCT representation with shape [N, 1, C]. Then, we concatenate the outputs for each time step in Equation 3 along the second dimension, resulting in shape [C, N, T]:

        H = ‖_{t=1}^{T}(Δ̂_t[:, 0, :]). (4)

        By capturing complex interactions and relationships between GCT flows, our approach can achieve more accurate V-GCT flow predictions. The output is then forwarded to the next Temporal View Modeling.</p>
        <p>As the analysis in Figure 2 demonstrates, spatial correlations exist between the V-GCT flows of different road segments. Thus, it is crucial to explore the relationships between road segments. Spatial View Modeling, depicted in the TS-Module of Figure 4, aims to model these relationships to improve our understanding and prediction of V-GCT flow patterns.</p>
        <p>GCSATs for modeling bidirectional flow. Leveraging GCSAT’s flexibility, we extract spatial correlations among road segments. First, we transform the representation Z_G from the previous temporal modeling output into a shape of [T, N, C], with N road segments as nodes and C features. To account for bidirectional V-GCT flow among road segments, we utilize two different GCSATs to explore propagations in both directions. The adjacency matrix A of road segments is constructed using road network distance and a thresholded Gaussian kernel [24]. Following [15], we define the forward transition matrix A_f = A / rowsum(A) and the backward transition matrix A_b = Aᵀ / rowsum(Aᵀ). After inputting the two transition matrices into their respective GCSATs, we combine their outputs to obtain the final output S:

        S = GCSAT(Z_G, A_f) + GCSAT(Z_G, A_b). (7)

        This approach allows us to capture spatial relationships among road segments while considering their bidirectional connections. Incorporating spatial correlation information into our modeling process enables us to better explore dependencies among road segments.</p>
        <p>4. Experiments</p>
        <p>4.1. Experimental Settings and Baselines</p>
        <p>
          Each TS-Module contains a Temporal View Modeling and a Spatial View Modeling component. We also followed [
          <xref ref-type="bibr" rid="ref2">14</xref>
          ] in using skip connection layers and the output module.
        </p>
        <p>4.2. V-GCT Prediction Evaluation</p>
        <p>We evaluated our model and various baselines for predicting future V-GCT flows at 15-minute (3 steps), 30-minute (6 steps), and 60-minute (12 steps) intervals; the results are shown in Table 2, including the average MAE, RMSE, and MAPE over 10 repetitions for each method. Our observations are as follows:</p>
        <p>
          Performance comparison across prediction steps. Our model consistently outperformed various baselines, including TCN-based [26], GCN-based [
          <xref ref-type="bibr" rid="ref2">15, 14, 27, 17</xref>
          ], and attention-based [28, 21] models, across all prediction steps. This demonstrates our model’s superior ability to capture the underlying multivariate relationships and complex spatio-temporal patterns. Although performance decreased as prediction steps increased for all models, our model maintained its superior performance compared to the baselines, even at longer prediction steps.
        </p>
        <p>Performance and impact of multi-type GCT flow. Table 2 shows that attention-based methods (2) outperform both TCN-based (0) and GCN-based (1) methods, highlighting the effectiveness of attention mechanisms in capturing complex relationships between road segments and GCT flows. Moreover, our model further enhances prediction accuracy by effectively capturing human activity patterns and road network usage through the exploration of hidden relationships among multi-type GCT flows. The combination of attention-based mechanisms and multi-type GCT flows improves the model’s ability to understand and forecast complex flow patterns.</p>
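<p>The forward and backward transition matrices used in Equation 7 can be built from pairwise road-network distances; the distances, kernel width, and threshold below are illustrative:</p>

```python
import numpy as np

def thresholded_gaussian_adj(dist, sigma, eps=0.5):
    """Adjacency from pairwise road-network distances, as in [24]:
    w_ij = exp(-d_ij^2 / sigma^2), zeroed below the threshold eps."""
    A = np.exp(-(dist ** 2) / (sigma ** 2))
    A[A < eps] = 0.0
    return A

def transition_matrices(A):
    """Forward A / rowsum(A) and backward A.T / rowsum(A.T), following [15]."""
    fwd = A / A.sum(axis=1, keepdims=True)
    bwd = A.T / A.T.sum(axis=1, keepdims=True)
    return fwd, bwd

# Toy 3-segment network: segments 0 and 2 are far apart (edge pruned).
dist = np.array([[0.0, 100.0, 400.0],
                 [100.0, 0.0, 150.0],
                 [400.0, 150.0, 0.0]])
A = thresholded_gaussian_adj(dist, sigma=200.0)
fwd, bwd = transition_matrices(A)
assert np.allclose(fwd.sum(axis=1), 1.0)  # each row is a transition distribution
```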
      </sec>
      <sec id="sec-3-4">
        <title>4.3. Ablation Study of Proposed Model</title>
        <sec id="sec-3-4-1">
          <p>Evaluation Metrics. We use Mean Absolute Error (MAE), Root Mean Squared Error (RMSE), and Mean Absolute Percentage Error (MAPE).</p>
          <p>Train/Valid/Test data processing. Each type of GCT flow was processed in 5-minute intervals, from August 28, 2022, to September 29, 2022, across 21 road segments. We followed [25], splitting data 70%-20%-10% for training, testing, and validation. Each sequence sample had 24 time steps: the first 12 (T) as historical input and the remaining 12 (T′) as ground truth.</p>
          <p>
            Baselines. We selected seven representative traffic prediction baselines for this task. These baselines are categorized as follows, with overviews in Appendix A: • Temporal Convolution (TCN) [26] • GCN-based models: Graph WaveNet (GWNet) [15], MTGNN [
            <xref ref-type="bibr" rid="ref2">14</xref>
            ], DMGCN [27] • Attention-based models: GMAN [28], MPNet [21] • State-of-the-art GNN model: ESG [17]
          </p>
          <p>Model Settings. We followed [15] by repeating each model 10 times and reporting the average of the metrics. Our proposed model consists of one Multivariate View Modeling component and three stacked TS-Modules, with each module containing a Temporal View Modeling and a Spatial View Modeling component.</p>
          <p>We conducted an ablation study to assess the impact of our model’s components on traffic prediction tasks. Average prediction metrics were calculated for prediction steps 1 (5 min.) through 12 (60 min.). Table 3 compares the full model with three ablated versions: without Spatial View Modeling (w/o S), without Temporal View Modeling (w/o T), and without Multivariate View Modeling (w/o M). Our observations are as follows:</p>
          <p>Impact of w/o S. Omitting Spatial View Modeling had the most significant impact on the model’s performance, resulting in a decrease in all metrics. This observation emphasizes the high spatial correlations between road segments in V-GCT and suggests that Spatial View Modeling effectively captures these dependencies, leading to improved performance.</p>
          <p>Impact of w/o M. The model without Multivariate View Modeling exhibited the second-lowest performance compared to the full model, indicating the importance of accounting for the differences in magnitude between pedestrian, vehicular, and stationary GCT flows to make accurate traffic predictions.</p>
          <p>Table 2 (per-method rows; row labels not preserved; each line reports MAE, RMSE, and MAPE at the 15-, 30-, and 60-minute horizons):
          5.55±0.02 8.82±0.04 34.5%±0.6 | 5.74±0.02 9.38±0.06 36.9%±0.9 | 6.58±0.05 11.22±0.13 38.7%±1.4
          5.46±0.01 8.72±0.04 32.5%±1.3 | 5.62±0.03 9.08±0.11 32.9%±1.3 | 6.08±0.06 10.31±0.19 34.6%±1.6
          5.37±0.01 8.61±0.05 32.6%±1.7 | 5.51±0.04 8.99±0.14 34.1%±1.2 | 5.77±0.02 9.68±0.11 34.4%±1.0
          5.29±0.02 8.52±0.03 32.2%±1.4 | 5.45±0.01 8.86±0.01 34.2%±1.6 | 5.74±0.03 9.66±0.13 35.3%±1.9
          5.30±0.01 8.46±0.04 32.9%±2.0 | 5.44±0.03 8.84±0.09 34.7%±1.6 | 5.73±0.03 9.68±0.06 34.8%±2.1
          5.28±0.04 8.48±0.11 31.8%±1.7 | 5.46±0.01 8.81±0.03 33.6%±1.9 | 5.82±0.02 9.56±0.08 34.6%±1.1
          5.26±0.03 8.43±0.03 31.1%±1.1 | 5.40±0.02 8.76±0.06 31.8%±1.3 | 5.65±0.02 9.46±0.09 32.9%±1.9
          5.23±0.01 8.27±0.06 29.8%±0.7 | 5.33±0.02 8.54±0.06 30.5%±0.8 | 5.54±0.03 9.25±0.07 31.8%±0.7
          0 denotes the TCN-based methods, 1 denotes the GCN-based methods, and 2 denotes the attention-based methods.</p>
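<p>The three evaluation metrics can be sketched as:</p>

```python
import numpy as np

def mae(y, yhat):
    return float(np.mean(np.abs(y - yhat)))

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

def mape(y, yhat, eps=1e-8):
    # eps guards bins with zero flow; the result is a percentage.
    return float(np.mean(np.abs(y - yhat) / np.maximum(np.abs(y), eps)) * 100)

y = np.array([10.0, 20.0, 30.0])     # ground-truth V-GCT flow
yhat = np.array([12.0, 18.0, 33.0])  # predicted V-GCT flow
# mae = 7/3, rmse = sqrt(17/3), mape = 40/3 percent for this toy pair.
```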
        </sec>
      </sec>
      <sec id="sec-3-5">
        <title>4.4. Sensitivity Analysis of Multi-Type GCT Feature Combinations</title>
        <p>Impact of w/o T. The model’s performance significantly decreased when Temporal View Modeling was removed, as evidenced by lower metrics across all prediction steps. This result highlights the crucial role of Temporal View Modeling in capturing temporal patterns and dependencies, which contribute to enhanced prediction accuracy.</p>
        <p>The complete model consistently outperforms the ablated versions, emphasizing the significance of Spatial, Temporal, and Multivariate View Modeling. Each component plays a critical role, and their integration leads to enhanced V-GCT flow predictions.</p>
        <p>Figure 5: The combination of V-GCT with all subtraction types consistently yields the lowest prediction error for multi-step predictions.</p>
        <p>Beyond prediction step 4, the model with V-GCT and (V-S)-GCT performs better, indicating that the relative differences between vehicle and pedestrian or stationary GCT flows may vary in importance with the prediction horizon.</p>
        <p>Performance of the model with V-GCT and all subtraction types. The model incorporating V-GCT and all subtraction types consistently achieves the lowest MAE over all prediction steps, demonstrating the effectiveness of improving prediction by exploring hidden relative differences among multi-type GCT flows, and our model’s capability in modeling complex relations.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>5. Conclusion</title>
      <sec id="sec-4-1">
        <p>Our model can explore the relationships among multivariate features, namely V-GCT, (V-S)-GCT, and (V-P)-GCT. Thus, we conducted a parameter sensitivity analysis for different combinations of V-GCT and subtraction types to assess their impact on prediction performance. Figure 5 shows the MAE for three feature combination models across 12 prediction steps: V-GCT with (V-S)-GCT, V-GCT with (V-P)-GCT, and a combination of all types including V-GCT, (V-P)-GCT, and (V-S)-GCT. A key observation: for prediction steps 1 to 4, the model with V-GCT and (V-P)-GCT outperforms the one with V-GCT and (V-S)-GCT.</p>
        <p>We proposed and analyzed a multi-type GCT approach that overcomes limitations in traffic prediction. Our predictive model effectively combined multivariate spatio-temporal modeling for V-GCT prediction across multiple road segments, outperforming the baselines. Our experiments highlighted the importance of model components and GCT flow combinations for prediction accuracy. By initially validating the effectiveness of predicting V-GCT, we can further explore other types of GCT predictive results, offering potential applications for improving intelligent transportation.</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>Proc. of AAAI</source>
          , volume
          <volume>35</volume>
          ,
          <year>2021</year>
          , pp.
          <fpage>4027</fpage>
          -
          <lpage>4035</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Long</surname>
          </string-name>
          ,
          <string-name>
<given-names>J.</given-names>
            <surname>Jiang</surname>
          </string-name>
          , X. Chang, C. Zhang,
          <article-title>Connecting the dots: Multivariate time series forecasting with graph neural networks</article-title>
          , in:
          <source>Proc. of KDD</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <label>[1]</label>
        <mixed-citation>F. Zhu, Y. Lv, Y. Chen, X. Wang, G. Xiong, F.-Y. Wang,
          <article-title>Parallel transportation systems: Toward IoT-enabled smart urban traffic control and management</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          <volume>21</volume>
          (<year>2019</year>).
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <label>[2]</label>
        <mixed-citation>G. Dong, M. Tang, Z. Wang, J. Gao, S. Guo, L. Cai, R. Gutierrez, B. Campbell, L. E. Barnes,
          <article-title>Graph neural networks in IoT: A survey</article-title>
          ,
          <source>ACM Transactions on Sensor Networks</source>
          (<year>2023</year>).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <label>[3]</label>
        <mixed-citation>Z. Lv, Y. Li, H. Feng, H. Lv,
          <article-title>Deep learning for security in digital twins of cooperative intelligent transportation systems</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          (<year>2021</year>).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <label>[4]</label>
        <mixed-citation>P. Xie, T. Li, J. Liu, S. Du, X. Yang, J. Zhang,
          <article-title>Urban flow prediction from spatiotemporal data using machine learning: A survey</article-title>
          ,
          <source>Information Fusion</source>
          (<year>2020</year>).
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <label>[5]</label>
        <mixed-citation>G. Barlacchi, M. De Nadai, R. Larcher, A. Casella, C. Chitic, G. Torrisi, F. Antonelli, A. Vespignani, A. Pentland, B. Lepri,
          <article-title>A multi-source dataset of urban life in the city of Milan and the Province of Trentino</article-title>
          ,
          <source>Scientific Data</source>
          (<year>2015</year>).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <label>[6]</label>
        <mixed-citation>X. Wang, Z. Zhou, F. Xiao, K. Xing, Z. Yang, Y. Liu, C. Peng,
          <article-title>Spatio-temporal analysis and prediction of cellular traffic in metropolis</article-title>
          ,
          <source>IEEE Transactions on Mobile Computing</source>
          (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <label>[7]</label>
        <mixed-citation>N. Zhao, A. Wu, Y. Pei, Y.-C. Liang, D. Niyato,
          <article-title>Spatial-temporal aggregation graph convolution network for efficient mobile cellular traffic prediction</article-title>
          ,
          <source>IEEE Communications Letters</source>
          (<year>2021</year>).
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <label>[8]</label>
        <mixed-citation>Y. Yao, B. Gu, Z. Su, M. Guizani,
          <article-title>MVSTGN: A multi-view spatial-temporal graph network for cellular traffic prediction</article-title>
          ,
          <source>IEEE Transactions on Mobile Computing</source>
          (<year>2021</year>).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <label>[9]</label>
        <mixed-citation>S. Guo, Y. Lin, N. Feng, C. Song, H. Wan,
          <article-title>Attention based spatial-temporal graph convolutional networks for traffic flow forecasting</article-title>
          , in:
          <source>Proc. of AAAI</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <label>[10]</label>
        <mixed-citation>C. Zhang, H. Zhang, D. Yuan, M. Zhang,
          <article-title>Citywide cellular traffic prediction based on densely connected convolutional neural networks</article-title>
          ,
          <source>IEEE Communications Letters</source>
          (<year>2018</year>).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <label>[11]</label>
        <mixed-citation>C. Wang, Y. Zhu, T. Zang, H. Liu, J. Yu,
          <article-title>Modeling inter-station relationships with attentive temporal graph convolutional network for air quality prediction</article-title>
          , in:
          <source>Proc. of WSDM</source>
          ,
          <year>2021</year>
          , pp.
          <fpage>616</fpage>
          -
          <lpage>634</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <label>[12]</label>
        <mixed-citation>W. Shao, Z. Jin, S. Wang, Y. Kang, X. Xiao,
          <article-title>Long-term spatio-temporal forecasting via dynamic multiple-graph attention</article-title>
          , in:
          <source>Proc. of IJCAI</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <label>[13]</label>
        <mixed-citation>A. Deng, B. Hooi,
          <article-title>Graph neural network-based anomaly detection in multivariate time series</article-title>
          , in:
          <source>Proc. of AAAI</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <label>[15]</label>
        <mixed-citation>Z. Wu, S. Pan, G. Long, J. Jiang, C. Zhang,
          <article-title>Graph WaveNet for deep spatial-temporal graph modeling</article-title>
          , in:
          <source>Proc. of IJCAI</source>
          ,
          <year>2019</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <label>[16]</label>
        <mixed-citation>C. Tian, W. K. Chan,
          <article-title>Spatial-temporal attention wavenet: A deep learning framework for traffic prediction considering spatial-temporal dependencies</article-title>
          ,
          <source>IET Intelligent Transport Systems</source>
          (<year>2021</year>).
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <label>[17]</label>
        <mixed-citation>J. Ye, Z. Liu, B. Du, L. Sun, W. Li, Y. Fu, H. Xiong,
          <article-title>Learning the evolutionary and multi-scale graph structure for multivariate time series forecasting</article-title>
          , in:
          <source>Proc. of KDD</source>
          ,
          <year>2022</year>
          , pp.
          <fpage>2296</fpage>
          -
          <lpage>2306</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <label>[18]</label>
        <mixed-citation>S. Brody, U. Alon, E. Yahav,
          <article-title>How attentive are graph attention networks?</article-title>
          , in:
          <source>Proc. of ICLR</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <label>[19]</label>
        <mixed-citation>P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, Y. Bengio,
          <article-title>Graph attention networks</article-title>
          , in:
          <source>Proc. of ICLR</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <label>[20]</label>
        <mixed-citation>H. Zhao, Y. Wang, J. Duan, C. Huang, D. Cao,
          <article-title>Multivariate time-series anomaly detection via graph attention network</article-title>
          , in:
          <source>Proc. of ICDM</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>841</fpage>
          -
          <lpage>850</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <label>[21]</label>
        <mixed-citation>C.-Y. Lin, H.-T. Su, S.-L. Tung, W. H. Hsu,
          <article-title>Multivariate and propagation graph attention network for spatial-temporal prediction with outdoor cellular traffic</article-title>
          , in:
          <source>Proc. of CIKM</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <label>[22]</label>
        <mixed-citation>S. Kim, K. Choi, H.-S. Choi, B. Lee, S. Yoon,
          <article-title>Towards a rigorous evaluation of time-series anomaly detection</article-title>
          , in:
          <source>Proc. of AAAI</source>
          ,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <label>[23]</label>
        <mixed-citation>J. Xu, R. Rahmatizadeh, L. Bölöni, D. Turgut,
          <article-title>Real-time prediction of taxi demand using recurrent neural networks</article-title>
          ,
          <source>IEEE Transactions on Intelligent Transportation Systems</source>
          <volume>19</volume>
          (<year>2017</year>)
          <fpage>2572</fpage>
          -
          <lpage>2581</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <label>[24]</label>
        <mixed-citation>D. I. Shuman, S. K. Narang, P. Frossard, A. Ortega,
          <article-title>The emerging field of signal processing on graphs: Extending high-dimensional data analysis to networks and other irregular domains</article-title>
          ,
          <source>IEEE Signal Processing Magazine</source>
          (<year>2013</year>).
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <label>[25]</label>
        <mixed-citation>Y. Li, R. Yu, C. Shahabi, Y. Liu,
          <article-title>Diffusion convolutional recurrent neural network: Data-driven traffic forecasting</article-title>
          , in:
          <source>Proc. of ICLR</source>
          ,
          <year>2018</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <label>[26]</label>
        <mixed-citation>F. Yu, V. Koltun,
          <article-title>Multi-scale context aggregation by dilated convolutions</article-title>
          ,
          <source>arXiv:1511.07122</source>
          (<year>2015</year>).
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <label>[27]</label>
        <mixed-citation>L. Han, B. Du, L. Sun, Y. Fu, Y. Lv, H. Xiong,
          <article-title>Dynamic and multi-faceted spatio-temporal deep learning for traffic speed forecasting</article-title>
          , in:
          <source>Proc. of KDD</source>
          ,
          <year>2021</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <label>[28]</label>
        <mixed-citation>C. Zheng, X. Fan, C. Wang, J. Qi,
          <article-title>GMAN: A graph multi-attention network for traffic prediction</article-title>
          , in:
          <source>Proc. of AAAI</source>
          ,
          <year>2020</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
    <app-group>
      <app id="app1">
        <title>A. APPENDIX: Overview of baselines</title>
        <list list-type="bullet">
          <list-item>
            <p>TCN [26]: A convolutional-based method for time series modeling.</p>
          </list-item>
          <list-item>
            <p>GWNet (Graph WaveNet) [15]: A graph-based model with a self-adaptive graph learning mechanism.</p>
          </list-item>
          <list-item>
            <p>MTGNN [14]: A graph-based convolutional model with dynamically learned graph structures.</p>
          </list-item>
          <list-item>
            <p>DMGCN [27]: A model incorporating time-aware dynamic graph convolution and multi-faceted fusion.</p>
          </list-item>
          <list-item>
            <p>MPNet [21]: A GNN model with a propagation attention mechanism.</p>
          </list-item>
          <list-item>
            <p>GMAN [28]: A graph multi-attention model utilizing an encoder-decoder architecture.</p>
          </list-item>
          <list-item>
            <p>ESG [17]: A model for capturing interactions in time series with evolutionary and multi-scale graph structures.</p>
          </list-item>
        </list>
      </app>
    </app-group>
  </back>
</article>