<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Understanding Vulnerable Road User Behavior using Spatio-Temporal Knowledge Graphs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>He Tan</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Erick Escandon Bailon</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Department of Computing, School of Engineering, Jönköping University</institution>
          ,
          <country country="SE">Sweden</country>
        </aff>
      </contrib-group>
      <abstract>
<p>Understanding vulnerable road user (VRU) behavior is critical for designing safer and more inclusive urban infrastructure. This paper presents a structured and explainable framework using Spatio-Temporal Knowledge Graphs (STKGs) to model and analyze VRU crossing behaviors. By constructing knowledge graphs from real-world urban traffic datasets, we capture dynamic interactions between pedestrians, cyclists, and vehicles in different crossing scenarios. Through query-based analysis, our approach extracts insights that provide decision support to traffic engineers, addressing key safety-related concerns such as unsafe crossing patterns, pedestrian-vehicle interactions, and speed-related risks. This work contributes to responsible AI by enabling transparent, explainable, and data-efficient decision-making support for real-world traffic planning and infrastructure design.</p>
      </abstract>
      <kwd-group>
        <kwd>Spatio-Temporal Knowledge Graph</kwd>
        <kwd>Knowledge Graph for AI</kwd>
        <kwd>Semantic Representation of Behavior</kwd>
        <kwd>Ontology for Road User Behavior</kwd>
        <kwd>Vulnerable Road User</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Building an active mobility environment is crucial to foster more sustainable and inclusive
urban life [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ]. Vulnerable road users (VRUs), such as pedestrians (including children, the
elderly, and disabled individuals) and cyclists, are integral to the dynamics of urban traffic.
They play an essential role in establishing an active mobility environment. Implementing
safer road infrastructure not only safeguards these vulnerable groups but also encourages a
broader demographic to participate in active mobility. Municipalities, in particular, have both
responsibility and opportunity to enhance road infrastructure design to support this goal. To
develop safer and more inclusive road infrastructure while optimizing costs, municipalities must
transition from today’s subjective, assumption-based decision-making to objective, data-driven
strategies. Leveraging artificial intelligence (AI) methods can provide deeper insights and key
indicators, establishing a robust basis for informed decision making in infrastructure design
choices [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>
Crossing behavior is one of the main aspects of VRU behavior and has been examined in
numerous studies. It involves the actions and movements of VRUs while crossing streets or
roadways, often guided by traffic signals, road markings, and traffic conditions. However, it is
extremely challenging to understand the behavior of VRUs [
        <xref ref-type="bibr" rid="ref4">4</xref>
]. Unlike vehicles,
which follow structured traffic rules and exhibit relatively predictable, linear motion patterns,
VRUs are influenced by a variety of external and internal factors and exhibit highly uncertain,
non-deterministic motion.
      </p>
      <p>
Traditionally, stochastic models, linear regression models, and discrete choice models are
used to understand how pedestrians make crossing decisions based on various factors related
to traffic conditions, traffic controls, and traffic regulations [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. These models were developed
using self-reported data from interviews and/or questionnaires and observational data from
manually screened video recordings. Another approach involves agent-based models, where
road users are represented as intelligent agents making rational decisions in uncertain and
complex environments. These studies have focused on modeling pedestrians’ collision avoidance
mechanisms. The rules for building the models are often derived from survey data [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ].
      </p>
      <p>
        More recently, trajectory prediction has become a common approach to understand road
user behavior [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ]. Trajectory prediction involves predicting the future positions and movement
patterns of road users over time [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ]. Accurate trajectory prediction allows trafic engineers to
gain insights into how road users move through road infrastructure and interact with dynamic
environments. Deep neural networks (DNN) based models has revolutionized trajectory
prediction [
        <xref ref-type="bibr" rid="ref10 ref11 ref9">9, 10, 11</xref>
        ]. They enable the direct learning of complex representations from large datasets
and support long-term future predictions. Recent advancements have focused on capturing
spatial features, particularly relationships and interactions within dynamic environments, using
graph-based models such as graph neural networks (GNNs), graph convolutional networks
(GCNs), and graph attention networks (GATs) (e.g., [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ]). Furthermore, the spatial-temporal
graph transformer (STGT) combines the strengths of graph-based models and transformer
architectures to handle both spatial and temporal aspects of trajectory prediction (e.g., [
        <xref ref-type="bibr" rid="ref14 ref15 ref16">14, 15, 16</xref>
        ]).
      </p>
      <p>Although deep learning (DL) has significantly advanced the modeling of pedestrian crossing
behavior, it often requires large-scale, high-quality datasets for effective training. Moreover,
these methods primarily focus on prediction rather than providing structured, interpretable
representations. To enhance model transparency and trustworthiness, explainable AI (XAI)
techniques are important, enabling traffic engineers to interpret model outcomes and apply
insights in real-world decision-making scenarios.</p>
      <p>Our work addresses this gap by leveraging a knowledge graph based approach that enables
explicit representation of, and reasoning about, crossing behaviors. In this paper, we present a
structured and explainable framework for modeling and analyzing VRU crossing behaviors
using Spatio-Temporal Knowledge Graphs (STKGs). By employing a structured spatial-temporal
representation, we constructed knowledge graphs that capture the dynamic interactions during
crossings, based on real-world urban road user dynamics data. Through query-based analysis, we
extract insights to support traffic engineers in addressing safety-related concerns. This method
offers a structured, explainable, and data-efficient approach to modeling and understanding
crossing behavior and can serve as a foundation for a decision support system for traffic engineers.</p>
      <p>The remainder of this paper is organized as follows: In Section 2, we introduce the methods
employed to understand human activities, particularly VRU behavior, using semantic
representation, ontology, and knowledge graphs. In Section 3, we present a crossing scene ontology.
We describe the knowledge graphs built using the ontology from road user dynamics data in
Section 4. Section 5 presents the insights gained from the knowledge graphs that are relevant
to traffic engineers’ safety concerns. Finally, Section 6 concludes the paper.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Human activity is a spatial-temporal evolution of interactions [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]. In 1970, Hägerstrand [
        <xref ref-type="bibr" rid="ref18">18</xref>
        ]
introduced the concept of a time-space path in understanding human activities, which
established a foundation for trajectory modeling. Inspired by Hägerstrand’s work, Orellana and
Renso [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] developed an interaction ontology. The ontology conceptualizes the characteristics
of pedestrian movement behavior. It focuses on identifying various movement patterns
from time-space paths, and the different categories of interactions, spatial and temporal contexts,
behaviors, and the high-level relations between these concepts. Logic-based reasoning is used
to categorize pedestrian movement behavior based on its movement patterns, interactions, and
contexts.
      </p>
      <p>
        Chai et al. [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ] utilized fuzzy logic to model the cognition and behavioral patterns of
pedestrians, in order to understand the effect of age and gender when pedestrians cross at a
signalized crosswalk or jaywalk. Gharebaghi et al. [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] developed a mobility ontology
for people with motor disabilities (PWMD). Specifically, it considers the interactions between
people and both the social and physical environment. The ontology was used to support the
development of assistive technologies for the mobility of PWMD. Fang et al. [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ] developed
an ontology defining various kinds of road users, including pedestrians, and describing their
relationships. The concepts from the ontology are used to define the rules for describing the
interactions between road users and to support rule-based reasoning for predicting road users’
behavior.
      </p>
      <p>
        In cognitive science and neuroscience, it has been recognized that segmentation is a
fundamental component of perception, playing a critical role in understanding activities. People
tend to perceive ongoing continuous activity as a series of discrete events (also called segments) [
        <xref ref-type="bibr" rid="ref23 ref24 ref25">23, 24, 25</xref>
        ].
The relationships between segments are encoded in partonomic hierarchies [26]. Coarse
segmentation is often related to objects’ locations and their goals, and the causal relations between their
actions. Fine segmentation is closely linked to changes in the interactions between objects [27].
      </p>
      <p>Building on these findings in cognitive science and neuroscience, Ji et al. [28] proposed a
spatial-temporal scene graph to represent human activity and to improve the performance of
action recognition and few-shot action recognition using neural networks. Mlodzian et al. [29]
presented an ontology that was tailored for representing entities and their spatial and temporal
relations in trafic scenes in the nuScenes dataset 1. A knowledge graph was constructed from
the nuScenes dataset using the ontology and provided as a benchmark dataset for developing
advanced trajectory prediction models. These studies within computer vision have suggested
that a structured spatial-temporal representation can lead to more accurate human activity
understanding and improve the performance of various computer vision tasks.</p>
      <p>In our previous work [30], we presented a semantic representation of crossing behavior.
The representation captures the dynamic evolution of interactions between road users and
objects within the physical environment over time in both spatial and temporal dimensions.
The representation is generalizable and can be applied to represent the behavior of road users in
various traffic scenarios. We have also demonstrated that knowledge graphs can be constructed
from road user dynamics data using the representation, and that queries over the knowledge
graphs can be constructed to answer safety-related questions on pedestrian crossing behavior
for traffic engineers.</p>
      <p>In this work, we extended the semantic representation to incorporate additional factors
relevant to VRUs’ crossing behavior. We constructed knowledge graphs from two road user
dynamics datasets collected in two school areas in Jönköping municipality, Sweden. In
collaboration with traffic engineers from the City Planning, Development and Traffic Department of
Jönköping municipality, we leveraged insights gained from the knowledge graphs to improve
traffic engineers’ understanding of crossing behavior.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Crossing Scene Ontology</title>
      <p>In this section, we present the ontology that describes the semantic representation of crossing
behavior (see Fig. 1). A crossing behavior can be seen as a dynamic evolution of interactions
between VRUs and other objects within the physical environment over time. Every crossing
behavior can be broken down into segments, each representing a distinct phase of the behavior.
These segments capture the changes in the interactions between VRUs and objects in both the
spatial and temporal dimensions, and together they represent the crossing behavior. For example, Fig. 2
shows a crossing scene, which is extracted from road user behavior measurement performed
at a zebra crossing in a school area in Jönköping, Sweden. Fig. 2-a displays the trajectories
of the VRUs and other moving objects involved in the event. In this event, a cyclist meets a
light vehicle at the crossing. The blue trajectory represents the cyclist, and the cyan trajectory
represents the light vehicle. Fig. 2-b1 to b4 show a sequence of distinct segments that capture the
changes in interactions between the cyclist and objects over time during the event.</p>
      <p>The triples below express the crossing scene shown in Fig. 2. Each triple follows the format
((id, object_1), hasSpatialRelationship, (id, object_2)), where object_1 is a moving object
such as a pedestrian, cyclist, or vehicle; object_2 can be a moving object or a static object
such as a crossing or sidewalk; and id is the unique identifier of each object. The scene captures
an unsafe behavior: the cyclist was trying to cross the street outside the designated crossing area
and came very close to the vehicle.
b1 : ((5833 , light_vehicle ) , out_of_area , (0 , crossing ))
b2 : ((5833 , light_vehicle ) , on , (0 , crossing ))
b3 : ((5833 , light_vehicle ) , out_of_area , (0 , crossing ))
((5833 , light_vehicle ) , far_away , (5835 , bicyclist ))
((5835 , bicyclist ) , out_of_area , (0 , crossing ))
b4 : ((5833 , light_vehicle ) , out_of_area , (0 , crossing ))
((5833 , light_vehicle ) , close_to , (5835 , bicyclist ))
((5835 , bicyclist ) , out_of_area , (0 , crossing ))</p>
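<p>To make the segment structure concrete, the triples above can be held in plain Python and scanned for the unsafe pattern just described (a cyclist close to a vehicle while outside the crossing area). This is an illustrative sketch only; the data layout and the check below are assumptions for exposition, not part of our pipeline.</p>

```python
# Illustrative sketch: the segment/triple layout mirrors the example scene;
# the relation and type strings are taken from the text above.
scene = {
    "b1": [((5833, "light_vehicle"), "out_of_area", (0, "crossing"))],
    "b2": [((5833, "light_vehicle"), "on", (0, "crossing"))],
    "b3": [((5833, "light_vehicle"), "out_of_area", (0, "crossing")),
           ((5833, "light_vehicle"), "far_away", (5835, "bicyclist")),
           ((5835, "bicyclist"), "out_of_area", (0, "crossing"))],
    "b4": [((5833, "light_vehicle"), "out_of_area", (0, "crossing")),
           ((5833, "light_vehicle"), "close_to", (5835, "bicyclist")),
           ((5835, "bicyclist"), "out_of_area", (0, "crossing"))],
}

def unsafe_segments(scene):
    """Segments in which some pair of objects is close_to each other while
    the cyclist is out of the crossing area."""
    hits = []
    for seg, triples in sorted(scene.items()):
        close = any(rel == "close_to" for _, rel, _ in triples)
        outside = any(rel == "out_of_area" and subj[1] == "bicyclist"
                      for subj, rel, _ in triples)
        if close and outside:
            hits.append(seg)
    return hits
```

<p>For the scene above, only segment b4 matches, mirroring the unsafe behavior described in the text.</p>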
      <p>Fig. 1 illustrates the current version of the ontology designed to represent the spatial-temporal
evolution of crossing behavior. It extends the ontology presented in our previous work [30]
with factors relevant to crossing behavior, such as weather condition and traffic flow. The ovals
represent the concepts, and the arrows represent the relationships between concepts. Specifically,
the blue arrows represent subclass relations between concepts. The boxes represent the XSD
(XML Schema Definition) data types. The ontology is accessible on GitHub2. Because the term
segment often refers to regions of an image in computer vision, the term frame is used instead:
in computer vision, a video can be divided into a sequence of frames, each representing a single
still image in the video sequence. For each object appearing in an interaction, its coordinates
and speed are also captured. Currently, the ontology includes only a limited number of categories
for both moving and static objects; additional categories will be integrated as the ontology
undergoes further development.</p>
    </sec>
    <sec id="sec-4">
      <title>4. The Spatio-Temporal Knowledge Graphs</title>
      <p>In this section, we describe the knowledge graphs constructed from road user dynamics data
collected in two school areas in Jönköping, Sweden. The data are described using the semantic
representation of crossing behavior presented in Section 3.</p>
      <p>The datasets were prepared from the traffic measurements performed by Viscando AB 3 using
the 3D&amp;AI based infrastructure sensor OTUS3D. The measurements were carried out over 4
days, from 2023-05-23 to 2023-05-26, at two crossings in school areas in Jönköping. One crossing
is a zebra crossing, and the other is a zebra-free crossing. The data contains trajectories of all
road users recorded 10 times per second. Trajectories contain the unique track ID for each
object, the UTC time stamp, position (i.e. X-coordinate and Y-coordinate), velocity (i.e. object
speed in the direction of motion (km/h)) and object type. Currently, the object types include
pedestrian, cyclist, light vehicle and heavy vehicle. Vision data are processed in the embedded
computational unit and removed within 20 ms from being captured. Thus, the dataset is stored
fully anonymously, ensuring compliance with the General Data Protection Regulation (GDPR)
of the European Union4, because personal information is neither stored in the sensors nor
transmitted.</p>
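<p>A record in such a trajectory export could be parsed as sketched below. The column names and sample values are assumptions for illustration; the actual file layout of the OTUS3D export is not specified here.</p>

```python
import csv
import datetime
import io

# Hypothetical export format: track ID, UTC timestamp, position (x, y),
# speed in km/h, and object type, sampled 10 times per second.
RAW = """track_id,utc_time,x,y,speed_kmh,object_type
5835,2023-05-23T08:01:12.100+00:00,12.4,3.1,18.7,cyclist
5833,2023-05-23T08:01:12.100+00:00,25.0,4.0,31.2,light_vehicle
"""

def load_trajectories(text):
    """Group samples by track ID as (timestamp, x, y, speed, type) tuples."""
    tracks = {}
    for row in csv.DictReader(io.StringIO(text)):
        t = datetime.datetime.fromisoformat(row["utc_time"])
        tracks.setdefault(row["track_id"], []).append(
            (t, float(row["x"]), float(row["y"]),
             float(row["speed_kmh"]), row["object_type"]))
    return tracks

tracks = load_trajectories(RAW)
```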
      <p>Since the application is intended to support traffic infrastructure planning and development
that prioritizes VRUs, the knowledge graph construction has focused on the crossing scenes involving
both VRU(s) and vehicle(s). The spatial relationship between objects was calculated based on
the physical distance between them. The current spatial relationships include those between
moving objects, i.e., close_to and far_away, and those between a moving object and a static
object, i.e., on, close_to, far_away, and out_of_area. When the information was extracted from the
aforementioned datasets, the ontology described in Section 3 was populated, and the knowledge
graphs were built.
2https://github.com/tanhe-git/crossing_behavior/blob/main/crossing_scene_ontology.owl
3www.viscando.com
4https://gdpr-info.eu/</p>
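<p>The distance-based derivation of spatial relations can be sketched as follows. The concrete thresholds and the rectangular area model are hypothetical: the text above states only that relations are computed from physical distance.</p>

```python
import math

# Hypothetical threshold in metres; the actual cut-offs used to derive
# close_to / far_away are not given in the paper.
CLOSE_TO_M = 5.0

def moving_relation(p1, p2):
    """close_to / far_away between two moving objects at (x, y) positions."""
    return "close_to" if math.dist(p1, p2) <= CLOSE_TO_M else "far_away"

def static_relation(p, area):
    """on / close_to / out_of_area between a moving object and a static
    area modelled (simplistically) as a rectangle (xmin, ymin, xmax, ymax)."""
    xmin, ymin, xmax, ymax = area
    if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax:
        return "on"
    # Distance to the nearest point of the rectangle.
    nearest = (min(max(p[0], xmin), xmax), min(max(p[1], ymin), ymax))
    return "close_to" if math.dist(p, nearest) <= CLOSE_TO_M else "out_of_area"
```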
      <p>The structural metrics of the knowledge graphs are provided in Table 1. Each crossing
behavior is represented as a graph. Dataset 1 consists of data collected from a zebra crossing in
a school area in Jönköping, while dataset 2 contains data from a zebra-free crossing in another
school area in Jönköping. The numbers indicate the total number of crossing behaviors, as
well as the average number of triples and nodes in the knowledge graphs representing these
behaviors. Dataset 1 was collected from a school area in the center of Jönköping, where traffic
is typically heavier. This may explain why the behavior KGs are larger in this dataset.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Insights Learned from the STKGs</title>
      <p>In this section, we present the analysis of the knowledge graphs to gain a better understanding
of the crossing behaviors. Moreover, we discuss safety-related concerns collected from traffic
engineers, as well as insights queried and derived from the knowledge graphs, which can be
used to address these concerns.</p>
      <sec id="sec-5-1">
        <title>5.1. Clustering Analysis</title>
        <p>We analyzed the knowledge graphs using clustering techniques. Each STKG represents a
crossing scene and is transformed into embeddings. We then apply graph clustering algorithms
to generate clusters.</p>
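<p>As a sketch of this step, a plain K-Means over fixed-length graph embeddings could look like the following. This is a generic illustration; the embedding dimensionality and the K-Means implementation used in our experiments are not specified here.</p>

```python
import math
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain K-Means over fixed-length embedding vectors (illustrative)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: math.dist(p, centers[c]))
            clusters[j].append(p)
        # Recompute centers; keep the old center if a cluster empties.
        centers = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                   else centers[j] for j, cl in enumerate(clusters)]
    return [min(range(k), key=lambda c: math.dist(p, centers[c]))
            for p in points]

# Toy 2-D "embeddings": two well-separated groups of crossing scenes.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = kmeans(points, 2)
```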
        <p>Fig. 3 shows the clusters obtained using the K-Means clustering algorithm, along with the
embeddings based on the Local Degree Profile (LDP) [31]. LDP encodes the degree distribution
of a node’s neighbors in a graph and captures local connectivity patterns. The clusters
reflect graph density: the graphs highlighted in yellow are denser, while those in purple are less
dense. In a denser graph, the crossing behavior it represents involves more moving objects and
more complex interactions between objects during the crossing, which may correlate with higher
accident risk. In contrast, sparsely connected graphs represent simpler, low-risk crossing
scenarios.</p>
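<p>The per-node LDP descriptor can be sketched as below: each node is described by its degree together with summary statistics of its neighbors’ degrees, and a graph-level vector can be formed by averaging. This is a simplified reading of the cited descriptor; the exact binning and aggregation settings are omitted.</p>

```python
import statistics

def ldp_node_features(adj):
    """Local Degree Profile per node: (degree, min, max, mean, std of the
    neighbours' degrees). `adj` maps a node to the set of its neighbours."""
    deg = {v: len(ns) for v, ns in adj.items()}
    feats = {}
    for v, ns in adj.items():
        nd = [deg[u] for u in ns] or [0]  # isolated node -> zeros
        feats[v] = (deg[v], min(nd), max(nd),
                    sum(nd) / len(nd), statistics.pstdev(nd))
    return feats

def ldp_graph_embedding(adj):
    """Graph-level vector: mean of the node features (one simple choice)."""
    rows = list(ldp_node_features(adj).values())
    return tuple(sum(col) / len(col) for col in zip(*rows))
```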
        <p>Fig. 4 shows the clusters obtained using the K-Means clustering algorithm, based on the
embeddings generated by FEATHER [32] and GL2Vec [33]. Both methods capture more global
structural patterns. However, they do not directly encode textual node or edge label information
of a graph. To assess the similarity between the clustering results, the Adjusted Rand Index
(ARI) [34] is used. ARI evaluates clustering similarity by considering all pairs of data points and
determining whether they are assigned to the same or different clusters. For dataset 1, the ARI
score is 0.51, indicating moderate agreement between clusterings, whereas for dataset 2, the
ARI is 0.20, suggesting weak agreement between the clustering results.</p>
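<p>ARI can be computed directly from pair counts. The following self-contained implementation follows the standard pair-counting formula and is shown for illustration; it is not the code that produced the scores above.</p>

```python
from collections import Counter
from math import comb

def adjusted_rand_index(a, b):
    """Pair-counting ARI between two labelings of the same items."""
    n = len(a)
    # Pairs agreeing within joint, first, and second clusterings.
    sum_ij = sum(comb(c, 2) for c in Counter(zip(a, b)).values())
    sum_a = sum(comb(c, 2) for c in Counter(a).values())
    sum_b = sum(comb(c, 2) for c in Counter(b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    if max_index == expected:  # degenerate labelings
        return 1.0
    return (sum_ij - expected) / (max_index - expected)
```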
      </sec>
      <sec id="sec-5-2">
        <title>5.2. Query-based Analysis</title>
        <sec id="sec-5-2-1">
          <title>Safety-related Concerns</title>
          <p>First, we present the safety-related concerns gathered from traffic engineers at the City Planning,
Development, and Traffic Department of Jönköping Municipality. These were collected during a
workshop with traffic engineers, focusing on factors influencing the safety of crossing behaviors
with respect to VRUs (pedestrians and cyclists) and vehicles. The discussions highlighted critical
elements affecting behavior and potential risks in mobility.</p>
          <p>Traffic engineers raised several concerns regarding pedestrian behavior at crossings, which
can impact overall mobility safety:</p>
          <p>P1: Taking shortcuts: Many pedestrians cross diagonally instead of using designated
crosswalks, increasing accident risks.</p>
          <p>P2: Walking speed variations: Differences in pedestrians’ walking speeds may increase
collision risks at intersections.</p>
          <p>P3: False sense of security: Some pedestrians cross roads even when a car is approaching,
assuming that drivers will stop, which can lead to dangerous situations.</p>
          <p>Cyclists are particularly vulnerable road users, and their interactions with vehicles and
pedestrians can pose safety challenges. The following concerns were highlighted:</p>
          <p>C1: Swinging out at crossings: Cyclists may unexpectedly change direction or enter vehicle
lanes when approaching crossings, creating conflict points with motor vehicles.</p>
          <p>C2: Crossing at high speed: Some cyclists approach crossings at high speeds, reducing their
ability to stop in time and increasing the risk of collisions with other VRUs and vehicles.</p>
          <p>The behavior of drivers plays a crucial role in ensuring road safety, particularly in varying
environmental and traffic conditions. Key factors identified include:</p>
          <p>V1: Speed: How quickly vehicles pass through the crossing area.</p>
          <p>V2: Type of vehicle: Larger vehicles, such as trucks and buses, have longer stopping distances
and wider blind spots, increasing accident risks.</p>
          <p>V3: Weather conditions: Rain, snow, and icy roads affect vehicle traction and braking
performance.</p>
          <p>In this work we have focused on the concerns that can be addressed using the datasets
collected through the performed measurements. Other safety concerns were also raised during the
workshop, such as pedestrians crossing while looking at their phones. However, such concerns
cannot be addressed by analyzing data from the current sensors while remaining compliant
with GDPR regulations.</p>
        </sec>
        <sec id="sec-5-2-2">
          <title>Query and Results</title>
          <p>The main SPARQL queries are included in Appendix A. For example, Query 4 is designed to
identify patterns of interactions involving cyclists who are not maintaining a safe speed during
a crossing scene. In the WHERE clause, it is specified that the spatial relationships for cyclists
must include either ’out of crossing area’ or ’on’, ensuring that they have fully crossed the road
rather than merely passing along the sidewalk. The query also uses a UNION to account for
scenarios where cyclists are involved in interactions with either moving or static objects during
the scene. Finally, a FILTER condition is applied to include only those interactions where the
cyclist’s speed is greater than 20 km/h.</p>
          <table-wrap id="tab2">
            <label>Table 2</label>
            <caption>
              <p>Occurrence frequencies of safety-related interaction patterns in dataset 1 (zebra crossing area) and dataset 2 (zebra-free crossing area).</p>
            </caption>
            <table>
              <thead>
                <tr><th>pattern</th><th>frequency</th><th>related concerns</th></tr>
              </thead>
              <tbody>
                <tr><td>pedestrian – close_to – vehicle</td><td>124 (1), 127 (2)</td><td></td></tr>
                <tr><td>pedestrian – cross (via) – out_of_area (crossing_area)</td><td>695 (1), 181 (2)</td><td></td></tr>
                <tr><td>pedestrian – cross (with) – high_speed</td><td>433 (1), 129 (2)</td><td></td></tr>
                <tr><td>cyclist – cross (via) – out_of_area (crossing_area)</td><td>68 (1), 3 (2)</td><td></td></tr>
                <tr><td>cyclist – cross (with) – high_speed</td><td>127 (1), 14 (2)</td><td></td></tr>
                <tr><td>vehicle – pass (with) – high_speed AND good weather_condition</td><td>222 (1), 70 (2)</td><td></td></tr>
                <tr><td>heavy vehicle – pass (with) – high_speed AND good weather_condition</td><td>3 (1), 10 (2)</td><td>V1, V2, V3</td></tr>
                <tr><td>vehicle – close_to – pedestrian AND vehicle – cross (with) – high_speed AND good weather_condition</td><td>30 (1), 20 (2)</td><td>P3, V1, V3</td></tr>
              </tbody>
            </table>
          </table-wrap>
          <p>In Table 2 we present the analysis results related to safety concerns observed in the two
datasets. These patterns capture various types of interactions between VRUs and crossing road
infrastructure, as well as interactions between vehicles, highlighting potential safety risks. The
table provides the occurrence frequency of each pattern in Dataset 1 (zebra crossing area) and
Dataset 2 (zebra-free crossing area), offering insights into how crossing behaviors may vary
based on infrastructure design. These insights allow traffic engineers to assess behavioral trends
and evaluate road infrastructure design with respect to improving safety.</p>
        </sec>
      </sec>
    </sec>
    <sec id="sec-6">
      <title>6. Conclusions</title>
      <p>In this paper, we introduced a structured spatial-temporal representation of VRU crossing
behavior using ontology-based knowledge graphs. Our method enables explicit reasoning
and explainability, addressing the limitations of deep learning-based trajectory prediction,
which often lacks interpretability and structured knowledge representation. By constructing
knowledge graphs from real-world urban datasets, we captured crossing behavior patterns
and provided query-based insights to support traffic engineers in assessing safety risks and
infrastructure planning.</p>
      <p>To further enhance the effectiveness of STKG-based analysis, future research will explore
additional techniques for pattern mining and causal analysis. Specifically, we aim to identify
recurring unsafe crossing behavior patterns using methods such as frequent subgraph mining
algorithms [35], rule mining with LLMs [36], and reinforcement learning [37], and to apply
causal inference techniques to perform cause-effect analysis. Moreover, we will use knowledge
graph embeddings, rather than simple graph embeddings, to better capture the semantic
information within the KGs. Such analyses could provide insights into factors influencing unsafe
crossing behaviors and provide decision-making support for road infrastructure planning and
development.</p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgments</title>
      <p>This work has been conducted in the project "Data and AI for decision Making suppOrt in
traffic iNfrastructure Development (DAIMOND)", which is funded by Vinnova (Sweden’s
innovation agency) and AI Sweden (the Swedish national center for applied AI). The authors
would like to thank the traffic department in Jönköping municipality for providing traffic safety
related use cases, and Viscando AB for providing the traffic measurement datasets and expertise
in traffic measurement and analysis.</p>
    </sec>
    <sec id="sec-8">
      <title>Declaration on Generative AI</title>
      <p>During the preparation of this work, the author(s) used ChatGPT to improve grammar, check
spelling, and reword. After using these tool(s)/service(s), the author(s) reviewed and edited the
content as needed and take(s) full responsibility for the publication’s content.</p>
    </sec>
    <sec id="sec-9">
      <title>Appendix A: SPARQL Queries</title>
      <p>In this appendix, we include the SPARQL queries used to answer the safety-related concerns described in Section 5.2.</p>
      <sec id="sec-9-1">
        <title>Query 1: VRU is taking shortcut</title>
        <p>SELECT (COUNT(DISTINCT ?b) AS ?total)
WHERE {
?b rdf:type ts:Behavior .
?b ts:hasFrame ?f .
?f ts:containsInteraction ?i .
?i ts:hasSpatialRelationship ?r .
FILTER(?r = ts:out_of_area) .
?i ts:hasObject1 ?obj1 .
?i ts:hasObject2 ?obj2 .
{?obj1 rdf:type ts:Pedestrian}
UNION {?obj2 rdf:type ts:Pedestrian}
}</p>
      </sec>
      <sec id="sec-9-2">
        <title>Query 2: regarding pedestrian walking speed</title>
      </sec>
      <sec id="sec-9-3">
        <title>Query 3: regarding cyclist swinging out at intersections</title>
        <p># Reconstructed from an incomplete fragment: counts behaviors in which
# the majority of a cyclist’s spatial relations lie outside the crossing area.
SELECT (COUNT(DISTINCT ?b) AS ?total)
WHERE {
{
SELECT ?b (SUM(IF(?r = ts:out_of_area, 1, 0)) AS ?outOfCrossingCount)
(COUNT(?r) AS ?totalRelations)
WHERE {
?b rdf:type ts:Behavior .
?b ts:hasFrame ?f .
?f ts:containsInteraction ?i .
?i ts:hasSpatialRelationship ?r .
?i ts:hasObject1 ?obj1 .
?obj1 rdf:type ts:Bicyclist .
}
GROUP BY ?b
}
FILTER(?outOfCrossingCount &gt; ?totalRelations / 2)
}</p>
      </sec>
      <sec id="sec-9-4">
        <title>Query 4: regarding cyclist unsafe speed</title>
        <p>SELECT (COUNT(DISTINCT ?b) AS ?total)
WHERE {
?b rdf:type ts:Behavior .
?b ts:hasFrame ?f .
?f ts:containsInteraction ?i .
?i ts:hasSpatialRelationship ?r .
FILTER(?r = ts:out_of_area || ?r = ts:on)
{
?i ts:hasObject1 ?obj1 .
?obj1 rdf:type ts:Bicyclist .
?i ts:hasObject1Info ?obj1Info .
?obj1Info ts:speed ?speed .
BIND(?obj1 AS ?object)
}
FILTER(?speed &gt; 20)
}</p>
      </sec>
      <sec id="sec-9-5">
        <title>Query 5: regarding vehicle’s speed</title>
      </sec>
      <sec id="sec-9-6">
        <title>Query 6: regarding vehicle’s speed and distance to VRUs</title>
        <p>SELECT (COUNT(DISTINCT ?b) AS ?total)
WHERE {
?b rdf:type ts:Behavior .
?b ts:hasFrame ?f .
?f ts:containsInteraction ?i .
?i ts:hasSpatialRelationship ts:close_to .
{
# Reconstructed from an incomplete fragment by analogy with Query 4;
# the class name ts:Vehicle is an assumption.
?i ts:hasObject1 ?obj1 .
?obj1 rdf:type ts:Vehicle .
?i ts:hasObject1Info ?obj1Info .
?obj1Info ts:speed ?speed .
}
FILTER(?speed &gt; 30)
}</p>
        <p>[26] J. M. Zacks, B. Tversky, G. Iyer, Perceiving, remembering, and communicating structure in
events, Journal of Experimental Psychology: General 130 (2001) 29.
[27] N. K. Speer, J. M. Zacks, J. R. Reynolds, Perceiving narrated events, in: Proceedings of the
Annual Meeting of the Cognitive Science Society, volume 26, 2004.
[28] J. Ji, R. Krishna, L. Fei-Fei, J. C. Niebles, Action genome: Actions as compositions of
spatio-temporal scene graphs, in: Proceedings of the IEEE/CVF Conference on Computer
Vision and Pattern Recognition, 2020, pp. 10236–10247.
[29] L. Mlodzian, Z. Sun, H. Berkemeyer, S. Monka, Z. Wang, S. Dietze, L. Halilaj, J. Luettin,
nuScenes Knowledge Graph - A Comprehensive Semantic Representation of Traffic Scenes
for Trajectory Prediction, in: Proceedings of the IEEE/CVF International Conference on
Computer Vision (ICCV) Workshops, 2023, pp. 42–52.
[30] H. Tan, F. Westphal, A semantic representation of pedestrian crossing behavior, in: Joint
Proceedings of the ESWC 2024 Workshops and Tutorials, Hersonissos, 26–27 May 2024,
volume 3749, CEUR-WS, 2024.
[31] C. Cai, Y. Wang, A simple yet effective baseline for non-attributed graph classification,
arXiv preprint arXiv:1811.03508 (2018).
[32] B. Rozemberczki, R. Sarkar, Characteristic functions on graphs: Birds of a feather, from
statistical descriptors to parametric models, in: Proceedings of the 29th ACM International
Conference on Information &amp; Knowledge Management, 2020, pp. 1325–1334.
[33] H. Chen, H. Koga, GL2Vec: Graph embedding enriched by line graphs with edge features,
in: Neural Information Processing: 26th International Conference, ICONIP 2019, Sydney,
NSW, Australia, December 12–15, 2019, Proceedings, Part III, Springer, 2019, pp. 3–14.
[34] D. Steinley, Properties of the Hubert-Arabie adjusted Rand index, Psychological Methods 9
(2004) 386.
[35] C. Jiang, F. Coenen, M. Zito, A survey of frequent subgraph mining algorithms, The
Knowledge Engineering Review 28 (2013) 75–105.
[36] L. Luo, J. Ju, B. Xiong, Y.-F. Li, G. Haffari, S. Pan, ChatRule: Mining logical rules with large
language models for knowledge graph reasoning, arXiv preprint arXiv:2309.01538 (2023).
[37] W. Xiong, T. Hoang, W. Y. Wang, DeepPath: A reinforcement learning method for
knowledge graph reasoning, arXiv preprint arXiv:1707.06690 (2017).</p>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>C.</given-names>
            <surname>Koszowski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Gerike</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hubrich</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Götschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Pohle</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Wittwer</surname>
          </string-name>
          ,
          <article-title>Active mobility: bringing together transport planning, urban planning, and public health</article-title>
          ,
          <source>in: Towards User-Centric Transport in Europe: Challenges, Solutions and Collaborations</source>
          (
          <year>2019</year>
          )
          <fpage>149</fpage>
          -
          <lpage>171</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          World Health Organization
          ,
          <article-title>Compendium of WHO and other UN guidance on health and environment: version with International Classification of Health Intervention (ICHI) codes</article-title>
          , World Health Organization,
          <year>2024</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>R.</given-names>
            <surname>Abduljabbar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Dia</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Liyanage</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. A.</given-names>
            <surname>Bagloee</surname>
          </string-name>
          ,
          <article-title>Applications of artificial intelligence in transport: An overview</article-title>
          ,
          <source>Sustainability</source>
          <volume>11</volume>
          (
          <year>2019</year>
          )
          <fpage>189</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ferguson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Luders</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Grande</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. P.</given-names>
            <surname>How</surname>
          </string-name>
          ,
          <article-title>Real-time predictive modeling and robust avoidance of pedestrians with uncertain, changing intentions</article-title>
          ,
          <source>in: Algorithmic Foundations of Robotics XI: Selected Contributions of the Eleventh International Workshop on the Algorithmic Foundations of Robotics</source>
          , Springer,
          <year>2015</year>
          , pp.
          <fpage>161</fpage>
          -
          <lpage>177</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>E.</given-names>
            <surname>Papadimitriou</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Yannis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Golias</surname>
          </string-name>
          ,
          <article-title>A critical assessment of pedestrian behaviour models</article-title>
          ,
          <source>Transportation research part F: traffic psychology and behaviour 12</source>
          (
          <year>2009</year>
          )
          <fpage>242</fpage>
          -
          <lpage>255</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>A.</given-names>
            <surname>Rasouli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Kotseruba</surname>
          </string-name>
          ,
          <article-title>Intend-wait-cross: Towards modeling realistic pedestrian crossing behavior</article-title>
          ,
          <source>in: 2022 IEEE Intelligent Vehicles Symposium (IV)</source>
          , IEEE,
          <year>2022</year>
          , pp.
          <fpage>83</fpage>
          -
          <lpage>90</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Ridel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Rehder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Lauer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Stiller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Wolf</surname>
          </string-name>
          ,
          <article-title>A literature review on the prediction of pedestrian behavior in urban scenarios</article-title>
          ,
          <source>in: 2018 21st International Conference on Intelligent Transportation Systems (ITSC)</source>
          , IEEE,
          <year>2018</year>
          , pp.
          <fpage>3105</fpage>
          -
          <lpage>3112</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>E.</given-names>
            <surname>Schuetz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F. B.</given-names>
            <surname>Flohr</surname>
          </string-name>
          ,
          <article-title>A Review of Trajectory Prediction Methods for the Vulnerable Road User</article-title>
          ,
          <source>Robotics</source>
          <volume>13</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>A.</given-names>
            <surname>Alahi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Goel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Ramanathan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Robicquet</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Fei-Fei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Savarese</surname>
          </string-name>
          ,
          <article-title>Social LSTM: Human trajectory prediction in crowded spaces</article-title>
          ,
          <source>in: Proceedings of the IEEE conference on computer vision and pattern recognition</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>961</fpage>
          -
          <lpage>971</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Ouyang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Zhang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Zheng</surname>
          </string-name>
          ,
          <article-title>SR-LSTM: State refinement for LSTM towards pedestrian trajectory prediction</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF conference on computer vision and pattern recognition</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>12085</fpage>
          -
          <lpage>12094</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Quan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <article-title>Holistic LSTM for pedestrian trajectory prediction</article-title>
          ,
          <source>IEEE transactions on image processing 30</source>
          (
          <year>2021</year>
          )
          <fpage>3229</fpage>
          -
          <lpage>3239</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Bi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <article-title>STGAT: Modeling spatial-temporal interactions for human trajectory prediction</article-title>
          ,
          <source>in: Proceedings of the IEEE/CVF international conference on computer vision</source>
          ,
          <year>2019</year>
          , pp.
          <fpage>6272</fpage>
          -
          <lpage>6281</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Cui</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>ST-AGNN: Spatial-Temporal Attention Graph Neural Network for Pedestrian Trajectory Prediction</article-title>
          , in: Applied Mathematics, Modeling and Computer Simulation, IOS Press,
          <year>2022</year>
          , pp.
          <fpage>268</fpage>
          -
          <lpage>275</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>C.</given-names>
            <surname>Yu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Ma</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Ren</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yi</surname>
          </string-name>
          ,
          <article-title>Spatio-temporal graph transformer networks for pedestrian trajectory prediction</article-title>
          ,
          <source>in: Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part XII 16</source>
          , Springer,
          <year>2020</year>
          , pp.
          <fpage>507</fpage>
          -
          <lpage>523</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>S.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Mao</surname>
          </string-name>
          ,
          <article-title>An efficient spatial-temporal model based on gated linear units for trajectory prediction</article-title>
          ,
          <source>Neurocomputing</source>
          <volume>492</volume>
          (
          <year>2022</year>
          )
          <fpage>593</fpage>
          -
          <lpage>600</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>J.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. G.</given-names>
            <surname>Wang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. X.</given-names>
            <surname>Xue</surname>
          </string-name>
          ,
          <article-title>PTPGC: Pedestrian trajectory prediction by graph attention network with ConvLSTM</article-title>
          ,
          <source>Robotics and Autonomous Systems</source>
          <volume>148</volume>
          (
          <year>2022</year>
          )
          <fpage>103931</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>R.</given-names>
            <surname>Krishna</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Groth</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Johnson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Hata</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Kravitz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Kalantidis</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.-J.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Shamma</surname>
          </string-name>
          , et al.,
          <article-title>Visual genome: Connecting language and vision using crowdsourced dense image annotations</article-title>
          ,
          <source>International journal of computer vision 123</source>
          (
          <year>2017</year>
          )
          <fpage>32</fpage>
          -
          <lpage>73</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>T.</given-names>
            <surname>Hägerstrand</surname>
          </string-name>
          ,
          <article-title>What about people in Regional Science?</article-title>
          ,
          <source>Papers of the Regional Science Association</source>
          <volume>24</volume>
          (
          <year>1970</year>
          )
          <fpage>6</fpage>
          -
          <lpage>21</lpage>
          . URL: http://dx.doi.org/10.1007/bf01936872. doi:10.1007/bf01936872.
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>D.</given-names>
            <surname>Orellana</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Renso</surname>
          </string-name>
          ,
          <article-title>Developing an interactions ontology for characterising pedestrian movement behaviour</article-title>
          ,
          <source>in: Movement-aware applications for sustainable mobility: Technologies and approaches</source>
          , IGI Global
          ,
          <year>2010</year>
          , pp.
          <fpage>62</fpage>
          -
          <lpage>86</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>C.</given-names>
            <surname>Chai</surname>
          </string-name>
          ,
          <string-name>
            <given-names>X.</given-names>
            <surname>Shi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y. D.</given-names>
            <surname>Wong</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M. J.</given-names>
            <surname>Er</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E. T. M.</given-names>
            <surname>Gwee</surname>
          </string-name>
          ,
          <article-title>Fuzzy logic-based observation and evaluation of pedestrians' behavioral patterns by age and gender</article-title>
          ,
          <source>Transportation research part F: traffic psychology and behaviour 40</source>
          (
          <year>2016</year>
          )
          <fpage>104</fpage>
          -
          <lpage>118</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>A.</given-names>
            <surname>Gharebaghi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.-A.</given-names>
            <surname>Mostafavi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Edwards</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Fougeyrollas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Gamache</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Grenier</surname>
          </string-name>
          ,
          <article-title>Integration of the social environment in a mobility ontology for people with motor disabilities</article-title>
          ,
          <source>Disability and Rehabilitation: Assistive Technology</source>
          <volume>13</volume>
          (
          <year>2018</year>
          )
          <fpage>540</fpage>
          -
          <lpage>551</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>F.</given-names>
            <surname>Fang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yamaguchi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Khiat</surname>
          </string-name>
          ,
          <article-title>Ontology-based reasoning approach for long-term behavior prediction of road users</article-title>
          ,
          <source>in: 2019 IEEE Intelligent Transportation Systems Conference (ITSC)</source>
          , IEEE,
          <year>2019</year>
          , pp.
          <fpage>2068</fpage>
          -
          <lpage>2073</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>D.</given-names>
            <surname>Newtson</surname>
          </string-name>
          ,
          <article-title>Attribution and the unit of perception of ongoing behavior</article-title>
          ,
          <source>Journal of personality and social psychology 28</source>
          (
          <year>1973</year>
          )
          <fpage>28</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>L.</given-names>
            <surname>Spector</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Grafman</surname>
          </string-name>
          ,
          <article-title>Planning, neuropsychology, and artificial intelligence: cross-fertilization</article-title>
          ,
          <source>Handbook of neuropsychology 9</source>
          (
          <year>1994</year>
          )
          <fpage>377</fpage>
          -
          <lpage>392</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>C.</given-names>
            <surname>Baldassano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Chen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zadbood</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. W.</given-names>
            <surname>Pillow</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Hasson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Norman</surname>
          </string-name>
          ,
          <article-title>Discovering event structure in continuous narrative perception and memory</article-title>
          ,
          <source>Neuron</source>
          <volume>95</volume>
          (
          <year>2017</year>
          )
          <fpage>709</fpage>
          -
          <lpage>721</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>