<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Video Analytics for Volleyball: Preliminary Results and Future Prospects of the 5VREAL Project</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Andrea</forename><surname>Rosani</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ivan</forename><surname>Donadello</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michele</forename><surname>Calvanese</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Alessandro</forename><surname>Torcinovich</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Giuseppe</forename><forename type="middle">Di</forename><surname>Fatta</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Marco</forename><surname>Montali</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Oswald</forename><surname>Lanz</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Libera Università di Bolzano</orgName>
								<address>
									<addrLine>Piazza Università 1</addrLine>
									<postCode>39100</postCode>
									<settlement>Bozen-Bolzano</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Video Analytics for Volleyball: Preliminary Results and Future Prospects of the 5VREAL Project</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">E5419AFCEEADC9C3BA8F032C57084CB7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:56+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Video action recognition</term>
					<term>data augmentation</term>
					<term>video annotation</term>
					<term>process mining</term>
					<term>sports</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>This paper introduces a real-time action recognition and tactical-behavior mining system designed specifically for volleyball games. The system aims to provide data augmentation, video annotation and KPI extraction processes by accurately identifying various actions and action sequential patterns performed during volleyball matches. Leveraging advanced computer vision techniques, the system aims at automatically detecting and recognizing player actions and group actions in real time. Then, Process Mining techniques are used to extract tactical behaviors, in the form of temporal relations, among player actions. By providing precise annotations, the system significantly provides an instrument for volleyball game analytics and tactical analysis. This paper outlines the architecture and key components of the real-time action recognition and tactical-behavior mining system and presents some preliminary results on the performance of the proposed model.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Over the past decade, action recognition in professional sport activities has rapidly gained popularity as a tool for a variety of tasks such as player performance analytics, computer-aided game refereeing, and the like. In response to this interest, several action recognition systems have been devised in the context of several sports, such as football, basket, rugby, etc.</p><p>In this context, this paper presents an action recognition system for volleyball game analysis. The preliminary results obtained during the activity focus on the detection of actions, events, and tactical behaviors in volley with the final objective of providing a reliable Ai-powered data augmentation system that can be used for the TV broadcasting of volley games in a real time scenario, as well as for offline analytics activities, starting from the video collected by a multi view source and shared using 5G transmission.</p><p>The document is structured into several sections that outline in detail the study process and the developments obtained. First, a review of some particularly relevant works in the specific field is proposed. Then, methods and algorithms are described, along with some results of preliminary experiments on a public dataset <ref type="bibr" target="#b6">[7]</ref>. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.1.">Context: the 5VREAL Project</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">State of art in action recognition and tactical behavior for volley</head><p>The task of action/pose estimation involves analyzing video content to track one or more persons of interest and identify their key anatomical features, typically defined as keypoints <ref type="bibr" target="#b13">[14]</ref>, <ref type="bibr" target="#b25">[26]</ref>. When multiple actors interact, the task is usually referred as Group Activity Recognition (GAR) <ref type="bibr" target="#b17">[18]</ref>, <ref type="bibr" target="#b18">[19]</ref>, <ref type="bibr" target="#b21">[22]</ref>. GAR algorithms differ in how they model spatial and temporal information in videos. Some dated approaches apply recurrent models: <ref type="bibr" target="#b6">[7]</ref> develops a hierarchical model based on two long-short term memory (LSTM) models, <ref type="bibr" target="#b12">[13]</ref> proposes a recurrent neural network (RNN) model with attention mechanisms and semantic graphs, <ref type="bibr" target="#b2">[3]</ref> generates a map of candidate regions of interest and uses an RNN architecture for temporal processing, and <ref type="bibr" target="#b23">[24]</ref> adopts a top-down approach using Gated Recurrent Unit.</p><p>Other works focus on convolutional mechanisms: <ref type="bibr" target="#b1">[2]</ref> develops a convolutional relational machine for GAR, <ref type="bibr" target="#b18">[19]</ref> works on individual poses using onedimensional convolutional neural networks.</p><p>Newer models like graph-based networks and Transformers are also employed: <ref type="bibr" target="#b24">[25]</ref> uses a graphbased model for spatio-temporal relationships, designs a descriptor for crowded scenarios, and <ref type="bibr" target="#b9">[10]</ref> [12] proposes a Transformer-based solution for processing spatial and temporal information.</p><p>To recognize tactical behaviors, techniques like sequence mining algorithms and Inductive Logic Programming are used ( <ref type="bibr" target="#b20">[21]</ref>, <ref type="bibr" target="#b18">[19]</ref>, <ref type="bibr" target="#b22">[23]</ref>). Works in this field include <ref type="bibr" target="#b8">[9]</ref> and <ref type="bibr" target="#b10">[11]</ref> for predicting complex events from football matches using Answer Set Programming and Subgraph Discovery. In our work, temporal pattern mining algorithms based on Linear Temporal Logics will be used, offering a different approach compared to the mentioned works.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Methodology and algorithms</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">General architecture of the system</head><p>The AI block consists of a set of algorithms required to a) identify the position and trajectory of the ball, b) identify the position of individual players, and c) detect and identify actions performed within a specific timeframe.</p><p>The acquisition of images for AI occurs through three iPhone 14 Pro devices mounted tripods with calibrated cameras, connected to a backend via 5G, producing synchronized SRT (Secure Reliable Transport) compressed video streams. The ball localization module starts the processing by producing a continuous data stream of the ball trajectory. When a change in its direction is detected, the player tracking and action detection modules are activated (Figure <ref type="figure" target="#fig_2">1</ref>). This generates an output of the events occurred in the selected timeframe. In the following, we analyze in detail the different steps. 3D Ball tracking is described by a project partner in another submission to Ital-IA 2024.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Ball trajectories change detection</head><p>The general scheme for ball trajectory analysis can be subdivided in the following steps (Figure <ref type="figure" target="#fig_3">2</ref>):</p><p>1. Identification of possible candidate ball positions.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Incremental interpolation of candidates with</head><p>parabolic trajectories, producing a parabola for each frame. 3. Linking of trajectories from which to derive the motion of the ball. 4. Detection of trigger events when the ball undergoes an upward acceleration, such as a player touching or a bounce on the floor. The algorithm, originally proposed in <ref type="bibr" target="#b4">[5]</ref>, requires as input the positions of the ball at each time step, that can be easily devised with a ball tracking system <ref type="bibr" target="#b13">[14]</ref>. The path of the ball is modelled by a piecewise parabolic trajectory. Initially, seed triplets are identified within a threshold distance (𝑟).</p><p>These triplets serve as initial anchors for parabolic fitting. Due to false positives, multiple seed triplets per frame may exist. Each triplet is used to fit a parabola, and candidate detections close to the estimated position are added to a set of supporting points.   <ref type="bibr" target="#b6">[7]</ref>). The variation in the ball trajectory identifies an interaction that triggers the event.</p><p>To ensure a unique parabola per frame, trajectory distances are computed and used to construct a weighted graph. Dijkstra's algorithm <ref type="bibr" target="#b5">[6]</ref> identifies the optimal path through this graph, yielding the final sequence of parabolas describing the ball's path.</p><p>Considering that the action mainly occurs around the ball's position, the proposed solution allows for detecting changes in the direction of the ball due to gameplay interactions. This trajectory variation triggers an analysis mechanism of the activities performed near the contact point to activate the subsequent phase of recognizing the actions of individual players and teams (Figure <ref type="figure" target="#fig_4">3</ref>).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Individual player action recognition</head><p>In the rapidly evolving field of action recognition, many datasets, structures, and architectures have been introduced to address the challenges and complexities associated with understanding human actions in different environments <ref type="bibr" target="#b3">[4]</ref>. These studies focus on extracting meaningful information from videos, by detecting and recognizing what a subject is doing <ref type="bibr" target="#b14">[15]</ref>, <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b16">[17]</ref>.</p><p>The posture detection occurs within the video stream, in the player's bounding box, that is the area of interests of an object (the player, in this case) tracked in each video frame. The detection of the posture uses pose estimation technologies based on machine learning models <ref type="bibr" target="#b23">[24]</ref>, that identify key anatomical features of players, such as joints, extremities, center of mass, etc., commonly referred to as keypoints <ref type="bibr" target="#b7">[8]</ref>. In the case of a volleyball player, the bounding box is used to locate the player's position within the video frame and subsequently extract keypoints on the players' bodies (Figures <ref type="figure" target="#fig_6">4 and 5</ref>). Starting from this information is possible to perform action recognition, as demonstrated effectively in <ref type="bibr" target="#b15">[16]</ref>, <ref type="bibr" target="#b16">[17]</ref> that will be used as reference in the project for this specific task.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Team activity recognition</head><p>The challenge of Group Activity Recognition (GAR) requires addressing two main aspects. First, it demands a compositional understanding of the scene. Due to the relatively high number of people present in the scene, it's challenging to learn meaningful representations for GAR over the entire area. Since group activities often involve subgroups of actors and scene objects, the final label of the action depends on a compositional understanding of these entities. Secondly, GAR benefits from relational reasoning on scene elements to understand the relative importance of entities and their interactions <ref type="bibr" target="#b25">[26]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Preliminary results</head><p>In the following, we present some preliminary results obtained using state-of-the-art techniques on public available datasets.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Dataset</head><p>The Volleyball dataset <ref type="bibr" target="#b6">[7]</ref>, represents a significant resource in the context of sports action recognition, specifically on volleyball. Although originally designed for athlete action recognition, the dataset has been extended to include the task of 2D ball detection in the image. The dataset comprises a total of 4830 frames from 55 videos, offering a wide variety of actions and activities to analyze (Figure <ref type="figure" target="#fig_5">4</ref>). In the dataset, there are nine annotations for individual player actions and eight group activities, detailed in Table <ref type="table">1</ref>. Table <ref type="table">1</ref> Classes of individual player activities are listed, and group actions, including the number of instances.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Group activity recognition</head><p>GAR is performed at different levels. Initially, the keypoints of the various players are extracted. Based on these, an estimation of the action each player is doing is defined, and then related to the predicted level of person-to-person and person-to-group interaction.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.1.">Trigger event identification and GAR</head><p>The situation that activates the GAR mechanism is represented by the trigger, identified with the change of the ball direction (Figure <ref type="figure" target="#fig_6">5</ref>). In Figure <ref type="figure" target="#fig_7">6</ref> we present some frames from <ref type="bibr" target="#b6">[7]</ref>, processed using the proposed algorithms, detailed in the following section, allowing for a comprehensive visualization of the keypoints of the various players combined with the trajectories of the ball</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.2.">Hierarchy of semantic events for GAR</head><p>Taking inspiration from the approach proposed in <ref type="bibr" target="#b25">[26]</ref>, composite learning of entities in the video and relational reasoning on these entities is established.   <ref type="bibr" target="#b6">[7]</ref>, <ref type="bibr" target="#b25">[26]</ref>. In the first confusion matrix we represent GAR, in the second one the single player activities. Like humans, object representation is performed at various granularities, as well as reasoning about their interactions to transform sensory signals into highlevel knowledge. GAR is addressed by modeling a video as a set of tokens representing multi-scale semantic concepts present in the video, thus allowing the described method to be easily adaptable to understand any video with multi-actor multi-object interactions.</p><p>In the specific case of volleyball, the actors are represented by the players, while the object is represented by the ball. These tokens include keypoints, people, person-to-person interactions, person-to-group interactions, and object interactions. (i.e., considering the entire images and not just the keypoints), shows significant accuracy (Figure <ref type="figure" target="#fig_8">7</ref>)</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Tactical behavior</head><p>By tactical behavior, we mean a set of temporal relationships among volleyball actions that can lead to an outcome of particular interest, such as scoring a point. In what follows we provide a conceptual framework to formally define tactical behaviors and use Process Mining (PM) techniques for mining tactical behaviors from annotated volleyball matches.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.1.">A conceptual model for tactical behaviors</head><p>A tactical behavior is a set of temporal relationships over events in a volleyball match. An event is the main action of a player on the ball which has a start time, an end time, a set of players involved with information related to their pose, their bounding boxes, their unique identifiers, the quality of the action and the position of the ball. For example:</p><p>• A dunk by a player from area A1 is immediately followed by a point scored.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>•</head><p>A reception (with low quality) of a player is immediately followed by a point.</p><p>Our conceptual model for a volleyball event is shown in Figure <ref type="figure" target="#fig_9">8</ref>. A volleyball match is therefore a sequence of annotations of volleyball events in chronological order. Such events are annotated with the use of the computer vision techniques above or provided by scoutmen.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.2.">Process Mining for tactical behaviors</head><p>Process Mining <ref type="bibr" target="#b19">[20]</ref> embraces Data Mining and Knowledge Representation and focuses on the analysis and improvement of business processes based on data collected from the information systems. One of its key features is the availability of tools for mining information from temporal discrete data. We analyzed the matches of the Volleyball dataset (converted in a suitable format) with the Process Mining RuM (Rule Mining Made Simple) tool <ref type="bibr" target="#b0">[1]</ref> to mine tactical behaviors.</p><p>RuM extracts temporal relations among actions of volleyball events through a list of templates defined with Linear Temporal Logic over finite traces (LTLf), one of the reference logics in the field <ref type="bibr" target="#b27">[28]</ref>. Examples of such templates are the Chain Response between actions A and B that means that action A must be immediately followed by action B or the Alternate Precedence between A and B that means that action B must be preceded by action A without any other occurrence of B in between, see <ref type="bibr" target="#b26">[27]</ref> Table <ref type="table">2</ref>. In addition, RuM provides the selection of a numeric support that indicates the percentage of occurrence of a particular template in the set of matches that can be used as a key process indicator. The 55 Volleyball matches were analyzed in less than 10 seconds, a suitable performance for an offline scenario. With a support of 20%, we obtained 50 tactical behaviors expressed using LTLf templates, automatically translated by the tool in natural language sentences for a better human comprehension. An example of mined tactical behavior is that in the 47.73% of the matches, each jump (for a block) is preceded by a dunk without any other jump in between. In addition, RuM also allows us to link the tactical behaviors of actions to the other concepts of the above conceptual scheme. RuM also supports the manual definition of tactical behaviors and the analysis of the matches according to such predefined behaviors. This task is called conformance checking and, as two examples of tactical behaviors, we defined that a jump is followed by a spike and that a spike is followed by a block. Figure <ref type="figure" target="#fig_10">9</ref> shows the results of the RuM conformance checking.</p><p>Each behavior is analyzed for each match and, on the right, the actions of match 5 are shown and highlighted in green if they conform to the tactical behavior, in red otherwise.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head></head><label></label><figDesc>Ital-IA 2024: 4th National Conference on Artificial Intelligence, organized by CINI, May 29-30, 2024, Naples, Italy * M. Calvanese contributed with work done during his Master Thesis project at UPC Barcelona with Prof. Carlos Andujar Gran.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>This paper describes the preliminary results obtained during the activity related to the project 5VREAL -5G Volley Reality Experience &amp; Analytics Live, focused on the study and implementation of a system for the acquisition, analysis and transmission of video and analytics in the context of volleyball games and training sessions. The project aims to create a scalable solution, which can be used at all levels of competition, professional and amateur. Two use cases are developed:• Fun Engagement: This use case aims to use artificial intelligence algorithms to enrich the spectator's experience while watching the match with augmented reality information displayed in real time on the broadcasted videos. 0009-0008-2622-6776 (A. Rosani); 000-0002-0701-5729 (I. Donadello); 0009-0005-4103-0147 (M. Calvanese); 0000-0001-8110-1791 (A. Torcinovich); 0000-0003-3096-2844 (Di Fatta), 0000-0002-8021-3430 (M. Montali), 0000-0003-4793-4276 (O. Lanz) • Coach: Use of the game &amp; 'rhythm' for technical staff. After the game, the technical staff or directly the coach receives indications on positions, speed, trajectories, time intervals between touches and higherlevel semantic information about the tactical behaviors of the team that can favor a more in-depth technical and tactical analysis. The involvement in the project of industrial partners operating in the media production sector will enable a real application scenario to test the performances of the proposed solution. The project is funded by the Italian Ministry of Enterprises and Made in Italy, MIMIT under the MIMIT FSC 2014-2020: Tecnologie 5G. Progetti di sperimentazione e ricerca -Piano di Sviluppo e Coesione 2014-2020.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Overview of the architecture of the volleyball action recognition system.The ball localization module starts the processing by producing a continuous data stream of the ball trajectory. When a change in its direction is detected, the player tracking and action detection modules are activated (Figure1). This generates an output of the events occurred in the selected timeframe. In the following, we analyze in detail the different steps. 3D Ball tracking is described by a project partner in another submission to Ital-IA 2024.</figDesc><graphic coords="2,306.02,230.41,202.35,83.64" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Ball trajectories analysis and trigger event detection [5]. The temporally furthest points within the support set are used to fit a new parabola. This iterative process continues until the set of supporting points ceases to grow. Parabolas with upward-pointing acceleration vectors are excluded as they violate physical constraints.</figDesc><graphic coords="3,142.38,108.26,104.20,150.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Action and Group Activity Recognition (images from<ref type="bibr" target="#b6">[7]</ref>). The variation in the ball trajectory identifies an interaction that triggers the event.</figDesc><graphic coords="3,85.05,354.47,214.41,93.45" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Example annotation from the Volleyball dataset showing the bounding box of each player divided by team (using different colors) and the action performed ("Left spike"). (Image from [7])</figDesc><graphic coords="3,315.02,272.75,184.35,87.35" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: -Detailed schema for action and group activity recognition.In Figure6we present some frames from<ref type="bibr" target="#b6">[7]</ref>, processed using the proposed algorithms, detailed in the following section, allowing for a comprehensive visualization of the keypoints of the various players combined with the trajectories of the ball</figDesc><graphic coords="4,94.53,464.36,186.36,119.75" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Example 2D application of player identification and identification of ball trajectory changes ("trigger"). Keypoints can be observed on each player's silhouette, along with the corresponding arc of the ball trajectory.</figDesc><graphic coords="4,340.52,244.91,133.20,116.17" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 7 :</head><label>7</label><figDesc>Figure7: Our results on the Volleyball dataset considering the Olympic Split<ref type="bibr" target="#b6">[7]</ref>,<ref type="bibr" target="#b25">[26]</ref>. In the first confusion matrix we represent GAR, in the second one the single player activities. Like humans, object representation is performed at various granularities, as well as reasoning about their interactions to transform sensory signals into highlevel knowledge. GAR is addressed by modeling a video as a set of tokens representing multi-scale semantic concepts present in the video, thus allowing the described method to be easily adaptable to understand any video with multi-actor multi-object interactions.In the specific case of volleyball, the actors are represented by the players, while the object is represented by the ball. These tokens include keypoints, people, person-to-person interactions, person-to-group interactions, and object interactions. The performance of this analysis, compared to previous techniques based on standard RGB analysis</figDesc><graphic coords="4,341.65,362.16,131.23,114.90" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: The conceptual model for volleyball events.</figDesc><graphic coords="5,85.05,376.02,211.44,107.60" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: The conformance checking analysis of predefined tactical behaviors.RuM also supports the manual definition of tactical behaviors and the analysis of the matches according to such predefined behaviors. This task is called conformance checking and, as two examples of tactical behaviors, we defined that a jump is followed by a spike and that a spike is followed by a block. Figure9shows the results of the RuM conformance checking.Each behavior is analyzed for each match and, on the right, the actions of match 5 are shown and highlighted in green if they conform to the tactical behavior, in red otherwise.</figDesc><graphic coords="5,304.60,363.60,200.77,120.65" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head></head><label></label><figDesc>The performance of this analysis, compared to previous techniques based on standard RGB analysis</figDesc><table><row><cell>Action</cell><cell>No. of</cell><cell>Group Activity</cell><cell>No. of</cell></row><row><cell>Classes</cell><cell>Instances</cell><cell>Class</cell><cell>Instances</cell></row><row><cell>Waiting</cell><cell>3601</cell><cell>Right set</cell><cell>644</cell></row><row><cell>Setting</cell><cell>1332</cell><cell>Right spike</cell><cell>623</cell></row><row><cell>Digging</cell><cell>2333</cell><cell>Right pass</cell><cell>801</cell></row><row><cell>Falling</cell><cell>1241</cell><cell>Right winpoint</cell><cell>295</cell></row><row><cell>Spiking</cell><cell>1216</cell><cell>Left winpoint</cell><cell>367</cell></row><row><cell>Blocking</cell><cell>2458</cell><cell>Left pass</cell><cell>826</cell></row><row><cell>Jumping</cell><cell>341</cell><cell>Left spike</cell><cell>642</cell></row><row><cell>Moving</cell><cell>5121</cell><cell>Left set</cell><cell>633</cell></row><row><cell>Standing</cell><cell></cell><cell></cell><cell></cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgements</head><p>This work is supported by 5VREAL -5G VOLLEY REALITY EXPERIENCE &amp; ANALYTICS LIVE, CUP I53C23001340005, funded by Italian Ministry of Enterprises and Made in Italy.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Declarative Process Mining for Software Processes: The RuM Toolkit and the Declare4Py Python Library</title>
		<author>
			<persName><forename type="first">A</forename><surname>Alman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Donadello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Maggi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Montali</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Int. Conf. on Product-Focused Sw Process Improvement</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Convolutional relational machine for group activity recognition</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Azar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Atigh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nickabadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alahi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE CVPR</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Social scene understanding: Endto-end multi-person action localization and collective activity recognition</title>
		<author>
			<persName><forename type="first">T</forename><surname>Bagautdinov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Alahi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fleuret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fua</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Savarese</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE CVPR</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">An Overview of the Vision-Based Human Action Recognition Field</title>
		<author>
			<persName><forename type="first">F</forename><surname>Camarena</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Gonzalez-Mendoza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Chang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Cuevas-Ascencio</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Math. Comput. Appl</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Ball tracking in Padel Videos using Convolutional Neural Networks</title>
		<author>
			<persName><forename type="first">M</forename><surname>Calvanese</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">in Artificial intelligence</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
		<respStmt>
			<orgName>Università di Bologna ; Corso di Studio</orgName>
		</respStmt>
	</monogr>
	<note>Laurea magistrale</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A note on two problems in connexion with graphs</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">W</forename><surname>Dijkstra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Numerische mathematik</title>
		<imprint>
			<date type="published" when="1959">1959</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">A hierarchical deep temporal model for group activity recognition</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">S</forename><surname>Ibrahim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Muralidharan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Deng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vahdat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Mori</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CVPR</title>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">RTMPose: Real-Time Multi-Person Pose Estimation based on MMPose</title>
		<author>
			<persName><forename type="first">T</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lyu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chen</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">ArXiv</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Visual reasoning on complex events in soccer videos using answer set programming</title>
		<author>
			<persName><forename type="first">A</forename><surname>Khan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Bozzato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Serafini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lazzerini</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">GCAI</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">CrowdPose: Efficient Crowded Scenes Pose Estimation and A New Benchmark</title>
		<author>
			<persName><forename type="first">J</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Mao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Lu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">CVPR</title>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Exploring successful team tactics in soccer tracking data</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Meerhoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">R</forename><surname>Goes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>De Leeuw</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Knobbe</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Discovery in Databases: Int. Workshops of ECML PKDD</title>
				<imprint>
			<date type="published" when="2019">2020. 2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Temporal poselets for collective activity detection and recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Murino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE CVPR</title>
				<imprint>
			<date type="published" when="2013">2013</date>
			<biblScope unit="page" from="500" to="507" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">stagnet: An attentive semantic rnn for group activity recognition</title>
		<author>
			<persName><forename type="first">M</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Qin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Luo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Van Gool</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. of the ECCV</title>
				<meeting>of the ECCV</meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Optical tracking in team sports: A survey on player and ball tracking methods in soccer and other team sports</title>
		<author>
			<persName><forename type="first">P</forename><surname>Rahimian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Toka</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Quantitative Analysis in Sports</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Gate-Shift Networks for Video Action Recognition</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sudhakaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Escalera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Lanz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE CVPR</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Gate-Shift-Fuse for Video Action Recognition</title>
		<author>
			<persName><forename type="first">S</forename><surname>Sudhakaran</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Escalera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Lanz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE TPAMI</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Robust Volleyball Tracking System Using Multi-View Cameras</title>
		<author>
			<persName><forename type="first">M</forename><surname>Takahashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Ikeya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ookubo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Mishina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICPR</title>
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">H</forename><surname>Thilakarathne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nibali</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Morgan</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2108.04186</idno>
		<title level="m">Pose is all you need: The pose only group activity recognition system (pogars)</title>
				<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Analyzing volleyball match data from the 2014 world championships using machine learning techniques</title>
		<author>
			<persName><forename type="first">J</forename><surname>Van Haaren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Ben Shitrit</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Davis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fua</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD</title>
				<meeting>the 22nd ACM SIGKDD</meeting>
		<imprint>
			<date type="published" when="2016-08">2016. August</date>
			<biblScope unit="page" from="627" to="634" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<author>
			<persName><forename type="first">W</forename><surname>Van Der Aalst</surname></persName>
		</author>
		<title level="m">Data science in action</title>
				<meeting><address><addrLine>Berlin Heidelberg</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="3" to="23" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Data mining in elite beach volleyball-detecting tactical patterns using market basket analysis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wenninger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Link</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lames</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IJCSS</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="issue">2</biblScope>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A comprehensive review of group activity recognition in videos</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">F</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Qiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">X</forename><surname>Zhao</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Automation and Computing</title>
		<imprint>
			<biblScope unit="page" from="1" to="17" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">VREN: volleyball rally dataset with expression notation language</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Tracy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Fraisse</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">F</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Petzold</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE ICKG</title>
		<imprint>
			<biblScope unit="page" from="337" to="346" />
			<date type="published" when="2022-11">2022. November. 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Group activity recognition by using effective multiple modality relation representation with temporalspatial attention</title>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Fu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Jian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Higcin: hierarchical graph-based cross inference network for group activity recognition</title>
		<author>
			<persName><forename type="first">R</forename><surname>Yan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Xie</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Tang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Q</forename><surname>Tian</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE TPAMI</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kadav</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shamsian</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Geng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kapadia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">P</forename><surname>Graf</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ECCV</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Outcome-oriented prescriptive process monitoring based on temporal logic patterns</title>
		<author>
			<persName><forename type="first">I</forename><surname>Donadello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Di Francescomarino</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">M</forename><surname>Maggi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ricci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shikhizada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Engineering Applications of Artificial Intelligence</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<monogr>
		<title level="m" type="main">Declarative Process Specifications: Reasoning, Discovery, Monitoring. Process Mining Handbook</title>
		<author>
			<persName><forename type="first">Claudio</forename><surname>Di Ciccio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Marco</forename><surname>Montali</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
