<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Anomaly Detection for Physical Threat Intelligence</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Paolo</forename><surname>Mignone</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Via Orabona, 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">Big Data Lab</orgName>
								<orgName type="institution">National Interuniversity Consortium for Informatics (CINI)</orgName>
								<address>
									<addrLine>Via Ariosto, 25</addrLine>
									<postCode>00185</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Donato</forename><surname>Malerba</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Via Orabona, 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">Big Data Lab</orgName>
								<orgName type="institution">National Interuniversity Consortium for Informatics (CINI)</orgName>
								<address>
									<addrLine>Via Ariosto, 25</addrLine>
									<postCode>00185</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Michelangelo</forename><surname>Ceci</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Department of Computer Science</orgName>
								<orgName type="institution">University of Bari Aldo Moro</orgName>
								<address>
									<addrLine>Via Orabona, 4</addrLine>
									<postCode>70125</postCode>
									<settlement>Bari</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">Big Data Lab</orgName>
								<orgName type="institution">National Interuniversity Consortium for Informatics (CINI)</orgName>
								<address>
									<addrLine>Via Ariosto, 25</addrLine>
									<postCode>00185</postCode>
									<settlement>Rome</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Anomaly Detection for Physical Threat Intelligence</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">262227C9B6EB48709A70A3A9543FCCF9</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-25T07:45+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Anomaly detection</term>
					<term>Air pollution</term>
					<term>Public transport traffic</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Anomaly detection is a machine learning task that has been investigated within diverse research areas and application domains. In this paper, we perform anomaly detection for Physical Threat Intelligence. Specifically, we detect anomalies in air pollution and public transport traffic data for the city of Oslo, Norway. To this aim, the state-of-the-art method Spark-GHSOM was adopted to learn, in a distributed fashion, predictive models of normal (i.e. regular) air quality and traffic scenarios. Furthermore, we extended the main algorithm to make the detected anomalies explainable through an instance-based feature ranking approach. The results show that Spark-GHSOM is able to detect anomalies in both real applications considered in this study, despite the fact that it was originally designed for a different task.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Anomaly detection is a machine learning task that refers to the problem of identifying data that do not conform to the patterns observed in historical data. These patterns represent the expected behaviour under normal conditions. Therefore, anomaly detection is usually performed through a data-driven algorithm that constructs a model able to flag a specific measurement/object/instance/observation as anomalous with respect to the historical data already seen. Anomaly detection is a very general task that finds applications in many real-world scenarios, such as fraud detection for credit cards, insurance, or health care, intrusion detection for cyber-security, fault detection in safety-critical systems, and military surveillance for enemy activities <ref type="bibr" target="#b0">[1]</ref>.</p><p>In this paper, we consider the anomaly detection task for the purposes of Physical Threat Intelligence. Specifically, we propose an algorithm for anomaly detection which works on data continuously collected by geo-located sensors in urban areas. The data refer to physical information (e.g. temperature, number of vehicles crossing a gate, number of pedestrians in a given area, PM10 level at certain points in the town, etc.). The goal is to identify an anomalous, unexpected behaviour of one or several values simultaneously, considering the specific time, date and spatial coordinates of the considered observation.<note place="foot">ITADATA2022: The 1st Italian Conference on Big Data and Data Science, September 20-21, 2022, Milan, Italy. paolo.mignone@uniba.it (P. Mignone); donato.malerba@uniba.it (D. Malerba); michelangelo.ceci@uniba.it (M. Ceci). http://www.di.uniba.it/~mignone/ (P. Mignone); http://www.di.uniba.it/~malerba/ (D. Malerba); http://www.di.uniba.it/~ceci/ (M. Ceci). ORCID: 0000-0002-8641-7880 (P. Mignone); 0000-0001-8432-4608 (D. Malerba); 0000-0002-6690-7583 (M. Ceci).</note>
This would give Security Operators the opportunity to understand potentially dangerous situations and take the appropriate actions in time.</p><p>The task we consider here is particularly challenging, since data generated by sensors are big in size and have spatial and temporal coordinates that make the observations not independent. Indeed, the spatial proximity of sensors introduces spatial autocorrelation and violates the usual assumption that observations are independently and identically distributed (i.i.d.). Although the explicit consideration of these spatial dependencies brings additional complexity to the learning process, it generally leads to increased accuracy of the learned models <ref type="bibr" target="#b1">[2]</ref>. In addition, data generated by sensors are also affected by temporal autocorrelation, since they: i) tend to have similar values at the same time on close days; ii) have a cyclic and seasonal (over days and years) behaviour; iii) tend to show the same trend over time.</p><p>While stream mining algorithms deal with both i) and ii), they may fail to consider iii), since they tend to better represent the most recently observed concepts, forgetting previously learned ones <ref type="bibr" target="#b2">[3]</ref>. Conversely, time series-based approaches are able to deal with iii), but may fail to consider i) and ii). In fact, they typically require the size of the temporal horizon as an input: considering a short-term horizon (e.g., daily) excludes a long-term horizon (e.g., seasonal), and vice versa. The approach presented in this paper, instead, is a time-series approach that exploits both spatial and temporal features, in order to take into account all the aspects mentioned before.
In particular, the method addresses the problem of identifying complex spatio-temporal patterns in sensor data by means of Self-Organizing Maps (SOMs).</p><p>A SOM <ref type="bibr" target="#b3">[4]</ref> is a neural-network-based clustering algorithm that maps high-dimensional input data into a 2-dimensional space implemented by a grid of neurons called a feature map. In this paper, we consider GHSOMs (Growing Hierarchical SOMs), which are particularly suitable for time series data and better capture spatio-temporal information thanks to the hierarchical organization of the SOMs, which adapts better to complex data distributions. Specifically, we consider the distributed extension Spark-GHSOM <ref type="bibr" target="#b0">[1]</ref>, which exploits the Spark architecture to process massive data, like those coming from sensors. Since GHSOMs are designed for clustering and not for anomaly detection tasks, we extend the learning algorithm Spark-GHSOM in order to learn GHSOMs for anomaly detection in an unsupervised fashion.</p></div>
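As a concrete illustration of the winner-neuron search and weight update underlying SOM training, a minimal single-layer SOM for purely numeric data can be sketched as follows. The grid size, the learning rate, and the omission of neighbourhood updates and of the growing/hierarchical mechanism are simplifying assumptions for illustration only, not the actual Spark-GHSOM implementation:

```python
import numpy as np

rng = np.random.default_rng(42)
grid_h, grid_w, dim = 4, 4, 3            # a 4x4 feature map over 3-D inputs
weights = rng.random((grid_h, grid_w, dim))

def winner(x):
    """Best-matching unit: grid coordinates of the neuron whose weight
    vector is closest (Euclidean) to the input vector x."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(np.argmin(d), d.shape)

def update(x, alpha=0.5):
    """One training step: pull the winner neuron towards the input by the
    learning rate alpha (a full SOM also adapts the neighbouring neurons)."""
    i, j = winner(x)
    weights[i, j] += alpha * (x - weights[i, j])
```

Repeating `update` over several epochs drives each neuron towards the centroid of the inputs it wins, which is the clustering behaviour that the GHSOM hierarchy builds on.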
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Spark-GHSOM</head><p>Spark-GHSOM <ref type="bibr" target="#b0">[1]</ref> was introduced to overcome two limitations of classical GHSOMs. Indeed, a GHSOM i) requires multiple iterations over the input dataset, making it intractable on large datasets; and ii) is designed to handle datasets with numeric attributes only, which is an important limitation, as most modern real-world datasets are characterized by mixed (numerical and categorical) attributes. Therefore, Spark-GHSOM exploits the Spark platform to process massive datasets in a distributed fashion. Furthermore, it exploits the distance hierarchy <ref type="bibr" target="#b4">[5]</ref> to modify the optimization function of GHSOM so that it can (also) coherently handle mixed-attribute datasets. Spark-GHSOM showed high accuracy, scalability, and descriptive power on different datasets.</p><p>The first step in the GHSOM algorithm is to compute the inherent dissimilarity in input data with different types of attributes. Classical GHSOMs exploit the mean quantization error; however, this error is suitable for numerical attributes only, and there is no standard definition of mean for categorical attributes. Spark-GHSOM therefore replaces the mean quantization error with the variance to assess the quality of the map and of the neurons. For categorical attributes, unlikability is a good measure to estimate how often the values differ from one another <ref type="bibr" target="#b5">[6]</ref>. Formally, let 𝔻 be the dataset under analysis; the unlikability of a categorical attribute A of 𝔻 is defined as:</p><formula xml:id="formula_0">𝕌(𝐴) = ∑ 𝑖∈𝑑𝑜𝑚𝑎𝑖𝑛(𝐴) 𝑝 𝑖 (1 − 𝑝 𝑖 )<label>(1)</label></formula><p>where</p><formula xml:id="formula_1">𝑝 𝑖 = 𝑓 𝑟𝑒𝑞𝑢𝑒𝑛𝑐𝑦(𝐴 𝑖 , 𝔻) / |𝔻|</formula><p>, 𝐴 𝑖 is the i-th value of the attribute A, and 𝑓 𝑟𝑒𝑞𝑢𝑒𝑛𝑐𝑦(𝐴 𝑖 , 𝔻) is the absolute frequency of the value 𝐴 𝑖 for the attribute A in 𝔻.
Therefore, Spark-GHSOM computes the overall variance of the dataset as follows:</p><formula xml:id="formula_2">𝜎 = ∑ 𝐴∈𝑓 𝑒𝑎𝑡𝑢𝑟𝑒𝑠𝑒𝑡 (1 𝑛𝑢𝑚(𝐴) 𝜎 (𝐴) + 1 𝑐𝑎𝑡(𝐴) 𝕌(𝐴)/2)<label>(2)</label></formula><p>where 1 𝑛𝑢𝑚(𝐴) (resp. 1 𝑐𝑎𝑡(𝐴) ) is 1 when the attribute A is numerical (resp. categorical), and 0 otherwise, while 𝜎 (𝐴) represents the classical variance of the attribute A when it is numerical. The distance hierarchy <ref type="bibr" target="#b4">[5]</ref> is used to compute the similarities among categorical values. To compute the distance among categorical values, a distance hierarchy for each categorical attribute must be provided in advance: values that are similar according to the concept hierarchy are placed under a common parent, which represents an abstract concept. The GHSOM training process takes mixed attributes into account and consists in finding the winner (closest) neuron of the SOM with respect to each input instance, according to the distance hierarchy.</p><p>In the first step, the winner neuron is identified for the input instance according to the distance hierarchy, and its weight vector is modified by a certain amount to better match the instance vector. In the hierarchy tree of concepts, where the leaves represent the actual values of the instances and the non-leaf nodes represent the neurons, this process pulls the neuron point towards its leaf in order to "specialize" what the neuron describes.</p><p>In the second step, the winner neuron and its surrounding neighbour neurons in the SOM are adapted by moving them towards the input instance. This training process requires a defined number of training epochs over the input dataset. The training is governed by the Mean Quantization Error (MQE) of a neuron, that is, the total deviation of the neuron from its mapped input instances. The MQE of a SOM layer is computed as the average MQE of all the neurons representing instances.
A higher value of the MQE means that the layer does not represent the input data well and requires more neurons to better represent the input domain. Moreover, when a single neuron still does not represent its surrounding instances well, the neuron is hierarchically expanded into a new SOM (see Figure <ref type="figure" target="#fig_0">1</ref>).</p></div>
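The unlikability of Eq. (1) and the mixed-attribute variance of Eq. (2) can be sketched in plain Python as follows. The column representation `{name: (kind, values)}` is a hypothetical convenience for the sketch, not Spark-GHSOM's actual data structure, and the distributed Spark computation is omitted:

```python
import statistics
from collections import Counter

def unalikeability(values):
    """Unlikability U(A) = sum over the domain of A of p_i * (1 - p_i),
    where p_i is the relative frequency of the i-th value (Eq. 1)."""
    n = len(values)
    return sum((c / n) * (1 - c / n) for c in Counter(values).values())

def overall_variance(columns):
    """Overall dataset variance (Eq. 2): classical variance for numeric
    attributes, unlikability divided by 2 for categorical ones.
    `columns` maps an attribute name to a ("num"|"cat", values) pair."""
    total = 0.0
    for kind, values in columns.values():
        if kind == "num":
            total += statistics.pvariance(values)
        else:
            total += unalikeability(values) / 2
    return total
```

For a uniformly distributed binary attribute, `unalikeability(["a", "a", "b", "b"])` yields 0.5, the maximum for two values, reflecting that the values differ as often as possible.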
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Spark-GHSOM for Anomaly Detection</head><p>The training process of Spark-GHSOM follows the classical GHSOM training process, except for the use of a different function for the calculation of the distance between the input vector and the neurons of the feature map, since the Euclidean distance cannot be computed on categorical attributes. For this reason, the hierarchical distance was chosen <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b4">5]</ref>.</p><p>The obtained hierarchy can thus be used to solve an anomaly detection task. In particular, when a new input vector is supplied to the hierarchy, the algorithm looks for the SOM that best approximates the input data (that is, the SOM with the shortest distance to the input vector). Once found, it is used to carry out the prediction for the new input data, based on the distance between the input vector and the closest neuron (the winner neuron) in the map.</p><p>More formally, let 𝑥 𝑖 be the new example to be considered, and let 𝑒(𝑥 𝑖 ) = arg min 𝑒 𝑑𝑖𝑠𝑡(𝑥 𝑖 , 𝑒) be the closest neuron to 𝑥 𝑖 according to the distance measure described before; the example is considered an anomaly if the following inequality holds:</p><formula xml:id="formula_3">𝑑𝑖𝑠𝑡(𝑥 𝑖 , 𝑒(𝑥 𝑖 )) &gt; (𝑑 𝑎𝑣𝑔 + 𝑡𝑓 * 𝜎 )<label>(3)</label></formula><p>In the formula, 𝑑 𝑎𝑣𝑔 is the average distance between the training instances and the neurons of the model after training, 𝜎 is the standard deviation of such distances, and 𝑡𝑓 is a user-defined threshold factor.</p><p>As data distributions tend to change over time, it may be necessary to update the knowledge of the anomaly detector using more recent data. For this reason, Spark-GHSOM for anomaly detection provides the possibility to update the weight vectors of the neurons while keeping the generated hierarchy unchanged.
This process can be particularly useful when end users do not have enough time or data to train a new anomaly detector from scratch. Given a pre-trained model, it is possible to provide the model with a micro-batch of data in order to update the knowledge extracted by the model and adapt it to the user's needs. This aspect is particularly useful in our case, where the data generated by the sensors can be relatively scarce.</p><p>The anomaly detector can produce different types of output, depending on the desired level of detail. The simplest approach provides feedback for the current data in the form of a Boolean response. This kind of output can support raising an alert when the response is equal to "anomaly".</p><p>This approach has the advantage of being simple to handle, transmitting the prediction as a binary variable (e.g., anomaly/normal, 0/1, true/false). Its drawback is that it makes it difficult for the end user to interpret the raised alert/anomaly. Therefore, a more informative approach can be obtained by combining the previous one with a ranking of the variables (feature ranking) according to their importance, indicating each variable's contribution to the anomaly.</p><p>The feature ranking orders the entire set of features composing the data collection with respect to their feature importance. Feature importance is a numerical value between 0 and 1 which expresses how anomalous the value of the feature is with respect to the data collection, such that the sum of all the feature importance values in the ranking is equal to 1. The importance score is derived from a distance function between the current data under analysis and the winner neuron. Specifically, the ranking is proportional to the contribution of each single feature to the Euclidean distance between 𝑥 𝑖 and 𝑒(𝑥 𝑖 ).
More formally, the ranking score of the feature 𝑙 for the instance 𝑥 𝑖 , 𝑟 𝑓 (𝑥 𝑖 )[𝑙], is computed as follows:</p><formula xml:id="formula_4">𝑟 𝑓 (𝑥 𝑖 )[𝑙] = (𝑥 𝑖 [𝑙] − 𝑒(𝑥 𝑖 )[𝑙]) 2 / ∑ 𝑙 ′ (𝑥 𝑖 [𝑙 ′ ] − 𝑒(𝑥 𝑖 )[𝑙 ′ ]) 2<label>(4)</label></formula><p>where 𝑙 represents the feature index. This approach helps to identify the feature(s) that most contributed to the anomaly and, therefore, the "reason" for the anomaly.</p></div>
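The decision rule of Eq. (3) and the feature ranking of Eq. (4) admit a direct sketch for the all-numeric case. Plain Euclidean distance is assumed here; the hierarchical distance used for categorical attributes is omitted:

```python
import math

def is_anomaly(x, winner, d_avg, sigma, tf):
    """Eq. (3): flag x as anomalous when its distance to the winner
    neuron exceeds d_avg + tf * sigma (Euclidean, numeric-only case)."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(x, winner)))
    return dist > d_avg + tf * sigma

def feature_ranking(x, winner):
    """Eq. (4): each feature's share of the squared Euclidean distance
    between x and its winner neuron; the scores sum to 1."""
    sq = [(a - b) ** 2 for a, b in zip(x, winner)]
    total = sum(sq) or 1.0  # guard the degenerate zero-distance case
    return [s / total for s in sq]
```

The features with the largest shares in `feature_ranking` are the ones reported as the "reason" for a raised anomaly.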
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experiments</head><p>The experiments were conducted for the city of Oslo (Norway) by considering two real application domains: air pollution and public transport traffic.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Air pollution analysis</head><p>The proposed method was tested using data coming from air quality monitoring sensors to identify pollutant concentrations deemed abnormal. The considered locations within the city of Oslo are shown in Figure <ref type="figure" target="#fig_1">2</ref>.</p><p>At each location, different pollutants are monitored by the sensors:</p><p>• Hjortnes: NO, NO2, NOx, PM10 and PM2.5</p><p>• Loallmenningen: NO, NO2, NOx, PM1, PM10 and PM2.5</p><p>• Spikersuppa: PM10 and PM2.5</p><p>The information on the concentration of pollutants comes with both a timestamp and the geo-coordinates (latitude and longitude), so that the time series can be reconstructed. The data, which are publicly available, can be downloaded through a REST API<ref type="foot" target="#foot_0">1</ref>. The period considered for training was from January 2021 to September 2021, with an hourly sampling rate, totalling 18,286 data points from the chosen locations. The period considered for testing was October 2021, totalling 720 acquisitions from the chosen locations. The best value for the parameter 𝑡𝑓 was selected by internal cross-validation on the training instances in the interval [0, 15].</p><p>Figure <ref type="figure" target="#fig_2">3</ref> shows the hourly concentrations of the NO, NOx, and NO2 pollutants during the identified test period, i.e., October 2021, from the station of Hjortnes.
These pollutants were chosen because they appear in the top-3 of the feature ranking for the time instants considered anomalous by the algorithm, indicated with black arrows in the graph.</p><p>It is worth noting that we did not find an abnormal situation on October 21 at 10 a.m., indicated with a green arrow in Figure <ref type="figure" target="#fig_3">4</ref>, when very high concentrations of PM1 were recorded, even though at this time point the pollutant PM1 is correctly present in the first position of the feature ranking.</p><p>This is because several pollutants are observed together, and a sudden increase in the concentration of one of them is sometimes not sufficient to classify the time instant as a potentially abnormal situation.</p><p>Figure <ref type="figure" target="#fig_4">5</ref> shows the hourly concentration of the PM1 pollutant during the test period, from Loallmenningen. For this place, PM1 is the most decisive pollutant for the detection of the abnormal situations that occurred during October 2021.</p><p>As in the previous graphs, the black arrows indicate the time instants in which we detected abnormal concentrations of the considered pollutants. As expected, the algorithm was able to correctly detect high concentrations of the PM1 pollutant.</p><p>On October 26 at 9 p.m., as indicated by the green arrow, the concentrations of PM1 were very similar to those of October 27 at 4 p.m.; however, only in the latter case was an anomalous situation detected by the algorithm. A more detailed graph is shown in Figure <ref type="figure" target="#fig_5">6</ref>.</p><p>The reason is a sudden increase in the concentrations of the remaining pollutants, which occurred on October 27 at 4 p.m. This situation, as shown in Figure <ref type="figure" target="#fig_6">7</ref>, allowed the algorithm to identify an anomalous situation at this timestamp.
Figure <ref type="figure" target="#fig_7">8</ref> shows the hourly concentrations of the PM10 and PM2.5 pollutants during the test period, from the area of Spikersuppa. The pollutants shown in the graph are the only ones this station can monitor. As expected, the algorithm did not identify any situation deemed abnormal for this place, as the concentrations in October are quite regular.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Public transport traffic</head><p>This dataset consists of one week of data regarding Oslo's public transport. The instances represent GPS-tracked buses with latitude and longitude. Each instance is timestamped according to the ISO 8601 standard with a resolution of seconds. The data were collected through the Service Interface for Real-time Information (SIRI)<ref type="foot" target="#foot_1">2</ref>. For this dataset, the processing pipeline illustrated in Figure <ref type="figure" target="#fig_8">9</ref> was executed. Starting from the week of Oslo transport traffic data, we first performed data cleaning in order to fix some encoding issues. We then aggregated the data into 5-minute intervals and into spatial areas obtained through a preliminary clustering step. This step was crucial since the provided data refer to movable points on the map, which would otherwise make aggregation operations unfeasible. Clustering on the spatial locations was performed with the K-Means algorithm <ref type="bibr" target="#b6">[7]</ref>. The variables of the considered data were extended by appending the cluster identifier (cluster ID) and the latitude and longitude of the cluster centroid. Since the K-Means algorithm requires the number of clusters as input, we performed the well-known silhouette analysis <ref type="bibr" target="#b7">[8]</ref> to identify the number of areas for monitoring the traffic. According to the silhouette analysis, we considered 100 different regions for traffic monitoring (see Figure <ref type="figure" target="#fig_10">10</ref>).</p><p>The instances are therefore grouped at two levels: first by time, then by the previously identified cluster ID. Various new features are computed as part of the aggregation (e.g., the average "delay" of the buses in seconds) for each identified monitoring area.</p></div>
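The clustering step of this pipeline can be sketched with scikit-learn, assuming it is available. The coordinates below are synthetic stand-ins for the real bus GPS positions, and only a handful of candidate values of k are scanned for brevity, rather than the range that led to the 100 regions used in the paper:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(0)
# Synthetic stand-in for bus positions (lat, lon): three dense areas.
coords = np.vstack([rng.normal(c, 0.01, size=(50, 2))
                    for c in ((59.91, 10.75), (59.93, 10.72), (59.95, 10.78))])

# Silhouette analysis: pick the number of monitoring areas k that
# maximizes the mean silhouette coefficient.
best_k, best_score = None, -1.0
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(coords)
    score = silhouette_score(coords, labels)
    if score > best_score:
        best_k, best_score = k, score

# Final clustering: extend each instance with its cluster ID and the
# centroid's latitude/longitude, as done in the aggregation step.
km = KMeans(n_clusters=best_k, n_init=10, random_state=0).fit(coords)
features = np.hstack([coords, km.labels_[:, None],
                      km.cluster_centers_[km.labels_]])
```

Each row of `features` then carries the original coordinates plus the cluster ID and centroid coordinates, ready for the per-area, per-interval aggregation.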
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Table 1</head><p>Oslo public transport traffic: quantitative results in terms of accuracy, precision, recall, and F1-score.</p><p>After 26 hours of training, the anomaly detector becomes more stable and capable of predicting most of the anomalies occurring during the anomalous time slot [20:40-21:40] in the evening.</p><p>After 28 hours of training, the anomaly detector becomes even more stable and capable of predicting most of the anomalies occurring during the anomalous time slot [05:40-06:40] in the early morning. In Table 1, we report the overall quantitative results, which confirm that the algorithm, after being given sufficient training data, achieves very high prediction scores, with particularly high precision.</p></div>
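The scores reported in Table 1 can be computed from binary predictions as follows. This is a standard computation of the four metrics, not the authors' evaluation code:

```python
def prf(y_true, y_pred):
    """Accuracy, precision, recall and F1 for binary anomaly labels
    (1 = anomaly, 0 = normal), as reported in Table 1."""
    tp = sum(t == p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    acc = (tp + tn) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return acc, prec, rec, f1
```

High precision here means that nearly every raised alert corresponds to a genuinely perturbed (anomalous) test window.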
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions</head><p>In this paper, we tackled the task of anomaly detection. For this purpose, we extended the algorithm Spark-GHSOM, originally designed for the clustering task, in order to address the task at hand. Furthermore, the main algorithm was made more explainable by providing the reasons for each detected anomaly in the form of an instance-based feature ranking. The results show the effectiveness of the proposed approach, both qualitatively and quantitatively, in real application scenarios. As future work, we aim to perform further and more robust experiments in order to better evaluate the predictive quality, the explainability, and the scalability of this new extended version of Spark-GHSOM. From an architectural viewpoint, we aim to provide anomaly detection as an additional service according to a model-based approach for Big Data Analytics-as-a-service <ref type="bibr" target="#b8">[9]</ref>.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: A Growing Hierarchical Self Organizing Map.</figDesc><graphic coords="4,214.30,84.19,166.67,152.95" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Location of the considered stations in the city of Oslo.</figDesc><graphic coords="6,193.47,84.19,208.35,118.87" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Concentrations per hour of NO, NOx and NO2 pollutants during October 2021, from Hjortnes station.</figDesc><graphic coords="6,89.29,242.34,416.70,132.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Concentrations per hour of PM10 and PM2.5 pollutants during October 2021, from Hjortnes station.</figDesc><graphic coords="7,89.29,84.19,416.70,130.06" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Concentrations per hour of PM1 pollutant during October 2021, from Loallmenningen station.</figDesc><graphic coords="7,89.29,258.38,416.69,130.09" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: A zoom-in with respect to the time interval for PM1 pollutant during October 2021, from Loallmenningen station.</figDesc><graphic coords="8,89.29,84.19,416.69,134.25" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Concentrations per hour of NO, NO2, PM10 and PM2.5 pollutants during October 2021, from Loallmenningen station.</figDesc><graphic coords="8,89.29,263.69,416.70,130.23" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Concentrations per hour of PM10 and PM2.5 pollutants during October 2021, from Spikersuppa station.</figDesc><graphic coords="8,89.29,439.19,416.68,121.36" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 9 :</head><label>9</label><figDesc>Figure 9: The data processing pipeline</figDesc><graphic coords="9,89.29,84.19,416.70,174.38" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head></head><label></label><figDesc>Multiple training and test sets were created as illustrated in Figure 11. The 𝑛-th evaluation step uses 𝑛 hours for training and the (𝑛 + 1)-th hour for testing. 10% of the available test windows are perturbed by randomly selecting 3 columns for each instance and randomly assigning a new value to each selected feature; these test windows are considered anomalous. The remaining 90% of the available test windows are used without perturbation and considered non-anomalous for the evaluation. The aim of this setting is to perform an evaluation based on landmark windows. The best value for the parameter 𝑡𝑓 was selected by internal cross-validation on the training instances in the interval [0, 15]. In Figure 12, hour-by-hour histograms are reported for the first day. Stacked green bars indicate the correct predictions, while the red ones indicate the wrong predictions. The red text in the</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 10 :</head><label>10</label><figDesc>Figure 10: The Oslo street map and the best locations for monitoring traffic according to the clustering step.</figDesc><graphic coords="10,89.29,84.19,416.69,192.54" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_11"><head>Figure 11 :</head><label>11</label><figDesc>Figure 11: Training and testing sets.</figDesc><graphic coords="10,214.30,322.03,166.67,151.05" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_12"><head>Figure 12 :</head><label>12</label><figDesc>Figure 12: Hour-by-hour histograms indicating True positives, True negatives, False positives and False negatives for the first day of data.</figDesc><graphic coords="11,89.29,84.19,416.70,151.68" type="bitmap" /></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">https://api.nilu.no/</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="2" xml:id="foot_1">https://api.entur.io/realtime/v1/rest/vm?datasetId=RUT</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgment</head><p>We acknowledge the project IMPETUS (Intelligent Management of Processes, Ethics and Technology for Urban Safety) that receives funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 883286. https://cordis.europa.eu/project/id/883286. Dr. Paolo Mignone acknowledges the support of Apulia Region through the REFIN project "Metodi per l'ottimizzazione delle reti di distribuzione di energia e per la pianificazione di interventi manutentivi ed evolutivi" (CUP H94I20000410008, Grant n. 7EDD092A).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Spark-ghsom: Growing hierarchical self-organizing map for large scale mixed attribute datasets</title>
		<author>
			<persName><forename type="first">A</forename><surname>Malondkar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Corizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kiringa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ceci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Japkowicz</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.ins.2018.12.007</idno>
	</analytic>
	<monogr>
		<title level="j">Information Sciences</title>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Network regression with predictive clustering trees</title>
		<author>
			<persName><forename type="first">D</forename><surname>Stojanova</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ceci</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Appice</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Džeroski</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10618-012-0278-6</idno>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="378" to="413" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">RCD: A recurring concept drift framework</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Gonçalves</surname><genName>Jr</genName></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Barros</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.patrec.2013.02.005</idno>
	</analytic>
	<monogr>
		<title level="j">Pattern Recognition Letters</title>
		<imprint>
			<biblScope unit="volume">34</biblScope>
			<biblScope unit="page" from="1018" to="1025" />
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The self-organizing map</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kohonen</surname></persName>
		</author>
		<idno type="DOI">10.1109/5.58325</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the IEEE</title>
		<imprint>
			<date type="published" when="1990">1990</date>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="1464" to="1480" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Generalizing self-organizing map for categorical data</title>
		<author>
			<persName><forename type="first">C.-C</forename><surname>Hsu</surname></persName>
		</author>
		<idno type="DOI">10.1109/TNN.2005.863415</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Neural Networks</title>
		<imprint>
			<biblScope unit="volume">17</biblScope>
			<biblScope unit="page" from="294" to="304" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Variability for categorical variables</title>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">D</forename><surname>Kader</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Perry</surname></persName>
		</author>
		<idno type="DOI">10.1080/10691898.2007.11889465</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Statistics Education</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<date type="published" when="2007">2007</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Least squares quantization in PCM</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lloyd</surname></persName>
		</author>
		<idno type="DOI">10.1109/TIT.1982.1056489</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Information Theory</title>
		<imprint>
			<biblScope unit="volume">28</biblScope>
			<biblScope unit="page" from="129" to="137" />
			<date type="published" when="1982">1982</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Silhouettes: A graphical aid to the interpretation and validation of cluster analysis</title>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">J</forename><surname>Rousseeuw</surname></persName>
		</author>
		<idno type="DOI">10.1016/0377-0427(87)90125-7</idno>
	</analytic>
	<monogr>
		<title level="j">Journal of Computational and Applied Mathematics</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="53" to="65" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">An OWL ontology for supporting semantic services in big data platforms</title>
		<author>
			<persName><forename type="first">D</forename><surname>Redavid</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Corizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Malerba</surname></persName>
		</author>
		<idno type="DOI">10.1109/BigDataCongress.2018.00039</idno>
	</analytic>
	<monogr>
		<title level="m">2018 IEEE International Congress on Big Data (BigData Congress)</title>
				<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="228" to="231" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
