<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">On the Environmental Impact of the Algorithm LatentOut for Unsupervised Anomaly Detection</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Fabrizio</forename><surname>Angiulli</surname></persName>
							<email>f.angiulli@dimes.unical.it</email>
							<affiliation key="aff0">
								<orgName type="department">DIMES Dept</orgName>
								<orgName type="institution">University of Calabria</orgName>
								<address>
									<postCode>87036</postCode>
									<settlement>Rende (CS)</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fabio</forename><surname>Fassetti</surname></persName>
							<email>f.fassetti@dimes.unical.it</email>
							<affiliation key="aff0">
								<orgName type="department">DIMES Dept</orgName>
								<orgName type="institution">University of Calabria</orgName>
								<address>
									<postCode>87036</postCode>
									<settlement>Rende (CS)</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Luca</forename><surname>Ferragina</surname></persName>
							<email>luca.ferragina@unical.it</email>
							<affiliation key="aff0">
								<orgName type="department">DIMES Dept</orgName>
								<orgName type="institution">University of Calabria</orgName>
								<address>
									<postCode>87036</postCode>
									<settlement>Rende (CS)</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">On the Environmental Impact of the Algorithm LatentOut for Unsupervised Anomaly Detection</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DFF661F5520E3F9CFCB6A19C9375E584</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:38+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
<term>Anomaly Detection</term>
					<term>Variational Autoencoder</term>
					<term>Carbon Footprint</term>
					<term>ORCID: 0000-0002-9860-7569 (F. Angiulli), 0000-0002-8416-906X (F. Fassetti), 0000-0003-3184-4639 (L. Ferragina)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Because of their astonishing performance, Deep Neural Network-based approaches have become pervasive in many human activities. However, they often require a long, energy-intensive training phase, which has a huge environmental impact.</p><p>In recent years, growing concerns over climate change and sustainability have placed increasing emphasis on environmental themes across many sectors, leading to numerous initiatives, policies, and discussions aimed at addressing ecological challenges and promoting a more sustainable future. Deep Learning cannot be exempted from such initiatives, and the literature is starting to pay attention to these issues. This paper aims to contribute to this field, focusing in particular on the Anomaly Detection task, whose environmental impact, due to its widespread employment, deserves to be addressed.</p><p>Specifically, we consider LatentOut, a recently introduced Deep Learning-based framework for unsupervised Anomaly Detection that exploits both the latent space and the baseline anomaly score (i.e., the reconstruction error) of a Variational Autoencoder (VAE) to provide a refined anomaly score, obtained by density estimation in the augmented latent-space/baseline-score feature space.</p><p>We analyze the environmental impact of LatentOut in terms of carbon footprint by measuring its (estimated) CO₂ emissions through the Python library CodeCarbon. We observe that, for equal CO₂ emissions, LatentOut achieves much better performance than the standard VAE.
Moreover, we compare LatentOut with other Neural Network-based Anomaly Detection methods and show that it obtains the best balance between high accuracy and low carbon footprint.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Anomalies can be defined as examples that deviate from the majority of the data so markedly as to raise the suspicion that they were generated by a different mechanism. Anomaly Detection represents a fundamental task in many human activities, including Healthcare, Cyber-security, Industrial Monitoring, Fraud Detection, and many others.</p><p>It is possible to identify three different settings for Anomaly Detection <ref type="bibr" target="#b0">[1]</ref>. In the Supervised setting, a dataset whose items are labeled as normal or abnormal is available to build a classifier; typically the dataset is highly unbalanced and the anomalies form a rare class. The Semi-supervised setting, also called one-class, is characterized by the availability of examples from the normal class only, which are used to train the detector. In the Unsupervised setting, the goal is to assign an anomaly score to each object of the input dataset in order to detect the anomalies in it.</p><p>Classical data mining and machine learning algorithms for detecting outliers include statistical-based <ref type="bibr" target="#b1">[2]</ref>, distance-based <ref type="bibr" target="#b2">[3,</ref><ref type="bibr" target="#b3">4,</ref><ref type="bibr" target="#b4">5,</ref><ref type="bibr" target="#b5">6]</ref>, density-based <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8]</ref>, reverse nearest neighbor-based <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>, SVM-based <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref>, and many others <ref type="bibr" target="#b0">[1]</ref>.</p><p>1st Workshop on Green-Aware Artificial Intelligence, 23rd International Conference of the Italian Association for Artificial Intelligence (AIxIA 2024), November 25-28, 2024, Bolzano, Italy.</p><p>Recently,
the approaches that have achieved the most success have been those based on deep learning <ref type="bibr" target="#b13">[14]</ref>, which can be divided into three main families: reconstruction error-based methods employing Autoencoders (AE), models based on Generative Adversarial Networks (GAN), and SVM-like neural architectures.</p><p>The application of Autoencoders (AE) and Variational Autoencoders (VAE) <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b13">14]</ref> to Anomaly Detection relies on the concept of reconstruction error. More in detail, (Variational) Autoencoders are trained to map data into a low-dimensional latent space and then back into the original space, producing as output a reconstruction of the input as similar as possible to it. Since the majority of the data used for training belongs to the normal class, these networks are assumed to reconstruct the inliers better than the outliers and, thus, the reconstruction error can be adopted as an anomaly score.</p><p>GAN-based models <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref> basically consist of the combined, adversarial training of two sub-architectures, the generator and the discriminator. Specifically, the generator network produces artificial anomalies that are as realistic as possible, while the discriminator assigns an anomaly score to each item.</p><p>SVM-like methods <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22,</ref><ref type="bibr" target="#b22">23]</ref> leverage the idea of enclosing normal data in a hypersphere by combining a One-Class SVM-like loss function with a deep neural architecture.
A slightly different approach that can be included in this family is introduced in <ref type="bibr" target="#b23">[24]</ref>, where the architecture presents an additional final layer composed of a single neuron that produces an anomaly score which, for anomalies, is pushed as far as possible from a reference value obtained by averaging the anomaly scores of randomly sampled normal items.</p><p>Moreover, in <ref type="bibr" target="#b24">[25]</ref> Deep Isolation Forest (DIF) has been introduced, a novel methodology that uses randomly initialized neural networks to map the original data into an ensemble of random representations, to which random axis-parallel cuts are subsequently applied to partition the data.</p><p>Nevertheless, the high accuracy and training speed of Deep Learning models come at the cost of high power and energy consumption. This is leading researchers to become aware of the environmental impact of deep neural architectures, to trade off accuracy against energy consumption, and to characterize DNN models in terms of performance, power, and energy in order to guide their architectural design <ref type="bibr" target="#b25">[26,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b28">29]</ref>.</p><p>This paper aims to provide a contribution in this direction, in particular to the field of Anomaly Detection, by analyzing the behaviour of recent methods from the point of view of their detection performance as well as of their carbon footprint.
Specifically, we focus on the LatentOut algorithm <ref type="bibr" target="#b29">[30,</ref><ref type="bibr" target="#b30">31,</ref><ref type="bibr" target="#b31">32,</ref><ref type="bibr" target="#b32">33]</ref>, an anomaly detection framework that can be applied on top of any deep neural architecture, used as a baseline, to obtain a refined score; we compare it both with the baseline architecture on which it is applied and with deep learning-based competitors from the other families.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">The LatentOut algorithm for Unsupervised Anomaly Detection</head><p>Thanks to the quite good performance they obtain, as well as to their versatility, approaches based on (Variational) Autoencoders have become the most widespread Anomaly Detection methods relying on Deep Neural Networks.</p><p>Their main issue is that they often generalize so well that they accurately reconstruct anomalies too <ref type="bibr" target="#b29">[30]</ref>, thus weakening the ability of the reconstruction error to detect them.</p><p>LatentOut, introduced in <ref type="bibr" target="#b30">[31]</ref>, is a methodology that exploits both the reconstruction error and the latent space distribution of the Variational Autoencoder in order to obtain a refined anomaly score. Specifically, the first variant of the LatentOut algorithm (Figure <ref type="figure" target="#fig_0">1</ref>) considers the enlarged feature space F = L × E, where L represents the latent space and E is the reconstruction error space (usually E ⊆ ℝ), and performs a k-NN density estimation in the space F.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> shows the complete workflow of LatentOut. Each point x ∈ X of the dataset is mapped into the latent space L of the VAE (blue points represent inliers, red ones anomalies) by means of the encoder φ_W, and then reconstructed back into the original space as x̂ ∈ X by means of the decoder ψ_W. Then, the reconstruction error E(x) = ‖x − x̂‖₂² is computed, the feature space F = L × E is built, and the k-NN density estimation is performed in it to compute the LatentOut anomaly score. The motivation behind this procedure is the observation that anomalies tend to lie in the sparsest regions of the augmented feature space F.
This happens because, even when their reconstruction error is not exceptionally large, it is still significantly larger than that of the most similar normal items.</p><p>In <ref type="bibr" target="#b31">[32]</ref> LatentOut has been extended so that it can potentially be applied to any neural architecture with three fundamental properties:</p><p>• it outputs an anomaly score,</p><p>• it has a latent space L,</p><p>• it performs a mapping from the original data space X to L through an encoder-shaped module.</p><p>In particular, the neural models on which LatentOut has actually been tested are AE, VAE, GANomaly, Fast-AnoGAN, SO-GAAL, and MO-GAAL.</p><p>Moreover, in <ref type="bibr" target="#b32">[33]</ref> it has been shown that the separation properties of the enlarged space F allow any generic anomaly score (not only the k-NN one) to perform better when applied on it than on the input data space X.</p></div>
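As an illustration of the first variant described above, the following minimal sketch scores points by building the augmented space F = L × E from a trained (Variational) Autoencoder and running a k-NN density estimate in it. The `encode`/`decode` callables and the function name `latent_out_score` are our own placeholders, not the authors' implementation; the k-NN distance is used here as a simple surrogate for the density estimator.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def latent_out_score(X, encode, decode, k=50):
    """Sketch of LatentOut-style scoring: k-NN distance in F = L x E.

    encode: maps data X to latent representations Z (hypothetical callable)
    decode: maps Z back to the original space (hypothetical callable)
    """
    Z = encode(X)                                   # latent coordinates (space L)
    X_hat = decode(Z)                               # reconstructions
    E = np.sum((X - X_hat) ** 2, axis=1, keepdims=True)  # reconstruction error E(x)
    F = np.hstack([Z, E])                           # augmented feature space F = L x E
    # k+1 neighbors because each point is its own nearest neighbor (distance 0)
    nn = NearestNeighbors(n_neighbors=k + 1).fit(F)
    dists, _ = nn.kneighbors(F)
    return dists[:, -1]                             # k-NN distance: larger = sparser = more anomalous
```

With a toy linear "autoencoder" (keep the first two coordinates, pad the third with zero), a point far from the normal cloud receives the largest score, since it is isolated both in the latent coordinates and in the reconstruction error.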
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Experimental results</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Experimental setup</head><p>In our experiments we consider the tabular datasets cardio, letter, lympho, mammography, pendigits, pima, satellite, satimage-2, speech, and thyroid from the ODDS repository <ref type="bibr" target="#b33">[34]</ref>, as well as the image datasets MNIST <ref type="bibr" target="#b34">[35]</ref>, Fashion-MNIST <ref type="bibr" target="#b35">[36]</ref>, and CIFAR-10 <ref type="bibr" target="#b36">[37]</ref>.</p><p>The last three datasets, differently from the ones in the ODDS repository, are multi-class; thus, to make them suitable for the anomaly detection task, we adopt a one-vs-all strategy: we consider one class as normal and randomly sample s items from each of the other classes. Unless otherwise stated, we set s = 10. Specifically, we select the class "0" as normal for the MNIST dataset, the class "Sandal" for Fashion-MNIST, and the class "deer" for CIFAR-10.</p><p>As for the implementation details of the algorithm, we consider the original version of LatentOut with the VAE as baseline architecture, and the k-NN with k = 50 as estimator of the density of the feature space F. The latent space dimension ℓ of the VAE is set to ℓ = 2 for the tabular ODDS datasets and to ℓ = 32 for the image datasets. As for the encoder structure (the decoder is symmetric to it), we adopt the same strategy used in <ref type="bibr" target="#b32">[33]</ref>, i.e., between the input d-dimensional space and the ℓ-dimensional latent space we insert hidden layers of dimension ℓᵢ = ⌊d/4ⁱ⌋ for each i ∈ ℕ⁺ such that ⌊d/4ⁱ⌋ &gt; ℓ. The CO₂ emissions are estimated by means of the Python library CodeCarbon <ref type="bibr" target="#b37">[38]</ref>, which bases its tracking on the power consumption and the geographic location where the code is executed.</p></div>
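The layer-sizing rule ℓᵢ = ⌊d/4ⁱ⌋ can be made concrete with a small helper; `hidden_dims` is our name for illustration, not part of the paper's code.

```python
def hidden_dims(d, latent_dim):
    """Encoder hidden layer sizes l_i = floor(d / 4**i) for i = 1, 2, ...,
    kept only while l_i is strictly larger than the latent dimension."""
    dims, i = [], 1
    while d // 4**i > latent_dim:
        dims.append(d // 4**i)
        i += 1
    return dims

# e.g. a 28x28 image dataset (d = 784) with latent dimension 32
print(hidden_dims(784, 32))  # → [196, 49]
```

So a MNIST-style encoder would go 784 → 196 → 49 → 32, and the decoder mirrors it; for a low-dimensional tabular dataset such as satellite (d = 36, ℓ = 2) a single hidden layer of size 9 results.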
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Evolution of performance and emissions of LatentOut and VAE during training</head><p>The energy consumption of any Deep Learning model is related to the training phase and, in particular, to the number of training epochs.</p><p>Therefore, it is of crucial importance to understand the behavior of these algorithms as the training proceeds, in order to optimize the trade-off between maximizing performance and minimizing energy consumption.</p><p>The quantity of CO₂ produced by LatentOut, which we denote by ℰ_LatentOut, fundamentally consists of two terms:</p><p>• the emissions ℰ_VAE needed for training the architecture and computing the reconstruction error, which are shared with the Variational Autoencoder,</p><p>• the emissions ℰ_kNN needed for building the feature space F and running the k-NN algorithm in it.</p><p>Since the two operations are carried out in sequence and independently of each other, we have that</p><formula xml:id="formula_0">ℰ_LatentOut = ℰ_VAE + ℰ_kNN</formula><p>which means that, for an equal number of training epochs, the carbon footprint of LatentOut is always greater than that of the Variational Autoencoder. Thus, for a fair comparison, we train the Variational Autoencoder for 100 epochs, while stopping the training earlier when evaluating the LatentOut score. In Figures 2, 3, and 4 we show the performance of both LatentOut (in orange) and the standard Variational Autoencoder (in blue) in terms of Area Under the ROC Curve (AUC) as the training proceeds. Observe that the horizontal axis reports the CO₂ emissions (in kg), which means that, for the reasons stated above, each AUC value of LatentOut is obtained with fewer epochs than the corresponding value of the VAE.</p><p>As we can see, in almost every plot the curve of LatentOut lies above the curve of the VAE.
Moreover, the trend of LatentOut is much more regular than that of the VAE (see in particular the plots of the datasets cardio, mammography, satellite, satimage-2, MNIST, and CIFAR-10). This implies that if we fix a threshold on the amount of CO₂ we are willing to emit, the score of LatentOut always outperforms the standard score of the VAE. In other words, LatentOut exploits the emissions it produces better than the standard architecture on which it is applied. This happens because, as the training proceeds, the reconstruction capabilities of the VAE improve so much that at some point it becomes able to reconstruct the outliers as well, thus lowering the anomaly detection performance of the model. On the other hand, LatentOut benefits from the latent space organization, which produces a progressively better separation between normal examples and anomalies in the feature space F.</p></div>
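The additive decomposition of the footprint, and the "fixed CO₂ budget" reading of the curves above, can be sketched in a few lines. This is an illustrative sketch with hypothetical function names; in practice the per-phase emissions would be measured with CodeCarbon, and the (CO₂, AUC) checkpoints would come from tracking the training run.

```python
def latentout_emissions(e_vae_kg, e_knn_kg):
    """Total footprint: the VAE training phase and the k-NN phase run in
    sequence and independently, so E_LatentOut = E_VAE + E_kNN."""
    return e_vae_kg + e_knn_kg

def best_auc_under_budget(checkpoints, budget_kg):
    """Given (cumulative_co2_kg, auc) checkpoints recorded during training,
    return the best AUC reachable without exceeding the CO2 budget."""
    feasible = [auc for co2, auc in checkpoints if co2 <= budget_kg]
    return max(feasible) if feasible else None
```

For instance, with checkpoints `[(1e-6, 0.80), (2e-6, 0.90), (5e-6, 0.95)]` and a budget of 3e-6 kg, the best affordable AUC is 0.90; the figures compare exactly this kind of budget-constrained value between LatentOut and the VAE.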
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Comparison with competitors</head><p>We consider as competitors some of the neural network-based algorithms implemented in the Python library PyOD <ref type="bibr" target="#b38">[39]</ref>, namely Deep-SVDD <ref type="bibr" target="#b20">[21]</ref> from the SVM-like family, AnoGAN <ref type="bibr" target="#b16">[17]</ref> and ALAD <ref type="bibr" target="#b19">[20]</ref> from the GAN family, and DIF <ref type="bibr" target="#b24">[25]</ref>. For the implementation details (number of layers and neurons, training epochs, learning rate, additional hyperparameters), we rely on the default values fixed in PyOD. As for LatentOut, we consider again the setup described in Section 3.1 and we perform a few-epoch training, motivated by the good convergence properties observed in the previous section. Specifically, the VAE is trained for 15 epochs.</p><p>As evaluation metrics we adopt the standard Area Under the ROC Curve (AUC) and the ratio</p><formula xml:id="formula_1">CO₂/AUC</formula><p>between the CO₂ emissions (in kg) produced for the training and the inference of a model and its AUC. This value is a measure combining both performance and energy consumption: it indicates how much CO₂ is needed (on average) to obtain a single point of AUC. Table <ref type="table" target="#tab_0">1</ref> shows the results in terms of AUC. As we can see, LatentOut is the best method on half of the datasets, achieving performance close to the best also on the other half. In particular, confirming the observation made in <ref type="bibr" target="#b30">[31]</ref>, LatentOut is especially effective on higher dimensional, structured data (for example speech and the image datasets). Table <ref type="table" target="#tab_1">2</ref> shows the results of the experiment in terms of the ratio CO₂/AUC.
Here, LatentOut outperforms its competitors on all but one dataset, exhibiting the best trade-off between the performance obtained and the CO₂ emissions produced.</p></div>
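The CO₂/AUC trade-off metric is straightforward to compute and rank on; the following sketch uses hypothetical input numbers and our own function names, purely to fix the semantics of the ratio (lower is better).

```python
def co2_per_auc(co2_kg, auc):
    """kg of CO2 spent per unit of AUC obtained -- lower is better."""
    return co2_kg / auc

def best_tradeoff(results):
    """results: {method_name: (co2_kg, auc)} (hypothetical input format).
    Returns the method minimizing the CO2/AUC ratio."""
    return min(results, key=lambda m: co2_per_auc(*results[m]))
```

For example, a method emitting 4e-6 kg for an AUC of 0.93 dominates one emitting 6e-4 kg for an AUC of 0.45, since its ratio is roughly two orders of magnitude smaller; this is the comparison Table 2 performs across all datasets.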
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion</head><p>In this paper, we have focused on the algorithm LatentOut for unsupervised anomaly detection, in order to evaluate its performance and measure the environmental impact of its executions. When compared to the standard architecture on which it is applied, i.e. the Variational Autoencoder, LatentOut shows that a short, low-energy training can lead to conspicuously better results. Moreover, in comparison with other neural network-based anomaly detection approaches, it has shown superior performance both in terms of absolute AUC and, most importantly, in terms of the ratio between the CO₂ emitted and the AUC obtained.</p><p>As future developments, we intend to expand the discussion of the environmental impact of LatentOut with a deeper analysis of all its variants and an investigation focused on the hardware type (e.g., CPU vs. GPU), as well as to propose novel measures to better capture the trade-off between emissions and performance. Finally, as a more ambitious goal, we aim at introducing a mechanism enabling LatentOut to take the green-aware aspect into account at training time.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1:</head><label>1</label><figDesc>Figure 1: LatentOut receives the dataset as input and maps it into F. The transformed dataset is then processed by unsupervised anomaly detection methods which provide an anomaly score for each point.</figDesc><graphic coords="3,72.00,65.61,451.29,112.32" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2:</head><label>2</label><figDesc>Figure 2: Comparison between the performance of the Variational Autoencoder and LatentOut in terms of AUC during the training epochs. ODDS datasets, group 1.</figDesc><graphic coords="4,149.73,461.89,146.66,104.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 3:</head><label>3</label><figDesc>Figure 3: Comparison between the performance of the Variational Autoencoder and LatentOut in terms of AUC during the training epochs. ODDS datasets, group 2.</figDesc><graphic coords="5,149.73,171.36,146.66,104.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 4:</head><label>4</label><figDesc>Figure 4: Comparison between the performance of the Variational Autoencoder and LatentOut in terms of AUC during the training epochs. MNIST, Fashion-MNIST and CIFAR-10 datasets.</figDesc><graphic coords="5,75.15,321.38,146.66,104.76" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Comparison with competitors in terms of AUC.</figDesc><table><row><cell>Dataset (d)</cell><cell>LatentOut</cell><cell>Deep-SVDD</cell><cell>AnoGAN</cell><cell>ALAD</cell><cell>DIF</cell></row><row><cell>cardio (21)</cell><cell>0.9300</cell><cell>0.9509</cell><cell>0.4460</cell><cell>0.4885</cell><cell>0.9129</cell></row><row><cell>letter (32)</cell><cell>0.6206</cell><cell>0.5189</cell><cell>0.5118</cell><cell>0.5094</cell><cell>0.6557</cell></row><row><cell>lympho (18)</cell><cell>0.9495</cell><cell>0.9460</cell><cell>0.9847</cell><cell>0.6549</cell><cell>0.8650</cell></row><row><cell>mammography (6)</cell><cell>0.8326</cell><cell>0.8767</cell><cell>0.1366</cell><cell>0.5450</cell><cell>0.7415</cell></row><row><cell>pendigits (16)</cell><cell>0.9880</cell><cell>0.9748</cell><cell>0.9729</cell><cell>0.4785</cell><cell>0.9363</cell></row><row><cell>pima (8)</cell><cell>0.6598</cell><cell>0.6289</cell><cell>0.7571</cell><cell>0.5472</cell><cell>0.6071</cell></row><row><cell>satellite (36)</cell><cell>0.7911</cell><cell>0.6460</cell><cell>0.5432</cell><cell>0.4037</cell><cell>0.7574</cell></row><row><cell>satimage-2 (36)</cell><cell>0.9984</cell><cell>0.9682</cell><cell>0.0165</cell><cell>0.4292</cell><cell>0.9935</cell></row><row><cell>speech (400)</cell><cell>0.5504</cell><cell>0.4968</cell><cell>0.4658</cell><cell>0.4906</cell><cell>0.4633</cell></row><row><cell>thyroid (6)</cell><cell>0.9055</cell><cell>0.8743</cell><cell>0.8967</cell><cell>0.4837</cell><cell>0.9613</cell></row><row><cell>MNIST (28 × 28)</cell><cell>0.9863</cell><cell>0.9321</cell><cell>0.2176</cell><cell>0.3350</cell><cell>0.9572</cell></row><row><cell>Fashion-MNIST (28 × 28)</cell><cell>0.9444</cell><cell>0.9392</cell><cell>0.6634</cell><cell>0.6623</cell><cell>0.6269</cell></row><row><cell>CIFAR-10 (32 × 32 × 3)</cell><cell>0.7474</cell><cell>0.6624</cell><cell>0.5756</cell><cell>0.5363</cell><cell>0.6383</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>Comparison with competitors in terms of the ratio CO₂/AUC.</figDesc><table><row><cell>Dataset (d)</cell><cell>LatentOut</cell><cell>Deep-SVDD</cell><cell>AnoGAN</cell><cell>ALAD</cell><cell>DIF</cell></row><row><cell>cardio (21)</cell><cell>4.7158e-6</cell><cell>9.6679e-6</cell><cell>1.2619e-3</cell><cell>2.0648e-5</cell><cell>4.0021e-5</cell></row><row><cell>letter (32)</cell><cell>5.7428e-6</cell><cell>1.8790e-5</cell><cell>1.3014e-3</cell><cell>1.9605e-5</cell><cell>5.6887e-5</cell></row><row><cell>lympho (18)</cell><cell>2.6640e-6</cell><cell>2.9348e-6</cell><cell>5.2290e-5</cell><cell>1.3394e-5</cell><cell>9.8577e-6</cell></row><row><cell>mammography (6)</cell><cell>1.5830e-5</cell><cell>4.8771e-5</cell><cell>2.4759e-2</cell><cell>2.9251e-5</cell><cell>1.7729e-4</cell></row><row><cell>pendigits (16)</cell><cell>9.2478e-6</cell><cell>3.7444e-5</cell><cell>2.1541e-3</cell><cell>2.7159e-5</cell><cell>1.0738e-4</cell></row><row><cell>pima (8)</cell><cell>4.1708e-6</cell><cell>9.1278e-6</cell><cell>1.9493e-6</cell><cell>1.6284e-5</cell><cell>3.3011e-5</cell></row><row><cell>satellite (36)</cell><cell>1.1943e-5</cell><cell>4.0915e-5</cell><cell>4.7031e-3</cell><cell>3.1390e-5</cell><cell>1.2655e-4</cell></row><row><cell>satimage-2 (36)</cell><cell>9.1152e-6</cell><cell>2.4921e-5</cell><cell>1.4122e-1</cell><cell>2.9071e-5</cell><cell>8.5686e-5</cell></row><row><cell>speech (400)</cell><cell>1.9139e-5</cell><cell>5.9722e-5</cell><cell>4.3628e-3</cell><cell>5.4631e-5</cell><cell>1.7098e-4</cell></row><row><cell>thyroid (6)</cell><cell>7.5721e-6</cell><cell>1.9487e-5</cell><cell>1.2720e-3</cell><cell>2.2425e-5</cell><cell>5.6633e-5</cell></row><row><cell>MNIST (28 × 28)</cell><cell>2.1834e-5</cell><cell>3.7648e-5</cell><cell>1.7076e-2</cell><cell>8.5111e-5</cell><cell>1.3168e-4</cell></row><row><cell>Fashion-MNIST (28 × 28)</cell><cell>2.3119e-5</cell><cell>4.6431e-5</cell><cell>5.5211e-3</cell><cell>3.7217e-5</cell><cell>1.9408e-4</cell></row><row><cell>CIFAR-10 (32 × 32 × 3)</cell><cell>4.9952e-5</cell><cell>6.9862e-5</cell><cell>7.7896e-3</cell><cell>5.8859e-5</cell><cell>2.1652e-4</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We acknowledge the support of the PNRR project FAIR -Future AI Research (PE00000013), Spoke 9 -Green-aware AI, under the NRRP MUR program funded by the NextGenerationEU.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A unifying review of deep and shallow anomaly detection</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ruff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">R</forename><surname>Kauffmann</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Vandermeulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montavon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kloft</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">G</forename><surname>Dietterich</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Müller</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Proc. IEEE</title>
		<imprint>
			<biblScope unit="volume">109</biblScope>
			<biblScope unit="page" from="756" to="795" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">The identification of multiple outliers</title>
		<author>
			<persName><forename type="first">L</forename><surname>Davies</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Gather</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of the American Statistical Association</title>
		<imprint>
			<biblScope unit="volume">88</biblScope>
			<biblScope unit="page" from="782" to="792" />
			<date type="published" when="1993">1993</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Distance-based outlier: algorithms and applications</title>
		<author>
			<persName><forename type="first">E</forename><surname>Knorr</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Tucakov</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">VLDB Journal</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="237" to="253" />
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Outlier mining in large high-dimensional data sets</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pizzuti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Knowl. Data Eng</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="203" to="215" />
			<date type="published" when="2005">2005</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Distance-based detection and prediction of outliers</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Basta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Pizzuti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="145" to="160" />
			<date type="published" when="2006">2006</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">DOLPHIN: an efficient algorithm for mining distance-based outliers in very large datasets</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Knowl. Disc. Data (TKDD)</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
	<note>Article 4</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">LOF: identifying density-based local outliers</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Breunig</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Kriegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sander</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. Int. Conf. on Management of Data (SIGMOD)</title>
				<meeting>Int. Conf. on Management of Data (SIGMOD)</meeting>
		<imprint>
			<date type="published" when="2000">2000</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Mining top-n local outliers in large databases</title>
		<author>
			<persName><forename type="first">W</forename><surname>Jin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tung</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Han</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proc. ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD)</title>
				<meeting>ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD)</meeting>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">Outlier detection using k-nearest neighbour graph</title>
		<author>
			<persName><forename type="first">V</forename><surname>Hautamäki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kärkkäinen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Fränti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Pattern Recognition (ICPR)</title>
				<meeting><address><addrLine>Cambridge, UK</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2004">August 23-26, 2004</date>
			<biblScope unit="page" from="430" to="433" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Reverse nearest neighbors in unsupervised distance-based outlier detection</title>
		<author>
			<persName><forename type="first">M</forename><surname>Radovanović</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Nanopoulos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Ivanović</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="1369" to="1382" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">CFOF: A concentration free measure for anomaly detection</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Knowledge Discovery from Data (TKDD)</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">53</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Estimating the support of a high-dimensional distribution</title>
		<author>
			<persName><forename type="first">B</forename><surname>Schölkopf</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">C</forename><surname>Platt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shawe-Taylor</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Smola</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">C</forename><surname>Williamson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Neural Computation</title>
		<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Support vector data description</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">M J</forename><surname>Tax</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P W</forename><surname>Duin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Mach. Learn</title>
		<imprint>
			<date type="published" when="2004">2004</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">R</forename><surname>Chalapathy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chawla</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1901.03407</idno>
		<title level="m">Deep learning for anomaly detection: A survey</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Outlier detection using replicator neural networks</title>
		<author>
			<persName><forename type="first">S</forename><surname>Hawkins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Williams</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Baxter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Data Warehousing and Knowledge Discovery (DaWaK)</title>
				<imprint>
			<date type="published" when="2002">2002</date>
			<biblScope unit="page" from="170" to="180" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Variational autoencoder based anomaly detection using reconstruction probability</title>
		<author>
			<persName><forename type="first">J</forename><surname>An</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cho</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2015">2015</date>
		</imprint>
		<respStmt>
			<orgName>SNU Data Mining Center</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical Report 3</note>
</biblStruct>

<biblStruct xml:id="b16">
	<monogr>
		<title level="m" type="main">Unsupervised anomaly detection with generative adversarial networks to guide marker discovery</title>
		<author>
			<persName><forename type="first">T</forename><surname>Schlegl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Seeböck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Waldstein</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Schmidt-Erfurth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Langs</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1703.05921</idno>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Akcay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Atapour-Abarghouei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">P</forename><surname>Breckon</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1805.06725</idno>
		<title level="m">GANomaly: Semi-supervised anomaly detection via adversarial training</title>
				<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Generative adversarial active learning for unsupervised outlier detection</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>He</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Trans. Knowl. Data Eng</title>
		<imprint>
			<biblScope unit="volume">32</biblScope>
			<biblScope unit="page" from="1517" to="1528" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Adversarially learned anomaly detection</title>
		<author>
			<persName><forename type="first">H</forename><surname>Zenati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Romain</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-S</forename><surname>Foo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Lecouat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Chandrasekhar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2018 IEEE International Conference on Data Mining (ICDM)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="727" to="736" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Deep one-class classification</title>
		<author>
			<persName><forename type="first">L</forename><surname>Ruff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Görnitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Deecke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Siddiqui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Vandermeulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Binder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kloft</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 35th ICML 2018</title>
				<editor>
			<persName><forename type="first">J</forename><forename type="middle">G</forename><surname>Dy</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><surname>Krause</surname></persName>
		</editor>
		<meeting>the 35th ICML 2018<address><addrLine>Stockholm, Sweden</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Ruff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">A</forename><surname>Vandermeulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Görnitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Binder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kloft</surname></persName>
		</author>
		<title level="m">Deep semi-supervised anomaly detection</title>
				<meeting><address><addrLine>Addis Ababa, Ethiopia, OpenReview</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note>8th ICLR 2020</note>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Cooperative deep unsupervised anomaly detection</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ferragina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Spada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Discovery Science -25th International Conference, DS 2022</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<meeting><address><addrLine>Montpellier, France</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">October 10-12, 2022. 2022</date>
			<biblScope unit="volume">13601</biblScope>
			<biblScope unit="page" from="318" to="328" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">Deep anomaly detection with deviation networks</title>
		<author>
			<persName><forename type="first">G</forename><surname>Pang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Shen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>van den Hengel</surname></persName>
		</author>
		<idno type="DOI">10.1145/3292500.3330871</idno>
		<ptr target="https://doi.org/10.1145/3292500.3330871" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD 2019</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Teredesai</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">V</forename><surname>Kumar</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Y</forename><surname>Li</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Rosales</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Terzi</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Karypis</surname></persName>
		</editor>
		<meeting>the 25th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD 2019<address><addrLine>Anchorage, AK, USA</addrLine></address></meeting>
		<imprint>
			<publisher>ACM</publisher>
			<date type="published" when="2019">August 4-8, 2019. 2019</date>
			<biblScope unit="page" from="353" to="362" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Deep isolation forest for anomaly detection</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Pang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="12591" to="12604" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">Exploring the accuracy-energy trade-off in machine learning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">E</forename><surname>Brownlee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Adair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">O</forename><surname>Haraldsson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Jabbo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/ACM International Workshop on Genetic Improvement (GI)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021. 2021</date>
			<biblScope unit="page" from="11" to="18" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">Evaluating performance, power and energy of deep neural networks on CPUs and GPUs</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Ou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Qi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Cai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Yan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Theoretical Computer Science: 39th National Conference of Theoretical Computer Science, NCTCS 2021</title>
				<meeting><address><addrLine>Yinchuan, China</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">July 23-25, 2021. 2021</date>
			<biblScope unit="volume">39</biblScope>
			<biblScope unit="page" from="196" to="221" />
		</imprint>
	</monogr>
	<note>Revised Selected Papers</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Green AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Schwartz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Dodge</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">A</forename><surname>Smith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Etzioni</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Communications of the ACM</title>
		<imprint>
			<biblScope unit="volume">63</biblScope>
			<biblScope unit="page" from="54" to="63" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">A systematic review of Green AI</title>
		<author>
			<persName><forename type="first">R</forename><surname>Verdecchia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Sallou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cruz</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">e1507</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Improving deep unsupervised anomaly detection by exploiting VAE latent space distribution</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ferragina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Discovery Science</title>
				<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="596" to="611" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">LatentOut: an unsupervised deep anomaly detection approach exploiting latent space distribution</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ferragina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
				<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Detecting anomalies with LatentOut: novel scores, architectures, and settings</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ferragina</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-16564-1_24</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-16564-1_24" />
	</analytic>
	<monogr>
		<title level="m">Foundations of Intelligent Systems -26th International Symposium, ISMIS 2022</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">M</forename><surname>Ceci</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Flesca</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Masciari</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">G</forename><surname>Manco</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">Z</forename><forename type="middle">W</forename><surname>Ras</surname></persName>
		</editor>
		<meeting><address><addrLine>Cosenza, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2022">October 3-5, 2022. 2022</date>
			<biblScope unit="volume">13515</biblScope>
			<biblScope unit="page" from="251" to="261" />
		</imprint>
	</monogr>
	<note>Proceedings</note>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Enhancing anomaly detectors with LatentOut</title>
		<author>
			<persName><forename type="first">F</forename><surname>Angiulli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fassetti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Ferragina</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Intelligent Information Systems</title>
		<imprint>
			<biblScope unit="page" from="1" to="19" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<monogr>
		<title level="m" type="main">ODDS Library</title>
		<author>
			<persName><forename type="first">S</forename><surname>Rayana</surname></persName>
		</author>
		<ptr target="http://odds.cs.stonybrook.edu" />
		<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">The MNIST database of handwritten digit images for machine learning research</title>
		<author>
			<persName><forename type="first">L</forename><surname>Deng</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Signal Processing Magazine</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="141" to="142" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b35">
	<monogr>
		<title level="m" type="main">Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms</title>
		<author>
			<persName><forename type="first">H</forename><surname>Xiao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Rasul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Vollgraf</surname></persName>
		</author>
		<idno>CoRR abs/1708.07747</idno>
		<ptr target="http://arxiv.org/abs/1708.07747" />
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b36">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Krizhevsky</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Hinton</surname></persName>
		</author>
		<title level="m">Learning multiple layers of features from tiny images</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b37">
	<analytic>
		<title/>
		<author>
			<persName><forename type="first">B</forename><surname>Courty</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Schmidt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Luccioni</surname></persName>
		</author>
		<author>
			<persName><surname>Goyal-Kamal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Marioncoutarel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Feld</surname></persName>
		</author>
		<author>
			<persName><surname>Lecourt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liamconnell</surname></persName>
		</author>
		<author>
			<persName><surname>Saboni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Inimaz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Léval</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Blanche</surname></persName>
		</author>
		<author>
			<persName><surname>Cruveiller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Ouminasara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Bogroff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>De Lavoreille</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Laskaris</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Abati</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Blank</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Wang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Catovic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Alencon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Stęchły</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">O N</forename><surname>Bauer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Jpw</forename><surname>De Araújo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Minervabooks</forename></persName>
		</author>
		<idno type="DOI">10.5281/zenodo.11171501</idno>
		<ptr target="https://doi.org/10.5281/zenodo.11171501" />
	</analytic>
	<monogr>
		<title level="j">mlco2/codecarbon</title>
		<imprint>
			<biblScope unit="issue">1</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b38">
	<analytic>
		<title level="a" type="main">PyOD: a Python toolbox for scalable outlier detection</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Nasrullah</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Li</surname></persName>
		</author>
		<ptr target="http://jmlr.org/papers/v20/19-011.html" />
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="1" to="7" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
