<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Interpreting Outliers in Time Series Data through Decoding Autoencoder</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Patrick</forename><surname>Knab</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">Robert Bosch GmbH</orgName>
								<address>
									<settlement>Bühl</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Sascha</forename><surname>Marton</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Christian</forename><surname>Bartelt</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Robert</forename><surname>Fuder</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Robert Bosch GmbH</orgName>
								<address>
									<settlement>Bühl</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff2">
								<orgName type="department">Explainable AI for Time Series and Data Streams Tutorial-Workshop</orgName>
								<address>
									<addrLine>Sep. 9th</addrLine>
									<postCode>2024</postCode>
									<settlement>Vilnius</settlement>
									<country>Lithuania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Interpreting Outliers in Time Series Data through Decoding Autoencoder</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">05049FFE3C1FFD17DAB5282D7020F176</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable Artificial Intelligence (XAI)</term>
					<term>Outlier Detection</term>
					<term>Autoencoder</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Outlier detection is a crucial analytical tool in various fields. In critical systems like manufacturing, malfunctioning outlier detection can be costly and safety-critical. Therefore, there is a significant need for explainable artificial intelligence (XAI) when deploying opaque models in such environments. This study focuses on manufacturing time series data from the German automotive supply industry. We utilize autoencoders to compress the entire time series and then apply anomaly detection techniques to its latent features. For outlier interpretation, we i) adapt widely used XAI techniques to the autoencoder's encoder. Additionally, ii) we propose AEE, Aggregated Explanatory Ensemble, a novel approach that fuses explanations of multiple XAI techniques into a single, more expressive interpretation. For evaluation of explanations, iii) we propose a technique to measure the quality of encoder explanations quantitatively. Furthermore, we qualitatively assess the effectiveness of outlier explanations with domain expertise.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Outliers represent exceptional instances that differ from a normal data distribution <ref type="bibr" target="#b0">[1]</ref>. Artificial intelligence (AI) is pivotal in outlier (anomaly) detection applications, particularly in domains with high-dimensional data, such as time series. By analyzing patterns, trends, and dependencies, algorithms can effectively identify outliers and anomalous events in various domains, ranging from finance and healthcare to industrial processes <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b0">1,</ref><ref type="bibr" target="#b2">3]</ref>. In particular, manufacturing processes generate vast amounts of time series data, making timely and accurate outlier detection critical for maintaining operational efficiency and safety. However, opaque neural networks (NN) often lack the interpretability necessary for high-stakes environments <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b2">3]</ref>. Consequently, explaining the model's decisions through explainable artificial intelligence (XAI) is essential to provide transparency and foster trust in automated decision-making <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>This work utilizes convolutional autoencoders (CAE) to compress univariate time series data for anomaly detection in an automotive manufacturing plant. A complete time series is considered an outlier if the entire sequence deviates from the expected pattern. The purpose of utilizing CAE is to learn specific manufacturing process features and map a time series into a low-dimensional space at its bottleneck. An unsupervised anomaly detection algorithm then uses these latent features to identify outliers. 
Therefore, we are interested in explaining how the encoder transformation contributes to outlier detection by employing established XAI methods (Section 2) like Grad-CAM <ref type="bibr" target="#b7">[8]</ref>, LIME <ref type="bibr" target="#b8">[9]</ref>, SHAP <ref type="bibr" target="#b9">[10]</ref>, and LRP <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>, since we use the CAE's latent features for detecting outliers. The explanations produced by these XAI techniques fluctuate because of each technique's unique characteristics. This diversity motivates us to combine these explanations into a single, more comprehensive one: AEE -Aggregated Explanatory Ensemble (see Section 3.2), visualized in Figure <ref type="figure" target="#fig_0">1</ref> for an anomalous time series instance. Since ground-truth data for evaluating the produced explanations are often missing, counterfactuals are widely recognized as an effective quantitative evaluation method for XAI techniques <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. In our work, we implement a revised version of the quality measurement (QM) procedure, originally proposed in <ref type="bibr" target="#b15">[16]</ref>, as detailed in Section 3.3. We assess the effectiveness of the techniques both qualitatively and quantitatively (see Section 4) based on the underlying manufacturing process. Our primary focus is on discussing their implications for erroneous time series data to gain insights into what makes a complete time series an outlier.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related XAI Approaches</head><p>The following section briefly introduces the XAI approaches used in this work. They were chosen for their well-known status and their ability to cover explainability from different angles, e.g., local vs. global and model-agnostic vs. model-specific explanations. These techniques share the goal of providing post hoc explanations, but each employs a different approach to achieve explainability. CAM (Class Activation Mapping), proposed by Zhou et al. <ref type="bibr" target="#b16">[17]</ref>, is a local and model-specific technique for explaining convolutional neural networks (CNN). Selvaraju et al. <ref type="bibr" target="#b7">[8]</ref> enhanced this approach with Grad-CAM, incorporating gradients into the explanation process. This improvement removes the requirement for a global average pooling layer, making the method applicable to a broader range of model architectures.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Application of XAI to Autoencoder</head><p>Notation. A univariate time series instance t is fed into the convolutional autoencoder via the function t̂ = 𝐷(𝐸(t)), with encoder 𝐸, decoder 𝐷, and latent space 𝐿, where 𝐿 = 𝐸(t). The output t̂ is a reconstruction of t. We denote by E the explanation produced by an XAI technique for a time series t.</p></div>
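The notation above can be made concrete with a toy sketch. This is not the paper's CAE: the encoder and decoder below are stand-in linear maps (our own illustrative choice), used only to show how t, the latent code 𝐿 = 𝐸(t), and the reconstruction t̂ = 𝐷(𝐸(t)) relate.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 3                       # toy series length n and latent dimension k

# Linear stand-ins for the CAE's encoder E and decoder D (illustrative only).
W_e = rng.normal(size=(k, n))
W_d = np.linalg.pinv(W_e)         # decoder chosen as the pseudo-inverse of the encoder

def E(t):
    """Encoder: maps a series t to its latent code L = E(t)."""
    return W_e @ t

def D(L):
    """Decoder: maps a latent code back to series space."""
    return W_d @ L

t = rng.normal(size=n)            # a toy "time series"
L = E(t)                          # latent features, the input to anomaly detection
t_hat = D(L)                      # reconstruction t-hat = D(E(t))
```

With this choice of decoder, applying the round trip a second time leaves the reconstruction unchanged, which illustrates that the bottleneck keeps only a k-dimensional summary of t.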
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Adapting XAI Techniques to Encoder</head><p>We employ a 1D convolutional autoencoder (1D CAE) to reduce feature dimensions and detect anomalies in time series data (see Section 4.1 for more details). We apply XAI techniques to the encoder since we use its output for anomaly detection. While the straightforward architecture facilitates the application of the XAI methods introduced in Section 2, these methods, although widely applied in diverse machine learning scenarios, remain relatively little used with 1D CAEs, especially on time series data. We adapt these methods to improve their capability to provide 1D explanations for 1D convolutional networks in the form of heatmaps. The application of the XAI methods above yields two distinct types of explanations:</p><p>• Individual Feature Explanation: For each latent feature, 𝑙 𝑖 ∈ 𝐿 (where 𝑖 is the feature index), we generate a dedicated heatmap. This allows us to inspect how individual features in t contribute to the reconstruction process (see Appendix A). • Combined Feature Explanation: In addition to the individual views, we also create a unified heatmap that integrates all latent features into a single representation (see Figure <ref type="figure" target="#fig_1">2a</ref>). This combined view provides a holistic understanding of how the interplay between features in t influences the reconstruction process. All experiments and figures in this paper use combined feature explanations.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">AEE -Aggregated Explanatory Ensemble.</head><p>With the application of the covered XAI approaches (see Section 2), we generate a set of diverse explanations. Each XAI technique provides distinct insights: Grad-CAM emphasizes spatial relevance, LIME offers local interpretability, SHAP delivers global explanations, and LRP traces relevance propagation (see Figure <ref type="figure" target="#fig_1">2</ref>). By aggregating these methods, AEE leverages their strengths for a holistic understanding of anomalies. For a single time series t, AEE stores the diverse explanations E 𝑖 in an array E 𝑥 𝑖 , where 𝑖 indicates the index of t and 𝑥 denotes the underlying XAI technique. To ensure equal consideration for each explanation, we individually scale each element E 𝑥 𝑖 based on its importance scores. Mathematically, the scaled explanation SE 𝑥 𝑖 is given by:</p><formula xml:id="formula_0">SE 𝑥 𝑖 = ((E 𝑥 𝑖 − min(E 𝑥 )) / (max(E 𝑥 ) − min(E 𝑥 ))) × (𝑎 max − 𝑎 min ) + 𝑎 min .<label>(1)</label></formula><p>Here, 𝑎 min and 𝑎 max are the minimum and maximum values desired for scaling SE 𝑥 𝑖 . After scaling, we compute the mean value for each point 𝑖 on the 𝑋 axis. We denote the aggregated version as A 𝑖 , where A 𝑖 represents the aggregated value for the 𝑖th point of the time series t on the 𝑋 axis, mathematically:</p><formula xml:id="formula_1">A 𝑖 = (1/|𝑥|) ∑ 𝑗=1,…,|𝑥| SE 𝑗 𝑖 .<label>(2)</label></formula><p>Here, |𝑥| denotes the count of explanations stored in SE 𝑥 𝑖 . Alternatively, a weighting scheme can be employed instead of equal contribution to assign more relevance to specific explanations.</p></div>
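Equations (1) and (2) translate directly into code. A minimal NumPy sketch, assuming each explanation is a 1D array of importance scores over the time series, equal weighting, and 𝑎 min = 0, 𝑎 max = 1 (our illustrative choices); a constant explanation would need a guard against division by zero:

```python
import numpy as np

def scale_explanation(E_x, a_min=0.0, a_max=1.0):
    """Eq. (1): min-max scale one technique's importance scores to [a_min, a_max]."""
    lo, hi = E_x.min(), E_x.max()
    return (E_x - lo) / (hi - lo) * (a_max - a_min) + a_min

def aggregate_explanations(explanations, a_min=0.0, a_max=1.0):
    """Eq. (2): pointwise mean of the individually scaled explanations."""
    scaled = np.stack([scale_explanation(E_x, a_min, a_max) for E_x in explanations])
    return scaled.mean(axis=0)

# Toy example: three "explanations" over a 5-point series, standing in for
# e.g. Grad-CAM, LIME, and SHAP outputs (values invented for illustration).
E_all = [np.array([0., 1., 2., 3., 4.]),
         np.array([4., 3., 2., 1., 0.]),
         np.array([0., 0., 4., 0., 0.])]
A = aggregate_explanations(E_all)   # aggregated heatmap values in [0, 1]
```

A weighted variant would replace the plain mean with `np.average(scaled, axis=0, weights=w)` for a chosen weight vector `w`.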
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Quality Measurement of Encoder's Explanation</head><p>Given the interpretability constraints of the XAI results <ref type="bibr" target="#b4">[5]</ref>, we quantitatively analyze the explanations generated by each method using a modified version of the quality measurement function proposed by Schlegel et al. <ref type="bibr" target="#b15">[16]</ref>. In this work, the XAI techniques focus on the encoder's explainability, resulting in a multi-regression task. Using the reconstruction error as a quality measurement would involve the decoder, misleading the measurement of the encoder's explanation. Instead, we aim to analyze the projections of the original time series t, a randomly perturbed version t c r , and a version perturbed based on explanation results t c in the latent space. This approach operates independently of the decoder, focusing on explaining the techniques applied to the encoder. Adversarial perturbations <ref type="bibr" target="#b17">[18]</ref>, which manipulate predictions, suggest that the distance between t c r and t should be smaller than between t and t c . Thus, we define the quality measurement for the encoder as:</p><formula xml:id="formula_2">𝑞𝑚 𝑒 (t, t) ≤ 𝑞𝑚 𝑒 (t, t c r ) ≤ 𝑞𝑚 𝑒 (t, t c ).<label>(3)</label></formula><p>Here, 𝑞𝑚 𝑒 measures the Euclidean distance between the original and perturbed time series in the latent space. The underlying theory is that perturbations based on explanation results have a more significant impact on the model's predictions than random noise <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>. The approach applies to individual and combined feature explanations, revealing the importance of features for the outlierness property of the instance.</p></div>
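The measurement procedure can be sketched under stated assumptions: the encoder is replaced by a toy linear projection, and the "explanation" is a hypothetical importance mask; with this toy setup the ordering of Eq. (3) is only illustrated, not guaranteed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 64, 3
W_e = rng.normal(size=(k, n))     # toy linear stand-in for the encoder E

def qm_e(t_a, t_b):
    """Euclidean distance between two series in the (toy) latent space."""
    return np.linalg.norm(W_e @ t_a - W_e @ t_b)

t = np.sin(np.linspace(0, 4 * np.pi, n))   # original series
importance = np.zeros(n)
importance[40:50] = 1.0                     # hypothetical XAI heatmap: points 40..49 matter

# t_c: perturb exactly the points the explanation marks as important
t_c = t.copy()
t_c[importance > 0] += 1.0

# t_c_r: perturb the same number of randomly chosen points (random baseline)
idx = rng.choice(n, size=int(importance.sum()), replace=False)
t_c_r = t.copy()
t_c_r[idx] += 1.0

# Eq. (3): a faithful explanation should give qm_e(t, t) <= qm_e(t, t_c_r) <= qm_e(t, t_c)
scores = (qm_e(t, t), qm_e(t, t_c_r), qm_e(t, t_c))
```

The first score is trivially zero; a good explanation is one for which the third score reliably exceeds the second across many instances.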
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimentation</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Experimental Setup</head><p>Dataset. As introduced in Section 1, our demonstration employs univariate time series data originating from a production plant. More specifically, it covers one process in a manufacturing line consisting of multiple processes. The dataset consists of 18,412 time series instances, each containing 8,192 data points. The test station (end of the line) automatically labels the data to indicate normal operation (OK) or an error (not OK, NOK) during production; NOK instances account for 0.68% of the data overall. We intentionally include known anomalies in the training process, as instances with NOK labels may contain errors originating from other processes in the manufacturing line that the time series does not cover. In addition, the proportion of abnormal instances is low enough (less than 1%) that the autoencoder continues to learn to reconstruct the time series correctly without learning anomalies. The pipeline consists of an anomaly detection mechanism that utilizes the latent feature space as input (see Appendix C). Specifically, we employ the density-based spatial clustering of applications with noise (DBSCAN) algorithm <ref type="bibr" target="#b18">[19]</ref>. Table <ref type="table" target="#tab_1">1</ref> presents the performance metrics of the anomaly detection pipeline, categorized into NOK and OK classes. These results are based on the evaluation of the test dataset.</p></div>
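The density-based flagging of latent points can be sketched as follows. This implements only DBSCAN's core-point criterion (a point with too few neighbors within eps is treated as noise), not the full cluster expansion; the eps and min_samples values and the toy latent points are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def density_outliers(Z, eps=0.5, min_samples=5):
    """Flag latent points with fewer than min_samples neighbors within eps.

    Only DBSCAN's core-point test (the neighbor count includes the point
    itself); real DBSCAN additionally expands clusters from core points."""
    d = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1)  # pairwise distances
    return (d <= eps).sum(axis=1) < min_samples                 # True -> outlier

rng = np.random.default_rng(1)
Z = rng.normal(scale=0.1, size=(100, 3))      # dense "OK" cluster in a 3-d latent space
Z = np.vstack([Z, [[3.0, 3.0, 3.0]]])         # one isolated latent point
flags = density_outliers(Z)
```

In practice the same idea is available off the shelf, e.g. scikit-learn's `DBSCAN`, whose `labels_ == -1` marks noise points.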
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Qualitative Evaluation -Anomaly Interpretation</head><p>In the following, we discuss the utility of XAI techniques for interpreting the encoder, focusing on understanding why specific instances lead to anomalies by leveraging domain-specific knowledge of the underlying manufacturing process. We examine an exemplary time series classified as NOK for all explanation techniques in Figure <ref type="figure" target="#fig_1">2</ref> and Figure <ref type="figure" target="#fig_3">3</ref>. The illustrative case diverges notably in its final third, as the pattern is expected to exhibit distinct characteristics compared to the preceding two-thirds of the time series (see Appendix D). We deliberately illustrate the anomalies with examples that are visually easy to understand, even for outsiders.</p><p>We begin with Grad-CAM (Figure <ref type="figure" target="#fig_1">2a</ref>), revealing a heatmap that distinctly accentuates positions later in t, precisely aligning with observable areas of technical failures in the manufacturing process. This targeted explanation effectively identifies the specific region preceding real-world anomalies. Subsequently, LIME (Figure <ref type="figure" target="#fig_1">2b</ref>) highlights the same area as Grad-CAM, but its interpretation is more straightforward because of its pronounced intensity. Moreover, it also subtly indicates regions in intermediate areas of the time series. SHAP (Figure <ref type="figure" target="#fig_1">2c</ref>) pinpoints the same critical area of primary importance, consistent with the findings of the previous methods. Compared to the preceding methods, LRP (Figure <ref type="figure" target="#fig_1">2d</ref>) diverges in its explanation. Although it does not explicitly emphasize the most pronounced pattern, it assigns varying degrees of importance to different segments and provides valuable insights for manual analysis by a domain expert. 
Figure <ref type="figure" target="#fig_3">3</ref> shows the aggregated explanation. Parallel to Grad-CAM and SHAP, the region signaling an abnormal pattern is precisely accentuated, and the aggregated version amplifies the color representation, enhancing interpretability. Besides confirming the importance of the known area, this approach offers additional insights into other parts of the time series, e.g., it prioritizes early regions that indicate possible technical abnormalities. Repeated experiments show that its explanations are more stable due to the aggregation, mitigating the negative implications of instability <ref type="bibr" target="#b19">[20]</ref>.</p><p>The visualization indicates that each QM XAI score consistently outperforms its QM noise counterpart. The scores for the NOK cluster are significantly higher, demonstrating the effectiveness of using explanations for outlier interpretation. LRP and LIME overlap between Noise and XAI, while Grad-CAM and SHAP display a clearer separation in their explanations. The AEE produces a significant result, indicating that aggregating multiple explanations sharpens the distinction between relevant and irrelevant features within a time series, improving explanation quality.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Quantitative Evaluation -XAI Quality Measurement.</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Limitations and Future Work</head><p>Our study applied XAI techniques to CAEs, leaving the potential for other architectures such as variational autoencoders (VAE) <ref type="bibr" target="#b20">[21]</ref> and recurrent neural networks (RNN) <ref type="bibr" target="#b21">[22,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b6">7]</ref> unexplored. Additionally, the evaluation of these techniques was primarily based on qualitative assessments, as anomalies required examination by domain experts. Future research on datasets not requiring expert knowledge should consider integrating additional quantitative methods to complement qualitative insights <ref type="bibr" target="#b23">[24]</ref>. In addition, a clear distinction between explanation and interpretation should be established <ref type="bibr" target="#b24">[25]</ref>, recognizing that not all explanations are inherently human-interpretable <ref type="bibr" target="#b25">[26]</ref>, as was sometimes the case in this scenario.</p><p>Furthermore, exploring different weighting schemes for AEE could enhance the interpretation and accuracy of feature importance calculations in various scenarios. Our experimentation was limited to the time series manufacturing use case. Future research could involve testing the AEE approach across various data types, such as images or text. Regarding XAI approaches, future work could focus on improving time series segmentation using foundation models <ref type="bibr" target="#b26">[27]</ref>, particularly beneficial for LIME. Another promising direction is to direct the explanations not toward the latent features themselves but toward the classes in the latent space that signify the presence or absence of anomalies. 
Lastly, extending this methodology to multivariate time series <ref type="bibr" target="#b27">[28,</ref><ref type="bibr" target="#b22">23,</ref><ref type="bibr" target="#b28">29,</ref><ref type="bibr" target="#b29">30]</ref> or even multimodal data <ref type="bibr" target="#b20">[21]</ref> presents another intriguing avenue for future exploration.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusion</head><p>This paper contributes to the application of XAI techniques to CAEs for analyzing outlier properties within the latent space of time series data in the operational context of a manufacturing plant. We employed well-established XAI methods to demonstrate the practicality and effectiveness of these techniques in interpreting outliers. In addition, we introduced AEE, an ensemble of multiple XAI techniques. We quantitatively evaluated the different explanations using a QM approach specifically modified to fit the encoder of an AE. Moreover, the application of XAI techniques provided explanations for these outliers, accurately highlighting the abnormal segments within the time series. This alignment confirms the utility of XAI in providing meaningful insights into anomalies and building confidence in the system through the interpretation of XAI results.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>A. Individual Feature Explanation</head><p>Figure <ref type="figure" target="#fig_7">5</ref> shows an instance that the pipeline classified as NOK, featuring the reconstructed time series in red and the original time series in black. The underlying explanation is provided through individual feature explanations, where a distinct heatmap visually explains each latent feature.  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>B. Autoencoder Architecture</head><p>In the following, we present the defined search space of hyperparameters for tuning an AE in this work. The search space was explored over 100 runs with 500 epochs each. Figure <ref type="figure">6</ref> represents the building blocks we tuned during this process.</p><p>• Each CNN block consists of a convolutional layer with an optional dropout and max-pooling layer. We restrict the number of CNN blocks to between one and three. This number applies to both the encoder and the decoder. </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>…</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Encoder Decoder</head><p>Figure <ref type="figure">6</ref>: The Building Blocks of an Autoencoder: An Abstract Architecture. The AE's architecture comprises diverse blocks, each possessing unique internal attributes and dimensions. As a result, the encoder and decoder are constructed separately, deviating from the conventional symmetrical autoencoder. These blocks encompass convolutional layers and their associated operations alongside a dense block that integrates dense and dropout layers.</p><p>• Each convolutional layer is optimized with a specific number of filters in its operation, namely 16, 32, 64, or 128. Furthermore, the kernel size is tuned to either 8, 16, or 32. While it is possible to consider additional values for these parameters, doing so would increase the search space for the tuner. • The dropout layer is optional for each CNN and DNN block. Possible dropout rates are 0.1, 0.2, 0.3, 0.4, and 0.5. • Max pooling is another optional layer in the CNN block, with a fixed pooling size of two. • The number of neurons in a dense layer is 32, 64, 128, or 256. • The activation function chosen for each layer in the autoencoder remains consistent, namely, the ReLU, Tanh, Sigmoid, or Softmax function. However, only the output layer of the decoder is tuned individually across these four functions.</p></div>
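The search space above can be written down compactly. The dictionary below is our own sketch of it (value lists taken from the bullets; the encoder/decoder block-count coupling is omitted for brevity), with random configurations drawn the way a tuner would per run:

```python
import random

# Sketch of the tuner's search space (values from the bullets above).
SEARCH_SPACE = {
    "cnn_blocks":  [1, 2, 3],                        # per encoder/decoder
    "filters":     [16, 32, 64, 128],
    "kernel_size": [8, 16, 32],
    "dropout":     [None, 0.1, 0.2, 0.3, 0.4, 0.5],  # None -> no dropout layer
    "max_pooling": [False, True],                    # fixed pool size of 2 when enabled
    "dnn_blocks":  [1, 2],                           # up to two per encoder/decoder
    "dense_units": [32, 64, 128, 256],
    "activation":  ["relu", "tanh", "sigmoid", "softmax"],
}

def sample_config(rng):
    """Draw one random configuration from the search space."""
    return {name: rng.choice(options) for name, options in SEARCH_SPACE.items()}

rng = random.Random(42)
configs = [sample_config(rng) for _ in range(5)]   # the paper's tuner used 100 runs
```

Each sampled configuration would then be trained for the fixed epoch budget and ranked by validation reconstruction error.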
<div xmlns="http://www.tei-c.org/ns/1.0"><head>C. Latent Space Plot</head><p>The encoder's output projection is shown in Figure <ref type="figure" target="#fig_8">7</ref>. This figure displays the latent variables on a two-dimensional scale for easier interpretation. Each point on the plot corresponds to a mapped instance, representing a complete time series from the test dataset. The colors indicate the categorization by DBSCAN in the latent space: red points are outliers, orange points represent instances with manually detected deviations yet considered OK, and green points indicate cases with no apparent deviations, also classified as OK.   </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>D. Exemplary Non-Outlier Time Series</head></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure1: Aggregated Explanatory Ensemble -AEE. The aggregated explanation is represented by a heat map in the background, with deeper shades of red indicating areas of greater significance for its explanation in the time series. The black curve visualizes an anomalous time series, with the explanation highlighting a disruption in the pattern between the 6300 and 7200 marks in the time series.</figDesc><graphic coords="2,107.47,92.55,396.79,65.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Individual XAI Results. The XAI results are presented in the form of heatmaps. The black portions of the images denote the time series signal. These displayed instances were identified as abnormal by the AE's pipeline. The heatmap in the background indicates feature importance using varying intensities of red. We must evaluate color intensity individually as XAI techniques calculate feature importance differently.</figDesc><graphic coords="6,107.47,417.74,396.79,65.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: AEE XAI Results. This figure presents the results of the XAI analysis for the AEE approach. The format and layout of these explanations are consistent with those shown in Figure 2.</figDesc><graphic coords="7,107.47,92.55,396.79,65.88" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 4</head><label>4</label><figDesc>Figure 4 depicts the QM (normalized Euclidean distances) distributions, where boxes represent the interquartile range (IQR) from Q1 to Q3, with a median line (Q2). The fences extend ±1.5 times the IQR. The OK category includes 100 randomly selected instances, and the NOK category comprises 38. The noise/shuffle box (green) represents QM values t c r , and the XAI box (red) represents t c .</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Interquartile Range Quality Measurements. The visualization depicts quality measurement scores for each XAI technique, categorized into true anomalies (NOK) and false anomalies (OK). The measurements are further stratified into noise (t c r -XAI shuffled), denoted by green, and XAI (t c -XAI perturbed), represented by red.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head></head><label></label><figDesc>(a) Heatmap Latent Feature One (b) Heatmap Latent Feature Two (c) Heatmap Latent Feature Three</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Grad-CAM -Individual Feature Heatmaps. The images illustrate individual latent feature explanations in the form of a heatmap generated by Grad-CAM. The black curve illustrates the original time series, while the red curve represents its reconstruction.</figDesc><graphic coords="12,89.29,380.43,416.70,74.92" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_8"><head>Figure 7 :</head><label>7</label><figDesc>Figure 7: Latent Space Visualization. Two-dimensional latent space representation of the autoencoder's features. Green and orange points represent instances assigned to two distinct clusters, while red points are identified as outliers.</figDesc><graphic coords="14,89.29,84.19,416.69,133.55" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_9"><head>Figure 8</head><label>8</label><figDesc>Figure 8 displays an exemplary time series classified as a non-outlier alongside its reconstruction by the autoencoder. The image demonstrates that the AE can meaningfully reconstruct the input time series. Additionally, the pattern of this time series is typical for an instance without apparent errors in this dataset.</figDesc><graphic coords="14,89.29,374.25,416.68,87.89" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_10"><head>Figure 8 :</head><label>8</label><figDesc>Figure 8: Time Series Reconstruction. The figure illustrates a time series, depicted in black, classified as OK. The corresponding reconstruction through the autoencoder is shown in red.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 1 Anomaly Detection Performance Measurements.</head><label>1</label><figDesc>The table contains the precision, recall, and F1-score performance metrics of the developed anomaly detection pipeline for the test set. This information can be used to assess whether the found outliers correspond to actual errors identified by the test station. Anomaly Detection Pipeline. Our 1D CAE architecture comprises three convolutional layers with ReLU activation functions, followed by max-pooling layers and a bottleneck layer with a three-dimensional latent space (see Appendix B). We divide the data into three sets to train the AE: a training set for model training (0.66% NOK), a validation set (0.74% NOK), and a separate test set (0.70% NOK) for evaluating the model's performance.</figDesc><table><row><cell>Class</cell><cell>Precision</cell><cell>Recall</cell><cell>F1-Score</cell><cell>Support</cell></row><row><cell>0 (OK)</cell><cell>1.00</cell><cell>1.00</cell><cell>1.00</cell><cell>5441</cell></row><row><cell>1 (NOK)</cell><cell>0.89</cell><cell>0.63</cell><cell>0.74</cell><cell>38</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>• In contrast to the number of CNN blocks, this number varies for DNN blocks between the encoder and decoder parts. Both can have up to two DNN blocks.</figDesc><table><row><cell>Input</cell><cell>CNN</cell><cell>Pooling</cell><cell>Dropout</cell><cell>…</cell><cell>CNN</cell><cell>Pooling</cell><cell>Dropout</cell><cell>Flatten</cell><cell>Dense</cell><cell>Dropout</cell><cell>…</cell><cell>Dense</cell><cell>Dropout</cell><cell>Bottleneck</cell><cell>Dense</cell><cell>Dropout</cell><cell>…</cell><cell>Dense</cell><cell>Dropout</cell><cell>Dense Reshaping</cell><cell>Dropout</cell><cell>UpSampling</cell><cell>CNN_Transpose</cell><cell>Dropout</cell><cell>UpSampling</cell><cell>CNN_Transpose</cell><cell>Output</cell></row></table></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was supported by the German Federal Ministry for Economic Affairs and Climate Action (BMWK).</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Outlier detection: Methods, models, and classification</title>
		<author>
			<persName><forename type="first">A</forename><surname>Boukerche</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Zheng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Alfandi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3381028</idno>
		<ptr target="https://doi.org/10.1145/3381028" />
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">53</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<title level="m" type="main">Robust and explainable autoencoders for unsupervised time series outlier detection-extended version</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhao</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Zheng</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2204.03341</idno>
		<ptr target="https://arxiv.org/abs/2204.03341" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explaining anomalies detected by autoencoders using shapley additive explanations</title>
		<author>
			<persName><forename type="first">L</forename><surname>Antwarg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M</forename><surname>Miller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Shapira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rokach</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2021.115736</idno>
		<ptr target="https://doi.org/10.1016/j.eswa.2021.115736" />
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">186</biblScope>
			<biblScope unit="page">115736</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with Shapley Additive Explanation (SHAP)</title>
		<author>
			<persName><forename type="first">K</forename><surname>Roshan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zafar</surname></persName>
		</author>
		<idno>CoRR abs/2112.08442</idno>
		<ptr target="https://arxiv.org/abs/2112.08442" />
		<imprint>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<monogr>
		<title level="m" type="main">Interpretable Machine Learning</title>
		<author>
			<persName><forename type="first">C</forename><surname>Molnar</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Multivariate time-series anomaly detection with contaminated data</title>
		<author>
			<persName><forename type="first">T</forename><forename type="middle">K K</forename><surname>Ho</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Armanfard</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2308.12563" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Unsupervised anomaly detection for iot-based multivariate time series: Existing solutions, performance analysis and future directions</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Belay</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">S</forename><surname>Blakseth</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Rasheed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">Salvo</forename><surname>Rossi</surname></persName>
		</author>
		<idno type="DOI">10.3390/s23052844</idno>
		<ptr target="https://www.mdpi.com/1424-8220/23/5/2844" />
	</analytic>
	<monogr>
		<title level="j">Sensors</title>
		<imprint>
			<biblScope unit="volume">23</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Grad-cam: Visual explanations from deep networks via gradient-based localization</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">R</forename><surname>Selvaraju</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Cogswell</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Das</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Vedantam</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Batra</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the IEEE International Conference on Computer Vision (ICCV)</title>
				<meeting>the IEEE International Conference on Computer Vision (ICCV)</meeting>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot;: Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
		<idno type="DOI">10.1145/2939672.2939778</idno>
		<ptr target="https://doi.org/10.1145/2939672.2939778" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A unified approach to interpreting model predictions</title>
		<author>
			<persName><forename type="first">S</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.1705.07874</idno>
		<ptr target="https://arxiv.org/abs/1705.07874" />
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems 30</title>
				<editor>
			<persName><forename type="first">I</forename><surname>Guyon</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">U</forename><forename type="middle">V</forename><surname>Luxburg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Bengio</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">H</forename><surname>Wallach</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Fergus</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">S</forename><surname>Vishwanathan</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Garnett</surname></persName>
		</editor>
		<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4765" to="4774" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Explaining deep neural networks and beyond: A review of methods and applications</title>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montavon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lapuschkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">J</forename><surname>Anders</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-R</forename><surname>Müller</surname></persName>
		</author>
		<idno type="DOI">10.1109/JPROC.2021.3060483</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the IEEE</title>
		<imprint>
			<biblScope unit="volume">109</biblScope>
			<biblScope unit="page" from="247" to="278" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Binder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montavon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Klauschen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-R</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<idno type="DOI">10.1371/journal.pone.0130140</idno>
		<ptr target="https://doi.org/10.1371/journal.pone.0130140" />
	</analytic>
	<monogr>
		<title level="j">PLOS ONE</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page" from="1" to="46" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">TSViz: Demystification of deep learning models for time-series analysis</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Siddiqui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mercier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Munir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dengel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmed</surname></persName>
		</author>
		<idno type="DOI">10.1109/access.2019.2912823</idno>
		<ptr target="https://doi.org/10.1109/ACCESS.2019.2912823" />
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="67027" to="67040" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Counterfactual visual explanations</title>
		<author>
			<persName><forename type="first">Y</forename><surname>Goyal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Wu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Ernst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Batra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Parikh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v97/goyal19a.html" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 36th International Conference on Machine Learning</title>
				<editor>
			<persName><forename type="first">K</forename><surname>Chaudhuri</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Salakhutdinov</surname></persName>
		</editor>
		<meeting>the 36th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">97</biblScope>
			<biblScope unit="page" from="2376" to="2384" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Interpretable latent space to enable counterfactual explanations</title>
		<author>
			<persName><forename type="first">F</forename><surname>Bodria</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Discovery Science</title>
				<editor>
			<persName><forename type="first">P</forename><surname>Pascal</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Ienco</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham, Switzerland</addrLine></address></meeting>
		<imprint>
			<publisher>Springer Nature</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="525" to="540" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Towards a rigorous evaluation of XAI methods on time series</title>
		<author>
			<persName><forename type="first">U</forename><surname>Schlegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Arnout</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>El-Assady</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Oelke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Keim</surname></persName>
		</author>
		<idno>CoRR abs/1909.07082</idno>
		<ptr target="http://arxiv.org/abs/1909.07082" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Learning deep features for discriminative localization</title>
		<author>
			<persName><forename type="first">B</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Khosla</surname></persName>
		</author>
		<author>
			<persName><forename type="first">À</forename><surname>Lapedriza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Oliva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Torralba</surname></persName>
		</author>
		<idno>CoRR abs/1512.04150</idno>
		<ptr target="http://arxiv.org/abs/1512.04150" />
	</analytic>
	<monogr>
		<title level="m">IEEE Conference on Computer Vision and Pattern Recognition</title>
				<imprint>
			<date type="published" when="2016">2016</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<title level="m" type="main">When explainability meets adversarial learning: Detecting adversarial examples using SHAP signatures</title>
		<author>
			<persName><forename type="first">G</forename><surname>Fidel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Bitton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Shabtai</surname></persName>
		</author>
		<idno>CoRR abs/1909.03418</idno>
		<ptr target="http://arxiv.org/abs/1909.03418" />
		<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Reynolds</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-0-387-73003-5_196</idno>
		<ptr target="https://doi.org/10.1007/978-0-387-73003-5_196" />
		<title level="m">Gaussian Mixture Models</title>
				<meeting><address><addrLine>Boston, MA</addrLine></address></meeting>
		<imprint>
			<publisher>Springer US</publisher>
			<date type="published" when="2009">2009</date>
			<biblScope unit="page" from="659" to="663" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<monogr>
		<title level="m" type="main">Optilime: Optimized LIME explanations for diagnostic computer algorithms</title>
		<author>
			<persName><forename type="first">G</forename><surname>Visani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Bagli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Chesani</surname></persName>
		</author>
		<idno>CoRR abs/2006.05714</idno>
		<ptr target="https://arxiv.org/abs/2006.05714" />
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">A multimodal anomaly detector for robot-assisted feeding using an lstm-based variational autoencoder</title>
		<author>
			<persName><forename type="first">D</forename><surname>Park</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hoshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">C</forename><surname>Kemp</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Robotics and Automation Letters</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="1544" to="1551" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Unsupervised anomaly detection in time series using lstm-based autoencoders</title>
		<author>
			<persName><forename type="first">O</forename><forename type="middle">I</forename><surname>Provotar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><forename type="middle">M</forename><surname>Linder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">M</forename><surname>Veres</surname></persName>
		</author>
		<idno type="DOI">10.1109/ATIT49449.2019.9030505</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Advanced Trends in Information Theory (ATIT)</title>
				<imprint>
			<date type="published" when="2019">2019. 2019</date>
			<biblScope unit="page" from="513" to="517" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">An autocorrelationbased lstm-autoencoder for anomaly detection on time-series data</title>
		<author>
			<persName><forename type="first">H</forename><surname>Homayouni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ghosh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Ray</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gondalia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Duggan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Kahn</surname></persName>
		</author>
		<idno type="DOI">10.1109/BigData50022.2020.9378192</idno>
	</analytic>
	<monogr>
		<title level="m">2020 IEEE International Conference on Big Data (Big Data)</title>
				<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="5068" to="5077" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">From anecdotal evidence to quantitative evaluation methods: A systematic review on evaluating explainable ai</title>
		<author>
			<persName><forename type="first">M</forename><surname>Nauta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Trienes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Pathak</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Nguyen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Peters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Schmitt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schlötterer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Van Keulen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Seifert</surname></persName>
		</author>
		<idno type="DOI">10.1145/3583558</idno>
		<ptr target="https://doi.org/10.1145/3583558" />
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">55</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (xai): Concepts, taxonomies, opportunities and challenges toward responsible ai</title>
		<author>
			<persName><forename type="first">A</forename><surname>Barredo Arrieta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Díaz-Rodríguez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Del Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bennetot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tabik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Barbado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Garcia</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gil-Lopez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Molina</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Benjamins</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Chatila</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.inffus.2019.12.012</idno>
		<ptr target="https://doi.org/10.1016/j.inffus.2019.12.012" />
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">58</biblScope>
			<biblScope unit="page" from="82" to="115" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">A multidisciplinary survey and framework for design and evaluation of explainable ai systems</title>
		<author>
			<persName><forename type="first">S</forename><surname>Mohseni</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Zarei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">D</forename><surname>Ragan</surname></persName>
		</author>
		<idno type="DOI">10.1145/3387166</idno>
		<ptr target="https://doi.org/10.1145/3387166" />
	</analytic>
	<monogr>
		<title level="j">ACM Trans. Interact. Intell. Syst</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b26">
	<monogr>
		<author>
			<persName><forename type="first">P</forename><surname>Knab</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marton</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bartelt</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2403.07733</idno>
		<title level="m">Dseg-lime: Improving image explanation by hierarchical data-driven segmentation</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">Outlier detection for multidimensional time series using deep neural networks</title>
		<author>
			<persName><forename type="first">T</forename><surname>Kieu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Yang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">S</forename><surname>Jensen</surname></persName>
		</author>
		<idno type="DOI">10.1109/MDM.2018.00029</idno>
	</analytic>
	<monogr>
		<title level="m">19th IEEE International Conference on Mobile Data Management (MDM)</title>
				<imprint>
			<date type="published" when="2018">2018. 2018</date>
			<biblScope unit="page" from="125" to="134" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Usad: Unsupervised anomaly detection on multivariate time series</title>
		<author>
			<persName><forename type="first">J</forename><surname>Audibert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Michiardi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Guyard</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Marti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">A</forename><surname>Zuluaga</surname></persName>
		</author>
		<idno type="DOI">10.1145/3394486.3403392</idno>
		<ptr target="https://doi.org/10.1145/3394486.3403392" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD &apos;20</title>
				<meeting>the 26th ACM SIGKDD International Conference on Knowledge Discovery &amp; Data Mining, KDD &apos;20<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="3395" to="3404" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">SEGAL time series classification - Stable explanations using a generative model and an adaptive weighting method for LIME</title>
		<author>
			<persName><forename type="first">H</forename><surname>Meng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Triguero</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neunet.2024.106345</idno>
		<ptr target="https://doi.org/10.1016/j.neunet.2024.106345" />
	</analytic>
	<monogr>
		<title level="j">Neural Networks</title>
		<imprint>
			<biblScope unit="volume">176</biblScope>
			<biblScope unit="page">106345</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
