<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Ephrem</forename><forename type="middle">T</forename><surname>Mekonnen</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">Artificial Intelligence and Cognitive Load Research Lab</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Luca</forename><surname>Longo</surname></persName>
							<email>luca.longo@tudublin.ie</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="laboratory">Artificial Intelligence and Cognitive Load Research Lab</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Pierpaolo</forename><surname>Dondio</surname></persName>
							<email>pierpaolo.dondio@tudublin.ie</email>
							<affiliation key="aff0">
								<orgName type="department">School of Computer Science</orgName>
								<orgName type="institution">Technological University Dublin</orgName>
								<address>
									<country key="IE">Ireland</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Interpreting Black-Box Time Series Classifiers using Parameterised Event Primitives</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">929E743A90C4E5C7E9D7AF88615FC0E4</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable Artificial Intelligence</term>
					<term>Model-Agnostic</term>
					<term>Time Series</term>
					<term>Post-hoc</term>
					<term>Deep Learning</term>
					<term>ORCID 0000-0002-0877-7063 (E. T. Mekonnen)</term>
					<term>ORCID 0000-0002-2718-5426 (L. Longo)</term>
					<term>ORCID 0000-0001-7874-8762 (P. Dondio)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Amidst the remarkable performance of deep learning models in time series classification, there is a pressing demand for methods that unveil their prediction rationale. Existing feature importance techniques often neglect the temporal nature of time series data, focusing solely on segment importance. Addressing this gap, this paper introduces a local model-agnostic method akin to LIME, which generates neighbouring samples by randomly perturbing segments of the original instance. Subsequently, weights are computed for each neighbouring instance based on its distance from the original, elucidating its influence. Parameterised event primitives (PEPs) are then extracted from these perturbed samples, encompassing increasing and decreasing events and local maxima and minima points. These primitives are clustered to form prototypical events that capture the temporal essence of the data. Leveraging these events, computed weights, and black box predictions, a simple linear regression model is trained to provide local explanations. Preliminary experiments on real-world datasets showcase the method's efficacy in identifying salient subsequences and points and their importance scores, thereby enhancing comprehension of produced explanations.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The ubiquity of sensors has facilitated the generation of extensive time series data across domains such as finance <ref type="bibr" target="#b0">[1]</ref>, healthcare <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref>, human activity recognition <ref type="bibr" target="#b3">[4]</ref>, and environmental monitoring <ref type="bibr" target="#b4">[5]</ref>. These data, crucial for informed decision making, require effective time series classification techniques. However, despite the success of deep learning models in various domains, including time series classification tasks, their lack of interpretability remains a significant challenge. Explainable Artificial Intelligence (XAI) has emerged to address this issue, aiming to provide transparent explanations for machine learning models. A multitude of XAI methods exist for image and tabular data; however, applying such methods to time series data presents unique challenges due to the temporal nature of the data and the requirement for domain knowledge <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7]</ref>. Local Interpretable Model-agnostic Explanations (LIME) has become a popular method for explaining black-box models <ref type="bibr" target="#b7">[8]</ref>. However, its application to time series data is hindered by the difficulty of segmenting data while preserving temporal characteristics <ref type="bibr" target="#b8">[9]</ref>. To address these challenges, we propose a novel local model-agnostic XAI method, akin to LIME, for interpreting black-box time series classifiers. Our approach does not require the segmentation of time series data. It provides detailed explanations of salient parts, identifying trends such as increasing and decreasing events, as well as local maxima and minima. By enhancing the interpretability of black-box time series classifiers, our method fosters a deeper understanding of model decisions and facilitates informed decision-making.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Related Work</head><p>Recent advancements in explainable artificial intelligence (XAI) have sparked significant interest in understanding black-box models, particularly in time series classification. Although XAI research has focused predominantly on computer vision and natural language processing tasks, adapting these methods to time series analysis presents unique challenges due to the temporal nature of the data <ref type="bibr" target="#b5">[6]</ref>. Schlegel et al. <ref type="bibr" target="#b6">[7]</ref> explored various common XAI techniques, including saliency <ref type="bibr" target="#b9">[10]</ref>, LIME <ref type="bibr" target="#b7">[8]</ref>, SHAP <ref type="bibr" target="#b10">[11]</ref> and LRP <ref type="bibr" target="#b11">[12]</ref>, to interpret deep learning-based time series classification models. Zhou et al. <ref type="bibr" target="#b12">[13]</ref> have enriched the interpretability landscape by enhancing Class Activation Maps (CAM) and Grad-CAM with backpropagation. Simultaneously, the work described in <ref type="bibr" target="#b13">[14]</ref> introduced TSViz, a saliency map-based methodology later integrated into TSXplain <ref type="bibr" target="#b14">[15]</ref> to uncover the logic behind Deep Neural Networks (DNNs) in time series. These methodologies combine salient regions, instances, and statistical features, fostering natural language explanations. Furthermore, Vielhaben et al. <ref type="bibr" target="#b15">[16]</ref> introduced DFT-LRP, a tailored variant of Layer-wise Relevance Propagation (LRP), specifically designed to address the complexities of time series analysis by incorporating a virtual inspection layer.</p><p>While many existing methods are model-specific and rely on internal model structures, there is a growing interest in model-agnostic explanations that identify key features without being tied to a particular model architecture. 
However, adapting feature importance-based explanations to time series data requires careful consideration of the temporal dimension. Among feature-importance methods, LIME stands out as a popular approach, but its direct application to time series data requires thoughtful preprocessing to ensure interpretability. Guillemé et al. <ref type="bibr" target="#b16">[17]</ref> and Neves et al. <ref type="bibr" target="#b17">[18]</ref> adapted LIME for deep learning-based time series classification by using longer segments for perturbation. Still, these approaches are limited by fixed window sizes. To overcome this limitation, Silvio et al. <ref type="bibr" target="#b18">[19]</ref> introduced NNsegment, which identifies homogeneous regions in time series and employs various perturbation techniques for robust explanations. Furthermore, Schlegel et al. <ref type="bibr" target="#b19">[20]</ref> expanded the LIME approach by employing six distinct segmentation methods, but the challenge of understanding the significance of identified segments remains. Hence, we present a local model-agnostic Explainable Artificial Intelligence (XAI) approach, akin to LIME, tailored for elucidating deep learning time series classifiers. Our method effectively highlights crucial input data segments that significantly impact the black-box model's inferential process. Additionally, it provides insights into why these identified segments are important by describing their nature, such as whether they are increasing/decreasing events or local minima/maxima points on the time series input.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Method</head><p>This section introduces the proposed local model-agnostic XAI method tailored for time series classifiers. The steps involved in the approach are detailed in Figure <ref type="figure">1</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Figure 1:</head><p>Step-by-step illustration of the proposed approach.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Generating Neighbourhood Samples</head><p>Our approach distinguishes itself from existing methods by avoiding fixed-interval segmentation for interpreting time series classifiers. Instead, we employ random perturbation of segments in the original time series, offering a flexible and tailored approach to generating perturbed data. Each perturbed segment can be replaced with zeros, the mean of the segment, or the mean of the whole series. Importantly, perturbation is used solely to generate neighbourhood samples; rather than employing segments as features for the linear regression model, as detailed in Subsection 3.3, we utilise clusters of parameterised event primitives.</p></div>
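The perturbation step above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's code: the function name `perturb`, its parameters, and the mode labels are our own.

```python
import numpy as np

def perturb(x, n_samples=5, max_seg=10, mode="zero", rng=None):
    """Generate neighbourhood samples by perturbing one random segment of x.

    mode: "zero" replaces the segment with zeros, "segment_mean" with the
    segment's own mean, "series_mean" with the mean of the whole series.
    """
    rng = np.random.default_rng(rng)
    samples = []
    for _ in range(n_samples):
        z = x.copy()
        start = rng.integers(0, len(x) - 1)          # random segment start
        length = rng.integers(1, max_seg + 1)        # random segment length
        end = min(start + length, len(x))
        if mode == "zero":
            z[start:end] = 0.0
        elif mode == "segment_mean":
            z[start:end] = z[start:end].mean()
        else:  # "series_mean"
            z[start:end] = x.mean()
        samples.append(z)
    return np.stack(samples)
```

In practice one would draw many such samples per instance and vary both the segment positions and the replacement strategy, as Section 5 compares zero and mean replacements.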
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Distance Computation and Neighbourhood Weighting</head><p>After generating neighbouring samples through perturbation, we calculate the distance 𝑑 between the explained instance 𝑋_𝑖 and each neighbouring sample. We use dynamic time warping (DTW) as the distance metric, which is well suited to temporal data with varying speeds or time scales. Subsequently, we calculate the weight of each neighbouring instance with an exponential kernel, denoted 𝜋_{𝑋_𝑖}, which assigns higher weights to instances similar to 𝑋_𝑖. The exponential kernel is defined as:</p><formula xml:id="formula_0">𝜋_{𝑋_𝑖} = exp(−𝑑² / 𝜎²)</formula><p>Here, 𝜎 (sigma) is the bandwidth parameter that controls the width of the kernel. It regulates how quickly the weight assigned to neighbouring instances decreases with increasing distance from the instance being explained. Lower values of 𝜎 yield a narrower kernel, focusing on closer neighbours, while higher values result in a broader influence that also considers distant neighbours.</p></div>
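The DTW distance and the exponential kernel can be illustrated as follows. The paper does not specify a DTW implementation; libraries such as tslearn or dtaidistance would be used in practice, so the plain dynamic-programming version below is only a minimal stand-in, and `kernel_weight` mirrors the formula above.

```python
import numpy as np

def dtw_distance(a, b):
    """Plain O(n*m) dynamic time warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping moves
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def kernel_weight(d, sigma=1.0):
    """Exponential kernel pi_{X_i} = exp(-d^2 / sigma^2)."""
    return np.exp(-(d ** 2) / (sigma ** 2))
```

A neighbour identical to the explained instance receives weight 1, and the weight decays towards 0 as its DTW distance grows, at a rate controlled by 𝜎.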
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.3.">Transforming Perturbed Data via Parameterised Event Primitives (PEPs)</head><p>Parameterised Event Primitives (PEPs) are vital for capturing domain-specific events within the time series data. By extracting PEPs as shown in Figure <ref type="figure" target="#fig_2">2</ref>, we can effectively represent the temporal characteristics of events as parameters, thus facilitating the learning process for interpretable models such as linear regression and decision trees <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b21">22]</ref>. These PEPs encompass various event types: increasing and decreasing events, which capture parameters such as start time, duration, and average gradient value, and local maximum and minimum events, which capture time and corresponding value parameters. A structured three-step process was implemented to transform neighbouring samples in a manner conducive to training interpretable models. Initially, parameterised events were extracted from each time series sequence within the perturbed data. These events were encapsulated as tuples containing the relevant parameters. Subsequently, the parameterised events were flattened to enable the application of clustering algorithms, such as KMeans, resulting in the generation of distinct clusters. The optimal number of clusters was determined using the silhouette method. Finally, event attribution was carried out, mapping the extracted events to their respective clusters. This process yielded a matrix wherein each cell represented the count of events associated with a specific cluster for a given instance. The event attribution matrices for each parameterised event primitive were combined to create a tabular dataset suitable for training interpretable models.</p></div>
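The three-step transformation can be sketched as below. This is a simplified illustration under our own assumptions: `extract_peps` encodes increasing/decreasing runs as (start, duration, mean gradient) tuples and extrema as (time, value) tuples, and a tiny k-means stands in for sklearn's KMeans (the paper additionally selects k via the silhouette method, omitted here).

```python
import numpy as np

def extract_peps(x):
    """Step 1: extract parameterised event primitives from a 1-D series."""
    g = np.diff(x)
    events = {"inc": [], "dec": [], "max": [], "min": []}
    i = 0
    while i < len(g):                      # monotone runs -> inc/dec events
        if g[i] == 0:
            i += 1
            continue
        j = i
        while j < len(g) and np.sign(g[j]) == np.sign(g[i]):
            j += 1
        key = "inc" if g[i] > 0 else "dec"
        events[key].append((i, j - i, float(g[i:j].mean())))
        i = j
    for t in range(1, len(x) - 1):         # turning points -> max/min events
        if x[t] > x[t - 1] and x[t] > x[t + 1]:
            events["max"].append((t, float(x[t])))
        elif x[t] < x[t - 1] and x[t] < x[t + 1]:
            events["min"].append((t, float(x[t])))
    return events

def kmeans(points, k, iters=20, seed=0):
    """Step 2: tiny k-means over flattened event tuples (stand-in for sklearn)."""
    pts = np.asarray(points, dtype=float)
    rng = np.random.default_rng(seed)
    centres = pts[rng.choice(len(pts), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = ((pts[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        for c in range(k):
            if np.any(labels == c):
                centres[c] = pts[labels == c].mean(axis=0)
    return centres

def attribute(events, centres):
    """Step 3: count this instance's events per cluster -> one feature row."""
    counts = np.zeros(len(centres), dtype=int)
    if len(events):
        pts = np.asarray(events, dtype=float)
        labels = ((pts[:, None] - centres[None]) ** 2).sum(-1).argmin(1)
        for l in labels:
            counts[l] += 1
    return counts
```

Concatenating the per-event-type count rows across all neighbourhood samples yields the tabular dataset on which the interpretable model is trained.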
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.4.">Training Linear Model</head><p>In our approach, we utilise the transformed data, the black-box predictions of the neighbouring samples, and their corresponding weights to train interpretable models, similar to LIME. We employ ridge regression, a regularised linear model renowned for its interpretability and robustness. Ridge regression aims to minimise the following loss function:</p><formula xml:id="formula_1">β̂ = argmin_𝛽 ∑_{𝑧∈𝒵} 𝜋_{𝑋_𝑖}(𝑧) (ŷ_𝑧 − 𝑧 ⋅ 𝛽)² + 𝜆‖𝛽‖₂²</formula><p>Here, β̂ represents the optimised coefficients obtained by minimising the weighted sum of squared errors, 𝜋_{𝑋_𝑖}(𝑧) assigns a weight to each neighbouring sample 𝑧, ŷ_𝑧 is the probability score predicted by the classifier for the perturbed instance 𝑧, 𝑧 is a perturbed instance represented by its PEP-cluster features, and 𝜆 serves as the regularisation parameter, which governs the penalty imposed on the coefficients to prevent overfitting. In this case, the weights of the linear model, learnt using a least-squares procedure, denote the relative importance of each feature, i.e., each PEP cluster.</p><p>After training the interpretable linear model, we identify the most significant features based on their importance scores. Here, the features correspond to clusters of Parameterised Event Primitives (PEPs) such as increasing cluster1, increasing cluster2, decreasing cluster1, and so on. We then visualise the extracted events of the instance to be explained, which belong to the top clusters, as shown in Figure <ref type="figure" target="#fig_3">3</ref>.</p></div>
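The weighted ridge fit has a closed-form solution that can be written directly in NumPy. In practice sklearn's `Ridge` with `sample_weight` would do the same job; the small function below (our own sketch, not the paper's code) makes the correspondence to the loss function explicit.

```python
import numpy as np

def weighted_ridge(Z, y, w, lam=1.0):
    """Minimise sum_z w_z (y_z - z . beta)^2 + lam * ||beta||_2^2.

    Closed form: beta = (Z^T W Z + lam I)^(-1) Z^T W y,
    where Z holds the PEP-cluster feature rows, y the black-box
    probability scores, and w the kernel weights pi_{X_i}(z).
    """
    Zw = Z * w[:, None]                            # scale each row by its weight
    A = Z.T @ Zw + lam * np.eye(Z.shape[1])        # Z^T W Z + lam I
    return np.linalg.solve(A, Zw.T @ y)            # solve for beta
```

The entries of the returned coefficient vector are the importance scores attached to each PEP cluster in the explanation.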
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Experimental Setup</head><p>Our preliminary experiment evaluated our method on two widely used univariate time series datasets: ECG200 and GunPoint from the UCR Archive <ref type="bibr" target="#b22">[23]</ref>, a renowned repository for time series classification tasks.</p><p>Our method provided local explanations for a black-box model, a Fully Convolutional Network (FCN), built using the PyTorch-based tsai library <ref type="bibr" target="#b23">[24]</ref>. The FCN was configured with default kernel sizes 7, 5, 3 and filter sizes 128, 256, 128 for its convolutional layers. The datasets were partitioned into training sets (70%), validation sets (15%), and test sets (15%) to facilitate robust evaluation. We used early stopping during training to avoid overfitting, with a patience parameter set to 15 and a minimum delta of 0.001. Additionally, to ensure accuracy and stability, the model was trained 100 times with randomised splits for training, validation, and testing. The FCN achieved an average accuracy of 85% and 86% on the ECG200 dataset for training and testing, respectively, and 98% for both the validation and testing sets of the GunPoint dataset.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Result and Discussion</head><p>In this section, we present the results of our experiments and discuss their implications. The method was deployed to offer local explanations for predictions generated by a deep learning-based time series classifier, with fidelity metrics evaluating the faithfulness of these explanations. We computed the local fidelity score across the different replacement methods used in the perturbation that generates neighbouring samples. From each dataset, 100 instances were randomly selected from the test set, and the average fidelity score and standard deviation were calculated. Table <ref type="table" target="#tab_0">1</ref> presents the fidelity scores obtained using the zero and mean replacement methods. In the ECG200 dataset, the fidelity scores were 0.76 and 0.67 for the zero and mean replacements, respectively. Similarly, in the GunPoint dataset, the fidelity scores were 0.64 and 0.44 for zero and mean replacements, respectively. These results indicate that our method demonstrates varying fidelity across different datasets and replacement methods. The higher fidelity scores obtained using zero replacement suggest that this method better preserves the local interpretability of the model predictions compared to mean replacement. Furthermore, the observed standard deviations highlight the variability in the fidelity scores, indicating potential sensitivity to perturbation methods. This underscores the importance of careful consideration when selecting perturbation techniques to ensure reliable and consistent explanations. 
The explanation produced by our method, as depicted in Figure <ref type="figure" target="#fig_3">3</ref>, not only highlights the significance of each part of the time series instance for the black box model's decision-making process but also provides the relevance score associated with each segment or point, along with the types of events, such as increasing, decreasing, local maximum, and local minimum. Overall, our results demonstrate the effectiveness of our method in providing local explanations for predictions of deep learning-based time series classifiers. However, further analysis and experiments are needed to fully understand the factors influencing fidelity and optimize our approach for broader applications.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion</head><p>Our XAI method, incorporating random perturbation and transformation using parameterized event primitives, shows promising results in enhancing interpretability for time series classifiers. While our current experiment has focused on two univariate time series datasets, future research will extend to other univariate and multivariate data to widen its applicability. Further exploration into diverse perturbation techniques and comparative analyses with existing methods will provide a comprehensive understanding of our approach's effectiveness. Overall, our method contributes to advancing explainable AI in time series classification, offering valuable insights into model predictions with ongoing efforts for refinement and expansion.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>( a )</head><label>a</label><figDesc>Increasing and decreasing events.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>Local max and local min events.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Examples of events extracted from a single time series in the ECG200 dataset: (a) increasing and decreasing events; (b) local max and local min events.</figDesc><graphic coords="5,89.29,93.09,187.52,155.82" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: The explanation generated by the method highlights segment significance, relevance scores, and event types (e.g., increasing, decreasing, local maximum, local minimum) in the time series data for the black box model.</figDesc><graphic coords="6,119.62,413.22,356.04,212.40" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="3,89.29,147.56,416.68,256.62" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>Mean and standard deviation of explanation faithfulness across various perturbation replacement methods on ECG200 and GunPoint datasets.</figDesc><table><row><cell>Dataset</cell><cell>Zero (Std)</cell><cell>Mean (Std)</cell></row><row><cell>ECG200</cell><cell>0.76 (0.08)</cell><cell>0.67 (0.10)</cell></row><row><cell>GunPoint</cell><cell>0.64 (0.10)</cell><cell>0.44 (0.17)</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">At-lstm: An attention-based lstm model for financial time series prediction</title>
		<author>
			<persName><forename type="first">X</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Liang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zhiyuli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IOP Conference Series: Materials Science and Engineering</title>
				<imprint>
			<publisher>IOP Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">569</biblScope>
			<biblScope unit="page">52037</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Arrhythmia classification of lstm autoencoder based on time series anomaly detection</title>
		<author>
			<persName><forename type="first">P</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>He</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Biomedical Signal Processing and Control</title>
		<imprint>
			<biblScope unit="volume">71</biblScope>
			<biblScope unit="page">103228</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Deep learning for ecg analysis: Benchmarks and insights from ptb-xl</title>
		<author>
			<persName><forename type="first">N</forename><surname>Strodthoff</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Wagner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Schaeffter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Journal of Biomedical and Health Informatics</title>
		<imprint>
			<biblScope unit="volume">25</biblScope>
			<biblScope unit="page" from="1519" to="1528" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Deep neural networks for time series classification in human activity recognition</title>
		<author>
			<persName><forename type="first">S</forename><surname>Joshi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Abdelfattah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 12th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="559" to="566" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">An energy-efficient dual prediction scheme using lms filter and lstm in wireless sensor networks for environment monitoring</title>
		<author>
			<persName><forename type="first">T</forename><surname>Shu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><forename type="middle">K</forename><surname>Bhargava</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">W</forename><surname>Silva</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Internet of Things Journal</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="6736" to="6747" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Explainable ai for time series classification: A review, taxonomy and research directions</title>
		<author>
			<persName><forename type="first">A</forename><surname>Theissler</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Spinnato</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Schlegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Towards a rigorous evaluation of xai methods on time series</title>
		<author>
			<persName><forename type="first">U</forename><surname>Schlegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Arnout</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>El-Assady</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Oelke</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Keim</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="4197" to="4201" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Why should I trust you? explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</title>
				<meeting>the 22nd ACM SIGKDD international conference on knowledge discovery and data mining</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brcic</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Explainable artificial intelligence (xai) 2.0: A manifesto of open challenges and interdisciplinary research directions</title>
				<imprint>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page">102301</biblScope>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">K</forename><surname>Simonyan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Vedaldi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Zisserman</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1312.6034</idno>
		<title level="m">Deep inside convolutional networks: Visualising image classification models and saliency maps</title>
				<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A unified approach to interpreting model predictions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bach</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Binder</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montavon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Klauschen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K.-R</forename><surname>Müller</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">PloS one</title>
		<imprint>
			<biblScope unit="volume">10</biblScope>
			<biblScope unit="page">e0130140</biblScope>
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Salience-cam: Visual explanations from convolutional neural networks via salience score</title>
		<author>
			<persName><forename type="first">L</forename><surname>Zhou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Ma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">X</forename><surname>Shi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Zhang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Li</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Wu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2021 International Joint Conference on Neural Networks (IJCNN)</title>
		<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="8" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">TSViz: Demystification of deep learning models for time-series analysis</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Siddiqui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mercier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Munir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dengel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">7</biblScope>
			<biblScope unit="page" from="67027" to="67040" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">TSXplain: Demystification of DNN decisions for time-series using natural language and statistical features</title>
		<author>
			<persName><forename type="first">M</forename><surname>Munir</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">A</forename><surname>Siddiqui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Küsters</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Mercier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Dengel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ahmed</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial Neural Networks and Machine Learning-ICANN 2019: Workshop and Special Sessions: 28th International Conference on Artificial Neural Networks</title>
				<meeting><address><addrLine>Munich, Germany</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2019">September 17-19, 2019</date>
			<biblScope unit="page" from="426" to="439" />
		</imprint>
	</monogr>
	<note>Proceedings 28</note>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Vielhaben</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lapuschkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Montavon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.06365</idno>
		<title level="m">Explainable AI for time series via virtual inspection layers</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Agnostic local explanation for time series classification</title>
		<author>
			<persName><forename type="first">M</forename><surname>Guillemé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Masson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rozé</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Termier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">IEEE 31st international conference on tools with artificial intelligence (ICTAI)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="432" to="439" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Interpretable heartbeat classification using local model-agnostic explanations on ECGs</title>
		<author>
			<persName><forename type="first">I</forename><surname>Neves</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Folgado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Santos</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Computers in Biology and Medicine</title>
		<imprint>
			<biblScope unit="volume">133</biblScope>
			<biblScope unit="page">104393</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">LIMESegment: Meaningful, realistic time series explanations</title>
		<author>
			<persName><forename type="first">T</forename><surname>Sivill</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Flach</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics</title>
		<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="3418" to="3433" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">TS-MULE: Local interpretable model-agnostic explanations for time series forecast models</title>
		<author>
			<persName><forename type="first">U</forename><surname>Schlegel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">L</forename><surname>Vo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">A</forename><surname>Keim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Seebacher</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Joint European Conference on Machine Learning and Knowledge Discovery in Databases</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="5" to="14" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Learning comprehensible descriptions of multivariate time series</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">W</forename><surname>Kadous</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">ICML</title>
		<imprint>
			<biblScope unit="page" from="454" to="463" />
			<date type="published" when="1999">1999</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Explaining deep learning time series classification models using a decision tree-based post-hoc XAI method</title>
		<author>
			<persName><forename type="first">E</forename><forename type="middle">T</forename><surname>Mekonnen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dondio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">CEUR Workshop Proceedings</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">The UCR time series archive</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">A</forename><surname>Dau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bagnall</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kamgar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C.-C</forename><forename type="middle">M</forename><surname>Yeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Zhu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Gharghabi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">A</forename><surname>Ratanamahatana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Keogh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE/CAA Journal of Automatica Sinica</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="1293" to="1305" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<title level="m" type="main">tsai - a state-of-the-art deep learning library for time series and sequential data</title>
		<author>
			<persName><forename type="first">I</forename><surname>Oguiza</surname></persName>
		</author>
		<ptr target="https://github.com/timeseriesAI/tsai" />
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>Github</publisher>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
