<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>World Conference on eXplainable Artificial
Intelligence: July</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Explaining Time Series Classifiers Through Post-Hoc XAI Methods Capturing Temporal Dependencies</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Ephrem T. Mekonnen</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence and Cognitive Load Research Lab, Technological University Dublin</institution>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>School of Computer Science, Technological University Dublin</institution>
          ,
          <country country="IE">Ireland</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <volume>0</volume>
      <fpage>9</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>Time series classification is essential in domains such as healthcare and finance, where accurate predictions can have significant real-world consequences. However, in many high-stakes applications, understanding why a model makes a certain decision is just as important as the prediction itself. While deep learning models excel at capturing complex temporal patterns, their black-box nature limits transparency, making it difficult to trust and interpret their decisions. Although eXplainable AI (XAI) methods have advanced considerably for image and tabular data, applying them to time series remains challenging due to the intricate temporal dependencies and high dimensionality of the data. Post-hoc model-agnostic XAI techniques offer a promising solution by providing explanations without altering the underlying model. This research focuses on developing novel post-hoc model-agnostic XAI methods specifically for time series classifiers. By elucidating prediction processes while preserving temporal structures, these methods seek to enhance interpretability and trust, thereby facilitating informed decision-making in high-stakes applications.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable Artificial Intelligence</kwd>
        <kwd>Time series</kwd>
        <kwd>Model-agnostic</kwd>
        <kwd>Post-hoc</kwd>
        <kwd>Deep learning</kwd>
        <kwd>XAI</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Time series data are crucial in many real-world applications, from healthcare and finance to
environmental monitoring. With the growing availability of such data, machine learning models, particularly
deep learning approaches, have demonstrated impressive performance in classifying complex temporal
patterns. However, these models often function as "black boxes," making it difficult to understand their
decision-making processes. In high-stakes scenarios, where interpretability and accountability are
essential, this lack of transparency poses significant challenges. Explainable Artificial Intelligence (XAI)
has emerged as a line of research aimed at addressing these issues by developing methods that provide
insight into model predictions [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>
        Although XAI has seen significant advances, most methods have been developed for image and tabular
data, which do not share the same characteristics as time series data. The high dimensionality and strong
temporal dependencies of time series pose unique challenges that many existing XAI techniques struggle
to handle. Popular approaches such as Local Interpretable Model-agnostic Explanations (LIME) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ],
Saliency Maps [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and Layer-wise Relevance Propagation (LRP) [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] have been adapted from computer
vision tasks [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. However, these methods often rely on visual heatmaps, which can be difficult to
interpret and are mainly designed for developers rather than end users [7, 8]. Furthermore, feature
attribution techniques such as LIME and SHAP tend to ignore temporal dependencies, treating each
time step or segment as an independent feature [
        <xref ref-type="bibr" rid="ref2 ref6">6, 2</xref>
        ].
      </p>
      <p>This research aims to develop novel post-hoc, model-agnostic XAI methods specifically tailored for
deep learning-based time series classifiers. By addressing the unique challenges of interpretability in
time series classification, this study seeks to fill a critical gap in the XAI landscape.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related Work</title>
      <p>
        Research in Explainable AI (XAI) for time series classification has evolved alongside broader
developments in XAI. Nonetheless, the majority of existing techniques have primarily been designed for
static data types, such as images and tabular data, which do not encapsulate the temporal dependencies
inherent in time series data. Consequently, adapting these techniques to effectively address the unique
challenges presented by temporal dependencies remains a non-trivial task [
        <xref ref-type="bibr" rid="ref2">2</xref>, 7
        ].
      </p>
      <p>XAI methods can be categorised based on several criteria, one of which distinguishes between
model-specific and model-agnostic approaches. Model-specific methods leverage the internal mechanisms of
black-box models, whereas model-agnostic methods are applicable across various model architectures.
Notably, many of these methods serve as post-hoc explanations, offering interpretability after model
training without requiring changes to the underlying model.</p>
      <sec id="sec-2-1">
        <title>2.1. Model-Specific Methods</title>
        <p>A range of model-specific methods has been employed to elucidate black-box models trained on time
series data. For instance, Class Activation Mapping (CAM), introduced by Zhou et al. [9], has been
adapted for explaining convolutional neural network (CNN)-based models in time series classification.
CAM identifies class-relevant regions by projecting weighted activations from the final convolutional
layer onto a feature map. However, its implementation necessitates a specific CNN architecture featuring
Global Average Pooling (GAP), thus limiting its broader applicability. In contrast, Grad-CAM [10]
extends CAM by utilising gradient information from the last convolutional layer to identify critical
regions and generate saliency maps, making it more adaptable to various CNN architectures without
strict architectural constraints.</p>
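        <p>The CAM computation described above reduces, per class, to a weighted sum of the final convolutional layer's activation maps. A minimal numpy sketch with synthetic stand-in activations (all names, shapes, and values below are illustrative, not taken from the original CAM implementation):</p>

```python
import numpy as np

# Synthetic stand-ins: K filter activation maps of length T from the final
# convolutional layer of a 1-D CNN, and the GAP-layer weights for one class.
rng = np.random.default_rng(0)
K, T = 4, 50
activations = rng.random((K, T))   # A_k(t), one map per filter
class_weights = rng.random(K)      # w_k^c for the class of interest

# CAM: class-weighted sum of activation maps, one relevance score per time step.
cam = class_weights @ activations  # shape (T,)

# Normalise to [0, 1] so it can be rendered as a saliency curve over the series.
cam = (cam - cam.min()) / (cam.max() - cam.min())
print(cam.shape)
```

        <p>Grad-CAM replaces the GAP-layer weights with gradients of the class score with respect to the activation maps, which is why it drops the architectural constraint.</p>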
        <p>
          Schlegel et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] have also investigated several standard XAI techniques, including Layer-wise
Relevance Propagation (LRP) [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], to interpret deep learning-based time series classification models.
Specifically, the work presented in [11] introduced DFT-LRP, a tailored variant of LRP designed to
address the complexities associated with time series analysis by incorporating a virtual inspection
layer. Despite these advancements, many of these explanations tend to be developer-centric and rely
on latent layer activations, whereas end users often require higher-level, abstract explanations that
enhance overall interpretability [12]. Most XAI methods focus on local explanations, but some also
provide global insights for time series classifiers. For example, [13] extends CAM to all training samples
within a class, generating an average CAM to highlight key discriminative features. Similarly, TsViz
[14] identifies important input regions and evaluates filter importance for a given prediction. It also
derives global insights by clustering filters with similar activation patterns, as they likely capture the
same underlying concepts.
        </p>
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Model-Agnostic Methods</title>
        <p>
          Conversely, model-agnostic methods, such as LIME [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ] and SHAP [15], offer greater applicability across
different model types and are particularly relevant in a post-hoc context. LIME approximates complex
model predictions by creating a locally interpretable model, such as a linear classifier, around the
instance to be explained. This process begins with the generation of a dataset of perturbed samples
near the target instance, using the original model’s predictions for these samples. A linear model is
subsequently trained on this dataset, with samples weighted according to their proximity to the target
instance, yielding feature importance scores as regression coefficients.
        </p>
        <p>
          SHAP, rooted in cooperative game theory, explains individual predictions by attributing feature
contributions through Shapley values. Although originally developed for image and tabular data,
Schlegel et al. [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] adapted SHAP methods for interpreting time series classifiers. However, this
adaptation often overlooks the temporal dependencies present in time series data, as each time step is
treated as an independent feature. To mitigate these limitations, Guilleme et al. [16] and Neves et al.
[17] have tailored LIME for deep learning-based time series classification by utilising longer segments
for perturbation. However, these approaches are constrained by fixed window sizes, which can limit
their effectiveness.
        </p>
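        <p>For intuition, Shapley values can be computed exactly by brute force for a tiny feature set; the toy value function below, with an interaction between features 0 and 1, is purely illustrative:</p>

```python
import numpy as np
from itertools import combinations
from math import factorial

# Toy value function v(S): model payoff when only features in S are present.
def v(S):
    S = set(S)
    out = 0.0
    if 0 in S:
        out += 1.0
    if 1 in S:
        out += 2.0
    if 0 in S and 1 in S:
        out += 0.5  # interaction term, split evenly by the Shapley axioms
    return out

n = 3
phi = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    for r in range(n):
        for S in combinations(others, r):
            # Shapley kernel weight |S|! (n-|S|-1)! / n!
            weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
            phi[i] += weight * (v(S + (i,)) - v(S))

print(phi)  # contributions sum to v(all features) = 3.5
```

        <p>The exact computation is exponential in the number of features, which is why practical SHAP implementations rely on sampling or model-specific approximations.</p>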
        <p>Additionally, Sivill et al. [18] introduced the NNsegment method, which identifies homogeneous
regions within time series data and employs diverse perturbation techniques to yield more robust
explanations. Further, Schlegel et al. [12] expanded LIME by incorporating six segmentation methods;
however, understanding the significance of the identified segments continues to present challenges.</p>
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Challenges in Visualisation and Interpretation</title>
        <p>Moreover, many attribution-based and attention-based XAI methods [19] predominantly utilise
heatmaps to visualise feature attributions. While such visual representations can be advantageous
for domain experts, they often pose significant interpretability challenges for general users [8]. This
underscores the necessity for more intuitive and user-friendly explanation techniques that extend
beyond traditional heatmaps, facilitating clearer reasoning behind model predictions.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Research Questions and Objectives</title>
      <p>The overall goal of this research is to develop novel model-agnostic post-hoc explainable artificial
intelligence (XAI) methods specifically designed for time series classifiers by leveraging Parameterised
Event Primitives (PEPs). PEPs provide a structured approach to defining and extracting events in a time
series. An event is a specific pattern or behaviour expected to occur within the domain. Extracting PEPs
from time series data enables the representation of temporal characteristics as parameters, facilitating
the training of interpretable models such as decision trees [20]. These events can be characterised
using PEPs, which include increasing and decreasing trends defined by parameters such as start time,
duration, and average gradient, as well as local maxima and minima, which are characterised by the time
they occur and their corresponding values. This parameterisation offers an intuitive and meaningful
representation of temporal structures, enhancing the interpretability of time series models.</p>
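      <p>A minimal sketch of PEP extraction along the lines just described (the function name and tuple layouts are my own illustrative choices): maximal monotone runs become (kind, start, duration, mean gradient) events, and local extrema become (kind, time, value) events.</p>

```python
import numpy as np

def extract_peps(series):
    """Extract simple Parameterised Event Primitives from a 1-D series."""
    diffs = np.diff(series)
    signs = np.sign(diffs)
    peps = []
    # Increasing / decreasing trends: maximal runs of same-sign differences,
    # parameterised by start time, duration, and average gradient.
    start = 0
    for t in range(1, len(signs) + 1):
        if t == len(signs) or signs[t] != signs[start]:
            if signs[start] != 0:
                kind = "increasing" if signs[start] > 0 else "decreasing"
                peps.append((kind, start, t - start, float(np.mean(diffs[start:t]))))
            start = t
    # Local maxima / minima, parameterised by the time they occur and value.
    for t in range(1, len(series) - 1):
        if series[t] > series[t - 1] and series[t] > series[t + 1]:
            peps.append(("local_max", t, float(series[t])))
        if series[t] < series[t - 1] and series[t] < series[t + 1]:
            peps.append(("local_min", t, float(series[t])))
    return peps

print(extract_peps(np.array([0.0, 1.0, 2.0, 1.0, 0.0, 1.0])))
```
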
      <p>Furthermore, PEPs offer a significant advantage by eliminating the need for explicit segmentation of
time series data. Segmentation can be complex and subjective, often overlooking critical patterns that
exist between segments [12]. Conversely, treating each time step as an independent feature obscures
the essential temporal dynamics that characterise time series, potentially leading to misinterpretations
of the underlying patterns. This study is structured around the following key question:</p>
      <p>How can model-agnostic XAI methods be designed to provide interpretable, faithful
explanations while capturing temporal dynamics in the data for black-box time-series classifiers?</p>
      <list list-type="bullet">
        <list-item>
          <p>To develop a global model-agnostic XAI method that approximates the inference process of deep
learning-based time-series classifiers using decision trees, generating interpretable rule-based
explanations while preserving temporal dependencies.</p>
        </list-item>
        <list-item>
          <p>To design a local post-hoc XAI method that generates instance-specific explanations for
predictions made by time series classifiers, effectively capturing the temporal dynamics of the data.</p>
        </list-item>
        <list-item>
          <p>To identify and establish a set of evaluation metrics specifically tailored to assess the faithfulness
and interpretability of the explanations generated by the proposed XAI methods for time-series classifiers.</p>
        </list-item>
        <list-item>
          <p>To investigate and develop methodologies for constructing global explanations for time series
classifiers by aggregating local explanations, ensuring that the process maintains interpretability
and adequately captures the temporal dependencies present in the data.</p>
        </list-item>
      </list>
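      <p>As one concrete example of a faithfulness metric, fidelity can be measured as the fraction of instances on which a surrogate reproduces the black-box label; this is a common definition, and the exact metrics used in this work may differ:</p>

```python
import numpy as np

def fidelity(black_box_preds, surrogate_preds):
    """Fraction of instances where the surrogate agrees with the black box."""
    bb = np.asarray(black_box_preds)
    sg = np.asarray(surrogate_preds)
    return float(np.mean(bb == sg))

# The surrogate matches the black box on 3 of 4 instances.
print(fidelity([0, 1, 1, 0], [0, 1, 0, 0]))  # 0.75
```
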
    </sec>
    <sec id="sec-4">
      <title>4. Research Plan and Methodology</title>
      <p>To achieve these objectives, this research uses publicly available time series datasets from the UCR
archive [21] with minimal preprocessing. The study involves training a time series classifier and
evaluating its performance before applying the proposed XAI methods. The research is structured into
three main work packages; the first two have been completed to date, while the third is planned
for the future. An overview of the research plan is provided in Figure 1.</p>
      <p>[Figure 1: Overview of the research plan: dataset preparation and preprocessing, classifier training,
and development of the XAI methods: a global model-agnostic method generating rules or decision tree
graphs, a local model-agnostic method providing visual explanations in human-intuitive terms
(key subsequences and points with relevance scores), and local-to-global explanation synthesis (L2G-XAI)
aggregating multiple local explanations. All methods use Parameterised Event Primitives (PEPs) to capture
temporal dependencies and avoid segmentation, unlike existing methods. Evaluation covers objective
metrics (accuracy, complexity, robustness, fidelity) and comparison with other XAI methods (LIME, SHAP,
IG) using performance-decrease metrics.]</p>
      <p>The first work package focuses on developing a global model-agnostic XAI method that provides
explanations in the form of decision rules or decision tree graphs. This process begins with initial data
preprocessing to train and evaluate the time series classifier. Next, the test set is transformed using
Parameterised Event Primitives (PEPs) to train the decision tree. This involves extracting PEPs from the
test set and clustering them using an appropriate algorithm. The frequency of events belonging to each
cluster is then counted to create a dataframe, where columns represent clusters and rows represent
instances. The dataframes for the various PEPs, including increasing and
decreasing trends, as well as local maxima and minima, are subsequently merged. Finally, the decision
tree is trained and evaluated using the transformed test set and the model’s predictions. The method
was rigorously evaluated using accuracy, fidelity, complexity, and robustness.</p>
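      <p>A compressed sketch of this pipeline, using k-means and a shallow decision tree as stand-ins for the clustering algorithm and interpretable surrogate (event vectors, counts, and labels below are arbitrary illustrations, not the actual datasets):</p>

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in PEP parameter vectors (e.g. start, duration, gradient) for 100
# events extracted from 20 instances; instance_ids maps each event back.
events = rng.random((100, 3))
instance_ids = rng.integers(0, 20, size=100)

# 1. Cluster the extracted events in parameter space.
k = 5
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(events)

# 2. Instance-by-cluster frequency table: rows are instances, columns are
#    event clusters, cells count how often each cluster occurs per instance.
freq = np.zeros((20, k))
for inst, lab in zip(instance_ids, labels):
    freq[inst, lab] += 1

# 3. Train the interpretable surrogate on the black-box model's predictions
#    (a stand-in label vector here) to obtain a rule-based explanation.
bb_preds = rng.integers(0, 2, size=20)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(freq, bb_preds)
print(tree.get_depth())
```
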
      <p>The second work package, currently under review, focuses on developing LOMATCE (Local
Model-Agnostic Time-Series Classification Explanation), a local model-agnostic explainable artificial
intelligence (XAI) method that generates instance-specific explanations for time series classifiers without
requiring segmentation. LOMATCE begins by creating neighbouring instances around the instance
to be explained and computes weights for these instances based on their distance from the original
instance, elucidating their influence on the model’s prediction. The neighbouring samples are
transformed using Parameterised Event Primitives (PEPs), following a sequence of steps similar to the
global model-agnostic method. Using the transformed data, the method employs the weights of the
neighbouring samples along with predictions from the trained black-box classifier to train a linear
regression model. The top clusters are then identified from the coefficients of this surrogate
model. Finally, the events extracted from the original instance that belong to
the identified top clusters are highlighted, along with their corresponding importance scores, thereby
providing a clear and interpretable explanation of the original instance’s prediction.</p>
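      <p>The final attribution step can be illustrated in isolation: given surrogate coefficients per PEP cluster and the cluster membership of the instance's own events (both arrays below are made up for illustration), the events in the top-ranked cluster are highlighted with the coefficient as their importance score:</p>

```python
import numpy as np

# Made-up surrogate coefficients, one per PEP cluster, from the weighted
# linear regression, and the cluster assigned to each event of the instance.
coefs = np.array([0.1, -0.6, 1.4, 0.05])
event_clusters = [2, 0, 2, 1]

# Rank clusters by absolute coefficient and take the most influential one.
top = int(np.argmax(np.abs(coefs)))

# Highlight the instance's events in that cluster, with importance scores.
highlighted = [(i, float(coefs[top])) for i, c in enumerate(event_clusters) if c == top]
print(highlighted)  # [(0, 1.4), (2, 1.4)]
```
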
      <p>
        The third work package will explore the aggregation of local explanations generated by LOMATCE
to construct global explanations, referred to as Local to Global eXplanation (L2GX). Drawing
inspiration from Submodular-Pick LIME (SP-LIME) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], L2GX first generates local explanations for each
instance using LOMATCE, identifying the most important PEP clusters along with their corresponding
importance scores. Similar clusters are merged to enhance coherence and reduce redundancy. An
instance-cluster matrix is then constructed, where rows represent instances, columns denote clusters,
and each cell captures the importance score of a given cluster for the respective instance. This matrix
serves as the foundation for computing global importance scores. Using a predefined budget B, L2GX
employs submodular optimisation techniques to select the top B instances that maximise coverage of
clusters with positive importance scores, ensuring diverse contributions of information. Finally, PEPs
from the selected instances are extracted, retaining only those belonging to the identified clusters as
global explanations. This method will be evaluated based on faithfulness using fidelity metrics and
compared to other XAI methods using performance drop metrics.
      </p>
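      <p>A minimal sketch of the submodular pick, assuming a plain coverage objective over clusters with positive importance (the matrix and budget are illustrative; SP-LIME's actual objective additionally weights clusters by their global importance):</p>

```python
import numpy as np

# Illustrative instance-by-cluster importance matrix from the local method:
# rows are instances, columns are PEP clusters.
M = np.array([
    [0.9, 0.0, 0.0, 0.2],
    [0.0, 0.8, 0.0, 0.0],
    [0.7, 0.1, 0.6, 0.0],
    [0.0, 0.0, 0.5, 0.4],
])

def greedy_pick(M, budget):
    """Greedily select `budget` instances maximising coverage of clusters
    with positive importance (a coverage objective is submodular, so the
    greedy choice carries the usual 1 - 1/e approximation guarantee)."""
    covered = np.zeros(M.shape[1], dtype=bool)
    picked = []
    for _ in range(budget):
        gains = [np.sum((M[i] > 0) & ~covered) if i not in picked else -1
                 for i in range(M.shape[0])]
        best = int(np.argmax(gains))
        picked.append(best)
        covered |= M[best] > 0
    return picked

print(greedy_pick(M, 2))  # instance 2 first (covers three clusters), then 0
```
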
      <p>To the best of my knowledge, the proposed global model-agnostic XAI method represents the first
attempt to generate global explanations for time series classifiers in the form of rules or decision tree
graphs. As illustrated in Figure 1, this study does not include a comparative analysis with existing
methods but rather focuses on the development and evaluation of novel techniques tailored for time
series classification.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Results and Contributions to Date</title>
      <p>To date, I have been focusing on the first and second work packages, specifically the development of
global and local model-agnostic XAI methods tailored for time series classifiers. One of the contributions
is a global model-agnostic XAI method designed to generate explanations in the form of decision rules or
decision graphs. This approach enhances the interpretability of model predictions by clearly identifying
the time steps that significantly influence outcomes. Initial findings were showcased as late-breaking
work at XAI-2023 [22], followed by a full paper published in Frontiers in Artificial Intelligence [23].
The effectiveness of this method was evaluated using various objective metrics, including accuracy,
fidelity, depth, number of nodes, and robustness, demonstrating that the decision tree graph effectively
highlights crucial time steps, thereby facilitating a better understanding of the model’s predictions (see
Figure 2).</p>
      <p>Additionally, a local model-agnostic XAI method known as LOMATCE has been introduced, with
preliminary results presented at XAI-2024 [24]. A full paper is currently under review at the IEEE
Transactions on Artificial Intelligence (IEEE-TAI). LOMATCE provides instance-specific explanations
by leveraging Parameterised Event Primitives (PEPs) to capture temporal dependencies and train a
simple linear surrogate model. This method effectively captures essential patterns, such as increasing
and decreasing trends, as well as local maxima and minima, thereby offering valuable insights into the
model’s decision-making process (illustrated in Figure 3). The explanations generated by LOMATCE
were compared with those produced by established methods, including Integrated Gradients (IG), LIME,
and SHAP. For visual comparison, Figures 4 and 5 present two approaches to applying LIME and
SHAP for time series data: (1) the segment-based approach, which involves dividing the time series
into frames and assigning relevance to each segment. However, this method may overlook important
relationships between segments and can yield different explanations depending on the selected segment
width, thereby complicating the determination of optimal segmentation; and (2) the approach that treats
each time step as a separate feature, which can obscure critical temporal dynamics. An evaluation of
LOMATCE across various perturbation strategies, which are used to generate neighbouring samples
around the instance to be explained, indicated that the choice of perturbation method plays a crucial
role in ensuring the faithfulness of the explanations. Comparative analysis shows that LOMATCE
performs competitively across diverse datasets, occasionally outperforming LIME and SHAP in terms
of both interpretability and accuracy.</p>
      <p>These ongoing efforts represent significant strides toward improving the transparency and
interpretability of time series classifiers while effectively addressing the challenge of capturing temporal
dependencies.</p>
    </sec>
    <sec id="sec-6">
      <title>6. Next Research Step and Expected Final Contribution</title>
      <p>The next phase of this research involves implementing the Local-to-Global eXplanation (L2GX) method,
which aggregates local explanations to generate coherent global insights while preserving
interpretability. Furthermore, the proposed methods will be rigorously validated across diverse univariate time
series datasets to assess their robustness and extended to multivariate time series, addressing the added
complexity of interdependent temporal variables and their dynamic relationships.</p>
      <p>The expected final contribution of this research is the development of novel model-agnostic
explainable AI (XAI) methods tailored for time series classifiers. A key component of this work is LOMATCE,
a local model-agnostic XAI method that generates faithful, instance-specific explanations while
capturing temporal dependencies without requiring predefined segmentation. Building on this, a second
global method constructs global explanations by aggregating local explanations from selected instances,
ensuring that the broader model behaviour is captured while preserving temporal dynamics.
Additionally, a separate global rule-based XAI method approximates the decision-making process of black-box
models using decision trees, providing interpretable explanations in the form of decision rules. These
contributions aim to enhance the transparency, reliability, and interpretability of time series classifiers,
facilitating their adoption in real-world applications where explainability is critical.</p>
    </sec>
    <sec id="sec-7">
      <title>Declaration on Generative AI</title>
      <p>The author has not employed any Generative AI tools.</p>
      <p>[7] T. Rojat, R. Puget, D. Filliat, J. Del Ser, R. Gelin, N. Díaz-Rodríguez, Explainable artificial intelligence
(xai) on timeseries data: A survey, arXiv preprint arXiv:2104.00950 (2021).
[8] J. V. Jeyakumar, J. Noor, Y.-H. Cheng, L. Garcia, M. Srivastava, How can i explain this to you? an
empirical study of deep neural network explanation methods, Advances in Neural Information
Processing Systems 33 (2020) 4211–4222.
[9] B. Zhou, A. Khosla, A. Lapedriza, A. Oliva, A. Torralba, Learning deep features for discriminative
localization, in: Proc. of the IEEE conference on computer vision and pattern recognition, 2016,
pp. 2921–2929.
[10] R. R. Selvaraju, M. Cogswell, A. Das, R. Vedantam, D. Parikh, D. Batra, Grad-cam: visual
explanations from deep networks via gradient-based localization, International Journal of Computer
Vision 128 (2020) 336–359.
[11] J. Vielhaben, S. Lapuschkin, G. Montavon, W. Samek, Explainable ai for time series via virtual
inspection layers, arXiv preprint arXiv:2303.06365 (2023).
[12] U. Schlegel, D. L. Vo, D. A. Keim, D. Seebacher, Ts-mule: Local interpretable model-agnostic
explanations for time series forecast models, in: Joint European Conference on Machine Learning
and Knowledge Discovery in Databases, Springer, 2021, pp. 5–14.
[13] F. Oviedo, Z. Ren, S. Sun, C. Settens, Z. Liu, N. T. P. Hartono, S. Ramasamy, B. L. DeCost, S. I. Tian,
G. Romano, et al., Fast and interpretable classification of small x-ray diffraction datasets using
data augmentation and deep neural networks, npj Computational Materials 5 (2019) 60.
[14] S. A. Siddiqui, D. Mercier, M. Munir, A. Dengel, S. Ahmed, Tsviz: Demystification of deep learning
models for time-series analysis, IEEE Access 7 (2019) 67027–67040.
[15] S. M. Lundberg, S.-I. Lee, A unified approach to interpreting model predictions, Advances in
Neural Information Processing Systems 30 (2017).
[16] M. Guillemé, V. Masson, L. Rozé, A. Termier, Agnostic local explanation for time series classification,
in: 2019 IEEE 31st Int. Conf. on Tools with Artificial Intelligence (ICTAI), IEEE, 2019, pp. 432–439.
[17] I. Neves, D. Folgado, S. Santos, M. Barandas, A. Campagner, L. Ronzio, F. Cabitza, H. Gamboa,
Interpretable heartbeat classification using local model-agnostic explanations on ecgs, Computers
in Biology and Medicine 133 (2021) 104393.
[18] T. Sivill, P. Flach, Limesegment: Meaningful, realistic time series explanations, in: International
Conference on Artificial Intelligence and Statistics, PMLR, 2022, pp. 3418–3433.
[19] F. Karim, S. Majumdar, H. Darabi, S. Chen, Lstm fully convolutional networks for time series
classification, IEEE Access 6 (2017) 1662–1669.
[20] M. W. Kadous, Learning comprehensible descriptions of multivariate time series, in: ICML,
volume 454, 1999, p. 463.
[21] H. A. Dau, A. Bagnall, K. Kamgar, C.-C. M. Yeh, Y. Zhu, S. Gharghabi, C. A. Ratanamahatana,
E. Keogh, The ucr time series archive, IEEE/CAA Journal of Automatica Sinica 6 (2019) 1293–1305.
[22] E. Mekonnen, P. Dondio, L. Longo, Explaining deep learning time series classification models
using a decision tree-based post-hoc xai method, in: xAI-2023 Late-breaking Work, Demos and
Doctoral Consortium Joint Proceedings, CEUR-WS.org, 2023.
[23] E. T. Mekonnen, P. Dondio, L. Longo, A global model-agnostic rule-based xai method based on
parameterised event primitives for time series classifiers, Frontiers in Artificial Intelligence 7
(2024) 1381921.
[24] E. T. Mekonnen, L. Longo, P. Dondio, Interpreting black-box time series classifiers using
parameterised event primitives, xAI-2024 Late-breaking Work, Demos &amp; Doctoral Consortium Joint
Proceedings (2024).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          , et al.,
          <article-title>Explainable artificial intelligence (xai) 2.0: A manifesto of open challenges and interdisciplinary research directions</article-title>
          ,
          <source>Information Fusion</source>
          <volume>106</volume>
          (
          <year>2024</year>
          )
          102301. doi:10.1016/j.inffus.2024.102301.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Theissler</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Spinnato</surname>
          </string-name>
          ,
          <string-name>
            <given-names>U.</given-names>
            <surname>Schlegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <article-title>Explainable ai for time series classification: A review, taxonomy and research directions</article-title>
          , IEEE Access (
          <year>2022</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
          <article-title>Why should i trust you? explaining the predictions of any classifier</article-title>
          ,
          <source>in: Proc. of the 22nd ACM SIGKDD Int. Conf. on knowledge discovery and data mining</source>
          ,
          <year>2016</year>
          , pp.
          <fpage>1135</fpage>
          -
          <lpage>1144</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>K.</given-names>
            <surname>Simonyan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Vedaldi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Zisserman</surname>
          </string-name>
          ,
          <article-title>Deep inside convolutional networks: Visualising image classification models and saliency maps</article-title>
          ,
          <source>arXiv preprint arXiv:1312.6034</source>
          (
          <year>2013</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Bach</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Binder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Montavon</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Klauschen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.-R.</given-names>
            <surname>Müller</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          ,
          <article-title>On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation</article-title>
          ,
          <source>PloS one 10</source>
          (
          <year>2015</year>
          )
          <article-title>e0130140</article-title>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>U.</given-names>
            <surname>Schlegel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Arnout</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>El-Assady</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Oelke</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Keim</surname>
          </string-name>
          ,
          <article-title>Towards a rigorous evaluation of xai methods on time series</article-title>
          , in: 2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW), IEEE,
          <year>2019</year>
          , pp.
          <fpage>4197</fpage>
          -
          <lpage>4201</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>