<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta>
      <journal-title-group>
        <journal-title>World Conference on eXplainable Artificial Intelligence</journal-title>
      </journal-title-group>
    </journal-meta>
    <article-meta>
      <title-group>
        <article-title>Unraveling Anomalies: Explaining Outliers with DTOR</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Riccardo Crupi</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Daniele Regoli</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Alessandro Damiano Sabatino</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Immacolata Marano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Massimiliano Brinis</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Albertazzi</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Cirillo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Andrea Claudio Cosentini</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Audit Data &amp; Advanced Analytics</institution>
          ,
          <addr-line>Intesa Sanpaolo</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Data Science &amp; Artificial Intelligence</institution>
          ,
          <addr-line>Intesa Sanpaolo</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2024</year>
      </pub-date>
      <volume>1</volume>
      <fpage>7</fpage>
      <lpage>19</lpage>
      <abstract>
        <p>Explaining outliers' occurrence and mechanisms is crucial across various domains, as malfunctions, frauds, and threats require valid explanations for effective countermeasures. With the increasing use of sophisticated Machine Learning techniques to identify anomalies, explaining their presence becomes more challenging. Our proposed Decision Tree Outlier Regressor (DTOR) addresses this challenge by providing rule-based explanations for individual data points using anomaly scores from a detection model. By leveraging a Decision Tree Regressor to estimate anomaly scores and extracting the associated decision paths, DTOR demonstrates its effectiveness across different anomaly detectors and diverse datasets, including those with numerous features.</p>
      </abstract>
      <kwd-group>
        <kwd>Outlier detection</kwd>
        <kwd>Explainability</kwd>
        <kwd>Decision Tree</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Internal audit in the banking sector is crucial for evaluating operational integrity and efficiency,
assessing internal controls, risk management processes, and regulatory compliance. Anomaly
detection techniques play a vital role in identifying atypical patterns and outliers within data
populations analyzed for audit purposes, assisting in risk mitigation and fraud detection.
However, ensuring the effective utilization of these techniques requires the ability to explain
why certain records are considered anomalies, particularly for internal auditors with limited
data analytics expertise [
        <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
        ].
      </p>
      <p>
        Among various anomaly detection techniques, Isolation Forest [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], One-Class SVM [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], and
Gaussian Mixture Models [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] are prominent methods widely employed in
practical applications [
        <xref ref-type="bibr" rid="ref6 ref7">6, 7</xref>
        ]. These methods leverage diverse mathematical principles to detect
anomalies efficiently. However, their interpretability may be limited, necessitating explainable
artificial intelligence (XAI) techniques to elucidate model decisions, ensure transparency, and
enhance trust in AI-driven decisions [
        <xref ref-type="bibr" rid="ref10 ref8 ref9">8, 9, 10</xref>
        ].
      </p>
      <p>
        To meet this requirement, we introduce a novel model-agnostic XAI framework specifically
designed for anomaly detection in the banking sector. Unlike conventional XAI methods that
primarily focus on feature importance (e.g., SHAP and DIFFI [
        <xref ref-type="bibr" rid="ref8 ref9">8, 9</xref>
        ]), our framework generates
easily understandable rules to elucidate model predictions, thereby enhancing transparency
and fostering trust in AI-driven decisions. Notable techniques such as LORE, RuleXAI, and
Anchors [
        <xref ref-type="bibr" rid="ref11 ref12 ref13">11, 12, 13</xref>
        ] exemplify this approach.
      </p>
      <p>
        Our approach aims to bridge the divide between interpretability and effectiveness in anomaly
detection by offering human-understandable rules that clarify the rationales behind anomalous
predictions. Relevant works in this domain include [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ] and [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], focusing on online anomaly
explanation and providing a survey of explainable anomaly detection methods, respectively. By
harnessing rule-based explanations, our XAI framework ensures transparency and accessibility
in the decision-making process of anomaly detection models for data scientists, domain experts,
and colleagues in the banking industry.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Method</title>
      <p>Our novel XAI method, inspired by the principles of the Isolation Forest algorithm, takes
advantage of the concept of isolating anomalies with minimal cuts in the feature space. To
provide clear explanations for anomaly detection decisions, we use decision tree regressors.
In our approach, a decision tree regressor is trained to learn the anomaly scores assigned to
each data point generated by the Anomaly Detector. Notably, during training, we introduce
a weighted loss function that gives a significantly higher weight to the data point under
consideration. This weighting scheme ensures that the decision tree regressor prioritizes
accurate estimation of the anomaly score for the target data point, thereby improving the
interpretability and reliability of the local explanation. After training the decision tree, extracting
the path of the data point can provide an interpretable rule for the anomaly score (Algorithm 1).
The implementation of DTOR can be accessed online at https://github.com/rcrupiISP/DTOR.</p>
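      <p>To illustrate the weighting scheme described above, the following sketch (an illustrative assumption using scikit-learn, not the authors' exact code) fits a DecisionTreeRegressor twice on the same augmented data, once with uniform weights and once with a large weight on the target point, and compares the estimated anomaly score at that point.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = np.abs(X).mean(axis=1)          # stand-in anomaly scores from some AD

target = np.array([2.5, 2.5, 2.5])  # the data point under consideration
target_score = 2.5

X_aug = np.vstack([X, target])      # append the target to the train set
y_aug = np.append(y, target_score)

# Uniform weights: the target's score is averaged away inside its leaf
uniform = DecisionTreeRegressor(min_samples_leaf=20, random_state=0)
uniform.fit(X_aug, y_aug)

# Heavy weight on the target: leaf values now prioritize its score
weighted = DecisionTreeRegressor(min_samples_leaf=20, random_state=0)
w = np.append(np.ones(len(X)), 200.0)
weighted.fit(X_aug, y_aug, sample_weight=w)

err_uniform = abs(uniform.predict(target.reshape(1, -1))[0] - target_score)
err_weighted = abs(weighted.predict(target.reshape(1, -1))[0] - target_score)
```

      <p>The weighted tree reproduces the target's anomaly score far more faithfully, which is what makes the extracted path a reliable local rule.</p>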
    </sec>
    <sec id="sec-3">
      <title>3. Experiments</title>
      <p>This section delineates the configurations of three Anomaly Detector models trained on two
public datasets and one private dataset from Intesa Sanpaolo (see Table 1), offering explanations
using both Anchors and DTOR. The DTOR method and the experiments conducted on public
datasets are available in the GitHub repository accessible via the following link:
https://github.com/rcrupiISP/DTOR.</p>
      <p>Algorithm 1: The DTOR approach generates explanations for a given instance.
def explain_instance:
input : (e, ê): the sample to be explained along with its corresponding score from the AD;
(t, t̂): a train set and its corresponding scores from the AD;
w: training weight associated to e;
h: list of parameters of the decision tree;
output : a list of rules explaining the instance (e, ê)
n ← len(t);
model ← DecisionTreeRegressor(h);
/* append the sample e to the train set */
X̂ ← concat t with e;
Ŷ ← concat t̂ with ê;
/* build the array of weights that gives more importance in the loss function to the sample e */
W ← concat 1_n with w;
/* train the DT on the weighted configuration */
model.fit((X̂, Ŷ), sample_weight=W);
/* retrieve the path taken by e in the decision tree */
rule ← extract_path(model, e);
return rule</p>
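      <p>Algorithm 1 can be realized with scikit-learn as sketched below; this is a minimal reading of the pseudocode, where extract_path is a hypothetical helper built on DecisionTreeRegressor.decision_path, and the feature names are supplied by the caller.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def extract_path(model, e, feature_names):
    """Render each split along e's root-to-leaf path as a rule clause."""
    tree = model.tree_
    path_nodes = model.decision_path(e.reshape(1, -1)).indices
    clauses = []
    for node in path_nodes:
        if tree.children_left[node] == tree.children_right[node]:
            continue                      # leaf node: no split to report
        feat, thr = tree.feature[node], tree.threshold[node]
        op = ">" if e[feat] > thr else "≤"
        clauses.append(f"{feature_names[feat]} {op} {thr:.3f}")
    return clauses

def explain_instance(e, e_score, t, t_scores, w, h, feature_names):
    """DTOR, Algorithm 1: fit a locally weighted surrogate tree on AD scores."""
    model = DecisionTreeRegressor(**h)
    X_hat = np.vstack([t, e])                  # append e to the train set
    y_hat = np.append(t_scores, e_score)
    weights = np.append(np.ones(len(t)), w)    # emphasize e in the loss
    model.fit(X_hat, y_hat, sample_weight=weights)
    return extract_path(model, e, feature_names)
```

      <p>For a clear outlier, the returned clause list is typically short, mirroring the few cuts needed to isolate it.</p>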
      <sec id="sec-3-1">
        <title>3.1. Datasets and AD models</title>
        <p>
          Utilizing the novel XAI technique across various datasets aims to assess its effectiveness in
explaining different types of anomalies learned by unsupervised Machine Learning models. The
chosen Anomaly Detector models include IF, One-Class SVM, and GMM [
          <xref ref-type="bibr" rid="ref17">17</xref>
          ]. Default parameters
were opted for, as the primary objective of this study is to comprehend the explanations rather
than optimize a performance metric specific to each dataset problem. Therefore, three distinct
models were chosen to reason in different ways. Each dataset was partitioned into training and
testing sets. Specifically, the test set comprises 50 samples from each dataset, containing both
anomalies and normal data points. The anomalies for GMM are defined to represent 5% of the
training set, as for the Isolation Forest, whose contamination hyperparameter is set to
0.05. Default hyperparameters were retained for the SVM (kernel: radial basis function, ν =
0.5, representing the upper bound on the fraction of training errors), resulting in anomalies
representing about 50% of the training set.
        </p>
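        <p>For concreteness, the three detector configurations described above can be instantiated as follows; the synthetic data is a placeholder, while contamination=0.05, the default One-Class SVM (RBF kernel, ν = 0.5), and a 5% score quantile for the GMM follow the text.</p>

```python
import numpy as np
from sklearn.ensemble import IsolationForest
from sklearn.svm import OneClassSVM
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 4))   # placeholder for a real dataset

# Isolation Forest: contamination fixes the expected anomaly fraction at 5%
iso = IsolationForest(contamination=0.05, random_state=0).fit(X_train)

# One-Class SVM with defaults: RBF kernel, nu=0.5 (upper bound on the
# fraction of training errors), so roughly half the train set is flagged
svm = OneClassSVM().fit(X_train)

# GMM: anomalies taken as the 5% of points with the lowest log-likelihood
gmm = GaussianMixture(n_components=2, random_state=0).fit(X_train)
gmm_threshold = np.quantile(gmm.score_samples(X_train), 0.05)

iso_flags = iso.predict(X_train) == -1            # True where anomalous
svm_flags = svm.predict(X_train) == -1
gmm_flags = gmm_threshold > gmm.score_samples(X_train)
```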
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Rule-based XAI</title>
        <p>We explore various explainability techniques, focusing on rule-based explanations due to
challenges in interpreting feature-importance methods like SHAP and DIFFI, especially with
high-dimensional datasets. Initially, Anchors was used to explain the banking dataset, but we
found limitations, such as the inability to reason on regression tasks and constraints in model
implementation, leading to the development of DTOR. In addition to Anchors and DTOR, we
considered LORE and RuleXAI. However, LORE requires extensive hyperparameter tuning,
increasing implementation complexity. Additionally, RuleXAI is not actively maintained, with
outdated Python library requirements. For future work, we plan to compare DTOR with other
explainability techniques.</p>
        <table-wrap id="tab1">
          <label>Table 1</label>
          <caption>
            <p>Datasets used in the experiments.</p>
          </caption>
          <table>
            <thead>
              <tr><th>Dataset</th><th>Samples</th><th>Features</th><th>Description</th></tr>
            </thead>
            <tbody>
              <tr><td>Banking (B)</td><td>100,000</td><td>26</td><td>Dataset obtained from Intesa Sanpaolo Bank, used for anomaly identification and improved client analysis to discover probable instances of fraud or criminal conduct.</td></tr>
              <tr><td>Glass Identification (GI)</td><td>214</td><td>9</td><td>This information comes from the USA Forensic Science Service and includes six different glass kinds, each distinguished by its oxide composition.</td></tr>
              <tr><td>Lymphography (L)</td><td>148</td><td>19</td><td>The lymphography dataset was obtained from the University Medical Center, Institute of Oncology, in Ljubljana, Yugoslavia.</td></tr>
            </tbody>
          </table>
        </table-wrap>
        <p>We adopt a perspective of providing rule-based explanations to Data Scientists, summarizing
examples in Table 2 with key metrics: execution time, coverage, and rule length. For
DTOR, we set specific hyperparameters tailored to the banking dataset, ensuring both quantity
and quality of explanations. However, a dataset-specific approach is crucial to identifying the
optimal anomaly detector and evaluating explanation quality effectively. The hyperparameters
for DTOR are carefully chosen, with the max depth set to 8, the min impurity decrease
to 10<sup>−5</sup>, and the weight for learning the rule to 0.1 times the size of the training set,
suitable for unbalanced datasets with anomalies. DTOR estimates the anomaly score rather
than a binary output, and the same threshold used in the anomaly detection models is applied
to determine anomalies. While not detailed here, each rule output by DTOR provides both
precision and average anomaly score, enhancing informativeness.</p>
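        <p>The DTOR configuration above translates directly into scikit-learn; in this sketch the data, the AD scores, and the detector threshold are placeholders, while max_depth=8, min_impurity_decrease=1e-5, and the weight of 0.1 times the training-set size follow our reading of the text.</p>

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

n = 1000  # size of the training set used to fit the surrogate

# Hyperparameters from the text: depth 8, minimum impurity decrease 1e-5,
# and weight 0.1 * n on the sample being explained
tree = DecisionTreeRegressor(max_depth=8, min_impurity_decrease=1e-5)
target_weight = 0.1 * n

rng = np.random.default_rng(0)
X = rng.normal(size=(n, 5))
scores = np.abs(X).mean(axis=1)          # placeholder AD scores
e = np.full(5, 3.0)                      # the instance to explain

weights = np.append(np.ones(n), target_weight)
tree.fit(np.vstack([X, e]), np.append(scores, 3.0), sample_weight=weights)

# DTOR regresses the anomaly score; the AD model's own threshold then
# decides whether the estimated score counts as anomalous
ad_threshold = 1.0                       # placeholder threshold
estimated = tree.predict(e.reshape(1, -1))[0]
is_anomaly = estimated > ad_threshold
```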
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Discussion and conclusion</title>
      <p>The findings derived from the DTOR algorithm provide significant insights into both anomaly
detection and explainability methodologies. Notably, we observed a consistent trend towards
shorter explanations for anomalies across various anomaly detection (AD) models and datasets,
as evidenced by examples in Table 2, particularly instances with IDs 1 and 2. Conversely,
instance ID 3 presents a lengthier explanation. This observation may align with the strategy
employed by the Isolation Forest, which aims to isolate anomalies through a minimal number
of steps. DTOR, by design, follows a similar path, leveraging the locally trained decision
tree to isolate the sample. If the sample is an outlier, it can be easily separated with fewer
steps, whereas non-outliers may require more complex separation. It’s worth noting that
our comparison was conducted against a surrogate classifier model, while our contribution
introduces a surrogate regressor model. This distinction allows us not only to provide the
rule but also to estimate the anomaly detection (AD) score, offering nuanced insights beyond
binary classification tasks. Instance ID 1 showcases three distinct explanations, underscoring
the variability introduced by different AD models and potential feature correlations. This
phenomenon illustrates the Rashomon effect in explainability [18], where multiple plausible
explanations coexist.</p>
      <p>Although the execution time for generating explanations typically falls within seconds, it
slightly increases for the banking dataset due to its larger sample size, necessitating additional
computational resources. Looking ahead, further analysis is warranted to delve into these
explanations in depth and compare them with state-of-the-art rule-based explainability
techniques. Key metrics such as precision, coverage, and stability will be evaluated to assess the
effectiveness of DTOR and its potential advantages over existing methods. For a more detailed
analysis of the state of the art, performance, and comparison experiments with Anchors, please
refer to [19].</p>
      <p>[18] M. G. M. M. Hasan, D. Talbert, Mitigating the Rashomon effect in counterfactual
explanation: A game-theoretic approach, in: The International FLAIRS Conference Proceedings,
volume 35, 2022.</p>
      <p>[19] R. Crupi, A. D. Sabatino, I. Marano, M. Brinis, L. Albertazzi, A. Cirillo, A. C. Cosentini,
DTOR: Decision tree outlier regressor to explain anomalies, arXiv preprint arXiv:2403.10903
(2024).</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Nonnenmacher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Gómez</surname>
          </string-name>
          ,
          <article-title>Unsupervised anomaly detection for internal auditing: Literature review and research agenda</article-title>
          .,
          <source>International Journal of Digital Accounting Research</source>
          <volume>21</volume>
          (
          <year>2021</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>A.</given-names>
            <surname>Basile</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Crupi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Grasso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Mercanti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Regoli</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Scarsi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Yang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. C.</given-names>
            <surname>Cosentini</surname>
          </string-name>
          ,
          <article-title>Disambiguation of company names via deep recurrent networks</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>238</volume>
          (
          <year>2024</year>
          )
          <fpage>122035</fpage>
          . doi:10.1016/j.eswa.2023.122035.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>F. T.</given-names>
            <surname>Liu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. M.</given-names>
            <surname>Ting</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.-H.</given-names>
            <surname>Zhou</surname>
          </string-name>
          ,
          <article-title>Isolation forest</article-title>
          , in:
          <source>2008 Eighth IEEE International Conference on Data Mining</source>
          , IEEE,
          <year>2008</year>
          , pp.
          <fpage>413</fpage>
          -
          <lpage>422</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>B.</given-names>
            <surname>Schölkopf</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. C.</given-names>
            <surname>Williamson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Smola</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Shawe-Taylor</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Platt</surname>
          </string-name>
          ,
          <article-title>Support vector method for novelty detection</article-title>
          ,
          <source>Advances in neural information processing systems</source>
          <volume>12</volume>
          (
          <year>1999</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>D. A.</given-names>
            <surname>Reynolds</surname>
          </string-name>
          , et al.,
          <article-title>Gaussian mixture models</article-title>
          .,
          <source>Encyclopedia of Biometrics</source>
          <volume>741</volume>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhao</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Nasrullah</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <article-title>PyOD: A python toolbox for scalable outlier detection</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>20</volume>
          (
          <year>2019</year>
          )
          <fpage>1</fpage>
          -
          <lpage>7</lpage>
          . URL: http://jmlr.org/papers/v20/19-011.html.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>N.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Venugopal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Qiu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Kumar</surname>
          </string-name>
          ,
          <article-title>Detecting anomalous online reviewers: An unsupervised approach using mixture models</article-title>
          ,
          <source>Journal of Management Information Systems</source>
          <volume>36</volume>
          (
          <year>2019</year>
          )
          <fpage>1313</fpage>
          -
          <lpage>1346</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          , G. Erion,
          <string-name>
            <given-names>H.</given-names>
            <surname>Chen</surname>
          </string-name>
          , A. DeGrave,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Prutkin</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Nair</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Katz</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Himmelfarb</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Bansal</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>From local explanations to global understanding with explainable ai for trees</article-title>
          ,
          <source>Nature machine intelligence</source>
          <volume>2</volume>
          (
          <year>2020</year>
          )
          <fpage>56</fpage>
          -
          <lpage>67</lpage>
          . doi:10.1038/s42256-019-0138-9.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>M.</given-names>
            <surname>Carletti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Terzi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G. A.</given-names>
            <surname>Susto</surname>
          </string-name>
          ,
          <article-title>Interpretable anomaly detection with difi: Depth-based feature importance of isolation forest</article-title>
          ,
          <source>Engineering Applications of Artificial Intelligence</source>
          <volume>119</volume>
          (
          <year>2023</year>
          )
          <fpage>105730</fpage>
          . doi:10.1016/j.engappai.2022.105730.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>R.</given-names>
            <surname>Crupi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Castelnovo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Regoli</surname>
          </string-name>
          , B. San Miguel Gonzalez,
          <article-title>Counterfactual explanations as interventions in latent space</article-title>
          ,
          <source>Data Mining and Knowledge Discovery</source>
          (
          <year>2022</year>
          )
          <fpage>1</fpage>
          -
          <lpage>37</lpage>
          . doi:10.1007/s10618-022-00889-2.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <article-title>Local rule-based explanations of black box decision systems</article-title>
          , arXiv preprint arXiv:1805.10820 (
          <year>2018</year>
          ). URL: https://arxiv.org/abs/1805.10820.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>D.</given-names>
            <surname>Macha</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Kozielski</surname>
          </string-name>
          , Ł. Wróbel,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sikora</surname>
          </string-name>
          ,
          <article-title>Rulexai-a package for rule-based explanations of machine learning model</article-title>
          ,
          <source>SoftwareX</source>
          <volume>20</volume>
          (
          <year>2022</year>
          )
          <fpage>101209</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
          <article-title>Anchors: High-precision model-agnostic explanations</article-title>
          ,
          <source>Proceedings of the AAAI Conference on Artificial Intelligence</source>
          <volume>32</volume>
          (
          <year>2018</year>
          ). doi:10.1609/aaai.v32i1.11491.
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>R. P.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Mastelini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Davari</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Aminian</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Veloso</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Gama</surname>
          </string-name>
          ,
          <article-title>Online anomaly explanation: a case study on predictive maintenance</article-title>
          ,
          <source>in: Joint European Conference on Machine Learning and Knowledge Discovery in Databases</source>
          , Springer,
          <year>2022</year>
          , pp.
          <fpage>383</fpage>
          -
          <lpage>399</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>Z.</given-names>
            <surname>Li</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Zhu</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Van Leeuwen</surname>
          </string-name>
          ,
          <article-title>A survey on explainable anomaly detection</article-title>
          ,
          <source>ACM Transactions on Knowledge Discovery from Data</source>
          <volume>18</volume>
          (
          <year>2023</year>
          )
          <fpage>1</fpage>
          -
          <lpage>54</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>A.</given-names>
            <surname>Frank</surname>
          </string-name>
          ,
          <article-title>UCI machine learning repository</article-title>
          , http://archive.ics.uci.edu/ml (
          <year>2010</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          , et al.,
          <article-title>Scikit-learn: Machine learning in python</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>12</volume>
          (
          <year>2011</year>
          )
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>