<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <article-meta>
      <title-group>
        <article-title>Enhancing Cost-Sensitive Tree-Based XAI Surrogate Method: Exploring Alternative Cost Matrix Formulation</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marija Kopanja</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Miloš Savić</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Longo</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Artificial Intelligence and Cognitive Load Research Lab, The Centre of Explainable Artificial Intelligence, Technological University Dublin</institution>
          ,
          <addr-line>Central Quad, CQ-214 Grangegorman Campus, D07 ADY7 Dublin</addr-line>
          ,
          <country country="IE">Ireland</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad</institution>
          ,
          <addr-line>Trg Dositeja Obradovića 3, 21000 Novi Sad</addr-line>
          ,
          <country country="RS">Serbia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2025</year>
      </pub-date>
      <fpage>09</fpage>
      <lpage>11</lpage>
      <abstract>
        <p>This study investigates how different cost matrix formulations influence the performance of a cost-sensitive tree extraction method within the post-hoc, model-agnostic XAI framework. As an input parameter, the cost matrix is essential in building cost-sensitive tree models. The initial, default version of the cost matrix is defined to reflect the class imbalance ratio among each pair of classes. Here, two different formulations of an alternative cost matrix are proposed: a centroid distance-based and a medoid distance-based cost matrix. The cost-sensitive tree method with the different formulations of the cost matrix is compared against other tree-based and rule-based XAI methods as a surrogate model for the underlying black-box model. Evaluation metrics are employed to assess the generated explanations, and the results demonstrate that the rule sets extracted from cost-sensitive trees are smaller, with shorter rules on average, across different datasets with a varying number of classes.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable artificial intelligence</kwd>
        <kwd>Cost-sensitive decision tree</kwd>
        <kwd>Surrogate modeling</kwd>
        <kwd>Rule extraction</kwd>
        <kwd>Tree-based methods</kwd>
        <kwd>Model-agnostic explanations</kwd>
        <kwd>Rule-based systems</kwd>
        <kwd>Interpretability</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Explainable artificial intelligence (XAI) is one of the fastest-emerging sub-fields of AI, dedicated to
developing methods that make machine learning (ML) models more understandable and transparent [
        <xref ref-type="bibr" rid="ref1 ref2">1,
2</xref>
        ]. To extract information from already trained models, several methods have been developed for
explaining their inferential process post-hoc (after the model has been trained), without modifying
the internal structure or training process of the model. Creating a surrogate model is a post-hoc
approach [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] used to approximate the decision-making process of the original model with simple
models such as decision trees, linear models, or rule-based models, which are typically interpretable and offer a
more understandable and transparent view of the decision-making process.
      </p>
      <p>
        Decision trees and rule sets are graphical and textual types of explanations that
are easily understandable and interpretable [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. The cost-sensitive rule and tree extraction method
CORTEX [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] investigated in this study provides two easily understandable and interpretable explanation
forms: a cost-sensitive tree model and a set of rules. Cost-sensitive trees are an important category of
tree methods created by a cost-sensitive supervised approach that considers various costs during
the learning process, such as misclassification costs (incorrectly classifying a sample), feature costs
(the cost of obtaining the feature values), or other related costs [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. By incorporating misclassification
costs for each class into the learning process, cost-sensitive algorithms can effectively address the
class imbalance problem, a well-known issue in the ML community that occurs when the number of
samples is uneven across classes. In a class-dependent cost matrix, samples from the same class have the
same costs, as opposed to a sample-dependent cost matrix, in which each sample may have a different
cost. CORTEX is grounded in a cost-sensitive decision tree algorithm introduced for the binary
classification framework [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] with a sample-dependent cost matrix, where each sample has its own cost matrix.
Specifically, the two-class sample-dependent cost-sensitive framework has been adapted into a
multi-class class-dependent framework by introducing an n-dimensional class-dependent cost matrix.
In our previous research, the default (ratio-based) cost matrix was developed based on class
imbalance ratios, providing a foundational approach to addressing the skewness of the class distribution.
However, the CORTEX method can also operate effectively with a balanced distribution of the target variable,
since the cost matrix definitions yield a symmetric matrix in such cases.
      </p>
      <p>CORTEX has a limitation related to the ratio-based cost matrix in multi-class classification: it
assigns equal costs for misclassifying minority samples into equally sized majority classes. It therefore
fails to consider that minority samples might be more similar to one majority class, making
errors toward that class more acceptable. To address this issue, as the main contribution of this paper, we
propose alternative cost matrix formulations that utilize the distance between class centroids or
medoids to reflect the similarities/differences among classes and obtain a more accurate representation of
each class cluster.</p>
      <p>
        Consistent with the evaluation framework of our previous study [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ], the CORTEX method with
different formulations of the cost matrix is compared to other tree-based and rule-based XAI methods,
serving as a surrogate model for the underlying black-box model (a neural network). Developing a
tree-based/rule-based model as an explanation for a neural network model is accomplished on a relabeled
target variable without using internal elements of the network. The experimental results show
that CORTEX offers competitive performance while addressing key limitations of existing tree-based
and rule-based methods, such as reduced interpretability due to deep trees, many rules, and long rule
lengths on average.
      </p>
      <p>The remainder of the paper is structured as follows: Section 2 reviews related work. Section 3
introduces the concept of an n-dimensional class-dependent cost matrix. The first part of Section 4
reports cost-sensitive tree models extracted by the CORTEX method with three different cost
matrix formulations, followed by a comprehensive comparative evaluation of the CORTEX method against
other tree-based and rule-based XAI methods. Finally, Section 5 summarizes our key findings.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Related work</title>
      <p>
        In numerous applications, complex neural network models are often the preferred choice due to their
high predictive capacity. Nevertheless, higher accuracy comes at the cost of an
incomprehensible, opaque decision-making process. Tree-based models are
considered self-interpretable, transparent, and comprehensible [
        <xref ref-type="bibr" rid="ref3 ref8">3, 8</xref>
        ]. Several approaches have been
proposed to explain deep learning classification models, including using decision tree methods as
surrogate models and extracting rule sets from the resulting tree [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Surrogate models can be created
globally or locally [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], where a global surrogate model aims to explain the model as a whole, and a local
surrogate model explains a single instance.
      </p>
      <p>
        Local Interpretable Model-Agnostic Explanations (LIME) [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] is a widely used local surrogate method.
Another popular post-hoc method that can provide both local and global explanations is the Shapley Additive
Explanations (SHAP) method proposed in [11]. Both model-specific and model-agnostic versions of
SHAP have been proposed for tree-based models [12], including cost-sensitive models [13, 14].
      </p>
      <p>The tree-based algorithm C4.5-PANE [15] is an extension of the C4.5 decision tree algorithm [16], capable
of extracting if-then rules from ensembles of neural networks; its performance is compared to other
rule extractors in [17]. Rule Extraction From Neural Network Ensemble (REFNE) was developed
to extract symbolic rules from neural networks [18]. Another rule-based method, which relies on a reverse
engineering technique to extract rules from neural networks, is Rule Extraction by Reverse Engineering
(RxREN) [19]. Finally, the TREPAN [20] method generates a decision tree by querying the underlying
network using a query-and-sampling approach.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Design and Methods</title>
      <p>
        The cost-sensitive rule and tree extraction method CORTEX [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] is a cost-sensitive multi-class
tree-building algorithm in which misclassification costs are incorporated through a pre-defined class-dependent
cost matrix. The learning phase consists of stratifying the feature space into regions in a recursive manner
(top-down greedy search). The CORTEX method classifies a sample into the least costly class,
which is equivalent to classifying a sample into the class with the highest cost-sensitive probability. In [14],
cost-sensitive probabilities were introduced into the cost-sensitive decision tree method for a two-class
classification framework and later generalized in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ] to an arbitrary number of classes. By introducing
cost-sensitive probabilities, it is possible to access information about the confidence of a prediction while
preserving the cost-dependence of labels. A detailed description of the CORTEX method is given in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ].
      </p>
      <p>The misclassification costs are typically represented as elements of a cost matrix. The cost matrix can
be class-dependent or sample-dependent, where the costs are associated with classes or samples,
respectively. The former assumption of constant costs within each class is stronger, yet it is widespread
in the application of most cost-sensitive learning algorithms [21, 22], since in many real-life
problems the values in the matrix are unknown and are not given by experts. Throughout this paper,
the term 'cost matrix' will refer to the class-dependent type.</p>
      <p>The cost matrix is a function C of the actual and predicted classes, defined as C = [c_ij], i, j =
1, . . . , K, where K represents the number of classes, while i and j represent the actual and predicted class,
respectively. Accordingly, c_ij = C(i, j) is the cost of predicting class j when the actual (true) class is i.</p>
      <sec id="sec-3-1">
        <title>3.1. Ratio-based cost matrix</title>
        <p>
          In the CORTEX method [
          <xref ref-type="bibr" rid="ref5">5</xref>
          ], the default (ratio-based) version of the cost matrix is defined by using class
imbalance ratios among classes. If n_i is the number of samples in class i, the values of the ratio-based
cost matrix are defined as c_ij = n_j / (n_i + n_j), which reflects the class imbalance ratio among the classes i and j.
        </p>
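        <p>To make the definition concrete, the following minimal Python sketch builds a ratio-based cost matrix under the formulation reconstructed above, c_ij = n_j / (n_i + n_j); the function name and the toy class sizes are illustrative and not part of the CORTEX implementation. Note how the two equally sized majority classes receive identical costs in the minority row, which is exactly the drawback discussed below.</p>
        <preformat>
import numpy as np

def ratio_based_cost_matrix(y):
    """Illustrative sketch: c_ij = n_j / (n_i + n_j) for i != j."""
    _, counts = np.unique(y, return_counts=True)
    n = counts.astype(float)
    K = len(n)
    C = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                # Misclassifying a sample of a small class i as a large
                # class j yields a cost close to 1; the reverse is cheap.
                C[i, j] = n[j] / (n[i] + n[j])
    return C

# Imbalanced three-class example: 100, 100 and 10 samples.
y = np.array([0] * 100 + [1] * 100 + [2] * 10)
print(ratio_based_cost_matrix(y).round(2))
# Minority row (actual class 2) gets cost ~0.91 toward BOTH majority
# classes, regardless of how similar class 2 is to either of them.
        </preformat>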
        <p>The cost matrix in CORTEX is intentionally defined to reflect the proportions of the samples in the classes:
otherwise, with equal costs, CORTEX would have no advantage over other algorithms
(assuming other differences between them are negligible), since minimizing the cost would be equivalent to
minimizing the error rate, leading to an inappropriate classifier biased towards the majority class.</p>
        <p>One drawback of the ratio-based cost matrix can be noticed in the multi-class classification framework
where one class is under-represented. Namely, suppose there is the same (or nearly the same) number of samples
in the majority classes. In that case, the costs for misclassifying minority samples into either of the
majority classes will be the same. However, the minority samples might be more similar to those in one
majority class, and making such an error might be more acceptable than wrongly classifying minority
samples into other, more dissimilar majority class(es). Consequently, to reflect the similarity/dissimilarity
among classes, an alternative approach is proposed that uses the distance among class centroids or medoids.</p>
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Distance-based cost matrix</title>
        <p>Two different formulations of the alternative cost matrix are proposed: the centroid distance-based and
the medoid distance-based cost matrix. The centroid of a class is the point corresponding to the
mean of all samples in the class, while the medoid is an existing sample from the class that minimizes
the average dissimilarity (in our study, Euclidean distance) to the other samples in the class. Accordingly, the
centroid x_i^c of a class i is obtained as the mean vector of all samples belonging to the class i. In contrast,
the medoid x_i^m of a class i is the sample within the class i with the minimum average distance to all
other samples in the class i. By calculating the Euclidean distance among centroids/medoids, a symmetric
cost matrix is obtained, where c_ij = c_ji = d(x_i^c, x_j^c) or c_ij = c_ji = d(x_i^m, x_j^m). Afterwards, the
obtained matrix must be multiplied by weights to reflect that the minority class(es) have fewer samples
(and, therefore, a higher cost). This is accomplished by scaling the distances between centroids/medoids by
the size of the corresponding class. Accordingly, the centroid and medoid distance-based cost matrices are
defined as c_ij = d(x_i^c, x_j^c) * sqrt(n_j) and c_ij = d(x_i^m, x_j^m) * sqrt(n_j), respectively. In our
implementation, the weights are proportional to the square root of the class size, since we want to prevent the
classifier from being too biased towards the minority class.</p>
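        <p>A minimal Python sketch of the two distance-based formulations is given below, assuming the square-root-of-class-size scaling reconstructed above; the helper names are ours, and the snippet is an illustration rather than the CORTEX implementation.</p>
        <preformat>
import numpy as np

def centroid(X):
    # Mean vector of the class samples.
    return X.mean(axis=0)

def medoid(X):
    # Existing sample minimizing the average Euclidean distance
    # to all other samples of the class.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    return X[D.mean(axis=1).argmin()]

def distance_based_cost_matrix(X, y, representative=centroid):
    """Pairwise distances between class representatives, scaled by
    the square root of the class size (our reading of the text)."""
    classes = np.unique(y)
    reps = [representative(X[y == c]) for c in classes]
    counts = np.array([(y == c).sum() for c in classes], dtype=float)
    K = len(classes)
    C = np.zeros((K, K))
    for i in range(K):
        for j in range(K):
            if i != j:
                C[i, j] = np.linalg.norm(reps[i] - reps[j]) * np.sqrt(counts[j])
    return C
        </preformat>
        <p>Passing representative=medoid produces the medoid distance-based matrix underlying CORTEX-m, while the default centroid produces the centroid distance-based matrix underlying CORTEX-c.</p>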
        <p>The intuition behind the distance-based cost matrix is that the further apart two classes are, the higher
the cost of a wrong classification for samples in the class with fewer samples should be.
The distance is measured between the centroids or medoids of classes; the central tendency of a
cluster with outliers or a skewed distribution can be more accurately reflected by the medoid as a
more robust measure [23], especially in the presence of class sub-concepts commonly observed in class
imbalance frameworks [24]. Depending on the target distribution, a centroid distance-based or medoid
distance-based cost matrix might be more suitable than a ratio-based cost matrix.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Results</title>
      <p>In the experimental part of the study, we compared the performance of the CORTEX method with
different formulations of the cost matrix against other tree-based and rule-based XAI methods. CORTEX
and the other methods are used as post-hoc XAI methods by creating a surrogate tree model for a simple
neural network model and automatically extracting a set of rules from the obtained tree. Eight datasets
with a number of classes ranging from 2 to 29 are considered, as in other studies [25, 17].</p>
      <p>The first step of the experimental setup is training a simple neural network model (feed-forward
with two fully connected hidden layers) on 70% of the data, with early stopping to prevent overfitting. For
all network hyperparameters, the optimal values are taken from Table 2 reported in [17]. Afterwards,
the post-hoc surrogate models are created using the 30% of test data and the predictions given by the neural
network. The cost-sensitive tree model is trained using the CORTEX algorithm. Due to space limitations,
tree topologies for CORTEX with the three cost matrix formulations are given below for only a few datasets.
The CORTEX method with the centroid distance-based cost matrix (CORTEX-c) gives a smaller tree for all
three datasets. For the other datasets, no matrix formulation consistently provides a smaller tree. The
CORTEX method with the medoid distance-based cost matrix (CORTEX-m) performs as well as or worse
than CORTEX-c or CORTEX with the ratio-based cost matrix. Notably, the CORTEX method generates
tree models with different topologies depending on the formulation of the cost matrix.</p>
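      <p>For clarity, the overall surrogate pipeline can be summarized with the short Python sketch below. It relies on scikit-learn and a synthetic dataset; the hidden-layer sizes are placeholders rather than the tuned hyperparameters from [17], and a standard decision tree stands in for the surrogate learner.</p>
      <preformat>
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

# Synthetic imbalanced data as a stand-in for the benchmark datasets.
X, y = make_classification(n_samples=1000, n_informative=6, n_classes=3,
                           weights=[0.6, 0.3, 0.1], random_state=0)

# 70/30 split; the black box is a feed-forward network with two
# hidden layers and early stopping, as in the experimental setup.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.7, stratify=y, random_state=42)
black_box = MLPClassifier(hidden_layer_sizes=(64, 32),
                          early_stopping=True, random_state=42)
black_box.fit(X_train, y_train)

# The surrogate is fitted on the held-out 30% relabeled with the
# network's predictions, so it mimics the black box, not the data.
y_relabel = black_box.predict(X_test)
surrogate = DecisionTreeClassifier(random_state=42)
surrogate.fit(X_test, y_relabel)

# Fidelity: agreement between surrogate and black-box predictions.
print((surrogate.predict(X_test) == y_relabel).mean())
      </preformat>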
      <p>[Table: tree topologies produced by CORTEX, CORTEX-c, and CORTEX-m on the abalone, contraceptive, and page_blocks datasets.]</p>
      <p>The transformation from a tree model into a set of rules is essential to facilitate the comparison of
CORTEX with other tree-based and rule-based XAI methods. For a comprehensive comparative analysis,
five rule extraction methods are considered; four of them, C4.5-PANE, REFNE, RxREN, and
TREPAN, have been extensively studied in the literature [17] in a similar framework and are therefore
considered strong baselines in our study. Furthermore, considering that the CORTEX method is a tree-based
algorithm, the selected subset of benchmarking methods is extended with a traditional decision tree
classifier (DT) to provide a more comprehensive evaluation. In our work, we have used the scikit-learn
implementation of DT with class weights automatically adjusted to be inversely proportional to
class frequencies in the weighted impurity gain measure, in order to effectively take into account the
class imbalance ratios of the datasets (the other rule extractors are obtained from
https://github.com/giuliavilone/rule_extractor).</p>
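      <p>The class weighting used for the DT baseline corresponds to scikit-learn's class_weight="balanced" option, which the following short sketch illustrates on made-up class sizes.</p>
      <preformat>
import numpy as np
from sklearn.tree import DecisionTreeClassifier

y = np.array([0] * 100 + [1] * 100 + [2] * 10)

# class_weight="balanced" sets w_c = n_samples / (n_classes * n_c),
# i.e. weights inversely proportional to the class frequencies.
weights = len(y) / (len(np.unique(y)) * np.bincount(y))
print(weights.round(2))  # [0.7  0.7  7. ]  -- class 2 weighted 10x

# These per-class weights enter the weighted impurity gain, so the
# splits are no longer biased toward the majority classes.
dt = DecisionTreeClassifier(class_weight="balanced", random_state=42)
      </preformat>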
      <p>
        Six metrics were selected to assess the degree of explainability of the rule sets: completeness,
correctness, fidelity, robustness, number of rules, and average rule length. The formal definitions and
a detailed description of these measures can be found in [
        <xref ref-type="bibr" rid="ref5">17, 5</xref>
        ].
      </p>
      <p>Figure 1 reports the evaluation results, where only the number of rules is converted to a
logarithmic scale to enhance the visibility of the results. Notably, all methods except REFNE produce a
set of rules covering all samples across all datasets, reaching 100% completeness. Regarding correctness,
the CORTEX method, with different formulations of the cost matrix, performs equally well or better than
the other methods. The reported results for the fidelity measure show that the DT model outperforms the other
methods across all datasets. However, for most datasets, CORTEX, CORTEX-c, and CORTEX-m can
be ranked second-best, right after the DT method. The results also reveal that CORTEX models are less
robust than other tree-based extractors, such as C4.5-PANE and TREPAN. At the same time, CORTEX
is competitive with, or better than, the other rule extractors in terms of robustness, depending on the
dataset. The superior robustness of C4.5-PANE over the other methods could be due to the augmentation
of training data with synthetic data in its training process. On the other hand, the good robustness of
TREPAN can be explained by the user-specified minimum number of samples required at a node before
choosing a splitting feature for that node. Nonetheless, the robustness of TREPAN and C4.5-PANE comes
with a trade-off regarding average rule length. As noted, both TREPAN and C4.5-PANE produce the
highest average rule lengths. CORTEX, CORTEX-c, and CORTEX-m produce rules significantly
shorter than those generated by TREPAN, C4.5-PANE, and DT, but still not shorter than those extracted
by REFNE. Despite generating the shortest rules, REFNE generates the sets with the highest number
of rules, followed by C4.5-PANE. While CORTEX may have neither the lowest average rule length nor
the smallest set of rules, it clearly shows the ability to balance the different metrics, establishing effective
performance.</p>
      <sec id="sec-4-1">
        <title>1Other rule extractors are obtained from https://github.com/giuliavilone/rule_extractor</title>
        <p>of rules, followed by C4.5-PANE. While CORTEX may not have the lowest average rule length nor
the smallest set of rules, it clearly shows the ability to balance diferent metrics, establishing efective
performance.</p>
      <p>A non-parametric Friedman test [26] is used to assess whether a specific tree-based or rule-based XAI
method performs significantly differently from the others according to the six analyzed metrics across the eight
datasets. The results of the Friedman test are evaluated using a significance level of 0.05. The p-values are
lower than the significance level for 6 out of 8 datasets, meaning the null hypothesis is rejected for these
datasets: there is evidence that some method performs consistently better (or worse) than the others.</p>
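      <p>As an illustration of the procedure, the following Python sketch applies the Friedman test with SciPy to three made-up score vectors; in the study, the test is run per dataset over the methods' results on the six metrics.</p>
      <preformat>
from scipy.stats import friedmanchisquare

# Made-up scores of three methods measured under the same conditions
# (one value per metric/run); only the procedure is illustrated here.
cortex = [0.91, 0.88, 0.93, 0.90, 0.89, 0.92]
trepan = [0.85, 0.83, 0.88, 0.84, 0.86, 0.85]
refne  = [0.78, 0.80, 0.75, 0.79, 0.77, 0.81]

stat, p = friedmanchisquare(cortex, trepan, refne)
# A p-value below 0.05 rejects the null hypothesis that all methods
# perform the same, i.e. some method ranks consistently differently.
print(f"chi2={stat:.2f}, p={p:.4f}")
      </preformat>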
      <p>The subsequent phase of the experimental procedure involves ranking the selected XAI methods
according to the six metrics. Initially, rankings were determined for each metric. These rankings were
then aggregated across all datasets, and the sum of ranks for each metric was normalized, yielding the
final normalized rankings. The results reported in Figure 2 indicate that CORTEX-c is ranked as the
best method for the abalone and contraceptive datasets. The baseline CORTEX method with the ratio-based cost
matrix achieves the highest rank for the mushroom and page_blocks datasets. The CORTEX-m method
is top-ranked only for the wine dataset, together with CORTEX-c. Therefore, for 5 out of 8 datasets, the
CORTEX method with different cost matrix formulations is ranked as the best method considering all
six measures used for the performance assessment. For the other 3 datasets, the CORTEX method is ranked
as the second-best model. However, choosing the best cost matrix definition is not straightforward; it
largely depends on the specific dataset.</p>
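      <p>A compact sketch of this rank aggregation, using SciPy's rankdata on made-up scores for a single metric, is given below; the study repeats the procedure over all six metrics before normalizing.</p>
      <preformat>
import numpy as np
from scipy.stats import rankdata

# Made-up scores for one metric: rows = datasets, columns = methods
# (higher is better for this illustrative metric).
scores = np.array([[0.91, 0.95, 0.78],
                   [0.88, 0.92, 0.80],
                   [0.93, 0.90, 0.75]])

# Rank the methods within each dataset (rank 1 = best), sum the
# ranks across datasets, and normalize the sums.
ranks = rankdata(-scores, axis=1)   # negate so higher score = rank 1
rank_sums = ranks.sum(axis=0)
print(rank_sums / rank_sums.sum())  # lower = better aggregated rank
      </preformat>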
      <p>The CORTEX method with different cost matrix formulations demonstrates competitive performance
compared to other tree-based models, showcasing its effectiveness in explaining black-box models on
diverse datasets. Furthermore, it surpasses the capabilities of some inherent rule-extraction techniques,
delivering superior results in terms of the analyzed quantitative measures of the degree of explainability.
Specifically, extracting smaller rule sets with shorter rules, on average, suggests the advantage of
using the CORTEX method over alternative methods. However, this advantage comes with the trade-off
of a slightly less accurate and robust model, although CORTEX balances this trade-off effectively. Overall,
the results underscore the potential of CORTEX as a powerful XAI tool for scenarios requiring clear,
human-understandable rules while maintaining good predictive performance.</p>
    </sec>
    <sec id="sec-5">
      <title>5. Concluding remarks</title>
      <p>In this paper, we have explored alternative cost matrix formulations in the cost-sensitive rule and tree
extraction method (CORTEX), using centroid and medoid distance-based cost matrices. By using the distance
among the centroids or medoids of classes, the distance-based costs differ when minority samples are
wrongly classified into different majority classes. Instead of centroids, medoids can be used as more representative
objects of each class cluster, especially in the presence of outliers and the class sub-concepts commonly
observed in class imbalance frameworks. The CORTEX method is compared against other tree-based
and rule-based XAI methods as a surrogate model for the underlying black-box model (a neural network).
Our study demonstrates that CORTEX offers competitive performance while addressing key limitations
of existing tree-based and rule-based methods, such as reduced interpretability due to deep trees, many
rules, and long rule lengths on average. Depending on the cost matrix used in CORTEX, smaller rule sets
with shorter rules can be produced at the cost of slightly reduced accuracy and robustness. To enhance
the robustness of the CORTEX method in future research, the training set could be augmented with
synthetic data. Overall, CORTEX effectively balances the interpretability-accuracy trade-off, since it can
generate understandable tree models without significantly compromising other performance measures.
Therefore, CORTEX is a valuable XAI tool for generating understandable rules while retaining good
predictive performance as a surrogate model for complex models in class imbalance frameworks.</p>
    </sec>
    <sec id="sec-6">
      <title>Declaration on Generative AI</title>
      <sec id="sec-6-1">
        <title>The authors have not employed any Generative AI tools.</title>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brcic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Khosravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lecue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Malgieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Páez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Speith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (xai) 2.0: A manifesto of open challenges and interdisciplinary research directions</article-title>
          ,
          <source>Information Fusion</source>
          <volume>106</volume>
          (
          <year>2024</year>
          )
          <article-title>102301</article-title>
          . doi:https://doi.org/10.1016/j.inffus.
          <year>2024</year>
          .
          <volume>102301</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>Development of a human-centred psychometric test for the evaluation of explanations produced by xai methods</article-title>
          , in: L.
          <string-name>
            <surname>Longo</surname>
          </string-name>
          (Ed.),
          <source>Explainable Artificial Intelligence</source>
          , Springer Nature Switzerland, Cham,
          <year>2023</year>
          , pp.
          <fpage>205</fpage>
          -
          <lpage>232</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ali</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Abuhmed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>El-Sappagh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K.</given-names>
            <surname>Muhammad</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. M.</given-names>
            <surname>Alonso-Moral</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. Del</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>N.</given-names>
            <surname>Díaz-Rodríguez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (xai): What we know and what is left to attain trustworthy artificial intelligence</article-title>
          ,
          <source>Information Fusion</source>
          <volume>99</volume>
          (
          <year>2023</year>
          )
          <article-title>101805</article-title>
          . doi:https://doi.org/10.1016/j.inffus.
          <year>2023</year>
          .
          <volume>101805</volume>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Monreale</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Ruggieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Turini</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Giannotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Pedreschi</surname>
          </string-name>
          ,
          <article-title>A survey of methods for explaining black box models 51 (</article-title>
          <year>2018</year>
          ). URL: https://doi.org/10.1145/3236009. doi:
          <volume>10</volume>
          .1145/ 3236009.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopanja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Savić</surname>
          </string-name>
          , L. Longo,
          <article-title>Cortex: A cost-sensitive rule and tree extraction method</article-title>
          ,
          <year>2025</year>
          . URL: https://arxiv.org/abs/2502.03200. arXiv:
          <volume>2502</volume>
          .
          <fpage>03200</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>P. D.</given-names>
            <surname>Turney</surname>
          </string-name>
          ,
          <article-title>Types of cost in inductive concept learning</article-title>
          ,
          <year>2002</year>
          . URL: https://arxiv.org/abs/cs/ 0212034. arXiv:cs/0212034.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B. A.</given-names>
            <surname>Correa</surname>
          </string-name>
          ,
          <article-title>Example-dependent cost-sensitive decision trees</article-title>
          ,
          <source>Expert Systems with Applications</source>
          <volume>42</volume>
          (
          <year>2015</year>
          )
          <fpage>6609</fpage>
          -
          <lpage>6619</lpage>
          . doi:
          <volume>10</volume>
          .1016/j.eswa.
          <year>2015</year>
          .
          <volume>04</volume>
          .042.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Rizzo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>A comparative analysis of rule-based, model-agnostic methods for explainable artificial intelligence</article-title>
          , in: L.
          <string-name>
            <surname>Longo</surname>
            ,
            <given-names>L.</given-names>
          </string-name>
          <string-name>
            <surname>Rizzo</surname>
            ,
            <given-names>E.</given-names>
          </string-name>
          <string-name>
            <surname>Hunter</surname>
            ,
            <given-names>A</given-names>
          </string-name>
          . Pakrashi (Eds.),
          <source>Proceedings of The 28th Irish Conference on Artificial Intelligence and Cognitive Science</source>
          , Dublin, Republic of Ireland, December 7-
          <issue>8</issue>
          ,
          <year>2020</year>
          , volume
          <volume>2771</volume>
          <source>of CEUR Workshop Proceedings, CEUR-WS.org</source>
          ,
          <year>2020</year>
          , pp.
          <fpage>85</fpage>
          -
          <lpage>96</lpage>
          . URL: http://ceur-ws.
          <source>org/</source>
          Vol-
          <volume>2771</volume>
          /AICS2020_paper_33.pdf.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>E.</given-names>
            <surname>Mekonnen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dondio</surname>
          </string-name>
          , L. Longo,
          <article-title>Explaining deep learning time series classification models using a decision tree-based post-hoc xai method</article-title>
          , volume
          <volume>3554</volume>
          ,
          <string-name>
            <surname>CEUR-WS</surname>
          </string-name>
          ,
          <year>2023</year>
          . doi:https: //doi.org/10.21427/9YKT-WZ47, publisher Copyright:
          <article-title>© 2023 CEUR-WS</article-title>
          .
          <article-title>All rights reserved</article-title>
          .
          <source>; Joint 1st World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium</source>
          , xAI-2023
          <string-name>
            <surname>: LB-D-</surname>
          </string-name>
          DC ; Conference date:
          <fpage>26</fpage>
          -
          <lpage>07</lpage>
          -2023 Through 28-
          <fpage>07</fpage>
          -
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>M. T.</given-names>
            <surname>Ribeiro</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Guestrin</surname>
          </string-name>
          ,
          <article-title>"why should i trust you?": Explaining the predictions of any</article-title>
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>