<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <article-id pub-id-type="doi">10.1145/2939672.2939778</article-id>
      <title-group>
        <article-title>A Novel Model-Agnostic xAI Method Guided by Cost-Sensitive Tree Models and Argumentative Decision Graphs</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Marija Kopanja</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>BioSense Institute</institution>
          ,
          <addr-line>Dr Zorana Djindjića 1, 21000 Novi Sad</addr-line>
          ,
          <country country="RS">Serbia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>Faculty of Sciences, University of Novi Sad</institution>
          ,
          <addr-line>Trg Dositeja Obradovića 3, 21000 Novi Sad</addr-line>
          ,
          <country country="RS">Serbia</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2016</year>
      </pub-date>
      <volume>9605</volume>
      <fpage>1135</fpage>
      <lpage>1144</lpage>
      <abstract>
<p>In recent years there has been increasing demand for comprehension and explainability of the inferences machine learning (ML) models make. Many explainable artificial intelligence (xAI) methods have been introduced as tools for better understanding the inference process of complex AI models. The doctoral research aims to develop a new model-agnostic xAI framework for classification tasks by using cost-sensitive decision trees and argumentative decision graphs. From the classification point of view, especially if a dataset is imbalanced, the cost-sensitive decision tree (CSDT) method can be used for generating an acceptably accurate ML model by taking the imbalance ratio into consideration during the tree-building procedure. On the other hand, from the explainability perspective, the generated cost-sensitive tree model can be more comprehensible than a tree model generated using a traditional (cost-insensitive) decision tree learning algorithm, due to the smaller size of the cost-sensitive tree. However, to have a more plausibly accurate ML model for a given imbalanced classification task, deep learning algorithms could be applied, leading to more complex, non-linear models whose decision-making process is hard to understand and explain. For such complex models, we can create a surrogate model that approximates the predictions of the underlying model as accurately as possible, while at the same time being interpretable and easy to explain. For the purpose of creating the surrogate model, a cost-sensitive decision tree learning algorithm can be used. By having a CSDT model, it is possible to obtain an explanation for any sample as a rule extracted from the tree. Thereby, we can consider the cost-sensitive tree as a rule-extraction xAI method. Current research shows that an argumentation graph can represent the logic of a complex model with fewer rules than a decision tree.
The aim of the study is to investigate possible ways of transforming a cost-sensitive tree model into an argumentative decision graph in order to create a more concise structure that should be more understandable. The final step of generating argument-based explanations is evaluation using both quantitative and human-centred analysis.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable Artificial Intelligence</kwd>
        <kwd>Model Agnostic Explanations</kwd>
        <kwd>Explainable Surrogate Models</kwd>
        <kwd>Cost-sensitive Decision Tree</kwd>
        <kwd>Argumentation</kwd>
        <kwd>Machine Learning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction and research motivation</title>
      <p>
        Predictive machine learning (ML) models play a crucial role in various fields, from finance
and agriculture to healthcare. As the availability of data increases exponentially, ML methods,
particularly deep learning methods, have led to the creation of powerful models. However, many
of these models are characterized by complex, non-linear structures that are challenging to
interpret and explain. One of the important factors when using an ML model in production,
regardless of the domain of application, or in research, is the interpretability of the model [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ].
Many explainable artificial intelligence (xAI) methods have been introduced as tools for a
better understanding of the inference process of complex ML models. There is a plethora of xAI
methods, and there have been many attempts to make a unified division of xAI methods [
        <xref ref-type="bibr" rid="ref2 ref3 ref4">2, 3, 4</xref>
        ].
Some approaches to categorizing xAI methods focus on the type of input data used
to train the ML model, others focus on the internal mechanisms of the xAI method, while some
focus on the scope, i.e. whether the xAI method generates local and/or global explanations.
Another way to categorize xAI methods is by determining whether a method is post-hoc or
ante-hoc. The former group of xAI methods enables an understanding of the black-box model
a posteriori, while the latter group tries to make the ML model naturally explainable. The
advantage of any post-hoc method is that there is no influence on the performance of the
black-box model, which is important due to the trade-off between predictive performance and
transparency, as the two objectives are in conflict [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. This problem and many other challenges
related to xAI are discussed in several papers [
        <xref ref-type="bibr" rid="ref6 ref7 ref8">6, 7, 8</xref>
        ].
      </p>
      <p>
        The doctoral research aims to develop a new post-hoc, model-agnostic xAI framework for
classification tasks by using the cost-sensitive decision tree (CSDT) method and argumentative
decision graphs. Creating the new method is motivated by the fact that generating an a posteriori
explanation can be the only solution for explaining already trained black-box ML models.
The method to be developed will be model-agnostic, hence without requirements in terms of
understanding the inner workings of the ML model to be explained. For any complex model, a
surrogate model can be created that approximates the predictions of the underlying model
as accurately as possible, while at the same time being interpretable and easy to explain. To
create a surrogate model, a CSDT learning algorithm can be used. By having a CSDT model, it
is possible to obtain an explanation for any sample as a rule extracted from the tree. Thereby,
we can consider a CSDT as a rule-extraction xAI method. Although tree-based models are
considered naturally transparent and interpretable [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], for a layman it can be difficult to
comprehend explanations given by a tree model, especially if the tree is large. A set of rules
extracted from the tree model should contain as few, and as short, rules as possible while
covering as many samples as possible [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ]. Current research [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ] shows that an argumentation graph can represent the
logic of a complex model with fewer rules than a decision tree. In our framework, one of the
objectives is to use a CSDT model, since the generated CSDT model can be more comprehensible
than a tree model generated using a traditional (cost-insensitive) decision tree learning
algorithm [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], due to the smaller tree size of the cost-sensitive tree. The rules extracted from
any tree-based model should mimic the inferential process of the complex ML model [
        <xref ref-type="bibr" rid="ref12 ref5 ref9">5, 12, 9</xref>
        ].
To bridge the gap created by the lack of transparency and the non-linearity of complex ML models, the
aim of the research is to develop a new xAI method that will be based on rules extracted from a
surrogate CSDT model, further transforming the rules into an argumentative decision graph.
      </p>
    </sec>
    <sec id="sec-2">
      <title>2. Key related works that frame the research</title>
      <sec id="sec-2-1">
        <title>2.1. Surrogate xAI models</title>
        <p>
          One of the most popular model-agnostic xAI approaches is creating a surrogate model for the
complex ML model to be explained [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. The surrogate model is created to accurately approximate
the predictions of the complex, black-box ML model, while still being interpretable. The only
requirement of the approach is to have the training data and the predictions of the model to be
explained. The surrogate model can be global or local, depending on whether the full original
dataset or only a subset of it is used for training the surrogate. For example, the LIME method [13]
is a local post-hoc model-agnostic explanation method, meaning it generates an explanation
by using a new set of samples in the proximity of the sample to be explained and training a
local interpretable linear model. Many studies have tried to improve LIME and
resolve its issues with stability (the problem of generating the same explanations for the same
sample across several runs) and local fidelity (the problem of the learned explanation model not
being a good local approximation of the model being explained), such as the ALIME method [14],
which uses an autoencoder to assign weights to samples and a linear model as the local surrogate.
Explanations provided by a local interpretable model, in the form of feature scores and a prediction
probability, can be hard to understand and interpret, since the feature scores do not add up to the
prediction probability. Therefore, other interpretable models, such as tree-based models, could be
used. In [15], a new approach, tree-ALIME, a modified version of ALIME that uses a
decision tree as the interpretable model, is proposed. Their evaluation of tree-ALIME shows that using a
decision tree as the local interpretable model is promising. However, the results also show that
using a decision tree instead of a linear model did not improve local fidelity, probably due
to the simplicity of the decision tree model (the maximal tree depth is set to 5) and the tendency of tree
models to overfit the data. More importantly, regarding interpretability, the decision tree model
achieved significantly better results than the linear model. Therefore, other tree-based
algorithms can be used in the proposed approach to tackle all the aforementioned challenges. Among
the abundance of tree-based models, it is possible to use the CSDT in the tree-ALIME approach
as the local interpretable model. On the other hand, any decision tree algorithm, including the
CSDT algorithm, can be used to create a global surrogate model, which might be an approach
more aligned with the aim of this research.
        </p>
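<p>The global-surrogate idea described above can be sketched in a few lines. The snippet below is a minimal illustration assuming scikit-learn and a synthetic imbalanced dataset; the MLP stands in for an arbitrary black box, and an ordinary decision tree stands in for the CSDT surrogate.</p>

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.9, 0.1], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Any opaque model can play the role of the black box to be explained.
black_box = MLPClassifier(max_iter=500, random_state=0).fit(X_train, y_train)

# The surrogate is trained on the black box's *predictions*, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0)
surrogate.fit(X_train, black_box.predict(X_train))

# Fidelity: how often the surrogate agrees with the black box on unseen data.
fidelity = accuracy_score(black_box.predict(X_test), surrogate.predict(X_test))
print(f"fidelity: {fidelity:.2f}")
```

<p>A high fidelity score indicates that explanations read off the surrogate can reasonably be attributed to the black box.</p>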
      </sec>
      <sec id="sec-2-2">
        <title>2.2. Cost-sensitive decision tree</title>
        <p>The cost-sensitive decision tree (CSDT) method [16] is an ML algorithm for generating a tree
model by considering a cost matrix during the tree-building procedure. The CSDT method
belongs to the group of cost-sensitive learning methods [17], which can be used in the
narrower, imbalanced-learning framework. This approach can be seen as an algorithm-level
solution to the class imbalance problem, since an existing classification
learning algorithm is adapted to improve performance with regard to the minority class. On the other
hand, data-level solutions assume different rebalancing techniques that make the data distribution
more balanced, each with its own limits and costs. Therefore, using an algorithm-level solution such as
the CSDT might be the more convenient option from the classification point of
view. From the explainability perspective, a cost-sensitive tree model can also be
more comprehensible than a model produced by a traditional decision tree learning algorithm.</p>
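<p>Although the CSDT method itself is not part of standard libraries, the algorithm-level idea can be illustrated with scikit-learn's <monospace>class_weight</monospace> parameter, which emulates class-dependent misclassification costs. This is a simplification of the full CSDT from [16], shown only to make the effect on the minority class concrete.</p>

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import recall_score

# A 95/5 imbalanced binary problem.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

plain = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_tr, y_tr)
# Penalise errors on the minority class 19x more, mirroring the imbalance ratio.
weighted = DecisionTreeClassifier(max_depth=3, class_weight={0: 1, 1: 19},
                                  random_state=0).fit(X_tr, y_tr)

print("minority recall, plain:   ", recall_score(y_te, plain.predict(X_te)))
print("minority recall, weighted:", recall_score(y_te, weighted.predict(X_te)))
```

<p>Minority-class recall typically improves under the weighting, usually at the cost of some precision.</p>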
        <p>
          The tree structure of the model enables us to create an explanation for each sample by following
the path from the root node to a leaf node of the tree. To create a CSDT model, it must
be given the test set, the prediction labels for the corresponding test set obtained from the
black-box ML model to be explained, and the cost matrix. In general, a cost matrix can be
either class-dependent (all samples from the same class share the same cost matrix) or
sample-dependent (each sample has its own cost matrix). Having a proper cost matrix defined is essential for
the cost-sensitive tree-building process, since the CSDT algorithm chooses the feature that reduces
the misclassification cost the most. That is, the CSDT uses a cost-sensitive splitting criterion,
and unlike a traditional decision tree, a cost-sensitive tree will classify the samples in a region
into the least costly class. The resulting product is a tree object, as in any other tree-based
ML algorithm, that is considered naturally transparent and explainable. Nevertheless, any
tree model can be hard to understand if the model is deep, and this might be the case if the
cost-sensitive tree model is used as a surrogate model. To be reliable, a CSDT surrogate model
must achieve high performance and be able to predict the same output as the complex ML model
before providing explanations. Therefore, the generated tree model might be deep, and hence it
can be hard to comprehend its inference process. All things considered, the doctoral research
broadens its scope into the argumentation framework, since rules can be seen as arguments in
the field of argumentation [
          <xref ref-type="bibr" rid="ref9">9</xref>
          ].
        </p>
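<p>The least-costly-class rule mentioned above can be made concrete. The following is a hypothetical sketch (not the authors' implementation): a leaf predicts the class that minimises the expected misclassification cost of the samples it contains, rather than the majority class.</p>

```python
import numpy as np

def least_costly_class(class_counts, cost_matrix):
    """class_counts[j]: number of samples of true class j in the leaf.
    cost_matrix[j][k]: cost of predicting class k when the truth is j."""
    counts = np.asarray(class_counts, dtype=float)
    costs = np.asarray(cost_matrix, dtype=float)
    # Expected cost of predicting class k = sum_j counts[j] * costs[j][k]
    expected = counts @ costs
    return int(np.argmin(expected))

# A leaf with 90 majority and 10 minority samples; missing a minority
# sample costs 15x more than a false alarm.
cost = [[0, 1], [15, 0]]
print(least_costly_class([90, 10], cost))  # -> 1: minority class wins
```

<p>Here the cost-sensitive label differs from the majority vote: the expected cost of predicting class 0 is 10 × 15 = 150, versus 90 × 1 = 90 for class 1.</p>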
      </sec>
      <sec id="sec-2-3">
        <title>2.3. Argumentation framework</title>
        <p>
          Argumentation is a multidisciplinary subfield of AI that studies how arguments can be presented
in a defeasible reasoning (a formalism for non-monotonic reasoning) process and how to
evaluate the validity of the conclusions reached at the end of the reasoning process [
          <xref ref-type="bibr" rid="ref10">10, 18,
19, 20</xref>
          ]. Argument-based systems are typically build upon multi-layer schema [21, 18, 19].
Argumentation has several important concepts: arguments, attacks and semantics [21]. The
arguments are rules and attacks are binary relations between two conflicting rules (arguments)
and three classes of conflicts can be distinguished [ 21]. A fundamental feature of
argumentbased system is ability to determine the success of an attack [
          <xref ref-type="bibr" rid="ref10">10</xref>
          ]. For example to decide if an
attack is valid the strengths of arguments or attacks can be used [21].
        </p>
        <p>Argumentative decision graphs (ADGs) have a rule-based structure in which each argument
has a single premise and a conclusion. A well-formed ADG can be extracted from a decision
tree by taking each terminal node in the tree to generate a predictive argument in the ADG,
while non-terminal nodes can be used as non-predictive arguments [22]. Attacks can be
generated between arguments with different features and conclusions that lie on disjoint paths
and lead to distinct terminal nodes.</p>
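<p>Extracting candidate arguments from a tree, as described above, amounts to enumerating root-to-leaf paths. A sketch using a fitted scikit-learn tree (the helper name and rule format are ours, for illustration only):</p>

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

def extract_rules(tree_model, feature_names):
    """Return each root-to-leaf path as (list of premises, conclusion)."""
    t = tree_model.tree_
    rules = []

    def walk(node, conditions):
        if t.children_left[node] == -1:  # -1 marks a leaf in sklearn trees
            label = int(t.value[node][0].argmax())
            rules.append((conditions, label))
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node], conditions + [f"{name} <= {thr:.2f}"])
        walk(t.children_right[node], conditions + [f"{name} > {thr:.2f}"])

    walk(0, [])
    return rules

data = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
for premises, conclusion in extract_rules(clf, list(data.feature_names)):
    print(" AND ".join(premises), "=>", data.target_names[conclusion])
```

<p>Each printed rule corresponds to a terminal node and is therefore a candidate predictive argument in the ADG sense.</p>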
        <p>In [22], a new argumentative decision graph method, xADG (extended
argumentative decision graph), is proposed, with an emphasis on decision trees and argumentative models.
The authors showed that, based on a tree model, the proposed method can create an extended argumentative
decision graph of equivalent inferential capability that can be perceived as more
understandable. Importantly, the derived argumentative model is guaranteed to maintain the same
inferential capability while being smaller in size. They analysed whether reasonably
smaller structures, in terms of the number of arguments/attacks and the amount of argument support,
can be achieved for classification tasks. Their results suggest that leveraging the structure and
inferential capability of a tree model with the proposed novel framework for structured
argumentation could be a good alternative for automating the creation of reasonably sized argumentation
frameworks.</p>
      </sec>
    </sec>
    <sec id="sec-3">
      <title>3. Specific research questions, hypothesis and objectives</title>
      <p>The doctoral research will be carried out in several phases, described in the following paragraphs
and depicted in the diagram (Figure 1).</p>
      <p>Having split the dataset into train and test subsets, the black-box ML model is trained
on the train subset and evaluated on the test subset. The next step is to provide insight into the
inference process of the complex ML model, which will be done in phases. The first phase is
creating a surrogate model by using the inherently interpretable cost-sensitive decision tree model.</p>
      <p>The objective of the second phase is to transform the rules obtained from the CSDT into an
argument-based representation. The process can be broken down into five layers [21, 18, 19]: 1. definition
of the internal structure of arguments; 2. definition of conflicts between arguments; 3. evaluation
of conflicts and definition of valid attacks; 4. definition of the dialectical status of arguments; 5.
accrual of acceptable arguments. One of our research questions related to argumentation and
the described multi-layer schema is whether a weighted notion of argument or attack should be
considered in our framework, where weights would represent the strength of an argument
or attack, measured through misclassification cost reduction. For example, if two paths
(rules) in the tree model are in conflict, a weight could be computed as the misclassification cost
of the samples belonging to the intersection of the covers of the two conflicting rules that are assigned
by the model to the target class of the conclusion of the attacking rule.</p>
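<p>One possible reading of the weighting scheme just sketched, with all names hypothetical: the weight of an attack accumulates the misclassification cost of the samples that lie in the intersection of the two rule covers and that the model assigns to the attacking rule's conclusion.</p>

```python
import numpy as np

def attack_weight(cover_a, cover_b, model_pred, conclusion_a, y_true, cost_matrix):
    """Weight of the attack from rule A to rule B (hypothetical definition)."""
    both = cover_a & cover_b                   # samples covered by both rules
    hit = both & (model_pred == conclusion_a)  # assigned to A's conclusion
    return sum(cost_matrix[y_true[i]][model_pred[i]] for i in np.flatnonzero(hit))

cover_a = np.array([True, True, False, True])
cover_b = np.array([True, False, False, True])
model_pred = np.array([1, 1, 0, 0])
y_true = np.array([0, 1, 0, 0])
cost = [[0, 1], [15, 0]]
# Intersection = samples 0 and 3; only sample 0 is predicted as class 1
# (A's conclusion); its true class is 0, so the weight is cost[0][1] = 1.
print(attack_weight(cover_a, cover_b, model_pred, 1, y_true, cost))  # -> 1
```

<p>Other aggregations (e.g. averaging instead of summing) are equally plausible; choosing among them is part of the stated research question.</p>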
      <p>Given a set of arguments with defined attacks, a further decision that must be made is which
arguments can be accepted. An algorithm designed to produce a set of acceptable and
conflict-free arguments is called a semantics [18]. Different semantics, such as grounded or preferred, can
be used, leading to a set of arguments with a status (rank). In [22] it is shown that the rules of a
tree model, exploited by an extension-based semantics such as grounded, result in an ADG with
the same set of inferences as the tree. Therefore, another question concerns the choice of a
semantics designed for handling the (weighted) argumentation framework.</p>
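<p>The grounded semantics mentioned above can be computed by a simple fixed-point iteration: repeatedly accept every argument whose attackers have all been defeated by already-accepted arguments. A minimal sketch for unweighted frameworks (function and variable names are ours):</p>

```python
def grounded_extension(arguments, attacks):
    """attacks: set of (attacker, target) pairs over the given arguments."""
    attackers = {a: {s for s, t in attacks if t == a} for a in arguments}
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a not in accepted and attackers[a] <= defeated:
                accepted.add(a)
                # Everything an accepted argument attacks is defeated.
                defeated |= {t for s, t in attacks if s == a}
                changed = True
    return accepted

# a attacks b, b attacks c: a is unattacked, b is defeated, c is reinstated.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
```

<p>A weighted variant would additionally compare argument or attack strengths before marking an argument as defeated, which is exactly the design choice under investigation.</p>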
      <p>The extended argumentative decision graph (xADG), proposed in [22], is a new framework
that allows arguments to use Boolean logic operators and multiple premises within their
internal structure, resulting in more concise argumentative graphs that may be easier for users
to understand. An xADG of inferential capability equivalent to the ADG is formed by performing
a set of modifications. We aim to test whether the proposed xADG framework can be applied to an ADG
built from a CSDT, and what modifications are needed if weighted argumentation is to be
used. Therefore, another research question we aim to answer is whether using the CSDT instead
of a standard decision tree algorithm to derive an argumentative decision graph would result in a
more comprehensible graph.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Current results and next steps</title>
      <p>
        To date, CSDT models have been trained on various datasets with different class imbalance ratios.
The current results show that a cost-sensitive tree model is less complex than the
traditional decision tree model at the same tree depth, without implementing a pruning
procedure [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ]. In further work, we aim to extend the number of datasets used for
comparison in order to test whether rules extracted from a cost-sensitive tree model are consistently
shorter than the rules extracted from a traditional decision tree model.
      </p>
      <p>In the next step, a CSDT will be created as a surrogate model for a complex ML model
such as a deep neural network. Afterwards, the CSDT model should be transformed into an
argumentative decision graph to generate simpler rules that are potentially more comprehensible,
as is done in [22].</p>
      <p>
        The final step of generating argument-based explanations will be evaluation. In general,
two ways of evaluating the interpretability of a model can be distinguished: quantitative and
human-centred evaluations. The latter can include domain experts and/or people unfamiliar
with concepts such as ML and xAI, in order to evaluate the explanations provided to
individuals with diverse knowledge. As is done in the studies [
        <xref ref-type="bibr" rid="ref10 ref9">9, 10</xref>
        ], we can select several
metrics to quantitatively assess the degree of explainability of the rules extracted from the
CSDT and the rules of the argumentation-based graph. For human-centred evaluation of
the produced explanations, a human-centred psychometric test [23] could be
used in future work. The developed argument-based model-agnostic xAI method should also be compared to other
rule-based and argument-based xAI methods [
        <xref ref-type="bibr" rid="ref10">10, 22</xref>
        ].
      </p>
    </sec>
    <sec id="sec-5">
      <title>5. Final contribution</title>
      <p>The end product of the doctoral research is a post-hoc, model-agnostic, argument-based xAI method
developed by extracting rules and their conflicts from CSDT models and integrating them into
an argumentation framework that can serve as a mechanism for interpreting and explaining the
inferential process of complex ML models. Leveraging the structure and inferential capability
of the CSDT with an argumentative decision graph could be a promising direction in automating the
creation of an argumentation framework of reasonable size that will be easier for
end-users to comprehend.</p>
    </sec>
    <sec id="sec-6">
      <title>Acknowledgments</title>
      <p>Supported by the ANTARES project, which has received funding from the European Union’s Horizon
2020 research and innovation programme under grant agreement SGA-CSA No. 739570 under FPA
No. 664387, https://doi.org/10.3030/739570, and by the Ministry of Education, Science and Technological
Development of the Republic of Serbia, grant agreement 451-03-47/2023-01/200358.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>[1] C. Molnar, G. Casalicchio, B. Bischl, Interpretable Machine Learning - A Brief History, State-of-the-Art and Challenges, Springer International Publishing, 2020, pp. 417-431. doi:10.1007/978-3-030-65965-3_28.</mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>[2] R. Guidotti, A. Monreale, S. Ruggieri, F. Turini, F. Giannotti, D. Pedreschi, A survey of methods for explaining black box models, ACM Comput. Surv. 51 (2018) 42. doi:10.1145/3236009.</mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>[3] A. Adadi, M. Berrada, Peeking inside the black-box: A survey on explainable artificial intelligence (XAI), IEEE Access PP (2018) 1-1. doi:10.1109/ACCESS.2018.2870052.</mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>[4] G. Vilone, L. Longo, Classification of explainable artificial intelligence methods through their output formats, Machine Learning and Knowledge Extraction 3 (2021) 615-661. doi:10.3390/make3030032.</mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>[5] F. K. Došilović, M. Brčić, N. Hlupić, Explainable artificial intelligence: A survey, in: 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018, pp. 0210-0215. doi:10.23919/MIPRO.2018.8400040.</mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>[6] A. Holzinger, A. Saranti, C. Molnar, P. Biecek, W. Samek, Explainable AI Methods - A Brief Overview, Springer International Publishing, Cham, 2022, pp. 13-38. doi:10.1007/978-3-031-04083-2_2.</mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brcic</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cabitza</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Choi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Confalonieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. D.</given-names>
            <surname>Ser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Guidotti</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Hayashi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Herrera</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Jiang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Khosravi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lecue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Malgieri</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Páez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>W.</given-names>
            <surname>Samek</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Schneider</surname>
          </string-name>
          ,
          <string-name>
            <given-names>T.</given-names>
            <surname>Speith</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Stumpf</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence (XAI) 2.0: A manifesto of open challenges and interdisciplinary research directions</article-title>
          ,
          <source>Information Fusion</source>
          <volume>106</volume>
          (
          <year>2024</year>
          )
          <fpage>102301</fpage>
          . doi:10.1016/j.inffus.2024.102301.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Goebel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Lecue</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <article-title>Explainable artificial intelligence: Concepts, applications, research challenges and visions</article-title>
          , in:
          <string-name>
            <given-names>A.</given-names>
            <surname>Holzinger</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Kieseberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A. M.</given-names>
            <surname>Tjoa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>E.</given-names>
            <surname>Weippl</surname>
          </string-name>
          (Eds.),
          <source>Machine Learning and Knowledge Extraction</source>
          , Springer International Publishing, Cham,
          <year>2020</year>
          , pp.
          <fpage>1</fpage>
          -
          <lpage>16</lpage>
          . doi:10.1007/978-3-030-57321-8_1.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>A quantitative evaluation of global, rule-based explanations of post-hoc, model agnostic methods</article-title>
          ,
          <source>Frontiers in Artificial Intelligence</source>
          <volume>4</volume>
          (
          <year>2021</year>
          )
          <fpage>160</fpage>
          . doi:10.3389/frai.2021.717899.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>G.</given-names>
            <surname>Vilone</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation</article-title>
          , volume
          <volume>3209</volume>
          , CEUR-WS,
          <year>2022</year>
          . doi:10.1007/978-3-031-04083-2_2, publisher Copyright: © 2022 Copyright for this paper by its authors;
          <source>1st International Workshop on Argumentation for eXplainable AI, ArgXAI 2022</source>
          ; Conference date: 12-09-2022.
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>M.</given-names>
            <surname>Kopanja</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Hačko</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Brdar</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Savić</surname>
          </string-name>
          ,
          <article-title>Cost-sensitive tree SHAP for explaining cost-sensitive tree-based models</article-title>
          ,
          <source>Computational Intelligence</source>
          <volume>40</volume>
          (
          <year>2024</year>
          )
          <fpage>e12651</fpage>
          . doi:10.1111/coin.12651.
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>E.</given-names>
            <surname>Mekonnen</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Dondio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Longo</surname>
          </string-name>
          ,
          <article-title>Explaining deep learning time series classification models using a decision tree-based post-hoc XAI method</article-title>
          , volume
          <volume>3554</volume>
          , CEUR-WS,
          <year>2023</year>
          . doi:10.21427/9YKT-WZ47, publisher Copyright: © 2023 CEUR-WS. All rights reserved.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>