<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">A Novel Model-Agnostic xAI Method Guided by Cost-Sensitive Tree Models and Argumentative Decision Graphs</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Marija</forename><surname>Kopanja</surname></persName>
							<email>marija.kopanja@biosense.rs</email>
							<affiliation key="aff0">
								<orgName type="department">Faculty of Sciences</orgName>
								<orgName type="institution">University of Novi Sad</orgName>
								<address>
									<addrLine>Trg Dositeja Obradovića 3</addrLine>
									<postCode>21000</postCode>
									<settlement>Novi Sad</settlement>
									<country key="RS">Serbia</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">BioSense Institute</orgName>
								<address>
									<addrLine>Dr Zorana Djindjića 1</addrLine>
									<postCode>21000</postCode>
									<settlement>Novi Sad</settlement>
									<country key="RS">Serbia</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">A Novel Model-Agnostic xAI Method Guided by Cost-Sensitive Tree Models and Argumentative Decision Graphs</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">DC41173586C6A3DD7DC7573968BF8247</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T19:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable Artificial Intelligence</term>
					<term>Model Agnostic Explanations</term>
					<term>Explainable Surrogate Models</term>
					<term>Cost-Sensitive Decision Tree</term>
					<term>Argumentation</term>
					<term>Machine Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In recent years there has been an increasing demand for comprehension and explainability of the inferences machine learning (ML) models make. Many explainable artificial intelligence (xAI) methods have been introduced as tools for better understanding the inference process of complex AI models. The doctoral research aims to develop a new model-agnostic xAI framework for classification tasks using cost-sensitive decision trees and argumentative decision graphs. From the classification point of view, especially when a dataset is imbalanced, the cost-sensitive decision tree (CSDT) method can be used to generate an acceptably accurate ML model by taking the imbalance ratio into consideration during the tree-building procedure. From the explainability perspective, on the other hand, a generated cost-sensitive tree model can be more comprehensible than a tree model generated with a traditional (cost-insensitive) decision tree learning algorithm, due to the smaller size of the cost-sensitive tree. However, to obtain a more accurate ML model for a given imbalanced classification task, deep learning algorithms could be applied, leading to more complex, non-linear models whose decision-making process is hard to understand and explain. For such complex models, we can create a surrogate model that approximates the predictions of the underlying model as accurately as possible while being interpretable and easy to explain. A cost-sensitive decision tree learning algorithm can be used to create the surrogate model. Given a CSDT model, an explanation for any sample can be obtained as a rule extracted from the tree; we can therefore consider the cost-sensitive tree a rule-extraction xAI method. Current research shows that an argumentation graph can represent the logic of a complex model with fewer rules than a decision tree. 
The aim of the study is to investigate possible ways of transforming a cost-sensitive tree model into an argumentative decision graph in order to create a more concise structure that should be more understandable. The final step in generating argument-based explanations is evaluation using both quantitative and human-centered analysis.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction and research motivation</head><p>Predictive machine learning (ML) models play a crucial role in various fields, from finance and agriculture to healthcare. As the availability of data increases exponentially, ML methods, particularly deep learning methods, have led to the creation of powerful models. However, many of these models are characterized by complex, non-linear structures that can be challenging to interpret and explain. One of the important factors when using an ML model in production, regardless of the application domain, or in research, is the interpretability of the model <ref type="bibr" target="#b0">[1]</ref>. Many explainable artificial intelligence (xAI) methods have been introduced as tools for a better understanding of the inference process of complex ML models. There is a plethora of xAI methods, and there have been many attempts to create a unified division of them <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3,</ref><ref type="bibr" target="#b3">4]</ref>. Some approaches to categorizing xAI methods focus on the type of input data used to train the ML model, others focus on the internal mechanisms of the xAI method, while some focus on the scope, i.e. whether the xAI method generates local and/or global explanations. Another way to divide xAI methods is by determining whether a method is post-hoc or ante-hoc. The former group of xAI methods enables an understanding of the black-box model a posteriori, while the latter group tries to make the ML model naturally explainable. The advantage of any post-hoc method is that it does not influence the performance of the black-box model, which is important because predictive performance and transparency are conflicting objectives that must be traded off <ref type="bibr" target="#b4">[5]</ref>. 
This problem and many other challenges related to xAI are discussed in several papers <ref type="bibr" target="#b5">[6,</ref><ref type="bibr" target="#b6">7,</ref><ref type="bibr" target="#b7">8]</ref>.</p><p>The doctoral research aims to develop a new post-hoc, model-agnostic xAI framework for classification tasks using the cost-sensitive decision tree (CSDT) method and argumentative decision graphs. The creation of the new method is motivated by the fact that generating an a posteriori explanation can be the only option for explaining already trained black-box ML models. The method to be developed will be model-agnostic, hence without any requirement to understand the inner workings of the ML model being explained. For any complex model, a surrogate model can be created that approximates the predictions of the underlying model as accurately as possible while being interpretable and easy to explain. A CSDT learning algorithm can be used to create the surrogate model. Given a CSDT model, it is possible to obtain an explanation for any sample as a rule extracted from the tree; we can therefore consider a CSDT a rule-extraction xAI method. Although tree-based models are considered naturally transparent and interpretable <ref type="bibr" target="#b5">[6]</ref>, a layperson can find it difficult to comprehend explanations given by a tree model, especially if the tree is large. A set of rules extracted from a tree model should contain as few, and as short, rules as possible while covering as many samples as possible <ref type="bibr" target="#b8">[9]</ref>. Current research <ref type="bibr" target="#b9">[10]</ref> shows that an argumentation graph can represent the logic of a complex model with fewer rules than a decision tree. 
One objective of our framework is to use the CSDT model, since a generated CSDT model can be more comprehensible than a tree model generated with a traditional (cost-insensitive) decision tree learning algorithm <ref type="bibr" target="#b10">[11]</ref>, due to the smaller size of the cost-sensitive tree. The rules extracted from any tree-based model should mimic the inferential process of the complex ML model <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b8">9]</ref>. To bridge the gap caused by the lack of transparency and the non-linearity of complex ML models, the research aims to develop a new xAI method based on rules extracted from a surrogate CSDT model, further transformed into an argumentative decision graph.</p></div>
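The global-surrogate pipeline described above can be sketched as follows. This is a minimal, illustrative sketch, not the CSDT method itself: a random forest stands in for the black box, and scikit-learn's `class_weight` parameter is used as a class-dependent-cost approximation of a full cost-sensitive tree; the dataset, cost ratio, and depth limit are all assumptions.

```python
# Sketch of a global surrogate for a black-box classifier. Hypothetical setup:
# a random forest plays the black box; class_weight approximates a
# class-dependent cost matrix (a stand-in for a full CSDT implementation).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Imbalanced toy data (roughly 9:1 class ratio).
X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
bb_labels = black_box.predict(X_tr)  # surrogate learns these, not the true y

# Cost-aware surrogate: misclassifying the minority class is 9x more costly.
surrogate = DecisionTreeClassifier(
    max_depth=4, class_weight={0: 1, 1: 9}, random_state=0
).fit(X_tr, bb_labels)

# Fidelity: how often the surrogate reproduces the black-box prediction.
fidelity = (surrogate.predict(X_te) == black_box.predict(X_te)).mean()
```

The key design point is that the surrogate is fitted to the black-box predictions rather than the ground-truth labels, so its rules approximate the model's inference process, not the data.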
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Key related works that frame the research</head><head n="2.1.">Surrogate xAI models</head><p>One of the most popular model-agnostic xAI approaches is creating a surrogate model for the complex ML model to be explained <ref type="bibr" target="#b2">[3]</ref>. The surrogate model is created to accurately approximate the predictions of the complex, black-box ML model while remaining interpretable. The only requirements for the approach are the training data and the predictions of the model to be explained. The surrogate model can be global or local, depending on whether the whole original dataset or just a subset of it is used for training. For example, the LIME method <ref type="bibr" target="#b12">[13]</ref> is a local, post-hoc, model-agnostic explanation method: it generates an explanation by using a new set of samples in the proximity of the sample to be explained and training a local interpretable linear model. Many studies have tried to improve LIME and resolve its issues with stability (the problem of generating the same explanations for the same sample across several runs) and local fidelity (the problem of the learned explanation model not being a good local approximation of the model being explained), such as the ALIME method <ref type="bibr" target="#b13">[14]</ref>, which uses autoencoders to assign sample weights and a linear model as the local surrogate. Explanations provided by a local interpretable model, in the form of feature scores and a prediction probability, can be hard to understand and interpret, since the feature scores do not add up to the prediction probability. Therefore, other interpretable models, such as tree-based models, could be used. In <ref type="bibr" target="#b14">[15]</ref>, a new approach, tree-ALIME, a modified version of ALIME that uses a decision tree as the interpretable model, is proposed. 
Their evaluation of tree-ALIME shows that using a decision tree as the local interpretable model is promising. However, the results show that using a decision tree instead of a linear model did not improve local fidelity, probably because of the simple decision tree used (the maximal depth is set to 5) and the tendency of tree models to overfit the data. More importantly, regarding interpretability, the decision tree model achieved significantly better results than the linear model. Therefore, other tree-based algorithms can be used in the proposed approach to tackle all the aforementioned challenges. Among the abundance of tree-based models, it is possible to use the CSDT as the local interpretable model in the tree-ALIME approach. On the other hand, any decision tree algorithm, including the CSDT algorithm, can be used to create a global surrogate model, which might be an approach more aligned with the aim of this research.</p></div>
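The LIME-style mechanism described above (perturb around a sample, weight by proximity, fit a weighted interpretable model) can be illustrated with plain NumPy. This is a minimal sketch under assumptions, not the actual LIME or tree-ALIME implementation: the black-box function, perturbation scale, and kernel width are all hypothetical.

```python
# Sketch of a local surrogate in the spirit of LIME: sample around the point
# to be explained, weight samples by proximity, fit a weighted linear model.
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Hypothetical non-linear black box whose prediction we want to explain.
    return (np.sin(X[:, 0]) + X[:, 1] ** 2 > 0.5).astype(float)

def local_linear_explanation(x, n_samples=500, width=0.75):
    """Return per-feature local importance scores for the sample x."""
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))  # perturbations
    y = black_box(Z)
    d = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(d ** 2) / width ** 2)                       # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])              # add intercept
    sw = np.sqrt(w)
    # Weighted least squares via sqrt-weight rescaling.
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # drop the intercept; keep feature scores

x0 = np.array([0.2, 1.0])
scores = local_linear_explanation(x0)
```

Replacing the weighted linear fit with a shallow (cost-sensitive) tree fitted on `(Z, y)` with sample weights `w` is exactly the substitution the tree-ALIME line of work explores.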
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Cost-sensitive decision tree</head><p>The cost-sensitive decision tree (CSDT) method <ref type="bibr" target="#b15">[16]</ref> is an ML algorithm that generates a tree model by considering a cost matrix during the tree-building procedure. The CSDT method belongs to the group of cost-sensitive learning methods <ref type="bibr" target="#b16">[17]</ref>, which can be used in the narrower framework of imbalanced learning. This approach can be seen as an algorithm-level solution to the class imbalance problem, since an existing classification learning algorithm is adapted to improve performance with regard to the minority class. Data-level solutions, on the other hand, rely on various rebalancing techniques to make the data distribution more balanced, and have their own limits and costs. Therefore, using an algorithm-level solution such as the CSDT model might be the more convenient option from the classification point of view. From the explainability perspective, a cost-sensitive tree model can be more comprehensible than a model produced by a traditional decision tree learning algorithm.</p><p>The tree structure of the model enables us to create an explanation for each sample by following the path from the root node to a leaf node of the tree. To create a CSDT surrogate model, the following must be given: the test set, the prediction labels for that test set obtained from the black-box ML model to be explained, and the cost matrix. In general, a cost matrix can be either class-dependent (all samples from the same class share the same cost matrix) or sample-dependent (each sample has its own cost matrix). Defining a proper cost matrix is essential for the cost-sensitive tree-building process, since the CSDT algorithm chooses the feature that reduces the misclassification cost the most. 
That is, the CSDT uses a cost-sensitive splitting criterion and, unlike a traditional decision tree, classifies the samples in a region into the least costly class. The resulting product is a tree object, as in any other tree-based ML algorithm, which is considered naturally transparent and explainable. Nevertheless, any tree model can be hard to understand if it is deep, and this might be the case when a cost-sensitive tree model is used as a surrogate model. To be reliable, a CSDT surrogate model must achieve high performance and be able to predict the same output as the complex ML model before providing explanations. The generated tree model might therefore be deep, and hence its inference process can be hard to comprehend. All things considered, the doctoral research broadens its scope into the argumentation framework, since rules can be seen as arguments in the field of argumentation <ref type="bibr" target="#b8">[9]</ref>.</p></div>
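The two cost-sensitive choices named above — labelling a region with the least costly class, and scoring splits by the misclassification cost they leave behind — can be made concrete in a few lines. This is an illustrative sketch, not the CSDT algorithm of the cited work; the cost matrix and toy data are assumptions.

```python
# Minimal sketch of cost-sensitive decisions inside a tree builder.
import numpy as np

# COST[i, j] = cost of predicting class j when the true class is i.
COST = np.array([[0.0, 1.0],
                 [9.0, 0.0]])  # missing the minority class (1) is 9x worse

def leaf_cost(y):
    """Label a region with the least costly class; return (label, total cost)."""
    counts = np.bincount(y, minlength=COST.shape[0])
    per_class_cost = counts @ COST  # total cost of predicting each class j
    label = int(per_class_cost.argmin())
    return label, float(per_class_cost[label])

def split_cost(x, y, threshold):
    """Cost-sensitive splitting criterion: total cost of the two child regions."""
    left, right = y[x <= threshold], y[x > threshold]
    return leaf_cost(left)[1] + leaf_cost(right)[1]

# Pick the candidate threshold that minimizes the remaining cost.
x = np.array([0.1, 0.2, 0.3, 0.8, 0.9])
y = np.array([0,   0,   1,   0,   1])
best_t = min([0.15, 0.25, 0.85], key=lambda t: split_cost(x, y, t))
```

With a symmetric cost matrix this reduces to majority voting; the asymmetric costs are what bias both the split choice and the leaf label toward the minority class.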
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.3.">Argumentation framework</head><p>Argumentation is a multidisciplinary subfield of AI that studies how arguments can be presented in a defeasible reasoning process (a formalism for non-monotonic reasoning) and how to evaluate the validity of the conclusions reached at the end of that process <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19,</ref><ref type="bibr" target="#b19">20]</ref>. Argument-based systems are typically built upon a multi-layer schema <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>. Argumentation has several important concepts: arguments, attacks, and semantics <ref type="bibr" target="#b20">[21]</ref>. Arguments are rules, attacks are binary relations between two conflicting rules (arguments), and three classes of conflicts can be distinguished <ref type="bibr" target="#b20">[21]</ref>. A fundamental feature of an argument-based system is the ability to determine the success of an attack <ref type="bibr" target="#b9">[10]</ref>. For example, the strengths of arguments or attacks can be used to decide whether an attack is valid <ref type="bibr" target="#b20">[21]</ref>.</p><p>Argumentative decision graphs (ADGs) have a rule-based structure where each argument has a single premise and a conclusion. A well-formed ADG can be extracted from a decision tree by taking each terminal node of the tree to generate a predictive argument in the ADG, while non-terminal nodes can be used as non-predictive arguments <ref type="bibr" target="#b21">[22]</ref>. 
Attacks can be generated between arguments with different features and conclusions that lie on disjoint paths and lead to distinct terminal nodes.</p><p>In <ref type="bibr" target="#b21">[22]</ref>, a new argumentative decision graph method, the xADG (extended argumentative decision graph), is proposed, with the emphasis on decision trees and argumentative models. The authors showed that, based on a tree model, the proposed method can create an extended argumentative decision graph of equivalent inferential capability that can be perceived as more understandable. Importantly, the derived argumentative model is guaranteed to maintain the same inferential capability while being smaller in size. They analysed whether reasonably smaller structures, in terms of the number of arguments/attacks and the amount of argument support, can be achieved for classification tasks. Their results suggest that leveraging the structure and inferential capability of a tree model with the proposed novel framework for structured argumentation could be a good alternative for automating the creation of reasonably sized argumentation frameworks.</p></div>
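The argument and attack notions above can be sketched with simple data structures. This is an illustrative toy, not the ADG/xADG construction of the cited work: the three arguments below stand for root-to-leaf rules of a hypothetical credit-scoring tree, and the attack test encodes only the "different conclusions, disjoint premises" condition.

```python
# Sketch: tree-path rules as arguments, and attacks between conflicting rules.
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    premises: tuple   # e.g. (("income", "<=", 40),)
    conclusion: str   # predicted class label

# Arguments read off the root-to-leaf paths of a hypothetical small tree.
args = [
    Argument((("income", "<=", 40),), "deny"),
    Argument((("income", ">", 40), ("age", "<=", 25)), "deny"),
    Argument((("income", ">", 40), ("age", ">", 25)), "approve"),
]

def attacks(a, b):
    """a attacks b if they reach different conclusions from disjoint premises."""
    return a.conclusion != b.conclusion and not set(a.premises) & set(b.premises)

attack_pairs = [(a, b) for a in args for b in args if a is not b and attacks(a, b)]
```

Here only the first and third arguments attack each other: the second and third disagree on the conclusion but share the premise `("income", ">", 40)`, so they are not on disjoint paths.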
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Specific research questions, hypothesis and objectives</head><p>The doctoral research will be carried out in several phases, described in the following paragraphs and depicted in the diagram (Figure <ref type="figure" target="#fig_0">1</ref>).</p><p>With the dataset split into train and test subsets, the black-box ML model is trained on the train subset and evaluated on the test subset. The next step is to provide insight into the inference process of the complex ML model, which will be done in phases. The first phase is creating a surrogate model using the inherently interpretable cost-sensitive decision tree model.</p><p>The objective of the second phase is to transform the rules obtained from the CSDT into an argument-based representation. The process can be broken down into five layers <ref type="bibr" target="#b20">[21,</ref><ref type="bibr" target="#b17">18,</ref><ref type="bibr" target="#b18">19]</ref>: 1. definition of the internal structure of arguments; 2. definition of conflicts between arguments; 3. evaluation of conflicts and definition of valid attacks; 4. definition of the dialectical status of arguments; 5. accrual of acceptable arguments. One of our research questions related to argumentation and the described multi-layer schema is whether a weighted notion of argument or attack should be considered in our framework, where weights would represent the strength of an argument or attack measured in terms of misclassification cost reduction. For example, if two paths (rules) in the tree model are in conflict, weights could be computed as the misclassification cost of the samples belonging to the intersection of the covers of the two conflicting rules that the model assigns to the target class of the conclusion of the attacking rule.</p><p>Given a set of arguments with defined attacks, a further decision that must be made is which arguments can be accepted. 
An algorithm designed to produce a set of acceptable and conflict-free arguments is called a semantics <ref type="bibr" target="#b17">[18]</ref>. Different semantics, such as grounded or preferred, can be used, leading to a set of arguments with a status (rank). In <ref type="bibr" target="#b21">[22]</ref> it is shown that exploiting the rules of a tree model with an extension-based semantics, such as grounded, results in an ADG with the same set of inferences as the tree. Therefore, another question concerns the choice of semantics for handling the (weighted) argumentation framework.</p><p>The extended argumentative decision graph (xADG), proposed in <ref type="bibr" target="#b21">[22]</ref>, is a new framework that allows arguments to use boolean logic operators and multiple premises within their internal structure, resulting in more concise argumentative graphs that may be easier for users to understand. The xADG, of equivalent inferential capability to the ADG, is formed by performing a set of modifications. We aim to test whether the proposed xADG framework can be applied to an ADG built from a CSDT, and what modifications are needed if weighted argumentation is to be used. Therefore, another research question we aim to answer is whether using the CSDT instead of a standard decision tree algorithm to derive an argumentative decision graph would result in a more comprehensible graph.</p></div>
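The grounded semantics mentioned above can be computed by a simple fixed-point iteration over an abstract argumentation framework: repeatedly accept every argument all of whose attackers have already been defeated. The sketch below uses a toy three-argument framework (an assumption for illustration), not a framework derived from a CSDT.

```python
# Sketch of grounded semantics for an abstract argumentation framework.
def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs.
    Returns the grounded extension (least fixed point of acceptance)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, t) in attacks if t == a}
            if attackers <= defeated:  # all attackers are out -> accept a
                accepted.add(a)
                defeated |= {t for (x, t) in attacks if x == a}
                changed = True
    return accepted

# Toy framework: A is unattacked, A attacks B, B attacks C.
args = {"A", "B", "C"}
atk = {("A", "B"), ("B", "C")}
ext = grounded_extension(args, atk)
```

A is unattacked and therefore accepted; accepting A defeats B, which reinstates C, so the grounded extension is {A, C}. A weighted variant, as raised in the research question above, would additionally filter `atk` by attack strength before this computation.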
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Current results and next steps</head><p>To date, CSDT models have been trained on various datasets with different class imbalance ratios. The current results show that a cost-sensitive tree model is less complex than a traditional decision tree model of the same depth, without any pruning procedure <ref type="bibr" target="#b10">[11]</ref>. In further work, we aim to extend the number of datasets used for comparison in order to test whether the rules extracted from a cost-sensitive tree model are consistently shorter than the rules extracted from a traditional decision tree model.</p><p>In the next step, a CSDT will be created as a surrogate model for a complex ML model such as a deep neural network. Afterwards, the CSDT model should be transformed into an argumentative decision graph to generate simpler rules that are potentially more comprehensible, as is done in <ref type="bibr" target="#b21">[22]</ref>.</p><p>The final step of generating argument-based explanations will be evaluation. In general, two ways of evaluating the interpretability of a model can be distinguished: quantitative and human-centered evaluation. The latter can include domain experts and/or people unfamiliar with concepts such as ML and xAI, in order to evaluate the explanations as provided to individuals with diverse knowledge. As is done in the studies <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b9">10]</ref>, we can select several metrics to quantitatively assess the degree of explainability of the rules extracted from the CSDT and the rules of the argumentation-based graph. For human-centred evaluation of the produced explanations, the human-centred psychometric test <ref type="bibr" target="#b22">[23]</ref> could be used in future work. 
The developed argument-based, model-agnostic xAI method should also be compared to other rule-based and argument-based xAI methods <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b21">22]</ref>.</p></div>
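One of the quantitative comparisons planned above — rule count and rule length for the tree rules versus the merged graph rules — can be sketched as follows. The rule sets and metric below are illustrative assumptions, not measured results from the doctoral research.

```python
# Sketch of a quantitative explainability comparison between two rule sets:
# fewer rules with fewer premises are taken as more comprehensible.
def avg_rule_length(rules):
    """Average number of premises per rule."""
    return sum(len(r) for r in rules) / len(rules)

# Hypothetical rules: three tree paths vs. two graph rules merged with
# boolean operators (as the xADG line of work allows).
tree_rules  = [["f1<=3", "f2>1", "f3<=0"], ["f1<=3", "f2<=1"], ["f1>3"]]
graph_rules = [["f1<=3 and f2>1"], ["f1>3 or f2<=1"]]

comparison = {
    "n_rules": (len(tree_rules), len(graph_rules)),
    "avg_len": (avg_rule_length(tree_rules), avg_rule_length(graph_rules)),
}
```

Additional metrics from the cited studies (e.g. rule coverage or overlap) would slot into the same dictionary alongside these two.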
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Final contribution</head><p>The end product of the doctoral research is a post-hoc, model-agnostic, argument-based xAI method developed by extracting rules and their conflicts from CSDT models and integrating them into an argumentation framework that can serve as a mechanism for interpreting and explaining the inferential process of complex ML models. Leveraging the structure and inferential capability of the CSDT with an argumentative decision graph could be a promising direction for automating the creation of reasonably sized argumentation frameworks that are easier for end-users to comprehend.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Conceptual framework of the new post-hoc, model-agnostic xAI method.</figDesc><graphic coords="5,91.44,84.19,412.40,120.40" type="bitmap" /></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Supported by the ANTARES project, which has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement SGA-CSA No. 739570 under FPA No. 664387, https://doi.org/10.3030/739570, and by the Ministry of Education, Science and Technological Development of the Republic of Serbia, grant agreement 451-03-47/2023-01/200358.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Interpretable Machine Learning -A Brief History, Stateof-the-Art and Challenges</title>
		<author>
			<persName><forename type="first">C</forename><surname>Molnar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalicchio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-65965-3_28</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="417" to="431" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A survey of methods for explaining black box models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Monreale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ruggieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Turini</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Giannotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Pedreschi</surname></persName>
		</author>
		<idno type="DOI">10.1145/3236009</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Comput. Surv</title>
		<imprint>
			<biblScope unit="volume">51</biblScope>
			<biblScope unit="page">42</biblScope>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Peeking inside the black-box: A survey on explainable artificial intelligence (xai)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2018.2870052</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access PP</title>
		<imprint>
			<biblScope unit="page" from="1" to="1" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Classification of explainable artificial intelligence methods through their output formats</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vilone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.3390/make3030032</idno>
	</analytic>
	<monogr>
		<title level="j">Machine Learning and Knowledge Extraction</title>
		<imprint>
			<biblScope unit="volume">3</biblScope>
			<biblScope unit="page" from="615" to="661" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence: A survey</title>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">K</forename><surname>Došilović</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brčić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Hlupić</surname></persName>
		</author>
		<idno type="DOI">10.23919/MIPRO.2018.8400040</idno>
	</analytic>
	<monogr>
		<title level="m">2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics</title>
				<meeting><address><addrLine>MIPRO)</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="page" from="210" to="0215" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<monogr>
		<title level="m" type="main">Explainable AI Methods -A Brief Overview</title>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Saranti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Molnar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Biecek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-04083-2_2</idno>
		<imprint>
			<date type="published" when="2022">2022</date>
			<publisher>Springer International Publishing</publisher>
			<biblScope unit="page" from="13" to="38" />
			<pubPlace>Cham</pubPlace>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence (xai) 2.0: A manifesto of open challenges and interdisciplinary research directions</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brcic</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Cabitza</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Choi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Confalonieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">D</forename><surname>Ser</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Hayashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Herrera</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jiang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Khosravi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lecue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Malgieri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Páez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Samek</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Schneider</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Speith</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Stumpf</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.inffus.2024.102301</idno>
		<ptr target="https://doi.org/10.1016/j.inffus.2024.102301" />
	</analytic>
	<monogr>
		<title level="j">Information Fusion</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page">102301</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Explainable artificial intelligence: Concepts, applications, research challenges and visions</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Goebel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Lecue</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kieseberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-57321-8_1</idno>
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Extraction</title>
				<editor>
			<persName><forename type="first">A</forename><surname>Holzinger</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Kieseberg</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">M</forename><surname>Tjoa</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Weippl</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="1" to="16" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A quantitative evaluation of global, rule-based explanations of post-hoc, model agnostic methods</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vilone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.3389/frai.2021.717899</idno>
	</analytic>
	<monogr>
		<title level="j">Frontiers in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<biblScope unit="page">717899</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">A global model-agnostic XAI method for the automatic formation of an abstract argumentation framework and its objective evaluation</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vilone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-04083-2_2</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-04083-2_2" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 1st International Workshop on Argumentation for eXplainable AI (ArgXAI 2022)</title>
				<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">3209</biblScope>
			<biblScope unit="page" from="12" to="19" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Cost-sensitive tree SHAP for explaining cost-sensitive tree-based models</title>
		<author>
			<persName><forename type="first">M</forename><surname>Kopanja</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Hačko</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Brdar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Savić</surname></persName>
		</author>
		<idno type="DOI">10.1111/coin.12651</idno>
		<ptr target="https://doi.org/10.1111/coin.12651" />
	</analytic>
	<monogr>
		<title level="j">Computational Intelligence</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page">e12651</biblScope>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Explaining deep learning time series classification models using a decision tree-based post-hoc XAI method</title>
		<author>
			<persName><forename type="first">E</forename><surname>Mekonnen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dondio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.21427/9YKT-WZ47</idno>
		<ptr target="https://doi.org/10.21427/9YKT-WZ47" />
	</analytic>
	<monogr>
		<title level="m">World Conference on eXplainable Artificial Intelligence: Late-Breaking Work, Demos and Doctoral Consortium, xAI-2023: LB-D-DC</title>
				<imprint>
			<publisher>CEUR-WS</publisher>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">3554</biblScope>
			<biblScope unit="page" from="28" to="35" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">&quot;Why should I trust you?&quot;: Explaining the predictions of any classifier</title>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Singh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
		<idno type="DOI">10.1145/2939672.2939778</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD &apos;16<address><addrLine>New York, NY, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Association for Computing Machinery</publisher>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="1135" to="1144" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">ALIME: Autoencoder based approach for local interpretability</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Shankaranarayana</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Runje</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:202539758" />
	</analytic>
	<monogr>
		<title level="m">Intelligent Data Engineering and Automated Learning - IDEAL 2019</title>
				<editor>
			<persName><forename type="first">H</forename><surname>Yin</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">D</forename><surname>Camacho</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">P</forename><surname>Tino</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Tallón-Ballesteros</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Menezes</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">R</forename><surname>Allmendinger</surname></persName>
		</editor>
		<meeting><address><addrLine>Cham</addrLine></address></meeting>
		<imprint>
			<publisher>Springer International Publishing</publisher>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="454" to="463" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<monogr>
		<title level="m" type="main">Using decision tree as local interpretable model in autoencoder-based lime</title>
		<author>
			<persName><forename type="first">N</forename><surname>Ranjbar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Safabakhsh</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2204.03321" />
		<imprint>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">Example-dependent cost-sensitive decision trees</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">A</forename><surname>Correa</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2015.04.042</idno>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="6609" to="6619" />
			<date type="published" when="2015">2015</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">The foundations of cost-sensitive learning</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">P</forename><surname>Elkan</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:16149383" />
	</analytic>
	<monogr>
		<title level="m">International Joint Conference on Artificial Intelligence</title>
				<imprint>
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A qualitative investigation of the explainability of defeasible argumentation and non-monotonic fuzzy reasoning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.21427/tby8-8z04</idno>
		<ptr target="https://doi.org/10.21427/tby8-8z04" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science</title>
				<meeting>the 26th AIAI Irish Conference on Artificial Intelligence and Cognitive Science<address><addrLine>Trinity College Dublin, Dublin, Ireland</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2018">December 6-7, 2018</date>
			<biblScope unit="page" from="138" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Examining the modelling capabilities of defeasible argumentation and non-monotonic fuzzy reasoning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Rizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Dondio</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.knosys.2020.106514</idno>
		<ptr target="https://doi.org/10.1016/j.knosys.2020.106514" />
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">211</biblScope>
			<biblScope unit="page">106514</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">An empirical evaluation of the inferential capacity of defeasible argumentation, non-monotonic fuzzy reasoning and expert systems</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rizzo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.eswa.2020.113220</idno>
		<ptr target="https://doi.org/10.1016/j.eswa.2020.113220" />
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="page">113220</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<title level="m" type="main">Argumentation for Knowledge Representation, Conflict Resolution, Defeasible Inference and Its Integration with Machine Learning</title>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-319-50478-0_9</idno>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="volume">9605</biblScope>
			<biblScope unit="page" from="183" to="208" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">A Novel Structured Argumentation Framework for Improved Explainability of Classification Tasks</title>
		<author>
			<persName><forename type="first">L</forename><surname>Rizzo</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-44070-0_20</idno>
	</analytic>
	<monogr>
		<title level="m">Explainable Artificial Intelligence</title>
		<imprint>
			<publisher>Springer Nature</publisher>
			<biblScope unit="page" from="399" to="414" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Development of a Human-Centred Psychometric Test for the Evaluation of Explanations Produced by XAI Methods</title>
		<author>
			<persName><forename type="first">G</forename><surname>Vilone</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-44070-0_11</idno>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="205" to="232" />
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
