<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Evaluating the Reliability of Shapley Value Estimates: An Interval-Based Approach</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Davide</forename><surname>Napolitano</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Politecnico di Torino</orgName>
								<address>
									<addrLine>Corso Duca degli Abruzzi 24</addrLine>
									<postCode>10129</postCode>
									<settlement>Torino</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Luca</forename><surname>Cagliero</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">Politecnico di Torino</orgName>
								<address>
									<addrLine>Corso Duca degli Abruzzi 24</addrLine>
									<postCode>10129</postCode>
									<settlement>Torino</settlement>
									<country key="IT">Italy</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Evaluating the Reliability of Shapley Value Estimates: An Interval-Based Approach</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">43DEBDB4664DDFDCC269B835E7936226</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:48+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Explainable Artificial Intelligence</term>
					<term>Interval Shapley Values</term>
					<term>Feature Importance</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Shapley Values (SVs) are a game-theoretic concept that has recently found application in Artificial Intelligence. They are exploited to explain models by quantifying each feature's contribution to the predictor's estimates. However, the reliability of the estimated SVs is often not thoroughly assessed. In this context, we leverage Interval Shapley Values (ISVs) to evaluate the importance and reliability of features' contributions when the classifier is an ensemble method. This paper presents a suite of ISVs estimators based on exact estimation, linear regression, and Monte Carlo sampling. In detail, we adapt classical SVs estimators to ISV-like concepts to efficiently handle real tabular datasets. We also provide a set of ad hoc performance metrics and visualization techniques that can be used to explore models' results under multiple aspects.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>Shapley Values (SVs), originally formulated in coalitional game theory <ref type="bibr" target="#b0">[1]</ref>, are now widely used to generate post-hoc explanations for classifiers that assign discrete classes to unlabeled samples. In detail, SVs quantify the contribution of each input feature to a given classifier's prediction and, although they may not always accurately reflect feature importance <ref type="bibr" target="#b1">[2]</ref>, these contributions can be estimated on a per-sample basis (locally) or aggregated to provide insights into the overall behavior of the model (globally) <ref type="bibr" target="#b2">[3]</ref>.</p><p>When a model comprises multiple predictors, estimating the contributions of individual features becomes challenging, as each feature may influence each predictor differently. In some cases, certain predictors might entirely disregard features crucial to others. This implies that the performance of the various predictors can vary substantially, which is directly reflected in the contributions attributed to the various features. Therefore, taking into account the contribution of each predictor makes the explanations robust to variability in the estimates (see Figure <ref type="figure" target="#fig_2">2</ref>).</p><p>To model the variability of SVs across multiple predictors, we rely on the concept of Interval Shapley Values (ISVs) <ref type="bibr" target="#b3">[4]</ref>. Derived from the field of cooperative interval games, they can be used to estimate SVs in the presence of uncertainty by encompassing different predictor outcomes, which are neglected in standard SVs. 
To ensure tractable and scalable computation on real data, we focus on Interval Shapley-Like Values (ISLVs), which are known to approximate ISVs <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6]</ref>.</p><p>HI-AI@KDD, Human-Interpretable AI Workshop at the KDD 2024, 26th of August 2024, Barcelona, Spain. davide.napolitano@polito.it (D. Napolitano); luca.cagliero@polito.it (L. Cagliero)</p><p>Hereafter, we present a suite of algorithms adapted to explain combinations of predictors with ISLVs. They quantify the features' importance for the ensemble method's outcomes while explicitly indicating the reliability of such estimates. This is crucial to trust models' explanations and to compare the outcomes of different estimators. The suite includes approaches to SVs estimation adapted to successfully handle Interval-based scenarios. Specifically, differently from the neural approaches proposed in <ref type="bibr" target="#b6">[7]</ref>, we focus on a linear regressor, a Monte Carlo sampling strategy, and an Exact estimator, aiming to incorporate all implementations in the BONES <ref type="bibr" target="#b7">[8]</ref> library. To allow end-users to explore and compare the outcomes of Interval-based approaches, the suite supports ad hoc performance metrics, extended from the standard SVs scenario to support interval-level evaluations. The metrics can be visualized to ease model comparisons and complexity analysis.</p><p>The remainder of this paper is organized as follows. Section 2 introduces the preliminary notions. Section 3 describes the suite of Interval-based approaches. Section 4 shows examples of outcomes and comparisons. Finally, Section 5 draws the conclusions of the work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Preliminaries</head><p>In a cooperative game, the Shapley Value 𝜑 𝑖 represents the contribution of a single player 𝑖 to the total payoff of a group of players 𝑃 <ref type="bibr" target="#b8">[9]</ref>, where 𝜑 𝑖 is equal to the sum of the weighted marginal contributions of 𝑖 to 𝑃 over all possible coalitions 𝑆 ⊆ 𝑃 . Beyond explaining an individual sample 𝑥, Shapley Values can be leveraged to provide a global explanation of the dataset by averaging sample-level contributions <ref type="bibr" target="#b9">[10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>Suppose we have the outcome 𝑝𝑟 𝑥 of an ensemble M of predictors on a sample 𝑥 with a confidence interval [𝑝𝑟 𝑥 , 𝑝𝑟 𝑥 ]. In compliance with <ref type="bibr" target="#b3">[4]</ref>, we define the Coalitional Interval Game <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b12">13]</ref> as a pair (𝑤, 𝑃 ), where 𝑤: 2 𝑃 → 𝐼(R) is a function that maps an arbitrary coalition 𝑆 ⊆ 𝑃 to the corresponding confidence interval 𝑤(𝑆): [𝑤(𝑆), 𝑤(𝑆)] = [𝑝𝑟(𝑆), 𝑝𝑟(𝑆)].</p><p>To explain ensemble methods we use the concept of Interval Shapley Values (ISVs) <ref type="bibr" target="#b3">[4]</ref>, which associates with each Coalitional Interval Game (𝑤, 𝑃 ) a payoff vector where each component is a compact interval of real numbers <ref type="bibr" target="#b13">[14]</ref>. In a nutshell, ISVs capture the range of contributions of a feature 𝑝 by evaluating the interval values across all possible feature combinations. 
ISVs have to satisfy two notable properties:</p><p>• Partial Subtractor: given two intervals 𝐼 and 𝐽, the Partial Subtraction 𝐼 − 𝐽 is defined as</p><formula xml:id="formula_0">[𝐼 − 𝐽, 𝐼 − 𝐽] only if Δ 𝐼 ≥ Δ 𝐽 , where Δ is the interval width: 𝐼 = [𝐼, 𝐼] → Δ = 𝐼 − 𝐼.</formula><p>• Size Monotonicity: ISVs can be defined only when the Coalitional Interval Game (𝑤, 𝑃 ) is size monotonic, i.e., when Δ𝑤(𝑆) ≤ Δ𝑤(𝑇 ) for all 𝑆, 𝑇 ∈ 2 𝑃 with 𝑆 ⊂ 𝑇 .</p><p>Since the ISVs constraints are computationally intractable <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b14">15]</ref>, Interval Shapley-Like Values <ref type="bibr" target="#b14">[15]</ref> offer a more efficient yet approximated approach to ISVs estimation. ISLVs adopt the Moore operators <ref type="bibr" target="#b15">[16]</ref>; in detail, the Moore Subtractor is used rather than the Partial Subtractor operator, i.e., given two intervals 𝐼 and 𝐽, the Moore subtraction is defined as 𝐼 ⊖ 𝐽 = [𝐼 − 𝐽, 𝐼 − 𝐽]. To simplify the estimation <ref type="bibr" target="#b14">[15]</ref>, the Median and Uncertain-Spread games are introduced:</p><p>• Median Game (𝑤 𝑚 , 𝑃 ):</p><formula xml:id="formula_1">𝑤 𝑚 (𝑆) = [ 𝑤(𝑆) + 𝑤(𝑆) 2 , 𝑤(𝑆) + 𝑤(𝑆) 2 ], 𝑆 ∈ 2 𝑃<label>(1)</label></formula><p>• Uncertain-Spread Game (𝑤 𝑢 , 𝑃 ):</p><formula xml:id="formula_2">𝑤 𝑢 (𝑆) = [ −Δ𝑤(𝑆) 2 , Δ𝑤(𝑆) 2 ], 𝑆 ∈ 2 𝑃 (2)</formula><p>Hereafter, we focus on two ISLVs definitions based on the Median and Uncertain-Spread games:</p><p>• Improved ISLVs <ref type="bibr" target="#b14">[15]</ref>:</p><formula xml:id="formula_3">𝐼Φ 𝐼 𝑖 (𝑤) = Φ 𝐼 𝑖 (𝑤 𝑚 ) ⊕ ΔΦ 𝐼 𝑖 (𝑤 𝑢 ) ∑︀ 𝑝∈𝑃 ΔΦ 𝐼 𝑝 (𝑤 𝑢 ) 𝑤 𝑢 (𝑃 )<label>(3)</label></formula><p>• Reformulated ISLVs <ref type="bibr" target="#b5">[6]</ref>:</p><formula xml:id="formula_4">𝑅Φ 𝐼 𝑖 (𝑤) = Φ 𝐼 𝑖 (𝑤 𝑚 ) ⊕ 1 |𝑃 | 𝑤 𝑢 (𝑃 )<label>(4)</label></formula><p>where</p><formula xml:id="formula_5">⊕ is the Moore Addition 𝐼 ⊕ 𝐽 = [𝐼 + 𝐽, 𝐼 + 𝐽].</formula></div>
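The Moore operators and the Median / Uncertain-Spread decomposition above can be sketched in a few lines. This is an illustrative sketch, not the suite's implementation: intervals are `(lo, hi)` tuples and the game `w` maps a coalition to an interval payoff; all function names are our own.

```python
def moore_add(I, J):
    """Moore addition: I + J = [I_lo + J_lo, I_hi + J_hi]."""
    return (I[0] + J[0], I[1] + J[1])

def moore_sub(I, J):
    """Moore subtraction: I - J = [I_lo - J_hi, I_hi - J_lo]."""
    return (I[0] - J[1], I[1] - J[0])

def width(I):
    """Interval width Δ = I_hi - I_lo."""
    return I[1] - I[0]

def median_game(w):
    """w_m(S): degenerate interval at the midpoint of w(S) (Eq. 1)."""
    def w_m(S):
        lo, hi = w(S)
        m = (lo + hi) / 2
        return (m, m)
    return w_m

def spread_game(w):
    """w_u(S) = [-Δw(S)/2, Δw(S)/2]: only the uncertainty of w(S) (Eq. 2)."""
    def w_u(S):
        d = width(w(S))
        return (-d / 2, d / 2)
    return w_u
```

Note that `moore_sub` widens the result (the widths add up), which is exactly why ISLVs handle the Uncertain-Spread game separately, as discussed in Section 3.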
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Suite of Interval-based Approaches</head><p>We present a suite of SVs estimators adapted to handle Interval-based estimations on tabular data. To successfully explain ensembles of predictors, the suite integrates adaptations of existing algorithms that produce Improved <ref type="bibr" target="#b14">[15]</ref> and Reformulated <ref type="bibr" target="#b5">[6]</ref> estimates of ISLVs instead of classical SVs. The suite is available for research at the link: https://github.com/DavideNapolitano/Evaluating-the-Reliability-of-Shapley-Value-Estimates-An-Interval-Based-Approach.</p><p>A sketch of the suite is depicted in Figure <ref type="figure" target="#fig_0">1</ref>. The black-box model M to be explained consists of an ensemble of 𝑁 independent predictors, which are all trained on a labeled relational dataset. For every instance 𝑥 to be classified, each predictor returns its corresponding per-class output probabilities, which are then used to compute the confidence interval that serves as the interval payoff of the ensemble method. As discussed in the Preliminaries, the standard formulations of both SVs and ISVs involve evaluating model contributions across different subsets of features. 
Since most existing models do not support holding out subsets of features, we exploit, similarly to <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b16">17]</ref>, a surrogate model that approximates the original model on subsets of features, thus allowing the subsequent normalization of ISLVs as in <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b17">18]</ref>.</p><p>To adapt traditional SVs-based explainers to ISLVs, we leverage the Median and Uncertain-Spread games according to the Improved <ref type="bibr" target="#b14">[15]</ref> and Reformulated <ref type="bibr" target="#b5">[6]</ref> ISLVs formulations.</p></div>
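A common way to feed feature coalitions to such a surrogate is to concatenate the masked feature vector with the binary coalition mask. The paper does not fix the input layout, so the sketch below is one plausible convention; `surrogate_input` is our own illustrative helper.

```python
import numpy as np

def surrogate_input(x, S):
    """Build the surrogate's input for coalition S: features outside S are
    zeroed out, and the binary mask is appended so the surrogate can tell
    a masked feature from a genuine zero. (Layout is an assumption.)"""
    mask = np.zeros_like(x)
    mask[list(S)] = 1.0
    return np.concatenate([x * mask, mask])
```

The interval characteristic function 𝑤(𝑆) would then be obtained by evaluating the surrogate's two output heads (lower and upper endpoint) on `surrogate_input(x, S)`.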
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Median game</head><p>The ISLVs can be expressed as a single value since the characteristic function 𝑤 𝑚 is defined as an interval with equal endpoints. This allows the estimation to be performed using established methods; the interval can then be reconstructed in the next step, when defining the ISLVs.</p><p>Uncertain-Spread game Since the minimum and maximum values returned by the characteristic function 𝑤 𝑢 are opposites (i.e., same value, opposite sign), the ISLVs estimation can be simplified by applying the addition operation, rather than the subtraction, to the single absolute value. This is exemplified in Equation <ref type="formula" target="#formula_7">5</ref>, where the subtraction is reduced to the addition of the absolute values retrieved from different subsets 𝑆 applied to 𝑤 𝑢 .</p><formula xml:id="formula_7">𝑤 𝑢 (𝑆 1 , 𝑥) = [−𝑣 1 , 𝑣 1 ], 𝑤 𝑢 (𝑆 2 , 𝑥) = [−𝑣 2 , 𝑣 2 ] 𝑤 𝑢 (𝑆 1 , 𝑥) ⊖ 𝑤 𝑢 (𝑆 2 , 𝑥) = [−𝑣 1 − 𝑣 2 , 𝑣 1 + 𝑣 2 ] = [−(𝑣 1 + 𝑣 2 ), 𝑣 1 + 𝑣 2 ]<label>(5)</label></formula><p>Therefore, since the computation is managed with a single value, the estimation can be reduced to the traditional SVs formulation. In this way, classical estimators can be directly exploited to retrieve the absolute values and, subsequently, to reconstruct the desired Φ 𝐼 𝑖 (𝑤 𝑢 ). Based on the considerations above, we adapt the following algorithms to support ISLVs: the Exact explainer <ref type="bibr" target="#b10">[11]</ref>, Unbiased and Biased KernelSHAP <ref type="bibr" target="#b18">[19]</ref>, and Monte Carlo sampling <ref type="bibr" target="#b19">[20]</ref>. For each algorithm, we separately implement adaptations based on the Median and Uncertain-Spread games, namely the Improved <ref type="bibr" target="#b14">[15]</ref> and Reformulated <ref type="bibr" target="#b5">[6]</ref> versions.</p></div>
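The reduction to the traditional SVs formulation can be illustrated with a minimal exact estimator. The sketch below computes classical Shapley values on the (scalar) median game and then adds an equal share of the grand coalition's spread, following the Reformulated formula (Eq. 4). It is a toy implementation under stated assumptions (here only the grand coalition carries uncertainty, so the game is size monotonic), not the suite's code.

```python
from itertools import combinations
from math import factorial

def exact_shapley(players, v):
    """Exact Shapley values for a scalar characteristic function v
    defined on frozenset coalitions."""
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for k in range(n):
            for S in combinations(others, k):
                S = frozenset(S)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

def reformulated_islv(players, w):
    """Reformulated ISLV (Eq. 4): Shapley values of the median game,
    Moore-added to an equal (1/|P|) share of w_u(P)."""
    n = len(players)
    mid = lambda S: sum(w(S)) / 2.0           # scalar median game w_m
    phi_m = exact_shapley(players, mid)
    lo, hi = w(frozenset(players))
    half_spread = (hi - lo) / 2.0             # w_u(P) = [-half, +half]
    return {i: (phi_m[i] - half_spread / n, phi_m[i] + half_spread / n)
            for i in players}
```

On a two-player game with payoff intervals w(∅)=[0,0], w({0})=[1,1], w({1})=[2,2], w({0,1})=[3,4], the resulting intervals Moore-add back to the grand coalition's payoff [3,4], matching the efficiency property.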
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.1.">Performance metrics</head><p>Given the algorithms' outcomes achieved on a relational dataset, the suite allows the quantitative evaluation of (1) the accuracy of the intervals estimated by each algorithm against a ground truth, in terms of (a) the 𝐿2 distance between the mean points, (b) the 𝐿2 distance between the interval widths, or (c) the Euclidean distance between the intervals <ref type="bibr" target="#b20">[21]</ref>; and (2) the efficiency of the estimators in terms of training and inference time. Unless otherwise specified, we use the Exact algorithm adaptation as reference ground truth.</p></div>
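The three distance measures above can be sketched as follows, representing each feature's interval as a `[lo, hi]` row. These are our own readings of the metrics (in particular, (c) treats each interval as a point in R² and aggregates over features), not the suite's exact definitions.

```python
import numpy as np

def l2_mean(est, gt):
    """(a) L2 distance between the interval mean points."""
    return float(np.linalg.norm(est.mean(axis=1) - gt.mean(axis=1)))

def l2_width(est, gt):
    """(b) L2 distance between the interval widths."""
    return float(np.linalg.norm((est[:, 1] - est[:, 0]) - (gt[:, 1] - gt[:, 0])))

def euclidean_intervals(est, gt):
    """(c) Euclidean distance between the intervals: each interval is a
    point (lo, hi) in R^2, aggregated over all features."""
    return float(np.sqrt(((est - gt) ** 2).sum()))
```

Note that (a) and (b) are complementary: two estimators can agree perfectly on the mean points while disagreeing on the widths, which is exactly the case Table 1 separates.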
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.2.">Outcome visualizations</head><p>The suite supports the following graphical visualizations of the experimental results achieved on a test dataset: (1) a bar plot showing the per-feature intervals, which allows a direct comparison between different algorithms; (2) a graph plotting the coefficient of variation of the ISVs (width over mean point), which provides insights into the reliability of the generated estimates; (3) a plot showing the computational times for model training and inference by varying the dataset size and dimensionality.</p></div>
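The quantity behind visualization (2) is simple to compute. A minimal sketch, assuming the coefficient of variation is the interval width over the absolute mean point (as stated above; `interval_cv` is our own name):

```python
import numpy as np

def interval_cv(intervals):
    """Coefficient of variation of each ISV: interval width over the
    absolute mean point. Values above 1 (width larger than the midpoint)
    flag estimates whose reliability should be reconsidered.
    (Assumes no zero mean points; a real implementation should guard that.)"""
    a = np.asarray(intervals, dtype=float)
    widths = a[:, 1] - a[:, 0]
    mids = np.abs(a.mean(axis=1))
    return widths / mids
```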
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Preliminary results</head><p>We show examples of outcomes achieved on four relational datasets taken from the UCI repository <ref type="bibr" target="#b21">[22]</ref>, namely Monks, Bank, Wisconsin Breast Cancer, and Diabetes. We explain a Random Forest Classifier with 100 predictor trees, implemented in the Scikit-learn library <ref type="bibr" target="#b22">[23]</ref>. We generate the confidence interval (with confidence level 𝛾 = 0.95) from the prediction of each tree. Then, we approximate the predictions of the black-box model using a Multi-Layer Perceptron (MLP) as a surrogate model. Similar to <ref type="bibr" target="#b16">[17,</ref><ref type="bibr" target="#b6">7]</ref>, the MLP consists of three linear layers, each with a hidden size of 512 units, interspersed with Rectified Linear Unit (ReLU) activation functions, and with two final classification heads. The surrogate model was trained for up to 200 epochs using the Kullback-Leibler divergence loss function. The training utilized the AdamW optimizer <ref type="bibr" target="#b23">[24]</ref>, with a learning rate of 10 −4 , a batch size of 8, and a weight decay of 10 −2 .</p><p>Regarding the explainers' implementation, the baselines of the Median and Uncertain-Spread Exact explainers are trained on 100 samples. Concerning the Monte Carlo approach, the number of iterations is set to 1000. For the KernelSHAP-based methodologies, we adopt the marginal models' approach as outlined in <ref type="bibr" target="#b24">[25]</ref>. Specifically, the Median marginal model was configured with 20 baseline samples, while the Uncertain-Spread marginal model was allocated 8 baseline samples. These sample sizes were chosen to strike a balance between achieving accurate estimations and maintaining computational efficiency. 
Indeed, higher values lead to comparable results but longer times, while lower values, although faster, give worse estimates. Moreover, regarding the iteration parameters of the two regression-based methods, the results are retrieved by testing all datasets with a threshold of 0.1 and a kernel iteration value of 128. In this section, we present a comparative analysis of the proposed models based on various metrics. In detail, the table results are shown as confidence intervals computed over 5 different runs on a machine equipped with an AMD Ryzen 7950X CPU. Table <ref type="table" target="#tab_0">1</ref> illustrates the comparison of ISLVs with respect to the mean point and interval width. The results indicate that the outputs of Unbiased KernelSHAP and Monte Carlo Sampling best approximate the Exact model, with Unbiased KernelSHAP yielding superior results in terms of amplitude precision.</p></div>
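The interval payoff construction described above (a 𝛾 = 0.95 confidence interval over the 100 per-tree class probabilities) can be sketched as follows. The paper does not state which interval construction it uses, so this sketch assumes a normal approximation on the mean of the per-tree predictions; `tree_confidence_interval` is our own illustrative name.

```python
from statistics import NormalDist, mean, stdev

def tree_confidence_interval(tree_probs, gamma=0.95):
    """Confidence interval on the ensemble's class probability, built from
    the per-tree output probabilities via a normal approximation on the
    mean (an assumption, not the paper's stated construction)."""
    n = len(tree_probs)
    m, s = mean(tree_probs), stdev(tree_probs)
    z = NormalDist().inv_cdf(0.5 + gamma / 2)   # z ≈ 1.96 for gamma = 0.95
    half = z * s / n ** 0.5
    return (m - half, m + half)
```

The resulting `(lo, hi)` pair plays the role of the interval payoff 𝑤(𝑃) for the sample being explained.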
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Examples of results and visualizations</head><p>Moreover, Table <ref type="table" target="#tab_0">1</ref> reports the results exclusively for the Improved models (denoted with the prefix I-), as they share mean points with the Reformulated models (denoted with the prefix R-), and the interval amplitudes for the latter remain invariant regardless of the approach. Similar takeaways can be derived from examining the Euclidean distances of the intervals presented in Table <ref type="table" target="#tab_1">2</ref>, where the Improved and Reformulated approaches yield similar rankings.   </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Execution times</head><p>Table <ref type="table" target="#tab_2">3</ref> compares the inference times per sample spent by all analyzed approaches, separately for each tested dataset. The reported statistics show that the Reformulated ISLVs are overall faster than the Improved ones. Moreover, the results show that the Reformulated regression-based approach outperforms the other ones when the number of features increases. Summarizing the results, the Reformulated Unbiased KernelSHAP and Monte Carlo approaches yield comparable outcomes on the distances, with the former being favored for Improved ISLVs. Furthermore, considering inference times, the Reformulated Unbiased KernelSHAP method provides the best overall results, especially as the number of features in the dataset increases.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Conclusions and future developments</head><p>The paper presented a suite of SVs estimators adapted to explain ensembles of predictors using ISVs. To estimate both the importance and the reliability of the features' contributions to the black-box model estimates, we adapt three classical SVs estimators to handle Intervals of Shapley Values by leveraging the concept of Interval Shapley-Like Values. The suite allows researchers and practitioners to interact with Interval-based approaches and evaluate them using ad hoc performance metrics and visualizations.</p><p>In future work, we plan to investigate approaches not relying on surrogate models, to analyze new sampling techniques, and, most importantly, to extend this technique to other data modalities and multimodal analyses, such as text and images combined.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Schema of the presented suite</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>0.049±0.001 0.003±0.001 0.026±0.001 0.019±0.001 0.024±0.001 0.019±0.002 0.028±0.001 0.008±0.001 I-KS 0.050±0.002 0.003±0.001 0.026±0.001 0.021±0.003 0.025±0.002 0.026±0.002 0.034±0.003 0.011±0.001 I-MC 0.076±0.004 0.001±0.001 0.016±0.001 0.036±0.001 0.021±0.001 0.022±0.001 0.034±0.002 0.008±0.001</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Figure 2</head><label>2</label><figDesc>Figure 2 provides a visual insight into the numerical results, i.e., a bar plot and a Coefficient of Variation plot computed using Unbiased KernelSHAP on the Diabetes dataset. This example demonstrates how the interval data can be employed to assess the reliability of the Shapley Values associated with each feature.</figDesc><graphic coords="6,305.16,257.10,195.85,111.30" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Bar Plot and Coefficient of Variation Plot on the Diabetes dataset. Generally, when the amplitude is substantially larger than the midpoint (e.g., coefficient of variation &gt; 1), the reliability of the feature estimate should be carefully reconsidered.</figDesc><graphic coords="6,95.27,257.61,197.94,110.79" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>ISLVs Estimators -𝐿2 Distances on Interval Mean values (𝐿2 𝑀 ) and on Interval Width values (𝐿2 𝑊 ).</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>ISLVs Estimators -Euclidean distances between intervals</figDesc><table /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Average inference times for each model and dataset.</figDesc><table><row><cell></cell><cell>N°Features</cell><cell>R-Exact</cell><cell>R-UKS</cell><cell>R-MC</cell><cell>I-Exact</cell><cell>I-UKS</cell><cell>I-MC</cell></row><row><cell>Bank</cell><cell>4</cell><cell cols="6">0.727±0.118 4.104±0.457 3.687±0.102 1.455±0.218 10.874±2.106 7.374±0.205</cell></row><row><cell>Monks</cell><cell>6</cell><cell cols="6">0.456±0.047 1.479±0.033 1.818±0.047 0.913±0.085 14.982±0.737 3.635±0.094</cell></row><row><cell>Diabetes</cell><cell>8</cell><cell cols="6">2.960±0.028 1.936±0.007 2.443±0.022 5.950±0.049 7.022±1.178 4.887±0.044</cell></row><row><cell>WBC</cell><cell>9</cell><cell cols="6">3.084±0.090 2.148±0.074 2.768±0.069 6.202±0.191 11.808±1.407 5.536±0.137</cell></row></table></figure>
		</body>
		<back>
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">A value for n-person games</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Shapley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Contributions to the Theory of Games II</title>
				<editor>
			<persName><forename type="first">H</forename><forename type="middle">W</forename><surname>Kuhn</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">A</forename><forename type="middle">W</forename><surname>Tucker</surname></persName>
		</editor>
		<meeting><address><addrLine>Princeton</addrLine></address></meeting>
		<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="1953">1953</date>
			<biblScope unit="page" from="307" to="317" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<monogr>
		<author>
			<persName><forename type="first">X</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Marques-Silva</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2302.08160</idno>
		<title level="m">The inadequacy of shapley values for explainability</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Explainable ai (xai): A systematic meta-survey of current challenges and future opportunities</title>
		<author>
			<persName><forename type="first">W</forename><surname>Saeed</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Omlin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">263</biblScope>
			<biblScope unit="page">110273</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">The interval shapley value: an axiomatization</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gök</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Branzei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tijs</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Central European Journal of Operations Research</title>
		<imprint>
			<biblScope unit="volume">18</biblScope>
			<biblScope unit="page" from="131" to="140" />
			<date type="published" when="2010">2010</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Some properties of interval shapley values: An axiomatic analysis</title>
		<author>
			<persName><forename type="first">S</forename><surname>Ishihara</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Shino</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Games</title>
		<imprint>
			<biblScope unit="volume">14</biblScope>
			<biblScope unit="page">50</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">A reformulated shapley-like value for cooperative games with interval payoffs</title>
		<author>
			<persName><forename type="first">W</forename><surname>Feng</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Pan</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Operations Research Letters</title>
		<imprint>
			<biblScope unit="volume">48</biblScope>
			<biblScope unit="page" from="758" to="762" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Efficient neural network-based estimation of interval shapley values</title>
		<author>
			<persName><forename type="first">D</forename><surname>Napolitano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vaiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cagliero</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<monogr>
		<title level="m" type="main">Bones: a benchmark for neural estimation of shapley values</title>
		<author>
			<persName><forename type="first">D</forename><surname>Napolitano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cagliero</surname></persName>
		</author>
		<ptr target="https://arxiv.org/abs/2407.16482.arXiv:2407.16482" />
		<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Shapley</surname></persName>
		</author>
		<title level="m">Notes on the N-Person Game II: The Value of an N-Person Game</title>
				<meeting><address><addrLine>Santa Monica, CA</addrLine></address></meeting>
		<imprint>
			<publisher>RAND Corporation</publisher>
			<date type="published" when="1951">1951</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">C</forename><surname>Frye</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rowat</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Feige</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1910.06358</idno>
		<title level="m">Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">A unified approach to interpreting model predictions</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Advances in Neural Information Processing Systems</title>
				<imprint>
			<publisher>Curran Associates, Inc</publisher>
			<date type="published" when="2017">2017</date>
			<biblScope unit="page" from="4765" to="4774" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">Convex interval games</title>
		<author>
			<persName><forename type="first">S</forename><surname>Gök</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Branzei</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Tijs</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Applied Mathematics and Decision Sciences</title>
		<imprint>
			<date type="published" when="2009">2009. 2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Z</forename><surname>Gök</surname></persName>
		</author>
		<title level="m">Cooperative interval games</title>
				<imprint>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Coalitional interval games for strategic games in which players cooperate</title>
		<author>
			<persName><forename type="first">L</forename><surname>Carpente</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Casas-Méndez</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>García-Jurado</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Van Den Nouweland</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Theory and Decision</title>
		<imprint>
			<biblScope unit="page" from="253" to="269" />
			<date type="published" when="2008">2008</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A new approach of cooperative interval games: The interval core and Shapley value revisited</title>
		<author>
			<persName><forename type="first">W</forename><surname>Han</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Sun</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Xu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Operations Research Letters</title>
		<imprint>
			<biblScope unit="volume">40</biblScope>
			<biblScope unit="page" from="462" to="468" />
			<date type="published" when="2012">2012</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<title level="m" type="main">Methods and applications of interval analysis</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">E</forename><surname>Moore</surname></persName>
		</author>
		<imprint>
			<date type="published" when="1979">1979</date>
			<publisher>SIAM</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Learning confidence intervals for feature importance: A fast Shapley-based approach</title>
		<author>
			<persName><forename type="first">D</forename><surname>Napolitano</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Vaiani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Cagliero</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Workshop Proceedings of the EDBT/ICDT 2023 Joint Conference</title>
		<meeting><address><addrLine>Ioannina, Greece</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023-03-31">March 28-31, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">FastSHAP: Real-time Shapley value estimation</title>
		<author>
			<persName><forename type="first">N</forename><surname>Jethani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Sudarshan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">C</forename><surname>Covert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ranganath</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event</title>
		<imprint>
			<publisher>OpenReview</publisher>
			<date type="published" when="2022">April 25-29, 2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Covert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2012.01536</idno>
		<title level="m">Improving KernelSHAP: Practical Shapley value estimation via linear regression</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Explaining prediction models and individual predictions with feature contributions</title>
		<author>
			<persName><forename type="first">E</forename><surname>Strumbelj</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Kononenko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge and Information Systems</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="647" to="665" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<monogr>
		<author>
			<persName><forename type="first">O</forename><surname>Kosheleva</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Kreinovich</surname></persName>
		</author>
		<title level="m">Euclidean distance between intervals is the only representation-invariant one</title>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<monogr>
		<title level="m" type="main">UCI machine learning repository</title>
		<author>
			<persName><forename type="first">K</forename><surname>Bache</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Lichman</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">Scikit-learn: Machine learning in Python</title>
		<author>
			<persName><forename type="first">F</forename><surname>Pedregosa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varoquaux</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gramfort</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Michel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Thirion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">O</forename><surname>Grisel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Blondel</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Prettenhofer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Weiss</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dubourg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vanderplas</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Passos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Cournapeau</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Brucher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Perrot</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Duchesnay</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="2825" to="2830" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Loshchilov</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Hutter</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1711.05101</idno>
		<title level="m">Decoupled weight decay regularization</title>
		<imprint>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b24">
	<monogr>
		<author>
			<persName><forename type="first">I</forename><surname>Covert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Kim</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2206.05282</idno>
		<title level="m">Learning to estimate Shapley values with vision transformers</title>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
