<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Evaluating the Reliability of Shapley Value Estimates: An Interval-Based Approach</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Davide Napolitano</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Luca Cagliero</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Politecnico di Torino</institution>
          ,
          <addr-line>Corso Duca degli Abruzzi 24, 10129, Torino</addr-line>
          ,
          <country country="IT">Italy</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Shapley Values (SVs) are concepts used in game theory that have recently found application in Artificial Intelligence. They are exploited to explain models by quantifying each feature's contribution to the predictor's estimates. However, the reliability of the estimated SVs is often not thoroughly assessed. In this context, we leverage Interval Shapley Values (ISVs) to evaluate the importance and reliability of features' contributions when the classifier consists of an ensemble method. This paper presents a suite of ISVs estimators based on exact estimation, linear regression, and Monte Carlo sampling. In detail, we adapt classical SVs estimators to ISV-like concepts to efficiently handle real tabular datasets. We also provide a set of ad hoc performance metrics and visualization techniques that can be used to explore models' results under multiple aspects.</p>
      </abstract>
      <kwd-group>
        <kwd>Explainable Artificial Intelligence</kwd>
        <kwd>Interval Shapley Values</kwd>
        <kwd>Feature Importance</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>
        Shapley Values (SVs), originally formulated in coalition game theory [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ], are now widely used to
generate post-hoc explanations for classifiers that assign discrete classes to unlabeled samples.
In detail, SVs quantify the contribution of each input feature to a given classifier’s prediction and,
although they may not always accurately reflect feature importance [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], these contributions can
be estimated on a per-sample basis (locally) or aggregated to provide insights into the overall
behavior of the model (globally) [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ].
      </p>
      <p>When a model comprises multiple predictors, estimating the contributions of individual
features becomes challenging, as each feature may influence each predictor differently. In some
cases, certain predictors might entirely disregard features crucial to others. This implies that
the performance provided by the various predictors can vary substantially, which directly reflects on
the contributions made by the various features. Therefore, taking into account the contribution
of each predictor makes the explanations robust to variability in the estimates (see Figure 2).</p>
      <p>
        To model the variability of SVs across multiple predictors, we rely on the concept of Interval
Shapley Values (ISVs) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ]. Derived from the field of cooperative interval games, they can be used
to estimate SVs in the presence of uncertainty by encompassing different predictor outcomes,
which are neglected in standard SVs. To ensure tractable and scalable computation on real data,
we focus on Interval Shapley-Like Values (ISLVs), known to approximate ISVs [
        <xref ref-type="bibr" rid="ref5 ref6">5, 6</xref>
        ].
      </p>
      <p>
        Hereafter, we present a suite of algorithms adapted to explain combinations of predictors with
ISLVs. They indicate the features’ importance for the ensemble method’s outcomes while explicitly
indicating the reliability of such estimates. This is crucial for trusting models’ explanations and
comparing the outcomes of different estimators. The suite includes approaches to SVs estimation
adapted to successfully handle Interval-based scenarios. Specifically, differently from the neural
approaches proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], we focus on a linear regressor, a Monte Carlo sampling strategy, and
an Exact estimator, aiming to incorporate all implementations into the BONES [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ] library. To allow
end-users to explore and compare the outcomes of Interval-based approaches, the suite supports
ad hoc performance metrics, extended from the standard SVs scenario to support interval-level
evaluations. The metrics can be visualized to ease model comparisons and complexity analysis.
      </p>
      <p>The remainder of this paper is organized as follows. Section 2 introduces the preliminary
notions. Section 3 describes the suite of Interval-based approaches. Section 4 shows examples
of outcomes and comparisons. Finally, Section 5 draws the conclusions of the work.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Preliminaries</title>
      <p>
        In a cooperative game, the Shapley Value $\Phi_i(v)$ represents the contribution of a single player $i$
to the total payoff of the group of players $N$ [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], and is equal to the sum of the weighted
marginal contributions of $i$ to the characteristic function $v$ over all possible coalitions $S \subseteq N$:
$\Phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,(|N|-|S|-1)!}{|N|!}\,\bigl(v(S \cup \{i\}) - v(S)\bigr)$. Beyond explaining
an individual sample $x$, Shapley Values can be leveraged to provide a global explanation of the
dataset by averaging sample-level contributions [
        <xref ref-type="bibr" rid="ref10 ref11">10, 11</xref>
        ].
      </p>
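      <p>As a concrete illustration of this formulation (a minimal sketch, not part of the presented suite; the function name and the toy game are ours), the following Python snippet computes exact Shapley Values by enumerating all coalitions of a small game.</p>
      <preformat>
from itertools import combinations
from math import factorial

def shapley_exact(players, v):
    """Exact Shapley Values obtained by enumerating all coalitions.

    players: list of player (feature) indices.
    v: characteristic function mapping a frozenset of players to a real payoff.
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for size in range(n):
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                S = frozenset(S)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Toy 3-player additive game: the payoff of a coalition is the sum of its players' weights,
# so each player's Shapley Value equals its own weight.
weights = {0: 1.0, 1: 2.0, 2: 3.0}
v = lambda S: sum(weights[p] for p in S)
print(shapley_exact([0, 1, 2], v))
      </preformat>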
      <p>
        Suppose to have the outcome of an ensemble M of predictors on a sample $x$ with
a confidence interval $[\underline{v}(x), \overline{v}(x)]$. In compliance with [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ], we define the Coalitional Interval
Game [
        <xref ref-type="bibr" rid="ref12 ref13">12, 13</xref>
        ] as a pair $(N, w)$, where $w: 2^N \to I(\mathbb{R})$ is a function that maps an arbitrary coalition
$S \subseteq N$ to the corresponding confidence interval $w(S) = [\underline{w}(S), \overline{w}(S)] = [\underline{v}(S), \overline{v}(S)]$.
      </p>
      <p>
        To explain ensemble methods we use the concept of Interval Shapley Values (ISVs) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ],
which associate with each Coalitional Interval Game $(N, w)$ a payoff vector where each component
is a compact interval of real numbers [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ]. In a nutshell, ISVs capture the range of contributions
of a feature $i$ by evaluating the interval values across all possible feature combinations. ISVs
have to satisfy two notable properties:
• Partial Subtractor: given two intervals $I$ and $J$, the Partial Subtraction $I - J$ is defined
as $[\underline{I} - \underline{J}, \overline{I} - \overline{J}]$ only if $\Delta I \geq \Delta J$, where $\Delta$ denotes the interval width: $I = [\underline{I}, \overline{I}] \to \Delta I = \overline{I} - \underline{I}$.
• Size Monotonicity: ISVs can be defined only when the Coalitional Interval Game $(N, w)$
is size monotonic, i.e., when $\Delta w(S) \leq \Delta w(T)$ for all $S, T \in 2^N$ with $S \subset T$.
      </p>
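      <p>As a small illustration of these two properties (a sketch under the definitions above, not code from the suite; intervals are plain (lower, upper) tuples and the game is a dictionary keyed by coalitions), the snippet below implements the interval width, the Partial Subtraction with its width precondition, and a size-monotonicity check.</p>
      <preformat>
from itertools import combinations

def width(I):
    lo, hi = I
    return hi - lo

def partial_sub(I, J):
    """Partial Subtraction [I_lo - J_lo, I_hi - J_hi], defined only when width(I) >= width(J)."""
    if width(J) > width(I):
        raise ValueError("Partial Subtraction undefined: width(I) is smaller than width(J)")
    return (I[0] - J[0], I[1] - J[1])

def is_size_monotonic(game, players):
    """Check that the interval width never shrinks when a coalition grows (S proper subset of T)."""
    coalitions = [frozenset(c) for r in range(len(players) + 1)
                  for c in combinations(players, r)]
    return all(width(game[T]) >= width(game[S])
               for S in coalitions for T in coalitions if T > S)

# Toy interval game on two players with uncertainty growing with the coalition size.
game = {frozenset(): (0.0, 0.0), frozenset({0}): (0.2, 0.4),
        frozenset({1}): (0.1, 0.5), frozenset({0, 1}): (0.3, 0.9)}
print(partial_sub(game[frozenset({0, 1})], game[frozenset({0})]))  # (0.1, 0.5)
print(is_size_monotonic(game, [0, 1]))  # True
      </preformat>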
      <p>
        Since the ISVs constraints are computationally intractable [
        <xref ref-type="bibr" rid="ref15 ref9">9, 15</xref>
        ], Interval Shapley-Like
Values (ISLVs) [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] offer a more efficient yet approximated approach to ISVs estimation. ISLVs adopt the
Moore operators [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ]; in detail, the Moore Subtractor is used rather than the Partial Subtractor
operator, i.e., given two intervals $I$ and $J$, the Moore subtraction is defined as $I \ominus J = [\underline{I} - \overline{J}, \overline{I} - \underline{J}]$.
To simplify the estimation [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ], the Median and Uncertain-Spread games are introduced:
• Median Game $(N, m)$: $m(S) = \left[\frac{\underline{w}(S) + \overline{w}(S)}{2}, \frac{\underline{w}(S) + \overline{w}(S)}{2}\right]$, $S \in 2^N$ (1)
• Uncertain-Spread Game $(N, u)$: $u(S) = \left[-\frac{\Delta w(S)}{2}, \frac{\Delta w(S)}{2}\right]$, $S \in 2^N$ (2)
      </p>
      <p>
        Hereafter, we focus on two ISLVs definitions based on the Median and Uncertain-Spread games:
• Improved ISLVs [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ]: $\Phi_i^{I}(w) = \Phi_i(m) \oplus \frac{\Delta\Phi_i(u)}{\sum_{j \in N} \Delta\Phi_j(u)}\, u(N)$ (3)
• Reformulated ISLVs [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]: $\Phi_i^{R}(w) = \Phi_i(m) \oplus \frac{1}{|N|}\, u(N)$ (4)
where ⊕ is the Moore Addition $I \oplus J = [\underline{I} + \underline{J}, \overline{I} + \overline{J}]$.</p>
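      <p>To make the two ISLVs definitions concrete, the sketch below follows Equations (1)-(4) as reconstructed above (the function names and the toy game are ours, and the exact enumeration is only meant for small numbers of features): it computes the scalar SVs of the Median game, the interval widths induced by the Uncertain-Spread game under the Moore subtraction (see also Equation 5 in Section 3), and combines them through the Moore addition.</p>
      <preformat>
from itertools import combinations
from math import factorial

def weighted_coalition_sum(players, i, term):
    """Sum, over all coalitions S excluding player i, of the Shapley weight times term(S)."""
    n, others = len(players), [p for p in players if p != i]
    return sum(factorial(s) * factorial(n - s - 1) / factorial(n) * term(frozenset(S))
               for s in range(n) for S in combinations(others, s))

def islv(players, w, variant="improved"):
    """Improved/Reformulated ISLVs of an interval game w (dict: frozenset -> (lower, upper))."""
    mid = lambda S: sum(w[S]) / 2.0                      # Median game m(S) (degenerate interval)
    half = lambda S: (w[S][1] - w[S][0]) / 2.0           # half-width of w(S), i.e. the bound of u(S)
    # Scalar SVs of the Median game: usual marginal differences on the mid points.
    phi_m = {i: weighted_coalition_sum(players, i, lambda S, i=i: mid(S | {i}) - mid(S))
             for i in players}
    # Widths of the interval SVs of the Uncertain-Spread game: the Moore subtraction of
    # symmetric intervals turns marginal differences into sums of absolute values.
    d_phi_u = {i: 2 * weighted_coalition_sum(players, i, lambda S, i=i: half(S | {i}) + half(S))
               for i in players}
    u_N = (-half(frozenset(players)), half(frozenset(players)))   # Uncertain-Spread payoff of N
    out = {}
    for i in players:
        if variant == "improved":   # Eq. (3): each feature gets its share of the total uncertainty
            alpha = d_phi_u[i] / sum(d_phi_u.values())
        else:                       # Eq. (4): equal split 1/|N|
            alpha = 1.0 / len(players)
        out[i] = (phi_m[i] + alpha * u_N[0], phi_m[i] + alpha * u_N[1])   # Moore addition
    return out

# Toy two-feature interval game.
w = {frozenset(): (0.0, 0.0), frozenset({0}): (0.2, 0.4),
     frozenset({1}): (0.1, 0.5), frozenset({0, 1}): (0.3, 0.9)}
print(islv([0, 1], w, "improved"))
print(islv([0, 1], w, "reformulated"))
      </preformat>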
    </sec>
    <sec id="sec-3">
      <title>3. Suite of Interval-based Approaches</title>
      <p>[Figure 1: Sketch of the suite. An ensemble black-box model is approximated by a surrogate model, explained with the Exact, MonteCarlo, KernelSHAP, and Unbiased KernelSHAP explainers, and then evaluated and visualized (bar plots, coefficients of variation, distances, execution times).]</p>
      <p>
        We present a suite of SVs estimators adapted to handle Interval-based estimations on tabular
data. To successfully explain ensembles of predictors, the suite integrates adaptations of existing
algorithms that produce Improved [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Reformulated [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] estimates of ISLVs instead of
classical SVs. The suite is available for research at the link: https://github.com/DavideNapolitano/
Evaluating-the-Reliability-of-Shapley-Value-Estimates-An-Interval-Based-Approach.
      </p>
      <p>
        A sketch of the suite is depicted in Figure 1. The black-box model M to be explained consists
of an ensemble of independent predictors, which are all trained on a labeled relational dataset.
For every instance $x$ to be classified, each predictor returns its corresponding per-class output
probabilities, which are then used to compute the confidence interval to retrieve an interval
payoff for the ensemble method. As discussed in the Preliminaries, the standard formulations
of both SVs and ISVs involve evaluating model contributions across different subsets of features.
Since most of the existing models do not support holding out subsets of features, similar to [
        <xref ref-type="bibr" rid="ref17 ref7">7, 17</xref>
        ],
we exploit a surrogate model to approximate the original model considering subsets of features,
thus allowing the subsequent normalization of ISLVs similar to [
        <xref ref-type="bibr" rid="ref11 ref18">11, 18</xref>
        ].
      </p>
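      <p>As a simplified sketch of this surrogate step (an illustration under stated assumptions rather than the exact training procedure of Section 4: a scikit-learn MLPRegressor stands in for the surrogate, held-out features are replaced by the feature mean, and the surrogate is trained to reproduce the ensemble's interval endpoints from masked inputs):</p>
      <preformat>
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_surrogate(X, payoff_lo, payoff_hi, n_masks=8, seed=0):
    """Train a surrogate that predicts the ensemble's interval endpoints from masked inputs.

    Held-out features are replaced by the feature mean, one common way to emulate feature
    removal; the exact masking scheme used by the suite is an assumption here.
    """
    rng = np.random.default_rng(seed)
    baseline = X.mean(axis=0)
    Xm, Y = [], []
    for _ in range(n_masks):
        mask = rng.integers(0, 2, size=X.shape)                  # random coalition per sample
        Xm.append(np.where(mask == 1, X, baseline))              # masked-out features get the baseline
        Y.append(np.column_stack([payoff_lo, payoff_hi]))        # targets: full-model interval endpoints
    surrogate = MLPRegressor(hidden_layer_sizes=(512, 512, 512), max_iter=200)
    return surrogate.fit(np.vstack(Xm), np.vstack(Y))

# Example usage with synthetic data: 200 samples, 5 features, one interval per sample.
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
lo = rng.uniform(0.3, 0.5, size=200)
hi = lo + rng.uniform(0.0, 0.2, size=200)
surrogate = train_surrogate(X, lo, hi)
      </preformat>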
      <p>
        To adapt traditional SVs-based explainers to ISLVs, we leverage the Median and
Uncertain-Spread games according to the Improved [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Reformulated [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] ISLVs formulations.
Median game The ISLVs can be expressed as a single value since the characteristic function
$m$ is defined as an interval with equal endpoints. This approach allows the estimation
to be performed using established methods. Subsequently, the interval can be reconstructed in
the next step when defining the ISLVs.
      </p>
      <p>Uncertain-Spread game Since the minimum and maximum values returned by the
characteristic function $u$ are opposites (i.e., same absolute value, opposite sign), the ISLVs estimation can
be simplified by applying the addition operation, rather than the subtraction, upon the single
absolute value. This consideration is exemplified in Equation 5, where the subtraction is
reduced to the addition of the absolute values retrieved from different subsets $S$ applied on $x$:
$u(S_1, x) = [-u_1, u_1]$, $u(S_2, x) = [-u_2, u_2]$
$u(S_1, x) \ominus u(S_2, x) = [-u_1 - u_2, u_1 + u_2] = [-(u_1 + u_2), u_1 + u_2]$
(5)
Therefore, since the computation is managed with a single value, we can reduce the estimation
to the traditional SVs formulation. In this way, classical predictors can be directly exploited to
retrieve the absolute values and, subsequently, to reconstruct the desired $\Phi_i(u)$.</p>
      <p>
        Based on the considerations above, we adapt the following algorithms to support ISLVs: the
Exact explainer [
        <xref ref-type="bibr" rid="ref11">11</xref>
        ], Unbiased and Biased KernelSHAP [
        <xref ref-type="bibr" rid="ref19">19</xref>
        ] and Monte Carlo sampling [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ]. For
each algorithm, we separately implement adaptations based on the Median and Uncertain-Spread games,
namely the Improved [
        <xref ref-type="bibr" rid="ref15">15</xref>
        ] and Reformulated [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] versions.
      </p>
      <sec id="sec-3-1">
        <title>3.1. Performance metrics</title>
        <p>
          Given the algorithms’ outcomes achieved on a relational dataset, the suite allows the quantitative
evaluation of (1) The accuracy of the intervals estimated by each algorithm against a ground
truth in terms of (a) the $\ell_2$ distance between the mean points, or (b) the $\ell_2$ distance between
the interval widths, or (c) the Euclidean distance between the intervals [
          <xref ref-type="bibr" rid="ref21">21</xref>
          ]. (2) The efficiency
of the estimators in terms of training and inference time. Whenever not otherwise specified, we
use the Exact algorithm adaptation as reference ground truth.
        </p>
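        <p>A minimal sketch of the three accuracy metrics (function name and array layout are ours; each explainer's output is assumed to be an (n_features, 2) array of interval endpoints, and (c) is taken here as the plain Euclidean distance between the stacked endpoints):</p>
        <preformat>
import numpy as np

def interval_distances(est, ref):
    """est, ref: arrays of shape (n_features, 2) holding [lower, upper] per feature."""
    mid_e, mid_r = est.mean(axis=1), ref.mean(axis=1)
    wid_e, wid_r = est[:, 1] - est[:, 0], ref[:, 1] - ref[:, 0]
    return {
        "l2_mean_points": float(np.linalg.norm(mid_e - mid_r)),     # (a) distance between mean points
        "l2_widths": float(np.linalg.norm(wid_e - wid_r)),          # (b) distance between interval widths
        "euclidean_intervals": float(np.linalg.norm(est - ref)),    # (c) distance between the intervals
    }

# Example: compare an estimator's intervals against the Exact reference.
est = np.array([[0.10, 0.30], [0.05, 0.15]])
ref = np.array([[0.12, 0.28], [0.00, 0.20]])
print(interval_distances(est, ref))
        </preformat>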
      </sec>
      <sec id="sec-3-2">
        <title>3.2. Outcome visualizations</title>
        <p>The suite supports the following graphical visualizations of the experimental results achieved
on a test dataset: (1) A bar plot showing the per-feature intervals, which allows a direct
comparison between different algorithms; (2) A graph plotting the coefficient of variation of
the ISVs (width over mean point), which provides insights into the reliability of the generated
estimates; (3) A plot showing the computational times for model training and inference by
varying the dataset size and dimensionality.</p>
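        <p>A sketch of the first visualization, a per-feature interval bar plot built with matplotlib (names and layout are illustrative; the bar marks the interval mean point and the error bar spans the interval):</p>
        <preformat>
import numpy as np
import matplotlib.pyplot as plt

def plot_interval_bars(intervals, feature_names, label):
    """Plot per-feature ISLV intervals: bar at the mean point, error bar covering [lower, upper]."""
    mid = intervals.mean(axis=1)
    half = (intervals[:, 1] - intervals[:, 0]) / 2.0
    pos = np.arange(len(feature_names))
    plt.bar(pos, mid, yerr=half, capsize=4, label=label)
    plt.xticks(pos, feature_names, rotation=45, ha="right")
    plt.ylabel("ISLV")
    plt.legend()
    plt.tight_layout()
    plt.show()

plot_interval_bars(np.array([[0.10, 0.30], [0.05, 0.15], [-0.02, 0.06]]),
                   ["feature 1", "feature 2", "feature 3"], label="I-Exact")
        </preformat>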
      </sec>
    </sec>
    <sec id="sec-4">
      <title>4. Preliminary results</title>
      <p>
        We show examples of outcomes achieved on four relational datasets taken from the UCI
repository [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], namely Monks, Bank, Wisconsin Breast Cancer, and Diabetes. We explain a Random
Forest Classifier with 100 predictor trees, implemented in the Scikit-learn library [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. We
generate the confidence interval (with a confidence level of 0.95) from the prediction of each
tree. Then, we approximate the predictions of the black-box model using a Multi-Layer
Perceptron (MLP) as a surrogate model. Similar to [
        <xref ref-type="bibr" rid="ref17 ref7">17, 7</xref>
        ], the MLP consists of three linear layers, each
one with a hidden size of 512 units, interspersed with Rectified Linear Unit (ReLU) activation
functions, and with two final classification heads. The surrogate model was trained for up to 200
epochs using the Kullback-Leibler divergence loss function. The training utilized the AdamW
optimizer [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ], with a learning rate of $10^{-4}$, a batch size of 8, and a weight decay of $10^{-2}$.
      </p>
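      <p>The per-tree interval payoff can be sketched as follows (the dataset loading and the normal-approximation interval are our assumptions for illustration; the exact interval construction is not spelled out above):</p>
      <preformat>
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Fit the ensemble to be explained (100 trees, as in the experimental setup).
X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def interval_payoff(rf, x, z=1.96):
    """Per-class 0.95 confidence interval over the per-tree predicted probabilities for one sample."""
    probs = np.stack([tree.predict_proba(x.reshape(1, -1))[0] for tree in rf.estimators_])
    mean = probs.mean(axis=0)
    sem = probs.std(axis=0) / np.sqrt(len(rf.estimators_))
    return np.stack([mean - z * sem, mean + z * sem], axis=1)   # shape: (n_classes, 2)

print(interval_payoff(rf, X[0]))
      </preformat>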
      <p>
        Regarding the explainers’ implementation, the baselines of the Median and Uncertain-Spread Exact
explainers are trained on 100 samples. Concerning the Monte Carlo approach, the number
of iterations is set to 1000. For the KernelSHAP-based methodologies, we adopt the marginal
models’ approach as outlined in [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. Specifically, the Median marginal model was configured
with 20 baseline samples, while the Uncertain-Spread marginal model was allocated 8 baseline
samples. These sample sizes were carefully chosen to strike a balance between achieving
accurate estimations and maintaining computational efficiency. Indeed, higher values lead to
comparable results but with longer times, while lower values, although providing shorter times,
give worse estimates. Moreover, regarding the iteration parameters of the two regression-based
methods, the results are retrieved by testing all datasets with a threshold of 0.1 and a kernel
iteration value of 128.
      </p>
      <sec id="sec-4-1">
        <title>4.1. Examples of results and visualizations</title>
        <p>In this section, we present a comparative analysis of the proposed models based on various
metrics. In detail, the table results are shown as confidence intervals computed on 5 different runs
with a machine equipped with an AMD Ryzen 7950X CPU. Table 1 illustrates the comparison of
ISLVs with respect to the mean point and interval width. The results indicate that the outputs
of Unbiased KernelSHAP and Monte Carlo Sampling best approximate the Exact model, with
Unbiased KernelSHAP yielding superior results in terms of amplitude precision.</p>
        <p>Moreover, Table 1 reports the results exclusively for the Improved models (denoted with the
prefix I-), as they share mean points with the Reformulated models (denoted with the prefix R-),
and the interval amplitudes for the latter remain invariant regardless of the approach. Similar
takeaways can be derived from examining the Euclidean distances of the intervals presented in
Table 2, where the Improved and Reformulated approaches yield similar rankings.</p>
      </sec>
      <sec id="sec-4-2">
        <title>4.2. Execution times</title>
        <p>Summarizing the results, the Reformulated Unbiased KernelSHAP and Monte Carlo
approaches yield comparable outcomes on the distances, with the former being favored for
Improved ISLVs. Furthermore, considering inference times, the Reformulated Unbiased
KernelSHAP method provides the best overall results, especially as the number of features in the
dataset increases.</p>
      </sec>
    </sec>
    <sec id="sec-5">
      <title>5. Conclusions and future developments</title>
      <p>The paper presented a suite of SVs estimators adapted to explain ensembles of predictors using
ISVs. To estimate both the importance and the reliability of the features’ contributions to the
black-box model estimates, we adapt three classical SV estimators to handle Intervals of Shapley
Values by leveraging the concepts of Interval Shapley-Like Values. The suite allows researchers
and practitioners to interact with Interval-based approaches and evaluate them using ad hoc
performance metrics and visualizations.</p>
      <p>In future work, we plan to investigate approaches not relying on surrogate models, to
analyze new sampling techniques, and, most importantly, to extend this technique to other data
modalities and in multimodal analyses, such as text and images combined.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Shapley</surname>
          </string-name>
          ,
          <article-title>A value for n-person games</article-title>
          , in: H. W. Kuhn,
          <string-name>
            <given-names>A. W.</given-names>
            <surname>Tucker</surname>
          </string-name>
          (Eds.),
          <article-title>Contributions to the Theory of Games II</article-title>
          , Princeton University Press, Princeton,
          <year>1953</year>
          , pp.
          <fpage>307</fpage>
          -
          <lpage>317</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>X.</given-names>
            <surname>Huang</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Marques-Silva</surname>
          </string-name>
          ,
          <article-title>The inadequacy of shapley values for explainability</article-title>
          ,
          <source>arXiv preprint arXiv:2302.08160</source>
          (
          <year>2023</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>W.</given-names>
            <surname>Saeed</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Omlin</surname>
          </string-name>
          ,
          <article-title>Explainable ai (xai): A systematic meta-survey of current challenges and future opportunities</article-title>
          ,
          <source>Knowledge-Based Systems</source>
          <volume>263</volume>
          (
          <year>2023</year>
          )
          <fpage>110273</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>S.</given-names>
            <surname>Alparslan</surname>
          </string-name>
          <string-name>
            <surname>Gök</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Branzei</surname>
          </string-name>
          ,
          <string-name>
            <surname>S. Tijs,</surname>
          </string-name>
          <article-title>The interval shapley value: an axiomatization</article-title>
          ,
          <source>Central European Journal of Operations Research</source>
          <volume>18</volume>
          (
          <year>2010</year>
          )
          <fpage>131</fpage>
          -
          <lpage>140</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>S.</given-names>
            <surname>Ishihara</surname>
          </string-name>
          ,
          <string-name>
            <surname>J. Shino,</surname>
          </string-name>
          <article-title>Some properties of interval shapley values: An axiomatic analysis</article-title>
          ,
          <source>Games</source>
          <volume>14</volume>
          (
          <year>2023</year>
          )
          <fpage>50</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>W.</given-names>
            <surname>Feng</surname>
          </string-name>
          , W. Han,
          <string-name>
            <given-names>Z.</given-names>
            <surname>Pan</surname>
          </string-name>
          ,
          <article-title>A reformulated shapley-like value for cooperative games with interval payoffs</article-title>
          ,
          <source>Operations Research Letters</source>
          <volume>48</volume>
          (
          <year>2020</year>
          )
          <fpage>758</fpage>
          -
          <lpage>762</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>D.</given-names>
            <surname>Napolitano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vaiani</surname>
          </string-name>
          , L. Cagliero,
          <article-title>Efficient neural network-based estimation of interval shapley values</article-title>
          ,
          <source>IEEE Transactions on Knowledge and Data Engineering</source>
          (
          <year>2024</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name>
            <given-names>D.</given-names>
            <surname>Napolitano</surname>
          </string-name>
          , L. Cagliero,
          <article-title>Bones: a benchmark for neural estimation of shapley values</article-title>
          ,
          <year>2024</year>
          . URL: https://arxiv.org/abs/2407.16482. arXiv:
          <volume>2407</volume>
          .
          <fpage>16482</fpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name>
            <given-names>L. S.</given-names>
            <surname>Shapley</surname>
          </string-name>
          , Notes on the N-Person
          <string-name>
            <surname>Game</surname>
            <given-names>II</given-names>
          </string-name>
          :
          <article-title>The Value of an N-Person Game</article-title>
          , RAND Corporation, Santa Monica, CA,
          <year>1951</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name>
            <given-names>C.</given-names>
            <surname>Frye</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Rowat</surname>
          </string-name>
          ,
          <string-name>
            <surname>I. Feige</surname>
          </string-name>
          ,
          <article-title>Asymmetric shapley values: incorporating causal knowledge into model-agnostic explainability</article-title>
          , arXiv preprint arXiv:
          <year>1910</year>
          .
          <volume>06358</volume>
          (
          <year>2019</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          [11]
          <string-name>
            <given-names>S. M.</given-names>
            <surname>Lundberg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>A unified approach to interpreting model predictions</article-title>
          ,
          <source>in: Advances in Neural Information Processing Systems</source>
          , Curran Associates, Inc.,
          <year>2017</year>
          , pp.
          <fpage>4765</fpage>
          -
          <lpage>4774</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          [12]
          <string-name>
            <given-names>S.</given-names>
            <surname>Alparslan</surname>
          </string-name>
          <string-name>
            <surname>Gök</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Branzei</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Tijs</surname>
          </string-name>
          ,
          <article-title>Convex interval games</article-title>
          ,
          <source>Journal of Applied Mathematics and Decision Sciences</source>
          <year>2009</year>
          (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          [13]
          <string-name>
            <given-names>S. Z.</given-names>
            <surname>Alparslan</surname>
          </string-name>
          <string-name>
            <surname>Gök</surname>
          </string-name>
          , Cooperative interval games (
          <year>2009</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          [14]
          <string-name>
            <given-names>L.</given-names>
            <surname>Carpente</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Casas-Méndez</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>García-Jurado</surname>
          </string-name>
          ,
          <string-name>
            <surname>A. van den Nouweland</surname>
          </string-name>
          ,
          <article-title>Coalitional interval games for strategic games in which players cooperate</article-title>
          ,
          <source>Theory and Decision</source>
          (
          <year>2008</year>
          )
          <fpage>253</fpage>
          -
          <lpage>269</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          [15]
          <string-name>
            <given-names>W.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Sun</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Xu</surname>
          </string-name>
          ,
          <article-title>A new approach of cooperative interval games: The interval core and shapley value revisited</article-title>
          ,
          <source>Operations Research Letters</source>
          <volume>40</volume>
          (
          <year>2012</year>
          )
          <fpage>462</fpage>
          -
          <lpage>468</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          [16]
          <string-name>
            <given-names>R. E.</given-names>
            <surname>Moore</surname>
          </string-name>
          ,
          <article-title>Methods and applications of interval analysis</article-title>
          ,
          <source>SIAM</source>
          ,
          <year>1979</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          [17]
          <string-name>
            <given-names>D.</given-names>
            <surname>Napolitano</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Vaiani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Cagliero</surname>
          </string-name>
          , et al.,
          <article-title>Learning confidence intervals for feature importance: A fast shapley-based approach</article-title>
          ,
          <source>in: Workshop Proceedings of the EDBT/ICDT 2023 Joint Conference (March 28-March</source>
          <volume>31</volume>
          ,
          <year>2023</year>
          , Ioannina, Greece),
          <year>2023</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          [18]
          <string-name>
            <given-names>N.</given-names>
            <surname>Jethani</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Sudarshan</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I. C.</given-names>
            <surname>Covert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Ranganath</surname>
          </string-name>
          , Fastshap:
          <article-title>Real-time shapley value estimation</article-title>
          ,
          <source>in: The Tenth International Conference on Learning Representations, ICLR</source>
          <year>2022</year>
          ,
          <string-name>
            <given-names>Virtual</given-names>
            <surname>Event</surname>
          </string-name>
          ,
          <source>April 25-29</source>
          ,
          <year>2022</year>
          , OpenReview.net,
          <year>2022</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          [19]
          <string-name>
            <given-names>I.</given-names>
            <surname>Covert</surname>
          </string-name>
          ,
          <string-name>
            <surname>S.-I. Lee</surname>
          </string-name>
          ,
          <article-title>Improving kernelshap: Practical shapley value estimation via linear regression</article-title>
          , arXiv preprint arXiv:
          <year>2012</year>
          .
          <volume>01536</volume>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          [20]
          <string-name>
            <given-names>E.</given-names>
            <surname>Strumbelj</surname>
          </string-name>
          , I. Kononenko,
          <article-title>Explaining prediction models and individual predictions with feature contributions</article-title>
          ,
          <source>Knowledge and Information Systems</source>
          <volume>41</volume>
          (
          <year>2014</year>
          )
          <fpage>647</fpage>
          -
          <lpage>665</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          [21]
          <string-name>
            <given-names>O.</given-names>
            <surname>Kosheleva</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Kreinovich</surname>
          </string-name>
          ,
          <article-title>Euclidean distance between intervals is the only representation-invariant one</article-title>
          (
          <year>2020</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [22]
          <string-name>
            <given-names>K.</given-names>
            <surname>Bache</surname>
          </string-name>
          ,
          <string-name>
            <surname>M. Lichman,</surname>
          </string-name>
          <article-title>UCI machine learning repository</article-title>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          [23]
          <string-name>
            <given-names>F.</given-names>
            <surname>Pedregosa</surname>
          </string-name>
          ,
          <string-name>
            <given-names>G.</given-names>
            <surname>Varoquaux</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Gramfort</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Michel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>B.</given-names>
            <surname>Thirion</surname>
          </string-name>
          ,
          <string-name>
            <given-names>O.</given-names>
            <surname>Grisel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Blondel</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Prettenhofer</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Weiss</surname>
          </string-name>
          ,
          <string-name>
            <given-names>V.</given-names>
            <surname>Dubourg</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J.</given-names>
            <surname>Vanderplas</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Passos</surname>
          </string-name>
          ,
          <string-name>
            <given-names>D.</given-names>
            <surname>Cournapeau</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Brucher</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Perrot</surname>
          </string-name>
          , E. Duchesnay,
          <article-title>Scikit-learn: Machine learning in Python</article-title>
          ,
          <source>Journal of Machine Learning Research</source>
          <volume>12</volume>
          (
          <year>2011</year>
          )
          <fpage>2825</fpage>
          -
          <lpage>2830</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          [24]
          <string-name>
            <given-names>I.</given-names>
            <surname>Loshchilov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Hutter</surname>
          </string-name>
          ,
          <article-title>Decoupled weight decay regularization</article-title>
          ,
          <source>arXiv preprint arXiv:1711.05101</source>
          (
          <year>2017</year>
          ).
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          [25]
          <string-name>
            <given-names>I.</given-names>
            <surname>Covert</surname>
          </string-name>
          ,
          <string-name>
            <given-names>C.</given-names>
            <surname>Kim</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S.-I.</given-names>
            <surname>Lee</surname>
          </string-name>
          ,
          <article-title>Learning to estimate shapley values with vision transformers</article-title>
          ,
          <year>2023</year>
          . arXiv:
          <volume>2206</volume>
          .
          <fpage>05282</fpage>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>