<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Explaining Change in Models and Data with Global Feature Importance and Effects</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Maximilian</forename><surname>Muschalik</surname></persName>
							<email>maximilian.muschalik@lmu.de</email>
							<affiliation key="aff0">
								<orgName type="institution">LMU Munich</orgName>
								<address>
									<postCode>D-80539</postCode>
									<settlement>Munich</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">MCML</orgName>
								<address>
									<settlement>Munich</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Fabian</forename><surname>Fumagalli</surname></persName>
							<email>ffumagalli@techfak.uni-bielefeld.de</email>
							<affiliation key="aff2">
								<orgName type="institution">Bielefeld University</orgName>
								<address>
									<postCode>D-33619</postCode>
									<settlement>Bielefeld</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Barbara</forename><surname>Hammer</surname></persName>
							<affiliation key="aff2">
								<orgName type="institution">Bielefeld University</orgName>
								<address>
									<postCode>D-33619</postCode>
									<settlement>Bielefeld</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Eyke</forename><surname>Hüllermeier</surname></persName>
							<affiliation key="aff0">
								<orgName type="institution">LMU Munich</orgName>
								<address>
									<postCode>D-80539</postCode>
									<settlement>Munich</settlement>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">MCML</orgName>
								<address>
									<settlement>Munich</settlement>
								</address>
							</affiliation>
						</author>
						<author>
							<affiliation key="aff3">
								<orgName type="department">Explainable AI for Time Series and Data Streams Tutorial-Workshop</orgName>
								<address>
<addrLine>Sep. 9th</addrLine>
									<postCode>2024</postCode>
									<settlement>Vilnius</settlement>
									<country>Lithuania</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Explaining Change in Models and Data with Global Feature Importance and Effects</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">8D0F02AACF2F9631F837B16A42334679</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:54+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
<term>Explainable Artificial Intelligence, Interpretable Machine Learning, Online Learning, Concept Drift</term>
					<term>ORCID 0000-0002-6921-0204 (M. Muschalik)</term>
					<term>0000-0003-3955-3510 (F. Fumagalli)</term>
					<term>0000-0002-0935-5591 (B. Hammer)</term>
					<term>0000-0002-9944-4108 (E. Hüllermeier)</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>In dynamic machine learning environments, where data streams continuously evolve, traditional explanation methods struggle to remain faithful to the underlying model or data distribution. Therefore, this work presents a unified framework for efficiently computing incremental model-agnostic global explanations tailored for time-dependent models. By extending static model-agnostic methods such as Permutation Feature Importance, SAGE, and Partial Dependence Plots into the online learning context, the proposed framework enables the continuous updating of explanations as new data becomes available. These incremental variants ensure that global explanations remain relevant while minimizing computational overhead. The framework also addresses key challenges related to data distribution maintenance and perturbation generation in online learning, offering time- and memory-efficient solutions such as geometric reservoir-based sampling for data replacement.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>In applied machine learning, data often evolves over time, which necessitates changes to prediction models. Ensuring the reliability of such time-dependent models is increasingly important in high-stakes applications such as financial services <ref type="bibr" target="#b0">[1]</ref>, sensor <ref type="bibr" target="#b1">[2,</ref><ref type="bibr" target="#b2">3]</ref> and network <ref type="bibr" target="#b3">[4]</ref> analysis. In recent years, eXplainable Artificial Intelligence (XAI) has targeted time-dependent explanations of predictions that react to changes in the underlying data distributions and prediction models <ref type="bibr" target="#b4">[5]</ref>. In the extreme case, where data is observed sequentially over time from a data stream, models are updated incrementally with each new observation, which is known as online learning or incremental learning <ref type="bibr" target="#b5">[6]</ref>. In this context, re-computing XAI methods from scratch can become computationally infeasible, which has motivated incremental variants <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b7">8,</ref><ref type="bibr" target="#b8">9,</ref><ref type="bibr" target="#b9">10,</ref><ref type="bibr" target="#b10">11]</ref>.</p><p>In this work, we present a unified framework for efficiently computing incremental variants of model-agnostic global explanations (MAGEs). We demonstrate that existing incremental XAI techniques are subsumed by the incremental MAGE framework. Furthermore, static MAGEs cover a wide range of existing model-agnostic XAI methods, including Shapley interactions <ref type="bibr" target="#b11">[12]</ref>, which expands the range of efficient incremental XAI techniques for interpreting black-box online learning models.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>We first introduce background on model-agnostic global explanations (Section 2.1), as well as online learning from data streams (Section 2.2). We consider a trained black-box model 𝑓 ∶ 𝒳 → 𝒴 with input domain 𝒳 equipped with a 𝑑-dimensional feature representation 𝒟 = {1, … , 𝑑}, e.g. 𝒳 = ℝ 𝑑 , and output domain 𝒴. We do not make any further assumptions on the model architecture and instead access the model only through its predictions on given instances. This setting is known as model-agnostic explanation <ref type="bibr" target="#b12">[13]</ref>.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Model-Agnostic Global Explanations</head><p>A global explanation of a black-box model considers the behavior of 𝑓 across a whole labeled dataset (𝑥 𝑗 , 𝑦 𝑗 ) ∈ 𝒳 × 𝒴 with 𝑗 = 1, … , 𝑛. Global feature importance (FI) is an instance of global explanations that outputs an importance score 𝜙 FI ∶ 𝒟 → ℝ for every feature 𝑖 ∈ 𝒟 <ref type="bibr" target="#b13">[14]</ref>. Global FI measures the change in a model's performance when the model's access to a feature's information is restricted. Permutation FI (PFI) <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b15">16]</ref> is computed by permuting the values of the target feature and measuring the change in performance across a dataset. Since permuting the feature's values limits the model's access to this information, PFI yields an efficient way to compute global FI. However, a feature's contribution to the model's performance might strongly depend on other features, so perturbing a single feature's value in the presence of all remaining features is a limitation of PFI. Shapley additive global importance (SAGE) <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b16">17]</ref> accounts for this limitation by computing the increase in loss across sampled permutations 𝜋 ∶ 𝒟 → 𝒟 of the feature set. For such a permutation 𝜋, each feature 𝑖 ∈ 𝒟 appears at a certain position, and SAGE measures the average increase in loss for the features preceding 𝑖 with and without 𝑖. Sampling several permutations yields an approximation of the Shapley Value (SV) <ref type="bibr" target="#b17">[18]</ref>, a concept from cooperative game theory that guarantees that the SAGE values fairly decompose the overall loss.
While global FI quantifies the impact of individual features, it is limited in its expressivity.</p><p>To capture Feature Effects (FEs), Partial Dependence Plots (PDPs) <ref type="bibr" target="#b18">[19]</ref> impute a specified feature's value across all observations and compute the average prediction when this feature's value is set. The PDP visualizes this average prediction across a range of different values, which allows a global interpretation of the average effect of changing this feature's value <ref type="bibr" target="#b19">[20]</ref>. Besides PDPs, there exist other FE methods <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21,</ref><ref type="bibr" target="#b21">22]</ref> with extensions to regional explanations <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b22">23]</ref>. Another way of quantifying FEs is via interaction indices that distribute contributions among individual features and groups of features up to a maximum group size 𝑘. In recent work, several Shapley-based interaction indices have been proposed <ref type="bibr" target="#b23">[24,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b25">26]</ref>, along with their efficient computation in a model-agnostic setting <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b23">24,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b26">27,</ref><ref type="bibr" target="#b27">28,</ref><ref type="bibr" target="#b28">29]</ref>. Model-agnostic global explanations have been widely applied in static environments <ref type="bibr" target="#b29">[30]</ref>; in practice, however, data is often dynamic, and explanations become outdated as models are adapted over time.</p></div>
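As a minimal, self-contained illustration of the two static methods described above (function names, the squared-error loss, and the sampling strategy are our own illustrative choices, not the paper's or any library's code), PFI and PDP can be sketched as:

```python
import numpy as np

def permutation_feature_importance(model, X, y, loss, n_repeats=5, seed=None):
    """PFI sketch: average increase in loss after permuting each feature's column."""
    rng = np.random.default_rng(seed)
    base_loss = loss(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for i in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            # permuting the column severs the link between feature i and the target
            X_perm[:, i] = rng.permutation(X_perm[:, i])
            deltas.append(loss(y, model.predict(X_perm)) - base_loss)
        importances[i] = float(np.mean(deltas))
    return importances

def partial_dependence(model, X, feature, grid):
    """PDP sketch: average prediction when `feature` is imputed with each grid value."""
    pdp = []
    for v in grid:
        X_mod = X.copy()
        X_mod[:, feature] = v  # set this feature's value across all observations
        pdp.append(float(np.mean(model.predict(X_mod))))
    return np.array(pdp)
```

For an additive model such as 𝑓(𝑥) = 𝑥₀ + 2𝑥₁, the sketch assigns the largest importance to the feature with the largest coefficient and zero importance to unused features, and the PDP of feature 0 recovers its linear effect.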
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.2.">Online Learning From Data Streams</head><p>In many real-world applications <ref type="bibr" target="#b1">[2]</ref>, data is observed sequentially over time. In an extreme setting, we observe a data stream (𝑥 0 , 𝑦 0 ), … , (𝑥 𝑡 , 𝑦 𝑡 ), where at time 𝑡 the data point (𝑥 𝑡 , 𝑦 𝑡 ) is observed. The goal of online learning <ref type="bibr" target="#b5">[6]</ref> is to train a time-dependent model 𝑓 𝑡 by using the current observation (𝑥 𝑡 , 𝑦 𝑡 ) once to obtain an updated model 𝑓 𝑡+1 , i.e.</p><formula xml:id="formula_0">IncrementalUpdate(𝑓 𝑡 , 𝑥 𝑡 , 𝑦 𝑡 ) ⟶ 𝑓 𝑡+1 .</formula><p>Prominent instances of online learning algorithms include Hoeffding adaptive trees <ref type="bibr" target="#b30">[31]</ref> and adaptive random forests <ref type="bibr" target="#b31">[32]</ref>, where splits and tree structures are replaced if they become outdated. Other training schemes, such as stochastic gradient descent, inherently allow for incremental updates <ref type="bibr" target="#b5">[6]</ref>. Online learning is especially important when the underlying data distribution changes over time. This phenomenon is known as concept drift and occurs in many applications <ref type="bibr" target="#b32">[33]</ref>. Detecting concept drift and reacting adequately by updating the model is one of the major applications of incremental learning <ref type="bibr" target="#b5">[6]</ref>. A common approach to detect concept drift is via accuracy-based drift detectors, where a sudden change in the model's accuracy indicates a change in the distribution <ref type="bibr" target="#b32">[33]</ref>. Recently, it was proposed to enhance such detection schemes using global FI methods <ref type="bibr" target="#b4">[5]</ref>. However, the computation of such methods is a challenging problem that has mainly been considered in static scenarios.</p></div>
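The IncrementalUpdate scheme above can be illustrated with a minimal online learner; this is a sketch using one stochastic-gradient step per observation on a linear model (class name, learning rate, and squared-error objective are illustrative assumptions, not a specific library API):

```python
import numpy as np

class OnlineLinearRegressor:
    """Minimal online learner: IncrementalUpdate(f_t, x_t, y_t) -> f_{t+1}
    is realized as one stochastic-gradient step per observation."""

    def __init__(self, n_features, lr=0.05):
        self.w = np.zeros(n_features)
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        return float(self.w @ x + self.b)

    def incremental_update(self, x, y):
        # one gradient step on the squared error of the current observation;
        # the data point is used once and never stored
        err = self.predict(x) - y
        self.w -= self.lr * err * x
        self.b -= self.lr * err
```

Feeding the stream point by point, the model tracks the current data-generating function without ever revisiting past observations, which is exactly the access pattern assumed by the incremental explanations discussed below.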
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">A Unified Framework for Explaining Change in Models and Data</head><p>We now present a unified framework that enables the efficient computation of incremental variants of model-agnostic global explanations in an online learning setting. In a static setting, a global explanation is typically computed for individual features (global FI) or groups of features (global FE), which we summarize in the following definition.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 1 (ℰ).</head><p>Global explanations are computed for every element in the explanation domain ℰ, which is a collection of features and interactions ℰ ⊆ 2 𝒟 . Given an explanation domain, the explanation can be computed for each element in a static setting.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 2 (Static MAGE). A static Model-Agnostic Global Explanation (MAGE) 𝜙</head><formula xml:id="formula_1">𝑓 ∶ ℰ → ℝ for a set of features 𝑆 is 𝜙 𝑓 (𝑆) ∶= 1 𝑛 𝑛 ∑ 𝑗=1 𝜆 𝑓 (𝑥 𝑗 , 𝑦 𝑗 , 𝑆, 𝒫 𝑥 𝑗 ,𝑆 ) .</formula><p>Here, 𝒫 𝑥 𝑗 ,𝑆 is a set of data points and 𝜆 𝑓 is a method-specific explanation function.</p><p>Typically, the perturbation data 𝒫 𝑥,𝑆 is constructed from a combination of the data point 𝑥 and another sampled data point x , where the feature values for 𝑆 and −𝑆 ∶= 𝒟 ∖ 𝑆 are taken from either 𝑥 or x <ref type="bibr" target="#b13">[14]</ref>. The sampling of x may be done dependently or independently of 𝑥. Instantiations of static MAGEs include PFI <ref type="bibr" target="#b14">[15]</ref>, where ℰ contains individual features and 𝜆 𝑓 measures the increase in loss. Therein, 𝒫 𝑥 𝑗 ,𝑆 includes a single data point constructed from the values of 𝑥 𝑗 for the features in −𝑆 and the values for 𝑆 from another data point obtained from the dataset using a permutation. SAGE <ref type="bibr" target="#b13">[14]</ref> is also covered in this framework by choosing 𝜆 𝑓 as the average over sampled permutations of 𝒟, as described in Section 2.1. Lastly, PDPs <ref type="bibr" target="#b18">[19]</ref> are contained in this framework, where 𝜆 𝑓 is chosen as the prediction for a combination of 𝑥 𝑗 and x ∈ 𝒫 𝑥 𝑗 ,𝑆 , with 𝒫 𝑥,𝑆 containing the data points for which the PDP is visualized.</p><p>Having established a unified view on static MAGEs, we now turn our focus to the online learning setting described in Section 2.2. Using the data points observed up to time 𝑡, a naive way to compute MAGEs via Definition 2 is</p><formula xml:id="formula_2">𝜙 𝑡 (𝑆) ∶= 1 𝑡 𝑡−1 ∑ 𝑠=0 𝜆 𝑓 𝑡 (𝑥 𝑠 , 𝑦 𝑠 , 𝑆, 𝒫 𝑥 𝑠 ,𝑆 ) .<label>(1)</label></formula><p>Re-computing Eq. 1 at every time step 𝑡 is an expensive operation, since static MAGEs are already time-consuming to compute once <ref type="bibr" target="#b13">[14]</ref>. Moreover, Eq. 1 requires storing the full data stream, which is typically infeasible. As a remedy, practitioners might restrict the computation to a time window of fixed size <ref type="bibr" target="#b4">[5]</ref>. However, reducing the number of observations increases the variance and thus lowers the quality of the explanation. In the following, we propose a framework for an incremental computation of 𝜙 𝑡 , similar to the incremental update of the model 𝑓 𝑡 . Our goal is to leverage the previously calculated MAGE and update this explanation using the currently available data point, i.e.</p><p>IncrementalUpdate(𝜙 𝑡 , 𝑓 𝑡 , 𝑥 𝑡 , 𝑦 𝑡 ) ⟶ 𝜙 𝑡+1 . By introducing a smoothing parameter 0 &lt; 𝛼 &lt; 1, we define the incremental MAGE.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Definition 3 (Incremental MAGE).</head><p>Let 0 &lt; 𝛼 &lt; 1. We define an incremental MAGE as 𝜙 𝑡 (𝑆) ∶= (1 − 𝛼) ⋅ 𝜙 𝑡−1 (𝑆) + 𝛼 ⋅ 𝜆 𝑓 𝑡 (𝑥 𝑡 , 𝑦 𝑡 , 𝑆, 𝒫 𝑥 𝑡 ,𝑆 ) .</p><p>The incremental MAGE computes a single term of the sum in Eq. 1 at each time step and exploits the previously computed MAGE values. This drastically reduces the computational complexity: the cumulative cost up to time 𝑡 equals that of computing the MAGE once via Eq. 1, yet 𝜙 𝑡 is obtained at every time step 𝑡 without additional computational resources. Incremental variants of PFI <ref type="bibr" target="#b6">[7]</ref> and SAGE <ref type="bibr" target="#b7">[8]</ref>, as well as PDP <ref type="bibr" target="#b8">[9]</ref>, have recently been proposed; they can be viewed as instantiations of incremental MAGEs.</p><p>A major challenge in computing incremental MAGEs is the maintenance of the perturbation dataset 𝒫 𝑥,𝑆 over time, i.e. efficiently constructing perturbed data points that adhere to the data distribution. Reservoir sampling <ref type="bibr" target="#b33">[34]</ref> has been adapted to efficiently store the data distribution with minimal resources <ref type="bibr" target="#b6">[7]</ref>. Geometric sampling <ref type="bibr" target="#b6">[7]</ref> maintains a reservoir of fixed length, where data points are replaced over time such that more recent observations have a higher probability of being present in the reservoir than older ones. This mechanism maintains a time-dependent marginal data distribution with limited resources. More advanced techniques maintain conditional distributions using online decision trees and allow for conditional sampling, as required for instance in conditional SAGE <ref type="bibr" target="#b7">[8]</ref>. It has been shown that both sampling techniques yield substantially different explanations <ref type="bibr" target="#b34">[35]</ref>: geometric sampling with marginal distributions highlights the structure of the model, whereas observational approaches via conditional sampling include the data distribution in the explanation <ref type="bibr" target="#b34">[35]</ref>.</p></div>
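The two mechanisms above, exponential smoothing as in Definition 3 and geometric reservoir sampling for marginal perturbations, can be combined in a short sketch; class names, the swap probability, and the squared-error loss are illustrative assumptions rather than the authors' implementation:

```python
import random

class GeometricReservoir:
    """Fixed-length reservoir in which newer points are more likely present.

    Each arriving point replaces a uniformly chosen slot (with probability
    swap_prob), so an old point's survival probability decays geometrically
    with its age, unlike classic uniform reservoir sampling."""

    def __init__(self, size, swap_prob=1.0, seed=None):
        self.size, self.swap_prob = size, swap_prob
        self.buffer = []
        self.rng = random.Random(seed)

    def add(self, x):
        if len(self.buffer) < self.size:
            self.buffer.append(x)  # fill phase
        elif self.rng.random() < self.swap_prob:
            self.buffer[self.rng.randrange(self.size)] = x

    def sample(self):
        return self.rng.choice(self.buffer)

class IncrementalPFI:
    """Incremental-MAGE-style PFI: phi_t = (1 - alpha) * phi_{t-1} + alpha * lambda_t,
    with marginal perturbations drawn from a geometric reservoir."""

    def __init__(self, n_features, loss, alpha=0.01, reservoir_size=100, seed=None):
        self.loss, self.alpha = loss, alpha
        self.phi = [0.0] * n_features
        self.reservoir = GeometricReservoir(reservoir_size, seed=seed)

    def update(self, model, x, y):
        self.reservoir.add(list(x))
        base = self.loss(y, model.predict(x))
        for i in range(len(self.phi)):
            x_pert = list(x)
            x_pert[i] = self.reservoir.sample()[i]  # marginal replacement value
            lam = self.loss(y, model.predict(x_pert)) - base  # one summand of Eq. 1
            self.phi[i] = (1 - self.alpha) * self.phi[i] + self.alpha * lam
        return self.phi
```

Each arriving observation triggers exactly one evaluation of the explanation function per feature, so the explanation is available at every time step at the amortized cost of a single static computation.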
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Conclusion and Future Work</head><p>We summarized popular model-agnostic global explanation techniques, such as the FI-based PFI and SAGE as well as FE-based PDPs, into the MAGE framework for static learning environments. We then proposed the incremental MAGE framework to directly compute these explanations for online learning on data streams. Incremental MAGE allows incrementally updating previous estimates of MAGEs at each time step using minimal resources. We have shown that incremental variants, such as iPFI, iSAGE and iPDP, can be subsumed in the incremental MAGE framework. Incremental MAGE also offers opportunities to expand the range of incremental MAGE techniques. For instance, recently proposed methods to estimate Shapley interactions <ref type="bibr" target="#b11">[12,</ref><ref type="bibr" target="#b24">25,</ref><ref type="bibr" target="#b28">29]</ref> may be placed in the incremental MAGE framework to discover complex interactions beyond isolated FEs. Moreover, with an increasing variety of explanations at different complexity levels, human-centered presentations and visualizations are important future work.</p></div>		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>We gratefully acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation): TRR 318/1 2021 -438445824.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<monogr>
		<title level="m" type="main">Sequential Deep Learning for Credit Risk Monitoring with Tabular Financial Data</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Clements</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Xu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Yousefi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Efimov</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2012.15330</idno>
		<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Data stream analysis: Foundations, major tasks and tools</title>
		<author>
			<persName><forename type="first">M</forename><surname>Bahri</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bifet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Gomes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Maniu</surname></persName>
		</author>
		<idno type="DOI">10.1002/widm.1405</idno>
	</analytic>
	<monogr>
		<title level="j">Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page">e1405</biblScope>
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Predictive maintenance based on anomaly detection using deep learning for air production unit in the railway industry</title>
		<author>
			<persName><forename type="first">N</forename><surname>Davari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Veloso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">P</forename><surname>Ribeiro</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">M</forename><surname>Pereira</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gama</surname></persName>
		</author>
		<idno type="DOI">10.1109/DSAA53316.2021.9564181</idno>
	</analytic>
	<monogr>
		<title level="m">8th IEEE International Conference on Data Science and Advanced Analytics (DSAA 2021)</title>
				<imprint>
			<publisher>IEEE</publisher>
			<date type="published" when="2021">2021</date>
			<biblScope unit="page" from="1" to="10" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<monogr>
		<title level="m" type="main">Online Feature Ranking for Intrusion Detection Systems</title>
		<author>
			<persName><forename type="first">B</forename><forename type="middle">G</forename><surname>Atli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Jung</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1803.00530</idno>
		<imprint>
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Agnostic explanation of model change based on feature importance</title>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<idno type="DOI">10.1007/S13218-022-00766-6</idno>
		<ptr target="https://doi.org/10.1007/s13218-022-00766-6" />
	</analytic>
	<monogr>
		<title level="j">Künstliche Intell</title>
		<imprint>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="211" to="224" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Incremental On-line Learning: A Review and Comparison of State of the Art Algorithms</title>
		<author>
			<persName><forename type="first">V</forename><surname>Losing</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Wersing</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.neucom.2017.06.084</idno>
	</analytic>
	<monogr>
		<title level="j">Neurocomputing</title>
		<imprint>
			<biblScope unit="volume">275</biblScope>
			<biblScope unit="page" from="1261" to="1274" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Incremental permutation feature importance (ipfi): towards online explanations on data streams</title>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<idno type="DOI">10.1007/S10994-023-06385-Y</idno>
		<ptr target="https://doi.org/10.1007/s10994-023-06385-y" />
	</analytic>
	<monogr>
		<title level="j">Mach. Learn</title>
		<imprint>
			<biblScope unit="volume">112</biblScope>
			<biblScope unit="page" from="4863" to="4903" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">isage: An incremental version of SAGE for online explanation on data streams</title>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-43418-1_26</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-43418-1_26" />
	</analytic>
	<monogr>
		<title level="m">Machine Learning and Knowledge Discovery in Databases: Research Track -European Conference, ECML PKDD 2023</title>
		<title level="s">Lecture Notes in Computer Science</title>
		<editor>
			<persName><forename type="first">D</forename><surname>Koutra</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">C</forename><surname>Plant</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">M</forename><forename type="middle">G</forename><surname>Rodriguez</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">E</forename><surname>Baralis</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><surname>Bonchi</surname></persName>
		</editor>
		<meeting><address><addrLine>Turin, Italy</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">September 18-22, 2023</date>
			<biblScope unit="volume">14171</biblScope>
			<biblScope unit="page" from="428" to="445" />
		</imprint>
	</monogr>
	<note>Proceedings, Part III</note>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">ipdp: On partial dependence plots in dynamic modeling scenarios</title>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Jagtani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-031-44064-9_11</idno>
		<ptr target="https://doi.org/10.1007/978-3-031-44064-9_11" />
	</analytic>
	<monogr>
		<title level="m">Explainable Artificial Intelligence -First World Conference, xAI 2023</title>
		<title level="s">Communications in Computer and Information Science</title>
		<editor>
			<persName><forename type="first">L</forename><surname>Longo</surname></persName>
		</editor>
		<meeting><address><addrLine>Lisbon, Portugal</addrLine></address></meeting>
		<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="2023">July 26-28, 2023</date>
			<biblScope unit="volume">1901</biblScope>
			<biblScope unit="page" from="177" to="194" />
		</imprint>
	</monogr>
	<note>Proceedings, Part I</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Calculating feature importance in data streams with concept drift using online random forest</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">P</forename><surname>Cassidy</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><forename type="middle">A</forename><surname>Deviney</surname></persName>
		</author>
		<idno type="DOI">10.1109/BigData.2014.7004352</idno>
	</analytic>
	<monogr>
		<title level="m">IEEE International Conference on Big Data (Big Data</title>
				<imprint>
			<date type="published" when="2014">2014</date>
			<biblScope unit="page" from="23" to="28" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Feature scoring using tree-based ensembles for evolving data streams</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Gomes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">F D</forename><surname>Mello</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pfahringer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bifet</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">2019 IEEE International Conference on Big Data (Big Data</title>
				<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="761" to="769" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">SHAP-IQ: Unified approximation of any-order shapley interactions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kolpaczki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Thirty-seventh Conference on Neural Information Processing Systems</title>
				<meeting><address><addrLine>NeurIPS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
		<idno type="DOI">10.1109/ACCESS.2018.2870052</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="52138" to="52160" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Understanding Global Feature Contributions With Additive Importance Measures</title>
		<author>
			<persName><forename type="first">I</forename><surname>Covert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of International Conference on Neural Information Processing Systems</title>
				<meeting>International Conference on Neural Information Processing Systems<address><addrLine>NeurIPS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="17212" to="17223" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">Random Forests</title>
		<author>
			<persName><forename type="first">L</forename><surname>Breiman</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">45</biblScope>
			<biblScope unit="page" from="5" to="32" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">All Models are Wrong, but Many are Useful: Learning a Variable&apos;s Importance by Studying an Entire Class of Prediction Models Simultaneously</title>
		<author>
			<persName><forename type="first">A</forename><surname>Fisher</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Rudin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dominici</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">20</biblScope>
			<biblScope unit="page" from="1" to="81" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Visualizing the Feature Importance for Black Box Models</title>
		<author>
			<persName><forename type="first">G</forename><surname>Casalicchio</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Molnar</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<idno type="DOI">10.1007/978-3-030-10925-7_40</idno>
	</analytic>
	<monogr>
		<title level="s">Lecture Notes in Computer Science</title>
		<imprint>
			<biblScope unit="volume">11051</biblScope>
			<biblScope unit="page" from="655" to="670" />
			<date type="published" when="2019">2019</date>
			<publisher>Springer International Publishing</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">A Value for n-Person Games</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">S</forename><surname>Shapley</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Contributions to the Theory of Games (AM-28)</title>
				<meeting><address><addrLine>New Jersey, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Princeton University Press</publisher>
			<date type="published" when="1953">1953</date>
			<biblScope unit="volume">II</biblScope>
			<biblScope unit="page" from="307" to="318" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">Greedy Function Approximation: A Gradient Boosting Machine</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">H</forename><surname>Friedman</surname></persName>
		</author>
		<ptr target="http://www.jstor.org/stable/2699986" />
	</analytic>
	<monogr>
		<title level="j">The Annals of Statistics</title>
		<imprint>
			<biblScope unit="volume">29</biblScope>
			<biblScope unit="page" from="1189" to="1232" />
			<date type="published" when="2001">2001</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">REPID: regional effect plots with implicit interaction detection</title>
		<author>
			<persName><forename type="first">J</forename><surname>Herbinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalicchio</surname></persName>
		</author>
		<ptr target="https://proceedings.mlr.press/v151/herbinger22a.html" />
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics, AISTATS 2022</title>
				<editor>
			<persName><forename type="first">G</forename><surname>Camps-Valls</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">F</forename><forename type="middle">J R</forename><surname>Ruiz</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">I</forename><surname>Valera</surname></persName>
		</editor>
		<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2022-03">March 2022</date>
			<biblScope unit="volume">151</biblScope>
			<biblScope unit="page" from="10209" to="10233" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Visualizing the effects of predictor variables in black box supervised learning models</title>
		<author>
			<persName><forename type="first">D</forename><forename type="middle">W</forename><surname>Apley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Zhu</surname></persName>
		</author>
		<ptr target="https://api.semanticscholar.org/CorpusID:88522102" />
	</analytic>
	<monogr>
		<title level="j">Journal of the Royal Statistical Society: Series B (Statistical Methodology)</title>
		<imprint>
			<biblScope unit="volume">82</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">From local explanations to global understanding with explainable AI for trees</title>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">M</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><forename type="middle">G</forename><surname>Erion</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">J</forename><surname>Degrave</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Prutkin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Nair</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Katz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Himmelfarb</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Bansal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lee</surname></persName>
		</author>
		<idno type="DOI">10.1038/s42256-019-0138-9</idno>
	</analytic>
	<monogr>
		<title level="j">Nature Machine Intelligence</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page" from="56" to="67" />
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<monogr>
		<title level="m" type="main">Decomposing global feature effects based on feature interactions</title>
		<author>
			<persName><forename type="first">J</forename><surname>Herbinger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Bischl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Casalicchio</surname></persName>
		</author>
		<idno type="DOI">10.48550/ARXIV.2306.00541</idno>
		<idno type="arXiv">arXiv:2306.00541</idno>
		<ptr target="https://doi.org/10.48550/arXiv.2306.00541" />
		<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b23">
	<analytic>
		<title level="a" type="main">The Shapley Taylor Interaction Index</title>
		<author>
			<persName><forename type="first">M</forename><surname>Sundararajan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Dhamdhere</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Agarwal</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 37th International Conference on Machine Learning</title>
				<meeting>the 37th International Conference on Machine Learning<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<publisher>ICML</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="volume">119</biblScope>
			<biblScope unit="page" from="9259" to="9268" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b24">
	<analytic>
		<title level="a" type="main">Faith-Shap: The Faithful Shapley Interaction Index</title>
		<author>
			<persName><forename type="first">C</forename><surname>Tsai</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Yeh</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Ravikumar</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">24</biblScope>
			<biblScope unit="page" from="1" to="42" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b25">
	<analytic>
		<title level="a" type="main">From Shapley Values to Generalized Additive Models and back</title>
		<author>
			<persName><forename type="first">S</forename><surname>Bordt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">U</forename><surname>Von Luxburg</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics (AISTATS 2023)</title>
				<meeting><address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">206</biblScope>
			<biblScope unit="page" from="709" to="745" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b26">
	<analytic>
		<title level="a" type="main">SVARM-IQ: Efficient approximation of any-order Shapley interactions through stratification</title>
		<author>
			<persName><forename type="first">P</forename><surname>Kolpaczki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of The 27th International Conference on Artificial Intelligence and Statistics</title>
				<meeting>The 27th International Conference on Artificial Intelligence and Statistics<address><addrLine>PMLR</addrLine></address></meeting>
		<imprint>
			<publisher>AISTATS</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="volume">238</biblScope>
			<biblScope unit="page" from="3520" to="3528" />
		</imprint>
	</monogr>
	<note>Proceedings of Machine Learning Research</note>
</biblStruct>

<biblStruct xml:id="b27">
	<analytic>
		<title level="a" type="main">KernelSHAP-IQ: Weighted least square optimization for shapley interactions</title>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Kolpaczki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<ptr target="https://openreview.net/forum?id=d5jXW2H4gg" />
	</analytic>
	<monogr>
		<title level="m">Forty-first International Conference on Machine Learning</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b28">
	<analytic>
		<title level="a" type="main">Beyond treeshap: Efficient computation of any-order shapley interactions for tree ensembles</title>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<idno type="DOI">10.1609/AAAI.V38I13.29352</idno>
	</analytic>
	<monogr>
		<title level="m">Thirty-Eighth AAAI Conference on Artificial Intelligence</title>
				<imprint>
			<publisher>AAAI Press</publisher>
			<date type="published" when="2024">2024</date>
			<biblScope unit="page" from="14388" to="14396" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b29">
	<analytic>
		<title level="a" type="main">Explaining by Removing: A Unified Framework for Model Explanation</title>
		<author>
			<persName><forename type="first">I</forename><surname>Covert</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Lundberg</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S.-I</forename><surname>Lee</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Machine Learning Research</title>
		<imprint>
			<biblScope unit="volume">22</biblScope>
			<biblScope unit="page" from="1" to="90" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b30">
	<analytic>
		<title level="a" type="main">Mining time-changing data streams</title>
		<author>
			<persName><forename type="first">G</forename><surname>Hulten</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Spencer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><surname>Domingos</surname></persName>
		</author>
		<idno type="DOI">10.1145/502512.502529</idno>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2001)</title>
				<meeting>the seventh ACM SIGKDD international conference on Knowledge discovery and data mining (KDD 2001)</meeting>
		<imprint>
			<publisher>ACM Press</publisher>
			<date type="published" when="2001">2001</date>
			<biblScope unit="page" from="97" to="106" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b31">
	<analytic>
		<title level="a" type="main">Adaptive random forests for evolving data stream classification</title>
		<author>
			<persName><forename type="first">H</forename><forename type="middle">M</forename><surname>Gomes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Bifet</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Read</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Barddal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Enembreck</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Pfahringer</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Holmes</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Abdessalem</surname></persName>
		</author>
		<idno type="DOI">10.1007/s10994-017-5642-8</idno>
	</analytic>
	<monogr>
		<title level="j">Machine Learning</title>
		<imprint>
			<biblScope unit="volume">106</biblScope>
			<biblScope unit="page" from="1469" to="1495" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b32">
	<analytic>
		<title level="a" type="main">Learning under Concept Drift: A Review</title>
		<author>
			<persName><forename type="first">J</forename><surname>Lu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Dong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Gama</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Zhang</surname></persName>
		</author>
		<idno type="DOI">10.1109/TKDE.2018.2876857</idno>
	</analytic>
	<monogr>
		<title level="j">IEEE Transactions on Knowledge and Data Engineering</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page" from="2346" to="2363" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b33">
	<analytic>
		<title level="a" type="main">Random Sampling with a Reservoir</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Vitter</surname></persName>
		</author>
		<idno type="DOI">10.1145/3147.3165</idno>
	</analytic>
	<monogr>
		<title level="j">ACM Transactions on Mathematical Software</title>
		<imprint>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="37" to="57" />
			<date type="published" when="1985">1985</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b34">
	<analytic>
		<title level="a" type="main">On feature removal for explainability in dynamic environments</title>
		<author>
			<persName><forename type="first">F</forename><surname>Fumagalli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Muschalik</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Hüllermeier</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<idno type="DOI">10.14428/ESANN/2023.ES2023-148</idno>
		<ptr target="https://doi.org/10.14428/esann/2023.ES2023-148" />
	</analytic>
	<monogr>
		<title level="m">European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, ESANN 2023</title>
				<meeting><address><addrLine>Bruges, Belgium</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">October 4-6, 2023</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
