<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Application of Multi-Instance Counterfactual Explanation in Road Safety Analysis</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">André</forename><surname>Artelt</surname></persName>
							<email>aartelt@techfak.uni-bielefeld.de</email>
							<affiliation key="aff0">
								<orgName type="institution">Bielefeld University</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
							<affiliation key="aff1">
								<orgName type="institution">University of Cyprus</orgName>
								<address>
									<country key="CY">Cyprus</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Andreas</forename><surname>Gregoriades</surname></persName>
							<email>andreas.gregoriades@cut.ac.cy</email>
							<affiliation key="aff2">
								<orgName type="institution">Cyprus University of Technology</orgName>
								<address>
									<country key="CY">Cyprus</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Application of Multi-Instance Counterfactual Explanation in Road Safety Analysis</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">FF1261A1C4F8EB387287FFEEB9A90E15</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T18:28+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Road Safety</term>
					<term>XAI</term>
					<term>Counterfactual Explanations</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Road accidents cause millions of fatalities worldwide and impose a significant economic burden on societies. To address this problem, road safety researchers have mainly applied statistical or machine learning methods to predict accident occurrence and identify the causes of crashes; to the best of our knowledge, however, no work has explored how to optimally address those causes in order to minimise accident severity. Recently, eXplainable AI (XAI) techniques have been applied in transportation to evaluate the effect of accidents' contributing factors. Little work, however, has investigated optimal ways to reduce accident severity while minimising the changes needed to a road network's infrastructure using XAI. In this work, we apply counterfactual explanations, a popular XAI technique, to road accident data in order to identify optimal changes to infrastructure that improve road safety by converting severe accidents into minor ones. Traditionally, counterfactual explanations are computed for single instances (accidents), which is not appropriate for this problem, since the goal is to find actionable changes to the road infrastructure that convert as many severe accidents as possible into non-severe ones. Our proposed methodology is therefore based on multi-instance explanations. It is evaluated in a case study with real accident data from Cyprus.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>According to the World Health Organisation (WHO), approximately 1.4 million people die in road accidents each year worldwide, and millions more are injured. Road accidents constitute the eighth leading cause of death worldwide, and this number is likely to increase if the problem is not addressed effectively.</p><p>Research on road safety predominantly utilizes classical statistical techniques such as logistic, Poisson, and negative binomial regression <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2]</ref>. These methods have undoubtedly provided insights; however, the fundamental characteristics of accident data often impose methodological limitations that such techniques cannot account for. Recent research has indicated that machine learning (ML) techniques outperform conventional statistical methods by offering superior predictive accuracy, owing to their ability to work with massive amounts of multidimensional and noisy data, while also generalizing better, as reported by Iranitalab et al. <ref type="bibr">(2017)</ref>. With the increased availability of data from sources such as Internet of Things devices installed in the road network, connected vehicles, and naturalistic driving studies, machine learning is becoming a key methodology in transportation.</p><p>During road safety analysis, traffic accident data is used to develop models that predict and explain the causes of accidents by identifying relationships between contributing factors and the models' outcome. Contributing factors can be combinations of properties relating to the road infrastructure, driving behavior, and environmental conditions. These are treated as independent variables, and the outcome (accident occurrence or severity) is the dependent variable. 
Predicting an accident before it happens gives policymakers the chance to take precautionary measures that minimise or prevent accidents. In the machine learning domain, researchers use techniques such as eXplainable AI (XAI) <ref type="bibr" target="#b2">[3]</ref> and/or causality analysis to investigate the contributing factors that lead to accidents. The latter approaches use multivariate statistical models to evaluate the effects of the contributing factors. However, statistical relationships are not always causal and can result from chance, bias, confounding variables, or other factors. A cause is an action or event that changes an outcome which would not have changed otherwise; a cause is therefore a necessary precondition for an event such as an accident. Causal relationships are usually characterised by regularity, meaning the same cause always produces the same result <ref type="bibr" target="#b3">[4]</ref>. Causal relationships are also linked to the notion of the counterfactual, which describes what would have happened had the cause not been present. Thus, to infer a causal relationship, it is always necessary to establish the counterfactual. In scientific analysis, the most popular way to do so is a controlled experiment in which a treatment is applied at random and the control group shows what would have happened had the treatment not been introduced (the counterfactual). Road safety experiments are difficult to perform, and most studies are observational, so the task of establishing the counterfactual amounts to controlling for confounding factors.</p><p>In transportation, one of the main approaches to improving safety is to address one or more causes or risk factors associated with accident occurrence. The traffic safety literature highlights different modifications to either policy or road infrastructure to improve safety. 
For instance, infrastructural modifications include changes to the horizontal curvature of roads, shoulder widths, the width of the median separating lanes, etc. Such modifications, however, are usually introduced after analysing the problem, designing a solution, simulating it, and then applying it, without any guarantee of the effect these changes will have. As a result, many such projects fail to meet their goals. Additionally, such modifications are usually introduced without knowing the optimum degree of change (for example, to a speed limit) needed to achieve the desired effect. Optimisation has been applied in transportation to minimise the budget required to improve safety, as reported in <ref type="bibr" target="#b4">[5]</ref>. Limited work, however, has addressed the problem of optimising infrastructural changes to reduce fatalities or accident severity.</p><p>Explainable AI (XAI) <ref type="bibr" target="#b2">[3]</ref> is an approach that aims to explain black-box machine learning models and the reasons behind their decisions through intuitive, human-understandable explanations. The need for explainability is not new, since it addresses the question of "why" a system behaves the way it does; the term XAI, however, was coined only recently by DARPA and is now used in a variety of domains where machine learning is applied. In transportation, two of the main XAI techniques used to extract knowledge from prediction models are SHapley Additive exPlanations (SHAP) and LIME <ref type="bibr" target="#b5">[6]</ref>. These approaches, however, do not offer recommendations on how to achieve a desired result. Counterfactual explanations <ref type="bibr" target="#b6">[7]</ref> ("counterfactuals" for short), on the other hand, are designed for exactly this purpose and have thus become a popular technique for explaining black-box models. 
However, their application in the traffic safety domain is missing, making this one of the first works to apply counterfactual explanations to road safety.</p><p>The goal of this work is to maximise road safety (reduce severe accidents) while minimising the infrastructural changes made to a road network. This is achieved by applying counterfactual explanations to a machine learning model trained to predict accident severity from historical accident data of the specific road network. Counterfactuals, however, usually provide explanations for single instances (accidents) rather than for a group of cases. Moreover, counterfactual explanations are typically applied to numerical variables (e.g. road width, speed limit), even though categorical variables (e.g. type of intersection) are a key type of feature in many domains, including transportation. The approach proposed herein addresses these two problems by finding counterfactuals that satisfy multiple instances (accidents) simultaneously, characterised by both numerical and categorical features. The method therefore provides policymakers with recommendations that not only indicate the factors contributing to the problem but also specify the degree of change to these factors needed to achieve the desired effect. In this way, the method can potentially minimise the cost of satisfying stakeholders' goals.</p><p>The remainder of this work is organised as follows: Section 2 elaborates on the background of counterfactual explanations, the related literature, and the motivation behind the proposed counterfactual methodology. The next section introduces the methodology (Section 3) and elaborates on its application in a road safety case study using accident data (Section 4). Finally, we summarize and discuss future directions (Section 5).</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Counterfactual Explanations</head><p>A counterfactual explanation <ref type="bibr" target="#b6">[7]</ref> ("counterfactual" for short) states how to change a given instance such that the output of the model for this instance changes in a specific way (towards a desired outcome). The popularity of counterfactual explanations comes from the fact that they are very similar to the way humans explain situations <ref type="bibr" target="#b7">[8]</ref> and that they provide precise and actionable recommendations that can be directly applied in the real world <ref type="bibr" target="#b6">[7]</ref>.</p><p>To be useful in practice, a counterfactual must not only be feasible (i.e. valid) but also as simple as possible -e.g. not too many recommendations or big changes <ref type="bibr" target="#b6">[7]</ref>. Considering these two aspects, the computation of a counterfactual 𝛿 ⃗ cf for a given case 𝑥 ⃗ orig can be formally phrased as an optimization problem that minimises the modifications to the attributes of 𝑥 ⃗ orig so that the classifier ℎ(•) changes its prediction to the desired output 𝑦 cf <ref type="bibr" target="#b6">[7,</ref><ref type="bibr" target="#b8">9]</ref>: Definition 1 (Counterfactual Explanation). Assume a prediction function ℎ : 𝒳 → 𝒴 is given. Computing a counterfactual explanation 𝛿 ⃗ cf ∈ 𝒳 for a given instance 𝑥 ⃗ orig ∈ 𝒳 is phrased as the following optimization problem:</p><p>arg min</p><formula xml:id="formula_0">𝛿 ⃗ cf ∈ 𝒳 ℓ (︀ ℎ(𝑥 ⃗ orig ⊕ 𝛿 ⃗ cf ), 𝑦 cf )︀ + 𝐶 • 𝜃(𝛿 ⃗ cf )<label>(1)</label></formula><p>where ℓ(•) denotes a loss function that penalizes deviation of the output ℎ(𝑥 ⃗ orig ⊕ 𝛿 ⃗ cf ) from the requested output 𝑦 cf , 𝜃(•) implements the cost of 𝛿 ⃗ cf -i.e. 
prefer "simple, cheap &amp; easy to execute" explanations -, and 𝐶 &gt; 0 denotes the regularization strength.</p><p>In order not to make any assumptions about the data domain, we use the symbol ⊕ to denote the application/execution of the counterfactual 𝛿 ⃗ cf on the original instance 𝑥 ⃗ orig . In the case of real and integer numbers (e.g. 𝒳 = R 𝑑 ) this reduces to a translation (i.e. (𝑥 ⃗ cf ) 𝑖 = (𝑥 ⃗ orig ) 𝑖 + (𝛿 ⃗ cf ) 𝑖 ), while in the case of categorical features it denotes a substitution -i.e. (𝑥 ⃗ cf ) 𝑖 = (𝛿 ⃗ cf ) 𝑖 .</p><p>Note that Definition 1 constitutes a non-causal approach -i.e. no causal model of the world is included. There exists an entirely different line of research on counterfactuals that utilizes structural causal models to incorporate causal knowledge <ref type="bibr" target="#b9">[10]</ref>. In practice, however, such causal models are usually not known and have to be estimated from data or carefully specified with the help of domain experts. Since such experts are not easily available, in this work we consider only a non-causal approach.</p><p>There exists a wide variety of methods for computing counterfactual explanations -i.e. for solving the optimization problem. Model-agnostic methods, often based on generic optimization procedures, can be applied to any black-box model <ref type="bibr" target="#b10">[11,</ref><ref type="bibr" target="#b11">12,</ref><ref type="bibr" target="#b12">13]</ref>, while model-specific methods exploit details of a particular model's architecture to compute recommendations <ref type="bibr" target="#b8">[9]</ref>. An important limitation of counterfactual explanations is that they lack uniqueness: there usually exists more than one possible explanation, which raises the question of which one to pick -usually, the "simplest" explanation is chosen.</p></div>
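To make Definition 1 concrete, the optimization problem of Eq. (1) can be approximated with a generic black-box solver. The following is a minimal sketch of our own (not the implementation used in the cited works), assuming purely numerical features, a 0/1 loss for ℓ(•), and an L1 cost for 𝜃(•); all names are hypothetical:

```python
import numpy as np

def counterfactual_search(h, x_orig, y_cf, lows, highs, C=0.1, n_samples=5000, seed=0):
    """Black-box random-search sketch of Eq. (1): among candidate changes
    delta drawn uniformly from per-feature ranges, return the one minimizing
    0/1-loss(h(x_orig + delta), y_cf) + C * ||delta||_1."""
    rng = np.random.default_rng(seed)
    deltas = rng.uniform(lows, highs, size=(n_samples, len(lows)))

    def objective(delta):
        loss = 0.0 if h(x_orig + delta) == y_cf else 1.0
        return loss + C * np.abs(delta).sum()

    return min(deltas, key=objective)
```

With a small regularization strength 𝐶, valid candidates (loss 0) dominate trivial near-zero changes, so the returned change both flips the prediction and stays small.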
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Multi-instance Counterfactual Explanations</head><p>In many real-world applications of counterfactuals, one is interested in gaining knowledge about a set or group of instances instead of a single instance, as is the usual case. For instance, the human resource department of an organization is interested in minimizing employee attrition, since attrition causes several problems. To understand its causes and deploy appropriate (global) countermeasures, the organization needs to consider all relevant cases (employees intending to leave) and find a single change (e.g. an increase in salary) that will retain as many employees as possible <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15]</ref>.</p><p>To find a common feasible recommendation for a group of instances (i.e. employees), counterfactual explanations have recently been extended towards multi-instance counterfactual explanations (also called group counterfactuals) <ref type="bibr" target="#b13">[14,</ref><ref type="bibr" target="#b14">15,</ref><ref type="bibr" target="#b15">16,</ref><ref type="bibr" target="#b16">17]</ref>. A multi-instance counterfactual states what to change on the group level (e.g. increasing some specific attribute by the same amount for all instances in the group) such that the outcome for this group of instances changes simultaneously in some desired way. As with single-instance counterfactuals, the computation of a multi-instance counterfactual explanation 𝛿 ⃗ cf for a set of cases 𝒟 can be formalized as an optimization problem: Definition 2 (Multi-instance Counterfactual Explanation). Let ℎ : 𝒳 → 𝒴 denote a prediction function, and let 𝒟 be a set of labeled instances with the same prediction 𝑦 ∈ 𝒴 under ℎ(•) -i.e. ℎ(𝑥 ⃗ 𝑖 ) = 𝑦 ∀𝑥 ⃗ 𝑖 ∈ 𝒟. 
We are looking for a single change 𝛿 ⃗ cf ∈ R 𝑑 that, when applied to the instances in 𝒟, changes as many of their predictions as possible to some requested output 𝑦 cf ∈ 𝒴.</p><p>We call all Pareto-optimal solutions 𝛿 ⃗ cf of the following multi-objective optimization problem multi-instance counterfactuals:</p><formula xml:id="formula_1">min 𝛿 ⃗ cf ∈ 𝒳 (︀ 𝜃(𝛿 ⃗ cf ) , ℓ(ℎ(𝑥 ⃗ 1 ⊕ 𝛿 ⃗ cf ), 𝑦 cf ), . . . , ℓ(ℎ(𝑥 ⃗ |𝒟| ⊕ 𝛿 ⃗ cf ), 𝑦 cf ) )︀<label>(2)</label></formula><p>where 𝜃(•) denotes the cost of the counterfactual, and ℓ(•) denotes a suitable loss function penalizing deviations from the requested outcome 𝑦 cf -suitable loss functions might be the mean-squared error or the cross-entropy loss, while the cost 𝜃(•) might be implemented by a 𝑝-norm.</p><p>Note that, in contrast to normal counterfactuals (Definition 1), multi-instance counterfactuals (Definition 2) involve multiple constraints -one for each case in the set 𝒟. As with normal counterfactuals, one could merge all constraints into a single objective, which would then enable the use of general gradient-based or black-box methods for solving the optimization problem.</p><p>A major challenge in the computation of multi-instance counterfactuals is that there might be no feasible solution -i.e. it might be impossible to find a single change 𝛿 ⃗ cf that is feasible for all instances in 𝒟. One might therefore either relax the constraints and compute a change 𝛿 ⃗ cf that is feasible for as many instances in 𝒟 as possible, or find a grouping/clustering of 𝒟 such that for each group there exists a change 𝛿 ⃗ cf that is feasible for all instances within that sub-group -in which case the additional challenge of finding such sub-groups arises. We denote the percentage of instances for which the explanation is feasible as its accuracy.</p><p>Unlike counterfactual explanations (Definition 1), multi-instance counterfactuals (Definition 2) are a novel concept and, consequently, existing work on them is rather limited. 
For instance, the work in <ref type="bibr" target="#b14">[15]</ref> applies multi-instance counterfactuals to the employee attrition problem but considers only a linear classifier. A more recent work <ref type="bibr" target="#b13">[14]</ref> proposes a counterfactual explanation tree, which attaches counterfactual explanations to a learned decision tree that partitions samples into groups -this method is applicable only when an automatic clustering into sub-groups is needed. Beyond that, most existing work on multi-instance counterfactuals can be interpreted as summarizing or aggregating local counterfactuals. In <ref type="bibr" target="#b15">[16]</ref>, multi-instance counterfactuals are generated by first computing individual counterfactuals and then selecting those that maximize the coverage of a given set of instances. Similarly, <ref type="bibr" target="#b17">[18]</ref> tries to obtain a global explanation by simply aggregating local explanations. However, these methods cannot guarantee high accuracy because they consider all instances separately.</p></div>
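The accuracy notion defined above (the fraction of instances for which a shared change is feasible) can be sketched directly; this is an illustrative helper of our own, assuming numerical features where ⊕ is addition:

```python
import numpy as np

def mcf_accuracy(h, X, delta, y_cf):
    """Accuracy of a multi-instance counterfactual: the fraction of
    instances in X whose prediction becomes y_cf after applying the
    single shared change delta (numerical features only)."""
    flips = [h(x + delta) == y_cf for x in X]
    return float(np.mean(flips))
```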
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Multi-instance Counterfactuals for Improving Road Safety</head><p>Herein we propose a methodology that utilizes multi-instance counterfactuals to analyze road accident records and compute suggestions on how to improve road safety -the only assumption we make is that the collected road accident records are labeled with accident severity. The proposed methodology consists of three main steps, as illustrated in Figure <ref type="figure" target="#fig_0">1:</ref> 1. Train a binary classifier using road accident data to predict the severity of accidents. 2. Group the severe accidents (optional). 3. Compute a multi-instance counterfactual explanation for each group.  We interpret the computed multi-instance counterfactuals as potential suggestions on how road safety can be improved. Note that the grouping in the second step of the methodology is optional, because one could simply consider all severe accidents as one large group. However, depending on the use case, a more refined grouping allows the computation of suggestions for more specific questions: for instance, depending on the nature of the collected data, it might be possible to group road accidents by location (e.g. rural vs. urban areas) or by the timing of the accident (day, night). In this way, one can generate more specific recommendations (e.g. area- or time-specific) for certain types of accidents.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Empirical Case Study</head><p>We empirically evaluate our proposed methodology (see Section 3) in a case study using real-world accident data.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Data</head><p>The original data consists of accidents that occurred in Nicosia, Cyprus, during 2007-2012. After merging and cleaning the data, 9829 cases were left, each consisting of 58 attributes. To ensure that the recommendations are actionable, we only consider 12 attributes that can be changed in practice, in contrast to attributes that cannot, such as the age of drivers. Examples of mutable attributes are the road width, speed limit, traffic control, pedestrian crossings, the existence of a median in the road, etc. The accident type attribute is converted into a binary variable by combining fatal with severe accidents and labelling them as severe, and combining slight (minor) with property-damage accidents and labelling these as non-severe.</p></div>
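The binarisation of the accident type described above can be sketched as follows; the column and category names are hypothetical, since the exact coding of the Cyprus dataset is not given here:

```python
import pandas as pd

# Hypothetical category labels standing in for the dataset's coding.
df = pd.DataFrame({"accident_type": ["fatal", "severe", "slight", "property_damage"]})

# Fatal and severe accidents are labelled 1 (severe), the rest 0 (non-severe).
severe = {"fatal", "severe"}
df["severity"] = df["accident_type"].isin(severe).astype(int)
```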
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Implementation</head><p>Because the data consists mainly of numerical (e.g. integer) and categorical attributes, most standard machine learning models are not suitable. Thus, in this work we use a tree-based classifier (i.e. XGBoost <ref type="bibr" target="#b18">[19]</ref>) that can handle such categorical attributes without transforming them. The classifier is trained to predict the severity of a given accident, with hyperparameters tuned to improve the model's performance, such as the depth of trees, the number of estimators, the learning rate, and scale-pos-weight to address data imbalance. The classifier achieves an average F1-score of approx. 80% on the test data -i.e. we split the data into train (70%) and test (30%) sets.</p><p>We consider three groups of accidents (among the severe accidents only) in our experiments: all severe accidents; severe accidents in rural areas; and severe accidents in urban areas. These groupings can be changed depending on the case study; for instance, a transportation engineer might use only accidents at a specific location in the road network (a black spot). Table <ref type="table" target="#tab_1">1</ref> shows different multi-instance recommendations for each of these groups. The multi-instance counterfactuals column shows the changes that need to be made so that the severe accidents are converted to non-severe. The variables in parentheses refer to properties of the infrastructure, and the number next to each one defines the type and degree of change. For example, (Traffic-control, 3), a categorical variable, denotes that the type of traffic control needs to change to a traffic light (encoded as 3) for all of these accidents so that they become non-severe. Similarly, for continuous variables such as 'speed', the recommended change is given as a number whose sign indicates a positive or negative change. 
In the case of the speed limit, most recommendations indicate a reduction, which accords with the road safety literature <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>.</p><p>Because in this case study the data also include non-continuous attributes, existing methods for computing multi-instance counterfactuals cannot be applied out of the box. To address this issue, we built an evolutionary algorithm for computing multi-instance counterfactuals that is guaranteed to adhere to the specified attribute ranges and yield feasible solutions. The evolutionary (i.e. genetic) algorithm treats all variables as discrete and iteratively mutates and merges (cross-over) candidate solutions until convergence. To guarantee the feasibility of the final multi-instance counterfactual 𝛿 ⃗ cf , we construct the set of feasible changes for each numerical feature as follows -assuming non-negativity, which can be achieved by adding a constant:</p><formula xml:id="formula_2">𝑙 𝑖 = 𝛼 𝑖 − min 𝑗 {(𝑥 ⃗ 𝑗 ) 𝑖 } and 𝑢 𝑖 = 𝛽 𝑖 − max 𝑗 {(𝑥 ⃗ 𝑗 ) 𝑖 }<label>(3)</label></formula><p>where 𝛼 𝑖 and 𝛽 𝑖 denote the minimum and maximum feasible value of the 𝑖-th feature, and the final set of feasible changes is then given as [𝑙 𝑖 , 𝑢 𝑖 ]. These sets are used when mutating existing individuals during the optimization. As an objective, we use the zero-norm (i.e. setting p=0 in the p-norm) -by this, we aim to minimize the number of suggested changes. Together with the constraints, this yields the following optimization problem:</p><formula xml:id="formula_3">arg min 𝛿 ⃗ cf (︁ ‖𝛿 ⃗ cf ‖ 0 , ℓ(ℎ(𝑥 ⃗ 1 + 𝛿 ⃗ cf ), 𝑦 cf ), . . . , ℓ(ℎ(𝑥 ⃗ |𝒟| + 𝛿 ⃗ cf ), 𝑦 cf ) )︁<label>(4)</label></formula><p>The final algorithm is given in Algorithm 1.</p></div>
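The bounds of Eq. (3) can be computed directly from the data; a small sketch (function and variable names are ours). Any change delta drawn from [l_i, u_i] keeps every instance's i-th feature within its feasible range, since the lower bound is anchored at the smallest observed value and the upper bound at the largest:

```python
import numpy as np

def change_bounds(X, alpha, beta):
    """Eq. (3): per-feature interval [l_i, u_i] of changes that keep the
    i-th feature of every instance inside its feasible range [alpha_i, beta_i].
    X is an (instances x features) array of non-negative values."""
    X = np.asarray(X, dtype=float)
    lower = np.asarray(alpha, dtype=float) - X.min(axis=0)   # l_i = alpha_i - min_j x_ji
    upper = np.asarray(beta, dtype=float) - X.max(axis=0)    # u_i = beta_i - max_j x_ji
    return lower, upper
```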
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Results</head><p>To generate reliable results and avoid recommendations based on an unlucky train-test split, the experiments are conducted using 3-fold cross-validation. Thus, we generate three multi-instance explanations for each accident group (i.e. all severe accidents, severe accidents in urban areas, and severe accidents in rural areas), as shown in Table <ref type="table" target="#tab_1">1</ref>.</p><p>Besides listing the explanations in Table <ref type="table" target="#tab_1">1</ref> (tuples of attribute and recommended change), we also added a column for the accuracy of each explanation, which refers to the percentage of instances whose prediction the explanation correctly changes. From the results, we observe that the generated multi-instance counterfactuals are almost always feasible (the accuracy is close to one). This demonstrates that our evolutionary algorithm computes feasible solutions with high reliability.</p><p>The results from the multi-instance counterfactuals show the required changes to infrastructure to convert severe accidents into non-severe ones. The variables that are considered important, and are thus part of the recommended changes, are Traffic Control, which takes the values traffic signals, roundabout, police, stop sign, and none; Road Width, stating the width in meters; Speed Limit in km/h; Pedestrian Crossing type, which takes the values zebra crossing, pedestrian traffic signal crossing, and pelican crossing; the Constriction variable, which takes the values one-way, two-way bridge, and none; and Brake </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Proposed methodology for reducing accident severity.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0"><head></head><label></label><figDesc></figDesc><graphic coords="8,72.00,65.61,451.28,369.14" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Algorithm 1</head><label>1</label><figDesc>Multi-instance Counterfactuals for Road-safety Analysis Input: Set of labeled (severe vs. non-severe) accidents 𝒟 = {(𝑥 ⃗ 𝑖 , 𝑦 𝑖 )}, hyper-parameter 𝑁 denoting the number of evolutionary steps Output: Recommendations (i.e. multi-instance counterfactual explanation) 𝛿 ⃗ cf 1: Split data 𝒟 into train 𝒟 train and test 𝒟 test set ◁ Repeat in k-fold cross validation 2: Fit XGBoost classifier ℎ(•) to 𝒟 train 3: Consider all accidents 𝑥 ⃗ 𝑖 ∈ 𝒟 test with ℎ(𝑥 ⃗ 𝑖 ) = "severe" -create 𝒟 severe ◁ Severe accidents only 4: (Group accidents and pick group of interest) ◁ Optional 5: Compute feature bounds {[𝑙 𝑖 , 𝑢 𝑖 ]} Eq. (3) on 𝒟 severe 6: ◁ Evolutionary algorithm for computing a multi-instance counterfactual 7: {𝛿 ⃗ cf𝑗 } = random_init({[𝑙 𝑖 , 𝑢 𝑖 ]}) ◁ Random initial population of solutions -respect feature bounds! 8: for N iterations do Apply random mutations considering feature bounds {[𝑙 𝑖 , 𝑢 𝑖 ]} Apply cross-over to create next generation of solutions {𝛿 ⃗ cf𝑗 } 13: end for ‖𝛿 ⃗ cf ‖ 0 , ℓ(ℎ(𝑥 ⃗ 1 + 𝛿 ⃗ cf ), 𝑦 cf ), . . . , ℓ(ℎ(𝑥 ⃗ |𝒟severe| + 𝛿 ⃗ cf ), 𝑦 cf ) )︁</figDesc><table><row><cell>9:</cell><cell cols="3">Evaluate fitness of each 𝛿 ⃗ cf𝑗 using Eq. (4)</cell></row><row><cell>10:</cell><cell cols="2">Select best 𝛿 ⃗ cf𝑗 for next generation</cell></row><row><cell>11:</cell><cell></cell><cell></cell></row><row><cell>12:</cell><cell></cell><cell></cell></row><row><cell cols="2">14: 15: 𝛿 ⃗ cf = arg min</cell><cell>(︁</cell><cell>◁ Select best solution as the final recommendation 𝛿 ⃗ cf</cell></row><row><cell></cell><cell>{𝛿 ⃗ cf𝑗 }</cell><cell></cell></row></table></figure>
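The evolutionary loop of Algorithm 1 can be sketched in plain Python. This is our own simplification: the multi-objective of Eq. (4) is collapsed into a single scalar fitness (number of changed features plus the number of instances not yet flipped), whereas the actual method keeps the objectives separate; all names and default parameter values are hypothetical:

```python
import numpy as np

def evolve_mcf(h, X, y_cf, lower, upper, n_iter=50, pop=40, keep=10, seed=0):
    """Evolutionary sketch of Algorithm 1 for integer-valued features
    (all variables treated as discrete). lower/upper are the per-feature
    bounds of Eq. (3) as integer numpy arrays."""
    rng = np.random.default_rng(seed)
    d = len(lower)

    def fitness(delta):
        missed = sum(1 for x in X if h(x + delta) != y_cf)
        return int(np.count_nonzero(delta)) + missed

    # Random initial population respecting the feature bounds.
    population = [rng.integers(lower, upper + 1) for _ in range(pop)]
    for _ in range(n_iter):
        population.sort(key=fitness)
        parents = population[:keep]            # elitist selection
        children = []
        for _ in range(pop - keep):
            a, b = rng.choice(keep, size=2, replace=False)
            mask = rng.integers(0, 2, size=d).astype(bool)   # uniform cross-over
            child = np.where(mask, parents[a], parents[b])
            i = rng.integers(d)                              # point mutation, in bounds
            child[i] = rng.integers(lower[i], upper[i] + 1)
            children.append(child)
        population = parents + children
    return min(population, key=fitness)
```

Because parents are carried over unchanged, a feasible solution found once is never lost, mirroring the "select best for next generation" step of Algorithm 1.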
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This research was supported by the Ministry of Culture and Science NRW (Germany) as part of the Lamarr Fellow Network. This publication reflects the views of the authors only.</p></div>
			</div>

			<div type="annex">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Lane, which describes the width of the footway/shoulder in meters. The recommended changes mainly highlight the importance of the traffic control type and the speed limit, factors that are known in the literature to affect road safety <ref type="bibr" target="#b19">[20,</ref><ref type="bibr" target="#b20">21]</ref>. Additional changes include an increase in road width and brake lane width, recommendations which are also consistent with the literature <ref type="bibr" target="#b21">[22]</ref>.</p><p>To validate our approach we used SHAP (SHapley Additive exPlanations) <ref type="bibr" target="#b22">[23]</ref>, a model-agnostic XAI method, to identify which features most influence the accident type. The SHAP summary plot in Figure <ref type="figure">2</ref> shows that the road width, brake lane width, and traffic control are key features, along with the junction type and speed limit. These results verify that our method uses these key features to make counterfactual recommendations. The limitation of SHAP, however, is that it does not indicate which combinations of features must change, and by how much, so that accidents are converted from severe to non-severe.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Summary &amp; Future Research</head><p>In this work, we proposed a methodology for analyzing road accident data using XAI to generate suggestions on how to improve road safety. Our methodology requires training a classifier to predict the severity of accidents (severe vs. non-severe) and then recommending changes to the input of a set of severe accidents, using multi-instance counterfactual explanations, so that the model's predictions change to non-severe. The approach enables us to compute a global recommendation on how to reduce the severity of accidents by converting all severe accidents into non-severe ones using minimal alterations to changeable features (referring to infrastructure), thereby improving road safety. We also conducted an empirical case study on real-world data, which showed that our proposed methodology computes reasonable recommendations that accord with the literature.</p><p>Despite the promising results, some aspects need further investigation:</p><p>• We observed that the suggested changes are often large (for example, to the speed limit), and it is not clear how plausible those changes would be in practice. We think this is mainly due to the fact that the designed evolutionary algorithm does not have any distributional knowledge about the data -i.e. which variable values or combinations are more often observed in the real world. Such distributional information might be used as part of the objective function, thus automatically penalizing large changes that would be costly to implement. Currently, we are working on an extension in which the evolutionary algorithm is given distributional information that it utilizes when generating random mutations, new individuals, and cross-overs. By this, we hope to improve the quality of the recommended infrastructural changes significantly. 
• In the presented case study, we either considered all severe accidents as one large group or manually split them into two sub-groups based on a location attribute from the dataset. While the computation of multi-instance counterfactuals worked well for both groupings (the single large group and the two smaller sub-groups), it remains unclear whether other interesting or, for the generated suggestions, beneficial groupings exist. Automatic clustering might not only reveal interesting clusters within the accident data set but also give rise to better and more specific suggestions on how to improve road safety. Currently, we are investigating how to cluster cases into groups such that the resulting multi-instance counterfactuals are as simple as possible.</p></div>			</div>
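The multi-instance counterfactual search described above can be sketched as a simple evolutionary loop. This is a minimal illustration under assumed names and a toy stand-in classifier, not the authors' implementation: one shared change vector delta is evolved so that a group of instances predicted severe flips to non-severe, while an L1 cost keeps the change small and sparse; the distributional penalty discussed as future work could be added as a further term in the same objective.

```python
import numpy as np

rng = np.random.default_rng(1)

def predict_severe(X):
    # Hypothetical stand-in classifier: "severe" iff the first
    # (speed-limit-like) feature exceeds a threshold.
    return (X[:, 0] > 0.5).astype(int)

def fitness(delta, X):
    # Objective: fraction of instances still predicted severe after the
    # shared change, plus an L1 cost favoring small, sparse changes.
    still_severe = predict_severe(X + delta).mean()
    return still_severe + 0.1 * np.abs(delta).sum()

def evolve(X, dim, pop=30, gens=100, sigma=0.2):
    # (mu + lambda)-style loop: keep the best half, mutate it, repeat.
    population = rng.normal(0.0, sigma, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(d, X) for d in population])
        parents = population[np.argsort(scores)[: pop // 2]]
        children = parents + rng.normal(0.0, sigma, size=parents.shape)
        population = np.vstack([parents, children])
    scores = np.array([fitness(d, X) for d in population])
    return population[np.argmin(scores)]

# 20 synthetic instances, all predicted severe before the intervention.
X_severe = rng.uniform(0.55, 0.9, size=(20, 3))
delta = evolve(X_severe, dim=3)
print(predict_severe(X_severe).mean(), predict_severe(X_severe + delta).mean())
```

A single delta shared across all instances is what makes the explanation multi-instance: it is one global recommendation rather than a per-accident counterfactual.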
			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Factors associated with traffic crashes on urban freeways</title>
		<author>
			<persName><forename type="first">A</forename><surname>Kassu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hasan</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.treng.2020.100014</idno>
		<ptr target="https://doi.org/10.1016/j.treng.2020.100014" />
	</analytic>
	<monogr>
		<title level="j">Transportation Engineering</title>
		<imprint>
			<biblScope unit="volume">2</biblScope>
			<biblScope unit="page">100014</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Evaluating expressway traffic crash severity by using logistic regression and explainable &amp; supervised machine learning classifiers</title>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">S</forename><surname>Madushani</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">K</forename><surname>Sandamal</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Meddage</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Pasindu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">P</forename><forename type="middle">A</forename><surname>Gomes</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.treng.2023.100190</idno>
		<ptr target="https://doi.org/10.1016/j.treng.2023.100190" />
	</analytic>
	<monogr>
		<title level="j">Transportation Engineering</title>
		<imprint>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page">100190</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Peeking inside the black-box: a survey on explainable artificial intelligence (XAI)</title>
		<author>
			<persName><forename type="first">A</forename><surname>Adadi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Berrada</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">6</biblScope>
			<biblScope unit="page" from="52138" to="52160" />
			<date type="published" when="2018">2018</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Assessing causality in multivariate accident models</title>
		<author>
			<persName><forename type="first">R</forename><surname>Elvik</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aap.2010.08.018</idno>
		<ptr target="https://doi.org/10.1016/j.aap.2010.08.018" />
	</analytic>
	<monogr>
		<title level="j">Accident Analysis &amp; Prevention</title>
		<imprint>
			<biblScope unit="volume">43</biblScope>
			<biblScope unit="page" from="253" to="264" />
			<date type="published" when="2011">2011</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A budget optimisation model for road safety infrastructure countermeasures</title>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">B</forename><surname>Byaruhanga</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Evdorides</surname></persName>
		</author>
		<idno type="DOI">10.1080/23311916.2022.2129363</idno>
	</analytic>
	<monogr>
		<title level="j">Cogent Engineering</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page">2129363</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Towards the spatial analysis of motorway safety in the connected environment by using explainable deep learning</title>
		<author>
			<persName><forename type="first">M</forename><surname>Gregurić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Vrbanić</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Ivanjko</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge-Based Systems</title>
		<imprint>
			<biblScope unit="volume">269</biblScope>
			<biblScope unit="page">110523</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Counterfactual explanations without opening the black box: Automated decisions and the gdpr</title>
		<author>
			<persName><forename type="first">S</forename><surname>Wachter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Mittelstadt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Russell</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Harv. JL &amp; Tech</title>
		<imprint>
			<biblScope unit="volume">31</biblScope>
			<biblScope unit="page">841</biblScope>
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">M J</forename><surname>Byrne</surname></persName>
		</author>
		<idno type="DOI">10.24963/ijcai.2019/876</idno>
		<ptr target="https://doi.org/10.24963/ijcai.2019/876" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, International Joint Conferences on Artificial Intelligence Organization</title>
				<meeting>the Twenty-Eighth International Joint Conference on Artificial Intelligence, IJCAI-19, International Joint Conferences on Artificial Intelligence Organization</meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="page" from="6276" to="6282" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<author>
			<persName><forename type="first">A</forename><surname>Artelt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Hammer</surname></persName>
		</author>
		<idno type="arXiv">arXiv:1911.07749</idno>
		<title level="m">On the computation of counterfactual explanations-a survey</title>
				<imprint>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b9">
	<analytic>
		<title level="a" type="main">Model-agnostic counterfactual explanations for consequential decisions</title>
		<author>
			<persName><forename type="first">A.-H</forename><surname>Karimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Barthe</surname></persName>
		</author>
		<author>
			<persName><forename type="first">B</forename><surname>Balle</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><surname>Valera</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2020">2020</date>
			<biblScope unit="page" from="895" to="905" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<monogr>
		<author>
			<persName><forename type="first">S</forename><surname>Verma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Boonsanong</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Hoang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><forename type="middle">E</forename><surname>Hines</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">P</forename><surname>Dickerson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Shah</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2010.10596</idno>
		<title level="m">Counterfactual explanations and algorithmic recourses for machine learning: A review</title>
				<imprint>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b11">
	<analytic>
		<title level="a" type="main">A survey of contrastive and counterfactual explanation generation methods for explainable artificial intelligence</title>
		<author>
			<persName><forename type="first">I</forename><surname>Stepin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Alonso</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Catala</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Pereira-Fariña</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">IEEE Access</title>
		<imprint>
			<biblScope unit="volume">9</biblScope>
			<biblScope unit="page" from="11974" to="12001" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Counterfactual explanations and how to find them: literature review and benchmarking</title>
		<author>
			<persName><forename type="first">R</forename><surname>Guidotti</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Data Mining and Knowledge Discovery</title>
		<imprint>
			<biblScope unit="page" from="1" to="55" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Counterfactual explanation trees: Transparent and consistent actionable recourse with decision trees</title>
		<author>
			<persName><forename type="first">K</forename><surname>Kanamori</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Takagi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kobayashi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Ike</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">International Conference on Artificial Intelligence and Statistics</title>
				<imprint>
			<publisher>PMLR</publisher>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="1846" to="1870" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">&quot;How to make them stay?&quot;: Diverse counterfactual explanations of employee attrition</title>
		<author>
			<persName><forename type="first">A</forename><surname>Artelt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Gregoriades</surname></persName>
		</author>
		<idno type="DOI">10.5220/0011961300003467</idno>
		<ptr target="https://doi.org/10.5220/0011961300003467" />
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 25th International Conference on Enterprise Information Systems, ICEIS 2023</title>
				<meeting>the 25th International Conference on Enterprise Information Systems, ICEIS 2023<address><addrLine>Prague, Czech Republic, SCITEPRESS</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="volume">1</biblScope>
			<biblScope unit="page" from="532" to="538" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<monogr>
		<author>
			<persName><forename type="first">G</forename><surname>Warren</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><forename type="middle">T</forename><surname>Keane</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Gueret</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Delaney</surname></persName>
		</author>
		<idno type="arXiv">arXiv:2303.09297</idno>
		<title level="m">Explaining groups of instances counterfactually for xai: A use case, algorithm and user study for group-counterfactuals</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
	<note type="report_type">arXiv preprint</note>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Global counterfactual explainer for graph neural networks</title>
		<author>
			<persName><forename type="first">Z</forename><surname>Huang</surname></persName>
		</author>
		<author>
			<persName><forename type="first">M</forename><surname>Kosan</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Medya</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Ranu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Singh</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining</title>
				<meeting>the Sixteenth ACM International Conference on Web Search and Data Mining</meeting>
		<imprint>
			<date type="published" when="2023">2023</date>
			<biblScope unit="page" from="141" to="149" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<monogr>
		<author>
			<persName><forename type="first">D</forename><surname>Ley</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Mishra</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Magazzeni</surname></persName>
		</author>
		<title level="m">Global counterfactual explanations are reliable or efficient, but not both</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b18">
	<analytic>
		<title level="a" type="main">XGBoost: A scalable tree boosting system</title>
		<author>
			<persName><forename type="first">T</forename><surname>Chen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Guestrin</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</title>
				<meeting>the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining</meeting>
		<imprint>
			<date type="published" when="2016">2016</date>
			<biblScope unit="page" from="785" to="794" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b19">
	<analytic>
		<title level="a" type="main">Traffic safety effects of new speed limits in Sweden</title>
		<author>
			<persName><forename type="first">A</forename><surname>Vadeby</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Åsa</forename><surname>Forsman</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aap.2017.02.003</idno>
		<ptr target="https://doi.org/10.1016/j.aap.2017.02.003" />
	</analytic>
	<monogr>
		<title level="j">Accident Analysis &amp; Prevention</title>
		<imprint>
			<date type="published" when="2018">2018</date>
			<biblScope unit="volume">114</biblScope>
			<biblScope unit="page" from="34" to="39" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b20">
	<analytic>
		<title level="a" type="main">Review of road traffic control strategies</title>
		<author>
			<persName><forename type="first">M</forename><surname>Papageorgiou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Diakaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">V</forename><surname>Dinopoulou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Kotsialos</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Wang</surname></persName>
		</author>
		<idno type="DOI">10.1109/JPROC.2003.819610</idno>
	</analytic>
	<monogr>
		<title level="j">Proceedings of the IEEE</title>
		<imprint>
			<date type="published" when="2003">2003</date>
			<biblScope unit="volume">91</biblScope>
			<biblScope unit="page" from="2043" to="2067" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b21">
	<analytic>
		<title level="a" type="main">Safety effects of traffic lane and shoulder widths on two-lane undivided rural roads: A matched case-control study from Norway</title>
		<author>
			<persName><forename type="first">P</forename><surname>Pokorny</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">K</forename><surname>Jensen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Gross</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Pitera</surname></persName>
		</author>
		<idno type="DOI">10.1016/j.aap.2020.105614</idno>
		<ptr target="https://doi.org/10.1016/j.aap.2020.105614" />
	</analytic>
	<monogr>
		<title level="j">Accident Analysis &amp; Prevention</title>
		<imprint>
			<biblScope unit="volume">144</biblScope>
			<biblScope unit="page">105614</biblScope>
			<date type="published" when="2020">2020</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b22">
	<analytic>
		<title level="a" type="main">On shapley value for measuring importance of dependent inputs</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">B</forename><surname>Owen</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Prieur</surname></persName>
		</author>
		<idno type="DOI">10.1137/16M1097717</idno>
	</analytic>
	<monogr>
		<title level="j">SIAM/ASA Journal on Uncertainty Quantification</title>
		<imprint>
			<biblScope unit="volume">5</biblScope>
			<biblScope unit="page" from="986" to="1002" />
			<date type="published" when="2017">2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
