<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Combining Fairness and Causal Graphs to Advance Both</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Lea</forename><surname>Cohausz</surname></persName>
							<email>lea.cohausz@uni-mannheim.de</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Jakob</forename><surname>Kappenberger</surname></persName>
							<email>jakob.kappenberger@uni-mannheim.de</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Heiner</forename><surname>Stuckenschmidt</surname></persName>
							<email>heiner.stuckenschmidt@uni-mannheim.de</email>
							<affiliation key="aff0">
								<orgName type="institution">University of Mannheim</orgName>
								<address>
									<country key="DE">Germany</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Combining Fairness and Causal Graphs to Advance Both</title>
					</analytic>
					<monogr>
						<idno type="ISSN">1613-0073</idno>
					</monogr>
					<idno type="MD5">9E156FA62CBDC3FD41524089059AB644</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2025-04-23T16:37+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<textClass>
				<keywords>
					<term>Fairness</term>
					<term>Causal Models</term>
					<term>Algorithmic Bias</term>
					<term>Bayesian Network Structure Learning</term>
				</keywords>
			</textClass>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Recent work on fairness in Machine Learning (ML) demonstrated that it is important to know the causal relationships among variables to decide whether a sensitive variable may have a problematic influence on the prediction and what fairness metric and potential bias mitigation strategy to use. These causal relationships can best be represented by Directed Acyclic Graphs (DAGs). However, so far, there is no clear classification of different causal structures containing sensitive variables in these DAGs. This paper's first contribution is classifying the structures into four classes, each with different implications for fairness. To uncover these structures, however, we first need to learn the DAGs. Structure learning algorithms exist but currently do not make systematic use of the background knowledge we have when considering fairness in ML, although this background knowledge could increase the correctness of the DAGs. Therefore, the second contribution is an adaptation of the structure learning methods. We evaluate this adaptation in the paper and demonstrate that it increases correctness. The two contributions of this paper are implemented in our publicly available Python package causalfair, allowing everyone to evaluate which relationships in the data might become problematic when applying ML.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1.">Introduction</head><p>The importance of fairness in AI and, more specifically, Machine Learning (ML) has been recognized in recent years, in particular in areas directly concerning humans, such as education, finance, or health care <ref type="bibr" target="#b0">[1,</ref><ref type="bibr" target="#b1">2,</ref><ref type="bibr" target="#b2">3]</ref>. One way to discuss whether an ML system can be considered fair is to look at the outcome of the ML model <ref type="bibr" target="#b3">[4]</ref>. Then, fairness is usually evaluated by looking at metrics of algorithmic bias, such as Demographic Parity (DP) <ref type="bibr" target="#b3">[4]</ref>. These encode different notions of fairness, and once a metric has been decided on, it can indicate whether a model is fair according to this notion. However, apart from the normative, overarching question of which metric is most fair, the choice of metric, its interpretation, and how we should deal with potential fairness concerns are context-dependent. This was also noted in previous works, which remarked that taking a causal view allows us to account for much of this context-dependency and, hence, to properly assess whether an AI system's outcome should be considered fair <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b5">6,</ref><ref type="bibr" target="#b6">7]</ref>.</p><p>In particular, Chiappa and Isaac argued for using Causal Bayesian Networks (CBNs) to understand how variables influence each other and what the data-generating mechanism looks like <ref type="bibr" target="#b4">[5]</ref>. This procedure can help determine which metrics of algorithmic bias to choose, how to interpret them, and how to deal with potential fairness concerns.</p><p>However, so far, no clear classification of specific causal structures and their implications for fairness exists that also considers that the data is used in an ML context. Consequently, there is no implementation that automatically detects the different kinds of structures and thereby allows researchers and practitioners to check which parts of the data they intend to learn a model on could be problematic. In addition, existing work assumes that the CBNs are already constructed. However, constructing CBNs is non-trivial as we typically do not know all relationships existing in the data (i.e., we cannot simply use expert knowledge), and data-driven causal structure learning methods are known to be error-prone in more complicated settings <ref type="bibr" target="#b7">[8,</ref><ref type="bibr" target="#b8">9]</ref>. In many fairness settings, however, we automatically have some background knowledge, i.e., we know which variables in the data are sensitive, and this has certain implications for learning the causal structure, as we will see. Hence, the contributions of this paper are twofold.</p><p>• We create a classification of different causal structures and explain their implications for fairness assessment. • We adapt data-driven causal structure learning algorithms to include the background knowledge we have in fairness settings. We also evaluate whether the adaptation increases the accuracy of the CBNs on synthetic data for which we know the ground truth. <ref type="foot" target="#foot_0">3</ref>We implemented both contributions in our publicly available Python package causalfair that researchers and practitioners can use to assess whether and how the data used for ML is problematic. 
The package can be found online (https://github.com/lea-cohausz/causalfair).<ref type="foot" target="#foot_1">4</ref> </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.">Background</head><p>Before discussing these aspects, we will briefly detail what CBNs are. The graphical part of CBNs consists of Directed Acyclic Graphs (DAGs). A DAG is a graph with nodes (also called vertices) 𝒳 that, in the case of a Bayesian Network, encode random variables, and directed edges ℰ connecting the vertices <ref type="bibr" target="#b9">[10]</ref>. An edge from one node to another, i.e., 𝑥 𝑖 → 𝑥 𝑗 , means that the first node causally influences the second node. A path in a DAG encompasses a sequence of directed edges, i.e., 𝑥 𝑖 → 𝑥 𝑗 → ... → 𝑥 𝑡 . Furthermore, it holds for a CBN that each variable 𝑥 𝑖 is independent of its non-descendants given its parents, so that the joint distribution factorizes as:</p><formula xml:id="formula_0">𝑃 (𝑥 1 , ..., 𝑥 𝑛 ) = ∏︁ 𝑖 𝑃 (𝑥 𝑖 |𝑃 𝑎(𝑥 𝑖 ))<label>(1)</label></formula><p>where 𝑃 𝑎(𝑥 𝑖 ) are the parents of 𝑥 𝑖 . Therefore, CBNs encode independence information. In DAG (6) in Figure <ref type="figure" target="#fig_0">1</ref>, 𝐴 and 𝑌 are conditionally independent given 𝑋, which we write as 𝐴 ⊥ 𝑌 |𝑋. This is equivalent to saying that all information relevant for 𝑌 is encoded in 𝑋, and we do not need to know 𝐴 to learn something about 𝑌 <ref type="bibr" target="#b9">[10]</ref>. Note, however, that 𝐴 and 𝑌 are only conditionally independent given 𝑋; marginally, the two variables are correlated. An imperfect ML model may use this correlation, even though all necessary information is encoded in 𝑋.</p></div>
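To make the difference between marginal correlation and conditional independence concrete, the following minimal sketch simulates the chain 𝐴 → 𝑋 → 𝑌 from DAG (6) and compares the marginal correlation of 𝐴 and 𝑌 with their partial correlation given 𝑋. The variable names, coefficients, and the use of numpy are illustrative assumptions, not part of the paper or of causalfair.

import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Chain A -> X -> Y as in DAG (6): A influences Y only through X.
A = rng.normal(size=n)                       # sensitive variable
X = 0.8 * A + rng.normal(size=n)             # mediator influenced by A
Y = 1.2 * X + rng.normal(size=n)             # target influenced only by X

# Marginally, A and Y are clearly correlated.
print("corr(A, Y)     =", round(np.corrcoef(A, Y)[0, 1], 3))

# Partial correlation of A and Y given X: correlate the residuals of
# regressing A on X and Y on X; it is close to zero because A is
# conditionally independent of Y given X.
res_A = A - np.polyval(np.polyfit(X, A, 1), X)
res_Y = Y - np.polyval(np.polyfit(X, Y, 1), X)
print("corr(A, Y | X) =", round(np.corrcoef(res_A, res_Y)[0, 1], 3))

The first value is clearly non-zero while the second is close to zero; this is precisely the situation in which an imperfect ML model may nevertheless assign weight to 𝐴.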
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2.1.">Sensitive Variables</head><p>When assessing whether an ML model is fair with respect to the model's outcome, we usually use metrics of algorithmic bias <ref type="bibr" target="#b3">[4]</ref>. All of these metrics have in common that they monitor differences in the model's outcome with regard to sensitive variables. These sensitive variables are usually demographic variables <ref type="bibr" target="#b3">[4,</ref><ref type="bibr" target="#b10">11]</ref>. Demographic variables include, among others, gender, age, socio-economic status, and variables pertaining to this information. Another definition is that demographic features are features that cannot be changed within the context of the setting <ref type="bibr" target="#b10">[11]</ref>. For example, for a model in the educational setting, all variables that cannot be changed within the educational setting should be considered demographic and potentially sensitive (i.e., gender is not changed by education, but educational attainment itself is). The different fairness metrics require the absence of different statistical relationships between a sensitive feature and the prediction of the target variable.</p><p>Because DAGs encode independence relationships and information on which variables influence each other, they are well suited for uncovering potential fairness problems in the data. Chiappa and Isaac showed that by looking at DAGs representing the data-generating mechanism, we can determine whether sensitive variables causally influence the target variable or not <ref type="bibr" target="#b4">[5]</ref>. Based on this, we can then make an informed decision about whether this is actually problematic and which fairness metric can be used <ref type="bibr" target="#b4">[5]</ref>. However, no clear classification of different structures was made, and existing work mostly focused on whether a structure is potentially problematic according to the causal structure <ref type="bibr" target="#b4">[5,</ref><ref type="bibr" target="#b6">7]</ref>. Yet, when we have the ultimate goal of using the data to build ML models, more considerations apply <ref type="bibr" target="#b5">[6]</ref>. Most importantly, ML models frequently use correlated but causally unconnected variables, even if the information in such a variable is also contained in another, causally connected variable <ref type="bibr" target="#b11">[12]</ref>. To the best of our knowledge, we are the first to provide a clear classification of the structures in which sensitive variables are involved.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3.">Different Types of Structures</head><p>Figure <ref type="figure" target="#fig_0">1</ref> shows the different structures involving a sensitive and a target variable that we identified as potentially existing within a DAG. These structures (i.e., the different ways in which sensitive features and the target can be part of a larger DAG) can further be classified with respect to whether and how the sensitive variable involved in the structure is problematic. We identified four such classes. We want to highlight again that the final decision of whether a sensitive variable has a problematic influence is still up to the expert. <ref type="foot" target="#foot_2">5</ref>In general, we speak of a problem for ML if it is likely that an ML model will use the sensitive variable, or variables heavily dependent on observed or unobserved sensitive variables (proxies), for its prediction. We will now briefly introduce these classes (i.e., the different ways in which sensitive variables have or do not have a potential impact on the target and, thus, fairness) before delving into the different structures within these classes. In the following, we use 𝐴 to refer to a sensitive variable and 𝑌 to refer to the target; other letters refer to other predictive variables.</p><p>• Structures that are potentially problematic according to the causal structure and problematic for ML (structures 1, 2a, 2b, 5). This class is characterized by a direct connection, an indirect connection, or both, with the same direction of effects. To deal with the fairness problem, we need to remove 𝐴 and potentially mitigate the effect of 𝐴 on 𝑋 if the relationships are deemed problematic. In the following, we call these problematic variables. • Structures that are unproblematic according to the causal structure and unproblematic for ML (structure 4). This class is characterized by there being no path between 𝐴 and 𝑌 at all, even when edge directions are ignored. In the following, we call these unproblematic variables. • Structures that are unproblematic according to the causal structure but potentially problematic from an ML perspective (structure 3a). This class occurs if all directed paths from 𝐴 to 𝑌 are blocked. In the following, we call these blocked variables. • Structures that are potentially problematic according to the causal structure but where removing the sensitive variable is itself problematic (structure 3b). This class occurs if there is a direct and an indirect connection from 𝐴 to 𝑌 and the effects are opposing. In the following, we call these opposing effects variables.</p><p>causalfair returns both the exact structures a sensitive variable is involved in and the classification of these structures. We want to highlight that the different structures within the same class still have to be viewed and handled differently <ref type="bibr" target="#b6">[7]</ref>. We will now discuss the different classes in slightly greater detail. Table <ref type="table" target="#tab_0">1</ref> summarizes the structures.</p><p>Problematic Variables: As already mentioned, structures (1), (2a), (2b), and (5) are problematic. They all have in common that there is at least one directed path from the sensitive variable to the target. Information about the target is encoded in the sensitive variables, which means that an ML model might use the correlation and, thus, place direct importance on the sensitive variable, which is potentially problematic. 
In the indirect case, as all information is also encoded in the mediating variable, the model may or may not place importance on the sensitive variable itself. Still, information from the sensitive variable will be passed on through the mediating variable. If we do not think that information from the sensitive variable should be used, then we should remove the variable and, in the indirect case, mitigate the effect of the variable while monitoring fairness metrics. We may also decide that only the direct effect should not be used (e.g., in (5)). Example: Ethnicity may influence students' grades in a specific course due to discrimination. In this case, the sensitive variables influence the target in such a way that the target is also biased.</p><p>Unproblematic Variables: Structure (4) stands for all networks that are fragmented, without a connection between the subnetwork containing the sensitive variable and the subnetwork containing the target variable. In these cases, we do not have to worry much. No information about 𝑌 is encoded in 𝐴. Still, as imperfect ML models (in particular Neural Networks) tend to assign some importance even to irrelevant features <ref type="bibr" target="#b12">[13]</ref>, it is probably best to remove both 𝐴 and 𝑋1. Then, nothing needs to be monitored or mitigated. Example: Gender may influence height, but neither of those variables is relevant to whether students pass a course. Hence, there is no statistical relationship between any of the variables of the different fragments at all.</p><p>Blocked Variables: For (3a), similar to (4), there is no path of directed edges that leads from the demographic variable to the target. In contrast to (4), however, there is no lack of connection here. Instead, 𝑀 blocks the paths, meaning no information from the sensitive variable is transported to the target. From a network perspective, the structure would be unproblematic, but from an ML perspective, it might not be. 𝑋, which is influenced by 𝐴, is correlated with 𝑌 . Although all information regarding the target is contained in 𝑀 and 𝑋 ⊥ 𝑌 |𝑀 , an ML model may still use and assign importance to 𝑋 due to the correlation. If the model uses 𝑋, it will likely also use 𝐴 to correct for the bias in 𝑋. This consequence was also observed by Ashurst et al. <ref type="bibr" target="#b5">[6]</ref>. Therefore, we have two options for handling this. We can remove both 𝐴 and 𝑋; in particular, if we remove 𝐴, we must also remove 𝑋. Otherwise, we would introduce a bias in our predictions that is not reflected by the real and unbiased target variable. Alternatively, we leave both in the data and closely monitor all metrics for algorithmic bias. Example: The students' motivation influences both passing course X and passing course Y (the target). In course X, the professor discriminates against one gender. In this case, all information relevant to the target is encoded in the motivation, and course X and gender are independent of the target given the motivation. However, course X is not independent of the motivation, which is a relevant variable for the target. This relationship may lead to an ML model that places weight on course X and, consequently, gender.</p><p>Opposing Effects Variables: Although (3b) looks like (5) and, thus, should be classified as problematic, it becomes a very different case when the direct and indirect effects are opposing. This is the case if we have a missing variable. 
For example, if 𝑀 in (3a) is not observed, then the DAG learned from data will be (3b). If we do not know 𝑀 , then it will appear as if 𝑋 and 𝐴 influence 𝑌 , and 𝐴 also influences 𝑋. The influence 𝐴 has on 𝑋 is corrected through the connection 𝐴 → 𝑌 , meaning the target remains unbiased. From a causal structure perspective, such a structure is clearly problematic. However, from an ML perspective, we again have the two options we had for (3a). Either we leave 𝑋 and 𝐴 in, as the opposing effects of 𝐴 on 𝑋 and 𝑌 , respectively, may cancel each other out, or we remove both. In the latter case, however, we may lose a lot of predictive power. Although (5) and (3b) look identical from a network perspective, the implications are very different. Hence, whenever such a structure exists, causalfair checks whether the effects of 𝐴 on 𝑋 and of 𝐴 on 𝑌 point in opposite directions (demonstrated in the graph with the two colors). If this is not the case, then the structure is (5). Otherwise, causalfair informs the user of the structure. It is important to note that the resulting graph learned from data is, strictly speaking, not a causal graph, as a relevant variable is missing. Example: If the variable motivation, as described in the example for the blocked variables, is unobserved, the graph (3b) would likely be learned by a structure learning algorithm.</p><p>Finally, structure (6) has been considered in the literature before, but we argue that we usually do not have to think about this case because sensitive variables are, at least according to the definition explained above, usually not changeable by other variables. <ref type="foot" target="#foot_3">6</ref></p></div>
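To illustrate how the four classes could be detected mechanically, the sketch below classifies a sensitive node in a known DAG using networkx. It is a simplified re-implementation of the idea described above, not the causalfair code itself; in particular, the opposing-effects check is only represented by a caller-supplied placeholder.

import networkx as nx

def classify_sensitive(dag: nx.DiGraph, a: str, y: str, opposing=None) -> str:
    """Classify sensitive node a with respect to target y (simplified sketch)."""
    # Structure (4): a and y lie in different fragments of the network.
    if not nx.has_path(dag.to_undirected(), a, y):
        return "unproblematic"
    # Structures (1), (2a), (2b), (5): at least one directed path from a to y.
    if nx.has_path(dag, a, y):
        # Structure (3b): direct and indirect effects point in opposite directions.
        if opposing is not None and opposing(a, y):
            return "opposing effects"
        return "problematic"
    # Structure (3a): connected to y, but without any directed path from a to y.
    return "blocked"

# Example corresponding to structure (3a): edges gender → courseX,
# motivation → courseX, motivation → passY.
g = nx.DiGraph([("gender", "courseX"), ("motivation", "courseX"),
                ("motivation", "passY")])
print(classify_sensitive(g, "gender", "passY"))   # prints "blocked"

In causalfair, the opposing-effects case is additionally verified by comparing the estimated signs of the effects of 𝐴 on 𝑋 and on 𝑌 ; the opposing argument above merely stands in for that comparison.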
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.">Causal Structure Learning</head><p>Detecting the above-described structures relies on accurate DAGs. These DAGs first need to be constructed. There are several ways to do so:</p><p>1. Expert knowledge. While we can construct the DAG using background knowledge <ref type="bibr" target="#b13">[14]</ref>, we usually do not know all causal relationships, or our ideas might not match the data. Still, expert knowledge is important: We often know about the temporal ordering of variables and, therefore, know that certain relationships cannot exist (e.g., grades cannot influence ethnicity). 2. Data-driven methods. Research on causal structure learning has produced several methods to learn CBNs from data <ref type="bibr" target="#b14">[15]</ref>. If certain assumptions hold and data is sufficient, these methods work rather well <ref type="bibr" target="#b7">[8]</ref>. In more realistic cases, however, the methods cannot reliably produce accurate DAGs <ref type="bibr" target="#b8">[9]</ref>. 3. Combining expert knowledge and data-driven methods. We may know that some relationships in the data are impossible or must exist, but we do not know about all relationships. We can feed this knowledge to the structure learning algorithms. Although combining both approaches seems to lead to better results, doing so has received comparatively little research attention, and some data-driven methods do not even allow the incorporation of background knowledge <ref type="bibr" target="#b15">[16]</ref>. In part, this lack of research is because there is no general procedure for it, and it greatly depends on the data, knowledge, and general situation.</p><p>We argue, however, that when constructing a graph to assess fairness, we can use a standard procedure to combine background knowledge and data-driven methods. The reason for this is the background information we automatically have when considering fairness.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.1.">Background Information</head><p>As mentioned in section 2, it follows from a definition of sensitive features that non-sensitive variables cannot influence them <ref type="bibr" target="#b10">[11]</ref>. Additionally, the target variable usually follows all other variables temporally. For example, if we try to predict admission to a university, all information that can be used already existed before the admission decision. Therefore, we can separate the variables into three groups: target variables (which cannot influence any other variables), sensitive variables (which cannot be influenced by any other variables), and regular predictive variables, for which it logically follows that they cannot influence sensitive variables or be influenced by the target. There may also be situations where sensitive variables can be influenced by other sensitive variables or where we know there is an order within the other predictive variables. However, we generally have at least three tiers: the target, other predictive features, and sensitive variables. With the specification of these tiers, we already have a lot of background knowledge: we can require that the data-driven structure learning methods do not include any edges that are impossible according to this specification. Using this background knowledge is particularly helpful, as it is the knowledge we need anyway to evaluate algorithmic bias: knowing which variables are sensitive and what the target is. Additional knowledge we have about the structures can also be specified. <ref type="foot" target="#foot_4">7</ref>When we now want to learn DAGs from data, we first need to choose among the families of data-driven methods. We will evaluate one method from each of the three most popular families: constraint-based methods, score-based methods, and functional causal modeling, and discuss how the background knowledge can be used <ref type="bibr" target="#b14">[15]</ref>.<ref type="foot" target="#foot_5">8</ref> </p></div>
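A minimal sketch of how the three tiers translate into edge constraints is given below: every edge pointing into a sensitive variable and every edge leaving the target is forbidden (which also rules out edges between sensitive variables, matching the default three-tier assumption). The helper name, the variable names, and the plain set-of-pairs representation are our own assumptions; causalfair and the individual learners may encode this differently.

from itertools import product

def forbidden_edges(sensitive, predictors, target):
    """Directed edges ruled out by the tiers: sensitive, predictors, target."""
    nodes = set(sensitive) | set(predictors) | {target}
    banned = set()
    for u, v in product(nodes, nodes):
        if u == v:
            continue
        if v in sensitive:      # no variable may influence a sensitive variable
            banned.add((u, v))
        elif u == target:       # the target may not influence any other variable
            banned.add((u, v))
    return banned

# Example with hypothetical variable names.
banned = forbidden_edges(sensitive={"gender", "age"},
                         predictors={"motivation", "courseX"},
                         target="passY")
print(sorted(banned))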
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.2.">Constraint-Based Structure Learning</head><p>Constraint-based structure learning consists of two stages. During the first stage, edges are removed iteratively from an initially complete undirected graph by performing independence tests <ref type="bibr" target="#b14">[15]</ref>. Edges can be removed when two variables are (conditionally) independent of each other. Whenever an edge is removed, the set of variables that renders the two variables conditionally independent (the separating set) is stored. For example, if 𝐴 and 𝐵 are independent given 𝐶, i.e., 𝐴 ⊥ 𝐵|𝐶, then 𝐶 is stored. During the second stage, as many edges as possible are oriented. To do this, we look at groups of three variables 𝐴, 𝐵, 𝐶, and their separating sets. If we have two variables 𝐴, 𝐵 that are conditionally independent and both are dependent on the same third variable 𝐶 and their separating set does not include 𝐶, i.e., 𝐶 ∉ 𝑆 𝐴𝐵 , then we have that 𝐴 → 𝐶 and 𝐵 → 𝐶. 𝐶 is a so-called collider, and 𝐴, 𝐵, 𝐶 form a v-structure. After all v-structures are identified and the corresponding edges are oriented, other edges are oriented to avoid new v-structures. This concludes the second stage. It has to be noted that not all edges are usually oriented, as only those edges that are part of a v-structure or directly avoid a v-structure can be oriented. Therefore, constraint-based methods do not return a DAG but a Completed Partially Directed Acyclic Graph (CPDAG). Constraint-based methods are guaranteed to return the correct CPDAG if the independence tests return correct results <ref type="bibr" target="#b14">[15,</ref><ref type="bibr" target="#b9">10]</ref>. Constraint-based algorithms are known to miss more edges than other methods but also insert fewer incorrect edges <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b7">8]</ref>. We use the PC-Stable (abbreviated in this paper as PC) algorithm, which has been found to work well <ref type="bibr" target="#b14">[15]</ref>.</p><p>Adaptation: Including background information in constraint-based methods is not straightforward as the first stage cannot really be modified, and no implementation so far allows a user to specify background information <ref type="bibr" target="#b15">[16]</ref>. Our approach is to use the background information at the end of the second stage: If we have an undirected edge and our background knowledge does not allow one direction, then the edge is oriented accordingly. Afterward, further edges are again oriented to avoid new v-structures. Compared to the adaptations of the other methods, this method makes comparatively little use of the background information. It is also not guaranteed that the result contains no relationships that contradict our background knowledge, because an edge may already have been oriented during the v-structure orientation. However, if the CPDAG is correct until we inject the background knowledge, the resulting graph will also be correct.</p></div>
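The adaptation described above can be sketched as a small post-processing step on the CPDAG: any edge that is still undirected and has exactly one direction permitted by the background knowledge receives that direction. The data structures (sets of directed and undirected edges) are assumptions made for illustration, not the actual implementation.

def orient_with_background(directed, undirected, banned):
    """Orient undirected CPDAG edges whose reverse direction is forbidden.

    directed:   set of (u, v) pairs already oriented as u to v
    undirected: set of frozenset({u, v}) edges left unoriented by PC
    banned:     set of (u, v) directions forbidden by the background knowledge
    """
    directed = set(directed)
    still_undirected = set()
    for edge in undirected:
        u, v = tuple(edge)
        if (u, v) in banned and (v, u) not in banned:
            directed.add((v, u))          # only the direction v to u is allowed
        elif (v, u) in banned and (u, v) not in banned:
            directed.add((u, v))          # only the direction u to v is allowed
        else:
            still_undirected.add(edge)    # both or neither direction allowed: leave as is
    return directed, still_undirected

As described in the text, further edges would afterwards be oriented so that no new v-structures are created.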
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.3.">Score-Based Structure Learning</head><p>In score-based structure learning, we aim to find a DAG that maximizes a score <ref type="bibr" target="#b14">[15]</ref>. Hence, the space of possible graphs must be searched, and candidate graphs must be compared using a score (e.g., an information-theoretic score). Searching the space of possible graphs is usually (though not always) done with a heuristic approach. One algorithm that is frequently used, either directly or in variants, despite its simplicity is the Hill-Climber (HC) <ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b7">8]</ref>. HC starts with an empty graph and iteratively adds or deletes those edges that lead to the highest increase in the chosen score until the score no longer improves. A DAG that is at least a local maximum is returned, but reaching the global maximum is not guaranteed <ref type="bibr" target="#b14">[15]</ref>.</p><p>Adaptation: Adapting score-based methods to handle the background information is easier, as we can restrict the search space, i.e., edges that are impossible according to our classification will never be added <ref type="bibr" target="#b15">[16]</ref>. <ref type="foot" target="#foot_6">9</ref> Constantinou et al. recently experimented with different kinds of background knowledge and their effect on the accuracy of DAGs but found that restricting edges only has a small effect <ref type="bibr" target="#b15">[16]</ref>. However, we limit the search space more fundamentally.</p></div>
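In practice, restricting the search space often amounts to handing the forbidden edges to the score-based learner. The sketch below uses pgmpy's HillClimbSearch, which in recent versions accepts a black_list of edges that are never added; the synthetic data, the variable names, and the forbidden_edges helper from the earlier sketch are assumptions for illustration, and this is not the causalfair implementation.

import numpy as np
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Small synthetic discrete data set (illustrative only).
rng = np.random.default_rng(0)
n = 2000
gender = rng.integers(0, 2, n)
motivation = rng.integers(0, 2, n)
courseX = ((gender + motivation + rng.integers(0, 2, n)) > 1).astype(int)
passY = ((motivation + rng.integers(0, 2, n)) > 1).astype(int)
df = pd.DataFrame({"gender": gender, "motivation": motivation,
                   "courseX": courseX, "passY": passY})

# Forbidden edges derived from the tiers (helper from the earlier sketch).
banned = forbidden_edges(sensitive={"gender"},
                         predictors={"motivation", "courseX"},
                         target="passY")

hc = HillClimbSearch(df)
dag = hc.estimate(scoring_method=BicScore(df),
                  black_list=list(banned))   # these edges are never added
print(sorted(dag.edges()))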
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4.4.">Functional Causal Models</head><p>The key idea of Functional Causal Modeling is that variables can be determined by a function of their parent variables and a noise term that is independent of their parents. If the functional form is correctly specified, the noise term is independent of the (putative) parent variables only in the causal direction and not in the reverse direction. Hence, algorithms belonging to this family search for such relationships between variables. It should be noted that this method is usually used for continuous data, although it can also be used for discrete data. One of the most prominent algorithms belonging to this family is the Linear Non-Gaussian Acyclic Model (LiNGAM) <ref type="bibr" target="#b16">[17]</ref>.</p><p>Adaptation: Similar to HC, we can include background knowledge by preventing LiNGAM from considering certain relationships. These relationships are then not even considered during model fitting.</p></div>
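For LiNGAM, the lingam Python package accepts a prior-knowledge matrix whose entries mark forbidden (0), required (1), and unknown (-1) causal relations. The sketch below forbids every edge in our banned set; we assume the convention that entry [i, j] constrains the relation from the j-th to the i-th variable (the same orientation as adjacency_matrix_), which should be verified against the package documentation. The data and variable names are again illustrative.

import numpy as np
import lingam

rng = np.random.default_rng(0)
n = 2000
# Continuous variables with non-Gaussian (uniform) noise, as LiNGAM assumes.
gender     = rng.uniform(-1, 1, n)
motivation = rng.uniform(-1, 1, n)
courseX    = 0.7 * gender + 0.5 * motivation + rng.uniform(-1, 1, n)
passY      = 0.9 * motivation + rng.uniform(-1, 1, n)
X = np.column_stack([gender, motivation, courseX, passY])

variables = ["gender", "motivation", "courseX", "passY"]
idx = {name: i for i, name in enumerate(variables)}

# Prior-knowledge matrix: -1 = unknown, 0 = forbidden, 1 = required (assumed
# convention: entry [i, j] constrains the relation from variable j to variable i).
prior = -np.ones((len(variables), len(variables)))
for u, v in forbidden_edges(sensitive={"gender"},        # helper from the earlier sketch
                            predictors={"motivation", "courseX"},
                            target="passY"):
    prior[idx[v], idx[u]] = 0

model = lingam.DirectLiNGAM(prior_knowledge=prior)
model.fit(X)
print(model.adjacency_matrix_)    # rows are effects, columns are causes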
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.">Evaluation</head><p>We will now evaluate whether our adaptations increase the correctness with which DAGs are learned. For this, we will look at several ground truth DAGs from which we sample data. Then, we will attempt to reconstruct the DAGs. We will vary the data size and the background information available. Additionally, we will check whether the sensitive variables are correctly classified according to the different classes we defined in section 3. We have the following research questions: RQ1: Does using background information improve the correctness of the learned DAGs? RQ2: Is background information particularly helpful for specific data sizes or methods (PC-Stable, HC, LiNGAM)? RQ3: Does the classification accuracy of demographic variables according to section 3 also increase with more background information?</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.1.">Strategy</head><p>In order to evaluate the research questions, we need to know the ground truth DAGs. For this, we selected five Bayesian Networks (BN) from the "bnlearn" library that are frequently used to evaluate structure learning algorithms: asia, earthquake, sachs, alarm, and insurance <ref type="bibr" target="#b17">[18]</ref>. For each of the DAGs, we selected some root nodes to represent the sensitive variables. We also selected one of the leaf nodes as the target. Because sampling from these BNs produces discrete data and we also want to test with continuous data, we created three additional synthetic DAGs of different sizes for which we sampled continuous data. The smallest continuous network we created can be seen in Figure <ref type="figure" target="#fig_1">2</ref>. We specified non-linear relationships for two of the networks (II, III). A summary of the ground truth DAGs can be seen in Table <ref type="table" target="#tab_1">2</ref>.</p><p>For each of the networks, we extracted which variables belong to each of the four classes of sensitive variables. For Figure <ref type="figure" target="#fig_1">2</ref>, we have that 𝑎3 and 𝑎4 belong to the class that is problematic regarding both the causal structure and the ML perspective (the paths they are involved in are highlighted in red). 𝑎1 is problematic in neither sense, as it has no connection to 𝑦. 𝑎2 is not a problematic variable, as 𝑥2 blocks it, but it might be problematic when using ML. In this DAG, there is no opposing effects variable setting.</p><p>Having gathered the ground truth information, we can now run the experiments. For each DAG, we vary the number of data instances used (500, 1000, and 10000). We also vary whether we have information available (Info) or not (No Info). For each configuration, we sample the data 30 times to obtain reliable results. In detail, we proceed as shown in Algorithm <ref type="figure" target="#fig_2">1</ref>. We compare the correctness of the learned graph by a) computing how many of the true edges are present in the computed DAG <ref type="foot" target="#foot_7">10</ref> (true positives) and b) how many edges in the computed DAG are incorrect (false positives). <ref type="foot" target="#foot_8">11</ref> To make these values comparable across DAGs, we normalize them by dividing them by the number of actually existing edges in the ground-truth DAG. This procedure means that the range for the incorrect edges is theoretically [0, ∞), as, of course, more incorrect edges can be inserted than correct edges exist. Still, this normalizes the value with regard to the size of the DAG, and in practice, the value is never larger than 1. For the true positives, the value is bounded by 1. Likewise, for RQ3, we look at the accuracy with which variables are classified into the four classes.</p></div>
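The two evaluation metrics can be written down compactly. The sketch below compares a learned edge set against the ground truth and normalizes both counts by the number of true edges, as described above; representing edge sets as Python sets of (parent, child) tuples is our own choice, and the exact-match comparison ignores the special CPDAG handling for PC mentioned in the footnote.

def edge_metrics(true_edges, learned_edges):
    """Normalized true/false positive edge counts (exact directed matches only)."""
    true_edges, learned_edges = set(true_edges), set(learned_edges)
    tp = len(true_edges & learned_edges)      # ground-truth edges that were found
    fp = len(learned_edges - true_edges)      # learned edges that do not exist
    n_true = len(true_edges)
    return tp / n_true, fp / n_true           # both divided by the number of true edges

true_dag = {("A", "X"), ("X", "Y")}
learned  = {("A", "X"), ("A", "Y")}
print(edge_metrics(true_dag, learned))        # prints (0.5, 0.5)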
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5.2.">Results</head><p>Figure <ref type="figure" target="#fig_3">3</ref> (a) shows the results relevant for answering RQ1. We can see that providing information increases the correctly found edges compared to having no information available. There are far fewer wrongly placed edges when using information than when not using it. That this metric is more affected than the one measuring correct edges is expected, as the restrictions we define through the background knowledge directly impact it. In general, we can clearly answer RQ1 in the affirmative. Tables <ref type="table" target="#tab_3">3 and 4</ref> show the results for RQ2. Generally, we can observe in Table <ref type="table" target="#tab_2">3</ref> that the percentage of correct edges increases with more data, and the percentage of incorrect edges slightly increases with more data. However, it does not appear that background information is more valuable for more or less data available. As shown in Table <ref type="table" target="#tab_3">4</ref>, the conclusion is a bit more mixed for the methods. The percentage of correct edges for PC actually slightly decreases with more information available; the percentage of incorrect edges decreases quite a bit, though. It should be noted, however, that the ground-truth CPDAGs for PC also vary with information, so the numbers are not directly comparable. HC and LiNGAM greatly benefit from the information. In accordance with previous research, we can observe that HC performs best, whereas PC misses a lot of correct edges and LiNGAM places a lot of wrong edges <ref type="bibr" target="#b7">[8]</ref>. For RQ2, we can say that background information is generally helpful for all settings but that HC and LiNGAM benefit more from it. Figure <ref type="figure" target="#fig_3">3</ref> (b) answers RQ3. We can see that most of the problematic and unproblematic variables are found. This positive result is not true for the other two classes. However, looking deeper into the data, HC actually finds roughly 67% of all blocked variables when having information available, whereas PC and LiNGAM do much worse and push down the average. HC also finds more than 75% of all problematic variables when having information available; LiNGAM also performs above average. Finding situations that are indicative of opposing effects variables is difficult for all methods, although HC still does much better than average (roughly 35% when having information available). Looking deeper into our results, we observed that these variables are often misclassified as problematic. In this way, at least, attention is drawn to potential problems with them. Most importantly, Figure <ref type="figure" target="#fig_3">3</ref> (b) shows that providing information helps classify sensitive variables, confirming RQ3. The difference is not very large, though. In general, using the background information helps with learning more correct DAGs and classifying the sensitive variables. HC and LiNGAM greatly benefit from background information, and PC-Stable does so less. The above evaluation only serves to show that background information improves the correctness of DAGs. Because of space constraints, we did not add an evaluation of how well the problematic structures are detected (i.e., the paths along which problematic influences might exist). This evaluation will be added in future work. The correctness of the DAGs can be further improved by focusing on score-based methods and potentially adding even more background information (e.g., some sensitive variables may be influenced by other sensitive variables). 
We want to highlight that causalfair allows us to specify more background knowledge, such as whether specific variables must be connected.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.">Conclusion and Limitations</head></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.1.">Limitations</head><p>As a general limitation, we want to highlight that the learned DAGs may not reflect real causal relationships. Either important predictive variables (e.g., as highlighted by the discussion of structure (3b) in section 3) or sensitive variables that have an effect might simply not be in the data. While this is a general problem of measuring fairness, it is important to stress that our CBNs do not necessarily provide a complete picture of the causal mechanisms producing the target.</p><p>Moreover, structure learning algorithms have limitations when the data is extremely imbalanced or contains many missing values, and when the relationships between variables are non-linear and complex. In other words, real data could pose a challenge. In general, real data has rarely been used to evaluate structure learning algorithms <ref type="bibr" target="#b7">[8]</ref>, but doing so is, of course, very important. Thus, future research should focus on a real-life evaluation as well.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6.2.">Conclusion and Outlook</head><p>In this paper, we introduced a classification of sensitive variables into four classes depending on whether and how they are involved in causal structures that could be problematic in the ML context. Additionally, we showed that we can improve the data-driven learning of DAGs by using background knowledge we naturally have in fairness settings. These contributions are implemented in our Python package causalfair. We hope researchers and practitioners use this package to evaluate whether they have problematic relationships in their data before learning ML models. In the future, we plan to add more structure learning methods (particularly score-based) to the package. Furthermore, we believe that future research should focus on performing more targeted bias mitigation that can also handle cases where only some but not all paths from a sensitive variable to a target are considered problematic. Chiappa and Isaac discuss a technique to estimate the path-specific effects of variables <ref type="bibr" target="#b4">[5]</ref>, and we believe this is a good starting point. Moreover, we believe that more effort should be put into constructing accurate DAGs. We show in this paper that background knowledge helps immensely in learning better DAGs, and we believe that further advancing the learning of DAGs using background knowledge should be a future research endeavor.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: Different causal structures. The numbers correspond to the numbers in the table.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: The smallest DAG we created to evaluate against. The red paths are paths where problematic information is transported to the target 𝑦.</figDesc><graphic coords="9,186.14,84.19,223.00,151.00" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head>Algorithm 1</head><label>1</label><figDesc>Algorithm 1: Setup of the experiments for each DAG: for sample size in {500, 1000, 10000} do; for experiment in range(0, 30) do; for method in {PC, HC, LiNGAM} do; for information in {No Info, Info} do; learn the graph and compare it to the ground truth.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 3</head><label>3</label><figDesc>Figure 3: Results with and without background information: (a) correctly and incorrectly identified edges (RQ1), (b) classification of the sensitive variables (RQ3).</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_0"><head>Table 1</head><label>1</label><figDesc>This table provides a summary of whether the structures in the corresponding figure are problematic from a causal structure and ML point of view, and what strategy to use to mitigate algorithmic bias.</figDesc><table><row><cell>structure</cell><cell>(1)</cell><cell>(2a)</cell><cell>(2b)</cell><cell>(3a)</cell><cell>(3b)</cell><cell>(4)</cell><cell>(5)</cell><cell>(6)</cell></row><row><cell>problematic according to causal structure</cell><cell>yes</cell><cell>yes</cell><cell>yes</cell><cell>no</cell><cell>yes</cell><cell>no</cell><cell>yes</cell><cell>no</cell></row><row><cell>problematic for ML</cell><cell>yes</cell><cell>yes</cell><cell>yes</cell><cell>maybe</cell><cell>maybe</cell><cell>no</cell><cell>yes</cell><cell>yes</cell></row><row><cell>strategy</cell><cell>remove A</cell><cell>remove A, mitigate influence of A on X</cell><cell>remove As, mitigate influence of As on X</cell><cell>remove A &amp; X or none</cell><cell>remove A &amp; X or none</cell><cell>none, or remove X1 and A</cell><cell>remove A, mitigate influence of A on X</cell><cell>remove A</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_1"><head>Table 2</head><label>2</label><figDesc>This table provides a summary of the DAGs used in the evaluation. It states the number of nodes and edges, problematic variables, blocked variables, opposite effects variables, and unproblematic variables. The final column shows the average percentage of correct edges found when using the structure learning algorithms across all settings.</figDesc><table><row><cell>Name</cell><cell>|Nodes|</cell><cell>|Edges|</cell><cell>|Problematic|</cell><cell>|Blocked|</cell><cell>|Opposing Effects|</cell><cell>|Unproblematic|</cell><cell>% correct</cell></row><row><cell>asia</cell><cell>8</cell><cell>8</cell><cell>2</cell><cell>0</cell><cell>0</cell><cell>0</cell><cell>0.66</cell></row><row><cell>earthquake</cell><cell>5</cell><cell>4</cell><cell>2</cell><cell>0</cell><cell>0</cell><cell>0</cell><cell>0.93</cell></row><row><cell>sachs</cell><cell>11</cell><cell>17</cell><cell>1</cell><cell>0</cell><cell>0</cell><cell>1</cell><cell>0.44</cell></row><row><cell>alarm</cell><cell>37</cell><cell>46</cell><cell>9</cell><cell>0</cell><cell>2</cell><cell>0</cell><cell>0.66</cell></row><row><cell>insurance</cell><cell>27</cell><cell>52</cell><cell>1</cell><cell>0</cell><cell>0</cell><cell>0</cell><cell>0.60</cell></row><row><cell>Synthetic I</cell><cell>9</cell><cell>7</cell><cell>2</cell><cell>1</cell><cell>0</cell><cell>1</cell><cell>0.89</cell></row><row><cell>Synthetic II</cell><cell>10</cell><cell>13</cell><cell>4</cell><cell>0</cell><cell>2</cell><cell>0</cell><cell>0.65</cell></row><row><cell>Synthetic III</cell><cell>20</cell><cell>29</cell><cell>3</cell><cell>2</cell><cell>0</cell><cell>1</cell><cell>0.48</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head>Table 3</head><label>3</label><figDesc>Results by sample size and information.</figDesc><table><row><cell>Sample Size</cell><cell>Info</cell><cell>correct edges</cell><cell>wrong edges</cell></row><row><cell>500</cell><cell>No Info</cell><cell>0.54</cell><cell>0.54</cell></row><row><cell>500</cell><cell>Info</cell><cell>0.49</cell><cell>0.33</cell></row><row><cell>1000</cell><cell>No Info</cell><cell>0.54</cell><cell>0.55</cell></row><row><cell>1000</cell><cell>Info</cell><cell>0.62</cell><cell>0.35</cell></row><row><cell>10000</cell><cell>No Info</cell><cell>0.62</cell><cell>0.59</cell></row><row><cell>10000</cell><cell>Info</cell><cell>0.71</cell><cell>0.41</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head>Table 4</head><label>4</label><figDesc>Results by method and information.</figDesc><table><row><cell>Method</cell><cell>Info</cell><cell>correct edges</cell><cell>wrong edges</cell></row><row><cell>PC</cell><cell>No Info</cell><cell>0.54</cell><cell>0.33</cell></row><row><cell>PC</cell><cell>Info</cell><cell>0.55</cell><cell>0.22</cell></row><row><cell>HC</cell><cell>No Info</cell><cell>0.58</cell><cell>0.38</cell></row><row><cell>HC</cell><cell>Info</cell><cell>0.73</cell><cell>0.19</cell></row><row><cell>LiNGAM</cell><cell>No Info</cell><cell>0.52</cell><cell>0.98</cell></row><row><cell>LiNGAM</cell><cell>Info</cell><cell>0.69</cell><cell>0.68</cell></row></table></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="3" xml:id="foot_0">In addition, our online repository contains an example of a real-life dataset.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="4" xml:id="foot_1">Our experiments are also available here https://github.com/lea-cohausz/Causalfair_Experiments.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="5" xml:id="foot_2">We recommend Cohausz et al. <ref type="bibr" target="#b6">[7]</ref> for an idea of when we might deem relationships problematic and how this relates to selecting relevant fairness metrics.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="6" xml:id="foot_3">If, however, this happens to be false in a specific setting, then we should remove 𝐴 to prevent an ML model from using the correlation.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="7" xml:id="foot_4">That is, whether certain variables cannot have ingoing edges or cannot be influenced by certain other variables.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="8" xml:id="foot_5">Hybrid methods connecting constraint-based and score-based structure learning also exist<ref type="bibr" target="#b14">[15]</ref>. In practice, hybrid methods have been proven to work less well than the mentioned individual methods<ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b7">8]</ref>. Hence, we will not consider them here.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="9" xml:id="foot_6">Similarly, we could also add the information that a certain edge must exist; the edge is then added directly and can never be removed.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="10" xml:id="foot_7">Note that for PC, we evaluate against the CPDAG and that the ground-truth CPDAG also changes with more information available. Only exact matches (i.e., same orientation or both unoriented) are counted.</note>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="11" xml:id="foot_8">Note that while these are usual metrics in structure learning research, other metrics are also frequently used, such as, e.g., the Hamming Distance<ref type="bibr" target="#b8">[9,</ref><ref type="bibr" target="#b14">15]</ref>. However, we believe this provides a relatively easy-to-understand view of the results.</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>Lea Cohausz is funded by the grant "Consequences of Artificial Intelligence for Urban Societies (CAIUS)" by the Volkswagen Foundation.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Fairness of artificial intelligence in healthcare: review and recommendations</title>
		<author>
			<persName><forename type="first">D</forename><surname>Ueda</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Kakinuma</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Fujita</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kamagata</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Fushimi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Ito</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Matsui</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nozaki</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Nakaura</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><surname>Fujima</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Japanese Journal of Radiology</title>
		<imprint>
			<biblScope unit="volume">42</biblScope>
			<biblScope unit="page" from="3" to="15" />
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">Algorithmic bias in education</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Hawn</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Artificial Intelligence in Education</title>
		<imprint>
			<biblScope unit="page" from="1" to="41" />
			<date type="published" when="2021">2021</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b2">
	<analytic>
		<title level="a" type="main">Application of fairness to healthcare, organizational justice, and finance: a survey</title>
		<author>
			<persName><forename type="first">P</forename><surname>Birzhandi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y.-S</forename><surname>Cho</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Expert Systems with Applications</title>
		<imprint>
			<biblScope unit="volume">216</biblScope>
			<biblScope unit="page">119465</biblScope>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">A clarification of the nuances in the fairness metrics landscape</title>
		<author>
			<persName><forename type="first">A</forename><surname>Castelnovo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Crupi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Greco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">D</forename><surname>Regoli</surname></persName>
		</author>
		<author>
			<persName><forename type="first">I</forename><forename type="middle">G</forename><surname>Penco</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Cosentini</surname></persName>
		</author>
		<idno type="DOI">10.1038/s41598-022-07939-1</idno>
		<ptr target="https://www.nature.com/articles/s41598-022-07939-1" />
	</analytic>
	<monogr>
		<title level="j">Scientific Reports</title>
		<imprint>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page">4209</biblScope>
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">A causal bayesian networks viewpoint on fairness</title>
		<author>
			<persName><forename type="first">S</forename><surname>Chiappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">W</forename><forename type="middle">S</forename><surname>Isaac</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Privacy and Identity Management. Fairness, Accountability, and Transparency in the Age of Big Data: 13th IFIP WG 9.2 International Summer School</title>
				<meeting><address><addrLine>Vienna, Austria</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2019">2019</date>
			<biblScope unit="volume">13</biblScope>
			<biblScope unit="page" from="3" to="20" />
		</imprint>
		<respStmt>
			<orgName>Summer School</orgName>
		</respStmt>
	</monogr>
	<note>SIG 9.2. 2 International. Revised Selected Papers</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Why fair labels can yield unfair predictions: Graphical conditions for introduced unfairness</title>
		<author>
			<persName><forename type="first">C</forename><surname>Ashurst</surname></persName>
		</author>
		<author>
			<persName><forename type="first">R</forename><surname>Carey</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Chiappa</surname></persName>
		</author>
		<author>
			<persName><forename type="first">T</forename><surname>Everitt</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the AAAI Conference on Artificial Intelligence</title>
				<meeting>the AAAI Conference on Artificial Intelligence</meeting>
		<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="volume">36</biblScope>
			<biblScope unit="page" from="9494" to="9503" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b6">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Cohausz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Kappenberger</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Stuckenschmidt</surname></persName>
		</author>
		<title level="m">What fairness metrics can really tell you: A case study in the educational domain</title>
				<imprint>
			<date type="published" when="2024">2024</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Who learns better bayesian network structures: Accuracy and speed of structure learning algorithms</title>
		<author>
			<persName><forename type="first">M</forename><surname>Scutari</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><forename type="middle">E</forename><surname>Graafland</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><forename type="middle">M</forename><surname>Gutiérrez</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">International Journal of Approximate Reasoning</title>
		<imprint>
			<biblScope unit="volume">115</biblScope>
			<biblScope unit="page" from="235" to="253" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<analytic>
		<title level="a" type="main">A survey on bayesian network structure learning from data</title>
		<author>
			<persName><forename type="first">M</forename><surname>Scanagatta</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Salmerón</surname></persName>
		</author>
		<author>
			<persName><forename type="first">F</forename><surname>Stella</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Progress in Artificial Intelligence</title>
		<imprint>
			<biblScope unit="volume">8</biblScope>
			<biblScope unit="page" from="425" to="439" />
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><forename type="first">J</forename><surname>Pearl</surname></persName>
		</author>
		<title level="m">Causality</title>
				<imprint>
			<publisher>Cambridge university press</publisher>
			<date type="published" when="2009">2009</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">Using demographic data as predictor variables: a questionable choice</title>
		<author>
			<persName><forename type="first">R</forename><forename type="middle">S</forename><surname>Baker</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Esbenshade</surname></persName>
		</author>
		<author>
			<persName><forename type="first">J</forename><surname>Vitale</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><surname>Karumbaiah</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Journal of Educational Data Mining</title>
		<imprint>
			<biblScope unit="volume">15</biblScope>
			<biblScope unit="page" from="22" to="52" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<author>
			<persName><forename type="first">L</forename><surname>Cohausz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><surname>Tschalzev</surname></persName>
		</author>
		<author>
			<persName><forename type="first">C</forename><surname>Bartelt</surname></persName>
		</author>
		<author>
			<persName><forename type="first">H</forename><surname>Stuckenschmidt</surname></persName>
		</author>
		<title level="m">Investigating the importance of demographic features for edm-predictions</title>
				<imprint>
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Why do tree-based models still outperform deep learning on typical tabular data?</title>
		<author>
			<persName><forename type="first">L</forename><surname>Grinsztajn</surname></persName>
		</author>
		<author>
			<persName><forename type="first">E</forename><surname>Oyallon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">G</forename><surname>Varoquaux</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Advances in neural information processing systems</title>
		<imprint>
			<biblScope unit="volume">35</biblScope>
			<biblScope unit="page" from="507" to="520" />
			<date type="published" when="2022">2022</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b13">
	<analytic>
		<title level="a" type="main">Thinking with causal models: A visual formalism for collaboratively crafting assumptions</title>
		<author>
			<persName><forename type="first">B</forename><surname>Hicks</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Kitto</surname></persName>
		</author>
		<author>
			<persName><forename type="first">L</forename><surname>Payne</surname></persName>
		</author>
		<author>
			<persName><forename type="first">S</forename><forename type="middle">Buckingham</forename><surname>Shum</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">LAK22: 12th International Learning Analytics and Knowledge Conference</title>
				<imprint>
			<date type="published" when="2022">2022</date>
			<biblScope unit="page" from="250" to="259" />
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b14">
	<analytic>
		<title level="a" type="main">A survey of bayesian network structure learning</title>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Kitson</surname></persName>
		</author>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Constantinou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Y</forename><surname>Liu</surname></persName>
		</author>
		<author>
			<persName><forename type="first">K</forename><surname>Chobtham</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Artificial Intelligence Review</title>
		<imprint>
			<biblScope unit="page" from="1" to="94" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b15">
	<analytic>
		<title level="a" type="main">The impact of prior knowledge on causal structure learning</title>
		<author>
			<persName><forename type="first">A</forename><forename type="middle">C</forename><surname>Constantinou</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Z</forename><surname>Guo</surname></persName>
		</author>
		<author>
			<persName><forename type="first">N</forename><forename type="middle">K</forename><surname>Kitson</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Knowledge and Information Systems</title>
		<imprint>
			<biblScope unit="page" from="1" to="50" />
			<date type="published" when="2023">2023</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b16">
	<analytic>
		<title level="a" type="main">Lingam: Non-gaussian methods for estimating causal structures</title>
		<author>
			<persName><forename type="first">S</forename><surname>Shimizu</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Behaviormetrika</title>
		<imprint>
			<biblScope unit="volume">41</biblScope>
			<biblScope unit="page" from="65" to="98" />
			<date type="published" when="2014">2014</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b17">
	<analytic>
		<title level="a" type="main">Package &apos;bnlearn&apos;, Bayesian network structure learning, parameter learning and inference</title>
		<author>
			<persName><forename type="first">M</forename><surname>Scutari</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">R package version</title>
		<imprint>
			<biblScope unit="volume">4</biblScope>
			<date type="published" when="2019">2019</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
