<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Score based vs constraint based causal learning in the presence of confounders</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author>
							<persName><forename type="first">Sofia</forename><surname>Triantafillou</surname></persName>
							<affiliation key="aff0">
								<orgName type="department">Computer Science Dept</orgName>
								<orgName type="institution">University of Crete Voutes Campus</orgName>
								<address>
									<postCode>700 13</postCode>
									<settlement>Heraklion</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Ioannis</forename><surname>Tsamardinos</surname></persName>
							<affiliation key="aff1">
								<orgName type="department">Computer Science Dept</orgName>
								<orgName type="institution">University of Crete Voutes Campus</orgName>
								<address>
									<postCode>700 13</postCode>
									<settlement>Heraklion</settlement>
									<country key="GR">Greece</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Score based vs constraint based causal learning in the presence of confounders</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">6DE861A372EA6036725018CCD81A96F7</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-24T06:55+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>We compare score-based and constraint-based learning in the presence of latent confounders. We use a greedy search strategy to identify the best fitting maximal ancestral graph (MAG) from continuous data, under the assumption of multivariate normality. Scoring maximal ancestral graphs is based on (a) residual iterative conditional fitting [Drton et al., 2009]  for obtaining maximum likelihood estimates for the parameters of a given MAG and (b) factorization and score decomposition results for mixed causal graphs <ref type="bibr" target="#b0">[Richardson, 2009</ref>, Nowzohour et al.,  2015]. We compare the score-based approach in simulated settings with two standard constraintbased algorithms: FCI and conservative FCI. Results show a promising performance of the greedy search algorithm.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">INTRODUCTION</head><p>Causal graphs can capture the probabilistic and causal properties of multivariate distributions. Under the assumptions of causal Markov condition and faithfulness, the graph induces a factorization for the joint probability distribution, and a graphical criterion (d-separation) can be used to identify all and only the conditional independencies that hold in the joint probability distribution.</p><p>The simplest case of a causal graph is a directed acyclic graph <ref type="bibr">(DAG)</ref>. A causal DAG G and faithful probability distribution P constitute a causal Bayesian network (CBN) <ref type="bibr">[Pearl, 2000]</ref>. Edges in the graph of a CBN have a straightforward interpretation: A directed edge X → Y denotes a causal relationship that is direct in the context of variables included in the DAG. In general, CBNs are considered in the setting where causal sufficiency holds, i.e. the absence of latent confounders. This is restrictive, since in most cases we can/do not observe all variables that participate in the causal mechanism of a multivariate system.</p><p>We consider a representation for an equivalence class of models based on maximal ancestral graphs (MAGs) <ref type="bibr">[Richardson and Spirtes, 2002]</ref>. MAGs are extensions of CBNs that also consider latent confounders. Latent confounders are represented with bi-directed edges. The set of conditional independencies that hold in a faithful probability distribution can be identified from the graph with the graphical criterion of m-separation. The causal semantics of edges in MAGs are more complicated: Directed edges denote causal ancestry, but the relationship is not necessarily direct. Bi-directed edges denote latent common causes. However, each pair of variables can only share one edge, and causal ancestry has precedence over confounding: If X is a causal ancestor of Y and the two are also confounded, then X → Y in the MAG. MAGs have several attractive properties: They are closed under marginalization and every non-adjacency corresponds to a conditional independence.</p><p>There exist two main approaches for learning causal graphs from data. Constraint-based approaches infer the conditional independencies imprinted in the data and search for a DAG/MAG that entails all (and only) of these independences according to d/m-separation. Score-based approaches try to find the graph G that maximizes the likelihood of the data given G (or the posterior), according to the factorization imposed by G. In general, a class of causal graphs, that are called Markov equivalent, fit the data equally well. Constraint-based approaches are more efficient and output a single graph with clear semantics, but give no indication the relative confidence in the model. Moreover, they have been shown to be sensitive to error propagation <ref type="bibr">[Spirtes, 2010]</ref>. Score-based methods on the other hand do not have this problem, and they also provide a metric of confidence in the entire output model. Hybrid methods that exploit the best of both worlds have therefore proved successful in learning causal graphs from data <ref type="bibr">[Tsamardinos et al., 2006]</ref>.</p><p>Numerous constraint-based and score-based algorithms exist that learn causal DAGs (classes of Markov equivalent DAGs) from data. Learning MAGs on the other hand is typically done with constraint-based algorithms. 
A score-based method for mixed causal graphs (not necessarily MAGs) has recently been proposed <ref type="bibr">[Nowzohour et al., 2015]</ref>, based on factorization results for mixed graphs <ref type="bibr">[Tian and</ref><ref type="bibr">Pearl, 2003, Richardson, 2009]</ref>.</p><p>Using these decomposition results, we implemented a simple greedy search for learning MAGs from data. We compare the results of this approach with the outputs of <ref type="bibr">FCI [Spirtes et al., 2000</ref><ref type="bibr">, Zhang, 2008]</ref> and conservative <ref type="bibr">FCI [Ramsey et al., 2006]</ref>. Greedy search performs slightly worse in most settings in terms of structural Hamming distance, and better than FCI in terms of precision and recall.</p><p>Based on these results, we believe that the score-based approach can be used to improve learning causal graphs in the presence of confounders. The algorithm implementation and code for the detailed results are available at https://github.com/striantafillou. The rest of the paper is organized as follows: Section 2 briefly reviews causal graphs with and without causal sufficiency. Section 3 gives an overview of constraint-based and score-based methods for DAGs and MAGs. Section 4 describes a greedy search algorithm for learning MAGs. Related work is discussed in Section 5. Section 6 compares the performance of the algorithm against FCI and CFCI. Conclusions and future work are presented in Section 7.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">CAUSAL GRAPHS</head><p>We begin with some graphical notation: A mixed graph (MG) is a collection of nodes (interchangeably variables) V, along with a collection of edges E. Edges can be directed (X → Y ) or bi-directed (X ↔ Y ). A path is a sequence of adjacent edges (without repetition). The first and last node of a path are called endpoints of the path.</p><p>A bi-directed path is a path where every edge is bi-directed. A directed path is a path where every edge is directed and oriented in the same direction. We use X Y to symbolize a directed path from X to Y . A directed cycle occurs when there exists a directed path X</p><p>X. An almost directed cycle is occurs when X ↔ Y and X Y . A triplet X, Y, Z on consequent nodes on a path are form a collider if X → Y ← Z. If X and Z are not adjacent, the triplet is an unshielded collider.</p><p>A mixed graph is called ancestral if it has no directed and almost directed cycles. An ancestral graph without bidirected edges is a DAG. X is a parent of Y in a MG G if X → Y in G. We use the notation P a G (X), An G (X) to denote the set of parents and ancestors of X in G.</p><p>Under causal sufficiency, DAGs can be used to model causal relationships: For a graph G over a set of variables V, X → Y in G if X causes Y directly (no variables in V mediate this relationship). Under the causal Markov condition and faithfulness Pearl <ref type="bibr">[2000]</ref>, G is connected to the joint probability distribution P over V through the criterion of d-separation (defined below). Equivalently, the causal Markov condition imposes a simple factorization of the joint probability distribution:</p><formula xml:id="formula_0">P (V) = V ∈V P (V |P a G (V )) (1)</formula><p>Thus, the parameters of the joint probability distribution describe the probability density function of each variable given its parents in the graph. 2. Every collider on the path is an ancestor of some member of Z.</p><p>A and B are said to be m-separated by Z if there is no m-connecting path between A and B relative to Z. Otherwise, they are said to be m-connected given Z.For graphs without bi-directed edges, m-separation is reduced to the d-separation criterion.</p><p>Markov equivalence classes of semi-Markov causal models do not have a simple characterization, because Markov equivalent SMCMs do not necessarily share the same edges: Absence of an edge in a SMCM does not necessarily correspond to an m-separation. Figure <ref type="figure" target="#fig_0">1</ref> shows an example of two SMCMs that encode the same m-separations but do not have the same edges (figure taken from <ref type="bibr">[Triantafillou and Tsamardinos, 2015]</ref>).</p><p>Maximal ancestral graphs are also used to model causality and conditional independencies in causally insufficient systems. MAGs are mixed ancestral graphs, which means that they can have no directed or almost directed cycles. Every pair of variables X, Y in an ancestral graph is joined by at most one edge. The orientation of this edge represents (non) causal ancestry. A directed edge X → Y denotes that X is an ancestor of Y , but the relation is not necessarily direct in the context of modeled variables (see for example edge A → D in MAG M 1 of Figure <ref type="figure" target="#fig_0">1</ref>). Moreover, X and Y may also be confounded (e.g. edge B → D in MAG M 1 of Figure <ref type="figure" target="#fig_0">1</ref>). 
A bi-directed edge X ↔ Y denotes that X and Y are confounded.</p><p>Like SMCMs, ancestral graphs encode the conditional independencies of a faithful distribution according to the criterion of m-separation. Maximal ancestral graphs are graphs in which every missing edge (non-adjacency) corresponds to a conditional independence. Every ancestral graph can be extended to a maximal ancestral graph by adding some bi-directed edges <ref type="bibr">[Richardson and Spirtes, 2002]</ref>. Markov equivalence classes of maximal ancestral graphs share the same edges and unshielded colliders, as well as some additional shielded colliders, discussed in Zhang <ref type="bibr">[2008]</ref>, Ali et al. <ref type="bibr">[2009]</ref>. Partial ancestral graphs (PAGs) are used to represent the Markov equivalence classes of MAGs.</p><p>Figure <ref type="figure" target="#fig_0">1</ref> illustrates some differences between SMCMs and MAGs that represent the same marginal of a DAG. For example, A is a causal ancestor of D in DAG G 1 , but not a direct cause (in the context of the observed variables). Therefore, the two are not adjacent in the corresponding SMCM S 1 over {A, B, C, D}. However, the two cannot be rendered independent given any subset of {B, C}, and therefore A → D is in the respective MAG M 1 .</p><p>In the same DAG, B is another causal ancestor (but not a direct cause) of D. The two variables share the common cause L. Thus, in the corresponding SMCM S 1 over {A, B, C, D}, B ↔ D is present. However, a bi-directed edge between B and D is not allowed in MAG M 1 , since it would create an almost directed cycle. Thus, B → D is in M 1 .</p><p>Overall, an SMCM has a subset of the adjacencies of its MAG counterpart. These extra adjacencies in MAGs correspond to pairs of variables that cannot be m-separated given any subset of observed variables, although neither directly causes the other and the two are not confounded. These adjacencies can be identified in an SMCM using a special type of path, called an inducing path <ref type="bibr">[Richardson and Spirtes, 2002]</ref>.</p></div>
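<div xmlns="http://www.tei-c.org/ns/1.0"><p>To make the cycle conditions above concrete, the following is a minimal sketch of our own (the graph encoding and the edge sets are illustrative, not the paper's code) that checks whether a mixed graph is ancestral, i.e. free of directed and almost directed cycles:</p><code lang="python">def ancestors(nodes, directed):
    """Map each node to its set of ancestors in the directed part of the graph."""
    parents = {v: {x for (x, y) in directed if y == v} for v in nodes}
    anc = {v: set() for v in nodes}
    changed = True
    while changed:  # propagate ancestry to a fixed point
        changed = False
        for v in nodes:
            new = set(parents[v])
            for p in parents[v]:
                new |= anc[p]
            if new - anc[v]:
                anc[v] |= new
                changed = True
    return anc

def is_ancestral(nodes, directed, bidirected):
    """A mixed graph is ancestral iff it has no directed cycle (X ⇝ X)
    and no almost directed cycle (X ↔ Y together with X ⇝ Y)."""
    anc = ancestors(nodes, directed)
    if any(v in anc[v] for v in nodes):  # directed cycle
        return False
    for (x, y) in bidirected:            # almost directed cycle
        if x in anc[y] or y in anc[x]:
            return False
    return True

# Illustrative mixed graph over {A, B, C, D}: A → D, B → D, B ↔ C.
print(is_ancestral("ABCD", {("A", "D"), ("B", "D")}, {("B", "C")}))  # True
# B ↔ D together with B → D (so B ⇝ D) would be an almost directed cycle:
print(is_ancestral("ABCD", {("A", "D"), ("B", "D")}, {("B", "D")}))  # False
</code></div>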
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">LEARNING THE STRUCTURE OF CAUSAL GRAPHS</head><p>As mentioned above, there are two main approaches for learning causal networks from data, constraint-based and score-based. Constraint-based methods estimate from the data which conditional independencies hold, using appropriate tests of conditional independence. Each conditional independence corresponds to an m(d)-separation constraint. Constraint-based algorithms try to eliminate all graphs that are inconsistent with the observed constraints, and ultimately return only the statistically equivalent graphs consistent with all the tests.</p><p>Notice that the number of possible conditional independencies are exponential to the number of variables. For graphs that are maximal, i.e. every missing edge corresponds to a conditional independence (DAGs and MAGs but not SM-CMs), there exist efficient procedures that can return the skeleton and invariant orientations of the Markov equivalence class of graphs that are consistent with a data set, using only a subset of conditional independencies.</p><p>The PC algorithm <ref type="bibr">[Spirtes et al., 2000</ref>] is a prototypical, asymptotically correct constraint-based algorithm that identifies the PDAG consistent with a data set. FCI <ref type="bibr">[Spirtes et al., 2000</ref><ref type="bibr">, Zhang, 2008]</ref> is the first asymptotically correct constraint-based algorithm that identifies the PAG consistent with a data set. The algorithms work in two stages. The first stage is the skeleton identification stage: Starting from the full graph, the algorithm tries to identify a conditional independence X ⊥ ⊥ Y |Z for each pair of variables X, Y . The corresponding edge is then removed, and the separating set Z is cached. The second stage is the orientations stage, where the cached conditioning sets are employed to orient the edges.</p><p>Given faithfulness, the subset of conditional independencies that have been identified during the skeleton identification stage are sufficient to make all invariant orientation and return the PDAG or PAG that represents the Markov equivalence class of causal graphs that are consistent with all and only the cached conditional independencies. In practice, the orientation stage is sensitive to error propagation <ref type="bibr">[Spirtes, 2010]</ref>. Conservative PC (CPC) <ref type="bibr">[Ramsey et al., 2006]</ref> proposes a modification of PC that results in more robust orientations: The algorithm performs additional conditional independence tests during the orientation stage, and performs only a subset of robust orientations that are consistent with multiple conditional (in) dependencies. We use the term conservative FCI (CFCI) to describe FCI with the same conservative extension.</p><p>Score-based methods on the other hand search over the space of possible graphs trying to maximize a score that reflects how well the graph fits the data. This score is typically related to the likelihood of the graph given the data, P (D|G). For multinomial and Gaussian parametrizations, respectively, BDe <ref type="bibr">[Heckerman et al., 1995]</ref> and BGe <ref type="bibr">[Geiger and Heckerman, 1994]</ref> are Bayesian scores that integrate over all possible parameters. These scores are employed in most DAG/PDAG-learning algorithms. 
<p>Score-based methods, on the other hand, search over the space of possible graphs trying to maximize a score that reflects how well the graph fits the data. This score is typically related to the likelihood P(D|G) of the data given the graph. For multinomial and Gaussian parametrizations, respectively, BDe <ref type="bibr">[Heckerman et al., 1995]</ref> and BGe <ref type="bibr">[Geiger and Heckerman, 1994]</ref> are Bayesian scores that integrate over all possible parameters. These scores are employed in most DAG/PDAG-learning algorithms. Other criteria like BIC or MDL can be used to score a graph with specific parameters <ref type="bibr">[Bouckaert, 1995]</ref>.</p><p>The number of possible graphs is super-exponential in the number of variables. For DAGs, efficient search-and-score learning is based on the factorization in Equation <ref type="formula">1</ref>: the likelihood of the graph can be decomposed into a product of individual likelihoods of each node given its parents. This can make greedy search very efficient: In each step of the search, all the graphs that result from a single change to the current graph are considered. Using the score decomposition, one only needs to recompute the scores of the nodes affected by the change (i.e. whose set of parents changes). Unfortunately, the factorization presented in Equation 1 does not apply to probability distributions that are faithful to mixed graphs. This is because variables connected with bi-directed edges (confounded variables) are no longer independent given their parents in the graph. Thus, SMCMs and MAGs do not admit a simple factorization where each node has a single contribution to the likelihood. However, the joint probability of a set of variables V according to an acyclic directed mixed graph (ADMG) G can be factorized based on sets of variables called the c-components <ref type="bibr">[Tian and Pearl, 2003]</ref>, or districts <ref type="bibr">[Richardson, 2009]</ref>, of the graph. The c-components correspond to the connected components of the bi-directed part of G (denoted G ↔ ), the graph stemming from G after the removal of all directed edges.</p><p>Parametrizations of the set of distributions obeying the conditional independence relations given by an ADMG are available for multivariate discrete <ref type="bibr">[Richardson, 2009]</ref> and multivariate Gaussian distributions <ref type="bibr">[Richardson and</ref><ref type="bibr">Spirtes, 2002, Drton et al., 2009]</ref>. Gaussian parametrizations for SMCMs are not always identifiable, but they have been shown to be almost everywhere identifiable for ADMGs without bows, i.e. without pairs of variables joined by both a directed and a bi-directed edge (in the terminology of the structural equation model literature) <ref type="bibr">[Brito and Pearl, 2002]</ref>. If, in addition, an ADMG has no almost directed cycles (i.e. is ancestral), the parametrization is everywhere identifiable <ref type="bibr">[Richardson and Spirtes, 2002]</ref>.</p></div>
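<div xmlns="http://www.tei-c.org/ns/1.0"><p>Computing c-components is straightforward; the following is a minimal sketch of our own (with an assumed edge-set encoding) that extracts them as the connected components of the bi-directed part of a mixed graph:</p><code lang="python">def c_components(nodes, bidirected):
    """Connected components of the bi-directed part of a mixed graph.

    nodes: iterable of node labels; bidirected: set of unordered pairs.
    Directed edges are ignored, as in the definition of c-components."""
    neighbors = {v: set() for v in nodes}
    for x, y in bidirected:
        neighbors[x].add(y)
        neighbors[y].add(x)
    components, seen = [], set()
    for v in nodes:
        if v in seen:
            continue
        comp, stack = set(), [v]  # depth-first search from v
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(neighbors[u] - comp)
        components.append(comp)
        seen |= comp
    return components

# B ↔ D and C ↔ D share a c-component; A is a singleton.
print(c_components("ABCD", {("B", "D"), ("C", "D")}))
# e.g. [{'A'}, {'B', 'C', 'D'}]
</code></div>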
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">GSMAG: GREEDY SEARCH FOR MAXIMAL ANCESTRAL GRAPHS</head><p>Let V = {V i : i = 1, . . . , V } be a random vector of V variables following a multivariate normal distribution N (O, Σ) with positive definite covariance matrix Σ. Let G be a MAG. Then graph G defines a system of linear equations:</p><formula xml:id="formula_3">V i = j∈P a G (Vi) β ij V j + i , i ∈ {1, . . . , V }<label>(2)</label></formula><p>Let B(G) be the collection of all real V × V matrices B = (β ij ) such that (i) β ij = 0 when j → i is not in G, and (ii)</p><formula xml:id="formula_4">(I − B) is invertible. Let Ω(G) be all the V × V matrices Ω = (ω ij ) such that (i) Ω is positive definite and (ii) ω ij = 0 if j ↔ i is not in G.</formula><p>Then the system of linear equations (2) can be written as Let D be a V × N matrix of observations for the variables V . Then the empirical covariance matrix is defined as</p><formula xml:id="formula_5">V = BV + ,</formula><formula xml:id="formula_6">S = 1 n DD T .</formula><p>For N ≥ V + 1, S is almost surely positive definite. For a MAG G, the log likelihood of the model is </p><formula xml:id="formula_7">l G (B, Ω|S) = − N 2 ln(|2πΩ|− N − 1 N tr[(I − B) T Ω −1 (I − B)S])<label>(3</label></formula><formula xml:id="formula_8">(I − B) −1 Ω(I − B) −T .</formula><p>Based on the factorization of MAGs presented in Richardson <ref type="bibr">[2009]</ref>, the likelihood can be decomposed according to the c-components of G as follows <ref type="bibr">[Nowzohour et al., 2015]</ref>:</p><formula xml:id="formula_9">l G ( Σ|S) = − N 2 k |C k |ln(2π) + ln |Σ G k | j∈P a G k σ 2 kj + N − 1 N tr[Σ −1 G k S G k − |P a G (C k ) \ {C k }|] ,<label>(4)</label></formula><p>where the G k is the graph consisting only of nodes in C k ∪ P a G (C k ) without any edges among variables in P a G (C k )\ C k , and the subscript G k denotes the restriction of a matrix to the rows and columns participating in G k . σ 2 kj denotes the diagonal entry of Σ G k corresponding to parent node k. The log likelihood is now a sum of c-component scores.</p><p>The scoring function is typically the negative log likelihood </p><formula xml:id="formula_10">G ← action( X, Y, G); if action==(orientBidirected ∨ addBidirected) then m ← m : X ∈ C m ; l ← l : Y ∈ C l ; C m ← C m ∪ C l ; C ← C \ C l ; Σm ← RICF(G m , S m , tol); s m ← scoreContrib(C m , Σ m , N ); end else if X ↔ Y in G then m ← m : X, Y ∈ C m ; C new ← connectedComponents(G m ); C ← (C \ {C m }) ∪ C new ; foreach C ∈ C new do m ← index of C in C ; Σm ← RICF(G m , S m , tol); s m ← scoreContrib(C m , Σ m , N ); end end else m ← C m : G m = G m ; Σm ← RICF(G m , S m , tol); s m ← scoreContrib(C m , Σ m , N ); end</formula><p>Algorithm 2: updateScores regularized by a penalty for the number of parameters to avoid over-fitting. The BIC score for MAGs is:</p><formula xml:id="formula_11">BIC( Σ, G) = −2ln(l G ( Σ|S)) + ln(N )(2V + E),<label>(5)</label></formula><p>where l G ( Σ|G) is the likelihood of the graph G with the MLE parameters B, Ω. BIC is an asymptotically correct criterion for selecting among Gaussian ancestral graph models <ref type="bibr">[Richardson and Spirtes, 2002]</ref>.</p><p>A simple greedy strategy starts from a MAG G with a score sc and then checks the local neighborhood (i.e. the graphs that stem from the current graph after making a single edge change) for the lowest-scoring network. 
The algorithm continues this "hill-climbing" until no single edge change reduces the score.</p><p>Algorithm 1 begins with the empty graph, where each node is a component. At every subsequent step, every possible edge change is considered: For absent edges, the possible actions are addLeft, addRight, addBidirected. For directed edges, the possible actions are reverse, orientBidirected, remove. For bi-directed edges the possible actions are ori-entLeft, orientRight, remove.</p><p>Score decomposition described in Equation 4 is used to avoid re-fitting the entire MAG. Instead, only the likelihood of the c-components affected by the change need to be reestimated. Algorithm 1 describes a simple greedy search strategy for learning MAG structure.</p><p>Only actions that do not create directed or almost directed cycles are attempted. To efficiently check for cycle creation, a matrix of ancestral relationships<ref type="foot" target="#foot_0">1</ref> of the current MAG is cached. Edge removals can never create directed cycles. Using the cached ancestral matrix, it is straightforward to check whether the addition of a directed edge will create a directed cycle, or if the addition of a bi-directed edge will create an almost directed cycle. Almost directed cycles can also be created when adding directed edges: For each edge X ↔ Y , adding edge J → I will create a semidirected cycle if I is an ancestor if X(Y ) and Y (X) is an ancestor of J.</p><p>At the end of each iteration, the matrix of ancestral relationships is updated. If a previously missing edge is added, the update takes O(V 2 ) time. If an edge is removed, the matrix is recomputed using Warshall' s algorithm for transitive closure <ref type="bibr">[Warshall, 1962]</ref>.</p><p>The c-components are only updated in case a bi-directed edge is added or altered in any way. When adding a bidirected edge, the corresponding c-components of the endpoints are merged if separate. When an existing bi-directed edge is removed (completely or becomes directed), the corresponding c-component C k is divided in the new connected components. The scores of the affected components are recomputed using new RICF estimates. In any other case, the c-components remain the same, and the score of the c-component whose corresponding graph G k is affected by the change is recomputed. This procedure is described in Algorithm 2.</p><p>When no single-edge change improves the current score, the algorithm terminates and the current network is returned. Greedy hill-climbing procedures can be stuck in local optima (minima). To tackle this problem, they are often augmented with meta-heuristics such as random restarts, TABU lists or simulated annealing. For the scope of this work we use no such heuristic. In preliminary experiments, however, we found that augmenting Algorithm 1 with a TABU heuristic did not significantly improve performance.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">RELATED WORK</head><p>Several constraint-based algorithms exist for learning a Markov equivalence class of MAGs from an observational data set: FCI <ref type="bibr">[Spirtes et al., 2000</ref><ref type="bibr">, Zhang, 2008]</ref>  </p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">COMPARISON OF GSMAG WITH FCI, CFCI</head><p>We compared the performance of Algorithm 1 against FCI and CFCI in simulated data. We simulated 100 random DAGs over 10, 20 and 50 variables. To control the sparseness of the DAGs, we set the maximum parents of each node. We present results for sparse networks, where each variable was allowed to have up to 3 parents, and denser networks where each variables was allowed to have up to 5 parents. For each DAG, 10% of the variables were marginalized (1, 2 and 5 variables respectively). The ground truth PAG P GT was then created for each marginal DAG.</p><p>Data sets with 100, 1000 and 5000 samples were simulated for each DAG and random parameters with absolute values in {0.1, 0.9}. The corresponding marginal data sets were input in Algorithm 1, FCI and CFCI. FCI and CFCI were run with a significance threshold of 0.05 and a maximum conditioning size 5. Algorithm 1 outputs a MAG. To compare the outputs of the algorithms, the corresponding PAG   Summarizing PAG differences is not trivial, and many dif-  We also use precision and recall, as described in Tillman and Spirtes <ref type="bibr">[2011]</ref>: Precision is defined as the number of edges in the output PAG with the correct orientations, divided by the number of edges in the output PAG. Recall is defined as the number of edges in the output PAG with correct orientations, divided by the number of edges in the ground truth PAG. These metrics are very conservative, since they penalize even small differences. For example, an edge that is → in the ground truth but in the output PAG will be classified as a false positive.</p><p>Figures <ref type="figure" target="#fig_5">2, 3, and 4</ref> show the performance results for networks of 10, 20 and 50 variables, respectively. Mean values over 100 iterations are presented for all experiments. All algorithms perform better in sparser networks.</p><p>Greedy search has larger structural hamming distances than FCI and CFCI. More specifically, out of 900 cases (over all variable and sample sizes), FCI outperforms GSMAG in 657 cases for sparse networks and in 715 cases for dense networks, while CFCI outperforms GSMAG in 682 cases for sparse networks and in 710 cases for dense networks. In terms of precision, CFCI is again the best of the three (outperforms GSMAG in 601 and 659 out of 900 cases for sparse and dense networks, respectively). FCI has the poorest precision: it outperforms GSMAG in 318 and 221 cases for sparse and dense networks, respectively). Finally, GS-MAG has the best recall out of all algorithms, with CFCI being second. Specifically, terms of recall, FCI outperforms GSMAG in 139 and 83 cases, while CFCI outperforms GSMAG in 357 and 249 cases for sparse networks and dense networks, respectively. Naturally, greedy search is much slower than both conservative and plain FCI.</p><p>Intriguingly, GSMAG's performance declines for the largest attempted sample size (5000 samples), particularly for larger networks. This happens because greedy search tends to include many false positive edges. It is possible that this is related to the naive greedy search, and could be improved by augmenting some kind of heuristic for escaping local minima, or by adjusting the scoring criterion.</p><p>Figure <ref type="figure" target="#fig_7">6</ref> shows the score of the output MAG for Algorithm 1 and the ground truth MAG. 
To compare also with FCI, we used the method presented in Zhang <ref type="bibr">[2008]</ref> to obtain a MAG from P FCI . Notice that this cannot be applied to the output of CFCI, since it is not a complete PAG (due to unfaithful triplets). Greedy search typically scores closer to the ground truth, particularly for denser networks.</p></div>
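<div xmlns="http://www.tei-c.org/ns/1.0"><p>For clarity, the strict edge-precision and edge-recall computation described above can be sketched as follows (our own illustration; the endpoint-mark encoding of PAG edges is assumed):</p><code lang="python">def precision_recall(output_edges, truth_edges):
    """Edge precision/recall in the spirit of Tillman and Spirtes [2011].

    Each edge maps an unordered pair to its two endpoint marks, e.g.
    ('A', 'B'): ('-', '>') encodes A → B, and 'o' is a circle mark.
    An edge counts as correct only if the pair is adjacent in both PAGs
    and both endpoint marks match exactly."""
    correct = sum(1 for pair, marks in output_edges.items()
                  if truth_edges.get(pair) == marks)
    precision = correct / len(output_edges) if output_edges else 1.0
    recall = correct / len(truth_edges) if truth_edges else 1.0
    return precision, recall

truth = {("A", "B"): ("-", ">"),   # A → B
         ("B", "C"): ("o", ">")}   # B o→ C
output = {("A", "B"): ("-", ">"),  # correct orientation
          ("B", "C"): ("-", ">"),  # adjacent, but a wrong mark: false positive
          ("A", "C"): ("o", "o")}  # extra edge: false positive
print(precision_recall(output, truth))  # (0.333..., 0.5)
</code></div>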
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">FUTURE WORK</head><p>We present an implementation of a greedy search algorithm for learning MAGs from observations, and compare it to FCI and CFCI. To the best of our knowledge, this is the first comparison of score-based and constraint-based search in the presence of confounders.</p><p>The algorithm uses the decomposition presented in <ref type="bibr">Nowzohour et al. [2015]</ref> for bow-free SMCMs. Compared to SMCMs, MAGs are less expressive in terms of causal statements. However, since they have no almost directed cycles, fitting procedures for obtaining maximum likelihood estimates always converge. Semi-Markov causal models that are Markov equivalent to the output MAG could be identified as a post-processing step.</p><p>Heuristic procedures for escaping local minima could also be explored, to improve the performance of GSMAG. Algorithm efficiency could also possibly be improved by updating without recomputing inverse matrices and where applicable.</p><p>Other interesting directions include taking weighted averages for specific PAG features, or using both constraintbased and score-based techniques for hybrid learning. </p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 1 :</head><label>1</label><figDesc>Figure 1: An example of two different DAGs and the corresponding mixed causal graphs over observed variables. From the top: DAGs G 1 over variables {A, B, C, D, L} (left) and G 2 over variables {A, B, C, D} (right). From left to right, on the same row as the underlying causal DAG, the respective SMCMs S 1 and S 2 over {A, B, C, D}; the respective MAGs M 1 = G 1 [ L and M 2 = G 2 over variables {A, B, C, D}; finally, the respective PAGs P 1 and P 2 . Notice that, M 1 and M 2 are identical, despite representing different underlying causal structures.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>and for B ∈ B(G), Cov( ) = Ω ∈ Ω(G) it has a unique solution that is a multivariate normal vector with covariance matrix Σ = (I − B) −1 Ω(I − B) −T , where the superscript −T denotes the transpose inverse. The family of distributions with covariance matrix in the set {Σ = (I − B) −1 Ω(I − B) −T } is called the normal linear model associated with G (symb N(G)). For MAGs, the normal linear model is everywhere identifiable.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>input : Pair X, Y , Action action, c-components C, scores s, MAG G, covariance matrix S, tolerance tol, sample size N output: c-components C , scores s , MAG G</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Performance of FCI, CFCI and GSMAG for networks with 9 observed variables (top) 3 maximum parents per variables (bottom) 5 maximum parents per variable.</figDesc><graphic coords="7,81.58,280.88,105.29,78.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_4"><head>Figure 3 :</head><label>3</label><figDesc>Figure 3: Performance of FCI, CFCI and GSMAG for networks with 18 observed variables (top) 3 maximum parents per variable (bottom) 5 maximum parents per variable.</figDesc><graphic coords="7,81.58,489.75,105.29,78.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_5"><head>Figure 4 :</head><label>4</label><figDesc>Figure 4: Performance of FCI, CFCI and GSMAG for networks with 45 observed variables (top) 3 maximum parents per variable (bottom) 5 maximum parents per variable.</figDesc><graphic coords="7,81.58,572.28,105.29,78.94" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_6"><head>Figure 5 :</head><label>5</label><figDesc>Figure 5: Score divided by number of samples for FCI, GS and the ground truth network for sparse networks (3 maximum parents) for 9(left), 18(middle) and 45(right) observed variables.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_7"><head>Figure 6 :</head><label>6</label><figDesc>Figure 6: Score divided by number of samples for FCI, GS and the ground truth network (5 maximum parents) for 9(left), 18(middle) and 45(right) observed variables.</figDesc><graphic coords="8,81.58,302.82,70.20,52.63" type="bitmap" /></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_2"><head></head><label></label><figDesc>is a sound and complete algorithm that returns the complete PAG. RFCIColombo et al. [2012]  and FCI+[Claassen et al.,  2013]  are modifications of FCI that try to avoid the computationally expensive possible d-separating stage in the skeleton search of FCI. Conservative FCI[Ramsey et al.,  2006]  is a modification of FCI that makes fewer, but more robust orientations, to avoid error propagation.Nowzohour et al. [2015]  propose a greedy search with random restarts for learning "bow-free" ADMGs from data and introduce the score decomposition showed in Equation 4. Since Markov equivalence for ADMGs that are not MAGs has not yet been characterized, they use a greedy strategy for obtaining the empirical Markov equivalence class, based on score similarity. The authors use the estimated ADMGs to compute causal effects and show promising results. However, since they do not necessarily find maximal ancestral graphs, they do not compare against constraint-based methods or evaluate the accuracy of the learnt graphs.</figDesc><table><row><cell>Marginalizing out variables from causal DAGs results in</cell></row><row><cell>some additional equality constraints that are not condi-</cell></row><row><cell>tional independencies. Nested Markov models Shpitser</cell></row><row><cell>et al. [2013] extend SMCMs and are used to also model</cell></row><row><cell>these additional constraints. Shpitser et al. [2012] use a pe-</cell></row><row><cell>nalized likelihood score and a greedy search with TABU</cell></row><row><cell>list to identify a nested Markov model from discrete data.</cell></row></table></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" type="table" xml:id="tab_3"><head></head><label></label><figDesc>Eichler, and TS Richardson. Computing maximum likelihood estimates in recursive linear models with correlated errors. The Journal of Machine Learning Research, 10:2329-2348, 2009. D Geiger and D Heckerman. Learning Gaussian networks. In Proceedings of the 10th Annual Conference on Uncertainty in Artificial Intelligence, 1994. Causality: Models, Reasoning and Inference, volume 113 of Hardcover. Cambridge University Press, 2000. J Ramsey, P Spirtes, and J Zhang. Adjacency faithfulness and conservative causal inference. In Proceedings of the 22nd Conference on Uncertainty in Artificial Intelligence, 2006. TS Richardson. A factorization criterion for acyclic directed mixed graphs. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, 2009. TS Richardson and P Spirtes. Ancestral graph Markov models. The Annals of Statistics, 30(4):962-1030, 2002. I Shpitser, TS Richardson, JM Robins, and R Evans. Parameter and structure learning in nested markov models. In In UAI (Workshop on Causal Structure Learning), 2012. I Shpitser, R Evans, TS Richardson, and JM Robins. Sparse nested Markov models with log-linear parameters. In Proceedings of the 29h Conference on Uncertainty in Artificial Intelligence, 2013. P Spirtes. Introduction to causal inference. Journal of Machine Learning Research, 11:1643-1662, 2010. P Spirtes, C Glymour, and R Scheines. Causation, Prediction, and Search. MIT Press, second edition, January 2000. J Tian and J Pearl. On the identification of causal effects. Technical Report R-290-L, UCLA Cognitive Systems Laboratory, 2003. RE Tillman and P Spirtes. Learning equivalence classes of acyclic models with latent and selection variables from multiple datasets with overlapping variables. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics, 2011. Sofia Triantafillou and Ioannis Tsamardinos. Constraintbased causal discovery from multiple interventions over overlapping variable sets. Journal of Machine Learning Research, 16:2147-2205, 2015. I Tsamardinos, LE Brown, and CF Aliferis. The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1):31-78, 2006. S Warshall. A theorem on boolean matrices. J. ACM, 9(1): 11-12, 1962. J Zhang. On the completeness of orientation rules for causal discovery in the presence of latent confounders and selection bias. Artificial Intelligence, 172(16-17): 1873-1896, 2008. J Zhang and P Spirtes. A transformational characterization of Markov equivalence for directed acyclic graphs with latent variables. In Proceedings of the 21st Conference on Uncertainty in Artificial Intelligence, UAI 2005, pages 667-674, 2005. cited By 8.</figDesc><table><row><cell>RR Bouckaert. Bayesian belief networks: from construc-</cell></row><row><cell>tion to inference. PhD thesis, University of Utrecht,</cell></row><row><cell>1995.</cell></row><row><cell>C Brito and J Pearl. A new identification condition for re-</cell></row><row><cell>cursive models with correlated errors. Structural Equa-</cell></row><row><cell>tion Modeling, 9(4):459-474, 2002.</cell></row><row><cell>T Claassen, JM Mooij, and T Heskes. Learning sparse</cell></row><row><cell>causal models is not NP-hard. 
In Proceedings of the 29th</cell></row><row><cell>Annual Conference on Uncertainty in Artificial Intelli-</cell></row><row><cell>gence, 2013.</cell></row><row><cell>D Colombo, MH Maathuis, M Kalisch, and TS Richardson.</cell></row><row><cell>Learning high-dimensional directed acyclic graphs with</cell></row><row><cell>latent and selection variables. The Annals of Statistics,</cell></row><row><cell>40(1):294-321, 02 2012.</cell></row><row><cell>M Drton, M D Heckerman, D Geiger, and DM Chickering. Learn-</cell></row><row><cell>ing bayesian networks: The combination of knowledge</cell></row><row><cell>and statistical data. Machine Learning, 20(3):197-243,</cell></row><row><cell>1995.</cell></row><row><cell>C Nowzohour, M Maathuis, and P Bühlmann. Structure</cell></row><row><cell>learning with bow-free acyclic path diagrams. arXiv</cell></row><row><cell>preprint arXiv:1508.01717, 2015.</cell></row><row><cell>Greedy search in the space of PAGs instead of MAGs could</cell></row><row><cell>also be explored, since a transformational characterization</cell></row><row><cell>for Markov equivalent MAGs exists [Zhang and Spirtes,</cell></row><row><cell>2005].</cell></row></table><note>J Pearl.</note></figure>
			<note xmlns="http://www.tei-c.org/ns/1.0" place="foot" n="1" xml:id="foot_0">Changing edge orientation is equivalent to a removing the edge and then adding it re-oriented. To test for possible cycles efficiently, a matrix of all the non-trivial ancestral relationships (more than one variable in the path) is also cached. Reversing an edge X → Y creates a directed cycle only if X is a non-trivial ancestor of Y .</note>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><head>Acknowledgments</head><p>This work was funded by European Research Council (ERC) and is part of the CAUSALPATH -Next Generation Causal Analysis project, No 617393.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Markov equivalence for ancestral graphs</title>
		<author>
			<persName><surname>Ra Ali</surname></persName>
		</author>
		<author>
			<persName><surname>Ts Richardson</surname></persName>
		</author>
		<author>
			<persName><surname>Spirtes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">The Annals of Statistics</title>
		<imprint>
			<biblScope unit="volume">37</biblScope>
			<biblScope unit="issue">5B</biblScope>
			<biblScope unit="page" from="2808" to="2837" />
			<date type="published" when="2009-10">October 2009</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
