<?xml version="1.0" encoding="UTF-8"?>
<TEI xml:space="preserve" xmlns="http://www.tei-c.org/ns/1.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://www.tei-c.org/ns/1.0 https://raw.githubusercontent.com/kermitt2/grobid/master/grobid-home/schemas/xsd/Grobid.xsd"
 xmlns:xlink="http://www.w3.org/1999/xlink">
	<teiHeader xml:lang="en">
		<fileDesc>
			<titleStmt>
				<title level="a" type="main">Situation Calculus Semantics for Actual Causality</title>
			</titleStmt>
			<publicationStmt>
				<publisher/>
				<availability status="unknown"><licence/></availability>
			</publicationStmt>
			<sourceDesc>
				<biblStruct>
					<analytic>
						<author role="corresp">
							<persName><forename type="first">Vitaliy</forename><surname>Batusov</surname></persName>
							<email>vbatusov@cse.yorku.ca</email>
							<affiliation key="aff0">
								<orgName type="institution">York University Toronto</orgName>
								<address>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<author>
							<persName><forename type="first">Mikhail</forename><surname>Soutchanski</surname></persName>
							<affiliation key="aff1">
								<orgName type="institution">Ryerson University</orgName>
								<address>
									<settlement>Toronto</settlement>
									<country key="CA">Canada</country>
								</address>
							</affiliation>
						</author>
						<title level="a" type="main">Situation Calculus Semantics for Actual Causality</title>
					</analytic>
					<monogr>
						<imprint>
							<date/>
						</imprint>
					</monogr>
					<idno type="MD5">80443F73D122BC469AC255134A1CABD8</idno>
				</biblStruct>
			</sourceDesc>
		</fileDesc>
		<encodingDesc>
			<appInfo>
				<application version="0.7.2" ident="GROBID" when="2023-03-23T20:59+0000">
					<desc>GROBID - A machine learning software for extracting information from scholarly documents</desc>
					<ref target="https://github.com/kermitt2/grobid"/>
				</application>
			</appInfo>
		</encodingDesc>
		<profileDesc>
			<abstract>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>The state-of-the-art definitions of actual cause by Pearl and Halpern suffer from the modest expressivity of causal models. We develop a new definition of actual cause in the context of situation calculus (SC) basic action theories. As a result, we avoid the paradoxes that arise in causal models and can identify complex actual causes of conditions expressed in first-order logic. We provide a formal translation from causal models to SC and establish a relationship between the definitions. Using examples, we show that long-standing disagreements between alternative definitions of actual causality can be mitigated by faithful modelling.</p></div>
			</abstract>
		</profileDesc>
	</teiHeader>
	<text xml:lang="en">
		<body>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="1">Introduction</head><p>Actual causality, also known as token-level causality, is concerned with finding in a given scenario a singular event that caused another event. This is in contrast to type-level causality which is concerned with universal causal mechanisms governing the world. The leading line of computational inquiry into actual causality was pioneered by <ref type="bibr" target="#b9">[Pearl, 1998;</ref><ref type="bibr">2000]</ref> and continued by <ref type="bibr" target="#b0">[Halpern and Pearl, 2005;</ref><ref type="bibr" target="#b1">Halpern, 2000;</ref><ref type="bibr" target="#b0">Eiter and Lukasiewicz, 2002;</ref><ref type="bibr">Hopkins, 2005;</ref><ref type="bibr">Halpern, 2015;</ref><ref type="bibr">2016]</ref> and in other publications. We call it the HP approach. It is based on the concept of structural equations <ref type="bibr" target="#b12">[Simon, 1977]</ref> and implemented in the framework of causal models. The HP approach follows the Humean counterfactual definition of causation, which posits that saying "an event A caused an outcome B" is the same as saying "if A had not been, then B never had existed". This definition is well-known to suffer from the problem of preemption: it could be the case that in the absence of event A, B would still have occurred due to another event, which in the original scenario was preempted by A. HP address this by performing counterfactual analysis only under carefully selected contingencies which suspend some subset of the model's mechanisms. 
Selecting proper contingencies proved to be a challenging task; as mentioned in <ref type="bibr">[Halpern, 2016]</ref> on p.27, "The jury is still out on what the 'right' definition of causality is".</p><p>The HP approach is prone to producing results that cannot be reconciled with intuitive understanding due to the limited expressiveness of causal models <ref type="bibr">[Hopkins, 2005;</ref><ref type="bibr">Hopkins and Pearl, 2007]</ref>. The ontological commitments of structural causal models resemble those of propositional logic: they have no objects, no relationships, no time, and no support for quantified causal queries. As a remedy, <ref type="bibr">[Hopkins, 2005;</ref><ref type="bibr">Hopkins and Pearl, 2007]</ref> leverage the expressive power of first-order logic and the robustness of the situation calculus (SC) <ref type="bibr" target="#b11">[Reiter, 2001]</ref>. To formulate counterfactuals within SC, they allow arbitrary modifications in a sequence of actions, e.g. removing actions that serve as preconditions for subsequent actions. They do not, however, define actual causality.</p><p>Given that theories of actual causality based on structural equations share the same ailments <ref type="bibr" target="#b7">[Menzies, 2014;</ref><ref type="bibr" target="#b0">Glymour et al., 2010]</ref>, it seems natural to explore actual causality from a different perspective. We do this in the language of SC under the classical Tarskian semantics, where the notion of a cause naturally aligns with the notion of an action, and the effect can be specified by a FOL formula with quantifiers over object variables. In contrast to HP, whose analysis is based on observing the end results of interventions, we analyze the dynamics which lead to those results. Our developments are based on a small set of plausible intuitions.</p><p>The next section briefly summarizes SC. Section 3 motivates our approach and supplies a running example. Section 4 characterizes causes which achieve an effect. 
Section 5 explores maintenance causes-actions which protect existing conditions from being lost. In Section 6, we combine achievement and maintenance causes into an all-encompassing notion of actual cause. In Section 7, we outline the HP approach and, in Section 8, formally connect it to ours. Finally, we briefly compare our definition to others using examples and discuss related work.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="2">Situation Calculus</head><p>In the situation calculus <ref type="bibr" target="#b5">[McCarthy and Hayes, 1969;</ref><ref type="bibr" target="#b11">Reiter, 2001]</ref>, the constant S 0 denotes the initial situation that represents an empty list of actions, while the complex situation term do([α 1 , ..., α n ], S 0 ) represents the situation that results from executing actions α 1 , ..., α n consecutively so that α 1 is executed in S 0 , and α n is executed last. If none of the action terms α i have variables, then we call this situation term an (actual) narrative. An action term α i may occur in the narrative more than once at different positions. The set of all situations can be visualized as a tree with a strict partial-order relation s 1 ⊏ s 2 on situations s 1 , s 2 , where s 1 ⊑ s 2 abbreviates s 1 ⊏ s 2 ∨ s 1 = s 2 . It is characterized by the foundational domain-independent axioms (Σ) included in a basic action theory (BAT) D that also includes axioms D S0 describing the initial situation, and action precondition axioms D ap using the predicate Poss(a, s) to say when an action a is possible in s. For each action function there is one precondition axiom Poss(A(x), s) ↔ Π A (x, s), where as usual, all free variables are implicitly ∀-quantified, and Π A (x, s) is a formula uniform in s, meaning that it has no occurrences of Poss, no ⊏, no other situation terms, and no quantifiers over situations. For each fluent F , D includes a successor state axiom (SSA) F (x, do(a, s)) ↔ ψ + (x, a, s) ∨ F (x, s) ∧ ¬ψ − (x, a, s), where the fluent predicate F (x, s) represents a situation-dependent relation over a tuple of objects x, and the uniform formulas ψ + (x, a, s) and ψ − (x, a, s) specify the action terms that under certain application-dependent conditions have a positive effect on fluent F (make F true) or a negative effect on it (make F false), respectively. 
The SSAs are derived under the causal completeness assumption <ref type="bibr" target="#b10">[Reiter, 1991]</ref> that all effects of actions on fluents are explicitly represented. There are a number of auxiliary axioms, such as unique name axioms, that are included in D. The common abbreviation executable(s) means that each action mentioned in the situation term s was possible in the situation in which it was executed. The basic computational task, called the projection problem, is the task of establishing whether a BAT entails a sentence φ(σ) for an executable ground situation σ, where φ(s) is a formula uniform in s. This problem can be solved using the one-step regression operator ρ. The expression ρ[ϕ, α] denotes the formula obtained from ϕ by replacing each fluent atom F in ϕ with the right-hand side of the SSA for F where the action variable a is instantiated with the ground action α, and then simplified using unique name axioms for actions and constants. Similarly to the theorem about multi-step regression R presented in <ref type="bibr" target="#b10">[Reiter, 1991]</ref>, one can prove that given a BAT D, a formula ϕ(s) uniform in s, and a ground action term α, we have that D |= ∀s. ϕ(do(α, s)) ↔ ρ[ϕ(s), α].</p></div>
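The machinery just described can be made concrete with a small executable sketch. The following Python fragment is our own propositional toy, not part of the paper's formalism: a state stands for a situation, poss encodes the precondition axioms, do applies a successor state axiom by progression, and the regression theorem lets ρ[ϕ, α] be read semantically as "ϕ after doing α". The fluent and action names are borrowed from the clock of the running example.

```python
# A minimal propositional sketch (ours, not the paper's) of a BAT:
# poss gives the precondition axioms D_ap, do() applies the SSAs
# by progression, and projection is solved by stepping through a narrative.

S0 = {"ClockOn": False}                     # D_S0: the clock is initially off

def poss(a, s):
    """Precondition axioms: Poss(tick, s) <-> ClockOn(s); c_on is always possible."""
    return s["ClockOn"] if a == "tick" else True

def do(a, s):
    """SSA: ClockOn(do(a, s)) <-> a = c_on \\/ ClockOn(s)."""
    return {"ClockOn": a == "c_on" or s["ClockOn"]}

def holds_after(actions, phi, s=S0):
    """Projection by progression: None if the narrative is not executable."""
    for a in actions:
        if not poss(a, s):
            return None
        s = do(a, s)
    return phi(s)

clock_on = lambda s: s["ClockOn"]
assert holds_after(["tick"], clock_on) is None          # tick needs c_on first
assert holds_after(["c_on", "tick"], clock_on) is True

# One-step regression, read semantically: rho[phi, a](s) := phi(do(a, s))
regress = lambda phi, a: (lambda s: phi(do(a, s)))
assert regress(clock_on, "c_on")(S0) is True            # phi(do(c_on, S0))
```

The symbolic regression operator ρ of the text manipulates formulas rather than states, but by the regression theorem the two views agree on every executable situation.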
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="3">Motivation</head><p>We propose to axiomatize a dynamic world using a SC theory and derive actual causality from first principles. Specifically, to represent a "scenario", we consider a BAT D and a narrative describing the actions or events which transpired in the world characterized by D. We do not formally distinguish between agent actions and nature's events. The narrative is specified by an executable ground situation term σ called the "actual situation". An effect for which we seek to identify causes is given by a formula ϕ(s) uniform in situation s. Since actions are the sole source of change in a BAT, we identify the set of potential causes of an effect ϕ with the set of all ground action terms occurring in σ. Example 1. A D flip-flop is a digital circuit capable of storing one bit of information. A basic D flip-flop has two Boolean inputs, D and enable, and one output, Q. Each input and output signal can be either at the low level (modelled as false), or at the high level (modelled as true). If an input enable is high, every time the clock "ticks", the output Q assumes the value of the main input D and maintains it until the next tick. When the signal enable is low, the flip-flop preserves the value of Q regardless of D and ticks.  Let d, e 1 , and e 2 be constants that represent the input signals. Let the action functions be hi(x), lo(x), tick, and c on, where the first two actions set signal x to high or to low voltage level, respectively, tick represents the action of the clock, and c on turns the clock on, making tick possible. The fluent ClockOn(s) represents the state of the clock, High(x, s) represents the logical value of signal x, En(s) represents the output of the OR-gate, and Q(s) is the output of the flip-flop.</p></div>
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Let the narrative σ be do([c on, hi(d), tick, hi(e 1 ), hi(e 2 ), tick, lo(e 1 ), lo(e 2 ), tick, lo(d), tick], S 0 ), and let the effect of interest be Q(s). In the initial situation, we have that ∀x(¬High(x, S 0 )), ¬Q(S 0 ), ¬En(S 0 ). The following BAT models the operation of the circuit.</p><p>Poss(tick, s) ↔ ClockOn(s), Poss(c on, s), Poss(hi(x), s) ↔ ¬High(x, s), Poss(lo(x), s) ↔ High(x, s),</p><formula xml:id="formula_0">ClockOn(do(a, s)) ↔ a = c on ∨ ClockOn(s), High(x, do(a, s)) ↔ a = hi(x) ∨ High(x, s) ∧ a ≠ lo(x), En(do(a, s)) ↔ a = hi(e 1 ) ∨ a = hi(e 2 ) ∨ En(s) ∧ ¬[a = lo(e 1 ) ∧ ¬High(e 2 , s)] ∧ ¬[a = lo(e 2 ) ∧ ¬High(e 1 , s)], Q(do(a, s)) ↔ [a = tick ∧ En(s) ∧ High(d, s)] ∨ Q(s) ∧ ¬[a = tick ∧ En(s) ∧ ¬High(d, s)].</formula><p>Figure <ref type="figure" target="#fig_0">2</ref> graphically shows the truth values, relative to D, of the key ground fluents in situation S 0 and after each subsequent action in σ. Observe that all fluents are initially false, shown as the thick lower edges; the #1 action c on makes the subsequent tick actions (#3, #6, #9, #11) possible; the actions hi(d), hi(e 1 ), hi(e 2 ), lo(e 1 ) change the voltage levels of the corresponding signals; hi(e 1 ) also changes the state of En(s); the second occurrence of tick (#6) makes the output Q(s) true, but the other occurrences of tick are inconsequential.</p><p>It is obvious that the 6th action, tick, is a cause of Q(s) in σ, having acted as the proverbial last straw that broke the camel's back, but so are the actions hi(d) and hi(e 1 ), having created the right circumstances for the back-breaking. Action #6 would accomplish nothing had the flip-flop not been enabled and the input bit set to high. The task before us is to introduce general formal criteria for identifying such actions.</p><p>We axiomatically recognize two kinds of causal roles which events may assume. 
Achievement causes are the events which realize-in whole or in part-either the condition of interest or the preconditions of other achievement causes. Maintenance causes are the events which prevent other events from falsifying the condition of interest. We use the generic term actual cause to refer to an event which contributes to the effect of interest via a combination of these causal roles. Before we proceed, we, like HP, introduce the notion of a causal setting which formally captures a scenario. Definition 1. A (SC) causal setting is a triple ⟨D, σ, ϕ(s)⟩ where D is a BAT, σ is a ground situation term such that D |= executable(σ), and ϕ(s) is a SC formula uniform in s such that D |= ∃s(executable(s) ∧ ϕ(s)).</p><p>Since the BAT D is fixed in our approach, we typically refer to ⟨D, σ, ϕ(s)⟩ as just ⟨σ, ϕ(s)⟩.</p></div>
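Before turning to the definitions, the flip-flop BAT of Example 1 can be simulated directly. The sketch below is our own hand-transcription of the axioms into Python (executability checks are omitted, since every action of σ is possible at its position); each line of do() mirrors one successor state axiom, and the loop reproduces the fluent values depicted in Figure 2.

```python
# A propositional transcription (ours) of the flip-flop BAT of Example 1.
# Actions are tuples like ("hi", "e1") or ("tick",); a state records the
# truth values of the fluents ClockOn, High, En, and Q.

def do(a, s):
    op, x = (a if len(a) == 2 else (a[0], None))
    high = dict(s["High"])
    if op == "hi": high[x] = True
    if op == "lo": high[x] = False
    en = ((op == "hi" and x in ("e1", "e2")) or
          (s["En"] and not (op == "lo" and x == "e1" and not s["High"]["e2"])
                   and not (op == "lo" and x == "e2" and not s["High"]["e1"])))
    tick = op == "tick"
    q = ((tick and s["En"] and s["High"]["d"]) or
         (s["Q"] and not (tick and s["En"] and not s["High"]["d"])))
    return {"ClockOn": op == "c_on" or s["ClockOn"],
            "High": high, "En": en, "Q": q}

S0 = {"ClockOn": False, "En": False, "Q": False,
      "High": {"d": False, "e1": False, "e2": False}}
sigma = [("c_on",), ("hi", "d"), ("tick",), ("hi", "e1"), ("hi", "e2"),
         ("tick",), ("lo", "e1"), ("lo", "e2"), ("tick",), ("lo", "d"),
         ("tick",)]

trace = [S0]
for a in sigma:
    trace.append(do(a, trace[-1]))

assert not trace[5]["Q"] and trace[6]["Q"]   # tick #6 achieves Q(s)
assert all(t["Q"] for t in trace[6:])        # ...and Q persists to the end
assert not trace[8]["En"]                    # lo(e2) (#8) disables the latch
```

The final assertion foreshadows Section 5: once lo(e 2 ) has made En(s) false, the later tick actions can no longer falsify Q(s).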
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="4">The Achievement Causal Chain</head><p>Intuition provides few definite truths about actual causality, but we hold the following to be self-evident: If some action α of the action sequence σ triggers the formula ϕ(s) to change its truth value from false to true relative to D and if there is no action in σ after α that changes the value of ϕ(s) back to false, then α is an actual cause of achieving ϕ(s) in σ. This statement is sound because (a) the narrative σ determines a total linear order on its actions, (b) change is associated with a particular element of that order, and (c) no change comes about other than by an action of σ. The next definition states this observation formally. Definition 2. A causal setting C = ⟨σ, ϕ(s)⟩ satisfies the achievement condition via the situation term do(α, σ′) ⊑ σ iff</p><formula xml:id="formula_1">D |= ¬ϕ(σ′) ∧ ∀s (do(α, σ′) ⊑ s ⊑ σ → ϕ(s)).</formula><p>Whenever a causal setting C satisfies the achievement condition via do(α, σ′), we say that the ground action α executed in σ′ is a (primary) achievement cause in C.</p><p>If a causal setting does not satisfy the achievement condition and ϕ(s) is non-tautological and holds throughout the narrative σ, then we ascribe the achievement of ϕ(s) to an unknowable cause masked by the initial situation S 0 . If ϕ(s) is a tautology, it legitimately has no cause. If ϕ(σ) is not entailed by D, then its achievement cause truly does not exist. Example 2 (continued). 
The entailment of Definition 2 holds when α is tick and σ′ is do([c on, hi(d), tick, hi(e 1 ), hi(e 2 )], S 0 ), meaning that the action #6 (tick executed after σ′) is the achievement cause of Q(s) in σ.</p><p>The notion of the achievement condition forms our basic tool which, when used together with the single-step regression operator ρ, helps us not only find the single action that brings about the effect of interest, but also identify the actions that build up to it. Intuitively, ρ[ϕ(s), α] is the weakest precondition that must hold in a previous situation σ in order for ϕ(s) to hold after performing α in σ. If we prove α to be an achievement cause of ϕ(s) in do(α, σ), we can use regression ρ to obtain a formula that holds at σ and constitutes a necessary and sufficient condition for the achievement of ϕ(s) via α. This new formula may have an achievement cause of its own which, by virtue of α, also constructively contributes to the achievement of ϕ(s). By repeating this process, we can uncover the entire chain of actions that incrementally build up to the achievement of the ultimate effect. At the same time, we must not overlook the condition which makes the execution of α in σ even possible. This condition is conveniently captured by the right-hand side Π α (s) of the precondition axiom for α and may have achievement causes of its own. The following inductive definition formalizes these intuitions. Definition 3. If a causal setting C = ⟨σ, ϕ(s)⟩ satisfies the achievement condition via some situation term do(A(t̄), σ′) ⊑ σ and α is an achievement cause in the causal setting ⟨σ′, ρ[ϕ(s), A(t̄)] ∧ Π A (t̄, s)⟩, then α is an achievement cause in C.</p><p>Clearly, the process of discovering intermediary achievement causes using single-step regression repeatedly cannot continue beyond S 0 . Since the given narrative σ is a finite sequence, the achievement causes of C also form a finite sequence which we call the achievement causal chain of C. 
Note that the actions of the achievement causal chain need not be adjacent in the action sequence of σ. Example 3 (continued). We found that the action tick (#6) executed in σ′ = do([c on, hi(d), tick, hi(e 1 ), hi(e 2 )], S 0 ) is the achievement cause of Q(s). We can now use Definition 3 to find in σ the complete causal chain leading up to Q(s). The one-step regression of Q(s) through tick is</p><formula xml:id="formula_2">ρ[Q(s), tick] = (¬En(s) ∨ High(d, s)) ∧ (En(s) ∨ Q(s)).</formula><p>Call ψ(s) the conjunction of this formula and ClockOn(s), the precondition of tick. By Definition 2, the achievement cause of ψ(s) is the action hi(e 1 ) executed in do([c on, hi(d), tick], S 0 ). Therefore, hi(e 1 ) is a secondary achievement cause of Q(s). Applying Definition 3 again, we formulate another causal setting with the query</p><formula xml:id="formula_3">ρ[ψ(s), hi(e 1 )] ∧ Π hi (e 1 , s) ≡ High(d, s) ∧ ClockOn(s) ∧ ¬High(e 1 , s)</formula><p>and situation do([c on, hi(d), tick], S 0 ), where hi(d) is an achievement cause as a part of do([c on, hi(d)], S 0 ). Regressing High(d, s) ∧ ClockOn(s) ∧ ¬High(e 1 , s) just past hi(d), we obtain ¬High(e 1 , s) ∧ ClockOn(s), for which c on is an achievement cause. Notice that the first action, c on, established the preconditions for tick; were it not for c on, tick would have never happened! There are no more achievement causes of Q(s) in σ aside from those already identified: c on, hi(d), hi(e 1 ), tick. Observe that these are indeed the key events that lead to the achievement of Q(s) in σ.</p><p>It is worth noting that our approach handled a classic instance of (late) preemption without appealing to contingencies occurring in neighbouring possible worlds, which is the essential strategy in counterfactual analyses. 
Namely, it correctly excluded hi(e 2 ) from the causal chain for being preempted by hi(e 1 ), although hi(e 2 ) would have been sufficient, in the absence of hi(e 1 ), for achieving En(s) and Q(s).</p></div>
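When the BAT can be evaluated by progression, as in the running example, Definitions 2 and 3 suggest a simple procedure: find the action that flips ϕ to true for good, replace ϕ by its one-step regression conjoined with that action's precondition, truncate the narrative, and repeat. The sketch below is our own illustration of this loop; the regression ρ[ϕ, α] is computed semantically as ϕ ∘ do(α, ·), which the regression theorem justifies. It recovers exactly the chain c on (#1), hi(d) (#2), hi(e 1 ) (#4), tick (#6).

```python
# Our executable sketch of the achievement causal chain (Definitions 2-3)
# on the flip-flop BAT. Regression is semantic: rho[phi, a](s) := phi(do(a, s)).

def do(a, s):
    op, x = (a if len(a) == 2 else (a[0], None))
    high = dict(s["High"])
    if op == "hi": high[x] = True
    if op == "lo": high[x] = False
    en = ((op == "hi" and x in ("e1", "e2")) or
          (s["En"] and not (op == "lo" and x == "e1" and not s["High"]["e2"])
                   and not (op == "lo" and x == "e2" and not s["High"]["e1"])))
    tick = op == "tick"
    q = ((tick and s["En"] and s["High"]["d"]) or
         (s["Q"] and not (tick and s["En"] and not s["High"]["d"])))
    return {"ClockOn": op == "c_on" or s["ClockOn"],
            "High": high, "En": en, "Q": q}

def poss(a, s):
    op, x = (a if len(a) == 2 else (a[0], None))
    if op == "tick": return s["ClockOn"]
    if op == "hi": return not s["High"][x]
    if op == "lo": return s["High"][x]
    return True                                      # c_on is always possible

def achieves(trace, phi):
    """Definition 2: the position whose action flips phi to true for good."""
    for i in range(len(trace) - 1, 0, -1):
        if phi(trace[i]) and not phi(trace[i - 1]):
            return i if all(phi(t) for t in trace[i:]) else None
    return None

def chain(trace, actions, phi):
    """Definition 3: recurse on the setting <sigma', rho[phi, a] /\\ Pi_a>."""
    out, i = [], achieves(trace, phi)
    while i is not None:
        out.append(i)
        a = actions[i - 1]
        phi = (lambda a, p: lambda s: p(do(a, s)) and poss(a, s))(a, phi)
        trace, actions = trace[:i], actions[:i - 1]
        i = achieves(trace, phi)
    return sorted(out)

S0 = {"ClockOn": False, "En": False, "Q": False,
      "High": {"d": False, "e1": False, "e2": False}}
sigma = [("c_on",), ("hi", "d"), ("tick",), ("hi", "e1"), ("hi", "e2"),
         ("tick",), ("lo", "e1"), ("lo", "e2"), ("tick",), ("lo", "d"),
         ("tick",)]
trace = [S0]
for a in sigma:
    trace.append(do(a, trace[-1]))

assert chain(trace, sigma, lambda s: s["Q"]) == [1, 2, 4, 6]
```

Note how hi(e 2 ) (#5) never enters the chain: by the time its effect would matter, En(s) has already been achieved by hi(e 1 ), which is precisely the preemption behaviour discussed above.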
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="5">Maintenance Causes</head><p>The achievement causal chain explains precisely how a condition comes to be, but not how it persists throughout the remaining actions of the narrative. The narrative may well contain actions which could destroy the effect but were somehow neutralized. We formalize our intuitive understanding of protective actions using the notion of maintenance. Our general considerations are as follows. First, in a causal setting C = ⟨σ, ϕ(s)⟩, if D ⊭ ϕ(σ), then there is nothing to maintain. Therefore, C may have a maintenance cause only if D |= ϕ(σ). Second, every instance of maintenance involves at least two actions of σ, where one action-call it a threat-would falsify the goal ϕ were it not for the other action, the maintenance cause itself. Obviously, the maintenance cause must occur in σ before the corresponding threat. Third, if C satisfies the achievement condition via some do(α, σ′), then neither α nor any action of σ′ may be a threat to ϕ(s), in accordance with the first consideration. If, alternatively, ϕ(s) holds at S 0 and throughout σ, then any action of σ except the very first one may be a threat.</p><p>The key property of a threat is that it has the potential to falsify the effect (but did not do so in the narrative). A test for this property involves the construction of a hypothetical scenario where the suspected threat falsifies the effect. Such a test is by nature counterfactual and, therefore, gives rise to the usual question: what alternative scenarios should we admit to the analysis? For the sake of generality, we require only that the alternative scenarios obey the rules of the world, and for the sake of tractability, that they do not contain too many actions. Both requirements are fulfilled by the following broad definition, where len(s) is the number of actions in a situation term s and len(S 0 ) = 0. Definition 4. 
A causal setting C = ⟨σ, ϕ(s)⟩ satisfies the maintenance condition via a ground situation term do(τ, σ′) ⊑ σ iff σ′ ≠ S 0 and D |= ∀s(σ′ ⊑ s ⊑ σ → ϕ(s)) and D |= ∃s (executable(do(τ, s)) ∧ ϕ(s) ∧ ¬ϕ(do(τ, s)) ∧ len(do(τ, s)) ≤ len(σ)), in which case τ is a threat in C.</p><p>A tighter definition of a threat would artificially decrease the search space of maintenance causes. If, through unchecked generality, we misdiagnose a harmless action as a threat, the subsequent achievement cause analysis will simply fail to identify an action which neutralized the threat's harmful effects, and so no spurious maintenance cause will be reported.</p><p>Before we define what a maintenance cause is, consider a threat τ to ϕ(s). By the definition of regression, ρ[¬ϕ(s), τ ] is a formula that must hold at a situation in order for ϕ(s) to become false after executing τ there. Since we would like to preserve ϕ(s), we are interested in the negation of this formula. But by the regression theorem, D |= ¬ρ[¬ϕ, τ ] ↔ ρ[ϕ, τ ], so the formula expressing the maintenance goal is simply ρ[ϕ(s), τ ]. Notably, the set of achievement causes of this formula will include the achievement causes of ϕ(s), because, intuitively, ϕ(s) holds after τ in part due to being achieved.</p><p>Definition 5. Suppose a causal setting C = ⟨σ, ϕ(s)⟩ satisfies the maintenance condition via some situation term do(τ, σ′) ⊑ σ, where τ is a threat in C. Let C′ be the related causal setting ⟨σ′, ρ[ϕ(s), τ ]⟩. If α is an achievement cause in C′, we say that α is a maintenance cause in C. Example 4. Consider a formula ψ(s) with quantifiers over object variables in the same BAT, except that for the sake of the example there is a countably infinite set of signal constants c i for i ≥ 1 with unique names. Let the query ψ(s) be ∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s)) -"there are at least two high signals". Let the actual situation σ be do([hi(c 1 ), hi(c 2 ), hi(c 3 ), lo(c 1 )], S 0 ).</p><p>By Definition 4, lo(c 1 ) is a threat in this causal setting. 
By Definition 5, it yields a related causal setting with situation do([hi(c 1 ), hi(c 2 ), hi(c 3 )], S 0 ) and query ρ[ψ(s), lo(c 1 )], which simplifies to</p><formula xml:id="formula_4">∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s) ∧ x ≠ c 1 ∧ y ≠ c 1 ).</formula><p>Applying Definition 2, we see this related causal setting has hi(c 3 ) as an achievement cause. Therefore, the original causal setting has hi(c 3 ) as a maintenance cause.</p></div>
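Example 4 is small enough to replay mechanically. In the sketch below (our own illustration, not the paper's formalism), a state is simply the set of currently-high signals, and the maintenance goal ρ[ψ, lo(c 1 )] is evaluated semantically as "ψ after lo(c 1 )".

```python
# Our sketch of Definitions 4-5 on Example 4: states are sets of high
# signals; the threat and the maintenance cause are checked directly.

def do(a, s):
    op, x = a
    return s | {x} if op == "hi" else s - {x}

psi = lambda s: len(s) >= 2          # "at least two signals are high"

sigma = [("hi", "c1"), ("hi", "c2"), ("hi", "c3"), ("lo", "c1")]
trace = [set()]
for a in sigma:
    trace.append(do(a, trace[-1]))

# Definition 4: lo(c1) is a threat -- in an executable alternative of length
# <= len(sigma) it flips psi from true to false (witness state: {c1, c2}).
witness = {"c1", "c2"}
assert psi(witness) and not psi(do(("lo", "c1"), witness))

# Definition 5: in the related setting (narrative up to the threat, query
# rho[psi, lo(c1)]), the achiever is hi(c3) -- hence a maintenance cause.
goal = lambda s: psi(do(("lo", "c1"), s))
flip = [i for i in range(1, 4) if goal(trace[i]) and not goal(trace[i - 1])]
assert flip == [3]                   # action #3 is hi(c3)
```

The goal formula ignores c 1 , exactly as the simplified regressed query above does with its conjuncts x ≠ c 1 ∧ y ≠ c 1 .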
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="6">Actual Cause</head><p>Definitions 2, 3, and 5 are centered around the top level of a given causal setting and fail to capture the interplay between achievement and maintenance causes at the deeper levels of analysis. Specifically, suppose a causal setting C′ arises via the achievement (resp., maintenance) condition during the analysis of another setting C. On its own, C′ may have both achievement and maintenance causes, but, by Definition 3 (resp., 5), only the former are counted as causes of C. On the natural assumption that all causes of a descendant setting are equally relevant to the ancestor setting, the following definition inductively combines all possible interactions between the achievement and maintenance conditions under the generic term actual cause. Definition 6. Let α be a ground action and σ a narrative. We say that α is an actual cause in a causal setting C = ⟨σ, ϕ(s)⟩ if at least one of the following conditions holds.</p><p>(a) C satisfies the achievement condition via do(α, σ′) ⊑ σ.</p><p>(b) C satisfies the achievement condition via some situation term do(A(t̄), σ′) ⊑ σ and α is an actual cause in the causal setting ⟨σ′, ρ[ϕ(s), A(t̄)] ∧ Π A (t̄, s)⟩. (c) C satisfies the maintenance condition via do(τ, σ′) ⊑ σ, and α is an actual cause in ⟨σ′, ρ[ϕ(s), τ ]⟩. Example 5 (continued from 4). By Definition 6, the actions hi(c 1 ), hi(c 2 ), hi(c 3 ) are all actual causes of ψ(s). Notice that maintenance causes are just as important as achievement causes: the condition ψ(s) was realized through the properties of objects c 1 , c 2 , but preserved by virtue of c 2 , c 3 . Achievement cause analysis alone disregards the role of c 3 . Example 6. Consider again our running example. By Definition 6, the 8th action lo(e 2 ) is a non-trivial actual cause of Q(s) discovered through a combination of two maintenance conditions. 
Intuitively, it is causally important because it disables the flip-flop, preventing the actions tick (#11) and lo(d) (#10) from destroying Q(s) -both are threats in their respective settings.</p></div><div xmlns="http://www.tei-c.org/ns/1.0"><head n="7">The HP Approach</head><p><ref type="bibr" target="#b0">Halpern and Pearl (2005)</ref>, following the motivation of <ref type="bibr" target="#b4">[Lewis, 1974]</ref>, base their formal account of actual causality on the notion of a counterfactual -a conditional statement whose premise is contrary to fact. They construct counterfactual statements in a formal language whose semantics is defined relative to a causal setting (see below). A causal model M is a tuple ⟨U, V, R, F⟩, where U and V are finite disjoint sets of exogenous and endogenous variables, respectively, with each variable taking various values from an underlying domain. The function R maps every variable Z ∈ U ∪ V to a nonempty set R(Z) of possible values. F is a set of total functions {F X : × Z∈U ∪V\{X} R(Z) → R(X) | X ∈ V} which act like structural equations; each tuple of values assigned to the variables (excluding X) maps to a single value of X. Intuitively, for each endogenous variable X, F X encodes the entirety of causal laws which determine X by mapping every value assignment on all variables except X to some value of X. The values of exogenous variables U are set externally; a tuple VU of values for U is called a context of M , and the pair (M, VU ) constitutes a causal setting. The tuple ⟨U, V, R⟩ is called the signature of M . The set of functions F determines a partial dependency order X ⪯ Y on endogenous variables X, Y . Namely, Y depends on X, X ⪯ Y , if either X affects Y directly by virtue of F Y , or indirectly via intermediate functions. It is ubiquitously assumed that a given causal model is acyclic, that is, for each context VU of M , the relation ⪯ on V is a partial order: antisymmetric, reflexive, and transitive. 
This assumption guarantees the existence of a unique solution to the equations F.</p><p>The language of the HP approach is as follows. A primitive event is a formula X = V X where X ∈ V and V X ∈ R(X). We call a Boolean combination of primitive events a HP query. A general causal formula is one of the form</p><formula xml:id="formula_5">[Y 1 ← V Y1 , . . . , Y k ← V Y k ]φ where φ is a HP query, Y i for 1 ≤ i ≤ k are distinct variables from V, and V Yi ∈ R(Y i ). (We abbreviate [Y 1 ← V Y1 , . . . , Y k ← V Y k ] as [ Ȳ ← V̄Y ]</formula><p>and call it an intervention.) A primitive event X = V X is satisfied in a causal setting (M, VU ), denoted (M, VU ) |= (X = V X ), if X takes on the value V X in the unique solution to the equations F once U are set to VU . HP queries are interpreted following the usual rules for Boolean connectives. Finally,</p><formula xml:id="formula_6">(M, VU ) |= [Y 1 ← V Y1 , . . . , Y k ← V Y k ]φ iff (M′, VU ) |= φ, where M′ is obtained from M by replacing each F Yi ∈ F by the trivial function F Yi : × Z∈U ∪V\{Y i } R(Z) → {V Yi } that fixes Y i to the constant V Yi for</formula><p>all the values of the arguments.</p><p>In this paper, we focus on the so-called modified HP definition, or HP m , of actual cause <ref type="bibr">[Hopkins, 2005;</ref><ref type="bibr">Halpern, 2015;</ref><ref type="bibr">2016]</ref> because it is the most recent, intuitively appealing, and thoroughly connected with older definitions by formal results in <ref type="bibr">[Halpern, 2016]</ref>. According to this definition, the conjunction of primitive events X̄ = V̄X (short for</p><formula xml:id="formula_7">X 1 = V X1 ∧ . . . ∧ X k = V X k )</formula><p>is an actual cause of a HP query φ in (M, VU ) iff: 1. (M, VU ) |= ( X̄ = V̄X ) ∧ φ. 2. There exists a set W̄ (disjoint from X̄) of variables in V with (M, VU ) |= ( W̄ = V̄W ) and a setting</p><formula xml:id="formula_8">V̄′ X of variables X̄ such that (M, VU ) |= [ X̄ ← V̄′ X , W̄ ← V̄W ]¬φ.</formula><p>3. No proper sub-conjunction of ( X̄ = V̄X ) satisfies 1, 2.</p><p>Example 7. 
Consider the two well-known "Forest Fire" examples from <ref type="bibr" target="#b0">[Halpern and Pearl, 2005;</ref><ref type="bibr">Halpern, 2016]</ref>: a forest fire (FF) can result from lightning (L) and from a dropped match (MD). In the conjunctive causal model, the fire starts only if both occur; in the disjunctive model M d , either of the two suffices.</p></div>
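The modified definition can be checked mechanically on the disjunctive Forest Fire model. The following brute-force sketch is our own illustration (variable names L, MD, FF as in Example 8; the context, which makes both lightning and the match occur, is folded into constant equations):

```python
from itertools import product, combinations

# Disjunctive Forest Fire model M_d: endogenous L, MD, FF with FF = L or MD;
# the context fixes L = MD = 1 (folded into constant equations for brevity).
EQS = {"L": lambda v: 1, "MD": lambda v: 1, "FF": lambda v: v["L"] or v["MD"]}
ORDER = ["L", "MD", "FF"]                    # respects the dependency order

def solve(intervention):
    """Unique solution of the acyclic equations under an intervention."""
    v = {}
    for x in ORDER:
        v[x] = intervention.get(x, EQS[x](v))
    return v

def is_cause(xs, phi):
    """Modified HP definition for xs, a dict {variable: actual value}."""
    actual = solve({})
    if not all(actual[x] == v for x, v in xs.items()) or not phi(actual):
        return False                         # condition 1
    hit = False                              # condition 2: find W and values
    others = [x for x in EQS if x not in xs]
    for ws in (set(c) for r in range(len(others) + 1)
               for c in combinations(others, r)):
        freeze = {w: actual[w] for w in ws}  # W frozen at its actual values
        for vals in product([0, 1], repeat=len(xs)):
            iv = dict(zip(xs, vals)); iv.update(freeze)
            if not phi(solve(iv)):
                hit = True
    if not hit:
        return False
    # condition 3: minimality -- no proper sub-conjunction already suffices
    return not any(is_cause({k: xs[k] for k in sub}, phi)
                   for r in range(1, len(xs))
                   for sub in combinations(xs, r))

ff = lambda v: v["FF"] == 1
assert not is_cause({"L": 1}, ff)            # L = 1 alone is not an HP_m cause
assert is_cause({"L": 1, "MD": 1}, ff)       # ...but the conjunction is
```

The two assertions anticipate Example 8: under HP m , neither single event is a cause of the fire, while their conjunction is.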
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="8">Formal Relationship with HP</head><p>We establish a common ground between the two formalisms by axiomatizing causal models in SC.</p><p>Let (M, VU ) be a HP causal setting where M = ⟨U, V, R, F⟩ is an acyclic causal model and VU a context. We assume that U, V, and the range of R are finite sets and there are no collisions between constants for variable and value symbols.</p><p>We construct a BAT D from (M, VU ) as follows. We treat U, V, and R(X) for all X ∈ U ∪ V as sets of SC constant symbols for which we introduce unique name axioms. If S = {C 1 , . . . , C n } is a set of constants and y is a SC object term, the expression y ∈ S denotes</p><formula xml:id="formula_9">(y = C 1 ∨ . . . ∨ y = C n ). If X ∈ U ∪V with R(X) = {V 1 , . . . , V n }, y ∈ R(X) denotes (y = V 1 ∨ . . . ∨ y = V n ).</formula><p>To represent the functions F, we introduce a situation-independent relational symbol f with arity 1 + |U ∪ V| + 1, where the first argument is the name of the variable (X) which F X ∈ F determines, the last argument is the value which F X assigns to X, and the arguments in between are the values of the variables U ∪ V arranged in some predetermined order. The actions of D are get(x, v), meaning compute the value of the endogenous variable x using F x ∈ F, and set(x, v), meaning ignore F x and force the value v upon x. The only fluent of D is the relational fluent V (x, v, s) stating that v is the value of the endogenous variable x in situation s.</p><p>Let Det(x, v, s) be an abbreviation for</p><formula xml:id="formula_10">∀v 1 . . . ∀v N ( ⋀ 1≤i≤N ∃y (y = Z i ∧ v i ∈ R(Z i ) ∧ ∀v′ (V (y, v′, s) → v i = v′)) → f (x, v 1 , . . . , v N , v)),</formula><p>where U ∪ V = {Z 1 , . . . , Z N }. Det(x, v, s) means that the value of variable x is determined in s to be v. Det(x, v, s) holds true when the values v i which exist in s, when bound to the appropriate arguments of f , unequivocally assign v to x. 
This means, crucially, that x may be determined as soon as some, but not necessarily all, of the variables on which it "depends" (as per ) have acquired values.</p><p>The axioms of D are as follows.</p><p>⋀ X∈V ¬∃v(V (X, v, S 0 )),</p><formula xml:id="formula_11">⋀ V Y ∈ VU V (Y, V Y , S 0 ), P oss(set(x, v), s) ↔ ⋁ X∈V (x = X ∧ v ∈ R(X)) ∧ ¬∃v′ V (x, v′, s), P oss(get(x, v), s) ↔ x ∈ V ∧ ¬∃v′ V (x, v′, s) ∧ Det(x, v, s), V (x, v, do(a, s)) ↔ a = get(x, v) ∨ a = set(x, v) ∨ V (x, v, s).</formula><p>In words, none of the endogenous variables have values at S 0 , and all exogenous variables have values at S 0 as specified by the context. It is possible to force a value v upon x as long as x is an endogenous variable, v is in the range of x, and x has not yet acquired a value. It is possible to compute the value of x as long as x is an endogenous variable which has not yet acquired a value but which is destined at s to get the value v. Overall, the theory models all possible propagations of values (including interventions) throughout the set of variables according to the structural equations. As we are interested only in those situations where all variables have acquired values, which represent a unique solution to F, we introduce an abbreviation terminal(s) for the expression executable(s) ∧ ¬∃a(P oss(a, s)). In order to refer to situations under specific interventions, we use the abbreviation</p><formula xml:id="formula_12">interv Y1←V Y1 ,...,Y k ←V Yk (s), which stands for terminal(s) ∧ ∀x∀v.[∃s′(do(set(x, v), s′) ⊑ s) ↔ ⋁ 1≤i≤k (x = Y i ∧ v = V Yi )].</formula><p>The special case interv ∅ (s) describes s under an empty intervention.</p><p>Finally, given a HP query φ, we obtain a corresponding SC query φ(s) from φ by replacing each primitive event (X = V X ) by V (X, V X , s). Thus, φ(s) is ground in all object arguments and uniform in s. It is tedious but straightforward to prove the correctness of our translation relative to a HP causal setting. 
With this result, we can easily translate HP m to the language of SC and formally compare the two approaches. Theorem 2. Let (M, VU ) be a HP causal setting and φ a HP query over M . Let D be a BAT obtained from (M, VU ) as described above. Let X ∈ V and V X ∈ R(X).</p><p>1. (X = V X ) is a singleton cause of φ in (M, VU ) according to HP m if and only if get(X, V X ) ∈ σ appears in the achievement causal chain of ⟨σ, φ(s)⟩ for every ground situation term σ of D such that D |= interv ∅ (σ).</p><p>2. (X = V X ) is a part of a cause of φ in (M, VU ) according to HP m if and only if there exists a ground situation term σ of D such that D |= interv ∅ (σ) and get(X, V X ) ∈ σ appears in the achievement causal chain of ⟨σ, φ(s)⟩.</p><p>The proof of Theorem 2 is quite involved and is not shown due to lack of space. As an immediate corollary, achievement cause analysis alone captures all HP m causes.</p><p>Example 8. (cont.) Consider a translation of the disjunctive Forest Fire causal model M d . The corresponding terminal narratives σ are do([get(M D, true), get(L, true), get(FF, true)], S 0 ), do([get(L, true), get(M D, true), get(FF, true)], S 0 ), do([get(M D, true), get(FF, true), get(L, true)], S 0 ), do([get(L, true), get(FF, true), get(M D, true)], S 0 ).</p><p>Action get(M D, true) is a part of the causal chain of ⟨σ, V (FF, true, s)⟩ only for the first and third choice of σ. Similarly, get(L, true) is an achievement cause only for the second and fourth choice. By Part 1 of Theorem 2, they are not actual causes according to HP m . By Part 2 of Theorem 2, they are both parts of an actual cause according to HP m . This agrees with the conclusions of the original HP causal model.</p></div>
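The value propagation performed by the translated BAT lends itself to direct simulation. The following sketch (in Python, with hypothetical names not taken from the paper) enumerates the terminal narratives of the BAT obtained from the disjunctive Forest Fire model by firing a get action exactly when the current partial assignment determines a variable's value, mirroring Det(x, v, s):

```python
# A minimal sketch (hypothetical names, not the paper's implementation):
# enumerate the terminal narratives of the BAT obtained from the
# disjunctive Forest Fire model. A variable x admits get(x, v) as soon
# as the partial assignment unequivocally determines its value.

def det_ff(env):
    # FF := (MD = true) ∨ (L = true): determined once one disjunct is
    # known true, or both disjuncts are known false.
    md, l = env.get('MD'), env.get('L')
    if md is True or l is True:
        return True
    if md is False and l is False:
        return False
    return None  # not yet determined in this situation

# MD and L are fixed to true by the context, so they are always determined.
EQUATIONS = {'MD': lambda env: True, 'L': lambda env: True, 'FF': det_ff}

def terminal_narratives(env=None, trace=()):
    """Depth-first search over executable situations; a branch ends in a
    terminal situation, where no get action is possible."""
    env = env or {}
    results, extendable = [], False
    for x, det in EQUATIONS.items():
        if x in env:
            continue  # x has already acquired a value
        v = det(env)
        if v is not None:  # Poss(get(x, v), s) holds
            extendable = True
            results += terminal_narratives({**env, x: v}, trace + (x,))
    return results if extendable else [trace]

print(sorted(terminal_narratives()))
```

For the disjunctive model this produces exactly the four narratives of Example 8: get(FF, true) may occur as soon as either get(M D, true) or get(L, true) has, because one true disjunct already determines FF.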
<div xmlns="http://www.tei-c.org/ns/1.0"><head n="9">Discussion</head><p>Our approach shifts the focus away from causal models and towards a first-order logic representation of the underlying dynamics of the scenario. There are other attempts to step away from purely counterfactual analysis <ref type="bibr">[Vennekens et al., 2010;</ref><ref type="bibr" target="#b12">Vennekens, 2011;</ref><ref type="bibr" target="#b0">Beckers and Vennekens, 2012;</ref><ref type="bibr">2016]</ref>, but they share the same expressivity limitations.</p><p>Curiously, <ref type="bibr">[Vennekens et al., 2010]</ref> consider SC to be too expressive, stating that "SC contains many features that go beyond what is traditionally expressed in a causal model. For typical causal reasoning problems, these features are not needed". To refute this and to see where we stand with respect to other approaches, let us consider three telling examples featured in <ref type="bibr" target="#b0">[Beckers and Vennekens, 2012;</ref><ref type="bibr">2016]</ref> and discussed in other papers. Assume all fluents are false at S 0 . Example 9. Assassin poisons victim's coffee, victim drinks it and dies. If assassin had not poisoned the coffee, his backup would have, and victim would still have died.</p><p>This example from <ref type="bibr" target="#b3">[Hitchcock, 2007]</ref> illustrates early preemption, namely that the causal link from the backup to victim's death is preempted by the assassin. Let the actions be assassin and backup (the two acts of poisoning the coffee) and drink. Let the fluents be P (s) meaning "coffee contains poison" and D(s) meaning "the victim is dead". P oss(assassin, s), P oss(backup, s), P oss(drink, s) ↔ P (s),</p><formula xml:id="formula_13">P (do(a, s)) ↔ a = assassin ∨ a = backup ∨ P (s), D(do(a, s)) ↔ [a = drink ∧ P (s)] ∨ D(s).</formula><p>The narrative is σ = do([assassin, drink], S 0 ). By our analysis, all of σ is an achievement causal chain. 
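The theory above is small enough to replay mechanically. A minimal sketch (hypothetical names, not the paper's code) encodes its precondition and successor-state axioms and confirms that both the actual narrative and the preempted alternative reach D:

```python
# A minimal sketch (hypothetical names): the precondition and
# successor-state axioms of Example 9 as a transition system.

def poss(a, s):
    # Poss axioms: poisoning is always possible, drinking requires P(s).
    return {'assassin': True, 'backup': True, 'drink': s['P']}[a]

def do(a, s):
    return {
        'P': a in ('assassin', 'backup') or s['P'],  # P(do(a,s))
        'D': (a == 'drink' and s['P']) or s['D'],    # D(do(a,s))
    }

def replay(narrative):
    s = {'P': False, 'D': False}  # all fluents false at S0
    for a in narrative:
        assert poss(a, s), f"action {a} is not executable"
        s = do(a, s)
    return s

print(replay(['assassin', 'drink'])['D'])  # the actual narrative
print(replay(['backup', 'drink'])['D'])    # the preempted alternative
```

That do([backup, drink], S 0 ) reaches the same outcome is precisely the early-preemption structure: the counterfactual route exists but is never taken.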
This agrees with HP and <ref type="bibr" target="#b3">[Hitchcock, 2007]</ref> but disagrees with Beckers and Vennekens, who believe that assassin is not an actual cause. Rather than appeal to intuition, we just point out that the causal roles assumed by the assassin and his backup are clearly distinct in the given scenario. Example 10. An engineer is standing by a switch in the railroad track. A train approaches in the distance. She flips the switch, so that the train travels down the left-hand track instead of the right. Since the tracks re-converge up ahead, the train arrives at its destination all the same. This example from <ref type="bibr" target="#b8">[Paul and Hall, 2013]</ref> illustrates the distinction between causation and determination. Beckers and Vennekens state that it is isomorphic to the previous one, while the intuition about its causes is the polar opposite. In fact, the two examples are isomorphic only within the expressivity bounds of causal models and CP-logic.</p><p>Let the fluent In(s) mean that the train is on the section of the track leading to the first junction, let L(s) (resp., R(s)) mean that it is on the left-hand track (resp., right), and let Out(s) mean that it is on the section of the track past the second junction. Let the fluent Sw(s) mean that the switch is engaged and Arrived(s) that the train has arrived. Let the actions be f lip (engineer flips the switch), f ork 1 (train passes first junction), f ork 2 (train passes second junction), and arrive (self-explanatory). 
Let only In(s) hold at S 0 .</p><p>P oss(f lip, s), P oss(f ork 1 , s) ↔ In(s), P oss(f ork 2 , s) ↔ L(s) ∨ R(s), P oss(arrive, s) ↔ Out(s),</p><formula xml:id="formula_14">In(do(a, s)) ↔ In(s) ∧ a ≠ f ork 1 , L(do(a, s)) ↔ (a = f ork 1 ∧ Sw(s)) ∨ (L(s) ∧ a ≠ f ork 2 ), R(do(a, s)) ↔ (a = f ork 1 ∧ ¬Sw(s)) ∨ (R(s) ∧ a ≠ f ork 2 ), Out(do(a, s)) ↔ a = f ork 2 ∨ Out(s), Sw(do(a, s)) ↔ a = f lip ∨ (Sw(s) ∧ a ≠ f lip), Arrived(do(a, s)) ↔ a = arrive ∨ Arrived(s).</formula><p>The narrative σ is do([f lip, f ork 1 , f ork 2 , arrive], S 0 ). By our analysis, the f lip action is not an actual cause of the train's arrival. This conclusion is elaboration tolerant <ref type="bibr" target="#b6">[McCarthy, 1987]</ref> as long as the relation between L, R, Sw is preserved. For HP, the answer depends on how the model is constructed and which definition is applied. <ref type="bibr" target="#b10">[Pearl, 2000]</ref> calls this class of problems "switching causation" and argues that flipping the switch is a cause (see <ref type="bibr">Section 10.3.4</ref> there). Both <ref type="bibr" target="#b10">[Pearl, 2000]</ref> and <ref type="bibr" target="#b0">[Halpern and Pearl, 2005]</ref> argue that the switch is a cause, while, according to HP m , it is not.</p><p>Example 11. Assistant Bodyguard puts a harmless antidote in victim's coffee. Buddy, who knows about the antidote, poisons the coffee; he would not have done so otherwise. Victim drinks the coffee and survives.</p><p>This example is called "Careful Poisoning" in <ref type="bibr" target="#b13">[Weslake, 2013]</ref> and left as a challenge for future work. Let the actions be antidote, poison, drink. The fluents P (s), D(s) are as before, and the fluent A(s) means "coffee contains antidote". P oss(antidote, s), P oss(drink, s), P oss(poison, s) ↔ A(s), A(do(a, s)) ↔ a = antidote ∨ A(s), P (do(a, s)) ↔ a = poison ∨ P (s), D(do(a, s)) ↔ [a = drink ∧ P (s) ∧ ¬A(s)] ∨ D(s). σ = do([antidote, poison, drink], S 0 ), so D |= ¬D(σ). 
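As a sanity check on this analysis, the axioms can again be replayed directly; a minimal sketch (hypothetical names, not the paper's code) records the value of D at every situation along σ:

```python
# A minimal sketch (hypothetical names): replay Example 11 and record
# the value of D at every situation along the narrative.

def poss(a, s):
    # Buddy poisons the coffee only if the antidote is already in it.
    return {'antidote': True, 'drink': True, 'poison': s['A']}[a]

def do(a, s):
    return {
        'A': a == 'antidote' or s['A'],                           # A(do(a,s))
        'P': a == 'poison' or s['P'],                             # P(do(a,s))
        'D': (a == 'drink' and s['P'] and not s['A']) or s['D'],  # D(do(a,s))
    }

s = {'A': False, 'P': False, 'D': False}  # all fluents false at S0
dead_along_narrative = [s['D']]
for a in ['antidote', 'poison', 'drink']:
    assert poss(a, s), f"action {a} is not executable"
    s = do(a, s)
    dead_along_narrative.append(s['D'])

print(dead_along_narrative)  # the value of D in each situation of σ
```

Replaying σ shows D remaining false in every situation, in line with D |= ¬D(σ), and the precondition check rules out poison before antidote.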
In fact, ¬D(s) holds throughout the narrative, so it has no achievement causes. It has no maintenance causes either: drink is a threat to ¬D(s), yielding a new causal setting ⟨do([antidote, poison], S 0 ), ¬D(s) ∧ (¬P (s) ∨ A(s))⟩ with no achievement causes. The action poison could be a threat, but it does not qualify as such by our definition: no executable situation admits poison in the absence of the antidote, owing to the precondition for poison. Therefore, the given causal setting contains no causes. This agrees with Beckers and Vennekens and disagrees with Hitchcock and HP.</p><p>There exist multiple examples where the results of the HP approach cannot be reconciled with intuitive understanding, which, incidentally, the approach treats as the only measure of merit. This problem was traced by <ref type="bibr">[Hopkins and Pearl, 2007]</ref> and <ref type="bibr" target="#b0">[Glymour et al., 2010]</ref> to the limited expressiveness of causal models. Causal models do not distinguish between enduring conditions and transitions and cannot model the absence of an event except as the presence of its opposite; examples where this leads to absurd conclusions are easy to come by; see, e.g., [Hopkins and <ref type="bibr">Pearl, 2007</ref>]. An explicit notion of an action solves these problems.</p><p>Addressing the lack of expressivity, [Hopkins and <ref type="bibr">Pearl, 2007]</ref> re-defined causal models in the language of SC, but they preserved the implicit possible worlds semantics of causal formulas and dropped the requirement that situations be executable. The latter is especially problematic, since dismissing preconditions results in paradoxes and makes inferences untrustworthy. Our work reaps the benefits at which [Hopkins and <ref type="bibr">Pearl, 2007]</ref> aimed, but does not suffer from the issues associated with giving a meaningful definition of a counterfactual in SC, which appears to be no easy task. 
A counterfactual query not relativized to a particular scenario can be formulated in SC without special tools <ref type="bibr" target="#b4">[Lin and Soutchanski, 2011</ref>], but it is not clear how such queries can be useful for defining actual causality. An original study conducted in <ref type="bibr">[Costello and McCarthy, 1999]</ref> perhaps comes closest to a good definition of a counterfactual in SC, but it operates outside of the well-studied basic action theories and is not concerned with actual causality. There exist numerous studies of the semantics of causal models and the relationship of causal models to various logics, such as an elaborate axiomatization of causal models <ref type="bibr" target="#b1">[Halpern, 2000]</ref> and a logical representation <ref type="bibr" target="#b0">[Bochman and Lifschitz, 2015]</ref> of causal models in a non-monotonic logic which encompasses general causation as a foundational principle. The approach of <ref type="bibr" target="#b0">[Finzi and Lukasiewicz, 2003</ref>] combines causal models with independent choice logic. Finally, there are methodological or technical critiques of the causal model approach, exemplified by <ref type="bibr" target="#b0">[Glymour et al., 2010]</ref>, <ref type="bibr" target="#b7">[Menzies, 2014]</ref>, <ref type="bibr" target="#b5">[Livengood, 2013]</ref>, <ref type="bibr" target="#b13">[Weslake, 2013]</ref> and <ref type="bibr" target="#b0">[Baumgartner, 2013]</ref>.</p><p>It is clear that a broader definition of actual cause requires more expressive action theories that can model not only sequences of actions but also explicit time and concurrent actions. Only then can one try to analyze some of the popular examples of actual causation formulated in the philosophical literature; some of those examples sound deceptively simple, but faithful modelling of them requires time, concurrency and natural actions <ref type="bibr" target="#b11">[Reiter, 2001]</ref>. 
This does not imply that future research should focus only on popular scenarios proposed by philosophers. On the contrary, we firmly believe that the future of causal research lies in elaborating a computational methodology for the analysis of complex technical systems.</p></div><figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_0"><head>Figure 2 :</head><label>2</label><figDesc>Figure 2: Evolution of fluent values throughout σ</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_1"><head></head><label></label><figDesc>VU ) of a HP query φ if all of the following conditions hold: 1. (M, VU ) |= ( X = VX ) and (M, VU ) |= φ.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_2"><head></head><label></label><figDesc>. Both have the same set of endogenous variables: M D (match dropped by arsonist), L (lightning strike), FF (forest is on fire). In both cases, M D and L are set to true by the context. The model M d for the disjunctive scenario has it that either one of the events (M D = true), (L = true) is sufficient to start a fire, so the equation for FF is FF := (M D= true) ∨ (L=true). The model M c for the conjunctive scenario requires both events in order to create a forest fire, so FF := (M D = true) ∧ (L = true). By HP m , neither (M D = true) nor (L = true) are singleton actual causes in M d because it is impossible to fulfill part 2 of the definition above by setting either variable to f alse, but the conjunction (M D = true)∧(L = true) is deemed an actual cause. In contrast, in M c , both (M D = true) and (L = true) are singleton actual causes because setting one of {M D, L} to f alse makes the forest fire impossible, but their conjunction is not an actual cause because it violates the minimality condition.</figDesc></figure>
<figure xmlns="http://www.tei-c.org/ns/1.0" xml:id="fig_3"><head></head><label></label><figDesc>Theorem 1. Let (M, VU ) be a HP causal setting, [ Ȳ ← VY ]φ an arbitrary causal formula over M , and D a BAT obtained from (M, VU ). Then (M, VU ) |= [ Ȳ ← VY ]φ iff D |= (∀s). interv Ȳ← VY (s) → φ(s).</figDesc></figure>
		</body>
		<back>

			<div type="acknowledgement">
<div xmlns="http://www.tei-c.org/ns/1.0"><p>Acknowledgement: We thank the Natural Sciences and Engineering Research Council of Canada for financial support.</p></div>
			</div>

			<div type="references">

				<listBibl>

<biblStruct xml:id="b0">
	<analytic>
		<title level="a" type="main">Counterfactual dependency and actual causation in CP-logic and structural models: a comparison</title>
		<author>
			<persName><forename type="first">Michael</forename><surname>Baumgartner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Baumgartner</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vennekens</forename><surname>Beckers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sander</forename><surname>Beckers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joost</forename><surname>Vennekens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Beckers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Sander</forename><surname>Beckers</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Joost</forename><surname>Vennekens</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Bochman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lifschitz</forename></persName>
		</author>
		<author>
			<persName><forename type="first">Alexander</forename><surname>Bochman</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Vladimir</forename><surname>Lifschitz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Eiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Eiter</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Lukasiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Finzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lukasiewicz</forename></persName>
		</author>
		<author>
			<persName><forename type="first">Alberto</forename><surname>Finzi</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Thomas</forename><surname>Lukasiewicz</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Glymour</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI&apos;03</title>
				<editor>
			<persName><forename type="first">Tom</forename><surname>Costello</surname></persName>
		</editor>
		<editor>
			<persName><forename type="first">John</forename><surname>Mc-Carthy</surname></persName>
		</editor>
		<meeting>the Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI&apos;03<address><addrLine>San Francisco, CA, USA</addrLine></address></meeting>
		<imprint>
			<publisher>Morgan Kaufmann Publishers Inc</publisher>
			<date type="published" when="1999">2013. 2013. 2012. 2012. 2016. Oct 2016. 2015. 2015. 1999. 1999. 2002. 2002. 2003. 2003. 2010. 2010. 2005. 2005</date>
			<biblScope unit="volume">78</biblScope>
			<biblScope unit="page" from="843" to="887" />
		</imprint>
	</monogr>
	<note>Electron. Trans. Artif. Intell.</note>
</biblStruct>

<biblStruct xml:id="b1">
	<analytic>
		<title level="a" type="main">A modification of the Halpern-Pearl definition of causality</title>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">Y</forename><surname>Halpern</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015</title>
				<meeting>the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015<address><addrLine>Buenos Aires, Argentina</addrLine></address></meeting>
		<imprint>
			<date type="published" when="2000">2000. 2000. 2015. July 25-31, 2015. 2015</date>
			<biblScope unit="volume">12</biblScope>
			<biblScope unit="page" from="3022" to="3033" />
		</imprint>
	</monogr>
	<note>Axiomatizing causal reasoning</note>
</biblStruct>

<biblStruct xml:id="b2">
	<monogr>
		<title level="m" type="main">Actual Causality</title>
		<author>
			<persName><forename type="first">Joseph</forename><forename type="middle">Y</forename><surname>Halpern</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2016">2016</date>
			<publisher>The MIT Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b3">
	<analytic>
		<title level="a" type="main">Mark Hopkins and Judea Pearl. Causality and counterfactuals in the situation calculus</title>
		<author>
			<persName><forename type="first">Christopher</forename><surname>Hitchcock</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">The Actual Cause: From Intuition to Automation</title>
				<imprint>
			<publisher>Mark Hopkins</publisher>
			<date type="published" when="2005">2007. 2007. 2007. 2007. 2005. 2005</date>
			<biblScope unit="volume">116</biblScope>
			<biblScope unit="page" from="939" to="953" />
		</imprint>
		<respStmt>
			<orgName>University of California Los Angeles</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">PhD thesis</note>
	<note>Hopkins and Pearl</note>
</biblStruct>

<biblStruct xml:id="b4">
	<analytic>
		<title level="a" type="main">Causal theories of actions revisited</title>
		<author>
			<persName><forename type="first">David</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Lin</forename><forename type="middle">;</forename><surname>Lewis</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Fangzhen</forename><surname>Lin</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Mikhail</forename><surname>Soutchanski</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning</title>
				<imprint>
			<date type="published" when="1974">1974. 1974. 2011. 2011</date>
			<biblScope unit="volume">70</biblScope>
			<biblScope unit="page" from="556" to="567" />
		</imprint>
	</monogr>
	<note>Causation</note>
</biblStruct>

<biblStruct xml:id="b5">
	<analytic>
		<title level="a" type="main">Some philosophical problems from the standpoint of artificial intelligence</title>
		<author>
			<persName><forename type="first">Jonathan</forename><surname>Livengood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">John</forename><surname>Livengood</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Patrick</forename><forename type="middle">J</forename><surname>Mccarthy</surname></persName>
		</author>
		<author>
			<persName><surname>Hayes</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Readings in artificial intelligence</title>
				<editor>
			<persName><forename type="first">Hayes</forename><surname>Mccarthy</surname></persName>
		</editor>
		<imprint>
			<date type="published" when="1969">2013. 2013. 1969. 1969</date>
			<biblScope unit="volume">47</biblScope>
			<biblScope unit="page" from="431" to="450" />
		</imprint>
	</monogr>
	<note>Actual causation and simple voting scenarios</note>
</biblStruct>

<biblStruct xml:id="b6">
	<analytic>
		<title level="a" type="main">Generality in artificial intelligence</title>
		<author>
			<persName><forename type="first">John</forename><surname>Mccarthy</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="j">Commun. ACM</title>
		<imprint>
			<biblScope unit="volume">30</biblScope>
			<biblScope unit="issue">12</biblScope>
			<biblScope unit="page" from="1029" to="1035" />
			<date type="published" when="1987">1987</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b7">
	<analytic>
		<title level="a" type="main">Counterfactual theories of causation</title>
		<author>
			<persName><forename type="first">Peter</forename><surname>Menzies</surname></persName>
		</author>
		<ptr target="https://plato.stanford.edu/entries/causation-counterfactual/" />
	</analytic>
	<monogr>
		<title level="m">Stanford Encyclopedia of Philosophy</title>
				<imprint>
			<date type="published" when="2014">2014 (accessed January 15, 2017)</date>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b8">
	<monogr>
		<title level="m" type="main">Causation: a user&apos;s guide</title>
		<author>
			<persName><forename type="first">L</forename><forename type="middle">A</forename><surname>Paul</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Ned</forename><surname>Hall</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2013">2013</date>
			<publisher>Oxford University Press</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b9">
	<monogr>
		<author>
			<persName><surname>Pearl</surname></persName>
		</author>
		<idno>R-259</idno>
		<title level="m">Judea Pearl. On the definition of actual cause</title>
				<imprint>
			<date type="published" when="1998">1998. 1998</date>
		</imprint>
		<respStmt>
			<orgName>University of California Los Angeles</orgName>
		</respStmt>
	</monogr>
	<note type="report_type">Technical report</note>
</biblStruct>

<biblStruct xml:id="b10">
	<analytic>
		<title level="a" type="main">The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression</title>
		<author>
			<persName><forename type="first">;</forename><surname>Pearl</surname></persName>
		</author>
		<author>
			<persName><forename type="first">Raymond</forename><surname>Reiter</surname></persName>
		</author>
		<author>
			<persName><surname>Reiter</surname></persName>
		</author>
	</analytic>
	<monogr>
		<title level="m">Artificial intelligence and mathematical theory of computation</title>
				<imprint>
			<publisher>Cambridge University Press</publisher>
			<date type="published" when="1991">2000. 2000. 1991. 1991</date>
			<biblScope unit="volume">27</biblScope>
			<biblScope unit="page" from="359" to="380" />
		</imprint>
	</monogr>
	<note>Judea Pearl. Causality: Models, Reasoning, and Inference</note>
</biblStruct>

<biblStruct xml:id="b11">
	<monogr>
		<title level="m" type="main">Knowledge in action: logical foundations for specifying and implementing dynamical systems</title>
		<author>
			<persName><forename type="first">Raymond</forename><surname>Reiter</surname></persName>
		</author>
		<imprint>
			<date type="published" when="2001">2001</date>
			<publisher>MIT Press, Cambridge</publisher>
		</imprint>
	</monogr>
</biblStruct>

<biblStruct xml:id="b12">
	<analytic>
		<title level="a" type="main">Embracing events in causal modelling: Interventions and counterfactuals in CP-logic</title>
		<author>
			<persName><forename type="first">Herbert</forename><forename type="middle">A</forename><surname>Simon</surname></persName>
		</author>
		<author>
			<persName><forename type="first">;</forename><surname>Simon</surname></persName>
		</author>
		<author>
			<persName><surname>Vennekens</surname></persName>
		</author>
		<ptr target="http://arxiv.org/abs/1107.4865" />
	</analytic>
	<monogr>
		<title level="m">European Workshop on Logics in Artificial Intelligence</title>
				<imprint>
			<publisher>Springer</publisher>
			<date type="published" when="1977">1977. 1977. 2010. 2010. 2011. 2011</date>
			<biblScope unit="volume">11</biblScope>
			<biblScope unit="page" from="647" to="662" />
		</imprint>
	</monogr>
	<note>Vennekens. Actual causation in CPlogic</note>
</biblStruct>

<biblStruct xml:id="b13">
	<monogr>
		<author>
			<persName><forename type="first">Brad</forename><surname>Weslake</surname></persName>
		</author>
		<title level="m">A Partial Theory of Actual Causation</title>
				<imprint>
			<date type="published" when="2013-07-18">2013. 2013. July 18, 2017</date>
		</imprint>
	</monogr>
</biblStruct>

				</listBibl>
			</div>
		</back>
	</text>
</TEI>
