[CEUR Workshop Proceedings Vol-2052, paper 2. PDF: https://ceur-ws.org/Vol-2052/paper2.pdf. DBLP: https://dblp.org/rec/conf/commonsense/BatusovS17]
                          Situation Calculus Semantics for Actual Causality
                       Vitaliy Batusov                                        Mikhail Soutchanski
                       York University                                         Ryerson University
                       Toronto, Canada                                          Toronto, Canada
                  vbatusov@cse.yorku.ca                                     mes@scs.ryerson.ca


                          Abstract

The state-of-the-art definitions of actual cause by Pearl and Halpern suffer from the modest expressivity of causal models. We develop a new definition of actual cause in the context of situation calculus (SC) basic action theories. As a result, we avoid the paradoxes that arise in causal models and can identify complex actual causes of conditions expressed in first-order logic. We provide a formal translation from causal models to SC and establish a relationship between the definitions. Using examples, we show that long-standing disagreements between alternative definitions of actual causality can be mitigated by faithful modelling.

1   Introduction

Actual causality, also known as token-level causality, is concerned with finding in a given scenario a singular event that caused another event. This is in contrast to type-level causality, which is concerned with universal causal mechanisms governing the world. The leading line of computational inquiry into actual causality was pioneered by [Pearl, 1998; 2000] and continued by [Halpern and Pearl, 2005; Halpern, 2000; Eiter and Lukasiewicz, 2002; Hopkins, 2005; Halpern, 2015; 2016] and in other publications. We call it the HP approach. It is based on the concept of structural equations [Simon, 1977] and implemented in the framework of causal models. The HP approach follows the Humean counterfactual definition of causation, which posits that saying “an event A caused an outcome B” is the same as saying “if A had not been, then B never had existed”. This definition is well known to suffer from the problem of preemption: it could be the case that in the absence of event A, B would still have occurred due to another event, which in the original scenario was preempted by A. HP address this by performing counterfactual analysis only under carefully selected contingencies which suspend some subset of the model’s mechanisms. Selecting proper contingencies proved to be a challenging task; as mentioned in [Halpern, 2016] on p. 27, “The jury is still out on what the ‘right’ definition of causality is”.

The HP approach is prone to producing results that cannot be reconciled with intuitive understanding due to the limited expressiveness of causal models [Hopkins, 2005; Hopkins and Pearl, 2007]. The ontological commitments of structural causal models resemble propositional logic: they have no objects, no relationships, no time, and no support for quantified causal queries. As a remedy, [Hopkins, 2005; Hopkins and Pearl, 2007] leverage the expressive power of first-order logic and the robustness of the situation calculus (SC) [Reiter, 2001]. To formulate counterfactuals within SC, they allow arbitrary modifications in a sequence of actions, e.g. removing actions that serve as preconditions for subsequent actions. They do not define actual causality.

Given that theories of actual causality based on structural equations share the same ailments [Menzies, 2014; Glymour et al., 2010], it seems natural to explore actual causality from a different perspective. We do this in the language of SC under the classical Tarskian semantics, where the notion of a cause naturally aligns with the notion of an action, and the effect can be specified by a FOL formula with quantifiers over object variables. In contrast to HP, whose analysis is based on observing the end results of interventions, we do so by analyzing the dynamics which lead to the end results. Our developments are based on a small set of plausible intuitions.

The next section briefly summarizes SC. Section 3 motivates our approach and supplies a running example. Section 4 characterizes causes which achieve an effect. Section 5 explores maintenance causes—actions which protect existing conditions from being lost. In Section 6, we combine achievement and maintenance causes into an all-encompassing notion of actual cause. In Section 7, we outline the HP approach and, in Section 8, formally connect it to ours. Finally, we briefly compare our definition to others using examples and discuss related work.

2   Situation Calculus

In the situation calculus [McCarthy and Hayes, 1969; Reiter, 2001], the constant S0 denotes the initial situation that represents an empty list of actions, while the complex situation term do([α1, ..., αn], S0) represents the situation that results from executing actions α1, ..., αn consecutively, so that α1 is executed in S0 and αn is executed last. If none of the action terms αi have variables, then we call this situation term an (actual) narrative. An action term αi may occur in the narrative more than once at different positions. The set of all situations can be visualized as a tree with a partial-order relation s1 ⊏ s2 on situations s1, s2, where s1 ⊑ s2 abbreviates s1 ⊏ s2 ∨ s1 = s2. It is characterized by the foundational domain-independent axioms (Σ) included in a basic action
theory (BAT) D that also includes axioms D_S0 describing the initial situation, and action precondition axioms D_ap using the predicate Poss(a, s) to say when an action a is possible in s. For each action function there is one precondition axiom Poss(A(x̄), s) ↔ Π_A(x̄, s), where, as usual, all free variables are implicitly ∀-quantified, and Π_A(x̄, s) is a formula uniform in s, meaning that it has no occurrences of Poss or ⊏, no other situation terms, and no quantifiers over situations. For each fluent F, D includes a successor state axiom (SSA)

    F(x̄, do(a, s)) ↔ ψ⁺(x̄, a, s) ∨ (F(x̄, s) ∧ ¬ψ⁻(x̄, a, s)),

where the fluent predicate F(x̄, s) represents a situation-dependent relation over a tuple of objects x̄, and the uniform formulas ψ⁺(x̄, a, s) and ψ⁻(x̄, a, s) specify the action terms that under certain application-dependent conditions have a positive effect on the fluent F (make it true) or a negative effect (make it false), respectively. The SSAs are derived under the causal completeness assumption [Reiter, 1991] that all effects of actions on fluents are explicitly represented. There are a number of auxiliary axioms, such as unique name axioms, that are included in D. The common abbreviation executable(s) means that each action mentioned in the situation term s was possible in the situation in which it was executed. The basic computational task, called the projection problem, is the task of establishing whether a BAT entails a sentence φ(σ) for an executable ground situation σ, where φ(s) is a formula uniform in s. This problem can be solved using the one-step regression operator ρ. The expression ρ[ϕ, α] denotes the formula obtained from ϕ by replacing each fluent atom F in ϕ with the right-hand side of the SSA for F, where the action variable a is instantiated with the ground action α, and then simplified using unique name axioms for actions and constants. Similarly to the theorem about multi-step regression R presented in [Reiter, 1991], one can prove that given a BAT D, a formula ϕ(s) uniform in s, and a ground action term α, we have that D |= ∀s. ϕ(do(α, s)) ↔ ρ[ϕ(s), α].

3   Motivation

We propose to axiomatize a dynamic world using a SC theory and derive actual causality from first principles. Specifically, to represent a “scenario”, we consider a BAT D and a narrative describing the actions or events which transpired in the world characterized by D. We do not formally distinguish between agent actions and nature’s events. The narrative is specified by an executable ground situation term σ called the “actual situation”. An effect for which we seek to identify causes is given by a formula ϕ(s) uniform in situation s. Since actions are the sole source of change in a BAT, we identify the set of potential causes of an effect ϕ with the set of all ground action terms occurring in σ.

Example 1. A D flip-flop is a digital circuit capable of storing one bit of information. A basic D flip-flop has two Boolean inputs, D and enable, and one output, Q. Each input and output signal can be either at the low level (modelled as false) or at the high level (modelled as true). If the input enable is high, every time the clock “ticks”, the output Q assumes the value of the main input D and maintains it until the next tick. When the signal enable is low, the flip-flop preserves the value of Q regardless of D and the ticks.

[Figure 1: A D flip-flop with a disjunctive enable (clock input is shown but not modeled). The flip-flop is a box with inputs D and enable and output Q; signals e1 and e2 feed enable through an OR-gate, and d feeds D.]

[Figure 2: Evolution of fluent values throughout σ. A timing diagram of High(d, s), High(e1, s), High(e2, s), En(s), and Q(s) in S0 and after each of the 11 actions of σ.]

   Consider the circuit in Figure 1. It consists of a D flip-flop, shown as a box, whose enable input is controlled by an OR-gate such that at least one of the signals e1, e2 needs to be high in order for the flip-flop to produce the output Q = D.

   Let d, e1, and e2 be constants that represent the input signals. Let the action functions be hi(x), lo(x), tick, and c_on, where the first two actions set signal x to the high or to the low voltage level, respectively, tick represents the action of the clock, and c_on turns the clock on, making tick possible. The fluent ClockOn(s) represents the state of the clock, High(x, s) represents the logical value of signal x, En(s) represents the output of the OR-gate, and Q(s) is the output of the flip-flop.

   Let the narrative σ be do([c_on, hi(d), tick, hi(e1), hi(e2), tick, lo(e1), lo(e2), tick, lo(d), tick], S0), and let the effect of interest be Q(s). In the initial situation, we have that ∀x(¬High(x, S0)), ¬Q(S0), ¬En(S0). The following BAT models the operation of the circuit.

    Poss(tick, s) ↔ ClockOn(s),    Poss(c_on, s),
    Poss(hi(x), s) ↔ ¬High(x, s),
    Poss(lo(x), s) ↔ High(x, s),

    ClockOn(do(a, s)) ↔ a = c_on ∨ ClockOn(s),
    High(x, do(a, s)) ↔ a = hi(x) ∨ High(x, s) ∧ a ≠ lo(x),
    En(do(a, s)) ↔ a = hi(e1) ∨ a = hi(e2) ∨
        En(s) ∧ ¬[a = lo(e1) ∧ ¬High(e2, s)] ∧
               ¬[a = lo(e2) ∧ ¬High(e1, s)],
    Q(do(a, s)) ↔ [a = tick ∧ En(s) ∧ High(d, s)] ∨
        Q(s) ∧ ¬[a = tick ∧ En(s) ∧ ¬High(d, s)].

   Figure 2 graphically shows the truth values, relative to D, of the key ground fluents in situation S0 and after each subsequent action in σ. Observe that all fluents are initially false, shown as the thick lower edges; the #1 action c_on makes the subsequent tick actions (#3, #6, #9, #11) possible; the actions hi(d), hi(e1), hi(e2), lo(e1) change the voltage levels of the corresponding signals; hi(e1) also changes the state of
En(s); the second occurrence of tick (#6) makes the output Q(s) true, but the other occurrences of tick are inconsequential.

   It is obvious that the 6th action, tick, is a cause of Q(s) in σ, having acted as the proverbial last straw that broke the camel’s back, but so are the actions hi(d) and hi(e1), having created the right circumstances for the back-breaking. Action #6 would accomplish nothing had the flip-flop not been enabled and the input bit set to high. The task before us is to introduce general formal criteria for identifying such actions.

   We axiomatically recognize two kinds of causal roles which events may assume. Achievement causes are the events which realize—in whole or in part—either the condition of interest or the preconditions of other achievement causes. Maintenance causes are the events which prevent other events from falsifying the condition of interest. We use the generic term actual cause to refer to an event which contributes to the effect of interest via a combination of these causal roles. Before we proceed, we, like HP, introduce the notion of a causal setting, which formally captures a scenario.

Definition 1. A (SC) causal setting is a triple ⟨D, σ, ϕ(s)⟩ where D is a BAT, σ is a ground situation term such that D |= executable(σ), and ϕ(s) is a SC formula uniform in s such that D |= ∃s(executable(s) ∧ ϕ(s)).

   Since the BAT D is fixed in our approach, we typically refer to ⟨D, σ, ϕ(s)⟩ as just ⟨σ, ϕ(s)⟩.

4   The Achievement Causal Chain

Intuition provides few definite truths about actual causality, but we hold the following to be self-evident: if some action α of the action sequence σ triggers the formula ϕ(s) to change its truth value from false to true relative to D, and if there is no action in σ after α that changes the value of ϕ(s) back to false, then α is an actual cause of achieving ϕ(s) in σ. This statement is sound because (a) the narrative σ determines a total linear order on its actions, (b) change is associated with a particular element of that order, and (c) no change comes about other than by an action of σ. The next definition states this observation formally.

Definition 2. A causal setting C = ⟨σ, ϕ(s)⟩ satisfies the achievement condition via the situation term do(α, σ′) ⊑ σ iff D |= ¬ϕ(σ′) ∧ ∀s (do(α, σ′) ⊑ s ⊑ σ → ϕ(s)).

   Whenever a causal setting C satisfies the achievement condition via do(α, σ′), we say that the ground action α executed in σ′ is a (primary) achievement cause in C.

   If a causal setting does not satisfy the achievement condition and ϕ(s) is non-tautological and holds throughout the narrative σ, then we ascribe the achievement of ϕ(s) to an unknowable cause masked by the initial situation S0. If ϕ(s) is a tautology, it legitimately has no cause. If ϕ(σ) is not entailed by D, then its achievement cause truly does not exist.

Example 2 (continued). The entailment of Definition 2 holds when α is tick and σ′ is do([c_on, hi(d), tick, hi(e1), hi(e2)], S0), meaning that the action #6 (tick executed after σ′) is the achievement cause of Q(s) in σ.

   The notion of the achievement condition forms our basic tool which, when used together with the single-step regression operator ρ, helps us not only find the single action that brings about the effect of interest, but also identify the actions that build up to it. Intuitively, ρ[ϕ(s), α] is the weakest precondition that must hold in a previous situation σ in order for ϕ(s) to hold after performing α in σ. If we prove α to be an achievement cause of ϕ(s) in do(α, σ), we can use regression ρ to obtain a formula that holds at σ and constitutes a necessary and sufficient condition for the achievement of ϕ(s) via α. This new formula may have an achievement cause of its own which, by virtue of α, also constructively contributes to the achievement of ϕ(s). By repeating this process, we can uncover the entire chain of actions that incrementally build up to the achievement of the ultimate effect. At the same time, we must not overlook the condition which makes the execution of α in σ even possible. This condition is conveniently captured by the right-hand side Π_α(s) of the precondition axiom for α and may have achievement causes of its own. The following inductive definition formalizes these intuitions.

Definition 3. If a causal setting C = ⟨σ, ϕ(s)⟩ satisfies the achievement condition via some situation term do(A(t̄), σ′) ⊑ σ and α is an achievement cause in the causal setting ⟨σ′, ρ[ϕ(s), A(t̄)] ∧ Π_A(t̄, s)⟩, then α is an achievement cause in C.

   Clearly, the process of discovering intermediary achievement causes by repeatedly applying single-step regression cannot continue beyond S0. Since the given narrative σ is a finite sequence, the achievement causes of C also form a finite sequence, which we call the achievement causal chain of C. Note that the actions of the achievement causal chain need not be adjacent in the action sequence of σ.

Example 3 (continued). We found that the action tick (#6) executed in σ′ = do([c_on, hi(d), tick, hi(e1), hi(e2)], S0) is the achievement cause of Q(s). We can now use Definition 3 to find in σ the complete causal chain leading up to Q(s). The one-step regression of Q(s) through tick is

    ρ[Q(s), tick] = (¬En(s) ∨ High(d, s)) ∧ (En(s) ∨ Q(s)).

Call ψ(s) the conjunction of this formula and ClockOn(s), the precondition of tick. By Definition 2, the achievement cause of ψ(s) is the action hi(e1) executed in do([c_on, hi(d), tick], S0). Therefore, hi(e1) is a secondary achievement cause of Q(s). Applying Definition 3 again, we formulate another causal setting with the query

    ρ[ψ(s), hi(e1)] ∧ Π_hi(e1, s) ≡ High(d, s) ∧ ClockOn(s) ∧ ¬High(e1, s)

and situation do([c_on, hi(d), tick], S0), where hi(d) is an achievement cause as a part of do([c_on, hi(d)], S0). Regressing High(d, s) ∧ ClockOn(s) ∧ ¬High(e1, s) just past hi(d), we obtain ¬High(e1, s) ∧ ClockOn(s), for which c_on is an achievement cause. Notice that the first action, c_on, established the preconditions for tick; were it not for c_on, tick would have never happened! There are no more achievement causes of Q(s) in σ aside from those already identified: c_on, hi(d), hi(e1), tick. Observe that these are indeed the key events that lead to the achievement of Q(s) in σ.

   It is worth noting that our approach handled a classic instance of (late) preemption without appealing to contingencies occurring in neighbouring possible worlds, which is the
essential strategy in counterfactual analyses. Namely, it correctly excluded hi(e2) from the causal chain for being preempted by hi(e1), although hi(e2) would have been sufficient, in the absence of hi(e1), for achieving En(s) and Q(s).

5   Maintenance Causes

The achievement causal chain explains precisely how a condition comes to be, but not how it persists throughout the remaining actions of the narrative. The narrative may well contain actions which could destroy the effect but were somehow neutralized. We formalize our intuitive understanding of protective actions using the notion of maintenance. Our general considerations are as follows. First, in a causal setting C = ⟨σ, ϕ(s)⟩, if D ⊭ ϕ(σ), then there is nothing to maintain. Therefore, C may have a maintenance cause only if D |= ϕ(σ). Second, every instance of maintenance involves at least two actions of σ, where one action—call it a threat—would falsify the goal ϕ were it not for the other action, the maintenance cause itself. Obviously, the maintenance cause must occur in σ before the corresponding threat. Third, if C satisfies the achievement condition via some do(α, σ′), then neither α nor any action of σ′ may be a threat to ϕ(s), in accordance with the first consideration. If, alternatively, ϕ(s) holds at S0 and throughout σ, then any action of σ except the very first one may be a threat.

   The key property of a threat is that it has the potential to falsify the effect (but did not do so in the narrative). A test for this property involves the construction of a hypothetical scenario where the suspected threat falsifies the effect. Such a test is counterfactual by nature and, therefore, gives rise to the usual question: what alternative scenarios should we admit to the analysis? For the sake of generality, we require only that the alternative scenarios obey the rules of the world, and, for the sake of tractability, that they do not contain too many actions. Both requirements are fulfilled by the following broad definition, where len(s) is the number of actions in a situation term s and len(S0) = 0.

Definition 4. A causal setting C = ⟨σ, ϕ(s)⟩ satisfies the maintenance condition via a ground situation term do(τ, σ′) ⊑ σ iff σ ≠ S0 and D |= ∀s(σ′ ⊑ s ⊑ σ → ϕ(s)) and D |= ∃s (executable(do(τ, s)) ∧ ϕ(s) ∧ ¬ϕ(do(τ, s)) ∧ len(do(τ, s)) ≤ len(σ)), in which case τ is a threat in C.

   A tighter definition of a threat would artificially decrease the search space of maintenance causes. If, through unchecked generality, we misdiagnose a harmless action as a threat, the subsequent achievement cause analysis would be unable to identify an action which neutralized the threat’s harmful effects.

   Before we define what a maintenance cause is, consider a threat τ to ϕ(s). By the definition of regression, ρ[¬ϕ(s), τ] is a formula that should hold to make sure ϕ(s) becomes false after executing τ. Since we would like to preserve ϕ(s), we are interested in the negation of this formula. But by the regression theorem, D |= ¬ρ[¬ϕ, τ] ↔ ρ[ϕ, τ], so the formula expressing the maintenance goal is simply ρ[ϕ(s), τ]. Notably, the set of achievement causes of this formula will include the achievement causes of ϕ(s) because, intuitively, ϕ(s) holds after τ in part due to being achieved.

Definition 5. Suppose a causal setting C = ⟨σ, ϕ(s)⟩ satisfies the maintenance condition via some situation term do(τ, σ′) ⊑ σ, where τ is a threat in C. Let C′ be the related causal setting ⟨σ′, ρ[ϕ(s), τ]⟩. If α is an achievement cause in C′, we say that α is a maintenance cause in C.

Example 4. Consider a formula ψ(s) with quantifiers over object variables in the same BAT, except that, for the sake of the example, there is a countably infinite set of signal constants ci for i ≥ 1 with unique names. Let the query ψ(s) be ∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s)) — “there are at least two high signals”. Let the actual situation σ be do([hi(c1), hi(c2), hi(c3), lo(c1)], S0).

   By Definition 4, lo(c1) is a threat in this causal setting. By Definition 5, it yields a related causal setting with situation do([hi(c1), hi(c2), hi(c3)], S0) and query ρ[ψ(s), lo(c1)], which simplifies to

    ∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s) ∧ x ≠ c1 ∧ y ≠ c1).

Applying Definition 2, we see that this related causal setting has hi(c3) as an achievement cause. Therefore, the original causal setting has hi(c3) as a maintenance cause.

6   Actual Cause

Definitions 2, 3, and 5 are centered around the top level of a given causal setting and fail to capture the interplay between achievement and maintenance causes at the deeper levels of analysis. Specifically, suppose a causal setting C′ arises via the achievement (resp., maintenance) condition during the analysis of another setting C. On its own, C′ may have both achievement and maintenance causes, but, by Definition 3 (resp., 5), only the former are counted as causes of C. On the natural assumption that all causes of a descendant setting are equally relevant to the ancestor setting, the following definition inductively combines all possible interactions between the achievement and maintenance conditions under the generic term actual cause.

Definition 6. Let α be a ground action and σ a narrative. We say that α is an actual cause in a causal setting C = ⟨σ, ϕ(s)⟩ if at least one of the following conditions holds.

 (a) C satisfies the achievement condition via do(α, σ′) ⊑ σ.
 (b) C satisfies the achievement condition via some situation term do(A(t̄), σ′) ⊑ σ, and α is an actual cause in the causal setting ⟨σ′, ρ[ϕ(s), A(t̄)] ∧ Π_A(t̄, s)⟩.
 (c) C satisfies the maintenance condition via do(τ, σ′) ⊏ σ, and α is an actual cause in ⟨σ′, ρ[ϕ(s), τ]⟩.

Example 5 (continued from 4). By Definition 6, the actions hi(c1), hi(c2), hi(c3) are all actual causes of ψ(s). Notice that maintenance causes are just as important as achievement causes: the condition ψ(s) was realized through the properties of the objects c1, c2, but persevered by virtue of c2, c3. Achievement cause analysis alone disregards the role of c3.

Example 6. Consider again our running example. By Definition 6, the 8th action, lo(e2), is a non-trivial actual cause of Q(s), discovered through a combination of two maintenance conditions. Intuitively, it is causally important because it disables the flip-flop, preventing the actions tick (#11) and lo(d) (#10) from destroying Q(s) — both are threats in their respective settings.
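   The mechanics of Definitions 2 and 5 on Example 4 can be made concrete with a small simulation. The following Python sketch is an illustration only, not part of the paper's formalism: it replaces logical entailment with direct state progression over the hi/lo actions (a state is just the set of currently high signals), and it hand-codes the regressed query ρ[ψ(s), lo(c1)] rather than computing regression symbolically. All function and variable names here are our own.

```python
def do_action(state, action):
    """Progress a state (set of high signals) through a hi(x) or lo(x) action."""
    kind, x = action
    return state | {x} if kind == "hi" else state - {x}

def situations(narrative):
    """All states along the narrative, starting from S0 = no high signals."""
    states = [set()]
    for a in narrative:
        states.append(do_action(states[-1], a))
    return states

def achievement_cause(narrative, query):
    """Definition 2: the action after which `query` turns true and stays true."""
    states = situations(narrative)
    for i, a in enumerate(narrative):
        before, after = states[i], states[i + 1:]
        if not query(before) and all(query(s) for s in after):
            return a
    return None  # query holds in S0, is never achieved, or does not persist

# Example 4: psi(s) = "there are at least two high signals"
psi = lambda s: len(s) >= 2
sigma = [("hi", "c1"), ("hi", "c2"), ("hi", "c3"), ("lo", "c1")]
print(achievement_cause(sigma, psi))            # ('hi', 'c2')

# Definition 5: regress psi through the threat lo(c1), then look for an
# achievement cause of the regressed query in the prefix before the threat.
# rho[psi, lo(c1)] = "two distinct high signals, neither of which is c1"
psi_regressed = lambda s: len(s - {"c1"}) >= 2
prefix = sigma[:3]
print(achievement_cause(prefix, psi_regressed)) # ('hi', 'c3')
```

As in Example 4, the primary achievement cause of ψ is hi(c2), while hi(c3) emerges as the maintenance cause that neutralizes the threat lo(c1).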
7   The Halpern-Pearl Approach

Halpern and Pearl (2005), following the motivation of [Lewis, 1974], base their formal account of actual causality on the notion of a counterfactual — a conditional statement whose premise is contrary to fact. They construct counterfactual statements in a formal language whose semantics is defined relative to a causal setting (see below). A causal model M is a tuple ⟨U, V, R, F⟩, where U and V are finite disjoint sets of exogenous and endogenous variables, respectively, with each variable taking various values from an underlying domain. The function R maps every variable Z ∈ U ∪ V to a non-empty set R(Z) of possible values. F is a set of total functions {F_X : ×_{Z ∈ U∪V\{X}} R(Z) → R(X) | X ∈ V} which act like structural equations; each tuple of values assigned to the variables (excluding X) maps to a single value of X. Intuitively, for each endogenous variable X, F_X encodes the entirety of the causal laws which determine X by mapping every value assignment on all variables except X to some value of X. The values of the exogenous variables U are set externally; a tuple V̄_U of values for U is called a context of M, and the pair (M, V̄_U) constitutes a causal setting. The tuple ⟨U, V, R⟩ is called the signature of M. The set of functions F determines a partial dependency order X ⪯ Y on endogenous variables X, Y. Namely, Y depends on X, X ⪯ Y, if either X affects Y directly by virtue of F_Y, or indirectly via intermediate functions. It is ubiquitously assumed that a given causal model is acyclic, that is, for each context V̄_U of M, there is a partial order ⪯ on V that is anti-symmetric, reflexive, and transitive. This assumption guarantees the existence of a unique solution to the equations F.

   The language of the HP approach is as follows. A primitive event is a formula X = V_X where X ∈ V and V_X ∈ R(X). We call a Boolean combination of primitive events a HP query. A general causal formula is one of the form [Y1 ← V_Y1, ..., Yk ← V_Yk]φ, where φ is a HP query and Yi for

   3. No proper sub-conjunction of (X̄ = V̄_X) satisfies 1, 2.

Example 7. Consider the two well-known “Forest Fire” examples from [Halpern and Pearl, 2005; Halpern, 2016]. Both have the same set of endogenous variables: MD (match dropped by an arsonist), L (lightning strike), FF (forest is on fire). In both cases, MD and L are set to true by the context. The model Md for the disjunctive scenario has it that either one of the events (MD = true), (L = true) is sufficient to start a fire, so the equation for FF is FF := (MD = true) ∨ (L = true). The model Mc for the conjunctive scenario requires both events in order to create a forest fire, so FF := (MD = true) ∧ (L = true). By HPm, neither (MD = true) nor (L = true) is a singleton actual cause in Md, because it is impossible to fulfill part 2 of the definition above by setting either variable to false, but the conjunction (MD = true) ∧ (L = true) is deemed an actual cause. In contrast, in Mc, both (MD = true) and (L = true) are singleton actual causes, because setting one of {MD, L} to false makes the forest fire impossible, but their conjunction is not an actual cause because it violates the minimality condition.

8   Formal Relationship with HP

We establish a common ground between the two formalisms by axiomatizing causal models in SC.

   Let (M, V̄_U) be a HP causal setting, where M = ⟨U, V, R, F⟩ is an acyclic causal model and V̄_U a context. We assume that U, V, and the range of R are finite sets and that there are no collisions between the constants for variable and value symbols.

   We construct a BAT D from (M, V̄_U) as follows. We treat U, V, and R(X) for all X ∈ U ∪ V as sets of SC constant symbols for which we introduce unique name axioms. If S = {C1, ..., Cn} is a set of constants and y is a SC object term, the expression y ∈ S denotes (y = C1 ∨ ... ∨ y = Cn). If X ∈ U ∪ V with R(X) = {V1, ..., Vn}, y ∈ R(X) denotes
1 ≤ i ≤ k are distinct variables from V, and VYi ∈ R(Yi ).           (y = V1 ∨ . . . ∨ y = Vn ). To represent functions F, we intro-
(We abbreviate [Y1 ← VY1 , . . . , Yk ← VYk ] as [Ȳ ← V̄Y ] and     duce a situation-independent relational symbol f with arity
call it an intervention.) A primitive event X = VX is satisfied      1 + |U ∪ V| + 1 where the first argument is the name of the
in a causal setting (M, V̄U ), denoted (M, V̄U ) |= (X = VX ),       variable (X) which FX ∈ F determines, the last argument
if X takes on the value VX in the unique solution to the equa-       is the value which FX assigns to X, and the arguments in
tions F once U are set to V̄U . HP queries are interpreted           between are the values of variables U ∪ V arranged in some
following the usual rules for Boolean connectives. Finally,          predetermined order. The actions of D are get(x, v), mean-
(M, V̄U ) |= [Y1 ← VY1 , . . . , Yk ← VYk ]φ iff (M 0 , V̄U ) |= φ   ing compute the value of the endogenous variable x using
where M 0 is obtained from M by replacing each FYi ∈ F               Fx ∈ F, and set(x, v), meaning ignore Fx and force the
by the trivial function FYi : ×Z∈U ∪V\{X} R(Z) 7→ VYi that           value v upon x. The only fluent of D is the relational flu-
fixes Yi to a constant VYi for all the values of arguments.          ent V (x, v, s) stating that v is the value of the endogenous
   In this paper, we focus on the so-called modified HP defini-      variable x in situation s.
tion, or HPm , of actual cause [Hopkins, 2005; Halpern, 2015;           Let Det(x, v, s) be an abbreviation for
2016] because it is the most recent, intuitively appealing,                             V             
                                                                       ∀v1 . . . ∀vN . 1≤i≤N ∃y y = Zi ∧ vi ∈ R(Zi ) ∧
and thoroughly connected with older definitions by formal
results in [Halpern, 2016]. According to this definition, the              ∀v 0 (V (y, v 0 , s) → vi = v 0 ) → f (x, v1 , . . . , vN , v),
conjunction of primitive events X̄ = V̄X (short for X1 =
VX1 ∧ . . . ∧ Xk = VXk ) is an actual cause in (M, V̄U ) of          where U ∪ V = {Z1 , . . . , ZN }. Det(x, v, s) means that the
a HP query φ if all of the following conditions hold:                value of variable x is determined in s to be v. Det(x, v, s)
                                                                     holds true when the values vi which exist in s, when bound
    1. (M, V̄U ) |= (X̄ = V̄X ) and (M, V̄U ) |= φ.                  to appropriate arguments of f , unequivocally assign v to x.
    2. There exists a set W̄ (disjoint from X̄) of variables in V    This means, crucially, that x may be determined as soon as
       with (M, V̄U ) |= (W̄ = V̄W ) and a setting V̄X0 of vari-     some—but not necessarily all—of the variables on which it
       ables X̄ such that (M, V̄U ) |= [X̄ ← V̄X0 , W̄ ← V̄W ]¬φ.    “depends” (as per ) have acquired values.
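To make the Det test and the resulting value-propagation dynamics concrete, here is a minimal Python sketch. All identifiers in it (f, det, terminal_narratives) are illustrative assumptions rather than anything defined in the paper; it enumerates the terminal narratives of get actions for the disjunctive Forest Fire model Md, in which the context sets MD and L to true.

```python
# Minimal sketch (all names are illustrative, not from the paper) of the
# Det(x, v, s) test and the get-action dynamics for the disjunctive
# Forest Fire model Md, where MD and L are set to true by the context.
from itertools import product

VARS = ["MD", "L", "FF"]
DOMAIN = [True, False]

def f(x, assignment):
    """Structural equations: MD and L are fixed by the context,
    FF is true iff at least one of them is true (model Md)."""
    if x in ("MD", "L"):
        return True
    return assignment["MD"] or assignment["L"]

def det(x, v, values):
    """Det(x, v, s): every completion of the current partial value
    assignment (the fluent V in situation s) forces f to assign v to x."""
    free = [z for z in VARS if z != x and z not in values]
    return all(f(x, {**values, **dict(zip(free, combo))}) == v
               for combo in product(DOMAIN, repeat=len(free)))

def terminal_narratives(values=None, sigma=()):
    """Enumerate all executable sequences of get(x, v) actions that end
    in a terminal situation, i.e. one where no action is possible."""
    values = values or {}
    poss = [(x, v) for x in VARS if x not in values
            for v in DOMAIN if det(x, v, values)]
    if not poss:
        yield sigma
    for x, v in poss:
        yield from terminal_narratives(
            {**values, x: v},
            sigma + (f"get({x},{str(v).lower()})",))

narratives = sorted(terminal_narratives())
for n in narratives:
    print(n)
# Exactly four terminal narratives exist: FF can be computed by get as
# soon as either MD or L is already known to be true.
assert len(narratives) == 4
```

Running the sketch yields four narratives, matching the analysis of Example 8 below: FF becomes determined as soon as either MD or L has acquired the value true, so get(FF, true) may occur before the other variable is computed.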
The axioms of D are as follows.

⋀X∈V ¬∃v(V(X, v, S0)),     ⋀VY∈V̄U V(Y, VY, S0),
Poss(set(x, v), s) ↔ ⋁X∈V (x = X ∧ v ∈ R(X)) ∧ ¬∃v′ V(x, v′, s),
Poss(get(x, v), s) ↔ x ∈ V ∧ ¬∃v′ V(x, v′, s) ∧ Det(x, v, s),
V(x, v, do(a, s)) ↔ a = get(x, v) ∨ a = set(x, v) ∨ V(x, v, s).

In words, none of the endogenous variables have values at S0, and all exogenous variables have values at S0 as specified by the context. It is possible to force a value v upon x as long as x is an endogenous variable, v is in the range of x, and x has not yet acquired a value. It is possible to compute the value of x as long as x is an endogenous variable which has not yet acquired a value but which is destined at s to get the value v. Overall, the theory models all possible propagations of values (including interventions) throughout the set of variables according to the structural equations. As we are interested only in those situations where all variables have acquired values, which represent a unique solution to F, we introduce the abbreviation terminal(s) for the expression executable(s) ∧ ¬∃a(Poss(a, s)). In order to refer to situations under specific interventions, we use the abbreviation intervY1←VY1,...,Yk←VYk(s), which stands for

terminal(s) ∧ ∀x∀v.[∃s′(do(set(x, v), s′) ⊑ s) ↔ ⋁1≤i≤k (x = Yi ∧ v = VYi)].

The special case interv∅(s) describes s under an empty intervention.

Finally, given a HP query φ, we obtain a corresponding SC query φ̂ from φ by replacing each primitive event (X = VX) by V(X, VX, s). Thus, φ̂ is ground in all object arguments and uniform in s. It is tedious but straightforward to prove the correctness of our translation relative to a HP causal setting.

Theorem 1. Let (M, V̄U) be a HP causal setting, [Ȳ ← V̄Y]φ an arbitrary causal formula over M, and D a BAT obtained from (M, V̄U). Then (M, V̄U) |= [Ȳ ← V̄Y]φ iff D |= (∀s). intervȲ←V̄Y(s) → φ̂(s).

With this result, we can easily translate HPm to the language of SC and formally compare the two approaches.

Theorem 2. Let (M, V̄U) be a HP causal setting and φ a HP query over M. Let D be a BAT obtained from (M, V̄U) as described above. Let X ∈ V and VX ∈ R(X).

1. (X = VX) is a singleton cause of φ in (M, V̄U) according to HPm if and only if get(X, VX) ∈ σ appears in the achievement causal chain of ⟨σ, φ̂(s)⟩ for every ground situation term σ of D such that D |= interv∅(σ).
2. (X = VX) is a part of a cause of φ in (M, V̄U) according to HPm if and only if there exists a ground situation term σ of D such that D |= interv∅(σ) and get(X, VX) ∈ σ appears in the achievement causal chain of ⟨σ, φ̂(s)⟩.

The proof of Theorem 2 is quite involved and is not shown due to lack of space. By an immediate corollary, achievement cause analysis alone captures all HPm causes.

Example 8. (cont.) Consider a translation of the disjunctive Forest Fire causal model Md. The corresponding terminal narratives σ are

do([get(MD, true), get(L, true), get(FF, true)], S0),
do([get(L, true), get(MD, true), get(FF, true)], S0),
do([get(MD, true), get(FF, true), get(L, true)], S0),
do([get(L, true), get(FF, true), get(MD, true)], S0).

Action get(MD, true) is a part of the causal chain of ⟨σ, V(FF, true, s)⟩ only for the first and third choices of σ. Similarly, get(L, true) is an achievement cause only for the second and fourth choices. By Part 1 of Theorem 2, they are not actual causes according to HPm. By Part 2 of Theorem 2, they are both parts of an actual cause according to HPm. This agrees with the conclusions of the original HP causal model.

9 Discussion

Our approach shifts the focus away from causal models and towards a first-order logic representation of the underlying dynamics of the scenario. There are other attempts to step away from purely counterfactual analysis [Vennekens et al., 2010; Vennekens, 2011; Beckers and Vennekens, 2012; 2016], but they share the same expressivity limitations. Curiously, [Vennekens et al., 2010] consider SC to be too expressive, stating that "SC contains many features that go beyond what is traditionally expressed in a causal model. For typical causal reasoning problems, these features are not needed". To refute this and to see where we stand with respect to other approaches, let us consider three telling examples featured in [Beckers and Vennekens, 2012; 2016] and discussed in other papers. Assume all fluents are false at S0.

Example 9. Assassin poisons victim's coffee, victim drinks it and dies. If assassin had not poisoned the coffee, his backup would have, and victim would still have died.

This example from [Hitchcock, 2007] illustrates early preemption, namely that the causal link from the backup to victim's death is preempted by the assassin. Let the actions be assassin and backup (the two acts of poisoning the coffee) and drink. Let the fluents be P(s), meaning "coffee contains poison", and D(s), meaning "the victim is dead".

Poss(assassin, s),     Poss(backup, s),
Poss(drink, s) ↔ P(s),
P(do(a, s)) ↔ a = assassin ∨ a = backup ∨ P(s),
D(do(a, s)) ↔ [a = drink ∧ P(s)] ∨ D(s).

The narrative is σ = do([assassin, drink], S0). By our analysis, all of σ is an achievement causal chain. This agrees with HP and [Hitchcock, 2007] but disagrees with Beckers and Vennekens, who believe that assassin is not an actual cause. Rather than appeal to intuition, we just point out that the causal roles assumed by the assassin and his backup are clearly distinct in the given scenario.

Example 10. An engineer is standing by a switch in the railroad track. A train approaches in the distance. She flips the switch, so that the train travels down the left-hand track instead of the right. Since the tracks re-converge up ahead, the train arrives at its destination all the same.
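The dynamics of this railroad scenario are axiomatized below. As a sanity check on those axioms, the following hypothetical Python sketch (not part of the paper's formal development) simulates the precondition and successor state axioms directly and verifies the basic counterfactual fact: the train arrives whether or not flip occurs. It does not reproduce the achievement-cause analysis itself.

```python
# Hypothetical prototype of the precondition and successor state axioms
# for the railroad scenario (Example 10). It checks only that the train
# arrives both with and without the flip action.
def step(s, a):
    """One application of the successor state axioms."""
    return {
        "In":      s["In"] and a != "fork1",
        "L":       (a == "fork1" and s["Sw"]) or (s["L"] and a != "fork2"),
        "R":       (a == "fork1" and not s["Sw"]) or (s["R"] and a != "fork2"),
        "Out":     a == "fork2" or s["Out"],
        "Sw":      a == "flip" or s["Sw"],
        "Arrived": a == "arrive" or s["Arrived"],
    }

def poss(s, a):
    """Action precondition axioms."""
    return {"flip": True,
            "fork1": s["In"],
            "fork2": s["L"] or s["R"],
            "arrive": s["Out"]}[a]

def run(actions):
    """Execute a narrative from S0, checking executability at each step."""
    s = dict(In=True, L=False, R=False, Out=False, Sw=False, Arrived=False)
    for a in actions:
        assert poss(s, a), f"{a} is not executable"
        s = step(s, a)
    return s

# Both the actual narrative and the narrative without flip are executable
# and lead to arrival (via the left and right tracks, respectively).
assert run(["flip", "fork1", "fork2", "arrive"])["Arrived"]
assert run(["fork1", "fork2", "arrive"])["Arrived"]
```

The second assertion is the counterfactual observation behind the analysis: removing flip leaves an executable narrative in which the train still arrives, via the right-hand track.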
Example 10, taken from [Paul and Hall, 2013], illustrates the distinction between causation and determination. Beckers and Vennekens state that it is isomorphic to the previous one, while the intuition about its causes is the polar opposite. In fact, the two examples are isomorphic only within the expressivity bounds of causal models and CP-logic.

Let the fluent In(s) mean that the train is on the section of the track leading to the first junction, let L(s) (resp., R(s)) mean that it is on the left-hand track (resp., right), and let Out(s) mean that it is on the section of the track past the second junction. Let the fluent Sw(s) mean that the switch is engaged and Arrived(s) that the train has arrived. Let the actions be flip (engineer flips the switch), fork1 (train passes the first junction), fork2 (train passes the second junction), and arrive (self-explanatory). Let only In(s) hold at S0.

Poss(flip, s),     Poss(fork1, s) ↔ In(s),
Poss(fork2, s) ↔ L(s) ∨ R(s),     Poss(arrive, s) ↔ Out(s),
In(do(a, s)) ↔ In(s) ∧ a ≠ fork1,
L(do(a, s)) ↔ a = fork1 ∧ Sw(s) ∨ L(s) ∧ a ≠ fork2,
R(do(a, s)) ↔ a = fork1 ∧ ¬Sw(s) ∨ R(s) ∧ a ≠ fork2,
Out(do(a, s)) ↔ a = fork2 ∨ Out(s),
Sw(do(a, s)) ↔ a = flip ∨ Sw(s),
Arrived(do(a, s)) ↔ a = arrive ∨ Arrived(s).

The narrative σ is do([flip, fork1, fork2, arrive], S0). By our analysis, the flip action is not an actual cause of the train's arrival. This conclusion is elaboration tolerant [McCarthy, 1987] as long as the relation between L, R, Sw is preserved. For HP, the answer depends on how the model is constructed and which definition is applied. [Pearl, 2000] calls this class of problems "switching causation" and argues that flipping the switch is a cause (see Section 10.3.4, pp. 324–325). Both [Pearl, 2000] and [Halpern and Pearl, 2005] argue that the switch is a cause, while, according to HPm, it is not.

Example 11. Assistant Bodyguard puts a harmless antidote in victim's coffee. Buddy, who knows about the antidote, poisons the coffee; he would not have done so otherwise. Victim drinks the coffee and survives.

This example is called "Careful Poisoning" in [Weslake, 2013] and left as a challenge for future work. Let the actions be antidote, poison, drink. The fluents P(s), D(s) are as before, and the fluent A(s) means "coffee contains antidote".

Poss(antidote, s),     Poss(drink, s),
Poss(poison, s) ↔ A(s),
A(do(a, s)) ↔ a = antidote ∨ A(s),
P(do(a, s)) ↔ a = poison ∨ P(s),
D(do(a, s)) ↔ [a = drink ∧ P(s) ∧ ¬A(s)] ∨ D(s).

Here σ = do([antidote, poison, drink], S0), so D |= ¬D(σ). In fact, ¬D(s) holds throughout the narrative, so it has no achievement causes. It has no maintenance causes either: drink is a threat to ¬D(s), yielding a new causal setting ⟨do([antidote, poison], S0), ¬D(s) ∧ (¬P(s) ∨ A(s))⟩ with no achievement causes. The action poison could be a threat, but it does not qualify as such by our definition: no executable situation admits poison in the absence of the antidote, owing to the precondition for poison. Therefore, the given causal setting contains no causes. This agrees with Beckers and Vennekens and disagrees with Hitchcock and HP.

There exist multiple examples where the results of the HP approach cannot be reconciled with intuitive understanding — which, incidentally, the approach treats as the only measure of merit. This problem was traced by [Hopkins and Pearl, 2007] and [Glymour et al., 2010] to the limited expressiveness of causal models. Causal models do not distinguish between enduring conditions and transitions and cannot model the absence of an event except as the presence of its opposite; examples where this leads to absurd conclusions are easy to come by, see e.g. [Hopkins and Pearl, 2007]. An explicit notion of an action solves these problems.

Addressing the lack of expressivity, [Hopkins and Pearl, 2007] re-defined causal models in the language of SC, but they preserved the implicit possible-worlds semantics of causal formulas and dropped the requirement that situations be executable. The latter is especially problematic, since dismissing preconditions results in paradoxes and makes inferences untrustworthy. Our work reaps the benefits which [Hopkins and Pearl, 2007] aimed at but does not suffer from the issues associated with giving a meaningful definition of a counterfactual in SC, which appears to be no easy task. A counterfactual query not relativized to a particular scenario can be formulated in SC without special tools [Lin and Soutchanski, 2011], but it is not clear how such queries can be useful for defining actual causality. An original study conducted in [Costello and McCarthy, 1999] perhaps comes closest to a good definition of a counterfactual in SC, but it operates outside of the well-studied basic action theories and is not concerned with actual causality. There exist numerous studies of the semantics of causal models and of the relationship of causal models to various logics, such as an elaborate axiomatization of causal models [Halpern, 2000] and a logical representation [Bochman and Lifschitz, 2015] of causal models in a non-monotonic logic which encompasses general causation as a foundational principle. The approach of [Finzi and Lukasiewicz, 2003] combines causal models with independent choice logic. Finally, there are methodological or technical critiques of the causal model approach, exemplified by [Glymour et al., 2010], [Menzies, 2014], [Livengood, 2013], [Weslake, 2013] and [Baumgartner, 2013].

It is clear that a broader definition of actual cause requires more expressive action theories that can model not only sequences of actions, but can also include explicit time and concurrent actions. Only after that can one try to analyze some of the popular examples of actual causation formulated in the philosophical literature; some of those examples sound deceptively simple, but faithful modelling of them requires time, concurrency and natural actions [Reiter, 2001]. This does not imply that future research should focus only on popular scenarios proposed by philosophers. To the contrary, we firmly believe that the future of causal research is in elaborating a computational methodology for the analysis of complex technical systems.

Acknowledgement: We thank the Natural Sciences and Engineering Research Council of Canada for financial support.
References

[Baumgartner, 2013] Michael Baumgartner. A regularity theoretic approach to actual causation. Erkenntnis, 78(Supplement 1 "Actual Causation"):85–109, 2013.

[Beckers and Vennekens, 2012] Sander Beckers and Joost Vennekens. Counterfactual dependency and actual causation in CP-logic and structural models: a comparison. In Proceedings of the Sixth Starting AI Researchers Symposium, volume 241, pages 35–46, 2012.

[Beckers and Vennekens, 2016] Sander Beckers and Joost Vennekens. A principled approach to defining actual causation. Synthese, Oct 2016.

[Bochman and Lifschitz, 2015] Alexander Bochman and Vladimir Lifschitz. Pearl's causality in a logical setting. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 1446–1452. AAAI Press, 2015.

[Costello and McCarthy, 1999] Tom Costello and John McCarthy. Useful counterfactuals. Electron. Trans. Artif. Intell., 3(A):51–76, 1999.

[Eiter and Lukasiewicz, 2002] Thomas Eiter and Thomas Lukasiewicz. Complexity results for structure-based causality. Artif. Intell., 142(1):53–89, 2002.

[Finzi and Lukasiewicz, 2003] Alberto Finzi and Thomas Lukasiewicz. Structure-based causes and explanations in the independent choice logic. In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI'03, pages 225–323, San Francisco, CA, USA, 2003. Morgan Kaufmann Publishers Inc.

[Glymour et al., 2010] Clark Glymour, David Danks, Bruce Glymour, Frederick Eberhardt, Joseph Ramsey, Richard Scheines, Peter Spirtes, Choh Man Teng, and Jiji Zhang. Actual causation: a stone soup essay. Synthese, 175(2):169–192, 2010.

[Halpern and Pearl, 2005] Joseph Y. Halpern and Judea Pearl. Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science, 56(4):843–887, 2005.

[Halpern, 2000] Joseph Y. Halpern. Axiomatizing causal reasoning. J. Artif. Intell. Res. (JAIR), 12:317–337, 2000.

[Halpern, 2015] Joseph Y. Halpern. A modification of the Halpern-Pearl definition of causality. In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI 2015, Buenos Aires, Argentina, July 25–31, 2015, pages 3022–3033, 2015.

[Halpern, 2016] Joseph Y. Halpern. Actual Causality. The MIT Press, 2016. ISBN 9780262035026.

[Hitchcock, 2007] Christopher Hitchcock. Prevention, preemption, and the principle of sufficient reason. The Philosophical Review, 116(4):495–532, 2007.

[Hopkins and Pearl, 2007] Mark Hopkins and Judea Pearl. Causality and counterfactuals in the situation calculus. Journal of Logic and Computation, 17(5):939–953, 2007.

[Hopkins, 2005] Mark Hopkins. The Actual Cause: From Intuition to Automation. PhD thesis, University of California Los Angeles, 2005.

[Lewis, 1974] David Lewis. Causation. The Journal of Philosophy, 70(17):556–567, 1974.

[Lin and Soutchanski, 2011] Fangzhen Lin and Mikhail Soutchanski. Causal theories of actions revisited. In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning, 2011.

[Livengood, 2013] Jonathan Livengood. Actual causation and simple voting scenarios. Nous, 47(2):316–345, 2013.

[McCarthy and Hayes, 1969] John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. Readings in Artificial Intelligence, pages 431–450, 1969.

[McCarthy, 1987] John McCarthy. Generality in artificial intelligence. Commun. ACM, 30(12):1029–1035, 1987.

[Menzies, 2014] Peter Menzies. Counterfactual theories of causation. In Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/causation-counterfactual/, 2014. Retrieved on January 15, 2017.

[Paul and Hall, 2013] L. A. Paul and Ned Hall. Causation: A User's Guide. Oxford University Press, ISBN 978-0199673452, 2013.

[Pearl, 1998] Judea Pearl. On the definition of actual cause. Technical report, R-259, University of California Los Angeles, 1998.

[Pearl, 2000] Judea Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 1st edition, 2000.

[Reiter, 1991] Raymond Reiter. The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression. Artificial Intelligence and Mathematical Theory of Computation: Papers in Honor of John McCarthy, 27:359–380, 1991.

[Reiter, 2001] Raymond Reiter. Knowledge in Action: Logical Foundations for Specifying and Implementing Dynamical Systems. MIT Press, Cambridge, 2001.

[Simon, 1977] Herbert A. Simon. Causal ordering and identifiability. In Models of Discovery, pages 53–80. Springer, 1977.

[Vennekens et al., 2010] Joost Vennekens, Maurice Bruynooghe, and Marc Denecker. Embracing events in causal modelling: Interventions and counterfactuals in CP-logic. In European Workshop on Logics in Artificial Intelligence, pages 313–325. Springer, 2010.

[Vennekens, 2011] Joost Vennekens. Actual causation in CP-logic. TPLP, 11(4-5):647–662, 2011. http://arxiv.org/abs/1107.4865.

[Weslake, 2013] Brad Weslake. A Partial Theory of Actual Causation. http://bweslake.s3.amazonaws.com/research/papers/weslake_ac.pdf, 2013. Version c4eb488. Retrieved on July 18, 2017.