<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Situation Calculus Semantics for Actual Causality</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Vitaliy Batusov</string-name>
          <email>vbatusov@cse.yorku.ca</email>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mikhail Soutchanski</string-name>
          <email>mes@scs.ryerson.ca</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Ryerson University</institution>
          ,
          <addr-line>Toronto</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>York University</institution>
          ,
          <addr-line>Toronto</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>The state-of-the-art definitions of actual cause by Pearl and Halpern suffer from the modest expressivity of causal models. We develop a new definition of actual cause in the context of situation calculus (SC) basic action theories. As a result, we avoid the paradoxes that arise in causal models and can identify complex actual causes of conditions expressed in first-order logic. We provide a formal translation from causal models to SC and establish a relationship between the definitions. Using examples, we show that long-standing disagreements between alternative definitions of actual causality can be mitigated by faithful modelling.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
      <p>Actual causality, also known as token-level causality, is
concerned with finding in a given scenario a singular event
that caused another event. This is in contrast to type-level
causality which is concerned with universal causal
mechanisms governing the world. The leading line of
computational inquiry into actual causality was pioneered by [Pearl,
1998; 2000] and continued by [Halpern and Pearl, 2005;
Halpern, 2000; Eiter and Lukasiewicz, 2002; Hopkins, 2005;
Halpern, 2015; 2016] and in other publications. We call it the
HP approach. It is based on the concept of structural
equations [Simon, 1977] and implemented in the framework of
causal models. The HP approach follows the Humean
counterfactual definition of causation, which posits that saying “an
event A caused an outcome B” is the same as saying “if A had
not been, then B never had existed”. This definition is
well-known to suffer from the problem of preemption: it could be
the case that in the absence of event A, B would still have
occurred due to another event, which in the original scenario
was preempted by A. HP address this by performing
counterfactual analysis only under carefully selected contingencies
which suspend some subset of the model’s mechanisms.
Selecting proper contingencies proved to be a challenging task;
as mentioned in [Halpern, 2016] on p.27, “The jury is still out
on what the ‘right’ definition of causality is”.</p>
      <p>The HP approach is prone to producing results that
cannot be reconciled with intuitive understanding due to the
limited expressiveness of causal models [Hopkins, 2005;
Hopkins and Pearl, 2007]. The ontological commitments of
structural causal models resemble those of propositional logic:
they have no objects, no relationships, no time, and no support for
quantified causal queries. As a remedy, [Hopkins, 2005;
Hopkins and Pearl, 2007] leverage the expressive power of
first-order logic and the robustness of the situation calculus
(SC) [Reiter, 2001]. To formulate counterfactuals within SC,
they allow arbitrary modifications in a sequence of actions,
e.g. removing actions that serve as preconditions for
subsequent actions. They do not define actual causality.</p>
      <p>Given that theories of actual causality based on structural
equations share the same ailments [Menzies, 2014; Glymour
et al., 2010], it seems natural to explore actual causality from
a different perspective. We do this in the language of SC
under the classical Tarskian semantics, where the notion of
a cause naturally aligns with the notion of an action, and the
effect can be specified by a FOL formula with quantifiers over
object variables. In contrast to HP, whose analysis is based
on observing the end results of interventions, we analyze
the dynamics which lead to the end results. Our
developments are based on a small set of plausible intuitions.</p>
      <p>The next section briefly summarizes SC. Section 3
motivates our approach and supplies a running example.
Section 4 characterizes causes which achieve an effect. Section 5
explores maintenance causes—actions which protect existing
conditions from being lost. In Section 6, we combine
achievement and maintenance causes into an all-encompassing
notion of actual cause. In Section 7, we outline the HP approach
and, in Section 8, formally connect it to ours. Finally, we
briefly compare our definition to others using examples and
discuss related work.
</p>
    </sec>
    <sec id="sec-2">
      <title>Situation Calculus</title>
<p>In the situation calculus [McCarthy and Hayes, 1969; Reiter,
2001], the constant S0 denotes the initial situation, which
represents an empty list of actions, while the complex situation
term do([α1, …, αn], S0) represents the situation that results
from executing actions α1, …, αn consecutively, so that α1 is
executed in S0 and αn is executed last. If none of the action
terms αi have variables, then we call this situation term an
(actual) narrative. An action term αi may occur in the
narrative more than once at different positions. The set of all
situations can be visualized as a tree with a partial-order
relation s1 ⊏ s2 on situations s1, s2, and s1 ⊑ s2 abbreviates
s1 ⊏ s2 ∨ s1 = s2. It is characterized by the foundational
domain-independent axioms Σ included in a basic action
theory (BAT) D that also includes axioms DS0 describing the
initial situation, and action precondition axioms Dap using
the predicate Poss(a, s) to say when an action a is
possible in s. For each action function there is one precondition
axiom Poss(A(x̄), s) ↔ Π_A(x̄, s), where, as usual, all free
variables are implicitly ∀-quantified, and Π_A(x̄, s) is a formula
uniform in s, meaning that it has no occurrences of Poss or ⊏,
no other situation terms, and no quantifiers over situations. For
each fluent F, D includes a successor state axiom (SSA)
    F(x̄, do(a, s)) ↔ γ⁺_F(x̄, a, s) ∨ F(x̄, s) ∧ ¬γ⁻_F(x̄, a, s),
where the fluent predicate F(x̄, s) represents a
situation-dependent relation over a tuple of objects x̄, and the uniform formulas
γ⁺_F(x̄, a, s) and γ⁻_F(x̄, a, s) specify the action terms that, under
certain application-dependent conditions, have a positive
effect on fluent F (make F true) or a negative effect (make it
false), respectively. The SSAs are derived under the causal
completeness assumption [Reiter, 1991] that all effects of
actions on fluents are explicitly represented. A number
of auxiliary axioms, such as unique name axioms, are also
included in D. The common abbreviation executable(s) means
that each action mentioned in the situation term s was
possible in the situation in which it was executed. The basic
computational task, called the projection problem, is the task of
establishing whether a BAT entails a sentence φ(σ) for an
executable ground situation σ, where φ(s) is a formula uniform
in s. This problem can be solved using the one-step
regression operator ρ. The expression ρ[φ, α] denotes the formula
obtained from φ by replacing each fluent atom F in φ with
the right-hand side of the SSA for F, where the action variable
a is instantiated with the ground action α, and then simplified
using unique name axioms for actions and constants.
Similarly to the theorem about multi-step regression R presented
in [Reiter, 1991], one can prove that, given a BAT D, a
formula φ(s) uniform in s, and a ground action term α, we have
that D ⊨ ∀s. φ(do(α, s)) ↔ ρ[φ(s), α].</p>
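<p>The one-step regression identity above can be checked computationally in a propositional setting. The following Python sketch (our illustration; the two-fluent mini-domain and all identifiers are hypothetical) evaluates the regression of a query through a ground action by applying the query to the successor state, and confirms by brute force that the result agrees with evaluating the query after the action on every state.</p>

```python
# A minimal propositional sketch (our illustration, not part of the theory):
# a situation is a dict of fluent values, the successor-state axioms are a
# Python function, and regressing phi through a ground action alpha amounts
# to evaluating phi on the successor state, since rho[phi, alpha](s) is
# equivalent to phi(do(alpha, s)).
from itertools import product

def do_act(action, state):
    """SSAs for two illustrative fluents (hypothetical mini-domain)."""
    return {
        "ClockOn": action == "c_on" or state["ClockOn"],
        "HighD": action == "hi_d" or (state["HighD"] and action != "lo_d"),
    }

def regress(phi, action):
    """One-step regression of phi through action, as a formula on states."""
    return lambda state: phi(do_act(action, state))

phi = lambda s: s["ClockOn"] and s["HighD"]   # effect of interest
rho = regress(phi, "hi_d")                    # regression of phi through hi_d

# Check the identity "phi(do(alpha, s)) iff regression of phi through alpha,
# evaluated at s" by brute force over all states:
for c, h in product([False, True], repeat=2):
    s = {"ClockOn": c, "HighD": h}
    assert phi(do_act("hi_d", s)) == rho(s)
print("regression identity verified")
```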
    </sec>
    <sec id="sec-3">
      <title>Motivation</title>
      <p>We propose to axiomatize a dynamic world using a SC
theory and derive actual causality from first principles.
Specifically, to represent a “scenario”, we consider a BAT D and
a narrative describing the actions or events which transpired
in the world characterized by D. We do not formally
distinguish between agent actions and nature’s events. The
narrative is specified by an executable ground situation term σ,
called the “actual situation”. An effect for which we seek to
identify causes is given by a formula φ(s) uniform in the
situation s. Since actions are the sole source of change in a BAT,
we identify the set of potential causes of an effect φ with the
set of all ground action terms occurring in σ.</p>
      <p>Example 1. A D flip-flop is a digital circuit capable of storing
one bit of information. A basic D flip-flop has two Boolean
inputs, D and enable, and one output, Q. Each input and
output signal can be either at the low level (modelled as false),
or at the high level (modelled as true). If an input enable
is high, every time the clock “ticks”, the output Q assumes
the value of the main input D and maintains it until the next
tick. When the signal enable is low, the flip-flop preserves
the value of Q regardless of D and ticks.
(Figure 1, showing the circuit, appears here.)</p>
      <p>Consider the circuit in Figure 1. It consists of a D flip-flop,
shown as a box, whose enable input is controlled by an
ORgate such that at least one of the signals e1; e2 needs to be
high in order for the flip-flop to produce the output Q = D.</p>
      <p>Let d, e1, and e2 be constants that represent the input
signals. Let the action functions be hi(x), lo(x), tick, and c on,
where the first two actions set signal x to high or to low
voltage level, respectively, tick represents the action of the clock,
and c on turns the clock on, making tick possible. The
fluent ClockOn(s) represents the state of the clock, High(x; s)
represents the logical value of signal x, En(s) represents the
output of the OR-gate, and Q(s) is the output of the flip-flop.</p>
<p>Let the narrative σ be do([c on, hi(d), tick, hi(e1), hi(e2),
tick, lo(e1), lo(e2), tick, lo(d), tick], S0), and let the
effect of interest be Q(s). In the initial situation, we have that
∀x(¬High(x, S0)), ¬Q(S0), and ¬En(S0). The following BAT
models the operation of the circuit.</p>
      <sec id="sec-3-1">
        <title>The Circuit BAT</title>
        <p>Poss(tick, s) ↔ ClockOn(s);    Poss(c on, s);
Poss(hi(x), s) ↔ ¬High(x, s);
Poss(lo(x), s) ↔ High(x, s);
ClockOn(do(a, s)) ↔ a = c on ∨ ClockOn(s);
High(x, do(a, s)) ↔ a = hi(x) ∨ High(x, s) ∧ a ≠ lo(x);
En(do(a, s)) ↔ a = hi(e1) ∨ a = hi(e2) ∨
    En(s) ∧ ¬[a = lo(e1) ∧ ¬High(e2, s)] ∧
    ¬[a = lo(e2) ∧ ¬High(e1, s)];
Q(do(a, s)) ↔ [a = tick ∧ En(s) ∧ High(d, s)] ∨
    Q(s) ∧ ¬[a = tick ∧ En(s) ∧ ¬High(d, s)].</p>
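<p>As a sanity check, the successor-state axioms above can be executed directly. The following Python sketch (our own illustration; identifiers such as c_on and hi_e1 are hypothetical renderings of the action terms) steps through the narrative and reproduces the fluent values of Figure 2, with Q first becoming true after the sixth action.</p>

```python
# Propositional simulation of the circuit BAT (an illustrative sketch).
# Each successor-state axiom becomes one entry of the `step` function.

def step(s, a):
    return {
        "ClockOn": a == "c_on" or s["ClockOn"],
        "High_d": a == "hi_d" or (s["High_d"] and a != "lo_d"),
        "High_e1": a == "hi_e1" or (s["High_e1"] and a != "lo_e1"),
        "High_e2": a == "hi_e2" or (s["High_e2"] and a != "lo_e2"),
        "En": (a in ("hi_e1", "hi_e2") or
               (s["En"] and not (a == "lo_e1" and not s["High_e2"])
                        and not (a == "lo_e2" and not s["High_e1"]))),
        "Q": ((a == "tick" and s["En"] and s["High_d"]) or
              (s["Q"] and not (a == "tick" and s["En"] and not s["High_d"]))),
    }

S0 = {f: False for f in ["ClockOn", "High_d", "High_e1", "High_e2", "En", "Q"]}
narrative = ["c_on", "hi_d", "tick", "hi_e1", "hi_e2",
             "tick", "lo_e1", "lo_e2", "tick", "lo_d", "tick"]

states = [S0]
for a in narrative:
    states.append(step(states[-1], a))

q_trace = [s["Q"] for s in states]
print(q_trace)  # Q is false up to action #6 (the second tick), true afterwards
```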
        <p>Figure 2 graphically shows the truth values, relative to D,
of the key ground fluents in situation S0 and after each
subsequent action in σ. Observe that all fluents are initially false,
shown as the thick lower edges, the #1 action c on makes
subsequent tick actions (#3, #6, #9, #11) possible, the
actions hi(d), hi(e1), hi(e2), lo(e1) change the voltage levels
of the corresponding signals, hi(e1) also changes the state of
En(s), the second occurrence of tick (#6) makes the output
Q(s) true, but other occurrences of tick are inconsequential.</p>
        <p>It is obvious that the 6-th action, tick, is a cause of Q(s)
in σ, having acted as the proverbial last straw that broke the
camel’s back, but so are the actions hi(d) and hi(e1), having
created the right circumstances for the back-breaking. Action
#6 would accomplish nothing had the flip-flop not been
enabled and the input bit set to high. The task before us is to
introduce general formal criteria for identifying such actions.</p>
        <p>We axiomatically recognize two kinds of causal roles
which events may assume. Achievement causes are the events
which realize—in whole or in part—either the condition of
interest or the preconditions of other achievement causes.
Maintenance causes are the events which prevent other events
from falsifying the condition of interest. We use the generic
term actual cause to refer to an event which contributes to the
effect of interest via a combination of these causal roles.
Before we proceed, we, like HP, introduce the notion of a causal
setting which formally captures a scenario.</p>
        <p>Definition 1. A (SC) causal setting is a triple ⟨D, σ, φ(s)⟩,
where D is a BAT, σ is a ground situation term such that D ⊨
executable(σ), and φ(s) is a SC formula uniform in s such
that D ⊨ ∃s(executable(s) ∧ φ(s)).</p>
        <p>Since the BAT D is fixed in our approach, we typically
refer to ⟨D, σ, φ(s)⟩ as just ⟨σ, φ(s)⟩.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>The Achievement Causal Chain</title>
      <p>Intuition provides few definite truths about actual causality,
but we hold the following to be self-evident: if some action
α of the action sequence σ triggers the formula φ(s) to change
its truth value from false to true relative to D, and if there is
no action in σ after α that changes the value of φ(s) back to
false, then α is an actual cause of achieving φ(s) in σ. This
statement is sound because (a) the narrative determines a
total linear order on its actions, (b) change is associated with
a particular element of that order, and (c) no change comes
about other than by an action of σ. The next definition states
this observation formally.</p>
      <p>Definition 2. A causal setting C = ⟨σ, φ(s)⟩ satisfies the
achievement condition via the situation term do(α, σ′) ⊑ σ
iff D ⊨ ¬φ(σ′) ∧ ∀s (do(α, σ′) ⊑ s ⊑ σ → φ(s)).</p>
      <p>Whenever a causal setting C satisfies the achievement
condition via do(α, σ′), we say that the ground action α executed
in σ′ is a (primary) achievement cause in C.</p>
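<p>Under the assumption that the truth value of the effect can be computed in every situation of the narrative (e.g., by regression), Definition 2 suggests a simple mechanical test. The sketch below (our illustration, not the paper's algorithm; all Python names are hypothetical) scans the trace of truth values for the last false-to-true flip after which the effect holds to the end of the narrative.</p>

```python
# Checking Definition 2 mechanically on a ground narrative (a sketch).

def primary_achievement_cause(narrative, effect_trace):
    """effect_trace[i] is the effect's truth value after the first i actions
    (len(effect_trace) == len(narrative) + 1).  Returns (position, action)
    of the primary achievement cause, or None if there is none."""
    if not effect_trace[-1]:
        return None                # effect not entailed at the end: no cause
    i = len(effect_trace) - 1
    while i > 0 and effect_trace[i - 1]:
        i -= 1                     # walk back while the effect stays true
    if i == 0:
        return None                # effect held at S0: cause masked by S0
    return (i, narrative[i - 1])   # action #i flipped the effect to true

# The trace of Q(s) in the running example: false up to action #6.
narrative = ["c_on", "hi_d", "tick", "hi_e1", "hi_e2", "tick",
             "lo_e1", "lo_e2", "tick", "lo_d", "tick"]
q_trace = [False] * 6 + [True] * 6
print(primary_achievement_cause(narrative, q_trace))  # (6, 'tick')
```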
      <p>If a causal setting does not satisfy the achievement
condition and φ(s) is non-tautological and holds throughout the
narrative σ, then we ascribe the achievement of φ(s) to an
unknowable cause masked by the initial situation S0. If φ(s)
is a tautology, it legitimately has no cause. If φ(σ) is not
entailed by D, then its achievement cause truly does not exist.
Example 2 (continued). The entailment of Definition 2 holds
when α is tick and σ′ is do([c on, hi(d), tick, hi(e1),
hi(e2)], S0), meaning that the action #6 (tick executed
after σ′) is the achievement cause of Q(s) in σ.</p>
      <p>The notion of the achievement condition forms our basic
tool which, when used together with the single-step
regression operator ρ, helps us not only find the single action that
brings about the effect of interest, but also identify the actions
that build up to it. Intuitively, ρ[φ(s), α] is the weakest
precondition that must hold in a previous situation in order for
φ(s) to hold after performing α in σ. If we prove α to be an
achievement cause of φ(s) via do(α, σ′), we can use regression
to obtain a formula that holds at σ′ and constitutes a
necessary and sufficient condition for the achievement of φ(s) via
α. This new formula may have an achievement cause of its
own which, by virtue of ρ, also constructively contributes to
the achievement of φ(s). By repeating this process, we can
uncover the entire chain of actions that incrementally build up
to the achievement of the ultimate effect. At the same time,
we must not overlook the condition which makes the
execution of α in σ possible at all. This condition is conveniently
captured by the right-hand side Π_α(s) of the precondition
axiom for α and may have achievement causes of its own. The
following inductive definition formalizes these intuitions.
Definition 3. If a causal setting C = ⟨σ, φ(s)⟩
satisfies the achievement condition via some situation term
do(A(t̄), σ′) ⊑ σ and α is an achievement cause in the causal
setting ⟨σ′, ρ[φ(s), A(t̄)] ∧ Π_A(t̄, s)⟩, then α is an
achievement cause in C.</p>
      <p>Clearly, the process of discovering intermediary
achievement causes using single-step regression repeatedly cannot
continue beyond S0. Since the given narrative is a finite
sequence, the achievement causes of C also form a finite
sequence which we call the achievement causal chain of C.
Note that the actions of the achievement causal chain need
not be adjacent in the action sequence of σ.</p>
      <p>Example 3 (continued). We found that the action tick (#6)
executed in σ′ = do([c on, hi(d), tick, hi(e1), hi(e2)], S0) is
the achievement cause of Q(s). We can now use Definition 3
to find the complete causal chain in σ leading up to Q(s). The
one-step regression of Q(s) through tick is
    ρ[Q(s), tick] = (¬En(s) ∨ High(d, s)) ∧ (En(s) ∨ Q(s)).
Call ψ(s) the conjunction of this formula and ClockOn(s),
the precondition of tick. By Definition 2, the achievement
cause of ψ(s) is the action hi(e1) executed in do([c on,
hi(d), tick], S0). Therefore, hi(e1) is a secondary
achievement cause of Q(s). Applying Definition 3 again, we
formulate another causal setting with the query
    ρ[ψ(s), hi(e1)] ∧ Π_{hi(e1)}(s),
which simplifies to
    High(d, s) ∧ ClockOn(s) ∧ ¬High(e1, s),
and situation do([c on, hi(d), tick], S0), where hi(d) is an
achievement cause as a part of do([c on, hi(d)], S0).
Regressing High(d, s) ∧ ClockOn(s) ∧ ¬High(e1, s) just past
hi(d), we obtain ¬High(e1, s) ∧ ClockOn(s), for which
c on is an achievement cause. Notice that the first action,
c on, established preconditions for tick; were it not for c on,
tick would have never happened! There are no more
achievement causes of Q(s) in σ aside from those already identified:
c on, hi(d), hi(e1), tick. Observe that these are indeed the
key events that lead to the achievement of Q(s) in σ.</p>
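<p>The chain of Example 3 can also be recovered mechanically. In the ground propositional case, the regression of a query through a ground action can be evaluated by applying the query to the successor state, so Definitions 2 and 3 yield the following sketch (our own illustration; the Python names are hypothetical renderings of the action and fluent terms).</p>

```python
# A computational sketch of the achievement causal chain for the circuit.
# For a ground action a, regression of phi through a is evaluated here as
# phi(step(s, a)); each recursion step conjoins the action's precondition.

def step(s, a):
    """Successor-state axioms of the circuit BAT."""
    return {
        "ClockOn": a == "c_on" or s["ClockOn"],
        "High_d": a == "hi_d" or (s["High_d"] and a != "lo_d"),
        "High_e1": a == "hi_e1" or (s["High_e1"] and a != "lo_e1"),
        "High_e2": a == "hi_e2" or (s["High_e2"] and a != "lo_e2"),
        "En": (a in ("hi_e1", "hi_e2") or
               (s["En"] and not (a == "lo_e1" and not s["High_e2"])
                        and not (a == "lo_e2" and not s["High_e1"]))),
        "Q": ((a == "tick" and s["En"] and s["High_d"]) or
              (s["Q"] and not (a == "tick" and s["En"] and not s["High_d"]))),
    }

def poss(a, s):
    """Right-hand sides of the precondition axioms."""
    if a == "tick":
        return s["ClockOn"]
    if a == "c_on":
        return True
    kind, sig = a.split("_")
    return not s["High_" + sig] if kind == "hi" else s["High_" + sig]

def achievement_chain(states, narrative, phi, chain=None):
    """Definitions 2 and 3: achievement causal chain of <narrative, phi>."""
    chain = [] if chain is None else chain
    vals = [phi(s) for s in states]
    if not vals[-1]:
        return chain
    i = len(vals) - 1
    while i > 0 and vals[i - 1]:
        i -= 1
    if i == 0:
        return chain                       # phi already held at S0
    a = narrative[i - 1]
    chain.insert(0, (i, a))
    # New setting: regressed query conjoined with a's precondition.
    psi = lambda s, a=a: phi(step(s, a)) and poss(a, s)
    return achievement_chain(states[:i], narrative[:i - 1], psi, chain)

S0 = {f: False for f in ["ClockOn", "High_d", "High_e1", "High_e2", "En", "Q"]}
narrative = ["c_on", "hi_d", "tick", "hi_e1", "hi_e2",
             "tick", "lo_e1", "lo_e2", "tick", "lo_d", "tick"]
states = [S0]
for a in narrative:
    states.append(step(states[-1], a))

chain = achievement_chain(states, narrative, lambda s: s["Q"])
print(chain)  # [(1, 'c_on'), (2, 'hi_d'), (4, 'hi_e1'), (6, 'tick')]
```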
      <p>It is worth noting that our approach handled a classic
instance of (late) preemption without appealing to
contingencies occurring in neighbouring possible worlds, which is the
essential strategy in counterfactual analyses. Namely, it
correctly excluded hi(e2) from the causal chain for being
preempted by hi(e1), although hi(e2) would have been
sufficient, in the absence of hi(e1), for achieving En(s) and Q(s).
</p>
    </sec>
    <sec id="sec-5">
      <title>Maintenance Causes</title>
      <p>The achievement causal chain explains precisely how a
condition comes to be, but not how it persists throughout the
remaining actions of the narrative. The narrative may well
contain actions which could destroy the effect but were
somehow neutralized. We formalize our intuitive understanding
of protective actions using the notion of maintenance. Our
general considerations are as follows. First, in a causal
setting C = ⟨σ, φ(s)⟩, if D ⊭ φ(σ), then there is nothing to
maintain. Therefore, C may have a maintenance cause only if
D ⊨ φ(σ). Second, every instance of maintenance involves
at least two actions of σ, where one action—call it a threat—
would falsify the goal φ were it not for the other action, the
maintenance cause itself. Obviously, the maintenance cause
must occur in σ before the corresponding threat. Third, if C
satisfies the achievement condition via some do(α, σ′), then
neither α nor any action of σ′ may be a threat to φ(s), in
accordance with the first consideration. If, alternatively, φ(s)
holds at S0 and throughout σ, then any action of σ except the
very first one may be a threat.</p>
      <p>The key property of a threat is that it has the potential to
falsify the effect (but did not do so in the narrative). A test for
this property involves a construction of a hypothetical
scenario where the suspected threat falsifies the effect. Such
a test is by nature counterfactual and, therefore, gives rise to the
usual question: what alternative scenarios should we admit to
the analysis? For the sake of generality, we require only that
the alternative scenarios obey the rules of the world, and for
the sake of tractability, that they do not contain too many
actions. Both requirements are fulfilled by the following broad
definition, where len(s) is the number of actions in a situation
term s and len(S0) = 0.</p>
      <p>Definition 4. A causal setting C = ⟨σ, φ(s)⟩ satisfies
the maintenance condition via a ground situation term
do(ω, σ′) ⊑ σ iff σ′ ≠ S0 and D ⊨ ∀s(σ′ ⊑ s ⊑ σ → φ(s))
and D ⊨ ∃s (executable(do(ω, s)) ∧ φ(s) ∧ ¬φ(do(ω, s)) ∧
len(do(ω, s)) ≤ len(σ)), in which case ω is a threat in C.</p>
      <p>A tighter definition of a threat would artificially
decrease the search space of maintenance causes. If, through
unchecked generality, we misdiagnose a harmless action as
a threat, the subsequent achievement cause analysis would
be unable to identify an action which neutralized the threat’s
harmful effects.</p>
      <p>Before we define what a maintenance cause is, consider a
threat ω to φ(s). By the definition of regression, ρ[¬φ(s), ω]
is a formula that must hold in order for φ(s) to become false
after executing ω. Since we would like to preserve φ(s), we
are interested in the negation of this formula. But by the
regression theorem, D ⊨ ¬ρ[¬ψ, α] ↔ ρ[ψ, α], so the formula
expressing the maintenance goal is simply ρ[φ(s), ω].
Notably, the set of achievement causes of this formula will
include the achievement causes of φ(s) because, intuitively,
φ(s) holds after ω in part due to having been achieved.</p>
      <p>Definition 5. Suppose a causal setting C = ⟨σ, φ(s)⟩
satisfies the maintenance condition via some situation term
do(ω, σ′) ⊑ σ, where ω is a threat in C. Let C′ be the
related causal setting ⟨σ′, ρ[φ(s), ω]⟩. If α is an achievement
cause in C′, we say that α is a maintenance cause in C.
Example 4. Consider a formula φ(s) with quantifiers over
object variables in the same BAT, except that, for the sake
of example, there is a countably infinite set of signal
constants ci for i ≥ 1 with unique names. Let the query φ(s)
be ∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s)) — “there are
at least two high signals”. Let the actual situation be
do([hi(c1), hi(c2), hi(c3), lo(c1)], S0).</p>
      <p>By Definition 4, lo(c1) is a threat in this causal setting. By
Definition 5, it yields a related causal setting with situation
do([hi(c1), hi(c2), hi(c3)], S0) and query ρ[φ(s), lo(c1)],
which simplifies to
    ∃x∃y(x ≠ y ∧ High(x, s) ∧ High(y, s) ∧ x ≠ c1 ∧ y ≠ c1).
Applying Definition 2, we see that this related causal setting has
hi(c3) as an achievement cause. Therefore, the original
causal setting has hi(c3) as a maintenance cause.</p>
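<p>Definitions 4 and 5 can likewise be tested by brute force on a finite restriction of this example. The sketch below (our own illustration, limited to three signal constants so that the counterfactual search of Definition 4 is finite; Python names are hypothetical) finds the threat lo(c1) and the maintenance cause hi(c3).</p>

```python
# A brute-force sketch of Definitions 4 and 5, restricted to three signals.
from itertools import product

SIGNALS = ["c1", "c2", "c3"]
ACTIONS = [k + "_" + c for k in ("hi", "lo") for c in SIGNALS]

def step(s, a):
    kind, sig = a.split("_")
    t = dict(s)
    t[sig] = (kind == "hi")     # hi(x)/lo(x) set the signal high/low
    return t

def poss(a, s):
    kind, sig = a.split("_")
    return (not s[sig]) if kind == "hi" else s[sig]

def phi(s):                     # "there are at least two high signals"
    return sum(s.values()) >= 2

def run(seq):
    """Execute seq from S0; None if some action is not possible."""
    states = [{c: False for c in SIGNALS}]
    for a in seq:
        if not poss(a, states[-1]):
            return None
        states.append(step(states[-1], a))
    return states

narrative = ["hi_c1", "hi_c2", "hi_c3", "lo_c1"]

def is_threat(omega, length_bound):
    """Definition 4's counterfactual test: an executable hypothetical
    situation where phi holds but executing omega falsifies it, using at
    most length_bound actions in total."""
    for n in range(length_bound):
        for seq in product(ACTIONS, repeat=n):
            states = run(list(seq))
            if (states and poss(omega, states[-1]) and phi(states[-1])
                    and not phi(step(states[-1], omega))):
                return True
    return False

print(is_threat("lo_c1", len(narrative)))   # True: lo(c1) is a threat

# Definition 5: regress phi through the threat and find the achievement
# cause of the maintenance goal in the prefix before lo(c1).
states = run(narrative)
goal = lambda s: phi(step(s, "lo_c1"))      # maintenance goal
vals = [goal(s) for s in states[:4]]
flip = max(i for i in range(1, 4) if vals[i] and not vals[i - 1])
print(narrative[flip - 1])                  # hi_c3: the maintenance cause
```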
    </sec>
    <sec id="sec-6">
      <title>Actual Cause</title>
      <p>Definitions 2, 3, and 5 are centered around the top level
of a given causal setting and fail to capture the interplay
between achievement and maintenance causes at the deeper
levels of analysis. Specifically, suppose a causal setting C′ arises
via the achievement (resp., maintenance) condition during the
analysis of another setting C. On its own, C′ may have both
achievement and maintenance causes, but, by Definition 3
(resp., 5), only the former are counted as causes of C. On
the natural assumption that all causes of a descendant
setting are equally relevant to the ancestor setting, the following
definition inductively combines all possible interactions
between the achievement and maintenance conditions under the
generic term actual cause.</p>
      <p>Definition 6. Let α be a ground action and σ a narrative. We
say that α is an actual cause in a causal setting C = ⟨σ, φ(s)⟩
if at least one of the following conditions holds.
(a) C satisfies the achievement condition via do(α, σ′) ⊑ σ.
(b) C satisfies the achievement condition via some situation
term do(A(t̄), σ′) ⊑ σ, and α is an actual cause in the
causal setting ⟨σ′, ρ[φ(s), A(t̄)] ∧ Π_A(t̄, s)⟩.
(c) C satisfies the maintenance condition via do(ω, σ′) ⊏ σ,
and α is an actual cause in ⟨σ′, ρ[φ(s), ω]⟩.</p>
      <p>Example 5 (continued from 4). By Definition 6, the actions
hi(c1), hi(c2), hi(c3) are all actual causes of φ(s). Notice
that maintenance causes are just as important as achievement
causes: the condition φ(s) was realized through the
properties of objects c1, c2, but persisted by virtue of c2, c3.
Achievement cause analysis alone disregards the role of c3.
Example 6. Consider again our running example. By
Definition 6, the 8-th action lo(e2) is a non-trivial actual cause
of Q(s) discovered through a combination of two
maintenance conditions. Intuitively, it is causally important because
it disables the flip-flop, preventing the actions lo(d) (#10) and
tick (#11) from destroying Q(s) — both are threats in their
respective settings.</p>
    </sec>
    <sec id="sec-7">
      <title>The Halpern-Pearl Approach</title>
      <p>
        Halpern and Pearl (
        <xref ref-type="bibr" rid="ref15">2005</xref>
        ), following the motivation of [Lewis,
1974], base their formal account of actual causality on the
notion of a counterfactual — a conditional statement whose
premise is contrary to fact. They construct counterfactual
statements in a formal language whose semantics is defined
relative to a causal setting (see below). A causal model M is
a tuple ⟨U, V, R, F⟩, where U and V are finite disjoint sets of
exogenous and endogenous variables, respectively, with each
variable taking various values from an underlying domain.
The function R maps every variable Z ∈ U ∪ V to a
nonempty set R(Z) of possible values. F is a set of total
functions {FX : ∏_{Z∈U∪V\{X}} R(Z) → R(X) | X ∈ V} which
act like structural equations; each tuple of values assigned to
the variables (excluding X) maps to a single value of X.
Intuitively, for each endogenous variable X, FX encodes the
entirety of causal laws which determine X by mapping every
value assignment on all variables except X to some value of
X. The values of exogenous variables U are set externally; a
tuple VU of values for U is called a context of M, and the pair
(M, VU) constitutes a causal setting. The tuple ⟨U, V, R⟩ is
called the signature of M. The set of functions F determines
a partial dependency order X ⪯ Y on endogenous variables
X, Y. Namely, Y depends on X, X ⪯ Y, if either X
affects Y directly by virtue of FY, or indirectly via
intermediate functions. It is ubiquitously assumed that a given causal
model is acyclic, that is, for each context VU of M, there is a
partial order ⪯ on V that is anti-symmetric, reflexive and
transitive. This assumption guarantees the existence of a unique
solution to the equations F.
      </p>
      <p>The language of the HP approach is as follows. A
primitive event is a formula X = VX, where X ∈ V and VX ∈
R(X). We call a Boolean combination of primitive events
a HP query. A general causal formula is one of the form
[Y1 ← VY1, …, Yk ← VYk]φ, where φ is a HP query, the Yi for
1 ≤ i ≤ k are distinct variables from V, and VYi ∈ R(Yi).
(We abbreviate [Y1 ← VY1, …, Yk ← VYk] as [Y ← VY] and
call it an intervention.) A primitive event X = VX is satisfied
in a causal setting (M, VU), denoted (M, VU) ⊨ (X = VX),
if X takes on the value VX in the unique solution to the
equations F once U are set to VU. HP queries are interpreted
following the usual rules for Boolean connectives. Finally,
(M, VU) ⊨ [Y1 ← VY1, …, Yk ← VYk]φ iff (M′, VU) ⊨ φ,
where M′ is obtained from M by replacing each FYi ∈ F
by the trivial function FYi : ∏_{Z∈U∪V\{Yi}} R(Z) → VYi that
fixes Yi to the constant VYi for all values of its arguments.</p>
      <p>In this paper, we focus on the so-called modified HP
definition, or HPm, of actual cause [Hopkins, 2005; Halpern, 2015;
2016] because it is the most recent, intuitively appealing,
and thoroughly connected with older definitions by formal
results in [Halpern, 2016]. According to this definition, a
conjunction of primitive events X = VX (short for X1 =
VX1 ∧ … ∧ Xk = VXk) is an actual cause in (M, VU) of
a HP query φ if all of the following conditions hold:
1. (M, VU) ⊨ (X = VX) and (M, VU) ⊨ φ.
2. There exists a set W (disjoint from X) of variables in V
with (M, VU) ⊨ (W = VW) and a setting VX′ of the
variables X such that (M, VU) ⊨ [X ← VX′, W ← VW]¬φ.
3. No proper sub-conjunction of (X = VX) satisfies 1 and 2.</p>
Example 7. Consider the two well-known “Forest Fire”
examples from [Halpern and Pearl, 2005; Halpern, 2016]. Both
have the same set of endogenous variables: M D (match
dropped by arsonist), L (lightning strike), FF (forest is on
fire). In both cases, M D and L are set to true by the
context. The model Md for the disjunctive scenario has it that
either one of the events (M D = true), (L = true) is
sufficient to start a fire, so the equation for FF is FF := (M D=
true) _ (L=true). The model Mc for the conjunctive
scenario requires both events in order to create a forest fire,
so FF := (M D = true) ^ (L = true). By HPm, neither
(M D = true) nor (L = true) are singleton actual causes in
Md because it is impossible to fulfill part 2 of the definition
above by setting either variable to f alse, but the conjunction
(M D = true)^(L = true) is deemed an actual cause. In
contrast, in Mc, both (M D = true) and (L = true) are
singleton actual causes because setting one of fM D; Lg to f alse
makes the forest fire impossible, but their conjunction is not
an actual cause because it violates the minimality condition.
8</p>
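<p>The HPm verdicts of Example 7 can be reproduced by exhaustively enumerating the witness sets W and settings VX′. The following sketch (our own illustration; the encoding of the models and all function names are ours) checks clauses 1–3 of the modified definition directly.</p>

```python
# A brute-force check of the modified HP definition on the Forest Fire models.
from itertools import combinations, product

def solve(equations, context, interventions):
    """Unique solution of an acyclic model, with intervened variables
    fixed to their intervention values instead of using their equations."""
    vals = dict(context)
    while len(vals) < len(context) + len(equations):
        for x, fx in equations.items():
            if x in vals:
                continue
            if x in interventions:
                vals[x] = interventions[x]
            else:
                try:
                    vals[x] = fx(vals)
                except KeyError:
                    pass          # some parent not yet determined
    return vals

def is_cause_hpm(candidate, query, equations, context):
    """candidate: {X: value}, a conjunction of primitive events; query: a
    predicate on solutions.  Checks clauses 1-3 of HPm by enumeration."""
    actual = solve(equations, context, {})
    if not all(actual[x] == v for x, v in candidate.items()) or not query(actual):
        return False                                    # clause 1
    def clause2(xs):
        others = [v for v in equations if v not in xs]
        for k in range(len(others) + 1):
            for w in combinations(others, k):
                freeze = {v: actual[v] for v in w}      # W at actual values
                for setting in product([False, True], repeat=len(xs)):
                    iv = {**dict(zip(xs, setting)), **freeze}
                    if not query(solve(equations, context, iv)):
                        return True
        return False
    xs = sorted(candidate)
    if not clause2(xs):
        return False                                    # clause 2
    return not any(clause2(list(sub))                   # clause 3: minimality
                   for k in range(1, len(xs))
                   for sub in combinations(xs, k))

context = {"U": True}                                   # exogenous setting
base = {"MD": lambda v: v["U"], "L": lambda v: v["U"]}  # MD, L set by context
disjunctive = {**base, "FF": lambda v: v["MD"] or v["L"]}
conjunctive = {**base, "FF": lambda v: v["MD"] and v["L"]}
ff = lambda v: v["FF"]

print(is_cause_hpm({"MD": True}, ff, disjunctive, context))             # False
print(is_cause_hpm({"MD": True, "L": True}, ff, disjunctive, context))  # True
print(is_cause_hpm({"MD": True}, ff, conjunctive, context))             # True
print(is_cause_hpm({"MD": True, "L": True}, ff, conjunctive, context))  # False
```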
    </sec>
    <sec id="sec-8">
      <title>Formal Relationship with HP</title>
      <p>We establish a common ground between the two formalisms
by axiomatizing causal models in SC.</p>
      <p>Let (M, VU) be a HP causal setting, where M = ⟨U, V, R,
F⟩ is an acyclic causal model and VU a context. We assume
that U, V, and the range of R are finite sets and that there are no
collisions between constants for variable and value symbols.</p>
      <p>We construct a BAT D from (M, VU) as follows. We treat
U, V, and R(X) for all X ∈ U ∪ V as sets of SC
constant symbols for which we introduce unique name axioms.
If S = {C1, …, Cn} is a set of constants and y is a SC object
term, the expression y ∈ S denotes (y = C1 ∨ … ∨ y = Cn).
If X ∈ U ∪ V with R(X) = {V1, …, Vn}, then y ∈ R(X) denotes
(y = V1 ∨ … ∨ y = Vn). To represent the functions F, we
introduce a situation-independent relational symbol f with arity
1 + |U ∪ V| + 1, where the first argument is the name of the
variable X which FX ∈ F determines, the last argument
is the value which FX assigns to X, and the arguments in
between are the values of the variables U ∪ V arranged in some
predetermined order. The actions of D are get(x, v),
meaning compute the value of the endogenous variable x using
Fx ∈ F, and set(x, v), meaning ignore Fx and force the
value v upon x. The only fluent of D is the relational
fluent V(x, v, s) stating that v is the value of the endogenous
variable x in situation s.</p>
      <p>Let Det(x, v, s) be an abbreviation for
    ∀v1 … ∀vN ( ⋀_{1≤i≤N} ∃y (y = Zi ∧ vi ∈ R(Zi) ∧
        ∀v′(V(y, v′, s) → vi = v′))
    → f(x, v1, …, vN, v) ),
where U ∪ V = {Z1, …, ZN}. Det(x, v, s) means that the
value of variable x is determined in s to be v. Det(x, v, s)
holds true when the values vi which exist in s, when bound
to the appropriate arguments of f, unequivocally assign v to x.
This means, crucially, that x may be determined as soon as
some—but not necessarily all—of the variables on which it
“depends” (as per ⪯) have acquired values.</p>
      <p>The axioms of D are as follows.
    ⋀_{X∈V} ¬∃v V(X, v, S0);
    ⋀_{Y∈U} V(Y, VY, S0);
    Poss(set(x, v), s) ↔
        ⋁_{X∈V} (x = X ∧ v ∈ R(X)) ∧ ¬∃v′ V(x, v′, s);
    Poss(get(x, v), s) ↔
        x ∈ V ∧ ¬∃v′ V(x, v′, s) ∧ Det(x, v, s);
    V(x, v, do(a, s)) ↔
        a = get(x, v) ∨ a = set(x, v) ∨ V(x, v, s).</p>
      <p>In words, none of the endogenous variables have values at
S₀, and all exogenous variables have values at S₀ as
specified by the context. It is possible to force a value v upon x
as long as x is an endogenous variable, v is in the range of
x, and x has not yet acquired a value. It is possible to
compute the value of x as long as x is an endogenous variable
which has not yet acquired a value but which is destined at
s to get the value v. Overall, the theory models all possible
propagations of values (including interventions) throughout
the set of variables according to the structural equations. As
we are interested only in those situations where all variables
have acquired values, which represent a unique solution to F,
we introduce an abbreviation terminal(s) for the expression
executable(s) ∧ ¬∃a Poss(a, s). In order to refer to
situations under specific interventions, we use the abbreviation
interv_{Y₁←V_{Y₁}, …, Y_k←V_{Y_k}}(s), which stands for terminal(s) ∧
∀x∀v.[∃s′(do(set(x, v), s′) ⊑ s) ↔ ⋁_{1≤i≤k}(x = Y_i ∧ v =
V_{Y_i})]. The special case interv_∅(s) describes s under an
empty intervention.</p>
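      <p>The “all possible propagations” reading can be made concrete with a small enumerator (our own hypothetical Python encoding, hard-coding the disjunctive Forest Fire model): starting from the context, it applies every possible get(x, v) action until terminal(s) holds:</p>

```python
# A minimal sketch (ours, not the paper's implementation) of enumerating
# every terminal situation of D under the empty intervention for the
# disjunctive Forest Fire model: keep applying possible get(x, v) actions
# until no action is possible, i.e., terminal(s) holds.
from itertools import product

PARENTS = {"MD": ["U_MD"], "L": ["U_L"], "FF": ["MD", "L"]}
EQS = {
    "MD": lambda a: a["U_MD"],
    "L": lambda a: a["U_L"],
    "FF": lambda a: a["MD"] or a["L"],
}
CONTEXT = {"U_MD": True, "U_L": True}   # exogenous values, fixed at S0

def det_value(x, vals):
    """Det(x, v, s): the unique v forced by the known values, else None."""
    unknown = [p for p in PARENTS[x] if p not in vals]
    outs = set()
    for combo in product([True, False], repeat=len(unknown)):
        a = dict(vals, **dict(zip(unknown, combo)))
        outs.add(EQS[x]({p: a[p] for p in PARENTS[x]}))
    return outs.pop() if len(outs) == 1 else None

def narratives(vals, hist=()):
    """Yield every sequence of get actions ending in a terminal situation."""
    acts = [(x, det_value(x, vals)) for x in PARENTS
            if x not in vals and det_value(x, vals) is not None]
    if not acts:
        yield hist
        return
    for x, v in acts:
        yield from narratives(dict(vals, **{x: v}), hist + ((x, v),))

for n in narratives(CONTEXT):
    print(n)
```

      <p>For this model the enumerator produces exactly four terminal narratives, since FF becomes determined as soon as either MD or L acquires the value true.</p>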
      <p>Finally, given a HP query φ, we obtain a corresponding SC
query φ̂ from φ by replacing each primitive event (X = V_X)
by V(X, V_X, s). Thus, φ̂ is ground in all object arguments
and uniform in s. It is tedious but straightforward to prove the
correctness of our translation relative to a HP causal setting.
Theorem 1. Let (M, V_U) be a HP causal setting, [Y←V_Y]φ
an arbitrary causal formula over M, and D a BAT
obtained from (M, V_U). Then (M, V_U) ⊨ [Y←V_Y]φ iff
D ⊨ (∀s). interv_{Y←V_Y}(s) → φ̂(s).</p>
      <p>With this result, we can easily translate HPm to the
language of SC and formally compare the two approaches.
Theorem 2. Let (M, V_U) be a HP causal setting and φ a HP
query over M. Let D be a BAT obtained from (M, V_U) as
described above. Let X ∈ V and V_X ∈ R(X).</p>
      <p>1. (X = V_X) is a singleton cause of φ in (M, V_U)
according to HPm if and only if get(X, V_X) appears in the
achievement causal chain of ⟨σ, φ̂(s)⟩ for every ground
situation term σ of D such that D ⊨ interv_∅(σ).
2. (X = V_X) is a part of a cause of φ in (M, V_U) according
to HPm if and only if there exists a ground situation term σ
of D such that D ⊨ interv_∅(σ) and get(X, V_X)
appears in the achievement causal chain of ⟨σ, φ̂(s)⟩.</p>
      <p>The proof of Theorem 2 is quite involved and is not shown
due to lack of space. By an immediate corollary, achievement
cause analysis alone captures all HPm causes.</p>
      <p>Example 8. (cont.) Consider a translation of the disjunctive
Forest Fire causal model M_d. The corresponding terminal
narratives σ are
do([get(MD, true), get(L, true), get(FF, true)], S₀);
do([get(L, true), get(MD, true), get(FF, true)], S₀);
do([get(MD, true), get(FF, true), get(L, true)], S₀);
do([get(L, true), get(FF, true), get(MD, true)], S₀).
Action get(MD, true) is a part of the causal chain of ⟨σ,
V(FF, true, s)⟩ only for the first and third choice of σ.
Similarly, get(L, true) is an achievement cause only for the
second and fourth choice. By Part 1 of Theorem 2, they are not
actual causes according to HPm. By Part 2 of Theorem 2,
they are both parts of an actual cause according to HPm. This
agrees with conclusions of the original HP causal model.</p>
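      <p>The two parts of Theorem 2 can be checked on this example with a toy computation (ours; it hard-codes the four narratives above and approximates the achievement causal chain of ⟨σ, V(FF, true, s)⟩ as get(FF, true) plus the get action that first determined FF):</p>

```python
# Hypothetical check (ours) of Theorem 2 on Example 8. Since FF = MD or L,
# whichever of get(MD,true)/get(L,true) comes first already determines FF,
# so we treat the achievement chain as {first get action, get(FF,true)} --
# a deliberate simplification of the paper's chain computation.
narratives = [
    ["get(MD,true)", "get(L,true)", "get(FF,true)"],
    ["get(L,true)", "get(MD,true)", "get(FF,true)"],
    ["get(MD,true)", "get(FF,true)", "get(L,true)"],
    ["get(L,true)", "get(FF,true)", "get(MD,true)"],
]

def chain(narrative):
    # The first action of each narrative is the one that determined FF.
    return {narrative[0], "get(FF,true)"}

# Part 1: singleton cause iff the action is in the chain for EVERY narrative.
in_all = all("get(MD,true)" in chain(n) for n in narratives)
# Part 2: part of a cause iff it is in the chain for SOME narrative.
in_some = any("get(MD,true)" in chain(n) for n in narratives)
print(in_all, in_some)   # False True
```

      <p>As in the example, get(MD, true) fails the universal test of Part 1 but passes the existential test of Part 2.</p>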
    </sec>
    <sec id="sec-9">
      <title>Discussion</title>
      <p>Our approach shifts the focus away from causal models and
towards a first-order logic representation of the underlying
dynamics of the scenario. There are other attempts to step away
from purely counterfactual analysis [Vennekens et al., 2010;
Vennekens, 2011; Beckers and Vennekens, 2012; 2016], but
they share the same expressivity limitations. Curiously,
[Vennekens et al., 2010] consider SC to be too expressive,
stating that “SC contains many features that go beyond what
is traditionally expressed in a causal model. For typical causal
reasoning problems, these features are not needed”. To
refute this and to see where we stand with respect to other
approaches, let us consider three telling examples featured in
[Beckers and Vennekens, 2012; 2016] and discussed in other
papers. Assume all fluents are false at S0.</p>
      <p>Example 9. Assassin poisons victim’s coffee, victim drinks it
and dies. If assassin had not poisoned the coffee, his backup
would have, and victim would still have died.</p>
      <p>This example from [Hitchcock, 2007] illustrates early
preemption, namely that the causal link from the backup to
victim’s death is preempted by the assassin. Let the actions be
assassin and backup (the two acts of poisoning the coffee)
and drink. Let the fluents be P(s), meaning “coffee contains
poison”, and D(s), meaning “the victim is dead”.</p>
      <sec id="sec-9-1">
        <title>Poss(assassin, s); Poss(backup, s);</title>
        <p>Poss(drink, s) ↔ P(s);
P(do(a, s)) ↔ a = assassin ∨ a = backup ∨ P(s);
D(do(a, s)) ↔ [a = drink ∧ P(s)] ∨ D(s).</p>
        <p>The narrative is σ = do([assassin, drink], S₀). By our
analysis, all of σ is an achievement causal chain. This agrees
with HP and [Hitchcock, 2007] but disagrees with Beckers
and Vennekens, who believe that assassin is not an actual
cause. Rather than appeal to intuition, we just point out that
the causal roles assumed by the assassin and his backup are
clearly distinct in the given scenario.</p>
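        <p>The analysis can be replayed mechanically; below is a direct executable encoding (ours, for illustration only) of the precondition and successor-state axioms above:</p>

```python
# Our sketch of the Example 9 action theory: a situation is the pair of
# truth values (P, D); do() implements the successor-state axioms and
# poss() the preconditions.
def poss(a, s):
    p, d = s
    if a == "drink":
        return p                              # Poss(drink, s) iff P(s)
    return a in ("assassin", "backup")        # always possible

def do(a, s):
    p, d = s
    p2 = a == "assassin" or a == "backup" or p   # P(do(a, s))
    d2 = (a == "drink" and p) or d               # D(do(a, s))
    return (p2, d2)

def run(actions, s=(False, False)):
    for a in actions:
        assert poss(a, s), a + " not possible"
        s = do(a, s)
    return s

# Narrative sigma = do([assassin, drink], S0): the victim dies.
print(run(["assassin", "drink"]))   # (True, True)
```

        <p>Running the preempted alternative, run(["backup", "drink"]), reaches the same final situation, while drink alone is not executable at S₀.</p>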
        <p>Example 10. An engineer is standing by a switch in the
railroad track. A train approaches in the distance. She flips the
switch, so that the train travels down the left-hand track
instead of the right. Since the tracks re-converge up ahead, the
train arrives at its destination all the same.</p>
        <p>This example from [Paul and Hall, 2013] illustrates
the distinction between causation and determination.
Beckers and Vennekens state that it is isomorphic to the previous
one, while the intuition about its causes is the polar
opposite. In fact, the two examples are isomorphic only within the
expressivity bounds of causal models and CP-logic.</p>
        <p>Let the fluent In(s) mean that the train is on the section of
the track leading to the first junction, let L(s) (resp., R(s))
mean that it is on the left-hand (resp., right-hand) track, and let
Out(s) mean that it is on the section of the track past the
second junction. Let the fluent Sw(s) mean that the switch
is engaged and Arrived(s) that the train has arrived. Let
the actions be flip (engineer flips the switch), fork1 (train
passes the first junction), fork2 (train passes the second junction),
and arrive (self-explanatory). Let only In(s) hold at S₀.
Poss(flip, s); Poss(fork1, s) ↔ In(s);</p>
        <sec id="sec-9-1-1">
          <title>Poss(fork2, s) ↔ L(s) ∨ R(s); Poss(arrive, s) ↔ Out(s);</title>
          <p>In(do(a, s)) ↔ In(s) ∧ a ≠ fork1;
L(do(a, s)) ↔ a = fork1 ∧ Sw(s) ∨ L(s) ∧ a ≠ fork2;
R(do(a, s)) ↔ a = fork1 ∧ ¬Sw(s) ∨ R(s) ∧ a ≠ fork2;
Out(do(a, s)) ↔ a = fork2 ∨ Out(s);
Sw(do(a, s)) ↔ a = flip ∨ Sw(s) ∧ a ≠ flip;
Arrived(do(a, s)) ↔ a = arrive ∨ Arrived(s).</p>
        </sec>
      </sec>
      <sec id="sec-9-2">
        <title>The narrative is σ = do([flip, fork1, fork2, arrive], S₀)</title>
        <p>By our analysis, the flip action is not an actual cause of the train's
arrival. This conclusion is elaboration tolerant [McCarthy,
1987] as long as the relation between L, R, Sw is preserved.
For HP, the answer depends on how the model is constructed
and which definition is applied. [Pearl, 2000] calls this class
of problems “switching causation” and argues that flipping the
switch is a cause (see Section 10.3.4, pp. 324-5). Both [Pearl,
2000] and [Halpern and Pearl, 2005] argue that the switch is a
cause, while, according to HPm, it is not.</p>
        <p>Example 11. Assistant Bodyguard puts a harmless antidote
in victim’s coffee. Buddy who knows about the antidote
poisons the coffee; he would not have done so otherwise. Victim
drinks the coffee and survives.</p>
        <p>This example is called “Careful Poisoning” in [Weslake,
2013] and left as a challenge for future work. Let the actions
be antidote, poison, drink. The fluents P(s) and D(s) are as
before, and the fluent A(s) means “coffee contains antidote”.</p>
      </sec>
      <sec id="sec-9-3">
        <title>Poss(antidote, s); Poss(drink, s);</title>
        <sec id="sec-9-3-1">
          <title>Poss(poison, s) ↔ A(s);</title>
          <p>A(do(a, s)) ↔ a = antidote ∨ A(s);
P(do(a, s)) ↔ a = poison ∨ P(s);
D(do(a, s)) ↔ [a = drink ∧ P(s) ∧ ¬A(s)] ∨ D(s).
The narrative is σ = do([antidote, poison, drink], S₀), so D ⊨ ¬D(σ).
In fact, ¬D(s) holds throughout the narrative, so it has no
achievement causes. It has no maintenance causes either:
drink is a threat to ¬D(s), yielding a new causal setting
⟨do([antidote, poison], S₀), ¬D(s) ∧ (¬P(s) ∨ A(s))⟩ with
no achievement causes. The action poison could be a threat,
but it does not qualify as such by our definition: no executable
situation admits poison in the absence of the antidote, owing
to the precondition for poison. Therefore, the given causal
setting contains no causes. This agrees with Beckers and
Vennekens and disagrees with Hitchcock and HP.</p>
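          <p>The role of the precondition can be checked directly in an executable encoding (ours) of the Careful Poisoning theory; the situation is the triple of truth values (A, P, D):</p>

```python
# Our sketch of the Careful Poisoning action theory. The precondition
# Poss(poison, s) iff A(s) is what prevents poison from being executable
# in any situation lacking the antidote.
def poss(a, s):
    av, p, d = s
    if a == "poison":
        return av                       # poison requires the antidote
    return a in ("antidote", "drink")

def do(a, s):
    av, p, d = s
    return (a == "antidote" or av,                       # A(do(a, s))
            a == "poison" or p,                          # P(do(a, s))
            (a == "drink" and p and not av) or d)        # D(do(a, s))

def run(actions, s=(False, False, False)):
    for a in actions:
        if not poss(a, s):
            raise ValueError(a + " not possible")
        s = do(a, s)
    return s

# sigma = do([antidote, poison, drink], S0): the victim survives (D false).
print(run(["antidote", "poison", "drink"]))   # (True, True, False)
# poison is never executable before the antidote:
print(poss("poison", (False, False, False)))  # False
```

          <p>The second check mirrors the argument above: no executable situation admits poison without the antidote, so poison never qualifies as a threat.</p>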
          <p>There exist multiple examples where the results of the HP
approach cannot be reconciled with intuitive understanding—
which, incidentally, the approach treats as the only measure of
merit. This problem was traced by [Hopkins and Pearl, 2007]
and [Glymour et al., 2010] to the limited expressiveness of
causal models. Causal models do not distinguish between
enduring conditions and transitions and cannot model the
absence of an event except as the presence of its opposite;
examples where this leads to absurd conclusions are easy to
come by, see e.g. [Hopkins and Pearl, 2007]. An explicit
notion of an action solves these problems.</p>
          <p>Addressing the lack of expressivity, [Hopkins and Pearl,
2007] re-defined causal models in the language of SC, but
they preserved the implicit possible worlds semantics of
causal formulas and dropped the requirement that situations
be executable. The latter is especially problematic, since
dismissing preconditions results in paradoxes and makes
inferences untrustworthy. Our work reaps the benefits which
[Hopkins and Pearl, 2007] aimed at but does not suffer from
the issues associated with giving a meaningful definition of
a counterfactual in SC, which appears to be no easy task.
A counterfactual query not relativized to a particular
scenario can be formulated in SC without special tools [Lin and
Soutchanski, 2011], but it is not clear how such queries can
be useful for defining actual causality. An original study
conducted in [Costello and McCarthy, 1999] perhaps comes
closest to a good definition of a counterfactual in SC, but it
operates outside of the well-studied basic action theories and it
is not concerned with actual causality. There exist numerous
studies of the semantics of causal models and the
relationship of causal models to various logics, such as an elaborate
axiomatization of causal models [Halpern, 2000] and a
logical representation [Bochman and Lifschitz, 2015] of causal
models in a non-monotonic logic which encompasses
general causation as a foundational principle. The approach of
[Finzi and Lukasiewicz, 2003] combines causal models with
independent choice logic. Finally, there are methodological
or technical critiques of the causal model approach,
exemplified by [Glymour et al., 2010], [Menzies, 2014], [Livengood,
2013], [Weslake, 2013] and [Baumgartner, 2013].</p>
          <p>It is clear that a broader definition of actual cause
requires more expressive action theories that can model not
only sequences of actions but also explicit time
and concurrent actions. Only then can one try to
analyze some of the popular examples of actual causation
formulated in philosophical literature; some of those examples
sound deceptively simple, but faithful modelling of them
requires time, concurrency and natural actions [Reiter, 2001].
This does not imply that future research should focus only
on popular scenarios proposed by philosophers. On the
contrary, we firmly believe that the future of causal research is
in elaborating computational methodology for the analysis of
complex technical systems.</p>
          <p>Acknowledgement: We thank the Natural Sciences and
Engineering Research Council of Canada for financial support.</p>
        </sec>
      </sec>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          <source>[Baumgartner</source>
          , 2013]
          <string-name>
            <given-names>Michael</given-names>
            <surname>Baumgartner</surname>
          </string-name>
          .
          <article-title>A regularity theoretic approach to actual causation</article-title>
          .
          <source>Erkenntnis</source>
          ,
          <volume>78</volume>
          (Supplement 1 “Actual Causation”):
          <fpage>85</fpage>
          -
          <lpage>109</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          <source>[Beckers and Vennekens</source>
          , 2012]
          <string-name>
            <given-names>Sander</given-names>
            <surname>Beckers</surname>
          </string-name>
          and
          <string-name>
            <given-names>Joost</given-names>
            <surname>Vennekens</surname>
          </string-name>
          .
          <article-title>Counterfactual dependency and actual causation in CP-logic and structural models: a comparison</article-title>
          .
          <source>In Proceedings of the Sixth Starting AI Researchers Symposium</source>
          , volume
          <volume>241</volume>
          , pages
          <fpage>35</fpage>
          -
          <lpage>46</lpage>
          ,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          <source>[Beckers and Vennekens</source>
          , 2016]
          <string-name>
            <given-names>Sander</given-names>
            <surname>Beckers</surname>
          </string-name>
          and
          <string-name>
            <given-names>Joost</given-names>
            <surname>Vennekens</surname>
          </string-name>
          .
          <article-title>A principled approach to defining actual causation</article-title>
          . Synthese, Oct
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          <source>[Bochman and Lifschitz</source>
          , 2015]
          <string-name>
            <given-names>Alexander</given-names>
            <surname>Bochman</surname>
          </string-name>
          and
          <string-name>
            <given-names>Vladimir</given-names>
            <surname>Lifschitz</surname>
          </string-name>
          .
          <article-title>Pearl's causality in a logical setting</article-title>
          .
          <source>In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence</source>
          ,
          <source>AAAI'15</source>
          , pages
          <fpage>1446</fpage>
          -
          <lpage>1452</lpage>
          . AAAI Press,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          <source>[Costello and McCarthy</source>
          ,
          <year>1999</year>
          ]
          <string-name>
            <given-names>Tom</given-names>
            <surname>Costello</surname>
          </string-name>
          and
          <string-name>
            <given-names>John</given-names>
            <surname>McCarthy</surname>
          </string-name>
          .
          <article-title>Useful counterfactuals</article-title>
          .
          <source>Electron. Trans. Artif. Intell.</source>
          , 3(A):
          <fpage>51</fpage>
          -
          <lpage>76</lpage>
          ,
          <year>1999</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          <source>[Eiter and Lukasiewicz</source>
          , 2002]
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Eiter</surname>
          </string-name>
          and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          .
          <article-title>Complexity results for structure-based causality</article-title>
          .
          <source>Artif. Intell.</source>
          ,
          <volume>142</volume>
          (
          <issue>1</issue>
          ):
          <fpage>53</fpage>
          -
          <lpage>89</lpage>
          ,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          <source>[Finzi and Lukasiewicz</source>
          , 2003]
          <string-name>
            <given-names>Alberto</given-names>
            <surname>Finzi</surname>
          </string-name>
          and
          <string-name>
            <given-names>Thomas</given-names>
            <surname>Lukasiewicz</surname>
          </string-name>
          .
          <article-title>Structure-based causes and explanations in the independent choice logic</article-title>
          .
          <source>In Proceedings of the Nineteenth Conference on Uncertainty in Artificial Intelligence, UAI'03</source>
          , pages
          <fpage>225</fpage>
          -
          <lpage>323</lpage>
          , San Francisco, CA, USA,
          <year>2003</year>
          . Morgan Kaufmann Publishers Inc.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [Glymour et al.,
          <year>2010</year>
          ]
          <string-name>
            <given-names>Clark</given-names>
            <surname>Glymour</surname>
          </string-name>
          , David Danks,
          <string-name>
            <given-names>Bruce</given-names>
            <surname>Glymour</surname>
          </string-name>
          , Frederick Eberhardt, Joseph Ramsey, Richard Scheines,
          <string-name>
            <given-names>Peter</given-names>
            <surname>Spirtes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Choh Man</given-names>
            <surname>Teng</surname>
          </string-name>
          , and
          <string-name>
            <given-names>Jiji</given-names>
            <surname>Zhang</surname>
          </string-name>
          .
          <article-title>Actual causation: a stone soup essay</article-title>
          .
          <source>Synthese</source>
          ,
          <volume>175</volume>
          (
          <issue>2</issue>
          ):
          <fpage>169</fpage>
          -
          <lpage>192</lpage>
          ,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          <source>[Halpern and Pearl</source>
          , 2005] Joseph Y Halpern and
          <string-name>
            <given-names>Judea</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <article-title>Causes and explanations: A structural-model approach. Part I: Causes</article-title>
          .
          <source>The British Journal for the Philosophy of Science</source>
          ,
          <volume>56</volume>
          (
          <issue>4</issue>
          ):
          <fpage>843</fpage>
          -
          <lpage>887</lpage>
          ,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          <source>[Halpern</source>
          , 2000]
          <string-name>
            <given-names>Joseph Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          .
          <article-title>Axiomatizing causal reasoning</article-title>
          .
          <source>J. Artif. Intell. Res. (JAIR)</source>
          ,
          <volume>12</volume>
          :
          <fpage>317</fpage>
          -
          <lpage>337</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          <source>[Halpern</source>
          , 2015] Joseph
          <string-name>
            <given-names>Y</given-names>
            <surname>Halpern</surname>
          </string-name>
          .
          <article-title>A modification of the Halpern-Pearl definition of causality</article-title>
          .
          <source>In Proceedings of the Twenty-Fourth International Joint Conference on Artificial Intelligence, IJCAI</source>
          <year>2015</year>
          ,
          Buenos Aires
          , Argentina,
          <source>July 25-31</source>
          ,
          <year>2015</year>
          , pages
          <fpage>3022</fpage>
          -
          <lpage>3033</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          <source>[Halpern</source>
          , 2016]
          <string-name>
            <given-names>Joseph Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          .
          <source>Actual Causality</source>
          . The MIT Press,
          <year>2016</year>
          . ISBN 9780262035026.
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          <source>[Hitchcock</source>
          , 2007]
          <string-name>
            <given-names>Christopher</given-names>
            <surname>Hitchcock</surname>
          </string-name>
          .
          <article-title>Prevention, preemption, and the principle of sufficient reason</article-title>
          .
          <source>The Philosophical Review</source>
          ,
          <volume>116</volume>
          (
          <issue>4</issue>
          ):
          <fpage>495</fpage>
          -
          <lpage>532</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          <source>[Hopkins and Pearl</source>
          , 2007]
          <string-name>
            <given-names>Mark</given-names>
            <surname>Hopkins</surname>
          </string-name>
          and
          <string-name>
            <given-names>Judea</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <article-title>Causality and counterfactuals in the situation calculus</article-title>
          .
          <source>Journal of Logic and Computation</source>
          ,
          <volume>17</volume>
          (
          <issue>5</issue>
          ):
          <fpage>939</fpage>
          -
          <lpage>953</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          <source>[Hopkins</source>
          , 2005]
          <string-name>
            <given-names>Mark</given-names>
            <surname>Hopkins</surname>
          </string-name>
          .
          <article-title>The Actual Cause: From Intuition to Automation</article-title>
          .
          <source>PhD thesis</source>
          , University of California Los Angeles,
          <year>2005</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          <source>[Lewis</source>
          ,
          <year>1974</year>
          ]
          <string-name>
            <given-names>David</given-names>
            <surname>Lewis</surname>
          </string-name>
          . Causation.
          <source>The Journal of Philosophy</source>
          ,
          <volume>70</volume>
          (
          <issue>17</issue>
          ):
          <fpage>556</fpage>
          -
          <lpage>567</lpage>
          ,
          <year>1974</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          <source>[Lin and Soutchanski</source>
          , 2011]
          <string-name>
            <given-names>Fangzhen</given-names>
            <surname>Lin</surname>
          </string-name>
          and
          <string-name>
            <given-names>Mikhail</given-names>
            <surname>Soutchanski</surname>
          </string-name>
          .
          <article-title>Causal theories of actions revisited</article-title>
          .
          <source>In AAAI Spring Symposium: Logical Formalizations of Commonsense Reasoning</source>
          ,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          <source>[Livengood</source>
          , 2013]
          <string-name>
            <given-names>Jonathan</given-names>
            <surname>Livengood</surname>
          </string-name>
          .
          <article-title>Actual causation and simple voting scenarios</article-title>
          .
          <source>Nous</source>
          ,
          <volume>47</volume>
          (
          <issue>2</issue>
          ):
          <fpage>316</fpage>
          -
          <lpage>345</lpage>
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          <source>[McCarthy and Hayes</source>
          , 1969
          ]
          <string-name>
            <given-names>John</given-names>
            <surname>McCarthy</surname>
          </string-name>
          and
          <string-name>
            <given-names>Patrick J.</given-names>
            <surname>Hayes</surname>
          </string-name>
          .
          <article-title>Some philosophical problems from the standpoint of artificial intelligence</article-title>
          .
          <source>Readings in artificial intelligence</source>
          , pages
          <fpage>431</fpage>
          -
          <lpage>450</lpage>
          ,
          <year>1969</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          <source>[McCarthy</source>
          ,
          <year>1987</year>
          ]
          <string-name>
            <given-names>John</given-names>
            <surname>McCarthy</surname>
          </string-name>
          .
          <article-title>Generality in artificial intelligence</article-title>
          .
          <source>Commun. ACM</source>
          ,
          <volume>30</volume>
          (
          <issue>12</issue>
          ):
          <fpage>1029</fpage>
          -
          <lpage>1035</lpage>
          ,
          <year>1987</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          <source>[Menzies</source>
          , 2014]
          <string-name>
            <given-names>Peter</given-names>
            <surname>Menzies</surname>
          </string-name>
          .
          <article-title>Counterfactual theories of causation</article-title>
          . In Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/causation-counterfactual/,
          <source>2014. Retrieved on January 15</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          [Paul and Hall, 2013]
          <string-name>
            <given-names>L.A.</given-names>
            <surname>Paul</surname>
          </string-name>
          and Ned Hall.
          <article-title>Causation: a user's guide</article-title>
          . Oxford University Press, ISBN 978-0199673452
          ,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          <source>[Pearl</source>
          , 1998]
          <string-name>
            <given-names>Judea</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <article-title>On the definition of actual cause</article-title>
          .
          <source>Technical report, R-259</source>
          , University of California Los Angeles,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          <source>[Pearl</source>
          , 2000]
          <string-name>
            <given-names>Judea</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <source>Causality: Models, Reasoning, and Inference</source>
          . Cambridge University Press,
          1st edition
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          <source>[Reiter</source>
          , 1991]
          <string-name>
            <given-names>Raymond</given-names>
            <surname>Reiter</surname>
          </string-name>
          .
          <article-title>The frame problem in the situation calculus: A simple solution (sometimes) and a completeness result for goal regression</article-title>
          .
          <source>Artificial intelligence and mathematical theory of computation: papers in honor of John McCarthy</source>
          ,
          <volume>27</volume>
          :
          <fpage>359</fpage>
          -
          <lpage>380</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          <source>[Reiter</source>
          , 2001]
          <string-name>
            <given-names>Raymond</given-names>
            <surname>Reiter</surname>
          </string-name>
          .
          <article-title>Knowledge in action: logical foundations for specifying and implementing dynamical systems</article-title>
          . MIT press Cambridge,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          <source>[Simon</source>
          , 1977]
          <string-name>
            <given-names>Herbert A.</given-names>
            <surname>Simon</surname>
          </string-name>
          .
          <article-title>Causal ordering and identifiability</article-title>
          .
          <source>In Models of Discovery</source>
          , pages
          <fpage>53</fpage>
          -
          <lpage>80</lpage>
          . Springer,
          <year>1977</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          [Vennekens et al.,
          <year>2010</year>
          ]
          <string-name>
            <given-names>Joost</given-names>
            <surname>Vennekens</surname>
          </string-name>
          , Maurice Bruynooghe, and
          <string-name>
            <given-names>Marc</given-names>
            <surname>Denecker</surname>
          </string-name>
          .
          <article-title>Embracing events in causal modelling: Interventions and counterfactuals in CP-logic</article-title>
          .
          <source>In European Workshop on Logics in Artificial Intelligence</source>
          , pages
          <fpage>313</fpage>
          -
          <lpage>325</lpage>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          <source>[Vennekens</source>
          , 2011]
          <string-name>
            <given-names>Joost</given-names>
            <surname>Vennekens</surname>
          </string-name>
          .
          <article-title>Actual causation in CP-logic</article-title>
          . TPLP,
          <volume>11</volume>
          (
          <issue>4-5</issue>
          ):
          <fpage>647</fpage>
          -
          <lpage>662</lpage>
          ,
          <year>2011</year>
          . http://arxiv.org/abs/1107.4865.
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          <source>[Weslake</source>
          , 2013]
          <string-name>
            <given-names>Brad</given-names>
            <surname>Weslake</surname>
          </string-name>
          .
          <article-title>A Partial Theory of Actual Causation</article-title>
          . http://bweslake.s3.amazonaws.com/research/papers/weslake_ac.pdf,
          <year>2013</year>
          . Version c4eb488.
          <source>Retrieved on July 18</source>
          ,
          <year>2017</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>