<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>When is a Program an Actual Cause?</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Bita Banihashemi</string-name>
          <xref ref-type="aff" rid="aff2">2</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Shakil M. Khan</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Mikhail Soutchanski</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>(Former) Ryerson University</institution>
          ,
          <addr-line>350 Victoria St, Toronto, ON M5B 2K3</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>University of Regina</institution>
          ,
          <addr-line>3737 Wascana Pkwy, Regina, SK S4S 0A2</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
        <aff id="aff2">
          <label>2</label>
          <institution>York University</institution>
          ,
          <addr-line>4700 Keele St, Toronto, ON M3J 1P3</addr-line>
          ,
          <country country="CA">Canada</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>Causality plays a central role in reasoning about observations. In many cases, it might be useful to define the conditions under which a non-deterministic program can be called an actual cause of an efect in a setting where a sequence of programs are executed one after another. There can be two perspectives, one where at least one execution of the program leads to the efect, and another where all executions do so. The former captures a “weak” notion of causation and is more general than the latter stronger notion. In this paper, we give the definition of weak potential causes. Our analysis is performed within the situation calculus basic action theories and we consider programs formulated in the logic programming language ConGolog. Within this setting, we show how one can utilize a recently developed abstraction framework to relate causes at various levels of abstraction. In particular, we prove that under some conditions, causes defined at an abstract and a concrete level can be related with each other in a kind of commutative diagram.</p>
      </abstract>
      <kwd-group>
<kwd>Actual Cause</kwd>
        <kwd>Abstraction</kwd>
        <kwd>The Situation Calculus</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
<p>Actual or token causation is concerned with identifying the events or actions in a trace that can be considered as causes of an observed effect. The seminal work of Pearl [
        <xref ref-type="bibr" rid="ref1">1</xref>
        ] provided the foundations and served as inspiration for research on actual cause in AI. This research culminated in the book [
        <xref ref-type="bibr" rid="ref2">2</xref>
        ], which summarized a number of previously developed definitions concerning when an event can be considered an actual cause of an effect. These definitions are developed within the framework of structural equation models (SEM), where a simple event is understood as assigning a value to an endogenous variable.
      </p>
<p>However, this perspective does not facilitate the study of causation for more complex activities such as control flow in programs. It can be interesting and important to define when a non-deterministic program is an actual cause of an effect in a setting where a sequence of programs is executed one after another. This immediately leads to the question: when can one intuitively say that a program is an actual cause? One perspective can be that a non-deterministic program is a weak potential cause if at least one execution of the program leads to a situation where the effect holds. Another perspective is that a program is a strong potential cause if all executions of the program produce the effect. Note that a strong potential cause is also weak, but not vice versa. Also, in both cases we talk about potential causes, since they can manifest only in some of the situations that are produced by the execution of the program sequence.</p>
      <p>As an example, imagine Suzy buying a lottery ticket that later wins a reward. If one conceptualizes the complex actions of purchasing the ticket as a highly non-deterministic program, then it is reasonable to say that this program was a weak potential cause of the fact that Suzy won, since there is an execution of this program that leads to a situation where the effect holds and another to a situation where it doesn’t.</p>
      <p>Again, imagine a computer system that involves multiple interacting agents. Typical examples of such systems arise in computer security contexts, where the behaviour of the agents is specified by non-deterministic protocols due to the versatility of possible agent interactions. In this context, one might be interested in determining whether all executions of a protocol led to the successful handling of a security leak. This corresponds to the case of a strong potential cause.</p>
<p>
        In this paper, we give a definition of the more inclusive notion of weak cause. We consider programs formulated in the high-level logic programming language ConGolog [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], which is based on action theories specified in the situation calculus (SC) [
        <xref ref-type="bibr" rid="ref4 ref5">4, 5</xref>
        ]. We build on a recently proposed definition of actual cause in the SC [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], which only considers primitive actions as causes. Since we focus on programs as causes, a natural question that arises then is how these two notions can be related. The programs can be complex, but often they can be conceptualized at some abstract high level (HL) as actions. It turns out that the abstraction framework proposed in [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ], which can relate programs with primitive actions, is also useful for relating a subclass of weak potential causes (in particular, weak causes that are also strong) at different levels of abstraction.
      </p>
      <p>Workshop on Causal Reasoning and Explanation in Logic Programming. *Contributed equally to this paper. bita@eecs.yorku.ca (B. Banihashemi*); skhan@cs.uregina.ca (S. M. Khan*); mes@cs.ryerson.ca (M. Soutchanski). https://www.cs.uregina.ca/~skhan/ (S. M. Khan); http://www.cs.ryerson.ca/~mes/ (M. Soutchanski). ORCID: 0000-0002-7147-484X (B. Banihashemi); 0000-0003-0140-3584 (S. M. Khan). © 2021 Copyright for this paper by its authors. Use permitted under Creative Commons License Attribution 4.0 International (CC BY 4.0). CEUR Workshop Proceedings (CEUR-WS.org), ISSN 1613-0073.</p>
<p>On the semantic level, models of programs can be very complicated, but reasoning about effects of actions that serve as their abstractions can be easier, since essential details are encapsulated in a simpler HL model. We argue that HL and low-level (LL) causes can be related in a kind of commutative diagram. Namely, if a HL action is found to be a cause of an effect, this action is associated with a program δ defined over a LL theory that implements it, and this effect is an abstraction of a LL formula φ (i.e. φ is a refinement of the effect), then at the LL, δ must be a cause of φ. This result is one of our main contributions. We focus here on semantics and leave computational issues to future work.</p>
      <sec id="sec-1-1">
        <title>2. Preliminaries</title>
        <p>Our base framework for this is the situation calculus (SC) [
        <xref ref-type="bibr" rid="ref4">4</xref>
        ] as formalized in [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]. We assume that there is a finite number of action types 𝒜. Moreover, we assume that the terms of object sort are a countably infinite set 𝒩 of standard names for which we have the unique names assumption and domain closure. For simplicity, and w.l.o.g., we assume that there are no functions other than constants and no non-fluent predicates.</p>
        <p>A basic action theory (BAT) 𝒟 is the union of the following disjoint sets: the foundational, domain-independent, (second-order, or SO) axioms of the SC; (first-order, or FO) precondition axioms; (FO) successor state axioms (SSAs) describing how fluents change between situations; (FO) unique names axioms for actions and (FO) domain closure on action types; (SO) unique name axioms and domain closure for object constants; and (FO) axioms describing the initial configuration of the world. A special predicate Poss(a, s) is used to state that action a is executable in situation s; precondition axioms characterize this predicate. The abbreviation Executable(s) means that every action performed in reaching situation s was possible in the situation in which it occurred. A precedence relation on situations s and s′, denoted by s ≤ s′, states that s′ is a successor situation of s and that every action between s and s′ is in fact executable. We write do([a1, a2, . . . , an−1, an], s) as an abbreviation for do(an, do(an−1, . . . , do(a2, do(a1, s)) . . .)); for an action sequence ⃗a, we often write do(⃗a, s) for do([⃗a], s).</p>
        <p>A SC formula is uniform in s if it does not mention Poss, ⊏, or equality on situations, it does not quantify over situations, and whenever it mentions a term of sort situation then that term is s. Also, we use upper-case Greek letters for situation-suppressed SC formulae, and we denote by Φ[s] the formula obtained from Φ by restoring the situation argument s into all fluents in Φ.</p>
        <p>To represent and reason about complex actions, various high-level programming languages have been defined. Here we concentrate on (a fragment of) ConGolog [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ] that includes the following constructs:</p>
        <p>δ ::= nil | α | Φ? | (δ1; δ2) | (δ1|δ2) | (πx. δ(x)) | δ* | (δ1‖δ2).</p>
        <p>Thus, complex actions can be composed using constructs that include the empty program (nil), primitive actions (α), waiting for a condition (Φ?), sequence (δ1; δ2), nondeterministic branch (δ1|δ2), nondeterministic choice of arguments (πx. δ(x)), nondeterministic iteration (δ*), and concurrent execution (δ1‖δ2). Intuitively, πx. δ(x) nondeterministically picks a binding for the variable x and performs the program δ(x) for this binding of x.</p>
        <p>The semantics of ConGolog is specified in terms of single-step transitions, using the following two predicates [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]: (i) Trans(δ, s, δ′, s′), which holds if one step of program δ in situation s may lead to situation s′ with δ′ remaining; and (ii) Final(δ, s), which holds if program δ may legally terminate in situation s. The definitions of Trans and Final we use are as in [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ], except that the test construct Φ? does not yield any transition, but is final when satisfied. The predicate Do(δ, s, s′) means that program δ, when executed starting in situation s, has s′ as its legal terminating situation. It is defined as Do(δ, s, s′) ≐ ∃δ′. Trans*(δ, s, δ′, s′) ∧ Final(δ′, s′), where Trans* denotes the reflexive transitive closure of Trans. We use 𝒞 to denote the axioms defining ConGolog.</p>
        <p>Following [8], we say that a ConGolog program δ is situation-determined (SD) in s if for every sequence of transitions, the remaining program is determined by the resulting situation, i.e. SituationDetermined(δ, s) ≐ ∀s′, δ′, δ′′. Trans*(δ, s, δ′, s′) ∧ Trans*(δ, s, δ′′, s′) ⊃ δ′ = δ′′.</p>
        <p>Example. Our running example involves a simple rescue robot Rb1 that is designed to aid first responders. Initially Rb1 is at the depot, but as an emergency at location L1 exists, Rb1 is expected to go to L1 and assist in the rescue operations (by removing rubble or by evacuating people). Action goTo(r, l) takes robot r to location l, and is executable if r is not already at that location. Action removeRubble(r, l) (resp. evacuatePeople(r, l)) can be performed by robot r at location l to remove rubble (resp. evacuate people); these actions are executable if r is at location l. Fluent At(r, l, s) indicates r’s location to be l at situation s. Fluents RemovedRubble(r, l, s) and EvacuatedPeople(r, l, s) evaluate to true when the robot r has removed rubble and evacuated people at location l, respectively. Robot r is also able to update the software packages it uses by performing action updateSW(r, v), where v indicates the version of the software. Fluent SWUpdated(r, v, s) indicates if the software has been updated to version v. Initially, we assume a new version SW2021 is available.</p>
        <p>The BAT 𝒟 for this domain includes the following action precondition axioms (throughout, we assume that free variables are universally quantified from the outside):</p>
        <p>Poss(goTo(r, l), s) ≡ ¬At(r, l, s),
Poss(updateSW(r, v), s) ≡ ¬SWUpdated(r, v, s),
Poss(removeRubble(r, l), s) ≡ At(r, l, s),
Poss(evacuatePeople(r, l), s) ≡ At(r, l, s).</p>
        <p>Moreover, 𝒟 includes the following SSAs:</p>
        <p>SWUpdated(r, v, do(a, s)) ≡ a = updateSW(r, v) ∨ SWUpdated(r, v, s),
At(r, l, do(a, s)) ≡ a = goTo(r, l) ∨ (At(r, l, s) ∧ ¬∃l′. l′ ≠ l ∧ a = goTo(r, l′)),
RemovedRubble(r, l, do(a, s)) ≡ a = removeRubble(r, l) ∨ RemovedRubble(r, l, s),
EvacuatedPeople(r, l, do(a, s)) ≡ a = evacuatePeople(r, l) ∨ EvacuatedPeople(r, l, s).</p>
        <p>Thus, e.g., r will be located at l in do(a, s) if a refers to r going to l, or r was already at l in s and a is not the action of r going to a different location l′. 𝒟 also includes the following initial state axioms:</p>
        <p>At(Rb1, Depot, S0), ¬RemovedRubble(Rb1, L1, S0),
¬EvacuatedPeople(Rb1, L1, S0), ¬SWUpdated(Rb1, SW2021, S0).</p>
      </sec>
    </sec>
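<p>The single-step Trans/Final semantics can be prototyped directly, which may help fix intuitions before the formal development. The sketch below is our own illustrative encoding (not from the cited ConGolog axiomatization): it covers only the nil, primitive-action, test, sequence, and nondeterministic-branch fragment, models situations as tuples of performed actions, and defines Do(δ, s, s′) as reachability of a final configuration via Trans*.</p>

```python
# Program syntax (hypothetical encoding):
#   ("nil",), ("act", a), ("test", phi), ("seq", d1, d2), ("or", d1, d2)
# Situations are tuples of actions performed so far; phi is a predicate
# on situations (a situation-suppressed condition with the argument restored).

def trans(d, s):
    """Yield (remaining_program, next_situation) for one step of d in s."""
    tag = d[0]
    if tag == "act":
        yield ("nil",), s + (d[1],)
    elif tag == "seq":
        d1, d2 = d[1], d[2]
        if final(d1, s):                 # first part may terminate, step into d2
            yield from trans(d2, s)
        for d1r, s1 in trans(d1, s):     # or step inside the first part
            yield ("seq", d1r, d2), s1
    elif tag == "or":
        yield from trans(d[1], s)
        yield from trans(d[2], s)
    # nil and tests yield no transitions (tests are final when satisfied)

def final(d, s):
    """Final(d, s): may d legally terminate in s?"""
    tag = d[0]
    if tag == "nil":
        return True
    if tag == "test":
        return d[1](s)
    if tag == "seq":
        return final(d[1], s) and final(d[2], s)
    if tag == "or":
        return final(d[1], s) or final(d[2], s)
    return False

def do_(d, s):
    """Do(d, s, s'): all legal terminating situations, via Trans* and Final."""
    results, frontier = set(), [(d, s)]
    while frontier:
        dc, sc = frontier.pop()
        if final(dc, sc):
            results.add(sc)
        frontier.extend(trans(dc, sc))
    return results
```

<p>For instance, the program a; (b | c) started in the empty situation terminates in exactly the two situations where a was followed by b or by c, matching the intended nondeterministic-branch semantics.</p>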
    <sec id="sec-2">
      <title>3. Theoretical Foundations</title>
      <sec id="sec-2-1">
        <title>3.1. Actual Cause</title>
<p>Given a trace of events, actual achievement causes are the events that are behind achieving an effect. In this section, we review previous work on achievement causality in the SC [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ]. An effect here is a SC formula Φ[s] that is uniform in s and that may include quantifiers over object variables.</p>
          <p>Given an effect Φ, the actual causes are defined relative to a causal setting that includes a BAT 𝒟 representing the domain dynamics, and a ground situation σ, representing the “narrative” (i.e. trace of events) where the effect was observed.</p>
          <p>Definition 1 (Causal Setting). A causal setting is a tuple ⟨𝒟, σ, Φ[s]⟩, where 𝒟 is a BAT, σ is a ground situation term of the form do([a1, · · · , an], S0) with ground action functions a1, · · · , an such that 𝒟 |= Executable(σ), and Φ[s] is a SC formula uniform in s such that 𝒟 |= ¬Φ[S0] ∧ Φ[σ].</p>
          <p>Since the theory 𝒟 does not change, when referring to a causal setting we will often suppress 𝒟 and simply write ⟨σ, Φ⟩. Also, here Φ is required to hold by the end of the narrative σ, and thus we ignore the cases where Φ is not achieved by the actions in σ, since in that case, the achievement cause truly does not exist.</p>
          <p>As all changes in the SC result from actions, the achievement causes of an effect are contained within the set of ground action terms occurring in σ. However, since σ might include multiple occurrences of the same action, one also needs to identify the situations where those actions were executed.</p>
          <p>According to [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], if some action a of the action sequence in σ triggers the formula Φ to change its truth value from false to true relative to 𝒟, and if there are no actions in σ after a that change the value of Φ back to false, then a is an actual cause of achieving Φ in σ. They showed that, using the single-step regression operator ρ (i.e. a one-step version of the regression operator defined in [
            <xref ref-type="bibr" rid="ref5">5</xref>
            ]), in addition to the primary action that actually brings about the effect of interest, one can recursively compute the chain of actions that build up to the primary achievement cause. The following inductive definition formalizes this intuition. Let Πa(a, s) be the r.h.s. of the precondition axiom for a in 𝒟.</p>
          <p>Definition 2 (Achievement Cause). A causal setting 𝒞 = ⟨𝒟, σ, Φ[s]⟩ satisfies the achievement condition of Φ via the situation term do(a*, s*) ⊑ σ if there is an action a′ and a situation s′ such that 𝒟 |= ¬Φ[s′] ∧ ∀s. do(a′, s′) ⊑ s ⊑ σ ⊃ Φ[s], and either a* = a′ and s* = s′, or the causal setting ⟨𝒟, s′, ρ[Φ[s], a′] ∧ Πa′(a′, s′)⟩ satisfies the achievement condition via the situation term do(a*, s*). Whenever a causal setting 𝒞 satisfies the achievement condition via situation do(a*, s*), the action a* executed in situation s* is said to be an achievement cause of 𝒞.</p>
          <p>According to [
            <xref ref-type="bibr" rid="ref6">6</xref>
            ], the achievement causes of 𝒞 form a finite sequence of situation-action pairs, which is called the achievement causal chain of 𝒞.</p>
          <p>Example (Cont’d). Consider the causal setting 𝒞 = ⟨𝒟, σ1, Φ1⟩, where Φ1 = ∃r, l. EvacuatedPeople(r, l) and σ1 = do([updateSW(Rb1, SW2021), goTo(Rb1, L1), evacuatePeople(Rb1, L1)], S0). Then by Definition 2, the action evacuatePeople(Rb1, L1) performed in situation S2 = do([updateSW(Rb1, SW2021), goTo(Rb1, L1)], S0) is an achievement cause of 𝒞. This is the case since evacuatePeople(Rb1, L1) is the first action after which the effect Φ1 becomes true, and it remains true till the end of the trace. Moreover, we can show that goTo(Rb1, L1) executed in S1 = do(updateSW(Rb1, SW2021), S0) is another achievement cause of 𝒞, since the causal setting ⟨𝒟, S2, Φ′⟩ satisfies the achievement condition of Φ′ via the situation term do(goTo(Rb1, L1), S1), where Φ′ = ρ[Φ1, evacuatePeople(Rb1, L1)] ∧ Π(evacuatePeople(Rb1, L1), S2). Finally, these are all the causes, and in particular updateSW(Rb1, SW2021) executed in S0 is not an achievement cause of 𝒞.</p>
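<p>For ground narratives, the base case of Definition 2 suggests a direct computation: scan the trace for the action whose execution flips the effect from false to true such that the effect then persists to the end. The sketch below is a toy illustration under our own encoding (situations as action-list prefixes, effects as predicates on them; the trace and fluent names follow our reading of the running example), not the regression-based procedure of the cited work.</p>

```python
def primary_achievement_cause(trace, effect):
    """Return the 0-based index of the action that flips the effect to
    true such that it stays true until the end of the trace, or None.
    `effect` is a predicate on situations, i.e. prefixes of the trace."""
    n = len(trace)
    # Definition-1-style sanity conditions: false initially, true at the end.
    if not effect(trace) or effect(trace[:0]):
        return None
    for i in range(n, 0, -1):  # search from the end: last persistent flip
        persists = all(effect(trace[:j]) for j in range(i, n + 1))
        if persists and not effect(trace[:i - 1]):
            return i - 1
    return None

# Reconstructed rescue narrative from the example:
sigma = ["updateSW(Rb1,SW2021)", "goTo(Rb1,L1)", "evacuatePeople(Rb1,L1)"]
evacuated = lambda s: any(a.startswith("evacuatePeople") for a in s)
```

<p>On this narrative the primary achievement cause of the evacuation effect is the third action, as in the example; the inductive step of Definition 2 would then recurse with the regressed condition to also identify goTo(Rb1, L1).</p>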
      </sec>
      <sec id="sec-2-2">
        <title>3.2. Abstraction</title>
<p>
          We will use the abstraction framework of [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] for reasoning about abstract causes. In this framework, there is a high-level (HL) or abstract action theory 𝒟h and a low-level (LL) or concrete action theory 𝒟l representing the dynamics of the domain at different levels of detail. 𝒟h (resp. 𝒟l) involves a finite set of primitive action types 𝒜h (resp. 𝒜l) and a finite set of primitive fluent predicates ℱh (resp. ℱl). Also, 𝒟h and 𝒟l are assumed to share no domain-specific symbols except for the standard names for objects in 𝒩.
        </p>
        <p>Definition 3 (Refinement Mapping). A refinement mapping m is a function that associates each HL primitive action type A in 𝒜h to a SD ConGolog program δA defined over the LL theory that implements the action, i.e. m(A(⃗x)) = δA(⃗x). Moreover, m maps each situation-suppressed HL fluent F(⃗x) in ℱh to a situation-suppressed formula ΦF(⃗x) defined over the LL theory that characterizes the concrete conditions under which F(⃗x) holds in a situation.</p>
        <p>We extend the notation so that m(Φ) stands for the result of substituting every fluent F(⃗x) in situation-suppressed formula Φ by m(F(⃗x)). Also, we apply m to action sequences with m(α1, . . . , αn) ≐ m(α1); . . . ; m(αn) for n ≥ 1 and m(ε) ≐ nil, where ε is the empty sequence of actions.</p>
        <p>To relate the HL and LL models/theories, a variant of bisimulation [9] is defined as follows.</p>
        <p>Definition 4 (m-Bisimulation). Given a model Mh of 𝒟h and a model Ml of 𝒟l ∪ 𝒞, a relation B ⊆ Δ^Mh × Δ^Ml (where Δ^M stands for the situation domain of M) is an m-bisimulation relation between Mh and Ml if ⟨sh, sl⟩ ∈ B implies that: (i) sh ∼m sl, i.e. sh evaluates each HL primitive fluent the same as the evaluation of the refinement of the fluent in sl; (ii) for every HL primitive action type A in 𝒜h, if there exists s′h s.t. Mh |= Poss(A(⃗x), sh) ∧ s′h = do(A(⃗x), sh), then there exists s′l s.t. Ml |= Do(m(A(⃗x)), sl, s′l) and ⟨s′h, s′l⟩ ∈ B; and (iii) for every HL primitive action type A in 𝒜h, if there exists s′l s.t. Ml |= Do(m(A(⃗x)), sl, s′l), then there exists s′h s.t. Mh |= Poss(A(⃗x), sh) ∧ s′h = do(A(⃗x), sh) and ⟨s′h, s′l⟩ ∈ B. Mh is m-bisimilar to Ml, written Mh ∼m Ml, if there exists an m-bisimulation relation B between Mh and Ml such that ⟨S0^Mh, S0^Ml⟩ ∈ B.</p>
        <p>Definition 5 (Sound Abstraction). 𝒟h is a sound abstraction of 𝒟l relative to refinement mapping m if for all models Ml of 𝒟l ∪ 𝒞, there exists a model Mh of 𝒟h s.t. Mh ∼m Ml.</p>
        <p>With a sound abstraction, if the HL theory entails that a sequence of actions is executable and achieves a condition, then the LL must also entail that there exists an executable refinement of the sequence such that the “translated” condition holds afterwards. Also, if the LL considers the executability of a refinement of a sequence of HL actions satisfiable, after which a refinement of a condition holds, then the HL also considers the executability of the sequence of HL actions satisfiable, after which the condition holds as well.</p>
        <p>Definition 6 (Complete Abstraction). 𝒟h is a complete abstraction of 𝒟l relative to refinement mapping m if for all models Mh of 𝒟h, there exists a model Ml of 𝒟l ∪ 𝒞 s.t. Ml ∼m Mh.</p>
        <p>With a complete abstraction, if the LL theory entails that some refinement of a sequence of HL actions is executable and achieves a “translated” HL condition, then the HL also entails that the action sequence is executable and the condition holds afterwards. Also, if the HL considers the executability of a sequence of actions satisfiable, after which a condition holds, then the LL also considers the executability of the refinement of the sequence of HL actions satisfiable, after which a “translated” condition holds as well.</p>
        <p>Note that 𝒟h is a sound and complete abstraction of 𝒟l relative to refinement mapping m if 𝒟h is both a sound and a complete abstraction of 𝒟l relative to m. Also, this approach supports the use of ConGolog programs to specify the possible dynamics of the domain at both the HL and LL; this is done by following [10] and “compiling” the program into the BAT 𝒟 to get a new BAT 𝒟′ whose executable situations are exactly those that can be reached by executing the program.</p>
        <p>Example (Cont’d). In our example, we define a HL BAT 𝒟h that abstracts over some details of 𝒟l. At the HL, we abstract over details of rescue actions. Action aid(r, l) abstracts over the process of either clearing rubble or evacuating people. The fluent Aided(r, l, s) indicates if robot r has aided in rescue at location l. For simplicity, actions goTo(r, l) and updateSW(r, v) are defined similarly to their LL counterparts. 𝒟h includes the following precondition axioms:</p>
        <p>Poss(updateSW(r, v), s) ≡ ¬SWUpdated(r, v, s),
Poss(goTo(r, l), s) ≡ ¬At(r, l, s),
Poss(aid(r, l), s) ≡ At(r, l, s).</p>
        <p>The HL BAT also includes the following SSA for the fluent Aided (the SSAs for At and SWUpdated are as at the LL): Aided(r, l, do(a, s)) ≡ a = aid(r, l) ∨ Aided(r, l, s).</p>
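<p>A refinement mapping can be prototyped extensionally: each HL action yields the set of LL executions implementing it, and each HL fluent maps to a condition over LL traces. The sketch below uses our reconstructed names (aid, removeRubble, evacuatePeople, Aided) and a deliberately simplified trace-based model; it only illustrates clause (i) of m-bisimulation, namely that the mapped HL fluent holds after every refinement of the HL action.</p>

```python
# Hypothetical refinement mapping for the rescue example:
# m(aid(r, l)) = removeRubble(r, l) | evacuatePeople(r, l), and
# m(Aided(r, l)) = RemovedRubble(r, l) v EvacuatedPeople(r, l).

def m_action(hl_action):
    """Refinement of one HL action: the set of LL executions implementing it."""
    if hl_action.startswith("aid"):
        args = hl_action[len("aid"):]
        return [["removeRubble" + args], ["evacuatePeople" + args]]
    return [[hl_action]]  # goTo and updateSW map to themselves

def m_fluent_aided(ll_trace):
    """m(Aided): the LL condition under which the HL fluent Aided holds."""
    return any(a.startswith(("removeRubble", "evacuatePeople"))
               for a in ll_trace)

def refinements(hl_sequence):
    """All LL traces refining a sequence of HL actions (m on sequences)."""
    traces = [[]]
    for a in hl_sequence:
        traces = [t + step for t in traces for step in m_action(a)]
    return traces
```

<p>For the HL sequence goTo(Rb1, L1); aid(Rb1, L1) there are two refinements, and the mapped fluent m(Aided) holds at the end of both, as soundness of the abstraction would require here.</p>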
      </sec>
    </sec>
    <sec id="sec-3">
      <title>4. Programs as Actual Causes</title>
<p>We now return to our discussion of abstract causes. As seen in Section 3, Definition 2 appeals to regression, a syntactic notion, and this requires the effect formula Φ[s] to be uniform in s. However, this is too restrictive for us, as it is hard to adapt for abstract causes. Specifically, it is hard to define regression over programs; recall that Reiter defined regression over primitive actions.1 Therefore, we start by introducing the notion of dynamic effect formulae in the SC.</p>
      <p>Definition 7 (Dynamic Effect Formula). Let ⃗x and ⃗a respectively range over object terms and sequences of action terms. The class of situation-suppressed dynamic effect formulae φ is defined inductively using the following grammar:</p>
      <p>φ ::= F(⃗x) | ExecSeq(⃗a) | After(⃗a, φ) | ¬φ | φ1 ∧ φ2 | ∃⃗x. φ.</p>
      <p>That is, a dynamic effect formula can be a situation-suppressed fluent, a formula that says that some sequence of actions ⃗a is executable, a formula that indicates that some dynamic effect formula holds after some sequence of actions has occurred, or one that can be built from other dynamic effect formulae using the usual connectives. Note that φ can have quantification over object variables, but not over situations. A dynamic effect formula φ is evaluated in a situation s as follows:</p>
      <p>φ[s] ≐ F(⃗x, s) if φ is F(⃗x);
∃s′. Do(⃗a, s, s′) if φ is ExecSeq(⃗a);
φ′[do([⃗a], s)] if φ is After(⃗a, φ′);
¬(φ′[s]) if φ is ¬φ′;
φ1[s] ∧ φ2[s] if φ is (φ1 ∧ φ2);
∃⃗x. (φ′[s]) if φ is (∃⃗x. φ′).</p>
      <p>We generalize causal settings by allowing effects in our framework to be any dynamic effect formula φ, i.e. we no longer require the effect to be uniform in s. Also, we do not require the trace to be a ground situation term, so it can now include arbitrary (non-ground) action terms. This allows for the modeling of abstract causes.</p>
      <p>Definition 9 (Generalized Causal Setting). A generalized causal setting is a tuple ⟨𝒟, δ, φ⟩, where 𝒟 is a BAT, δ is a ConGolog program, and φ is a dynamic effect formula s.t. 𝒟 ∪ 𝒞 |= ¬φ[S0] ∧ ∃s′. Do(δ, S0, s′) ∧ φ[s′].</p>
      <p>Thus, there is at least one execution of the program starting in the initial situation S0 after which the effect φ holds.</p>
      <p>As discussed in Section 3, the definition of actual achievement cause given by [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] only deals with narratives that are linear sequences of actions. Consequently, their causes are actions (or, more precisely, action-situation pairs).2 To facilitate the modeling of abstract causes, we extend this by allowing narratives to be linear sequences of ConGolog programs. This allows programs to be identified as causes of observed effects. In the following, we progressively define what it means for a ConGolog program to be a weak potential cause, starting with primary causes. Note that, given a generalized causal setting, there can be more than one primary potential cause of the effect, as the program can have multiple possible executions.</p>
      <p>Definition 10. Given a generalized causal setting 𝒞 = ⟨𝒟, (δ1; · · · ; δn), φ⟩ and a model M of 𝒟 ∪ 𝒞, a program δk+1 ∈ {δ1, · · · , δn} is called a primary weak potential cause of φ relative to 𝒞 and M if and only if:
M |= ∃sk, sk+1, s. Do((δ1; . . . ; δk), S0, sk) ∧ ¬φ[sk] ∧ Do(δk+1, sk, sk+1) ∧ Do((δk+2; . . . ; δn), sk+1, s) ∧ ∀s′. sk+1 ≤ s′ ≤ s ⊃ φ[s′].</p>
      <p>The triple (sk, sk+1, s) is called a witness for this.</p>
      <p>1 Note that [11] has previously proposed an extension of regression for programs; investigating whether their definition can be adapted for our purpose is future work.</p>
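<p>Definition 10 can be checked by enumeration when the programs in the scenario are given extensionally by their finite sets of executions. The following is a sketch under that assumption (the encoding and the names primary_weak_potential_cause, scenario, evacuated are ours, and the single-model case is assumed, so no quantification over models is needed).</p>

```python
from itertools import product

def primary_weak_potential_cause(programs, k, effect):
    """Definition-10-style check by enumeration: `programs` is a list of
    programs, each given as a list of its executions (action lists);
    `effect` is a predicate on traces (situations)."""
    for pre in product(*programs[:k]):
        s = [a for e in pre for a in e]        # situation before program k
        if effect(s):
            continue                            # effect must be false here
        for step in programs[k]:
            s_k1 = s + list(step)               # situation after program k
            for post in product(*programs[k + 1:]):
                trace = list(s_k1)
                ok = effect(trace)              # must hold from s_k1 ...
                for a in (a for e in post for a in e):
                    trace.append(a)
                    ok = ok and effect(trace)   # ... up to the final situation
                if ok:
                    return True                 # witness execution found
    return False

# Reconstructed rescue scenario: the third program nondeterministically
# removes rubble or evacuates people; the effect is "people evacuated".
scenario = [[["updateSW"]], [["goTo"]],
            [["removeRubble"], ["evacuatePeople"]]]
evacuated = lambda t: "evacuatePeople" in t
```

<p>Here the third program is a primary weak potential cause of the evacuation effect (one of its executions achieves it and the effect persists), while the first is not, mirroring the example below.</p>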
      <sec id="sec-3-2">
        <title>2Here and in the sequel, for brevity, we omit the terms actual</title>
        <p>and achievement when we talk about causes, since we exclusively
consider actual achievement causes in this paper.</p>
        <p>That is, a program  +1 in the scenario ( 1; . . . ;  ) is Definition 13 (Weak Potential Cause). Given a
gena primary weak potential cause relative to a model  of eralized causal setting  = ⟨, ( 1; · · · ;  ),  ⟩, a
protheory  ∪ and causal setting  if and only if there is an gram   ∈ { 1, · · · ,  } is called a weak potential cause
execution of the prefix ( 1; . . . ;  ) that ends in situation of  relative to  if and only if for all models  of  ∪ ,
 in which  is false, situation +1 can be reached   is a weak potential cause of  relative to  and  .
by executing  +1 starting from , situation  can be Moreover, if  is initially completely specified, there is
reached by executing the remaining programs starting only one model; in that case, we call  ′ from the witness
from +1, and  holds in all situations from +1 up to (′, ′′,  ′) in Definition 12 a witness to the fact that   is
. Essentially, this is a straightforward generalization a weak potential cause of  relative to .
of the base case of Definition 2 and ensures that there is
an execution of the scenario over which  was achieved Thus, we only call a program a weak potential cause
by some action in  +1 and  persisted till the end of relative to a generalized causal setting if it is a weak
the trace, i.e. it was not later made false by a subsequent potential cause in all models of the theory.
action.</p>
        <p>Moreover, we define what it means for a program to Example (Cont’d). Consider the setting
, ((,  2021); (, 1);
be a primary weak potential cause relative to a causal ⟨
setting.  (, 1)), ∃, .(, )⟩, where
 (, ) = ((, )). Then
accordDefinition 11 (Primary Weak Potential Cause). ing to our definitions,  (, 1) is the
Given a generalized causal setting  = primary weak potential cause relative to the above
⟨, ( 1; · · · ;  ),  ⟩, a program   ∈ { 1, · · · ,  } is setting, as in all models,  ∪  |= ∃2, 3.
called a primary weak potential cause relative to  if (((,  2021); (, 1)), 0,
and only if for all models  of  ∪ ,   is a primary 2) ∧ ¬∃, .(, )[2] ∧ ( (,
weak potential cause of  relative to  and  . 1), 2, 3) ∧ ∃, .(, )[3], and by the SSA
for , the efect persists until the end of scenario.</p>
        <p>Next, we define weak potential causes in general. These Note that, as only in some executions of the
include both primary and non-primary causes reflecting scenario ∃, .(, ) is true,  (, 1)
both base and inductive cases of Definition 2. is considered weak. If we instead consider the
Definition 12. Given a generalized causal setting  = efect ∃, .(, ) ∨ (, ), then
⟨∈,( {1 ; 1·· ,· · · · ;  ,), } i⟩sacnadllaedmaodweelakpooften∪tial,caapursoegorfam strongp(oten,tial1c)aucaseninbethecosnensisdeetrheadt ians atlhleexepcruimtioanrys
relative to  and  if and only if: of the scenario   achieves the efect.</p>
        <p>
          Moreover, we can also show that (, 1)
1.   is a primary weak potential cause wrt  and  is a weak potential cause, since it is a
priwith witness (′, ′′,  ′), where  ′ =  , or mary weak potential cause wrt the setting
2.   (where  &lt;  ≤ ) is a weak poten- ⟨, ((,  2021); (, 1)),
tial cause relative to setting  and  with wit- ((, 1)) ∧
ness (− 1, ([⃗ ], − 1),   ), and   is a pri-  ((, 1), ∃, .(, ))⟩.
mary weak potential cause relative to the setting On the other hand, (,  2021)
can⟨, ( 1; · · · ;  − 1),  ′⟩ and model  with wit- not be shown to be a weak potential cause.
ness (′, ′′,  ′), where  ′ = (⃗ ) ∧ Our notion of programs as actual cause above is a weak
 (⃗ ,   ). and more inclusive one. We consider a program as a cause
We call (′, ′′,  ′) a witness for   being a weak potential if there is at least one execution where the program is
cause wrt  and  . a cause. In some cases, it might be useful to consider a
stronger version, where a program is considered to be
Thus,   is a weak potential cause relative to model  a cause if it is a cause according to all executions of the
and generalized causal setting  if and only if it is either program. A thorough investigation of such a variant is
a primary weak potential cause wrt  and  , or it is a future work.
primary weak potential cause of another weak potential When the program  is finite, terminating, and
comcause   , i.e. it enables   by ensuring that the appropriate posed of ground actions only, one can show that the
inexecution path ⃗ of   that brought about   ’s own efect termediate efects (i.e. [ExecSeq (⃗) ∧ After (⃗, )]) can
  is executable (i.e. that (⃗ )) and by fulfilling be straightforwardly computed using Reiter’s regression.
the conditions under which the execution of ⃗ achieved Also, and in particular, when  is a finite sequence of
  (i.e. that  (⃗ ,   )). ground actions, the causes computed using our
definition and the definition in [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ] are the same.
        </p>
        <p>Theorem 14. Let δ = α1; · · · ; αn be a finite sequence of ground actions. Then (αi, do([α1, · · · , αi−1], S0)) is a cause relative to the causal setting ⟨𝒟, do([δ], S0), Φ⟩ according to Definition 2 if αi is a potential cause relative to the generalized causal setting ⟨𝒟, δ, Φ⟩ according to Definition 13. (Note that a SC formula Φ is also a dynamic effect formula.)</p>
        <p>Given the above, it is easy to see that when δ is a finite sequence of ground actions, all properties shown for [
          <xref ref-type="bibr" rid="ref6">6</xref>
          ]'s framework also hold in ours. These include the proper handling of preemption and switches.</p>
      </sec>
      <sec id="sec-3-3">
        <title>5. Reasoning about Abstract Causes</title>
        <p>We now focus on investigating how reasoning about abstract causes can be simplified. In particular, we will show that under some conditions, a subclass of weak potential causes at various levels of abstraction can be related. This subclass involves weak causes that are also strong in the sense that all executions of the cause achieve the effect (see the corollary below). This reduces reasoning about abstract causes (i.e., programs) at the LL to that of actions as causes at the HL when said conditions are met.</p>
        <p>We start by formalizing some of these conditions. First, we assume that every HL action is mapped via m to a LL program that may take part in a LL scenario; in this way, any abstract scenario can be refined by a concrete one. Moreover, we assume only action sequences that refine some HL action sequence are executed in the LL BAT:</p>
        <p>Assumption 1 (LL behaviours refine HL actions).
𝒟l ∪ 𝒞 |= ∀s. Executable(s) ⊃ ∃δ′. Trans*(anyseqhl, S0, δ′, s),
where anyseqhl =def (|A∈𝒜h πx⃗. m(A(x⃗)))*,
i.e., do any sequence of refinements of HL actions.</p>
      </sec>
      <sec id="sec-3-4">
        <title>Furthermore, we require that LL effects are non-transient wrt HL actions:</title>
        <p>Assumption 2 (Non-transiency of LL Effects).
Suppose the set L includes all the fluent literals in a refinement of a HL fluent F. We assume that:
𝒟l ∪ 𝒞 |= ∀s. Do(anyseqhl, S0, s) ⊃
⋀A∈𝒜h ⋀F∈ℱh ⋀l∈L ∀s′, s′′, x⃗, y⃗.
l(y⃗)[s] ∧ Do(m(A(x⃗)), s, s′) ∧ l(y⃗)[s′] ∧ s &lt; s′′ &lt; s′ ⊃ l(y⃗)[s′′].</p>
        <p>The above essentially requires the LL theory to entail that if a fluent literal l that is in a refinement of a HL fluent F is true in both the situations before and after execution of the refinement of a HL action A(x⃗), then it should remain true in all intermediate situations of execution of the refinement of A(x⃗) as well. This condition must hold after any sequence of refinements of HL actions, i.e., in any s such that Do(anyseqhl, S0, s). To see why this is necessary, consider the following example. Suppose that at the HL, we have the generalized causal setting ⟨𝒟h, (A; B), φh⟩ in which A is the only primary weak potential cause. Assume the following mapping: m(A) = a, m(B) = b1; b2, and m(φh) = f. At the LL, after performing a, f becomes true, and after performing b1 and b2, f becomes false and true respectively. Hence, in the setting ⟨𝒟l, m(A; B), m(φh)⟩, m(B) is considered the only primary weak potential cause if the analysis is done at the LL using Definition 13. To achieve correspondence of potential causes between HL and LL, we need to rule out such cases.</p>
        <p>To investigate how causes at the abstract and concrete levels are related, we first consider sound abstractions. For this, and when complete information is assumed, we can show that if an action α is a weak potential cause wrt the generalized causal setting 𝒮h = ⟨𝒟h, (α⃗h), Φ⟩ at the HL, the refinement of α can be considered a weak potential cause wrt the setting 𝒮l = ⟨𝒟l, m(α⃗h), m(Φ)⟩ at the LL. (In the following, we will quantify over action sequences and so we need to encode sequences as first-order terms as in [
          <xref ref-type="bibr" rid="ref3">3</xref>
          ]. For notational simplicity, we suppress this encoding and use sequences as terms directly. Also, we conjecture that these results also hold when the initial state is incomplete; proving this is future work.)</p>
        <p>Theorem 15. Suppose that 𝒟h is a sound abstraction of 𝒟l wrt some refinement mapping m, and that Assumptions 1 and 2 hold. Then for any ground HL action sequence α⃗ and for any HL situation-suppressed formula Φ such that 𝒟h |= Executable(do(α⃗, S0)) ∧ ¬Φ[S0] ∧ Φ[do(α⃗, S0)], we have:
1. 𝒟l ∪ 𝒞 |= ¬m(Φ)[S0] ∧ ∃s. Do(m(α⃗), S0, s) ∧ m(Φ)[s].
2. If α⃗ = α⃗i−1 αi α⃗i+1 and αi is the primary weak potential cause wrt the generalized causal setting 𝒮h = ⟨𝒟h, (α⃗), Φ⟩, then m(αi) is the unique primary weak potential cause wrt the generalized causal setting 𝒮l = ⟨𝒟l, m(α⃗), m(Φ)⟩.
3. If α⃗ = α⃗i−1 αi α⃗i+1 αk α⃗k+1, αk is a weak potential cause wrt the generalized causal setting 𝒮h = ⟨𝒟h, (α⃗), Φ⟩ with witness Φk, and αi is the primary weak potential cause wrt the setting 𝒮h′ = ⟨𝒟h, (α⃗i−1 αi α⃗i+1), ExecSeq(αk) ∧ After(αk, Φk)⟩, and moreover, 𝒟l is initially completely specified, and m(αk) is a weak potential cause wrt the causal setting 𝒮l = ⟨𝒟l, m(α⃗), m(Φ)⟩ with witness m(Φk), then m(αi) is the unique primary weak potential cause wrt the generalized causal setting 𝒮l′ = ⟨𝒟l, m(α⃗i−1 αi α⃗i+1), φ′⟩, where 𝒟l ∪ 𝒞 |= ∃s*, a⃗. Do(m(α⃗i−1 αi α⃗i+1), S0, s*) ∧ Do(m(αk), s*, do([a⃗], s*)) and φ′ = ExecSeq(a⃗) ∧ After(a⃗, m(Φk)).</p>
        <p>Example (Cont'd). Consider the HL setting 𝒮h = ⟨𝒟h, (α⃗), φ⟩, where α⃗ = [(, 2021), (, 1), (, 1)] and φ = ∃, . (, ). Using similar reasoning as before, we can show that (, 1) is the primary weak potential cause relative to 𝒮h. Moreover, (, 1) is another cause relative to 𝒮h.</p>
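<p>The role of the non-transiency requirement (Assumption 2) can be replayed in a few lines of code. This is a hypothetical simulation with our own action names, not derived from any BAT: a transient LL effect shifts the achiever of the effect from the refinement of one HL action to another.</p>

```python
# Hypothetical HL/LL mismatch under a transient effect (names are ours).
def achiever(trace, holds_after):
    """Last action at which the effect flips false -> true.

    holds_after[i] is the truth value of the effect right after trace[i];
    initially the effect is assumed false.
    """
    cause, prev = None, False
    for i, val in enumerate(holds_after):
        if not prev and val:
            cause = i
        prev = val
    return None if cause is None else trace[cause]

# HL: A makes the effect true and B preserves it, so A is the achiever.
hl_cause = achiever(["A", "B"], [True, True])
# LL: m(A) = a; m(B) = b1;b2, where b1 falsifies f and b2 restores it,
# so the achiever falls inside m(B), disagreeing with the HL analysis.
ll_cause = achiever(["a", "b1", "b2"], [True, False, True])
```

<p>Here hl_cause is A while ll_cause lies inside the refinement of B, which is exactly the divergence that Assumption 2 rules out.</p>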
        <p>By Theorem 15, we have that m((, 1)) is the primary weak potential cause relative to setting 𝒮l = ⟨𝒟l, m(α⃗), m(φ)⟩, where m(φ) = ∃, .(, ) ∨ (, ) and m(α⃗) = (, 2021); (, 1); (, 1). Moreover, the action (, 1) is considered another weak potential cause relative to 𝒮l.</p>
        <p>Notice that since the number of actions and fluents that a reasoner needs to consider is typically higher at the LL, Theorem 15 can yield important efficiency benefits.</p>
        <p>In Corollary 5 of [
          <xref ref-type="bibr" rid="ref7">7</xref>
          ] ([BDL17]), the authors showed that if 𝒟h is a sound abstraction of 𝒟l wrt m, then the different sequences of LL actions that are refinements of a given HL action sequence all have the same effects on the HL fluents, and more generally on HL situation-suppressed formulae, i.e., from the HL perspective they are deterministic:</p>
        <p>Corollary 16 (from BDL17). If 𝒟h is a sound abstraction of 𝒟l wrt m, then for any sequence of ground HL actions α⃗ and for any HL situation-suppressed formula φ, we have:
𝒟l ∪ 𝒞 |= ∀s, s′. Do(m(α⃗), S0, s) ∧ Do(m(α⃗), S0, s′) ⊃ (m(φ)[s] ≡ m(φ)[s′]).</p>
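<p>Corollary 16's property can be checked directly on small finite examples: enumerate every refinement of a HL action sequence and verify that they all agree on the truth value of the mapped formula. A minimal sketch under our own encoding (not the paper's), where each HL action maps to a set of alternative LL action sequences:</p>

```python
from itertools import product

def refinements(mapping, hl_seq):
    """All LL action sequences obtained by independently choosing one
    alternative refinement for each HL action in hl_seq."""
    for combo in product(*(mapping[a] for a in hl_seq)):
        yield [a for segment in combo for a in segment]

def deterministic_wrt(mapping, hl_seq, initial, apply_action, phi):
    """Check the Corollary 16 property: every refinement of hl_seq yields
    the same truth value for phi in the resulting final state."""
    values = set()
    for seq in refinements(mapping, hl_seq):
        state = initial
        for a in seq:
            state = apply_action(state, a)
        values.add(phi(state))
    return len(values) == 1
```

<p>If two alternative refinements of the same HL action disagree on a mapped fluent, the check fails, signalling that the abstraction is not sound in the sense of BDL17.</p>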
      </sec>
      <sec id="sec-3-5">
        <title>This indicates that the weak potential causes in Theorem 15 are in fact strong in the sense that in all executions of the program, the effect is achieved.</title>
        <p>With complete abstractions, and when complete information is assumed, we can show that if the refinement of a HL action α is a weak potential cause wrt the LL setting ⟨𝒟l, m(α⃗h), m(Φ)⟩, then α can be considered a weak potential cause wrt the setting ⟨𝒟h, (α⃗h), Φ⟩ at the HL.</p>
        <p>Theorem 17. Suppose that 𝒟h is a complete abstraction of 𝒟l wrt some refinement mapping m. Then for any ground HL action sequence α⃗ and for any HL situation-suppressed formula Φ such that 𝒟l ∪ 𝒞 |= ¬m(Φ)[S0] ∧ ∃s. Do(m(α⃗), S0, s) ∧ m(Φ)[s], we have that:
1. 𝒟h |= ¬Φ[S0] ∧ Executable(do([α⃗], S0)) ∧ Φ[do([α⃗], S0)].
2. If α⃗ = α⃗i−1 αi α⃗i+1 and m(αi) is the primary weak potential cause wrt the generalized causal setting 𝒮l = ⟨𝒟l, m(α⃗), m(Φ)⟩, then αi is the unique primary weak potential cause wrt the setting 𝒮h = ⟨𝒟h, (α⃗), Φ⟩.
3. If 𝒟l is initially completely specified, α⃗ = α⃗i−1 αi α⃗i+1 αk α⃗k+1, m(αk) is a weak potential cause wrt the generalized causal setting 𝒮l = ⟨𝒟l, m(α⃗), m(Φ)⟩ with witness m(Φk), and m(αi) is the unique primary weak potential cause wrt the generalized causal setting 𝒮l′ = ⟨𝒟l, m(α⃗i−1 αi α⃗i+1), φ′⟩, where 𝒟l ∪ 𝒞 |= ∃s*, a⃗. Do(m(α⃗i−1 αi α⃗i+1), S0, s*) ∧ Do(m(αk), s*, do([a⃗], s*)) and φ′ = ExecSeq(a⃗) ∧ After(a⃗, m(Φk)), and moreover, αk is a weak potential cause wrt the causal setting 𝒮h = ⟨𝒟h, α⃗, Φ⟩, then αi is the unique primary weak potential cause wrt the setting 𝒮h′ = ⟨𝒟h, (α⃗i−1 αi α⃗i+1), ExecSeq(αk) ∧ After(αk, Φk)⟩.</p>
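<p>The weak/strong distinction running through these results (one execution achieving the effect vs. all executions doing so) can be stated compactly. A sketch under our own encoding, where a candidate program is represented simply by the set of final states of its executions:</p>

```python
def weak_strong(final_states, phi):
    """Classify a candidate cause: weak if some execution achieves phi,
    strong if every execution does. Returns a (weak, strong) pair."""
    achieved = [phi(s) for s in final_states]
    return (any(achieved), all(achieved))
```

<p>Under Corollary 16, the refinements of a HL sequence all agree on mapped formulae, so a weak cause in that setting is automatically strong.</p>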
        <p>Example (Cont'd). Let α⃗ = [(, 2021), (, 1), (, 1)] and φ = ∃, . (, ). Suppose at the LL, 𝒟l ∪ 𝒞 |= ¬m(φ)[S0] ∧ ∃s. Do(m(α⃗), S0, s) ∧ m(φ)[s]. Moreover, suppose that (, 1) is the primary weak potential cause wrt setting 𝒮l = ⟨𝒟l, m(α⃗), m(φ)⟩, which brings about the effect m(φ) = ∃, .(, ) ∨ (, ). Also, (, 1) is another cause wrt 𝒮l.</p>
        <p>Then by Theorem 17, we have that (, 1) is the primary weak potential cause wrt the setting 𝒮h = ⟨𝒟h, (α⃗), φ⟩, which brings about the effect φ. Moreover, the action (, 1) can be considered another weak potential cause wrt 𝒮h.</p>
        <p>Depending on the requirements of the domain, a modeler can decide among sound, complete, or sound and complete abstractions, each providing efficiency benefits.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>6. Discussion</title>
      <p>
        Since we build on a robust approach to computing actual causes in the SC [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ], we can also correctly handle tricky cases of preemption and over-determination that were problematic for previous approaches. For example, in the well-known Switch example, a switch action is not an actual cause, as shown in [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Moreover, as proved in [12], their counterfactual-based approach turns out to be equivalent to the first-principles approach of [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ]. Also, the paper [12] argued that an actual cause can be correctly determined in the difficult Bottle example without appealing to physically impossible interventions.
      </p>
      <p>While there has been a lot of work on actual causation, to the best of our knowledge, our account is the first and the only proposal that investigates programs as actual causes. Perhaps the closest to our work is the one by [13], who identified a subset of actions (program steps) of a set of interacting programs as an actual cause for a violation of specific properties in a security domain. Our approach, however, focuses on formalizing abstract actual causes as programs in settings where the actions that led to the observed effect are only incompletely specified. Our framework is based on an expressive first-order logic language for representing and reasoning about dynamic domains. In addition to non-deterministic programs, we allow for incomplete information that is represented through multiple models of a BAT. Furthermore, we investigate how abstraction may be used to facilitate representation and reasoning.</p>
      <p>In this paper, we do not study how one can obtain an abstraction given ConGolog programs. Instead, we study causal reasoning that can be accomplished if we are given a sound and/or a complete abstraction of our causal theory. [14] proposed forgetting (of LL fluent and action symbols) to obtain a sound and complete abstraction of a LL BAT for a given mapping. Also, [
        <xref ref-type="bibr" rid="ref7">7</xref>
        ] identified the necessary and sufficient conditions for a (given) HL BAT to be a sound abstraction of a LL BAT under a mapping. For simplicity, we focused on a single layer of abstraction, but the framework supports extending the hierarchy to several levels. In future work, we plan to investigate methodologies for designing abstract theories and refinement mappings with respect to given observed effects, as well as automated synthesis techniques to support this. Extending the current framework to support probabilistic actions [15] and approximate abstractions, and how such extensions facilitate reasoning about causality, are important avenues for future research.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          , Causality: Models,
          <string-name>
            <surname>Reasoning</surname>
          </string-name>
          , and Inference, Cambridge University Press,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name>
            <given-names>J. Y.</given-names>
            <surname>Halpern</surname>
          </string-name>
          , Actual Causality, MIT Press,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name>
            <given-names>G.</given-names>
            <surname>De Giacomo</surname>
          </string-name>
          ,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lespérance</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H. J.</given-names>
            <surname>Levesque</surname>
          </string-name>
          ,
          <article-title>ConGolog, a concurrent programming language based on the situation calculus</article-title>
          ,
          <source>Artif. Intell</source>
          .
          <volume>121</volume>
          (
          <year>2000</year>
          )
          <fpage>109</fpage>
          -
          <lpage>169</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name>
            <given-names>J.</given-names>
            <surname>McCarthy</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. J.</given-names>
            <surname>Hayes</surname>
          </string-name>
          ,
          <source>Some Philosophical Problems From the Standpoint of Artificial Intelligence, Machine Intelligence</source>
          <volume>4</volume>
          (
          <year>1969</year>
          )
          <fpage>463</fpage>
          -
          <lpage>502</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name>
            <given-names>R.</given-names>
            <surname>Reiter</surname>
          </string-name>
          ,
          <article-title>Knowledge in Action. Logical Foundations for Specifying and Implementing Dynamical Systems</article-title>
          , The MIT Press,
          <year>2001</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name>
            <given-names>V.</given-names>
            <surname>Batusov</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Soutchanski</surname>
          </string-name>
          ,
          <article-title>Situation calculus semantics for actual causality</article-title>
          ,
          <source>in: Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence</source>
          , AAAI Press,
          <year>2018</year>
          , pp.
          <fpage>1744</fpage>
          -
          <lpage>1752</lpage>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name>
            <given-names>B.</given-names>
            <surname>Banihashemi</surname>
          </string-name>
          , G. De Giacomo,
          <string-name>
            <given-names>Y.</given-names>
            <surname>Lespérance</surname>
          </string-name>
          , Abstraction in situation calculus action theories, in: S. P. Singh, S. Markovitch (Eds.), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, AAAI Press, 2017, pp. 1048–1055.
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>[8] G. De Giacomo, Y. Lespérance, C. J. Muise, On supervising agents in situation-determined ConGolog, in: International Conference on Autonomous Agents and Multiagent Systems, AAMAS 2012, IFAAMAS, 2012, pp. 1031–1038.</mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>[9] R. Milner, Communication and Concurrency, PHI Series in Computer Science, Prentice Hall, 1989.</mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>[10] G. De Giacomo, Y. Lespérance, F. Patrizi, S. Sardiña, Verifying ConGolog programs on bounded situation calculus theories, in: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, AAAI Press, 2016, pp. 950–956.</mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>[11] P. Mo, N. Li, Y. Liu, Automatic verification of Golog programs via predicate abstraction, in: G. A. Kaminka, M. Fox, P. Bouquet, E. Hüllermeier, V. Dignum, F. Dignum, F. van Harmelen (Eds.), ECAI 2016 - 22nd European Conference on Artificial Intelligence, 29 August-2 September 2016, The Hague, The Netherlands - Including Prestigious Applications of Artificial Intelligence (PAIS 2016), volume 285 of Frontiers in Artificial Intelligence and Applications, IOS Press, 2016, pp. 760–768.</mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>[12] S. M. Khan, M. Soutchanski, Necessary and sufficient conditions for actual root causes, in: ECAI 2020 - 24th European Conference on Artificial Intelligence, 2020, pp. 800–808.</mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>[13] A. Datta, D. Garg, D. K. Kaynar, D. Sharma, A. Sinha, Program actions as actual causes: A building block for accountability, in: C. Fournet, M. W. Hicks, L. Viganò (Eds.), IEEE 28th Computer Security Foundations Symposium, CSF 2015, Verona, Italy, 13-17 July 2015, IEEE Computer Society, 2015, pp. 261–275. doi:10.1109/CSF.2015.25.</mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>[14] K. Luo, Y. Liu, Y. Lespérance, Z. Lin, Agent abstraction via forgetting in the situation calculus, in: ECAI 2020 - 24th European Conference on Artificial Intelligence, volume 325 of Frontiers in Artificial Intelligence and Applications, IOS Press, 2020, pp. 809–816. doi:10.3233/FAIA200170.</mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>[15] V. Belle, H. J. Levesque, Regression and progression in stochastic domains, Artificial Intelligence 281 (2020) 103247. doi:10.1016/j.artint.2020.103247.</mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>