<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Agent Morality via Counterfactuals in Logic Programming</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
<string-name>Luís Moniz Pereira</string-name>
          <xref ref-type="aff" rid="aff1">1</xref>
        </contrib>
        <contrib contrib-type="author">
          <string-name>Ari Saptawijaya</string-name>
          <email>saptawijaya@cs.ui.ac.id</email>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>Faculty of Computer Science</institution>
          ,
          <addr-line>Universitas Indonesia</addr-line>
          ,
          <country country="ID">Indonesia</country>
        </aff>
        <aff id="aff1">
          <label>1</label>
          <institution>NOVA-LINCS, Lab. for Computer Science and Informatics, Universidade Nova de Lisboa</institution>
          ,
          <country country="PT">Portugal</country>
        </aff>
      </contrib-group>
      <abstract>
        <p>This paper presents a computational model, via Logic Programming (LP), of counterfactual reasoning with applications to agent morality. Counterfactuals are conjectures about what would have happened, had an alternative event occurred. In the first part, we show how counterfactual reasoning, inspired by Pearl's structural causal model of counterfactuals, is modeled using LP, by benefiting from LP abduction and updating. In the second part, counterfactuals are applied to agent morality, resorting to this LP-based approach. We demonstrate its potential for specifying and querying moral issues, by examining viewpoints on moral permissibility via classic moral principles and examples taken from the literature. Finally, we discuss some potential extensions of our LP approach to cover other aspects of counterfactual reasoning and show how these aspects are relevant in modeling agent morality.</p>
      </abstract>
      <kwd-group>
        <kwd>abduction</kwd>
        <kwd>counterfactuals</kwd>
        <kwd>logic programming</kwd>
        <kwd>morality</kwd>
        <kwd>non-monotonic reasoning</kwd>
      </kwd-group>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
<title>Introduction</title>
<p>
        Counterfactuals capture the process of reasoning about a past event that did not occur, namely what would have happened had this event occurred; or, vice versa, about an event that did occur, and what would have happened had it not. An example from [
        <xref ref-type="bibr" rid="ref5">5</xref>
        ]: lightning hits a forest and a devastating forest fire breaks out. The forest was dry after a long hot summer and many acres were destroyed. One may entertain a counterfactual about it, e.g., “if only there had not been lightning, then the forest fire would not have occurred”. Counterfactuals have been widely studied in philosophy [
        <xref ref-type="bibr" rid="ref19 ref6">6, 19</xref>
        ] and psychology [
        <xref ref-type="bibr" rid="ref21 ref31 ref5">5, 21, 31</xref>
        ]. They have also been studied from the computational viewpoint [
        <xref ref-type="bibr" rid="ref11 ref26 ref27 ref39 ref4">4, 11, 26, 27, 39</xref>
        ], where approaches in Logic Programming (LP), e.g., [
        <xref ref-type="bibr" rid="ref27 ref39 ref4">4, 27, 39</xref>
        ], are mainly based on probabilistic reasoning.
      </p>
<p>
        In the first part of this paper, we report on our approach of using LP abduction and updating in a procedure for evaluating counterfactuals, taking the established approach of Pearl [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ] as reference. LP lends itself to Pearl’s causal model of counterfactuals: (1) the inferential arrow in an LP rule is adept at expressing causal direction; and (2) LP is enriched with functionalities, such as abduction and defeasible reasoning with updates, that can be exploited to realize Pearl’s counterfactual evaluation procedure: LP abduction provides background conditions from observations made or evidence given, whereas defeasible logic rules allow adjustments to the current model, at selected points, via hypothetical updates of intervention. Our approach therefore concentrates on pure non-probabilistic counterfactual reasoning in LP – thus distinct from, but complementing, existing probabilistic approaches – by instead resorting to abduction and updating, in order to determine the logical validity of counterfactuals under the Well-Founded Semantics [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ].
      </p>
<p>
        Counterfactual thinking in moral reasoning has been investigated particularly via psychology experiments (see, e.g., [
        <xref ref-type="bibr" rid="ref21 ref9">9, 21</xref>
        ]), but it has been only scantly explored in machine ethics. In the second part of the paper, counterfactual reasoning is applied to machine ethics, an interdisciplinary field that emerges from the need to imbue autonomous agents with the capacity for moral decision making, enabling them to function in an ethically responsible manner via their own ethical decisions. The potential of LP for machine ethics has been reported in [
        <xref ref-type="bibr" rid="ref13 ref18 ref29 ref32">13, 18, 29, 32</xref>
        ], where key aspects of morality are appropriately expressed by LP-based reasoning features, such as abduction, integrity constraints, preferences, updating, and argumentation. The application of counterfactual reasoning to machine ethics – herein by resorting to our LP approach – therefore aims, more generally, at taking counterfactuals into the wider context of the aforementioned well-developed LP-based non-monotonic reasoning methods.
      </p>
      <p>In this paper, counterfactuals are specifically engaged to distinguish whether an effect of an action is a cause for achieving a morally dilemmatic goal or merely a side-effect of that action. The distinction is essential for establishing moral permissibility from the viewpoints of the Doctrines of Double Effect and of Triple Effect, as scrutinized herein through several off-the-shelf classic moral examples from the literature. By materializing these doctrines in concrete moral dilemmas, the results of counterfactual evaluation – supported by our LP approach – are readily comparable to those from the literature. Note that, even though the LP technique introduced in this paper is relevant for modeling counterfactual moral reasoning, its use is general, not specific to morality.</p>
      <p>In the final part of the paper, we discuss some potential extensions of our LP approach to cover other aspects of counterfactual reasoning. These aspects include assertive counterfactuals, extending the antecedent of a counterfactual with an LP rule, and abducing the antecedent of a counterfactual in the form of an intervention. These aspects are relevant in modeling agent morality, which opens the way for further research towards employing LP-based counterfactual reasoning in machine ethics.</p>
    </sec>
    <sec id="sec-2">
      <title>Abduction in Logic Programming</title>
      <p>We start by recapping basic LP notation and reviewing how abduction is expressed and computed in LP.</p>
      <p>A literal is either an atom L or its default negation not L, named positive and negative literals, respectively. They are negation complements of each other. The atoms true and false are true and false, respectively, in every interpretation. A logic program is a set of rules of the form H ← B, naturally read as “H if B”, where its head H is an atom and its (finite) body B is a sequence of literals. When B is empty (equal to true), the rule is called a fact and simply written H. A rule in the form of a denial, i.e., with false as head, is an integrity constraint.</p>
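      <p>To fix intuitions, such rules can be represented directly as head–body pairs. The following Python sketch is ours, not part of the paper's formalism; the forest-fire atoms echo the introduction's example, while the rain constraint is an invented illustration:</p>

```python
# A rule "H <- B" as a (head, body) pair; a fact has an empty body,
# and a rule with head "false" encodes an integrity constraint (denial).
program = [
    ("fire", ["lightning", "dry"]),  # fire <- lightning, dry.
    ("dry", []),                     # dry.  (a fact)
    ("false", ["fire", "rain"]),     # false <- fire, rain.  (denial)
]

facts = [head for head, body in program if not body]
denials = [body for head, body in program if head == "false"]
print(facts, denials)  # ['dry'] [['fire', 'rain']]
```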
      <p>Abduction is a reasoning method whereby one chooses from available hypotheses those that best explain the observed evidence, in some preferred sense. In LP, an abductive hypothesis (abducible) is a 2-valued positive literal Ab or its negation complement Ab* (denoting not Ab), whose truth value is not initially assumed, and which does not appear in the head of any rule. An abductive framework is a triple ⟨P, A, I⟩, where A is the set of abducibles, P is a logic program such that there is no rule in P whose head is in A, and I is a set of integrity constraints.</p>
      <p>
        An observation O is a set of literals, analogous to a query or goal in LP. Abducing an explanation for O amounts to finding consistent abductive solutions S ⊆ A to the goal O, whilst satisfying the integrity constraints, where an abductive solution S entails that O is true in the semantics obtained after replacing in P the abducibles of S by their abduced truth value. Abduction in LP can be accomplished by a top-down query-oriented procedure for finding a query’s abductive solution by need. The solution’s abducibles are leaves in its procedural query-rooted call-graph, i.e., the graph is recursively generated by the procedure calls from literals in bodies of rules to heads of rules, and thence to the literals in a rule’s body. The correctness of this top-down computation requires the underlying semantics to be relevant, as it avoids computing a whole model (to warrant its existence) in finding an answer to a query. Instead, it suffices to use only the rules relevant to the query – those in its procedural call-graph – to find its truth value. The 3-valued Well-Founded Semantics [
        <xref ref-type="bibr" rid="ref38">38</xref>
        ], employed by us, enjoys this relevancy property [
        <xref ref-type="bibr" rid="ref8">8</xref>
        ], i.e., it permits finding only the relevant abducibles and their truth values via the aforementioned top-down query-oriented procedure. Abducibles not mentioned in the solution are indifferent to the query, and remain undefined.
      </p>
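      <p>The abductive machinery just described can be approximated in a few lines of Python. The sketch below is ours and deliberately naive: it enumerates candidate subsets of abducibles instead of the top-down by-need procedure, it ignores negation and integrity constraints, and it reuses the introduction's forest-fire atoms:</p>

```python
from itertools import combinations

def consequences(rules, facts):
    # Least fixpoint of the (positive) rules from the given facts.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(a in derived for a in body):
                derived.add(head)
                changed = True
    return derived

def abduce(rules, abducibles, observation):
    # Minimal sets of abducibles whose addition entails the observation.
    solutions = []
    for k in range(len(abducibles) + 1):
        for cand in combinations(abducibles, k):
            if observation <= consequences(rules, cand) and \
               not any(set(s) <= set(cand) for s in solutions):
                solutions.append(cand)
    return solutions

rules = [("fire", ["lightning", "dry"]), ("lightning", ["storm"]), ("dry", [])]
print(abduce(rules, ["storm", "barbecue"], {"fire"}))  # [('storm',)]
```

      <p>Only storm is abduced: barbecue has no rule linking it to fire in this fragment, so it remains indifferent to the query, in the spirit of the relevancy property above.</p>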
    </sec>
    <sec id="sec-3">
      <title>LP-based Counterfactuals</title>
      <p>
        Our LP approach to evaluating counterfactuals is based on Pearl’s approach [
        <xref ref-type="bibr" rid="ref26">26</xref>
        ]. Therein, counterfactuals are evaluated on the basis of a probabilistic causal model and a calculus of intervention. The main idea is to infer the background circumstances that are conditional on current evidence, and subsequently to make a minimal required intervention in the current causal model, so as to comply with the antecedent of the counterfactual. The modified model then serves as the basis for computing the counterfactual consequent’s probability.
      </p>
      <p>Since each step of our LP approach mirrors the corresponding one in Pearl’s, our approach immediately compares to Pearl’s, benefits from its epistemic adequacy, and its properties rely on those of Pearl’s. We apply the idea of Pearl’s approach to logic programs, but leave out probabilities, employing instead LP abduction and updating to determine the logical validity of counterfactuals under the Well-Founded Semantics.</p>
      <sec id="sec-3-1">
        <title>Causation and Intervention in LP</title>
        <p>Two important ingredients in Pearl’s approach to counterfactuals are the causal model and intervention. With respect to an abductive framework ⟨P, A, I⟩, an observation O corresponds to Pearl’s definition of evidence. That is, O has rules concluding it in program P, and hence does not belong to the set of abducibles A. Recall that in Pearl’s approach, a model consists of a set of background variables, whose values are conditional on the observed evidence of the case considered. These background variables are not causally explained in the model, as they have no parent nodes in the causal diagram of the model. In terms of LP abduction, they correspond to a set of abducibles E ⊆ A that provide abductive explanations for observation O. Indeed, these abducibles have no preceding causal explanatory mechanism, as they have no rules concluding them in the program.</p>
        <p>
          Besides abduction, our approach also benefits from LP updating, which is supported by well-established theory and properties, cf. [
          <xref ref-type="bibr" rid="ref1 ref2">1, 2</xref>
          ]. It allows a program to be updated by asserting or retracting rules, thus changing the state of the program. LP updating is therefore appropriate for representing change and dealing with incomplete information. The specific role of LP updating in our approach is twofold: (1) updating the program with the preferred explanation of the current observation, thus fixing in the program the initial abduced background context of the counterfactual being evaluated; and (2) facilitating an apposite adjustment of the causal model by hypothetical updates of causal intervention on the program, affecting defeasible rules. Both roles are sufficiently accomplished by fluent (i.e., state-dependent literal) updates, rather than rule updates. In the first role, explanations are treated as fluents. In the second, reserved predicates are introduced as fluents for the purpose of intervention upon defeasible rules. For the latter role, fluent updates are particularly more appropriate than rule updates (e.g., intervention by retracting rules), because the intervention is hypothetical only. Removing rules from the program would be overkill, as the rules might be needed for elaborating justifications and for introspective debugging.
        </p>
      </sec>
      <sec id="sec-3-2">
        <title>Evaluating Counterfactuals in LP</title>
        <p>The procedure to evaluate counterfactuals in LP essentially takes the three-step process of Pearl’s approach as its reference. The key idea of evaluating counterfactuals with respect to an abductive framework, at some current state (discrete time) T, is as follows.</p>
        <p>
          In step 1, abduction is performed to explain the factual observation.³ The observation corresponds to the evidence that both the antecedent and the consequent literals of the present counterfactual were, at the considered past moment, factually false.⁴ For otherwise the counterfactual would be trivially true when making the antecedent false, or irrelevant for the aim of making the consequent true. There can be multiple explanations available for an observation; choosing a suitable one among them is a pragmatic issue, which can be dealt with via integrity constraints or preferences [
          <xref ref-type="bibr" rid="ref28 ref7">7, 28</xref>
          ]. The chosen explanation fixes the abduced context in which the counterfactual is evaluated, by updating the program with the explanation.
³ We assume that people use counterfactuals to convey truly relevant information, rather than to fabricate arbitrary subjunctive conditionals (e.g., “If I had been watching, then I would have seen the cheese on the moon melt during the eclipse”). Otherwise, implicit observations must simply be made explicit observations, to avoid natural language conundrums or ambiguities [
          <xref ref-type="bibr" rid="ref12">12</xref>
          ].
⁴ This interpretation is in line with the corresponding English construct, cf. [
          <xref ref-type="bibr" rid="ref15">15</xref>
          ], commonly known as the third conditional.
        </p>
        <p>In step 2, defeasible rules are introduced for the atoms forming the antecedent of the counterfactual. Given the past event E that renders its corresponding antecedent literal false, held at factual state TE &lt; T, its causal intervention is realized by a hypothetical update H at state TH = TE + εH, such that TE &lt; TH &lt; TE + 1 ≤ T. That is, a hypothetical update strictly takes place between two factual states, thus 0 &lt; εH &lt; 1. In the presence of defeasible rules, this update permits hypothetical modification of the program so as to consistently comply with the antecedent of the counterfactual.</p>
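        <p>The defeasible-rule construction of step 2 can be pictured with a small transformation function. This is our own simplification, abstracting from the state-transition detail: make(A) and make_not(A) are the reserved intervention fluents, encoded here as plain strings:</p>

```python
def make_defeasible(rules, antecedent_atoms):
    """Add 'not make_not(A)' to the body of each rule for an antecedent
    atom A, and add 'A <- make(A)' so an intervention can also switch A on."""
    out = []
    for head, body in rules:
        if head in antecedent_atoms:
            out.append((head, body + ["not make_not(%s)" % head]))
        else:
            out.append((head, body))
    for atom in antecedent_atoms:
        out.append((atom, ["make(%s)" % atom]))
    return out

rules = [("l", ["s"]), ("g", ["s"])]
print(make_defeasible(rules, {"l"}))
# [('l', ['s', 'not make_not(l)']), ('g', ['s']), ('l', ['make(l)'])]
```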
        <p>In step 3, the Well-Founded Model (WFM) of the hypothetically modified program is examined to verify whether the consequent of the counterfactual holds true at state T. One can easily reinstate the current factual situation by canceling the hypothetical update, e.g., via a restorative new update with H’s complement at state TF = TH + εF, such that TH &lt; TF &lt; TE + 1.</p>
        <p>Based on the aforementioned ideas, our approach is defined below, abstracting from the above state-transition detail. In the sequel, the Well-Founded Model of program P is denoted by WFM(P). As our counterfactual procedure is based on the Well-Founded Semantics, the standard logical consequence relation P ⊨ F used below presupposes the Well-Founded Model of P in verifying the truth of formula F, i.e., whether F is true in WFM(P).</p>
        <p>Procedure 1. Let ⟨P, A, I⟩ be an abductive framework, where program P encodes the modeled situation on which counterfactuals are evaluated. Consider a counterfactual “if Pre had been true, then Conc would have been true”, where Pre and Conc are finite conjunctions of literals.</p>
        <p>1. Abduction: Compute an explanation E ⊆ A to the observation O = O_Pre ∪ O_Conc ∪ O_Oth, where O_Pre and O_Conc comprise the negation complements of the literals in Pre and Conc, and O_Oth is a (possibly empty) set of other observed literals. Update P with the chosen E, obtaining P ∪ E.
2. Action: Transform P ∪ E by introducing defeasible rules for the atoms in Pre, and impose the required intervention via hypothetical updates, obtaining (P ∪ E)τ,ι.
3. Prediction: Verify whether (P ∪ E)τ,ι ⊨ Conc, with I satisfied in WFM((P ∪ E)τ,ι).</p>
        <p>This three-step procedure defines valid counterfactuals. Let ⟨P, A, I⟩ be an abductive framework, where program P encodes the modeled situation on which counterfactuals are evaluated. The counterfactual</p>
        <p>“If Pre had been true, then Conc would have been true”
is valid given observation O = O_Pre ∪ O_Conc ∪ O_Oth iff O is explained by E ⊆ A, (P ∪ E)τ,ι ⊨ Conc, and I is satisfied in WFM((P ∪ E)τ,ι).</p>
        <p>Since the Well-Founded Semantics supports top-down query-oriented procedures
for finding solutions, checking validity of counterfactuals, i.e., whether their conclusion
Conc follows (step 3), given the intervened program transform (step 2) with respect to
the abduced background context (step 1), in fact amounts to checking in a derivation
tree whether query Conc holds true while also satisfying I.</p>
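        <p>Under simplifying assumptions, the whole check can be run end to end in Python. The sketch below is ours: it presupposes that step 1's explanation has already been chosen, it handles the pattern “if not A had been true, then not C would have been true” used in the examples, and it checks negation only against the extensional facts (abducibles and intervention fluents), which suffices for the stratified programs considered here:</p>

```python
def consequences(rules, facts):
    # Least fixpoint; "not a" is tested against the extensional facts only.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (lit[4:] not in facts) if lit.startswith("not ")
                    else (lit in derived) for lit in body):
                derived.add(head)
                changed = True
    return derived

def valid_counterfactual(rules, explanation, antecedent_atoms, consequent_atom):
    # Step 2: make the rules for the antecedent atoms defeasible, and
    # impose the intervention make_not(A) for each antecedent atom A.
    tau = [(h, b + ["not make_not_" + h]) if h in antecedent_atoms else (h, b)
           for h, b in rules]
    facts = set(explanation) | {"make_not_" + a for a in antecedent_atoms}
    # Step 3: the consequent "not C" holds iff C is not derived.
    return consequent_atom not in consequences(tau, facts)

# The forest-fire program of Example 1 below (b_star stands for b*):
forest = [("f", ["b", "d"]), ("f", ["b_star", "l", "d", "g"]),
          ("l", ["s"]), ("g", ["s"]), ("d", [])]
print(valid_counterfactual(forest, {"s", "b_star"}, {"l"}, "f"))  # True
print(valid_counterfactual(forest, {"s", "b"}, {"l"}, "f"))       # False
```

        <p>The two calls anticipate the two explanatory scenarios discussed next: under explanation {s, b*} the counterfactual is valid, while under {s, b} the fire is rederived through the barbecue and the counterfactual fails.</p>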
        <p>Example 1. Recall the example in the introduction. Let us slightly complicate it by having two alternative abductive causes for the forest fire, viz., storm (which implies lightning hitting the ground) or barbecue. The storm is accompanied by strong wind that causes the dry leaves to fall onto the ground. Note that dry leaves are important for forest fire in both cases. This example is expressed by the abductive framework ⟨P, A, I⟩, using the abbreviations b, d, f, g, l, s for barbecue, dry leaves, forest fire, leaves on the ground, lightning, and storm, resp., where A = {s, b, s*, b*}, I = ∅, and P as follows:
f ← b, d.    f ← b*, l, d, g.    l ← s.    g ← s.    d.
The use of b* in the second rule for f is intended so as to have mutually exclusive explanations.</p>
        <p>Consider the counterfactual “if only there had not been lightning, then the forest fire would not have occurred”, where Pre = not l and Conc = not f.
1. Abduction: Besides O_Pre = {l} and O_Conc = {f}, say that g is observed too: O_Oth = {g}. Given O = O_Pre ∪ O_Conc ∪ O_Oth, there are two possible explanations: E1 = {s, b*} and E2 = {s, b}. Consider a scenario where the minimal explanation E1 (in the sense of minimal positive literals) is preferred for updating P, to obtain P ∪ E1. This updated program reflects the evaluation context of the counterfactual, where all literals of Pre and Conc were false in the initial factual situation.
2. Action: The transformation results in program (P ∪ E1)τ:
f ← b, d.    f ← b*, l, d, g.    g ← s.    d.</p>
        <p>l make(l) l s; not make not(l)
Program (P [ E1) is updated with make not(l) as the required intervention, viz.,
“if there had not been lightning”.
3. Prediction: We verify that (P [E1) ; j= not f . That is, not f holds with respect to
the intervened modified program for explanation E1 = fs; b g and the intervention
make not(l). Note, I = ; is trivially satisfied in W F M ((P [ E1) ; ):
We thus conclude that, for this E1 scenario, the given counterfactual is valid.
Example 2. In the other explanatory scenario of Example 1, where E2 (instead of E1)
is preferred to update P , the counterfactual is no longer valid. In this case, (P [ E1) =
(P [ E2) , and the required causal intervention is also the same: make not(l). But we
now have (P [ E2) ; 6j= not f . Indeed, the forest fire would still have occurred but due
to an alternative cause: barbecue.</p>
      </sec>
    </sec>
    <sec id="sec-4">
      <title>Counterfactuals in Morality</title>
      <p>
        People typically reason about what they should or should not have done when they examine decisions in moral situations. It is therefore natural for them to engage in counterfactual thoughts in such settings. Counterfactual thinking has been investigated in the context of moral reasoning, notably in experimental psychology studies, e.g., to understand the kind of counterfactual alternatives people tend to imagine when contemplating moral behaviors [
        <xref ref-type="bibr" rid="ref21">21</xref>
        ] and the influence of counterfactual thoughts on moral judgment [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. As argued in [
        <xref ref-type="bibr" rid="ref9">9</xref>
        ], the function of counterfactual thinking is not limited to the evaluative process, but occurs also in the reflective one. Through evaluation, counterfactuals help correct wrong past behavior, thus guiding future moral decisions. Reflection, on the other hand, permits momentary experiential simulation of possible alternatives, thereby allowing careful consideration before a moral decision is made, and subsequently justifying it.
      </p>
      <p>
        Morality and normality judgments typically correlate. Normality mediates morality with causation and blame judgments. Controllability in counterfactuals mediates between normality, blame, and cause judgments. The importance of control, namely the possibility of counterfactual intervention, is highlighted in theories of blame that hold someone responsible only if they had some control over the outcome
[
        <xref ref-type="bibr" rid="ref40">40</xref>
        ].
      </p>
      <p>
        The potential of LP for machine ethics has been reported in [
        <xref ref-type="bibr" rid="ref13 ref18 ref29">13, 18, 29</xref>
        ] and with
emphasis on LP abduction and updating in [
        <xref ref-type="bibr" rid="ref32">32</xref>
          ]. Here we investigate how moral issues can innovatively be expressed with counterfactual reasoning, by resorting to an LP approach.
We particularly look into its application for examining viewpoints on moral
permissibility, exemplified by classic moral dilemmas from the literature on the Doctrines of
Double Effect (DDE) [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ] and of Triple Effect (DTE) [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
      </p>
      <p>
        DDE was first introduced by Thomas Aquinas in his discussion of the permissibility of self-defense [
        <xref ref-type="bibr" rid="ref3">3</xref>
        ]. The current version of this principle emphasizes the permissibility of an action that causes harm, by distinguishing whether this harm is a mere side-effect of bringing about a good result, or rather an intended means to bringing about the same good end [
        <xref ref-type="bibr" rid="ref23">23</xref>
        ]. According to the Doctrine of Double Effect, the former action
is permissible, whereas the latter is impermissible. In [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], DDE has been utilized to
explain the consistency of judgments, shared by subjects from demographically diverse
populations, on a number of variants of the classic trolley problem [
        <xref ref-type="bibr" rid="ref10">10</xref>
        ]: A trolley is
headed toward five people walking on the track, who are unable to get off the track in
time. The trolley can nevertheless be diverted onto a side track, thereby preventing it
from killing the five people. However, there is a man standing on the side track. The
dilemma is therefore whether it is morally permissible to divert the trolley, killing the
man but saving the five. DDE permits diverting the trolley since that action does not
intend to harm the man on the side track in order to save the five.
      </p>
      <p>Counterfactuals may provide a general way to examine DDE in moral dilemmas,
by distinguishing between a cause and a side-effect as a result of performing an action
to achieve a goal. This distinction between causes and side-effects may explain the
permissibility of an action in accordance with DDE. That is, if some morally wrong
effect E happens to be a cause for a goal G that one wants to achieve by performing
an action A, and not a mere side-effect of A, then performing A is impermissible. This
is expressed by the counterfactual form below, in a setting where action A is performed
to achieve goal G:</p>
      <p>If not E had been true, then not G would have been true.</p>
      <p>The evaluation of this counterfactual form determines the permissibility of action A from its effect E, by identifying whether the latter is a necessary cause of goal G or a mere side-effect of action A. That is, if the counterfactual proves valid, then E is instrumental as a cause of G, and not a mere side-effect of action A. Since E is morally wrong, achieving G that way, by means of A, is impermissible; otherwise, it is permissible. Note that the evaluation of counterfactuals in this application is considered from the perspective of the agents who perform the action, rather than from that of others (e.g., observers).</p>
      <p>
        There have been a number of studies, both in philosophy and psychology, on the
relation between causation and counterfactuals. The counterfactual process view of causal
reasoning [
        <xref ref-type="bibr" rid="ref22">22</xref>
        ], for example, advocates counterfactual thinking as an essential part of
the process involved in making causal judgments. This relation between causation and
counterfactuals can be important for providing explanations in cases involving harm,
which underlie people’s moral cognition [
        <xref ref-type="bibr" rid="ref36">36</xref>
        ] and trigger other related questions, such
as “Who is responsible?”, “Who is to blame?”, “Which punishment would be fair?”,
etc. Herein, we explore the connection between causation and counterfactuals, focusing
on agents’ deliberate action, rather than on causation and counterfactuals in general.
More specifically, our exploration of this topic links it to the Doctrines of Double
Effect and Triple Effect and dilemmas involving harm, such as the trolley problem cases.
Such cases have also been considered in psychology experimental studies concerning
the role of gender and perspectives (first vs. third person perspectives) in counterfactual
thinking in moral reasoning, see [
        <xref ref-type="bibr" rid="ref24">24</xref>
        ]. The reader is referred to [
        <xref ref-type="bibr" rid="ref6">6</xref>
        ] and [
        <xref ref-type="bibr" rid="ref16">16</xref>
        ] for a more
general and broad discussion on causation and counterfactuals.
      </p>
      <p>
        We exemplify an application of this counterfactual form in two off-the-shelf
military cases from [
        <xref ref-type="bibr" rid="ref35">35</xref>
        ] – abbreviations in parentheses: terror bombing (teb) vs. tactical
bombing (tab). The former refers to bombing a civilian target (civ) during a war, thus
killing civilians (kic), in order to terrorize the enemy (ror), and thereby get them to
end the war (ew). The latter refers to bombing a military target (mil), which will effectively end the war (ew), but with the foreseen consequence of killing the same number of civilians (kic) nearby. According to DDE, terror bombing fails permissibility due to its deliberate element of killing civilians to achieve the goal of ending the war, whereas tactical bombing is accepted as permissible.
      </p>
      <p>Example 3. We first model terror bombing with ew as the goal, by considering the abductive framework ⟨Pe, Ae, Ie⟩, where Ae = {teb, teb*}, Ie = ∅ and Pe:
ew ← ror.    ror ← kic.    kic ← civ.    civ ← teb.
We consider the counterfactual “if civilians had not been killed, then the war would not have ended”, where Pre = not kic and Conc = not ew. The observation O = {kic, ew}, with O_Oth being empty, has a single explanation Ee = {teb}. The rule kic ← civ transforms into kic ← civ, not make_not(kic). Given the intervention make_not(kic), the counterfactual is valid, because (Pe ∪ Ee)τ,ι ⊨ not ew. That means the morally wrong kic is instrumental in achieving the goal ew: it is a cause of ew by performing teb, and not a mere side-effect of teb. Hence teb is DDE morally impermissible.
Example 4. Tactical bombing with the same goal ew can be modeled by the abductive framework ⟨Pa, Aa, Ia⟩, where Aa = {tab, tab*}, Ia = ∅ and Pa:</p>
      <p>ew ← mil.    mil ← tab.    kic ← tab.
Given the same counterfactual, we now have Ea = {tab} as the only explanation for the same observation O = {kic, ew}. Note that the transform contains the rule kic ← tab, not make_not(kic), obtained from kic ← tab. By imposing the intervention make_not(kic), one can verify that the counterfactual is not valid, because (Pa ∪ Ea)τ,ι ⊭ not ew. Therefore, the morally wrong kic is just a side-effect in achieving the goal ew. Hence tab is DDE morally permissible.</p>
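      <p>Examples 3 and 4 can be replayed mechanically with a small least-fixpoint sketch. The encoding below is our own, with the intervention make_not(kic) already imposed on the transformed rules and negation checked only against the explanation and the intervention fluent:</p>

```python
def consequences(rules, facts):
    # Least fixpoint; "not a" is tested against the extensional facts only.
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(
                    (lit[4:] not in facts) if lit.startswith("not ")
                    else (lit in derived) for lit in body):
                derived.add(head)
                changed = True
    return derived

# Transform of Pe (terror bombing) with kic's rule made defeasible.
terror = [("ew", ["ror"]), ("ror", ["kic"]),
          ("kic", ["civ", "not make_not_kic"]), ("civ", ["teb"])]
# Transform of Pa (tactical bombing), likewise.
tactical = [("ew", ["mil"]), ("mil", ["tab"]),
            ("kic", ["tab", "not make_not_kic"])]

# "If not kic had been true, then not ew would have been true"
# is valid iff ew is not derivable under the intervention.
teb_valid = "ew" not in consequences(terror, {"teb", "make_not_kic"})
tab_valid = "ew" not in consequences(tactical, {"tab", "make_not_kic"})
print(teb_valid, tab_valid)  # True False
```

      <p>The valid counterfactual marks terror bombing as DDE impermissible; the invalid one marks tactical bombing as permissible, since there kic is a mere side-effect.</p>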
      <p>Example 5. Consider two countries, a and its ally b, that concert a terror bombing, modeled by the abductive framework ⟨Pab, Aab, Iab⟩, where Aab = {teb, teb*}, Iab = ∅ and Pab below. The abbreviations kic(X) and civ(X) refer to ‘killing civilians by country X’ and ‘bombing a civilian target by country X’. As usual in LP, the underscore (_) represents an anonymous variable.</p>
      <p>ew ← ror.    ror ← kic(_).    kic(X) ← civ(X).    civ(_) ← teb.
Being represented as a single program (rather than a separate knowledge base for each agent), this scenario should appropriately be viewed as a joint action performed by a single agent. Therefore, the counterfactual of interest is “if civilians had not been killed by a and b, then the war would not have ended”. That is, the antecedent of the counterfactual is a conjunction: Pre = not kic(a) ∧ not kic(b). Given Eab = {teb}, one can easily verify that (Pab ∪ Eab)τ,ι ⊨ not ew, and the counterfactual is valid: the concerted teb is DDE impermissible.</p>
      <p>
        This application of counterfactuals can be challenged by a more complex scenario,
to distinguish moral permissibility according to DDE vs. DTE. DTE [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] refines DDE
particularly on the notion about harming someone as an intended means. That is, DTE
distinguishes further between doing an action in order that an effect occurs and doing
it because that effect will occur. The latter is a new category of action, which is not
accounted for in DDE. Though DTE also classifies the former as impermissible, it is
more tolerant to the latter (the third effect), i.e., it treats as permissible those actions
performed just because instrumental harm will occur.
      </p>
      <p>
        Kamm [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ] proposed DTE to accommodate a variant of the trolley problem, viz.,
the Loop Case [
        <xref ref-type="bibr" rid="ref37">37</xref>
        ]: A trolley is headed toward five people walking on the track, and
they will not be able to get off the track in time. The trolley can be redirected onto a side
track, which loops back towards the five. A fat man sits on this looping side track; his
body will by itself stop the trolley. Is it morally permissible to divert the trolley to the
looping side track, thereby hitting the man and killing him, but saving the five? Most
moral philosophers take the view that diverting the trolley in this case is permissible [
        <xref ref-type="bibr" rid="ref25">25</xref>
        ]. In
a psychology study [
        <xref ref-type="bibr" rid="ref14">14</xref>
        ], 56% of respondents likewise judged diverting the trolley in
this case permissible. DTE may provide the justification: diverting is permissible
because the trolley will hit the man, and not in order to intentionally hit him [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ].
Nonetheless, DDE views diverting the trolley in the Loop case as impermissible.
      </p>
      <p>We use counterfactuals to capture the distinct views of DDE and DTE in the Loop
case.</p>
      <p>Example 6. We model the Loop case with the abductive framework ⟨Po, Ao, Io⟩, where
sav, div, hit, tst, mst stand for save the five, divert the trolley, man hit by the trolley,
trolley on the side track, and man on the side track, resp., with sav as the goal, Ao =
{div, div*}, Io = ∅, and Po:</p>
      <p>sav ← hit.    hit ← tst, mst.    tst ← div.    mst.</p>
      <p>DDE views diverting the trolley as impermissible, because this action redirects the
trolley onto the side track, thereby hitting the man; as a consequence, it prevents the
trolley from hitting the five. To establish the impermissibility of this action, it is
required to show the validity of the counterfactual “if the man had not been hit by the
trolley, the five people would not have been saved”. Given observation O = OPre ∪
OConc = {hit, sav}, its only explanation is Eo = {div}. Note that rule hit ← tst, mst
transforms into hit ← tst, mst, not make_not(hit), and the required intervention is
make_not(hit). The counterfactual is therefore valid, because (Po ∪ Eo) ⊨ not sav.
This means hit, as a consequence of action div, is instrumental as a cause of goal sav.
Therefore, div is DDE morally impermissible.</p>
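The DDE check just described can be mirrored with the same kind of propositional sketch (again a hand-grounded illustration, not the authors' implementation): abduce div, guard hit as in the transform, impose make_not(hit), and test whether sav still holds:

```python
# Sketch of the Loop-case DDE check: under intervention make_not(hit),
# sav must become false for the counterfactual to be valid.

def least_model(rules, facts, intervention):
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model and all(p in model for p in pos) \
                    and all(n not in intervention for n in neg):
                model.add(head)
                changed = True
    return model

P_o = [
    ("sav", ["hit"], []),
    ("hit", ["tst", "mst"], ["make_not(hit)"]),  # guard added by the transform
    ("tst", ["div"], []),
]
E_o = {"div"}          # only explanation of O = {hit, sav}
facts = {"mst"} | E_o  # mst is a fact of the program

print("sav" in least_model(P_o, facts, {"make_not(hit)"}))  # False: valid
print("sav" in least_model(P_o, facts, set()))              # True without it
```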
      <p>DTE considers diverting the trolley as permissible, since the man is already on the
side track, without any deliberate action having been performed in order to place him
there. In Po, we have the fact mst ready, without abducing any ancillary action. The validity of the
counterfactual “if the man had not been on the side track, then he would not have been
hit by the trolley”, which can easily be verified, ensures that the unfortunate event of
the man being hit by the trolley is indeed the consequence of the man being on the side
track. The lack of deliberate action (exemplified here by pushing the man, psh for
short) in order to place him on the side track, and whether the absence of this action
still causes the unfortunate event (the third effect), is captured by the counterfactual “if
the man had not been pushed, then he would not have been hit by the trolley”. This
counterfactual is not valid, because the observation O = OPre ∪ OConc = {psh, hit}
has no explanation E ⊆ Ao, since psh ∉ Ao, and no fact psh exists either. This means
that even without this hypothetical but unexplained deliberate action of pushing, the
man would still have been hit by the trolley (just because he is already on the side track).
Though hit is a consequence of div and instrumental in achieving sav, no deliberate
action is required to cause mst in order for hit to occur. Hence div is DTE morally
permissible.</p>
      <p>Next, we consider a more involved trolley example.</p>
      <p>
        Example 7. Consider a variant of the Loop case, viz., the Loop-Push Case (see also
Extra Push Case in [
        <xref ref-type="bibr" rid="ref17">17</xref>
        ]). Differently from the Loop case, the looping side track is now
initially empty, and besides the diverting action, an ancillary action of pushing a fat man
in order to place him on the side track is additionally performed. This case is modeled
by the abductive framework ⟨Pp, Ap, Ip⟩, where Ap = {div, psh, div*, psh*}, Ip = ∅,
and Pp:
      </p>
      <p>sav ← hit.    hit ← tst, mst.    tst ← div.    mst ← psh.
Recall the counterfactuals considered in the discussion of DDE and DTE of the Loop
case:
– “If the man had not been hit by the trolley, the five people would not have been
saved.” The same observation O = {hit, sav} now provides an extended explanation
Ep1 = {div, psh}. That is, the pushing action needs to be abduced for having
the man on the side track, so the trolley can be stopped by hitting him. The same
intervention make_not(hit) is applied to the same transform, resulting in a valid
counterfactual: (Pp ∪ Ep1) ⊨ not sav.
– “If the man had not been pushed, then he would not have been hit by the trolley.”
The relevant observation is O = {psh, hit}, explained by Ep2 = {div, psh}.
Whereas this counterfactual is not valid in DTE of the Loop case, it is valid in
the Loop-Push case. Given rule psh ← not make_not(psh) in the transform and
intervention make_not(psh), we verify that (Pp ∪ Ep2) ⊨ not hit.
From the validity of these two counterfactuals it can be inferred that, given the diverting
action, the ancillary action of pushing the man onto the side track causes him to be hit
by the trolley, which in turn causes the five to be saved. In the Loop-Push case, DTE agrees
with DDE that such a deliberate action (pushing), performed in order to bring about
harm (the man hit by the trolley), even for the purpose of a good or greater end (saving
the five), is likewise impermissible.</p>
    </sec>
    <sec id="sec-5">
      <title>Extending LP-based Counterfactuals</title>
      <p>Our approach, in Section 3, specifically focuses on evaluating counterfactuals in
order to determine their validity. We identify some potential extensions of this LP-based
approach to other aspects of counterfactual reasoning:
1. We consider so-called assertive counterfactuals, where a counterfactual is given
as a valid statement, rather than as a statement whose validity has to be
determined. The causality expressed by such a valid counterfactual may be useful for
refining an existing knowledge base. For instance, suppose we have a rule stating
that the lamp is on if the switch is on, written as lamp_on ← switch_on. Clearly,
given the fact switch_on, we have lamp_on true. Now consider that the
following counterfactual is given as a valid statement:</p>
      <p>“If the bulb had not functioned properly, then the lamp would not be on.”
There are two ways this counterfactual may refine the rule about lamp_on.
First, the causality expressed by the counterfactual can be used to transform the
rule into:
lamp_on ← switch_on, bulb_ok.    bulb_ok ← not make_not(bulb_ok).
So, the lamp will be on if the switch is on – that is still granted – but subject to an
update make_not(bulb_ok), which captures the condition of the bulb. In the other
alternative, an assertive counterfactual is instead directly translated into an updating
rule, without transforming existing rules.
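To illustrate the first alternative concretely (a propositional sketch under the same least-model reading as the examples above, not part of the paper's formal apparatus), the refined rule pair keeps the lamp on by default, while an update make_not(bulb_ok) switches it off:

```python
# Behavior of the refined lamp rules under an (optional) bulb update.

def least_model(rules, facts, intervention):
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model and all(p in model for p in pos) \
                    and all(n not in intervention for n in neg):
                model.add(head)
                changed = True
    return model

refined = [
    ("lamp_on", ["switch_on", "bulb_ok"], []),
    ("bulb_ok", [], ["make_not(bulb_ok)"]),  # holds unless updated away
]

print("lamp_on" in least_model(refined, {"switch_on"}, set()))                   # True
print("lamp_on" in least_model(refined, {"switch_on"}, {"make_not(bulb_ok)"}))  # False
```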
2. We may extend the antecedent of a counterfactual with a rule, instead of just literals.</p>
      <p>For example, consider the following program (assuming an empty set of abducibles, so as
to focus on the issue):</p>
      <p>warm_blood(M) ← mammal(M).    mammal(M) ← dog(M).    mammal(M) ← bat(M).    dog(d).    bat(b).</p>
      <p>Querying ?- bat(B), warm_blood(B) assures us that there is a warm-blooded bat,
viz., B = b.</p>
      <p>Now consider the counterfactual:</p>
      <p>“If bats were not mammals they would not have warm blood”.</p>
      <p>Transforming the above program using our procedure obtains:</p>
      <p>warm_blood(M) ← mammal(M).    mammal(M) ← make(mammal(M)).    mammal(M) ← dog(M), not make_not(mammal(M)).    mammal(M) ← bat(M), not make_not(mammal(M)).    dog(d).    bat(b).</p>
      <p>The antecedent of the given counterfactual can be expressed as the rule:
make_not(mammal(B)) ← bat(B).
We can check using our procedure that, given this rule intervention, the above
counterfactual is valid: not warm_blood(b) is true in the intervened, modified program.
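This rule-form intervention can be simulated by evaluating the intervention rule in a first stratum and feeding the resulting make_not atoms into the transformed program (a hand-grounded sketch for the single constant b; the make(mammal(M)) rule is omitted since no make atom is asserted here):

```python
# Two-stratum sketch of the rule intervention make_not(mammal(B)) <- bat(B).

def least_model(rules, facts, blocked):
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model and all(p in model for p in pos) \
                    and all(n not in blocked for n in neg):
                model.add(head)
                changed = True
    return model

# Stratum 1: the intervention rule itself, grounded for b.
blocked = least_model([("make_not(mammal(b))", ["bat(b)"], [])], {"bat(b)"}, set())

# Stratum 2: the transformed program, with guards checked against stratum 1.
P_t = [
    ("warm_blood(b)", ["mammal(b)"], []),
    ("mammal(b)", ["dog(b)"], ["make_not(mammal(b))"]),
    ("mammal(b)", ["bat(b)"], ["make_not(mammal(b))"]),
]
m = least_model(P_t, {"bat(b)"}, blocked)
print("warm_blood(b)" in m)  # False: the counterfactual is valid
```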
3. Finally, we can easily imagine the situation where the antecedent Pre of a
counterfactual is not given, though the conclusion Conc is, and we want to abduce Pre
in the form of interventions. That is, the task is to abduce make and make_not,
rather than imposing them, while respecting the integrity constraints, such that the
counterfactual is valid.</p>
      <p>
        Tabling abductive solutions [
        <xref ref-type="bibr" rid="ref33">33</xref>
        ] may be relevant to this problem. Suppose that
we have already abduced an intervention Pre1 for a given Conc1, and we now want
to find Pre2 such that the counterfactual “If Pre1 and Pre2 had been the case,
then Conc1 and Conc2 would have been the case” is valid. In particular, when
abduction is performed for the more complex conclusion Conc1 and Conc2, the
solution Pre1, which has already been abduced and tabled, can be reused in the
abduction for this more complex conclusion, suggesting that problems of
this kind of counterfactual reasoning can be solved in parts, i.e., modularly.
      </p>
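A naive way to realize this abduction of interventions (only a sketch; the paper points instead to tabled abduction [33]) is to search over subsets of candidate make_not atoms, keeping the minimal sets that validate the counterfactual:

```python
from itertools import combinations

def least_model(rules, facts, intervention):
    model = set(facts)
    changed = True
    while changed:
        changed = False
        for head, pos, neg in rules:
            if head not in model and all(p in model for p in pos) \
                    and all(n not in intervention for n in neg):
                model.add(head)
                changed = True
    return model

def abduce_interventions(rules, facts, candidates, conc):
    """Minimal sets of make_not atoms under which conc becomes false."""
    found = []
    for r in range(len(candidates) + 1):
        for subset in combinations(sorted(candidates), r):
            if any(set(f) <= set(subset) for f in found):
                continue  # skip non-minimal supersets
            if conc not in least_model(rules, facts, set(subset)):
                found.append(subset)
    return found

# Loop-case program with every rule guarded, so each atom is intervenable.
P_o = [
    ("sav", ["hit"], []),
    ("hit", ["tst", "mst"], ["make_not(hit)"]),
    ("tst", ["div"], ["make_not(tst)"]),
    ("mst", [], ["make_not(mst)"]),
]
cands = {"make_not(hit)", "make_not(mst)", "make_not(tst)"}
sols = abduce_interventions(P_o, {"div"}, cands, "sav")
print(sols)  # each single intervention already suffices to prevent sav
```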
      <p>Indeed, the above three aspects may have relevance in modeling agent morality:
1. In assertive counterfactuals, the causality expressed by a given valid counterfactual
can be useful for refining moral rules, which can be achieved through incremental
rule updating. This may further the application of moral updating and evolution.
2. The extension of a counterfactual with a rule antecedent opens up another
possibility to express exceptions in moral rules. For instance, one can express an exception
about lying, such as “If lying had been done to save an innocent from a murderer,
then it would not have been wrong”. That is, given a knowledge base about lying
for human H:</p>
      <p>lying_wrong(H) ← lying(H), not make_not(lying_wrong(H)).</p>
      <p>The antecedent of the above counterfactual can be represented as the rule:</p>
      <p>make_not(lying_wrong(H)) ← save_from_murderer(H, I), innocent(I).</p>
      <p>3. Given that the conclusion of a counterfactual is some moral wrong W, abducing its
antecedent in the form of interventions can be used for expressing a prevention of
W, viz., “What could I have done to prevent a wrong W?”.</p>
    </sec>
    <sec id="sec-6">
      <title>Concluding Remarks</title>
      <p>This paper presents a formulation of counterfactual evaluation by means of LP
abduction and updating. The approach corresponds to the three-step process in Pearl’s
structural theory, but omits probability so as to concentrate on a naturalized logic. We
also addressed how to examine (non-probabilistic) moral reasoning about permissibility,
employing this LP approach to distinguish between causes and side effects that result
from agents’ actions to achieve a goal.</p>
      <p>
        The three potential extensions of our LP approach to cover other aspects of
counterfactual reasoning, as well as their applications to machine ethics, are worth exploring
in future work. Apart from these identified extensions, our present LP-based approach to
evaluating counterfactuals may also be suitable for addressing moral justification, via
compound counterfactuals: “Had I known what I know today, then if I were to have
done otherwise, something preferred would have followed”. Such counterfactuals,
typically imagining alternatives with worse effect – the so-called downward
counterfactuals [
        <xref ref-type="bibr" rid="ref20">20</xref>
        ], may provide moral justification for what was done due to lack, at the time, of
the current knowledge. This is accomplished by evaluating what would have followed if
the intent had been otherwise, other things (including present knowledge) being equal.
It may justify that what would have followed is not morally superior to the consequence
that actually ensued. We have started, in [
        <xref ref-type="bibr" rid="ref30">30</xref>
        ], to explore the application of our present
LP-based approach to evaluating compound counterfactuals for moral justification.
A further application of compound counterfactuals, to justify an exception that renders an action
permissible, which may lead to agents’ argumentation following Scanlon’s
contractualism [
contractualism [
        <xref ref-type="bibr" rid="ref34">34</xref>
        ], is another path of future investigation.
      </p>
    </sec>
    <sec id="sec-7">
      <title>Acknowledgements</title>
      <p>Luís Moniz Pereira acknowledges the support from Fundação para a Ciência e a
Tecnologia (FCT/MEC) NOVA LINCS PEst UID/CEC/04516/2013.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          1.
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Alferes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Brogi</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Leite</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>Evolving logic programs</article-title>
          .
          <source>In Procs. European Conference on Logics in Artificial Intelligence (JELIA</source>
          <year>2002</year>
          ), volume
          <volume>2424</volume>
          <source>of LNCS</source>
          , pages
          <fpage>50</fpage>
          -
          <lpage>61</lpage>
          . Springer,
          <year>2002</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          2.
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Alferes</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. A.</given-names>
            <surname>Leite</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>H.</given-names>
            <surname>Przymusinska</surname>
          </string-name>
          , and
          <string-name>
            <given-names>T.</given-names>
            <surname>Przymusinski</surname>
          </string-name>
          .
          <article-title>Dynamic updates of non-monotonic knowledge bases</article-title>
          .
          <source>Journal of Logic Programming</source>
          ,
          <volume>45</volume>
          (
          <issue>1-3</issue>
          ):
          <fpage>43</fpage>
          -
          <lpage>70</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          3.
          <string-name>
            <given-names>T.</given-names>
            <surname>Aquinas</surname>
          </string-name>
          .
          <source>Summa Theologica</source>
          II-II, Q.
          <volume>64</volume>
          , art. 7, “Of Killing”. In W. P. Baumgarth and R. J. Regan, editors,
          <source>On Law, Morality, and Politics</source>
          . Hackett,
          <year>1988</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          4.
          <string-name>
            <given-names>C.</given-names>
            <surname>Baral</surname>
          </string-name>
          and
          <string-name>
            <given-names>M.</given-names>
            <surname>Hunsaker</surname>
          </string-name>
          .
          <article-title>Using the probabilistic logic programming language P-log for causal and counterfactual reasoning and non-naive conditioning</article-title>
          .
          <source>In Procs. 20th International Joint Conference on Artificial Intelligence (IJCAI)</source>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          5.
          <string-name>
            <given-names>R. M. J.</given-names>
            <surname>Byrne</surname>
          </string-name>
          .
          <source>The Rational Imagination: How People Create Alternatives to Reality</source>
          . MIT Press, Cambridge, MA,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          6. J. Collins,
          <string-name>
            <given-names>N.</given-names>
            <surname>Hall</surname>
          </string-name>
          , and L. A. Paul, editors.
          <source>Causation and Counterfactuals</source>
          . MIT Press, Cambridge, MA,
          <year>2004</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          7.
          <string-name>
            <given-names>P.</given-names>
            <surname>Dell'Acqua</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>Preferential theory revision</article-title>
          .
          <source>Journal of Applied Logic</source>
          ,
          <volume>5</volume>
          (
          <issue>4</issue>
          ):
          <fpage>586</fpage>
          -
          <lpage>601</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          8.
          <string-name>
            <given-names>J.</given-names>
            <surname>Dix</surname>
          </string-name>
          .
          <article-title>A classification theory of semantics of normal logic programs: II. weak properties</article-title>
          .
          <source>Fundamenta Informaticae</source>
          ,
          <volume>3</volume>
          (
          <issue>22</issue>
          ):
          <fpage>257</fpage>
          -
          <lpage>288</lpage>
          ,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          9.
          <string-name>
            <given-names>K.</given-names>
            <surname>Epstude</surname>
          </string-name>
          and
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Roese</surname>
          </string-name>
          .
          <article-title>The functional theory of counterfactual thinking</article-title>
          .
          <source>Personality and Social Psychology Review</source>
          ,
          <volume>12</volume>
          (
          <issue>2</issue>
          ):
          <fpage>168</fpage>
          -
          <lpage>192</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          10.
          <string-name>
            <given-names>P.</given-names>
            <surname>Foot</surname>
          </string-name>
          .
          <article-title>The problem of abortion and the doctrine of double effect</article-title>
          .
          <source>Oxford Review</source>
          ,
          <volume>5</volume>
          :
          <fpage>5</fpage>
          -
          <lpage>15</lpage>
          ,
          <year>1967</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref11">
        <mixed-citation>
          11.
          <string-name>
            <given-names>M. L.</given-names>
            <surname>Ginsberg</surname>
          </string-name>
          .
          <article-title>Counterfactuals</article-title>
          .
          <source>Artificial Intelligence</source>
          ,
          <volume>30</volume>
          (
          <issue>1</issue>
          ):
          <fpage>35</fpage>
          -
          <lpage>79</lpage>
          ,
          <year>1986</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref12">
        <mixed-citation>
          12.
          <string-name>
            <given-names>Paul</given-names>
            <surname>Grice</surname>
          </string-name>
          .
          <article-title>Studies in the Way of Words</article-title>
          . Harvard University Press, Cambridge, MA,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref13">
        <mixed-citation>
          13.
          <string-name>
            <given-names>T. A.</given-names>
            <surname>Han</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          , and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>Moral reasoning under uncertainty</article-title>
          .
          <source>In Procs. 18th International Conference on Logic for Programming</source>
          ,
          <source>Artificial Intelligence and Reasoning (LPAR)</source>
          , volume
          <volume>7180</volume>
          <source>of LNCS</source>
          , pages
          <fpage>212</fpage>
          -
          <lpage>227</lpage>
          . Springer,
          <year>2012</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref14">
        <mixed-citation>
          14.
          <string-name>
            <given-names>M.</given-names>
            <surname>Hauser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Cushman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>L.</given-names>
            <surname>Young</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R. K.</given-names>
            <surname>Jin</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J.</given-names>
            <surname>Mikhail</surname>
          </string-name>
          .
          <article-title>A dissociation between moral judgments and justifications</article-title>
          .
          <source>Mind and Language</source>
          ,
          <volume>22</volume>
          (
          <issue>1</issue>
          ):
          <fpage>1</fpage>
          -
          <lpage>21</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref15">
        <mixed-citation>
          15.
          <string-name>
            <given-names>M.</given-names>
            <surname>Hewings</surname>
          </string-name>
          .
          <article-title>Advanced Grammar in Use with Answers: A Self-Study Reference and Practice Book for Advanced Learners of English</article-title>
          . Cambridge University Press, New York, NY,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref16">
        <mixed-citation>
          16.
          <string-name>
            <surname>C. Hoerl</surname>
            ,
            <given-names>T.</given-names>
          </string-name>
          <string-name>
            <surname>McCormack</surname>
            , and
            <given-names>S. R</given-names>
          </string-name>
          . Beck, editors.
          <source>Understanding Counterfactuals, Understanding Causation: Issues in Philosophy and Psychology</source>
          . Oxford University Press, Oxford, UK,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref17">
        <mixed-citation>
          17.
          <string-name>
            <given-names>F. M.</given-names>
            <surname>Kamm</surname>
          </string-name>
          .
          <source>Intricate Ethics: Rights, Responsibilities, and Permissible Harm</source>
          . Oxford University Press, Oxford, UK,
          <year>2006</year>
          .
        </mixed-citation>
      </ref>
      </ref>
      <ref id="ref18">
        <mixed-citation>
          18.
          <string-name>
            <given-names>R.</given-names>
            <surname>Kowalski</surname>
          </string-name>
          .
          <article-title>Computational Logic and Human Thinking: How to be Artificially Intelligent</article-title>
          . Cambridge University Press, New York, NY,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref19">
        <mixed-citation>
          19.
          <string-name>
            <given-names>D.</given-names>
            <surname>Lewis</surname>
          </string-name>
          . Counterfactuals. Harvard University Press, Cambridge, MA,
          <year>1973</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref20">
        <mixed-citation>
          20.
          <string-name>
            <given-names>K. D.</given-names>
            <surname>Markman</surname>
          </string-name>
          ,
          <string-name>
            <given-names>I.</given-names>
            <surname>Gavanski</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. J.</given-names>
            <surname>Sherman</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M. N.</given-names>
            <surname>McMullen</surname>
          </string-name>
          .
          <article-title>The mental simulation of better and worse possible worlds</article-title>
          .
          <source>Journal of Experimental Social Psychology</source>
          ,
          <volume>29</volume>
          :
          <fpage>87</fpage>
          -
          <lpage>109</lpage>
          ,
          <year>1993</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref21">
        <mixed-citation>
          21.
          <string-name>
            <given-names>R.</given-names>
            <surname>McCloy</surname>
          </string-name>
          and
          <string-name>
            <given-names>R. M. J.</given-names>
            <surname>Byrne</surname>
          </string-name>
          .
          <article-title>Counterfactual thinking about controllable events</article-title>
          .
          <source>Memory and Cognition</source>
          ,
          <volume>28</volume>
          :
          <fpage>1071</fpage>
          -
          <lpage>1078</lpage>
          ,
          <year>2000</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref22">
        <mixed-citation>
          22.
          <string-name>
            <surname>T. McCormack</surname>
            ,
            <given-names>C.</given-names>
          </string-name>
          <string-name>
            <surname>Frosch</surname>
            , and
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Burns</surname>
          </string-name>
          .
          <article-title>The relationship between children's causal and counterfactual judgements</article-title>
          . In C. Hoerl,
          <string-name>
            <given-names>T.</given-names>
            <surname>McCormack</surname>
          </string-name>
          , and
          <string-name>
            <surname>S. R</surname>
          </string-name>
          . Beck, editors,
          <source>Understanding Counterfactuals, Understanding Causation</source>
          . Oxford University Press, Oxford, UK,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref23">
        <mixed-citation>
          23.
          <string-name>
            <given-names>A.</given-names>
            <surname>McIntyre</surname>
          </string-name>
          .
          <article-title>Doctrine of double effect</article-title>
          . In E. N. Zalta, editor,
          <source>The Stanford Encyclopedia of Philosophy. Center for the Study of Language and Information</source>
          , Stanford University,
          <year>Fall 2011</year>
          edition,
          <year>2004</year>
          . http://plato.stanford.edu/archives/fall2011/ entries/double-effect/.
        </mixed-citation>
      </ref>
      <ref id="ref24">
        <mixed-citation>
          24. S. Migliore,
          <string-name>
            <given-names>G.</given-names>
            <surname>Curcio</surname>
          </string-name>
          ,
          <string-name>
            <given-names>F.</given-names>
            <surname>Mancini</surname>
          </string-name>
          , and
          <string-name>
            <given-names>S. F.</given-names>
            <surname>Cappa</surname>
          </string-name>
          .
          <article-title>Counterfactual thinking in moral judgment: an experimental study</article-title>
          .
          <source>Frontiers in Psychology</source>
          ,
          <volume>5</volume>
          :
          <fpage>451</fpage>
          ,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref25">
        <mixed-citation>
          25.
          <string-name>
            <given-names>M.</given-names>
            <surname>Otsuka</surname>
          </string-name>
          .
          <article-title>Double effect, triple effect and the trolley problem: Squaring the circle in looping cases</article-title>
          .
          <source>Utilitas</source>
          ,
          <volume>20</volume>
          (
          <issue>1</issue>
          ):
          <fpage>92</fpage>
          -
          <lpage>110</lpage>
          ,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref26">
        <mixed-citation>
          26.
          <string-name>
            <given-names>J.</given-names>
            <surname>Pearl</surname>
          </string-name>
          .
          <source>Causality: Models, Reasoning and Inference</source>
          . Cambridge University Press, Cambridge, MA,
          <year>2009</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref27">
        <mixed-citation>
          27.
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          ,
          <string-name>
            <given-names>J. N.</given-names>
            <surname>Aparício</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Alferes</surname>
          </string-name>
          .
          <article-title>Counterfactual reasoning based on revising assumptions</article-title>
          .
          <source>In Procs. International Symposium on Logic Programming (ILPS</source>
          <year>1991</year>
          ), pages
          <fpage>566</fpage>
          -
          <lpage>577</lpage>
          . MIT Press,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref28">
        <mixed-citation>
          28.
          <string-name>
            <surname>L. M. Pereira</surname>
            ,
            <given-names>P.</given-names>
          </string-name>
          <string-name>
            <surname>Dell'Acqua</surname>
            ,
            <given-names>A. M.</given-names>
          </string-name>
          <string-name>
            <surname>Pinto</surname>
            , and
            <given-names>G.</given-names>
          </string-name>
          <string-name>
            <surname>Lopes</surname>
          </string-name>
          .
          <article-title>Inspecting and preferring abductive models</article-title>
          . In K. Nakamatsu and L. C. Jain, editors,
          <source>The Handbook on Reasoning-Based Intelligent Systems</source>
          , pages
          <fpage>243</fpage>
          -
          <lpage>274</lpage>
          . World Scientific Publishers,
          <year>2013</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref29">
        <mixed-citation>
          29.
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          .
          <article-title>Modelling Morality with Prospective Logic</article-title>
          . In
          <string-name>
            <given-names>M.</given-names>
            <surname>Anderson</surname>
          </string-name>
          and
          <string-name>
            <given-names>S. L.</given-names>
            <surname>Anderson</surname>
          </string-name>
          , editors,
          <source>Machine Ethics</source>
          , pages
          <fpage>398</fpage>
          -
          <lpage>421</lpage>
          . Cambridge University Press,
          <year>2011</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref30">
        <mixed-citation>
          30.
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          and
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          .
          <source>Programming Machine Ethics</source>
          , volume
          <volume>26</volume>
          of
          <source>Studies in Applied Philosophy, Epistemology and Rational Ethics (SAPERE)</source>
          . Springer,
          <year>2016</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref31">
        <mixed-citation>
          31.
          <string-name>
            <given-names>N. J.</given-names>
            <surname>Roese</surname>
          </string-name>
          .
          <article-title>Counterfactual thinking</article-title>
          .
          <source>Psychological Bulletin</source>
          ,
          <volume>121</volume>
          (
          <issue>1</issue>
          ):
          <fpage>133</fpage>
          -
          <lpage>148</lpage>
          ,
          <year>1997</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref32">
        <mixed-citation>
          32.
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>Towards modeling morality computationally with logic programming</article-title>
          .
          In
          <source>PADL 2014</source>
          , volume
          <volume>8324</volume>
          of
          <source>LNCS</source>
          , pages
          <fpage>104</fpage>
          -
          <lpage>119</lpage>
          . Springer,
          <year>2014</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref33">
        <mixed-citation>
          33.
          <string-name>
            <given-names>A.</given-names>
            <surname>Saptawijaya</surname>
          </string-name>
          and
          <string-name>
            <given-names>L. M.</given-names>
            <surname>Pereira</surname>
          </string-name>
          .
          <article-title>TABDUAL: a tabled abduction system for logic programs</article-title>
          .
          <source>IfCoLog Journal of Logics and their Applications</source>
          ,
          <volume>2</volume>
          (
          <issue>1</issue>
          ):
          <fpage>69</fpage>
          -
          <lpage>123</lpage>
          ,
          <year>2015</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref34">
        <mixed-citation>
          34.
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Scanlon</surname>
          </string-name>
          .
          <source>What We Owe to Each Other</source>
          . Harvard University Press, Cambridge, MA,
          <year>1998</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref35">
        <mixed-citation>
          35.
          <string-name>
            <given-names>T. M.</given-names>
            <surname>Scanlon</surname>
          </string-name>
          .
          <source>Moral Dimensions: Permissibility, Meaning, Blame</source>
          . Harvard University Press, Cambridge, MA,
          <year>2008</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref36">
        <mixed-citation>
          36.
          <string-name>
            <given-names>P. E.</given-names>
            <surname>Tetlock</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P. S.</given-names>
            <surname>Visser</surname>
          </string-name>
          ,
          <string-name>
            <given-names>R.</given-names>
            <surname>Singh</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Polifroni</surname>
          </string-name>
          ,
          <string-name>
            <given-names>A.</given-names>
            <surname>Scott</surname>
          </string-name>
          ,
          <string-name>
            <given-names>S. B.</given-names>
            <surname>Elson</surname>
          </string-name>
          ,
          <string-name>
            <given-names>P.</given-names>
            <surname>Mazzocco</surname>
          </string-name>
          , and
          <string-name>
            <given-names>P.</given-names>
            <surname>Rescober</surname>
          </string-name>
          .
          <article-title>People as intuitive prosecutors: the impact of social-control goals on attributions of responsibility</article-title>
          .
          <source>Journal of Experimental Social Psychology</source>
          ,
          <volume>43</volume>
          :
          <fpage>195</fpage>
          -
          <lpage>209</lpage>
          ,
          <year>2007</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref37">
        <mixed-citation>
          37.
          <string-name>
            <given-names>J. J.</given-names>
            <surname>Thomson</surname>
          </string-name>
          .
          <article-title>The trolley problem</article-title>
          .
          <source>The Yale Law Journal</source>
          ,
          <volume>94</volume>
          :
          <fpage>1395</fpage>
          -
          <lpage>1415</lpage>
          ,
          <year>1985</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref38">
        <mixed-citation>
          38.
          <string-name>
            <given-names>A.</given-names>
            <surname>van Gelder</surname>
          </string-name>
          ,
          <string-name>
            <given-names>K. A.</given-names>
            <surname>Ross</surname>
          </string-name>
          , and
          <string-name>
            <given-names>J. S.</given-names>
            <surname>Schlipf</surname>
          </string-name>
          .
          <article-title>The well-founded semantics for general logic programs</article-title>
          .
          <source>Journal of the ACM</source>
          ,
          <volume>38</volume>
          (
          <issue>3</issue>
          ):
          <fpage>620</fpage>
          -
          <lpage>650</lpage>
          ,
          <year>1991</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref39">
        <mixed-citation>
          39.
          <string-name>
            <given-names>J.</given-names>
            <surname>Vennekens</surname>
          </string-name>
          ,
          <string-name>
            <given-names>M.</given-names>
            <surname>Bruynooghe</surname>
          </string-name>
          , and
          <string-name>
            <given-names>M.</given-names>
            <surname>Denecker</surname>
          </string-name>
          .
          <article-title>Embracing events in causal modeling: Interventions and counterfactuals in CP-logic</article-title>
          .
          In
          <source>JELIA 2010</source>
          , volume
          <volume>6341</volume>
          of
          <source>LNCS</source>
          , pages
          <fpage>313</fpage>
          -
          <lpage>325</lpage>
          . Springer,
          <year>2010</year>
          .
        </mixed-citation>
      </ref>
      <ref id="ref40">
        <mixed-citation>
          40.
          <string-name>
            <given-names>B.</given-names>
            <surname>Weiner</surname>
          </string-name>
          .
          <source>Judgments of Responsibility: A Foundation for a Theory of Social Conduct</source>
          . The Guilford Press, New York, NY,
          <year>1995</year>
          .
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>