<!DOCTYPE article PUBLIC "-//NLM//DTD JATS (Z39.96) Journal Archiving and Interchange DTD v1.0 20120330//EN" "JATS-archivearticle1.dtd">
<article xmlns:xlink="http://www.w3.org/1999/xlink">
  <front>
    <journal-meta />
    <article-meta>
      <title-group>
        <article-title>Norms and Causation in Artificial Morality</article-title>
      </title-group>
      <contrib-group>
        <contrib contrib-type="author">
          <string-name>Laura Fearnley</string-name>
          <xref ref-type="aff" rid="aff0">0</xref>
        </contrib>
        <aff id="aff0">
          <label>0</label>
          <institution>University of Glasgow</institution>
          ,
          <addr-line>University Avenue, Glasgow</addr-line>
          ,
          <country country="UK">UK</country>
        </aff>
      </contrib-group>
      <pub-date>
        <year>2023</year>
      </pub-date>
      <abstract>
        <p>There's been increasing interest in how to build Artificial Moral Agents (AMAs) that make moral decisions on the basis of causation rather than mere correlation. One promising avenue for achieving this is to use a causal modelling approach. This paper explores an open and important problem with such an approach; namely, the problem of what makes a causal model an appropriate model. I explain why we need to establish criteria for what makes a model appropriate, and offer up such criteria, which appeal to normative considerations.</p>
      </abstract>
    </article-meta>
  </front>
  <body>
    <sec id="sec-1">
      <title>1. Introduction</title>
      <p>Artificial Morality is an emerging
interdisciplinary field that centres on the
creation of artificial moral agents, or AMAs, by
implementing moral competence in artificial
systems. The demand for moral machines stems
from changes in our everyday practices:
artificial systems are increasingly used in a
variety of settings, from home help and elderly
care to banking and court algorithms. It
is therefore crucial to create reliable and
responsible machines that make sound moral
judgements. In this paper I introduce some cases
from the philosophy of causation literature that
generate problems for developing efficient and
accurate AMAs. I also investigate how an appeal
to normative considerations can provide a
potential solution to these problems.</p>
    </sec>
    <sec id="sec-2">
      <title>2. Causal Models</title>
      <p>
        Plausibly, morality deals in causation rather
than mere correlation. As such, there has recently
been growing interest in how to build AMAs
that make moral decisions based upon cause and
effect. One popular methodology for achieving
this goal has been to use a causal modelling
approach. Advocates of this approach often take
as their point of departure the idea that causal
relationships are relationships that are potentially
exploitable for the purposes of manipulation and
control. Roughly, if X is a cause of Y, then I
should be able to manipulate X in the right way
to bring about a change in Y. In this way,
causal relationships are thought to be
relationships of dependency potentially
exploitable for manipulation and control: X's
causal status with regard to Y depends upon how Y
reacts under changes to X. Typically the causal
modelling approach takes the dependency relation
to be one that holds between variables and their
values
        <xref ref-type="bibr" rid="ref10">(Woodward 2003)</xref>
        . Variables can be
taken to represent one's preferred choice of causal
relata: events, facts, properties, instantiations,
etc. Whether one variable is a cause of another is
determined by whether some manipulation of the
first variable changes the second; that is,
whether a change in one variable makes a
difference to another.
      </p>
      <p>Following Judea Pearl (2000), the causal
model is formalized using causal Bayes nets.
These comprise systems of structural
equations and directed graphs which, taken
together, represent the causal relationships within
the model. A directed graph consists of an ordered
pair {V, E}, where V is a set of variables
representing the causal relata, and E is a set of
directed edges (arrows) representing the causal
structure by connecting the causal relata.
Structural equations, in turn, define the
causal dependencies between the variables in the
model.</p>
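      <p>As a toy illustration, the {V, E} graph and its structural equations can be sketched in a few lines of Python. The variable names, the fixed evaluation order, and the do-style intervention helper below are our own illustrative assumptions, not a particular library's API:</p>

```python
# Illustrative sketch of a causal model: a directed graph {V, E} plus
# structural equations over binary variables. All names are invented.

V = {"X", "Y", "Z"}                     # variables representing the causal relata
E = {("X", "Y"), ("Y", "Z")}            # directed edges: X -> Y -> Z

# Structural equations: each endogenous variable is a function of its parents.
equations = {
    "X": lambda values: 1,              # exogenous: fixed from outside the model
    "Y": lambda values: values["X"],    # Y tracks X
    "Z": lambda values: values["Y"],    # Z tracks Y
}

def evaluate(equations, do=None):
    """Evaluate the equations in topological order.

    `do` maps variables to pinned values, overriding their equations --
    a crude stand-in for Pearl's intervention (do) operator.
    """
    do = do or {}
    values = {}
    for var in ("X", "Y", "Z"):         # topological order of the graph
        values[var] = do[var] if var in do else equations[var](values)
    return values

print(evaluate(equations))              # actual values: X = Y = Z = 1
print(evaluate(equations, do={"X": 0})) # manipulating X changes Y and Z too
```

      <p>On this difference-making picture, X counts as a cause of Z precisely because intervening to set X to 0 flips Z to 0.</p>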
      <p>As opposed to other models, which use
statistical predictions to track mere correlation,
the structural causal model approach relies upon
counterfactuals and structural equations to
determine bona fide causal relations. Given that
morality relies upon causation rather than
correlation, the interventionist's causal modelling
approach promises to provide an excellent starting
point for informing artificial moral decisions.</p>
      <p>Despite its initial appeal, however, there is still
much work to be done before the structural causal
model approach can be fully implemented. One
pressing difficulty is to identify what exactly
makes a structural causal model an appropriate
model; that is, what kinds of things ought to be
represented in the model in order for it to
accurately and sufficiently express the essential
causal structure of the actual situation. To
illustrate, consider the following cases:</p>
      <p>Case 1 – Forest Fire: Suppose I wanted to
launch an inquiry to determine the causes of a
forest fire. What variables ought to be included in
the model? It seems reasonable to include a
variable that represents the occurrence of the
lightning strike, but it is less clear whether one
should include a variable representing the
presence of oxygen in the atmosphere, or whether
oxygen should be relegated to a mere background
condition. Whether we include oxygen in the
model will have a decisive effect on the kind of
causal information the model produces. This
is because manipulations of the presence of
oxygen in the model will make a difference as to
whether the forest fire occurs. For instance,
changing the value of oxygen in the model from 1
to 0 will create a change in the occurrence of the
forest fire, turning it from 1 to 0. As a result,
oxygen would be a cause of the fire (rather than a
mere background condition).</p>
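      <p>The worry can be made concrete with a small sketch, assuming for illustration binary variables and the conjunctive structural equation fire = lightning AND oxygen:</p>

```python
# Case 1 with oxygen included as a variable in the model (binary values;
# the conjunctive structural equation is an illustrative assumption).

def forest_fire(lightning, oxygen):
    """Structural equation for the fire variable."""
    return 1 if (lightning == 1 and oxygen == 1) else 0

# Actual world: lightning strikes, oxygen is present, the fire occurs.
actual = forest_fire(lightning=1, oxygen=1)          # fire = 1

# Intervening on oxygen (1 -> 0) changes the fire from 1 to 0, so a bare
# difference-making test counts oxygen as a cause rather than a background
# condition -- exactly the verdict the text flags as problematic.
intervened = forest_fire(lightning=1, oxygen=0)      # fire = 0
```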
      <p>Case 2 – Plant Watering: Suppose I wanted to
launch an inquiry to determine the causes of the
death of my house plant. It seems reasonable to
include in the model my failure to water my plant.
It seems less reasonable to include, say, Bono's
failure to water my plant. Again, whether we
include Bono will make a difference to the causal
information the model produces. If we change the
value of the variable representing Bono's failure
to water the plant from not watering to watering,
that manipulation will make a difference as to
whether the plant dies. Thus, the model would
determine Bono to be a cause of the plant's death.
This is surely the wrong result. We need some way
to screen off these irrelevant variables and values,
lest we be left with erroneous causal verdicts.</p>
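      <p>The same sketch carries over to the omission case, assuming for illustration that the plant dies just in case no one waters it:</p>

```python
# Case 2 with Bono's omission included as a variable in the model.
# The disjunctive structural equation is an illustrative assumption.

def plant_dies(i_water, bono_waters):
    """Structural equation: the plant survives iff someone waters it."""
    return 0 if (i_water == 1 or bono_waters == 1) else 1

# Actual world: neither of us waters the plant, and it dies.
actual = plant_dies(i_water=0, bono_waters=0)        # death = 1

# Intervening on Bono's variable (0 -> 1) changes the outcome, so the model
# counts his omission as a cause of the death -- the erroneous verdict.
intervened = plant_dies(i_water=0, bono_waters=1)    # death = 0
```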
      <p>Settling the question of what makes a model
appropriate is an open and important problem in
the philosophical and scientific literature.
According to Paul and Hall (2013), it is also a
problem that has been inadequately addressed.</p>
    </sec>
    <sec id="sec-3">
      <title>3. Causal Models and AMAs</title>
      <p>Supplying criteria for what makes a model
appropriate is crucial to the creation of AMAs
(Kušić and Nurkić 2019). For if AMAs are to
make moral decisions based upon faulty causal
information generated by these models, then
plausibly the moral decisions themselves will be
flawed. Consider again Case 2 – Plant Watering.
Suppose that we do include Bono's failure to
water the plant as a variable in the model, and that
the model therefore recognises him as a cause
of the plant's death. The model thus establishes a
causal connection between Bono and the dead
plant. This causal connection can then partly
justify and inform allocations of moral
culpability. Yet surely it is absurd to think that
Bono is in any way morally culpable for my dead
house plant.</p>
      <p>This is a simple toy example illustrating the
pitfalls of the causal modelling approach. But we
can well imagine the implications of such errors
in high-stakes moral domains, such as prison
sentencing and medical treatment.</p>
    </sec>
    <sec id="sec-4">
      <title>4. Norms and Causal Models</title>
      <p>The lesson from these examples is that we
need clearer criteria for establishing the aptness of
causal models. Otherwise, AMAs which use such
models to inform their moral decision-making
will generate surprising and perhaps unsettling
moral decisions. In this final section, I'll explore
criteria for establishing the aptness of a model.</p>
      <p>One promising avenue for specifying the
aptness of a model draws heavily on normative
considerations; in particular, considerations about
what is normal or abnormal. The idea that causal
relations are sensitive to what is normal and
abnormal is often credited to Hart and Honoré
(1985). They contend that a cause should be
understood as an intervention, analogous to a
human action, that makes a difference to the way
things normally develop. For instance, “[w]hen
we assert that A's blow made B's nose bleed or A's
exposure of the wax to the flame caused it to melt,
the general knowledge used here is knowledge of
the familiar way to produce, by manipulating
things, certain types of change which do not
normally occur without our intervention” (1985,
p.31). Since Hart and Honoré, several
philosophers, including McGrath (2005), Menzies
(2009), and Hall (2007), have invoked
normality in their theories of causation. Some
have even done so in the context of the causal
modelling approach, precisely to overcome the
problem of what makes a model apt (Hitchcock
2007; Halpern 2016).</p>
      <p>The strategy begins by using considerations
about what is normal and abnormal to constrain
the kinds of values and variables to be represented
in the model. Specifically, the idea is that the
variables and values which go into the model
ought to represent abnormal occurrences, whilst
the variables and values that represent merely
normal occurrences ought to be omitted from
the model.</p>
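      <p>In code, the constraint amounts to a filter on candidate variables before the model is built. The normality judgements below are supplied by hand for illustration; on the view sketched here they would be fixed by statistical and prescriptive norms:</p>

```python
# Illustrative sketch of the normality constraint on variable selection.
# Candidate variables are annotated (by hand) with whether their actual
# value is an abnormal occurrence.

candidates = {
    # variable: (actual value, actual value is abnormal?)
    "lightning_strike":    (1, True),    # lightning strikes are abnormal
    "oxygen_present":      (1, False),   # atmospheric oxygen is normal
    "bono_fails_to_water": (1, False),   # Bono's omission is normal
}

def select_variables(candidates):
    """Admit only variables whose actual value is abnormal; normal
    occurrences are relegated to background conditions."""
    return {var for var, (_, abnormal) in candidates.items() if abnormal}

print(select_variables(candidates))      # only the lightning strike survives
```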
      <p>To illustrate the strategy, consider Case 1 –
Forest Fire. Here we were wondering whether to
include the presence of oxygen in the causal
model; if it were included, it would likely come out
as a cause of the forest fire, since a manipulation
of the presence of oxygen would produce a change
in the occurrence of the forest fire. A strategy
which appeals to normative considerations would
say that the presence of oxygen ought to be
omitted from the model, because the occurrence
of oxygen in the earth's atmosphere is normal. Thus
oxygen would not be a cause of the fire. This
strategy gives us the right result: plausibly, we
want to say that oxygen is a mere background
condition to the fire (not a cause).</p>
      <p>Next consider Case 2 – Plant Watering. Here
we wanted some way to exclude Bono's failure to
water the plant from entering into the model, for
if his failure were represented in the model, it
would come out as a cause of the plant's death.
Again, an appeal to normality allows us to do this.
Bono's failure to water my plants is a normal
occurrence: it is both statistically and
prescriptively normal for Bono not to walk into
my house, watering can in hand, to water my
plants. Hence the variable representing his failure
should not be represented in the model. Again,
this gets the right result: Bono is not a cause of
the plant's death.</p>
      <p>As these two examples illustrate, appealing to
normative considerations to govern the kinds of
variables and values represented in a model
yields highly intuitive results. In particular, it
yields causal information that seems to be correct.
Importantly, correct causal information is the kind
of information that AMAs ought to be basing their
morally charged decisions on. In this way, an
appeal to normative considerations in the causal
modelling methodology provides a promising
pathway to overcoming some problems in the
development of AMAs.</p>
    </sec>
  </body>
  <back>
    <ref-list>
      <ref id="ref1">
        <mixed-citation>
          [1]
          <string-name><surname>Hall</surname>, <given-names>N.</given-names></string-name>
          <article-title>Structural Equations and Causation</article-title>.
          <source>Philosophical Studies</source>, (<year>2007</year>). <volume>132</volume>(<issue>1</issue>), <fpage>109</fpage>-<lpage>136</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref2">
        <mixed-citation>
          [2]
          <string-name><surname>Hall</surname>, <given-names>N.</given-names></string-name>, &amp;
          <string-name><surname>Paul</surname>, <given-names>L. A.</given-names></string-name>
          <article-title>Metaphysically Reductive Causation</article-title>.
          <source>Erkenntnis</source>, (<year>2013</year>). <volume>78</volume>(<issue>S1</issue>), <fpage>9</fpage>-<lpage>41</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref3">
        <mixed-citation>
          [3]
          <string-name><surname>Halpern</surname>, <given-names>J. Y.</given-names></string-name>
          <source>Actual Causality</source>. <year>2016</year>. The MIT Press.
        </mixed-citation>
      </ref>
      <ref id="ref4">
        <mixed-citation>
          [4]
          <string-name><surname>Hart</surname>, <given-names>H. L. A.</given-names></string-name>, &amp;
          <string-name><surname>Honoré</surname>, <given-names>T.</given-names></string-name>
          <source>Causation in the Law (Second Edition)</source>. <year>1985</year>. Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref5">
        <mixed-citation>
          [5]
          <string-name><surname>Hitchcock</surname>, <given-names>C.</given-names></string-name>
          <article-title>Prevention, Preemption, and the Principle of Sufficient Reason</article-title>.
          <source>The Philosophical Review</source>, (<year>2007</year>). <volume>116</volume>(<issue>4</issue>), <fpage>495</fpage>-<lpage>532</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref6">
        <mixed-citation>
          [6]
          <string-name><surname>Kušić</surname>, <given-names>Marija</given-names></string-name> &amp;
          <string-name><surname>Nurkić</surname>, <given-names>Petar</given-names></string-name>
          <article-title>Artificial morality: Making of the artificial moral agents</article-title>.
          <year>2019</year>. <source>Belgrade Philosophical Annual</source> <volume>1</volume>(<issue>32</issue>): <fpage>27</fpage>-<lpage>49</lpage>.
        </mixed-citation>
      </ref>
      <ref id="ref7">
        <mixed-citation>
          [7]
          <string-name><surname>McGrath</surname>, <given-names>S.</given-names></string-name>
          <article-title>Causation By Omission: A Dilemma</article-title>.
          <source>Philosophical Studies</source>, <year>2005</year>. <volume>123</volume>(<issue>1-2</issue>),
        </mixed-citation>
      </ref>
      <ref id="ref8">
        <mixed-citation>
          [8]
          <string-name><surname>Menzies</surname>, <given-names>P.</given-names></string-name>
          <article-title>Platitudes and Counterexamples</article-title>. In H. Beebee,
          <string-name><given-names>C.</given-names> <surname>Hitchcock</surname></string-name>, &amp; P. Menzies (Eds.),
          <year>2009</year>. <source>The Oxford Handbook of Causation (Vol. 1)</source>. Oxford University Press.
        </mixed-citation>
      </ref>
      <ref id="ref9">
        <mixed-citation>
          [9]
          <string-name><surname>Pearl</surname>, <given-names>J.</given-names></string-name>
          <source>Causality</source>. <year>2000</year>. Cambridge University Press.
        </mixed-citation>
      </ref>
      <ref id="ref10">
        <mixed-citation>
          [10]
          <string-name><surname>Woodward</surname>, <given-names>J.</given-names></string-name>
          <article-title>Making Things Happen: A Theory of Causal Explanation</article-title>. <year>2003</year>. Oxford University Press.
        </mixed-citation>
      </ref>
    </ref-list>
  </back>
</article>