[CEUR-WS Vol-2483, AIC19 paper 6. PDF: https://ceur-ws.org/Vol-2483/paper6.pdf | dblp: https://dblp.org/rec/conf/aic/BriggsHWBK19]
                    Neither the time nor the place:
                Omissive causes yield temporal inferences

Gordon Briggs[1], Hillary Harner[1,2], Christina Wasylyshyn[1], Paul Bello[1], and Sangeet Khemlani[1]

[1] Navy Center for Applied Research in Artificial Intelligence, U.S. Naval Research Laboratory, Washington, DC 20375, USA
[2] NRC Postdoctoral Fellow

{gordon.briggs, hillary.harner.ctr, christina.wasylyshyn, paul.bello, sangeet.khemlani}@nrl.navy.mil



           Abstract. Is it reasonable for humans to draw temporal conclusions from omis-
           sive causal assertions? For example, if you learn that not charging your phone
           caused it to die, is it sensible to infer that your failure to charge your phone oc-
           curred before it died? The conclusion seems intuitive, but no theory of causal
           reasoning explains how people make the inference other than a recent proposal
           by Khemlani and colleagues [2018a]. Other theories either treat omissions as
           non-events, i.e., they have no location in space or time; or they account for
           omissions as entities that have no explicit temporal component. Theories of
           omissions as non-events predict that people might refrain from drawing conclu-
           sions when asked whether an omissive cause precedes its effect; theories with-
           out any temporal component make no prediction. We thus present Khemlani
           and colleagues’ [2018a] theory and describe two experiments that tested its pre-
           dictions. The results of the experiments speak in favor of a view that omissive
           causation imposes temporal constraints on events and their effects; these find-
           ings speak against predictions of the non-event view. We conclude by consider-
           ing whether drawing a temporal conclusion from an omissive cause constitutes
           a reasoning error and discuss implications for AI systems designed to compute
           causal inferences.

           Keywords: omissive causation, mental models, reasoning, temporal inference


   1       Introduction

      Omissions are events that do not occur – for instance, a typically chipper coworker
might fail to greet you in the morning. People often reason about omissions because they can be diagnostic: the lack of a greeting can indicate stress. And omissions can participate in causal relations, too: the absence of a particular action can cause some state of affairs to come about, such as when a taxpayer's failure to file her taxes leads to fines.
As the example suggests, omissive causes are ubiquitous, and they can impact an individual's health, welfare, and finances [Ferrara, 2013], where the costs of a failure to act have grave personal and legal consequences.


Copyright © 2019 for this paper by its authors. Use permitted under Creative Commons License Attribution
4.0 International (CC BY 4.0).

It may be tempting to think of the omissions in omissive causes as nothing whatsoever. The idea is a prominent view among many philosophers [Hommen, 2014]. For instance, Moore [2009] argues that when people assert omissive causal statements akin to A not happening caused B,

  “[they] are … saying that there was no instance of some type of action A at [time
  point] t when there is an omission to A at t”
                                                            [Moore, 2009, p. 444]
Likewise, Sartorio [2009] argues that:

    “it’s hard to count omissions as [causal] actions, for omissions don’t appear to have
    specific spatiotemporal locations, intrinsic properties, etc.”
                                                                   [Sartorio, 2009, p. 513]

Other theorists likewise defend the idea that omissions are non-entities: they have no
metaphysical substance. They do not convey facts, truths, or presuppositions; they are
not states of affairs or possibilities; they’re not un-instantiated actions; and they’re not
features of space-time regions. They’re just nothing [see Clarke, 2014, p. 38 et seq.;
cf. Nelkin & Rickless, 2015].
   For some philosophers, the metaphysics of omissions is so problematic that they
deny that omissive causation is a meaningful concept [e.g., Beebee, 2004; Dowe,
2001; Hall, 2004; see Pundik, 2007, for a review]. Beebee [2004] explains that

    “The reason I deny that there’s any such thing as causation by absence is that I
    want to uphold the view that causation is a relation between events.”
                                                                [Beebee, 2004, p. 291]

Cognitive scientists, in contrast, have moved in the opposite direction. Since people
have little difficulty systematically interpreting omissive causal relations, omissions
must be mentally represented in one way or another. Recent theoretical proposals
concern the psychology, not the metaphysics, of omissive causation. Theorists disagree on whether humans mentally represent omissive causation as arrangements of
forces [Wolff, Barbey, & Hausknecht, 2010], as counterfactual contrasts [Stephan,
Willemsen, & Gerstenberg, 2017], or as sets of possibilities [Khemlani, Wasylyshyn,
Briggs, & Bello, 2018]. But they concur that omissions are something, not nothing.
   One way in which omissions have psychological import is that they yield systemat-
ic patterns of inference. Many researchers have observed that people draw counterfac-
tual inferences from causal assertions. For instance, suppose a teacher says the fol-
lowing to the parent of a wayward student:
                1. Not doing her homework caused her grade to fall.
The parent is justified in making the following counterfactual inference: if she had
done her homework, her grades wouldn’t have suffered.
  Do reasoners draw other types of inferences from omissive causal assertions? In
particular, do they make temporal inferences from omissions? People can certainly

draw temporal conclusions from more orthodox causal assertions. For instance, if the
parent was told:
                 2. Cheating on her test caused her grade to fall.
then the parent can sensibly interpret the temporal order of events: her child cheated
first, and her grade fell afterwards (or perhaps simultaneously). Indeed, such an infer-
ence strikes us as trivial. But drawing temporal conclusions from omissive causes
such as (1) can seem puzzling, particularly given the aforementioned philosophical
concerns over omissions. If omissions are nothing, then they have no place in space
and time. And so perhaps it doesn’t make sense to infer any temporal relation between
the events in (1), i.e., it doesn’t make sense to infer that the student didn’t do her
homework before her grade fell, because her lack of doing homework isn’t fixed to
any spatiotemporal frame. It simply didn’t occur. Hence, if people treat omissions as
nothing whatsoever, there is no reason to infer temporal relations from omissive caus-
es.
    Psychological accounts of omissive causation likewise have difficulty explaining
how temporal relations can be inferred from causal relations. Of the three psychologi-
cal treatments of omissive causation, only one readily predicts that people should
infer a temporal relation from (1) above: Khemlani et al. [2018] posit that reasoners
mentally simulate omissive causes by constructing temporally ordered sets of possi-
bilities. Because those possibilities reflect a temporal order, reasoners should have no
difficulty drawing temporal conclusions from omissive causes. Stephan et al.’s [2017]
account treats omissive causes as counterfactual simulations in a physics engine, and
physics engines contain a veridical internal clock, so they can explicitly represent
points in time. Hence, temporal order could be computed from its operations. But it’s
difficult to ascertain how those operations map onto psychological constructs, since
humans don’t possess a veridical clock. And Wolff et al. [2010] treat omissions as
force vectors, which explicitly do not represent temporal order. Force vectors can
represent only direction and magnitude – they cannot represent time, and computa-
tional implementations of the theory do not yield representations of temporal order.
    In what follows, we briefly outline Khemlani et al.’s [2018] model-based theory of
omissive causation. We describe two experiments that test the theory’s prediction that
reasoners should make temporal inferences from omissive causal assertions. We con-
sider whether reasoners are justified in making temporal inferences or whether doing
so constitutes an egregious error. We conclude by discussing how the results can be applied to the construction of AI systems that compute causality.


2      Mental models and omissive causation

The mental model theory – the “model theory” for short – posits that people draw
conclusions by building and scanning mental models, i.e., discrete representations of
possibilities [Johnson-Laird, 2006; Johnson-Laird & Byrne, 1991; Goldvarg & John-
son-Laird, 2001; Goodwin & Johnson-Laird, 2005]. The model theory makes a primary representational assumption: models are iconic, i.e., they are isomorphic to

the structure of what they represent [Peirce, 1931-1958, Vol. 4]. Hence, a mental
model of a spatial relation such as A is to the left of B is a representation in which a
token that represents A is located to the left of a token that represents B, as in this
diagram:
           A       B

Inferences emerge from iconic representations [Goodwin & Johnson-Laird, 2005].
For instance, reasoners can infer that B is to the right of A from the diagram above.
Some concepts cannot be represented in an iconic way, and so the model theory al-
lows that certain sorts of symbols can be integrated into models, such as a symbol
denoting negation [Khemlani, Orenes, & Johnson-Laird, 2012].
   The model theory proposes that people interpret omissive causes by building sets
of possibilities in which omissive causes are represented as negated states of affairs
[Khemlani et al., 2018a]. For instance, the statement, not doing her homework caused
a lower grade, refers to three separate possibilities that can be depicted in the follow-
ing diagram:

         ¬ homework         grade-fell
           homework       ¬ grade-fell
           homework         grade-fell

Each row of the diagram represents a different temporally ordered possibility that
could render the statement true, and ‘¬’ denotes the symbol for negation. Hence, the
first row depicts the possibility in which the student didn’t do her homework and her
grade fell; the second row depicts the possibility in which she did her homework and
her grade didn’t fall (a counterfactual possibility; see Khemlani, Byrne, & Johnson-
Laird, 2018); and the third row depicts the possibility in which she did her homework
but her grade fell for some other reason (an alternative counterfactual). The causal
relation in (1) is inconsistent with only one possibility, i.e., the situation in which she
didn’t do her homework and her grade didn’t fall. The model above does not directly
represent that possibility, since it represents only those possibilities that are consistent
with the omissive causal relation.
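The representational claim in the diagram above can be rendered as a small sketch. The following is our own illustration of the possibility-set idea, not the authors' implementation:

```python
# Each row of the diagram is a possibility over two states of affairs.
# "Not doing her homework caused her grade to fall" is consistent with
# three possibilities and inconsistent with exactly one.
POSSIBILITIES = [
    {"homework": False, "grade_fell": True},   # the mental model (first row)
    {"homework": True,  "grade_fell": False},  # counterfactual possibility
    {"homework": True,  "grade_fell": True},   # alternative counterfactual
]

def consistent_with_omissive_cause(p):
    """'Not-A caused B' rules out only the case of not-A together with not-B."""
    return not (p["homework"] is False and p["grade_fell"] is False)

# All three modeled possibilities are consistent with the assertion:
assert all(consistent_with_omissive_cause(p) for p in POSSIBILITIES)
# The single excluded situation is the one the model omits:
assert not consistent_with_omissive_cause({"homework": False, "grade_fell": False})
```

The sketch makes the key point concrete: the omission is represented as an explicit negated state of affairs, not as nothing at all.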
   Because maintaining three separate possibilities can be difficult for many reason-
ers, the theory posits that people tend to construct and reason with only one possibility
at a time – the bolded possibility above, known as the mental model:
         ¬ homework          grade-fell

The mental model can be scanned and combined with models of additional premises
to make rapid inferences, but reasoners who construct mental models and not the full
set of possibilities are prone to systematic mistakes [see Khemlani & Johnson-Laird,
2017, for a review]. And those reasoners who do construct the full set of models tend
to tax their working memory resources, and so they should be relatively slower to
respond than when they rely on mental models alone.

The model theory accounts for both orthodox causation, i.e., causation that involves
events that do occur [Goldvarg & Johnson-Laird, 2001; Johnson-Laird & Khemlani,
2017; Khemlani, Barbey, & Johnson-Laird, 2014], and omissive causation. The main
difference between the two treatments is that orthodox causation implies that causes
are affirmative states of affairs, whereas omissive causation implies that causes are
negative states of affairs [Khemlani et al., 2018a], and negations can increase difficulty in reasoning and interpretation [Khemlani et al., 2012]. Otherwise, the theory posits that people should show similar patterns of reasoning about orthodox and omissive causes.
   Because a mental model of a causal relation concerns a temporally ordered possi-
bility, the theory predicts that people should draw temporal inferences from both or-
thodox and omissive causal relations (prediction 1). They should do so systematically,
not haphazardly, i.e., they should infer that the student didn’t do her homework before
her grade was lowered, but they shouldn’t infer that she didn’t do her homework after
her grade was lowered, because the causal relation is incompatible with that possibil-
ity (prediction 2). Two experiments tested these predictions.
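Predictions 1 and 2 follow from scanning a temporally ordered model. A minimal sketch (ours, assuming the model lists the omissive cause before its effect):

```python
# A model is an ordered sequence of events/non-events; position in the
# sequence encodes the predicted temporal order (cause first, effect second).
mental_model = ("not-homework", "grade-fell")

def occurs_before(model, x, y):
    """Scan the model: x precedes y iff x appears earlier in the sequence."""
    return model.index(x) < model.index(y)

# Prediction 1: accept the cause-before-effect question ("yes").
assert occurs_before(mental_model, "not-homework", "grade-fell")
# Prediction 2: reject the cause-after-effect question ("no").
assert not occurs_before(mental_model, "grade-fell", "not-homework")
```

Because the temporal order is built into the representation itself, no separate inferential machinery is needed to answer before/after questions.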


3      Experiments

3.1    Experiment 1
Experiment 1 tested the model theory’s prediction that people should draw temporal
inferences from omissive and orthodox causal assertions. Participants were given a
statement of the following schematic structure:

      [Doing / not doing] A caused B.
Their task was to respond to a question of the following format:

      Did [A / not A] occur before B?      [cause-before-effect]
Half the problems asked participants to evaluate temporal relations in which the caus-
al event, i.e., event A (or not A) occurred before event B, and the other half of the
problems presented participants with the events reversed:
      Did B occur before [A / not A]?       [cause-after-effect]
The model theory predicts that reasoners should respond “yes” to the first question,
regardless of whether the events concerned an orthodox or an omissive cause (predic-
tion 1). And it likewise predicts that they should reject the second question.

Method. Participants. 50 participants (mean age = 37.6 years; 31 males and 19 fe-
males) volunteered through the Amazon Mechanical Turk online platform [see
Paolacci, Chandler, & Ipeirotis, 2010, for a review]. 17 participants reported some formal logic or advanced mathematical training and the remainder reported no training. All participants were native English speakers.

   Design, procedure, and materials. Participants carried out the experiment on a
computer screen. The study was designed in psiTurk [Gureckis et al., 2015]. After
reading instructions, participants carried out a practice problem and then completed
12 experimental problems. Problems consisted of a causal premise and a question
concerning a temporal relation. The events in the causal premise concerned magical
spells (causes) and their fictitious effects. Half the problems concerned omissive cau-
sation by describing what occurred when a particular spell wasn’t cast (e.g., “Not
casting allimon...”); and the other half concerned orthodox causation by describing
spells that were cast (e.g., “Casting allimon…”). The effects of the spells concerned
fictitious diseases that afflicted a particular individual (e.g., “…caused Peter to have
kandersa disease.”). After reading the causal premise, participants were asked a ques-
tion about a temporal relation. The format of the question depended on whether the
causal premise described omissive or orthodox causation. For instance, if the premise
described orthodox causation, the temporal relation concerned a cause and its effect,
e.g.,
      Did casting allimon occur before Peter's kandersa disease occurred?
And if the premise described omissive causation, the temporal relation described a
non-event and its effect:
      Did not casting allimon occur before Peter's kandersa disease occurred?
On half of the problems, the question described a relation in which the cause occurred
before the effect, and on the other half the order was reversed. Participants responded
by choosing one of three different options: “Yes”, “No”, and “Don't know for sure”.
The information for each problem was presented simultaneously, and participants
could not continue without selecting one of the three options. The presentation order
of the problems and the materials were randomized, as was the order of the three re-
sponse options on the screen.
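The factorial design just described can be mirrored in a short script. The following is our illustrative reconstruction, not the authors' psiTurk code; "allimon" and "kandersa" appear in the paper, while "bivaren" and "tolemia" are invented placeholders:

```python
import itertools
import random

# Materials: "allimon" and "kandersa" are from the paper; the second spell
# and disease are invented placeholders for illustration.
SPELLS = ["allimon", "bivaren"]
DISEASES = ["kandersa", "tolemia"]

def make_problem(spell, disease, omissive, cause_first):
    """Build one premise + question pair, crossing the two design factors."""
    cause = ("not casting" if omissive else "casting") + " " + spell
    premise = f"{cause.capitalize()} caused Peter to have {disease} disease."
    if cause_first:
        question = f"Did {cause} occur before Peter's {disease} disease occurred?"
    else:
        question = f"Did Peter's {disease} disease occur before {cause} occurred?"
    return {"premise": premise, "question": question}

# Cross the materials with the two factors (2 materials x 2 x 2 = 8 items here;
# the actual study used 12 problems and also randomized response-option order).
problems = [make_problem(s, d, om, cf)
            for (s, d), om, cf in itertools.product(
                list(zip(SPELLS, DISEASES)), [True, False], [True, False])]
random.shuffle(problems)
```

Fully crossing omissive vs. orthodox causation with question order, as above, is what lets each participant act as their own control.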

Results and discussion. Figure 1 shows participants’ proportions of “yes” responses
as a function of whether the inference concerned omissive or orthodox causation and
as a function of whether participants evaluated the temporal order in which the causal
event occurred before the effect or after it. Participants’ percentages of “yes” responses did not differ as a function of whether the inference described an omissive or an orthodox causal relation (35% vs. 37%; Wilcoxon test, z = .22, p = .83, Cliff’s 𝛿 =
.01). They responded “yes” more often to temporal relations when those relations
described a cause that occurred before an effect rather than after (64% vs. 8%; Wil-
coxon test, z = 6.0, p < .0001, Cliff’s 𝛿 = .82). Planned comparisons revealed that
participants’ selections of “yes” responses to cause-before-effect relations occurred
reliably higher than chance, both for orthodox causes (Wilcoxon test, z = 5.67, p <
.0001, Cliff’s 𝛿 = .68) and for omissive causes (Wilcoxon test, z = 4.30, p < .0001,
Cliff’s 𝛿 = .52). Participants’ tendency to accept temporal relations interacted as a
function of the type of causation (orthodox vs. omissive) and the temporal order
(Wilcoxon test, z = 2.43; p = .02; Cliff’s 𝛿 = .21): they were less likely to accept cause-before-effect temporal relations for omissive rather than orthodox causes, and they were more likely to accept cause-after-effect temporal relations for omissive rather than orthodox causes (see Figure 1). The interaction was not predicted, however, and it was not robust (B = 2.17, p = .22) in a follow-up generalized linear mixed-model (GLMM) regression analysis that utilized the maximal random-effects structure for the data [following Barr et al., 2013].

[Figure 1: bar chart of the proportion of “yes” responses (0.00–1.00) for orthodox and omissive causes, with separate bars for cause-before-effect and cause-after-effect questions.]

Fig. 1. Proportions of “yes” responses in Experiment 1 as a function of whether the causal relation concerned an orthodox or an omissive cause and as a function of whether the temporal relation evaluated described a cause that occurred before or after the effect. The balance of responses in the study were either “No” or “Don’t know for sure.”
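Cliff's δ, the effect-size measure reported throughout, is the probability that a value drawn from one sample exceeds a value from the other, minus the reverse probability. A minimal sketch of the computation (our own, run on illustrative data rather than the study's):

```python
def cliffs_delta(xs, ys):
    """Cliff's delta: P(x > y) minus P(x < y) over all cross-sample pairs."""
    greater = sum(1 for x in xs for y in ys if x > y)
    less = sum(1 for x in xs for y in ys if x < y)
    return (greater - less) / (len(xs) * len(ys))

# Identical samples give 0; completely separated samples give +1 or -1.
assert cliffs_delta([1, 2, 3], [1, 2, 3]) == 0.0
assert cliffs_delta([4, 5, 6], [1, 2, 3]) == 1.0
assert cliffs_delta([1, 2, 3], [4, 5, 6]) == -1.0
```

Being rank-based, the measure pairs naturally with the nonparametric Wilcoxon tests used in the analyses.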
    Participants in Experiment 1 validated both of the model theory’s central predic-
tions. Reasoners inferred temporal relations from causal statements for both orthodox
and omissive causes (prediction 1). And they inferred only those temporal relations
that matched the temporal order predicted by the use of mental models (prediction 2).
The absence of a reliable difference between orthodox and omissive causation lends
further credence to the notion that the mental processes concerning omissive causes
are similar to those concerning orthodox causes.
    Experiment 1 was limited in that the task was evaluative, and so on each problem,
participants were asked to infer a single temporal relation, namely before, which may
have prevented them from considering alternative temporal relations. For example,
participants often responded “Yes” to the following question about omissions:
      Did not casting allimon occur before Peter's kandersa disease occurred?
Their affirmation might allow that the omissive cause (“not casting allimon”) also
occurred after Peter’s kandersa disease occurred, but the evaluative nature of the task
prohibited any such analysis. Another limitation of the task is that participants may
have misconstrued it as asking about possibility, not necessity, and so perhaps they
affirmed the temporal relation because they considered it a viable possibility. Experi-
ment 2 ruled out these concerns.

3.2      Experiment 2
Experiment 2 was similar in design and execution to Experiment 1: problems com-
prised an (omissive or orthodox) causal assertion paired with an assertion that de-
scribed a potential temporal relation between events. However, the second assertion
given to participants was incomplete, and their task was to fill in the blank. Half the
problems took on the following general structure:

    Suppose the following statement is true:
       [Doing / not doing] A caused B.
    Given the above statement, complete the following sentence:
       [A / not A] occurred ________ B occurred.

and the other half of the problems reversed the order of the events:

      B occurred ________ [A / not A] occurred.
Participants’ task was to choose among three different options to fill in the blank:
“after”, “before”, and “and also”. The last option permitted participants to be agnostic about when events or non-events occurred in relation to other events, and so participants could select it as most appropriate for omissive causal relations. The model
theory predicts, instead, that reasoners should select “before” or “after” depending on
the order of events in the incomplete sentence.

Method. Participants. 50 participants volunteered through the Amazon Mechanical
Turk online platform (mean age = 39.8 years; 28 males and 22 females). 30 participants reported no formal logic or advanced mathematical training and the remainder reported introductory to advanced training in logic. All were native English speakers.
   Design, procedure, and materials. Participants completed 1 practice problem and
12 experimental problems, and they acted as their own controls. Each problem con-
sisted of a causal assertion and presented participants with an incomplete sentence.
The experiment manipulated whether the first event concerned orthodox or omissive
causation. It also manipulated the order of the events in the incomplete sentence: half
the problems described the cause, a blank relation, and the effect, and the other half of
the problems described the effect, a blank relation, and the cause. The problems used
the same materials as in Experiment 1, and so an example [omissive causal] problem
is as follows:

      Suppose the following statement is true:
        Not casting allimon caused Peter to have kandersa disease.
      Given the above statement, complete the following sentence:
         Peter’s kandersa disease occurred ________ not casting allimon occurred.

Three separate response options (“before”, “after”, and “and also”) were presented as a dropdown menu to complete the blank in the incomplete sentence. Participants were prevented from moving on to the next problem until they selected one of the three

options. The presentation order of the problems was randomized, the contents of the
problems were randomized, and the order in which the three response options ap-
peared in the dropdown menu was randomized.

Results and discussion. An initial analysis examined participants’ tendency to select
“after” or “before” as a function of the type of cause in the causal assertion. No relia-
ble differences occurred in their tendency to select “before” as a function of whether
the causal assertion in the problem concerned an omissive or an orthodox cause (43%
vs. 47%; Wilcoxon test, z = 1.65, p = .09, Cliff’s 𝛿 = .11) and likewise for their ten-
dency to select “after” (43% vs. 48%; Wilcoxon test, z = 1.09, p = .28, Cliff’s 𝛿 =
.10). Follow-up GLMM analyses that utilized maximal random-effects structures
likewise revealed no reliable difference between the tendency to select “before” (B
= .84, p = .31) or “after” (B = -2.81, p = .15) as a function of whether the causal relation was orthodox or omissive. The result corroborates the model theory’s first prediction. In what follows, we pooled the data for orthodox and omissive causes except for one post-hoc comparison.
   Figure 2 shows participants’ tendency to select “before”, “after”, or “and also” re-
sponses as a function of the temporal order of the terms in the incomplete statement.
Participants selected “before” more often when the cause occurred before the effect
than vice versa (78% vs. 12%; Wilcoxon test, z = 6.16, p < .0001, Cliff’s 𝛿 = .94),
and they selected “after” more often when the effect occurred before the cause than
vice versa (79% vs. 12%; Wilcoxon test, z = 6.06, p < .0001, Cliff’s δ = .95).

[Figure 2: bar chart of the proportion of “before”, “after”, and “and also” selections (0.00–1.00), in panels crossing orthodox vs. omissive causes with cause-before-effect vs. cause-after-effect orders.]

Fig. 2. Proportions of participants’ selections of the three different types of relations in Experiment 2 as a function of whether the incomplete assertion described a cause that occurred before or after the effect, for both orthodox and omissive causation.

Selections of “and also” responses did not differ as a function of the temporal order of events in the incomplete sentence (10% vs. 9%; Wilcoxon test, z = 1.15, p = .25, Cliff’s 𝛿 = .09).
A post-hoc comparison revealed that participants selected “and also” responses marginally more often for omissive causes than orthodox causes (14% vs. 4%; Wilcoxon test, z = 1.83, p = .07, Cliff’s δ = .16). Despite the lack of reliability, the difference might suggest that people do, on occasion, interpret omissive causes as non-events that have no spatiotemporal frame. But the vast majority of participants’
responses suggest otherwise: people interpreted both omissive and orthodox causal
relations to yield distinct temporal inferences.
   We conclude by considering whether participants’ responses in Experiments 1 and
2 constitute sensible inferences or striking reasoning errors and discuss implications
for computational models designed to mimic human causal reasoning.


4      General discussion

In two experiments, participants accepted temporal conclusions from causal asser-
tions, even for causal assertions that described omissive causes. The experiments
validate a prediction of the model theory of causal reasoning [Goldvarg & Johnson-
Laird, 2001; Johnson-Laird & Khemlani, 2017; Khemlani et al., 2014, 2018a]: rea-
soners should construct sets of temporally ordered possibilities when they interpret
causal assertions. The temporal ordering makes it trivial for reasoners to draw tem-
poral conclusions, but if the inferences are easy and obvious, as they appear to be for
many reasoners, then other psychological accounts should readily explain how people
make them. Yet no other account of causal reasoning has explained how people can
draw temporal inferences from causal assertions [cf. Stephan et al., 2017; Wolff et al.,
2010].
      We continue by entertaining the possibility that participants’ responses reflect an
egregious reasoning error. Consider statement (1) in the introduction concerning a
student’s failure to do her homework:

      1. Not doing her homework caused her grade to fall.
Prominent philosophers argue that omissions are non-events that don’t occur in space
or time [Clarke, 2014, p. 38 et seq.]. If they’re right, then people are mistaken when-
ever they construe non-events as occurring in any location or point in time, or even a
relative place or timepoint. There may be some credence to their view; after all, the
following question seems bizarre:
      3. Q: *Where did she not do her homework?
As we’ve shown in our study, however, this question, and its answer, seem sensible:

      4. Q: When did she not do her homework?
         A: She didn’t do her homework before her grade fell.

How can (4) make sense when (3) doesn’t? Two explanations seem viable: either non-
events don’t occur in space or time, in which case reasoners err whenever they draw
temporal inferences from causal assertions; or else non-events can occur in a temporal
context without occurring in spatial context. And certain sorts of omissive events may
promote temporal inferences more than others. Future research should adjudicate the
two proposals.
   Regardless of which proposal turns out to be right, the experiments we report
demonstrate that people assign omissive causes temporal locations. AI systems need
to do the same. AI systems enriched with the ability to reason causally may be able to
infer causal relations in complex datasets, provide rich explanations of those relations,
and serve as trustworthy interlocutors; and so it may be surprising that so few AI sys-
tems demonstrate adequate human-level causal reasoning [Gil & Selman, 2019]. One
reason for the dearth of such systems is that humans reason in a manner elusive to
formal frameworks of computational logic: they reason based on possibilities instead
of truth-values or probabilities [Khemlani & Johnson-Laird, 2019]. Hence, human
intuition serves as the best existing benchmark for testing systems that implement
human-level causal thinking. Without yielding the “obvious” inferences that humans
make on a routine basis, such as the inferences explored above, AI systems have no
hope of simulating more complex causal reasoning behavior.
  The present research responds to this need by presenting work on human reasoning
about typical causation and omissive causation, because omissive causation
challenges existing computational frameworks and philosophical treatments alike.
Omissive causation requires a human to represent situations that do not exist in the
world: there is no event onto which the representation of the omissive event can be
mapped. Since humans assign omissive events temporality, cf. [2], any AI system
must, at minimum, do the same and represent an omissive cause as an entity that can
be temporally marked.
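The minimal requirement stated above, an omissive cause represented as an entity that carries a temporal mark even though no worldly event corresponds to it, can be sketched in a few lines. The class and field names below are illustrative assumptions for exposition, not part of any system described in this paper:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    """An event token with an explicit temporal location (here, an integer tick).

    occurred=False marks an omission: no event in the world maps onto the
    representation, yet the token still has a place in time.
    """
    description: str
    time: int
    occurred: bool

def causally_precedes(cause: Event, effect: Event) -> bool:
    """Because even an omissive cause is temporally marked, the system can
    draw the 'cause precedes effect' inference uniformly for both kinds."""
    return cause.time < effect.time

# The phone example: the failure to charge is an omission, yet it carries a
# time and so licenses the temporal inference.
not_charging = Event("phone not charged", time=1, occurred=False)
phone_dies = Event("phone dies", time=2, occurred=True)
assert causally_precedes(not_charging, phone_dies)
```

The point of the sketch is only that the omission is a first-class, temporally indexed entity; a system that treats omissions as non-events has no such token to which a time could attach.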

Acknowledgments. This work was supported by grants from the Office of Naval
Research to PB and SK. We are grateful to Knexus Research Corporation for their
help in conducting the experiments. Finally, we thank Monica Bucciarelli, Felipe de
Brigard, Todd Gureckis, Tony Harrison, Paul Henne, Laura Hiatt, Phil Johnson-Laird,
Laura Kelly, Joanna Korman, Greg Murphy, L. A. Paul, Bob Rehder, and Greg
Trafton for their advice and comments.

References
 1. Barr, D., Levy, R., Scheepers, C., & Tily, H.: Random effects structure for confirmatory
    hypothesis testing: keep it maximal. Journal of Memory and Language 68, 255–278
    (2013).
 2. Beebee, H.: Causing and nothingness. In: Collins, J., Hall, N., and Paul, L. A. (eds.)
    Causation and Counterfactuals. The MIT Press, Cambridge, MA (2004).
 3. Bernstein, S.: Omissions as possibilities. Philosophical Studies, 167 (2014).
 4. Clarke, R.: Omissions: Agency, metaphysics, and responsibility. Oxford University Press
    (2014).
 5. Dowe, P.: A counterfactual theory of prevention and “causation” by omission.
    Australasian Journal of Philosophy 79, 216–226 (2001).
 6. Ferrara, S.D.: Causal value and causal link. In: Ferrara, S.D., Boscolo-Berto, R., Viel, G.
    (eds.) Malpractice and medical liability: European state of the art and guidelines.
    Springer-Verlag, Berlin (2013).
 7. Gil, Y. & Selman, B.: A 20-Year Community Roadmap for Artificial Intelligence Research
    in the US. Computing Community Consortium (CCC) and Association for the
    Advancement of Artificial Intelligence (AAAI) (2019).
 8. Goldvarg, E. & Johnson-Laird, P.: Naïve causality: A mental model theory of causal
    meaning and reasoning. Cognitive Science, 25 (2001).
 9. Goodwin, G.P., & Johnson-Laird, P.N.: Reasoning about relations. Psychological Review
    112(2), 468-493 (2005).
10. Gureckis, T. M. et al.: psiTurk: An open-source framework for conducting replicable
    behavioral experiments online. Behavior Research Methods, 1-14 (2015).
11. Hall, N.: Two concepts of causation. In: Collins, J., Hall, N., and Paul, L.A. (eds.)
    Causation and counterfactuals. MIT Press (2004).
12. Hommen, D.: Moore and Schaffer on the ontology of omissions. Journal for General
    Philosophy of Science 45 (2014).
13. Jeffrey, R.: Formal logic: Its scope and limits (2nd Ed). McGraw-Hill, New York (1981).
14. Johnson-Laird, P.N.: How we reason. Oxford University Press, NY (2006).
15. Johnson-Laird, P. N., & Byrne, R.M.J.: Deduction. Erlbaum, Hillsdale, NJ (1991).
16. Johnson-Laird, P. N. & Khemlani, S.: Mental models and causation. In Waldmann, M.
    (ed.) Oxford Handbook of Causal Reasoning. Oxford University Press, London, UK
    (2017).
17. Khemlani, S., Barbey, A., & Johnson-Laird, P. N.: Causal reasoning with mental models.
    Frontiers in Human Neuroscience 8, 849 (2014).
18. Khemlani, S., Byrne, R.M.J., & Johnson-Laird, P.N.: Facts and possibilities: A model-
    based theory of sentential reasoning. Cognitive Science 42, 1887–1924 (2018b).
19. Khemlani, S., & Johnson-Laird, P.N.: Illusions in reasoning. Minds & Machines 27, 11–35
    (2017).
20. Khemlani, S., & Johnson-Laird, P.N.: Why machines don’t (yet) reason like people. KI –
    Künstliche Intelligenz 33, 219–228 (2019).
21. Khemlani, S., Orenes, I., & Johnson-Laird, P.N.: Negation: a theory of its meaning,
    representation, and use. Journal of Cognitive Psychology 24 (2012).
22. Khemlani, S., Wasylyshyn, C., Briggs, G., & Bello, P.: Mental models and omissive
    causation. Memory & Cognition 46 (2018a).
23. Moore, M. S.: Causation and responsibility. Oxford University Press, Oxford (2009).
24. Nelkin, D., & Rickless, S.: Randolph Clarke, Omissions: Agency, metaphysics, and
    responsibility. Notre Dame Philosophical Reviews (2015).
25. Nickerson, R. S.: Conditional reasoning: The unruly syntactics, semantics, thematics, and
    pragmatics of "If". Oxford University Press, New York (2015).
26. Paolacci, G., Chandler, J., & Ipeirotis, P. G.: Running experiments on Amazon Mechanical
    Turk. Judgment and Decision Making 5 (2010).
27. Paul, L. A., & Hall, N.: Causation: A user's guide. Oxford University Press, Oxford, UK
    (2013).
28. Peirce, C.S.: Collected papers of Charles Sanders Peirce. 8 vols. In: Hartshorne, C., Weiss,
    P., and Burks, A. (eds.). Harvard University Press, Cambridge, MA (1931-1958).
29. Pundik, A.: Can one deny both causation by omission and causal pluralism? The case of
    legal causation. In: Russo, F. and Williamson, J. (eds.). Causality and probability in the
    sciences, pp. 379-412. College Publications, London (2007).
30. Sartorio, C.: Omissions and causalism. Noûs 43, 513-530 (2009).
31. Stephan, S., Willemsen, P., & Gerstenberg, T.: Marbles in inaction: Counterfactual
    simulation and causation by omission. In: Gunzelmann, G., Howes, A., Tenbrink, T., &
    Davelaar, E. (eds.) Proceedings of the 39th Annual Conference of the Cognitive Science
    Society, pp. 1132–1137. Cognitive Science Society, Austin, TX (2017).
32. Wolff P.: Representing causation. Journal of Experimental Psychology: General 136
    (2007).
33. Wolff, P., Barbey, A., & Hausknecht, M.: For want of a nail: how absences cause events.
    Journal of Experimental Psychology: General, 139 (2010).
34. Wolff, P., & Barbey, A. K.: Causal reasoning with forces. Frontiers in Human
    Neuroscience 9(1) (2015).